
CN117893870B - Animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging - Google Patents


Info

Publication number
CN117893870B
CN117893870B
Authority
CN
China
Prior art keywords
visible light
image
points
edge
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410289424.8A
Other languages
Chinese (zh)
Other versions
CN117893870A
Inventor
朱琴
陈建秋
霍萍丽
王琪
任勇
张自龙
高辉民
姜林林
赵闯凡
李新贞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Yunzhixin Technology Development Co ltd
Original Assignee
Dalian Yunzhixin Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Yunzhixin Technology Development Co ltd filed Critical Dalian Yunzhixin Technology Development Co ltd
Priority to CN202410289424.8A priority Critical patent/CN117893870B/en
Publication of CN117893870A publication Critical patent/CN117893870A/en
Application granted granted Critical
Publication of CN117893870B publication Critical patent/CN117893870B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752Contour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Radiation Pyrometers (AREA)

Abstract

The invention relates to the technical field of animal image fusion, in particular to an animal body temperature detection system for livestock and veterinary based on IR thermal imaging. The method comprises the steps of obtaining a foreground region of a visible light image and an infrared image, and selecting edge feature points in the foreground region; obtaining structural feature degree and screening out structural feature points and texture feature points according to the distribution features of the edge feature points and the density of surrounding feature points; matching foreground areas of the visible light image and the infrared image according to the structural feature points; clustering the foreground region to divide a texture feature point region and a structural feature point region; overlapping the two areas to obtain an intersection area, and obtaining visible light weight in the intersection area; obtaining a fusion image; and monitoring the body temperature of the animal according to the fusion image. The invention ensures higher accuracy of image matching, calculates the weight of each intersection region when the images are fused, and ensures that the fused images can more clearly detect the body temperature of animals.

Description

Animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging
Technical Field
The invention relates to the technical field of animal image fusion, in particular to an animal body temperature detection system for livestock and veterinary based on IR thermal imaging.
Background
In animal husbandry and veterinary animal body temperature detection scenes, IR (infrared radiation) refers to infrared rays, and IR thermal imaging is infrared thermal imaging. Infrared thermal imaging body temperature detection technology has been widely popularized because of its advantages of non-contact temperature measurement, short measurement time, wide measurement range, and digital display. However, an infrared image is formed by collecting the infrared radiation emitted by the object, so its definition is low, the local information of the animal is not obvious, and this is not conducive to monitoring the body temperature of the animal. To solve this problem, an operator can acquire an infrared image and a visible light image and fuse the two to strengthen the detail information in the infrared image.
In the prior art, image fusion is often carried out on an infrared image and a visible light image so as to improve the definition of texture details of the infrared image, and image matching is needed on the two images during image fusion.
Disclosure of Invention
In order to solve the technical problems that images cannot be well matched due to excessive characteristic points during image fusion, image accuracy after fusion is affected, and then animal body temperature detection is affected, the invention aims to provide an animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging, and the adopted technical scheme is as follows:
An animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
Obtaining visible light images and infrared images of animals;
Obtaining a foreground region of the visible light image and the infrared image; obtaining all edge feature points in the foreground region; obtaining the structural feature degree of each edge feature point according to the distribution feature of each edge feature point and the density of surrounding feature points; screening out structural feature points according to the structural feature degree to obtain matching feature points of all the structural feature points in a foreground region in the infrared image in the foreground region in the visible light image;
Obtaining the texture information carrying degree of the edge feature points according to the structural feature degree; screening out texture feature points in the visible light image according to the texture information carrying degree; clustering the texture feature points in the visible light image and the structural feature points in the infrared image respectively, and sequentially obtaining a texture feature point region on the visible light image and a structural feature point region on the infrared image; matching and superposing the foreground region according to the matching characteristic points to obtain an intersection region between the texture characteristic point region and the structural characteristic point region, and obtaining visible light weight of each group of pixel points at corresponding positions in the intersection region according to infrared temperature difference and texture distribution characteristics in the intersection region; fusing each group of corresponding position pixel points of the intersection region according to the visible light weight to obtain a fused pixel value; for the non-intersection region, taking the pixel value of the pixel point corresponding to the infrared image as the fusion pixel value to obtain a fusion image;
And monitoring the body temperature of the animal according to the fusion image.
Further, the method for acquiring the distribution characteristics comprises the following steps:
Calculating Euclidean distance between each edge characteristic point and adjacent edge characteristic points on the same edge, and gradient average value of all pixel points on the same edge between the adjacent edge characteristic points;
and taking the maximum value of the product of the Euclidean distance between each edge characteristic point and all adjacent edge characteristic points on one edge and the gradient mean value as the distribution characteristic of each edge characteristic point on the corresponding edge.
Further, the method for obtaining the density of the surrounding feature points comprises the following steps:
Presetting a first window by taking each edge characteristic point as a center; and taking the number of edge feature points in the first window as the surrounding feature point density of each edge feature point.
Further, the method for obtaining the structural feature degree comprises the following steps:
and normalizing the product of the distribution characteristic of each edge characteristic point and the reciprocal of the density of the surrounding characteristic points to obtain the structural characteristic degree of each edge characteristic point.
Further, the method for acquiring the structural feature points comprises the following steps:
presetting a first threshold value, and taking the edge feature points with the structural feature degree larger than the first threshold value as the structural feature points.
Further, the method for acquiring the matching feature points comprises the following steps:
calculating the connection line distance between each structural feature point and other structural feature points on the visible light image and the infrared image; taking each structural feature point on the visible light image as a feature point to be matched of each structural feature point on the infrared image;
calculating the minimum value of the ratio between each connecting distance of each structural feature point on the infrared image and all connecting distances of each feature point to be matched as a connecting matching value of each connecting distance of each structural feature point on the infrared image;
Averaging the connection matching values of all connection distances of each structural feature point on the infrared image to obtain the matching degree between each structural feature point and each feature point to be matched on the infrared image;
and setting a second threshold value as a value of 1, and taking the feature points to be matched, the matching degree of which is closest to the second threshold value, as the matching feature points of each structural feature point on the infrared image.
Further, the visible light weight acquisition method includes:
Calculating the difference between the gray average value of the infrared image in the intersection area and the gray average value of the foreground area of the infrared image to obtain the infrared temperature difference in the intersection area;
Calculating the product of the texture feature point density and the gray value variance of the visible light image in the intersection area as the texture distribution feature in the intersection area;
Obtaining visible light weight according to the infrared temperature difference and the texture distribution characteristics; the visible light weight is in negative correlation with the infrared temperature difference and in positive correlation with the texture distribution characteristic.
Further, obtaining a visible light weight according to the infrared temperature difference and the texture distribution feature comprises:
the visible light weight is obtained according to a visible light weight calculation formula, wherein the visible light weight calculation formula is as follows:
$$W_i = \mathrm{Norm}\left(\frac{\frac{N_i}{S_i}\cdot\sigma_i^2}{\left|\overline{G}_i-\overline{G}\right|}\right)$$

where $W_i$ represents the visible light weight of the $i$-th intersection region; $\overline{G}_i$ represents the gray average value of the infrared image in the $i$-th intersection region; $\overline{G}$ represents the gray average value of the foreground region of the infrared image; $N_i$ represents the number of texture feature points of the visible light image in the $i$-th intersection region; $S_i$ represents the area of the $i$-th intersection region; $\sigma_i^2$ represents the gray variance of the visible light image in the $i$-th intersection region; and $\mathrm{Norm}$ represents the normalization function.
Further, fusing each group of corresponding position pixel points of the intersection area according to the visible light weight to obtain a fused pixel value, including:
traversing all intersection areas to obtain the visible light weight of each group of pixel points at corresponding positions in each intersection area; obtaining infrared weights of pixel points at corresponding positions in each intersection region according to the visible light weights; the visible light weight and the infrared weight are in negative correlation;
Calculating a first product between the infrared weight and the gray value of the pixel point in the foreground region of the infrared image and a second product between the visible light weight and the gray value of the pixel point in the foreground region of the visible light image in each group of pixel points in the corresponding positions;
And adding the first product and the second product to obtain a fused pixel value fused with each group of pixel points at the corresponding position in the intersection region.
Further, the first threshold is set to 0.7.
The invention has the following beneficial effects:
Firstly, obtaining visible light images and infrared images of animals, so that the subsequent image fusion is facilitated; only obtaining the foreground region of the visible light image and the infrared image can reduce unnecessary texture information; obtaining edge feature points of a foreground region, obtaining structural feature degrees according to distribution features of the edge feature points and surrounding feature point densities, and reflecting contribution degrees of each edge feature point to image structural features; structural feature points in the two images are screened out, the boundary structures of the two images are reflected by the structural feature points, and foreground areas can be matched according to the matched feature points; according to the structural feature degree, the texture information carrying degree of the edge feature points can be obtained, the texture feature points are screened out, and the texture information in the visible light image is reflected; clustering the foreground areas to obtain texture feature point areas on the visible light image and structural feature point areas on the infrared image respectively, wherein the texture feature point areas reflect obvious texture information in the foreground areas of the visible light image, and the structural feature point areas reflect temperature distribution characteristics in the foreground areas of the infrared image; overlapping the texture feature point areas and the structural feature point areas to obtain intersection areas, and facilitating subsequent calculation of visible light weights of each intersection area; calculating visible light weights of different intersection areas to calculate fusion pixel values of subsequent images, taking pixel point pixel values of infrared images in non-intersection areas as fusion pixel values, and obtaining a final fusion image, wherein the fusion image can reflect an area with higher temperature and can reflect texture characteristics of the area; and monitoring the animal condition by using the temperature characteristics and the texture characteristics of the fusion image. The invention reduces the interference of the invalid characteristic points on the image matching, ensures higher accuracy of the image matching, calculates the weight of each intersection area when the images are fused, and ensures that the fused image can more clearly detect the body temperature of animals.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for implementing an IR thermal imaging-based animal body temperature detection system for a herd veterinarian according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the specific implementation, structure, characteristics and effects of the animal body temperature detection system for livestock and veterinary based on IR thermal imaging according to the invention with reference to the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the animal body temperature detection system for livestock and veterinary based on IR thermal imaging provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for implementing an IR thermal imaging-based animal body temperature detection system for a livestock veterinarian according to an embodiment of the present invention is shown, the method includes:
Step S1: and obtaining visible light images and infrared images of the animals.
The embodiment of the invention provides an animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging; the infrared thermal imaging body temperature detection technology first needs to acquire an infrared image of the animal. Because the infrared image has lower definition, the feature information of the animal cannot be observed well, so a visible light image of the animal also needs to be acquired to facilitate subsequent fusion with the infrared image.
In one embodiment of the invention, the visible light image and the infrared image of the animal at the same time point are shot at adjacent positions from the same angle by a high-definition visible light camera and a high-definition thermal infrared camera, and the acquired visible light image and infrared image are preprocessed.
It should be noted that, in one embodiment of the present invention, Gaussian filtering and graying are performed on the collected visible light image and infrared image; these preprocessing operations and the image acquisition methods for the visible light image and the infrared image are technical means well known to those skilled in the art and are not described here. In other embodiments of the present invention, the implementer may choose the image acquisition method and the image preprocessing operations, which are not limited herein.
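For illustration only, a minimal sketch of this preprocessing in Python (the kernel size, sigma value, and file names are assumptions, not values fixed by the invention):

```python
import cv2

def preprocess(path):
    # Gaussian filtering followed by graying; kernel size and sigma are assumed values.
    img = cv2.imread(path)
    blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
    return cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)

visible = preprocess("visible.png")    # high-definition visible light frame
infrared = preprocess("infrared.png")  # thermal infrared frame, same time and angle
```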
Step S2: obtaining a foreground region of a visible light image and an infrared image; obtaining all edge feature points in a foreground region; obtaining the structural feature degree of each edge feature point according to the distribution feature of each edge feature point and the density of surrounding feature points; and screening out structural feature points according to the structural feature degree to obtain matching feature points of all the structural feature points in the foreground region in the infrared image in the foreground region in the visible light image.
The embodiment of the invention mainly aims to process the characteristic information of animals, and a large number of useless characteristic points exist in the background area of the visible light image. In order to reduce the number of invalid feature points, it is necessary to acquire a visible light image and an infrared image including only a foreground region of the animal. In one embodiment of the invention, the visible light image and the infrared image may be image segmented by a neural network, and the training set may be established by the visible light image and the infrared image of consecutive frames. It should be noted that, the neural network is a technical means well known to those skilled in the art, and other image segmentation methods may be used to obtain the foreground region in other embodiments of the present invention, which is not limited herein. Briefly, the method for obtaining the foreground region of the visible light image and the infrared image by using the neural network for image segmentation includes the following steps:
the specific training method of the neural network comprises the following steps:
(1) The visible light image and the infrared image of continuous frames are used as training data. And labeling the foreground region pixel point as 1, and labeling other pixels as 0 to obtain label data.
(2) The semantic segmentation network adopts an encoding-decoding structure, and the training data and the label data are input into the network after being normalized. The semantic segmentation encoder is used for extracting the characteristics of the input data and obtaining a characteristic diagram. The semantic segmentation decoder performs sampling transformation on the feature map and outputs a semantic segmentation result.
(3) The network is trained using a cross entropy loss function.
The foreground region of the visible and infrared images is thus obtained.
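A minimal sketch of such an encoding-decoding segmentation network trained with a cross entropy loss is given below; the layer sizes, optimizer, and learning rate are illustrative assumptions, and any semantic segmentation backbone fitting the description above could be substituted:

```python
import torch
import torch.nn as nn

class SegNet(nn.Module):
    # Toy encoding-decoding structure: the encoder extracts a feature map,
    # the decoder upsamples it back to a per-pixel two-class result.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 2, stride=2))  # classes: 0 background, 1 foreground

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # step (3): cross entropy loss

def train_step(frames, labels):
    # frames: (B, 1, H, W) normalized images; labels: (B, H, W) of {0, 1}
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```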
Image matching is required for the visible light image and the infrared image before the subsequent image fusion is performed, so that edge feature points on the visible light image and the infrared image need to be acquired first. In one embodiment of the invention, the SIFT algorithm is utilized to extract the edge feature points of the foreground areas of the visible light image and the infrared image. It should be noted that, the SIFT algorithm is a technical means well known to those skilled in the art, and is not described herein, and in other embodiments of the present invention, other algorithms such as Harris corner detection may be used to extract edge feature points of the foreground region, which is not limited herein.
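As a sketch, assuming OpenCV and binary foreground masks (here named fg_mask_vis and fg_mask_ir, hypothetical outputs of the segmentation step), the edge feature points could be extracted as follows:

```python
import cv2

sift = cv2.SIFT_create()
# Restrict detection to the foreground by passing the binary segmentation mask.
kp_vis = sift.detect(visible, mask=fg_mask_vis)
kp_ir = sift.detect(infrared, mask=fg_mask_ir)
pts_vis = [kp.pt for kp in kp_vis]  # (x, y) edge feature point coordinates
pts_ir = [kp.pt for kp in kp_ir]
```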
The distribution of the edge feature points in the animal image can reflect the structural features of the animal image to a certain extent, the distribution features of the edge feature points can reflect whether the structural features of the animal are clear or not, the distribution of the edge feature points reflecting the structural features is discrete, and the distribution of the edge feature points reflecting the texture features is dense, so that the density of the surrounding feature points of the edge feature points can also reflect the contribution degree of the edge feature points to the structural features. Therefore, in the embodiment of the invention, the structural feature degree of each edge feature point is obtained according to the distribution feature of each edge feature point and the density of surrounding feature points.
Preferably, in one embodiment of the present invention, the distributed feature acquisition method includes:
Calculating Euclidean distance between each edge characteristic point and adjacent edge characteristic points on the same edge, and gradient average value of all pixel points on the same edge between the adjacent edge characteristic points; and taking the maximum value of the product of the Euclidean distance between each edge characteristic point and all adjacent edge characteristic points on one edge and the gradient mean value as the distribution characteristic of each edge characteristic point on the corresponding edge. The larger the gradient mean value of the pixel points on the edge line is, the more the gray level change of the edge line is prominent, the clearer the edge line is, and the farther the distance between adjacent edge feature points is, the higher the contribution degree of the distribution features of the edge feature points to the structural features is. It should be noted that, other distance algorithms may be used to calculate the distance between each edge feature point and the adjacent edge feature point, which is not limited and described herein.
Preferably, in one embodiment of the present invention, the surrounding feature point density obtaining method includes:
Presetting a first window by taking each edge characteristic point as a center; and taking the number of the edge feature points in the first window as the density of surrounding feature points of each edge feature point. In one embodiment of the invention, the first window is arranged as a 10 x 10 square window. It should be noted that, in other embodiments of the present invention, the first window may be set by an operator, which is not limited herein.
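A sketch of this count using the 10 × 10 window of this embodiment; whether the center point counts itself is an implementation choice assumed here:

```python
import numpy as np

def surrounding_density(points, half_size=5):
    # Count edge feature points inside a 10x10 window centered on each point.
    pts = np.asarray(points, dtype=float)
    density = np.empty(len(pts), dtype=int)
    for i, (x, y) in enumerate(pts):
        inside = (np.abs(pts[:, 0] - x) <= half_size) & (np.abs(pts[:, 1] - y) <= half_size)
        density[i] = int(inside.sum())  # includes the center point itself (assumed)
    return density
```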
Preferably, in one embodiment of the present invention, the method for obtaining the structural feature degree includes:
And normalizing the product of the distribution characteristic of each edge characteristic point and the reciprocal of the density of surrounding characteristic points to obtain the structural characteristic degree of each edge characteristic point. In one embodiment of the present invention, the structural feature degree calculation formula is as follows:
$$C_i = \mathrm{Norm}\left(\frac{1}{\rho_i}\cdot\max_j\left(d_{i,j}\cdot\frac{1}{n_{i,j}}\sum_{k=1}^{n_{i,j}}g_k\right)\right)$$

where $C_i$ represents the structural feature degree of the $i$-th edge feature point; $d_{i,j}$ represents the Euclidean distance between the $i$-th edge feature point and the $j$-th adjacent edge feature point on the same edge; $g_k$ represents the gradient value of the $k$-th pixel point on the same edge between the $i$-th edge feature point and the adjacent edge feature point; $k$ represents the sequence number of the pixel points between the $i$-th edge feature point and the adjacent edge feature point on the same edge; $n_{i,j}$ represents the number of pixel points between the $i$-th edge feature point and the $j$-th adjacent edge feature point on the same edge; $\rho_i$ represents the surrounding feature point density of the $i$-th edge feature point; $\max$ represents the maximum value function; and $\mathrm{Norm}$ represents the normalization function.
In the structural feature degree calculation formula, $\max_j\left(d_{i,j}\cdot\frac{1}{n_{i,j}}\sum_{k=1}^{n_{i,j}}g_k\right)$ represents the distribution feature of the $i$-th edge feature point: the Euclidean distance to each adjacent edge feature point on the same edge is multiplied by the gradient mean value of the pixel points on the edge between them, and the maximum of these products participates in the calculation. The larger the gradient mean value, the more prominent the gray level change along the edge line and the clearer the edge line between the $i$-th edge feature point and its adjacent edge feature point; together with a farther distance between adjacent edge feature points, this means a larger distribution feature, a higher contribution to the structural features, and a higher structural feature degree for the $i$-th edge feature point. Since the edge feature points on the foreground region boundary reflect the structural features of the foreground region well and are distributed discretely, while edge feature points reflecting texture features are distributed densely, the smaller the surrounding feature point density $\rho_i$, the greater the structural feature degree of the $i$-th edge feature point.
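A sketch of the structural feature degree under two stated assumptions: the pixels on the edge between adjacent feature points are approximated by sampling the gradient magnitude image along the straight segment between them, and the normalization function is taken as division by the maximum:

```python
import numpy as np

def segment_gradient_mean(grad, p, q, samples=20):
    # Approximate the gradient mean of the pixels between p and q by sampling
    # the gradient magnitude image along the straight segment p -> q (assumed).
    t = np.linspace(0.0, 1.0, samples)
    xs = np.clip(np.round(p[0] + t * (q[0] - p[0])).astype(int), 0, grad.shape[1] - 1)
    ys = np.clip(np.round(p[1] + t * (q[1] - p[1])).astype(int), 0, grad.shape[0] - 1)
    return grad[ys, xs].mean()

def structural_degree(edge_points, grad, density):
    # edge_points: ordered (x, y) feature points along one edge;
    # density: surrounding feature point density of each point (previous sketch).
    raw = []
    for i, p in enumerate(edge_points):
        products = []
        for j in (i - 1, i + 1):  # adjacent edge feature points on the same edge
            if 0 <= j < len(edge_points):
                q = edge_points[j]
                d = np.hypot(p[0] - q[0], p[1] - q[1])  # Euclidean distance
                products.append(d * segment_gradient_mean(grad, p, q))
        raw.append(max(products) / density[i])  # distribution feature / point density
    raw = np.asarray(raw)
    return raw / raw.max()  # assumed Norm: scale into (0, 1]
```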
The structural feature degree of each edge feature point reflects the contribution degree of the edge feature point to the foreground region structure, namely the animal trunk part, and the edge feature point with larger contribution degree is selected as the structural feature point, so that the structural representation of the animal trunk part in the visible light image and the infrared image can be reflected better, and the matching of the visible light image and the foreground region of the infrared image is facilitated. Therefore, in the embodiment of the invention, the structural feature points are screened out according to the structural feature degree.
Preferably, in one embodiment of the present invention, the method for obtaining structural feature points includes:
Presetting a first threshold value, and taking edge feature points with the structural feature degree larger than the first threshold value as structural feature points. In one embodiment of the present invention, the first threshold is set to 0.7, and when the structural feature degree of the edge feature point is greater than 0.7, the edge feature point is a structural feature point. It should be noted that, the first threshold may be set by an implementation person according to a specific implementation scenario, which is not limited herein.
After the structural feature points of the visible light image and the infrared image are acquired, the foreground area needs to be matched, the matching feature point of each structural feature point in the infrared image is acquired, and matching is completed.
Preferably, in one embodiment of the present invention, the method for acquiring the matching feature points includes:
Calculating the connection line distance between each structural feature point and other structural feature points on the visible light image and the infrared image; taking each structural feature point on the visible light image as a feature point to be matched of each structural feature point on the infrared image; calculating the minimum value of the ratio between each connecting distance of each structural feature point on the infrared image and all connecting distances of each feature point to be matched as a connecting matching value of each connecting distance of each structural feature point on the infrared image; averaging the connection matching values of all the connection distances of each structural feature point on the infrared image to obtain the matching degree between each structural feature point and each feature point to be matched on the infrared image; the second threshold is set to be a value of 1, and the feature points to be matched, the matching degree of which is closest to the second threshold, are used as the matching feature points of each structural feature point on the infrared image. In one embodiment of the present invention, the matching degree calculation formula is as follows:
$$P = \frac{1}{m}\sum_{t=1}^{m}\min_{s}\left(\frac{l_t}{l'_s}\right)$$

where $P$ represents the matching degree between each structural feature point on the infrared image and each feature point to be matched; $m$ represents the number of connecting lines between each structural feature point and the other structural feature points on the infrared image; $t$ represents the sequence number of the connecting line between each structural feature point and the other structural feature points on the infrared image; $l_t$ represents the $t$-th connecting line distance between each structural feature point and the other structural feature points on the infrared image; $l'_s$ represents the $s$-th connecting line distance between each feature point to be matched and the other feature points to be matched; and $\min$ represents the minimum value function.
In the matching degree calculation formula, each structural feature point on the infrared image is matched through connecting line distances. For each connecting line distance of a structural feature point on the infrared image, the minimum value of its ratio to all connecting line distances of a feature point to be matched is calculated; the connecting line distance of the feature point to be matched that achieves this minimum is the best-matching one, and the minimum ratio serves as the connecting line matching value of that connecting line distance. Averaging the connecting line matching values over all connecting line distances yields the matching degree between the structural feature point on the infrared image and the feature point to be matched on the visible light image.
In one embodiment of the present invention, after the matching degree between each structural feature point on the infrared image and each feature point to be matched is obtained, the second threshold is set to the value 1, and the feature point to be matched whose matching degree is closest to 1 is taken as the matching feature point of that structural feature point. All structural feature points on the infrared image are traversed to obtain the matching feature points of all structural feature points in the foreground region of the infrared image, completing the foreground region matching process.
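A direct transcription of this matching procedure into code is sketched below; it mirrors the formula as written and makes no claim about robustness to scale differences between the two images:

```python
import numpy as np

def line_distances(points):
    # Connecting-line distances from every structural feature point to the others.
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))  # (n, n) symmetric matrix

def matching_degree(ir_row, vis_row):
    # P = mean over connecting lines t of min over s of (l_t / l'_s).
    vis = vis_row[vis_row > 0]  # drop the zero self-distance
    return float(np.mean([np.min(l / vis) for l in ir_row if l > 0]))

def match_feature_point(ir_pts, vis_pts, i):
    # The candidate whose matching degree is closest to the second threshold 1.
    d_ir, d_vis = line_distances(ir_pts), line_distances(vis_pts)
    degrees = np.array([matching_degree(d_ir[i], d_vis[j]) for j in range(len(vis_pts))])
    return int(np.argmin(np.abs(degrees - 1.0)))
```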
It should be noted that, in other embodiments of the present invention, other image matching methods may be used for matching, and the image matching algorithm is a technical means well known to those skilled in the art, which is not limited and described herein.
Step S3: obtaining the texture information carrying degree of the edge feature points according to the structural feature degree; screening out texture feature points in the visible light image according to the texture information carrying degree; clustering texture feature points in the visible light image and structural feature points in the infrared image respectively, and sequentially obtaining a texture feature point area on the visible light image and a structural feature point area on the infrared image; the foreground region is subjected to matching superposition according to the matching characteristic points to obtain an intersection region between the texture characteristic point region and the structural characteristic point region, and for the intersection region, visible light weight of each group of pixel points at corresponding positions in the intersection region is obtained according to infrared temperature difference and texture distribution characteristics in the intersection region; fusing each group of corresponding position pixel points of the intersection region according to the visible light weight to obtain a fused pixel value; and for the non-intersection region, taking the pixel value of the pixel point corresponding to the infrared image as a fusion pixel value to obtain a fusion image.
Since the edge feature points in the foreground region exhibit only the structural features and texture features of the region, the texture information carrying degree of each edge feature point can be obtained from its structural feature degree. In one embodiment of the invention, if the structural feature degree of the $i$-th edge feature point is $C_i$, its texture information carrying degree is $1-C_i$; that is, the sum of the structural feature degree and the texture information carrying degree of each edge feature point equals the value 1.
In one embodiment of the invention, edge feature points with the texture feature carrying degree larger than 0.7 are used as texture feature points in the visible light image. In other embodiments of the present invention, the filtering conditions of the texture feature points may be set by the practitioner, which is not limited herein.
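Given the structural feature degrees computed on the visible light image (structural_degree_vis and pts_vis are assumed to come from the earlier sketches), the screening reduces to:

```python
import numpy as np

# Texture information carrying degree = 1 - structural feature degree.
carrying = 1.0 - structural_degree_vis  # per edge feature point on the visible image
texture_points = np.asarray(pts_vis)[carrying > 0.7]  # threshold from this embodiment
```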
The texture feature points on the foreground region of the visible light image are clustered to obtain a wound texture region, and the structural feature points on the foreground region of the infrared image are clustered to obtain an animal high-temperature region; in a practical scenario, the practitioner often focuses on the high temperature region of the animal and the wound region of the animal and takes further medical measures, so that the images after subsequent fusion need to show both the animal temperature and the wound texture. Therefore, in the embodiment of the invention, the texture feature point area on the visible light image and the structural feature point area on the infrared image need to be acquired first.
In one embodiment of the invention, texture feature points on a visible light image are clustered according to the distance between the texture feature points on the visible light image, foreground areas in the visible light image are divided according to a clustering result, and the foreground areas of the divided visible light image are used as texture feature point areas; clustering the structural feature points on the infrared image according to the distance between the structural feature points on the infrared image, dividing the foreground region in the infrared image according to the clustering result, and taking the segmented foreground region of the infrared image as the structural feature point region.
It should be noted that, clustering algorithms such as a K-means clustering algorithm, a DBSCAN clustering algorithm, etc. may be used to cluster texture feature points on the visible light image and structural feature points on the infrared image, and the clustering algorithm is a technical means well known to those skilled in the art, and is not limited and described herein in detail.
It should be noted that, in other embodiments of the present invention, other methods for obtaining the texture feature point region on the visible light image and the structural feature point region on the infrared image are well known to those skilled in the art, and are not limited and described herein.
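A sketch using DBSCAN from scikit-learn; the eps and min_samples values are illustrative assumptions, since the embodiments leave the clustering algorithm open, and struct_points_ir is assumed to hold the structural feature points screened on the infrared image:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_regions(points, eps=15.0, min_samples=5):
    # Cluster feature points by spatial distance; each cluster label marks one region.
    pts = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return {lab: pts[labels == lab] for lab in set(labels) if lab != -1}  # -1 = noise

texture_regions = cluster_point_regions(texture_points)       # on the visible image
structural_regions = cluster_point_regions(struct_points_ir)  # on the infrared image
```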
And overlapping and fusing the texture feature point regions and the structural feature point regions obtained by dividing the visible light image and infrared image foreground regions to obtain all intersection regions. When the pixel points at corresponding positions in an intersection region are fused, different kinds of information call for different weights: an intersection region that emphasizes texture information, such as a wound, should give the visible light image a larger weight than the infrared image, while a region that emphasizes temperature should give the infrared image a larger weight than the visible light image; the visible light weight and the infrared weight of each region sum to the value 1. Therefore, in the embodiment of the invention, the visible light weight of each group of pixel points at corresponding positions in each intersection region is obtained.
Preferably, in one embodiment of the present invention, the visible light weight obtaining method includes:
calculating the difference between the gray average value of the infrared image in the intersection region and the gray average value of the foreground region of the infrared image; since regions with higher gray values in the infrared image are hotter, this difference reflects the temperature change of the intersection region relative to the infrared image foreground region and gives the infrared temperature difference of the intersection region; calculating the product of the texture feature point density and the gray value variance of the visible light image in the intersection region, where a larger texture feature point density and gray value variance mean richer texture details and a denser texture feature point distribution, so the texture features of the intersection region are more important; taking this product as the texture distribution feature in the intersection region; obtaining the visible light weight according to the infrared temperature difference and the texture distribution feature, where the visible light weight is negatively correlated with the infrared temperature difference and positively correlated with the texture distribution feature.
It should be noted that, in other embodiments of the present invention, other mathematical operation methods may be used to express a negative correlation between the visible light weight and the infrared temperature difference, and the mathematical operation method is a technical means well known to those skilled in the art, and will not be described herein.
Preferably, in one embodiment of the present invention, obtaining the visible light weight according to the infrared temperature difference and the texture distribution feature includes:
the visible light weight is obtained according to a visible light weight calculation formula, and the visible light weight calculation formula is as follows:
$$W_i = \mathrm{Norm}\left(\frac{\frac{N_i}{S_i}\cdot\sigma_i^2}{\left|\overline{G}_i-\overline{G}\right|}\right)$$

where $W_i$ represents the visible light weight of the $i$-th intersection region; $\overline{G}_i$ represents the gray average value of the infrared image in the $i$-th intersection region; $\overline{G}$ represents the gray average value of the foreground region of the infrared image; $N_i$ represents the number of texture feature points of the visible light image in the $i$-th intersection region; $S_i$ represents the area of the $i$-th intersection region; $\sigma_i^2$ represents the gray variance of the visible light image in the $i$-th intersection region; and $\mathrm{Norm}$ represents the normalization function.
In the visible light weight calculation formula, $\left|\overline{G}_i-\overline{G}\right|$ represents the infrared temperature difference in the $i$-th intersection region. When the infrared temperature difference is small, the temperature of the $i$-th intersection region changes little relative to the infrared image foreground region, so the influence of the infrared gray values of the pixel points in the $i$-th intersection region on the fused image decreases, and the $i$-th intersection region should be given a larger visible light weight. $\frac{N_i}{S_i}\cdot\sigma_i^2$ represents the texture distribution feature: when the texture feature point density and the gray value variance of the visible light image in the $i$-th intersection region are larger, the texture details of the intersection region are richer and the texture feature points are distributed more densely; the influence of the visible light gray values of the pixel points in the $i$-th intersection region on the fused image then increases, and the visible light weight is larger.
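A sketch of the raw weight for a single intersection region, with boolean masks standing in for the regions; the small epsilon guarding against a zero temperature difference is an added assumption:

```python
import numpy as np

def visible_weight_raw(ir_gray, vis_gray, region_mask, fg_mask, n_texture_pts):
    # Infrared temperature difference: region gray mean vs. foreground gray mean.
    temp_diff = abs(ir_gray[region_mask].mean() - ir_gray[fg_mask].mean())
    # Texture distribution feature: texture point density times gray variance.
    texture = (n_texture_pts / region_mask.sum()) * vis_gray[region_mask].var()
    return texture / (temp_diff + 1e-6)  # epsilon is an assumption, not in the patent

# Raw values over all intersection regions are then normalized (assumed max-normalization):
# weights = raw_values / raw_values.max()
```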
And fusing the pixel points at the corresponding positions in the intersection region according to the visible light weight of the intersection region. Preferably, in one embodiment of the present invention, fusing pixel points at each set of corresponding positions in an intersection area according to a visible light weight to obtain a fused pixel value includes:
Traversing all intersection areas to obtain visible light weights of pixel points at corresponding positions in each group of intersection areas; obtaining infrared weights of pixel points at corresponding positions in each intersection region according to the visible light weights; the visible light weight and the infrared weight are in negative correlation; calculating a first product between the infrared weight and the gray value of the pixel point in the foreground region of the infrared image and a second product between the visible light weight and the gray value of the pixel point in the foreground region of the visible light image in each group of pixel points in the corresponding positions; and adding the first product and the second product to obtain a fused pixel value fused with each group of pixel points at the corresponding position in the intersection region. In one embodiment of the present invention, when pixel points exist in the corresponding positions of the visible light image foreground region and the infrared image foreground region, the calculation formula of the fused pixel value is as follows:
$$F_j = \left(1-W_i\right)\cdot I_j + W_i\cdot V_j$$

where $F_j$ represents the fused pixel value of the $j$-th pixel point in the intersection region; $W_i$ represents the visible light weight of the $i$-th intersection region; $I_j$ represents the gray value of the $j$-th pixel point of the infrared image foreground region in the $i$-th intersection region; $V_j$ represents the gray value of the $j$-th pixel point of the visible light image foreground region in the $i$-th intersection region; and $\left(1-W_i\right)$ represents the infrared weight of the $i$-th intersection region.
In the fused pixel value calculation formula, the visible light weight $W_i$ weights the gray values of the pixel points of the visible light image foreground region in the $i$-th intersection region, giving the second product $W_i\cdot V_j$ when each group of pixel points at corresponding positions is fused. Subtracting the visible light weight of the $i$-th intersection region from the value 1 gives the infrared weight of the $i$-th intersection region, which weights the gray values of the pixel points of the infrared image foreground region in the $i$-th intersection region, giving the first product $\left(1-W_i\right)\cdot I_j$. The sum of the first product and the second product is taken as the fused pixel value after fusing each group of pixel points at corresponding positions.
For the non-intersection region, the infrared image is used as a reference for fusion, so that the pixel value of the pixel point corresponding to the infrared image is used as a fusion pixel value in the non-intersection region, and the fusion image is obtained.
Thus, a fusion image of the visible light image and the infrared image foreground region is obtained.
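Putting the fusion rule together, a minimal sketch with region masks and weights from the previous steps, assuming the two foreground regions have already been registered via the matching feature points:

```python
import numpy as np

def fuse_images(ir_gray, vis_gray, region_masks, weights):
    # Non-intersection pixels keep the infrared value; intersection pixels are
    # the weighted sum of visible light and infrared gray values.
    fused = ir_gray.astype(np.float64).copy()
    for mask, w in zip(region_masks, weights):
        fused[mask] = (1.0 - w) * ir_gray[mask] + w * vis_gray[mask]
    return np.clip(fused, 0, 255).astype(np.uint8)
```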
Step S4: and monitoring the body temperature of the animal according to the fusion image.
The fusion image obtained in step S3 can reflect changes in animal body temperature and be used to monitor the condition of animal wounds and scars. In one embodiment of the invention, regions where the temperature is high and wound scarring occurs are monitored; such regions are treated as problem regions and follow-up treatment is performed.
In summary, the invention firstly obtains the visible light image and the infrared image of the animal; obtaining a foreground region of a visible light image and an infrared image; obtaining edge feature points of a foreground region and obtaining structural feature degrees according to distribution features of the edge feature points and surrounding feature point densities; screening out structural feature points in the two images, and matching the foreground region according to the matched feature points; according to the structural feature degree, the texture information carrying degree of the edge feature points can be obtained, and the texture feature points are screened out; clustering the foreground areas to obtain texture feature point areas on the visible light image and structural feature point areas on the infrared image respectively; overlapping the texture feature point areas and the structural feature point areas to obtain intersection areas, and facilitating subsequent calculation of visible light weights of each intersection area; and calculating visible light weights of different intersection areas to calculate a fused pixel value of each pixel point when the subsequent images are fused, taking the pixel point pixel value of the infrared image in the non-intersection area as the fused pixel value, and obtaining a final fused image. The invention reduces the interference of the invalid characteristic points on the image matching, ensures higher accuracy of the image matching, calculates the weight of each intersection area when the images are fused, and ensures that the fused image can monitor the body temperature of animals more clearly.
An embodiment of an image fusion method for infrared images and visible light images of animals comprises the following steps:
When the infrared image and the visible light image are fused in the prior art, the two images are required to be matched, and as a large number of characteristic points are usually generated between the infrared image and the visible light image, the two images cannot be matched well, so that the accuracy of the fused image is affected. In order to solve the technical problem, the embodiment provides an image fusion method of an infrared image and a visible light image of an animal:
Step S1: and obtaining visible light images and infrared images of the animals.
Step S2: obtaining a foreground region of a visible light image and an infrared image; obtaining all edge feature points in a foreground region; obtaining the structural feature degree of each edge feature point according to the distribution feature of each edge feature point and the density of surrounding feature points; and screening out structural feature points according to the structural feature degree to obtain matching feature points of all the structural feature points in the foreground region in the infrared image in the foreground region in the visible light image.
Step S3: obtaining the texture information carrying degree of the edge feature points according to the structural feature degree; screening out texture feature points in the visible light image according to the texture information carrying degree; clustering texture feature points in the visible light image and structural feature points in the infrared image respectively, and sequentially obtaining a texture feature point area on the visible light image and a structural feature point area on the infrared image; the foreground region is subjected to matching superposition according to the matching characteristic points to obtain an intersection region between the texture characteristic point region and the structural characteristic point region, and for the intersection region, visible light weight of each group of pixel points at corresponding positions in the intersection region is obtained according to infrared temperature difference and texture distribution characteristics in the intersection region; fusing each group of corresponding position pixel points of the intersection region according to the visible light weight to obtain a fused pixel value; and for the non-intersection region, taking the pixel value of the pixel point corresponding to the infrared image as a fusion pixel value to obtain a fusion image.
Since the specific implementation process of steps S1-S3 is already described in detail in the above-mentioned temperature detection system for livestock and veterinary animals based on IR thermal imaging, no further description is given.
The technical effect of this embodiment is: in the embodiment, firstly, a visible light image and an infrared image of an animal are acquired, so that the subsequent image fusion is facilitated; only obtaining the foreground region of the visible light image and the infrared image can reduce unnecessary texture information; obtaining edge feature points of a foreground region, obtaining structural feature degrees according to distribution features of the edge feature points and surrounding feature point densities, and reflecting contribution degrees of each edge feature point to image structural features; structural feature points in the two images are screened out, the boundary structures of the two images are reflected by the structural feature points, and foreground areas can be matched according to the matched feature points; according to the structural feature degree, the texture information carrying degree of the edge feature points can be obtained, the texture feature points are screened out, and the texture information in the visible light image is reflected; clustering the foreground areas to obtain texture feature point areas on the visible light image and structural feature point areas on the infrared image respectively, wherein the texture feature point areas reflect obvious texture information in the foreground areas of the visible light image, and the structural feature point areas reflect temperature distribution characteristics in the foreground areas of the infrared image; overlapping the texture feature point areas and the structural feature point areas to obtain intersection areas, and facilitating subsequent calculation of visible light weights of each intersection area; and calculating visible light weights of different intersection areas to calculate fusion pixel values of subsequent images, taking pixel point pixel values of infrared images in non-intersection areas as fusion pixel values, and obtaining a final fusion image, wherein the fusion image can reflect an area with higher temperature and can reflect texture characteristics of the area. According to the embodiment, the interference of the invalid feature points on the image matching is reduced, the image matching accuracy is higher, and the visible light weight of each intersection region is calculated when the images are fused, so that the fused image not only maintains temperature information, but also shows corresponding texture details.
It should be noted that the order of the embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.

Claims (9)

1. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system, comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor, when executing said computer program, implements the steps of:
Obtaining visible light images and infrared images of animals;
Obtaining a foreground region of the visible light image and the infrared image; obtaining all edge feature points in the foreground region; obtaining the structural feature degree of each edge feature point according to the distribution feature of each edge feature point and the density of surrounding feature points; screening out structural feature points according to the structural feature degree to obtain matching feature points of all the structural feature points in a foreground region in the infrared image in the foreground region in the visible light image;
Obtaining the texture information carrying degree of the edge feature points according to the structural feature degree; screening out texture feature points in the visible light image according to the texture information carrying degree; clustering the texture feature points in the visible light image and the structural feature points in the infrared image respectively, and sequentially obtaining a texture feature point region on the visible light image and a structural feature point region on the infrared image; matching and superposing the foreground region according to the matching characteristic points to obtain an intersection region between the texture characteristic point region and the structural characteristic point region, and obtaining visible light weight of each group of pixel points at corresponding positions in the intersection region according to infrared temperature difference and texture distribution characteristics in the intersection region; fusing each group of corresponding position pixel points of the intersection region according to the visible light weight to obtain a fused pixel value; for the non-intersection region, taking the pixel value of the pixel point corresponding to the infrared image as the fusion pixel value to obtain a fusion image;
Monitoring the body temperature of the animal according to the fusion image;
The method for acquiring the matching feature points comprises the following steps:
calculating the connection line distance between each structural feature point and other structural feature points on the visible light image and the infrared image; taking each structural feature point on the visible light image as a feature point to be matched of each structural feature point on the infrared image;
calculating the minimum value of the ratio between each connecting distance of each structural feature point on the infrared image and all connecting distances of each feature point to be matched as a connecting matching value of each connecting distance of each structural feature point on the infrared image;
Averaging the connection matching values of all connection distances of each structural feature point on the infrared image to obtain the matching degree between each structural feature point and each feature point to be matched on the infrared image;
and setting a second threshold value to 1, and taking the feature point to be matched whose matching degree is closest to the second threshold value as the matching feature point of each structural feature point on the infrared image.
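A minimal Python sketch of the matching method above, under one literal reading of the claim in which the "ratio" is each infrared connection distance divided by each candidate connection distance; the names are hypothetical, and at least two structural feature points per image, with distinct coordinates, are assumed:

    import numpy as np

    def connection_distances(points):
        """Line distances from each structural feature point to all others."""
        pts = np.asarray(points, dtype=np.float64)
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
        return [np.delete(d[i], i) for i in range(len(pts))]  # drop self-distance

    def match_structural_points(ir_points, vis_points):
        """Pick, for each infrared structural point, the visible-light point
        whose matching degree is closest to the second threshold of 1."""
        ir_dists = connection_distances(ir_points)
        vis_dists = connection_distances(vis_points)
        matches = []
        for d_ir in ir_dists:
            degrees = []
            for d_vis in vis_dists:
                ratios = d_ir[:, None] / d_vis[None, :]    # every IR distance vs
                                                           # every candidate distance
                degrees.append(ratios.min(axis=1).mean())  # connection matching
                                                           # values, then their mean
            matches.append(int(np.argmin(np.abs(np.asarray(degrees) - 1.0))))
        return matches  # matches[i] = index of the matched visible-light point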
2. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein the method of obtaining the distribution characteristics comprises:
Calculating Euclidean distance between each edge characteristic point and adjacent edge characteristic points on the same edge, and gradient average value of all pixel points on the same edge between the adjacent edge characteristic points;
and taking the maximum value of the product of the Euclidean distance between each edge characteristic point and all adjacent edge characteristic points on one edge and the gradient mean value as the distribution characteristic of each edge characteristic point on the corresponding edge.
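A sketch of this computation, assuming each edge is available as an ordered pixel chain with a gradient magnitude per pixel; the data layout and names are hypothetical:

    import numpy as np

    def distribution_features(edge_pixels, grads, feature_idx):
        """Distribution feature of each edge feature point on one edge.

        edge_pixels : (N, 2) ordered pixel coordinates along the edge
        grads       : (N,) gradient magnitude of each edge pixel
        feature_idx : sorted indices into edge_pixels marking feature points
        """
        edge_pixels = np.asarray(edge_pixels, dtype=np.float64)
        grads = np.asarray(grads, dtype=np.float64)
        feats = {}
        for k, i in enumerate(feature_idx):
            neighbours = [feature_idx[k - 1]] if k > 0 else []
            if k + 1 < len(feature_idx):
                neighbours.append(feature_idx[k + 1])
            products = []
            for j in neighbours:
                dist = np.linalg.norm(edge_pixels[i] - edge_pixels[j])  # Euclidean
                lo, hi = min(i, j), max(i, j)
                grad_mean = grads[lo:hi + 1].mean()  # mean gradient of the edge
                                                     # pixels between the two points
                products.append(dist * grad_mean)
            feats[i] = max(products) if products else 0.0  # maximum product
        return feats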
3. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein the method of obtaining the surrounding feature point density comprises:
Presetting a first window by taking each edge characteristic point as a center; and taking the number of edge feature points in the first window as the surrounding feature point density of each edge feature point.
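A sketch of this step; the claim leaves the size of the preset first window open, so an 11 x 11 box centred on each point, with the centre point itself excluded from the count, is an assumption here:

    import numpy as np

    def surrounding_density(feature_points, window=11):
        """Count of edge feature points inside the first window of each point."""
        pts = np.asarray(feature_points, dtype=np.float64)
        half = window // 2
        counts = []
        for p in pts:
            inside = (np.abs(pts - p) <= half).all(axis=1)  # box membership test
            counts.append(int(inside.sum()) - 1)            # exclude the centre
        return np.asarray(counts)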
4. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein the method of obtaining the structural feature level comprises:
and normalizing the product of the distribution characteristic of each edge characteristic point and the reciprocal of the density of the surrounding characteristic points to obtain the structural characteristic degree of each edge characteristic point.
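A sketch combining this claim with the screening of claims 5 and 9 below; the claim does not name the normalization function, so division by the maximum is assumed here:

    import numpy as np

    def structural_feature_degree(dist_feats, densities, threshold=0.7):
        """Structural feature degree and structural feature point screening."""
        dist_feats = np.asarray(dist_feats, dtype=np.float64)
        densities = np.asarray(densities, dtype=np.float64)
        raw = dist_feats / np.maximum(densities, 1.0)  # product with the reciprocal
                                                       # of the surrounding density
        degree = raw / raw.max() if raw.max() > 0 else raw  # assumed normalization
        return degree, degree > threshold              # first threshold (claim 9)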
5. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein the method of obtaining structural feature points comprises:
presetting a first threshold value, and taking the edge feature points with the structural feature degree larger than the first threshold value as the structural feature points.
6. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein the visible light weight acquisition method comprises:
Calculating the difference between the gray average value of the infrared image in the intersection area and the gray average value of the foreground area of the infrared image to obtain the infrared temperature difference in the intersection area;
Calculating the product of the texture feature point density and the gray value variance of the visible light image in the intersection area as the texture distribution feature in the intersection area;
Obtaining the visible light weight according to the infrared temperature difference and the texture distribution feature; the visible light weight is negatively correlated with the infrared temperature difference and positively correlated with the texture distribution feature.
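A sketch of the per-region quantities above; combining them as texture feature divided by temperature difference matches the stated correlations, the small epsilon is an added guard not found in the claim, and any final normalization across regions (the norm of claim 7) is left to the caller:

    import numpy as np

    def visible_light_weight(ir_gray, vis_gray, fg_mask, region_mask, n_texture_pts):
        """Unnormalized visible light weight of one intersection region;
        ir_gray/vis_gray are registered grayscale arrays, masks are boolean."""
        ir_temp_diff = abs(ir_gray[region_mask].mean() - ir_gray[fg_mask].mean())
        density = n_texture_pts / region_mask.sum()     # texture feature points
                                                        # per unit region area
        texture_feat = density * vis_gray[region_mask].var()  # density x variance
        return texture_feat / (ir_temp_diff + 1e-6)     # negative correlation with
                                                        # the temperature difference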
7. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 6, wherein obtaining visible light weights from the infrared temperature differences and the texture distribution features comprises:
the visible light weight is obtained according to a visible light weight calculation formula, wherein the visible light weight calculation formula is as follows:
Q_j = norm( (N_j / S_j × σ_j²) / |μ_j − μ| )
where Q_j represents the visible light weight of the j-th intersection region; μ_j represents the gray average value of the infrared image in the j-th intersection region; μ represents the gray average value of the foreground region of the infrared image; N_j represents the number of texture feature points of the visible light image in the j-th intersection region; S_j represents the area of the j-th intersection region; σ_j² represents the gray variance of the visible light image in the j-th intersection region; and norm(·) represents the normalization function.
8. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 1, wherein fusing each set of corresponding position pixels of the intersection region according to the visible light weight to obtain fused pixel values comprises:
traversing all intersection areas to obtain the visible light weight of each group of pixel points at corresponding positions in each intersection area; obtaining infrared weights of pixel points at corresponding positions in each intersection region according to the visible light weights; the visible light weight and the infrared weight are in negative correlation;
Calculating a first product between the infrared weight and the gray value of the pixel point in the foreground region of the infrared image and a second product between the visible light weight and the gray value of the pixel point in the foreground region of the visible light image in each group of pixel points in the corresponding positions;
And adding the first product and the second product to obtain a fused pixel value fused with each group of pixel points at the corresponding position in the intersection region.
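A sketch of this fusion rule, taking the infrared weight as one minus the visible light weight, which is one simple choice satisfying the stated negative correlation; all names are hypothetical:

    import numpy as np

    def fuse_region(w_vis, vis_gray, ir_gray, mask):
        """Claim 8 fusion for the pixels of one intersection region."""
        w_ir = 1.0 - w_vis                    # infrared weight
        first = w_ir * ir_gray[mask]          # first product
        second = w_vis * vis_gray[mask]       # second product
        return first + second                 # fused pixel values

    # usage on dummy 4x4 foregrounds with one region mask and weight 0.3
    ir = np.full((4, 4), 200.0); vis = np.full((4, 4), 100.0)
    m = np.zeros((4, 4), dtype=bool); m[1:3, 1:3] = True
    print(fuse_region(0.3, vis, ir, m))       # each fused value: 170.0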
9. An IR thermal imaging based animal husbandry and veterinary animal body temperature detection system according to claim 5, wherein the first threshold is set to 0.7.
CN202410289424.8A 2024-03-14 2024-03-14 Animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging Active CN117893870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410289424.8A CN117893870B (en) 2024-03-14 2024-03-14 Animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging

Publications (2)

Publication Number Publication Date
CN117893870A (en) 2024-04-16
CN117893870B (en) 2024-06-07

Family

ID=90652022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410289424.8A Active CN117893870B (en) 2024-03-14 2024-03-14 Animal husbandry and veterinary animal body temperature detection system based on IR thermal imaging

Country Status (1)

Country Link
CN (1) CN117893870B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118362207B (en) * 2024-06-19 2024-08-30 开信(大连)互联网服务有限公司 Animal body temperature detection system for livestock and poultry

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198157A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 Heterologous image interfusion method based on well-marked target extracted region and NSST
CN111667655A (en) * 2020-07-10 2020-09-15 上海工程技术大学 Infrared image-based high-speed railway safety area intrusion alarm device and method
CN111681198A (en) * 2020-08-11 2020-09-18 湖南大学 Morphological attribute filtering multimode fusion imaging method, system and medium
CN116563127A (en) * 2022-01-28 2023-08-08 北京华航无线电测量研究所 Visible light infrared image fusion device based on contrast saliency
CN116612306A (en) * 2023-07-17 2023-08-18 山东顺发重工有限公司 Computer vision-based intelligent flange plate alignment method and system
CN116801093A (en) * 2023-08-25 2023-09-22 荣耀终端有限公司 Image processing method, device and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant