CN117726994A - Vehicle re-identification method, apparatus, device, storage medium, and program product - Google Patents
- Publication number: CN117726994A
- Application number: CN202410018287.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The application relates to a vehicle re-identification method, apparatus, device, storage medium, and program product. The method comprises: acquiring two images to be compared, each containing a vehicle; determining attribute description data and corresponding attribute feature data of each image to be compared based on an attribute recognition network; extracting the vehicle in each image to be compared based on an image recognition network to obtain vehicle feature data of each image; and determining whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data, and the vehicle feature data of the two images. By adopting the method, the interpretability of the vehicle re-identification result can be improved.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a vehicle re-identification method, apparatus, device, storage medium, and program product.
Background
With the development of image processing technology, image-based object detection models play an important role in many industries. For example, an image-based vehicle re-identification model can process a received vehicle image and output a vehicle re-identification result indicating whether the vehicle in the image to be detected is the same vehicle as the one in a target vehicle image.
However, existing vehicle re-identification models generally compare extracted abstract vehicle features to obtain the re-identification result, so the interpretability of that result is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle re-identification method, apparatus, device, storage medium, and program product that can improve the interpretability of the vehicle re-identification result.
In a first aspect, the present application provides a vehicle re-identification method, including:
acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
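The four steps of the first aspect can be sketched as follows. This is an illustrative sketch only: the function names, the toy stand-in networks, and the data representations are assumptions for demonstration, not part of the claimed implementation.

```python
def re_identify(image_a, image_b, attribute_net, image_net, decide):
    """Return True when the vehicles in the two images are judged to be the same."""
    # Step 2: attribute description data and attribute feature data per image.
    desc_a, attr_a = attribute_net(image_a)
    desc_b, attr_b = attribute_net(image_b)
    # Step 3: vehicle feature data per image.
    veh_a = image_net(image_a)
    veh_b = image_net(image_b)
    # Step 4: the final decision uses all three kinds of data.
    return decide((desc_a, attr_a, veh_a), (desc_b, attr_b, veh_b))

# Toy stand-ins so the sketch runs end to end (not real networks).
def toy_attribute_net(img):
    return {"color": img["color"]}, [float(len(img["color"]))]

def toy_image_net(img):
    return img["pixels"]

def toy_decide(a, b):
    return a == b
```

With the toy stand-ins, two images carrying identical color and pixel data are judged to contain the same vehicle, and any difference yields a negative result.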
In one embodiment, determining whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data and the vehicle feature data between the images to be compared includes: carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data; and determining whether vehicles in the images to be compared are the same or not according to the fusion characteristic data and the corresponding attribute description data of the images to be compared.
In one embodiment, determining whether the vehicles in the images to be compared are the same according to the fusion feature data and corresponding attribute description data includes: if the fusion feature data of the images to be compared are similar, determining whether the vehicles are the same according to the attribute description data of the images; if the fusion feature data are dissimilar, determining that the vehicles in the images to be compared are different.
In one embodiment, determining whether the vehicles in the images to be compared are the same according to the attribute description data includes: if the attribute types in the attribute description data of the images to be compared, and the attribute values of the corresponding attribute types, are all the same, determining that the vehicles are the same; if any attribute type or corresponding attribute value differs, determining that the vehicles are different.
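A minimal sketch of this attribute comparison, assuming (purely for illustration) that the attribute description data is held as a mapping from attribute type to attribute value:

```python
def same_by_attributes(desc_a, desc_b):
    """Vehicles match only when both the attribute types and their values coincide."""
    if desc_a.keys() != desc_b.keys():   # differing attribute types -> different vehicles
        return False
    return all(desc_a[k] == desc_b[k] for k in desc_a)
```

Any mismatch in either the set of attribute types or a single attribute value is enough to declare the vehicles different.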
In one embodiment, the attribute feature data and the vehicle feature data each comprise feature data at different feature granularities, and the feature granularities of the attribute feature data correspond one-to-one to those of the vehicle feature data. Correspondingly, performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain the corresponding fusion feature data includes: for each feature granularity, performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain the fusion feature data at that feature granularity.
In one embodiment, the feature granularity includes at least two of a whole vehicle granularity, a component granularity, and a unit granularity.
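Per-granularity fusion can be sketched as below; the dict-of-vectors representation, the granularity names, and the elementwise-sum fusion rule are illustrative assumptions, since the text does not fix a fusion operator.

```python
GRANULARITIES = ("whole_vehicle", "component", "unit")

def fuse_per_granularity(attr_feats, veh_feats):
    """attr_feats / veh_feats: granularity -> feature vector (same keys, same lengths).

    Fuses the two feature sets separately at each granularity, as the
    embodiment requires, using elementwise addition as a placeholder rule."""
    return {g: [a + v for a, v in zip(attr_feats[g], veh_feats[g])]
            for g in attr_feats}
```

Each granularity is fused independently, so the result keeps one fused vector per granularity rather than collapsing everything into a single vector.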
In one embodiment, the attribute identification network and the image identification network are jointly trained in the following manner: acquiring a sample comparison image group; the sample comparison image group comprises two sample comparison images; each sample comparison image comprises a vehicle; acquiring an attribute identification tag and an image identification tag of a sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image identification network according to the difference condition between the first prediction result and the corresponding attribute identification label and the difference condition between the second prediction result and the corresponding image identification label.
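Adjusting both networks from their respective label differences requires a combined objective; a common choice, assumed here since the text does not specify one, is a weighted sum of the two per-network losses (squared error is likewise a placeholder for the actual loss functions).

```python
def joint_loss(pred_attr, label_attr, pred_img, label_img, w_attr=0.5):
    """Combined objective for joint training (weights and loss form assumed).

    pred_attr / label_attr: first prediction result vs. attribute identification label.
    pred_img  / label_img:  second prediction result vs. image identification label."""
    loss_attr = sum((p - t) ** 2 for p, t in zip(pred_attr, label_attr))
    loss_img = sum((p - t) ** 2 for p, t in zip(pred_img, label_img))
    return w_attr * loss_attr + (1.0 - w_attr) * loss_img
```

Minimizing this single scalar lets one optimizer step update the parameters of both networks at once, which matches the joint adjustment described in the embodiment.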
In a second aspect, the present application further provides a vehicle re-identification apparatus, including:
the image acquisition module is used for acquiring two images to be compared; the images to be compared comprise vehicles;
the first determining module is used for determining attribute description data and corresponding attribute characteristic data of each image to be compared based on the attribute identification network;
the second determining module is used for extracting vehicles in each image to be compared based on the image recognition network to obtain vehicle characteristic data of each image to be compared;
and the third determining module is used for determining whether the vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the steps of:
acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
According to the vehicle re-identification method, apparatus, device, storage medium, and program product, the two images to be compared are processed by the attribute recognition network and the image recognition network to obtain the attribute description data, attribute feature data, and vehicle feature data of each image, and vehicle re-identification is then performed on the vehicles in the two images according to these three kinds of data to obtain the re-identification result. The vehicle feature data obtained from the image recognition network is comprehensive, so performing re-identification according to it helps ensure an accurate result. Further, since the attribute description data and attribute feature data obtained from the attribute recognition network are intuitive and interpretable, determining the re-identification result by combining them with the vehicle feature data makes the result more interpretable while remaining accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 is an application environment diagram of a vehicle re-identification method provided in this embodiment;
fig. 2 is a flow chart of a vehicle re-identification method according to the present embodiment;
fig. 3 is a schematic flow chart for determining whether vehicles in the images to be compared are the same according to the embodiment;
fig. 4 is a schematic diagram of a segmentation network according to the present embodiment;
fig. 5 is a schematic diagram of detection of a target detection network according to the present embodiment;
fig. 6 is a schematic diagram of attribute recognition of an attribute recognition network according to the present embodiment;
fig. 7 is a flowchart of a combined training method for an attribute recognition network and an image recognition network according to the present embodiment;
fig. 8 is a flowchart of another vehicle re-identification method according to the present embodiment;
fig. 9 is a block diagram of a vehicle re-identification apparatus according to the present embodiment;
fig. 10 is a block diagram showing the construction of another vehicle re-identification apparatus provided in the present embodiment;
fig. 11 is an internal structure diagram of a computer device according to the present embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Before introducing the vehicle re-identification method provided by this embodiment, it should be noted that vehicle re-identification is one of the core technologies of intelligent transportation: given a vehicle image, a matching algorithm identifies vehicles with the same identity within a monitored range, which has great significance and practical value in fields such as vehicle retrieval and security. In the conventional approach, however, candidate vehicles are usually screened one by one from massive traffic surveillance video through manual observation, which consumes manpower and time and struggles to meet accuracy requirements. With the rapid development of artificial intelligence, current intelligent vehicle re-identification methods mainly compare features of collected vehicle images with a deep learning algorithm to obtain the re-identification result. However, differences caused by varying viewpoints and backgrounds, together with changes in illumination and acquisition parameters, degrade the deep learning model, so the model must be optimized for the vehicle re-identification scenario to improve robustness and accuracy. Conventional vehicle re-identification models usually complete component feature extraction by means such as segmentation, and finally combine global and local features to perform re-identification. This feature-comparison approach essentially measures distances between different vehicles to judge their similarity.
Because such features are abstract, feature-based methods are not strongly interpretable: analysis of the re-identification result is limited, and the network result is difficult to improve. The embodiments of the present application aim to solve this problem.
The vehicle re-identification method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. Specifically, the server 104 may acquire two images to be compared through the terminal 102, and process the two images to be compared based on the attribute identification network, so as to determine attribute description data and corresponding attribute feature data of each image to be compared. The server 104 extracts vehicles in each image to be compared based on the image recognition network, and obtains vehicle feature data of each image to be compared. Further, the server 104 determines whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data and the vehicle feature data between the images to be compared, and outputs the determined result to the terminal 102. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a vehicle re-identification method is provided. The method is described, by way of illustration, as applied to the server 104 in fig. 1, and includes the following steps.
s201, two images to be compared are obtained.
The images to be compared may be two-dimensional images for which there is a vehicle re-identification requirement, and each image to be compared contains a vehicle. It should be understood that the vehicles in the two images to be compared may or may not be the same vehicle; this is not limited.
There are many ways to obtain the images to be compared. In one implementation, images are stored in advance in an image database, and when the server detects a vehicle re-identification requirement, two images can be randomly extracted from the database as the images to be compared. In another implementation, when the server detects a vehicle re-identification requirement, it sends an image-upload instruction to a connected terminal, instructing the user to upload the images to be compared through the terminal. In yet another implementation, in response to a vehicle re-identification operation triggered by the user, the server obtains the vehicle re-identification request sent by the user and parses it to obtain the images to be compared.
S202, determining attribute description data and corresponding attribute feature data of each image to be compared based on an attribute identification network.
The attribute recognition network may be a pre-trained network for acquiring attribute information of the images to be compared. The attribute description data may be information describing the vehicle in an image to be compared, for example attribute types such as the category, aspect ratio, and color of the vehicle. Accordingly, the attribute feature data may be the attribute values corresponding to the attribute description data; for example, the attribute value for the vehicle category may be truck, sedan, or minibus, and the attribute value for the vehicle color may be white, black, or gray.
Optionally, in this embodiment, the two images to be compared may be input into the pre-trained attribute recognition network, which processes the received images and outputs the attribute description data and corresponding attribute feature data of both images. In another implementation, the two images to be compared are input into a pre-trained feature extraction network to obtain their target features, and those target features are then input into the attribute recognition network to obtain the attribute description data and corresponding attribute feature data of the two images.
It should be noted that, the attribute identification network and the feature extraction network are both constructed based on a common convolutional neural network, which is not described in detail.
And S203, extracting vehicles in the images to be compared based on the image recognition network to obtain the vehicle characteristic data of the images to be compared.
The image recognition network may be a pre-trained network for performing image recognition on the images to be compared and outputting image features. The vehicle characteristic data may be data characterizing abstract characteristics of the vehicle in the image to be compared.
Optionally, in this embodiment, the two images to be compared may be input to an image recognition network, where the image recognition network processes the received images to be compared and outputs vehicle feature data of each vehicle in the two images to be compared. Another implementation manner may be that the two images to be compared are input into a feature extraction network trained in advance to obtain target features of the two images to be compared, and the target features of the two images to be compared are input into an image recognition network to obtain vehicle feature data of the two images to be compared.
Further, in this embodiment, the image recognition network may include an image segmentation network and an image detection network, which can respectively process the images in different dimensions. The image segmentation network can perform semantic segmentation and extraction on vehicles in the images to be compared to obtain a semantic segmentation result. The image detection network can carry out target detection on the vehicle in the image to be compared to obtain a target detection result. In this embodiment, in order to make the vehicle feature data more accurate, the semantic segmentation result and the target detection result may be used together as the vehicle feature data.
S204, determining whether vehicles in the images to be compared are identical or not according to the attribute feature data, the attribute description data and the vehicle feature data between the images to be compared.
Optionally, in this embodiment, the attribute feature data, attribute description data, and vehicle feature data of the two images to be compared may be input into a pre-trained vehicle comparison model, which compares the received data and outputs a result indicating whether the vehicles are the same. In another implementation, it is first judged whether the vehicle feature data of the two images are identical; if so, the attribute feature data and attribute description data of the two images are compared, and the vehicles are determined to be the same only if these are also identical, otherwise different. In yet another implementation, the attribute feature data, attribute description data, and vehicle feature data of the two images are each compared, and the vehicles are determined to be the same only when all three kinds of data match; otherwise, they are different.
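The last implementation described in S204 reduces to a conjunction of three equality checks; a sketch under assumed data representations (each image's data bundled as a tuple of the three kinds of data):

```python
def same_vehicle_strict(data_a, data_b):
    """data_a, data_b: (attribute_feature_data, attribute_description_data,
    vehicle_feature_data) for one image each.

    Declares the same vehicle only when all three kinds of data match."""
    return all(x == y for x, y in zip(data_a, data_b))
```

A single mismatch in any of the three components is sufficient for a negative result, which mirrors the "all three data are the same" condition of the embodiment.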
It should be noted that the vehicle comparison model in the above embodiment may be constructed based on a common neural network, which is not described in detail. Further, in the process of training the vehicle comparison model, sample data (including attribute feature data, attribute description data and vehicle feature data of two sample images) and sample labels corresponding to the sample data may be input into the vehicle comparison model, the vehicle comparison model processes the received data, outputs a vehicle comparison result, and performs supervised training on the vehicle comparison model according to the vehicle comparison result and the sample labels, so as to improve the comparison accuracy of the vehicle comparison model. The process of performing supervised training on the vehicle comparison model may be a conventional training process of neural network training, and will not be described herein.
In the vehicle re-identification method above, the two images to be compared are processed by the attribute recognition network and the image recognition network to obtain the attribute description data, attribute feature data, and vehicle feature data of each image, and vehicle re-identification is then performed on the vehicles in the two images according to these three kinds of data to obtain the re-identification result. The vehicle feature data obtained from the image recognition network is comprehensive, so performing re-identification according to it helps ensure an accurate result. Further, since the attribute description data and attribute feature data obtained from the attribute recognition network are intuitive and interpretable, determining the re-identification result by combining them with the vehicle feature data makes the result more interpretable while remaining accurate.
Further, to make the vehicle re-identification result more accurate on the basis of the above embodiments, in one embodiment, as shown in fig. 3, determining whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data, and the vehicle feature data includes:
and S301, carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data.
Feature fusion refers to combining or integrating information from different sources or feature sets to extract more comprehensive, more informative features. In this embodiment, fusing the attribute feature data and the vehicle feature data, two dimensions of information, yields a comprehensive feature of the vehicle in an image to be compared. The fusion feature data is, for example, the result of feature-fusing the attribute feature data and the vehicle feature data.
Optionally, in this embodiment, for each image to be compared, the attribute feature data and the vehicle feature data may each be weighted by a predetermined weight and summed to obtain the fusion feature data of that image. The weights of the attribute feature data and the vehicle feature data may be determined from manual experience or through extensive experiments; this is not limited. In another implementation, the attribute feature data and the vehicle feature data of an image to be compared are input into a pre-trained feature fusion model, which processes the received data and outputs the fusion feature data. In this way, the fusion feature data of both images to be compared are determined.
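The weighted-sum fusion described first can be sketched as follows; the weight values and the flat-vector representation are placeholders, not values specified by the text.

```python
def weighted_fusion(attr_feat, veh_feat, w_attr=0.4, w_veh=0.6):
    """Elementwise weighted sum of the two feature vectors.

    The 0.4 / 0.6 weights are illustrative stand-ins for the predetermined
    weights the embodiment leaves to manual experience or experiment."""
    return [w_attr * a + w_veh * v for a, v in zip(attr_feat, veh_feat)]
```

Because the weights sum to one, each fused element stays on the same scale as its inputs, which keeps downstream similarity thresholds comparable across images.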
It should be noted that the feature fusion model may be constructed based on a common neural network, which will not be described in detail. Further, in the process of training the feature fusion model, sample data (including attribute feature data and vehicle feature data) and a sample tag corresponding to the sample data may be input into the feature fusion model, the feature fusion model processes the received data, outputs fusion feature data, and performs supervised training on the feature fusion model according to the fusion feature data and the sample tag, so that feature fusion accuracy of the feature fusion model is improved. The process of performing supervised training on the feature fusion model may be a conventional training process of neural network training, and will not be described herein.
S302, determining whether vehicles in the images to be compared are the same or not according to fusion characteristic data and corresponding attribute description data of the images to be compared.
Optionally, in this embodiment, the fusion feature data and the attribute description data of the two images to be compared may each be compared; if both the fusion feature data and the attribute description data are the same, it is determined that the vehicles in the images to be compared are the same; otherwise, it is determined that the vehicles in the images to be compared are different.
In one embodiment, to make the vehicle re-identification result more accurate, an alternative implementation is provided: if the fusion feature data of the images to be compared are similar, whether the vehicles in the images to be compared are the same is determined according to the attribute description data of the images to be compared; if the fusion feature data of the images to be compared are dissimilar, the vehicles in the images to be compared are determined to be different.
Optionally, in this embodiment, there are many ways to determine whether the fusion feature data of the two images to be compared are similar. One implementation may be:

determining the similarity between the fusion feature data of the images to be compared, for example by computing the cosine similarity between the two and judging whether they are similar according to how the cosine similarity compares with a first preset similarity threshold. Alternatively, the Pearson correlation coefficient between the two may be computed, and similarity judged according to how the Pearson correlation coefficient compares with a second preset similarity threshold. As a further alternative, both the cosine similarity and the Pearson correlation coefficient are computed, the two similarities are weighted and summed according to a preset weight corresponding to each similarity to obtain a target similarity, and whether the fusion feature data of the images to be compared are similar is determined according to how the target similarity compares with a third preset similarity threshold. The first, second and third preset similarity thresholds may be determined based on manual experience or through extensive experiments, and may be adjusted as required, which is not limited here.
It can be understood that when the similarity (the cosine similarity, the Pearson correlation coefficient or the target similarity) is greater than the corresponding preset threshold (the first, second or third preset similarity threshold, respectively), the fusion feature data of the images to be compared are determined to be similar; otherwise, it is determined that the vehicles in the images to be compared are different.
Illustratively, the cosine similarity between the fusion feature data of the two images to be compared may be determined according to the following formula:

$$\cos(A,B)=\frac{\sum_{i=1}^{k}A_i B_i}{\sqrt{\sum_{i=1}^{k}A_i^2}\,\sqrt{\sum_{i=1}^{k}B_i^2}}$$

where \(\cos(A,B)\) is the cosine similarity between the fusion feature data of the images to be compared; \(A\) denotes the k-dimensional vector corresponding to the fusion feature data of one of the images to be compared, and \(A_i\) denotes each component of the vector \(A\); \(B\) denotes the k-dimensional vector corresponding to the fusion feature data of the other image to be compared, and \(B_i\) denotes each component of the vector \(B\).

The Pearson correlation coefficient between the fusion feature data of the two images to be compared may be determined according to the following formula:

$$r(A,B)=\frac{\sum_{i=1}^{k}(A_i-\bar{A})(B_i-\bar{B})}{\sqrt{\sum_{i=1}^{k}(A_i-\bar{A})^2}\,\sqrt{\sum_{i=1}^{k}(B_i-\bar{B})^2}}$$

where \(r(A,B)\) is the Pearson correlation coefficient between the fusion feature data of the images to be compared; \(A\), \(A_i\), \(B\) and \(B_i\) are as defined above; and \(\bar{A}=\frac{1}{k}\sum_{i=1}^{k}A_i\), \(\bar{B}=\frac{1}{k}\sum_{i=1}^{k}B_i\) are the component means.

Further, the target similarity may be determined as follows:

$$S(A,B)=\alpha\cos(A,B)+(1-\alpha)\,r(A,B)$$

where \(S(A,B)\) is the target similarity between the fusion feature data of the two images to be compared; \(\cos(A,B)\) and \(r(A,B)\) are the cosine similarity and the Pearson correlation coefficient defined above; and \(\alpha\) is a preset weighting coefficient. It should be noted that the cosine similarity and the Pearson correlation coefficient both take values in \([-1,1]\), so the target similarity \(S(A,B)\) also lies in \([-1,1]\); the closer the value is to 1, the more similar the fusion feature data.
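The three similarity measures above can be sketched directly in code; the function names and the default weighting coefficient `alpha=0.5` are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two k-dimensional fusion feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pearson(a, b):
    """Pearson correlation coefficient between two fusion feature vectors."""
    k = len(a)
    ma, mb = sum(a) / k, sum(b) / k
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def target_similarity(a, b, alpha=0.5):
    """Weighted combination of the two measures; alpha is the preset weight."""
    return alpha * cosine_similarity(a, b) + (1 - alpha) * pearson(a, b)
```

All three functions return values in [-1, 1]; the result would then be compared against the corresponding preset similarity threshold.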
Further, when the fusion feature data of the images to be compared are similar: if the attribute categories in the attribute description data of the images to be compared and the attribute values of the corresponding attribute categories are identical, it is determined that the vehicles in the images to be compared are the same; if any attribute category or the attribute value of a corresponding category differs, it is determined that the vehicles in the images to be compared are different. An attribute category may be, for example, the type or color of the vehicle; an attribute value is the specific value of that category, such as truck or car for the type, or white or black for the color.
In this embodiment, the attribute values corresponding to each attribute category of the images to be compared are compared one by one; if they are all the same, it is determined that the vehicles in the images to be compared are the same, and otherwise that they are different. When the fusion feature data are similar, the re-identification result is thus further decided based on the attribute information of the vehicles, which makes the result more interpretable.
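The per-category attribute check can be sketched as follows; the function name and the dictionary representation of attribute description data (category → value) are illustrative assumptions:

```python
def same_vehicle_by_attributes(attrs_a, attrs_b):
    """Return True only if both images share the same attribute categories
    and, for every category, the same attribute value."""
    if set(attrs_a) != set(attrs_b):
        return False  # differing attribute categories -> different vehicles
    return all(attrs_a[cat] == attrs_b[cat] for cat in attrs_a)

# Example: same type and color -> same vehicle; differing type -> different.
match = same_vehicle_by_attributes({"type": "truck", "color": "white"},
                                   {"type": "truck", "color": "white"})
```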
In the above embodiment, the attribute feature data of the same images to be compared and the vehicle feature data are subjected to feature fusion, and after the fused feature data are obtained, whether the vehicles in the images to be compared are the same is determined based on the fused feature data and the attribute description data. Because the fusion characteristic data represents abstract characteristics and the attribute description data represents intuitive characteristics of the vehicle, the vehicle re-identification result is determined based on the two data, so that the re-identification result is more accurate.
On the basis of the above embodiments, in order to further improve the accuracy of the vehicle re-identification result, in one embodiment the attribute feature data and the vehicle feature data each include feature data at different feature granularities, and the feature granularities of the attribute feature data correspond to those of the vehicle feature data. Illustratively, the feature granularities include at least two of a whole-vehicle granularity, a component granularity and a unit granularity; for example, whole-vehicle-level, large-component-level and detail-component-level fine-grained features may be included. Correspondingly, a semantic segmentation network can process the images to be compared to obtain the semantic segmentation result shown in fig. 4; an image detection network can process the images to be compared to obtain the target detection result shown in fig. 5; and the attribute identification network can process the images to be compared to obtain the attribute feature data shown in fig. 6. Accordingly, performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain the corresponding fusion feature data includes: for each feature granularity, performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared at that granularity, obtaining fusion feature data at the corresponding feature granularity. That is, during feature fusion, this embodiment fuses at every feature granularity, so that fusion feature data at each feature granularity can be obtained.
In the above embodiment, the attribute identification network and the image identification network may process the images to be compared and output attribute feature data and vehicle feature data at each feature granularity, making both kinds of feature data richer. Meanwhile, in the subsequent feature fusion process, fusion is performed at each feature granularity, yielding fusion feature data at multiple granularities; the fusion feature data are therefore more comprehensive, which can further improve the accuracy of the subsequent vehicle re-identification result.
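Per-granularity fusion can be sketched as below; the granularity names and the dictionary layout (granularity → feature vector) are assumptions made for illustration, as is the equal default weighting:

```python
import numpy as np

# Assumed granularity names: whole vehicle / component / unit.
GRANULARITIES = ("whole_vehicle", "component", "unit")

def fuse_per_granularity(attr_feats, veh_feats, w_attr=0.5, w_veh=0.5):
    """Fuse attribute and vehicle features separately at each feature
    granularity present in both inputs, returning one fused vector per
    granularity."""
    return {
        g: w_attr * np.asarray(attr_feats[g], dtype=float)
           + w_veh * np.asarray(veh_feats[g], dtype=float)
        for g in GRANULARITIES
        if g in attr_feats and g in veh_feats
    }

# Example: only whole-vehicle features are available for both inputs.
fused = fuse_per_granularity({"whole_vehicle": [1.0, 1.0]},
                             {"whole_vehicle": [3.0, 5.0]})
```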
On the basis of the above embodiments, in order to improve the accuracy of data processing of the attribute identification network and the image identification network, further, as shown in fig. 7, the attribute identification network and the image identification network are obtained by adopting the following joint training method:
s701, acquiring a sample comparison image group.
The sample comparison image group comprises two sample comparison images; the sample comparison images may be two-dimensional images, each including a vehicle.
There may be many ways to obtain the sample comparison image group. One implementation may be to store sample comparison images in a sample database in advance; when there is a training requirement for the attribute identification network and the image identification network, the server may randomly obtain two sample comparison images from the sample database as the sample comparison image group. In another implementation, when the training requirement for the attribute identification network and the image identification network is detected, the server may output a sample input instruction to the user terminal so as to instruct the user to input a sample comparison image group through the terminal, whereby the server acquires the sample comparison image group.
S702, acquiring attribute identification tags and image identification tags of the sample comparison image group.
The attribute identification tag may be attribute feature data corresponding to each sample comparison image in the predetermined sample comparison image group. The image identification tag may be vehicle feature data corresponding to each sample comparison image in the predetermined sample comparison image group.
The attribute identification tag and the image identification tag of the sample comparison image group may be manually annotated based on manual experience and obtained together with the sample comparison image group. Alternatively, after the sample comparison image group is obtained, a sample-label input instruction is output to the user terminal so as to instruct the user to input the sample labels through the terminal, whereby the server obtains the attribute identification tag and the image identification tag corresponding to the sample comparison image group.
S703, inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; and inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result.
The first prediction result may be the attribute feature data corresponding to each sample comparison image in the group, obtained after the attribute identification network to be trained processes each sample comparison image. The second prediction result may be the vehicle feature data corresponding to each sample comparison image, obtained after the image identification network processes each sample comparison image. The second prediction result may also include the vehicle type corresponding to each sample comparison image, and the image identification tag correspondingly also includes a vehicle type tag.
Specifically, in this embodiment, the sample comparison image set is input to the attribute recognition network and the image recognition network, respectively, to obtain the first prediction result and the second prediction result, respectively.
And S704, adjusting network parameters of the attribute identification network and the image identification network according to the difference condition between the first prediction result and the corresponding attribute identification label and the difference condition between the second prediction result and the corresponding image identification label.
Specifically, in this embodiment, the difference between the vehicle type prediction result and the vehicle type tag may be taken as the first loss L1; the difference between the predicted coordinate values of corresponding points in the sample comparison image group and the corresponding coordinate values in the image identification tag may be taken as the second loss L2; and the differences between the predicted category, color and type of the vehicle in the first prediction result and the corresponding tags in the attribute identification tags may be taken as the third losses L3-category, L3-color and L3-type. A target loss is determined from the first loss, the second loss and the third losses, and the network parameters of the attribute identification network and the image identification network are adjusted based on the target loss.
Alternatively, the target loss may be determined by taking the sum of the first loss, the second loss and the third losses as the target loss; or the losses may be weighted and summed according to a predetermined weight corresponding to each loss to obtain the target loss. The weight corresponding to each loss may be determined based on manual experience or through extensive experiments, which is not limited here.
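The weighted target-loss combination can be sketched as follows; the function name, the loss names (L1, L2, and the per-attribute third losses) and the default weights of 1.0 are illustrative assumptions:

```python
def target_loss(l1, l2, l3_losses, w1=1.0, w2=1.0, w3=1.0):
    """Combine vehicle-type loss (l1), coordinate loss (l2) and the
    attribute losses (category/color/type, passed as a sequence) into
    a single scalar target loss via a weighted sum."""
    l3 = sum(l3_losses)
    return w1 * l1 + w2 * l2 + w3 * l3

# Example: with unit weights, the target loss is simply the plain sum.
loss = target_loss(1.0, 2.0, [0.5, 0.25, 0.25])
```

In a framework such as PyTorch the same weighted sum would be built from tensor-valued losses so that a single backward pass updates both networks jointly.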
In the embodiment, the attribute recognition network and the image recognition network are subjected to network training based on the combined training, so that the network precision of the attribute recognition network and the image recognition network is higher, the accuracy of attribute feature data and vehicle feature data is improved, and the accuracy of a vehicle re-recognition result is improved.
Further, in order to facilitate understanding of the present solution by those skilled in the art, as shown in fig. 8, a vehicle re-identification method is described in detail, including the following steps:
s801, two images to be compared are acquired.
The image to be compared comprises a vehicle.
S802, determining attribute description data and corresponding attribute feature data of each image to be compared based on the attribute identification network.
S803, extracting vehicles in each image to be compared based on the image recognition network, and obtaining the vehicle characteristic data of each image to be compared.
The attribute feature data and the vehicle feature data respectively comprise feature data with different feature granularities, and the attribute feature data corresponds to the different feature granularities of the vehicle feature data; the characteristic granularity comprises at least two of whole vehicle granularity, component granularity and unit granularity.
The attribute recognition network and the image recognition network are obtained by adopting the following mode of joint training: acquiring a sample comparison image group; the sample comparison image group comprises two sample comparison images; each sample comparison image comprises a vehicle; acquiring an attribute identification tag and an image identification tag of a sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image identification network according to the difference condition between the first prediction result and the corresponding attribute identification label and the difference condition between the second prediction result and the corresponding image identification label.
S804, aiming at any feature granularity, carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain fusion feature data under the corresponding feature granularity.
S805, judging whether the fusion characteristic data of the images to be compared are similar, if so, executing S806, and if not, executing S808.
S806, judging whether attribute values of the attribute category and the corresponding attribute category in the attribute description data of each image to be compared are the same, if so, executing S807, and if not, executing S808.
S807, determining that the vehicles in the images to be compared are the same.
S808, determining that the vehicles in the images to be compared are different.
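The decision flow of S805-S808 can be sketched as below. This is a simplified illustration: plain cosine similarity stands in for the configurable similarity measure of S805, and the threshold value, function names and attribute-dictionary representation are assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two fused feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def re_identify(fused_a, fused_b, attrs_a, attrs_b, threshold=0.8):
    # S805: are the fusion feature data of the two images similar?
    if cosine(fused_a, fused_b) <= threshold:
        return False               # S808: dissimilar features -> different vehicles
    # S806: every attribute category and its value must match
    return attrs_a == attrs_b      # S807 if True, otherwise S808
```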
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps may comprise multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their execution order is likewise not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or of their sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a vehicle re-identification device for realizing the vehicle re-identification method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the vehicle re-identification device provided below may be referred to the limitation of the vehicle re-identification method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 9, there is provided a vehicle re-recognition apparatus including: an image acquisition module 901, a first determination module 902, a second determination module 903, and a third determination module 904, wherein:
the image acquisition module 901 is configured to acquire two images to be compared.
The image to be compared comprises a vehicle.
A first determining module 902, configured to determine attribute description data and corresponding attribute feature data of each image to be compared based on the attribute identification network.
The second determining module 903 is configured to extract vehicles in each image to be compared based on the image recognition network, and obtain vehicle feature data of each image to be compared.
A third determining module 904, configured to determine whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data, and the vehicle feature data between the images to be compared.
In one embodiment, the third determination module 904 includes a feature fusion unit and a first determination unit. Wherein:
and the feature fusion unit is used for carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data.
And the first determining unit is used for determining whether vehicles in the images to be compared are the same or not according to the fusion characteristic data and the corresponding attribute description data of the images to be compared.
In one embodiment, the first determining unit is specifically configured to determine, if the fusion feature data of the to-be-compared images are similar, whether the vehicles in the to-be-compared images are the same according to the attribute description data of the to-be-compared images; if the fusion characteristic data of the images to be compared are dissimilar, vehicles in the images to be compared are different.
In one embodiment, the first determining unit is further configured to determine that the vehicles in the images to be compared are the same if the attribute type in the attribute description data of the images to be compared and the attribute value of the corresponding attribute type are the same; if the attribute types in the attribute description data of the images to be compared and the attribute values of the corresponding attribute types are different, determining that the vehicles in the images to be compared are different.
In one embodiment, the attribute feature data and the vehicle feature data respectively include feature data of different feature granularities, and the attribute feature data and the vehicle feature data correspond to the different feature granularities of the feature data; correspondingly, the feature fusion unit is used for carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared aiming at any feature granularity, so as to obtain fusion feature data under the corresponding feature granularity.
In one embodiment, when the feature fusion unit performs feature fusion on the attribute feature data and the vehicle feature data of the image to be compared, the feature granularity includes at least two of the whole-vehicle granularity, the component granularity and the unit granularity.
In one embodiment, as shown in fig. 10, the vehicle re-identification apparatus further includes a network training module 905 for acquiring a sample alignment image set; the sample comparison image group comprises two sample comparison images; each sample comparison image comprises a vehicle; acquiring an attribute identification tag and an image identification tag of a sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image identification network according to the difference condition between the first prediction result and the corresponding attribute identification label and the difference condition between the second prediction result and the corresponding image identification label.
The respective modules in the above-described vehicle re-identification apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device, which may be a terminal, is provided, and an internal structure thereof may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a vehicle re-identification method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one exemplary embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
In one embodiment, the processor when executing the computer program further performs the steps of: carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data; and determining whether vehicles in the images to be compared are the same or not according to the fusion characteristic data and the corresponding attribute description data of the images to be compared.
In one embodiment, the processor when executing the computer program further performs the steps of: if the fusion characteristic data of the images to be compared are similar, determining whether vehicles in the images to be compared are the same or not according to the attribute description data of the images to be compared; if the fusion characteristic data of the images to be compared are dissimilar, vehicles in the images to be compared are different.
In one embodiment, the processor when executing the computer program further performs the steps of: if the attribute type in the attribute description data of each image to be compared and the attribute value of the corresponding attribute type are the same, determining that the vehicles in each image to be compared are the same; if the attribute types in the attribute description data of the images to be compared and the attribute values of the corresponding attribute types are different, determining that the vehicles in the images to be compared are different.
In one embodiment, the processor when executing the computer program further performs the steps of: under the condition that the attribute feature data and the vehicle feature data respectively comprise feature data with different feature granularities, and the attribute feature data and the vehicle feature data with different feature granularities are corresponding, carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared according to any feature granularities to obtain fusion feature data with the corresponding feature granularities.
In one embodiment, the processor when executing the computer program further performs the steps of: the set feature granularity comprises at least two of a whole vehicle granularity, a part granularity and a unit granularity.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring a sample comparison image group; the sample comparison image group comprises two sample comparison images; each sample comparison image comprises a vehicle; acquiring an attribute identification tag and an image identification tag of a sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image identification network according to the difference condition between the first prediction result and the corresponding attribute identification label and the difference condition between the second prediction result and the corresponding image identification label.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
Acquiring two images to be compared; the images to be compared comprise vehicles;
determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
based on an image recognition network, extracting vehicles in each image to be compared to obtain vehicle characteristic data of each image to be compared;
and determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
In one embodiment, the computer program when executed by the processor further performs the steps of: carrying out feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data; and determining whether vehicles in the images to be compared are the same or not according to the fusion characteristic data and the corresponding attribute description data of the images to be compared.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the fusion feature data of the images to be compared are similar, determining whether the vehicles in the images to be compared are the same according to the attribute description data of each image to be compared; and if the fusion feature data of the images to be compared are dissimilar, determining that the vehicles in the images to be compared are different.
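The decision flow of the embodiments above can be sketched as follows. This is only an illustration: the two networks are reduced to fixed stubs, and the concatenation fusion, the feature values, and the similarity threshold of 0.9 are all assumptions, not the trained models of this application.

```python
def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = sum(x * x for x in u) ** 0.5
    norm_v = sum(x * x for x in v) ** 0.5
    return dot / (norm_u * norm_v)

def attribute_net(image):
    # Stub for the attribute identification network: returns
    # (attribute description data, attribute feature data) for one image.
    return {"color": "red", "type": "sedan"}, [0.9, 0.1, 0.3]

def image_net(image):
    # Stub for the image recognition network: returns the extracted
    # vehicle feature data for one image.
    return [0.8, 0.2, 0.4]

def same_vehicle(img_a, img_b, threshold=0.9):
    desc_a, attr_a = attribute_net(img_a)
    desc_b, attr_b = attribute_net(img_b)
    # Fuse attribute features with vehicle features per image
    # (simple concatenation, chosen here for illustration).
    fused_a = attr_a + image_net(img_a)
    fused_b = attr_b + image_net(img_b)
    if cosine(fused_a, fused_b) < threshold:
        return False        # dissimilar fusion features: different vehicles
    return desc_a == desc_b  # similar: fall back to attribute descriptions
```

With identical stub outputs for both images, the fused features are maximally similar and the attribute descriptions match, so the comparison returns True.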
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the attribute categories in the attribute description data of the images to be compared and the attribute values of the corresponding categories are the same, determining that the vehicles in the images to be compared are the same; and if the attribute categories in the attribute description data of the images to be compared or the attribute values of the corresponding categories differ, determining that the vehicles in the images to be compared are different.
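The attribute-description comparison above amounts to checking that both images report the same attribute categories with the same value per category. A minimal sketch, where representing attribute description data as a dictionary is an assumption for illustration:

```python
def attributes_match(desc_a, desc_b):
    # Same attribute categories, and the same attribute value for
    # each corresponding category, imply the same vehicle.
    return set(desc_a) == set(desc_b) and all(
        desc_a[k] == desc_b[k] for k in desc_a
    )
```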
In one embodiment, the computer program, when executed by the processor, further performs the steps of: in a case where the attribute feature data and the vehicle feature data each include feature data of different feature granularities and correspond to each other at each feature granularity, performing, for any feature granularity, feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain fusion feature data at the corresponding feature granularity.
In one embodiment, the set feature granularities include at least two of a whole-vehicle granularity, a component granularity, and a unit granularity.
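One possible reading of the per-granularity fusion in the two embodiments above, sketched with the three granularity names mentioned. The dictionary layout and the element-wise averaging operator are assumptions; the application does not fix the fusion operator.

```python
GRANULARITIES = ("whole_vehicle", "component", "unit")

def fuse_by_granularity(attr_feats, veh_feats):
    # attr_feats / veh_feats: dict mapping granularity -> feature vector.
    # Fuse level by level, only where both sides provide that granularity.
    fused = {}
    for g in GRANULARITIES:
        if g in attr_feats and g in veh_feats:
            # element-wise average as one illustrative fusion choice
            fused[g] = [(a + v) / 2 for a, v in zip(attr_feats[g], veh_feats[g])]
    return fused
```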
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a sample comparison image group, where the sample comparison image group includes two sample comparison images and each sample comparison image contains a vehicle; acquiring an attribute identification label and an image identification label of the sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image recognition network according to the difference between the first prediction result and the corresponding attribute identification label and the difference between the second prediction result and the corresponding image identification label.
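The joint training step above can be sketched at a toy scale. Reducing each network to a single scalar weight and using squared-error losses are illustrative assumptions; the point is only that one update adjusts both sets of parameters from their respective label differences.

```python
def train_step(sample, attr_label, image_label, params, lr=0.1):
    # params: (attribute-branch weight, re-id-branch weight), each a scalar
    # standing in for a full network's parameters.
    w_attr, w_img = params
    pred_attr = w_attr * sample   # first prediction (attribute branch)
    pred_img = w_img * sample     # second prediction (re-id branch)
    # Gradients of squared-error losses against each label.
    grad_attr = 2.0 * (pred_attr - attr_label) * sample
    grad_img = 2.0 * (pred_img - image_label) * sample
    # Both branches are adjusted in the same step, as in joint training.
    return (w_attr - lr * grad_attr, w_img - lr * grad_img)
```

Starting both weights at zero with labels of 1.0, a single step moves each weight toward its label.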
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring two images to be compared, where each image to be compared includes a vehicle;
determining attribute description data and corresponding attribute feature data of each image to be compared based on an attribute identification network;
extracting the vehicle in each image to be compared based on an image recognition network to obtain vehicle feature data of each image to be compared; and
determining whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data, and the vehicle feature data of the images to be compared.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data; and determining whether the vehicles in the images to be compared are the same according to the fusion feature data and the corresponding attribute description data of each image to be compared.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the fusion feature data of the images to be compared are similar, determining whether the vehicles in the images to be compared are the same according to the attribute description data of each image to be compared; and if the fusion feature data of the images to be compared are dissimilar, determining that the vehicles in the images to be compared are different.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the attribute categories in the attribute description data of the images to be compared and the attribute values of the corresponding categories are the same, determining that the vehicles in the images to be compared are the same; and if the attribute categories in the attribute description data of the images to be compared or the attribute values of the corresponding categories differ, determining that the vehicles in the images to be compared are different.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: in a case where the attribute feature data and the vehicle feature data each include feature data of different feature granularities and correspond to each other at each feature granularity, performing, for any feature granularity, feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain fusion feature data at the corresponding feature granularity.
In one embodiment, the set feature granularities include at least two of a whole-vehicle granularity, a component granularity, and a unit granularity.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a sample comparison image group, where the sample comparison image group includes two sample comparison images and each sample comparison image contains a vehicle; acquiring an attribute identification label and an image identification label of the sample comparison image group; inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result; inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result; and adjusting network parameters of the attribute identification network and the image recognition network according to the difference between the first prediction result and the corresponding attribute identification label and the difference between the second prediction result and the corresponding image identification label.
It should be noted that the information (including but not limited to the image information to be compared) and the data (including but not limited to data used for analysis, stored data, and displayed data) involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant regulations.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that involves no contradiction should be considered within the scope of this specification.
The above embodiments represent only a few implementations of the present application, and although they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and these fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be subject to the appended claims.
Claims (11)
1. A vehicle re-identification method, the method comprising:
acquiring two images to be compared; wherein the image to be compared comprises a vehicle;
determining attribute description data and corresponding attribute feature data of each image to be compared based on an attribute identification network;
extracting the vehicle in each image to be compared based on an image recognition network to obtain vehicle feature data of each image to be compared; and
determining whether the vehicles in the images to be compared are the same according to the attribute feature data, the attribute description data, and the vehicle feature data of the images to be compared.
2. The method of claim 1, wherein determining whether vehicles in each of the images to be compared are identical based on attribute feature data, attribute description data, and vehicle feature data between each of the images to be compared comprises:
performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data; and
determining whether the vehicles in the images to be compared are the same according to the fusion feature data and the corresponding attribute description data of each image to be compared.
3. The method according to claim 2, wherein determining whether vehicles in the images to be compared are identical according to the fusion feature data and the corresponding attribute description data of the images to be compared comprises:
if the fusion feature data of the images to be compared are similar, determining whether the vehicles in the images to be compared are the same according to the attribute description data of each image to be compared; and
if the fusion feature data of the images to be compared are dissimilar, determining that the vehicles in the images to be compared are different.
4. A method according to claim 3, wherein said determining whether vehicles in each of the images to be compared are identical based on the attribute description data of each of the images to be compared comprises:
if the attribute categories in the attribute description data of the images to be compared and the attribute values of the corresponding categories are the same, determining that the vehicles in the images to be compared are the same; and
if the attribute categories in the attribute description data of the images to be compared or the attribute values of the corresponding categories differ, determining that the vehicles in the images to be compared are different.
5. The method according to any one of claims 2-4, wherein the attribute feature data and the vehicle feature data each comprise feature data of different feature granularities, and the attribute feature data and the vehicle feature data correspond to each other at each feature granularity;
correspondingly, the performing feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain corresponding fusion feature data comprises:
performing, for any feature granularity, feature fusion on the attribute feature data and the vehicle feature data of the same image to be compared to obtain fusion feature data at the corresponding feature granularity.
6. The method of claim 5, wherein the feature granularity comprises at least two of a whole vehicle granularity, a component granularity, and a unit granularity.
7. The method according to any of claims 1-4, wherein the attribute identification network and the image identification network are co-trained in the following way:
acquiring a sample comparison image group; the sample comparison image group comprises two sample comparison images; each sample comparison image comprises a vehicle;
acquiring an attribute identification label and an image identification label of the sample comparison image group;
inputting the sample comparison image group into an attribute identification network to be trained to obtain a first prediction result;
inputting the sample comparison image group into an image recognition network to be trained to obtain a second prediction result;
and adjusting network parameters of the attribute identification network and the image recognition network according to the difference between the first prediction result and the corresponding attribute identification label and the difference between the second prediction result and the corresponding image identification label.
8. A vehicle re-identification apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring two images to be compared; wherein the image to be compared comprises a vehicle;
the first determining module is used for determining attribute description data and corresponding attribute characteristic data of each image to be compared based on an attribute identification network;
the second determining module is used for extracting vehicles in the images to be compared based on the image recognition network to obtain vehicle characteristic data of the images to be compared;
and the third determining module is used for determining whether vehicles in the images to be compared are the same according to the attribute characteristic data, the attribute description data and the vehicle characteristic data between the images to be compared.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410018287.4A CN117726994A (en) | 2024-01-05 | 2024-01-05 | Vehicle re-identification method, apparatus, device, storage medium, and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117726994A true CN117726994A (en) | 2024-03-19 |
Family
ID=90208912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410018287.4A Pending CN117726994A (en) | 2024-01-05 | 2024-01-05 | Vehicle re-identification method, apparatus, device, storage medium, and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117726994A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||