
CN113987232A - Multi-dimensional feature selection method based on deep learning - Google Patents

Multi-dimensional feature selection method based on deep learning Download PDF

Info

Publication number
CN113987232A
CN113987232A (application CN202111198581.0A)
Authority
CN
China
Prior art keywords
network
dimension
feature
model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111198581.0A
Other languages
Chinese (zh)
Inventor
赖韩江
胡宇杰
潘炎
印鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202111198581.0A priority Critical patent/CN113987232A/en
Publication of CN113987232A publication Critical patent/CN113987232A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/53: Querying
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval using metadata automatically derived from the content
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep learning-based multi-dimensional feature selection method, which learns general features of an image through an Inception deep learning model and a feature-dimension random truncation model, stores the general features of the image in a database, selects the feature dimension of a query image through a well-designed feature dimension selection model, and queries the database at that dimension, thereby reducing the time required by a query.

Description

Multi-dimensional feature selection method based on deep learning
Technical Field
The invention relates to the field of computer application technology and computer vision, in particular to a multi-dimensional feature selection method based on deep learning.
Background
In recent years, retrieval methods based on deep networks have developed remarkably. Most research effort has been devoted to learning accurate image retrieval models. However, for the huge volume of image data on the internet, accuracy alone cannot meet practical requirements, so faster image retrieval techniques have attracted great interest from researchers.
Most existing metric learning methods in current retrieval techniques convert all input samples into fixed-length feature vectors. These methods ignore simple examples that could be represented with shorter feature vectors, so retrieval is relatively inefficient.
To address the above problems, a natural idea is to reduce search time by dynamically selecting the feature dimension. Selecting features first requires a model capable of extracting general features, specifically an Inception network. Tests show that the general features extracted by the Inception network lose little precision compared with independently trained features. A feature dimension selection module is then designed, consisting of an Actor Network, a Critic Network, and a Reward Function.
Disclosure of Invention
The invention provides a multi-dimensional feature selection method based on deep learning, which reduces the time required for a query.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a multi-dimensional feature selection method based on deep learning comprises the following steps:
S1: establishing a deep learning network model G for extracting general features of an image;
S2: adding a feature-dimension random truncation model after the network model G;
S3: training on the training set to obtain general features of the training set and the test set;
S4: after the general features of the images are obtained, establishing a feature dimension selection model;
S5: training and testing the feature dimension selection model;
S6: establishing a background-service process that provides a retrieval entry point and returns retrieval results.
Further, the specific process of step S1 is:
S11: establishing the feature extraction layer of the G network, representing each preprocessed image as a low-dimensional real-valued vector, and initializing the Inception network from a model pre-trained on a large-scale labeled image dataset;
S12: extracting a set of feature vectors X of set length for the image by training the Inception network.
Further, the dimension truncation module of step S2 is designed as follows:
S21: mapping the feature vector X of set length into a K-dimensional real-valued vector with a fully connected layer, where K is the maximum allowable feature dimension;
S22: after S21, each vector is encoded as a real-valued vector, and a dimension truncation module of the G network is established; the module randomly selects a dimension between the minimum (set to 16) and the maximum (set to 128), and sequential truncation yields a feature of that random length; the same feature dimension is used within a mini-batch, so the network is trained with features of different dimensions each time and learns a general feature of maximum length K; when a feature of random length is needed, the general feature is simply truncated sequentially (a sketch of this step follows).
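A minimal PyTorch sketch of the projection and random truncation of S21-S22 may look as follows; the class and function names, and the 2048-dimensional backbone output, are illustrative assumptions rather than details from the patent.

```python
import random
import torch
import torch.nn as nn

MIN_DIM, MAX_DIM = 16, 128  # minimum / maximum feature dimension (K = MAX_DIM)

class GeneralFeatureHead(nn.Module):
    """Fully connected layer of S21: maps backbone features to K dimensions."""
    def __init__(self, in_dim: int = 2048, k: int = MAX_DIM):
        super().__init__()
        self.fc = nn.Linear(in_dim, k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)

def truncate_batch(features: torch.Tensor) -> torch.Tensor:
    """S22: draw one dimension d in [MIN_DIM, MAX_DIM] for the whole mini-batch
    and keep only the first d components (sequential truncation)."""
    d = random.randint(MIN_DIM, MAX_DIM)  # same dimension within a mini-batch
    return features[:, :d]

# Usage: feats = GeneralFeatureHead()(backbone_out); short = truncate_batch(feats)
```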
Further, the specific process of step S3 is:
s31: dividing the data set into training data and testing data;
S32: the overall model is trained; the G network is trained as follows: for each mini-batch of image samples, image features of maximum dimension K are extracted by the G network; the feature-dimension random truncation model then randomizes the features by drawing an integer between the minimum dimension and the maximum dimension K and sequentially truncating the maximum-dimension features to a feature matrix of that dimension; the parameters of the G network are trained by minimizing a loss function;
S33: the model is tested as follows: a model is trained at each of several fixed dimensions to obtain several fixed-dimension feature extraction models, and for each fixed-dimension model: first, the training data are passed through the G network and the generated features are stored in a database; then the test data are passed through as a query set, and R@K is computed from the distances between each query feature and the database features. Specifically: compute the distances between all image features, sort them from smallest to largest, and judge whether the retrieved images belong to the same class as the query; if any of the first K images shares the query's class the score is 1, otherwise 0; averaging all results over the test set gives the final R@K.
The first k dimensions of the general model are then truncated and compared against the corresponding fixed-dimension network (a sketch of the R@K computation follows).
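The R@K evaluation of S33 can be transcribed roughly as below; the function signature and the use of Euclidean distance are assumptions for illustration.

```python
import torch

def recall_at_k(queries: torch.Tensor, db: torch.Tensor,
                query_labels: torch.Tensor, db_labels: torch.Tensor,
                k: int) -> float:
    """R@K: a query scores 1 if any of its K nearest database features
    shares its class, else 0; the scores are averaged over the query set."""
    dists = torch.cdist(queries, db)             # pairwise distances
    knn = dists.topk(k, largest=False).indices   # K smallest distances per query
    hits = (db_labels[knn] == query_labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```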
Further, the specific process of step S4 is:
S41: establishing an Actor network consisting of three fully connected layers; the network takes the general feature of an image as the state input and outputs a predicted suitable dimension as the action output;
S42: establishing a Critic network consisting of several fully connected layers; the network takes the general feature of an image as the state and the Actor network's output as the action, and outputs a score for the Actor network, which is used to optimize the Actor network;
S43: establishing a Reward function that, for each dimension returned by the Actor network, combines a length penalty on that dimension with a precision penalty determined by the Actor's output under the actual evaluation criterion (R@K); this score serves as the supervision signal for the Critic network (sketches of these networks follow).
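The Actor and Critic of S41-S42 might be sketched as follows; the hidden widths and the sigmoid scaling of the action into [16, 128] are assumptions, since the patent only fixes the number of Actor layers.

```python
import torch
import torch.nn as nn

MIN_DIM, MAX_DIM = 16, 128

class Actor(nn.Module):
    """Three fully connected layers (S41): general feature (state) -> dimension (action)."""
    def __init__(self, feat_dim: int = MAX_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),      # action in (0, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return MIN_DIM + (MAX_DIM - MIN_DIM) * self.net(state)  # scale to [16, 128]

class Critic(nn.Module):
    """S42: scores a (state, action) pair, i.e. a (general feature, dimension) pair."""
    def __init__(self, feat_dim: int = MAX_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))
```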
Further, the specific process of step S5 is:
s51: dividing the data set into training data and testing data;
S52: the overall model is trained; the feature dimension selection model is trained as follows: general image features are extracted by the G network, and the Actor and Critic networks are updated alternately with a slow-update scheme (a sketch of this alternating update follows this list). First, the Critic network is fixed, the selected dimension is obtained from the Actor network, and the Critic's score is used to optimize the Actor network. Second, the Actor network is fixed, and the Critic network is trained under supervision by comparing its score for the Actor's output against the score of the Reward function. The two networks are updated at different learning rates and frequencies;
S53: during testing, the Actor network produces the selected dimension d, and the first d dimensions of the query feature are compared with the first d dimensions of the training-set general features in the database to obtain a ranking. The ranking is evaluated with the metrics R@1, R@2, R@4, and so on, computed as follows: compute the distances between all image features, sort them from smallest to largest, and judge whether the retrieved images belong to the same class as the query; if any of the first K images shares the query's class the score is 1, otherwise 0; averaging all results over the test set gives the final R@K.
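One way to realize the alternating update of S52 is sketched below, assuming the Actor/Critic modules above and a reward_fn implementing the Reward function of S43; the MSE target for the Critic and the actor_every update ratio are assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(actor, critic, actor_opt, critic_opt, feats,
               reward_fn, step: int, actor_every: int = 5):
    # Step 1 (S52): fix the Actor; regress the Critic score toward the Reward score.
    with torch.no_grad():
        dims = actor(feats)                  # selected dimensions (detached)
        rewards = reward_fn(feats, dims)     # Reward-function scores
    critic_loss = F.mse_loss(critic(feats, dims), rewards)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Step 2: fix the Critic; update the Actor less often ("slow update")
    # to maximize the Critic's score of its chosen dimension.
    if step % actor_every == 0:
        actor_loss = -critic(feats, actor(feats)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```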
Further, the specific process of step S6 is:
S61: storing the trained Inception model and the feature dimension selection model;
s62: establishing a background service process, and reserving an interface for image input;
S63: an image is submitted through the interface created in S62; the background service process of S62 then preprocesses the image into the input format required by the Inception model of S61. Next, the Inception model stored in S61 is loaded and the processed image is fed into it to obtain the image's general features. A suitable dimension d is then obtained from the feature dimension selection model of S61, the feature is truncated sequentially, distances are computed against the first d dimensions of the general image features stored in the database, the results are sorted by distance, and the first k images, i.e. the k closest images, are returned as the retrieval result.
Further, in step S12, the feature extraction process is as follows: the Inception model is pre-trained on the ImageNet image dataset and then fine-tuned. Each image passed through the pre-trained Inception model yields a feature vector of length k, where k is the maximum feature length of the image (a sketch follows).
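A hedged sketch of this G network: torchvision's inception_v3 stands in here for the patent's Inception model, with the classifier replaced by the K-dimensional projection head of S21.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3, Inception_V3_Weights

K = 128  # maximum feature length

class GNetwork(nn.Module):
    """ImageNet-pretrained Inception backbone + K-dim projection, to be fine-tuned."""
    def __init__(self, k: int = K):
        super().__init__()
        backbone = inception_v3(weights=Inception_V3_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()   # drop the 1000-way classifier
        backbone.aux_logits = False   # disable the auxiliary classifier
        backbone.AuxLogits = None
        self.backbone = backbone
        self.head = nn.Linear(2048, k)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, 299, 299), the input size inception_v3 expects
        return self.head(self.backbone(images))   # (N, K) general features
```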
Further, in step S53, the Reward function combines length and precision penalties so that the selected length is as short as possible with little loss of precision. The evaluation criterion used for the precision loss is R@K, and the length loss is determined by the length output by the Actor network. The Reward function is implemented as:
Reward = R_c × R_a = (recall_i / recall_all) × c × (2 - c),
where c = 1 - d_i/d_all represents the length penalty, recall_i denotes the R@K of the selected length, recall_all denotes the R@K of the full length, and their ratio represents the precision loss; SGD is adopted for optimization during training.
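A direct transcription of this formula, under the stated reading that recall_i/recall_all is the precision term and c(2 - c) the length term:

```python
def reward(recall_i: float, recall_all: float, d_i: int, d_all: int) -> float:
    """Reward = R_c * R_a = (recall_i / recall_all) * c * (2 - c), c = 1 - d_i/d_all.
    c * (2 - c) grows as the selected dimension d_i shrinks, so shorter features
    are rewarded, while recall_i / recall_all penalizes any loss of R@K."""
    c = 1.0 - d_i / d_all
    return (recall_i / recall_all) * c * (2.0 - c)
```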
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention can learn the general features of the image through the inclusion deep learning model and the feature dimension random interception model, store the general features of the database image, select the feature dimension of the query image through the well-designed feature dimension selection model, and query in the database according to the dimension, so that the time required by query is reduced.
Drawings
FIG. 1 is a complete diagram of the algorithmic model of the present invention;
FIG. 2 is a schematic diagram of a feature selection module according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in FIGS. 1-2, a multi-dimensional feature selection method based on deep learning comprises the following steps:
S1: establishing a deep learning network model G for extracting general features of an image;
S2: adding a feature-dimension random truncation model after the network model G;
S3: training on the training set to obtain general features of the training set and the test set;
S4: after the general features of the images are obtained, establishing a feature dimension selection model;
S5: training and testing the feature dimension selection model;
S6: establishing a background-service process that provides a retrieval entry point and returns retrieval results.
The specific process of step S1 is:
S11: establishing the feature extraction layer of the G network, representing each preprocessed image as a low-dimensional real-valued vector, and initializing the Inception network from a model pre-trained on a large-scale labeled image dataset;
S12: extracting a set of feature vectors X of set length for the image by training the Inception network.
The dimension truncation module of step S2 is designed as follows:
S21: mapping the feature vector X of set length into a K-dimensional real-valued vector with a fully connected layer, where K is the maximum allowable feature dimension;
S22: after S21, each vector is encoded as a real-valued vector, and a dimension truncation module of the G network is established; the module randomly selects a dimension between the minimum (set to 16) and the maximum (set to 128), and sequential truncation yields a feature of that random length; the same feature dimension is used within a mini-batch, so the network is trained with features of different dimensions each time and learns a general feature of maximum length K; when a feature of random length is needed, the general feature is simply truncated sequentially.
The specific process of step S3 is:
s31: dividing the data set into training data and testing data;
S32: the overall model is trained; the G network is trained as follows: for each mini-batch of image samples, image features of maximum dimension K are extracted by the G network; the feature-dimension random truncation model then randomizes the features by drawing an integer between the minimum dimension and the maximum dimension K and sequentially truncating the maximum-dimension features to a feature matrix of that dimension; the parameters of the G network are trained by minimizing a loss function;
S33: the model is tested as follows: a model is trained at each of several fixed dimensions to obtain several fixed-dimension feature extraction models, and for each fixed-dimension model: first, the training data are passed through the G network and the generated features are stored in a database; then the test data are passed through as a query set, and R@K is computed from the distances between each query feature and the database features. Specifically: compute the distances between all image features, sort them from smallest to largest, and judge whether the retrieved images belong to the same class as the query; if any of the first K images shares the query's class the score is 1, otherwise 0; averaging all results over the test set gives the final R@K.
The first k dimensions of the general model are then truncated and compared against the corresponding fixed-dimension network.
The specific process of step S4 is:
S41: establishing an Actor network consisting of three fully connected layers; the network takes the general feature of an image as the state input and outputs a predicted suitable dimension as the action output;
S42: establishing a Critic network consisting of several fully connected layers; the network takes the general feature of an image as the state and the Actor network's output as the action, and outputs a score for the Actor network, which is used to optimize the Actor network;
S43: establishing a Reward function that, for each dimension returned by the Actor network, combines a length penalty on that dimension with a precision penalty determined by the Actor's output under the actual evaluation criterion (R@K); this score serves as the supervision signal for the Critic network.
The specific process of step S5 is:
s51: dividing the data set into training data and testing data;
S52: the overall model is trained; the feature dimension selection model is trained as follows: general image features are extracted by the G network, and the Actor and Critic networks are updated alternately with a slow-update scheme. First, the Critic network is fixed, the selected dimension is obtained from the Actor network, and the Critic's score is used to optimize the Actor network. Second, the Actor network is fixed, and the Critic network is trained under supervision by comparing its score for the Actor's output against the score of the Reward function. The two networks are updated at different learning rates and frequencies;
S53: during testing, the Actor network produces the selected dimension d, and the first d dimensions of the query feature are compared with the first d dimensions of the training-set general features in the database to obtain a ranking. The ranking is evaluated with the metrics R@1, R@2, R@4, and so on, computed as follows: compute the distances between all image features, sort them from smallest to largest, and judge whether the retrieved images belong to the same class as the query; if any of the first K images shares the query's class the score is 1, otherwise 0; averaging all results over the test set gives the final R@K.
The specific process of step S6 is:
S61: storing the trained Inception model and the feature dimension selection model;
s62: establishing a background service process, and reserving an interface for image input;
S63: an image is submitted through the interface created in S62; the background service process of S62 then preprocesses the image into the input format required by the Inception model of S61. Next, the Inception model stored in S61 is loaded and the processed image is fed into it to obtain the image's general features. A suitable dimension d is then obtained from the feature dimension selection model of S61, the feature is truncated sequentially, distances are computed against the first d dimensions of the general image features stored in the database, the results are sorted by distance, and the first k images, i.e. the k closest images, are returned as the retrieval result (a query-time sketch follows).
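The query-time flow of S63 might look like the following sketch; g_net, actor, and the in-memory db_feats tensor are assumptions standing in for the stored models and the database.

```python
import torch

@torch.no_grad()
def retrieve(image: torch.Tensor, g_net, actor,
             db_feats: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Return the indices of the k database images closest to the query."""
    feat = g_net(image.unsqueeze(0))                 # (1, K) general feature
    d = int(actor(feat).round().clamp(1, feat.shape[1]).item())  # Actor picks d
    dists = torch.cdist(feat[:, :d], db_feats[:, :d]).squeeze(0)
    return dists.argsort()[:k]                       # k closest images
```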
In step S12, the feature extraction process is as follows: the Inception model is pre-trained on the ImageNet image dataset and then fine-tuned. Each image passed through the pre-trained Inception model yields a feature vector of length k, where k is the maximum feature length of the image.
In step S53, the Reward function combines length and precision penalties so that the selected length is as short as possible with little loss of precision. The evaluation criterion used for the precision loss is R@K, and the length loss is determined by the length output by the Actor network. The Reward function is implemented as:
Reward = R_c × R_a = (recall_i / recall_all) × c × (2 - c),
where c = 1 - d_i/d_all represents the length penalty, recall_i denotes the R@K of the selected length, recall_all denotes the R@K of the full length, and their ratio represents the precision loss; SGD is adopted for optimization during training.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A multi-dimensional feature selection method based on deep learning is characterized by comprising the following steps:
S1: establishing a deep learning network model G for extracting general features of an image;
S2: adding a feature-dimension random truncation model after the network model G;
S3: training on the training set to obtain general features of the training set and the test set;
S4: after the general features of the images are obtained, establishing a feature dimension selection model;
S5: training and testing the feature dimension selection model;
S6: establishing a background-service process that provides a retrieval entry point and returns retrieval results.
2. The method for selecting multidimensional features based on deep learning of claim 1, wherein the specific process of the step S1 is as follows:
S11: establishing the feature extraction layer of the G network, representing each preprocessed image as a low-dimensional real-valued vector, and initializing the Inception network from a model pre-trained on a large-scale labeled image dataset;
S12: extracting a set of feature vectors X of set length for the image by training the Inception network.
3. The method for selecting multidimensional features based on deep learning of claim 2, wherein in step S2, the feature-dimension random truncation model is designed as follows:
S21: mapping the feature vector X of set length into a K-dimensional real-valued vector with a fully connected layer, where K is the maximum allowable feature dimension;
S22: after S21, each vector is encoded as a real-valued vector, and a dimension truncation module of the G network is established; the module randomly selects a dimension between the minimum and maximum dimensions, and sequential truncation yields a feature of random length; the same feature dimension is used within a mini-batch, so the network is trained with features of different dimensions each time and learns a general feature of maximum length; when a feature of random length is needed, the general feature is simply truncated sequentially.
4. The method for selecting multidimensional features based on deep learning of claim 3, wherein the specific process of the step S3 is as follows:
s31: dividing the data set into training data and testing data;
S32: the overall model is trained; the G network is trained by extracting image features with the G network, performing random feature truncation with the feature-dimension random truncation model, and training the parameters of the G network by minimizing a loss function;
S33: passing the training-set data through the feature extraction model G to obtain general features of maximum length and storing them in a database; for the test-set data, obtaining the full-length general features and then evaluating their effectiveness.
5. The method for selecting multidimensional features based on deep learning according to claim 4, wherein the specific process of the step S4 is as follows:
S41: establishing an Actor network consisting of several fully connected layers; the network takes the general feature of an image as the state input and outputs a predicted suitable dimension;
S42: establishing a Critic network consisting of several fully connected layers; the network takes the general feature of an image as the state and the dimension output by the Actor network as the action, and outputs a score for the Actor network in order to optimize it;
S43: establishing a Reward function that, for each dimension returned by the Actor network, combines the length penalty on that dimension with the precision penalty under the actual evaluation criterion, and serves as the supervision signal for the Critic network.
6. The method for selecting multidimensional features based on deep learning of claim 5, wherein the specific process of the step S5 is as follows:
s51: dividing the data set into training data and testing data;
S52: training the overall model, wherein the feature dimension selection model is trained as follows: general image features are extracted by the G network, and each model update comprises two steps: first, the Critic network is fixed, the selected dimension is obtained from the Actor network, and the Critic's score is used to optimize the Actor network; second, the Actor network is fixed, and the Critic network is trained under supervision by comparing its score for the Actor's output with the score of the Reward function;
S53: during testing, the Actor network produces the selected dimension d; the first d dimensions of the query feature are compared with the first d dimensions of the training-set general features in the database to obtain a ranking, which is evaluated with the evaluation criterion.
7. The method for selecting multidimensional features based on deep learning of claim 6, wherein the specific process of the step S6 is as follows:
S61: storing the trained Inception model and the feature dimension selection model;
s62: establishing a background service process, and reserving an interface for image input;
S63: submitting an image through the interface created in S62, then preprocessing the image in the background service process of S62 into the input format required by the Inception model of S61; then loading the Inception model stored in S61 and feeding the processed image into it to obtain the image's general features; then obtaining a suitable dimension d from the feature dimension selection model of S61, truncating the feature sequentially, computing distances against the first d dimensions of the general image features stored in the database, sorting by distance, and returning the first k images, i.e. the retrieval result of the k closest images.
8. The method for selecting multidimensional features based on deep learning of claim 7, wherein in step S12, the feature extraction process is as follows: the Inception model is pre-trained on the ImageNet image dataset and then fine-tuned; each image passed through the pre-trained Inception model yields a feature vector of length k, where k is the maximum feature length of the image.
9. The method for selecting multidimensional features based on deep learning of claim 8, wherein in step S22, the dimension truncation module sets the minimum dimension to 16 and the maximum dimension to 128.
10. The method for selecting multidimensional features based on deep learning of claim 9, wherein in step S53, the Reward function combines length and precision penalties so that the selected length is as short as possible with little loss of precision; the evaluation criterion used for the precision loss is R@K, the length loss is determined by the length output by the Actor network, and SGD is used for optimization during training.
CN202111198581.0A 2021-10-14 2021-10-14 Multi-dimensional feature selection method based on deep learning Pending CN113987232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111198581.0A CN113987232A (en) 2021-10-14 2021-10-14 Multi-dimensional feature selection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111198581.0A CN113987232A (en) 2021-10-14 2021-10-14 Multi-dimensional feature selection method based on deep learning

Publications (1)

Publication Number Publication Date
CN113987232A (en) 2022-01-28

Family

ID=79738655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111198581.0A Pending CN113987232A (en) 2021-10-14 2021-10-14 Multi-dimensional feature selection method based on deep learning

Country Status (1)

Country Link
CN (1) CN113987232A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230394104A1 (en) * 2021-12-06 2023-12-07 AO Kaspersky Lab System and method of a cloud server for providing content to a user
US12093334B2 (en) * 2021-12-06 2024-09-17 AO Kaspersky Lab System and method of a cloud server for providing content to a user

Similar Documents

Publication Publication Date Title
US8195674B1 (en) Large scale machine learning systems and methods
KR100545477B1 (en) Image retrieval using distance measure
CN112949740B (en) Small sample image classification method based on multilevel measurement
CN109902190B (en) Image retrieval model optimization method, retrieval method, device, system and medium
CN112395487A (en) Information recommendation method and device, computer-readable storage medium and electronic equipment
CN115495555A (en) Document retrieval method and system based on deep learning
CN116049450A (en) Multi-mode-supported image-text retrieval method and device based on distance clustering
CN113806580B (en) Cross-modal hash retrieval method based on hierarchical semantic structure
CN110851584A (en) Accurate recommendation system and method for legal provision
CN115795018A (en) Multi-strategy intelligent searching question-answering method and system for power grid field
CN112148831A (en) Image-text mixed retrieval method and device, storage medium and computer equipment
CN101685456A (en) Search method, system and device
CN113656700A (en) Hash retrieval method based on multi-similarity consistent matrix decomposition
CN113987232A (en) Multi-dimensional feature selection method based on deep learning
CN114239730B (en) Cross-modal retrieval method based on neighbor ordering relation
CN114579794A (en) Multi-scale fusion landmark image retrieval method and system based on feature consistency suggestion
CN118036555B (en) Low-sample font generation method based on skeleton transfer and structure contrast learning
CN112330387B (en) Virtual broker applied to house watching software
JP2020086548A (en) Processor, processing method and processing program
CN110717068B (en) Video retrieval method based on deep learning
JP5061147B2 (en) Image search device
CN115757464A (en) Intelligent materialized view query method based on deep reinforcement learning
CN112199461B (en) Document retrieval method, device, medium and equipment based on block index structure
CN114647754A (en) Hand-drawn image real-time retrieval method fusing image label information
CN108470181B (en) Web service replacement method based on weighted sequence relation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination