CN110222685A - Two-stage clothing keypoint localization method and system - Google Patents
Two-stage clothing keypoint localization method and system
- Publication number
- CN110222685A (application CN201910411194.7A)
- Authority
- CN
- China
- Prior art keywords
- clothing
- keypoint
- clothing keypoint
- coordinate
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a two-stage clothing keypoint localization method and system, belonging to the technical field of image processing. The method comprises: localizing the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints; extracting a clothing region of interest according to the coarse localization information of the clothing keypoints; and localizing the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints. The multi-task deep neural network is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task. The method solves the low localization accuracy of existing single-stage localization methods and better adapts to occlusion, deformation, and pose variation of clothing.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a two-stage clothing keypoint localization method and system.
Background art
In the field of image processing, the positions of target keypoints and the information they carry play an important role in tasks such as target recognition; facial keypoints, for example, have a non-negligible influence on face recognition. Target keypoint localization can therefore be regarded as a fundamental task in image processing. Likewise, clothing keypoint localization benefits tasks such as clothing retrieval, and clothing retrieval has become a key research focus of major e-commerce platforms, since clothing is among the most popular online retail commodities.
Research on clothing keypoint localization is still in its infancy, and existing clothing keypoint localization methods are very limited: most are single-stage methods whose localization accuracy is low and whose results are easily affected by occlusion, deformation, and pose variation of the clothing.
It can be seen that the prior art suffers from the technical problems of low localization accuracy and easily degraded localization results.
Summary of the invention
In view of the above defects or improvement requirements of the prior art, the present invention provides a two-stage clothing keypoint localization method and system, thereby solving the technical problems of the prior art that localization accuracy is low and localization results are easily degraded.
To achieve the above object, according to one aspect of the present invention, a two-stage clothing keypoint localization method is provided, comprising the following steps:
(1) localizing the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints;
(2) extracting a clothing region of interest according to the coarse localization information of the clothing keypoints;
(3) localizing the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints;
wherein the multi-task deep neural network is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task.
Further, the training of the multi-task deep neural network comprises:
for each sample clothing image in a sample clothing image set, annotating the coordinates of the clothing keypoints, the structure type of the clothing keypoints, and the visibility types of the clothing keypoints, to obtain an annotated sample clothing image set;
wherein the structure type of the clothing keypoints is represented by the cluster center vectors obtained by clustering the clothing keypoint vectors; the annotated sample clothing image set is taken as the input of the multi-task deep neural network; the predicted coordinates of the clothing keypoints, the predicted probabilities of the visibility types, and the distances between the predicted keypoint coordinates and the cluster center vectors are taken as the outputs of the multi-task deep neural network; and the coordinate regression task, the structure type prediction task, and the visibility type prediction task are trained simultaneously with the objective of minimizing a loss function, yielding the trained multi-task deep neural network.
Further, the structure type of the clothing keypoints comprises the clothing style, the placement, and the size of the clothing region of interest.
Further, the clothing keypoint visibility types comprise visible, occluded, and absent.
Further, the loss function comprises a first loss function, a second loss function, and a third loss function:
the first loss function is the Euclidean distance between the annotated coordinates of the clothing keypoints and the predicted coordinates of the clothing keypoints;
the second loss function is the softmax loss function used to predict the clothing keypoint visibility types;
the third loss function is the Euclidean distance between the vector formed by the predicted coordinates of the clothing keypoints and the cluster center vector.
Further, the training of the deep fully convolutional network comprises:
for each sample clothing image in the sample clothing image set, extracting a sample clothing region-of-interest image and annotating the coordinates of the clothing keypoints on it;
taking the annotated sample clothing region-of-interest images as the input of the deep fully convolutional network, taking the coordinates corresponding to the maxima on the response maps output by the network as the predicted coordinates of the clothing keypoints on the sample clothing region-of-interest image, and training the network with the objective of minimizing a fourth loss function, yielding the trained deep fully convolutional network;
wherein the fourth loss function is the Euclidean distance between the predicted coordinates and the annotated coordinates of the clothing keypoints on the sample clothing region-of-interest image.
Further, the coarse localization information of the clothing keypoints comprises: the coarse coordinates of the clothing keypoints and the structure type of the clothing keypoints.
Further, step (2) comprises:
combining the coarse coordinates of the clothing keypoints with the structure type of the clothing keypoints to adaptively generate a rectangular box enclosing the clothing keypoints, and taking the rectangular box as the clothing region of interest.
Further, step (3) comprises:
localizing the clothing keypoints within the clothing region of interest using an n-stage deep fully convolutional network;
in the first-stage prediction, performing clothing keypoint prediction on the clothing region of interest with the deep fully convolutional network to obtain the first-stage response maps;
for i >= 2, in the i-th stage prediction, feeding the clothing region of interest together with the response maps of the (i-1)-th stage into the deep fully convolutional network to obtain the i-th stage response maps;
after obtaining the response maps of the n-th stage, applying a deconvolution operation to them to obtain the final response maps, and taking the coordinates corresponding to the maxima on the final response maps as the final localization information of the clothing keypoints.
According to another aspect of the present invention, a two-stage clothing keypoint localization system is provided, comprising the following modules:
a coarse localization module, configured to localize the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints;
a clothing region-of-interest extraction module, configured to extract a clothing region of interest according to the coarse localization information of the clothing keypoints;
a fine localization module, configured to localize the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints;
wherein the multi-task deep neural network is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task.
In general, compared with the prior art, the above technical solutions conceived by the present invention achieve the following beneficial effects:
(1) The present invention localizes the keypoints of a clothing image in two stages: the first stage combines the structure type and visibility types of the clothing keypoints to obtain their coarse localization information and extracts a clothing region of interest, and the second stage performs accurate keypoint localization on the extracted region of interest. This achieves pixel-level keypoint localization accuracy, and the localization results are robust to pose variation, occlusion, and deformation of the clothing.
(2) In the coarse localization stage, the present invention obtains the clothing region of interest from the coarse keypoint coordinates together with the keypoint structure type, which avoids losing keypoints from the extracted region of interest, effectively removes the background, and yields a cleaner region of interest in preparation for the subsequent fine localization.
(3) In the multi-stage deep fully convolutional network of the fine localization stage, the response maps output by each stage are fed as context into the next stage, improving the accuracy of the response maps stage by stage and ultimately improving the keypoint localization accuracy of the whole scheme.
(4) In the multi-stage deep fully convolutional network of the fine localization stage, the response maps output by the last stage of the convolution phase are deconvolved to obtain response maps of the same size as the input image, so that the keypoint position prediction reaches pixel-level accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of a two-stage clothing keypoint localization method provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of the multi-task deep neural network provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the clothing region-of-interest extraction method provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of the multi-stage deep fully convolutional network provided by an embodiment of the present invention;
Fig. 5(a) is the clothing keypoint coarse localization figure provided by Embodiment 1 of the present invention;
Fig. 5(b) is the clothing region-of-interest annotation figure provided by Embodiment 1 of the present invention;
Fig. 5(c) is the clothing keypoint accurate localization figure provided by Embodiment 1 of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.
As shown in Fig. 1, a two-stage clothing keypoint localization method comprises the following steps:
(1) localizing the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints;
(2) extracting a clothing region of interest according to the coarse localization information of the clothing keypoints;
(3) localizing the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints.
As shown in Fig. 2, fc denotes a fully connected layer. The multi-task deep neural network (which uses VGG_16 to extract features) is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task.
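For illustration only (not part of the disclosed embodiment), the network of Fig. 2 could be sketched in PyTorch as follows. The keypoint count, the fc widths, and the choice to realize the structure-type task purely through its loss term are assumptions; the patent fixes only the VGG_16 feature extractor and the fc heads:

```python
import torch.nn as nn
import torchvision.models as models

class MultiTaskKeypointNet(nn.Module):
    """Coarse-stage multi-task network: VGG_16 features plus fc task heads.

    Layer sizes and the keypoint count are illustrative assumptions. In this
    reading, the structure-type task has no separate head: it is trained
    through the third loss, which pulls the predicted coordinate vector
    toward the annotated cluster center (see the loss sketch below).
    """
    def __init__(self, num_keypoints=8):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.features = models.vgg16(weights=None).features  # VGG_16 extractor
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.trunk = nn.Sequential(                          # shared fc trunk
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
        )
        # Head 1: keypoint coordinate regression, (x, y) per keypoint.
        self.coord_head = nn.Linear(4096, num_keypoints * 2)
        # Head 2: visibility type, 3 classes (visible / occluded / absent).
        self.vis_head = nn.Linear(4096, num_keypoints * 3)

    def forward(self, x):
        h = self.trunk(self.pool(self.features(x)))
        coords = self.coord_head(h).view(-1, self.num_keypoints, 2)
        vis_logits = self.vis_head(h).view(-1, self.num_keypoints, 3)
        return coords, vis_logits
```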
Specifically, the training of the multi-task deep neural network comprises:
for each sample clothing image in a sample clothing image set, annotating the coordinates of the clothing keypoints, the structure type of the clothing keypoints, and the visibility types of the clothing keypoints, to obtain an annotated sample clothing image set;
wherein the structure type of the clothing keypoints is represented by the cluster center vectors obtained by clustering the clothing keypoint vectors; the annotated sample clothing image set is taken as the input of the multi-task deep neural network; the predicted coordinates of the clothing keypoints, the predicted probabilities of the visibility types, and the distances between the predicted keypoint coordinates and the cluster center vectors are taken as the outputs of the multi-task deep neural network; and the coordinate regression task, the structure type prediction task, and the visibility type prediction task are trained simultaneously with the objective of minimizing a loss function, yielding the trained multi-task deep neural network.
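The patent does not name a clustering algorithm or a cluster count; one plausible reading is k-means over the flattened annotated keypoint coordinates, with each cluster center serving as one structure type. A minimal sketch under that assumption (the file name and cluster count are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# keypoint_vectors: one row per training image, the annotated keypoint
# coordinates flattened into a single vector (x1, y1, ..., xK, yK).
keypoint_vectors = np.load("train_keypoints.npy")  # shape (N, 2*K); file name is illustrative

# Cluster the keypoint vectors; each cluster center then serves as one
# "structure type", and each image is labeled with its nearest center.
kmeans = KMeans(n_clusters=30, random_state=0).fit(keypoint_vectors)
structure_type_labels = kmeans.labels_         # per-image structure type id
cluster_centers = kmeans.cluster_centers_      # target vectors for the third loss
```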
Further, the structure type of the clothing keypoints comprises the clothing style, the placement, and the size of the clothing region of interest.
Further, the clothing keypoint visibility types comprise visible, occluded, and absent.
Further, the loss function comprises a first loss function, a second loss function, and a third loss function:
the first loss function is the Euclidean distance between the annotated coordinates of the clothing keypoints and the predicted coordinates of the clothing keypoints;
the second loss function is the softmax loss function used to predict the clothing keypoint visibility types;
the third loss function is the Euclidean distance between the vector formed by the predicted coordinates of the clothing keypoints and the cluster center vector.
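Under the assumptions of the sketches above, the three losses could be combined as follows; the task weights w1, w2, w3 are illustrative, as the patent does not specify how the terms are balanced:

```python
import torch
import torch.nn.functional as F

def multitask_loss(coords_pred, vis_logits, coords_gt, vis_gt, center_gt,
                   w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of the three losses described above (weights are assumptions).

    coords_pred: (B, K, 2) predicted keypoint coordinates
    coords_gt:   (B, K, 2) annotated keypoint coordinates
    vis_logits:  (B, K, 3) visibility logits (visible / occluded / absent)
    vis_gt:      (B, K)    visibility class labels
    center_gt:   (B, 2*K)  cluster center vector of each image's structure type
    """
    # First loss: Euclidean distance between annotated and predicted coordinates.
    loss1 = torch.norm(coords_pred - coords_gt, dim=-1).mean()
    # Second loss: softmax (cross-entropy) loss on the visibility types.
    loss2 = F.cross_entropy(vis_logits.reshape(-1, 3), vis_gt.reshape(-1))
    # Third loss: Euclidean distance between the predicted-coordinate vector
    # and the cluster center vector of the annotated structure type.
    loss3 = torch.norm(coords_pred.flatten(1) - center_gt, dim=-1).mean()
    return w1 * loss1 + w2 * loss2 + w3 * loss3
```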
Further, the training of the deep fully convolutional network comprises:
for each sample clothing image in the sample clothing image set, extracting a sample clothing region-of-interest image and annotating the coordinates of the clothing keypoints on it;
taking the annotated sample clothing region-of-interest images as the input of the deep fully convolutional network, taking the coordinates corresponding to the maxima on the response maps output by the network as the predicted coordinates of the clothing keypoints on the sample clothing region-of-interest image, and training the network with the objective of minimizing a fourth loss function, yielding the trained deep fully convolutional network;
wherein the fourth loss function is the Euclidean distance between the predicted coordinates and the annotated coordinates of the clothing keypoints on the sample clothing region-of-interest image.
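The decoding step that turns response maps into predicted coordinates, taking the location of each map's maximum, could look like this (a sketch; the layout of one response-map channel per keypoint is an assumption consistent with Fig. 4):

```python
import torch

def decode_keypoints(response_maps):
    """Take the argmax of each keypoint's response map as its predicted coordinate.

    response_maps: tensor of shape (B, K, H, W), one map per keypoint.
    Returns predicted (x, y) coordinates of shape (B, K, 2).
    """
    b, k, h, w = response_maps.shape
    flat_idx = response_maps.flatten(2).argmax(dim=-1)          # (B, K), index of the maximum
    ys = torch.div(flat_idx, w, rounding_mode="floor").float()  # row of the maximum
    xs = (flat_idx % w).float()                                 # column of the maximum
    return torch.stack([xs, ys], dim=-1)
```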
Further, the coarse localization information of the clothing keypoints comprises: the coarse coordinates of the clothing keypoints and the structure type of the clothing keypoints.
As shown in Fig. 3, step (2) comprises:
combining the coarse coordinates of the clothing keypoints with the structure type of the clothing keypoints to adaptively generate a rectangular box enclosing the clothing keypoints, and taking the rectangular box as the clothing region of interest.
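The patent leaves the exact box-generation rule open; one plausible realization is to take the bounding box of the coarse keypoints and pad it by a margin scaled with the region size carried by the structure type, so that no keypoint is lost and most background is excluded. A sketch under those assumptions:

```python
import numpy as np

def clothing_roi(coarse_coords, roi_scale, img_w, img_h):
    """Adaptively generate a rectangular region of interest from coarse keypoints.

    coarse_coords: (K, 2) coarse keypoint coordinates
    roi_scale:     padding factor derived from the structure type's region size
                   (an assumption; the patent only says the box is adaptive)
    """
    x_min, y_min = coarse_coords.min(axis=0)
    x_max, y_max = coarse_coords.max(axis=0)
    pad_x = roi_scale * (x_max - x_min)
    pad_y = roi_scale * (y_max - y_min)
    # Expand so no keypoint is lost, then clip to the image bounds.
    x0 = max(int(x_min - pad_x), 0)
    y0 = max(int(y_min - pad_y), 0)
    x1 = min(int(x_max + pad_x), img_w - 1)
    y1 = min(int(y_max + pad_y), img_h - 1)
    return x0, y0, x1, y1
```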
As shown in Fig. 4, step (3) comprises:
localizing the clothing keypoints within the clothing region of interest using an n-stage deep fully convolutional network;
in the first-stage prediction, performing clothing keypoint prediction on the clothing region of interest with the deep fully convolutional network to obtain the first-stage response maps;
for i >= 2, in the i-th stage prediction, feeding the clothing region of interest together with the response maps of the (i-1)-th stage into the deep fully convolutional network to obtain the i-th stage response maps;
after obtaining the response maps of the n-th stage, applying a deconvolution operation to them to obtain the final response maps, and taking the coordinates corresponding to the maxima on the final response maps as the final localization information of the clothing keypoints.
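A minimal sketch of the cascade in Fig. 4, under assumed layer sizes and stage count; only the wiring follows the patent: each stage re-reads the ROI image together with the previous stage's response maps, and a final deconvolution restores the input resolution so the argmax decoding reaches pixel-level accuracy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedFCN(nn.Module):
    """n-stage fully convolutional refinement with a final deconvolution.

    Stage 1 sees only the ROI image; stage i >= 2 additionally sees the
    response maps of stage i-1 as context. All sizes are illustrative.
    """
    def __init__(self, num_keypoints=8, num_stages=3):
        super().__init__()
        def make_stage(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, num_keypoints, 1),  # one response map per keypoint
            )
        self.stages = nn.ModuleList(
            [make_stage(3)] + [make_stage(3 + num_keypoints) for _ in range(num_stages - 1)]
        )
        # Deconvolution: upsample the last-stage response maps back to input size.
        self.deconv = nn.ConvTranspose2d(num_keypoints, num_keypoints, 4, stride=2, padding=1)

    def forward(self, roi):
        x = F.avg_pool2d(roi, 2)                       # stages run at half resolution
        maps = self.stages[0](x)
        for stage in self.stages[1:]:
            maps = stage(torch.cat([x, maps], dim=1))  # previous maps as context
        return self.deconv(maps)                       # response maps at input resolution
```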
Fig. 5(a) is the clothing keypoint coarse localization figure provided by Embodiment 1 of the present invention, Fig. 5(b) is the clothing region-of-interest annotation figure provided by Embodiment 1, and Fig. 5(c) is the clothing keypoint accurate localization figure provided by Embodiment 1. As Figs. 5(a)-5(c) show, the present invention localizes the keypoints of a clothing image in two stages: the first stage combines the structure type and visibility types of the clothing keypoints to obtain the coarse localization information and extracts the clothing region of interest, and the second stage performs accurate keypoint localization on the extracted region of interest. This achieves pixel-level keypoint localization accuracy, and the localization results are robust to pose variation, occlusion, and deformation of the clothing.
As will be readily appreciated by those skilled in the art, the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A two-stage clothing keypoint localization method, characterized by comprising the following steps:
(1) localizing the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints;
(2) extracting a clothing region of interest according to the coarse localization information of the clothing keypoints;
(3) localizing the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints;
wherein the multi-task deep neural network is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task.
2. The two-stage clothing keypoint localization method of claim 1, characterized in that the training of the multi-task deep neural network comprises:
for each sample clothing image in a sample clothing image set, annotating the coordinates of the clothing keypoints, the structure type of the clothing keypoints, and the visibility types of the clothing keypoints, to obtain an annotated sample clothing image set;
wherein the structure type of the clothing keypoints is represented by the cluster center vectors obtained by clustering the clothing keypoint vectors; the annotated sample clothing image set is taken as the input of the multi-task deep neural network; the predicted coordinates of the clothing keypoints, the predicted probabilities of the visibility types, and the distances between the predicted keypoint coordinates and the cluster center vectors are taken as the outputs of the multi-task deep neural network; and the coordinate regression task, the structure type prediction task, and the visibility type prediction task are trained simultaneously with the objective of minimizing a loss function, yielding the trained multi-task deep neural network.
3. The two-stage clothing keypoint localization method of claim 2, characterized in that the structure type of the clothing keypoints comprises the clothing style, the placement, and the size of the clothing region of interest.
4. The two-stage clothing keypoint localization method of claim 2, characterized in that the clothing keypoint visibility types comprise visible, occluded, and absent.
5. The two-stage clothing keypoint localization method of claim 2, characterized in that the loss function comprises a first loss function, a second loss function, and a third loss function:
the first loss function is the Euclidean distance between the annotated coordinates of the clothing keypoints and the predicted coordinates of the clothing keypoints;
the second loss function is the softmax loss function used to predict the clothing keypoint visibility types;
the third loss function is the Euclidean distance between the vector formed by the predicted coordinates of the clothing keypoints and the cluster center vector.
6. The two-stage clothing keypoint localization method of any one of claims 1 to 5, characterized in that the training of the deep fully convolutional network comprises:
for each sample clothing image in the sample clothing image set, extracting a sample clothing region-of-interest image and annotating the coordinates of the clothing keypoints on the sample clothing region-of-interest image;
taking the annotated sample clothing region-of-interest images as the input of the deep fully convolutional network, taking the coordinates corresponding to the maxima on the response maps output by the network as the predicted coordinates of the clothing keypoints on the sample clothing region-of-interest image, and training the deep fully convolutional network with the objective of minimizing a fourth loss function, yielding the trained deep fully convolutional network;
wherein the fourth loss function is the Euclidean distance between the predicted coordinates and the annotated coordinates of the clothing keypoints on the sample clothing region-of-interest image.
7. The two-stage clothing keypoint localization method of any one of claims 1 to 5, characterized in that the coarse localization information of the clothing keypoints comprises: the coarse coordinates of the clothing keypoints and the structure type of the clothing keypoints.
8. The two-stage clothing keypoint localization method of claim 7, characterized in that step (2) comprises:
combining the coarse coordinates of the clothing keypoints with the structure type of the clothing keypoints to adaptively generate a rectangular box enclosing the clothing keypoints, and taking the rectangular box as the clothing region of interest.
9. The two-stage clothing keypoint localization method of claim 6, characterized in that step (3) comprises:
localizing the clothing keypoints within the clothing region of interest using an n-stage deep fully convolutional network;
in the first-stage prediction, performing clothing keypoint prediction on the clothing region of interest with the deep fully convolutional network to obtain the first-stage response maps;
for i >= 2, in the i-th stage prediction, feeding the clothing region of interest together with the response maps of the (i-1)-th stage into the deep fully convolutional network to obtain the i-th stage response maps;
after obtaining the response maps of the n-th stage, applying a deconvolution operation to them to obtain the final response maps, and taking the coordinates corresponding to the maxima on the final response maps as the final localization information of the clothing keypoints.
10. A two-stage clothing keypoint localization system, characterized by comprising the following modules:
a coarse localization module, configured to localize the clothing keypoints in a clothing image using a multi-task deep neural network to obtain coarse localization information of the clothing keypoints;
a clothing region-of-interest extraction module, configured to extract a clothing region of interest according to the coarse localization information of the clothing keypoints;
a fine localization module, configured to localize the clothing keypoints within the clothing region of interest using a deep fully convolutional network to obtain the final localization information of the clothing keypoints;
wherein the multi-task deep neural network is obtained by simultaneously training a clothing keypoint coordinate regression task, a clothing keypoint structure type prediction task, and a clothing keypoint visibility type prediction task.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910411194.7A CN110222685A (en) | 2019-05-16 | 2019-05-16 | Two-stage clothing keypoint localization method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110222685A (en) | 2019-09-10 |
Family
ID=67821153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910411194.7A (pending, published as CN110222685A) | Two-stage clothing keypoint localization method and system | 2019-05-16 | 2019-05-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110222685A (en) |
-
2019
- 2019-05-16 CN CN201910411194.7A patent/CN110222685A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105760834A (en) * | 2016-02-14 | 2016-07-13 | 北京飞搜科技有限公司 | Face feature point locating method |
CN106599830A (en) * | 2016-12-09 | 2017-04-26 | 中国科学院自动化研究所 | Method and apparatus for positioning face key points |
CN109359568A (en) * | 2018-09-30 | 2019-02-19 | 南京理工大学 | A kind of human body critical point detection method based on figure convolutional network |
Non-Patent Citations (3)
Title |
---|
SIJIE YAN et al.: "Unconstrained Fashion Landmark Detection via Hierarchical Recurrent Transformer Networks", ACM *
DONG Ruixia: "Research on facial feature point localization methods combined with face detection" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *
CHEN Yuanyuan et al.: "Keypoint-based clothing retrieval" (in Chinese), Journal of Computer Applications *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111444928A (en) * | 2020-03-30 | 2020-07-24 | 北京市商汤科技开发有限公司 | Key point detection method and device, electronic equipment and storage medium |
WO2021196718A1 (en) * | 2020-03-30 | 2021-10-07 | 北京市商汤科技开发有限公司 | Key point detection method and apparatus, electronic device, storage medium, and computer program |
TWI763205B (en) * | 2020-03-30 | 2022-05-01 | 大陸商北京市商湯科技開發有限公司 | Method and apparatus for key point detection, electronic device, and storage medium |
CN111932621A (en) * | 2020-08-07 | 2020-11-13 | 武汉中海庭数据技术有限公司 | Method and device for evaluating arrow extraction confidence |
CN111932621B (en) * | 2020-08-07 | 2022-06-17 | 武汉中海庭数据技术有限公司 | Method and device for evaluating arrow extraction confidence |
CN112200183A (en) * | 2020-09-30 | 2021-01-08 | 北京字节跳动网络技术有限公司 | Image processing method, device, equipment and computer readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109559320A (en) | Method and system for semantic mapping in visual SLAM based on dilated-convolution deep neural networks | |
CN110222685A (en) | Two-stage clothing keypoint localization method and system | |
CN110163836A (en) | Excavator detection method based on deep learning for high-altitude inspection | |
CN111797846B (en) | Feedback-based target detection method using a feature pyramid network | |
CN107182036A (en) | Adaptive location-fingerprint positioning method based on multidimensional feature fusion | |
CN111079604A (en) | Method for fast detection of tiny targets in large-scale remote sensing images | |
CN105792353A (en) | Image-matching indoor positioning method assisted by crowd-sensed WiFi signal fingerprints | |
CN100511269C (en) | Fast image edge matching method based on corner guidance | |
CN109145911A (en) | Method for extracting target persons from street photos | |
CN110263731B (en) | Single-stage face detection system | |
CN109583329A (en) | Loop-closure detection method based on road semantic landmark screening | |
CN112507845A (en) | Pedestrian multi-target tracking method based on CenterNet and depth correlation matrix | |
Liu et al. | Semantic classification for hyperspectral image by integrating distance measurement and relevance vector machine | |
Qian et al. | Object detection using deep convolutional neural networks | |
Wei | Small object detection based on deep learning | |
CN114140700A (en) | Step-by-step heterogeneous image template matching method based on cascade network | |
CN116229286B (en) | Knowledge-driven space target situation awareness method and system | |
CN110851669A (en) | Organization name disambiguation method and device based on geographic location information | |
CN110580462A (en) | Natural scene text detection method and system based on a non-local network | |
CN117557901A (en) | Detection model of small target crops in field and construction method | |
Eden et al. | Indoor navigation using text extraction | |
CN114708321A (en) | Semantic-based camera pose estimation method and system | |
Wang et al. | Improved military equipment identification algorithm based on YOLOv5 framework | |
Wang et al. | An RGB-D Based Approach for Human Pose Estimation | |
Jiang et al. | Development and application of deep convolutional neural network in target detection |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190910 |