Batch Coherence-Driven Network for Part-Aware Person Re-Identification

Published: 01 January 2021

Abstract

Existing part-aware person re-identification methods typically employ two separate steps: body part detection and part-level feature extraction. However, part detection introduces an additional computational cost and is inherently challenging for low-quality images. Accordingly, in this work, we propose a simple framework named Batch Coherence-Driven Network (BCD-Net) that bypasses body part detection during both the training and testing phases while still learning semantically aligned part features. Our key observation is that the statistics in a batch of images are stable, and therefore that batch-level constraints are robust. First, we introduce a batch coherence-guided channel attention (BCCA) module that highlights the relevant channels for each respective part from the output of a deep backbone model. We investigate channel-part correspondence using a batch of training images, and then impose a novel batch-level supervision signal that helps BCCA identify part-relevant channels. Second, the mean position of a body part is robust and consequently coherent between batches throughout the training process. Accordingly, we introduce a pair of regularization terms based on the semantic consistency between batches. The first term regularizes the high responses of BCD-Net for each part in a batch, constraining them to a predefined area, while the second encourages the aggregated responses of BCD-Net over all parts to cover the entire human body. Together, these constraints guide BCD-Net to learn diverse, complementary, and semantically aligned part-level features. Extensive experimental results demonstrate that BCD-Net consistently achieves state-of-the-art performance on four large-scale ReID benchmarks.
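
The abstract describes two batch-level mechanisms: per-part channel attention supervised with batch statistics (BCCA), and batch-coherent spatial regularization of part responses. The snippet below is a minimal PyTorch sketch of the first idea only, written from the abstract's description; the class name BCCASketch, the squeeze-and-excitation-style gates, the reduction ratio, the batch-averaged descriptor, and the part count are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of batch coherence-guided channel attention (BCCA).
# Shapes, names, and the batch-averaging scheme are assumptions made
# for illustration; they are not the authors' released implementation.
import torch
import torch.nn as nn


class BCCASketch(nn.Module):
    """One channel-attention gate per body part, driven by batch-level statistics."""

    def __init__(self, in_channels: int = 2048, num_parts: int = 6, reduction: int = 16):
        super().__init__()
        self.gates = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_channels, in_channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(in_channels // reduction, in_channels),
                nn.Sigmoid(),
            )
            for _ in range(num_parts)
        ])

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: backbone output, shape (B, C, H, W).
        c = feat.shape[1]
        # Batch-coherent channel descriptor: pool spatially, then average over
        # the batch, so every image shares the same per-part channel weights.
        batch_desc = feat.mean(dim=(2, 3)).mean(dim=0, keepdim=True)   # (1, C)
        part_features = []
        for gate in self.gates:
            weights = gate(batch_desc).view(1, c, 1, 1)                # (1, C, 1, 1)
            # Reweight channels for this part, then pool to a part-level vector.
            part_features.append((feat * weights).mean(dim=(2, 3)))    # (B, C)
        return torch.stack(part_features, dim=1)                       # (B, parts, C)


if __name__ == "__main__":
    x = torch.randn(32, 2048, 24, 8)        # e.g. a ResNet-50 feature map
    print(BCCASketch()(x).shape)            # torch.Size([32, 6, 2048])
```

The defining choice in this sketch is that each gate's input is averaged over the whole batch, so the channel weights for a given part depend on batch-level statistics rather than on any single, possibly low-quality image, mirroring the abstract's claim that batch statistics are stable.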

Published In

IEEE Transactions on Image Processing, Volume 30, 2021, 5053 pages

Publisher

IEEE Press

Qualifiers

  • Research-article

Cited By

  • (2024) Text-based occluded person re-identification via multi-granularity contrastive consistency learning. Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, 10.1609/aaai.v38i6.28433, pp. 6162–6170. Online publication date: 20-Feb-2024.
  • (2024) Fine-grained Semantic Alignment with Transferred Person-SAM for Text-based Person Retrieval. Proceedings of the 32nd ACM International Conference on Multimedia, 10.1145/3664647.3681553, pp. 5432–5441. Online publication date: 28-Oct-2024.
  • (2024) STFE: A Comprehensive Video-Based Person Re-Identification Network Based on Spatio-Temporal Feature Enhancement. IEEE Transactions on Multimedia, 10.1109/TMM.2024.3362136, vol. 26, pp. 7237–7249. Online publication date: 5-Feb-2024.
  • (2024) Disentangled Sample Guidance Learning for Unsupervised Person Re-Identification. IEEE Transactions on Image Processing, 10.1109/TIP.2024.3456008, vol. 33, pp. 5144–5158. Online publication date: 1-Jan-2024.
  • (2023) Prototype-guided Cross-modal Completion and Alignment for Incomplete Text-based Person Re-identification. Proceedings of the 31st ACM International Conference on Multimedia, 10.1145/3581783.3613802, pp. 5253–5261. Online publication date: 26-Oct-2023.
  • (2023) Context Sensing Attention Network for Video-based Person Re-identification. ACM Transactions on Multimedia Computing, Communications, and Applications, 10.1145/3573203, vol. 19, no. 4, pp. 1–20. Online publication date: 27-Feb-2023.
  • (2023) Quality-Aware Part Models for Occluded Person Re-Identification. IEEE Transactions on Multimedia, 10.1109/TMM.2022.3156282, vol. 25, pp. 3154–3165. Online publication date: 1-Jan-2023.
  • (2023) Uncertainty-Aware Clustering for Unsupervised Domain Adaptive Object Re-Identification. IEEE Transactions on Multimedia, 10.1109/TMM.2022.3149629, vol. 25, pp. 2624–2635. Online publication date: 1-Jan-2023.
  • (2023) RMHNet: A Relation-Aware Multi-granularity Hierarchical Network for Person Re-identification. Neural Processing Letters, 10.1007/s11063-022-10946-y, vol. 55, no. 2, pp. 1433–1454. Online publication date: 1-Apr-2023.
  • (2022) Temporal-Consistent Visual Clue Attentive Network for Video-Based Person Re-Identification. Proceedings of the 2022 International Conference on Multimedia Retrieval, 10.1145/3512527.3531362, pp. 72–80. Online publication date: 27-Jun-2022.
