
3D Person Re-Identification Based on Global Semantic Guidance and Local Feature Aggregation

Published: 01 June 2024

Abstract

Person re-identification (Re-ID) plays a crucial role in ensuring public safety and has attracted considerable research attention. 3D shape information is an important clue for understanding the posture and shape of pedestrians. However, most existing person Re-ID methods learn pedestrian feature representations from images, ignoring the real 3D human body structure and the spatial relationship between pedestrians and interferents. To address this problem, we devise a new point cloud Re-ID network (PointReIDNet), designed to obtain 3D shape representations of pedestrians from point clouds of 3D scenes. The model consists of two modules: a global semantic guidance module and a local feature extraction module. The global semantic guidance module enhances the point cloud feature representation within similar-feature neighborhoods to reduce the interference caused by 3D shape reconstruction or noise. Further, to represent point clouds efficiently, we propose space cover convolution (SC-Conv), which encodes human shape information in local point clouds by constructing anisotropic geometries in coordinate neighborhoods. Extensive experiments are conducted on four holistic person Re-ID datasets, one occluded person Re-ID dataset, and one point cloud classification dataset. The results show significant improvements over existing point-cloud-based person Re-ID methods. In particular, the efficient variant of PointReIDNet reduces the parameter count from 2.30M to 0.35M with a negligible drop in performance. The source code is available at: https://github.com/changshuowang/PointReIDNet.
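
To make the local feature extraction concrete, below is a minimal PyTorch sketch of the generic pattern such modules build on: group each point's k nearest coordinate neighbors, encode relative offsets together with neighbor features through a shared MLP, and max-pool over the neighborhood. This is an illustration under stated assumptions, not the authors' SC-Conv: the names knn_group and LocalAggregation are hypothetical, and SC-Conv's anisotropic geometric construction is found only in the linked repository. The global semantic guidance module's similar-feature neighborhoods could be sketched analogously by running knn_group on feature vectors instead of coordinates.

# Illustrative sketch only (assumed names, not the authors' released code):
# k-NN grouping + shared MLP + max-pooling, the generic local aggregation
# pattern that point-cloud backbones such as PointNet++ build on.
import torch
import torch.nn as nn

def knn_group(xyz: torch.Tensor, k: int) -> torch.Tensor:
    # xyz: (B, N, 3) point coordinates; returns (B, N, k) neighbor indices.
    # Each point's nearest neighbor is itself (distance 0), harmless here.
    dists = torch.cdist(xyz, xyz)                  # (B, N, N) pairwise distances
    return dists.topk(k, largest=False).indices

class LocalAggregation(nn.Module):
    # Hypothetical local block: gather a coordinate neighborhood per point,
    # encode [neighbor feature, relative offset] with a shared MLP, max-pool.
    def __init__(self, in_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + 3, out_dim),        # features + relative xyz
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        B = xyz.shape[0]
        idx = knn_group(xyz, self.k)                        # (B, N, k)
        batch = torch.arange(B, device=xyz.device).view(B, 1, 1)
        nbr_xyz = xyz[batch, idx]                           # (B, N, k, 3)
        nbr_feat = feats[batch, idx]                        # (B, N, k, C)
        rel = nbr_xyz - xyz.unsqueeze(2)                    # relative offsets
        h = self.mlp(torch.cat([nbr_feat, rel], dim=-1))    # (B, N, k, out_dim)
        return h.max(dim=2).values                          # pool over k neighbors

if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)                  # toy batch of two point clouds
    f = torch.randn(2, 1024, 32)
    out = LocalAggregation(32, 64)(pts, f)
    print(out.shape)                               # torch.Size([2, 1024, 64])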




Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 34, Issue 6, June 2024, 1070 pages

Publisher

IEEE Press


Qualifiers

  • Research-article

Cited By

  • (2025) PV-LaP, Signal Processing, vol. 227, Feb. 2025. 10.1016/j.sigpro.2024.109749
  • (2024) Intelligent Customer Service System Optimization Based on Artificial Intelligence, Journal of Organizational and End User Computing, vol. 36, no. 1, pp. 1–27, Feb. 2024. 10.4018/JOEUC.336923
  • (2024) Multi-Stage Auxiliary Learning for Visible-Infrared Person Re-Identification, IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 11, pp. 12032–12047, Jul. 2024. 10.1109/TCSVT.2024.3425536
  • (2024) Deep learning based computer vision under the prism of 3D point clouds: a systematic review, The Visual Computer, vol. 40, no. 11, pp. 8287–8329, Nov. 2024. 10.1007/s00371-023-03237-7
  • (2024) GPSFormer: A Global Perception and Local Structure Fitting-Based Transformer for Point Cloud Understanding, Computer Vision – ECCV 2024, pp. 75–92, Sep. 2024. 10.1007/978-3-031-73242-3_5
