Abstract
Semantic image segmentation plays a crucial role in scene understanding tasks. In autonomous driving, the motion of the vehicle causes scale changes of objects in the street scene. Although multi-scale features can be learned by concatenating multiple atrous-convolved features, it remains difficult to accurately segment pedestrians from only partial feature information caused by factors such as occlusion. Therefore, we propose a Xiphoid Spatial Pyramid Pooling (XSPP) method that integrates detailed information: while concatenating multiple atrous-convolved features, it retains image-level features that carry target boundary information. Based on this method, we design an encoder-decoder architecture called DXNet. The encoder consists of a deep convolutional neural network and two XSPP modules, and the decoder decodes the high-level features through up-sampling operations and skip connections to gradually restore target boundaries. We evaluate the effectiveness of our approach on the Cityscapes dataset. Experimental results show that our method performs better under occlusion, and the mean intersection-over-union score of our model outperforms that of several representative works.
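For concreteness, below is a minimal PyTorch sketch (not the authors' released code) of a spatial-pyramid-pooling module in the spirit of the XSPP described above: several parallel atrous-convolved branches are concatenated together with an image-level pooling branch that preserves global context. The class name XSPPSketch, the dilation rates, the channel widths, and the way boundary detail is retained are illustrative assumptions only; the paper's exact XSPP design may differ.

# Illustrative sketch of an XSPP-style module; rates and channel sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class XSPPSketch(nn.Module):
    """Concatenate parallel atrous convolutions with an image-level pooling branch."""

    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus one 3x3 atrous branch per dilation rate.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)]
            + [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False)
                for r in rates
            ]
        )
        # Image-level branch: global average pooling followed by a 1x1 projection.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        )
        # Fuse the concatenated branches back to out_ch channels.
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        # Upsample the pooled image-level features back to the spatial size of x.
        feats.append(
            F.interpolate(self.image_pool(x), size=(h, w), mode="bilinear", align_corners=False)
        )
        return self.project(torch.cat(feats, dim=1))

if __name__ == "__main__":
    # Shape check with a dummy backbone feature map (1/16 resolution of a 512x1024 crop).
    x = torch.randn(1, 2048, 32, 64)
    print(XSPPSketch()(x).shape)  # torch.Size([1, 256, 32, 64])

In an encoder-decoder arrangement such as the one the abstract outlines, a module of this kind would sit on top of the backbone features, and a decoder would then upsample its output and merge it with lower-level features via skip connections to recover object boundaries.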
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful and constructive comments. This work was partially supported by the National Natural Science Foundation of China (NSFC Grant Nos. 61972059, 61702055, 61773272, 61272059), the Natural Science Foundation of Jiangsu Province under Grants BK20191474 and BK20161268, the Research and Innovation Fund of the Science and Technology Development Center of the Ministry of Education (2018A01007), the Ministry of Education Science and Technology Development Center Industry-University Research Innovation Fund (2018A02003), and the Humanities and Social Sciences Foundation of the Ministry of Education under Grant 18YJCZH229.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Shang, Y., Zhong, S., Gong, S., Zhou, L., Ying, W. (2019). DXNet: An Encoder-Decoder Architecture with XSPP for Semantic Image Segmentation in Street Scenes. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1143. Springer, Cham. https://doi.org/10.1007/978-3-030-36802-9_59
DOI: https://doi.org/10.1007/978-3-030-36802-9_59
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36801-2
Online ISBN: 978-3-030-36802-9
eBook Packages: Computer Science, Computer Science (R0)