Abstract
Face parsing assigns a pixel-wise label to each facial component and has drawn much attention recently. Previous methods have shown their efficiency in face parsing, but they overlook the correlation among different face regions. This correlation is a critical clue about facial appearance, pose, expression, etc., and should be taken into account for face parsing. To this end, we propose to model and reason about region-wise relations by learning graph representations, and we leverage the edge information between regions for optimized abstraction. Specifically, we encode a facial image into a global graph representation where a collection of pixels (a “region”) with similar features is projected to each vertex. Our model learns and reasons over the relations between regions by propagating information across vertices on the graph. Furthermore, we incorporate the edge information when aggregating pixel-wise features onto vertices, which emphasizes the features around edges for fine segmentation along them. The learned graph representation is finally projected back to pixel grids for parsing. Experiments demonstrate that our model outperforms state-of-the-art methods on the widely used Helen dataset, and also exhibits superior performance on the large-scale CelebAMask-HQ and LaPa datasets. The code is available at https://github.com/tegusi/EAGRNet.
W. Hu—This work was in collaboration with JD AI Research during Gusi Te’s internship there.
1 Introduction
Face parsing assigns a pixel-wise label to each semantic component, such as facial skin, eyes, mouth and nose, which is a particular task in semantic segmentation. It has been applied in a variety of scenarios such as face understanding, editing, synthesis, and animation [1,2,3].
Region-based methods have recently been proposed to model the facial components separately [4,5,6], achieving state-of-the-art performance on current benchmarks. However, these methods rely on the information within each individual region, while the correlation among regions has not been exploited to capture long-range dependencies. In fact, facial components exhibit abundant correlations with each other. For instance, the eyes, mouth and eyebrows generally become more curved when people smile, and the facial skin and other components appear darker when the lighting is weak.
The correlation between facial components is a critical clue in face representation and should be taken into account in face parsing. To this end, we propose to learn graph representations over facial images, which model the relations between regions and enable reasoning over non-local regions to capture long-range dependencies. To bridge facial image pixels and graph vertices, we project a collection of pixels (a “region”) with similar features to each vertex. The pixel-wise features in a region are aggregated into the feature of the corresponding vertex. In particular, to achieve accurate segmentation along the edges between different components, we propose edge attention in the pixel-to-vertex projection, which assigns larger weights to the features of edge pixels during feature aggregation. Further, the graph representation learns the relations between facial regions, i.e., the graph connectivity between vertices, and reasons over these relations by propagating information across all vertices on the graph, which captures long-range correlations in the facial image. The learned graph representation is finally projected back to the pixel grids for face parsing. Since the number of vertices is significantly smaller than that of pixels, the graph representation also effectively reduces feature redundancy as well as computational complexity.
Specifically, given an input facial image, we first encode the high-level and low-level feature maps by the ResNet backbone [7]. Then, we build a projection matrix to map a cluster of pixels with similar features to each vertex. The feature of each vertex is taken as the weighted aggregation of pixel-wise features in the cluster, where features of edge pixels are assigned with larger weights via an edge mask. Next, we learn and reason over the relations between vertices (i.e., regions) via graph convolution [8, 9] to further extract global semantic features. The learned features are finally projected back to a pixel-wise feature map. We test our model on Helen, CelebAMask-HQ and LaPa datasets, and surpass state-of-the-art methods.
Our main contributions are summarized as follows.
-
We propose to exploit the relations between regions for face parsing by modeling a region-level graph representation, where we project a collection of pixels with similar features to each vertex and reason over the relations to capture long-range dependencies.
-
We introduce edge attention in the pixel-to-vertex feature projection, which emphasizes the features of edge pixels during the feature aggregation to each vertex and thus enforces accurate segmentation along edges.
-
We conduct extensive experiments on the Helen, CelebAMask-HQ and LaPa datasets. The experimental results show that our model outperforms state-of-the-art methods in almost every category.
2 Related Work
2.1 Face Parsing
Face parsing is a subdivision of semantic segmentation, which assigns different labels to the corresponding regions of human faces, such as the nose, eyes and mouth. Face parsing methods can be classified into global-based and local-based methods.
Traditionally, hand-crafted features including SIFT [10] are applied to model the facial structure. Warrell et al. describe the spatial relationship of facial parts with an epitome model [11]. Kae et al. combine a Conditional Random Field (CRF) with a Restricted Boltzmann Machine (RBM) to extract local and global features [12]. With the rapid development of machine learning, CNNs have been introduced to learn more robust and rich features. Liu et al. import CNN-based features into the CRF framework to model individual pixel labels and neighborhood dependencies [13]. Luo et al. propose a hierarchical deep neural network to extract multi-scale facial features [14]. Zhou et al. adopt an adversarial learning approach to train the network and capture high-order inconsistency [15]. Liu et al. design a CNN-RNN hybrid model that benefits from both the high-quality features of CNNs and the non-local properties of RNNs [6]. Zhou et al. present an interlinked CNN that takes multi-scale images as input and allows bidirectional information passing [16]. Lin et al. propose a novel RoI Tanh-Warping operator that preserves both central and peripheral information. Their model contains two branches: a local-based branch for inner facial components and a global-based branch for outer ones. This method shows high performance, especially on hair segmentation [4].
2.2 Attention Mechanism
The attention mechanism has been proposed to capture long-range information [17], and has been applied to many tasks such as sentence encoding [18] and image feature extraction [19]. Limited by the locality of convolution operators, CNNs lack the ability to model global contextual information on their own. Chen et al. propose the Double Attention Model, which gathers information spatially and temporally to reduce the complexity of traditional non-local modules [20]. Zhao et al. propose a point-wise spatial attention module that relaxes the local neighborhood constraint [21]. Zhu et al. present an asymmetric module to reduce redundant computation and distill features [22]. Fu et al. devise a dual attention module that applies both spatial and channel attention to feature maps [23]. To explore the underlying relationships between different regions, Chen et al. project the original features into an interactive space and utilize a GCN to exploit high-order relationships [24]. Li et al. devise a robust attention module that incorporates the Expectation-Maximization algorithm [25].
2.3 Graph Reasoning
Interpreting images from the graph perspective is appealing, since an image can be regarded as a regular grid of pixels. Chandra et al. propose a Conditional Random Field (CRF) based method for image segmentation [26]. Graph convolutional networks (GCNs) have also been introduced into image segmentation. Li et al. bring graph convolution to semantic segmentation by projecting features onto vertices in the graph domain and applying graph convolution afterwards [27]. Furthermore, Lu et al. propose Graph-FCN, where semantic segmentation is reduced to vertex classification by directly transforming an image into regular grids [28]. Pourian et al. propose a method for semi-supervised segmentation [29], in which the image is divided into a community graph and different labels are assigned to the corresponding communities. Te et al. propose a computation-efficient and posture-invariant face representation with only a few key points on hypergraphs for face anti-spoofing beyond 2D attacks [30]. Zhang et al. utilize graph convolution in both the coordinate space and the feature space [31].
3 Methods
3.1 Overview
As illustrated in Fig. 1, given an input facial image, we aim to predict the corresponding parsing label and auxiliary edge map. The overall framework of our method consists of three procedures as follows.
-
Feature and Edge Extraction. We take ResNet as the backbone to extract features at various levels for multi-scale representation. The low-level features contain more details but lack semantic information, while the high-level features provide rich semantics with global information at the cost of image details. To fully exploit the global information in high-level features, we employ a spatial pyramid pooling operation to learn multi-scale contextual information. Further, we construct an edge perceiving module to acquire an edge map for the subsequent module.
-
Edge Aware Graph Reasoning. We feed the feature map and edge map into the proposed Edge Aware Graph Reasoning (EAGR) module, aiming to learn intrinsic graph representations that characterize the relations between regions. The EAGR module consists of three operations: graph projection, graph reasoning and graph reprojection, which project the original features onto vertices in an edge-aware fashion, reason over the relations between vertices (regions) on the graph, and project the learned graph representation back to the pixel grids, leading to a refined feature map of the same size.
-
Semantic Decoding. We fuse the refined features in a decoder to predict the final face parsing result. The high-level feature map is upsampled to the same dimensions as the low-level one. We concatenate both feature maps and apply a 1 \(\times \) 1 convolution layer to reduce the feature channels, predicting the final parsing labels, as sketched below.
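As a concrete illustration, the following PyTorch sketch mirrors this decoding step under assumed channel sizes; the names FusionDecoder, low_ch and high_ch are ours for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionDecoder(nn.Module):
    """Minimal sketch of the decoder: upsample, concatenate, reduce with 1x1 convs."""
    def __init__(self, low_ch=256, high_ch=512, num_classes=11):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch + high_ch, 256, kernel_size=1)
        self.classifier = nn.Conv2d(256, num_classes, kernel_size=1)

    def forward(self, low_feat, high_feat):
        # Upsample the high-level map to the spatial size of the low-level map
        high_feat = F.interpolate(high_feat, size=low_feat.shape[2:],
                                  mode='bilinear', align_corners=False)
        fused = torch.cat([low_feat, high_feat], dim=1)  # channel-wise concatenation
        return self.classifier(F.relu(self.reduce(fused)))
```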
3.2 Edge-Aware Graph Reasoning
Inspired by the non-local module [19], we aim to build long-range interactions between distant regions, which are critical for describing the facial structure. In particular, we propose edge-aware graph reasoning to model the long-range relations between regions on a graph, which consists of edge-aware graph projection, graph reasoning and graph reprojection.
Edge-Aware Graph Projection. We first revisit the typical non-local module. Consider a feature map \({\mathbf{X}}\in \mathbb {R}^{HW \times C}\), where \(H\) and \(W\) denote the height and width of the input image, respectively, and \(C\) is the number of feature channels. A typical non-local module is formulated as

\[\hat{{\mathbf{X}}} = \mathrm {softmax}\left( \theta ({\mathbf{X}})\, \varphi ({\mathbf{X}})^{\top } \right) \gamma ({\mathbf{X}}) = {\mathbf{V}}\, \gamma ({\mathbf{X}}), \qquad (1)\]
where \(\theta \), \(\varphi \) and \(\gamma \) are convolution operations with \( 1 \times 1\) kernel size, and \({\mathbf{V}}\in \mathbb {R}^{HW \times HW}\) is regarded as the attention map that models the long-range dependencies. However, the complexity of computing \({\mathbf{V}}\) is \(\mathcal {O}(H^2W^2C)\), which does not scale well with an increasing number of pixels \(HW\). To address this issue, we propose a simple yet effective edge-aware projection operation to eliminate the redundancy in features.
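Before turning to our projection, a schematic sketch of the baseline non-local module in Eq. (1) makes the \(HW \times HW\) cost concrete; the class name and the reduced dimension T are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Schematic non-local module following Eq. (1); T is the reduced dimension."""
    def __init__(self, c, t=64):
        super().__init__()
        self.theta = nn.Conv2d(c, t, kernel_size=1)
        self.phi = nn.Conv2d(c, t, kernel_size=1)
        self.gamma = nn.Conv2d(c, t, kernel_size=1)

    def forward(self, x):                             # x: (B, C, H, W)
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, T)
        k = self.phi(x).flatten(2)                    # (B, T, HW)
        v = self.gamma(x).flatten(2).transpose(1, 2)  # (B, HW, T)
        attn = torch.softmax(q @ k, dim=-1)           # V: (B, HW, HW) -- the costly part
        return attn @ v                               # (B, HW, T)
```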
Given an input feature map \({\mathbf{X}}\in \mathbb {R}^{HW \times C}\) and an edge map \({\mathbf{Y}}\in \mathbb {R}^{HW \times 1}\), we construct a projection matrix \({\mathbf{P}}\) by mapping \({\mathbf{X}}\) onto the vertices of a graph with \({\mathbf{Y}}\) as a prior. Specifically, we first reduce the dimension of \({\mathbf{X}}\) in the feature space via a convolution operation \(\varphi \) with \(1 \times 1\) kernel size, leading to \(\varphi ({\mathbf{X}}) \in \mathbb {R}^{HW \times T} \), \(T < C\). Then, we duplicate the edge map \({\mathbf{Y}}\) to the same dimension as \(\varphi ({\mathbf{X}})\) for ease of computation. We incorporate the edge information into the projection by taking the Hadamard product of \(\varphi ({\mathbf{X}})\) and \({\mathbf{Y}}\). As the edge map \({\mathbf{Y}}\) encodes the probability of each pixel being an edge pixel, the Hadamard product essentially weights the feature of each pixel, assigning larger weights to the features of edge pixels. Further, we introduce an average pooling operation \(\mathcal {P}(\cdot )\) with stride \(s\) to obtain the anchors of vertices. These anchors represent the centers of regions of pixels, and we multiply the anchors with \(\varphi ({\mathbf{X}})^{\top }\) to capture the similarity between the anchors and each pixel. We then apply a softmax function for normalization. Formally, the projection matrix takes the form (Fig. 2):

\[{\mathbf{P}} = \mathrm {softmax}\left( \mathcal {P}\big (\varphi ({\mathbf{X}}) \odot {\mathbf{Y}}\big )\, \varphi ({\mathbf{X}})^{\top } \right), \qquad (2)\]
where \(\odot \) denotes the Hadamard product, and \({\mathbf{P}}\in \mathbb {R}^{HW/s^2 \times HW}\).
In Eq. (2), there are two critical operations: the edge attention and the pooling. The edge attention emphasizes the features of edge pixels by assigning them larger weights. The pooling operation over the features brings twofold benefits. On one hand, pooling leads to compact representations by averaging over features to remove redundancy. On the other hand, by pooling with stride \(s\), the computational complexity is reduced from \(\mathcal {O}(H^2W^2C)\) in non-local modules to \(\mathcal {O}(H^2W^2C/s^2)\).
With the acquired projection matrix \({\mathbf{P}}\), we project the pixel-wise features \({\mathbf{X}}\) onto the graph domain, i.e.,

\[{\mathbf{X}}_G = {\mathbf{P}}\, \theta ({\mathbf{X}}), \qquad (3)\]
where \(\theta \) is a convolution operation with \( 1 \times 1\) kernel size that reduces the dimension of \({\mathbf{X}}\), resulting in \(\theta ({\mathbf{X}}) \in \mathbb {R}^{HW \times K}\). The projection aggregates pixels with features similar to each anchor onto one vertex, so each vertex essentially represents a region in the facial image. Hence, we bridge pixels and regions via the proposed edge-aware graph projection, which yields the features of the projected vertices on the graph, \({\mathbf{X}}_G \in \mathbb {R}^{HW/s^2 \times K}\), via Eq. (3).
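A minimal sketch of this edge-aware projection (Eqs. (2)–(3)) might look as follows. The tensor shapes follow the text; the class and argument names are our assumptions, and we assume H and W are divisible by the stride s.

```python
import torch
import torch.nn as nn

class EdgeAwareProjection(nn.Module):
    """Sketch of Eqs. (2)-(3): project pixel features onto graph vertices."""
    def __init__(self, c, t=64, k=128, s=6):
        super().__init__()
        self.phi = nn.Conv2d(c, t, kernel_size=1)    # C -> T for the projection
        self.theta = nn.Conv2d(c, k, kernel_size=1)  # C -> K for vertex features
        self.pool = nn.AvgPool2d(s)                  # one anchor per s x s region

    def forward(self, x, edge):        # x: (B, C, H, W); edge: (B, 1, H, W)
        feat = self.phi(x)                                  # (B, T, H, W)
        # Edge attention: the Hadamard product weights pixels by edge probability
        anchors = self.pool(feat * edge)                    # (B, T, H/s, W/s)
        anchors = anchors.flatten(2).transpose(1, 2)        # (B, HW/s^2, T)
        pixels = feat.flatten(2)                            # (B, T, HW)
        # Anchor-to-pixel similarity, normalized per vertex: Eq. (2)
        P = torch.softmax(anchors @ pixels, dim=-1)         # (B, HW/s^2, HW)
        # Aggregate pixel features onto vertices: Eq. (3)
        x_g = P @ self.theta(x).flatten(2).transpose(1, 2)  # (B, HW/s^2, K)
        return x_g, P
```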
Graph Reasoning. Next, we learn the connectivity between vertices from \({\mathbf{X}}_G\), i.e., the relations between regions. Meanwhile, we reason over these relations by propagating information across vertices to learn higher-level semantic information. This is elegantly realized by a single-layer Graph Convolutional Network (GCN). Specifically, we feed the input vertex features \({\mathbf{X}}_G\) into a first-order approximation of spectral graph convolution. The output feature map \(\hat{{\mathbf{X}}}_G \in \mathbb {R}^{HW/s^2 \times K}\) is

\[\hat{{\mathbf{X}}}_G = \mathrm {ReLU}\left( ({\mathbf{I}}- {\mathbf{A}})\, {\mathbf{X}}_G\, {\mathbf{W}}_G \right), \qquad (4)\]
where \({\mathbf{A}}\) denotes the adjacency matrix that encodes the graph connectivity to be learned, \({\mathbf{W}}_G \in \mathbb {R}^{K \times K}\) denotes the weights of the GCN, and ReLU is the activation function. The features \(\hat{{\mathbf{X}}}_G \) are acquired by vertex-wise interaction (multiplication with \(({\mathbf{I}}- {\mathbf{A}})\)) and channel-wise interaction (multiplication with \({\mathbf{W}}_G\)).
Different from the original one-layer GCN [32], in which the graph \({\mathbf{A}}\) is hand-crafted, we randomly initialize \({\mathbf{A}}\) and learn it from the vertex features. Moreover, we add a residual connection to preserve the features of the raw vertices. Based on the learned graph, information propagation across all vertices leads to the finally reasoned relations between regions. After graph reasoning, the pixels embedded in one vertex share the same feature context modeled by the graph convolution. We set the number of output channels equal to that of the input to keep consistency, which makes the module compatible with the subsequent process.
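The reasoning step, Eq. (4) plus the residual connection just described, can be sketched as below; the class name and initialization scale are our assumptions.

```python
import torch
import torch.nn as nn

class GraphReasoning(nn.Module):
    """Sketch of Eq. (4): one GCN layer with a learnable adjacency and a residual."""
    def __init__(self, num_vertices, k=128):
        super().__init__()
        # A is randomly initialized and learned from data, not hand-crafted
        self.adj = nn.Parameter(0.01 * torch.randn(num_vertices, num_vertices))
        self.weight = nn.Linear(k, k, bias=False)  # W_G: channel-wise interaction

    def forward(self, x_g):                        # x_g: (B, V, K)
        eye = torch.eye(self.adj.size(0), device=x_g.device)
        # Vertex-wise interaction with (I - A), channel-wise with W_G, then ReLU
        out = torch.relu(self.weight((eye - self.adj) @ x_g))
        return out + x_g                           # residual keeps raw vertex features
```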
Graph Reprojection. To fit into the existing framework, we reproject the extracted vertex features from the graph domain to the original pixel grids. Given the learned graph representation \(\hat{{\mathbf{X}}}_G \in \mathbb {R}^{HW/s^2 \times K}\), we aim to compute a matrix \(\mathbf {V} \in \mathbb {R}^{HW \times HW/s^2}\) that maps \(\hat{{\mathbf{X}}}_G\) to the pixel space. In theory, \(\mathbf {V}\) could be taken as the inverse of the projection matrix \({\mathbf{P}}\), but this is nontrivial to compute because \({\mathbf{P}}\) is not a square matrix. To tackle this problem, we take the transpose \({\mathbf{P}}^{\top }\) as the reprojection matrix [27], in which \({\mathbf{P}}^{\top }_{ij}\) reflects the correlation between pixel \(i\) and vertex \(j\). The limitation of this operation is that the row vectors of \({\mathbf{P}}^{\top }\) are not normalized.
After reprojection, we deploy a \(1 \times 1\) convolution operation \(\sigma \) to increase the number of feature channels to be consistent with the input features \({\mathbf{X}}\). Then, we take the summation of the reprojected refined features and the original feature map as the final features. The final pixel-wise feature map \({\mathbf{Z}}\in \mathbb {R}^{HW \times C}\) is thus computed by

\[{\mathbf{Z}} = {\mathbf{X}} + \sigma \left( {\mathbf{P}}^{\top } \hat{{\mathbf{X}}}_G \right). \qquad (5)\]
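The reprojection and fusion of Eq. (5) then reduce to a transposed projection followed by a \(1 \times 1\) convolution; a sketch under the same assumed shapes as above:

```python
import torch
import torch.nn as nn

class GraphReprojection(nn.Module):
    """Sketch of Eq. (5): map vertex features back to pixels and fuse with the input."""
    def __init__(self, c, k=128):
        super().__init__()
        self.sigma = nn.Conv1d(k, c, kernel_size=1)  # restores C channels

    def forward(self, x, x_g_hat, P):
        # x: (B, C, H, W); x_g_hat: (B, V, K); P: (B, V, HW)
        b, c, h, w = x.shape
        pixel_feat = P.transpose(1, 2) @ x_g_hat             # P^T: vertices -> pixels, (B, HW, K)
        pixel_feat = self.sigma(pixel_feat.transpose(1, 2))  # (B, C, HW)
        return x + pixel_feat.view(b, c, h, w)               # residual fusion with the input
```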
3.3 The Loss Function
To further strengthen the effect of the proposed edge-aware graph reasoning, we introduce a boundary-attention loss (BA-loss) inspired by [33], in addition to the traditional cross-entropy losses for the predicted parsing maps and edge maps. The BA-loss computes the loss between the predicted labels and the ground truth only at edge pixels, thus improving the segmentation accuracy at critical edge pixels that are difficult to distinguish. Mathematically, the BA-loss is written as

\[\mathcal {L}_{\text {BA}} = -\sum _{i} \sum _{j=1}^{N} \left[ e_i = 1 \right] y_{ij} \log p_{ij}, \qquad (6)\]
where \(i\) is the index of pixels, \(j\) is the index of classes and \(N\) is the number of classes. \(e_i\) denotes the edge label, \(y_{ij}\) denotes the ground truth face parsing label, and \(p_{ij}\) denotes the predicted parsing probability. \(\left[ \cdot \right] \) is the Iverson bracket, which equals 1 if the condition inside is satisfied and 0 otherwise.
The total loss function is then defined as

\[\mathcal {L} = \mathcal {L}_{\text {parsing}} + \lambda _1 \mathcal {L}_{\text {edge}} + \lambda _2 \mathcal {L}_{\text {BA}}, \qquad (7)\]
where \(\mathcal {L}_{\text {parsing}}\) and \(\mathcal {L}_{\text {edge}}\) are classical cross entropy losses for the parsing and edge maps. \(\lambda _1\) and \(\lambda _2\) are two hyper-parameters to strike a balance among the three loss functions.
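A sketch of these losses follows. We assume index-labeled ground truth handled by the standard cross-entropy, and we normalize the BA term by the number of edge pixels for stability; that normalization is our assumption, as Eq. (6) is a plain sum.

```python
import torch
import torch.nn.functional as F

def ba_loss(parsing_logits, target, edge_mask):
    """Cross entropy evaluated only at edge pixels (Eq. (6)).
    parsing_logits: (B, N, H, W); target: (B, H, W) class ids; edge_mask: (B, H, W)."""
    ce = F.cross_entropy(parsing_logits, target, reduction='none')  # per-pixel CE
    return (ce * edge_mask).sum() / edge_mask.sum().clamp(min=1)

def total_loss(parsing_logits, edge_logits, target, edge_target, lam1=1.0, lam2=1.0):
    """Eq. (7); lam1 and lam2 are placeholders for lambda_1 and lambda_2."""
    l_parsing = F.cross_entropy(parsing_logits, target)
    l_edge = F.cross_entropy(edge_logits, edge_target)
    l_ba = ba_loss(parsing_logits, target, (edge_target > 0).float())
    return l_parsing + lam1 * l_edge + lam2 * l_ba
```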
3.4 Analysis
Since non-local modules and graph-based methods have drawn increasing attention, it is interesting to analyze the similarities and differences between previous works and our method.
Comparison with Non-local Modules. A traditional non-local module models pixel-wise correlations by feature similarities, but the high-order relationships between regions are not captured. In contrast, we exploit the correlation among distinct regions via the proposed graph projection and reasoning. The feature of each vertex embeds not only the local context of its anchor, aggregated by average pooling over a certain region, but also global features from the overall pixels. We further learn and reason over the relations between regions by graph convolution, which captures high-order semantic relations between different facial regions.
Also, the computational complexity of non-local modules is high in general, as discussed in Sect. 3.2. Our proposed edge-aware pooling addresses this issue by extracting significant anchors to replace redundant query points. Moreover, rather than incorporating all pixels within each facial region during the sampling process, we focus on edge pixels, which improves boundary details. The intuition is that pixels within each region tend to share similar features.
Comparison with Graph-Based Models. In comparison with other graph-based models, such as [24, 27], we improve the graph projection process, in particular by introducing locality in sampling. In previous works, each vertex is simply represented as a weighted sum of image pixels, which does not consider edge information explicitly and brings ambiguity to the interpretation of vertices. Besides, while the pixel-wise features often vary greatly across different input feature maps, the projection matrix is fixed after training. In contrast, we incorporate the edge information into the projection process to emphasize edge pixels, which preserves boundary details well. Further, we specify vertex anchors locally based on average pooling, which conforms with the observation that the locations of facial components remain almost unchanged after face alignment.
4 Experiments
4.1 Datasets and Metrics
The Helen dataset includes 2,330 images with 11 categories: background, skin, left/right brows, left/right eyes, upper/lower lips, inner mouth and hair. We keep the same train/validation/test protocol as in [34]: the numbers of training, validation and test samples are 2,000, 230 and 100, respectively. The CelebAMask-HQ dataset is a large-scale face parsing dataset consisting of 24,183 training images, 2,993 validation images and 2,824 test images, with 19 categories. In addition to facial components, accessories such as eyeglasses, earrings, necklaces, the neck and cloth are also annotated in CelebAMask-HQ. The LaPa dataset is a newly released, challenging dataset for face parsing, which contains the same 11 categories as Helen and covers large variations in facial expression, pose and occlusion. It consists of 18,176 training images, 2,000 validation images and 2,000 test images.
During training, we use rotation and scale augmentation. The rotation angle is randomly selected from \((-30^\circ , 30^\circ )\) and the scale factor is randomly selected from (0.75, 1.25). The edge mask is extracted from the semantic label map: if the label of a pixel differs from any of its 4-connected neighbors, it is regarded as an edge pixel. For the Helen dataset, similar to [4], we apply face alignment as a pre-processing step, and the results are re-mapped to the original image for evaluation.
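The 4-neighborhood rule for the edge mask can be sketched as follows; the replicate padding at the image border is our choice for illustration.

```python
import numpy as np

def extract_edge_mask(label_map):
    """A pixel is an edge pixel if its label differs from any 4-connected neighbor."""
    lab = np.pad(label_map, 1, mode='edge')  # replicate border labels (our assumption)
    c = lab[1:-1, 1:-1]
    edge = ((c != lab[:-2, 1:-1]) | (c != lab[2:, 1:-1]) |
            (c != lab[1:-1, :-2]) | (c != lab[1:-1, 2:]))
    return edge.astype(np.uint8)
```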
We employ three evaluation metrics to measure the performance of our model: pixel accuracy, mean intersection over union (mIoU) and F1 score. Directly employing the accuracy metric ignores the scale variance among facial components, whereas the mean IoU and the F1 score are better suited for evaluation. To keep consistent with previous methods, we report the overall F1 score on the Helen dataset, which is computed over the merged facial components: brows (left + right), eyes (left + right), nose, and mouth (upper lip + lower lip + inner mouth). For the CelebAMask-HQ and LaPa datasets, the mean F1 score over all categories excluding the background is employed.
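For illustration, the F1 score over a merged component can be computed as below; the mapping from category names to label ids is dataset-specific and assumed here.

```python
import numpy as np

def merged_f1(pred, gt, class_ids):
    """F1 for a merged component, e.g. mouth = {upper lip, lower lip, inner mouth}."""
    p = np.isin(pred, class_ids)
    g = np.isin(gt, class_ids)
    tp = np.logical_and(p, g).sum()
    precision = tp / max(p.sum(), 1)
    recall = tp / max(g.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```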
4.2 Implementation Details
Our backbone is a modified version of ResNet-101 [7] without the final average pooling layer, and the Conv1 block is replaced by three \(3 \times 3\) convolutional layers. For the pyramid pooling module, we follow the implementation in [35] to exploit global contextual information, with pooling factors \(\{1, 2, 3, 6\}\). Similar to [36], the edge perceiving module predicts a two-channel edge map from the outputs of Conv2, Conv3 and Conv4 in ResNet-101. The outputs of Conv1 and of the pyramid pooling serve as the low-level and high-level feature maps, respectively. Both are fed separately into EAGR modules for graph representation learning.
For the EAGR module, we set the pooling size to \(6 \times 6\). To pay more attention to the facial components, we only utilize the central \(4 \times 4\) anchors for graph construction. The feature dimensions K and T are set to 128 and 64, respectively.
Stochastic Gradient Descent (SGD) is employed for optimization. We initialize the network with a model pretrained on ImageNet. The input size is \(473 \times 473\) and the batch size is set to 28. The learning rate starts at 0.001 with a weight decay of 0.0005. Batch normalization is implemented with In-Place Activated BatchNorm [37].
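A sketch of the corresponding optimizer setup; the momentum value is our assumption (commonly 0.9), as it is not stated here.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 19, kernel_size=1)  # placeholder standing in for the full network
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9,  # assumed value; not stated in the text
                            weight_decay=0.0005)
```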
4.3 Ablation Study
On Different Components. We demonstrate the effectiveness of different components of the proposed EAGR module. Specifically, we remove individual components and train the model from scratch under the same initialization. The quantitative results are reported in Table 1. Baseline means the model only utilizes the ResNet backbone, pyramid pooling and multi-scale decoder without any EAGR module; Edge represents whether edge-aware pooling is employed; Graph represents the EAGR module, while Reasoning indicates graph reasoning excluding graph projection and reprojection. We observe that Edge and Graph each improve over the baseline by about \(1\%\) in mIoU. When both components are taken into account, we achieve even better performance. The boundary-attention loss (BA-loss) also leads to a performance improvement.
We also provide qualitative face parsing results from different models in Fig. 3. Results of the incomplete models exhibit varying degrees of deficiency, particularly around edges, such as the edge between the hair and the skin in the first row, the upper lip in the second row, and the edges around the mouth in the third row. In contrast, our complete model produces the best results, with accurate edges between facial components that are almost identical to the ground truth. This validates the effectiveness of the proposed edge-aware graph reasoning.
On the Deployment of the EAGR Module. We also conduct experiments on the deployment of the EAGR module with respect to the feature maps as well as the pooling sizes. We take the output of Conv2 in the ResNet as the low-level feature map, and that of the pyramid pooling module as the high-level feature map. We compare four deployment schemes: 1) 0-module, where no EAGR module is applied; 2) 1-module, where the low-level and high-level feature maps are concatenated and then fed into one EAGR module; 3) 2-modules, where the low-level and high-level feature maps are fed into separate EAGR modules; 4) 3-modules, which combines 2) and 3). As listed in Table 2, the 2-modules scheme leads to the best performance, and is the one we finally adopt.
We also test the influence of the pooling size, where the number of vertices changes along with the pooling size. As presented in Table 2, a pooling size of 6 \(\times \) 6 leads to the best performance, while enlarging the pooling size further does not bring improvement. This is because more fine-grained anchors lead to a loss of integrity, which breaks the holistic semantic representation.
On the Complexity in Time and Space. Further, we study the time and space complexity of different models in Fig. 4. We compare three schemes: 1) a simplified version without the EAGR module, referred to as the Baseline; 2) a non-local module [19] employed without edge-aware sampling (i.e., pooling), denoted Without sampling; and 3) a version without graph convolution for reasoning, denoted Without graph. As presented in Fig. 4, compared with the typical non-local module, our proposed method reduces the computation by more than 4\(\times \) in terms of FLOPs. We also see that the computation and space complexity of our method is comparable to that of the Baseline, which indicates that most of the complexity comes from the backbone network. On an Nvidia P40, the inference time of our model for a single image is 89 ms. This demonstrates that the proposed EAGR module achieves significant performance improvement with trivial computational overhead.
4.4 Comparison with the State-of-the-Art
We conduct experiments on the widely acknowledged Helen dataset to demonstrate the superiority of the proposed model. To keep consistent with previous works [4,5,6, 33, 38], we employ the overall F1 score, computed over the merged eyes, brows, nose and mouth categories. As Table 3 shows, our model surpasses state-of-the-art methods, achieving \(93.2\%\) on this dataset.
We also evaluate our model on the newly proposed CelebAMask-HQ [1] and LaPa [33] datasets, whose scales are about ten times larger than the Helen dataset. Different from the Helen dataset, CelebAMask-HQ and LaPa have accurate annotations for hair. Therefore, the mean F1 score over all foreground categories is employed for better evaluation. Tables 4 and 5 compare related works and our method on these two datasets, respectively.
4.5 Visualization of Graph Projection
Further, we visualize the graph projection for intuitive interpretation. As shown in Fig. 5, given each input image (first row), we visualize, in the remaining rows, the weight of each pixel contributing to the vertex marked with a blue rectangle, which we refer to as the response map. A darker color indicates a higher response. We observe that the response areas are consistent with the vertex, which validates that our graph projection maps pixels of the same semantic component to the same vertex.
5 Conclusion
We propose a novel graph representation learning paradigm of edge-aware graph reasoning for face parsing, which captures region-wise relations to model long-range contextual information. Edge cues are exploited to project significant pixels onto graph vertices at a higher semantic level. We then learn the relations between vertices (regions) and reason over all vertices to characterize the semantic information. Experimental results demonstrate that the proposed method sets the new state of the art with low computational complexity, and reconstructs boundary details particularly well. In the future, we will apply the paradigm of edge-aware graph reasoning to more segmentation applications, such as scene parsing.
References
Lee, C.H., Liu, Z., Wu, L., Luo, P.: MaskGAN: towards diverse and interactive facial image manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5549–5558 (2020)
Zhang, H., Riggan, B.S., Hu, S., Short, N.J., Patel, V.M.: Synthesis of high-quality visible faces from polarimetric thermal faces using generative adversarial networks. Int. J. Comput. Vis. 127, 1–18 (2018)
Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Sig. Process. Lett. 23(10), 1499–1503 (2016)
Lin, J., Yang, H., Chen, D., Zeng, M., Wen, F., Yuan, L.: Face parsing with ROI tanh-warping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5654–5663 (2019)
Yin, Z., Yiu, V., Hu, X., Tang, L.: End-to-end face parsing via interlinked convolutional neural networks. arXiv preprint arXiv:2002.04831 (2020)
Liu, S., Shi, J., Liang, J., Yang, M.H.: Face parsing via recurrent propagation. In: 28th British Machine Vision Conference, BMVC 2017, pp. 1–10 (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Henaff, M., Bruna, J., LeCun, Y.: Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163 (2015)
Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. In: Advances in Neural Information Processing Systems, pp. 3844–3852 (2016)
Smith, B.M., Zhang, L., Brandt, J., Lin, Z., Yang, J.: Exemplar-based face parsing. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3484–3491 (2013)
Warrell, J., Prince, S.J.: Labelfaces: parsing facial features by multiclass labeling with an epitome prior. In: IEEE International Conference on Image Processing (ICIP), pp. 2481–2484 (2009)
Kae, A., Sohn, K., Lee, H., Learned-Miller, E.: Augmenting CRFs with Boltzmann machine shape priors for image labeling. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2019–2026 (2013)
Liu, S., Yang, J., Huang, C., Yang, M.H.: Multi-objective convolutional learning for face labeling. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3451–3459 (2015)
Luo, P., Wang, X., Tang, X.: Hierarchical face parsing via deep learning. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2480–2487 (2012)
Zhou, E., Fan, H., Cao, Z., Jiang, Y., Yin, Q.: Extensive facial landmark localization with coarse-to-fine convolutional network cascade. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 386–391 (2013)
Zhou, Y., Hu, X., Zhang, B.: Interlinked convolutional neural networks for face parsing. In: Hu, X., Xia, Y., Zhang, Y., Zhao, D. (eds.) ISNN 2015. Lecture Notes in Computer Science, vol. 9377, pp. 222–231. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-25393-0_25
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
Vaswani, A., et al.: Attention is all you need. In: Advances in neural information processing systems, pp. 5998–6008 (2017)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803 (2018)
Chen, Y., Kalantidis, Y., Li, J., Yan, S., Feng, J.: A\(^2\)-Nets: double attention networks. In: Advances in Neural Information Processing Systems, pp. 352–361 (2018)
Zhao, H., et al.: PSANet: point-wise spatial attention network for scene parsing. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 267–283 (2018)
Zhu, Z., Xu, M., Bai, S., Huang, T., Bai, X.: Asymmetric non-local neural networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 593–602 (2019)
Fu, J., et al.: Dual attention network for scene segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3146–3154 (2019)
Chen, Y., Rohrbach, M., Yan, Z., Shuicheng, Y., Feng, J., Kalantidis, Y.: Graph-based global reasoning networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 433–442 (2019)
Li, X., Zhong, Z., Wu, J., Yang, Y., Lin, Z., Liu, H.: Expectation-maximization attention networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 9167–9176 (2019)
Chandra, S., Usunier, N., Kokkinos, I.: Dense and low-rank gaussian CRFs using deep embeddings. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 5103–5112 (2017)
Li, Y., Gupta, A.: Beyond grids: learning graph representations for visual recognition. In: Advances in Neural Information Processing Systems, pp. 9225–9235 (2018)
Lu, Y., Chen, Y., Zhao, D., Chen, J.: Graph-FCN for image semantic segmentation. In: Lu, H., Tang, H., Wang, Z. (eds.) Advances in Neural Networks – ISNN 2019, ISNN 2019. Lecture Notes in Computer Science, vol. 11554, pp. 97–105. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22796-8_11
Pourian, N., Karthikeyan, S., Manjunath, B.S.: Weakly supervised graph based semantic segmentation by learning communities of image-parts. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1359–1367 (2015)
Te, G., Hu, W., Guo, Z.: Exploring hypergraph representation on face anti-spoofing beyond 2D attacks. In: 2020 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. IEEE (2020)
Zhang, L., Li, X., Arnab, A., Yang, K., Tong, Y., Torr, P.H.: Dual graph convolutional network for semantic segmentation. arXiv preprint arXiv:1909.06121 (2019)
Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: 5th International Conference on Learning Representations, Conference Track Proceedings, OpenReview.net, ICLR 2017, Toulon, France, 24–26 April 2017 (2017)
Liu, Y., Shi, H., Shen, H., Si, Y., Wang, X., Mei, T.: A new dataset and boundary-attention semantic segmentation for face parsing. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11637–11644 (2020)
Le, V., Brandt, J., Lin, Z., Bourdev, L., Huang, T.S.: Interactive facial feature localization. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) Computer Vision – ECCV 2012, ECCV 2012. Lecture Notes in Computer Science, vol. 7574, pp. 679–692. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33712-3_49
Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2881–2890 (2017)
Ruan, T., Liu, T., Huang, Z., Wei, Y., Wei, S., Zhao, Y.: Devil in the details: towards accurate single and multiple human parsing. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 4814–4821 (2019)
Rota Bulò, S., Porzi, L., Kontschieder, P.: In-place activated BatchNorm for memory-optimized training of DNNs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
Wei, Z., Liu, S., Sun, Y., Ling, H.: Accurate facial image parsing at real-time speed. IEEE Trans. Image Process. 28(9), 4659–4670 (2019)
Acknowledgement
This work was supported by National Natural Science Foundation of China [61972009], Beijing Natural Science Foundation [4194080] and Beijing Academy of Artificial Intelligence (BAAI).