1 Introduction
Tracking dynamic expressions of human faces is an important task, with recent methods [26, 5, 28, 3, 18] achieving impressive results. However, difficult problems remain due to variations in camera pose, video quality, head movement and illumination, added to the challenge of tracking different people with many unique facial expressions.
Early work on articulated face tracking was based on Active Shape and Appearance Models [12, 11, 20], which fit a parametric facial template to the image. The facial template is learned from data, so tracking quality is limited by the amount of training data and the optimization method. Recently, alternative regression-based methods [13, 8, 29, 24, 32] have achieved better performance due to greater flexibility and computational efficiency.
Another common approach to face tracking is to use 3D deformable models as priors [4, 1, 14, 15, 17, 23]. In general, a 3D face model is controlled by a set of shape deformation units. In the past, generic wireframe models (WFM) were often employed for simplicity. However, a WFM can only represent a coarse face shape and is insufficient for fine-grained face tracking when dense 3D data is available.
Blendshape-based face models, such as the shape tensor used in FaceWarehouse [7], were developed for more sophisticated, accurate 3D face tracking. By deforming dense 3D blendshapes to fit facial appearances, facial motions can be estimated with high fidelity. Such techniques have gained attention recently due to the proliferation of consumer-grade range sensing devices, such as the Microsoft Kinect [22], which provide synchronized color (RGB) images and depth (D) maps in real time. By integrating blendshapes into dynamic expression models (DEM), several approaches [28, 3, 18] have demonstrated state-of-the-art tracking performance on RGBD input. All of these tracking frameworks rely heavily on the quality of the input depth data. However, existing consumer-grade depth sensors provide increasingly unreliable depth measurements as objects move farther away [22]. Therefore, these methods [28, 3, 18] only work well at close range, where the depth map retains fine structural details of the face. In many applications, such as room-sized teleconferencing, the tracked individuals may be located at considerable distances from the camera, leading to poor performance with existing methods.
One way of addressing depth sensor limitations is to use color, as in [6, 5]. These RGB-based methods require extensive training to learn a 3D shape regressor, which then serves as a prior for DEM registration to RGB frames. Despite the high training cost, these methods achieve tracking results comparable to RGBD-based approaches. Although RGB-only methods are not affected by inaccurate depth measurements, high-fidelity tracking at large object-camera distances remains challenging. This is in part due to the reduced reliability of regression-based updates at lower image resolutions, where there is less data for overcoming depth ambiguity. Instead, we expect to achieve better tracking results by incorporating depth data while intelligently handling its inaccuracies at greater distances.
This motivates us to propose a robust RGBD face tracker combining the advantages of RGB regression and 3D point cloud registration. Our contributions are as follows:

- Our tracker is guided by a multi-stage 3D shape regressor based on random forests and linear regression, which maps 2D image features back to the blendshape parameters of a 3D face model. This 3D shape regressor bypasses the problem of noisy depth data when obtaining a good initial estimate of the blendshape.

- The subsequent joint 2D+3D optimization robustly matches the facial blendshape to both image and depth data. This approach does not require an a priori blendshape model of the user, as shape parameters are updated on the fly.

- Extensive experiments show that our 3D tracker performs robustly across a wide range of scenes and visual conditions, while matching or surpassing the tracking performance of other state-of-the-art trackers.

- We use the DEM blendshape as a prior in a depth filtering process, further improving the depth map for fine 3D reconstruction.
The rest of this paper is organized as follows. Section 2 outlines our proposed 3D face tracking framework. Section 3 describes the 3D shape regression in detail. DEM registration is further elaborated in Section 4. Section 5 describes our depth recovery method using a 3D face prior. Section 6 presents the experimental tracking and depth recovery results.
2 System Overview
In this section we present the blendshape model that we use in this work, and our proposed tracking framework.
2.1 The Face Representation
We use the face models developed in the FaceWarehouse database [7]. As specified in [7], a facial expression of a person can be approximated by
$$B = C_r \times_2 w_{id}^T \times_3 w_{exp}^T \qquad (1)$$
where $C_r$ is a 3D data tensor (called the reduced core tensor) of size $(N_v, N_{id}, N_{exp})$ (corresponding to the numbers of vertices, identities and expressions, respectively), $w_{id}$ is an $N_{id}$-dimensional identity vector, and $w_{exp}$ is an $N_{exp}$-dimensional expression vector. Eq. (1) basically describes tensor contraction at the 2nd mode by $w_{id}$ and at the 3rd mode by $w_{exp}$. Similar to [6], for real-time face tracking of one person, given his identity vector $w_{id}$, it is more convenient to reconstruct the expression blendshapes for the person of identity $w_{id}$ as
$$B_i = C_r \times_2 w_{id}^T \times_3 u_i^T, \qquad i = 0, \ldots, 46, \qquad (2)$$
where $u_i$ is the precomputed weight vector for the $i$-th expression mode [7]. In this way, an arbitrary facial shape of the person can be represented as a linear sum of his expression blendshapes:
$$S = B_0 + \sum_{i=1}^{46} e_i \left( B_i - B_0 \right), \qquad (3)$$
where $B_0$ is the neutral shape, and $e_i$ is the blending weight of the $i$-th expression, $e_i \in [0, 1]$. Finally, a fully transformed 3D facial shape can be represented as
$$F = R \cdot S + T, \qquad (4)$$
with the parameters $\Theta = (R, T, e)$, where $R$ and $T$ respectively represent global rotation and translation, and $e = (e_1, \ldots, e_{46})$ defined in (3) represents the expression deformation parameters. In this work, we keep the 50 most significant identity modes in the reduced core tensor $C_r$, hence $(N_v, N_{id}, N_{exp}) = (11510, 50, 47)$.
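For concreteness, the expression blend in (3) and the rigid transform in (4) can be sketched in a few lines; the function names and array shapes below are ours, not from the paper:

```python
import numpy as np

def blend_shape(b0, blendshapes, e):
    """Expression blend (Eq. 3): S = B0 + sum_i e_i * (B_i - B0).

    b0:           (V, 3) neutral shape
    blendshapes:  (n, V, 3) expression blendshapes B_1..B_n
    e:            (n,) blending weights, each in [0, 1]
    """
    deltas = blendshapes - b0                 # (n, V, 3) expression offsets
    return b0 + np.tensordot(e, deltas, axes=1)

def transform_shape(S, R, T):
    """Global transform (Eq. 4): F = R * S + T, applied per vertex."""
    return S @ R.T + T
```

The blend is a single tensor contraction of the weight vector against the stacked offsets, which maps directly onto a GPU-friendly matrix product.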
2.2 Framework Pipeline
Fig. 2 shows the pipeline of the proposed face tracking framework, which follows a coarse-to-fine multi-stage optimization design. In particular, our framework consists of two major stages: shape regression and shape refinement. The shape regressor, learned from training data, performs the first optimization stage to quickly estimate shape parameters from the RGB frame (cf. Section 3). In the second stage, a carefully designed optimization over both the 2D image and the available 3D point cloud data refines the shape parameters; finally, the identity parameter is updated to improve the shape fit to the input RGBD data (cf. Section 4).
The 3D shape regressor is the key component for achieving our goal of 3D tracking at large distances, where the quality of the depth map is often poor. Unlike existing RGBD-based face tracking works, which either rely heavily on an accurate input point cloud (at close distances) to model the shape transformation by ICP [28, 3] or use an off-the-shelf 2D face tracker to guide the shape transformation [18], we predict the 3D shape parameters directly from the RGB frame with our 3D regressor. This is motivated by the success of 3D shape regression from RGB images in [6, 5], and is especially meaningful for the large-distance scenarios we consider, where depth quality is poor. We therefore do not use depth information in the 3D shape regression, avoiding the propagation of depth-map inaccuracies.
Initially, a color frame $I$ is passed through the regressor to recover the shape parameters $\Theta = (R, T, e)$. The projection of the $L$ landmark vertices of the 3D shape onto the image plane typically does not accurately match the 2D landmarks annotated in the training data. We therefore include the 2D displacements $D$ in (7) into the parameter set and define a new global shape parameter set $P = (\Theta, D)$. The advantages of including $D$ in $P$ are twofold. First, it helps train the regressor to reproduce landmarks in the test image similar to those in the training set. Second, it prepares the regressor to work with unseen identities that do not appear in the training set [5]; in such cases the displacements may be large, compensating for the difference in identities. The regression process can be expressed as $P_{out} = f(I, P_{in})$, where $f$ is the regression function, $I$ is the current frame, and $P_{in}$ and $P_{out}$ are the input (from the shape regression of the previous frame) and output shape parameter sets, respectively. The coarse estimates are refined further in the next stage using a more precise energy optimization that incorporates depth information. Specifically, $(R, T, e)$ are optimized w.r.t. both the 2D prior constraints provided by the 2D landmarks estimated by the shape regressor and the 3D point cloud. Lastly, the identity vector $w_{id}$ is re-estimated given the current transformation.
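Structurally, the regression process above is a cascade of learned parameter updates. The following minimal inference-loop sketch uses `extract_features` and `predict_update` as hypothetical stand-ins for the learned random-forest features and linear update of Section 3:

```python
def regress_shape(frame, params_prev, stages):
    """Cascaded shape regression, a sketch: each stage extracts features
    around the current parameter estimate and applies a learned additive
    update to the global shape parameter set."""
    params = params_prev
    for extract_features, predict_update in stages:
        phi = extract_features(frame, params)   # shape-indexed features
        params = params + predict_update(phi)   # learned linear update
    return params
```

At test time, the parameter set from the previous frame seeds the cascade, so each stage only needs to model a residual correction.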
3 3D Shape Regression
As mentioned in Section 2.2, the shape regressor operates on the global shape parameter set. To train the regressor, we must first recover these parameters from the training samples and form guess-truth training pairs for the training algorithm. In this work, we use the face databases from [16, 27, 7] for training.
3.1 Shape Parameters Estimation from Training Data
We follow the parameter estimation process in [6]. Denoting by $\Pi_Q$ the camera projection function from 3D world coordinates to 2D image coordinates, $(R, T, w_{id}, w_{exp})$ are first extracted by minimizing the 2D landmark errors in each sample:
$$\min_{R, T, w_{id}, w_{exp}} \sum_{k=1}^{L} \left\| \Pi_Q\!\left( R \cdot B(v_k) + T \right) - p_k \right\|^2, \qquad B = C_r \times_2 w_{id}^T \times_3 w_{exp}^T, \qquad (5)$$
where $\{p_k\}$ are the ground-truth landmarks of the training sample and $v_k$ is the mesh vertex index corresponding to $p_k$. Note that $w_{exp}$ will be discarded, since we only need $w_{id}$ to generate the individual expression blendshapes of the current subject as in (2) for the later optimization over $(R, T, e)$.
With the initially extracted parameters in (5), we refine $w_{id}$ by alternately optimizing over $w_{id}$ and $(R, T, w_{exp})$. Particularly, we first keep $(R, T, w_{exp})$ fixed for each sample, and optimize over $w_{id}$ across all the samples of the same subject:
$$\min_{w_{id}} \sum_{s=1}^{N} \sum_{k=1}^{L} \left\| \Pi_Q\!\left( R_s \cdot B_s(v_k) + T_s \right) - p_{s,k} \right\|^2, \qquad B_s = C_r \times_2 w_{id}^T \times_3 w_{exp,s}^T, \qquad (6)$$
where $N$ denotes the total number of training samples for the same subject. Then for each sample we keep $w_{id}$ fixed and optimize over $(R, T, w_{exp})$ as in (5). This process is repeated until convergence; we empirically observe that three iterations give reasonably good results. We can then generate the user-specific blendshapes as in (2).
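The alternating refinement above can be written generically. In this sketch, `fit_pose` and `fit_identity` are placeholders for the per-sample optimization (5) and the joint optimization (6); the control flow, not the solvers, is the point:

```python
def alternating_fit(samples, w_id0, fit_pose, fit_identity, n_iters=3):
    """Alternating refinement, a sketch: hold per-sample (R, T, w_exp)
    fixed and solve for the shared identity over all samples of a
    subject, then re-fit the per-sample parameters; repeat (three
    iterations suffice empirically, per the text)."""
    w_id = w_id0
    poses = [fit_pose(s, w_id) for s in samples]      # per-sample, Eq. (5)
    for _ in range(n_iters):
        w_id = fit_identity(samples, poses)           # joint, Eq. (6)
        poses = [fit_pose(s, w_id) for s in samples]  # per-sample, Eq. (5)
    return w_id, poses
```

Because the identity solve couples all samples of one subject while the pose solves decouple, this block-coordinate scheme keeps each subproblem small.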
Finally, we recover the expression weights by minimizing the 2D error over $(R, T, e)$ again:
$$\min_{R, T, e} \sum_{k=1}^{L} \left\| \Pi_Q\!\left( R \cdot S(v_k) + T \right) - p_k \right\|^2, \qquad (7)$$
where $S$ is defined in (3) and $S(v_k)$ is the 3D landmark vertex of the blendshape corresponding to $p_k$. From (7), we also obtain the 2D displacement vectors $D = \{d_k\}$, $d_k = p_k - \Pi_Q(R \cdot S(v_k) + T)$, as a by-product. Eventually, following [5], for each training sample we generate a number of guess-truth pairs, where each guessed vector is produced by randomly perturbing the ground-truth parameters extracted through the above optimization. In this way, we create the full set of training pairs.
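The guess-truth pair generation might look like the following sketch. The Gaussian perturbation and its scale `sigma` are our assumptions; the paper does not specify the perturbation distribution:

```python
import numpy as np

def make_training_pairs(ground_truth_params, n_guesses, sigma, rng=None):
    """Generate guess-truth pairs for one training sample by randomly
    perturbing its recovered ground-truth parameter vector (following
    the strategy of [5]); sigma is an assumed per-parameter scale."""
    rng = np.random.default_rng(rng)
    truth = np.asarray(ground_truth_params, dtype=float)
    guesses = truth + sigma * rng.standard_normal((n_guesses, truth.size))
    return [(g, truth) for g in guesses]
```

Each pair trains the regressor to map a plausible wrong starting point back to the recovered ground truth, which is what makes the cascade self-correcting at test time.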
3.2 Shape Regression Training
Given the training pairs from the previous section, we follow the feature extraction and shape regression method in [24], which combines local binary features extracted using trained random forests at all the landmarks. The local binary features are aggregated into a global feature vector, which is then used to train a linear regression model that predicts the shape parameters. In our work, we train the regressor to predict the whole shape parameter set simultaneously, directly from the input RGB frame, in contrast to [24] where the regressor only updates the 2D displacements.
Algorithm 1 shows the detailed training procedure. In particular, we calculate the 2D landmark positions from the shape parameters, and for each landmark we randomly sample pixel intensity-difference features [8] within a radius $r$. These pixel-difference features are then used to train a random forest. Every training sample is passed through the forest to recover a binary vector whose length equals the number of leaf nodes of the forest; each leaf node that responds to the sample is set to 1, and all others to 0. The local binary vectors from all landmarks are concatenated to form a global binary vector representing the training sample. The global binary feature vectors are then used to learn a global linear regression matrix that predicts the shape parameter updates. After that, the guessed shape parameters are updated and passed to the next iteration.
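To make the local-binary-feature idea concrete, here is a toy sketch with depth-1 stumps standing in for the trained random forests of [24]; real forests are deeper and learned from data, and the helper names are ours:

```python
import numpy as np

def pixel_diff_features(image, landmark, offsets):
    """Sample intensity differences between pixel pairs around a
    landmark (the shape-indexed features of [8]). `offsets` is
    (F, 2, 2): F pairs of (dy, dx) offsets within the search radius."""
    y, x = landmark
    a = image[y + offsets[:, 0, 0], x + offsets[:, 0, 1]]
    b = image[y + offsets[:, 1, 0], x + offsets[:, 1, 1]]
    return a.astype(np.int32) - b.astype(np.int32)

def binary_leaf_vector(forest, feats):
    """Concatenate one-hot leaf indicators over all trees: the local
    binary feature of [24]. Here each "tree" is a single
    (feature_index, threshold) stump for brevity."""
    bits = []
    for f_idx, thresh in forest:
        leaf = 0 if feats[f_idx] < thresh else 1  # which leaf fired
        one_hot = [0, 0]
        one_hot[leaf] = 1
        bits.extend(one_hot)
    return np.array(bits, dtype=np.uint8)
```

The resulting sparse binary vector is what the global linear regression matrix multiplies, which is why inference is so cheap.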
Similar to [24], we let the regressor learn the best search radius during training. The training face samples have been normalized to a size of approximately 120×120 pixels, about the size of a face captured by the Kinect at a 0.7 m distance. At runtime, we therefore simply rescale the radius inversely proportional to the current z-translation.
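The runtime rescaling is a one-liner; the 0.7 m normalization distance comes from the text, while the function name is ours:

```python
def search_radius(r_train, z_train, z_current):
    """Rescale the learned search radius inversely with distance: a face
    farther from the camera appears smaller, so the pixel radius shrinks
    proportionally (normalized at the training distance, e.g. 0.7 m)."""
    return r_train * z_train / z_current
```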
4 3D Shape Refinement
At this stage, we refine the shape parameters using both the RGB and depth images, and also update the shape identity. Specifically, the global transformation $(R, T)$ and the expression weights $e$ are alternately optimized. After convergence, the identity vector $w_{id}$ is updated based on the final shape parameters.
4.1 Facial Shape Expressions and Global Transformation
We simultaneously refine the shape parameters $\Theta = (R, T, e)$ by optimizing the following energy:
$$\min_{\Theta} E = E_{2D} + \lambda E_{3D} + E_{reg}, \qquad (8)$$
where $\lambda$ is a trade-off parameter, $E_{2D}$ is the 2D error term measuring the 2D displacement errors, $E_{3D}$ is the 3D ICP energy term measuring the geometric match between the 3D face shape model and the input point cloud, and $E_{reg}$ is the regularization term ensuring that the shape parameter refinement is smooth across time. Particularly, $E_{2D}$, $E_{3D}$ and $E_{reg}$ are defined as
$$E_{2D} = \frac{1}{L} \sum_{k=1}^{L} \left\| \Pi_Q\!\left( R \cdot S(v_k) + T \right) - p_k \right\|^2, \qquad (9)$$
$$E_{3D} = \frac{1}{M} \sum_{j=1}^{M} \left( n_j^T \left( R \cdot S(v_j) + T - q_j \right) \right)^2, \qquad (10)$$
$$E_{reg} = \lambda_1 \left\| \Theta - \Theta^{*} \right\|^2 + \lambda_2 \left\| \Theta - 2\Theta_{t-1} + \Theta_{t-2} \right\|^2. \qquad (11)$$
In (9), the tracked 2D landmarks $\{p_k\}$ are computed from the raw shape parameters output by the regressor, which are usually quite reliable. In (10), $M$ is the number of ICP corresponding pairs that we sample from the blendshape and the point cloud, and $q_j$ and $n_j$ denote a point in the point cloud and its normal, respectively. By minimizing $E_{3D}$, we essentially minimize the point-to-plane ICP distance between the blendshape and the point cloud [19]; this helps slide the blendshape over the point cloud to avoid local minima and recover a more accurate pose. In (11), $\Theta^{*}$ is the raw output of the shape regressor, $\Theta_{t-1}$ and $\Theta_{t-2}$ are the shape parameters from the previous two frames, and $\lambda_1$ and $\lambda_2$ are trade-off parameters. The two terms in (11) represent a data fidelity term and a Laplacian smoothness term, respectively.
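The three energy terms can be evaluated straightforwardly. The sketch below assumes precomputed landmark projections and ICP correspondences (finding the correspondences is the expensive part, omitted here), and the function names are ours:

```python
import numpy as np

def e2d(proj_landmarks, tracked_landmarks):
    """2D term: mean squared displacement between projected model
    landmarks and the landmarks from the shape regressor."""
    return np.mean(np.sum((proj_landmarks - tracked_landmarks) ** 2, axis=1))

def e3d_point_to_plane(model_pts, cloud_pts, cloud_normals):
    """3D term: point-to-plane ICP energy [19] between sampled model
    vertices and their point-cloud correspondences."""
    d = np.einsum('ij,ij->i', cloud_normals, model_pts - cloud_pts)
    return np.mean(d ** 2)

def e_reg(theta, theta_raw, theta_prev1, theta_prev2, lam1, lam2):
    """Regularizer: fidelity to the regressor output plus temporal
    Laplacian smoothness over the previous two frames."""
    fid = np.sum((theta - theta_raw) ** 2)
    lap = np.sum((theta - 2 * theta_prev1 + theta_prev2) ** 2)
    return lam1 * fid + lam2 * lap
```

The total energy is then the 2D term plus the weighted 3D term plus the regularizer, evaluated inside whatever bounded solver is used.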
In our implementation, we iteratively optimize over the global transformation parameters $(R, T)$ and the local deformation parameters $e$, which leads to faster convergence and lower computational cost. In the optimization over $(R, T)$, the trade-off weight $\lambda$ on the 3D ICP term is set to 2, and the regularization weights $(\lambda_1, \lambda_2)$ in (11) are set to $(100, 10000)$ for $R$ and $(0.1, 10)$ for $T$, respectively. For the optimization over $e$, $\lambda$ is set to 0.5, and $\lambda_1$ and $\lambda_2$ are both set to zero so as to maximize spontaneous local deformations. The nonlinear energy function is minimized using the ALGLIB::BLEIC bounded solver (http://www.alglib.net/) to keep $e$ in the valid range $[0, 1]$.
Fig. 3 gives an example showing the effect of the 3D ICP energy term. Without it, there is a large displacement between the point cloud and the model, as well as noticeable over-deformation of the mouth. This demonstrates that without the 3D information, the 2D tracking may appear fine while the actual 3D transformation is largely incorrect.
4.2 Updating Shape Identity
In the last step, we refine the identity vector $w_{id}$ to better adapt the expression blendshapes to the input data. We solve for $w_{id}$ by minimizing the following objective function:
$$\min_{w_{id}} E_{id} = E'_{2D} + \lambda E'_{3D}, \qquad (12)$$
where
$$E'_{2D} = \frac{1}{L} \sum_{k=1}^{L} \left\| \Pi_Q\!\left( R \cdot B(v_k) + T \right) - p_k \right\|^2, \qquad E'_{3D} = \frac{1}{M} \sum_{j=1}^{M} \left\| R \cdot B(v_j) + T - q_j \right\|^2, \qquad (13)$$
with $B = C_r \times_2 w_{id}^T \times_3 w_{exp}^T$ the facial shape reconstructed from $w_{id}$ at the current expression.
Note that $E'_{3D}$ is a point-to-point ICP energy and behaves slightly differently from the point-to-plane term in (10): minimizing it aligns the blendshape to the point cloud more directly on the surface, recovering detailed facial characteristics.
In our experiments, we empirically set the trade-off weight between the two terms to 0.5, giving more weight to the 2D term to encourage the face model to fit closer to the tracked landmarks, especially along the face contour. Gradient-based optimizers such as BFGS are ineffective on this energy, so we run one iteration of coordinate descent per frame to stay within the computational budget. We find that the identity vector usually converges within 10 frames after tracking starts. To save computation, identity updating stops either once the identity converges or after 10 frames.
Fig. 4 shows some results of adapting the identity parameter over time. After a few iterations of updating the identity vector, the face model fits significantly better to each individual subject.
5 Depth Recovery with Dense Face Priors
In this section, we further develop one application that demonstrates the usefulness of the final per-frame blendshape model: using the dense blendshape as a prior for depth recovery. Although the final blendshape is a good approximation of the real face and sufficient for tracking, it may not suffice for other applications such as 3D face reconstruction. It is therefore meaningful to use face priors to refine noisy depth maps. Existing methods for depth recovery [21, 25, 30, 33, 9] usually exploit generic priors such as piecewise smoothness and the corresponding color guidance, and thus tend to produce plane-like surfaces. To address these deficiencies, semantic priors have also been considered, e.g., rigid object priors [2] and non-rigid face priors [10], for 3D reconstruction and depth recovery.
Our work is based on [10] but extends it in several significant ways. [10] mainly introduces the idea of using a face prior, and focuses on depth recovery for a single RGBD image with the help of face registration. It uses a coarse generic wireframe face model, which can only provide a limited, reliable depth prior. In contrast, we employ our optimized final blendshape model, which provides dense prior information. We also integrate depth recovery with real-time face tracking, for which we develop a local-filtering-based depth recovery method for fast processing.
In particular, similar to [10], the recovery of the depth map $D$ is formulated as the following energy minimization problem:
$$\min_{D} E(D) = E_s(D) + \lambda_f E_f(D) + \lambda_p E_p(D), \qquad (14)$$
where the smoothness term $E_s$ measures the quadratic variation between neighboring depth pixels, the fidelity term $E_f$ ensures that $D$ does not depart significantly from the depth measurement $D^0$, and the face prior term $E_p$ uses the blendshape prior to guide the depth recovery. We define
$$E_s(D) = \sum_{i} \sum_{j \in N(i)} w_{ij} \left( D_i - D_j \right)^2, \qquad (15)$$
where $i$ and $j$ are pixel indices, $N(i)$ is the set of neighboring pixels of pixel $i$, and $w_{ij}$ is the normalized joint trilateral filtering (JTF) weight, which is inversely proportional to the pixel distance, color difference, and depth difference [10]. For the fidelity term, we use the Euclidean distance between $D$ and $D^0$, i.e., $E_f(D) = \sum_i (D_i - D^0_i)^2$. Denoting by $D^F$ the depth map generated by rendering the current 3D blendshape model at the color camera viewpoint, the face prior term is computed as $E_p(D) = \sum_i (D_i - D^F_i)^2$.
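A possible form of the (unnormalized) JTF weight is a product of Gaussian falloffs in the three domains. The text only states inverse proportionality, so the Gaussian kernels and their sigmas below are our assumption:

```python
import numpy as np

def jtf_weight(p, q, color, depth, sigma_s, sigma_c, sigma_d):
    """Unnormalized joint trilateral filter weight between pixels p and
    q, a sketch of the weighting in [10]: decays with spatial distance,
    color difference, and depth difference."""
    ds = np.sum((np.array(p, float) - np.array(q, float)) ** 2)
    dc = np.sum((color[p] - color[q]) ** 2)     # color (H, W, 3)
    dd = (depth[p] - depth[q]) ** 2             # depth (H, W)
    return np.exp(-ds / (2 * sigma_s ** 2)
                  - dc / (2 * sigma_c ** 2)
                  - dd / (2 * sigma_d ** 2))
```

In practice the weights over each neighborhood are normalized to sum to one before filtering.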
A simple recursive solution to (14) is obtained from the vanishing-gradient condition, resulting in
$$D_i^{(t+1)} = \frac{\sum_{j \in N(i)} w_{ij} D_j^{(t)} + \lambda_f D^0_i + \lambda_p D^F_i}{1 + \lambda_f + \lambda_p}, \qquad (16)$$
where $w_{ij}$ are the normalized JTF weights, $D^0$ is the measured depth, $D^F$ is the depth rendered from the blendshape, $\lambda_f$ and $\lambda_p$ weight the fidelity and prior terms, and the superscript $t$ denotes the iteration number. This filtering process is GPU-friendly, and the number of iterations can be explicitly controlled to trade off recovery accuracy against speed.
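One filtering iteration can be sketched as a vectorized update over flattened depth maps, with neighbor indices and normalized weights assumed precomputed (names are ours):

```python
import numpy as np

def depth_filter_step(D, D_obs, D_face, weights, neighbors, lam_f, lam_p):
    """One recursive filtering iteration, a sketch: each pixel becomes a
    weighted average of its JTF-weighted neighbors, the observed depth,
    and the blendshape-rendered prior depth.

    D, D_obs, D_face: (N,) flattened depth maps
    weights:   (N, K) normalized JTF weights (rows sum to 1)
    neighbors: (N, K) neighbor indices into the flattened maps
    """
    smooth = np.sum(weights * D[neighbors], axis=1)   # neighbor average
    return (smooth + lam_f * D_obs + lam_p * D_face) / (1.0 + lam_f + lam_p)
```

Because every pixel updates independently from the previous iterate, the step is a Jacobi-style sweep that maps naturally onto a GPU kernel.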
6 Experiments
6.1 Tracking Experiments
We carried out extensive tracking experiments on synthetic BU4DFE sequences and on real videos captured by a Kinect camera. We compared the tracking performance of our method to that of the RGB-based trackers DDER [5], CoR [32] and RLMS [26] in terms of the average root mean square error (RMSE) of the 2D landmark positions in pixels. We also evaluated the trackers' robustness by comparing their proportions of unsuccessfully tracked frames.
6.1.1 Evaluations on Synthetic Data
The BU4DFE dataset [31] contains sequences of highresolution 3D dynamic facial expressions of human subjects. We rendered these sequences into RGBD to simulate the Kinect camera [22] at three distances: 1.5m, 1.75m and 2m with added rotation and translation. In total, we collected tracking results from 270 sequences. The dataset does not provide ground truth, so we used the RLMS tracker [26], which works well on BU4DFE sequences, to recover 2D landmarks on the images rendered at 0.6m, which were then reprojected to different distances and treated as ground truth.
6.1.2 Experiments on Real Data
We compared the tracking performance of our approach to other methods on 11 real sequences at various distances, with different lighting conditions, complex head movements as well as facial expressions. We used RLMS to recover the ground truth, and manually labeled the frames that were incorrectly tracked.
The results are shown in Table 2. For RLMS, we only measured performance on the manually labeled frames, since its results were otherwise used as ground truth; its numbers therefore serve only as a reference and do not reflect its true performance. Once again, our method outperformed DDER and was very close to CoR. The consistent error values demonstrate that our tracker is stable, particularly under large rotations or when the face is partially occluded, as illustrated in Fig. 6 and Fig. 7.
To better assess the robustness of each tracker, we compared the percentage of aggregated lost frames over all sequences in Table 3. Mistracked frames were identified either by empty tracker output or by an RMSE exceeding a fixed threshold. We did not count sequence luc03 for DDER, nor luc03 and luc04 for CoR, toward their overall percentages, because in those runs the faces were not registered correctly from the beginning, most likely due to the face detector failing to locate the face. Overall, the results show that the combined 2D+3D optimization of our method provides robust tracking.
Table 2: Average 2D landmark RMSE (pixels) on the real sequences.
Dataset  DDER [5]  CoR [32]  RLMS [26]  Ours
dt01  9.65  4.15  6.04  4.51
ar00  3.41  66.72  7.41  2.36
dt00  3.57  1.65  4.63  2.29
my01  5.61  2.79  4.35  2.89
fw01  6.50  3.27  36.11  4.85
fw02  5.34  1.80  2.56  3.50
luc01  4.96  2.38  5.86  3.49
luc02  3.95  1.51  2.04  3.02
luc03 (2m)  37.17  n/a  1.67  1.77
luc04 (2m)  2.63  62.45  n/a  1.84
luc05  3.39  2.39  3.44  2.88

Table 3: Percentage of lost frames aggregated over all sequences.
DDER [5]  CoR [32]  RLMS [26]  Ours
2.21%  7.22%  3.61%  0.74%
6.1.3 Running Time
Our tracker is implemented in native C++ and parallelized with TBB (https://www.threadingbuildingblocks.org/), with the GPU used only for calculating the 47 base expression blendshapes in (2). Running time was measured on a 3.4 GHz Core i7 machine with a GeForce GT 640 GPU. Shape regression ran in 5 ms, refining (R, T, e) took 12 ms, and auxiliary processing took another 10 ms; overall, without identity adaptation, the tracker ran at 30 Hz. The bottleneck is in optimizing for the identity, which took 14 ms, while calculating the 47 base blendshapes took 80 ms on the GT 640 with 384 CUDA cores. This process is only carried out at initialization or during tracker restarts; modern GPUs with higher CUDA core counts should remove this bottleneck.
6.2 Depth Recovery Experiments
6.2.1 Synthetic Data
We used the same set of BU4DFE sequences as in Section 6.1.1, at 1.75 m and 2 m. Instead of evaluating tracking accuracy, we measured the surface reconstruction error with respect to the 3D synthetic surface used to generate the data. To simulate different depth ranges of the target, we increased the noise level of the input depth map according to [22]. We ran the tracker on these sequences and collected both the surface of the blendshape (BSSurface) and the enhanced depth map filtered using face priors (DRwFP), and compared these two surfaces to the ground truth. Additionally, we compared our method to the depth recovery method in [9] using the Mean Absolute Error in mm, $\mathrm{MAE} = \frac{1}{|\Omega|} \sum_{i \in \Omega} |\hat{D}_i - D^{gt}_i|$, where $\Omega$ is the set of valid depth pixels and $\hat{D}_i$ and $D^{gt}_i$ are the recovered and ground truth depth values, respectively. The results are summarized in Table 4.
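The MAE metric itself is trivial to compute over the valid-pixel mask:

```python
import numpy as np

def mae(recovered, ground_truth, valid_mask):
    """Mean Absolute Error over valid depth pixels (in the same units as
    the depth maps, here mm)."""
    diff = np.abs(recovered - ground_truth)
    return diff[valid_mask].mean()
```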
The results show that the high noise levels, often higher than those of actual Kinect depth data, led to large errors in blendshape modeling. However, the face-guided filter mitigated these problems and recovered depth maps that were closer to the ground truth surface, improving upon [9] at both 1.75 m and 2 m.
Table 4: Depth recovery MAE (mm) on the synthetic BU4DFE sequences.
Dataset  [9]  BSSurface  DRwFP
BU4DFE (1.75 m)  2.83  9.16  2.39
BU4DFE (2.0 m)  3.59  8.79  2.85
6.2.2 Real Data
As we do not have ground truth for real data, in this section we only provide visual results of the recovered depth map at 2m. Fig. 8 shows depth recovery results on two sample depth frames. It is difficult to recognize any facial characteristics from the raw depth maps. The filter in [9] smoothed out the depth maps but was not able to recover any facial details. In contrast, our depth filter with face priors was able to reconstruct the facial shapes with recognizable quality.
7 Conclusion
We presented a novel approach to RGBD face tracking that uses 3D facial blendshapes to simultaneously model head movements and facial expressions. The tracker is driven by a fast shape regressor, which allows it to perform consistently at any distance, beyond the working range of current state-of-the-art RGBD face trackers. This 3D shape regressor directly estimates shape parameters together with 2D landmarks from the input color frame. The shape parameters are further refined by optimizing a well-designed 2D+3D energy function. Using this framework, our tracker automatically adapts the 3D blendshapes to better fit the individual facial characteristics of tracked subjects. Through extensive experiments on synthetic and real RGBD videos, our tracker performed consistently well in complex conditions and at different distances.
With the ability to model articulated facial expressions and complex head movements, our tracker can be deployed in various tasks such as animation and virtual reality. In addition, we use the blendshape as a prior in a novel depth filter to better reconstruct the depth map, even at larger distances. The refined depth map can later be used together with the blendshape to reproduce the facial shape regardless of object-camera distance.
Acknowledgements
We specially thank Deng Teng for his kind help in collecting the real test sequences used in this paper.
References
 [1] J. Ahlberg. Face and facial feature tracking using the active appearance algorithm. In 2nd European Workshop on Advanced Video-Based Surveillance Systems (AVBS), pages 89–93, London, UK, 2001.

 [2] S. Bao, M. Chandraker, Y. Lin, and S. Savarese. Dense object reconstruction with semantic priors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1264–1271, June 2013.
 [3] S. Bouaziz, Y. Wang, and M. Pauly. Online modeling for realtime facial animation. In SIGGRAPH, 2013.
 [4] Q. Cai, D. Gallup, C. Zhang, and Z. Zhang. 3d deformable face tracking with a commodity depth camera. In Europ. Conf. Comput. Vision (ECCV), 2010.
 [5] C. Cao, Q. Hou, and K. Zhou. Displaced dynamic expression regression for real-time facial tracking and animation. In SIGGRAPH, 2014.
 [6] C. Cao, Y. Weng, S. Lin, and K. Zhou. 3d shape regression for real-time facial animation. In SIGGRAPH, 2013.
 [7] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. FaceWarehouse: A 3D Facial Expression Database for Visual Computing. IEEE Transactions on Visualization and Computer Graphics, 20(3):413–425, March 2014.
 [8] X. Cao, Y. Wei, F. Wen, and J. Sun. Face alignment by explicit shape regression. In CVPR, 2012.
 [9] C. Chen, J. Cai, J. Zheng, T. J. Cham, and G. Shi. Kinect depth recovery using a color-guided, region-adaptive, and depth-selective framework. ACM Trans. Intell. Syst. Technol., 6(2):12:1–12:19, Mar. 2015.
 [10] C. Chen, H. X. Pham, V. Pavlovic, J. Cai, and G. Shi. Depth recovery with face priors. In Asian Conf. Computer Vision (ACCV), Nov 2014.
 [11] T. Cootes, G. Edwards, and C. Taylor. Active appearance models. IEEE Trans. Pat. Anal. Mach. Intel., (23):681–684, 2001.
 [12] T. Cootes, C. Taylor, D. Cooper, and J. Graham. Active shape models - their training and application. Comput. Vis. Image Underst., (61):39–59, 1995.
 [13] D. Cristinacce and T. Cootes. Boosted regression active shape models. In BMVC, Sep 2007.
 [14] D. DeCarlo and D. Metaxas. Optical flow constraints on deformable models with applications to face tracking. Int. J. Comput. Vis., 38(2):99–127, July 2000.
 [15] F. Dornaika and J. Ahlberg. Fast and reliable active appearance model search for 3d face tracking. 34(4):1838–1853, 2004.

[16] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
 [17] J. Orozco, O. Rudovic, J. González, and M. Pantic. Hierarchical online appearance-based tracking for 3D head pose, eyebrows, lips, eyelids and irises. Image and Vis. Comput., 31(4):322–340, 2013.
 [18] H. Li, J. Yu, Y. Ye, and C. Bregler. Realtime facial animation with on-the-fly correctives. In SIGGRAPH, 2013.
 [19] K. Low. Linear least-squares optimization for point-to-plane ICP surface registration. Technical Report TR04-004, Department of Computer Science, University of North Carolina at Chapel Hill, 2004.
 [20] I. Matthews and S. Baker. Active appearance models revisited. Int. J. Comput. Vis., 60(2):135–164, 2004.
 [21] D. Min, J. Lu, and M. Do. Depth video enhancement based on weighted mode filtering. 21(3):1176–1190, 2012.
 [22] C. Mutto, P. Zanuttigh, and G. Cortelazzo. Microsoft Kinect range camera. In Time-of-Flight Cameras and Microsoft Kinect, SpringerBriefs in Electrical and Computer Engineering, pages 33–47. Springer US, 2012.
 [23] H. X. Pham and V. Pavlovic. Hybrid Online 3D Face and Facial Actions Tracking in RGBD Video Sequences. In Proc. International Conference on Pattern Recognition (ICPR), 2014.
 [24] S. Ren, X. Cao, Y. Wei, and J. Sun. Face alignment at 3000 fps via regressing local binary features. In CVPR, 2014.
 [25] C. Richardt, C. Stoll, N. A. Dodgson, H.-P. Seidel, and C. Theobalt. Coherent spatiotemporal filtering, upsampling and rendering of RGBZ videos. Comp. Graph. Forum, 31(2pt1):247–256, May 2012.
 [26] J. M. Saragih, S. Lucey, and J. F. Cohn. Deformable model fitting by regularized landmark meanshift. International Journal of Computer Vision, 91(2):200–215, 2011.
 [27] F. Tarrés and A. Rama. GTAV face database. https://gtav.upc.edu/researchareas/facedatabase.
 [28] T. Weise, S. Bouaziz, H. Li, and M. Pauly. Realtime performance-based facial animation. In SIGGRAPH, 2011.
 [29] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In CVPR, 2013.
 [30] J. Yang, X. Ye, K. Li, and C. Hou. Depth recovery using an adaptive colorguided autoregressive model. In Europ. Conf. Comput. Vision (ECCV), pages 158–171, Florence, Italy, 2012. SpringerVerlag.
 [31] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale. A high-resolution 3d dynamic facial expression database. In 8th IEEE International Conference on Automatic Face Gesture Recognition, pages 1–6. IEEE, Sept 2008.
 [32] X. Yu, Z. Lin, J. Brandt, and D. Metaxas. Consensus of regression for occlusion-robust facial feature localization. In European Conf. Computer Vision (ECCV), Sep 2014.
 [33] M. Zhao, F. Tan, C.-W. Fu, C.-K. Tang, J. Cai, and T. J. Cham. High-quality Kinect depth filtering for real-time 3d telepresence. In IEEE International Conference on Multimedia and Expo (ICME), pages 1–6, July 2013.