
One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing

In this report, we will look at recent work published at CVPR 2021 in the domain of one-shot talking-head synthesis. This is a translated version of the article; feel free to report any possible mis-translations in the comments section.
For reasons we won't belabor, video conferencing has gained a tremendous user base in the last year. But despite its rise, it's not accessible to many because of the high network bandwidth required to carry both video and speech in real-time. Deep learning techniques (especially GANs) can deliver high-quality video via image compression at lower bit rates.
But before we dig into this research on a really interesting application of deep learning called talking head synthesis, we recommend checking out the brief video below. It'll help anchor the research as we dig a bit deeper into "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (Wang et al., 2020).

Project Page | Paper | Online Demo




Overview of the Proposed Method

Let's first level-set on the notations that we'll be using and get a little clarity on the goal of this research. From the paper:
Let $s$ be an image of a person, referred to as the source image. Let $\{d_1, d_2, ..., d_N\}$ be a talking-head video, called the driving video, where the $d_i$'s are the individual frames and $N$ is the total number of frames.
Our goal is to generate an output video $\{y_1, y_2, ..., y_N\}$, where the identity in the $y_i$'s is inherited from $s$ and the motions are derived from the $d_i$'s.
Depending on $s$ (i.e. the image of the person), the goal can be one of two broader deep learning tasks:
  • If the person in the source image ($s$) is the same as in the driving video ($d_i$), then it's a video reconstruction task. The generated output video ($y_i$) still takes the identity information from $s$ and motion information from $d_i$.
  • If the person in $s$ is not the same as in $d_i$, then it's a motion transfer task.
To inherit the features from the source image and control the novel synthesis of the talking head, the authors devised an unsupervised approach for learning a set of 3D keypoints and their decomposition.
The proposed method can be divided into three major steps:
  • Source image feature extraction
  • Driving video feature extraction
  • Video generation
The beauty of the proposed solution is that all the networks across all three stages are trained jointly. We will look into the training details in a moment, but let's first quickly look at the architectural design for source image feature extraction.

Source Image Feature Extraction

Figure: Different features extracted from the source image. (Source)
Four separate neural networks (well, three actually, since the head pose and expression networks share a backbone) are used to extract identity-specific information. Digging in a bit:

3D Appearance Feature Extraction ($F$):

Using a neural network $F$, the source image $s$ is mapped to a 3D appearance feature volume $f_s$. The network $F$ consists of multiple downsampling blocks followed by a number of 3D residual blocks to compute the 3D feature volume $f_s$.
Figure: Architectural design of $F$. (Source)
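For intuition, here is a rough PyTorch sketch of this kind of extractor. The block counts and channel numbers below are made up for illustration and are not the authors' exact configuration; only the overall pattern (2D downsampling, reshape into a 3D volume, 3D residual refinement) follows the description above.

```python
import torch
import torch.nn as nn

class AppearanceFeatureExtractor(nn.Module):
    """Rough sketch of F: 2D downsampling, reshape to a 3D volume, 3D residual refinement.
    Channel/depth numbers are illustrative, not the paper's exact configuration."""
    def __init__(self, depth=16, channels=32):
        super().__init__()
        self.depth, self.channels = depth, channels
        self.down = nn.Sequential(            # 2D downsampling blocks
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth * channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.res3d = nn.Sequential(           # one 3D residual block (the paper stacks several)
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, s):                     # s: (B, 3, H, W) source image
        h = self.down(s)                      # (B, depth*channels, H/4, W/4)
        B, _, H, W = h.shape
        f = h.view(B, self.channels, self.depth, H, W)  # 3D appearance feature volume
        return f + self.res3d(f)              # f_s: (B, C, D, H/4, W/4)
```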

$K$ 3D Canonical Keypoint Extraction ($L$):

Using a canonical 3D keypoint detection network $L$, a set of $K$ canonical 3D keypoints $x_{c,k} \in \R^3$ and their Jacobians $J_{c,k} \in \R^{3 \times 3}$ are extracted from $s$.
The Jacobians represent how a local patch around the keypoint can be transformed into a patch in another image via an affine transformation.
The authors have used a U-Net style encoder-decoder to extract canonical keypoints.
Figure: Architectural design of $L$. (Source)
Our canonical keypoints are formulated to be independent of the pose and expression change. They should only contain a person’s geometry signature, such as the shapes of face, nose, and eyes.
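To make the outputs concrete, here is a minimal, hypothetical head that produces tensors with the right shapes: $K$ canonical keypoints in $\R^3$ and $K$ Jacobians in $\R^{3 \times 3}$. The actual network $L$ is a U-Net style encoder-decoder, not a single linear layer; this sketch only illustrates what it returns.

```python
import torch
import torch.nn as nn

class CanonicalKeypointHead(nn.Module):
    """Hypothetical output head: maps a pooled feature vector to K canonical 3D
    keypoints x_{c,k} and their 3x3 Jacobians J_{c,k}. Shapes match the text above;
    the real network L is a U-Net style encoder-decoder, not a single linear layer."""
    def __init__(self, feat_dim=256, K=20):
        super().__init__()
        self.K = K
        self.to_kp = nn.Linear(feat_dim, K * 3)   # K keypoints x 3 coordinates
        self.to_jac = nn.Linear(feat_dim, K * 9)  # K Jacobians x (3 x 3) entries

    def forward(self, feat):                      # feat: (B, feat_dim)
        B = feat.shape[0]
        x_c = torch.tanh(self.to_kp(feat)).view(B, self.K, 3)  # keypoints kept in [-1, 1]
        J_c = self.to_jac(feat).view(B, self.K, 3, 3)
        return x_c, J_c
```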

Head Pose ($H$) and Expression Extraction ($\triangle$):

A pose estimation network $H$ is used to estimate the head pose of the person in $s$. It is parameterized by a rotation matrix $R_s \in \R^{3 \times 3}$ and a translation vector $t_s \in \R^3$. In practice, the rotation matrix $R_s$ is composed of three rotation matrices corresponding to the yaw, pitch, and roll angles.
An expression deformation estimation network $\triangle$ is used to estimate the deformations of the keypoints from the neutral expression. Thus there are $K$ 3D deformations $\delta_{s,k}$.
Note that the authors have used a common backbone with shared weights for both $H$ and $\triangle$, as is evident from the proposed architecture.
Figure: Architectural design of $H$ and $\triangle$. (Source)
Note: The same architecture is used to extract motion-related information from the driving video.
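Since the rotation is parameterized by yaw, pitch, and roll, composing $R$ is just a product of three axis rotations. A minimal sketch (the axis convention and multiplication order here are my own assumptions, not necessarily the authors'):

```python
import math
import torch

def rotation_matrix(yaw, pitch, roll):
    """Compose a 3x3 head rotation from yaw, pitch, and roll angles (in radians).
    Axis convention and multiplication order are illustrative assumptions."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    R_yaw   = torch.tensor([[ cy, 0.0,  sy], [0.0, 1.0, 0.0], [-sy, 0.0,  cy]])  # about y-axis
    R_pitch = torch.tensor([[1.0, 0.0, 0.0], [0.0,  cp, -sp], [0.0,  sp,  cp]])  # about x-axis
    R_roll  = torch.tensor([[ cr, -sr, 0.0], [ sr,  cr, 0.0], [0.0, 0.0, 1.0]])  # about z-axis
    return R_yaw @ R_pitch @ R_roll
```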
Using the information from all three architectures, the authors have proposed a transformation $T$ to obtain the final 3D keypoints $x_{s,k}$ and their Jacobians $J_{s,k}$ for the source image. $T_x$ is applied to the keypoints and $T_j$ to the Jacobians such that:
$$x_{s,k} = T_x(x_{c,k}, R_s, t_s, \delta_{s,k}) \equiv R_s x_{c,k} + t_s + \delta_{s,k}$$
$$J_{s,k} = T_j(J_{c,k}, R_s) \equiv R_s J_{c,k}$$
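In code, $T_x$ and $T_j$ amount to a rotation plus translation and deformation for the keypoints, and a rotation for the Jacobians. A minimal batched sketch (tensor shapes and names are my own):

```python
import torch

def transform_keypoints(x_c, J_c, R, t, delta):
    """Apply T_x and T_j as defined above.
    x_c:   (B, K, 3)    canonical keypoints
    J_c:   (B, K, 3, 3) canonical Jacobians
    R:     (B, 3, 3)    head rotation,  t: (B, 3) translation
    delta: (B, K, 3)    per-keypoint expression deformations"""
    x = torch.einsum('bij,bkj->bki', R, x_c) + t.unsqueeze(1) + delta  # x_k = R x_{c,k} + t + delta_k
    J = torch.einsum('bij,bkjl->bkil', R, J_c)                         # J_k = R J_{c,k}
    return x, J
```

The same function is reused with the driving-frame pose and deformations ($R_d$, $t_d$, $\delta_{d,k}$) in the next section.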

Driving Video Feature Extraction

Figure: Different features extracted from the driving video. (Source)
The driving video is used to extract motion-related information. To this end, the head pose estimation network $H$ and the expression deformation estimation network $\triangle$ are used. Note that the 3D appearance feature extractor ($F$) and the canonical keypoint extraction network ($L$) are not used. This follows directly from the formulated goal. From the paper,
Instead of extracting canonical 3D keypoints from the driving image $d$ using $L$, we reuse $x_{c,k}$ and $J_{c,k}$, which were extracted from the source image $s$. This is because the face in the output image must have the same identity as the one in the source image $s$. There is no need to compute them again.
Using the identity-specific information ($x_{c,k}$ and $J_{c,k}$) and the motion-related information, the final 3D keypoints $x_{d,k}$ and their Jacobians $J_{d,k}$ are computed for the driving video. The same transformations $T_x$ and $T_j$ are used such that:
$$x_{d,k} = T_x(x_{c,k}, R_d, t_d, \delta_{d,k}) \equiv R_d x_{c,k} + t_d + \delta_{d,k}$$
$$J_{d,k} = T_j(J_{c,k}, R_d) \equiv R_d J_{c,k}$$
These 3D keypoints and their Jacobians are derived for every frame of the driving video. Since the final driving keypoints and Jacobians are assembled from an explicit rotation and translation, we can supply a user-specified rotation and translation matrix to change the person's head pose.
Our approach allows manual changes to the 3D head pose during synthesis. Let $R_u$ and $t_u$ be user-specified rotation and translation, respectively. The final head pose in the output image is given by $R_d \leftarrow R_u R_d$ and $t_d \leftarrow t_u + t_d$. In video conferencing, we can change a person's head pose in the video stream freely despite the original view angle.
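Because the driving pose enters the keypoint computation as an explicit rotation and translation, the free-view control above is a one-line override before recomputing the driving keypoints. A hedged sketch reusing the hypothetical `rotation_matrix` and `transform_keypoints` helpers from the previous sections (all tensor values below are dummies, just to make the snippet runnable):

```python
import torch

B, K = 1, 20
# Dummy driving-frame quantities standing in for the outputs of H, the expression network, and L
R_d, t_d = torch.eye(3).expand(B, 3, 3), torch.zeros(B, 3)
delta_d = torch.zeros(B, K, 3)
x_c, J_c = torch.zeros(B, K, 3), torch.eye(3).expand(B, K, 3, 3)

# User-specified view change, e.g. turn the head ~20 degrees in yaw
R_u = rotation_matrix(yaw=0.35, pitch=0.0, roll=0.0).expand(B, 3, 3)
t_u = torch.zeros(B, 3)

R_d = R_u @ R_d                # R_d <- R_u R_d
t_d = t_u + t_d                # t_d <- t_u + t_d
x_d, J_d = transform_keypoints(x_c, J_c, R_d, t_d, delta_d)  # then synthesize as usual
```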

Video Synthesis

Figure: Video synthesis pipeline. (Source)
The 3D keypoints and Jacobians extracted from the source image and the driving video frame are used to estimate warping flow maps. The flow map $w_k$ is generated from the $k^{th}$ keypoint using the first-order approximation. This flow field $w_k$ is used to warp the source feature, giving $w_k(f_s)$, where $k \in \{1, 2, ..., K\}$. First Order Motion Model for Image Animation by Siarohin et al. might be a useful read. You can check out that paper's summary by Lavanya Shukla here.
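For reference, under the first-order approximation of Siarohin et al., the flow induced by the $k^{th}$ keypoint maps a location $z$ near the driving keypoint to $x_{s,k} + J_{s,k} J_{d,k}^{-1}(z - x_{d,k})$ in the source. A small sketch of that formula, with the grid layout as my own assumption:

```python
import torch

def first_order_flow(grid, x_s, J_s, x_d, J_d):
    """Per-keypoint warping field via the first-order approximation (Siarohin et al.):
    a location z in the driving frame is mapped to x_s + J_s J_d^{-1} (z - x_d).
    grid: (D, H, W, 3) coordinate grid;  x_s, x_d: (3,);  J_s, J_d: (3, 3)."""
    A = J_s @ torch.linalg.inv(J_d)                 # local 3x3 affine around keypoint k
    return x_s + (grid - x_d) @ A.transpose(0, 1)   # apply A to every grid location
```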
These warped features are fed to a motion field estimator network $M$, which is a 3D U-Net style network. As shown in the figure, two outputs are estimated using this network: a mask $m$ and an occlusion map $o$.
Figure: Architectural design of $M$. (Source)
  • A softmax activation is used to obtain the flow composition mask $m$, which consists of $K$ 3D masks. These are combined with the $K$ warping flow maps $w_k$ to obtain the final composite flow field $w$, which is then used to obtain the warped source feature $w(f_s)$ (see the sketch below).
  • Warping leads to occlusions, so a 2D occlusion mask $o$ is also predicted and fed to the generator.
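Putting the two outputs together, here is a hedged sketch of the composition step. Tensor layouts and the grid-coordinate convention are assumptions for illustration, and in the paper the projection back to 2D happens inside the generator $G$ described next:

```python
import torch
import torch.nn.functional as F

def compose_and_warp(masks, flows, f_s, occlusion):
    """Combine per-keypoint flows into one composite flow, warp the source feature,
    and apply the occlusion mask.
    masks:     (B, K, D, H, W)     softmax output of M (sums to 1 over K)
    flows:     (B, K, D, H, W, 3)  per-keypoint warping fields w_k
    f_s:       (B, C, D, H, W)     3D source appearance feature volume
    occlusion: (B, 1, H, W)        2D occlusion mask o"""
    w = (masks.unsqueeze(-1) * flows).sum(dim=1)        # composite flow field w: (B, D, H, W, 3)
    warped = F.grid_sample(f_s, w, align_corners=True)  # w(f_s); grid assumed to be in [-1, 1]
    feat_2d = warped.flatten(1, 2)                      # collapse depth: (B, C*D, H, W)
    return feat_2d * occlusion                          # occluded regions are suppressed before G
```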
The authors have used a generator network $G$ that takes the warped 3D source feature map $w(f_s)$ and first projects it back to a 2D feature map. This is then multiplied by the occlusion mask $o$ and passed through a series of 2D residual blocks and upsampling layers to obtain the final image.
Figure: Architectural design of $G$. (Source)
To summarize so far: we have a source image $s$ and a driving video $d$. The task is to generate an output video $y$ such that it has the identity-specific information from $s$ and the motion-specific information from $d$. Different neural networks are used to obtain the identity-specific information, and the same goes for the motion-specific information. These pieces of information are used to obtain the $K$ 3D keypoints and Jacobians for both $s$ and $d$.
These keypoints and Jacobians are then used to warp the source appearance feature $f_s$ extracted from $s$, from which the final output image is generated using the generator network $G$.
So how did the authors train this system? We'll cover the procedure, the dataset they used, and their losses.

Training the Models

The authors used a dataset of talking-head videos to train their models here. They mention the use of VoxCeleb2 and TalkingHead-1KH for evaluation, though it is a bit unclear which dataset they used for training.
For each video, two frames were sampled:
  • one acts as the source image $s$,
  • and the other as a frame $d$ from the driving video.
The networks $F$, $L$, $H$, $\triangle$, $M$, and $G$ are trained together by minimizing the following loss:
$$\mathcal{L} = \mathcal{L}_P(d, y) + \mathcal{L}_G(d, y) + \mathcal{L}_E(\{x_{d,k}\}, \{J_{d,k}\}) + \mathcal{L}_L(\{x_{d,k}\}) + \mathcal{L}_H(R_d, \bar{R}_d) + \mathcal{L}_{\triangle}(\{\delta_{d,k}\})$$
Let's go through each term one-by-one:
  • Perceptual Loss ($\mathcal{L}_P$): Perceptual loss is commonly used in image reconstruction tasks. Here's a nice description of this loss function. In short, a pre-trained VGG network is used to extract features from both the ground truth image and the reconstructed image, and the $L_1$ distance is computed between the features. The features are extracted from multiple hidden layers at varying resolutions. Besides regular VGG (trained on ImageNet), the authors have also used a pre-trained face VGG network for obvious reasons.
  • GAN Loss ($\mathcal{L}_G$): The authors have used a patch GAN implementation along with the hinge loss. Check out this quick summary here.
  • Equivariance Loss ($\mathcal{L}_E$): This loss ensures the consistency of the estimated keypoints. Let $x_d$ be the detected keypoints for the input image $d$. When a known transformation $T$ is applied to the image, giving $T(d)$, the detected keypoints should transform in the same way. The $L_1$ distance is minimized such that $||x_d - T^{-1}(x_{T(d)})||_1$ tends to zero, where $T^{-1}$ is the inverse of the known transform. The same logic applies to the Jacobians of the keypoints (see the sketch after this list).
  • Keypoint Prior Loss ($\mathcal{L}_L$): This loss encourages the estimated image-specific keypoints $x_{d,k}$ to spread out across the face region instead of crowding in a small neighborhood. The distance is computed between keypoint pairs and penalized if it falls below some threshold.
  • Head Pose Loss ($\mathcal{L}_H$): The $L_1$ distance is computed between the estimated head pose $R_d$ and the one predicted by a pre-trained estimator, $\bar{R}_d$. This supervision is only as good as the pre-trained head pose estimator.
  • Deformation Prior Loss ($\mathcal{L}_{\triangle}$): This loss is simply given as the $L_1$ norm of the deformations: $\mathcal{L}_{\triangle} = ||\delta_{d,k}||_1$.
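As promised above, here is a minimal sketch of the equivariance and keypoint prior terms. The exact weighting, thresholds, and any additional terms in the paper's formulation are not reproduced here; the shapes and the threshold value are illustrative.

```python
import torch

def equivariance_loss(x_d, x_Td, T_inv):
    """L_E sketch: keypoints detected on the transformed image T(d), mapped back through
    the inverse transform, should match the keypoints detected on d.
    x_d, x_Td: (B, K, 3);  T_inv: callable applying the inverse transform to keypoints."""
    return (x_d - T_inv(x_Td)).abs().mean()

def keypoint_prior_loss(x_d, threshold=0.1):
    """L_L sketch: penalize keypoint pairs that are closer than a threshold,
    encouraging keypoints to spread out over the face region."""
    B, K, _ = x_d.shape
    dist = torch.cdist(x_d, x_d)                               # (B, K, K) pairwise distances
    off_diag = ~torch.eye(K, dtype=torch.bool, device=x_d.device)
    return torch.clamp(threshold - dist[:, off_diag], min=0).mean()
```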
The models are trained using a coarse-to-fine technique. The Adam optimizer with a learning rate of 0.0002 is used to train the model on 256x256 resolution images for 100 epochs. This is then fine-tuned on 512x512 resolution images for 10 epochs.

Conclusion

The results of this research are really quite promising. The techniques we talked about here resulted in a 10X bandwidth reduction and, if you had a chance to look at the video in our introduction, you can see that the video quality is incredibly high considering that reduction. Models like this could make it possible to democratize access to video conferencing and reduce strain on networks, especially in residential and rural areas where bandwidth is already harder to come by.
The paper is packed with implementation details, failure modes, and other nitty-gritty. I highly recommend going through the paper, especially the appendix.
I would also like to thank Justin Tenuto for his edits.