One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing
In this report, we will look at recent work published at CVPR 2021 in the domain of one-shot talking-head synthesis. This is a translated version of the article, so feel free to report any possible mistranslations in the comments section.
For reasons we won't belabor, video conferencing has gained a tremendous user base in the last year. But despite its rise, it's not accessible to many because of the high network bandwidth required to carry both video and speech in real-time. Deep learning techniques (especially GANs) can deliver high-quality video via image compression at lower bit rates.
But before we dig into this research on a really interesting application of deep learning called talking head synthesis, we recommend checking out the brief video below. It'll help anchor the research as we dig a bit deeper into "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (Wang et al., 2020).
Project Page | Paper | Online Demo
Overview of the Proposed Method
Let's first level-set on the notations that we'll be using and get a little clarity on the goal of this research. From the paper:
Let $s$ be an image of a person, referred to as the source image. Let $d \equiv \{d_1, d_2, \ldots, d_N\}$ be a talking-head video, called the driving video, where the $d_i$'s are the individual frames and $N$ is the total number of frames.
Our goal is to generate an output video $y \equiv \{y_1, y_2, \ldots, y_N\}$, where the identity in the $y_i$'s is inherited from $s$ and the motions are derived from the $d_i$'s.
Depending on $s$ (i.e., the image of the person), the goal can be either one of two broader deep learning tasks:
- If the person in the source image ($s$) is the same as in the driving video ($d$), then it's a video reconstruction task. The generated output video ($y$) still takes the identity information from $s$ and the motion information from $d$.
- If the person in $s$ is not the same as in $d$, then it's a motion transfer task.
To inherit the features from the source image and control the novel synthesis of the talking head, the authors devised an unsupervised approach for learning a set of 3D keypoints and their decomposition.
The proposed method can be divided into three major steps:
- Source image feature extraction
- Driving video feature extraction
- Video generation
The beauty of the proposed solution is that all the architectures across all three stages are trained jointly. We will look into the training details in a moment, but let's first take a quick look at the architectural design for source image feature extraction.
Source Image Feature Extraction

Four separate neural networks (well, three in practice, since two of them share a backbone) are used to extract identity-specific information. Digging in a bit:
3D Appearance Feature Extraction ($F$):
Using a neural network, the source image $s$ is mapped to a 3D appearance feature volume $f_s$. The network $F$ consists of multiple downsampling blocks followed by a number of 3D residual blocks to compute the 3D feature volume.
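To make this concrete, here is a minimal PyTorch-style sketch of what such a network could look like: 2D downsampling blocks, a reshape of channels into a depth dimension, and a stack of 3D residual blocks. All layer counts, channel sizes, and the particular reshape are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Residual3D(nn.Module):
    """Plain 3D residual block used to refine the feature volume."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.block(x)

class AppearanceFeatureExtractor(nn.Module):
    """Sketch of F: 2D downsampling blocks -> reshape to a 3D volume -> 3D residual blocks."""
    def __init__(self, out_ch=32, depth=16, n_res=6):
        super().__init__()
        self.downsample = nn.Sequential(                 # 2D downsampling blocks
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.to_3d = nn.Conv2d(256, out_ch * depth, 1)   # channels become C * D
        self.res3d = nn.Sequential(*[Residual3D(out_ch) for _ in range(n_res)])
        self.out_ch, self.depth = out_ch, depth

    def forward(self, src_img):                          # src_img: (B, 3, H, W)
        h = self.to_3d(self.downsample(src_img))         # (B, C*D, H/4, W/4)
        B, _, H, W = h.shape
        vol = h.view(B, self.out_ch, self.depth, H, W)   # unfold channels into a depth axis
        return self.res3d(vol)                           # 3D appearance feature volume f_s

f_s = AppearanceFeatureExtractor()(torch.randn(1, 3, 256, 256))  # -> (1, 32, 16, 64, 64)
```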

3D Canonical Keypoint Extraction ($L$):
Using a canonical 3D keypoint detection network $L$, a set of $K$ canonical 3D keypoints $x_{c,k}$ and their Jacobians $J_{c,k}$ are extracted from $s$.
The Jacobians represent how a local patch around the keypoint can be transformed into a patch in another image via an affine transformation.
The authors have used a U-Net style encoder-decoder to extract canonical keypoints.

Our canonical keypoints are formulated to be independent of the pose and expression change. They should only contain a person’s geometry signature, such as the shapes of face, nose, and eyes.
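How does an encoder-decoder turn into keypoint coordinates? A common recipe in this line of work (e.g., the first-order motion model) is to predict one heatmap per keypoint and take a soft-argmax over it. The sketch below illustrates that idea for $K$ keypoints over a 3D heatmap; the heatmap resolution, the 3D soft-argmax, and the idea of a separate head producing the Jacobians are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn.functional as F_nn  # aliased to avoid clashing with the network name F

def soft_argmax_3d(heatmaps):
    """heatmaps: (B, K, D, H, W) -> keypoint coordinates (B, K, 3) in normalized [-1, 1]."""
    B, K, D, H, W = heatmaps.shape
    probs = F_nn.softmax(heatmaps.view(B, K, -1), dim=-1).view(B, K, D, H, W)
    # Normalized coordinate grids along each axis.
    zs = torch.linspace(-1, 1, D).view(1, 1, D, 1, 1)
    ys = torch.linspace(-1, 1, H).view(1, 1, 1, H, 1)
    xs = torch.linspace(-1, 1, W).view(1, 1, 1, 1, W)
    x = (probs * xs).sum(dim=(2, 3, 4))
    y = (probs * ys).sum(dim=(2, 3, 4))
    z = (probs * zs).sum(dim=(2, 3, 4))
    return torch.stack([x, y, z], dim=-1)            # (B, K, 3)

# Hypothetical usage: a U-Net style encoder-decoder would produce `heatmaps`
# (and, via an extra head, the 3x3 Jacobians J_c for each keypoint).
heatmaps = torch.randn(2, 20, 16, 64, 64)            # (B, K, D, H, W), made-up sizes
x_c = soft_argmax_3d(heatmaps)                       # canonical keypoints, shape (2, 20, 3)
```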
Head Pose ($H$) and Expression Deformation Extraction ($\Delta$):
A pose estimation network $H$ is used to estimate the head pose of the person in $s$. It is parameterized by a rotation matrix $R_s$ and a translation vector $t_s$. In practice, the rotation matrix $R_s$ is composed of three rotation matrices corresponding to yaw, pitch, and roll.
The expression deformation estimation network $\Delta$ is used to estimate the deformations $\delta_{s,k}$ of the keypoints from the neutral expression. Thus, there are $K$ 3D deformations.
Note that the authors have used a common backbone with shared weights for both $H$ and $\Delta$. This is evident from the proposed architecture.

Note: The same architecture is used to extract motion-related information from the driving video.
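For illustration, a shared backbone with separate heads (applied to both the source image and the driving frames) might look like the sketch below. The encoder, the direct regression of yaw/pitch/roll, and all layer sizes are assumptions; the paper's actual pose head may use a different parameterization.

```python
import torch
import torch.nn as nn

class PoseAndExpressionNet(nn.Module):
    """Sketch of H and Delta sharing one backbone: a single encoder with three heads."""
    def __init__(self, num_kp=20, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(                      # shared image encoder (stand-in)
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(inplace=True),
        )
        self.angles_head = nn.Linear(feat_dim, 3)           # yaw, pitch, roll -> build R downstream
        self.trans_head = nn.Linear(feat_dim, 3)            # translation vector t
        self.deform_head = nn.Linear(feat_dim, num_kp * 3)  # K expression deformations delta
        self.num_kp = num_kp

    def forward(self, img):
        h = self.backbone(img)
        yaw_pitch_roll = torch.tanh(self.angles_head(h)) * 3.14159  # angles in radians (assumed range)
        t = self.trans_head(h)
        delta = self.deform_head(h).view(-1, self.num_kp, 3)
        return yaw_pitch_roll, t, delta

ypr, t, delta = PoseAndExpressionNet()(torch.randn(1, 3, 256, 256))
```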
Using the information from all three networks, the authors propose a transformation to obtain the final 3D keypoints $x_{s,k}$ and their Jacobians $J_{s,k}$ for the source image. The pose and the expression deformation are applied to the canonical keypoints, and the rotation to the canonical Jacobians, such that:

$x_{s,k} = R_s x_{c,k} + t_s + \delta_{s,k}, \qquad J_{s,k} = R_s J_{c,k}$
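In code, this transformation is just a rotation plus translation plus expression offset for the keypoints, and a rotation for the Jacobians. A minimal sketch, assuming the rotation matrix has already been assembled from the predicted yaw/pitch/roll:

```python
import torch

def transform_keypoints(x_c, J_c, R, t, delta):
    """x_c: (B, K, 3) canonical keypoints, J_c: (B, K, 3, 3) canonical Jacobians,
    R: (B, 3, 3) rotation, t: (B, 3) translation, delta: (B, K, 3) deformations."""
    x = torch.einsum('bij,bkj->bki', R, x_c) + t[:, None, :] + delta   # x = R x_c + t + delta
    J = torch.einsum('bij,bkjl->bkil', R, J_c)                         # J = R J_c
    return x, J
```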
Driving Video Feature Extraction

The driving video is used to extract motion-related information. To this end, the head pose estimation network $H$ and the expression deformation estimator network $\Delta$ are used. Note that the 3D feature extractor ($F$) and the canonical keypoint extraction network ($L$) are not used. This is consistent with the formulated goal. From the paper:
Instead of extracting canonical 3D keypoints from the driving image $d$ using $L$, we reuse $x_{c,k}$ and $J_{c,k}$, which were extracted from the source image. This is because the face in the output image must have the same identity as the one in the source image. There is no need to compute them again.
Using the identity-specific information ($x_{c,k}$ and $J_{c,k}$) and the motion-related information ($R_d$, $t_d$, and $\delta_{d,k}$), the final 3D keypoints $x_{d,k}$ and their Jacobians $J_{d,k}$ are computed for the driving video. The same transformations are used, such that:

$x_{d,k} = R_d x_{c,k} + t_d + \delta_{d,k}, \qquad J_{d,k} = R_d J_{c,k}$
These 3D keypoints and Jacobians are computed for every frame of the driving video. Since the final keypoints for each driving frame are obtained by applying an explicit head pose to the identity-specific canonical keypoints, we can supply a user-specified rotation matrix and translation vector to change the person's head pose.
Our approach allows manual changes to the 3D head pose during synthesis. Let $R_u$ and $t_u$ be the user-specified rotation and translation, respectively. The final head pose in the output image is given by $R_u R_d$ and $t_u + t_d$. In video conferencing, we can change a person's head pose in the video stream freely despite the original view angle.
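In other words, free-view control only touches the pose that enters the keypoint transformation. A tiny sketch of this composition (the yaw-only rotation below is a hypothetical example):

```python
import math
import torch

def apply_user_view(R_d, t_d, R_u, t_u):
    """Compose the driving pose with a user-specified rotation/translation:
    R <- R_u R_d,  t <- t_u + t_d.  R_*: (B, 3, 3); t_*: (B, 3)."""
    return R_u @ R_d, t_u + t_d

# Hypothetical usage: force ~30 degrees of yaw on the synthesized head, regardless of
# the pose in the driving frame, before computing x_d = R x_c + t + delta.
yaw = math.radians(30)
R_u = torch.tensor([[[ math.cos(yaw), 0.0, math.sin(yaw)],
                     [ 0.0,           1.0, 0.0          ],
                     [-math.sin(yaw), 0.0, math.cos(yaw)]]])
R_d, t_d = torch.eye(3).unsqueeze(0), torch.zeros(1, 3)
R_out, t_out = apply_user_view(R_d, t_d, R_u, torch.zeros(1, 3))
```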
Video Synthesis

The 3D keypoints and Jacobians extracted from the source image and the driving video frame are used to estimate $K$ warping flow maps. The $k$-th flow map $w_k$ is generated based on the $k$-th keypoint pair using the first-order approximation. This flow field $w_k$ is used to warp the source feature $f_s$, producing $w_k(f_s)$, where $k \in \{1, \ldots, K\}$. First Order Motion Model for Image Animation by Siarohin et al. might be a useful read. You can check out this paper's summary by Lavanya Shukla here.
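For the curious, the $k$-th flow roughly maps each output location $p$ back into the source feature volume using the keypoint pair and their Jacobians, $p \mapsto J_{s,k} J_{d,k}^{-1}(p - x_{d,k}) + x_{s,k}$, in the spirit of the first-order motion model. The sketch below computes such a backward-warping grid for a single keypoint; the shapes, the normalized-coordinate convention, and the grid resolution are illustrative assumptions.

```python
import torch
import torch.nn.functional as F_nn

def flow_for_keypoint(x_s_k, J_s_k, x_d_k, J_d_k, size=(16, 64, 64)):
    """Backward-warping grid for one keypoint, in normalized [-1, 1] coordinates.
    x_*_k: (B, 3), J_*_k: (B, 3, 3). Returns a sampling grid of shape (B, D, H, W, 3)."""
    B = x_s_k.shape[0]
    D, H, W = size
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, D), torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
        indexing='ij')
    p = torch.stack([xs, ys, zs], dim=-1).view(1, -1, 3).expand(B, -1, 3)  # (B, D*H*W, 3)
    A = J_s_k @ torch.linalg.inv(J_d_k)                                    # J_s J_d^{-1}
    q = torch.einsum('bij,bnj->bni', A, p - x_d_k[:, None]) + x_s_k[:, None]
    return q.view(B, D, H, W, 3)

# Hypothetical usage: warp the source 3D feature volume f_s with this grid.
f_s = torch.randn(2, 32, 16, 64, 64)
grid = flow_for_keypoint(torch.zeros(2, 3), torch.eye(3).expand(2, 3, 3),
                         torch.zeros(2, 3), torch.eye(3).expand(2, 3, 3))
warped = F_nn.grid_sample(f_s, grid, align_corners=True)   # w_k(f_s): (2, 32, 16, 64, 64)
```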
These warped features are fed to a motion field estimator network $M$, which is a 3D U-Net style network. As shown in the figure, two outputs are estimated using this network: a flow composition mask $m$ and an occlusion mask $o$.

- A softmax activation is used to obtain the flow composition mask $m$, which consists of $K$ 3D masks $m_k$. These are combined with the $K$ warping flow maps $w_k$ to obtain the final composite flow field $w$. This composite flow is finally used to obtain the warped source feature $w(f_s)$.
- Warping leads to occlusions, so a 2D occlusion mask $o$ is predicted and passed as an input to the generator.
The authors use a generator network $G$ that takes the warped 3D source feature $w(f_s)$ and first projects it back to a 2D feature map. This is then multiplied with the occlusion mask $o$, followed by a series of 2D residual blocks and upsampling layers to obtain the final image.
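Putting this subsection together in code: softmax the $K$ mask channels, take the mask-weighted combination of the $K$ flows, warp $f_s$ once with the composite flow, and collapse depth into channels before applying the occlusion mask. Every shape below, and the simple reshape used as the 3D-to-2D projection, is an illustrative assumption rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F_nn

def compose_and_warp(f_s, flows, mask_logits):
    """f_s: (B, C, D, H, W) source volume; flows: (B, K, D, H, W, 3) per-keypoint grids;
    mask_logits: (B, K, D, H, W) raw mask output of the motion field estimator M."""
    m = torch.softmax(mask_logits, dim=1)                     # flow composition mask
    w = (m.unsqueeze(-1) * flows).sum(dim=1)                  # composite flow field (B, D, H, W, 3)
    return F_nn.grid_sample(f_s, w, align_corners=True)       # warped feature w(f_s)

def project_to_2d(warped, occlusion_2d):
    """Collapse depth into channels and apply the predicted 2D occlusion mask o."""
    B, C, D, H, W = warped.shape
    feat_2d = warped.view(B, C * D, H, W)                     # simple 3D -> 2D projection (assumed)
    return feat_2d * occlusion_2d                             # occlusion_2d: (B, 1, H, W)

# Hypothetical shapes
f_s = torch.randn(1, 32, 16, 64, 64)
flows = torch.randn(1, 20, 16, 64, 64, 3).clamp(-1, 1)
masks = torch.randn(1, 20, 16, 64, 64)
o = torch.sigmoid(torch.randn(1, 1, 64, 64))
feat = project_to_2d(compose_and_warp(f_s, flows, masks), o)  # fed to the generator's 2D blocks
```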

To summarize so far: we have a source image $s$ and a driving video $d$. The task is to generate an output video $y$ that takes the identity-specific information from $s$ and the motion-specific information from $d$. Different neural networks are used to obtain the identity-specific information, and likewise for the motion-specific information. These pieces of information are used to obtain $K$ 3D keypoints and Jacobians for both $s$ and $d$.
These keypoints and Jacobians are then used to warp the source appearance feature $f_s$ extracted from $s$, from which the final output image is generated using a generator network.
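To see why this formulation is attractive for video conferencing, it helps to trace which quantities are computed once from the source image and which must be produced for every driving frame. The sketch below uses random tensors as stand-ins for the network outputs; the number of keypoints and all shapes are made-up illustration values.

```python
import torch

K = 20                             # assumed number of 3D keypoints
C, D, H, W = 32, 16, 64, 64        # assumed 3D feature volume size

# --- computed once from the source image s (stand-ins for F, L, H, Delta outputs) ---
f_s     = torch.randn(1, C, D, H, W)      # appearance feature volume
x_c     = torch.randn(1, K, 3)            # canonical keypoints
J_c     = torch.randn(1, K, 3, 3)         # canonical Jacobians
R_s, t_s, delta_s = torch.eye(3)[None], torch.zeros(1, 3), torch.randn(1, K, 3)
x_s = torch.einsum('bij,bkj->bki', R_s, x_c) + t_s[:, None] + delta_s
J_s = torch.einsum('bij,bkjl->bkil', R_s, J_c)

# --- per driving frame: only head pose and expression deformations are needed ---
for _ in range(3):                        # stand-in for looping over video frames
    R_d, t_d = torch.eye(3)[None], torch.zeros(1, 3)   # from the pose network H
    delta_d  = torch.randn(1, K, 3)                    # from the deformation network Delta
    x_d = torch.einsum('bij,bkj->bki', R_d, x_c) + t_d[:, None] + delta_d
    J_d = torch.einsum('bij,bkjl->bkil', R_d, J_c)
    # x_s, J_s, x_d, J_d drive the warping and the generator G to produce the output frame
```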
So how did the authors train this system? We'll cover the procedure, the dataset they used, and their losses.
Training the Models
The authors used a dataset of talking-head videos to train their models here. They mention the use of VoxCeleb2 and TalkingHead-1KH for evaluation, though it is a bit unclear which dataset they used for training.
For each video, two frames were sampled:
- one would act as a source image,
- and the other as the frame from the driving video.
The networks $F$, $L$, $H$, $\Delta$, $M$, and $G$ are trained together by minimizing the following loss, a weighted sum of six terms:

$\mathcal{L} = \lambda_P \mathcal{L}_P + \lambda_G \mathcal{L}_G + \lambda_E \mathcal{L}_E + \lambda_L \mathcal{L}_L + \lambda_H \mathcal{L}_H + \lambda_\Delta \mathcal{L}_\Delta$
Let's go through each term one-by-one:
- Perceptual Loss ($\mathcal{L}_P$): Perceptual loss is commonly used in image reconstruction tasks. Here's a nice description of this loss function. In short, a pre-trained VGG network is used to extract features from both the ground truth image and the reconstructed image, and the distance between the features is computed. The features are extracted from multiple hidden layers of varying resolutions. Besides the regular VGG network (trained on ImageNet), the authors have also used a pre-trained face VGG network, for obvious reasons.
- GAN Loss ($\mathcal{L}_G$): The authors have used a patch GAN implementation along with the hinge loss. Check out this quick summary here.
- Equivariance Loss ($\mathcal{L}_E$): This loss ensures the consistency of the estimated keypoints. Let $x_{d,k}$ be the detected keypoints for the input image. When a known transformation $T$ is applied to the image, the detected keypoints should transform in the same way. An $L_1$ distance is minimized so that the difference between $x_{d,k}$ and the keypoints detected on the transformed image, mapped back through $T^{-1}$, tends to zero; here $T^{-1}$ is the inverse of the known transform. The same logic applies to the Jacobians of the keypoints.
- Keypoint Prior Loss ($\mathcal{L}_L$): This loss encourages the estimated image-specific keypoints $x_{d,k}$ to spread out across the face region, instead of crowding in a small neighborhood. The distance between keypoint pairs is computed and penalized if it falls below some threshold.
- Head Pose Loss ($\mathcal{L}_H$): An $L_1$ distance is computed between the estimated head pose $R_d$ and the one predicted by a pre-trained head pose estimator. This supervision is only as good as the pre-trained head pose estimator.
- Deformation Prior Loss ($\mathcal{L}_\Delta$): This loss is simply given as the $L_1$ norm of the deformations, i.e., $\mathcal{L}_\Delta = \lVert \delta_{d,k} \rVert_1$. (A minimal sketch of the two prior terms follows this list.)
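Of the six terms, the two prior losses are simple enough to sketch directly. The threshold, the reductions, and the exact penalty form below are assumptions for illustration; the paper defines the precise formulation.

```python
import torch

def keypoint_prior_loss(x, threshold=0.1):
    """Penalize keypoint pairs that are closer than `threshold`. x: (B, K, 3)."""
    d = torch.cdist(x, x)                               # (B, K, K) pairwise distances
    penalty = torch.relu(threshold - d)                 # nonzero only for pairs below the threshold
    off_diag = 1.0 - torch.eye(x.shape[1], device=x.device)
    return (penalty * off_diag).sum(dim=(1, 2)).mean()

def deformation_prior_loss(delta):
    """L1 norm of the expression deformations. delta: (B, K, 3)."""
    return delta.abs().mean()

# Hypothetical usage
x_d, delta_d = torch.randn(2, 20, 3), torch.randn(2, 20, 3)
loss = keypoint_prior_loss(x_d) + deformation_prior_loss(delta_d)
```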
The models are trained in a coarse-to-fine fashion. Using the Adam optimizer with a learning rate of 0.0002, the model is first trained on 256x256 resolution images for 100 epochs and then fine-tuned on 512x512 resolution images for 10 epochs.
Conclusion
The results of this research are really quite promising. The techniques we talked about here resulted in a 10X bandwidth reduction and, if you had a chance to look at the video in our introduction, you can see that the video quality is incredibly high considering that reduction. Models like this could make it possible to democratize access to video conferencing and reduce strain on networks, especially in residential and rural areas where bandwidth is already harder to come by.
The paper is packed with implementation details, failure modes, and other nitty-gritty. I highly recommend going through the paper, especially the appendix.
I would also like to thank Justin Tenuto for his edits.