One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing
In this report, we will look at the latest work published in CVPR 21 in the domain of one-shot talking-head synthesis.
For reasons we won't belabor, video conferencing has gained a tremendous user base in the last year. But despite its rise, it's not accessible to many because of the high network bandwidth required to carry both video and speech in real time. Deep learning techniques (especially GANs) can deliver high-quality video at much lower bit rates by transmitting a compact learned representation instead of full frames.
But before we dig into this research on a really interesting application of deep learning called talking head synthesis, we recommend checking out the brief video below. It'll help anchor the research as we dig a bit deeper into "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (Wang et al., 2020).
Project Page | Paper | Online Demo
Overview of the Proposed Method
Let's first level-set on the notations that we'll be using and get a little clarity on the goal of this research. From the paper:
Let $s$ be an image of a person, referred to as the source image. Let $\{d_1, d_2, \ldots, d_N\}$ be a talking-head video, called the driving video, where the $d_i$'s are the individual frames, and $N$ is the total number of frames.
Our goal is to generate an output video $\{y_1, y_2, \ldots, y_N\}$, where the identity in the $y_i$'s is inherited from $s$ and the motions are derived from the $d_i$'s.
Depending on the source image $s$ (i.e., the image of the person), the goal can be either of two broader deep learning tasks:
- If the person in the source image ($s$) is the same as in the driving video ($d$), then it's a video reconstruction task. The generated output video ($y$) still takes the identity information from $s$ and the motion information from $d$.
- If the person in $s$ is not the same as in $d$, then it's a motion transfer task.
To inherit the features from the source image and control the novel synthesis of the talking head, the authors devised an unsupervised approach for learning a set of 3D keypoints and their decomposition.
The proposed method can be divided into three major steps:
- Source image feature extraction
- Driving video feature extraction
- Video generation
The beauty of the proposed solution is the joint training of all the architectures, in all three stages. We will look into the training details in a moment, but let's first quickly look at the architectural design for source image feature extraction.
Source Image Feature Extraction

Four neural networks are used to extract identity-specific information (well, three separate architectures really, since two of them share a common backbone, as we'll see below). Digging in a bit:
3D Appearance Feature Extraction ($F$):
Using a neural network $F$, the source image $s$ is mapped to a 3D appearance feature volume $f_s$. The network consists of multiple downsampling blocks followed by a number of 3D residual blocks to compute the 3D feature volume $f_s$.
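To make that concrete, here is a minimal PyTorch sketch of such an extractor. The channel counts, the depth of the feature volume, and the number of residual blocks are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class Residual3D(nn.Module):
    """A basic 3D residual block (layer sizes are illustrative)."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.block(x))

class AppearanceFeatureExtractor(nn.Module):
    """Maps a 2D source image to a 3D appearance feature volume f_s."""
    def __init__(self, depth=16, ch=32):
        super().__init__()
        self.down = nn.Sequential(                      # 2D downsampling blocks
            nn.Conv2d(3, 64, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, depth * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.depth, self.ch = depth, ch
        self.res3d = nn.Sequential(*[Residual3D(ch) for _ in range(6)])

    def forward(self, s):                               # s: (B, 3, H, W)
        x = self.down(s)                                # (B, depth*ch, H/4, W/4)
        B, _, H, W = x.shape
        x = x.view(B, self.ch, self.depth, H, W)        # reshape 2D features into a 3D volume
        return self.res3d(x)                            # f_s: (B, ch, depth, H/4, W/4)

# Example: f_s = AppearanceFeatureExtractor()(torch.randn(1, 3, 256, 256))
```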

3D Canonical Keypoint Extraction ($L$):
Using a canonical 3D keypoint detection network $L$, a set of $K$ canonical 3D keypoints $x_{c,k}$ and their Jacobians $J_{c,k}$ are extracted from the source image $s$.
The Jacobians represent how a local patch around the keypoint can be transformed into a patch in another image via an affine transformation.
The authors have used a U-Net style encoder-decoder to extract canonical keypoints.

Our canonical keypoints are formulated to be independent of the pose and expression change. They should only contain a person’s geometry signature, such as the shapes of face, nose, and eyes.
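The report above doesn't spell out the keypoint head, but a common way networks in this family turn a U-Net's output into coordinates is a "soft-argmax" over per-keypoint 3D heatmaps: each keypoint is the heatmap-weighted average of a coordinate grid. The shapes below and the sketched Jacobian head are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def soft_argmax_3d(heatmaps):
    """heatmaps: (B, K, D, H, W) raw scores, one 3D heatmap per keypoint.
    Returns (B, K, 3) keypoint coordinates in [-1, 1] via a softmax-weighted grid average."""
    B, K, D, H, W = heatmaps.shape
    probs = F.softmax(heatmaps.view(B, K, -1), dim=-1).view(B, K, D, H, W)
    # Normalized coordinate grid along each axis.
    zs = torch.linspace(-1, 1, D).view(1, 1, D, 1, 1)
    ys = torch.linspace(-1, 1, H).view(1, 1, 1, H, 1)
    xs = torch.linspace(-1, 1, W).view(1, 1, 1, 1, W)
    z = (probs * zs).sum(dim=(2, 3, 4))
    y = (probs * ys).sum(dim=(2, 3, 4))
    x = (probs * xs).sum(dim=(2, 3, 4))
    return torch.stack([x, y, z], dim=-1)      # (B, K, 3)

# Hypothetical usage: the U-Net decoder emits K heatmaps, plus extra channels per
# keypoint that can be reshaped into the 3x3 Jacobians J_{c,k}.
heatmaps = torch.randn(2, 20, 16, 64, 64)
x_c = soft_argmax_3d(heatmaps)                 # canonical keypoints x_{c,k}
```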
Head Pose ($H$) and Expression Extraction ($\Delta$):
A pose estimation network $H$ is used to estimate the head pose of the person in $s$, parameterized by a rotation matrix $R_s$ and a translation vector $t_s$. In practice, the rotation matrix is composed of three rotations: yaw, pitch, and roll.
An expression deformation estimation network $\Delta$ is used to estimate the deformations of the keypoints from the neutral expression, giving $K$ 3D deformations $\delta_{s,k}$.
Note that the authors use a common backbone with shared weights for both $H$ and $\Delta$, as is evident from the proposed architecture.

Note: The same architecture is used to extract motion-related information from the driving video.
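As a rough sketch of that shared-backbone design (the backbone layers, head dimensions, and the idea of predicting Euler angles directly are assumptions for illustration; the paper's implementation details differ):

```python
import torch
import torch.nn as nn

def euler_to_rotation(yaw, pitch, roll):
    """Compose a rotation matrix from three angles in radians. (B,) each -> (B, 3, 3).
    Axis conventions here are illustrative."""
    cy, sy = torch.cos(yaw), torch.sin(yaw)
    cp, sp = torch.cos(pitch), torch.sin(pitch)
    cr, sr = torch.cos(roll), torch.sin(roll)
    zero, one = torch.zeros_like(yaw), torch.ones_like(yaw)
    R1 = torch.stack([cy, -sy, zero, sy, cy, zero, zero, zero, one], -1).view(-1, 3, 3)
    R2 = torch.stack([cp, zero, sp, zero, one, zero, -sp, zero, cp], -1).view(-1, 3, 3)
    R3 = torch.stack([one, zero, zero, zero, cr, -sr, zero, sr, cr], -1).view(-1, 3, 3)
    return R1 @ R2 @ R3

class PoseAndExpressionNet(nn.Module):
    """Shared backbone with two heads: head pose (R, t) and per-keypoint deformations delta."""
    def __init__(self, num_kp=20):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for the shared CNN backbone
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.angles = nn.Linear(64, 3)            # yaw, pitch, roll
        self.trans = nn.Linear(64, 3)             # translation t
        self.deform = nn.Linear(64, 3 * num_kp)   # K deformations delta_k

    def forward(self, img):
        feat = self.backbone(img)
        yaw, pitch, roll = self.angles(feat).unbind(-1)
        R = euler_to_rotation(yaw, pitch, roll)               # (B, 3, 3)
        t = self.trans(feat)                                  # (B, 3)
        delta = self.deform(feat).view(feat.shape[0], -1, 3)  # (B, K, 3)
        return R, t, delta

# Example: R, t, delta = PoseAndExpressionNet()(torch.randn(1, 3, 256, 256))
```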
Using the information from all three networks, the authors apply a transformation $T$ to obtain the final 3D keypoints $x_{s,k}$ and their Jacobians $J_{s,k}$ for the source image. The head pose and deformations are applied to the canonical keypoints, and the rotation to the Jacobians, such that:
$$x_{s,k} = R_s\, x_{c,k} + t_s + \delta_{s,k}, \qquad J_{s,k} = R_s\, J_{c,k}$$
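In code, this transformation is just a batched matrix multiply plus offsets. A minimal sketch (tensor shapes are assumptions):

```python
import torch

def transform_keypoints(x_c, J_c, R, t, delta):
    """Apply T to canonical keypoints and Jacobians.
    x_c: (B, K, 3), J_c: (B, K, 3, 3), R: (B, 3, 3), t: (B, 3), delta: (B, K, 3)."""
    x = torch.einsum('bij,bkj->bki', R, x_c) + t[:, None, :] + delta   # x_k = R x_{c,k} + t + delta_k
    J = torch.einsum('bij,bkjl->bkil', R, J_c)                         # J_k = R J_{c,k}
    return x, J

# Example with random tensors (K = 20 keypoints):
B, K = 1, 20
x_s, J_s = transform_keypoints(torch.randn(B, K, 3), torch.randn(B, K, 3, 3),
                               torch.eye(3).expand(B, 3, 3), torch.zeros(B, 3),
                               torch.zeros(B, K, 3))
```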
Driving Video Feature Extraction

The driving video is used to extract motion-related information. To this end, the head pose estimation network $H$ and the expression deformation estimator $\Delta$ are used. Note that the 3D appearance feature extractor ($F$) and the canonical keypoint detection network ($L$) are not used; this is in line with the formulated goal. From the paper,
Instead of extracting canonical 3D keypoints from the driving image using $L$, we reuse $x_{c,k}$ and $J_{c,k}$, which were extracted from the source image $s$. This is because the face in the output image must have the same identity as the one in the source image $s$. There is no need to compute them again.
Using the identity-specific information ($x_{c,k}$ and $J_{c,k}$) and the motion-related information, the final 3D keypoints $x_{d,k}$ and their Jacobians $J_{d,k}$ are computed for the driving video. The same transformations are used, such that:
$$x_{d,k} = R_d\, x_{c,k} + t_d + \delta_{d,k}, \qquad J_{d,k} = R_d\, J_{c,k}$$
These 3D keypoints and Jacobians are derived for every frame in the driving video. Since the identity-specific information is reused when computing them, we can also supply a user-specified rotation and translation to change the person's head pose.
Our approach allows manual changes to the 3D head pose during synthesis. Let $R_u$ and $t_u$ be the user-specified rotation and translation, respectively. The final head pose in the output image is given by $R_d \leftarrow R_u R_d$ and $t_d \leftarrow t_u + t_d$. In video conferencing, we can change a person's head pose in the video stream freely despite the original view angle.
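The free-view control then amounts to composing a user-specified pose with the driving pose before the keypoint transform. A hedged sketch, reusing the hypothetical `euler_to_rotation` and `transform_keypoints` helpers from the earlier snippets:

```python
import math
import torch

# Placeholders standing in for the outputs of H, Delta, and L on real frames.
B, K = 1, 20
R_d, t_d = torch.eye(3).expand(B, 3, 3), torch.zeros(B, 3)
x_c, J_c = torch.randn(B, K, 3), torch.randn(B, K, 3, 3)
delta_d = torch.zeros(B, K, 3)

# Suppose we want the synthesized head turned by 20 degrees (yaw), regardless of the driving pose.
yaw = torch.tensor([math.radians(20.0)])
zero = torch.zeros(1)
R_u = euler_to_rotation(yaw, zero, zero)      # user-specified rotation
t_u = torch.zeros(B, 3)                       # user-specified translation

# Compose the user pose with the driving pose, then transform the canonical keypoints as before.
x_d, J_d = transform_keypoints(x_c, J_c, R_u @ R_d, t_u + t_d, delta_d)
```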
Video Synthesis

The 3D keypoints and Jacobians extracted from the source image and the driving video frame are used to estimate warping flow maps. One flow map $w_k$ is generated per keypoint using the first-order approximation, and these flow fields are used to warp the source feature $f_s$, producing $K$ warped features $w_k(f_s)$. First Order Motion Model for Image Animation by Siarohin et al. might be a useful read. You can check out this paper's summary by Lavanya Shukla here.
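Concretely, in the first-order motion model each keypoint $k$ induces an approximate warp $z \mapsto x_{s,k} + J_{s,k} J_{d,k}^{-1}(z - x_{d,k})$ from driving coordinates back to source coordinates. A minimal sketch of computing these $K$ flow fields over a 3D coordinate grid (shapes and grid resolution are illustrative assumptions):

```python
import torch

def keypoint_flows(x_s, J_s, x_d, J_d, size=(16, 64, 64)):
    """Per-keypoint backward flow fields under the first-order approximation.
    x_s, x_d: (B, K, 3); J_s, J_d: (B, K, 3, 3). Returns (B, K, D, H, W, 3)."""
    B, K, _ = x_s.shape
    D, H, W = size
    # Normalized coordinate grid over the 3D feature volume.
    zz, yy, xx = torch.meshgrid(torch.linspace(-1, 1, D),
                                torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing='ij')
    grid = torch.stack([xx, yy, zz], dim=-1).view(1, 1, D, H, W, 3)
    offset = grid - x_d.view(B, K, 1, 1, 1, 3)                    # z - x_{d,k}
    A = J_s @ torch.linalg.inv(J_d)                               # J_{s,k} J_{d,k}^{-1}
    warped = torch.einsum('bkij,bkdhwj->bkdhwi', A, offset)       # A (z - x_{d,k})
    return warped + x_s.view(B, K, 1, 1, 1, 3)                    # + x_{s,k}
```

Each of these $K$ flow fields is then used to warp $f_s$ (e.g., with `grid_sample`), and the warped copies are what the motion field estimator consumes.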
These warped features are fed to a motion field estimator network $M$, which is a 3D U-Net-style network. As shown in the figure, two outputs are estimated using this network: the flow composition mask $m$ and the occlusion map $o$.

- A softmax activation is used to obtain the flow composition mask $m$, which consists of one 3D mask per flow. These are combined with the per-keypoint warping flow maps to obtain the final composite flow field $w$, which is then used to obtain the warped source feature $w(f_s)$.
- Warping leads to occlusions, so a 2D occlusion mask $o$ is also predicted and fed to the generator.
The authors use a generator network $G$ that takes the warped 3D source feature $w(f_s)$, first projects it back to 2D features, multiplies these with the occlusion mask $o$, and then applies a series of 2D residual blocks and upsampling layers to obtain the final image.
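Putting the composition step together, here's a hedged sketch of how the $K$ flows, the softmax mask, and the occlusion map might interact (the paper's exact tensor layouts may differ):

```python
import torch
import torch.nn.functional as F

def compose_and_warp(f_s, flows, mask_logits, occlusion_logits):
    """f_s: (B, C, D, H, W) source feature volume.
    flows: (B, K, D, H, W, 3) per-keypoint flow fields (grid_sample convention, in [-1, 1]).
    mask_logits: (B, K, D, H, W) raw scores for the flow composition mask.
    occlusion_logits: (B, 1, H, W) raw scores for the 2D occlusion map."""
    m = F.softmax(mask_logits, dim=1).unsqueeze(-1)        # (B, K, D, H, W, 1), sums to 1 over K
    w = (m * flows).sum(dim=1)                             # composite flow field, (B, D, H, W, 3)
    warped = F.grid_sample(f_s, w, align_corners=True)     # warped source feature w(f_s)
    o = torch.sigmoid(occlusion_logits)                    # 2D occlusion mask in [0, 1]
    B, C, D, H, W = warped.shape
    feat2d = warped.view(B, C * D, H, W)                   # project the 3D features back to 2D
    return feat2d * o                                      # generator G decodes this into the image
```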

To summarize so far: we have a source image $s$ and a driving video $d$. The task is to generate an output video $y$ that takes its identity-specific information from $s$ and its motion-specific information from $d$. Separate neural networks are used to obtain the identity-specific information, and likewise for the motion-specific information. These pieces of information are used to obtain 3D keypoints and Jacobians for both $s$ and $d$.
These keypoints and Jacobians are then used to warp the source appearance feature $f_s$ extracted from $s$, from which the final output image is generated using the generator network $G$.
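Stitching the hypothetical helpers from the previous snippets together, the inference path for a single frame reads roughly like this (pseudocode, not the authors' implementation):

```python
def synthesize_frame(s, d, appearance_net, keypoint_net, pose_expr_net, motion_net, generator):
    """Sketch of one inference step: identity from source image s, motion from driving frame d.
    All module and helper names are the hypothetical ones from the snippets above."""
    f_s = appearance_net(s)                              # 3D appearance feature volume f_s
    x_c, J_c = keypoint_net(s)                           # canonical keypoints/Jacobians (source only)
    R_s, t_s, delta_s = pose_expr_net(s)                 # source pose + expression deformations
    R_d, t_d, delta_d = pose_expr_net(d)                 # driving pose + expression deformations

    x_s, J_s = transform_keypoints(x_c, J_c, R_s, t_s, delta_s)
    x_d, J_d = transform_keypoints(x_c, J_c, R_d, t_d, delta_d)

    flows = keypoint_flows(x_s, J_s, x_d, J_d)           # K first-order flow fields
    mask_logits, occlusion_logits = motion_net(f_s, flows)   # motion field estimator M
    feat2d = compose_and_warp(f_s, flows, mask_logits, occlusion_logits)
    return generator(feat2d)                             # generator G decodes the final frame y
```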
So how did the authors train this system? We'll cover the procedure, the dataset they used, and their losses.
Training the Models
The authors used a dataset of talking-head videos to train their models here. They mention the use of VoxCeleb2 and TalkingHead-1KH for evaluation, though it is a bit unclear which dataset they used for training.
For each video, two frames were sampled:
- one would act as the source image $s$,
- and the other as a frame $d$ from the driving video.
The networks $F$, $L$, $H$, $\Delta$, $M$, and $G$ are trained together by minimizing the following loss (a weighted sum of the terms described below):
$$\mathcal{L} = \lambda_P \mathcal{L}_P + \lambda_G \mathcal{L}_G + \lambda_E \mathcal{L}_E + \lambda_L \mathcal{L}_L + \lambda_H \mathcal{L}_H + \lambda_{\Delta} \mathcal{L}_{\Delta}$$
Let's go through each term one-by-one:
- Perceptual Loss ($\mathcal{L}_P$): Perceptual loss is commonly used in image reconstruction tasks. Here's a nice description of this loss function. In short, a pre-trained VGG network is used to extract features from both the ground-truth image and the reconstructed image, and the $L_1$ distance is computed between the features. The features are extracted from multiple hidden layers at varying resolutions. Besides the regular VGG network (trained on ImageNet), the authors also use a pre-trained face VGG network for obvious reasons.
- GAN Loss ($\mathcal{L}_G$): The authors use a patch GAN discriminator along with the hinge loss. Check out this quick summary here.
- Equivariance Loss ($\mathcal{L}_E$): This loss ensures the consistency of the estimated keypoints. Let $x_{d,k}$ be the detected keypoints for the input image $d$. When a known transformation $T$ is applied to the image ($T(d)$), the detected keypoints should be transformed in the same way. The $L_1$ distance between $x_{d,k}$ and $T^{-1}(x_{T(d),k})$ is minimized so that it tends to zero, where $T^{-1}$ is the inverse of the known transform. The same logic applies to the Jacobians of the keypoints.
- Keypoint Prior Loss ($\mathcal{L}_L$): This loss encourages the estimated image-specific keypoints to spread out across the face region instead of crowding around a small neighborhood. The distance between every pair of keypoints is computed and penalized if it falls below a threshold (see the sketch after this list).
- Head Pose Loss ($\mathcal{L}_H$): The $L_1$ distance is computed between the estimated head pose $R_d$ and the pose predicted by a pre-trained estimator $\bar{R}_d$. This supervision is only as good as the pre-trained head pose estimator.
- Deformation Prior Loss ($\mathcal{L}_{\Delta}$): This loss penalizes large expression deformations and is simply given as the $L_1$ norm of the deviations, $\mathcal{L}_{\Delta} = \sum_k \|\delta_{d,k}\|_1$ (also sketched below).
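To make the two prior terms concrete, here is a hedged sketch of how the keypoint prior and deformation prior losses could be implemented (the distance threshold and the reduction over the batch are assumptions, not the paper's exact hyperparameters):

```python
import torch

def keypoint_prior_loss(x, threshold=0.1):
    """Penalize keypoint pairs closer than `threshold`. x: (B, K, 3)."""
    dist = torch.cdist(x, x)                               # pairwise distances, (B, K, K)
    penalty = torch.clamp(threshold - dist, min=0.0)       # only pairs below the threshold contribute
    K = x.shape[1]
    off_diag = 1.0 - torch.eye(K, device=x.device)         # ignore the zero self-distances
    return (penalty * off_diag).sum(dim=(1, 2)).mean()

def deformation_prior_loss(delta):
    """L1 norm of the expression deformations. delta: (B, K, 3)."""
    return delta.abs().sum(dim=-1).mean()

# Example:
x_d = torch.randn(2, 20, 3)
delta_d = 0.01 * torch.randn(2, 20, 3)
loss = keypoint_prior_loss(x_d) + deformation_prior_loss(delta_d)
```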
The models are trained using a coarse-to-fine technique. The Adam optimizer with a learning rate of 0.0002 is used to train the model on 256x256-resolution images for 100 epochs, and the result is then fine-tuned on 512x512-resolution images for 10 epochs.
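In PyTorch terms, the optimizer setup might look like the following minimal sketch (only the learning rate, resolutions, and epoch counts come from the paper; the placeholder modules and the default betas are assumptions):

```python
import itertools
import torch
import torch.nn as nn

# Placeholder modules standing in for F, L, H/Delta, M, and G from the sketches above.
nets = [nn.Linear(8, 8) for _ in range(5)]
params = itertools.chain.from_iterable(n.parameters() for n in nets)

optimizer = torch.optim.Adam(params, lr=2e-4)

# Coarse-to-fine schedule: 100 epochs at 256x256, then fine-tune for 10 epochs at 512x512.
```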
Conclusion
The results of this research are really quite promising. The techniques we talked about here resulted in a 10X bandwidth reduction and, if you had a chance to look at the video in our introduction, you can see that the video quality is incredibly high considering that reduction. Models like this could make it possible to democratize access to video conferencing and reduce strain on networks, especially in residential and rural areas where bandwidth is already harder to come by.
The paper is packed with implementation details, failure modes, and other nitty-gritty. I highly recommend going through the paper, especially the appendix.
I would also like to thank Justin Tenuto for his edits.