For reasons we won't belabor, video conferencing has gained a tremendous user base in the last year. But despite its rise, it's not accessible to many because of the high network bandwidth required to carry both video and speech in real time. Deep learning techniques (especially GANs) can deliver high-quality video at much lower bit rates by synthesizing frames rather than transmitting them in full.
But before we dig into this research on a really interesting application of deep learning called talking head synthesis, we recommend checking out the brief video below. It'll help anchor the research as we dig a bit deeper into "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (Wang et al., 2020).

Project Page | Paper | Online Demo

Overview of the Proposed Method

Let's first level-set on the notation we'll be using and get a little clarity on the goal of this research. From the paper:
Let s be an image of a person, referred to as the source image. Let \{d_1, d_2,...,d_N\} be a talking-head video, called the driving video, where d_i’s are the individual frames, and N is the total number of frames.
Our goal is to generate an output video \{y_1, y_2,...,y_N\}, where the identity in y_i’s is inherited from s and the motions are derived from d_i’s.
Depending on s (i.e. the image of the person), the goal can be either of two broader deep learning tasks:
To inherit the features from the source image and control the novel synthesis of the talking head, the authors devised an unsupervised approach for learning a set of 3D keypoints and their decomposition.
The proposed method can be divided into three major steps:
The beauty of the proposed solution is the joint training of all of these architectures across all three stages. We will look into the training details in a moment, but let's first quickly look at the architectural design for source image feature extraction.

Source Image Feature Extraction

Figure: Different features extracted from the source image. (Source)
Four separate neural networks (well, three distinct architectures, since two of them share a backbone, as we'll see below) are used to extract identity-specific information. Digging in a bit:

3D Appearance Feature Extraction (F):

Using a neural network F, the source image s is mapped to a 3D appearance feature volume f_s. The network F consists of multiple downsampling blocks followed by a number of 3D residual blocks to compute the 3D feature volume f_s.
Figure: Architectural design of F. (Source)
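To make the shapes concrete, here's a minimal PyTorch sketch of such an extractor: a stack of 2D downsampling convolutions, a 1x1 convolution that lifts the result into a 3D volume, and a few 3D residual blocks. The layer counts, channel widths, and volume depth are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class Residual3D(nn.Module):
    """A plain 3D residual block (illustrative, not the exact paper block)."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.block(x))

class AppearanceFeatureExtractor(nn.Module):
    """Sketch of F: 2D downsampling, reshape into a 3D volume, 3D residual blocks."""
    def __init__(self, out_channels=32, depth=16, num_res_blocks=6):
        super().__init__()
        self.down = nn.Sequential(                        # 256x256 image -> 64x64 feature map
            nn.Conv2d(3, 64, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.to_volume = nn.Conv2d(256, out_channels * depth, 1)  # lift to C*D channels
        self.out_channels, self.depth = out_channels, depth
        self.res3d = nn.Sequential(*[Residual3D(out_channels) for _ in range(num_res_blocks)])

    def forward(self, source_image):                      # (B, 3, 256, 256)
        x = self.down(source_image)                       # (B, 256, 64, 64)
        x = self.to_volume(x)                             # (B, C*D, 64, 64)
        b, _, h, w = x.shape
        x = x.view(b, self.out_channels, self.depth, h, w)  # (B, C, D, 64, 64)
        return self.res3d(x)                              # f_s: 3D appearance feature volume

f_s = AppearanceFeatureExtractor()(torch.randn(1, 3, 256, 256))
print(f_s.shape)  # torch.Size([1, 32, 16, 64, 64])
```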

3D Canonical Keypoint Extraction (L):

Using a canonical 3D keypoint detection network L, a set of K canonical 3D keypoints x_{c, k} \in \R^3 and their Jacobians J_{c, k} \in \R^{3 \times 3} are extracted from s.
The Jacobians represent how a local patch around the keypoint can be transformed into a patch in another image via an affine transformation.
The authors have used a U-Net style encoder-decoder to extract canonical keypoints.
Figure: Architectural design of L. (Source)
Our canonical keypoints are formulated to be independent of the pose and expression change. They should only contain a person’s geometry signature, such as the shapes of face, nose, and eyes.
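Below is a toy PyTorch stand-in for L to illustrate the output shapes: a small convolutional trunk (standing in for the paper's U-Net style encoder-decoder) predicts a 3D heatmap per keypoint, the keypoints x_{c,k} are read off with a soft-argmax, and a separate head regresses the 3x3 Jacobians J_{c,k}. The trunk, heads, and the value of K here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def soft_argmax_3d(heatmaps):
    """Expected 3D coordinate under a softmax heatmap, one keypoint per channel.
    heatmaps: (B, K, D, H, W) -> keypoints in [-1, 1]^3, shape (B, K, 3)."""
    b, k, d, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.view(b, k, -1), dim=-1).view(b, k, d, h, w)
    zs = torch.linspace(-1, 1, d, device=heatmaps.device)
    ys = torch.linspace(-1, 1, h, device=heatmaps.device)
    xs = torch.linspace(-1, 1, w, device=heatmaps.device)
    z = (probs.sum(dim=(3, 4)) * zs).sum(-1)   # expectation along depth
    y = (probs.sum(dim=(2, 4)) * ys).sum(-1)   # expectation along height
    x = (probs.sum(dim=(2, 3)) * xs).sum(-1)   # expectation along width
    return torch.stack([x, y, z], dim=-1)

class CanonicalKeypointDetector(nn.Module):
    """Toy stand-in for L: predicts K heatmaps (for x_{c,k}) and K Jacobians J_{c,k}."""
    def __init__(self, num_kp=20, depth=16):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for the U-Net encoder-decoder
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.heatmap_head = nn.Conv2d(128, num_kp * depth, 1)   # a 3D heatmap per keypoint
        self.jacobian_head = nn.Conv2d(128, num_kp * 9, 1)      # 3x3 Jacobian entries per keypoint
        self.num_kp, self.depth = num_kp, depth

    def forward(self, image):
        feat = self.backbone(image)
        b, _, h, w = feat.shape
        heat = self.heatmap_head(feat).view(b, self.num_kp, self.depth, h, w)
        x_c = soft_argmax_3d(heat)                                        # (B, K, 3)
        J_c = self.jacobian_head(feat).mean(dim=(2, 3)).view(b, self.num_kp, 3, 3)
        return x_c, J_c

x_c, J_c = CanonicalKeypointDetector()(torch.randn(1, 3, 256, 256))
print(x_c.shape, J_c.shape)  # torch.Size([1, 20, 3]) torch.Size([1, 20, 3, 3])
```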

Head Pose (H) and Expression Extraction (\triangle):

A pose estimation network H is used to estimate the head pose of the person in s. It is parameterized by a rotation matrix R_s \in \R^{3 \times 3} and a translation vector t_s \in \R^3. In practice, the rotation matrix R_s is composed of three rotation matrices corresponding to the yaw, pitch, and roll angles.
An expression deformation estimation network \triangle is used to estimate the deformation of the keypoints from the neutral expression, giving K 3D deformations \delta_{s, k}.
Note that the authors have used a common backbone with shared weights for both H and \triangle, as is evident from the proposed architecture.
Figure: Architectural design of H and \triangle. (Source)
Note: The same architecture is used to extract motion-related information from the driving video.
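Here is a hedged sketch of what a shared-backbone H/\triangle network could look like: one convolutional trunk feeding separate heads for the yaw, pitch, and roll angles, the translation t, and the K keypoint deformations \delta_k. The backbone, head design, and Euler-angle composition order are illustrative assumptions, not necessarily the paper's exact architecture.

```python
import torch
import torch.nn as nn

def euler_to_rotation(yaw, pitch, roll):
    """Compose a rotation matrix from batched yaw, pitch, roll angles (radians)."""
    cy, sy = torch.cos(yaw), torch.sin(yaw)
    cp, sp = torch.cos(pitch), torch.sin(pitch)
    cr, sr = torch.cos(roll), torch.sin(roll)
    zero, one = torch.zeros_like(yaw), torch.ones_like(yaw)
    Ry = torch.stack([cy, zero, sy, zero, one, zero, -sy, zero, cy], -1).view(-1, 3, 3)
    Rx = torch.stack([one, zero, zero, zero, cp, -sp, zero, sp, cp], -1).view(-1, 3, 3)
    Rz = torch.stack([cr, -sr, zero, sr, cr, zero, zero, zero, one], -1).view(-1, 3, 3)
    return Rz @ Rx @ Ry   # composition order is a convention, chosen here for illustration

class PoseAndExpressionNet(nn.Module):
    """Shared backbone with separate heads for head pose (H) and deformations (Δ)."""
    def __init__(self, num_kp=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.angles = nn.Linear(128, 3)                 # yaw, pitch, roll
        self.translation = nn.Linear(128, 3)            # t
        self.deformation = nn.Linear(128, num_kp * 3)   # δ_k for each keypoint

    def forward(self, image):
        feat = self.backbone(image)
        yaw, pitch, roll = self.angles(feat).unbind(-1)
        R = euler_to_rotation(yaw, pitch, roll)         # (B, 3, 3)
        t = self.translation(feat)                      # (B, 3)
        delta = self.deformation(feat).view(feat.shape[0], -1, 3)  # (B, K, 3)
        return R, t, delta

R_s, t_s, delta_s = PoseAndExpressionNet()(torch.randn(1, 3, 256, 256))
```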
Using the information from all three architectures, the authors have proposed a transformation T to obtain the final 3D keypoints x_{s, k} and their Jacobians J_{s, k} for the source image. T_x is applied to the keypoints and T_j to the Jacobians such that:
x_{s, k} = T_x(x_{c, k}, R_s, t_s, \delta_{s, k}) \equiv R_sx_{c, k} + t_s + \delta_{s, k}
J_{s, k} = T_j(J_{c, k}, R_s) \equiv R_sJ_{c, k}
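These two maps are simple enough to write down directly. The following sketch applies T_x and T_j to batched tensors; the shapes assume K keypoints and are purely illustrative.

```python
import torch

def transform_keypoints(x_c, R, t, delta):
    """T_x: rotate the canonical keypoints, then translate and add the expression deformation.
    x_c: (B, K, 3), R: (B, 3, 3), t: (B, 3), delta: (B, K, 3)."""
    return torch.einsum('bij,bkj->bki', R, x_c) + t.unsqueeze(1) + delta

def transform_jacobians(J_c, R):
    """T_j: rotate the canonical Jacobians. J_c: (B, K, 3, 3)."""
    return torch.einsum('bij,bkjl->bkil', R, J_c)

# Illustrative shapes only
x_c, J_c = torch.randn(1, 20, 3), torch.randn(1, 20, 3, 3)
R, t, delta = torch.eye(3).unsqueeze(0), torch.zeros(1, 3), torch.randn(1, 20, 3)
x_s = transform_keypoints(x_c, R, t, delta)   # equals x_c + delta here, since R = I and t = 0
J_s = transform_jacobians(J_c, R)
```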

Driving Video Feature Extraction

Figure: Different features extracted from the driving video. (Source)
The driving video is used to extract motion-related information. To this end, the head pose estimation network H and the expression deformation estimator network \triangle are used. Note that the 3D feature extractor (F) and the canonical keypoint extraction network (L) are not used. This is in line with the formulated goal. From the paper,
Instead of extracting canonical 3D keypoints from the driving image d using L, we reuse x_{c, k} and J_{c, k}, which were extracted from the source image s. This is because the face in the output image must have the same identity as the one in the source image s. There is no need to compute them again.
Using the identity-specific information (x_{c, k} and J_{c, k}) and the motion-related information, the final 3D keypoints x_{d, k} and their Jacobians J_{d, k} are computed for the driving video. The same transformations T_x and T_j are used such that,
x_{d, k} = T_x(x_{c, k}, R_d, t_d, \delta_{d, k}) \equiv R_dx_{c, k} + t_d + \delta_{d, k}
J_{d, k} = T_j(J_{c, k}, R_d) \equiv R_dJ_{c, k}
These 3D keypoints and their Jacobians are derived for every frame in the driving video. Since the head pose enters this computation explicitly (through R_d and t_d), we can supply a user-specified rotation and translation to change the person's head pose in the output.
Our approach allows manual changes to the 3D head pose during synthesis. Let R_u and t_u be user-specified rotation and translation, respectively. The final head pose in the output image is given by R_d \leftarrow R_uR_d and t_d \leftarrow t_u + t_d. In video conferencing, we can change a person’s head pose in the video stream freely despite the original view angle.
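Here is a small sketch of that redirection, reusing the hypothetical transform_keypoints and euler_to_rotation helpers from the earlier sketches. The driving-side quantities below are dummy values standing in for the outputs of the shared H/\triangle network.

```python
import math
import torch

# Suppose R_d, t_d, delta_d were estimated from a driving frame by the shared H/Δ network,
# and x_c came from the canonical keypoint detector (dummy values here).
R_d, t_d = torch.eye(3).unsqueeze(0), torch.zeros(1, 3)
delta_d, x_c = torch.randn(1, 20, 3), torch.randn(1, 20, 3)

# User-specified redirection: turn the head by ~30 degrees of yaw, keep the translation.
yaw = torch.tensor([math.radians(30.0)])
R_u = euler_to_rotation(yaw, torch.zeros(1), torch.zeros(1))
t_u = torch.zeros(1, 3)

R_d = R_u @ R_d           # R_d ← R_u R_d
t_d = t_u + t_d           # t_d ← t_u + t_d
x_d = transform_keypoints(x_c, R_d, t_d, delta_d)   # driving keypoints under the new pose
```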

Video Synthesis

Figure: Video synthesis pipeline. (Source)
The 3D keypoints and Jacobians extracted from the source image and each driving video frame are used to estimate warping flow maps. A flow map w_k is generated for the k^{th} keypoint using the first-order approximation and is used to warp the source feature volume, giving w_k(f_s) for k \in \{1,2,...,K\}. First Order Motion Model for Image Animation by Siarohin et al. might be a useful read; you can check out this paper's summary by Lavanya Shukla here.
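As a rough illustration of the warping step, the sketch below builds the first-order flow for a single keypoint, T(z) \approx x_{s,k} + J_{s,k}J_{d,k}^{-1}(z - x_{d,k}), and uses it as a sampling grid over the 3D feature volume via grid_sample. The direction conventions, coordinate normalization, and shapes are simplified assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def first_order_flow(grid, x_s, x_d, J_s, J_d):
    """First-order approximation of the backward flow around one keypoint:
    every location z is mapped to x_s + J_s J_d^{-1} (z - x_d).
    grid: (D, H, W, 3) in [-1, 1]; x_*: (B, 3); J_*: (B, 3, 3)."""
    A = J_s @ torch.inverse(J_d)                                   # (B, 3, 3)
    diff = grid.unsqueeze(0) - x_d.view(-1, 1, 1, 1, 3)            # (B, D, H, W, 3)
    return torch.einsum('bij,bdhwj->bdhwi', A, diff) + x_s.view(-1, 1, 1, 1, 3)

def warp_volume(f_s, flow):
    """Warp the 3D appearance volume; the flow acts as a grid_sample sampling grid."""
    return F.grid_sample(f_s, flow, align_corners=True)            # (B, C, D, H, W)

# Illustrative shapes: one keypoint, a 16x64x64 feature volume.
B, C, D, H, W = 1, 32, 16, 64, 64
zz, yy, xx = torch.meshgrid(torch.linspace(-1, 1, D), torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing='ij')
grid = torch.stack([xx, yy, zz], dim=-1)                           # (D, H, W, 3), xyz order
flow_k = first_order_flow(grid, torch.zeros(B, 3), torch.zeros(B, 3),
                          torch.eye(3).unsqueeze(0), torch.eye(3).unsqueeze(0))
warped_k = warp_volume(torch.randn(B, C, D, H, W), flow_k)         # w_k(f_s)
```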
These warped features are fed to a motion field estimator network M, which is a 3D U-Net style network. As shown in the figure, two outputs are estimated by this network: a flow composition mask m and an occlusion map o.
Figure: Architectural design of M. (Source)
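Here is a toy stand-in for M, just to show how its two outputs are used: the mask m softly selects among the per-keypoint flows (w = \sum_k m_k w_k), while o later gates the generator's 2D features. The plain 3D convolutions below replace the paper's 3D U-Net, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MotionFieldEstimator(nn.Module):
    """Toy stand-in for M: from the K warped feature volumes, predict a flow composition
    mask m and a 2D occlusion map o (the paper uses a 3D U-Net instead of this trunk)."""
    def __init__(self, num_kp=20, channels=32, depth=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(num_kp * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.mask_head = nn.Conv3d(64, num_kp, 3, padding=1)          # m: one channel per flow
        self.occlusion_head = nn.Conv2d(64 * depth, 1, 3, padding=1)  # o: 2D occlusion map

    def forward(self, warped_feats):        # (B, K, C, D, H, W): w_k(f_s) for each keypoint k
        b, k, c, d, h, w = warped_feats.shape
        x = self.body(warped_feats.view(b, k * c, d, h, w))
        m = torch.softmax(self.mask_head(x), dim=1)                   # (B, K, D, H, W)
        o = torch.sigmoid(self.occlusion_head(x.view(b, -1, h, w)))   # (B, 1, H, W)
        return m, o

def compose_flow(flows, mask):
    """w = Σ_k m_k · w_k, the mask-weighted combination of per-keypoint flows.
    flows: (B, K, D, H, W, 3), mask: (B, K, D, H, W)."""
    return (flows * mask.unsqueeze(-1)).sum(dim=1)

m, o = MotionFieldEstimator(num_kp=4, channels=8, depth=8)(torch.randn(1, 4, 8, 8, 16, 16))
print(m.shape, o.shape)  # torch.Size([1, 4, 8, 16, 16]) torch.Size([1, 1, 16, 16])
```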
The authors have used a generator network G that takes the warped 3D source feature map w(f_s), first projects it back to 2D features, multiplies them with the occlusion map o, and then applies a series of 2D residual blocks and upsampling layers to obtain the final image.
Figure: Architectural design of G. (Source)
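A hedged sketch of such a generator: the warped volume is reshaped so that depth folds into channels, projected to a 2D feature map, multiplied by the occlusion map o, and then refined and upsampled. Channel widths and block counts are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of G: collapse the warped 3D volume to 2D features, gate them with the
    occlusion map o, then refine with 2D blocks and upsample to an image."""
    def __init__(self, channels=32, depth=16):
        super().__init__()
        self.project = nn.Conv2d(channels * depth, 256, 1)     # 3D volume -> 2D feature map
        self.res = nn.Sequential(
            *[nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True))
              for _ in range(4)]
        )
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, warped_volume, occlusion):   # (B, C, D, H, W), (B, 1, H, W)
        b, c, d, h, w = warped_volume.shape
        feat = self.project(warped_volume.view(b, c * d, h, w))   # back to 2D
        feat = feat * occlusion                                   # mask out occluded regions
        return self.up(self.res(feat))                            # (B, 3, 4H, 4W)

y = Generator()(torch.randn(1, 32, 16, 64, 64), torch.rand(1, 1, 64, 64))
print(y.shape)  # torch.Size([1, 3, 256, 256])
```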
To summarize so far: we have a source image s and a driving video d. The task is to generate an output video y that carries the identity-specific information from s and the motion-related information from d. Different neural networks are used to obtain the identity-specific information, and the same goes for the motion-related information. These pieces of information are used to obtain K 3D keypoints and Jacobians for both s and d.
These keypoints and Jacobians are then used to warp the source appearance feature f_s extracted from s, from which the final output image is generated using the generator network G.
So how did the authors train this system? We'll cover the procedure, the dataset they used, and their losses.

Training the Models

The authors used a dataset of talking-head videos to train their models. They mention the use of VoxCeleb2 and TalkingHead-1KH for evaluation, though it is a bit unclear which dataset they used for training.
For each video, two frames were sampled:
The networks F, L, H, \triangle, M, and G are trained together by minimizing the following loss:
\mathcal{L} = \mathcal{L}_P(d, y) + \mathcal{L}_G(d, y) + \mathcal{L}_E(\{x_{d,k}\}, \{J_{d,k}\}) + \mathcal{L}_L(\{x_{d,k}\}) + \mathcal{L}_H(R_d, \bar{R}_d) + \mathcal{L}_{\triangle}(\{\delta_{d,k}\})
Let's go through each term one-by-one:
The models are trained using a coarse-to-fine technique: the ADAM optimizer with a learning rate of 0.0002 is used to train the model on 256x256 resolution images for 100 epochs, and the result is then fine-tuned on 512x512 resolution images for 10 epochs.
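As a rough picture of what "trained jointly" means, the sketch below puts a subset of the earlier hypothetical modules into a single Adam optimizer at lr = 0.0002 and runs one training step on a sampled frame pair. The loss is a placeholder standing in for the six-term objective above, and in the full system F, M, and G would join the same optimizer.

```python
import torch
import torch.nn as nn

# Reuses CanonicalKeypointDetector, PoseAndExpressionNet, and transform_keypoints
# from the earlier sketches; everything here is illustrative, not the authors' code.
nets = nn.ModuleDict({
    'L': CanonicalKeypointDetector(),
    'HD': PoseAndExpressionNet(),
})
optimizer = torch.optim.Adam(nets.parameters(), lr=2e-4)   # Adam, lr = 0.0002

s, d = torch.randn(2, 1, 3, 256, 256)        # a (source, driving) frame pair from one video
x_c, J_c = nets['L'](s)                      # canonical keypoints come from the source frame
R_d, t_d, delta_d = nets['HD'](d)            # pose and expression come from the driving frame
x_d = transform_keypoints(x_c, R_d, t_d, delta_d)

loss = x_d.pow(2).mean()                     # placeholder for L_P + L_G + L_E + L_L + L_H + L_Δ
optimizer.zero_grad()
loss.backward()
optimizer.step()
```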

Conclusion

The results of this research are really quite promising. The techniques we talked about here resulted in a 10X bandwidth reduction and, if you had a chance to look at the video in our introduction, you can see that the video quality is incredibly high considering that reduction. Models like this could make it possible to democratize access to video conferencing and reduce strain on networks, especially in residential and rural areas where bandwidth is already harder to come by.
The paper is packed with implementation details, failure modes, and other nitty-gritty. I highly recommend going through the paper, especially the appendix.
I would also like to thank Justin Tenuto for his edits.