3D Face Reconstruction From Volumes of Videos Using a MapReduce Framework

As video blogging grows in popularity, egocentric videos generate enormous volumes of video data that capture a large number of interpersonal social events. Retrieving rich social information, such as human identities, emotions, and other interaction cues, from these massive video data poses significant challenges, and few methods have been proposed so far to address the problem of unlabeled data. In this paper, we present a fully automatic system that recovers both a sparse 3D facial shape and a dense 3D face mesh, from which additional face-related information can be predicted during social communication. First, we localize facial landmarks in 2D videos and recover the sparse 3D shape from motion. Second, we use the recovered sparse 3D shape as a prior for estimating the dense 3D face mesh. To process large collections of social videos in a scalable manner, we implement the proposed system on a MapReduce framework. Tested on the FEI and BU-4DFE face datasets, the system improves time efficiency by 92% and 73%, respectively, with no loss of accuracy.
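To make the map/reduce decomposition of the pipeline concrete, the following is a minimal sketch in Python with NumPy: each video is a key, the map phase emits per-frame 2D landmark detections, and the reduce phase stacks them into a measurement matrix and recovers a rough sparse 3D shape with a rank-3 factorization. The placeholder landmark detector, the 68-point landmark convention, and the Tomasi-Kanade-style factorization are illustrative assumptions, not the authors' actual implementation or cluster setup.

```python
# Minimal sketch of the map/reduce decomposition described in the abstract,
# simulated in plain Python. The landmark detector and the rank-3
# factorization for the sparse shape-from-motion step are placeholders.
import numpy as np
from itertools import groupby

N_LANDMARKS = 68  # common 68-point facial landmark convention (assumption)

def detect_landmarks(frame):
    """Placeholder 2D landmark detector; a real system would call a
    face-alignment model here."""
    rng = np.random.default_rng(abs(hash(frame.tobytes())) % (2**32))
    return rng.random((N_LANDMARKS, 2))

def map_phase(video_id, frames):
    """Map: each worker emits (video_id, per-frame 2D landmarks)."""
    for frame in frames:
        yield video_id, detect_landmarks(frame)

def reduce_phase(video_id, landmark_records):
    """Reduce: stack one video's 2D observations into a measurement matrix
    and recover a rough sparse 3D shape via rank-3 factorization
    (Tomasi-Kanade-style, used here only as an illustrative stand-in)."""
    W = np.vstack([lm.T for lm in landmark_records])       # (2F, N) matrix
    W = W - W.mean(axis=1, keepdims=True)                  # center observations
    _, s, Vt = np.linalg.svd(W, full_matrices=False)
    sparse_shape_3d = (np.diag(np.sqrt(s[:3])) @ Vt[:3]).T  # (N, 3) points
    return video_id, sparse_shape_3d

def run_job(videos):
    """Driver simulating the framework: shuffle map outputs by key, reduce."""
    mapped = [kv for vid, frames in videos.items()
              for kv in map_phase(vid, frames)]
    mapped.sort(key=lambda kv: kv[0])                      # shuffle/sort by key
    results = {}
    for vid, group in groupby(mapped, key=lambda kv: kv[0]):
        _, shape = reduce_phase(vid, [lm for _, lm in group])
        results[vid] = shape
    return results

if __name__ == "__main__":
    # Synthetic 8-frame "videos" stand in for real egocentric footage.
    fake_videos = {f"video_{i}": [np.random.rand(48, 48) for _ in range(8)]
                   for i in range(2)}
    for vid, shape in run_job(fake_videos).items():
        print(vid, shape.shape)  # -> (68, 3) sparse shape per video
```

In an actual deployment the same map and reduce functions would be distributed by the MapReduce runtime across video chunks, which is what enables the reported scalability; the dense mesh fitting step, which takes the sparse shape as a prior, would follow as a further stage and is omitted here.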