MultiComp OpenFace 2.0

Over the past few years, there has been increased interest in automatic facial behavior analysis and understanding. We present OpenFace – an open source tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first open source tool capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. The computer vision algorithms at the core of OpenFace demonstrate state-of-the-art results in all of the above-mentioned tasks. Furthermore, our tool runs in real time from a simple webcam, without any specialist hardware.

CE-CLM Facial Landmark Detector

The state of the art in facial landmark detection, the Convolutional Experts Constrained Local Model (CE-CLM) is a new member of the CLM family that uses a novel local detector called the Convolutional Experts Network (CEN). This local detector deals with the varying appearance of landmarks by internally learning an ensemble of detectors, thus modeling landmark appearance prototypes. This is achieved through a Mixture of Experts layer, which consists of decision neurons connected to the final decision layer with non-negative weights. In our experiments we show that this layer is a crucial part of the CEN, which outperforms the previously introduced LNF and SVR local detectors by a large margin. Thanks to this better local detector, CE-CLM outperforms state-of-the-art approaches to facial landmark detection and is both more accurate and more robust, particularly on profile faces.
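To make the Mixture of Experts idea above concrete, here is a minimal NumPy sketch of such a decision layer: several expert detectors each produce a probability-like response for an image patch, and a final decision neuron combines them through non-negative mixing weights. All names, shapes, and the sigmoid choices are illustrative assumptions, not the actual CEN implementation.

```python
import numpy as np

def cen_response(patch_features, expert_weights, mixing_weights):
    """Illustrative Mixture of Experts decision layer (not the real CEN code).

    patch_features: (d,) feature vector for one image patch
    expert_weights: (k, d) matrix, one row per expert detector
    mixing_weights: (k,) non-negative weights connecting the experts
                    to the final decision neuron
    Returns an alignment-probability-like score in (0, 1).
    """
    # Non-negativity of the mixing weights is what lets the experts act
    # as an ensemble of appearance prototypes rather than cancel out.
    assert np.all(mixing_weights >= 0), "mixing weights must be non-negative"
    # Each expert produces a probability-like response via a sigmoid.
    expert_responses = 1.0 / (1.0 + np.exp(-expert_weights @ patch_features))
    # The decision neuron takes a non-negative combination of the experts,
    # squashed back to (0, 1).
    return 1.0 / (1.0 + np.exp(-mixing_weights @ expert_responses))
```

In the full model, a response like this would be evaluated densely around the current landmark estimate to form a response map, which the CLM fitting step then optimizes over jointly for all landmarks.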

CMU Multimodal Data SDK

The CMU Multimodal Data SDK simplifies loading complex multimodal data. In many multimodal datasets, data comes from multiple sources and is processed in different ways. These differences in the nature of the data and in its processing make loading it very challenging, and researchers often find themselves dedicating significant time and energy to loading the data before they can build models. The CMU Multimodal Data SDK allows you to load and align multimodal datasets very easily. These datasets normally come in the form of video segments with labels, and the SDK ships with functionality already implemented for a variety of processed outputs.
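The core alignment problem the SDK solves can be sketched in a few lines: group timestamped feature frames from one modality into labelled video segments. The function below is a simplified illustration of that idea, not the SDK's actual API; the names and tuple layouts are assumptions made for this example.

```python
def align_to_segments(features, segments):
    """Group timestamped feature frames into labelled segments.

    features: list of (timestamp, vector) pairs, sorted by time
    segments: list of (start, end, label) tuples for the video segments
    Returns one (label, frames) entry per segment, where frames is the
    list of feature vectors whose timestamps fall inside the segment.
    """
    aligned = []
    for start, end, label in segments:
        # Half-open interval [start, end) so adjacent segments don't
        # both claim a frame that lands exactly on a boundary.
        frames = [vec for t, vec in features if start <= t < end]
        aligned.append((label, frames))
    return aligned

# Toy usage: three frames, two labelled segments.
feats = [(0.0, "f0"), (0.5, "f1"), (1.2, "f2")]
segs = [(0.0, 1.0, "positive"), (1.0, 2.0, "negative")]
print(align_to_segments(feats, segs))
```

The real SDK additionally handles multiple modalities sampled at different rates, but the per-segment grouping shown here is the essential step.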