Search results for: KINECT SENSOR
total: 9
-
Scene Segmentation Basing on Color and Depth Images for Kinect Sensor
Publication: In this paper we propose a method for segmenting single images from a Kinect sensor by considering both color and depth information. The algorithm is based on a series of edge detection procedures designed for particular features of the scene objects. The RGB and HSV color planes are analyzed separately in the first step with a Canny edge detector, resulting in an overall color edge mask. In depth images both clear boundaries and smooth...
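As a rough illustration of the channel-wise edge-detection step described above (the Canny thresholds and the input file name are assumptions, and OpenCV loads images as BGR rather than RGB):

```python
# Illustrative sketch, not the authors' exact pipeline: per-channel Canny
# edge detection over the RGB and HSV planes, merged into one color edge mask.
import cv2
import numpy as np

def color_edge_mask(bgr_image, low=50, high=150):
    """Run Canny on every RGB and HSV plane and OR the results together."""
    hsv_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = np.zeros(bgr_image.shape[:2], dtype=np.uint8)
    for plane in list(cv2.split(bgr_image)) + list(cv2.split(hsv_image)):
        mask = cv2.bitwise_or(mask, cv2.Canny(plane, low, high))
    return mask

# Example usage (file name is a placeholder):
# edges = color_edge_mask(cv2.imread("kinect_rgb_frame.png"))
```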
-
An extension to the FEEDB Multimodal Database of Facial Expressions and Emotions
Publication: FEEDB is a multimodal database that contains recordings of people expressing different emotions, captured using a Microsoft Kinect sensor. The data were originally provided in the device's proprietary format (XED), which requires both the Microsoft Kinect Studio application and a Kinect sensor attached to the system in order to use the files. In this paper, we present an extension of the database. For a selection of recordings, we also provide...
-
CNN Architectures for Human Pose Estimation from a Very Low Resolution Depth Image
Publication: The paper is dedicated to proposing and evaluating a number of convolutional neural network architectures for calculating a multiple regression on the 3D coordinates of human body joints tracked in a single low-resolution depth image. The main challenge was to obtain high precision in the case of a noisy and coarse scan of the body, as observed by a depth sensor from a large distance. The regression network was expected to reason about...
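A minimal PyTorch sketch of the kind of regression network the abstract describes; the layer sizes, the 40x40 input resolution and the 20-joint skeleton are assumptions, not one of the architectures evaluated in the paper:

```python
# Toy CNN mapping one low-resolution depth image to num_joints 3D points.
import torch
import torch.nn as nn

class JointRegressionCNN(nn.Module):
    def __init__(self, num_joints=20):
        super().__init__()
        self.num_joints = num_joints
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),      # adapts to the input resolution
            nn.Linear(256, num_joints * 3),     # (x, y, z) for every joint
        )

    def forward(self, depth):                   # depth: (batch, 1, H, W)
        out = self.regressor(self.features(depth))
        return out.view(-1, self.num_joints, 3)

# joints = JointRegressionCNN()(torch.randn(4, 1, 40, 40))  # -> (4, 20, 3)
```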
-
Very low resolution depth images of 200,000 poses
Open Research Data: The dataset contains simulated images of a depth sensor observing a single human pose, performing 200,000 random gestures. The depth images, stored as vectors of pixels, are accompanied by the ground-truth positions of every relevant joint.
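A hedged sketch of how such records could be held in memory; the resolution, joint count and array layout are assumptions, since the snippet does not specify the file format:

```python
# Flattened depth images paired with ground-truth 3D joint positions.
import numpy as np

N, H, W, J = 200_000, 40, 40, 20          # counts and resolution are assumptions
depth_vectors = np.zeros((N, H * W), dtype=np.float32)   # one pixel vector per pose
joint_positions = np.zeros((N, J, 3), dtype=np.float32)  # (x, y, z) per joint

# A single training sample is then the pair:
# sample = (depth_vectors[i].reshape(H, W), joint_positions[i])
```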
-
Deep neural networks for human pose estimation from a very low resolution depth image
Publication: The work presented in the paper is dedicated to determining and evaluating the most efficient neural network architecture applied as a multiple regression network localizing human body joints in 3D space, based on a single low-resolution depth image. The main challenge was to deal with a noisy and coarse representation of the human body, as observed by a depth sensor from a large distance, and to achieve high localization precision...
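The abstract frames the task as multiple regression on joint coordinates, which in practice usually means minimizing a mean-squared-error loss; the sketch below shows one such training step with a stand-in network and made-up shapes (all hyperparameters are assumptions, not the paper's setup):

```python
# One MSE training step for joint-coordinate regression from depth images.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(40 * 40, 256), nn.ReLU(),
                      nn.Linear(256, 20 * 3))       # stand-in regression network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

depth = torch.randn(32, 1, 40, 40)                  # batch of low-res depth images
target = torch.randn(32, 20 * 3)                    # flattened 3D joint coordinates

prediction = model(depth)                           # -> (32, 60)
loss = loss_fn(prediction, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```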
-
FEEDB: A multimodal database of facial expressions and emotions
Publication: In this paper the first version of the multimodal FEEDB database of facial expressions and emotions is presented. The database contains labeled RGB-D recordings of people expressing a specific set of expressions, recorded using a Microsoft Kinect sensor. Such a database can be used for classifier training and testing in face recognition as well as in the recognition of facial expressions and human emotions. Also, initial experiences...
-
Facial emotion recognition using depth data
Publication: In this paper an original approach is presented for facial expression and emotion recognition based only on the depth channel from a Microsoft Kinect sensor. The emotional user model contains nine emotions, including the neutral one. The proposed recognition algorithm uses local movement detection within the face area in order to recognize the current facial expression. This approach has been validated on the Facial Expressions and Emotions Database...
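The snippet only says that local movements are detected within the face area; one simple way to approximate such a detector on depth frames is per-pixel frame differencing inside the face rectangle, sketched below (the threshold, the millimetre units and the given face box are assumptions, not the paper's algorithm):

```python
# Fraction of face-region pixels whose depth changed between two frames.
import numpy as np

def local_movement(prev_depth, curr_depth, face_box, threshold_mm=15):
    x, y, w, h = face_box                      # face rectangle is assumed given
    prev_roi = prev_depth[y:y + h, x:x + w].astype(np.int32)
    curr_roi = curr_depth[y:y + h, x:x + w].astype(np.int32)
    moved = np.abs(curr_roi - prev_roi) > threshold_mm
    return moved.mean()

# e.g. local_movement(frame_t0, frame_t1, face_box=(120, 80, 96, 96))
```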
-
Acquisition and indexing of RGB-D recordings for facial expressions and emotion recognition
Publication: In this paper the comprehensive KinectRecorder tool is described, which provides convenient and fast acquisition, indexing and storage of RGB-D video streams from a Microsoft Kinect sensor. The application is especially useful as a supporting tool for the creation of fully indexed databases of facial expressions and emotions that can be further used for learning and testing of emotion recognition algorithms for affect-aware applications...
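The real KinectRecorder file format is not described in the snippet; the sketch below only illustrates the general idea of indexed acquisition, writing each RGB-D frame pair to disk together with a CSV index row holding its timestamp and expression label (all file names, shapes and the label are hypothetical):

```python
# Write RGB-D frame pairs to disk and keep a CSV index of the recording.
import csv
import time
import numpy as np

def record_frame(index_writer, frame_id, rgb, depth, label):
    rgb_path, depth_path = f"rgb_{frame_id:06d}.npy", f"depth_{frame_id:06d}.npy"
    np.save(rgb_path, rgb)
    np.save(depth_path, depth)
    index_writer.writerow([frame_id, time.time(), rgb_path, depth_path, label])

with open("recording_index.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame_id", "timestamp", "rgb_file", "depth_file", "label"])
    # The arrays below stand in for frames grabbed from the sensor.
    record_frame(writer, 0, np.zeros((480, 640, 3), np.uint8),
                 np.zeros((480, 640), np.uint16), "smile")
```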
-
Detection of Face Position and Orientation Using Depth Data
Publication: In this paper an original approach is presented for real-time detection of the user's face position and orientation based only on the depth channel from a Microsoft Kinect sensor, which can be used for facial analysis in scenes with poor lighting conditions, where traditional algorithms based on the optical channel may fail. Thus the proposed approach can support, or even replace, algorithms based on the optical channel or based on skeleton...
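For illustration only, and not the paper's method: a common way to estimate face orientation from depth data is to back-project the face region to 3D points and fit a plane, taking the plane normal as the facing direction. The helper below assumes the 3D points are already available in camera coordinates:

```python
# Approximate face orientation as the normal of a plane fitted to face points.
import numpy as np

def face_normal(points_3d):
    """points_3d: (N, 3) array of face-region points in camera coordinates."""
    centered = points_3d - points_3d.mean(axis=0)
    eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered.T))
    normal = eigenvectors[:, 0]                  # smallest-eigenvalue direction
    return normal if normal[2] < 0 else -normal  # point the normal toward camera

# The face position can then be taken as points_3d.mean(axis=0).
```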