Search results for: FACIAL FEATURES - Bridge of Knowledge

  • Facial features extraction for color, frontal images

    Publication

    - Year 2011

    The problem of extracting characteristic facial features is discussed. Several feature-extraction methods for color en face photographs are presented, based mainly on color features of specific regions of the human face. The usefulness of the presented methods was tested on a database of 100 en face photographs.

  • Bimodal Emotion Recognition Based on Vocal and Facial Features

    Emotion recognition is a crucial aspect of human communication, with applications in fields such as psychology, education, and healthcare. Identifying emotions accurately is challenging, as people use a variety of signals to express and perceive emotions. In this study, we address the problem of multimodal emotion recognition using both audio and video signals, to develop a robust and reliable system that can recognize emotions...

    Full text available to download

  • Real-Time Facial Features Detection from Low Resolution Thermal Images with Deep Classification Models

    Deep networks have already shown a spectacular success for object classification and detection for various applications from everyday use cases to advanced medical problems. The main advantage of the classification models over the detection models is less time and effort needed for dataset preparation, because classification networks do not require bounding box annotations, but labels at the image level only. Yet, after passing...

    Full text to download in external service

  • Real-time facial feature tracking in poor quality thermal imagery

    Publication

    Recently, facial feature tracking systems have become more and more popular because of their many possible use cases. Especially in medical applications, the location of the face and facial features is very useful. Many researchers have presented methods to detect and track facial features in visible light. However, facial feature analysis in thermography may also be very advantageous. Some examples of using infrared imagery in medicine include...

    Full text to download in external service

  • Emotion Recognition Based on Facial Expressions of Gamers

    This article presents an approach to emotion recognition based on facial expressions of gamers. With application of certain methods crucial features of an analyzed face like eyebrows' shape, eyes and mouth width, height were extracted. Afterwards a group of artificial intelligence methods was applied to classify a given feature set as one of the following emotions: happiness, sadness, anger and fear. The approach presented in this...

  • Identification of Emotions Based on Human Facial Expressions Using a Color-Space Approach

    Publication

    HCI technology improves human-computer interaction. Such communication can be carried out with the use of emotions, which are visible on the human face from birth. In this paper the Emotion system for detecting and recognizing facial expressions, developed as part of an MSc project, is presented. The system recognizes emotions from webcam video in real time. It is based on color segmentation and morphological operations. The system uses a...

    Full text available to download
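
A color-segmentation pipeline of the kind described above can be sketched in a few lines; the RGB thresholds and the cross-shaped dilation below are generic illustrations, not the system's actual parameters:

```python
import numpy as np

def skin_mask_rgb(img):
    """Very rough skin segmentation with a classic RGB rule of thumb.
    Thresholds are illustrative assumptions, not the system's values."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) \
        & ((r - np.minimum(g, b)) > 15)

def binary_dilate(mask):
    """Cross-shaped (4-neighborhood) morphological dilation via shifts,
    so no image-processing library is required."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # turn on if the pixel above is on
    out[:-1, :] |= mask[1:, :]   # ... or the pixel below
    out[:, 1:] |= mask[:, :-1]   # ... or to the left
    out[:, :-1] |= mask[:, 1:]   # ... or to the right
    return out

# Toy image: a warm, skin-like patch on a black background
img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:4, 2:4] = (200, 120, 90)
mask = skin_mask_rgb(img)
cleaned = binary_dilate(mask)    # grows the region, closing small gaps
```

In a real system the dilation would typically be paired with erosion (opening/closing) to remove noise as well as fill holes.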

  • Selection of Features for Multimodal Vocalic Segments Classification

    Publication

    English speech recognition experiments are presented employing both: audio signal and Facial Motion Capture (FMC) recordings. The principal aim of the study was to evaluate the influence of feature vector dimension reduction for the accuracy of vocalic segments classification employing neural networks. Several parameter reduction strategies were adopted, namely: Extremely Randomized Trees, Principal Component Analysis and Recursive...

    Full text to download in external service
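
One of the reduction strategies named above, Principal Component Analysis, can be sketched as follows on synthetic data; this is a generic illustration, not the study's setup:

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # principal directions
    return Xc @ components.T, components

rng = np.random.default_rng(0)
# Synthetic stand-in for vocalic-segment feature vectors
X = rng.normal(size=(100, 20))
X_red, comps = pca_reduce(X, k=5)        # 20 -> 5 dimensions
```

The reduced vectors would then be fed to the classifier (here, a neural network) in place of the full-dimensional features.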

  • A Parallel Genetic Algorithm for Creating Virtual Portraits of Historical Figures

    In this paper we present a genetic algorithm (GA) for creating hypothetical virtual portraits of historical figures and other individuals whose facial appearance is unknown. Our algorithm uses existing portraits of random people from a specific historical period and social background to evolve a set of face images potentially resembling the person whose image is to be found. We then use portraits of the person's relatives to judge...

    Full text available to download
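
A genetic algorithm of this general shape, reduced to bit-strings instead of face images, might look like the sketch below; the operators and rates are illustrative assumptions:

```python
import random

def evolve(target, pop_size=30, generations=200, seed=1):
    """Minimal elitist GA over bit-strings (a stand-in for evolving images)."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            i = rng.randrange(n)
            child[i] ^= rng.random() < 0.1     # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
```

In the paper's setting, the "target" is unknown, so fitness would instead come from resemblance judgments against relatives' portraits.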

  • Przegląd robotów humanoidalnych

    The article presents an overview of the most popular humanoid robots, highlighting their more important features and comparing their basic characteristics, while taking into account the desired cognitive aspects of the development of robotics. Among the available features of the various humanoid devices on the market, the main distinguishing factors are the number of degrees of freedom, the type of locomotion system used, and the ability to express facial expressions,...

    Full text available to download

  • Pose-Invariant Face Detection by Replacing Deep Neurons with Capsules for Thermal Imagery in Telemedicine

    The aim of this work was to examine the potential of thermal imaging as a cost-effective tool for convenient, non-intrusive remote monitoring of elderly people in different possible head orientations, without imposing specific behavior on users, e.g. looking toward the camera. Illumination- and pose-invariant head tracking is important for many medical applications as it can provide information, e.g. about vital signs, sensory...

    Full text available to download

  • Stress Detection of Children with Autism using Physiological Signals in Kaspar Robot-Based Intervention Studies

    Publication
    • B. Coskun
    • P. Uluer
    • E. Toprak
    • D. E. Barkana
    • H. Kose
    • T. Zorcec
    • B. Robins
    • A. Landowska

    - Year 2022

    This study aims to develop a stress detection system using the blood volume pulse (BVP) signals of children with Autism Spectrum Disorder (ASD) during robot-based intervention. This study presents the heart rate variability (HRV) analysis method to detect stress, where HRV features are extracted from raw BVP signals recorded from an E4 wristband during interaction studies with the social robot Kaspar. Low frequency power...

    Full text to download in external service
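
The HRV feature extraction step described above might be sketched like this; the interval values are made up, and the feature set (SDNN, RMSSD, mean heart rate) is a common time-domain choice rather than the paper's exact list:

```python
import numpy as np

def hrv_features(ibi_ms):
    """Time-domain HRV features from inter-beat intervals in milliseconds.
    Peak detection on the raw BVP wave is assumed to happen upstream."""
    ibi = np.asarray(ibi_ms, dtype=float)
    sdnn = ibi.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))  # beat-to-beat variability
    mean_hr = 60000.0 / ibi.mean()               # beats per minute
    return {"sdnn": sdnn, "rmssd": rmssd, "mean_hr": mean_hr}

feats = hrv_features([820, 810, 835, 790, 805, 840, 815])
```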

  • Influence of Thermal Imagery Resolution on Accuracy of Deep Learning based Face Recognition

    Publication

    Human-system interactions frequently require the retrieval of key context information about the user and the environment. Image processing techniques have been widely applied in this area, providing details about recognized objects, people and actions. Considering remote diagnostics solutions, e.g. non-contact vital signs estimation and smart home monitoring systems that utilize a person’s identity, security is a very important factor....

    Full text available to download

  • Thermal Images Analysis Methods using Deep Learning Techniques for the Needs of Remote Medical Diagnostics

    Publication

    - Year 2020

    Remote medical diagnostic solutions have recently gained more importance due to global demographic shifts and play a key role in the evaluation of health status during an epidemic. Contactless estimation of vital signs with image processing techniques is especially important since it allows for obtaining health status without the use of additional sensors. Thermography enables us to reveal additional details, imperceptible in images acquired...

    Full text available to download

  • Long Distance Vital Signs Monitoring with Person Identification for Smart Home Solutions

    Publication

    - Year 2018

    Imaging photoplethysmography has already been proved successful at short distances (below 1 m). However, most real-life use cases of measuring vital signs require the system to work at longer distances, to be both more reliable and more convenient for the user. The possible scenarios that system designers must have in mind include monitoring of the vital signs of residents in nursing homes, disabled people, who...

    Full text available to download

  • Normalization of face illumination using basic knowledge and information extracted from a single image

    Publication

    This paper presents a method for face image normalization that can be applied to the extraction of illumination invariant facial features or used to remove bad lighting effects and produce high-quality, photorealistic results. Most of the existing approaches concentrate on separating the constant albedo from the variable light intensity; that concept, however, is based on the Lambertian model, which fails in the presence of specularities...

    Full text to download in external service

  • Face with Mask Detection in Thermal Images Using Deep Neural Networks

    Publication

    As the interest in facial detection grows, especially during a pandemic, solutions are sought that will be effective and bring more benefits. This is the case with the use of thermal imaging, which is resistant to environmental factors and makes it possible, for example, to determine the temperature based on the detected face, which brings new perspectives and opportunities to use such an approach for health control purposes. The...

    Full text available to download

  • An extension to the FEEDB Multimodal Database of Facial Expressions and Emotions

    Publication
    • M. Szwoch
    • L. Marco-gimenez
    • M. Arevalillo-herráez
    • A. Ayesh

    - Year 2015

    FEEDB is a multimodal database that contains recordings of people expressing different emotions, captured by using a Microsoft Kinect sensor. Data were originally provided in the device’s proprietary format (XED), requiring both the Microsoft Kinect Studio application and a Kinect sensor attached to the system to use the files. In this paper, we present an extension of the database. For a selection of recordings, we also provide...

    Full text to download in external service

  • Database of speech and facial expressions recorded with optimized face motion capture settings

    The broad objective of the present research is the analysis of spoken English employing a multiplicity of modalities. An important stage of this process, discussed in the paper, is creating a database of speech accompanied with facial expressions. Recordings of speakers were made using an advanced system for capturing facial muscle motion. A brief historical outline, current applications, limitations and the ways of capturing face...

    Full text available to download

  • Limitations of Emotion Recognition from Facial Expressions in e-Learning Context

    Publication

    The paper concerns technology of automatic emotion recognition applied in e-learning environment. During a study of e-learning process the authors applied facial expressions observation via multiple video cameras. Preliminary analysis of the facial expressions using automatic emotion recognition tools revealed several unexpected results, including unavailability of recognition due to face coverage and significant inconsistency...

    Full text to download in external service

  • Facial emotion recognition using depth data

    Publication

    - Year 2015

    In this paper an original approach is presented for facial expression and emotion recognition based only on depth channel from Microsoft Kinect sensor. The emotional user model contains nine emotions including the neutral one. The proposed recognition algorithm uses local movements detection within the face area in order to recognize actual facial expression. This approach has been validated on Facial Expressions and Emotions Database...

    Full text to download in external service

  • FEEDB: A multimodal database of facial expressions and emotions

    Publication

    - Year 2013

    In this paper a first version of a multimodal FEEDB database of facial expressions and emotions is presented. The database contains labeled RGB-D recordings of people expressing a specific set of expressions that have been recorded using Microsoft Kinect sensor. Such a database can be used for classifier training and testing in face recognition as well as in recognition of facial expressions and human emotions. Also initial experiences...

    Full text to download in external service

  • An unorthodox view on the problem of tracking facial expressions

    Publication

    - Year 2014

    Recent developments in imaging cameras have opened a new way of analyzing facial expressions. We would like to take advantage of this new technology and present a method of imaging and processing images of the human face as a response to particular stimuli. The response in this case is represented by the facial expressions, and the stimuli are still images representing six basic emotions according to Ekman. The working hypothesis...

    Full text to download in external service

  • ALOFON corpus

    The ALOFON corpus is one of the multimodal databases of word recordings in English available at http://www.modality-corpus.org/. The ALOFON corpus is oriented towards the recording of speech equivalence variants. For this purpose, a total of 7 people who are native speakers of English, or who speak it with native-speaker fluency, and a variety of Standard Southern British...

  • WEB-CAM AS A MEANS OF INFORMATION ABOUT EMOTIONAL ATTEMPT OF STUDENTS IN THE PROCESS OF DISTANT LEARNING

    Publication

    - Year 2014

    New methods in education are becoming more popular nowadays. Distance learning is a good example, in which teacher and student meet in a virtual environment. Because interaction in this virtual world might be complicated, it seems necessary to provide as many means as possible of confirming that the student is still engaged in the learning process. We would like to present the assumption that by means of a web-cam we will be able to track facial...

  • Acquisition and indexing of RGB-D recordings for facial expressions and emotion recognition

    Publication

    In this paper the comprehensive KinectRecorder tool is described, which provides convenient and fast acquisition, indexing and storing of RGB-D video streams from the Microsoft Kinect sensor. The application is especially useful as a supporting tool for the creation of fully indexed databases of facial expressions and emotions that can be further used for learning and testing of emotion recognition algorithms for affect-aware applications....

    Full text to download in external service

  • Analysis of the influence of external conditions on temperature readings in thermograms and adaptive adjustment of the measured temperature value

    Measuring human temperature is a crucial step in preventing the spread of diseases such as COVID-19. For the proper operation of an automatic body temperature measurement system throughout the year, it is necessary to consider outdoor conditions. In this paper, the effect of atmospheric factors on facial temperature readings using infrared thermography is investigated. A thorough analysis of the variation of facial temperature...

    Full text to download in external service
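
An adaptive adjustment of a measured temperature for outdoor conditions could, in its simplest form, look like the sketch below; the linear model and its coefficients are hypothetical, not values taken from the paper:

```python
def adjust_temperature(t_measured, t_ambient, t_ref=20.0, k=0.05):
    """Hypothetical linear compensation: nudge the thermogram reading by
    how far the ambient temperature is from an indoor reference.
    Both k and t_ref are illustrative, not values from the paper."""
    return t_measured + k * (t_ref - t_ambient)

# A face reading taken outdoors at 5 degrees C gets corrected upward slightly
corrected = adjust_temperature(35.2, t_ambient=5.0)
```

A deployed system would fit such a correction (likely nonlinear, and including humidity and wind) from calibration data collected across the year.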

  • Emotion Recognition for Affect Aware Video Games

    In this paper the idea of affect-aware video games is presented. A brief review of automatic multimodal recognition of affect from facial expressions and emotions is given. The first results of emotion recognition using depth data, as well as a prototype affect-aware video game, are presented.

    Full text to download in external service

  • Detection of Face Position and Orientation Using Depth Data

    Publication

    In this paper an original approach is presented for real-time detection of the user's face position and orientation based only on the depth channel from a Microsoft Kinect sensor, which can be used for facial analysis in scenes with poor lighting conditions, where traditional algorithms based on the optical channel may fail. Thus the proposed approach can support, or even replace, algorithms based on the optical channel or on the skeleton...

    Full text to download in external service

  • Robot Eye Perspective in Perceiving Facial Expressions in Interaction with Children with Autism

    Publication

    The paper concerns automatic facial expression analysis applied in a study of natural “in the wild” interaction between children with autism and a social robot. The paper reports a study that analyzed the recordings captured via a camera located in the eye of a robot. Children with autism exhibit a diverse level of deficits, including ones in social interaction and emotional expression. The aim of the study was to explore the possibility...

    Full text to download in external service

  • Facial feature extraction from a monochromatic picture.

    Publication

    - Year 2009

    Face pose determination represents an important area of research in Human Machine Interaction. In this paper, I describe a new method of extracting facial feature locations from a single monochromatic monocular camera for the purpose of estimating and tracking the three dimensional pose of human face and eye-gaze direction.

  • On Facial Expressions and Emotions RGB-D Database

    Publication

    - Year 2014

    The goal of this paper is to present the idea of creating a reference database of RGB-D video recordings for the recognition of facial expressions and emotions. Two different formats of the recordings, used to create two versions of the database, are described and compared using different criteria. Examples of first applications using the databases are also presented to evaluate their usefulness.

    Full text to download in external service

  • Evaluation Criteria for Affect-Annotated Databases

    In this paper a set of comprehensive evaluation criteria for affect-annotated databases is proposed. These criteria can be used to evaluate the quality of a database at the stage of its creation, as well as to evaluate and compare existing databases. The usefulness of these criteria is demonstrated on several databases selected from the affective computing domain. The databases contain different kinds of data: video or still...

    Full text to download in external service

  • Intelligent video and audio applications for learning enhancement

    The role of computers in school education is briefly discussed. The history of multimodal interface development is briefly reviewed. Examples of applications of multimodal interfaces for learners with special educational needs are presented, including an interactive electronic whiteboard based on video image analysis, an application for controlling computers with facial expressions, and a speech-stretching audio interface representing the audio modality....

    Full text to download in external service

  • Intelligent multimedia solutions supporting special education needs.

    The role of computers in school education is briefly discussed. The history of multimodal interface development is briefly reviewed. Examples of applications of multimodal interfaces for learners with special educational needs are presented, including an interactive electronic whiteboard based on video image analysis, an application for controlling computers with facial expressions, and a speech-stretching audio interface representing the audio modality....

  • Biometryczna kontrola dostępu

    The algorithm for detecting and identifying a person on the basis of facial nodal points is described in detail. The concepts of biometrics, the biometric measurement process, methods of biometric identification, and access control are defined. A description of the developed biometric identification system employing artificial neural networks is presented. The results of the research are given and thoroughly discussed. Biometrics is the study of automated...

  • For Your Eyes Only – Biometric Protection of PDF Documents

    Publication

    The paper introduces a concept of a digital document content encryption/decryption with facial biometric data coming from a legitimate user. Access to the document content is simple and straightforward, especially during collaborative work with mobile devices equipped with cameras. Various contexts of document exchange are presented with regard to the next generation pro-active digital documents proposed by authors. An important...

    Full text available to download

  • Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions

    The paper concerns accuracy of emotion recognition from facial expressions. As there are a couple of ready off-the-shelf solutions available in the market today, this study aims at practical evaluation of selected solutions in order to provide some insight into what potential buyers might expect. Two solutions were compared: FaceReader by Noldus and Xpress Engine by QuantumLab. The performed evaluation revealed that the recognition...

    Full text to download in external service

  • Using Physiological Signals for Emotion Recognition

    Publication

    - Year 2013

    Recognizing the user’s emotions is a promising area of research in the field of human-computer interaction. It is possible to recognize emotions using facial expressions, audio signals, body poses, gestures, etc., but physiological signals are particularly useful in this field because they are spontaneous and not controllable. In this paper the problem of using physiological signals for emotion recognition is presented. The kinds of physiological...

    Full text to download in external service

  • Super-resolved Thermal Imagery for High-accuracy Facial Areas Detection and Analysis

    In this study, we evaluate various Convolutional Neural Networks based Super-Resolution (SR) models to improve facial areas detection in thermal images. In particular, we analyze the influence of selected spatiotemporal properties of thermal image sequences on detection accuracy. For this purpose, a thermal face database was acquired for 40 volunteers. Contrary to most of existing thermal databases of faces, we publish our dataset...

    Full text available to download

  • Assessing the attractiveness of human face based on machine learning

    Publication

    The attractiveness of the face plays an important role in everyday life, especially in the modern world where social media and the Internet surround us. In this study, an attempt to assess the attractiveness of a face by machine learning is shown. Attractiveness is determined by three deep models whose sum of predictions is the final score. Two annotated datasets available in the literature are employed for training and testing...

    Full text available to download

  • AffecTube — Chrome extension for YouTube video affective annotations

    Publication

    - SoftwareX - Year 2023

    The shortage of emotion-annotated video datasets suitable for training and validating machine learning models for facial expression-based emotion recognition stems primarily from the significant effort and cost required for manual annotation. In this paper, we present AffecTube as a comprehensive solution that leverages crowdsourcing to annotate videos directly on the YouTube platform, resulting in ready-to-use emotion-annotated...

    Full text available to download

  • Audio-visual aspect of the Lombard effect and comparison with recordings depicting emotional states.

    In this paper an analysis of audio-visual recordings of the Lombard effect is shown. First, the audio signal is analyzed, indicating the presence of this phenomenon in the recorded sessions. The principal aim, however, was to discuss problems related to extracting differences caused by the Lombard effect that are present in the video, i.e. visible as tension and work of the facial muscles accompanying an increase in the intensity of the articulated...

    Full text to download in external service

  • Emotion Recognition - the need for a complete analysis of the phenomenon of expression formation

    Publication

    This article shows how complex emotions are. This has been proven by the analysis of the changes that occur on the face. The authors present the problem of image analysis for the purpose of identifying emotions. In addition, they point out the importance of recording the phenomenon of the development of emotions on the human face with the use of high-speed cameras, which allows the detection of micro expression. The work that was...

    Full text available to download

  • Emotion monitoring system for drivers

    This article describes a new approach to building a driver monitoring system. Existing systems focus, for example, on tracking eyelid and eyebrow movements that result from fatigue. We propose a different approach based on monitoring the state of emotions. Such a system assumes that by using the emotion model based on our own concept, referred to as the reverse Plutchik’s paraboloid of emotions, the recognition of emotions...

    Full text available to download

  • Remote Estimation of Video-Based Vital Signs in Emotion Invocation Studies

    The goal of this study is to examine the influence of various imitated and video-invoked emotions on vital signs (respiratory and pulse rates). We also analyze the possibility of extracting these signals from sequences acquired with cost-effective cameras. The preliminary results show that the respiratory rate allows for better separation of some emotions than the pulse rate, yet this relation highly depends...

    Full text available to download

  • Face Profile View Retrieval Using Time of Flight Camera Image Analysis

    Publication

    A method for retrieving the profile view of the human face is presented. Depth data from a 3D camera is taken as the input. The preprocessing is, besides standard filtration, extended by a process of filling the holes present in the depth data. The keypoints, defined as the nose tip and the chin, are detected in the user’s face and tracked. Kalman filtering is applied to smooth the coordinates of those points, which...

    Full text to download in external service
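
Kalman smoothing of tracked keypoint coordinates, as mentioned above, can be sketched with a 1-D constant-velocity filter; the noise parameters are illustrative, not the paper's tuning:

```python
import numpy as np

def kalman_smooth(zs, q=1e-3, r=1.0):
    """Filter a noisy 1-D coordinate track with a constant-velocity
    Kalman filter. q and r are illustrative noise parameters."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# Noisy x-coordinate of a keypoint hovering around 10 px
track = kalman_smooth([10.0, 10.4, 9.8, 10.6, 10.1, 9.9, 10.3])
```

Running one such filter per coordinate of each keypoint (nose tip, chin) yields the smoothed trajectories from which the profile view is derived.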

  • Variable length sliding models for banking clients face biometry

    An experiment was organized in 100 bank branches to acquire biometric samples from nearly 5000 clients including face images. A procedure for creating face verification models based on continuously expanding database of biometric samples is proposed, implemented, and tested. The presented model applies to circumstances where it is possible to collect and to take into account new biometric samples after each positive verification...

    Full text available to download
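
A continuously expanding verification model of the kind the abstract describes might be sketched as below; the embedding representation, cosine score, and threshold are assumptions for illustration, not the authors' exact procedure:

```python
from collections import deque

class SlidingFaceModel:
    """Keep at most max_len recent embeddings of a client; each positive
    verification appends the new sample, so the model slides forward."""

    def __init__(self, enrollment, max_len=5, threshold=0.8):
        self.samples = deque(enrollment, maxlen=max_len)
        self.threshold = threshold

    @staticmethod
    def _cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den

    def verify(self, embedding):
        # Best cosine similarity against any stored sample
        score = max(self._cos(embedding, s) for s in self.samples)
        ok = score >= self.threshold
        if ok:                        # expand model after positive verification
            self.samples.append(embedding)
        return ok

model = SlidingFaceModel([[1.0, 0.0, 0.2]], max_len=3)
accepted = model.verify([0.9, 0.1, 0.25])   # close to the enrolled sample
```

The `maxlen` bound gives the "variable length sliding" behavior: old samples fall out as fresh, positively verified ones arrive.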