
1.1       Background of the Study

Human-computer interaction (commonly referred to as HCI) is the study of the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways (Carroll, 2010).

As a field of research, human-computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their seminal 1983 book, The Psychology of Human-Computer Interaction, although the authors first used the term in 1980 and the first known use was in 1975 (Carroll, 2010). The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many uses, and the interaction between user and computer takes place as an open-ended dialog. The notion of dialog likens human-computer interaction to human-to-human interaction, an analogy which is crucial to theoretical considerations in the field. Humans interact with computers in many ways, and the interface between humans and the computers they use is crucial to facilitating this interaction (Carroll, 2010).

Desktop applications, internet browsers, handheld computers, and computer kiosks make use of the graphical user interfaces (GUI) prevalent today. Voice user interfaces (VUI) are used for speech recognition, and emerging multimodal interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms (Carroll, 2010). Growth in the field of human-computer interaction has been in the quality of interaction and in the branching of its research. Instead of designing regular interfaces, the different research branches have focused on multimodality rather than unimodality, on intelligent adaptive interfaces rather than command/action-based ones, and on active rather than passive interfaces (Carroll, 2010).

The Association for Computing Machinery (ACM) defines human-computer interaction as a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use, and with the study of major phenomena surrounding them (Jacko, 2012). An important facet of HCI is securing user satisfaction (or simply end-user computing satisfaction). Because human-computer interaction studies a human and a machine in communication, it draws on supporting knowledge from both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant (Jacko, 2012). On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant (Jacko, 2012).

A facial expression is one or more motions or positions of the muscles beneath the skin of the face. According to one set of theories, these movements convey the emotional state of an individual to observers. Facial expressions are a form of nonverbal communication. They are a primary means of conveying social information between humans, but they also occur in most other mammals and some other animal species (Carlson, 2010).

Humans can adopt a facial expression voluntarily or involuntarily, and the neural mechanisms responsible for controlling the expression differ in each case. Voluntary facial expressions are often socially conditioned and follow a cortical route in the brain. Conversely, involuntary facial expressions are believed to be innate and follow a subcortical route in the brain (Carlson, 2010).

Facial expression recognition is a process performed by humans or computers which consists of:

        i.            Locating faces in the scene (for example, in an image; this step is also referred to as face detection) (Adolphs, 2002).

      ii.            Extracting facial features from the detected face region (e.g., detecting the shape of facial components or describing the texture of the skin in a facial area; this step is referred to as facial feature extraction) (Adolphs, 2002).

     iii.            Analyzing the motion of facial features and/or the changes in the appearance of facial features, and classifying this information into facial expression interpretative categories such as emotion categories like happiness, sadness and so on (this step is also referred to as facial expression interpretation) (Adolphs, 2002).
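The detection step (i) above is most often implemented with the Viola-Jones algorithm, whose core data structure is the integral image: it lets rectangular Haar-like features be evaluated in constant time. The following NumPy sketch illustrates only that core idea, not the full cascade of boosted weak classifiers; the 4x4 toy image is an illustrative assumption.

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] holds the sum of all pixels above and to
    the left of (y, x); a zero row/column is padded on for easy lookups."""
    s = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(s, ((1, 0), (1, 0)), mode="constant")

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x),
    recovered from the integral image with just four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """A two-rectangle Haar-like feature: left half minus right half.
    Viola-Jones thresholds thousands of such responses per window."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Toy 4x4 "image": bright left half, dark right half.
img = np.array([[9, 9, 1, 1]] * 4, dtype=float)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 4, 4))       # 80.0 (sum of all pixels)
print(haar_two_rect(ii, 0, 0, 4, 4))  # 64.0 (strong left-right contrast)
```

In a full detector, many such features are computed over a sliding window at multiple scales, and a cascade of thresholded features rejects non-face windows early.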

An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes, or learns, based on its inputs and outputs. ANNs are considered nonlinear statistical data modeling tools with which complex relationships between inputs and outputs are modeled or patterns are found. An ANN is also known simply as a neural network (Cireşan, Meier, Masci and Schmidhuber, 2012).

An ANN has several advantages, but one of the most recognized is that it can learn from observed data sets. In this way, an ANN is used as a random function approximation tool. Such tools help estimate the most cost-effective and ideal methods for arriving at solutions while defining computing functions or distributions. An ANN takes data samples rather than entire data sets to arrive at solutions, which saves both time and money. ANNs are considered fairly simple mathematical models that enhance existing data analysis technologies (Cireşan, Meier, Masci and Schmidhuber, 2012).

ANNs have three interconnected layers. The first layer consists of input neurons. Those neurons send data on to the second (hidden) layer, which in turn sends information to the output neurons (third layer). Training an artificial neural network involves choosing from allowed models, for which there are several associated algorithms such as Levenberg-Marquardt and Scaled Conjugate Gradient (Cireşan, Meier, Masci and Schmidhuber, 2012).
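The three-layer structure just described can be sketched as a small feedforward network trained with plain gradient-descent backpropagation (a simpler trainer than the Levenberg-Marquardt or Scaled Conjugate Gradient algorithms cited above). The XOR data set here is a toy stand-in for real feature vectors, and the layer sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: XOR, a classic problem no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer sizes: 2 input neurons -> 8 hidden neurons -> 1 output neuron.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(4000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # output-layer activations
    # Backpropagate the squared-error gradient through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel()))      # predictions after training
```

For a multi-class problem such as the four expressions in this study, the single output neuron would be replaced by one output neuron per class.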

In this research, a system will be trained to recognize the facial expression of an individual in an image. To achieve this, four different facial expressions (Neutral, Happy, Sad and Surprise) from the Japanese Female Facial Expression (JAFFE) database will be used. The Viola-Jones object detection algorithm will be used to detect the face in the image and to select, from the detected face, the upper-face action-unit features (the eyes, comprising the eyelids, eyebrows, the upper part of the nose and the eye position) and the lower-face action-unit features (the nose and the mouth, comprising the lips, teeth and jaw). The Discrete Cosine Transform (DCT) will be used for feature extraction, and the extracted feature vectors will then be used to train the system with an ANN.
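The DCT feature-extraction step can be sketched directly in NumPy: build an orthonormal DCT-II basis, transform the face region, and keep only the low-frequency (top-left) coefficients, where most of the image energy concentrates. The 32x32 crop size and the 8x8 block of retained coefficients below are illustrative assumptions, not values fixed by this study.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: row k is the k-th cosine basis vector."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)   # DC row scaled so the basis is orthonormal
    return c

def dct2(block):
    """2-D DCT of a square block via two separable 1-D transforms."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def dct_features(face, keep=8):
    """Feature vector: the top-left keep-by-keep DCT coefficients,
    i.e. the low spatial frequencies carrying most of the energy."""
    return dct2(face)[:keep, :keep].ravel()

face = np.random.default_rng(1).random((32, 32))  # stand-in for a face crop
feats = dct_features(face, keep=8)
print(feats.shape)  # (64,)
```

Each detected face region would be transformed this way, and the resulting 64-dimensional vectors fed to the ANN for training and classification.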

1.2       Statement of the Problem

Many factors contribute to conveying the emotions of an individual; pose, speech, facial expressions, behavior and actions are some of them. Among these factors, facial expressions are of particular importance since they are easily perceptible (Chavan and Kulkarni, 2013). When communicating with other humans, the emotions of another person can be recognized with a considerably high level of accuracy from facial expressions (Chavan and Kulkarni, 2013).

Many factors impinge upon the ability of an individual to identify emotional expressions. Social factors, such as deception and display rules, affect one's perception of another's emotional state. Therefore, there is a need to develop a Facial Expression Recognition System (FERS).

Though numerous approaches have been taken to this topic, some limitations still exist. Some researchers have constrained their approaches by using either the upper-face action units or the lower-face action units of the person in the emotion classification phase (Chavan and Kulkarni, 2013). It is believed that if both the upper-face and lower-face action units are combined, a higher accuracy will be achieved.

1.3       Aim and Objectives

The aim of this research is to recognize facial expressions in human-computer interaction using an artificial neural network.

The objectives are to:

        i.            Collect images of different facial expressions.

      ii.            Extract the features in the images.

    iii.            Train the system with the extracted features using ANN.

    iv.            Evaluate the accuracy of the system.
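Objective iv can be made concrete with a confusion matrix over the four expression classes, from which the overall accuracy is the fraction of correctly classified samples. The predictions below are hypothetical and serve only to show the computation.

```python
import numpy as np

LABELS = ["Neutral", "Happy", "Sad", "Surprise"]

def confusion_matrix(true, pred, n_classes=4):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Fraction of samples on the diagonal (correctly classified)."""
    return np.trace(cm) / cm.sum()

# Hypothetical labels for ten test images (indices into LABELS).
true = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2]
pred = [0, 1, 1, 1, 2, 0, 3, 3, 1, 2]
cm = confusion_matrix(true, pred)
print(accuracy(cm))  # 0.8
```

The off-diagonal entries of the matrix also show which expressions the system confuses most often, which is useful when reporting per-class results.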

1.4       Scope and Limitation of the Study

The scope of this research is to classify human facial expressions into emotions, which can be sad, happy, surprise or neutral, using an ANN. The classification will be done by training the system with the features extracted from different facial expression images.

This proposed system is limited to the following:

        i.            Detection of four main basic facial expressions (happy, sad, surprise and neutral).

      ii.            Detection of only human being facial expressions.

    iii.            Detection of only 2 dimensional (2D) images.

    iv.            Non-classification of images where the human uses eyeglasses or any item on the face.

      v.            Capturing of only one sample during image acquisition phase for each testing conducted.

1.5       Significance of the Study

This research will be useful in the following ways:

        i.            It will assist medical doctors in monitoring their patients' emotions to know their stress levels.

      ii.            It will be useful in identifying a person in facial recognition.

    iii.            It will be useful in detecting criminals based on their facial expressions.

1.6       Definition of Terms

        i.            Human-Computer Interaction: the study of the interaction between people and computers.

      ii.            Face: the front part of a human being's head, featuring the eyes, nose and mouth and the surrounding area.

    iii.            Facial Expression: the expression or countenance that seems to an onlooker to be represented by the appearance of a person's face, resulting from specific use of that person's facial muscles.

    iv.            Emotion: a person's internal state of being and involuntary physiological response to an object or a situation, based on or tied to physical state and sensory data.

      v.            Artificial Neural Network: a real or virtual computer system designed to emulate the human brain in its ability to learn and to assess imprecise data.
