EMOTION RECOGNITION FROM FACIAL EXPRESSION USING FUZZY LOGIC 2013-14
CHAPTER 1
INTRODUCTION
1.1 Statement of the problem
In this project, the exact facial expression is identified from a fuzzy domain.
First, identification of the exact facial expression from a blurred facial image is not an
easy task. Second, segmentation of a facial image into regions of interest is difficult,
particularly when the regions do not have significant differences in their imaging
attributes. Third, unlike humans, machines usually do not have visual perception to map
facial expressions into emotions.
1.2 Scope of the problem
This project also proposes a scheme for controlling emotion by judiciously
selecting appropriate audiovisual stimulus for presentation before the subject. The
selection of the audiovisual stimulus is undertaken using fuzzy logic. Experimental results
show that the proposed control scheme has good accuracy and repeatability: the detection
accuracies of emotions for adult males, adult females, and children of 8–12 years are as
high as 88%, 92%, and 96%, respectively, outperforming the accuracies of existing
techniques.
1.3 Aim of the project
Currently available human–computer interfaces do not take complete advantage of
valuable communicative media such as facial expressions and hand gestures, and thus are
unable to provide the full benefits of
natural interaction to the users. Human–computer interactions could significantly be
improved if computers could recognize the emotion of the users from their facial
expressions and hand gestures, and react in a friendly manner according to the users’
needs. This project aims to recognize the emotions of human subjects on a computer by
analyzing their facial expressions, segmenting and localizing the individual frames into
regions of interest.
DEPT OF CSE, EPCET Page 1
1.4 Plans for delivering the project goals
The project will involve four stages: analysis and requirements, design,
implementation, and evaluation.
1.4.1 Analysis and requirements
During this stage a study will be conducted on fuzzy logic and C# and .NET
technology with the following objectives:
To obtain an understanding of fuzzy logic.
To learn the C# and .NET technology with the view of using it for emotion
recognition.
To develop a specification for applying fuzzy logic to emotion recognition.
To critically examine previous work conducted on integrating C# and .NET
technology.
1.4.2 Design
During this stage, a design will be developed for the specification conceived in the
previous stage. In particular, the expected outcomes of this stage are:
Identification of the architecture for the system required by the application.
Identification of assumptions.
Research and evaluation of alternatives for the system components.
Documentation of a conceptual design for all system components and overall
design for integration of these components. This may include documentation of
design methodologies.
1.4.3 Implementation
During this stage, the actual implementation of the system design will take place.
Problems in implementing the original design will be identified and design modifications
conceived and tested. The outcome of this stage should be a working implementation
that meets the specification.
1.4.4 Evaluation
In this stage, the finished system will be evaluated for the value brought to the
user. In addition, the design methodologies will be evaluated as to their effectiveness in
delivering the required outcomes. Finally, the project will be evaluated as to how well the
goals of the project were achieved.
CHAPTER 2
LITERATURE SURVEY
2.1 Fuzzy Logic
Fuzzy logic is a form of many-valued logic; it deals with reasoning that is fluid or
approximate rather than fixed and exact. In contrast with "crisp logic", where binary sets
have two-valued logic: true or false, fuzzy logic variables may have a truth value that
ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of
partial truth, where the truth value may range between completely true and completely
false. Furthermore, when linguistic variables are used, these degrees may be managed by
specific functions.
Fuzzy logic began with the 1965 proposal of fuzzy set theory by Lotfi Zadeh.
Fuzzy set theory defines fuzzy operators on fuzzy sets. The problem in applying this is
that the appropriate fuzzy operator may not be known [2].
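The notion of partial truth described above can be illustrated with a small membership-function sketch. The triangular membership function and the "mouth opening" linguistic variable below are illustrative assumptions for this sketch, not the project's actual fuzzifier:

```csharp
using System;

class FuzzyDemo
{
    // Triangular membership function: the degree of truth rises from a to b,
    // then falls from b to c; outside [a, c] membership is 0.
    static double Triangular(double x, double a, double b, double c)
    {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    static void Main()
    {
        // Hypothetical linguistic variable "mouth opening" (in pixels),
        // with a fuzzy set "wide" defined over the triangle (10, 25, 40).
        double degree = Triangular(20.0, 10.0, 25.0, 40.0);
        // The result is a partial truth between 0 and 1, not a crisp true/false.
        Console.WriteLine("Membership of 20 in 'wide': {0:F2}", degree);
    }
}
```

A crisp (two-valued) predicate would classify 20 pixels as either "wide" or "not wide"; the fuzzy set instead assigns it a degree of about 0.67.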
2.2 What is .NET?
When .NET was announced in late 1999, Microsoft positioned the technology as a
platform for building and consuming Extensible Markup Language (XML) Web services.
XML Web services allow any type of application, be it a Windows- or browser-based
application running on any type of computer system, to consume data from any type of
server over the Internet. The reason this idea is so great is the way in which the XML
messages are transferred: over established standard protocols that exist today. Using
protocols such as SOAP, HTTP, and SMTP, XML Web services make it possible to
expose data over the wire with little or no modifications to your existing code.
Figure 2.1 presents a high-level overview of the .NET Framework and how XML Web
services are positioned.
Figure 2.1. Stateless XML Web services model.
Since the initial announcement of the .NET Framework, it's taken on many new and
different meanings to different people. To a developer, .NET means a great environment
for creating robust distributed applications. To an IT manager, .NET means simpler
deployment of applications to end users, tighter security, and simpler management. To a
CTO or CIO, .NET means happier developers using state-of-the-art development
technologies and a smaller bottom line. To understand why all these statements are true,
you need to get a grip on what the .NET Framework consists of, and how it's truly a
revolutionary step forward for application architecture, development, and deployment[8].
2.2.1 .NET Framework
Now that you are familiar with the major goals of the .NET Framework, let's
briefly examine its architecture. As shown in Figure 2.2, the .NET Framework sits on top
of the operating system, which can be any of several flavors of Windows, and consists of
a number of components. .NET is essentially a system application that runs on Windows.
Figure 2.2: .NET framework
Conceptually, the common language runtime (CLR) and the Java Virtual Machine
(JVM) are similar in that they are both runtime
infrastructures that abstract the underlying platform differences. However, while the JVM
officially supports only the Java language, the CLR supports any language that can be
represented in its Common Intermediate Language (CIL). The JVM executes bytecode, so
it can, in principle, support many languages, too. Unlike Java's bytecode, though, CIL is
never interpreted. Another conceptual difference between the two infrastructures is that
Java code runs on any platform with a JVM, whereas .NET code runs only on platforms
that support the CLR. In April 2003, the International Organization for Standardization
and the International Electrotechnical Commission (ISO/IEC) recognized a functional
subset of the CLR, known as the Common Language Infrastructure (CLI), as an international
standard. This development, initiated by Microsoft and developed by ECMA International,
a European standards organization, opens the way for third parties to implement their own
versions of the CLR on other platforms, such as Linux or Mac OS X; several third-party
and open source projects have since worked to implement the ISO/IEC CLI and C#
specifications.
The layer on top of the CLR is a set of framework base classes. This set of classes
is similar to the set of classes found in STL, MFC, ATL, or Java. These classes support
rudimentary input and output functionality, string manipulation, security management,
network communications, thread management, text management, reflection functionality,
collections functionality, as well as other functions.
On top of the framework base classes is a set of classes that extend the base
classes to support data management and XML manipulation. These classes, called
ADO.NET, support persistent data management—data that is stored on backend
databases. Alongside the data classes, the .NET Framework supports a number of classes
to let you manipulate XML data and perform XML searching and XML translations.
Classes in three different technologies (web services, Web Forms, and
Windows Forms) extend the framework base classes and the data and XML classes. Web
services include a number of classes that support the development of lightweight
distributed components, which work even in the face of firewalls and NAT software.
These components support plug-and-play across the Internet, because web services
employ standard HTTP and SOAP.
Web Forms, the key technology behind ASP.NET, include a number of classes
that allow you to rapidly develop web Graphical User Interface (GUI) applications. If
you're currently developing web applications with Visual Interdev, you can think of Web
Forms as a facility that allows you to develop web GUIs using the same drag-and-drop
approach as if you were developing the GUIs in Visual Basic. Simply drag-and-drop
controls onto your Web Form, double-click on a control, and write the code to respond to
the associated event.
Windows Forms support a set of classes that allow you to develop native
Windows GUI applications. You can think of these classes collectively as a much better
version of the MFC in C++ because they support easier and more powerful GUI
development and provide a common, consistent interface that can be used in all
languages[11].
2.2.2 The Common Language Runtime
At the heart of the .NET Framework is the common language runtime. The
common language runtime is responsible for providing the execution environment that
code written in a .NET language runs under. The common language runtime can be
compared to the Visual Basic 6 runtime, except that the common language runtime is
designed to handle all .NET languages, not just one, as the Visual Basic 6 runtime did for
Visual Basic 6. The following list describes some of the benefits the common language
runtime provides:
Automatic memory management
Cross-language debugging
Cross-language exception handling
Full support for component versioning
Access to legacy COM components
XCOPY deployment
Robust security model
People might expect all these features, but together they have never before been possible using Microsoft
development tools. Figure 2.3 shows where the common language runtime fits into
the .NET Framework.
Figure 2.3. The common language runtime
Code written using a .NET language is known as managed code. Code that uses
anything but the common language runtime is known as unmanaged code. The common
language runtime provides a managed execution environment for .NET code, whereas the
individual runtimes of non-.NET languages provide an unmanaged execution
environment.
2.2.3 Inside the Common Language Runtime
The common language runtime enables code running in its execution
environment to have features such as security, versioning, memory management and
exception handling because of the way .NET code actually executes. When you compiled
Visual Basic 6 forms applications, you had the ability to compile down to native code or
p-code. Figure 2.4 should refresh your memory of what the Visual Basic 6 options dialog
looked like.
Figure 2.4. Visual Basic 6 compiler options dialog.
When you compile your applications in .NET, you aren't creating anything in
native code. When you compile in .NET, you're converting your code—no matter
what .NET language you're using—into an assembly made up of an intermediate
language called Microsoft Intermediate Language (MSIL or just IL, for short). The IL
contains all the information about your application, including methods, properties, events,
types, exceptions, security objects, and so on, and it also includes metadata about what
types in your code can or cannot be exposed to other applications. This was called a type
library in Visual Basic 6 or an IDL (interface definition language) file in C++. In .NET,
it's simply the metadata that the IL contains about your assembly.
The file format for the IL is known as PE (portable executable) format, the
standard Windows format for executable files.
When a user or another component executes your code, a process occurs called
just-in-time (JIT) compilation, and it's at this point that the IL is converted into the
specific machine language of the processor it's executing on. This makes it very easy to
port a .NET application to any type of operating system on any type of processor because
the IL is simply waiting to be consumed by a JIT compiler.
The first time an assembly is called in .NET, the JIT process occurs. Subsequent
calls don't re-JIT the IL; the previously JITted IL remains in cache and is used over and
over again. The warm-up time of this first JIT pass can therefore affect application
performance.
Understanding the process of compilation in .NET is very important because it
makes clear how features such as cross-language debugging and exception handling are
possible. You're not actually compiling to any machine-specific code—you're simply
compiling down to an intermediate language that's the same for all .NET languages. The
IL produced by J# .NET and C# looks just like the IL created by the Visual Basic .NET
compiler. The instructions are the same; only how you type them in Visual Studio .NET
differs. This is where the power of the common language runtime becomes apparent.
IL code is JIT-compiled into machine-specific language on an as-needed basis.
If your assembly is 10MB and the user is only using a fraction of that
10MB, only the required IL and its dependencies are compiled to machine language. This
makes for a very efficient execution process. But during this execution, how does the
common language runtime make sure that the IL is correct? Because the compiler for
each language creates its own IL, there must be a process that makes sure what's
compiling won't corrupt the system. The process that validates the IL is known as
verification. Figure 2.5 demonstrates the process the IL goes through before the code
actually executes.
Figure 2.5. The JIT process and verification.
When code is JIT compiled, the common language runtime checks to make sure
that the IL is correct. The rules that the common language runtime uses for verification
are set forth in the Common Language Specification (CLS) and the Common Type
System (CTS)[8].
2.2.4 The .NET Framework Class Library
The second most important piece of the .NET Framework is the .NET Framework
class library (FCL). As you've seen, the common language runtime handles the dirty work
of actually running the code you write. But to write the code, you need a foundation of
available classes to access the resources of the operating system, database server, or file
server. The FCL is made up of a hierarchy of namespaces that expose classes, structures,
interfaces, enumerations, and delegates that give you access to these resources.
The namespaces are logically defined by functionality. For example, the
System.Data namespace contains all the functionality for accessing databases.
This namespace is further broken down into System.Data.SqlClient, which exposes
functionality specific to SQL Server, and System.Data.OleDb, which exposes specific
functionality for accessing OLEDB data sources. The bounds of a namespace aren't
necessarily defined by specific assemblies within the FCL; rather, they're focused on
functionality and logical grouping. In total, there are more than 20,000 classes in the FCL,
all logically grouped in a hierarchical manner. Figure 2.6 shows where the FCL fits into
the .NET Framework and the logical grouping of namespaces.
Figure 2.6. The .NET Framework class library.
To use an FCL class in your application, you use the Imports statement in Visual
Basic .NET or the using statement in C#. When you reference a namespace in Visual
Basic .NET or C#, you also get the convenience of auto-complete and auto-list members
when you access the objects' types using Visual Studio .NET. This makes it very easy to
determine what types are available for each class in the namespace you're using. As you'll
see, it's very easy to start coding in Visual Studio .NET.
The Structure of a .NET Application
To understand how the common language runtime manages code execution, you
must examine the structure of a .NET application. The primary unit of a .NET application
is the assembly. An assembly is a self-describing collection of code, resources, and
metadata. The assembly manifest contains information about what is contained within the
assembly. The assembly manifest provides:
Identity information, such as the assembly’s name and version number
A list of all types exposed by the assembly
A list of other assemblies required by the assembly
A list of code access security instructions, including permissions required by the
assembly and permissions to be denied the assembly
Each assembly has one and only one assembly manifest, and it contains all the
description information for the assembly. However, the assembly manifest can be
contained in its own file or within one of the assembly’s modules.
An assembly contains one or more modules. A module contains the code that makes
up your application or library, and it contains metadata that describes that code. When
you compile a project into an assembly, your code is converted from high-level code to
IL. Because all managed code is first converted to IL code, applications written in
different languages can easily interact. For example, one developer might write an
application in Visual C# that accesses a DLL in Visual Basic .NET. Both resources will
be converted to IL modules before being executed, thus avoiding any language-
incompatibility issues.
Each module also contains a number of types. Types are templates that describe a set
of data encapsulation and functionality. There are two kinds of types: reference types
(classes) and value types (structures). Each type is described to the common language runtime in the
assembly manifest. A type can contain fields, properties, and methods, each of which
should be related to a common functionality. For example, you might have a class that
represents a bank account. It contains fields, properties, and methods related to the
functions needed to implement a bank account. A field represents storage of a particular
type of data. One field might store the name of an account holder, for example. Properties
are similar to fields, but properties usually provide some kind of validation when data is
set or retrieved. You might have a property that represents an account balance. When an
attempt is made to change the value, the property can check to see if the attempted change
is greater than a predetermined limit. If the value is greater than the limit, the property
does not allow the change. Methods represent behavior, such as actions taken on data
stored within the class or changes to the user interface. Continuing with the bank account
example, you might have a Transfer method that transfers a balance from a checking
account to a savings account, or an Alert method that warns users when their balances fall
below a predetermined level.
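The bank-account type described above can be sketched in C#. The class, the fixed change limit, and the member names here are illustrative assumptions, not an actual implementation from this project:

```csharp
using System;

// Sketch of a type with a field, a validating property, and a method.
class BankAccount
{
    private decimal balance;                 // field: stores a particular type of data
    private const decimal Limit = 1000m;     // hypothetical predetermined limit

    public decimal Balance                   // property: validates data on set
    {
        get { return balance; }
        set
        {
            if (Math.Abs(value - balance) > Limit)
                throw new ArgumentException("Change exceeds predetermined limit.");
            balance = value;
        }
    }

    // Method: behavior acting on the data stored within the class.
    public void Transfer(BankAccount target, decimal amount)
    {
        Balance -= amount;        // validation in the property applies here too
        target.Balance += amount;
    }
}
```

An attempt to change Balance by more than the limit raises an exception instead of silently accepting the value, which is exactly the validation role the text assigns to properties.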
2.2.5 Compilation and Execution of a .NET Application
When you compile a .NET application, it is not compiled to binary machine code;
rather, it is converted to IL. This is the form that your deployed application takes—one or
more assemblies consisting of executable files and DLL files in IL form. At least one of
these assemblies will contain an executable file that has been designated as the entry point
for the application.
When execution of your program begins, the first assembly is loaded into
memory. At this point, the common language runtime examines the assembly manifest
and determines the requirements to run the program. It examines security permissions
requested by the assembly and compares them with the system’s security policy. If the
system’s security policy does not allow the requested permissions, the application will not
run. If the application passes the system’s security policy, the common language runtime
executes the code. It creates a process for the application to run in and begins application
execution. When execution starts, the first bit of code that needs to be executed is loaded
into memory and compiled into native binary code from IL by the common language
runtime’s Just-In-Time (JIT) compiler. Once compiled, the code is executed and stored in
memory as native code. Thus, each portion of code is compiled only once when an
application executes. Whenever program execution branches to code that has not yet run,
the JIT compiler compiles it ahead of execution and stores it in memory as binary code.
This way, application performance is maximized because only the parts of a program that
are executed are compiled.
The .NET Framework base class library contains the base classes that provide
many of the services and objects you need when writing your applications. The class
library is organized into namespaces. A namespace is a logical grouping of types that
perform related functions. For example, the System.Windows.Forms namespace contains
all the types that make up Windows forms and the controls used in those forms.
Namespaces are logical groupings of related classes. The namespaces in the .NET base
class library are organized hierarchically. The root of the .NET Framework is the System
namespace. Other namespaces can be accessed with the period operator. A typical
namespace construction appears as follows:
System
System.Data
System.Data.SqlClient
The first example refers to the System namespace. The second refers to the System.Data
namespace. The third refers to the System.Data.SqlClient namespace [5].
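In C#, these namespaces are brought into scope with using directives; the short program below is a minimal sketch of how the period operator walks the hierarchy from the System root:

```csharp
// Referencing namespaces with using directives.
using System;                    // root namespace
using System.Data;               // data-management types
using System.Data.SqlClient;     // SQL Server-specific types

class NamespaceDemo
{
    static void Main()
    {
        // Types from each namespace can now be used without full qualification:
        DataSet ds = new DataSet();          // System.Data
        Console.WriteLine(ds.Tables.Count);  // System
    }
}
```

Without the using directives, the same code would have to spell out System.Data.DataSet and System.Console in full.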
2.3 Overview of ADO.NET
Most applications require some kind of data access. Desktop applications need to
integrate with central databases, Extensible Markup Language (XML) data stores, or local
desktop databases. ADO.NET data-access technology allows simple, powerful data
access while maximizing system resource usage.
Different applications have different requirements for data access. Whether your
application simply displays the contents of a table, or processes and updates data to a
central SQL server, ADO.NET provides the tools to implement data access easily and
efficiently.
2.3.1 Disconnected Database Access
Previous data-access technologies provided continuously connected data access by
default. In such a model, an application creates a connection to a database and keeps the
connection open for the life of the application, or at least for the amount of time that data
is required. However, as applications become more complex and databases serve more
and more clients, connected data access is impractical for a variety of reasons, including
the following:
Open database connections are expensive in terms of system resources. The more
open connections there are, the less efficient system performance becomes.
Applications with connected data access are difficult to scale. An application that
can comfortably maintain connections with two clients might do poorly with 10
and be completely unusable with 100.
Open database connections can quickly consume all available database licenses,
which can be a significant expense. In order to work within a limited set of client
licenses, connections must be reused whenever possible.
ADO.NET addresses these issues by implementing a disconnected data access model by
default. In this model, data connections are established and left open only long enough to
perform the requisite action. For example, if an application requests data from a database,
the connection opens just long enough to load the data into the application, and then it
closes. Likewise, if a database is updated, the connection opens to execute the UPDATE
command, and then closes again. By keeping connections open only for the minimum
required time, ADO.NET conserves system resources and allows data access to scale up
with a minimal impact on performance[1].
2.3.2 ADO.NET Data Architecture
Data access in ADO.NET relies on two entities: the DataSet, which stores data on
the local machine, and the data provider, a set of components that mediates interaction
between the program and the database.
The DataSet
The DataSet is a disconnected, in-memory representation of data. It can be thought
of as a local copy of the relevant portions of a database. Data can be loaded into a DataSet
from any valid data source, such as a SQL Server database, a Microsoft Access database,
or an XML file. The DataSet persists in memory, and the data therein can be manipulated
and updated independent of the database. When appropriate, the DataSet can then act as a
template for updating the central database.
The DataSet object contains a collection of zero or more DataTable objects, each of
which is an in-memory representation of a single table. The structure of a particular
DataTable is defined by its collection of DataColumn objects, which enumerates the columns
in the table, and its Constraints collection, which enumerates any constraints on the
table. Together, these two collections make up the table schema. A DataTable also
contains a Rows collection of DataRow objects, which holds the actual data in the DataSet.
The DataSet also contains a Relations collection. A DataRelation object allows you to
create associations between rows in one table and rows in another table. The Relations
collection enumerates a set of DataRelation objects that define the relationships
between tables in the DataSet. For example, consider a DataSet that contains two related
tables: an Employees table and a Projects table. In the Employees table, each employee is
represented only once and is identified by a unique Employee field. In the Projects table,
an employee in charge of a project is identified by the Employee field, but can appear
more than once if that employee is in charge of multiple projects. This is an example of a
one-to-many relationship; you would use a DataRelation object to define this
relationship.
Additionally, a DataSet contains an ExtendedProperties collection, which is used to store
custom information about the DataSet.
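The Employees/Projects example can be sketched directly with the DataSet API; the table and column names below simply follow the example in the text:

```csharp
using System.Data;

class RelationDemo
{
    static void Main()
    {
        DataSet ds = new DataSet();

        // Parent table: each employee appears once.
        DataTable employees = ds.Tables.Add("Employees");
        employees.Columns.Add("Employee", typeof(int));

        // Child table: an employee may appear once per project.
        DataTable projects = ds.Tables.Add("Projects");
        projects.Columns.Add("Employee", typeof(int));
        projects.Columns.Add("Title", typeof(string));

        // The DataRelation defines the one-to-many relationship
        // between the two tables inside the DataSet.
        ds.Relations.Add("EmployeeProjects",
            employees.Columns["Employee"],
            projects.Columns["Employee"]);
    }
}
```

Once the relation exists, rows in Projects can be navigated from their parent row in Employees entirely in memory, with no database connection open.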
The Data Provider
The link to the database is created and maintained by a data provider. A data provider is
not a single component; rather it is a set of related components that work together to
provide data in an efficient, performance-driven manner. The first version of the
Microsoft .NET Framework shipped with two data providers: the SQL Server .NET Data
Provider, designed specifically to work with SQL Server 7 or later, and the OLE DB .NET
Data Provider, which connects with other types of databases. Microsoft Visual
Studio .NET 2003 added two more data providers: the ODBC Data Provider and the
Oracle Data Provider. Each data provider consists of versions of the following generic
component classes:
The Connection object provides the connection to the database.
The Command object executes a command against a data source. It can execute
non-query commands, such as INSERT, UPDATE, or DELETE, or return a
DataReader with the results of a SELECT command.
The DataReader object provides a forward-only, read-only, connected recordset.
The DataAdapter object populates a disconnected DataSet or DataTable with data
and performs updates.
Data access in ADO.NET is facilitated as follows: a Connection object establishes a
connection between the application and the database. This connection can be accessed
directly by a Command object or by a DataAdapter object. The Command object
provides direct execution of a command to the database. If the command returns more
than a single value, the Command object returns a DataReader to provide the data. This
data can be directly processed by application logic. Alternatively, you can use the
DataAdapter to fill a DataSet object. Updates to the database can be achieved through the
Command object or through the DataAdapter.
The generic classes that make up the data providers are summarized in the following
sections.
The Connection Object
The Connection object represents the actual connection to the database. Visual
Studio .NET 2003 supplies two types of Connection classes: the SqlConnection object,
which is designed specifically to connect to SQL Server 7 or later, and the
OleDbConnection object, which can provide connections to a wide range of database
types. Visual Studio .NET 2003 further provides a multipurpose OdbcConnection class,
as well as an OracleConnection class optimized for connecting to Oracle databases. The
Connection object contains all of the information required to open a channel to the
database in the ConnectionString property. The Connection object also incorporates
methods that facilitate data transactions.
The Command Object
The Command object is represented by two corresponding classes, SqlCommand
and OleDbCommand. You can use Command objects to execute commands to a database
across a data connection. Command objects can be used to execute stored procedures on
the database and SQL commands, or return complete tables. Command objects provide
three methods that are used to execute commands on the database:
ExecuteNonQuery.
Executes commands that return no records, such as INSERT, UPDATE, or DELETE.
ExecuteScalar.
Returns a single value from a database query.
ExecuteReader.
Returns a result set by way of a DataReader object.
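The three execution methods can be sketched as follows. The connection string, the Accounts table, and its columns are placeholders for illustration only:

```csharp
using System;
using System.Data.SqlClient;

class CommandDemo
{
    static void Main()
    {
        // Placeholder connection string; real values depend on the server.
        string connStr = "Data Source=.;Initial Catalog=Sample;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // ExecuteScalar: returns a single value from a query.
            SqlCommand count = new SqlCommand("SELECT COUNT(*) FROM Accounts", conn);
            int rows = (int)count.ExecuteScalar();

            // ExecuteNonQuery: no records returned; yields rows affected.
            SqlCommand update = new SqlCommand("UPDATE Accounts SET Active = 1", conn);
            int affected = update.ExecuteNonQuery();

            // ExecuteReader: forward-only, read-only stream of rows.
            SqlCommand select = new SqlCommand("SELECT Name FROM Accounts", conn);
            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```

Note that the connection stays open only for the duration of the using block, consistent with the disconnected model described earlier.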
The DataReader Object
The DataReader object provides a forward-only, read-only, connected stream
recordset from a database. Unlike other components of a data provider, DataReader
objects cannot be directly instantiated. Rather, the DataReader is returned as the result of
a Command object’s ExecuteReader method. The SqlCommand.ExecuteReader method
returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns
an OleDbDataReader object. Likewise, the ODBC and Oracle Command.ExecuteReader
methods return a DataReader specific to the ODBC and Oracle Data Providers
respectively. The DataReader can supply rows of data directly to application logic when
you do not need to keep the data cached in memory. Because only one row is in memory
at a time, the DataReader provides the lowest overhead in terms of system performance,
but it requires exclusive use of an open Connection object for the lifetime of the
DataReader.
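The flow above can be sketched as follows. The connection string, database name, and query are illustrative assumptions only (the TB_SourceUser table name is borrowed from this project's ER diagram), not the project's actual configuration:

```csharp
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        // hypothetical connection string; adjust server/database to your setup
        string connStr = "Server=(local);Database=EmotionDb;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT UserName FROM TB_SourceUser", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // forward-only, read-only: one row in memory at a time
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            } // the connection is dedicated to the reader until the reader is closed
        }
    }
}
```

Note that the `using` blocks guarantee the reader and the connection are released, which matters here because the DataReader holds the connection exclusively for its lifetime.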
The DataAdapter Object
The DataAdapter is the class at the core of ADO.NET disconnected data access. It
is essentially the middleman, facilitating all communication between the database and a
DataSet. The DataAdapter fills a DataTable or DataSet with data from the database
whenever the Fill method is called. After the memory-resident data has been manipulated,
the DataAdapter can transmit changes to the database by calling the Update method. The
DataAdapter provides four properties that represent database commands. The four
properties are:
SelectCommand.
Contains the command text or object that selects the data from the database. This
command is executed when the Fill method is called and fills a DataTable or a DataSet.
InsertCommand.
Contains the command text or object that inserts a row into a table.
DeleteCommand.
Contains the command text or object that deletes a row from a table.
UpdateCommand.
Contains the command text or object that updates rows in a table.
When the Update method is called, changes in the DataSet are copied back to the
database, and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is
executed.
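A minimal sketch of this Fill/Update cycle is shown below. The connection string and the TB_SourceUser table are placeholders (the table name is taken from this project's ER diagram), and SqlCommandBuilder is used here to generate the insert/update/delete commands from the SelectCommand; the project itself may define these commands explicitly:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        // hypothetical connection string; placeholder only
        string connStr = "Server=(local);Database=EmotionDb;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT RecId, UserName, Pwd FROM TB_SourceUser", conn);
            // derives InsertCommand/UpdateCommand/DeleteCommand from the SelectCommand
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Users");          // SelectCommand executes here

            // manipulate the memory-resident copy
            DataRow row = ds.Tables["Users"].NewRow();
            row["UserName"] = "demo";
            row["Pwd"] = "secret";
            ds.Tables["Users"].Rows.Add(row);

            adapter.Update(ds, "Users");        // InsertCommand executes here
        }
    }
}
```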
Accessing Data
Visual Studio .NET has many built-in wizards and designers to help you shape
your data-access architecture rapidly and efficiently. With minimal actual coding, you can
implement robust data access for your application. However, the ADO.NET object model
is fully available through code to implement customized features or to fine-tune your
program. In this lesson, you will learn how to connect to a database with ADO.NET and
retrieve data to your application. You will learn to use the visual designers provided by
Visual Studio .NET and direct code access[3].
2.4 Microsoft SQL Server
Microsoft SQL Server lets you quickly build powerful and reliable database
applications. SQL Server 7.0 is a highly scalable, fully relational, high-performance,
multi-user database server that can be used by enterprises of any size to manage large
amounts of data for client/server applications.
The major new and improved features of SQL Server 7.0 include multi-user
support, multi-platform support, added memory support, scalability, integration with
MMC (Microsoft Management Console), improved multiple-server management,
parallel database backup and restore, data replication, data warehousing, distributed
queries, distributed transactions, dynamic locking, Internet access, integrated Windows
security, mail integration, Microsoft English Query, and ODBC support.
SQL Server management is accomplished through a set of component applications. SQL
Server introduces a number of new and improved management tools, namely SQL Server
Enterprise Manager, Profiler, Query Analyzer, Service Manager, and wizards.
CHAPTER 3
SYSTEM REQUIREMENTS AND SPECIFICATION
Requirement specification is the activity of translating the information gathered
during analysis into a requirement document.
3.1 Classification
User Requirements
System Requirements
3.1.1 User Requirements
User requirements are abstract statements of the system's requirements, written for the
customer and end users who do not have detailed technical knowledge of the system.
The device should provide an option for selecting the company code.
Provision should be made to save the current values.
The real-time values corresponding to the company code should be displayed from
the various share sites.
The page should be refreshed every 30 seconds.
Alerts should be raised when the configured values are matched.
3.1.2 System Requirements
System requirements specify the system's services and constraints in detail. They are a
more detailed specification of the user requirements, and they sometimes serve as a
contract between the user and the developer.
SOFTWARE REQUIREMENTS
Microsoft .Net framework 2.0
Visual studio 2008
C# .Net
SQL Server 2005
HARDWARE REQUIREMENTS
Processor : Pentium IV
Monitor : SVGA
RAM : 128 MB (minimum)
Speed : 500 MHz
Secondary Device : 10 GB
FUNCTIONAL REQUIREMENTS
These are statements of the services the system should provide, how the system should
react to particular inputs, and how it should behave in particular situations.
NON-FUNCTIONAL REQUIREMENTS
These define system properties and constraints, e.g., reliability, response time,
and storage requirements. Constraints include I/O device capability, system
representations, etc.
Process requirements may also be specified, mandating a particular CASE system,
programming language, or development method.
Non-functional requirements may be more critical than functional requirements: if
they are not met, the system is useless.
Typically they are:
Reliability
Security
Availability
Performance.
CHAPTER 4
SYSTEM ANALYSIS
4.1 Existing System
Very few works on human emotion detection have so far been reported in the
current literature on machine intelligence. Some researchers have proposed the
following schemes, but they have not yet been implemented. Researchers such as Ekman
and Friesen proposed a scheme for the recognition of facial expressions from the
movements of the cheek, chin, and wrinkles. They observed the movement of facial
muscles, as shown in Figure 4.1, to recognize emotions.
Figure 4.1: Emotion recognition from chin, cheek, and wrinkles
Yamada proposed a new method of recognizing emotions through the classification of
visual information. Cohen considered temporal variations in facial expressions displayed
in live video to recognize emotions. She proposed a new architecture of hidden Markov
models to automatically segment and recognize facial expressions[7].
4.1.1 Limitations of Existing System
Currently available human–computer interfaces do not take complete advantage
of these valuable communicative media and thus are unable to provide the full benefits
of natural interaction to the users. Human–computer interactions could be significantly
improved if computers could recognize the emotion of the users from their facial
expressions. The existing systems do not have good classification accuracy, and the
exact emotion is not detected. There is also no system to help people suffering from a
neurodevelopmental disorder, as shown in Figure 4.2. Children with the
neurodevelopmental disorder known as autism often have difficulty with social
interaction, in part due to an impaired ability to intuit the emotional state of other
people.
Figure 4.2: People suffering from Autism Disorder
4.2 Proposed system
The Proposed System provides an alternative scheme for human emotion
recognition from facial images, and its control, using fuzzy logic. Fuzzy C-means (FCM)
clustering is used for the segmentation of the facial images into three important regions
containing mouth, eyes, and eyebrows. The exact emotion is extracted from fuzzified
emotions by a denormalization procedure similar to defuzzification. The proposed
scheme is both robust and insensitive to noise because of the nonlinear mapping of image
attributes to emotions in the fuzzy domain. Experimental results show that the detection
accuracies of emotions for adult male, adult female, and children of 8–12 years are as
high as 88%, 92%, and 96%, respectively, outperforming the percentage accuracies of the
existing techniques.
4.2.1 Advantages of Proposed System
Emotion recognition and control can be applied in system design for two different
problem domains. First, it can serve as an intelligent layer in the next generation human–
machine interactive system. Such a system would have extensive applications in the
frontier technology of pervasive and ubiquitous computing. Second, the emotion
monitoring and control scheme would be useful for psychological counseling and
therapeutic applications.
4.2.2 Applications of the Proposed System
The proposed system helps people suffering from autism. These people cannot
understand the emotions of the people around them, and others cannot understand
theirs. Thus, this system helps in psychological counseling and therapy.
It also helps in the detection of criminal and antisocial motives. By looking at a
criminal's face, we can try to find out whether the crime was committed for gaining
money or for fame.
4.3 Feasibility Study
The feasibility study examined several kinds of feasibility: technical
feasibility, i.e., whether the existing equipment and software were sufficient for
completing the project, and economic feasibility, which determines whether carrying out
the project is economically beneficial. The project appears beneficial because the
company need not spend any significant amount on it; trainees work at low cost, and
only machine time is a burden. The outcome of the first phase was that the request and
the various studies were approved, and it was decided that the project taken up would
serve the end user. Once developed and implemented, this software saves a considerable
amount of money and valuable company time.
The key considerations involved in the feasibility analysis are
Economical feasibility
Technical feasibility
Social feasibility
4.3.1 Economical feasibility
This study is carried out to check the economic impact that the system will have
on the organization. The amount of funds that the company can pour into research and
development of the system is limited, so the expenditure must be justified.
4.3.2 Technical feasibility
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the
client. The developed system must therefore have modest requirements, so that only
minimal or null changes are required for implementing it.
4.3.3 Social feasibility
This aspect of the study checks the level of acceptance of the system by the
user. This includes the process of training the user to use the system efficiently. The
user must not feel threatened by the system; instead, the user's level of confidence must
be raised so that he is able to offer constructive criticism, which is welcomed[9].
CHAPTER 5
DESIGN
A software design is a description of the structure of the software to be
implemented, the data that is part of the system, the interfaces between system
components, and sometimes the algorithms used. Designers do not arrive at a finished
design immediately but develop it iteratively through a number of different versions.
The design process involves adding formality and detail as the design is developed,
with constant backtracking to correct earlier versions.
5.1 Design process
Design is concerned with identifying software components, specifying relationships
among components, specifying the software structure, and providing a blueprint for the
document phase.
Modularity is one of the desirable properties of large systems. It implies that the
system is divided into several parts in such a manner that the interaction between the
parts is minimal and clearly specified.
Design will explain software components in detail. This will help the
implementation of the system. Moreover, this will guide the further changes in the system
to satisfy the future requirements.
5.1.1 Form design
Form is a tool with a message; it is the physical carrier of data or information.
The user interface form allows a user to select a workgroup, find the active peers, and
type a message to send to an active peer.
5.1.2 Input design
Inaccurate input data is the most common cause of errors in data processing. Errors
entered by data-entry operators can be controlled by input design. Input design is the
process of converting user-originated inputs to a computer-based format. Input data are
collected and organized into groups of similar data.
The specific design process activities are:
Architectural design: The sub-systems making up the system and their relationships
are identified and documented.
Object-oriented design: In object-oriented design we think in terms of "things"
instead of operations and functions; the executing system is made up of interacting
objects that maintain their own local state and operations.
Real-time software design: One way of looking at a real-time system is as a
stimulus-response system: given a particular input stimulus, the system must
produce a corresponding response.
User interface design: Good user interface design is critical to the success of
the system; an interface that is difficult to use will, at best, result in a high level
of user errors.
5.2 Modules
1. Face Detection from input image.
2. Segmentation & Determination of the Mouth Region.
3. Segmentation & Determination of the Eye Region.
4. Emotion Detection.
5.2.1 Face Detection From Input Image
For face detection, first we convert binary image from RGB image. For converting
binary image, we calculate the average value of RGB for each pixel and if the average
value is below than 110, we replace it by black pixel and otherwise we replace it by white
pixel. By this method, we get a binary image from RGB image as shown in Figure 5.1.
Then, we try to find the forehead from the binary image. We start scan from the
middle of the image, then want to find a continuous white pixels after a continuous black
pixel. Then we want to find the maximum width of the white pixel by searching vertical
both left and right site. Then, if the new width is smaller half of the previous maximum
width, then we break the scan because if we reach the eyebrow then this situation will
arise. Then we cut the face from the starting position of the forehead and its high will be
1.5 multiply of its width as shown in Figure 5.2.
Figure 5.1: Converting an RGB image to a binary image
Figure 5.2: Face calculation
Figure 5.3: Finding the middle position of the face
5.2.2 Segmentation & Determination of the Mouth Region
This module is used the mouth region, we first represent the image in the L * a * b
space from its conventional red–green–blue (RGB) space. The L * a * b system has the
additional benefit of representing a perceptually uniform color space. It defines a uniform
matrix space representation of color so that a perceptual color difference is represented by
the Euclidean distance. The color information, however, is not adequate to identify the lip
region. The position information of pixels together with their color would be a good
feature to segment the lip region from the face. The Fuzzy C-means clustering algorithm
that we employ to detect the lip region is supplied with both color and pixel-position
information of the image. This module use in image segmentation in general and lip
region segmentation in particular is a novel area of research.
Determination of the mouth opening (MO) in a black-and-white image is easier
because of the presence of the white teeth. A plot of the average intensity profile
across the mouth region reveals that the curve has several minima, of which the first
and third correspond to the inner region of the top lip and the inner region of the
bottom lip, respectively. The difference between these two positions along the Y-axis
gives a measure of the MO[6].
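The minima-based measurement described above can be sketched as follows. This is a minimal sketch assuming the intensity profile has already been computed by averaging each row of the cropped mouth image; the class and method names are our own, not part of the project code:

```csharp
using System;
using System.Collections.Generic;

static class MouthOpening
{
    // Indices of the local minima of the average-intensity row profile,
    // scanned top to bottom. profile[y] is the mean grey level of row y
    // of the cropped mouth image.
    public static List<int> FindMinima(double[] profile)
    {
        var minima = new List<int>();
        for (int y = 1; y < profile.Length - 1; y++)
            if (profile[y] < profile[y - 1] && profile[y] < profile[y + 1])
                minima.Add(y);
        return minima;
    }

    // MO = distance along Y between the first and third minima
    // (inner edge of the top lip and inner edge of the bottom lip).
    public static int Measure(double[] profile)
    {
        var m = FindMinima(profile);
        return m.Count >= 3 ? m[2] - m[0] : 0;
    }
}
```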
5.2.3 Segmentation & Determination of the Eye Region and Eyebrows
The eye region in a monochrome image has a sharp contrast to the rest of the face.
Consequently, the thresholding method can be employed to segment the eye region from
the image. Images grabbed at poor illumination conditions have a very low average
intensity value. Segmentation of the eye region in these cases is difficult because of the
presence of dark eyebrows in the neighborhood of the eye region. To overcome this
problem, we consider images grabbed under good illuminating conditions.
After segmentation of the image, we need to localize the left and right eyes in the
image. For eye detection, we convert the RGB face to a binary face. Let the face width
be W. We scan from W/4 to (W - W/4) to find the middle position between the two eyes,
as shown in Figure 5.3. The column with the longest continuous run of white pixels
within this range is taken as the middle position between the two eyes.
Figure 5.4: Segmentation of Eye region
Then we find the starting height, or upper position, of the two eyebrows by
searching vertically. For the left eye, we search from w/8 to mid, and for the right eye
from mid to w - w/8, where w is the width of the image and mid is the middle position
between the two eyes. There may be some white pixels between the eyebrow and the
eye. To make the eyebrow and the eye connected, as shown in Figure 5.4, we place
continuous black pixels vertically from the eyebrow to the eye. For the left eye, the
vertical black pixel lines are placed between mid/4 and mid/2, and for the right eye
between mid + (w - mid)/4 and mid + 3*(w - mid)/4; the height of the black pixel lines
runs from the eyebrow starting height to (h - eyebrow starting position)/4, where h is
the height of the image. Then we find the lower position of the two eyes by searching
for black pixels vertically. For the left eye, we search the width from mid/4 to
mid - mid/4, and for the right eye from mid + (w - mid)/4 to mid + 3*(w - mid)/4,
scanning from the lower end of the image up to the starting position of the eyebrow.
Then we find the right side of the left eye by searching for black pixels horizontally
from the mid position to the starting position of black pixels between the upper and
lower positions of the eyebrow, and the left side of the right eye by searching from mid
to the starting position of black pixels between the upper and lower positions of the
right eye. The left side of the left eye is the starting width of the image, and the right
side of the right eye is the ending width of the image. Then we cut the two eyes out of
the RGB image using their upper, lower, left, and right positions.
In a facial image, eyebrows are the second darkest region after the hair region.
The eye regions are also segmented by thresholding[10].
5.2.4 Emotion Detection
For emotion detection from an image, we have to find the Bezier curves of the lip,
the left eye, and the right eye. Then we scale each Bezier curve to a width of 100 and
scale its height in proportion to its width. If the person's emotion information is
available in the database, the program determines which emotion's height is nearest to
the current height and gives that nearest emotion as the output.
If the person's emotion information is not available in the database, the program
calculates the average height of each emotion over all people in the database and then
makes a decision according to the average height.
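The matching step above can be sketched as follows. The helper names and the in-memory dictionary of per-emotion heights are our own assumptions for illustration; the project stores these values in a SQL Server table:

```csharp
using System;
using System.Collections.Generic;

static class EmotionMatcher
{
    // Scale a curve height as if its width were 100, as described above.
    public static double NormalizeHeight(double width, double height)
        => height * 100.0 / width;

    // Pick the stored emotion whose normalized height is nearest to the
    // measured one.
    public static string Nearest(double measuredHeight, IDictionary<string, double> stored)
    {
        string best = null;
        double bestDiff = double.MaxValue;
        foreach (var kv in stored)
        {
            double diff = Math.Abs(kv.Value - measuredHeight);
            if (diff < bestDiff) { bestDiff = diff; best = kv.Key; }
        }
        return best;
    }
}
```

The same nearest-height rule works whether the heights come from the person's own stored record or from the per-emotion averages across all people.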
5.3 Architecture Diagram
[Architecture diagram: the Main Page contains File (Restore Image, Save, Exit),
Options, Pre-Process (Skin Color), Segmentation (Binary Image, Face, Eye, Mouth,
EyeBrow, Connected), Input Image / Camera Image, and Result components.]
Figure 5.5: Architecture diagram
5.4 Data Flow Diagram and Use case Diagram
[DFD: Start → Launch Main Application → Login Page (Check User name & Password;
Login Fail returns to the Login Page, Login Successfully leads to the Home Page). The
main page offers Contrast, Brightness, Preview, Sharpen Image, and Help options.]
Figure 5.6: DFD of Login module
[DFD: after successful login, the Main Page lets the user Select Images (.jpeg, .png,
.tiff, etc.), which are converted to binary images and returned for segmentation.]
Figure 5.7: DFD of segmentation and emotion result
[Use case: the User actor performs Registration, Login, Select Image, Pre-Process, and
Segmentation.]
Figure 5.8: Use case Diagram
5.5 Class Diagram and ER Diagram
[Class diagram: User (+details, +Login(), +Registration()); User Login (+UserName,
+Password, +Login()); Registration (+Detail, +registation()); Select Image (+Open File
dialog, +Upload Image()); Pre Process (+result, +color_segmentation, +black_white());
Segmentation (+Image, +eyelocal(), +range()); Emotion Result (+Database image,
+Connection, +emotion(), +Calculate Distance(), +compare Image(), +bezier_position,
+displayResult()).]
Figure 5.9: Class diagram
[ER diagram: a position entity keyed by PositionId stores PersonName, the emotion
labels Smile, Normal, Surprise, and Sad, six (x, y) Bezier control points each for the lip
(lip1_x…lip6_y), left eye (left_eye1_x…left_eye6_y), and right eye
(right_eye1_x…right_eye6_y), and four curve heights each (lip_h1…lip_h4,
left_eye_h1…left_eye_h4, right_eye_h1…right_eye_h4). A TB_SourceUser entity stores
RecId, UserName, and Pwd.]
Figure 5.10: ER diagram
CHAPTER 6
IMPLEMENTATION
Implementation is the realization of an application, or execution of a plan, idea,
model, design, specification, standard, algorithm, or policy.
There are four things to consider when the project is developed. They are as
follows:
Correction
Adaptation
Maintenance
Change
Correction:
The project is corrective to its end, and all validation has been incorporated into the
developed software so that no further corrective action is required.
Adaptation/Enhancement:
In this project, a high-performance data synchronization server for mobile devices is
proposed. For a mobile application system, the information or data sets (e.g., contacts,
music, video, images) are usually stored both on the mobile device and in the system
database. After several operations on the mobile system, the data sets on the mobile
device and in the system database may no longer be identical. In order to keep these
data sets consistent, data synchronization plays a key role in such mobile applications.
Maintenance:
The project is to be maintained so that its accuracy, versatility, working,
integrity, correctness, etc. remain as proposed, with the possibility of enhancement to
these properties. This project has this property, which makes it truly maintainable.
Change:
Design during maintenance involves redesigning the product to incorporate the desired
changes. The changes must then be implemented, internal documentation of the code must
be updated, and new test cases must be designed to assess the adequacy of the
modification. The supporting documents must also be updated to reflect the changes.
The modules were implemented as follows:
6.1 Face Detection module
To convert to a binary image, we calculate the average RGB value of each
pixel; if the average value is below 110, we replace the pixel with black, and otherwise
with white. The code for this is:
public Bitmap black_and_white(Image Im)
{
    Bitmap b = (Bitmap)Im;
    int limit = 110;                          // threshold value
    for (int i = 0; i < b.Height; i++)        // loop over image rows
    {
        for (int j = 0; j < b.Width; j++)     // loop over image columns
        {
            Color col = b.GetPixel(j, i);
            int avg = (col.R + col.G + col.B) / 3;  // average of R, G, and B
            if (avg < limit)
                b.SetPixel(j, i, Color.Black);
            else
                b.SetPixel(j, i, Color.White);
        }
    }
    return b;
}
The white pixels in the face region are scanned continuously. We then find the
maximum width of the white run by searching vertically on both the left and right sides.
If a new width is smaller than half of the previous maximum width, we stop the scan.
int cr_start = 140, cr_end = 170, cb_start = 105, cb_end = 150;

private void YCbCr_Click(object sender, EventArgs e)
{
    Bitmap bb = (Bitmap)pictureBox1.Image.Clone();
    Bitmap bb1 = new Bitmap(pictureBox1.Image.Size.Width, pictureBox1.Image.Size.Height);
    int min_x = bb.Width, max_x = 0, min_y = bb.Height, max_y = 0;
    for (int i = 0; i < bb.Width; i++)
    {
        for (int j = 0; j < bb.Height; j++)
        {
            Color col = bb.GetPixel(i, j);
            // RGB -> YCbCr chrominance components
            double cb = 128 - 0.168736 * col.R - 0.331264 * col.G + 0.5 * col.B;
            double cr = 128 + 0.5 * col.R - 0.418688 * col.G - 0.081312 * col.B;
            //if ((cr > 140 && cr < 160) && (cb > 105 && cb < 140)) actual according to paper
            if ((cr > cr_start && cr < cr_end) && (cb > cb_start && cb < cb_end)) // nice result for a good image
            {
                #region finding face rectangle
                /*
                 * finding the minimum and maximum x-y co-ordinates
                 * of the image inside the Cb and Cr threshold region
                 */
                if (i < bb.Width / 2 && i < min_x)
                    min_x = i;
                if ((i >= bb.Width / 2 && i < bb.Width) && i > max_x)
                    max_x = i;
                if (j < bb.Height / 2 && j < min_y)
                    min_y = j;
                if ((j >= bb.Height / 2 && j < bb.Height) && j > max_y)
                    max_y = j;
                #endregion
                bb1.SetPixel(i, j, Color.Black);   // skin-coloured pixel
            }
            else
                bb1.SetPixel(i, j, Color.White);   // non-skin pixel
        }
    }
    // bb1 now holds the binary skin mask; (min_x, min_y)-(max_x, max_y) bounds the face
}
6.2 Mouth module
The position information of pixels together with their color would be a good
feature to segment the lip region from the face. The Fuzzy C-means clustering algorithm
that we employ to detect the lip region is supplied with both color and pixel-position
information of the image.
private void button33_Click(object sender, EventArgs e)
{
    if (lip_number == 0)        // stage 1: skin-colour segmentation of the mouth area
    {
        Bitmap b = new Bitmap(pictureBox5.Image);
        Bitmap ba = new Bitmap(skin_color(b));
        pictureBox8.Image = (Image)ba;
        lip_number++;
    }
    else if (lip_number == 1)   // stage 2: keep the biggest connected region (the lips)
    {
        Bitmap b = new Bitmap(pictureBox8.Image);
        Bitmap ba = new Bitmap(big_conect(b));
        pictureBox8.Image = (Image)ba;
        lip_number++;
    }
    else if (lip_number == 2)   // stage 3: fit the Bezier curve to the lip contour
    {
        Bitmap b = new Bitmap(pictureBox8.Image);
        Bitmap ba = new Bitmap(bezier(b));
        pictureBox8.Image = (Image)ba;
        lip_number++;
    }
}
6.3 Eye Module
The middle position between the two eyes is found. Then the upper position of the two
eyebrows is located, and black pixels are placed vertically between each eye and its
eyebrow. The lower regions of the eyes are calculated, and then the left and right
boundaries of both eyes are found.
private void eyelocal(Bitmap b)
{
    //Bitmap b = new Bitmap(pictureBox2.Image);
    int w = b.Width;
    int h = b.Height;
    int ys1 = h, ye1 = h - 1, ys2 = h, ye2 = h - 1;
    int i, j, k;
    int mid = 0, max = 0;
    for (i = w / 4; i < w - (w / 4); i++) // to find the middle position of the two eyes
    {
        for (j = spq; j < h; j++)         // spq: row where the forehead scan ended
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).B == 0 && b.GetPixel(i, j).G == 0)
                break;
        if (max < (j - spq)) { max = j - spq; mid = i; }
    }
    int tp1 = mid - 5, tp2 = mid + 5, mp1 = h - 1, mp2 = h - 1;
    for (i = w / 8; i < w - (w / 8); i++) // to find the upper position of the two eyebrows
    {
        for (j = spq; j < h; j++)
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).B == 0 && b.GetPixel(i, j).G == 0)
                break;
        if (i <= mid)
        {
            if (j - 1 < ys1) ys1 = j - 1;
            if (i >= mid / 2)
                if (mp1 > j) { tp1 = i; mp1 = j; }
        }
        else
        {
            if (j - 1 < ys2) ys2 = j - 1;
            if (i <= mid + (w - mid) / 2)
                if (mp2 > j) { tp2 = i; mp2 = j; }
        }
    }
    int black = 0;
    for (i = mid / 2; i >= mid / 4; i--) // left eye: connect the eyebrow to the eye
    {
        black = 0;
        for (j = ys1; j <= ys1 + (h - ys1) / 4; j++)
        {
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).G == 0 && b.GetPixel(i, j).B == 0)
            {
                if (black != 0 && black != j - 1)
                {
                    for (k = black; k <= j; k++)
                        b.SetPixel(i, k, Color.Black);
                }
                black = j;
            }
        }
    }
    for (i = mid + (w - mid) / 4; i <= mid + 3 * (w - mid) / 4; i++) // right eye
    {
        black = 0;
        for (j = ys2; j <= ys2 + (h - ys2) / 4; j++)
        {
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).G == 0 && b.GetPixel(i, j).B == 0)
            {
                if (black != 0 && black != j - 1)
                {
                    for (k = black; k <= j; k++)
                        b.SetPixel(i, k, Color.Black); // placing black pixels vertically
                }
                black = j;
            }
        }
    }
6.4 Emotion Detection
For emotion detection from an image, we have to find the Bezier curve of the lip, the
left eye, and the right eye. Then we scale each Bezier curve to a width of 100 and scale
its height in proportion to its width.
public Bitmap bezier(Bitmap b)
{
    int n = b.Height, m = b.Width;
    big = new int[m][];
    int i, j = 0, flag = 0;
    for (i = 0; i < m; i++)
    {
        big[i] = new int[n];
        for (j = 0; j < n; j++) big[i][j] = 0;
    }
    int count = 0;
    b1 = new double[1000];
    p1 = new double[1000];
    int x1 = 0, y1 = 0, xn = 0, yn = 0, xm1 = 0, ym1 = 0, xm2 = 0, ym2 = 0,
        xm3 = 0, ym3 = 0, xm4 = 0, ym4 = 0;
    int yz1 = -1, yz2 = -1;

    // leftmost black column gives the left end point (x1, y1) of the curve
    for (i = 0; i < m; i++)
    {
        for (j = 0; j < n; j++)
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).G == 0 && b.GetPixel(i, j).B == 0)
            {
                if (yz1 == -1) { yz1 = j; yz2 = j; }
                else yz2 = j;
            }
        if (yz1 != -1) { x1 = i; y1 = (yz1 + yz2) / 2; break; }
    }

    // rightmost black column gives the right end point (xn, yn)
    yz1 = -1; yz2 = -1;
    for (i = m - 1; i >= 0; i--)
    {
        for (j = 0; j < n; j++)
            if (b.GetPixel(i, j).R == 0 && b.GetPixel(i, j).G == 0 && b.GetPixel(i, j).B == 0)
            {
                if (yz1 == -1) { yz1 = j; yz2 = j; }
                else yz2 = j;
            }
        if (yz1 != -1) { xn = i; yn = (yz1 + yz2) / 2; break; }
    }

    ////////// Upper lip ///////////////
    /////////// left ///////////////////
    int Q, R, T, start_x, p;
    double pi = Math.PI; // was 22 / 7, which truncates to 3 in integer arithmetic
    start_x = x1 - 2; if (start_x < 0) start_x = 0;
    p = y1;
    // sweep a ray upward from the left corner until it no longer hits a black pixel
    for (Q = 0; Q < 90; Q++)
    {
        flag = 0;
        for (i = start_x; i < m; i++)
        {
            R = i - start_x;
            T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
            R = R + start_x; T = p - T;
            if (R >= m || T < 0) break;
            if (b.GetPixel(R, T).R == 0 && b.GetPixel(R, T).G == 0 && b.GetPixel(R, T).B == 0)
            { flag = 1; break; }
        }
        if (flag == 0) break;
    }
    xm1 = x1 + (xn - x1) / 3;
    R = xm1 - start_x;
    T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
    T = p - T; ym1 = T; if (ym1 < 0) ym1 = 0;
    /////////// right //////////////////
    start_x = xn + 2; if (start_x >= m) start_x = m - 1;
    p = yn;
    for (Q = 0; Q < 90; Q++)
    {
        flag = 0;
        for (i = start_x; i >= 0; i--)
        {
            R = start_x - i;
            T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
            R = start_x - R; T = p - T;
            if (R < 0 || T < 0) break;
            if (b.GetPixel(R, T).R == 0 && b.GetPixel(R, T).G == 0 && b.GetPixel(R, T).B == 0)
            { flag = 1; break; }
        }
        if (flag == 0) break;
    }
    xm2 = x1 + 2 * (xn - x1) / 3;
    R = start_x - xm2;
    T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
    T = p - T; ym2 = T; if (ym2 < 0) ym2 = 0;

    ////////// Lower lip ///////////////
    /////////// left ///////////////////
    start_x = x1 - 2; if (start_x < 0) start_x = 0;
    p = y1;
    for (Q = 0; Q < 90; Q++)
    {
        flag = 0;
        for (i = start_x; i < m; i++)
        {
            R = i - start_x;
            T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
            R = R + start_x; T = p + T;
            if (R >= m || T >= n) break;
            if (b.GetPixel(R, T).R == 0 && b.GetPixel(R, T).G == 0 && b.GetPixel(R, T).B == 0)
            { flag = 1; break; }
        }
        if (flag == 0) break;
    }
    xm3 = x1 + (xn - x1) / 3;
    R = xm3 - start_x;
    T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
    T = p + T; ym3 = T; if (ym3 > n) ym3 = n - 1;
    /////////// right //////////////////
    start_x = xn + 2; if (start_x >= m) start_x = m - 1;
    p = yn;
    for (Q = 0; Q < 90; Q++)
    {
        flag = 0;
        for (i = start_x; i >= 0; i--)
        {
            R = start_x - i;
            T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
            R = start_x - R; T = p + T;
            if (R < 0 || T >= n) break;
            if (b.GetPixel(R, T).R == 0 && b.GetPixel(R, T).G == 0 && b.GetPixel(R, T).B == 0)
            { flag = 1; break; }
        }
        if (flag == 0) break;
    }
    xm4 = x1 + 2 * (xn - x1) / 3;
    R = start_x - xm4;
    T = Convert.ToInt16(R * Math.Tan((Q * pi) / 180));
    T = p + T; ym4 = T; if (ym4 > n) ym4 = n - 1;

    // snap the four control ordinates to the nearest black pixel in their columns
    for (i = 0; i < n; i++)
        if (b.GetPixel(xm1, i).R == 0 && b.GetPixel(xm1, i).G == 0 && b.GetPixel(xm1, i).B == 0)
        { ym1 = i; break; }
    for (i = 0; i < n; i++)
        if (b.GetPixel(xm2, i).R == 0 && b.GetPixel(xm2, i).G == 0 && b.GetPixel(xm2, i).B == 0)
        { ym2 = i; break; }
    for (i = n - 1; i >= 0; i--)
        if (b.GetPixel(xm3, i).R == 0 && b.GetPixel(xm3, i).G == 0 && b.GetPixel(xm3, i).B == 0)
        { ym3 = i; break; }
    for (i = n - 1; i >= 0; i--)
        if (b.GetPixel(xm4, i).R == 0 && b.GetPixel(xm4, i).G == 0 && b.GetPixel(xm4, i).B == 0)
        { ym4 = i; break; }
b1[2 * count] = x1; b1[2 * count + 1] = y1; count++; b1[2 * count] = xm1; b1[2 * count + 1] = ym1; count++; b1[2 * count] = xm2; b1[2 * count + 1] = ym2; count++; b1[2 * count] = xn;
DEPT OF CSE, EPCET Page 47
EMOTION RECOGNITION FROM FACIAL EXPRESSION USING FUZZY LOGIC 2013-14
b1[2 * count + 1] = yn; count++;
bLength = count * 2; Bezier2D(10); for (i = 0; i < 9; i++) { x = Convert.ToInt16(p1[i * 2]); y = Convert.ToInt16(p1[i * 2 + 1]); x1 = Convert.ToInt16(p1[(i + 1) * 2]); y1 = Convert.ToInt16(p1[(i + 1) * 2 + 1]); slope(x, y, x1, y1); if (x < x1s) { x1s = x; y1s = y; } if (x1 < x1s) { x1s = x1; y1s = y1; } if (x > x1e) { x1e = x; y1e = y; } if (x1 > x1e) { x1e = x1; y1e = y1; } }
slope(xs, ys, x1s, y1s); slope(xe, ye, x1e, y1e);
for (i = 0; i < m; i++) for (j = 0; j < n; j++) { if (big[i][j] == 1) b.SetPixel(i, j, Color.Black); else b.SetPixel(i, j, Color.White); }
return b;
}
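The Bezier2D(10) call above samples the fitted curve at ten points from the four control points collected in b1; its body is not reproduced here. As a language-neutral illustration (a Python sketch, not the project's C# routine), the same sampling can be done with de Casteljau's algorithm:

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t by de Casteljau's algorithm.

    ctrl is a list of (x, y) control points; works for any degree.
    """
    pts = list(ctrl)
    while len(pts) > 1:
        # Repeatedly interpolate between adjacent points until one remains.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def sample_bezier(ctrl, samples=10):
    """Return `samples` points evenly spaced in parameter t along the curve."""
    return [bezier_point(ctrl, i / (samples - 1)) for i in range(samples)]

# Four control points in the role the code above assigns them: the two mouth
# corners plus two interior points at 1/3 and 2/3 of the corner-to-corner span
# (the coordinates are made up for illustration).
ctrl = [(0.0, 5.0), (3.0, 0.0), (6.0, 0.0), (9.0, 5.0)]
curve = sample_bezier(ctrl, 10)
```

The sampled curve starts and ends exactly at the corner control points, which is why the loop after Bezier2D can safely track the leftmost and rightmost samples as the curve's end points.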
CHAPTER 7
TESTING AND RESULT ANALYSIS
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, subassemblies, assemblies, and/or the finished
product. It is the process of exercising software with the intent of ensuring that the
software meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test; each test type addresses a specific
testing requirement.
7.1 Unit Testing
In unit testing, the individual programs that make up the system are tested; this is
also called program testing. This level of testing focuses on the modules
independently of one another. The purpose of unit testing is to determine the correct
working of the individual modules. For unit testing, we first adopted a code-testing
strategy, which examines the logic of the program. Syntax errors and the like were
rooted out during the development process itself. We then developed test cases that
result in executing every instruction in the program or module, i.e., every path
through the program was tested. (Test cases are data chosen to exercise every
possible branch and loop.)
Unit testing involves a precise definition of test cases, testing criteria, and management of
test cases. Testing checks whether the system meets the user requirements for each of the
following modules:
7.1.1 Unit test for the face module:
Unit testing of the face module was done after its completion. The module was
designed and tested to see if there was any error; specifically, whether the face region
is segmented correctly was checked.
Test result: The test cases passed successfully. No defect was encountered.
7.1.2 Unit test for the mouth module:
Unit testing of the mouth module was done after its completion. The module was
designed and tested to see if there was any error; specifically, whether the mouth
region is segmented correctly was checked.
Test result: The test cases passed successfully. No defect was encountered.
7.1.3 Unit test for the eye module:
Unit testing of the eye module was done after its completion. The module was
designed and tested to see if there was any error; specifically, whether the left-eye and
right-eye regions are segmented correctly was checked.
Test result: The test cases passed successfully. No defect was encountered.
7.1.4 Unit test for the emotion detection module:
Unit testing of the emotion detection module was done after its completion. The
module was designed and tested to see if there was any error; specifically, whether the
exact emotion is detected correctly was checked.
Test result: The test cases passed successfully. No defect was encountered.
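The module tests above can be automated. The sketch below uses Python's unittest framework with a hypothetical segment_face stub (a stand-in for the project's C# face module, not its real code) to show the shape such a unit test takes: each test asserts one property of the segmentation output.

```python
import unittest

def segment_face(image):
    """Hypothetical stand-in for the face-segmentation module.

    Takes a binary image (rows of 0/1) and returns the bounding box
    (top, left, bottom, right) of the nonzero region, or None if empty.
    """
    rows = [r for r, row in enumerate(image) if any(row)]
    cols = [c for c in range(len(image[0])) if any(row[c] for row in image)]
    if not rows or not cols:
        return None                      # no face region found
    return (rows[0], cols[0], rows[-1], cols[-1])

class FaceModuleTest(unittest.TestCase):
    def test_face_region_found(self):
        # Toy 3x3 binary image with a 2x2 "face" in the lower-right corner.
        image = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
        self.assertEqual(segment_face(image), (1, 1, 2, 2))

    def test_blank_image_rejected(self):
        self.assertIsNone(segment_face([[0, 0], [0, 0]]))
```

Run with `python -m unittest`; the same pattern (one assertion per segmentation property) applies to the mouth, eye, and emotion-detection modules.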
7.2 Integration Testing
In integration testing, the different modules of a system are combined according to an
integration plan. The integration plan specifies the steps and the order in which
modules are combined to realize the full system. After each integration step, the
partially integrated system is tested. The primary objective of integration testing is to
test the module interfaces.
In the main module, all the individual programs were tested first; after the individual
program tests succeeded, we moved on to integration. We combined some programs
and tested them, and after obtaining good results we combined all the programs
together and proceeded to system testing.
Test result: The test cases passed successfully. No defect was encountered.
7.3 System Testing
Once we were satisfied that all the modules work well in themselves, we looked into
how the system performs once all the modules are put together. At this stage the
system is used experimentally to ensure that all the requirements of the user are
fulfilled. Testing takes place at different levels so as to ensure that the system is free
from failure.
The user is trained in how to make an entry. The best test made on the system was
whether it produces the correct outputs; all the outputs were checked and found to be
correct. Feedback sessions were conducted, and the changes suggested by the user
were made before the acceptance test. System tests are designed to validate a fully
developed system with a view to assuring that it meets its requirements.
7.4 User Acceptance Testing
Acceptance testing involves planning and execution of functional and performance
tests. This is a critical phase of any project and requires significant contribution by
the end user.
Test result: All the test cases passed successfully. No defect was encountered.
CASE   INPUT                                  EXPECTED OUTPUT                  RESULT
1      A person wearing spectacles was        The correct emotion cannot       Success
       given as the input image               be detected
2      The input image was of an animal       "It is not a human face"         Success
3      An invalid password or user-id         A warning message is             Success
       was entered                            displayed
CHAPTER 8
CONCLUSION AND FUTURE ENHANCEMENTS
An important aspect of this project is the design of an emotion control scheme.
The accuracy of the control scheme ensures convergence of the control algorithm with
zero error, and repeatability ensures the right selection of audiovisual stimulus. The
proposed scheme of emotion recognition and control can be applied in system design for
two different problem domains. First, it can serve as an intelligent layer in the
next-generation human–machine interactive system. Such a system would have extensive
applications in the frontier technology of pervasive and ubiquitous computing. Second,
the emotion monitoring and control scheme would be useful for psychological counseling
and therapeutic applications. The pioneering works on the “structure of emotion” by
Gordon and the “emotional control of cognition” by Simon would find a new direction
with the proposed automation for emotion recognition and control.
In the course of this work, we have identified areas in which further work on the
project can be carried out:
- The proposed system can be enhanced for use in next-generation human–machine
  interactive systems.
- A web camera can be used to capture images of people and detect their emotions.
- The system can be applied in the medical field for psychological counseling.
- It can be extended to emotion recognition in animals.
- Emotions such as anger and disgust can be detected in the future.
CHAPTER 9
SNAPSHOTS
The project consists of a login screen, a segmentation screen, and the emotion result
screen.
Login screen:
The user has to enter the registered username and password to log in to the
application. There is a maximum of 3 login attempts; if the user fails to enter the
correct username and password, the user cannot log in to the application.
Registration screen:
The user has to enter a name, nickname, and password during registration. The
registered data is saved in the database.
If the login details match the registered data, the “Login successful” message is
displayed.
This is the first step after login. Here the input image is imported from the folder
where the images are stored. Images with the .gif, .jpeg, .bmp, etc. extension formats
are supported.
This is how it looks when the image is imported.
The Skin color option in the Preprocess menu is selected to convert the RGB image to
a binary image.
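The skin-colour step can be illustrated with a simple per-pixel rule. The sketch below is a Python illustration of one widely used RGB skin heuristic, not the project's actual C# thresholds (which are not shown here, so the values are assumptions): a pixel is kept as skin if it is bright enough and distinctly red-dominant.

```python
def is_skin(r, g, b):
    """Simple RGB skin heuristic (threshold values are illustrative)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough colour spread
            and abs(r - g) > 15 and r > g and r > b)  # red-dominant

def to_binary(image):
    """Map an RGB image (rows of (r, g, b) tuples) to a binary mask:
    1 where the pixel looks like skin, 0 elsewhere."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

# One skin-toned pixel and one blue (background) pixel.
row = [(220, 170, 140), (40, 60, 200)]
mask = to_binary([row])
```

The binary mask produced this way is what the later connected-component and segmentation steps operate on.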
The Connected option is clicked to focus on the face region.
The Next option has to be clicked for segmentation to take place.
Segmentation of the eyes and mouth is done.
The Bezier curves are obtained by appropriate calculation over the eye and mouth
regions.
These measurements of the Bezier curves are compared with measurements already
stored in the database, and the nearest matching emotion result is shown.
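The nearest-match step can be sketched as a minimum-distance lookup. The feature names and stored values below are hypothetical (the report does not list the actual database contents); the sketch only illustrates comparing measured Bezier-curve features against stored templates and returning the closest emotion.

```python
import math

# Hypothetical stored templates: emotion -> (mouth_opening, eye_opening),
# stand-ins for the Bezier-curve measurements kept in the database.
TEMPLATES = {
    "happy":   (0.8, 0.5),
    "sad":     (0.2, 0.3),
    "neutral": (0.4, 0.4),
}

def nearest_emotion(features, templates=TEMPLATES):
    """Return the emotion whose stored measurements are closest
    (Euclidean distance) to the measured feature vector."""
    return min(templates, key=lambda e: math.dist(features, templates[e]))

# A measured face close to the "happy" template.
result = nearest_emotion((0.75, 0.55))
```

In the actual system the fuzzy membership values play the role of these raw distances; this sketch shows only the nearest-neighbour idea behind "nearest matching emotion".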
REFERENCES
[1] J. C. Bezdek, “Fuzzy mathematics in pattern classification,” Ph.D. dissertation,
Appl. Math. Center, Cornell Univ., Ithaca, NY, 1973.
[2] B. Biswas, A. K. Mukherjee, and A. Konar, “Matching of digital images using fuzzy
logic,” AMSE Publication, vol. 35, no. 2, pp. 7–11, 1995.
[3] M. T. Black and Y. Yacoob, “Recognizing facial expressions in image sequences
using local parameterized models of image motion,” Int. J. Comput. Vis., vol. 25, no. 1,
pp. 23–48, Oct. 1997.
[4] C. Busso and S. Narayanan, “Interaction between speech and facial gestures in
emotional utterances: A single subject study,” IEEE Trans. Audio, Speech, Language
Process., vol. 15, no. 8, pp. 2331–2347, Nov. 2007.
[5] I. Cohen, “Facial expression recognition from video sequences,” M.S. thesis, Dept.
Elect. Eng., Univ. Illinois Urbana-Champaign, Urbana, IL, 2000.
[6] I. Cohen, N. Sebe, A. Garg, L. S. Chen, and T. S. Huang, “Facial expression
recognition from video sequences: Temporal and static modeling,” Comput. Vis. Image
Underst., vol. 91, no. 1/2, pp. 160–187, Jul. 2003.
[7] C. Conati, “Probabilistic assessment of user’s emotions in educational games,” J.
Appl. Artif. Intell., Special Issue Merging Cognition Affect HCI, vol. 16, no. 7/8,
pp. 555–575, Aug. 2002.
[8] G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, “Classifying
facial actions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 10, pp. 974–989,
Oct. 1999.
[9] P. Ekman and W. V. Friesen, Unmasking the Face: A Guide to Recognizing Emotions
From Facial Clues. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[10] I. A. Essa and A. P. Pentland, “Coding, analysis, interpretation and recognition of
facial expressions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 757–763,
Jul. 1997.
[11] W. A. Fellenz, J. G. Taylor, R. Cowie, E. Douglas-Cowie, F. Piat, S. Kollias, C.
Orovas, and B. Apolloni, “On emotion recognition of faces and of speech using neural
networks, fuzzy logic and the ASSESS systems,” in Proc. IEEE-INNS-ENNS Int. Joint
Conf. Neural Netw., 2000, pp. 93–98.
[12] J. M. Fernandez-Dols, H. Wallbott, and F. Sanchez, “Emotion category accessibility
and the decoding of emotion from facial expression and context,” J. Nonverbal Behav.,
vol. 15, no. 2, pp. 107–123, Jun. 1991.
[13] www.msdn.microsoft.com
[14] www.pentestmonkey.com
[15] www.testbed.com
[16] www.michaeldaw.org
[17] www.webappsec.com