
An Automated System in ATM Booth Using Face Encoding and Emotion Recognition Process

Atiqul Islam Chowdhury∗
United International University, Dhaka, Bangladesh

Mohammad Munem Shahriar
East West University, Dhaka, Bangladesh

Ashraful Islam
Daffodil International University, Dhaka, Bangladesh

Eshtiak Ahmed
Daffodil International University, Dhaka, Bangladesh

Asif Karim
Charles Darwin University, NT, Australia

Mohammad Rezwanul Islam
Charles Darwin University, NT, Australia

ABSTRACT
Nowadays, the banking transaction system is more flexible than it used to be. When the banking sector introduced the ATM booth, it was a step ahead in easing human effort. An ATM booth houses an automated teller machine that dispenses money to the consumer when a card is inserted into it. ATM booths support both credit and debit cards for transactions, and this has saved everyone's time. Still, there are certain situations, e.g., forgetting the card authentication details for a transaction, that can ruin a consumer's day. For this reason, this paper proposes a system to help everyone in such situations. The proposed system combines a face encoding process with an emotion recognition test, based on a Convolutional Neural Network (CNN), to make transactions faster and more accurate. Normal card transactions remain possible alongside the proposed system. The FER2013 dataset was used for training, and the model was then tested on our own sample images. The results show that the proposed system can correctly separate 'Happy' faces from faces showing other emotions and allow the transaction to proceed.

CCS CONCEPTS
• Computing methodologies; • Computer graphics; • Image manipulation; • Image processing;

KEYWORDS
Face encoding, emotion labels, network model, FER2013 dataset, ATM booth

ACM Reference Format:
Atiqul Islam Chowdhury, Mohammad Munem Shahriar, Ashraful Islam, Eshtiak Ahmed, Asif Karim, and Mohammad Rezwanul Islam. 2020. An Automated System in ATM Booth Using Face Encoding and Emotion Recognition Process. In 2020 2nd International Conference on Image Processing and

∗Corresponding E-mail: [email protected].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
IPMV 2020, August 05–07, 2020, Bangkok, Thailand
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-8841-2/20/08. . . $15.00
https://doi.org/10.1145/3421558.3421567

Machine Vision (IPMV 2020), August 05–07, 2020, Bangkok, Thailand. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3421558.3421567

1 INTRODUCTION
Since the beginning of mankind, there has been an urge to make things comfortable. From discovering fire to driving a car, everything has been invented, organized, developed, and managed to ease human effort. Automation has played a big role in this journey toward more comfortable human lives, and people are bringing automation into almost every aspect of their lives. ATM banking transactions, home automation, interactive voice response systems, self-driving vehicles, and space probes are some current examples of automation. Image processing tools are another: they are used intensively in many smart devices for applications that include face detection, facial recognition, and face encoding, many of which are performed with the help of deep learning methods.

In recent times, face recognition has been used with the help of security cameras. From the footage of those cameras, police investigations often benefit in solving criminal cases by detecting faces in the video [1] and distinguishing suspects by recognizing their faces. Moreover, facial recognition is implemented in many smart devices such as smartphones, laptops, and desktops, where an automated function like facial encoding is used to authenticate the facial image of a user and unlock the device.

Another part of image processing is emotion recognition. By detecting facial emotions from an image or a video, an analysis can be made of certain traits: it can be the emotional response of people to the photos in an art exhibition. Likewise, a teacher's class lecture can be improved by getting a response from the students' activity during the class [2]. Furthermore, customers' satisfaction with a product can tell much about the product's value in the market [3], and this can be tracked through the facial emotions of the customers.

In spite of the advancement of technology, these image processing tools are still not included in some automation systems. One of these systems is the Automated Teller Machine, also known as an ATM. In general, a person can simply take his Visa, credit, or debit card to the nearest ATM booth and use it to withdraw or deposit money, which saves the energy and time of going to a bank. This automation eases human lives, but further measures could also be taken here. If facial emotion recognizing ATM banking



systems can be introduced, this will be another step in the evolution of banking transactions.

The rest of this paper is organized as follows. Section 2 discusses related works in this area. The dataset description and our methodology are presented in Section 3. Experimental results are shown in Section 4. Finally, Section 5 delivers a discussion of this work, and Section 6 concludes the article with a few future directions.

2 RELATED WORKS
Facial classification can be used to detect the drowsiness of a driver behind the wheel. The researchers in [4] proposed a model that can be used for face recognition and detection, emotion recognition, and, primarily, drowsiness detection. The main focus of that study is to prevent drivers from being involved in car accidents. The model first detects the face along with its emotion; then, by measuring the person's eye aspect ratio, it determines whether the person is feeling drowsy and, if so, alerts the driver to stay awake. In another paper [5], the authors presented a study on emotion recognition with the help of micro-expressions such as movements of the face and eyes. They approached this with two methods: one uses the Viola-Jones algorithm and Haar features to extract the face and eyes from an image with the help of the OpenCV 4.0 library, and the other is a color intensity estimation method based on skin color characteristics.

Some papers have shown that facial emotion recognition can be used to investigate criminal cases. In [6], Ousmane et al. proposed a system for criminal investigators that can automatically recognize facial expressions of emotion through a web interface called 'throughYou'. The FER2013 dataset was used to train the model, and a Convolutional Neural Network was applied for that purpose. Although the model detects the happy emotion well but has a hard time detecting fear, it achieves a better testing accuracy, 66%, than other models applied to the FER2013 dataset.

For efficient face detection, the authors of [7] developed a model that modifies the sliding window technique for face detection using an ant system algorithm. They applied their model to the UM Learning dataset, which consists of single-face images of 10 students. The model gave a better result than the Viola-Jones algorithm: with almost the same time schedule, it takes fewer iterations than the Viola-Jones algorithm to detect a person's face.

The authors of [8] presented a model that first classifies each of the six core emotions into one of two emotional states, positive or negative, and then classifies them back into the core emotions. This helped to differentiate the core emotions surprise and fear. The accuracy obtained in that research is impressive: the accuracy with emotional states is higher than with the core emotions alone. There is another type of behavior similar to human emotions, namely conversational behavior. For this reason, in [9] the researchers introduced a system that can annotate the conversational behavior of a human being by analyzing the conversation between a human and a dialog

system. There were seven conversational behaviors, which were somewhat different from normal facial emotions. For the study, a video collection of 19 persons was gathered via Amazon Mechanical Turk. The videos were annotated by 3 annotators, thus making three sections of the same dataset following the inter-annotator agreement method.

In [10], Wahyono et al. proposed a face emotion detection approach, as part of computational intelligence on ubiquitous devices, that uses two-way testing: dataset testing and direct testing. The dataset was classified into three emotions, namely happy, normal, and sad. Dataset testing was conducted with 10 people, while a web-based application was used for direct testing. Another development took place in [11], where the authors built an application that follows a method for recognizing the emotions of a customer toward an advertisement. This method is called 'Neuromarketing', a combination of neuroscience and marketing. The application was created to benefit companies, so that they can understand what type of product a customer might or might not buy.

Facial emotion recognition can also be used in the health sector. Bargshady et al. showed in [12] that the health status of a patient can be detected from their pain emotions. By combining two Recurrent Neural Networks into a hybrid neural network, they measured the intensity of the pain emotion shown by a patient, using the UNBC-McMaster Shoulder Pain database. The joint deep neural network in that study demonstrates strong performance in emotion detection through facial images. Although the joint RNN approach was pre-trained with a type of Convolutional Neural Network, the researchers in [13] pointed out that not every dataset gives better results with a CNN; moreover, a CNN might overtrain on a dataset with specific features. In this regard, they approached a method for pain detection on real-time data, with a dataset combining pain and other non-pain emotions. Finally, they implemented it on a humanoid robot called 'Pepper' for real-time data analysis. Many other authors have proposed different methods to improve this technology, which is advancing day by day [14].

3 METHODOLOGY
3.1 Dataset
Facial expression recognition, along with facial encoding of two faces with different poses or expressions, is a great challenge these days; thus, it is considered one of the most researched topics. Although many datasets exist on this topic, most of them contain images as their data, which can be a problem since not all images have the same resolution. For this research, the FER2013 dataset [15] was used, which contains 48x48 face images labeled with 7 emotions. Figure 1 shows the emotion labels in this dataset. The dataset, containing 35,887 examples in total, was divided into a training set (80%), a validation set (10%), and a test set (10%).
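As a concrete illustration, the following sketch computes the partition sizes implied by an 80/10/10 split of the 35,887 FER2013 examples. The shuffle and the fixed seed are illustrative assumptions; the paper does not describe how the split was drawn.

```python
import random

TOTAL = 35_887                        # examples in FER2013
indices = list(range(TOTAL))
random.Random(0).shuffle(indices)     # hypothetical fixed seed for reproducibility

n_train = int(TOTAL * 0.8)            # 80% training
n_val = int(TOTAL * 0.1)              # 10% validation; the rest is the test set

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]

print(len(train_idx), len(val_idx), len(test_idx))  # 28709 3588 3590
```

Note that with integer truncation the test set ends up slightly larger than the validation set; the three parts still cover all 35,887 examples exactly once.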



Figure 1: Emotion Labels in the Dataset.

3.2 Proposed Method
The main aim of this research is to let a person withdraw cash from an ATM booth using a facial expression and encoding system alongside the card system. This system is an alternative to the old one, as the card can still be used in the ATM booth, but it will be beneficial for everyone in critical emergencies. The total procedure is divided into the three steps given below:

• An automated camera captures the facial image of the person. The machine then checks the image against the stored image (in the bank database) of that person. If a match is found, the procedure moves to the second step.

• In the second step, the person has to smile before the camera for the expression recognition system. If anyone tries to harm the person and snatch their money, the person will show an expression other than the happy emotion, and the next step will not execute. Only if the real person smiles in front of the camera is the third step initiated.

• The third step, the final step of the system, is to enter the person's PIN code into the ATM booth. After providing the PIN code, the person can collect his money.
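The three-step flow above can be sketched as a simple gate: the transaction is authorized only when all three checks pass. The function and argument names are illustrative assumptions, not the authors' implementation.

```python
def authorize_withdrawal(face_matches: bool,
                         detected_emotion: str,
                         pin_ok: bool) -> bool:
    # Step 1: the captured face must match the image in the bank database.
    if not face_matches:
        return False
    # Step 2: only a 'Happy' expression lets the transaction continue;
    # any other emotion is treated as a possible duress signal.
    if detected_emotion != "Happy":
        return False
    # Step 3: the usual PIN entry remains the final check.
    return pin_ok

print(authorize_withdrawal(True, "Happy", True))   # True
print(authorize_withdrawal(True, "Fear", True))    # False
```

The ordering matters for the security argument: the emotion check happens before the PIN is ever entered, so a coerced user never reaches the cash-dispensing step.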

For the facial expression recognition model, we first had to train the Convolutional Neural Network (CNN) so that it could predict any of the 7 emotions. For that reason, a 6-layered CNN was built in Keras, and image augmentation was used to improve the performance of the model. The batch size was 64, and the ADAM optimizer was used with a learning rate of 0.001. The ReLU activation function was used for the input layer and Softmax for the output layer. The tasks were organized into collecting, preprocessing, and training the data, along with creating models. After creating the model, more than 140 images of our own were tested to evaluate the emotion results. The flow diagram of the model creation and prediction system is shown in Figure 2.
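A minimal Keras sketch consistent with the description above (48x48 grayscale input, 7 emotion classes, ReLU activations, Softmax output, Adam with a learning rate of 0.001): the exact arrangement and filter counts of the six layers are not given in the paper, so the three convolution/pooling stages and two dense layers below are one plausible reading, not the authors' model.

```python
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                  # 48x48 grayscale face
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),            # one unit per emotion
])
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then use the paper's settings, e.g.:
# model.fit(train_x, train_y, batch_size=64, epochs=30,
#           validation_data=(val_x, val_y))
```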

Before classifying the dataset with the model, one more step was needed: face encoding. In this research, we used the 'face_recognition' Python library for this procedure. The library scans the image and creates a bounding box around the face

Figure 2: Model Creation and Prediction Steps.

for detection. In the encoding process, encoding vectors are created for both faces, and the library's built-in function is used to compare the distance between those two vectors. Of the two facial images, one is of the person standing in front of the ATM booth, and the other is the stored image of that person in the bank database. An overview of the system is shown in Figure 3.
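The distance comparison can be sketched in plain Python: 'face_recognition' represents each face as a 128-dimensional vector and, by default, declares a match when the Euclidean distance between the two vectors is at most 0.6. The toy vectors below stand in for real encodings; the helper name and threshold default are stated here as assumptions about the library's behavior, not taken from the paper.

```python
import math

def faces_match(enc_a, enc_b, tolerance=0.6):
    """Return True when two face-encoding vectors are close enough."""
    dist = math.dist(enc_a, enc_b)    # Euclidean distance between encodings
    return dist <= tolerance

# Toy 128-d encodings: identical vs. clearly different faces.
same = [0.1] * 128
other = [0.9] * 128
print(faces_match(same, same))    # True  (distance 0.0)
print(faces_match(same, other))   # False (distance far above 0.6)
```

In the proposed system, `enc_a` would come from the booth camera's capture and `enc_b` from the image stored in the bank database.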

As seen in Figure 3, there are seven (7) emotions. For the automated system, however, the method is divided into two emotional states: one is happy, and the other is not happy. The reason behind this is simply security. Robbery cases, in which the person about to collect cash is harmed, can take place in an ATM booth. In such circumstances, emotion recognition will identify an emotion other than the happy one. That is why the other six (6) emotions are boxed together in the figure.
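Collapsing the seven predicted labels into the two security states can be written as a small lookup. The label ordering below is the conventional FER2013 class indexing, stated here as an assumption about the exact mapping used.

```python
# Conventional FER2013 class indices (assumed mapping).
FER_LABELS = ["Angry", "Disgust", "Fear", "Happy",
              "Sad", "Surprise", "Neutral"]

def security_state(class_index: int) -> str:
    """Map a predicted class index to the system's two-state decision."""
    return "happy" if FER_LABELS[class_index] == "Happy" else "not happy"

print(security_state(3))   # happy
print(security_state(2))   # not happy
```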

4 EXPERIMENTAL ANALYSIS
The experimental results of the face encoding and emotion recognition are presented in this section. For face encoding, a Python library called 'face_recognition' was used, while a CNN, a deep learning model, was used for emotion recognition, as in [2] and in acne classification [16] from facial images. The model achieved about 67% accuracy in predicting the emotions. The model was compared with some previously proposed models and worked better than those. In the training period, 30 epochs were completed to check the



Figure 3: Overview of Emotion Detection Procedure.

Figure 4: Model Loss and Accuracy Graph for Training and Validation.

model loss and accuracy, which are shown in Figure 4. The testing dataset used for both procedures was loaded with our own images along with some random images from the internet.

4.1 Face Encoding
Facial encoding is a part of face detection and can be done with the same Python library. The main purpose of this mechanism is to determine whether two faces belong to the same person. The mechanism involves several steps, starting with finding faces in an image. It then examines the facial features of the two detected faces and compares them to see whether they match. The output is either True or False, as in the following figures (Figure 5 & Figure 6). This mechanism is the first step of the proposed system. When a person stands before the automated camera at the ATM booth, an image is captured. The captured image and that person's image in the bank database are then compared with the help of face encoding. If the result is True, the procedure proceeds to the next step.

4.2 Emotion Recognition
After face encoding comes emotion recognition. There are mainly seven (7) emotions that can be detected by emotion recognition. For this procedure, the help of deep learning algorithms, such as Convolutional Neural Network (CNN) models built in Keras, is

Figure 5: Face Encoding of Two Different Persons.

Figure 6: Face Encoding of the Same Person.

needed. Any of the seven (7) emotions can be detected from a facial image by such deep learning algorithms. After a facial image is fed into a deep learning model, the model analyzes the facial features and, according to that analysis, detects the emotion of the person



Figure 7: Emotion Recognition samples using our model.

in the image. Some of the emotion recognition outputs from the model are shown in Figure 7.

According to the proposed system, this is the second step. In the ATM booth, the captured image of the person is fed to the proposed CNN model for emotion recognition. If the result of the emotion recognition is 'Happy', the output is True; otherwise, the output is False. Moreover, this procedure safeguards the consumer: if the person is facing a robbery, the emotion recognition procedure will detect an emotion other than 'Happy', and thus the safety of the person's money is preserved. The accuracy for all seven emotions was analyzed with the running model. The model obtained better results for the 'Happy' and 'Surprise' emotions, but the others are also good for this dataset. Figure 8 shows the confusion matrix of those seven emotions, plotted to find out which emotions usually get confused with each other.

Figure 8: Normalized Confusion Matrix of Emotion Labels.

Before the PIN code is entered into the ATM booth, the outputs of both mechanisms have to be True. If either of them is False, the system stops and restarts right then. This automated system can help a person withdraw money in any emergency, with no need for a debit or credit card, and it also consumes less time.

5 DISCUSSION
As the modern world embraces new technology to ease human effort, the proposed system of face encoding along with emotion recognition can be a revolution in the banking sector. The system achieved satisfactory testing results for both face encoding and emotion recognition: not only did the model encode faces precisely, but it also correctly predicted the 'Happy' emotion. This 'Happy' emotion is essential, as it is the key to unlocking the account in the booth. Moreover, this will be a great help to the ATM booth system, as transactions can be made more efficiently. The system can keep pace with modern technologies while ensuring the safety of consumers: it can protect a consumer facing a robbery, because the account cannot be opened without a happy facial image, so face recognition alone cannot open it. Thus, the proposed system addresses both the time and the safety of a consumer. The card system still works here, but this proposed system can be the ultimate future.

6 CONCLUSION AND FUTURE WORK
Many technologies surround us nowadays. Technologies like face, eye, and mouth detection and emotion recognition are being used in various applications. Most of them are smartphone-based or desktop-based applications that can monitor drowsiness in a driver's face, detect a customer's satisfaction with a product, or unlock a smartphone. In this paper, another field has been found where some of these technologies can be applied: in the ATM booth, face encoding along with emotion recognition



can help consumers save time and ensure their safety. We used our own images for testing. However, the dataset was an imbalanced one, and in the future, fine tuning will be applied. Moreover, this paper aims to produce a better model, as this approach to the banking system may have an impact on future endeavors. Hopefully, it will be a sound system and can reduce the use of debit or credit cards in the future. The 'face_recognition' library works by locating certain landmarks on a face. The system compared pictures of some persons taken 10 years apart and found the match; it also works for bearded persons. However, if a person wears a mask or any religious face covering, recognition becomes difficult, and the mask would have to be removed in that case. We are working to address this issue, and it will be our future work.

REFERENCES
[1] I. G. N. Made Kris Raya, A. N. Jati and R. E. Saputra, "Analysis realization of Viola-Jones method for face detection on CCTV camera based on embedded system," 2017 International Conference on Robotics, Biomimetics, and Intelligent Computational Systems, Bali, 2017, pp. 1-5.
[2] I. Lasri, A. R. Solh and M. E. Belkacemi, "Facial Emotion Recognition of Students using Convolutional Neural Network," 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS), Marrakech, Morocco, 2019, pp. 1-6.
[3] M. S. Bouzakraoui, A. Sadiq and A. Y. Alaoui, "Appreciation of Customer Satisfaction Through Analysis Facial Expressions and Emotions Recognition," 2019 4th World Conference on Complex Systems (WCCS), Ouarzazate, Morocco, 2019, pp. 1-5.
[4] A. Uppal, S. Tyagi, R. Kumar and S. Sharma, "Emotion recognition and drowsiness detection using Python," 2019 9th International Conference on Cloud Computing, Data Science & Engineering, India, 2019, pp. 464-469.
[5] A. D. Sergeeva, A. V. Savin, V. A. Sablina and O. V. Melnik, "Emotion Recognition from Micro-Expressions: Search for the Face and Eyes," 2019 8th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 2019, pp. 1-4.
[6] A. M. Ousmane, T. Djara, F. J. Zoumarou W. and A. Vianou, "Automatic recognition system of emotions expressed through the face using machine learning: Application to police interrogation simulation," 2019 3rd International Conference on Bio-engineering for Smart Technologies (BioSMART), Paris, France, 2019, pp. 1-4.
[7] K. C. Kirana, S. Wibawanto, N. Hidayah and G. P. Cahyono, "Ant System for Face Detection," 2019 International Seminar on Application for Technology of Information and Communication (iSemantic), Indonesia, 2019, pp. 152-156.
[8] A. M. Cadayona, N. M. S. Cerilla, D. M. M. Jurilla, A. K. D. Balan and J. C. d. Goma, "Emotional State Classification: An Additional Step in Emotion Classification through Face Detection," 2019 IEEE 6th International Conference on Industrial Engineering and Applications, Japan, 2019, pp. 66-671.
[9] O. Roesler and D. Suendermann-Oeft, "Towards Visual Behavior Detection in Human-Machine Conversations," 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Spokane, WA, USA, 2019, pp. 36-39.
[10] I. D. Wahyono, D. Saryono, M. Ashar, K. Asfani and Sunarti, "Face Emotional Detection using Computational Intelligence based Ubiquitous Computing," 2019 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 2019, pp. 389-393.
[11] F. Filipovic, M. Despotovic-Zrakic, B. Radenkovic, B. Jovanic and L. Živojinovic, "An Application of Artificial Intelligence for Detecting Emotions in Neuromarketing," 2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI), Belgrade, Serbia, 2019, pp. 49-494.
[12] G. Bargshady, J. Soar, X. Zhou, R. C. Deo, F. Whittaker and H. Wang, "A Joint Deep Neural Network Model for Pain Recognition from Face," 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), Singapore, 2019, pp. 52-56.
[13] L. Dai, J. Broekens and K. P. Truong, "Real-time pain detection in facial expressions for health robotics," 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, United Kingdom, 2019, pp. 277-283.
[14] Anis-Ul-Islam Rafid, Amit Raha Niloy, Atiqul Islam Chowdhury and Nusrat Sharmin, "A Brief Review on Different Driver's Drowsiness Detection Techniques," International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 12, No. 3, pp. 41-50, 2020.
[15] FER Dataset, https://www.kaggle.com/deadskull7/fer2013.
[16] M. S. Junayed, A. A. Jeny, S. T. Atik, N. Neehal, A. Karim, S. Azam and B. Shanmugam, "AcneNet - A Deep CNN Based Classification Approach for Acne Classes," 2019 12th IEEE International Conference on Information & Communication Technology and System, Indonesia, 2019.

