TRANSCRIPT
Disclaimer
No part of this material may be reproduced, stored in a retrieval system, or
transmitted in any form by any means, electronic or mechanical including
photocopying, recording or by any information storage and retrieval system,
without prior written permission of the publisher.
The opinions expressed and figures provided in this proceedings of CAIRIA-2018
are sole responsibility of the authors. The organizers and the editor bear no
responsibility in this regard. Any and all such liabilities are disclaimed.
Print: ISBN 978-93-5300-754-6
Copyright © CAIRIA-2018 All Rights Reserved. Printed in INDIA. March 2018
Published and Printed by Publication Committee Of Conference.
Designed by Ayush Kumar Rathore BSc.(IT)
C A I R I A - 2 0 1 8
C O N F E R E N C E O N A R T I F I C I A L I N T E L L I G E N C E :
R E S E A R C H , I N N O V A T I O N & I T S A P P L I C A T I O N S
1 4 - 1 5 M A R C H - 2 0 1 8
AMITY INSTITUTE OF INFORMATION TECHNOLOGY,
AMITY UNIVERSITY, LUCKNOW CAMPUS
Proceedings
Conference on Artificial Intelligence: Research, Innovation & its Applications (CAIRIA-2018)
Event Photographs
Keynote Speakers: Dr. Laxmidhar Behera, Professor, Indian Institute of Technology, Kanpur; Dr. Rohit Mehrotra,
Surgeon, GSV Medical University, Kanpur; Along with Professor STH Abidi; Prof. S. K. Singh; Mr. Deepak
Sharma, Chairman CSI; Brig. U. K. Chopra; Wg. Cdr. (Dr.) Anil K. Tiwari & Dr. Pankaj K. Goswami.
Keynote Speakers: Dr. Laxmidhar Behera, Professor, Indian Institute of Technology, Kanpur; Dr. Vivek Singh,
Professor, IIT BHU; Dr. Rohit Mehrotra, Surgeon, GSV Medical University, Kanpur; Along with Conference
Organizing Committee.
Keynote Speakers: Dr. Nischal K. Verma, Professor IIT-Kanpur & Mr Shivank Shekhar, Co-Chair, Global
WebVR Industry Committee, VRAR Association Along with Professor S. K. Singh; Dr. Parul Verma;
Ms Meenakshi Srivastava; Mr. Sanjay Taneja; Dr. Pankaj K. Goswami and participants.
Brig. Umesh K. Chopra (Retd.) Director-AIIT,
Amity University Lucknow Campus
Message
It is a matter of esteemed privilege to be part of the Conference on Artificial Intelligence: Research, Innovation & its Applications (CAIRIA-2018), organized by Amity Institute of Information Technology, Amity University Lucknow Campus. The theme of the conference is of great importance and extreme relevance. It focuses on technological innovation and trend-setting initiatives for Academia, Research & Development, and the Healthcare domain. Technology is moving ahead from the digital age to the smart age of computing. Artificial Intelligence is one of the most promising areas and adds great value alongside Robotics, Machine Learning, Machine Translation, Neural Networks, and Fuzzy Systems. Artificial Intelligence techniques in healthcare have led to early detection of diseases, including cancer, and even help in treatment. This conference will focus on technological innovation and trend-setting initiatives with the purpose and objective of transforming society into a knowledge-based community. Artificial Intelligence researchers have created many tools to solve the most difficult problems faced by society. I extend my sincere thanks to the organizing committee for the success of the conference.
Brig. Umesh K. Chopra (Retd.)
Sanjay Mohapatra President
Computer Society Of India,
Mumbai
Message
It gives me great pleasure to know that Amity University Uttar Pradesh Lucknow Campus, in association with the Lucknow Chapter of the Computer Society of India & IET, is organizing the National Conference CAIRIA-2018 on "Artificial Intelligence: Research, Innovation & its Applications" on 14th & 15th March, 2018 at Lucknow.
This is the need of present-day India and the right direction to look, when the country is rising to be a global player and trying to improve the quality of life of its citizens. I am sure that this Conference will be a step forward, provide an insight into developments in Artificial Intelligence, and contribute to fulfilling the dream and mission of 'Digital India'.
I send my best wishes for the success of the Conference and look forward to the ideas and knowledge it will generate, which will help India and the Lucknow region realize their true potential.
(Sanjay Mohapatra)
President, CSI
Advisory Committee
Dr. Y.N. Singh Professor, IIT Kanpur
Dr. S.K. Dwivedi
Professor, BBA, University, Lucknow
Dr. O.P. Sharma Associate Professor, IIT Patna
Mr. Deepak Sharma
Chairman, CSI Lucknow Chapter
Mr. Arvind Sharma Regional Vice President, Region-1, CSI Mumbai
Dr. Geetika Srivastava
Assistant Professor, Amity University
Organizing Committee
Chief Patron: Dr. Aseem Chauhan, Chairman, AUUP Lucknow Campus; Chancellor, Amity University, Rajasthan
Patron: Maj. Gen. K.K. Ohri, AVSM (Retd.), Pro-Vice Chancellor, AUUP Lucknow Campus
Director of the Conference: Brig. Umesh K. Chopra (Retd.), Director, AIIT, AUUP Lucknow Campus
Chair: Professor Dr. S. K. Singh, Prog. Director, AIIT, Mob.: 9415617702
Conference Chair: Dr. Pankaj K. Goswami, Mob.: 9453434364
Publication Chair: Dr. Parul Verma, Mob.: 9839289870
Finance Chair: Mr. Sanjay D. Taneja, Mob.: 9415874468
Registration Conduct Chair: Ms. Namrata Nagpal, Mob.: 9838930104
Technical Program Committee (TPC) Chair: Ms. Meenakshi Srivastava, Mob.: 9415433194
Organising Committee (Students)
Mr. Nikhil Singh Rawat
8317032920
Mr. Rishab Nigam
7068832340
Mr. Ayush Kumar Rathore
7905180613
Mr. Vinayak Singh
7860644732
Mr. Devanshu Srivastava
9129375459
Mr. Jasmeet Singh
7376263516
Mr. Abhishek Soni
9559778090
Mr. Varun Singh
7668721334
Mr. Prafulla Sahu
7275232428
Mr. Dravin Palekar
7007347310
Ms. Jigeesha Agarwal
7309030817
Ms. Sonal Jaiswal
7408594979
Nishchal K. Verma, Ph.D. (IIT Delhi) Devendra Shukla Young Faculty Research
Fellow, 2013-16, Associate Professor,
Department of Electrical Engineering, Indian Institute of Technology, Kanpur
Abstract
Nowadays, data generation is increasing at an exponential rate. With huge volumes of data available (terabytes and petabytes), big data brings large opportunities and transformative potential for the public and private sectors. But handling such huge amounts of data with conventional machine learning is difficult. Deep learning techniques, an advancement over conventional machine learning, are popular due to their adaptive nature in handling such data. In real-world applications, however, data exists in both linguistic and numeric form, and extracting relevant information from such data using deep learning is difficult. The deep fuzzy network (DFN), an advancement over the deep neural network (DNN), combines the linguistic and numeric forms of data to transform data into relevant information. This lecture starts with a basic introduction to deep neural networks (DNN) and deep fuzzy networks (DFN). These networks are then used to extract useful representations from data for various applications, e.g. machine health monitoring data, gene expression microarray data, etc.
Dr Vivek Singh Professor, Computer Science,
Banaras Hindu University, Varanasi
Abstract
The work in Artificial Intelligence (AI) since 1956, when the term was first coined by John McCarthy at the Dartmouth summer research project on AI, has witnessed cycles of successes, misplaced optimism, and resulting cutbacks in enthusiasm and funding. While every small success aroused further interest and enthusiasm, major theoretical and experimental failures resulted in research funding being reduced drastically. Most of the initial work on AI was concerned with one of three areas: language translation, problem solving, and pattern recognition. There were noticeable successes, although at a micro scale, with Newell & Simon's Logic Theorist (followed by GPS) and systems to handle Morse code and decipher simple patterns, along with the important works of Marvin Minsky, Warren McCulloch & Walter Pitts, among others. It was widely thought that as soon as cheap and faster hardware and memory were available, the simple theories and successes could be scaled up to work on bigger problems. However, most of the early systems that performed well on simple problems turned out to fail miserably when tried on bigger and more difficult ones. This forced AI researchers to take a fresh look at their approaches and philosophical directions.
Dr. Rohit Mehrotra Professor,
GSVM University, Kanpur
Abstract
Over 5% of the world's population, or 466 million people, have disabling hearing loss (432 million adults and 34 million children). It is estimated that by 2050 over 900 million people, or one in every ten, will have disabling hearing loss. 5-6 in every 1,000 children born in India are deaf; about 45,000 children are born every day in India, so about 250 deaf children are born each day. If a child cannot hear, how can the child learn words and language? So the child cannot speak. The etiology of hearing loss in children remains obscure in about half of the cases. We can now check the hearing of a newborn baby on the day of birth with OAE and BERA tests. If a child is found deaf on testing and is operated on in the first year of life, the child hears and speaks normally. The best results are with bilateral implantation. If these surgeries are done in time, there would be no deaf and mute people in our society; but if surgery is delayed, results start reducing, and surgery does not benefit after 5 years of age. Early intervention is the key to success. This life-transforming surgery of cochlear implantation gives normal hearing and speech. So, with the aim of a deaf-free UP by 2030, the mission is to give hearing to every deaf child. Artificial Intelligence can be used to make speech processing faster and better, and the implant smaller and completely implantable.
Mr. Shivank Shekhar Co-chair
Global WebVR, Industry Committee,
VRAR Association
Abstract
Coalescing Spectrums of AI & XR
Oracle CEO Mark Hurd once wrote on LinkedIn that almost three-quarters of companies respond to digital disruption only after the second year of its occurrence, while only 14% of executives believe their companies are ready to effectively redesign their organizations. So in the corporate world there is a huge window of opportunity when it comes to understanding and applying the underlying technologies. The congruence of the applicability of VR and Artificial Intelligence excites me to introduce students to the frontline research and industrial applications that have already been realized.
Principle and Applications of Speaker Recognition Security System
Nilu Singh1, Alka Agrawal2 and R. A. Khan3
1,2,3SIST-DIT, Babasaheb Bhimrao Ambedkar University (A Central University), Lucknow, UP, India
Email: [email protected]
Abstract — This paper overviews the principle and applications of speaker recognition. Speech is a natural way for humans to convey information, and the speech signal is enriched with information about the individual. Recognizing a person by his/her voice is known as speaker recognition (SR). Since the human voice has measurable characteristics, it falls in the category of biometrics; the term biometric refers to the measurement of characteristics of the human body, and biometrics is also known as realistic authentication. Voice biometrics, or speaker recognition, recognizes an individual through the individual characteristics of his/her voice.
Index Terms — Speaker Recognition, Speaker Verification, Speaker Identification, Open-set & Closed-set, Applications of Speaker Recognition.
I. INTRODUCTION
Human speech is a medium for expressing thoughts during communication. A speech signal is a complex signal packed with several knowledge resources, such as acoustic, articulatory, semantic, and linguistic information [1-2]. During communication, humans easily perceive information such as emotion, language, and mental state. This human ability to decode information motivated researchers to study the information carried by speech signals, and this idea helped in developing systems able to procure and process the information assembled in a speech signal. One person's voice differs from another's due to the acoustic properties of the speech signal: a speaker's voice is unique to the individual because of differences in glottal structure, dissimilarities in the vocal tract, and the learned speaking habits of individuals [1-3].
In this digital era, speaker recognition is among the most useful biometric recognition techniques [3]. Nowadays many organizations, such as banks, industries, and access control systems, use this technology to provide greater security for their vast databases [3-4]. Speaker recognition is mainly divided into speaker identification (1:N matching) and speaker verification (1:1 matching). Identification is considered more difficult than verification [5]. Intuitively, the performance of a speaker identification system is affected by the number of enrolled speakers (the probability of an incorrect decision increases), while the performance of a speaker verification system is not affected by an increase in voice database size, since only two speakers are compared.
In the last few years, the requirement for authentication has increased with the growing digital world of information. It has already been shown that biometric authentication techniques increase security levels. Speaker identification is the procedure of recognizing an utterance from among the enrolled speakers, while speaker verification is a binary task: the speaker is either accepted or rejected. Speaker verification systems are a real example of biometric authentication systems. Further, speaker recognition can be categorized as text-dependent and text-independent [6]. Text-dependent systems rely on the same utterance being spoken by the speaker in both training and testing, while text-independent systems do not require the same sentence or words to be uttered during training and testing [5]. It is accepted that text-dependent systems provide more precise results, as both the content and the voice can be matched, i.e. the speaker utters exactly the sentence uttered during training; text-independent recognition systems may use either the same voice sample or a different one for verification/identification [7-8].
The Gaussian mixture model (GMM) is one of the most common methods of speaker modeling for speaker identification. GMM is used in two distinct ways in identification systems: first, when the training principle is maximum likelihood (ML) and parameter estimation is performed using the expectation-maximization (EM) algorithm; and second, when the training principle is maximum a posteriori (MAP) [9-11].
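The ML/EM variant of the GMM approach described above can be sketched briefly. The following is a minimal illustration, not the paper's implementation: it uses scikit-learn's GaussianMixture (which fits by EM) on synthetic feature vectors, and the speaker names, feature dimension, and component count are arbitrary stand-ins for real MFCC features and tuned model sizes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "feature vectors" for two enrolled speakers (stand-ins for MFCCs).
train = {
    "alice": rng.normal(loc=0.0, scale=1.0, size=(200, 4)),
    "bob":   rng.normal(loc=3.0, scale=1.0, size=(200, 4)),
}

# ML training: EM fits one GMM per enrolled speaker.
models = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
          for name, X in train.items()}

def identify(features):
    """Closed-set identification: pick the speaker whose GMM gives
    the highest average log-likelihood for the test features."""
    scores = {name: gmm.score(features) for name, gmm in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(loc=3.0, scale=1.0, size=(50, 4))  # resembles "bob"
print(identify(test))
```

In a real system each speaker's GMM would be trained on MFCC frames from enrollment recordings, and the number of mixture components would be chosen per the size of the training data.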
Speaker verification is used for applications where speech is the main component authorizing the speaker, while the task of speaker identification is to decide whether a given utterance comes from a certain registered speaker. Speaker verification has wider usability than speaker identification: with an increase in the voice database, the difficulty of speaker identification increases, whereas speaker verification is independent of the voice database population, since it only makes a binary decision, acceptance or rejection. Speaker recognition system performance (recognition accuracy) is most affected by intersession variability (variability over time) and the spectra of a speaker's speech signal [7] [12].
II. SPEAKER RECOGNITION
It is well accepted that in this electronic era people interact by voice with the help of electronic devices. The human voice is a signal that carries various information related to human characteristics, such as emotion, words, language, and speaker identity. To identify a human being by voice, the required speech features are selected from the speech signal using available feature extraction techniques [13]. Speaker recognition is the process of recognizing a person on the basis of his/her voice. Automatic speaker recognition systems are categorized into speaker identification and speaker verification. A speaker verification system's decision is binary, i.e. 0 or 1 (accept or reject), as it judges an identity claimed by the speaker; a speaker identification decision requires N matches, after which the decision about acceptance or rejection is made [14]. Speaker recognition is further distinguished as text-dependent and text-independent. In a text-dependent system, recognition of the speaker's identity depends on specific words or sentences; in text-independent recognition, speakers are not restricted to particular sentences or phrases [3] [15-19]. Fig. 1 presents an overview of a speaker recognition system and Fig. 2 shows the basic concept of speaker identification and speaker verification methodology.
Fig. 1 Basic speaker recognition system
The popularity of speaker recognition systems is due to their low cost of implementation, owing to the easy availability of microphones and the universal telephone network. In this digital era it is very easy to capture someone's voice and authenticate it using a speaker recognition system; the only cost is the software used for the system. The study of the speech signal concerns the characteristics that distinguish one speech signal from another [3] [15] [17].
Fig. 2 Speaker identification and speaker verification
A. Speaker Identification
A speaker identification system is a 1:N matching system. Here the user need not provide his/her identity to the system. During identification, the user inputs his/her speech to the ASR system, and the system decides the user's identity on the basis of the match score. In this case the system has to perform N comparisons, N being the number of stored speaker/user models in the voice database. Comparison with each registered model produces a likelihood score, and on the basis of this score the most likely model is identified as the speaker [7] [20].
It is clear from the literature that speaker identification is more complex than speaker verification; hence the performance of a speaker identification system degrades compared to speaker verification [21]. Fig. 3 shows the process of a speaker identification system.
Fig. 3 Process of speaker identification system
B. Speaker Verification
A speaker verification system is 1:1, i.e. the system either accepts or rejects. In the verification process, the user first provides his/her identity, which is then checked by the system, and a decision is made as to whether the claimed identity is true (accept) or false (reject) [22]. Speaker verification can be explained easily with the example of an Automated Teller Machine (ATM). Before any transaction, the user first inserts the ATM card into the machine; this credit/debit card contains information about the user such as name and signature. Now, if the ATM works on ASR technology, it checks that the card is being used by its genuine holder by asking the holder to produce his/her voice. Since the user has already provided an identity to the system, only a 'yes or no' decision has to be made by the ATM, i.e. the card holder is either accepted or rejected. This decision is made by comparing the voice input with the voice previously provided by the user [2] [5]. Fig. 4 shows the process of a speaker verification system.
C. Open-Set vs. Closed-Set
Speaker identification is categorized as closed-set and open-set. Open-set speaker identification for a test utterance is a twofold problem: first, it is necessary to find the speaker model that best matches the test utterance among the enrolled set; second, it must be determined whether the test utterance was actually produced by that best-matched speaker or by some unknown speaker [2] [21] [23-24]. Fig. 5 shows the classification of speaker recognition systems.
The possible errors and problems in open-set speaker identification can be examined as follows. Suppose that N speakers are enrolled in the system's voice database and M1, M2, ..., MN are their statistical models [25]. If O represents the feature vector sequence extracted from the given utterance, then open-set identification first finds the best-matching model, i* = argmax_i p(O | Mi), and O is assigned to speaker i* only if p(O | Mi*) > θ, where θ is a pre-defined threshold. If the maximum likelihood score is greater than the threshold θ, the voice is taken to originate from the known speaker i*; otherwise it is attributed to an unknown speaker. For a given O the following errors are possible [2] [23] [26].
False Acceptance (FA): the recognition system accepts an impostor as one of the registered speakers [2].
False Rejection (FR): the recognition system rejects a registered speaker [2].
Speaker Confusion (SC): the system correctly accepts a registered speaker but confuses him/her with another registered speaker [2].
Fig. 5 Classification of speaker recognition
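The open-set decision, best match followed by a threshold test, can be sketched in a few lines. The log-likelihood values and threshold below are hypothetical numbers chosen only to illustrate the two branches of the rule.

```python
def open_set_identify(scores, theta):
    """Open-set identification: pick the best-matching enrolled model,
    then accept it only if its score clears the threshold theta;
    otherwise declare the speaker unknown (None)."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] > theta else None

# Hypothetical log-likelihoods log p(O | M_i) for N = 3 enrolled speakers.
scores = [-42.0, -17.5, -30.1]
print(open_set_identify(scores, theta=-20.0))  # best score -17.5 > -20: index 1
print(open_set_identify(scores, theta=-10.0))  # best score too low: None
```

The threshold θ trades off the error types listed above: raising it reduces false acceptances of impostors at the cost of more false rejections of registered speakers.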
III. EXTRACTION OF SPEECH FEATURES
A speech signal has several kinds of features: phonetic, prosodic, acoustic, etc. Selecting the required speech features from among them is an important task. Selecting more informative speech features helps improve system performance, while selecting features of little or no use for a particular task is unfavorable to it. Hence, examining new speech features for a specific task has always been an important and difficult task [27]. Speech signals carry not only language information but also a key paralinguistic component, prosody. Speech features that describe the prosody of speech are called prosodic features, while features not used to describe prosody are called acoustic features. The fundamental frequency is the main such component of a speech signal and is explained as follows:
Fundamental frequency features: Fundamental frequency (F0) features are useful for tonal languages, as tones are related to the dynamics of the fundamental frequency. Many methods are used to extract F0. One common method is autocorrelation, i.e. computing the autocorrelation of the signal within a frame: the second-highest peak of the autocorrelation (after the zero-lag peak) indicates the fundamental period of the speech signal. For better accuracy, or to make the system more robust against noise, further techniques are required, based on observations such as tracking the peak of the autocorrelation across frames or normalizing the autocorrelation according to the analysis window [15] [28-29] [30].
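The within-frame autocorrelation method described above can be sketched as follows. This is a bare-bones illustration on a synthetic sinusoid; the sampling rate, frame length, and search range (50-400 Hz, a typical speech pitch range) are illustrative choices, and it omits the cross-frame tracking and normalization refinements mentioned above.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=50.0, fmax=400.0):
    """Autocorrelation pitch estimate for one frame: find the lag of
    the highest autocorrelation peak inside the plausible pitch range
    (the 'second-highest' peak, after the zero-lag maximum)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag search range in samples
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr            # one 40 ms analysis frame
frame = np.sin(2 * np.pi * 200.0 * t)         # synthetic 200 Hz "voice"
print(round(estimate_f0(frame, sr)))          # recovers 200
```

Restricting the lag search to the plausible pitch range is what skips the zero-lag maximum and keeps the estimate inside physically meaningful F0 values.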
The measurement unit of F0 is Hertz (Hz), i.e. the number of periods within one second. Fundamental frequency periods can further be characterized by jitter and shimmer.
Jitter: frequency stability in terms of the equality of period durations. It is computed as the average absolute difference between consecutive periods, divided by the average period [31-32].
Shimmer: a measure of the amplitude stability of F0 periods. It is computed as the average absolute difference between the amplitudes of consecutive periods, divided by the average amplitude [31-32].
Fig. 4 Process of speaker verification system
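The jitter and shimmer formulas above are simple enough to compute directly. The period and amplitude values below are hypothetical, chosen to resemble a roughly 200 Hz voice.

```python
def jitter(periods):
    """Average absolute difference between consecutive period
    durations, divided by the average period (local jitter)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """The same formula applied to per-period peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Hypothetical per-period durations in seconds (~200 Hz) and amplitudes.
periods = [0.0050, 0.0051, 0.0049, 0.0050]
amps = [1.00, 0.98, 1.02, 1.00]
print(round(jitter(periods), 4))   # ~2.7% period variation
print(round(shimmer(amps), 4))     # ~2.7% amplitude variation
```

Dividing by the average period (or amplitude) makes both measures relative, so voices with different pitch or loudness can be compared on the same scale.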
Both jitter and shimmer have observation-based thresholds and are basically used in speech pathology research. From the fundamental frequency and glottal-cycle point of view, there are many phonation types of voice, such as the following [15]:
Normal voice: the vocal cords are in their natural mode [15] [16].
Creaky voice, vocal fry: creaky phonation is characteristically related to aperiodic glottal pulses. In such a voice, the degree of aperiodicity in the glottal source is quantified by measuring the jitter; during creaky phonation, jitter values are higher than in other phonation types [14-16].
Falsetto voice: in this type of voice production the vocal folds are stretched longitudinally (thin); the vibrating mass is therefore smaller, hence the tone is higher [15].
Breathy voice: a noticeably compound phonation type, in which vocal fold vibration is inefficient due to incomplete closure of the glottis [15].
F0 rises when a vowel follows an unvoiced plosive and decreases when it follows a voiced plosive.
IV. APPLICATION OF SPEAKER RECOGNITION
In the last few years the use of biometric systems has become a reality. There are many commercial as well as personal applications where biometrics is used for security purposes. Speaker verification has gained huge acceptance in both government and financial sectors for secure authentication [5] [33-34]. The Australian Government organization Centrelink uses speaker verification to authenticate recipients of telephone transactions [35]. Possible applications of speaker recognition include forensic investigation, telephone banking, access control, user authentication, etc. [36].
Speaker recognition has more potential than other biometrics such as face recognition, fingerprints, and retina scans. Its main advantages over other biometrics are the low cost, high acceptance, and non-invasive character of speech acquisition. Developing a speaker recognition system requires neither expensive equipment nor the direct participation of speakers. Speaker recognition has the potential to eliminate the need to carry a debit or credit card, or to remember passwords for bank accounts, other security locks, and many online services [6] [37-38]. With the continuous improvement in the reliability of speaker recognition technology, its usability has increased; nowadays speaker recognition has become a commercial reality and part of consumers' everyday life [34] [39].
The performance of a speaker recognition system is vulnerable to changes in speaker characteristics such as age, health problems, and the speaking environment. Another disadvantage is that it is possible to play a recorded voice instead of the actual voice of a speaker [2] [34-35].
Access Control: controlling access to computer networks.
Transaction Verification: for telephone banking and account access control.
Law Enforcement: used in home parole monitoring and residential call observation.
Speech Data Organization: used for voice mail, e.g. speech skimming or audio mining applications.
Personalization: device customization; storing and fetching personal information based on user verification on multi-user devices [33].
All the above-mentioned applications require robust speaker recognition techniques. The requirement for robustness can be explained with an example: in telephone-based services a user speaks in many circumstances (in a noisy environment or on the street), uses different communication media (telephone or mobile), varies the distance to the microphone, and so on. Robustness is therefore a key factor deciding the success of a speaker recognition system. In this area, the first international patent was filed in 1983 by Michele Cavazza and Alberto Ciaramella; the invention relates to the analysis of the speech characteristics of speakers, in particular to a device for speaker verification [40].
Barclays Wealth and Investment Management was the first organization in the world to deploy passive voice security services, with the basic aim of transforming the customer service experience: customers are automatically verified as they speak with a service center executive [41].
In August 2014 GoVivace developed a speaker identification system using voice biometric technology. This technology can be used for rapid matching of a voice sample against thousands or millions of voice recordings; its purpose is to identify callers in enterprise contact center settings where security is a major concern. GoVivace's speaker identification technology is also available as an engine, with Application Programming Interfaces (APIs) for using the software as a service [58]. In the UK, HSBC is developing voice recognition and touch ID services for 15 million customers by the summer, in a big step towards biometric banking [42].
The popularity of voice biometrics has risen further in the past few years. According to Opus Research, more than half a billion voiceprints will be on record by 2020, and people find biometric authentication more comfortable [3].
V. CONCLUSION
This paper provides a concise definition and discussion of speaker recognition technology. In addition, the basic concepts of automatic speaker recognition systems, modeling techniques, etc. have been discussed. Speaker recognition is the method of designing a system to establish the identity of an individual through his/her voice. It has significant prospects as an appropriate biometric technique for security. The speaker recognition task is normally achieved by acquiring the speech signal, extracting features, modeling the speech features for each speaker, pattern matching, and obtaining a match score.
REFERENCES
[1] B. S. Atal, “Automatic Recognition of Speakers from their Voices”,
Proc. IEEE, vol. 64(4), pp. 460-475, 1976.
[2] Q. Jin, “Robust Speaker Recognition”, PhD Thesis, Language
Technologies Institute School of Computer Science Carnegie Mellon
University, Pittsburgh, pp. 23-177, 2007.
[3] A. Rajsekhar G., “Real Time Speaker Recognition using MFCC and
VQ”, PhD Thesis, Department of Electronics & Communication
Engineering, National Institute of Technology Rourkela, pp. 9-71,
2008.
[4] J. Luettin, “Visual Speech and Speaker Recognition”, PhD Thesis,
Department of Computer Science University of Sheffield, pp. 16-156,
May 1997.
[5] T. Kinnunen, "Spectral Features for Automatic Text-Independent Speaker Recognition", Licentiate's Thesis, University of Joensuu, Department of Computer Science, Joensuu, Finland, pp. 1-151, December 21, 2003.
[6] M. R. Srikanth, “Speaker Verification and Keyword Spotting Systems
for Forensic Applications”, PhD Thesis, Department of Computer
Science and Engineering Indian Institute of Technology Madras, pp.
1-135, Dec. 2013.
[7] U. Sandouk, “Speaker Recognition Speaker Diarization and
Identification”, PhD Thesis, University of Manchester, School of
Computer Science, pp. 14-101, 2012.
[8] M. Savvides, “Introduction to Biometric Technologies and
Applications”, ECE & CyLab, Carnegie Mellon University.
http://www.biometricscatalog.org/ biometrics/biometrics_101.pdf or
biometrics_101.pdf
[9] M. El. Ayadi, Abdel-Karim S.O. Hassan, Ahmed Abdel-Naby, Omar
A. Elgendy, “Text-independent Speaker Identification using Robust
Statistics Estimation”, Speech Communication vol-92, pp. 52–63,
2017.
[10] S. Sarkar and K. Sreenivasa Rao, "Stochastic Feature Compensation Methods for Speaker Verification in Noisy Environments", Appl. Soft Comput., vol. 19, pp. 198–214, 2014.
[11] G. Doddington, W. Liggett, A. Martin, M. Przybocki, and D. Reynolds, "Sheep, Goats, Lambs and Wolves: A Statistical Analysis of Speaker Performance in the NIST 1998 Speaker Recognition Evaluation", in Proc. Int. Conf. on Spoken Language Processing (ICSLP 1998), Sydney, Australia, 1998.
[12] S. Furui, “Speaker-Dependent-Feature Extraction, Recognition and
Processing Techniques”, ESCA Workshop on Speaker
Characterization in Speech Technology Edinburgh, Scotland, UK, pp.
10-27, June 26-28, 1990.
[13] L. P. Cordella, P. Foggia, C. Sansone, M. Vento, “A Real-Time Text-
Independent Speaker Identification System”, Proceedings of the
ICIAP, pp. 632, 2003.
[14] B. Richard Wildermoth, “Text-Independent Speaker Recognition
Using Source Based Features”, PhD Thesis, Griffith University
Australia, pp. 1-101, Jan. 2001.
[15] M. Breen, L. C. Dilley, J. Kraemer, and E. Gibson, “Inter-Transcriber
Reliability for Two Systems of Prosodic Annotation: ToBI (Tones and
Break Indices) and RaP (Rhythm and Pitch)", Corpus linguist. ling.,
vol. 8 (2), pp. 277-312, 2012.
[16] A. Stolcke, “Higher‐Level Features in Speaker Recognition”, Winter
School on Speech and Audio Processing, IIT Kanpur, Jan. 2009.
[17] N. Dehak, D. Pierre, and K. Patrick, “Modeling Prosodic Features
with Joint Factor Analysis for Speaker Verification”, IEEE
Transactions on Audio, Speech, and Language Processing, vol. 15 (7),
pp. 2095-2103, Sept. 2007.
[18] Z. Huang, L. Chen and M. Harper, “An Open Source Prosodic Feature
Extraction Tool”, School of Electrical and Computer Engineering
Purdue University West Lafayette, pp. 2116-2121, 2006.
[19] T. Kinnunen and H. Li, “An Overview of Text-Independent
Speaker Recognition: From Features to Supervectors”, Speech
Communication vol.52, pp.12–40, 2010.
[21] N. Singh and R. A. Khan, “Underlying of Text Independent Speaker
Recognition”, in IEEE Conference (ID: 37465) (10th INDIACom
2016 International Conference on Computing for Sustainable Global
Development), held on 16th -18th March, 2016 at BVICAM, New
Delhi, pp. 11-15, 2016.
[21] D. Sierra Rodriguez, “Text-Independent Speaker Identification”, PhD
Thesis, AGH University of Science and Technology Krakow, Faculty
of Electrical Engineering, Automatics, Computer Science and
Electronics, pp. 1-121, 2008.
[22] M. Ghassemian and K. Strange, “Speaker Identification - Features,
Models and Robustness”, Technical University of Denmark, DTU
Informatics Kongens Lyngby, Denmark, pp. 1-118, 2009.
[23] D. A. Reynolds, “An Overview of Automatic Speaker Recognition
Technology”, MIT Lincoln Laboratory, Lexington, MA USA,
@2002IEEE, pp. 4072-4075, 2002.
[24] B. S. Atal, “Effectiveness of Linear Prediction Characteristics of the
Speech Wave for Automatic Speaker Identification and verification”,
Journal Acoustical Society of America, vol. 55, no. 6, pp. 1304-1312,
June 1974.
[25] "Speaker Identification". Archived from the original on August 15,
2014. Retrieved September 3, 2014.
[26] A. Majetniak, “Speaker Recognition using Universal Background
Model on YOHO Database”, Aalborg University, The Faculties of
Engineering, Science and Medicine Department of Electronic Systems,
pp. 1-61, May 31, 2011.
[27] B. Martínez-González, J. M. Pardo, J. D. Echeverry-Correa, and
R. San-Segundo, “Spatial Features Selection for Unsupervised
Speaker Segmentation and Clustering”, Expert Systems with
Applications, vol. 73, pp. 27–42, 2017.
[28] D. Talkin, “A Robust Algorithm for Pitch Tracking (RAPT)”, Speech
Coding and Synthesis, pp. 495-518, 1995.
[29] P. Boersma, “Accurate Short-term Analysis of the Fundamental
Frequency and the Harmonics-to-Noise Ratio of a Sampled Sound”,
In Proc. the Institute of Phonetic Sciences, 1993.
[30] C. Liu, P. Jyothi, H. Tang, V. Manohar, M. Hasegawa-Johnson, and S.
Khudanpur, “Adapting ASR for under-Resourced Languages using
Mismatched Transcriptions”, In Proc. ICASSP, 2016.
[31] E. Vayrynen, “Emotion Recognition from Speech Using Prosodic
Features”, PhD Thesis, University of Oulu Graduate School,
University of Oulu, Faculty of Information Technology and Electrical
Engineering, Department of Computer Science and Engineering,
Infotech Oulu, pp. 1-92, 2014.
[32] N. Singh, A. Agrawal and R. A. Khan, “Automatic Speaker
Recognition: Current Approaches and Progress in Last Six Decades”,
Global Journal of Enterprise Information System. Vol. 9, Issue-3,
July-September, pp. 38-45, ISSN: 0975-1432, 2017.
[33] D. A. Reynolds, “Automatic Speaker Recognition: Current
Approaches and Future Trends” MIT Lincoln Laboratory, Lexington,
MA USA, pp. 1-6, 2001.
[34] S. Memon, “Automatic Speaker Recognition: Modeling, Feature
Extraction and Effects of Clinical Environment”, PhD Thesis, School
of Electrical and Computer Engineering Science, Engineering and
Technology Portfolio RMIT University, pp. 1-242, June 2010.
[35] R. Summerfield, T. Dunstone, C. Summerfield, “Speaker Verification
in a Multi-Vendor Environment”,
www.w3.org/2008/08/siv/Papers/Centrelink/w3c-sv_multivendor.pdf
[36] R. Nakatsu, J. Nicholson, and N. Tosa, “Emotion Recognition and its
Application to Computer agents with Spontaneous Interactive
Capabilities”, Knowledge based systems, vol. 13, pp. 497–504, 2000.
[37] G. Gravier and G. Chollet, “Comparison of Normalization Techniques
for Speaker Verification”, In Proc. on Speaker Recognition and its
Commercial and Forensic Applications (RLA2C), pp. 97-100, 1998.
[38] N. Singh, R. A. Khan and Rajshree, “Applications of Speaker
Recognition”, Procedia Engineering, vol. 38, pp.
3122-3126, 2012.
[39] E. Shriberg and A. Stolcke, “Direct Modeling of Prosody: An
Overview of Applications in Automatic Speech Processing”, In
International Conference on Speech Prosody, 2004.
[40] M. Cavazza and A. Ciaramella, “Device for Speaker's
Verification”,
http://www.google.com/patents/US4752958?hl=it&cl=en
[41] International Banking (December 27, 2013). "Voice Biometric
Technology in Banking | Barclays". Wealth.barclays.com.
Retrieved February 21, 2016.
[42] K. Julia, "HSBC Rolls out Voice and Touch ID Security for Bank
Customers | Business", The Guardian, Retrieved February 21, 2016.
Examining the Aptness of Crowdsourcing
Strategies in Agile Software Methods
Himanshu Pandey
Research Scholar
MUIT, Lucknow.
Dr. Santosh Kumar
Associate Professor,
MUIT, Lucknow.
Dr. Manuj Darbari
Associate Professor,
BBDNITM, Lucknow.
Abstract- Agile software development has been a vigorous research
domain for more than a decade. Nonetheless, straightforward
application of agile methods in actual practice is encumbered by
several challenges, which need to be resolved by both industry and
academia. One of the significant concerns in this respect is the
dearth of resources, assets and funds for the creation of
applications and the development of agile methodologies and
techniques. Crowdsourcing methods in the field of software
development, as in TopCoder and the Apple App Store, have shown
promising and feasible solutions to various problems. Use of the
crowd methodology in software development is expected to take its
place alongside existing approaches such as service-oriented
computing and agile. In this study, we put forward a hypothetical
and exploratory framework of crowdsourcing strategies to analyze
their presence and emergence in agile software development. The
paper introduces the concept of crowdsourcing awareness into
current agile methods, along with an examination of the strategies
of the crowdsourcing approach in agile software development. To
scrutinize the applicability and presence of crowdsourcing
strategies such as crowd creation, crowd democracy, crowd reviews,
crowd wisdom and crowd funding, we employed both quantitative and
qualitative research methods. We also describe the basic
principles of crowdsourcing strategies in the agile development
process.
Keywords: Crowdsourcing, Crowd funding, Crowd creation,
Crowd reviews, Crowd democracy, Crowd wisdom
I. Introduction
Software engineering was once limited to small associations of
developers, but is gradually becoming widespread in communities,
institutes and organizations involving large groups of people.
There is an increase in globalization, with a standpoint on
coordinated practices and infrastructure. One evolving method of
getting work completed is crowdsourcing, a sourcing technique that
emerged in the 1990s. Driven by Web 2.0 technologies, organizations
with an Internet connection can form a workforce consisting of
anyone. Customers can publish blocks of tasks, or work, on a
crowdsourcing platform, where teams or individual workers opt for
those tasks that match their skills and capabilities.
The commonly used methods for solving these problems involve
carrying out surveys, forming committees and forums, and hiring
consultants. A new and efficient way of solving these problems is
crowdsourcing.
Social networks such as Facebook, Twitter and Skype have changed
the very meaning of presence. The web medium permits the collective
intellectual abilities of a population to be harnessed in new ways.
Due to time and place limitations, face-to-face communication is
not always possible. Our case study suggests that the crowdsourcing
model is a prosperous, attainable, web-based, distributed
problem-solving and production method for small businesses. It is
an effective model for enabling software development procedures.
Crowdsourcing has matured into a substantial feature of agile
product development, because it permits the openness and sharing of
resources and knowledge collected by communities. Crowdsourcing is
a way by which software developers, designers, testers and others
not only develop the framework for a piece of software or an
application, but also manage the collective resources of various
contributors, customizing products according to their goals and
needs. Completing a full project or solving a software-related
problem can be a difficult job for the few people in an
organization. With crowdsourcing, even non-professional enthusiasts
have the chance to participate in the development of new types of
agile software applications.
II. Literature Survey
Based on the literature review, we analyzed various works and tasks
related to crowdsourcing which play a major role in rapid and
flawless software development. The paper by J. Bishop [1]
investigates a study of Big Data from a project called “QPress”,
implemented by an organization that relies on contingent working
and inter-professionalism. The study shows the value of the cloud
for distance working, such as teleworking, and for the identity of
the organization and how helpers associate with it, including which
components play a major role in researcher motivation. The case
study examines how the organizational structure of a company, where
many software companies carry out different kinds of work, could be
investigated as better practice in assisting an inter-professional
atmosphere. Monika Skaržauskaitė [2] undertakes the benefit of
crowdsourcing in educational activities. The focus of that paper is
a deep investigation of how educational organizations are employing
crowdsourcing as a component of their activities, along with how
the practice of crowdsourcing may spread to other educational
activities. Anurag Dwarakanath, Upendra Chintala, Shrikanth N. C.,
Gurdeep Virdi, Alex Kass and Anitha Chandran [3] analyze numerous
software development methodologies and present the main challenges
for the application of crowdsourcing to the development of
enterprise applications. Their research offers a concept to
methodically break down a whole business application into small
work items so that the items can be implemented autonomously and in
parallel by the crowd; it also endorses automated testing and
integration. Thierry Buecheler, Jan Henrik Sieg, Rudolf M. Füchslin
and Rolf Pfeifer [5] clearly describe how the advantages of the
crowd are enjoyed by the non-profit “Research Value Chain”. Their
work explains a) how crowdsourcing can be devoted to fundamental
science and b) the impact of conclusions of Artificial Intelligence
research on the activeness of crowdsourcing, merging different
research threads such as versatile or complex networks. The work of
Eddy Maddalena, Vincenzo Della Mea and Stefano Mizzaro [6] focuses
on whether a task deployed on a crowdsourcing platform is suitable
for mobile devices. Their first aim is to investigate the
compatibility of pre-existing crowdsourcing platforms with mobile
devices, along with which kinds of task benefit from them. The
paper “The application of crowdsourcing in educational activities”
[7] likewise examines the presence of crowdsourcing in educational
activities. With the passage of time, the boom in Information and
Communication Technologies by dint of the Internet has opened a
wide range of alternatives for organizations to attain their aims;
the main idea of that research is to give a deep elaboration of the
adoption of crowdsourcing as part of educational activities, with
an explanation of how its implementation may soon spread to other
educational activities. André van der Hoek's [8] “Crowdsourcing in
Software Engineering: Models, Motivations and Challenges” is about
the possibilities of the crowd in software engineering and whether
the advantages of crowdsourcing are applicable to software or not;
crowdsourcing has begun to find a place in the software development
area. The paper “Reactive Crowdsourcing” [9] by Alessandro Bozzon,
Marco Brambilla, Stefano Ceri and Andrea Mauri proposes aspects of
crowdsourcing that deliver fine-grained, flexible and powerful
controls. The authors initially design a crowdsourcing application
as a combination of basic task types, then transform this
high-level specification into a reactive execution environment
that supports the planning, execution and termination of a task as
well as the analysis of performers. Controls are established as
active rules over data structures which are generated from the
specification and design of the application; these rules can be
replaced or altered as required, ensuring the highest competence
with the lowest effort. The paper by Kathryn T. Stolee and
Sebastian Elbaum [10] examines the benefit of crowdsourcing as a
way to address the challenge of subject recruitment. Their research
shows how a study can be conducted on an infrastructure that makes
it possible to build a large base of users and to manage users
while the study is running. They discuss the results and
observations of this experience, which describe the efficiency and
potential of crowdsourcing software engineering studies. Rajan
Vaish [11] has shown a research direction which examines the
suitability of expert outsourcing by pairing mentors with a student
crowd. The process permits mentors to apply operators such as
splitting, merging, removing or adding project ideas, and the
research process contains numerous stages, such as paper-and-pencil
prototyping, brainstorming, development and user evaluation, for
producing publishable results. The work presents the feasibility of
a crowdsourced research process while addressing the constraints of
resources and opportunity between mentor and crowd. We are inspired
by the work of Bernardo A. Huberman [12]: research on a large data
set fetched from YouTube shows that the potential exhibited in
crowdsourcing rests strongly on attention, measured by the number
of downloads. The research paper by Yanni Han, Hanning Yuan and Jun
Hu [13] propounds an agile development process for software based
on service providers, and investigates the service efficiency of
software as increased by user-driven needs in a network
environment. The agile development process is proposed on the basis
of users' individual needs; the method can be represented as
demanded service resources approaching the target gradually. Tom
Narock and Pascal Hitzler [14] argue for reliable solutions that
implement search algorithms with crowdsourcing. They observe Big
Data within the geosciences and define logical questions related to
the merger of crowdsourcing; they establish a crowdsourcing portal
that permits people in the geoscience community to link their
conference presentations and funded grant descriptions to the
databases used in those projects. Rashmi Popli and Naresh Chauhan
[15] cast a spotlight on research in agile software development and
estimation in agile. They favor a technique for actual cost
estimation to avoid the problems of current agile practices.
Related to the concepts of flexibility and adaptability, agile
methods, seen as an innovative set of software practices, are
presently used as a remedy for these recurrent problems and make a
clear way for the future of development. Eman Altameem [16]
proposes several methods in which agile concepts have proved to be
influential in software development, and also defines the
advantages and limitations of the agile technique. This research
motivates developers to adopt the technique to develop software
that proves to be a remedy for their changing requirements. Gaurav
Kumar and Pradeep Kumar Bhatia [17] resolve the significance of
this methodology in terms of its quality within the cultural
framework. The study by Kuda Nageswara Rao, G. Kavita Naidu and
Praneeth Chakka [18] was executed with the intention of
investigating and gaining acuity into the latest agile concepts and
techniques; it distinguishes between the capabilities and
weaknesses of agile methods and discusses numerous issues regarding
their applicability. Wenjun Wu and Wei-Tek Tsai [19] examine in
detail data gathered on software crowdsourcing and summarize the
major lessons learned, then analyze two software crowdsourcing
processes, including creative approaches. Preeti Rai and Saru Dhir
[20] elucidate the influence of, and compare, several traditional
techniques and a new methodology, the TopCoder and App Story
processes. Concluding, they identify the min-max nature among
candidates as a crucial element of design in software crowdsourcing
for software quality, and investigate the reasons for which
software industries shifted from traditional RE to agile RE. Gaurav
Kumar and Pradeep Kumar Bhatia [21] recognize the fact that agile
methodology has a remarkable impact on the software development
process related to quality, within the methodical, organizational
and cultural framework. Kiran Jammalamadaka and V. Rama Krishna
[22] emphasize a few confrontations with Agile/Scrum and give the
user a vision of whether agile is a wonder drug. Malik Hneif and
Siew Hock Ow [23] showcase a review of three agile approaches
(Extreme Programming, Agile Modeling and Scrum), distinguish
between them and advise when to adopt them. Gurleen Singh and
Tamanna [24] review various agile techniques in terms of their
characteristics, objectives, and the boons and banes of using agile
methodology.
III. Suitability of Crowdsourcing Strategies in Agile
Methods
A. Crowd Creation
Crowd creation is the most common form of crowdsourcing. It relates
to the build-up activities of the software development process,
asking individuals and businesses to clear up a specific state of
uncertainty so as to produce an adequate answer to a particular
problem. The end product of crowd creation, whether intellectual
capital or a physical artifact (a software product), has a concrete
value to the requester. Crowd creation pursues one of three forms:
the cattle call, the warehousing mechanism, or the communal
project.
a) Cattle Call
In the first form of crowd creation, a company or individual places
a request for submissions in order to fulfil a specific need.
Instances of this are slogans, fan-generated commercials or design
concepts, each with a clear-cut task of product promotion or work
completion. Individuals or a scrum team convey their designs to the
requesting entity in complete form. A selection process then
follows, choosing either a small group or a single winner; this
selection can be done through crowd voting or by professionals
inside the company.
b) Warehousing Mechanism
As in the cattle call, in the warehousing mechanism a scrum master
or entity places a requirement for a project or task on the web.
The only restriction proposed by the requester is that content and
submissions must fit the objective of the site. Software
organizations or professionals upload their creations or products
to the host entity's website, where they are visible to the public
at large. Unlike the cattle call, there are no selection criteria
followed by the company for choosing among entries; all items that
conform to the guidelines are displayed. Customers have the
flexibility to choose either a developed software product or the
raw content itself. The host company delivers the customer's
selection to them in the form of a sale. After completion of the
transaction, sale revenues are divided between the software company
and the content provider at a fixed rate. One further limitation of
this form is that ownership of the work may or may not remain with
its creator.
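The fixed-rate revenue split described above can be shown with a small sketch. The 70/30 split and the sale price are assumed figures for illustration, not rates from any actual marketplace.

```python
def split_revenue(sale_price, provider_rate):
    # Divide one sale between the content provider and the host
    # company at a pre-agreed fixed rate (rate is illustrative).
    provider_share = round(sale_price * provider_rate, 2)
    company_share = round(sale_price - provider_share, 2)
    return provider_share, company_share

# A $100 sale with an assumed 70% provider share
print(split_revenue(100.0, 0.70))  # (70.0, 30.0)
```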
c) Communal project
The main contrast between the communal project and the other forms
of crowd creation is that members of the crowd work on the whole
project and together create the end product. In the Internet era,
such projects center largely on intellectual capital, as opposed to
physical artifacts, such as programming code. The communal project
has two forms.
i. Multistage cattle call, containing the following
steps:
• Coming forward with the new concept
• Determining whether it makes an attainable production
candidate
• Examining the completed design of the project
• Breaking down generation of the final item into
individual chunks
ii. Open source initiative: generally, such a project is
drawn up under the form of open source initiatives or
micro-work. Commonly, solutions have to fit inside
the framework of the application.
B. Crowd Wisdom
Crowd wisdom is the process that takes into consideration the
collective opinion of a group of individuals, each having their own
point of view, rather than that of a single person. This theory
states that answers gathered and discussed by large groups,
involving estimation, reasoning and worldly affairs, are usually
better. Crowd wisdom aims at less, but better, output and
aggregates individual knowledge. It needs explicit command and
motivation.
The key conditions for crowd wisdom are:
1. Decentralization -> Cognition -> Co-operation
2. Diversity of opinion -> Co-ordination
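As a minimal sketch of aggregating individual knowledge, the crowd's answer can be taken as the mean or median of independent estimates; the median resists a few wildly wrong guesses, which is one reason the diversity and decentralization conditions above matter. The numbers are hypothetical.

```python
from statistics import median

def crowd_answer(estimates):
    # Aggregate independent individual estimates into one
    # collective answer; the median is robust to outliers.
    return {"mean": sum(estimates) / len(estimates),
            "median": median(estimates)}

# Hypothetical effort estimates (in days) from seven contributors
guesses = [5, 8, 6, 7, 40, 6, 5]   # one badly wrong outlier
print(crowd_answer(guesses))  # {'mean': 11.0, 'median': 6}
```

The outlier of 40 drags the mean up to 11.0 while the median stays at 6, close to what most contributors believe.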
a) Retrospectives
Retrospectives, in the perspective of crowdsourced agile software
development, are the conversations that are held at regular
intervals or after iterations. During these conversations the team
reflects on what happened in the previous iterations, identifies
the areas where improvement is desired or needed, and plans future
actions accordingly. An environment of trust and honesty is
necessary during these conversations so that each member can share
their thoughts without hesitation; this also boosts their
confidence levels.
C. Crowd Democracy
Crowd democracy is also used in policy making within a project; it
considers the opinions of outside professionals and gathers
information. The US Federal government has used crowdsourcing for
several years as a way of enabling citizen participation in its
national open government strategy, with transparency and
collaboration at its core.
Crowdsourcing is also used in every phase of agile software
development. Even though crowdsourcing is a new and emerging
technology, it has been used in various projects initiated by
software companies all around the globe. With the help of crowd
democracy, companies or individuals can get direct feedback from
their requesters or customers. It has also been used in many
non-profit organizations. After a project is developed under the
crowdsourcing method, crowd democracy plays a major role in
ensuring its quality attributes.
A project was carried out to test crowdsourcing as public
participation for transit planning in Salt Lake City from 2008 to
2009, funded by a US Federal Transit Administration grant.
Sometimes it is difficult to gather demographic data about the
crowd. Both intrinsic and extrinsic motivation cause people to work
for, or contribute to, a crowdsourced task.
D. Crowd Reviews
The main perspective of crowd reviews is taking task requirements
or reviews from the crowd for software development. G2 Crowd is a
peer-to-peer business solution review platform. The company was
established in 2012 and provides real-time, unbiased reviews based
on user ratings and other relevant data to help you assess what is
best for your business.
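A rating-based review platform of this kind aggregates many individual scores per product. The sketch below is a hypothetical illustration of that idea, not G2 Crowd's actual (proprietary) scoring method; the product names and ratings are invented.

```python
def rank_by_rating(reviews):
    # reviews maps product name -> list of 1-5 star ratings.
    # Rank by average rating, breaking ties by review count.
    scores = {name: (sum(r) / len(r), len(r))
              for name, r in reviews.items() if r}
    return sorted(scores, key=lambda name: scores[name], reverse=True)

reviews = {"ToolA": [5, 4, 5],       # avg 4.67 over 3 reviews
           "ToolB": [3, 4],          # avg 3.50 over 2 reviews
           "ToolC": [5, 5, 4, 4]}    # avg 4.50 over 4 reviews
print(rank_by_rating(reviews))  # ['ToolA', 'ToolC', 'ToolB']
```

A production system would typically also weight by recency and reviewer credibility, which this sketch omits.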
A few problems always arise with retrospectives: for instance, team
members may not understand the full value of a retrospective
meeting, and some are unwilling to participate at all, questioning
its importance.
Here are a few of these problems, along with their explanation and
solutions:
• Thinking one's team is too good for retrospectives. It
is almost impossible for any team to be so perfect that
it needs no improvement; if that appears to be the
case, it means the team is not progressing. Therefore,
no team is too good.
• Members sometimes find the meetings boring and
monotonous. It was never promised that they would be
exciting; it is, after all, a meeting, and it should be
carried out following all rules and regulations, with
decorum always maintained.
• Some people simply say they do not like retrospectives.
It is not possible to like all aspects of a job, but
for the sake of professionalism one should participate
in every mandatory meeting, gathering or discussion.
E. Crowd funding
In the software development field every professional is trying to
get money for their concepts or innovative ideas. In the past,
venture capital funds or angel investors were the options for
getting large sums of cash. These days, crowdfunding has become a
pragmatic resource for software developers or entrepreneurs who
require assets but do not wish to forfeit equity or go through the
arduous procedure of being interrogated by financiers.
Crowdfunding is a novel method for an entrepreneur or software
developer to raise capital by tapping the masses rather than a
small pool of financiers. Some well-known instances exist:
Kickstarter, for example, helps people find the resources and
support they need to turn their innovative ideas into reality and
to bring creative projects to life. According to our analysis,
“Kickstarter is the best way to connect with people who can truly
help you.”
Crowdfunding works by permitting essentially anyone to make a
relatively small contribution to a project in exchange for tangible
goods, rather than having a small group of financiers or investors
risk big chunks of funds. Contributions vary from campaign to
campaign but generally range from $1 to $10,000. Crowdfunding sites
originally gained fame in creative and innovative communities.
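An all-or-nothing campaign of the Kickstarter style mentioned above can be sketched as follows. The goal and pledge amounts are invented for illustration, and deadline handling is deliberately omitted.

```python
def campaign_status(goal, pledges):
    # All-or-nothing model: funds are only collected if the total
    # pledged reaches the goal (deadline logic omitted for brevity).
    total = sum(pledges)
    return {"pledged": total,
            "percent_funded": round(100 * total / goal, 1),
            "funded": total >= goal}

# Hypothetical campaign: $10,000 goal, five backers
print(campaign_status(10_000, [25, 500, 1_200, 75, 9_000]))
# {'pledged': 10800, 'percent_funded': 108.0, 'funded': True}
```

Note how one large pledge can carry a campaign over its goal even though most contributions are small, mirroring the $1 to $10,000 spread described above.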
IV. FUTURE SCOPE
Crowdsourcing strategies will be segmented: for each subtask,
professional organizations will evolve and occupy the market, which
may or may not create a monopoly. The crowdsourcing strategies
(crowd funding, crowd creation, crowd reviews, crowd democracy,
crowd wisdom) will be applied to different fields of software
development, and specifications will be strict and bounded. The
input and output of a crowdsourced task will be strictly defined
and constrained, which in turn will result in overall quality
improvement of the output. More and more developers will be
attracted to this method, as it is not bound by factors such as
deadline pressure and will enable them to work according to their
interests and emerge with quality solutions.
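The strictly defined input and output anticipated above could be expressed as a machine-checkable task specification. The class, field and task names below are hypothetical sketches, not part of any existing platform.

```python
from dataclasses import dataclass

@dataclass
class CrowdTask:
    # A crowdsourced subtask with a constrained contract: workers
    # receive inputs matching input_schema and must return outputs
    # matching output_schema, so contributions can be validated
    # automatically before integration.
    name: str
    input_schema: dict   # field name -> expected type
    output_schema: dict  # field name -> expected type

    def valid_output(self, output):
        return (set(output) == set(self.output_schema) and
                all(isinstance(output[k], t)
                    for k, t in self.output_schema.items()))

task = CrowdTask("classify_bug_report", {"text": str}, {"label": str})
print(task.valid_output({"label": "ui"}))       # True
print(task.valid_output({"severity": "high"}))  # False
```

Rejecting malformed outputs at the task boundary is one concrete mechanism by which constrained specifications could improve overall output quality.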
REFERENCES
[1] J. Bishop, “Supporting crowd-funded agile software
development projects using contingent working: Exploring
Big Data from participatory design documentation and
interviews”, Int'l Conf. Information and Knowledge
Engineering |IKE’15.
[2] Monika Skaržauskaitė, “The application of crowdsourcing
in educational activities”, ISSN 2029-7564 (online)
SOCIALINĖS TECHNOLOGIJOS SOCIAL
TECHNOLOGIES 2012, 2(1), p. 67–76.
[3] Anurag Dwarakanath, Upendra Chintala, Shrikanth N. C.,
Gurdeep Virdi, Alex Kass, Anitha Chandran, “CrowdBuild:
A Methodology for Enterprise Software Development using
Crowdsourcing”,
http://digitalworkforce.accenture.com/axpense/.
[4] Kenneth Benoit, Drew Conway, Benjamin E. Lauderdale, Slava
Mikhaylov, “Crowd-sourced text analysis: Reproducible and agile
production of political data”, University College London, June
2015.
[5] Thierry Buecheler, Jan Henrik Sieg, Rudolf M. Füchslin
and Rolf Pfeifer, “Crowdsourcing, Open Innovation and
Collective Intelligence in the Scientific Method: A Research
Agenda and Operational Framework”, Artificial Intelligence
Laboratory, Department of Informatics, University of Zurich.
[6] Vincenzo Della Mea, Eddy Maddalena, Stefano Mizzaro,
“Crowdsourcing to Mobile Users: A Study of the Role of
Platforms and Tasks”, DBCrowd 2013: First VLDB Workshop
on Databases and Crowdsourcing.
[7] Monika Skaržauskaitė “The application of crowd sourcing
in educational activities”, ISSN 2029-7564 (online)
SOCIALINĖS TECHNOLOGIJOS SOCIAL
TECHNOLOGIES 2012, 2(1), p. 67–76.
[8] André van der Hoek, “Crowdsourcing in Software
Engineering Models, Motivations, and Challenges”, focus:
The Future of Software engineering.
[9] Alessandro Bozzon, Marco Brambilla, Stefano Ceri,
Andrea Mauri, “Reactive Crowdsourcing”, Dipartimento di
Elettronica, Informazione e Bioingegneria – Politecnico di
Milano Piazza Leonardo da Vinci, 32 – 20133 Milano, Italy
[10] Kathryn T. Stolee, Sebastian Elbaum, “Exploring the Use
of Crowdsourcing to Support Empirical Studies in Software
Engineering”, Department of Computer Science and
Engineering University of Nebraska – Lincoln, NE, U.S.A.
[11] RajanVaish, “Crowdsourcing the Research Process”,
University of California at Santa Cruz1156 High Street, Santa
Cruz, California 95064, USA.
[12] Bernardo A. Huberman “Crowdsourcing, attention and
productivity”, Social Computing Lab, HP Laboratories, Palo
Alto, CA, USA.
[13] Hanning Yuan, Yanni Han, Jun Hu, “Research on Agile
Development Methodology of Service-Oriented Personalized
Software”, published by IEEE Xplore, 2008 International
Conference on Computer Science and Software Engineering.
[14] Tom Narock and Pascal Hitzler “Crowdsourcing
Semantics for Big Data in Geoscience Applications”,
Semantics for Big Data AAAI Technical Report FS-13-04.
[15] Rashmi Popli, Naresh Chauhan, “Cost and effort
estimation in agile software development”, published by IEEE
2014 International Conference on Reliability Optimization and
Information Technology (ICROIT).
[16] Eman A. Altameem, “Impact of Agile Methodology on
Software Development” published by Computer and
Information Science; Vol. 8, No. 2; 2015 ISSN 1913-8989 E-
ISSN 1913-8997 Published by Canadian Center of Science
and Education 9.
[17] Gaurav Kumar, Pradeep Kumar Bhatia, “Impact of Agile
Methodology on Software Development Process” published
by International Journal of Computer Technology and
Electronics Engineering (IJCTEE) Volume 2, Issue 4, August
2012.
[18] Kuda Nageswara Rao, G. Kavita Naidu, Praneeth Chakka,
“A Study of the Agile Software Development Methods,
Applicability and Implications in Industry”, published by
International Journal of Software Engineering and Its
Applications Vol. 5 No. 2, April, 2011.
[19] Wenjun Wu, Wei-Tek Tsai, “Creative software
crowdsourcing: from components and algorithm development
to project concept formations”, published by Int. J. Creative
Computing, Vol. 1, No. 1, 2013.
[20] Preeti Rai, Saru Dhir, “Impact of Different Methodologies
in Software Development Process”, published by
International Journal of Computer Science and Information
Technologies, Vol. 5 (2) , 2014.
[21] Gaurav Kumar, Pradeep Kumar Bhatia, “Impact of Agile
Methodology on Software Development Process” published
by, International Journal of Computer Technology and
Electronics Engineering (IJCTEE) Volume 2, Issue 4, August
2012.
[22] Kiran Jammalamadaka, V. Rama Krishna, “Agile
Software Development and Challenges” IJRET: International
Journal of Research in Engineering and Technology. ISSN:
2319-1163, ISSN: 2321-7308 Volume: 02 Issue: 08 | Aug-
2013.
[23] Malik Hneif, Siew Hock Ow, “Review of Agile
Methodologies in software development” International Journal
of Research and Reviews in Applied Sciences ISSN: 2076-
734X, EISSN: 2076-7366 Volume 1, Issue 1(October 2009).
[24] Gurleen Singh, Tamanna, “An Agile Methodology Based
Model for Software development”, published by, International
Journal of Advanced Research in Computer Science and
Software Engineering Research Paper, Volume 4, Issue 6,
June 2014.
IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE AND
NEURAL NETWORK LOGIC IN CLOUD COMPUTATION
Shailendra Kumar Rawat¹, S. K. Singh², Amar Nath Singh³
¹,³Bengal College of Engineering & Technology, Durgapur; ²Amity Institute of Information Technology, Lucknow
Email: [email protected], [email protected], [email protected]
ABSTRACT
Cloud computing is one of the most recent and modern computing techniques. It basically aims to maximize the benefit of distributed resources by aggregating them to achieve higher throughput for solving large-scale computation problems. In this technology, customers need not buy a product outright; instead, they pay to process their tasks on a per-use basis. Because of its architecture, it is considered among the fastest computation techniques. As problems grow in scope, the complexity of processing tasks increases, so the computation logic must be enhanced accordingly. In this paper we introduce cloud computing using a genetic-algorithm approach, the backpropagation technique of artificial intelligence, and artificial neural networks, and then review the literature on cloud job scheduling. We propose implementing artificial neural networks, a component of artificial intelligence, to optimize the sequence in which jobs are sent for processing in the cloud (also called scheduling), since they are able to discover new classifications of processes.
Keywords:
Cloud Computation, Job Sequencing, Artificial Intelligence, Artificial Neural Networks, Backpropagation, Genetic Algorithm.
1. Introduction
Traditionally, the computation process was very time-consuming and hectic because of its job sequencing. With the advent of the cloud concept, where resources can be used on demand, task processing has become much more sophisticated. The resources in a cloud structure can be arranged in various ways; here we take the arrangement shown in Figure 1. Where resources once had to be accessed in traditional ways, the cloud technique minimizes problem complexity by resizing the problem and scales up the IT infrastructure so that the computation process does not require the interposition of third-party logic [1, 2, 9]. Computing in the cloud has become more efficient and economical, and can be outsourced to another party as and when the user requires. This reduces the complexity of processing tasks while greatly improving job sequencing [3][4].
2. Cloud Computing Deployment
Because the cloud is openly and widely accessible, computing a task, or fetching one for processing, in the cloud environment has become more sophisticated thanks to its defining characteristics: customization of tasks, portability of task fetching, availability of the desired resources on user demand, and isolation of tasks as and when required [6, 29]. It is also important that computability not be affected as the task ratio increases [7, 23]. In the modern era, most IT companies and other sectors use the cloud because they need not invest a huge budget in new infrastructure or in training employees to operate it. Through cloud computing, small and medium businesses (SMBs) can access the best applications and the resources they require at very low cost [8]. With regard to data security, the cloud also provides better protection than traditional approaches [8]-[10].
Whenever a company implements a cloud network, the major task is deployment. A cloud can take various structures; the basic classification is into four types:
1. Public cloud, with wide, open access;
2. Private cloud, used for personal or organizational purposes;
3. Hybrid cloud, a combination of public and private clouds; and
4. Community cloud, built for a special purpose.
Since the public cloud is open to all, anyone may access it using any web browser. Implementing this mechanism decreases the operational cost of processing tasks. From a security standpoint, however, public clouds are less secure: data on this model is more exposed to the outside world, and a hacker can more easily steal or corrupt it [11].
The private cloud serves personal or organizational use and is best suited to local or limited areas. This model is similar to an intranet, with the internet serving merely as a common gateway or medium for transferring data. The biggest advantages of the private cloud are that it is very easy to operate and the data is very secure; the infrastructure can also be upgraded as required at low operational and maintenance cost [4].
The hybrid model combines private and public clouds: a private cloud is linked to one or more external cloud services, which in turn extend it publicly [4][11].
To make a cloud computing system effective we need a medium such as the internet connecting a front end and a back end. The front end connects to the cloud and provides the platform through which the user accesses it. The back end is the platform on which tasks are processed and results are stored. Figure 2 shows this cloud structure in detail, through which the cloud can readily be used according to purpose and mode of operation [6].
Figure 1. Cloud computing paradigm.
This structure provides a computing environment through which the user can easily process tasks and obtain resource services as and when required; it is a common platform for both the front end and the back end [3][5].
Figure 2. Detailed structure of the cloud.
3. Genetic Algorithm and Artificial Neural Networks
A genetic algorithm is a heuristic approach modeled on natural selection and natural genetics, used to find approximate solutions to difficult problems; see Figure 3 [19][20]. Its main principle is to identify the problem scenario and combine candidate solutions to converge on an answer. The algorithm was first introduced in 1962 by Holland [21, 22, 25]. Here, the genetic algorithm is used to identify the problem statement within the cloud structure: whenever a task is submitted for processing through the front end, job sequencing becomes a hard problem, and this approach identifies the common features of that problem [33, 26, 27]. In general use it is combined with an ANN model. The ANN, a component of artificial intelligence, senses the problem statement and its related components. An ANN has three main aspects:
1. Network topology,
2. Transfer function, and
3. Training algorithm.
The accuracy of an ANN's output is directly related to the number of layers in the network [35], and depends on the parameters used to train the system. Network performance is affected by the number of layers: increasing the layers ultimately decreases the performance of the system [34, 28, 29]. It is therefore suggested that the number of layers of nodes be minimized; doing so enhances system performance [26].
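The genetic-algorithm search over job orderings described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the job durations, the waiting-time fitness function, and all parameter values are hypothetical.

```python
import random

def total_waiting_time(order, durations):
    """Fitness: total time jobs spend waiting before they start (lower is better)."""
    waited, elapsed = 0, 0
    for job in order:
        waited += elapsed
        elapsed += durations[job]
    return waited

def crossover(p1, p2):
    """Order crossover: keep a random slice of p1, fill the rest from p2."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [j for j in p2 if j not in child]
    for i, gene in enumerate(child):
        if gene is None:
            child[i] = fill.pop(0)
    return child

def mutate(order, rate=0.1):
    """Swap mutation: occasionally exchange two random positions."""
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def evolve(durations, pop_size=30, generations=100):
    """Evolve job orderings toward the one with minimal total waiting time."""
    jobs = list(range(len(durations)))
    pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: total_waiting_time(o, durations))
        survivors = pop[: pop_size // 2]        # truncation selection (elitist)
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda o: total_waiting_time(o, durations))

durations = [8, 3, 12, 1, 6]   # hypothetical job lengths
best = evolve(durations)
# With distinct durations the optimum is unique (shortest job first),
# so the search should settle on the order [3, 1, 4, 0, 2].
```

For plain waiting-time minimization a simple sort would suffice; the GA becomes worthwhile when the fitness mixes several cloud objectives (deadlines, VM placement, cost) that admit no closed-form ordering.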
Figure 3. Simple genetic algorithm operation [24].
Figure 4. Basic structure of an artificial neural network
(ANN) [28].
4. Cloud Computing Job Scheduling
In any computational logic, how tasks are dispatched to processing units by the resource manager is very important, because it decides which task is processed in what sequence [30, 31]; hence we use scheduling algorithms. In cloud computing the most important aspect is the architecture, and we have already discussed the various cloud categories [20]. Chief among their concerns are job sequencing and the software and hardware configuration on which the entire job processing depends [29, 32].
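As a toy illustration of why the dispatch sequence matters, the sketch below (not from the paper; the burst times are made up) compares first-come-first-served with shortest-job-first on average turnaround time:

```python
def avg_turnaround(order, burst):
    """Average turnaround time when the jobs in `order` run back to back."""
    elapsed, total = 0, 0
    for job in order:
        elapsed += burst[job]
        total += elapsed
    return total / len(order)

burst = {"A": 9, "B": 2, "C": 5}         # hypothetical burst times

fcfs = ["A", "B", "C"]                   # first come, first served
sjf = sorted(burst, key=burst.get)       # shortest job first: B, C, A

print(avg_turnaround(fcfs, burst))       # 12.0
print(avg_turnaround(sjf, burst))        # 8.33...
```

Merely reordering the same three jobs cuts mean turnaround by roughly a third, which is exactly the kind of gain a cloud scheduler is after.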
5. Discussion
Cloud computing is the most recent and modern trend in computing. Beyond it there are other AI application logics with which computation can be enhanced. Neural networks, another aspect of AI, provide a better strategy for arranging the cloud and its mode of operation. Once a cloud is ready for use, sequencing jobs and identifying their common aspects is hard; we must therefore use a genetic algorithm along with an ANN in the cloud to identify and resolve the issue.
6. Conclusions and Future Works
Cloud computing is the most recent technology widely used for computing complex problems, owing to its completely different architecture for fetching and storing jobs during processing. It can use AI logic along with the ANN concept, which in turn uses a genetic algorithm, to identify the problem scenario and enhance job processing.
Since the cloud uses the internet as its medium for exchanging data, data security must also be considered. Although the cloud has its own security system, it still needs to be made smart enough to protect data from hackers and external threats. This can be achieved by fusing AI logic into modern software, so that threats can be identified during processing itself.
About the Authors:
Asst. Prof. Shailendra Kumar Rawat is presently pursuing his PhD in Computer Science at Amity University, Lucknow.
Prof. S. K. Singh is presently working as Reader at the Amity Institute of Information Technology, Lucknow.
Prof. Amar Nath Singh is presently working at Bengal College of Engineering, Durgapur, as Reader in the Department of Computer Science and Engineering. His research areas include underground and surface mining using artificial intelligence and fuzzy logic, as well as cloud computing, WSN, machine learning and data science. He has supervised more than 60 M.Tech scholars to date.
References
[1] Xu, X. (2012) From Cloud Computing to Cloud
Manufacturing. Robotics and Computer-Integrated
Manufacturing,75-86.
http://dx.doi.org/10.1016/j.rcim.2011.07.002
[2] Svantesson, D. and Clarke, R. (2010) Privacy and Consumer Risks in Cloud Computing. Computer Law & Security Review, 391-397. http://dx.doi.org/10.1016/j.clsr.2010.05.005
[3] Jadeja, Y. and Modi, K. (2012) Cloud Computing—
Concepts, Architecture and Challenges. 2012 International
Conference on Computing, Electronics and Electrical
Technologies (ICCEET).
http://dx.doi.org/10.1109/ICCEET.2012.6203873
[4] Malathi, M. (2011) Cloud Computing Concepts. 3rd
International Conference on Electronics Computer
Technology (ICECT).
[5] Nexogy (2014) The Impact of Cloud Computing for
VoIP. http://nexogy.wordpress.com/2011/09/27/the-impact-
of-cloud-computing-for-voip/
[6] Arshad, J., Townend, P. and Xu, J. (2013) A Novel
Intrusion Severity Analysis Approach for Clouds. Future
Generation Computer Systems,416-428.
http://dx.doi.org/10.1016/j.future.2011.08.009
[7] Modi, C., et al. (2013) A Survey of Intrusion Detection
Techniques in Cloud. Journal of Network and Computer
Applications,42-57.
http://dx.doi.org/10.1016/j.jnca.2012.05.003
[8] Subashini, S. and Kavitha, V. (2011) A Survey on
Security Issues in Service Delivery Models of Cloud
Computing. Journal of Network and Computer
Applications,1-11.
http://dx.doi.org/10.1016/j.jnca.2010.07.006
[9] Rabai, L.B.A., et al. (2013) A Cybersecurity Model in
Cloud Computing Environments. Journal of King Saud
University—Computer and Information Sciences, 63-75.
http://dx.doi.org/10.1016/j.jksuci.2012.06.002
[10] Karajeh, H., Maqableh, M. and Masa’deh, R. (2014)
Security of Cloud Computing Environment. 23rd IBIMA
Conference on Vision 2020: Sustainable Growth, Economic
Development, and Global Competitiveness.
[11] Mathur, P. and Nishchal, N. (2010) Cloud Computing:
New Challenge to the Entire Computer Industry. 2010 1st
International Conference on Parallel Distributed and Grid
Computing (PDGC), Solan, 28-30 October 2010, 223-228.
[12] Maqableh, M., Samsudin, A. and Alia, M. (2008) New
Hash Function Based on Chaos Theory (CHA-1). 20-27.
[13] Maqableh, M.M. (2010) Secure Hash Functions Based
on Chaotic Maps for E-Commerce Applications.
International Journal of Information Technology and
Management Information System (IJITMIS), 12-22.
[14] Maqableh, M.M. (2010) Fast Hash Function Based on BCCM Encryption Algorithm for E-Commerce (HFBCCM). 5th International Conference on E-Commerce in Developing Countries: With Focus on Export, Le Havre, 15-16 September 2010, 55-64.
[15] Maqableh, M.M. (2011) Fast Parallel Keyed Hash
Functions Based on Chaotic Maps (PKHC). Western
European Workshop on Research in Cryptology, Lecture
Notes in Computer Science, Weimar, 20-22 July 2011, 33-
40.
[16] Maqableh, M.M. (2012) Analysis and Design Security
Primitives Based on Chaotic Systems for E-Commerce.
Durham University, Durham.
[17] Wikipedia (2014) Cloud Computing. Wikipedia
Contributors.
http://en.wikipedia.org/w/index.php?title=Cloud_computin
g&oldid=616939446
[18] Khorshed, M.T., Ali, A.B.M.S. and Wasimi, S.A.
(2012) A Survey on Gaps, Threat Remediation Challenges
and Some Thoughts for Proactive Attack Detection in
Cloud Computing. Future Generation Computer Systems,
833-851
http://dx.doi.org/10.1016/j.future.2012.01.006
[19] Benny Karunakar, D. and Datta, G.L. (2007)
Controlling Green Sand Mould Properties Using Artificial
Neural Networks and Genetic Algorithms—A Comparison.
Applied Clay Science, 58-66.
http://dx.doi.org/10.1016/j.clay.2006.11.005
[20] Abdella, M. and Marwala, T. (2005) The Use of
Genetic Algorithms and Neural Networks to Approximate
Missing Data in Database. Computing and Informatics,
577-589.
[21] Fraile-Ardanuy, J. and Zufiria, P.J. (2007) Design and
Comparison of Adaptive Power System Stabilizers Based
on Neural Fuzzy Networks and Genetic Algorithms.
Neurocomputing, 2902-2912.
http://dx.doi.org/10.1016/j.neucom.2006.06.014
[22] Kumar, P. and Verma, A. (2012) Scheduling Using
Improved Genetic Algorithm in Cloud Computing for
Independent Tasks. ICACCT12, Chennai, 3 November
2012.
[23] Goñi, S.M., Oddone, S., Segura, J.A., Mascheroni,
R.H. and Salvadori, V.O. (2008) Prediction of Foods
Freezing and Thawing Times: Artificial Neural Networks
and Genetic Algorithm Approach. Journal of Food
Engineering,164-178.
http://dx.doi.org/10.1016/j.jfoodeng.2007.05.006
[24] Group, L. (2014) Optimisation of Collector Form and
Response.
http://www.engineering.lancs.ac.uk/lureg/group_research/w
ave_energy_research/Collector_Shape_Design.php.
[25] Heckerling, P.S., Gerber, B.S., Tape, T.G. and Wigton,
R.S. (2004) Use of Genetic Algorithms for Neural
Networks to Predict Community-Acquired Pneumonia.
Artificial Intelligence in Medicine, 71-84.
[26] Varahrami, V. (2010) Application of Genetic Algorithm to Neural Network Forecasting of Short-Term Water Demand. International Conference on Applied Economics—ICOAE, Athens, 26-28 August 2010, 783-787.
[27] Chen, C.R. and Ramaswamy, H.S. (2002) Modeling
and Optimization of Variable Retort Temperature (VRT)
Thermal Processing Using Coupled Neural Networks and
Genetic Algorithms. Journal of Food Engineering, 209-
220.http://dx.doi.org/10.1016/S0260-8774(01)00159-5
[28] Tadiou, K.M. (2014) The Future of Human Evolution.
http://futurehumanevolution.com/artificial-intelligence-
future-human-evolution/artificial-neural-networks
[29] Li, L.Q. (2009) An Optimistic Differentiated Service
Job Scheduling System for Cloud Computing Service Users
and Provider. 3rd International Conference on Multimedia
and Ubiquitous Engineering, Qingdao, 4-6 June 2009, 295-
299.
[30] do Lago, D.G., Madeira, E.R.M. and Bittencourt, L.F.
(2011) Power-Aware Virtual Machine Scheduling on
Clouds
Using Active Cooling Control and DVFS. 9th International
Workshop on Middleware for Grids, Clouds and e-Science,
Lisboa, 12-16 December 2011.
[31] Dutta, D. and Joshi, R.C. (2011) A Genetic-Algorithm
Approach to Cost-Based Multi-QoS Job Scheduling in
Cloud Computing Environment. International Conference
and Workshop on Emerging Trends in Technology (ICWET
2011)- TCET, Mumbai, 25-26 February 2011.
[32] Ghanbari, S. and Othman, M. (2012) A Priority Based
Job Scheduling Algorithm in Cloud Computing. Procedia
Engineering, 778-785.
[33] Xu, B.M., Zhao, C.Y., Hu, E.Z. and Hu, B. (2011) Job
Scheduling Algorithm Based on Berger Model in Cloud
Environment. Advances in Engineering Software, 419-425.
http://dx.doi.org/10.1016/j.advengsoft.2011.03.007
[34] Li, K., et al. (2011) Cloud Task Scheduling Based on
Load Balancing Ant Colony Optimization. 6th Annual
China Grid Conference, Dalian, 22-23 August 2011.
[35] Morariu, O., Morariu, C. and Borangiu, T. (2012) A
Genetic Algorithm for Workload Scheduling in Cloud
Based E-Learning. Proceedings of the 2nd International
Workshop on Cloud Computing Platforms (CloudCP 12),
Bern, 10 April 2012.
The Future Roadmap of Artificially Intelligent Electric Vehicles
Sanjay Devkishan Taneja
Assistant Professor
Amity Institute of Information Technology
Amity University Lucknow Campus
Lucknow, India
Abstract— This paper presents an overview of the much-awaited emergence of electric vehicles (EVs): their background, market trends and acceptance in different countries including India, technological developments toward AI-powered electric vehicles, and the perception of government and society, followed by the major challenges. The limited natural stock of petrol and diesel, the domination of countries naturally blessed with enormous fuel reserves over the rest of the world, the observed climatic changes due to global warming, and serious pollution issues are among the valid reasons that have encouraged the inception and development of electric vehicles worldwide.
As per data collected in 2010, surface transportation and related means are the highest source of greenhouse-gas emissions from human activities in the major countries of the world. Emissions from this sector are expected to escalate by approximately 33% from 1990 to 2030 on account of the predicted growth in the number of vehicles on the road.
The aim of the study is to examine the latest and upcoming market developments for electric and AI-electric vehicles, compare them, and assess the outlook and government support for this newly emerging sector.
Keywords—Electric Vehicle, EV Market, AI-Electric Vehicle,
Trends and Challenges
I. INTRODUCTION
The limited natural stock of petrol, diesel and CNG, the supremacy of countries naturally blessed with enormous fuel reserves over the rest of the world, the observed trend of global warming, and serious pollution issues are among the valid reasons that have encouraged the inception and development of electric vehicles worldwide.
To meet the ambitious new 2030s carbon-reduction targets defined by the UK, and to account for increasing customer awareness of environmental issues, large-scale R&D has been conducted and different propulsion methods have been discovered and developed.
But carbon dioxide is not the only greenhouse gas: methane, nitrous oxide and numerous others also trap heat as they accumulate in the atmosphere, and therefore must be included in the list. As per data collected in 2010, surface transportation and related means are the highest source of greenhouse-gas emissions from human activities in the major countries of the world [2]. Emissions from this sector are expected to escalate by approximately 33% from 1990 to 2030 on account of the predicted growth in the number of vehicles on the road [1].
The purpose of this report is to describe the roadmap of electric vehicles, the latest scenario and technological developments in AI-powered electric vehicles, and the perception of government and society, followed by the major challenges.
II. HISTORY OF ELECTRIC VEHICLE
An electric vehicle is propelled by one or more electric motors. It may be powered through a collector system by electricity from off-vehicle sources. EVs include road vehicles as well as surface and underwater vessels.
The electric vehicle cannot be regarded as a recent development: it has been around for over 100 years, and its journey of technological change continues to the present. England and France were among the first to develop electric vehicles, in the late 1800s. American attention and contribution to electric vehicles came in 1885. Many innovations followed, but real interest was seen to increase greatly in the late 1890s and early 1900s [4].
A sudden plunge in battery costs in the Chinese market, growing commitments from major automakers, strong policy support from governments, and considerably lower operating costs have placed electric vehicles (EVs) on track to smoothly overtake gasoline-powered vehicles. Remarkably, United States electric vehicle registrations grew on average over 32% annually from 2012 to 2016, and 46% over the year ending in June 2017.
In February 2018, Twenty Two Motors launched its smart electric scooter, Flow, equipped with AI and cloud capabilities, at the Auto Expo 2018 held at Greater Noida and New Delhi. The Government of India has been pushing for a shift towards electric vehicles, and Flow, along with the other electric vehicles launched at the expo, seems to be a step towards this shift.
III. LATEST TRENDS IN ELECTRIC AND AI-POWERED ELECTRIC VEHICLES
There is a variety of alternative-fuel modes of surface transportation; among them, however, electric vehicles have been on government radar both nationally and internationally.
As per the 2015 Global Automotive Executive Survey conducted by KPMG International, fewer than one in 20 vehicles is expected to be equipped with an electrified powertrain by 2020. Only 0.01 in 100 cars is expected to be equipped with fuel cells, which comes to about 16,000 units per annum by 2020 [4].
The number of electric vehicles traded each year globally is mounting swiftly, from 45,000 units in 2011 to over 300,000 units in 2014, as shown in Figure 1.
Figure-1 : Annual global electric vehicle sales.
Worldwide cumulative electric vehicle registrations grew notably, at a 92% CAGR, to reach an expected count of 665,000 electric cars on the road by the end of 2014.
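The 92% CAGR figure can be sanity-checked with the standard compound-growth formula. The ~94,000 base for 2011 below is a back-calculated assumption for illustration, not a number from the survey.

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start) ** (1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Back-calculated, hypothetical base: ~94,000 cumulative EVs at the end
# of 2011, growing to the 665,000 quoted for the end of 2014 (3 years).
rate = cagr(94_000, 665_000, 3)
print(round(rate, 2))   # 0.92 -> consistent with the 92% CAGR quoted above
```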
Although the US has the largest share, with the world's biggest fleet of e-vehicles, China has emerged as one of the best upcoming markets for all types of electric vehicles, with 36,500 electric buses, an incredible 230 million electric bikes, and 83,000 electric cars on the road by 2014.
Honda's latest concepts focus squarely on artificial intelligence. Honda unveiled a series of AI devices and transport concepts at this year's Tokyo Motor Show, including a sports car that communicates with its driver.
Figure-2 : National Electric Vehicle Sales Goals for 2020-30
The British and French governments have recently said they
will ban sales of gasoline- and diesel-powered vehicles by
2040.
China is also tightening regulations to encourage the adoption of electric cars, as shown in Figure 3.
Figure-3 : The Rise of electric cars.
All this suggests that a new world order is coming, driven by AI and electric cars.
A. Highlights of AI powered electric vehicles
The well-known Twenty Two Motors recently launched its smart e-scooter with AI and cloud capabilities at the Auto Expo 2018, organized at Greater Noida and New Delhi. The scooter will be capable of experiential learning, learning the personal driving etiquette and behavior of each of its regular drivers, enabling it to customize driving attributes such as pickup speed and torque to their personal needs.
Audi's AI-CON is designed to run over 700 kilometers on a single charge of just 30-40 minutes. The rapid advancement of self-driving cars promises far fewer fatal human errors on the road and will also ease severe traffic congestion.
Very recently, an AI-powered electric vehicle has been built with a bundle of technologies: digital image processing for road detection and obstacle detection, and Lidar-, sonar- and infrared-based obstacle avoidance.
Toyota Research Institute plans to budget $36 million over the next four years toward deploying artificial intelligence to identify suitable materials that can be readily approved for batteries, or for catalysts that power hydrogen-fueled cars.
Mitsubishi Electric has confirmed that its AI-powered camera systems will make car mirrors obsolete, and much more is happening in this direction all over the world.
B. Indian Government perception for electric vehicles with
Artificial intelligence
The Indian Government has been pushing for a big shift towards AI-enabled electric vehicles, with an inclination to declare India a 100% EV nation by the year 2030, as indicated in Figure 4.
Figure-4 : Scenario of Electric Vehicles in India
This push for EVs will create a remarkable change in the country's energy-security priorities, notably in securing supplies of lithium, a key raw material for making low-priced, long-life batteries.
C. Other country initiatives in EV space
With Norway in the worldwide lead, Europe is set for the race in electric vehicle production. Norway today has the highest per-capita number of electric vehicles in the world: nearly 100,000 in a country of 5.2 million people.
Ambitious emissions-reduction targets have drawn growing support from governments and the auto industry for rapid technological advancement in electric vehicles all over Europe.
China ranks first in the world in electric vehicle production and usage, with over 600,000 electric vehicles on its roads and a target of over 5 million by 2020. The United States ranks third globally, with almost 500,000 electric vehicles registered to date.
D. Major challenges in electric vehicles
The major challenge is vehicle cost, as the electric vehicle battery is expensive. The batteries in electric cars must hold large amounts of charge to make the cars practical on the road, and they have to be designed and built from high-quality, expensive raw materials, most of which are difficult to procure. Charging stations are another big challenge: recharging time is high, roughly 7-8 hours, which is unfeasible during business hours.
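The quoted 7-8 hour recharge is consistent with simple arithmetic on a typical home setup. The 50 kWh pack, 7.4 kW charger and 90% efficiency below are illustrative assumptions, not figures from this paper.

```python
def charge_hours(capacity_kwh, charger_kw, efficiency=0.9):
    """Rough full-charge time; energy drawn from the wall exceeds pack
    capacity because charging is not 100% efficient."""
    return capacity_kwh / (charger_kw * efficiency)

# Hypothetical mid-size EV: 50 kWh pack on a 7.4 kW single-phase AC charger
print(round(charge_hours(50, 7.4), 1))   # 7.5 hours, in the 7-8 hour range
```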
IV. CONCLUSION
As the need of the time, electric vehicles have proven to be among the safest and best options for surface transportation. This report shows that futuristic trends and AI applications must increase in the Indian market as well for the electric vehicle to be accepted by the common man. It also surveys the current key technological developments in the field of electric vehicles in the global market.
Presently China, most European countries and then the US register the maximum percentage of electric vehicle sales compared with other countries of the globe. We also studied governments' inclination as one of the major supports for increasing electric vehicle sales in various countries.
We further examined the valid reasons why the Indian EV market is still far behind in this accelerating race, especially in the launch of vehicles with AI. Our vision for 2030 regarding design and technological developments in the emerging area of electric vehicles has also been studied.
REFERENCES
[1] Nielsen, Ole Kenneth and Nielsen, Malene, “Projections of Greenhouse Gas Emissions 2010 to 2030”, National Environmental Research Institute, Aarhus University, vol. 1, pp. 9-10, September 2011.
[2] Greenhouse Gas Emissions (2018), from https://www.epa.gov/ghgemissions/sources-greenhouse-gas-emissions.
[3] Mader, Jerry, and Transportation Energy Center TEC. "Battery powered vehicles : Don’t rule them out." (2006).
[4] Edison Tech Centre (2018) from http://www.edisontechcenter.org/ElectricCars.html
[5] Study by KPMG ‘Emerging trends and technologies in the automotive sector- Supply chain challenges and opportunities’ K- Klynveld, PPeat,
[6] Trends in vehicle concept for Journal of International Battery, Hybride fuel cell Electric vehicle symposium on nov-2013, issu 1, volume
[7] Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition (EVS-25), Shenzhen, China. 2010.
[8] Liu, Jinsong. "Research on the Development Strategies of New Energy Automotive Industry Based on Car Charging Stations and Battery
[9] Weinert, Jonathan, et al. "The future of electric two-wheelers and electric vehicles in China." Energy Policy 36.7 (2008): 2544-2555.
[10] Wirges, Johannes, Susanne Linder, and Alois Kessler. "Modelling the development of a regional charging infrastructure for electric.
MATLAB Based: Automatic Voice Assistance System for Visually Impaired
Anupam Bhardwaj
Department of CS&E
Amity School of Engineering
and Technology
AUUP, Lucknow
Ankit Yadav
Department of CS&E
Amity School of Engineering
and Technology,
AUUP, Lucknow
Dr. Geetika Srivastava
Department of E&CE
Amity School of Engineering
and Technology,
AUUP, Lucknow
Introduction
For a long time, reading and detecting sign boards, labels and other surrounding objects has been a matter of great concern for visually impaired people. Sign boards and posters cannot be written in Braille script, which is easily readable and understandable by sightless people, so the problem needs another solution. This project focuses on that problem and is designed to solve it: the text written on a sign board or poster is detected by the program and provided as voice assistance to the person through an earpiece.
Working The image of the sign board or poster is
captured by the camera that is mounted on the
spectacles of the blind person. This image is
then transmitted through Bluetooth to the
persons mobile phone. This image is then
processed to detect the MSER (Maximally
Stable Extremal Region) region. MSER
region is that region which have very less
variation in intensity. These regions are
detected by performing stability test using
Intensity Threshold Level. The detected
MSER region is then processed for their stroke
width transformation. A stroke in an image is a
region of continuous band having nearly
constant width. Starting by assuming infinite
width, we start to increment through the region
of one edge to another in the direction of
calculated gradient. The direction of gradient
is mostly directed toward the edge. Finally, the
pixels that have similar stroke width and
satisfying certain conditions are grouped
together.
After the stroke width transform, the image is sent for filtering. The filtering step applies certain geometric conditions: a candidate's aspect ratio must lie within a small range, which rejects long, narrow components, and components that are too large or too small are discarded. Finally, a bounding box is calculated for each character; these bounding boxes are then connected to form complete words and sentences. The detected words and sentences are converted from text to speech, and the audio is played through the speakers.
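The geometric filtering step above can be sketched as a simple predicate over candidate bounding boxes. The thresholds here are illustrative assumptions, not the values used in the paper's MATLAB code:

```python
def keep_region(w, h, min_area=20, max_area=5000,
                min_aspect=0.1, max_aspect=2.0):
    """Keep a candidate character region only if its bounding box has a
    moderate area and an aspect ratio in a small range, rejecting long,
    narrow components and extreme sizes (thresholds are illustrative)."""
    area = w * h
    aspect = w / h
    return min_area <= area <= max_area and min_aspect <= aspect <= max_aspect

# (width, height) candidates: a letter-like box, a long thin streak,
# a speck of noise, and another letter-like box.
candidates = [(12, 20), (300, 4), (2, 3), (15, 30)]
kept = [c for c in candidates if keep_region(*c)]
print(kept)  # → [(12, 20), (15, 30)]
```

The surviving boxes would then be merged left-to-right into words before being handed to the text-to-speech stage.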
Flow Diagram
Future Scope
Currently the project can only detect text from an image taken with a webcam or an IP camera. In future it will be able to detect text as well as surrounding objects using a Bluetooth camera mounted on the spectacles of the sightless person, with processing done by an application on the person's smartphone. Beyond detection, the project will also be able to translate the detected text into various languages.
Conclusion
This project will help many visually impaired people around the world by making them more independent, allowing them to work freely and live fearlessly. The project demonstrates not only technical feasibility but also large market potential. Our aim is to make the world a better place to live for everyone.
References:
1. Ezaki, Nobuo, Marius Bulacu, and
Lambert Schomaker. "Text detection
from natural scene images: towards a
system for visually impaired
persons." Pattern Recognition, 2004.
ICPR 2004. Proceedings of the 17th
International Conference on. Vol. 2.
IEEE, 2004.
2. Do, Cuong, and Michael Merman.
"Automatic bank teller machine for
the blind and visually impaired." U.S.
Patent No. 6,061,666. 9 May 2000.
3. Cohen, Albert E., et al. "Method and
system for assisting the visually
impaired in performing financial
transactions." U.S. Patent No.
6,464,135. 15 Oct. 2002.
4. Moore, Michael C., et al. "Device for
assisting the visually impaired in
product recognition and related
methods." U.S. Patent No. 5,917,174.
29 Jun. 1999.
5. Garaj, Vanja, et al. "A system for
remote sighted guidance of visually
impaired pedestrians." British Journal
of Visual Impairment 21.2 (2003): 55-
63.
6. Sanders, Aaron D., and Kasey C.
Hopper. "System for indoor guidance
with mobility assistance." U.S. Patent
No. 9,539,164. 10 Jan. 2017.
7. Zientara, Peter A., et al. "Third Eye: a
shopping assistant for the visually
impaired." Computer 50.2 (2017): 16-
24.
8. Gowtham, S. U., et al. "Interactive
Voice & IOT Based Route Navigation
System For Visually Impaired People
Using Lifi." (2017).
9. Rahman, Samiur, Sana Ullah, and
Sehat Ullah. "Obstacle Detection in
Indoor Environment for Visually
Impaired Using Mobile
Camera." Journal of Physics:
Conference Series. Vol. 960. No. 1.
IOP Publishing, 2018.
10. Ani, R., et al. "Smart Specs: Voice
assisted text reading system for
visually impaired persons using TTS
method." Innovations in Green Energy
and Healthcare Technologies
(IGEHT), 2017 International
Conference on. IEEE, 2017.
11. Engelke, Robert M., and Kevin R.
Colwell. "System for text assisted
telephony." U.S. Patent Application
No. 15/010,199.
The Wider Realm to Artificial Intelligence in International Law
Abhivardhan
Amity University Uttar Pradesh, Lucknow
Malhaur Near Railway Station, Nizampur, Gomti Nagar Extension,
Lucknow, Thasemau, Uttar Pradesh, India, 226010 [email protected]
Abstract— International Law subsists within a traditionalist approach to the legal instruments existing between states. This traces its origins to a sui generis act of responsive considerations, which form the psychological and material content of state practice. However, the basic tenets of International Law have not yet developed, and the limiting factors that have prevented this are neither exhaustive nor exclusive, but heavily traditional. Illustrative examples include the jurisprudence of the Budapest Convention on Cybercrime and the response within NATO in 2007 to the cyber-attack on Estonia. Moreover, civilized legal principles addressing approaches such as North Korea's WannaCry are scarce, and where any exist, the traditionalism of their manifestation remains a permanent question of concern, which, in the end, is fatal for future approaches in International Law. The question of individual agility and representation in the cyber realm stands unaddressed by International Law, even where there is a subjected realization of AI based on null human intervention, which may broaden. Artificial Intelligence does not only manifest the sui generis realm of the nullity of human sense and presence; it also moves towards the synthesis of a more technical but reflective aspect of society, which shall affect statehood and non-state actors. The paper thus focuses on the ad hoc purpose of devising some basic principles on how the instruments of International Law can modernize their limits via AI.
Keywords— International Law, Artificial Intelligence, Political
Subjectivity, Informational Bias, Sovereignty.
I. INTRODUCTION
International Law is considered to be the concept of legal
reflection among ‘civilized nations’, which is true in its own
origins. However, it traces back its very nearest vicinities in
the Roman Empire, where state sovereignty and statehood
had only a mere relevance up to absolutism, obedience and
deterrence. Then, in the contemporary era, it widened, and we see the formation of International Humanitarian Law, the 1899 and 1907 Hague Conventions and the founding of the United Nations (preceded by the League of Nations, which, however, carried little weight). While increasing global integration is not an entirely new phenomenon, contemporary globalization has had an unprecedented
impact on global public health and is creating adverse
challenges for international law [1] and policy including the
municipal laws and legal system of various developed and
developing nations. However, machines matching humans
in general intelligence (possessing common sense and an
effective ability to learn, reason, and plan to meet complex
information-processing challenges across a wide
range of natural and abstract domains [2]) have been
expected since the invention of computers in the 1940s. [2]
Yet, starting at a sub-human level of development and enduring through all its subsequent growth into an observable galactic superintelligence, the AI's conduct is to be guided by an essentially unchanging final value that becomes better understood by the AI as its general intellectual progress advances, and is likely understood quite differently by the mature AI than it was by its original programmers, though not differently in a random or hostile way [2] but in a benevolently appropriate way.
Henceforth, wherever AI substantiates its subjected matter
of relevance, it enshrines a sui generis relationship with the
limits and phases of human intervention that it seeks.
However, when it comes to International Law, the concept
itself is too stringent: it remains heavily traditionalized and does not accommodate those basic approaches that we still require.
Since the conglomeration of AI and International Law at the level of nations is not yet a reality, no real case exists. However, in the presence of some hypothetical considerations, the purpose of this paper is to give newer directions towards a better and more understandable theoretical and applied reality that we can achieve via AI, bringing the applied mechanisms of International Law towards modernity. A good example is the concept of sovereign recognition, which is dealt with differently in the paper. Other examples concern Artificial Diplomacy, where human intervention in the political aspect of diplomacy is retained, but a watcher exists to control political subjectivity, treating it as mere human-oriented data and not a directive. This is discussed more widely further in the paper.
Therefore, this paper makes an earnest effort to bring together the basic principles and conclusions that can guide the jurisprudence of International Law towards the next level of human and non-human configurations, notwithstanding that these assertions may not amount to an absolute truth.
II. THE TRADITIONAL SUBJECTIVITY OF INTERNATIONAL LAW
The basic foundations of International Law inherited from Roman Law favor sovereignty and remain either moot or restrictive about the relevance of self-determination, which is the baseline for understanding what International Law is.
Even though International Law benefits from the erga omnes obligations of every nation, relations at the international level ascribe a different setup of how these principles shape the general relevance of sovereigns and their functionary manifestations and establishments. Thus, this introduction shall focus a little on the divine theory of the state under Roman Law, advocated at its best by Alberico Gentili.
Gentili stipulated, “Sovereignty is absolute and
perpetual power… This sovereignty means that the prince
never finds anything above him, neither human being, nor
law… This power is absolute and without limitation… That
the ‘Prince is not bound by law’ is law, as is also that ‘Law
is what pleases the prince’. And this is no barbarian law but
Roman law, the first and foremost among human laws…
And so, what is called regal prerogative in England…is
absolute power [3]”.
However, such notions have no substantiated relevance or existence when we deal with contemporary principles. One of these principles excellently states: “It is a principle of international law, and even a general conception of law, that any breach of an engagement involves an obligation to make reparation [4]”.
Still, this rests on the law of obligations, whereby the principle that a State cannot rely on its own wrongful conduct to avoid the consequences of its international obligations is capable of novel applications, and circumstances can be imagined where the international community would be entitled to treat a new State as existing on a given territory, notwithstanding the facts [5]. Thus, if we consider the concept of consent under the Vienna Convention on the Law of Treaties [6], we find that the convention merely grants sovereigns better leverage; in more general terms, obligatory limitations still constrain the creativity needed to embody realism from International Relations theory, as reflected in the International Covenants of 1966 and the Geneva Conventions of 12 August 1949. Irrespective of that, International Law hardens into a traditional system of action, owing to the traditional attributes of the state discernible in state practice and in the customary international law that makes it jus cogens. Still, the attacks by Russian hackers on websites owned by the Government of Estonia were framed as a violation of Article 2 of the Charter of the UN, raising the question whether Article 51 of the Charter and Article 5 of the North Atlantic Treaty could be invoked against a cyber-war by Russia. Moreover, the actionable constraints created by Stuxnet and the fear generated by North Korea's WannaCry leave the International Community in disarray as to how the cyber realm is led. This is not simply a matter of cybercrimes at the international level: deterrence and retribution are merely means to limit and regulate the cyber community while it remains subject to the least control in its young age. Hence, this must be concluded:
• The fault lies not in the existence of the cyber realm, but in the understanding and action with which law has primarily dealt with it;
• Responsiveness and deterrence are needed, but they do not recognize the realism and informational relevance that the cyber realm is manifesting;
• Traditionalizing legal instruments is a drawback to clearly regulating a societal realm, whether it is human or non-human.
III. THE REASON WHY AI CAN GUIDE INTERNATIONAL LAW
Artificial Intelligence, in all its sui generis aspects, is a realm of subjected limitations on the intervention of human instruments, and represents a general realm of self-developed attributes of recognition, representation and participation, which, in the end, necessitates a broader outlook on the lives that non-human and human instruments together make up.
The nexus can never be easy, as the two disciplines are extremely divergent and have never observed the basic relationship that is to be expected between them. Moreover, even after the successful construction of an operational rule-base, the meaning of the individual rules remained a mystery [7]. In International Law, we generally develop entities within the ambit of ‘what is actually there’ and ‘how that can be relevantly persisted’. Hence, where operational configurations build up the required set of contingencies, we come up with clearer solutions. In International Law, the subjectivity problem can be demarcated if we change the prioritization of International Relations with regard to the factor of utility, which is entirely conceivable.
Let us take an example. In 2007, the Russian Federation attacked the cyber systems of the State of Estonia, a European nation. A series of cyber-attacks began on 27 April 2007 and
swamped websites of Estonian organizations, including
Estonian parliament, banks, ministries, newspapers and
broadcasters, amid the country's disagreement with Russia
about the relocation of the Bronze Soldier of Tallinn, an
elaborate Soviet-era grave marker, as well as war graves in
Tallinn [8]. These attacks came to be known as “Web War 1”, a concerted denial-of-service attack on Estonian
government, media and bank web servers that was precipitated
by the decision to move a Soviet-era war memorial in central
Tallinn in 2007 [9]. Similarly, on 20 July 2008, weeks before
the Russian invasion of Georgia, the "zombie" computers were
already on the attack against Georgia [10] [11]. The website of
the Georgian president Mikheil Saakashvili was targeted,
resulting in overloading the site. The traffic directed at the Web
site included the phrase "win+love+in+Rusia". The site then
was taken down for 24 hours [12][13]. Nothing compares to
the destructive power of a nuclear blast. But cyber-attacks
loom on the horizon as a threat that is best understood as an
extraordinary means to a wide variety of political and military
ends, many of which can have serious national security
ramifications. [14]
Now, sovereignty and self-determination cannot be non-human, nor subject to non-human intervention. Why? Sovereignty as a political concept is only for humans; that is why our leaders can maintain the concept's fragility and regulate their representation. However, what if it became possible for sovereignty to be more federalised, meaning that each and every aspect of it is under practical recognition? AI can possibly do that. However, the AI itself must not dominate human interests and opinion, which is a matter of necessary caution. Obviously, all political realms are human, and as long as human thought and presence is a reality, sovereignty as a matter of relevance must remain as it is. So, by this example, we can understand that International Law may become a little more intervening, though that could sometimes materialise into a very dangerous aspect. While machine learning is commonly associated
with automated driving, search algorithms, and image
recognition, it extends to cover the law. [15] Still, this may not be dangerous for the very basic concerns at issue here. If we turn to the ambiguity of customary international law, we can address its problem by maintaining a recognizable limit on human intervention and present relevance, a limit itself maintained by the AI configurations.
IV. THE RELATIVITY FROM AI TO THE INSTRUMENTS OF
INTERNATIONAL LAW AND RELATIONS
Artificial Intelligence can provide a wider utility to the
applied realms of International Law, that is, global
cooperation in IEL and IHL, recognition of cyber realm and
its users with a wider realization of the relevance of every
action in the purview of it and others, which are still
unknown to us. The notion of control, it can be assumed,
has garnered support, because it is a vague term. It is a
longstanding practice in international law to use vague legal
terms which all parties can construe according to their
preferences when positions diverge widely. Thus, the
U.S.A. and other states likely to have autonomous weapons
systems may interpret control loosely. This allows them to
see as lawful a wide variety of constellations, such as an
operator who simultaneously controls hundreds of
weaponized swarm drones or an armed underwater or air
vehicle which has been broadly authorized to use force and
loiters for months [16]. Here are a few examples, analysed to bring clarity to the required objective.
A. Global Health Law
Global Health Law ascribes more than a mere reflective
action under globalization and state responsibilities and
intervention of International Organizations. Thus, referring to Lawrence Gostin: “Global health law is a field that
encompasses the legal norms, processes, and institutions
needed to create the conditions for people throughout the
world to attain the highest possible level of physical and
mental health. [17]” It is widely recognized that the current system of global health governance is insufficient to meet
the wide range of challenges and opportunities brought by
globalization [18]. A fundamental challenge of global
health governance is the state-centric nature of international
law [17], where a critical limitation of the state-centric
nature of international law is its inability to incorporate
nonstate actors in the legal framework for global health
governance. [17] In contemporary global health
governance, states are apparently unwilling to develop
international legal instruments that create binding and
meaningful obligations and incentives and provide deep
funding or services for the protection of the world’s poorest
people. As a consequence of the voluntary nature of
international law and the overriding principle of
sovereignty, states have established only a limited legal
framework for national action and international cooperation
to advance domestic and global public health [17], which
defies the principles of IHRL. Thus, merely maintaining a vague relevance of the concept of sovereignty would not benefit the realm. Instead, via the instrumented demarcation of AI, we can actually create better, more helpful and cogent responsibility drives among states. A close resemblance is Earthwatch, initiated by UNEP under the mandate of the General Assembly, where governments are brought to one table to negotiate and effectively monitor the contingencies that arise.
Here, AI is not used. Nevertheless, even if AI comes into
play, that would never defy state sovereignty because states
give consensual protection, where treaty obligations are
actually protected by the virtue of AI. Second, the AI shall keep a close watch on anomalies to solve the required problems, keeping in mind sovereign obligations and legal backing and no so-called “politically unusual” interests. Third, as International Law cannot be automated, interests shall become more transparent, because even when negotiations turn very secretive or abstract, they would be made more concretely analyzable. The
creation of international legal norms, processes and
institutions provides an ongoing and structured forum for
states to develop a shared humanitarian instinct on global
health. But the problem of using international law as a tool
for effective global health governance has long perplexed
scholars, and for good reason. [17] Thus, let us give states an extended but readily ascertainable utility: opening up not their interests but their activities among states, when required.
B. Customary International Law
Customary International Law is the vaguest idea in
International Law. It is criticized for the demarcated
categories that it has- that is, the psychological element and
the material element. Thus, if we understand how it goes on,
let us analyze one important consideration by the ICJ in
Hungary v. Slovakia [19]
The Court analyzed one statement submitted by Slovakia
pertaining to a Variant C, which was- “It is a general
principle of international law that a party injured by the non-
performance of another contract party must seek to mitigate
the damage he has sustained. [19]” The Court, however, observed that “while this principle might thus provide a basis for the calculation of damages, it could not, on the other hand, justify an otherwise [19]”. Hence, the
responsibility cannot be adjudged. Here, it is important to understand that states commit one single mistake: they hide subjective interests, which is justifiable, but they also hide transparent reasonability, which is nonetheless vague. State practice is restricted not only by the VCLT and jus cogens, but also by the non-generalization of political subjectivity as data rather than directives. That is why even China adopted a ‘liberal’ perspective within its communist political spectrum, and even Japan and the UK need liberal politics and representation, as they know that sovereignty must never be colourless and ineffective. Thus, AI can advise on, control and solve the nuances of state practice, where it can become a separate alternative to the sovereign’s application of mind, provided that the political essence existing in society does not die and is left to replenish itself with time.
C. Statehood and Federalism
Sovereignty as a concept has been discussed extensively.
However, if we try to materialize what sovereignty is, we find that there is no sovereignty as a material at all. It has evolved as a mere manifestation of relevance, or an authoritarian subjectivity that law takes on as a burden; it gives no quantitative solution to any sui generis dispute when self-determination arises as a basic claim from human instruments against the sovereign. That is why even democracy fails in the USA, India and Europe. But that does not mean we
don’t need democratic institutions. To earn democratic
institutions, we need to make it technically more trustworthy
and clear. No subjective data must be converted into a directive
like perception, emotion, ethnicity, religion and sectarianism,
which Artificial Intelligence, being free from all such massive
data of subjectivity, can ably perform. In fact, it can neutralize
sovereignty and redefine every human’s role in a better way to provide subjective independence. Now, even AI can be
dangerous. However, it is the endeavor that should keep going.
Here is a simulating exemplification of how the mechanisms
can be initiated to enhance the political, social and economic
fate of mankind, which shall affect the obligations of
International Law. This example is based on procuring AI systems for diplomatic representations, political assertions and state polity action mechanisms in accordance with the reality of the sovereignty of any nation X.
Political independence is guaranteed by the Charter of the UN in its Article 2(4), which lets us treat any political element, information or other constituent of political realms as data (say). The principles in law and politics are
interpretable. Let us say there shall be two kinds of
interpreters- human and non-human. The non-human
interpreter shall, under the virtue of AI, be given a sui generis
virtue to equate and equitize the requirement, procurement and
limitation of interpretation. However, interpretation in
International Law is human-oriented, i.e., it consists of
application of human virtue and mind, where the capability of
humans and referential presence as curation makes up the task
of interpretation. Now, for politics, this can be a good argument.
However, we can do one more thing. If an AI interface is
acknowledged with all the data or the information converted
into the specifically capsuled data such as the GDP stats,
economic policies and networks at internal and international
level, defence information, legal instruments such as a
constitution, federal laws, codes, statutes, case laws (with
ample limited and suggestive relevance), etc., then we can
surely achieve the cause per se. Provisional results from
Myanmar’s first census in 30 years released over the weekend
show that the country has nearly 9 million fewer people in it
than originally thought, as rights groups decry the absence of
data recognizing its oppressed Muslim Rohingya population.
According to the provisional results, Myanmar now has a
population of 51.41 million, falling short of the estimated 60
million previously believed to be living in the country that was
once all but closed off to the world. [20] This is not in the
purview of International Law directly but shall affect
International Relations. Thus, it shall always have a less
distinctive effect on the utility of international law. Frost and
Sullivan partnered with SocialCops to use our platform to build
a data-driven scorecard comparing each of Johor's 98 police
stations. Frost & Sullivan had already conducted three primary
surveys: Customer Satisfaction Index (CSI) perception, CSI
transaction, and Employee Satisfaction Index (ESI). [21]
Eventually, the dashboard showed what areas felt safest to
citizens, how reliable and fair citizens perceived each police
station, how satisfied police were at each station, and much
more. [21] In 2015, India set out to make its central
government more data driven and transparent. By setting and
tracking clear, quantitative goals, the government would be
able to measure its progress toward building a better India. Niti
Aayog and QCI partnered with SocialCops to make this
possible by unifying and tracking data for all of India's 89
ministries. [22] Thus, there is a good possibility here.
V. CONCLUSIONS
Nothing is absolute, and neither is AI. It simply depends on how effectively we make use of the best human resource on earth, data, whether material or immaterial, relevant and connective with nations and other non-state entities. International Law is not a puppet of the law of treaties and the subjective directives of nations; it is the most beautiful manifestation of statutory relevance and faith, committed to our countless generations to come. So, if we really want to utilize the resource of AI and its future for a better international community, we must regulate and steward the subjectivity that subjectifies International Law, and not merely become subjects of this narrative and directive.
ACKNOWLEDGMENT
The author presents gratitude to his parents and the
colleagues and faculty of Amity University Uttar Pradesh,
Lucknow for moral encouragement and support.
REFERENCES
[1] Lee, K., Buse, K. and Fustukian, S. (eds) (2002). Health Policy in a
Globalising World. Cambridge: Cambridge University Press.
[2] N. Bostrom, Superintelligence. Oxford: Oxford University Press,
2013, pp. 3-210.
[3] A. Gentili, ‘De potestate regis absoluta disputatio’, in Regales
disputationes (London, 1605), pp. 5-11, cited and commented in Panizza, Alberico Gentili, p. 159.
[4] (1928) PCIJ Series A, No. 17, 29. Interpretation of Peace Treaties with Bulgaria, Hungary and Romania, Second Phase, ICJ Reports, 1950, p.
221, 228; Phosphates in Morocco, Preliminary Objections, (1938)
PCIJ, Series A/B, No. 74, 28.
[5] James Crawford, The Creation of States in International Law, 2nd
edition, (Clarendon Press, Oxford, 2006), pp. 477-8. Also see Preamble para 5 of the Israel-Palestine Liberation Organization:
Interim Agreement on the West Bank & the Gaza Strip, signed 28
September 1995, reprinted in (1997) 36 ILM 551 (‘a transitional period not exceeding five years from the date of signing the
Agreement on the Gaza Strip and the Jericho Area on May 4, 1994’).
[6] United Nations, Treaty Series, vol. 1155, p. 331.
[7] Breuker, J. & Wielinga, B. (1989). Models of expertise in knowledge acquisition. In Guida, G. & Tasso, C. (Eds.), Topics in expert system
design, methodologies and tools (pp. 265 -295). Amsterdam,
Netherlands: North Holland.
[8] Traynor, Ian (17 May 2007). Russia accused of unleashing cyberwar to disable Estonia, The Guardian; (1 July 2010). "War in the fifth
domain. Are the mouse and keyboard the new weapons of conflict?".
The Economist. Retrieved 2 July 2010.
[9] Sutherland, Benjamin (2011). The Economist: Modern Warfare,
Intelligence and Deterrence: The technologies that are transforming them, Profile Books, 1-302.
[10] Markoff, John (12 August 2008). Before the Gunfire, Cyberattacks,
The New York Times. Retrieved from http://www.nytimes.com/2008/08/13/technology/13cyber.html.
[11] Wentworth, Travis (23 August 2008). How Russia May Have Attacked Georgia's Internet, Newsweek. Retrieved from
http://www.newsweek.com/how-russia-may-have-attacked-georgias-
internet-88111.
[12] Danchev, Dancho (22 July 2008). Georgia President's web site under
DDoS attack from Russian hackers, ZDNet. Retrieved from
http://www.zdnet.com/article/georgia-presidents-web-site-under-ddos-attack-from-russian-hackers/.
[13] (21 July 2008). Georgia president's Web site falls under DDOS attack, Computerworld, Retrieved from
https://www.computerworld.com/article/2534930/networking/georgia-
president-s-web-site-falls-under-ddos-attack.html.
[14] Kenneth, Geers, (2010). The challenge of cyber attack deterrence,
Volume 26, Issue 3, Computer Law & Security Review, 298-303.
[15] Machine learning and the law is a rather new field. However, the
conference “Fairness, Accountability, and Transparency in Machine Learning” has been organized on a yearly basis since 2014 (see:
http://www.fatml.org). Various conference series and organizations
have also covered the topic broadly for some time, e. g. CEPE (Computer Ethics: Philosophical Enquiry), ETHICOMP, and IACAP
(International Association for Computing and Philosophy). NIPS
(Neural Information Processing Systems) also offered a workshop on Machine Learning and the Law in 2016, organized by Adrian Weller,
Jatinder Singh, Thomas D. Grant, and Conrad McDonnell: see
www.mlandthelaw.org (with papers).
[16] Thomas Burri, “International Law and Artificial Intelligence”, 27
October 2017, available on SSRN: https://ssrn.com/abstract=3060191
(forthcoming in German Yearbook of International Law 2017/18), pp. 1-21.
[17] Lawrence O. Gostin, Allyn L. Taylor; Global Health Law: A
Definition and Grand Challenges, Public Health Ethics, Volume 1, Issue 1, 1 April 2008, Pages 53–63,
https://doi.org/10.1093/phe/phn005.
[18] Dodgson, R., Lee, K. and Drager, N. (2002). Global Health
Governance: A Conceptual Review. Geneva: World Health
Organization and London School of Hygiene and Tropical Medicine
[19] Gabčĭkovo-Nagymaros Project (Hungary v. Slovakia) 1997 I.C.J. 7 (Apr. 9); Chorzow Factory Case (Germany v. Pol.) 1928 P.C.I.J. 47
(ser. A), N° 17 (Sept. 13); Military and Paramilitary Activities in and
against Nicaragua (Nicaragua v. U.S.A.) 1986 I.C.J 14 (June 27); Nuclear Tests Cases (New Zealand v. France) 1974 I.C.J. 4 (Aug. 8).
[20] Heijmans, Philip (September 02, 2014). Myanmar’s Controversial
Census, The Diplomat. Retrieved from https://thediplomat.com/2014/09/myanmars-controversial-census/
[21] CASE STUDY: Evaluating Malaysian Police Stations with Data, SocialCops. Retrieved from https://socialcops.com/case-
studies/crowdsourcing-citizen-feedback-for-malaysian-police/
[22] CASE STUDY: Making India's National Ministries Data Driven,
SocialCops. Retrieved from https://socialcops.com/case-
studies/building-a-system-for-niti-aayog-to-track-89-ministries/
DESIGNING OF LOW POWER AND HIGH PERFORMANCE SAMPLE AND HOLD
CIRCUIT IN 180nm CMOS TECHNOLOGY USING MENTOR GRAPHICS
Snigdha Tripathi, Vivek Kumar, Dr. Geetika Srivastava
Department of Electronics and Communication
Amity University, Lucknow Campus
[email protected], [email protected], [email protected]
ABSTRACT
The objective of this paper is to design a low-power,
high-performance sample and hold circuit in 180nm
CMOS technology. A switched-capacitor topology is
used for the Sample and Hold (S&H) circuit, which
allows integration of both digital and analog functions
on a single silicon chip. The design uses 180nm CMOS
technology with a power supply of 3.3 V. Analog-to-digital
(A/D) converters consist of three parts: (i) sampling,
(ii) quantization, and (iii) coding. The S&H circuit forms
a fundamental building block of A/D converters and is
responsible for sampling the analog signal. It also plays
an important role in determining the operating speed of
A/D converters and helps reduce dynamic errors in the
ADC at high operating frequencies. The simulation results
are obtained with the Mentor Graphics ELDO SPICE
simulator. The results of this paper can be a useful
resource for designers of high-speed applications.
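The three A/D stages named in the abstract (sampling, quantization, coding) can be sketched in a few lines of Python. This is a toy unipolar converter for illustration only, not the paper's Eldo-simulated circuit; the function name and parameters are our own.

```python
def adc(signal, fs, n_bits, v_ref, duration):
    """Toy A/D conversion of a unipolar signal in [0, v_ref)."""
    n_levels = 2 ** n_bits
    step = v_ref / n_levels                         # quantization step size
    codes = []
    for k in range(int(duration * fs)):
        v = signal(k / fs)                          # (i) sampling at t = k/fs
        level = min(int(v / step), n_levels - 1)    # (ii) quantization
        codes.append(format(level, f"0{n_bits}b"))  # (iii) coding to binary
    return codes

# 1 V DC input, 3-bit ADC with a 3.3 V reference
print(adc(lambda t: 1.0, fs=1e6, n_bits=3, v_ref=3.3, duration=4e-6))
```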
I. INTRODUCTION
A sample and hold circuit is an analog device that
samples the continuously varying voltage of an
analog signal and holds its value at a constant level
for a specified period of time. These circuits are
fundamental analog memory devices. They are
typically used in analog-to-digital converters to
eliminate variations in the input signal that would
otherwise disturb the conversion process.
Fig. 1 Sample and Hold Circuit
II. BACKGROUND
Figure 1 shows the circuit diagram of a basic sample
and hold circuit. It consists of a switch s0 coupled in
series with a capacitor Cout. The voltage across the
capacitor follows the input Vin while the switch s0 is
closed, i.e., during the sampling phase.
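This track-and-hold behavior can be sketched as an ideal behavioral model in Python (an idealized switch and lossless capacitor, not the transistor-level circuit; the signal frequencies follow the paper's 10 MHz input and 100 MHz clock, other values are assumed):

```python
import math

def sample_and_hold(vin, clock, t_step, t_end):
    """Ideal S&H: track vin while the switch (clock) is closed, else hold."""
    held = 0.0                        # voltage stored on Cout
    out = []
    for k in range(round(t_end / t_step)):
        t = k * t_step
        if clock(t):                  # switch s0 closed: capacitor tracks Vin
            held = vin(t)
        out.append(held)              # switch open: capacitor holds its value
    return out

vin = lambda t: math.sin(2 * math.pi * 10e6 * t)   # 10 MHz analog input
clk = lambda t: (t * 100e6) % 1.0 < 0.5            # 100 MHz clock, 50% duty
samples = sample_and_hold(vin, clk, t_step=1e-9, t_end=1e-7)
```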
Fig 2.Sample and Hold circuit using NMOS
Fig 3. Sampled output of S&H circuit
Figure 2 shows the schematic of the basic NMOS
(S&H) circuit, in which the switch s0 of Figure 1 is
replaced by an NMOS transistor; the ELDO simulator
is used here for the result analysis. The NMOS device
has L = 180 nm and W = 720 nm and is switched with
a 3.3 V gate pulse. Here V1 is the analog input signal
with a frequency of 10 MHz, and the NMOS transistor
is used as the switching element. V2 is the sampling
clock with a sampling rate of 100 MHz (pulse
amplitude 3.3 V, rise time 0, fall time 0, period
100 us, pulse width 50 us). C1 is a hold capacitor of
1 pF. Figure 3 shows the sampled output of the (S&H)
circuit.
When the input clock at the gate switches high, the
switch turns on and capacitor C1 is shorted to the
input. The NMOS is then in sampling mode and tracks
the input, for which VGS > Vt and VDS < VGS - Vt.
The gate-to-source voltage is VGS = VDD - Vin, so the
on-resistance of the transistor M1 is

Ron = 1 / [ un Cox (W/L) (VGS - Vt) ]
    = 1 / [ un Cox (W/L) (VDD - Vin - Vt) ].

This resistance depends on the input voltage, and the
input-dependent resistance distorts the voltage held on
the capacitor during hold mode.
An NMOS transistor is used for the design of the
(S&H) circuit instead of a PMOS transistor because:
1. N-channel devices are faster than P-channel
devices, since electrons, the carriers in an N-channel
device, have roughly twice the mobility of holes in a
P-channel device.
2. The on-resistance of an N-channel transistor is
about half that of a P-channel transistor operating
under identical conditions, so N-channel integrated
circuits can be implemented in a smaller area of the
silicon wafer.
3. The junction area of a PMOS device is larger than
that of an N-channel device, which gives the
advantage to N-channel devices. The speed of a metal
oxide semiconductor device is limited by its
resistance-capacitance time constant, and junction
capacitance is proportional to junction area, so the
smaller N-channel devices have smaller capacitance
and therefore operate faster.
III. CIRCUIT DESCRIPTION
We have used a unity-gain amplifier circuit as
required by the current-mode approach. The major
concerns regarding unity-gain amplifiers in these
applications are their high-frequency response and
chip dimensions. Unfortunately, switched-capacitor
based unity-gain amplifiers are often parasitic-
sensitive. The unity-gain amplifier is used as the first
stage in various current-feedback amplifiers, such as
current operational amplifiers.
Fig 4. S&H circuit with unity gain amplifier
Figure 4 shows the (S&H) circuit of the analog-to-
digital converter, in which a unity-gain amplifier is
added to the previous NMOS-based circuit. The
unity-gain amplifier does not amplify; it provides a
gain of 1. Its advantage is that it isolates the input of
the circuit from the output. Unity-gain amplifier
circuits come in two types: voltage followers and
voltage inverters. Here we have used a CMOS
voltage-follower implementation of the unity-gain
amplifier.
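The isolation benefit of the buffer can be sketched with a first-order model: without a buffer, the hold capacitor droops as it discharges through the load resistance, whereas an ideal unity-gain follower presents no load to the capacitor. The 1 pF value follows the paper's hold capacitor; the load resistance and hold time are assumed for illustration.

```python
import math

def held_voltage(v0, c, r_load, t, buffered):
    """Voltage seen by the load at time t into the hold phase.
    Unbuffered: the hold capacitor discharges through the load (RC droop).
    Buffered: an ideal unity-gain follower isolates the capacitor (no droop)."""
    if buffered:
        return v0
    return v0 * math.exp(-t / (r_load * c))

v0, c, r_load = 1.0, 1e-12, 100e3     # 1 pF hold cap; 100 kohm load (assumed)
print(held_voltage(v0, c, r_load, t=50e-9, buffered=False))  # droops below 1 V
print(held_voltage(v0, c, r_load, t=50e-9, buffered=True))   # stays at 1 V
```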
IV. RESULTS AND CONCLUSION
Fig.5 Sampled Output with unity gain amplifier
The result analysis of the circuit is based on
simulation in the Mentor Graphics ELDO tool. The
circuit is designed for low power dissipation while
operating on high-frequency input signals, which
gives application designers room for further
optimization. Figure 5 shows the sampled output for
a high-frequency analog input: the output sampled
voltage follows the high-frequency analog input
signal while consuming less power, which supports
more optimized design of analog-to-digital
converters.
REFERENCES
[1] Venes, Ardie GW, and Rudy J. van-de-
Plassche. "An 80-MHz, 80-mW, 8-b CMOS
folding A/D converter with distributed track-
and-hold preprocessing." IEEE Journal of Solid-
State Circuits 31.12 (1996): 1846-1853.
[2] Shahramian, Shahriar, Sorin P. Voinigescu,
and Anthony Chan Carusone. "A 30-GS/sec
track and hold amplifier in 0.13-μm CMOS
technology." Custom Integrated Circuits
Conference, 2006. CICC'06. IEEE. IEEE, 2006.
[3] Jakonis, Darius, and Christer Svensson. "A 1
GHz linearized CMOS track-and-hold circuit."
Circuits and Systems, 2002. ISCAS 2002. IEEE
International Symposium on. Vol. 5. IEEE,
2002.
[4] Dai, Liang, and Ramesh Harjani. "CMOS
switched-op-amp-based sample-and-hold
circuit." IEEE Journal of Solid-State Circuits
35.1 (2000): 109-113.
[5] Bult, Klaas, and Govert JGM Geelen. "A
fast-settling CMOS op amp for SC circuits with
90-dB DC gain." IEEE Journal of Solid-State
Circuits 25.6 (1990): 1379-1384.
[6] Liu, Ren-Chieh, Kuo-Liang Deng, and Huei
Wang. "A 0.6-22-GHz broadband CMOS
distributed amplifier." Radio Frequency
Integrated Circuits (RFIC) Symposium, 2003
IEEE. IEEE, 2003.
[7] Bernstein, Kerry, et al. "High-performance
CMOS variability in the 65-nm regime and
beyond." IBM journal of research and
development 50.4.5 (2006): 433-449.
Artificial Intelligence in Legal Research and other dimensions of Law
Vagish Yadav
Research Scholar
Amity Law School,
Lucknow, India.
Abstract
Legal research is a time-consuming process. It takes
lawyers a great deal of patience and hard work to
find the right sources and materials for sound legal
decision making. In any court of law, advocates are
central to resolving the dispute presented before the
court. Tools and basic ideas from Artificial
Intelligence can make a lawyer's legal research more
efficient and effective. The result would be a better
presentation of the case before the court, improving
legal decision making without the job losses that are
usually feared when Artificial Intelligence is
discussed.
Search methods and planning methods in AI can be
used to build systems that automatically classify and
organize legal text and summarize the required
research on the lawyer's terms. This can be done by
giving the system access to all the data to be used in
legal research for a particular case.
ROSS Intelligence is an example of an artificially
intelligent legal assistant that performs legal research
efficiently. It represents a first step in the subject of
Artificial Intelligence and Law.
I. INTRODUCTION
Artificial Intelligence and Law is a subfield of
Artificial Intelligence (AI). The two subjects
complement each other and can help each other
develop. While Artificial Intelligence is still in the
early stages of developing into a more advanced
form, law has long since established its foundations.
Legal problem solving is highly exposed to the
general public and is therefore the most worked on.
Artificial Intelligence can borrow problem-solving
techniques from the legal perspective and develop its
own basic concepts and tools. Conversely, Artificial
Intelligence has found many uses in the legal world.
Work is being done on better LawBots: programs
that can be hired to carry out simple and repetitive
legal procedures. For instance, filing a case takes
clerks a lot of work, but the procedure is almost
always the same; such jobs can be done by LawBots.
Owing to opposition on the grounds that AI kills
jobs, however, the topic has not developed well.
Another use of Artificial Intelligence in law is legal
research.
Before going further, we define Artificial
Intelligence. Artificial Intelligence consists of
models targeted at thinking, perception, and action.
These models are supported by representations that
are subject to constraints, and the constraints are
enforced by algorithms written by computer
scientists.
Now, legal research is the process of identifying and
retrieving relevant data to be used in legal problem
solving and decision making. This work can be done
by an AI using the same basic procedures that
lawyers follow. The result would be a better search
over the provided database, as well as the fresh
viewpoint of a non-human intelligence, which could
also trigger new ideas that let lawyers pursue the
case within a wider ambit.
II. LEGAL RESEARCH
Legal research is a straightforward procedure known
to every lawyer practicing in a court of law. When a
case is presented in court, the counsel advocates
present their viewpoints to persuade the judge to
deliver judgment in their favor. Behind this act lies
daily research work: collecting strong arguments
backed by precedents or case laws. Most cases are
similar, in whole or in part, to previous cases, and
the arguments presented before the court hold water
only when they are backed by such case laws or
precedents. These chunks of data are legal sources.
There are a variety of legal sources:
1. Legislation
2. Judicial Precedents.
3. Customs
Legislation comprises all laws created and
introduced by the ruling government. It is the most
effective source and has the ultimate power to win a
case.
Judicial precedents are case laws, i.e., judgments
previously delivered by the courts that have binding
power in the current court of law.
Customs are traditions that have not been codified
but are practiced by the people of a state; to count as
a source, a custom must have been practiced from
time immemorial.
For research purposes, the sources are classified into
two categories:
1. Primary Sources
2. Secondary Sources
Primary sources are those sanctioned by a
government entity: courts, legislatures, or executive
agencies. They include case laws, judgments,
statutes, and the constitution.
Secondary sources are essentially restatements of the
primary law, with or without commentary, a
viewpoint, or a suggestion. They include:
1. Legal Dictionaries
2. Words and Phrases
3. Legal Encyclopedias
4. Annotated Law Reports
5. Legal Periodicals
6. Legal Directories
7. Constitutional Debates
8. News
III. CONCEPTS OF ARTIFICIAL INTELLIGENCE
The concepts in Artificial Intelligence that can be
used for legal research are:
1. Rule Based Expert Systems
2. Search Methods
3. Goal Trees
Rule-based expert systems are essentially novice
systems, also called deduction systems. We give the
system some pre-defined rules, and according to
them it finds the most accurate data required for the
case.
These rules are basic deduction rules that work for
almost every general case. Say the lawyer asks for a
certain type of case. The first step is to identify the
case, which requires grasping its basic
characteristics. This is where rule-based expert
systems come into play: there will be rules for
identifying every type of law involved in a case,
through simulations or keywords.
These systems were developed in the mid-1980s.
Knowledge is encapsulated in the form of simple
rules, which make a complex case easier to
comprehend and thereby allow the case to be
identified.
There are two approaches to building an expert
system: forward chaining and backward chaining. A
forward-chaining rule-based expert system works
forward from the facts to a conclusion. A backward-
chaining system instead assumes a hypothesis and
works backward from the hypothesis to the facts;
whichever hypothesis is supported by the given facts
becomes the conclusion. Both are deduction systems.
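Forward chaining as described above can be sketched in a few lines of Python. The legal-classification rules here are invented purely for illustration; a real system would encode rules elicited from legal experts.

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire rules whose premises are all known
    facts, adding their conclusions, until nothing new can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: (set of premises, conclusion)
rules = [
    ({"agreement", "consideration"}, "contract"),
    ({"contract", "breach"}, "contract-dispute"),
]
print(forward_chain({"agreement", "consideration", "breach"}, rules))
```

Note how the second rule can only fire after the first has added "contract" to the fact base, which is precisely the forward movement from facts to conclusion.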
The next step is for the system to answer questions
about its own behavior. This requires it to build goal
trees: trees composed of nodes and links without any
loops, focused on reaching a goal. This is one of the
most basic concepts in Artificial Intelligence;
whatever the problem may be, it can be approached
by drawing a goal tree for it. By walking back along
the links that were formed, the program can answer
questions about its own behavior. For instance, at
the end of the research done by the AI, if the lawyer
wants to know which keyword or question caused a
particular document to appear in the results, the
artificial intelligence can point directly to the
specific keyword, making the research more user-
friendly.
The last topic to discuss is search methods. There
are a variety of search methods in computer science:
hill climbing, depth-first, breadth-first, beam search,
optimal search, and many more.
The system is given a large database of cases, laws,
journals, and all the other sources mentioned in the
previous section. Over this database, search methods
find the most relevant documents for the research
work. Search methods can also be used in
conjunction with goal trees, drawn over the
searching pattern, to help the user understand why
the system rejected certain cases and selected others.
The rules for the search essentially define the type of
search.
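As one concrete instance of these search methods, a breadth-first search over a citation graph can collect cases related to a seed case. The case names and citation links below are entirely hypothetical.

```python
from collections import deque

def breadth_first_related(start, cites, max_depth=2):
    """Breadth-first search over a citation graph: collect cases reachable
    from a seed case within max_depth citation hops."""
    seen = {start}
    queue = deque([(start, 0)])
    found = []
    while queue:
        case, depth = queue.popleft()
        found.append(case)
        if depth < max_depth:
            for nxt in cites.get(case, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return found

cites = {"A v. B": ["C v. D", "E v. F"], "C v. D": ["G v. H"]}  # hypothetical
print(breadth_first_related("A v. B", cites))
```

Swapping the deque for a stack would turn this into depth-first search; scoring and pruning the frontier would give beam search.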
IV. ROSS INTELLIGENCE
ROSS Intelligence is one of the finest artificial
intelligence systems introduced to the world in the
field of legal research. It is based on AI concepts
such as planning, goal trees, and automated search
over databases.
ROSS Intelligence is known for its precise
highlighting of topics, intuitive queries, and law
monitoring: it can track the latest legal
developments.
Currently, ROSS operates in three legal areas.
1. Bankruptcy Law.
2. Intellectual Property Law.
3. Labor and Employment Law.
ROSS Intelligence is a direct application of the
procedure stated above: extracting information from
a large database of legal material and then answering
questions about its own research (its behavior) by
building goal trees.
Today, a number of firms have multiplied their
success with the help of ROSS Intelligence. ROSS
should be developed further, and learning methods
should be made applicable to it; at present, not much
data is publicly available about its internals.
V. OTHER DIMENSIONS OF LAW
Artificial Intelligence is the emerging technology of
the modern age, and this era is only its beginning. A
non-human intelligence can be more than a novice
system. It can contribute to international relations
between countries. Countries today have bilateral
treaties; an artificial intelligence could promote
peace by drafting model multilateral treaties from
which all countries may benefit. Because an
artificial intelligence can take a neutral view of all
countries, it may also help on sensitive issues such
as climate change. Thus, there is more to Artificial
Intelligence than its use in legal research.
Author’s note
The author/researcher would like to thank Amity University
and Cairia 2018 for giving him this chance to present his idea
via this paper in front of you.
He would also like to thank his friends Prafulla Sahu and
Abhivardhan for their constant support and love.
The author would at last like to thank God.
References
[1] Patrick Henry Winston, Artificial Intelligence, 3rd edition.
[2] https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/
[3] V. D. Mahajan, Jurisprudence and Legal Theory.
[4] https://rossintelligence.com/
Role of Artificial Intelligence in Healthcare
Devanshu Srivastava
AMITY Institute of Information Technology (AIIT)
AMITY University
Lucknow, India
E-mail: [email protected]
Abstract
Artificial Intelligence (AI) aims to mimic the human
responsive system. The use of artificial intelligence
has grown rapidly over the last 5-10 years, and it has
become one of the most widely used technologies
across the globe, establishing itself in almost every
field, including education, business, and healthcare.
AI has proved its importance in the field of
healthcare by analyzing data and pointing to the
likely cause of a disease along with possible
solutions. AI uses methods such as Machine
Learning (ML), Natural Language Processing
(NLP), and neural networks to evaluate a disease
and suggest the best options for treating it. The most
important areas where AI tools are used are cancer,
diabetes, stroke, neurology, and cardiology.
Key Words: Neurology, Cardiology, Neural
Networks, Natural Language Processing and
Machine Learning.
Introduction
Nowadays, AI techniques have made a dramatic
impact across the healthcare sector, fueling an active
debate about whether AI will replace human doctors
in the future. According to current studies, human
doctors will not be replaced by AI in the near future;
instead, AI can help physicians make better and
more effective clinical decisions. AI can help
doctors study a disease, its cause, and its symptoms
so that they can decide how to cure it in minimal
time. AI analyzes healthcare data and big data to
collect information about the type of disease and its
cause of occurrence, which helps in solving the
problem. By taking previous data into account, AI
predicts realistic outcomes and causes, which can be
helpful in saving lives.
The aspects of applying AI in the healthcare are as
follows:
1. Reasons for applying artificial intelligence
in the healthcare sector.
2. The types of data that have to be analyzed.
3. Devices that enable the generation of
meaningful AI data.
4. Types of diseases that AI has to tackle.
Reasons
The main reasons to use AI in healthcare are its
repeatability and reliability. Once an AI system is
configured and tested, it can analyze data
continuously, without stress or fatigue, and generate
effective results. It can be updated with new clinical
data and assumptions to make the results more
beneficial and usable for human doctors. It also has
the capability of self-learning and self-correction
without the involvement of a human brain. AI can
help reduce errors in diagnostic and neural tests.
Because it can self-evaluate and self-test, and is not
biased toward anything, the chances of error are very
small, almost negligible. It uses large patient
datasets to assist real-time decision making for
health-risk and health-outcome predictions.
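The repeatability argument can be illustrated with a toy rule-based screen over patient records: once configured, the same rules are applied identically to every record. The thresholds below are illustrative examples only and are not clinical guidance.

```python
def flag_risk(patient):
    """Toy rule-based screen over a patient record.
    Thresholds are illustrative, not clinical guidance."""
    flags = []
    if patient["systolic_bp"] >= 140:
        flags.append("hypertension")
    if patient["fasting_glucose"] >= 126:
        flags.append("possible diabetes")
    return flags

# Every record is screened the same way, with no fatigue or drift
print(flag_risk({"systolic_bp": 150, "fasting_glucose": 110}))
```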
Figure-1 Data types used in artificial intelligence. [1]
HealthCare Data-Types
Before AI systems are deployed in the real world,
they have to be tested and trained on large sets of
clinical data for conditions such as stroke and
diabetes, so that they generate genuine results for the
data studied. Data types vary from disease to
disease, and the AI should be capable of reading all
of them and producing different results for different
types of data. Figure 1 [1] shows the different types
of data studied by AI to produce results for different
diseases.
AI Devices
AI methods fall into two main categories. The first
is Machine Learning (ML), which analyzes
structured data, i.e., data that is available online or in
literature form, such as images and text.
The second category is Natural Language Processing
(NLP), which handles unstructured data such as
clinical notes and medical prescriptions. NLP aims
to convert free text into a structured form that ML
methods can understand and evaluate. Figure 2 [2]
describes the architecture of AI in healthcare using
NLP data and ML analysis.
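The NLP step (turning free text into a structured form ML can consume) can be sketched with simple pattern matching. Real clinical NLP pipelines are far richer; the note text and field names below are invented for illustration.

```python
import re

def extract_structured(note):
    """Toy NLP step: pull structured fields out of a free-text clinical note
    using regular expressions."""
    record = {}
    m = re.search(r"(\d+)-year-old", note)
    if m:
        record["age"] = int(m.group(1))
    m = re.search(r"BP (\d+)/(\d+)", note)          # blood pressure "sys/dia"
    if m:
        record["systolic"] = int(m.group(1))
        record["diastolic"] = int(m.group(2))
    return record

note = "62-year-old patient, BP 150/95, reports chest pain."
print(extract_structured(note))
```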
Types of Disease
Despite the rich AI content across the medical field,
research mainly focuses on a few common disease
types, such as cancer, neural disease, and
cardiovascular disease (Figure 3) [3]. Some
examples are given below:
1. Cancer: Cancer is among the most
dangerous diseases and a major cause of death in
Asia, North America, and parts of the Middle East.
There are many types of cancer, such as blood
cancer, skin cancer, and brain cancer. AI can be
used to gather information about the different types
of cancer and their causes of occurrence, from which
doctors can focus on the cause of a particular cancer
so that proper prevention strategies can be
developed.
2. Neural Disease: Neural disease involves
damage to the nerves and neurons that carry signals
from the brain to other parts of the body. When a
particular nerve is damaged, it cannot carry the brain
signals, which leads to dysfunction of particular
body parts such as the hands, legs, or fingers. AI
helps recognize the major causes of neuron
dysfunction by analyzing the medical data of many
patients.
Figure-2: Architecture of AI in healthcare using NLP data and ML analysis. [2]
3. Cardiology: Cardiology is the branch of
medicine that deals with diseases of the heart and
parts of the circulatory system, including cardiac
arrest and clotting of blood in the heart vessels.
Heart disease is among the major diseases found in
people above the age of 30, and many people lose
their lives to it every year.
The three diseases above are the main focus because
they are very dangerous and take many lives every
year. AI in these fields can help improve medical
care by predicting the likely outcomes and causes of
the disease.
Figure-3: The leading 10 diseases. [3]
Applications of AI in Stroke
Stroke is a very common and frequently occurring
disease; millions of people worldwide suffer strokes
each year, and many lose their lives to them.
Research on how to prevent stroke is therefore very
important. In the past few years, AI techniques have
been used to gather more information about stroke-
related issues. The majority of strokes are ischemic,
caused by a blood clot blocking the blood supply to
the brain.
Many factors affect stroke incidence and mortality.
Compared with previous methods, ML methods are
more accurate and precise in improving prediction
performance for stroke.
ML and NLP can be used to analyze stroke data
from patients around the globe and, based on that
analysis, predict the main causes of stroke and the
ages at which it occurs most.
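A minimal sketch of ML-based risk prediction is a logistic model that maps weighted risk factors to a probability. The features, weights, and bias below are invented for illustration and are not fitted to any real stroke data.

```python
import math

def stroke_risk(features, weights, bias):
    """Logistic-model sketch: weighted risk factors -> probability in (0, 1).
    All parameters here are illustrative, not trained on real data."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [age / 100, smoker (0/1), systolic BP / 200]
weights, bias = [2.0, 1.0, 1.5], -3.0
low = stroke_risk([0.30, 0, 0.55], weights, bias)
high = stroke_risk([0.80, 1, 0.85], weights, bias)
print(f"low-risk profile:  {low:.2f}")
print(f"high-risk profile: {high:.2f}")
```

In a real system the weights would be learned from patient records, which is where the ML analysis of global stroke data described above comes in.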
Conclusion
This paper reviewed the reasons for using AI in
healthcare and presented data on the diseases that
are major causes of death today. It also discussed
the techniques used to analyze patient data from
across the globe and make predictions from it,
which can be helpful in finding the causes of
diseases. The main techniques are Machine
Learning (ML) and Natural Language Processing
(NLP): ML works on structured data, available
online or in print, to make predictions, whereas NLP
works on unstructured clinical text to do the same.
By using Artificial Intelligence we can save many
lives that are currently lost for lack of knowledge
and decision-support systems. AI gives doctors
accurate data so that they can easily decide what
steps to follow for a particular disease so that it can
be cured.
In the near future, most health-related issues will be
tackled by AI itself, but under human supervision.
References
1. (2018) Data types used in artificial intelligence. [Online]. Available: https://www.google.co.in/search?q=ai+in+healthcare+data&source=lnms&tbm=isch&sa=X&#imgrc=O9rnblpYjyQIyM:
2. (2018) Architecture of AI in healthcare using NLP data and ML analysis. [Online]. Available: https://www.google.co.in/search?biw=1366&bih=662&tbm=isch&sa=1&ei=9jXCWrekEsPVvgSF45OYCA&q=roadmap+of+ai+using+nlp+and+ml&oq=roadmap+of+ai+using+nlp+and+ml&gs_l=psy-ab.3...72919.80281.0.80574.18.14.1.0.0.0.655.3103.0j1j7j5-2.10.0....0...1c.1.64.psy-ab..8.0.0....0.nEraZV8ek7w#imgrc=f3shwwEEvpDgLM:
3. (2018) The leading 10 diseases. [Online]. Available: https://www.google.co.in/search?q=ai+in+healthcare+data&source=lnms&tbm=isch&sa=X&#imgdii=XAB2y81qEij0iM:&imgrc=O9rnblpYjyQIyM:
Conference Chair
Dr. Pankaj K. Goswami
Assistant Professor, AIIT
Mob. : 9453434364, [email protected]
Publication Chair
Dr. Parul Verma
Assistant Professor, AIIT
Mob. : 9839289870, [email protected]