Expert Systems With Applications 65 (2016) 361–371
A rule-based support system for dissonance discovery and control
applied to car driving
F. Vanderhaegen a,b
a UVHC, LAMIH, F-59313 Valenciennes, France
b CNRS, UMR 8201, F-59313 Valenciennes, France

Article info
Article history:
Received 21 March 2016
Revised 18 August 2016
Accepted 30 August 2016
Available online 31 August 2016
Keywords:
Inconsistency
Affordance
Knowledge acquisition
Dissonance discovery
Rule-based support system
Inductive reasoning
Deductive reasoning
Abductive reasoning
Abstract
This paper is based on the concept of dissonance, that is, gaps or conflicts existing in a specific knowledge base or among different knowledge bases. It presents a rule-based system that assists human operators in dissonance discovery and control by taking into account two kinds of dissonance, i.e., affordance to study conflicts of use, and inconsistencies to study conflicts of intention and action, through the analysis of cognitive behavior implemented in knowledge bases. This system elaborates the knowledge base composed of rules, and analyzes the knowledge content to discover new knowledge by creating additional rules, or to identify inconsistencies when conflicts between rules occur. The affordance discovery control process uses a deductive and an inductive reasoning algorithm of which the aim is to establish new rules using existing ones. The inconsistency discovery control process applies an abductive reasoning algorithm in order to determine contradictory rules when existing rules may result in opposite intentions being accomplished. Two groups of inconsistencies are addressed: interferences involving several decision makers, and contradictions involving the same decision maker. A knowledge acquisition control process facilitates the creation of the initial rules that contain parameters such as intentions relating to the goals to be achieved, actions to be performed to achieve these intentions, objects used to carry out these actions, and the decision makers who execute these actions using the corresponding objects. A feasibility study taking into account five rule bases, relating to the manual use of an Automated Speed Control System (ASCS), the automated control of the car speed by the ASCS, the manual control of aquaplaning, the manual control of the car speed, and the manual control of car fuel consumption, is proposed to validate the rule-based support system.
© 2016 Elsevier Ltd. All rights reserved.
E-mail address: [email protected]
http://dx.doi.org/10.1016/j.eswa.2016.08.071

1. Introduction

Dissonance engineering relates to engineering science that deals with dissonance, and is considered as a new approach to risk analysis (Vanderhaegen, 2014). It focuses on the concept of dissonance developed in cognitive science (Festinger, 1957) and cindynics (Kerven, 1995). Cognitive dissonance is defined as an incoherence between cognitions, i.e., between elements of knowledge or between sets of knowledge. Cindynics dissonance is a collective or an organizational dissonance related to incoherence between people or groups of people. Dissonance occurs when something seems wrong, i.e., something will be, is, may be or was wrong, and can be interpreted in terms of gaps or conflicts between individual or collective knowledge. Dealing with dissonance is a recursive process: it may generate discomfort or a situation overload, which may result in further dissonance. Therefore, knowledge discovery can produce inconsistency and inconsistent knowledge can lead to knowledge discovery.

Dissonance occurs when gaps or conflicts exist in a specific knowledge base or among different knowledge bases. These gaps or conflicts can be identified in relation to 1) a single base when tasks to be achieved are known and clearly defined, 2) multiple bases when several viewpoints regarding task accomplishment exist, or 3) no base when tasks are unknown, poorly defined or forgotten. As a matter of fact, dissonance due to such gaps or conflicts with no base, in a base or between bases requires the knowledge content to be controlled and relates to dissonance discovery.

This paper contributes to the control of dissonance discovery, and more precisely the discovery of affordances and inconsistencies regarding a rule-based knowledge base, taking into account cognitive behavior of humans and automated components of a human-machine system. For instance, cognitive behavior relates to the application of procedures from a system user manual or the creation of new procedures by human operators. Knowledge bases must then be developed to model the cognitive behavior of human
operators and automated systems. This paper presents an original
support system dedicated to knowledge acquisition and dissonance
discovery. The knowledge acquisition control process aims to de-
velop a rule base taking into account links between parameters
such as intentions, actions, objects and decision makers. An inten-
tion can be linked to other intentions or to a triplet composed of
an action, an object and a decision maker. A decision maker must
execute a specific action using a given object to fulfill an intention.
The affordance and inconsistency discovery control processes use
different rule bases to identify possible new rules or contradictions
between rules. Section 2 presents the concept of dissonance dis-
covery and control. A specific architecture and formalism adapting
the principles of deductive, inductive or abductive reasoning are
then proposed in Section 3 . Section 4 presents a feasibility study
to validate the proposed system with a practical example of an ap-
plication.
2. Dissonance discovery and control
Knowledge analysis aims to verify the integrity of the knowl-
edge content. The concept of knowledge inconsistency is used
mainly in the literature for considering knowledge integrity.
Knowledge integrity may be affected by many sources of prob-
lems such as syntactical mistakes, inconsistent information, obso-
lete knowledge, incoherent knowledge, lack of knowledge or repet-
itive knowledge ( Batarekh, Preece, Bennett, & Grogono, 1991; Co-
enen, Eaglestone, & Ridley, 1999; Hunter, 2002; Nguyen, 2008; O’Keefe & Preece, 1996). Inconsistency control aims to find a consensus
when conflicting rules are true at the same time. Several ontology
or rule fusion-based approaches can be applied to recover inconsis-
tency. Other so-called paraconsistent reasoning-based approaches
tolerate the presence of inconsistent knowledge by applying spe-
cific rules to control possible absurd rules ( Grant & Hunter, 2008 ).
Knowledge discovery can also be a source of inconsistency.
The main principle of knowledge discovery consists in us-
ing several knowledge bases in order to merge them and dis-
cover new knowledge ( Wachla & Moczulski, 2007 ; Lee and Wang,
2012; Ruiz, Foguem, & Grabot, 2014; Valverde-Albacete, González-
Calabozo, Peñas, & Peláez-Moreno, 2016; Wanderley, Tacla, Barthès,
& Paraiso, 2015; Zhang et al., 2014 ). It can also concern an un-
expected discovery such as serendipity ( McCay-Peet, Toms, & Kel-
loway, 2015 ), or creative discovery such as inventive problem solv-
ing ( Yan, Zanni-Merk, Cavallucci, & Collet, 2014 ). Relaxing safety
constraints can lead to the discovery of new alternative action
plans (Ben Yahia et al., 2015). Knowledge discovery can also be the
result of trial-and-error and wait-and-see-based behavior to con-
trol unknown situations or to test new alternatives ( Vanderhaegen
& Caulier, 2011 ).
A particular knowledge discovery process consists in applying
the affordance principle. Affordance is based on relations between
objects and possible actions that can be achieved using these
objects ( Gibson, 1986; Zieba, Polet, Vanderhaegen, & Debernard,
2010 ). For instance, the object “chair” can be related to the action
“sit” . Regarding the experience of chair users, other actions can be
identified:
• A chair can be related to the action “climb” if a person climbs on a chair to access remote objects such as lights on the ceiling.
• A chair can be related to the action “transport” if a person uses a wheelchair after an accident for instance.
Therefore, the knowledge discovery process consists in creating
new relationships between objects and actions. Conflicts may occur
between some of the relationships discovered. Such affordances
lead to dissonance. Another kind of dissonance relates to incon-
sistency between rules, data, beliefs, intentions, perceptions, inter-
pretations or decisions for instance ( Ben-David & Jagerman, 1997;
Dash, Dash, Dehuri, Cho, & Wang, 2013; Hunter & Summerton, 2006; Ma, Zhang, & Lu, 2010; McBriar et al., 2003; Telci, Maden, & Kantur, 2011; Wu & Liu, 2014; Xue, Zeng, Koehl, & Chen, 2014). Automation surprise, barrier removal and cognitive blindness are examples of such inconsistency. Automation surprise is the inconsistency of an intention between an automated system and its user (Inagaki, 2008). Barrier removal is an inconsistency between viewpoints on the same situation involving the use of a safety barrier (Vanderhaegen, 2010). Cognitive blindness such as perseveration or the tunneling effect is a conflict of perception when human experts with high levels of knowledge do not hear alarms even though the latter are functioning correctly (Dehais, Causse, Vachon, & Tremblay, 2012).

Dissonance can occur or be generated when there is a loss or lack of knowledge, or when the required knowledge has nothing to do with the current one, from an individual or organizational point of view (Hendriks, 1999; McBriar et al., 2003; Sharma & Bhattacharya, 2013; Vanderhaegen, 2014; Wu & Liu, 2014). Other kinds of dissonance can be considered (Hunter & Summerton, 2006). Dispositional dissonance relates to opposite knowledge about the same facts, epistemic dissonance concerns different beliefs about the sources of knowledge, and ontological dissonance is different, opposite meanings of the same knowledge. Therefore, if the knowledge discovery process is related to dissonance, it is called dissonance discovery.

The control of dissonance discovery in relation to the discovery of affordances or inconsistencies may require different support tools. Some support tools consist in sharing or gathering knowledge from different human operators or automated systems in order to attain an individual or a joint goal (Vanderhaegen, 2012; Vanderhaegen, 1997; Zieba, Polet, & Vanderhaegen, 2011, 1999). A shared workspace is then required in order to facilitate the cooperation process and reduce the risk of human errors (Jouglet, Piechowiak, & Vanderhaegen, 2003; Vanderhaegen, Jouglet, & Piechowiak, 2004). Other support tools facilitate self-learning or co-learning in order to reject the dissonance and to ignore its possible impact on knowledge, to solve it and to produce new knowledge, or to modify or delete current knowledge. These tools aim to reinforce individual or collective knowledge content (Ouedraogo, Enjalbert, & Vanderhaegen, 2013; Vanderhaegen & Zieba, 2014). Knowledge content can be represented by logical rules and implemented using genetic algorithms, artificial neural networks, case-based reasoning systems, or rule-based control systems for instance (Chen, Khoo, Chong, & Yin, 2014; Polet et al., 2012; Rubiolo, Caliusco, Stegmayer, Coronel, & Fabrizi, 2012; Ben Yahia et al., 2015; Colak, Karaman, & Turtayb, 2015).

Table 1 summarizes some dissonance studies and introduces the contributions of this paper. The study of dissonance requires cognitive analysis of field observations when knowledge is not formalized and implemented in a system, or automated systems for control support.

Contributions requiring retrospective dissonance analysis are based on cognitive behavior analysis, and are field studies that need retrospective methods to record and analyze data so as to explain dissonances that have occurred. Data concern for instance psychological, physiological or physical information, or come from questionnaires. The data collection and analysis relates to the activities of a human operator interacting with the same system.

The paper proposes a support tool for formalizing data in terms of rules and for analyzing them in order to anticipate dissonances. A decision maker is free to define the initial rules and assess the proposed dissonances, but the initial rules and the dissonances can be validated by other decision makers.

Contributions implementing automated support tools apply deductive, inductive or abductive reasoning to recover or predict only one kind of dissonance. The proposed support tool implements
Table 1
Summary of some dissonance studies and the contributions of this paper.

Example of references | Associated dissonance | Principle | Data source or knowledge processing | Dissonance analysis or control
(Rushby, 2002), (Inagaki, 2008) | Automation surprise | Conflict of intention | Data from questionnaires or psychological, physiological or physical data | Retrospective dissonance analysis
(Dehais et al., 2012) | Tunneling effect | Cognitive blindness | same as above | same as above
(Vanderhaegen, 1999) | Erroneous cooperation | Conflict of allocation | same as above | same as above
(Vanderhaegen et al., 2006) | Competition | Conflict of interest | same as above | same as above
(Gibson, 1986), (Zieba et al., 2010) | Affordance | Conflict of use | same as above | same as above
(Brunel, Gallen, 2011), (Telci et al., 2011) | Organizational change | Conflict of information | same as above | same as above
(Vanderhaegen, Caulier, 2011) | Lack of autonomy | Lack of knowledge | Abductive reasoning | Dissonance recovery or prediction support tool
(Ben Yahia et al., 2015) | Difficult decision | Conflict between alternatives | Inductive reasoning | same as above
(Vanderhaegen, 2010), (Vanderhaegen et al., 2011) | Barrier removal | Conflict between viewpoints | Deductive reasoning | same as above
(Bench-Capon, Jones, 1999), (Hunter, Summerton, 2006), (Nguyen, 2008) | Inconsistency | Conflict of action | Deductive and abductive reasoning | same as above
Contributions of the current paper | Interference, contradiction and affordance | Conflict of intention, conflict of action and conflict of use | Deductive, inductive and abductive reasoning | Dissonance discovery support tool
Fig. 1. Dissonance studied in the paper.
Fig. 2. The rule-based system architecture (acquisition interface, knowledge base of rule bases 1 to N, affordance and inconsistency discovery modules, validation interface, users).
an original adaptation of deductive, inductive or abductive reasoning, and combines them in order to discover and anticipate several kinds of dissonances such as interferences, contradictions and affordances. The adaptation consists in modeling the cognitive behavior by making links between intentions, actions, and supports to achieve them, and in redefining deductive, inductive and abductive reasoning to study possible affordances among different behaviors.

The innovation presented in this paper thus relates to dissonance discovery, which is a new concept adapted from knowledge discovery and inconsistency, taking into account cognitive conflicts in a knowledge base or among several knowledge bases. The knowledge bases are developed using a specific formalism of rules based on the cognitive behavior of human and automated components of a human-machine system. Thus, the rules contain intentions to carry out an action, or a triplet composed of the actions to be carried out, the objects that are the physical supports to fulfill the actions and the decision-makers who undertake the actions.

The paper proposes an original rule-based tool to model human and technical behaviors, to detect possible conflicts between these behaviors, and to assist dissonance discovery and control processes. Three kinds of dissonances were addressed and are presented in Fig. 1: affordances, when new rules can be created using the same objects to achieve other intentions; contradictions, when the same decision maker can carry out opposite actions; interferences, when different decision makers can carry out opposite actions. Automation surprise is an example of interference. Contradictions and interferences are particular inconsistencies.

3. The rule-based support system for dissonance discovery and control

Section 3.1 presents the global rule-based system architecture, and its modules are detailed in the following sections. Section 3.2 describes the knowledge acquisition interface. Section 3.3 recalls the deductive, abductive and inductive reasoning that will be used in Sections 3.4 and 3.5 to present the affordance discovery module and the inconsistency discovery module, respectively. The interface to validate the affordances and inconsistencies discovered is presented in Section 3.6.

3.1. The rule-based system architecture

The system architecture consists of the following modules: the knowledge acquisition interface module, the dissonance discovery control module, and the validation interface module. Fig. 2 depicts
Fig. 3. The knowledge acquisition interface module (editable lists of intentions, actions, objects and decision makers with Add/Del/Mod buttons, a rule-definition and save area, and the list of rules per rule base).
the system architecture. The dissonance discovery process identi-
fies two kinds of dissonance: possible affordances and possible in-
consistencies.
The knowledge base is a set of knowledge, K, that contains several rule bases. A given rule, R_i, of a rule base, RB, consists of a predicate of activation, Pred(R_i), and a conclusion, Conc(R_i). The intentions, actions, objects and decision makers are inputs to build the rule base. The set of intentions, I, relates to possible predicates of a rule. The conclusion of a rule is an intention or a triplet (A, O, D). The set of actions, A, lists the possible actions to be achieved by a decision maker. The set of objects, O, contains the possible objects that the decision maker can use to achieve the corresponding action. The set of decision makers, D, is the list of actors who can achieve the action using the associated object.

Therefore, the following notation characterizes a given rule R_i of a rule base RB from K using Pred(R_i), Conc(R_i), and the I, A, O and D sets:

R_i ∈ RB → (R_i = (Pred(R_i) → Conc(R_i))), with Pred(R_i) ∈ I and (Conc(R_i) ∈ I or Conc(R_i) ∈ (A, O, D))   (1)
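The rule formalism of Eq. (1) can be sketched as a small data structure. This is a minimal illustrative sketch, not the paper's implementation; the names Rule, pred and conc are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# A conclusion is either another intention (a string from the set I)
# or a triplet (action, object, decision maker) from A x O x D.
Triplet = Tuple[str, str, str]

@dataclass(frozen=True)
class Rule:
    pred: str                  # predicate of activation: an intention from I
    conc: Union[str, Triplet]  # conclusion: an intention or an (A, O, D) triplet

# Rules R3 and R4 of the feasibility study, encoded in this formalism:
r3 = Rule(pred="To deactivate the ASCS", conc="To brake")
r4 = Rule(pred="To brake", conc=("To push", "Brake pedal", "Driver"))
```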
3.2. The knowledge acquisition interface
The corresponding knowledge acquisition interface module
helps define the content of the rule base ( Fig. 3 ).
This module aims to help create the initial rules taking into ac-
count the sets of intentions, actions, objects and decision makers.
The elements of these sets can be added, deleted or modified, and
must be used to define a rule. Once a rule is defined, it has to be
saved so it can be integrated in the list of rules of the correspond-
ing rule base area. After such validation, a rule from this rule base
can be deleted in the case of an error and redefined using the ded-
icated rule-building area.
3.3. Deductive, abductive and inductive reasoning

Deductive, inductive, and abductive reasoning were adapted from a first-order propositional logic presented in (Brachman & Levesque, 2004). Deductive reasoning is strict top-down reasoning. If the fact p(a) and the rule p(x) ⇒ q(x) are true, then q(a) is true:

(p(a) and ∀x (p(x) ⇒ q(x))) ⊢ q(a)   (2)

Inductive reasoning is bottom-up reasoning. If the rules p(a) ⇒ q(a) and p(b) ⇒ q(b) are true, then the rule p(x) ⇒ q(x) is true:

(p(a) ⇒ q(a) and p(b) ⇒ q(b)) ⊢ ∀x (p(x) ⇒ q(x))   (3)

Abductive reasoning is the opposite of deductive reasoning. If the fact q(a) and the rule p(x) ⇒ q(x) are true, then p(a) is true:

(q(a) and ∀x (p(x) ⇒ q(x))) ⊢ p(a)   (4)

The adaptation of deductive and abductive reasoning consists in identifying the rules from a rule base, RB, that match an input fact represented by an intention Int or a triplet (Act, Obj, Dec) for an action, an object and a decision maker, respectively. The adaptation of inductive reasoning concerns the discovery of new rules knowing that the rules from the rule base RB are true. The corresponding algorithms are listed in Fig. 4, and the corresponding mathematical functions are described hereafter.

The deductive function, Deduction(Int, RB), consists in determining the possible conclusions knowing a given predicate Int and the rule base RB where this predicate exists:

Deduction: I × K → K
(Int, RB) → RB_d = Deduction(Int, RB), ∀R ∈ RB, Int = Pred(R), RB_d = ∪(R)   (5)
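As a sketch under the rule formalism above (hypothetical names; the paper gives no implementation), the Deduction function of Eq. (5) reduces to collecting the rules whose predicate equals the given intention:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Triplet = Tuple[str, str, str]

@dataclass(frozen=True)
class Rule:
    pred: str
    conc: Union[str, Triplet]

def deduction(int_: str, rb: List[Rule]) -> List[Rule]:
    """Eq. (5): RB_d is the union of all R in RB with Int = Pred(R)."""
    return [r for r in rb if r.pred == int_]

rb = [
    Rule("To deactivate the ASCS", "To brake"),
    Rule("To brake", ("To push", "Brake pedal", "Driver")),
    Rule("To deactivate the ASCS", "To disengage"),
]
# Deduction finds both ways of deactivating the ASCS.
assert [r.conc for r in deduction("To deactivate the ASCS", rb)] == ["To brake", "To disengage"]
```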
The abductive reasoning function, Abduction(Int, RB) or Abduc-
ion(Act, Obj, Dec, RB) , consists in searching a predicate knowing a
Fig. 4. Adaptation of deductive, abductive and inductive reasoning.
Fig. 5. Affordance discovery algorithms.
given conclusion Int or (Act, Obj, Dec) and the rules of the rule base RB where this conclusion occurs:

Abduction: I × K → K
(Int, RB) → RB_a = Abduction(Int, RB), ∀R ∈ RB, Int = Conc(R), RB_a = ∪(R)   (6)

Abduction: A × O × D × K → K
(Act, Obj, Dec, RB) → RB_a = Abduction(Act, Obj, Dec, RB), ∀R ∈ RB, (Act, Obj, Dec) = Conc(R), RB_a = ∪(R)   (7)

Inductive reasoning aims to create new rules based on existing ones. The induction process consists in discovering new rules by combining the predicate of a rule with the conclusion of another rule. Therefore, given two different rules R_i and R_j of a rule base RB, the mathematical function Induction(RB) is defined as follows:

Induction: K → K
RB → RB_i = Induction(RB), ∀R_i ∈ RB, ∀R_j ∈ RB, R_i ≠ R_j,
RB_i = ∪((Pred(R_i) → Conc(R_j)) ∪ (Pred(R_j) → Conc(R_i)))   (8)

The affordance discovery process combines deductive and inductive reasoning. It produces new rules regarding a given system that functions using several rule bases. The inconsistency discovery process relates to abductive reasoning by identifying possible opposite actions. Several functions are then required for the control support of possible dissonance that may arise among rule bases. The K_Affordance function, which uses the K_Filtering function, aims to list possible affordances. The K_Inconsistency function determines possible inconsistencies between rules. The K_Interference function identifies interferences among these inconsistencies and the K_Contradiction function lists the contradictions among these inconsistencies. These functions are defined in Sections 3.4 and 3.5.
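Continuing the illustrative sketch (names and encoding are assumptions), Eqs. (6)-(8) can be rendered as follows; note that Eq. (8) is a blind cross-combination of predicates and conclusions, with filtering of trivial candidates left to later validation:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Triplet = Tuple[str, str, str]

@dataclass(frozen=True)
class Rule:
    pred: str
    conc: Union[str, Triplet]

def abduction(conclusion: Union[str, Triplet], rb: List[Rule]) -> List[Rule]:
    """Eqs. (6)-(7): rules whose conclusion equals the given intention or triplet."""
    return [r for r in rb if r.conc == conclusion]

def induction(rb: List[Rule]) -> List[Rule]:
    """Eq. (8): cross-combine the predicates and conclusions of distinct rules."""
    out = []
    for ri in rb:
        for rj in rb:
            if ri != rj:
                cand = Rule(ri.pred, rj.conc)
                if cand not in rb and cand not in out:  # keep only genuinely new rules
                    out.append(cand)
    return out

rb = [Rule("To brake", ("To push", "Brake pedal", "Driver")),
      Rule("To decelerate", "To brake")]
assert abduction("To brake", rb) == [rb[1]]
# Induction chains "To decelerate" directly to the brake-pedal triplet.
assert Rule("To decelerate", ("To push", "Brake pedal", "Driver")) in induction(rb)
```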
3.4. The affordance discovery module

Affordance discovery is aimed at discovering possible new relations between intentions or between objects and actions to achieve the same intention by applying deductive and inductive reasoning. The algorithms of this application are given in Fig. 5, and the corresponding mathematical functions are detailed hereafter.

First, the K_Filtering function applies deductive reasoning, which is required to identify similar predicates, to identify pairs of rules that may produce possible affordances related to similar intentions:

K_Filtering: K → K²
RB → RB+ = K_Filtering(RB), ∀R_i ∈ RB, ∀R_j ∈ RB, i ≠ j, Pred(R_i) ⊂ Pred(R_j), Pred(R_i) ≠ Not(Pred(R_j)), RB+ = ∪(R_i, R_j)   (9)

The K_Affordance function applies inductive reasoning to create new rules based on the predicates and the conclusions of a pair of rules. New rules are proposed if they do not already exist in the initial base RB. The function K_Afford_Plus assumes this role:

K_Afford_Plus: K³ → K
(R_1, R_2, RB) → RB_aff = K_Afford_Plus(R_1, R_2, RB), (Pred(R_1) → Conc(R_2)) ∉ RB, RB_aff = (Pred(R_1) → Conc(R_2))   (10)

The global affordance search process addresses all the pairs of RB+ rules identified by K_Filtering. It then uses the K_Affordance function to create all possible new rules:
Fig. 6. The inconsistency discovery algorithm.
K_Affordance: K → K
RB → RB_aff = K_Affordance(RB), ∀(R_i, R_j) ∈ K_Filtering(RB),
RB_aff = ∪(K_Afford_Plus(R_i, R_j, RB) ∪ K_Afford_Plus(R_j, R_i, RB))   (11)
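A sketch of the affordance pipeline of Eqs. (9)-(11) follows. It is illustrative only: predicate inclusion Pred(R_i) ⊂ Pred(R_j) is approximated here by substring containment, and the Not() check by a "Not " prefix, both of which are assumptions rather than the paper's encoding.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Triplet = Tuple[str, str, str]

@dataclass(frozen=True)
class Rule:
    pred: str
    conc: Union[str, Triplet]

def k_filtering(rb: List[Rule]) -> List[Tuple[Rule, Rule]]:
    """Eq. (9): pairs of distinct rules whose predicates overlap (inclusion as substring)."""
    return [(ri, rj) for ri in rb for rj in rb
            if ri != rj and ri.pred in rj.pred and ri.pred != "Not " + rj.pred]

def k_afford_plus(r1: Rule, r2: Rule, rb: List[Rule]) -> List[Rule]:
    """Eq. (10): propose Pred(r1) -> Conc(r2) if it is not already in RB."""
    cand = Rule(r1.pred, r2.conc)
    return [cand] if cand not in rb else []

def k_affordance(rb: List[Rule]) -> List[Rule]:
    """Eq. (11): all new rules built from the filtered pairs."""
    out = []
    for ri, rj in k_filtering(rb):
        for cand in k_afford_plus(ri, rj, rb) + k_afford_plus(rj, ri, rb):
            if cand not in out:
                out.append(cand)
    return out

rb = [
    Rule("To decrease the car speed", ("To push", "Brake pedal", "Driver")),
    Rule("To decrease the car speed when it is over the ASCS setpoint", "To decelerate"),
]
# A discovered affordance: the brake pedal can also serve the ASCS-related intention.
assert Rule("To decrease the car speed when it is over the ASCS setpoint",
            ("To push", "Brake pedal", "Driver")) in k_affordance(rb)
```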
3.5. The inconsistency discovery module
The algorithms of this application are given in Figs. 6 and 7 , and
the corresponding mathematical functions are detailed hereafter.
The K_Inconsistency function aims to list the contradictory rules, i.e., rules that present opposite intentions, and is based on abductive reasoning by limiting the selection of rules for which opposite actions may occur in two different sub-rule bases noted RB_A and RB_B:

K_Inconsistency: K → K²
RB → RB_inc = K_Inconsistency(RB), ∀RB_A ∈ RB, ∀RB_B ∈ RB, RB_A ≠ RB_B, ∀(R_i, R_j) ∈ (RB_A, RB_B), Conc(R_i) = Not(Conc(R_j)), Conc(R_i) ∈ I, Conc(R_j) ∈ I, (R_j, R_i) ∉ RB_inc, RB_inc = ∪(R_i, R_j)   (12)

Two kinds of inconsistencies are identified among RB_inc: interferences, when different decision makers carry out opposite actions, and contradictions, when the same decision maker is supposed to carry out opposite actions (Fig. 7).

As the conclusion of a RB_inc rule appears in set I, it has to be processed by the K_Select_Plus and K_Selecting functions in order to replace it with the conclusion of a rule from the same rule base composed of a triplet (A, O, D):

K_Select_Plus: K² → K
(R_i, RB) → R_i = K_Select_Plus(R_i, RB), ∃R_j ∈ RB, Conc(R_j) ∈ (A, O, D), Conc(R_i) = Pred(R_j), R_i ← (Pred(R_i) → Conc(R_j))   (13)

The K_Selecting function aims to identify the decision makers that apply the rules when the conclusions of the rules are limited to the expression of an intention. RB_inc is then transformed into RB_inc+ using the K_Select_Plus function. For each pair of rules from RB_inc, the transformation is achieved using two rule bases from RB noted RB_A and RB_B:

K_Selecting: K → K
RB → RB_inc+ = K_Selecting(RB), ∀(R_i, R_j) ∈ K_Inconsistency(RB), ∃RB_A ∈ RB, R_i ∈ RB_A, ∃RB_B ∈ RB, R_j ∈ RB_B, RB_A ≠ RB_B,
RB_inc+ = ∪(K_Select_Plus(R_i, RB_A), K_Select_Plus(R_j, RB_B))   (14)
The K_Interference function aims to select inconsistencies involving different decision makers. The discovery of all interferences requires the list of inconsistencies to be processed by applying the K_Selecting function:

K_Interference: K → K²
RB → RB_inter = K_Interference(RB), ∀(R_i, R_j) ∈ K_Selecting(RB), Conc(R_i) ∈ (A, O, D) and Conc(R_j) ∈ (A, O, D), D(R_i) ≠ D(R_j), RB_inter ← ∪(R_i, R_j)   (15)
The K_Contradiction function selects the inconsistencies of a same decision maker. The identification of all the contradictions uses the list of inconsistencies obtained by the K_Selecting function:

K_Contradiction: K → K²
RB → RB_contr = K_Contradiction(RB), ∀(R_i, R_j) ∈ K_Selecting(RB), Conc(R_i) ∈ (A, O, D) and Conc(R_j) ∈ (A, O, D), D(R_i) = D(R_j), RB_contr ← ∪(R_i, R_j)   (16)
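The whole inconsistency chain of Eqs. (12)-(16) can be sketched on rules drawn from Tables 3 and 4 (R9/R10 versus R15/R16). The "Not to X" prefix convention for Not() and all function names are assumptions made for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union

Triplet = Tuple[str, str, str]

@dataclass(frozen=True)
class Rule:
    pred: str
    conc: Union[str, Triplet]

def negate(intention: str) -> str:
    """Toy Not() convention: 'To X' <-> 'Not to X' (an assumption, not the paper's encoding)."""
    if intention.startswith("Not "):
        rest = intention[4:]
        return rest[0].upper() + rest[1:]
    return "Not " + intention[0].lower() + intention[1:]

def k_inconsistency(rb_a: List[Rule], rb_b: List[Rule]) -> List[Tuple[Rule, Rule]]:
    """Eq. (12): pairs of rules from two bases whose intention conclusions are opposite."""
    return [(ri, rj) for ri in rb_a for rj in rb_b
            if isinstance(ri.conc, str) and isinstance(rj.conc, str)
            and ri.conc == negate(rj.conc)]

def decision_maker(rule: Rule, rb: List[Rule]) -> Optional[str]:
    """Eqs. (13)-(14): follow the intention conclusion to a triplet rule and read its D."""
    for rj in rb:
        if isinstance(rj.conc, tuple) and rule.conc == rj.pred:
            return rj.conc[2]
    return None

def classify(ri: Rule, rj: Rule, rb_a: List[Rule], rb_b: List[Rule]) -> str:
    """Eqs. (15)-(16): interference if the decision makers differ, contradiction otherwise."""
    return "interference" if decision_maker(ri, rb_a) != decision_maker(rj, rb_b) else "contradiction"

rb_ascs = [Rule("To increase the car speed under the setpoint", "To accelerate"),
           Rule("To accelerate", ("To increase", "Engine speed", "ASCS"))]
rb_aqua = [Rule("To control aquaplaning", "Not to accelerate"),
           Rule("Not to accelerate", ("Not to push", "Gas pedal", "Driver"))]
(ri, rj), = k_inconsistency(rb_ascs, rb_aqua)
assert classify(ri, rj, rb_ascs, rb_aqua) == "interference"  # ASCS versus Driver
```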
3.6. The validation interface module

A specific validation interface is proposed in order to validate 1) each rule in a given rule base RB, 2) each new RB_aff rule proposed by the K_Affordance function, 3) each inconsistent RB_inc rule obtained by the K_Inconsistency function, 4) each contradictory RB_contr rule obtained by the K_Contradiction function, and 5) each RB_inter interference obtained by the K_Interference function. The users can then give their points of view on such inputs or outputs by integrating levels of certainty (Fig. 8).

The next section proposes a feasibility study for the proposed architecture and formalism taking into account several rule bases: the rules for using an Automated car Speed Control System (ASCS), the rules for the automated control of the car speed by the ASCS, the rules for the manual control of aquaplaning, the rules for the manual control of car speed, and the rules for the control of car fuel consumption.
4. Feasibility study applied to car driving

Knowledge acquisition consists in inserting the specific cognitive knowledge of a human operator into the knowledge bases of the proposed system in order to analyze it and bring to light possible dissonance. Therefore, regarding the car driving feasibility study, one expert car driver was invited to freely build the rule bases in terms of intentions, actions, objects and decision makers using the proposed system. This driver had thirty years' driving experience. Twenty car drivers aged between twenty-four and twenty-nine years with at least five years' driving experience validated the rule bases created by this expert and the dissonance discovered. The car drivers were students or members of staff at the University of Valenciennes. Section 4.1 presents the initial rule bases defined by the expert car driver using the proposed rule-based system. Sections 4.2 and 4.3 present the results relating to affordance and inconsistency discovery.

4.1. The initial knowledge base and its validation

This feasibility study concerned the results of possible knowledge discovery and inconsistency that can appear in a knowledge
Fig. 7. Interference and contradiction discovery algorithms.
Fig. 8. The validation interface module.
base consisting of different rule bases. The possible discoveries and inconsistencies between five rule bases were addressed. A car driver was invited to build the knowledge base using the knowledge acquisition support interface. This process took one hour and involved defining the rules related to 1) the manual use of an ASCS, 2) automated car speed control by the ASCS, 3) the manual control of aquaplaning, 4) manual car speed control, and 5) the manual control of car fuel consumption. The content of the rule bases defined by this car driver is given in Tables 1–5. Twenty other car drivers assessed, validated and discussed the outputs from the global knowledge base grouping the rules of the five bases given by the K_Affordance and K_Inconsistency functions.

The ASCS is a car cruise control system that is integrated in many cars proposed by several manufacturers. Its rules of use are presented in Table 1. The “+” and the “−” buttons were used to provide the initial speed setpoint or to modify this setpoint. The ASCS was turned on or off using the “on” or “off” buttons, respectively. A limited number of intentions, actions, objects and decision makers were identified to produce the associated rules (see Table 2).
Table 2
Intentions, actions, objects, decision makers and rules for the manual use of the ASCS.
I = {To turn on the ASCS, To turn off the ASCS, To deactivate the ASCS, To brake, To disengage, To increase the car speed setpoint when the ASCS is activated, To decrease the car speed setpoint when the ASCS is activated}
A = {To push, To turn on}
O = {Brake pedal, “on” Button, “off” Button, “+” Button of activated ASCS, “−” Button of activated ASCS, Clutch pedal}
D = {Driver}
R1: To turn on the ASCS → (To turn on, “on” Button, Driver)
R2: To turn off the ASCS → (To turn on, “off” Button, Driver)
R3: To deactivate the ASCS → To brake
R4: To brake → (To push, Brake pedal, Driver)
R5: To deactivate the ASCS → To disengage
R6: To disengage → (To push, Clutch pedal, Driver)
R7: To increase the car speed setpoint when the ASCS is activated → (To push, “+” Button of activated ASCS, Driver)
R8: To decrease the car speed setpoint when the ASCS is activated → (To push, “−” Button of activated ASCS, Driver)
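Rules like those in Table 2 pair an intention (the predicate) with a conclusion that is either another intention or an (action, object, decision maker) triplet. As a minimal illustration of this formalism (the dictionary layout and the `conclusions` helper are assumptions for the sketch, not the paper's implementation), such a base can be encoded and queried as:

```python
# Hypothetical encoding of some Table 2 rules: each rule maps an intention
# (the predicate) to a conclusion that is either another intention or an
# (action, object, decision maker) triplet.
RULES = {
    "R3": ("To deactivate the ASCS", "To brake"),
    "R4": ("To brake", ("To push", "Brake pedal", "Driver")),
    "R5": ("To deactivate the ASCS", "To disengage"),
    "R6": ("To disengage", ("To push", "Clutch pedal", "Driver")),
}

def conclusions(intention):
    """List every conclusion reachable in one step from an intention."""
    return [concl for pred, concl in RULES.values() if pred == intention]

print(conclusions("To deactivate the ASCS"))  # → ['To brake', 'To disengage']
```

Note that a single intention (here, deactivating the ASCS) can legitimately lead to several conclusions; this multiplicity is what the discovery functions later exploit.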
Table 3
Intentions, actions, objects, decision makers and rules of automated car speed control by the ASCS.
I = {To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated, To accelerate, To decrease the car speed when it is over the ASCS setpoint and when the ASCS is activated, To decelerate}
A = {To increase, To reduce}
O = {Engine speed}
D = {ASCS}
R9: To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated → To accelerate
R10: To accelerate → (To increase, Engine speed, ASCS)
R11: To decrease the car speed when it is over the ASCS setpoint and when the ASCS is activated → To decelerate
R12: To decelerate → (To reduce, Engine speed, ASCS)
Table 4
Intentions, actions, objects, decision makers and rules for the manual control of aquaplaning.
I = {To control aquaplaning, Not to brake, Not to accelerate, To decrease the car speed when it is over the ASCS setpoint and when the ASCS is activated, To decelerate}
A = {Not to push}
O = {Brake pedal, Gas pedal}
D = {Driver}
R13: To control aquaplaning → Not to brake
R14: Not to brake → (Not to push, Brake pedal, Driver)
R15: To control aquaplaning → Not to accelerate
R16: Not to accelerate → (Not to push, Gas pedal, Driver)
Table 5
Intentions, actions, objects, decision makers and rules for manual car speed control.
I = {To increase the car speed, To decrease the car speed}
A = {To push, To release}
O = {Gas pedal}
D = {Driver}
R17: To increase the car speed → (To push, Gas pedal, Driver)
R18: To decrease the car speed → (To release, Gas pedal, Driver)
Table 6
Intentions, actions, objects, decision makers and rules for the manual control of fuel consumption.
I = {To decrease the car fuel consumption going uphill, To decrease the car fuel consumption going downhill, To take advantage of the car inertia going uphill, To take advantage of the car inertia going downhill, Not to accelerate, Not to brake}
A = {Not to push}
O = {Gas pedal, Brake pedal}
D = {Driver}
R19: To decrease the car fuel consumption going uphill → To take advantage of the car inertia going uphill
R20: To take advantage of the car inertia going uphill → Not to accelerate
R21: Not to accelerate → (Not to push, Gas pedal, Driver)
R22: To decrease the car fuel consumption going downhill → To take advantage of the car inertia going downhill
R23: To take advantage of the car inertia going downhill → Not to brake
R24: Not to brake → (Not to push, Brake pedal, Driver)
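Several of these rule bases chain intention-to-intention rules down to a terminal triplet (for example, R19 → R20 → a triplet on the gas pedal in Table 6). A minimal sketch of such deductive resolution follows; the list encoding and the `resolve` helper are illustrative assumptions, not the paper's published algorithm:

```python
# Illustrative deductive chaining over Table 6-style rules: follow
# intention-to-intention rules until an (action, object, decision maker)
# triplet is reached.
RULES = [
    ("To decrease the car fuel consumption going uphill",
     "To take advantage of the car inertia going uphill"),          # R19
    ("To take advantage of the car inertia going uphill",
     "Not to accelerate"),                                          # R20
    ("Not to accelerate", ("Not to push", "Gas pedal", "Driver")),  # R21
]

def resolve(intention):
    """Return all terminal triplets deducible from an intention."""
    results = []
    for predicate, conclusion in RULES:
        if predicate != intention:
            continue
        if isinstance(conclusion, tuple):      # terminal triplet reached
            results.append(conclusion)
        else:                                  # intermediate intention: chain
            results.extend(resolve(conclusion))
    return results

print(resolve("To decrease the car fuel consumption going uphill"))
# → [('Not to push', 'Gas pedal', 'Driver')]
```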
When the ASCS is activated, the system controls the car speed in relation to the initial car speed setpoint given by the driver. Table 3 gives the sets of intentions, actions, objects and decision makers required to establish the operating rules of the ASCS to automatically control the car speed.
The intentions, actions, objects, decision makers and rules in Table 4 concern the control of aquaplaning.
Table 5 contains the corresponding sets of intentions, actions, objects, decision makers and the resulting rules related to manual car speed control.
The last rule base relates to the rules and their parameters for optimizing fuel consumption (Table 6).
Twenty car drivers were invited to validate the content of the global knowledge base containing these five rule bases (Table 7). This validation is based on the control interface presented in Section 3.6, and required thirty minutes for each car driver. The drivers could agree or disagree with the content of the initial rule bases, or have no opinion. Certainty levels were required when the drivers agreed or disagreed.
Globally, most of the content of the rule bases was validated. All the drivers agreed with the rule base regarding the use of the ASCS because they use it regularly. For the rule base relating to the manual control of aquaplaning, some of them were not aware of such an event and had some doubts about the associated rules. The rule base relating to decreasing fuel consumption needs to be extended or detailed because some of the drivers were unaware of the strategies to be applied regarding the topology of the road in order to optimize fuel consumption.
4.2. Examples of affordance discovery
The control interface proposes several discoveries that had to be validated by the drivers. Set R+ contains several pairs of rules: {(R17, R9), (R17, R7), (R18, R11), (R18, R8)}. R_aff contains the following proposals related to the corresponding pairs of initial rules:
• Affordance 1: To increase the car speed → To accelerate
• Affordance 2: To increase the car speed when it is below the ASCS setpoint and when the ASCS is activated → (To push, Gas pedal, Driver)
• Affordance 3: To increase the car speed → (To push, “+” button of activated ASCS, Driver)
• Affordance 4: To increase the car speed setpoint when the ASCS is activated → To accelerate
• Affordance 5: To decrease the car speed → To decelerate
• Affordance 6: To decrease the car speed when it is over the ASCS setpoint and when the ASCS is activated → (To release, Gas pedal, Driver)
• Affordance 7: To decrease the car speed → (To push, “−” button of activated ASCS, Driver)
• Affordance 8: To decrease the car speed setpoint when the ASCS is activated → (To release, Gas pedal, Driver)
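One way to picture how such proposals arise is the following sketch, an illustrative reading of the pairing step rather than the paper's K_Affordance algorithm: two rules whose intentions are related (here, when one intention is a substring specialization of the other) exchange their conclusions, and the resulting candidate rules are submitted to the drivers for validation. The function name and matching criterion are assumptions.

```python
# Hypothetical pairing step: rules with related predicates swap conclusions,
# producing candidate affordances to be validated by the drivers.
R17 = ("To increase the car speed", ("To push", "Gas pedal", "Driver"))
R9 = ("To increase the car speed when it is under the ASCS setpoint "
      "and when the ASCS is activated", "To accelerate")

def propose_affordances(rule_a, rule_b):
    """If one predicate specializes the other, exchange the conclusions."""
    pred_a, concl_a = rule_a
    pred_b, concl_b = rule_b
    if pred_a in pred_b or pred_b in pred_a:
        return [(pred_a, concl_b), (pred_b, concl_a)]
    return []

candidates = propose_affordances(R17, R9)
# candidates[0] corresponds to Affordance 1 above; candidates[1] to Affordance 2.
```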
Table 8 presents the results of the validation of these affordances. Affordances 1 and 5 are true for all the drivers: the man-
Table 7
Validation of the content of the entire knowledge base.
                       Agree (High/Medium/Low)   Disagree (High/Medium/Low)   No opinion
Global knowledge base  15 (9/6/0)                2 (0/2/0)                    3
Table 8
Validation of the proposed affordances.
              Agree (High/Medium/Low)   Disagree (High/Medium/Low)   No opinion
Affordance 1  20 (15/4/1)               0 (0/0/0)                    0
Affordance 2  15 (9/6/0)                3 (2/1/0)                    2
Affordance 3  15 (14/1/0)               4 (2/2/0)                    1
Affordance 4  3 (2/1/0)                 15 (10/5/0)                  2
Affordance 5  20 (15/4/1)               0 (0/0/0)                    0
Affordance 6  4 (0/2/2)                 13 (7/6/0)                   3
Affordance 7  15 (13/2/0)               4 (1/2/1)                    1
Affordance 8  2 (0/2/0)                 16 (13/2/1)                  2
ual increase or decrease in car speed can be related to acceleration and deceleration, respectively. The points of view for the other affordances differ. Affordances 2, 3 and 7 are considered mainly as correct, whereas affordances 4, 6 and 8 are considered as wrong. Affordance 2 confirms that it is indeed possible to increase the car speed when the driver pushes the gas pedal even if the ASCS is activated. The application of affordances 3 and 7 transforms the functions of the ASCS interfaces: the “+” and “−” buttons can be used to manually control the increase and the decrease in car speed, respectively.
Affordance 6 is incorrect because the decrease in car speed cannot be due to the driver releasing the gas pedal when the ASCS is activated. Affordances 4 and 8 are incorrect because it is not possible to control the ASCS speed setpoint using the gas pedal. Several affordances can then be introduced into the knowledge base. Nevertheless, some of them present risks according to the remarks made by some drivers. For instance, the application of affordances 3 and 7 can be dangerous if there is any confusion between the decreasing or the increasing function of the car speed and the braking function in an emergency. Further risk analysis should thus be performed.
4.3. Examples of inconsistency discovery
The validation interface proposes several contradictions that had to be validated by the drivers. Set R_inc contains several pairs of rules whose intentions may conflict: {(R3, R13), (R9, R15), (R9, R20), (R3, R23)}. Four inconsistencies occur:
• Inconsistency 1 (with D(R3) = D(R13) = Driver):
  ◦ R3: To deactivate the ASCS → To brake
  ◦ R13: To control aquaplaning → Not to brake
• Inconsistency 2 (with D(R9) = ASCS and D(R15) = Driver):
  ◦ R9: To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated → To accelerate
  ◦ R15: To control aquaplaning → Not to accelerate
• Inconsistency 3 (with D(R9) = ASCS and D(R20) = Driver):
  ◦ R9: To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated → To accelerate
  ◦ R20: To take advantage of the car inertia going uphill → Not to accelerate
• Inconsistency 4 (with D(R3) = Driver and D(R23) = Driver):
  ◦ R3: To deactivate the ASCS → To brake
  ◦ R23: To take advantage of the car inertia going downhill → Not to brake
Inconsistencies 1 and 4 are contradictions, and inconsistencies 2 and 3 are interferences. They are replaced by applying the K_Selecting function in order to produce the following pairs of rules:
• Inconsistency 1:
  ◦ R3: To deactivate the ASCS → (To push, Brake pedal, Driver)
  ◦ R13: To control aquaplaning → (Not to push, Brake pedal, Driver)
• Inconsistency 2:
  ◦ R9: To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated → (To increase, Engine speed, ASCS)
  ◦ R15: To control aquaplaning → (Not to push, Gas pedal, Driver)
• Inconsistency 3:
  ◦ R9: To increase the car speed when it is under the ASCS setpoint and when the ASCS is activated → (To increase, Engine speed, ASCS)
  ◦ R20: To take advantage of the car inertia going uphill → (Not to push, Gas pedal, Driver)
• Inconsistency 4:
  ◦ R3: To deactivate the ASCS → (To push, Brake pedal, Driver)
  ◦ R23: To take advantage of the car inertia going downhill → (Not to push, Brake pedal, Driver)
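The interference/contradiction distinction applied above can be sketched as follows. The rule encoding, the negation test, and the function names are assumptions for illustration, not the algorithms of Fig. 7: two rules conflict when one conclusion negates the other, and the conflict is a contradiction when the same decision maker holds both rules, an interference when two decision makers are involved.

```python
# Hypothetical check for the inconsistency classes above: two rules conflict
# when one conclusion is the negation of the other ("To brake" vs
# "Not to brake"). Same decision maker -> contradiction; different
# decision makers -> interference.
def negates(a, b):
    return (b == "Not " + a[0].lower() + a[1:]
            or a == "Not " + b[0].lower() + b[1:])

def classify(rule_a, rule_b):
    """rule = (intention, conclusion, decision maker)."""
    _, concl_a, dm_a = rule_a
    _, concl_b, dm_b = rule_b
    if not negates(concl_a, concl_b):
        return None                      # no conflict
    return "contradiction" if dm_a == dm_b else "interference"

R3 = ("To deactivate the ASCS", "To brake", "Driver")
R13 = ("To control aquaplaning", "Not to brake", "Driver")
R9 = ("To increase the car speed under the ASCS setpoint", "To accelerate", "ASCS")
R15 = ("To control aquaplaning", "Not to accelerate", "Driver")

print(classify(R3, R13))  # → contradiction
print(classify(R9, R15))  # → interference
```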
Table 9 presents the results of the validation process by the drivers. Globally, all the inconsistencies were considered as correct. Inconsistencies 1 and 2 may affect driver safety with a complete loss of car control. A possible new rule to be integrated would be not to use the ASCS if it is raining, but some of the drivers realize that there can be water on the road even if it is not raining. Water on the road may decrease the current car speed, which may then go below the setpoint of the ASCS. As a result, the ASCS will then increase the car speed to reach the required setpoint. This decision to increase the car speed or to deactivate the ASCS differs from the behavior required when aquaplaning occurs: the drivers are invited not to accelerate or activate the braking system because they may lose control of their vehicle. For instance, a new rule to be integrated may concern the deactivation of the ASCS in the event of aquaplaning. This can be done by pushing the “off” button or by pushing the clutch pedal instead of the brake pedal.
Inconsistencies 3 and 4 relate to a possible conflict between fuel consumption when going uphill or downhill and the behavior or the use of the ASCS. Some of the drivers were not aware of such
Table 9
Validation of the proposed inconsistencies.
                 Agree (High/Medium/Low)   Disagree (High/Medium/Low)   No opinion
Inconsistency 1  14 (8/4/0)                3 (2/0/1)                    3
Inconsistency 2  15 (10/5/0)               4 (2/0/2)                    1
Inconsistency 3  12 (4/6/3)                3 (1/2/0)                    5
Inconsistency 4  12 (4/6/3)                3 (1/1/1)                    5
possible relations or considered that the rules proposed were insufficient and unclear. Nevertheless, most of them recognized that in specific conditions, the behavior of the ASCS could increase fuel consumption. Instead of taking advantage of the inertia of the car when going uphill or downhill to limit fuel consumption without accelerating or braking, if the current speed goes above or below the speed setpoint of the activated ASCS, the ASCS will decide to decelerate or accelerate, respectively, in order to reach the setpoint. This is then done independently of fuel consumption optimization.
5. Conclusion
This paper deals with dissonance in terms of conflicts of use,
intentions and actions by analyzing cognitive behavior of humans
and technical components of a human-machine system. An original architecture and formalism are proposed to assist the control of knowledge acquisition and dissonance discovery. The knowledge
base is composed of rules from several bases. A rule contains pred-
icates and conclusions. A predicate is an intention and the conclu-
sion is an intention or a triplet: an action to be achieved, an ob-
ject to be used to achieve the action and the decision maker who
achieves the action using the object. Specific dissonance discover-
ies are addressed: affordances and inconsistencies. Affordances re-
late to the discovery of new relations between intentions or be-
tween actions and objects. Inconsistencies are conflicts between
intentions or actions. Specific functions based on deductive, induc-
tive and abductive reasoning were proposed in order to discover
and display possible dissonance on a validation interface. A fea-
sibility study is presented with a practical example based on five
rule bases: 1) the use of an ASCS, 2) the automated control of the
car speed by the ASCS, 3) the manual control of car aquaplaning,
4) the manual control of the car speed, and 5) the control of fuel
consumption. A car driver specified the content of the rule bases
using a dedicated knowledge acquisition support interface. Other
car drivers validated these five rule bases, and assessed and dis-
cussed the dissonance discoveries proposed by the rule-based sup-
port system.
Globally, the feasibility study applied to car driving demonstrated the value of such a rule-based support system for dissonance discovery and control. Indeed, the proposed rule-based system facilitates
the formalization and analysis of cognitive behavior in terms of in-
tentions, actions, objects and decision makers. It supports the im-
plementation of explicit and implicit cognitive behavior of human
operators or groups of human operators in knowledge bases in or-
der to analyze and control potentially dangerous dissonance. It also
supports the discovery of the new, unforeseen use of an embed-
ded system and contradictions or interferences between users and
embedded systems. This case study should motivate the development of wider research in other domains of application such as nuclear power plants, railways or aeronautics, as well
as the integration of users in the design, analysis and evaluation of
future embedded systems.
Other perspectives are planned in order to improve the approach. First, they concern the integration of a belief function in the predicate of a rule, in its conclusion, or in the relation between the predicate and the conclusion. This will aim to take levels of certainty into consideration when defining the rules by assessing the dissonance discovery process outputs. The adaptation of tools such as Bayesian or evidential networks will also be an interesting prospective subject in order to extend the proposed approach with indicators of inconsistency or affordance and to assess alternative or new action plans (Aguirre, Sallak, Vanderhaegen, & Berdjag, 2013; Sedki et al., 2013). Another improvement concerns risk assessment for the dissonance discovery process. The evaluation of dissonance discoveries such as affordances or inconsistencies may be interpreted using the so-called Benefit-Cost-Deficit (BCD) model defined in (Vanderhaegen, Chalmé, Anceaux, & Millot, 2006; Vanderhaegen, Zieba, Enjalbert, & Polet, 2011), i.e., in terms of benefits and costs in the case of dissonance success, and deficits or dangers in the case of dissonance failure. The last perspective concerns the adaptation of the proposed rule-based support system for exchanging and confronting points of view between users on the rules, intentions, actions, objects and decision makers of the knowledge base. This could be achieved by involving groups of users who could share rules and recover possible erroneous ones through cooperation.
Acknowledgments
The International Research Network on Human-Machine Systems in Transportation and Industry (GDR I HAMASYTI) and the ANR (Agency for National Research) with the UTOP project (Open University of Technology) supported the present research; the author gratefully acknowledges their support.
References
Aguirre, F., Sallak, M., Vanderhaegen, F., & Berdjag, D. (2013). An evidential network approach to support uncertain multiviewpoint abductive reasoning. Information Sciences, 253, 110–125.
Batarekh, A., Preece, A. D., Bennett, A., & Grogono, P. (1991). Specifying an expert system. Expert Systems with Applications, 2, 285–303.
Bench-Capon, T. J. M., & Jones, D. M. (1999). PRONTO — Ontology-based evaluation of knowledge based systems. In A. Vermesan, & F. Coenen (Eds.), Validation and verification of knowledge based systems, theory, tools and practice (pp. 93–112). London: Springer.
Ben-David, A., & Jagerman, D. L. (1997). Evaluation of the number of consistent multiattribute classification rules. Engineering Applications of Artificial Intelligence, 10(2), 205–211.
Brachman, R. J., & Levesque, H. J. (2004). Knowledge representation and reasoning. San Francisco, CA: Morgan Kaufmann Publishers.
Chen, C.-H., Khoo, L. P., Chong, Y. T., & Yin, X. F. (2014). Knowledge discovery using genetic algorithm for maritime situational awareness. Expert Systems with Applications, 41, 2742–2753.
Coenen, F., Eaglestone, B., & Ridley, M. (1999). Validation, verification and integrity in knowledge and data base systems: Future directions. In A. Vermesan, & F. Coenen (Eds.), Validation and verification of knowledge based systems, theory, tools and practice (pp. 297–311). London: Springer.
Colak, C., Karaman, E., & Turtayb, M. G. (2015). Application of knowledge discovery process on the prediction of stroke. Computer Methods and Programs in Biomedicine, 19, 181–185.
Dash, C. S. K., Dash, A. P., Dehuri, S., Cho, S.-B., & Wang, G.-N. (2013). DE + RBFNs based classification: A special attention to removal of inconsistency and irrelevant features. Engineering Applications of Artificial Intelligence, 26(10), 2315–2326.
Dehais, F., Causse, M., Vachon, F., & Tremblay, S. (2012). Cognitive conflict in human-automation interactions: A psychophysiological study. Applied Ergonomics, 43(3), 588–595.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale: Lawrence Erlbaum Associates (Originally published in 1979).
Grant, J., & Hunter, A. (2008). Analysing inconsistent first-order knowledge bases. Artificial Intelligence, 172, 1064–1093.
Hendriks, P. H. (1999). The organisational impact of knowledge-based systems: A knowledge perspective. Knowledge-Based Systems, 12, 159–169.
Hunter, A. (2002). Logical fusion rules for merging structured news reports. Data & Knowledge Engineering, 42, 23–56.
Hunter, A., & Summerton, R. (2006). A knowledge-based approach to merging information. Knowledge-Based Systems, 19, 647–674.
Inagaki, T. (2008). Smart collaboration between humans and machines based on mutual understanding. Annual Reviews in Control, 32, 253–261.
Jouglet, D., Piechowiak, S., & Vanderhaegen, F. (2003). A shared workspace to support man-machine reasoning: Application to cooperative distant diagnosis. Cognition, Technology & Work, 5, 127–139.
Kervern, G.-Y. (1995). Eléments fondamentaux des cindyniques (Fundamental elements of cindynics). Paris: Economica Editions.
Ma, J., Zhang, G., & Lu, J. (2010). A state-based knowledge representation approach for information logical inconsistency detection in warning systems. Knowledge-Based Systems, 23, 125–131.
McBriar, I., Smith, C., Bain, G., Unsworth, P., Magraw, S. M., & Gordon, J. L. (2003). Risk, gap and strength: Key concepts in knowledge management. Knowledge-Based Systems, 16, 29–36.
McCay-Peet, L., Toms, E. G., & Kelloway, E. K. (2015). Examination of relationships among serendipity, the environment, and individual differences. Information Processing & Management, 51(4), 391–412.
Nguyen, N. T. (2008). Advanced methods for inconsistent knowledge management. London: Springer.
O'Keefe, R. M., & Preece, A. D. (1996). The development, validation and implementation of knowledge-based systems. European Journal of Operational Research, 92, 458–473.
Ouedraogo, K.-A., Enjalbert, S., & Vanderhaegen, F. (2013). How to learn from the resilience of Human–Machine Systems? Engineering Applications of Artificial Intelligence, 26(1), 24–34.
Polet, P., Vanderhaegen, F., & Zieba, S. (2012). Iterative learning control based tools to learn from human error. Engineering Applications of Artificial Intelligence, 25(7), 1515–1522.
Rushby, J. (2002). Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety, 75(2), 167–177.
Rubiolo, M., Caliusco, M. L., Stegmayer, G., Coronel, M., & Fabrizi, M. G. (2012). Knowledge discovery through ontology matching: An approach based on an artificial neural network model. Information Sciences, 194, 107–119.
Ruiz, P. P., Foguem, B. K., & Grabot, B. (2014). Generating knowledge in maintenance from experience feedback. Knowledge-Based Systems, 68, 4–20.
Sedki, K., Polet, P., & Vanderhaegen, F. (2013). Using the BCD model for risk analysis: An influence diagram based approach. Engineering Applications of Artificial Intelligence, 26(9), 2172–2183.
Sharma, R. S., & Bhattacharya, S. (2013). Knowledge dilemmas within organizations: Resolutions from game theory. Knowledge-Based Systems, 45, 100–113.
Telci, E. E., Maden, C., & Kantur, D. (2011). The theory of cognitive dissonance: A marketing and management perspective. Procedia Social and Behavioral Sciences, 24, 378–386.
Valverde-Albacete, F. J., González-Calabozo, J. M., Peñas, A., & Peláez-Moreno, C. (2016). Supporting scientific knowledge discovery with extended, generalized formal concept analysis. Expert Systems With Applications, 44, 198–216.
Vanderhaegen, F. (1997). Multilevel organization design: The case of the air traffic control. Control Engineering Practice, 5(3), 391–399.
Vanderhaegen, F. (1999b). Multilevel allocation modes - Allocator control policies to share tasks between human and computer. System Analysis Modelling Simulation, 35, 191–213.
Vanderhaegen, F. (2010). Human-error-based design of barriers and analysis of their uses. Cognition Technology & Work, 12, 133–142.
Vanderhaegen, F. (2012). Cooperation and learning to increase the autonomy of ADAS. Cognition Technology & Work, 14(1), 61–69.
Vanderhaegen, F. (2014). Dissonance engineering: A new challenge to analyse risky knowledge when using a system. International Journal of Computers Communications & Control, 9(6), 750–759.
Vanderhaegen, F., & Caulier, P. (2011). A multi-viewpoint system to support abductive reasoning. Information Sciences, 181(24), 5349–5363.
Vanderhaegen, F., Chalmé, S., Anceaux, F., & Millot, P. (2006). Principles of cooperation and competition - Application to car driver behavior analysis. Cognition Technology & Work, 8(3), 183–192.
Vanderhaegen, F., Jouglet, D., & Piechowiak, S. (2004). Human-reliability analysis of cooperative redundancy to support diagnosis. IEEE Transactions on Reliability, 53(4), 458–464.
Vanderhaegen, F., & Zieba, S. (2014). Reinforced learning systems based on merged and cumulative knowledge to predict human actions. Information Sciences, 276(20), 146–159.
Vanderhaegen, F., Zieba, S., Enjalbert, S., & Polet, P. (2011). A Benefit/Cost/Deficit (BCD) model for learning from human errors. Reliability Engineering & System Safety, 96(7), 757–766.
Wachla, D., & Moczulski, W. A. (2007). Identification of dynamic diagnostic models with the use of methodology of knowledge discovery in databases. Engineering Applications of Artificial Intelligence, 20(5), 699–707.
Wanderley, G. M. P., Tacla, C. A., Barthès, J.-P. A., & Paraiso, E. C. (2015). Knowledge discovery in task-oriented dialogue. Expert Systems with Applications, 42, 6807–6818.
Wu, Z., & Liu, Y. (2014). Knowledge augmented policy conflict analysis for services collaboration. Knowledge-Based Systems, 62, 11–27.
Xue, Z., Zeng, X., Koehl, L., & Chen, Y. (2014). Measuring consistency of two datasets using fuzzy techniques and the concept of indiscernibility: Application to human perceptions on fabrics. Engineering Applications of Artificial Intelligence, 36, 54–63.
Yan, W., Zanni-Merk, C., Cavallucci, D., & Collet, P. (2014). An ontology-based approach for inventive problem solving. Engineering Applications of Artificial Intelligence, 27, 175–190.
Zhang, B., Lin, C.-W., Gan, W., & Hong, T.-P. (2014). Maintaining the discovered sequential patterns for sequence insertion in dynamic databases. Engineering Applications of Artificial Intelligence, 35, 131–142.
Zieba, S., Polet, P., Vanderhaegen, F., & Debernard, S. (2010). Principles of adjustable autonomy: A framework for resilient human machine cooperation. Cognition, Technology & Work, 12(3), 193–203.
Zieba, S., Polet, P., & Vanderhaegen, F. (2011). Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system - Application to a ground robotic system. Information Sciences, 181, 379–397.
man-machine cooperation for the resilience of a human-machine system – Ap-plication to a ground robotic system. Information Sciences, 181 , 379–397 .