Finding Solutions in Goal Models: An Interactive Backward Reasoning Approach
Jennifer Horkoff¹, Eric Yu²
¹Department of Computer Science, ²Faculty of Information
University of Toronto
[email protected], [email protected]
ER'10, November 1, 2010
Outline
- Early RE
- A Framework for Iterative, Interactive Analysis of Agent-Goal Models in Early RE
- Background: Goal Models
- Background: Interactive Forward Satisfaction Analysis
- Need for Backward Analysis
- Interactive, Iterative Backward Analysis
- Example
- Implementation
- Qualitative Studies
- Conclusions and Future Work
Finding Solutions in Goal Models - Horkoff, Yu 2
Early Requirements Engineering
- Domain understanding in the early stages of a project is crucial for the success of a system
- Characterized by incomplete and imprecise information
- Many non-functional, hard-to-quantify success criteria: customer happiness, privacy, job satisfaction, security, …
- (Ideally) a high degree of stakeholder participation

Goal models can be a useful conceptual tool for Early RE:
- Represent imprecise concepts (softgoals) and relationships (contribution links)
- Relatively simple syntax
- Amenable to high-level analysis and decision making
Framework for Iterative, Interactive Analysis of Agent-Goal Models in Early RE
We develop a framework to support iteration and stakeholder participation in early goal model analysis:
- Survey and analysis of existing analysis procedures (SAC'11)
- Interactive forward evaluation (CAiSE'09 Forum, PoEM'09, IJISMD)
- Interactive backward evaluation (this work!)
- Visualizations for backward evaluation (REV'10)
- User testing (PoEM'10)
- Suggested methodology (CAiSE'09, PoEM'09, IJISMD, …)
- More…
Background: Goal Models
We use i* as an example goal modeling framework.
Interactive Forward Satisfaction Analysis
- A question/scenario/alternative is placed on the model and its effects are propagated "forward" through model links
- Dependency links propagate values as-is
- AND/OR (Decomposition/Means-Ends) links take the min/max
- Contribution links are propagated as shown on the slide
- Interactive: user input (human judgment) is used to decide on partial or conflicting evidence
- "What is the resulting value?"
Example: Interactive Forward Satisfaction Analysis
[Figure annotations: Human Judgment]
What if… the application asked for a secret question and did not restrict the structure of the password?
Need for Backward Analysis
We can ask "what if…?" questions, but what about:
- "Is this possible?"
- "How?"
- "If not, why?"
Is it possible for Attract Users to be Satisfied? Partially Satisfied? If so, how? If not, why not?
Background: SAT and UNSAT Core
- SAT solver: an algorithm that solves the Boolean Satisfiability Problem
- Accepts a Boolean formula in conjunctive normal form (CNF), composed of a conjunction of clauses
- Searches for an assignment of the formula's variables that makes the formula true
- Although the SAT problem is NP-complete, algorithms and tools that can solve many SAT instances in a reasonable amount of time have been developed
- When a SAT problem does not have a solution, we can find the UNSAT core
- UNSAT core: an unsatisfiable subset of the clauses in a CNF formula
- We use the zChaff and zMinimal implementations in our tool
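The SAT and UNSAT-core ideas above can be illustrated in a few lines. This is a minimal, brute-force sketch, not how real solvers like zChaff work (they use efficient clause-learning search), and the DIMACS-style clause lists are our own encoding choice, not the tool's representation:

```python
from itertools import product

def solve_sat(clauses, n_vars):
    """Brute-force SAT check. Clauses are lists of non-zero ints in
    DIMACS style: literal k means variable k is true, -k means false."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign  # satisfying assignment found
    return None  # formula is unsatisfiable

def unsat_core(clauses, n_vars):
    """Naive deletion-based core extraction: drop each clause whose
    removal keeps the rest unsatisfiable; what remains is a core."""
    core = list(clauses)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if solve_sat(rest, n_vars) is None:
            core = rest
    return core

# (x1 or x2) and (not x1) and (not x2) is unsatisfiable; removing any
# clause makes it satisfiable, so the core is all three clauses.
cnf = [[1, 2], [-1], [-2]]
print(solve_sat(cnf, 2))   # None
print(unsat_core(cnf, 2))  # [[1, 2], [-1], [-2]]
```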
Interactive, Iterative Backward Analysis
Procedure overview:
- The i* model and target values are encoded into CNF
- Iteratively call a SAT solver on the CNF representation:
  - After each iteration, prompt for human judgment for intentions with conflicting or multiple sources of partial evidence
  - Re-encode the CNF formula
- If the SAT solver finds an answer and no human judgment is needed:
  - Success: answer provided
- If the SAT solver cannot find an answer:
  - Display the UNSAT core
  - Backtrack over the last round of human judgment
  - If there is no more human judgment to backtrack over:
    - Failure: no answer
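The overview above can be sketched as a loop. All four parameters are hypothetical callables standing in for the procedure's components (not OpenOME's actual API): `encode` builds the CNF from the model plus accumulated judgments, `solve` returns an assignment or an UNSAT core, `needs_judgment` finds intentions requiring human input, and `ask_user` collects one round of judgment:

```python
def backward_analysis(encode, solve, needs_judgment, ask_user):
    """Sketch of the iterative backward analysis loop described above."""
    judgments = []  # stack of human judgments, kept for backtracking
    while True:
        cnf = encode(judgments)
        assignment, core = solve(cnf)
        if assignment is not None:
            pending = needs_judgment(assignment)
            if not pending:
                return ("success", assignment)  # answer provided
            judgments.append(ask_user(pending))  # re-encode on next pass
        else:
            # display the UNSAT core, then backtrack over judgments
            print("Targets unsatisfiable; UNSAT core:", core)
            if not judgments:
                return ("failure", core)  # nothing left to backtrack
            judgments.pop()
```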
Formally Expressing i*
- i* model: a tuple M = <I, R, A>
  - I is a set of intentions
  - R is a set of relations between intentions
  - A is a set of actors
- Intention type: each intention maps to one type in IntentionType = {Softgoal, Goal, Task, Resource}
- Relation type: each relation maps to one type in RelationType = {Rme, Rdec, Rdep, Rc}; Rc can be broken down into {Rm, Rhlp, Ru, Rhrt, Rb}
- Relation behavior: Rdep and Rc are binary (one intention relates to one intention); Rme and Rdec are (n+1)-ary (one to many intentions relate to one intention)
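The tuple M = <I, R, A> could be captured with simple data structures. A hedged sketch; the class and field names (Intention, Relation, Model) are ours, not identifiers from the paper or OpenOME:

```python
from dataclasses import dataclass, field
from typing import Optional

INTENTION_TYPES = {"Softgoal", "Goal", "Task", "Resource"}
RELATION_TYPES = {"Rme", "Rdec", "Rdep", "Rm", "Rhlp", "Ru", "Rhrt", "Rb"}

@dataclass(frozen=True)
class Intention:
    name: str
    itype: str                     # one of INTENTION_TYPES
    actor: Optional[str] = None    # owning actor, if any

@dataclass(frozen=True)
class Relation:
    rtype: str        # one of RELATION_TYPES
    sources: tuple    # many sources for Rme/Rdec, a single source otherwise
    target: Intention

@dataclass
class Model:  # M = <I, R, A>
    intentions: list = field(default_factory=list)
    relations: list = field(default_factory=list)
    actors: set = field(default_factory=set)
```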
Analysis Predicates
- Formal expression of analysis predicates, similar to the predicates used in the Tropos framework (Sebastiani et al., CAiSE'04)
- For i ∈ I, one of the predicates listed below may hold
- Example: PS(Attract Users), D(Ask for Secret Question)
- Conflict label: C(i)
- Conflict situation: for i ∈ I, a predicate from more than one of the following sets holds: {S(i), PS(i)}, {U(i)}, {C(i)}, {PD(i), D(i)}
S(i): Satisfied
PS(i): Partially Satisfied
C(i): Conflict
U(i): Unknown
PD(i): Partially Denied
D(i): Denied
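The conflict-situation definition above translates directly into a check. A minimal sketch; the labels are the paper's, the function name is ours:

```python
# The four evidence sets from the conflict-situation definition.
EVIDENCE_SETS = [{"S", "PS"}, {"U"}, {"C"}, {"PD", "D"}]

def conflict_situation(labels):
    """True if predicates from more than one evidence set hold for
    an intention i, given the set of label names holding for it."""
    hits = sum(1 for s in EVIDENCE_SETS if labels & s)
    return hits > 1

print(conflict_situation({"PS", "D"}))   # True: positive and negative evidence
print(conflict_situation({"S", "PS"}))   # False: both in the same set
```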
Propagation in CNF
We use the SAT formula from the Tropos framework:
Φ = ΦTarget ∧ (the target values representing the question)
    ΦForward ∧ (axioms describing forward propagation)
    ΦBackward ∧ (axioms describing backward propagation)
    ΦInvariants ∧ (axioms describing invariant properties)
    ΦConstraints (constraints on propagation)

ΦInvariants: S(i) → PS(i), D(i) → PD(i)
ΦConstraints:
- ∀ i ∈ I s.t. i is a leaf: PS(i) ⋁ C(i) ⋁ U(i) ⋁ PD(i)
- ∀ i ∈ I s.t. i is a non-softgoal leaf: i must not have conflicting predicates
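As one concrete example of how such axioms become clauses: mapping each predicate P(i) to a propositional variable, the invariants S(i) → PS(i) and D(i) → PD(i) each become a single CNF clause (¬A ∨ B). The variable numbering below is our own illustration, not the paper's encoding:

```python
PREDICATES = ["S", "PS", "C", "U", "PD", "D"]

def var(i_index, pred):
    """DIMACS variable number for predicate `pred` of intention i_index."""
    return i_index * len(PREDICATES) + PREDICATES.index(pred) + 1

def invariant_clauses(i_index):
    """Clauses for S(i) -> PS(i) and D(i) -> PD(i); A -> B is (not A or B)."""
    return [[-var(i_index, "S"), var(i_index, "PS")],
            [-var(i_index, "D"), var(i_index, "PD")]]

print(invariant_clauses(0))  # [[-1, 2], [-6, 5]]
```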
Forward Propagation Axioms
We develop axioms to express forward and backward propagation.
Forward propagation, for i ∈ I, R: i1 × … × in → i:
(some combination of v(i1) … v(in)) → v(i)
Samples for Rhlp, with R: i1 → i (complete list in the paper):
S(i1) → PS(i), PS(i1) → PS(i), U(i1) → U(i), C(i1) → C(i), PD(i1) → PD(i), D(i1) → PD(i)
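The sample Rhlp axioms amount to a small lookup: full evidence is weakened to partial across a help link, everything else passes through. A sketch under that reading (the table constant and function name are ours):

```python
# Forward propagation across a "help" (Rhlp) contribution link,
# following the sample axioms above: S weakens to PS, D weakens to PD.
RHLP_FORWARD = {"S": "PS", "PS": "PS", "U": "U", "C": "C", "PD": "PD", "D": "PD"}

def propagate_hlp(source_label):
    """Label received by the target, given the source's label."""
    return RHLP_FORWARD[source_label]

print(propagate_hlp("S"))  # PS: help alone cannot fully satisfy the target
print(propagate_hlp("D"))  # PD
```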
Forward Propagation Axioms (cont.)
Forward propagation, for i ∈ I, R: i1 × … × in → i:
(some combination of v(i1) … v(in)) → v(i)
Rdec: [axiom table shown on slide]
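The Rdec table itself was not transcribed, but its core behavior in standard i* forward evaluation is min-style: an AND-decomposed target receives the weakest of its children's values. A sketch under that assumption; the label ordering below is our simplification (the paper's full table treats U and C cases more carefully):

```python
# Assumed weakest-to-strongest ordering for AND-decomposition (Rdec).
ORDER = ["D", "PD", "U", "C", "PS", "S"]

def propagate_dec(child_labels):
    """Target label for an AND-decomposition: the weakest child wins."""
    return min(child_labels, key=ORDER.index)

print(propagate_dec(["S", "PS", "S"]))  # PS: weakest child dominates
print(propagate_dec(["S", "PD"]))       # PD
```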
Backward Propagation Axioms
Decomposition, Means-Ends and Dependency backward axioms are the inverse of the forward axioms:
- Forward propagation, for i ∈ I, R: i1 × … × in → i: (some combination of v(i1) … v(in)) → v(i)
- Backward propagation, for i ∈ I, R: i1 × … × in → i: v(i) → (some combination of v(i1) … v(in))
Contribution links are different: information is lost during value combination in the forward direction, so in the backward direction we can make only limited assumptions.
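As one concrete instance of "backward is the inverse of forward" (our own clause encoding, simplified from the paper's axioms): for an AND-decomposition, the forward axiom (S(i1) ∧ … ∧ S(in)) → S(i) inverts to S(i) → (S(i1) ∧ … ∧ S(in)), which in CNF is one clause (¬S(i) ∨ S(ik)) per child ik:

```python
def backward_dec_clauses(target_var, child_vars):
    """CNF clauses for S(i) -> (S(i1) and ... and S(in)) on an
    AND-decomposition: one (not target or child) clause per child."""
    return [[-target_var, c] for c in child_vars]

print(backward_dec_clauses(10, [1, 2, 3]))  # [[-10, 1], [-10, 2], [-10, 3]]
```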
Human Judgment
An intention i ∈ I needs human judgment if i is the recipient of more than one contribution link, AND either:
- there is a conflict situation, OR
- PS(i) or PD(i) holds and i has not received a human judgment in the previous iteration (this allows promotion of partial evidence)

When a judgment is provided, the CNF encoding is adjusted as follows:
- Forward and backward axioms propagating to or from i are removed:
  (any combination of v(i1) … v(in)) → v(i)
  v(i) → (any combination of v(i1) … v(in))
- New axioms representing the judgment are added:
  Forward: (v(i1) ⋀ … ⋀ v(in)) → judgment(i)
  Backward: judgment(i) → (v(i1) ⋀ … ⋀ v(in))
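The two replacement axioms translate to CNF mechanically: (A ∧ B) → J is the single clause (¬A ∨ ¬B ∨ J), and J → (A ∧ B) is one clause (¬J ∨ A) per conjunct. A sketch with our own variable-based encoding (not the tool's internals):

```python
def judgment_axioms(source_vars, judgment_var):
    """Clauses for (v(i1) and ... and v(in)) <-> judgment(i).
    Forward:  one clause (not v(i1) or ... or not v(in) or judgment).
    Backward: one clause (not judgment or v(ik)) per source ik."""
    forward = [[-s for s in source_vars] + [judgment_var]]
    backward = [[-judgment_var, s] for s in source_vars]
    return forward + backward

print(judgment_axioms([1, 2], 9))  # [[-1, -2, 9], [-9, 1], [-9, 2]]
```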
Example: Interactive Backward Satisfaction Analysis
[Figure annotations: Human Judgment, Conflict, Backtrack]
Is it possible for Attract Users to be Partially Satisfied? If so, how? If not, why not?
Target(s) unsatisfiable. The following intentions are involved in the conflict: Security: PS, Attract Users: PS, Usability: PS
The following intentions are the sources of the conflict: Restrict Structure of Password: PS, D, not PD, PD
Implementation
- Implemented in OpenOME
- Worst-case run time: O(6^q × (l × n² + n × runtime(SAT)))
  - there are 6 evaluation labels
  - n = number of intentions
  - q = maximum number of sources for intentions
  - l = number of links in the model
- The procedure has been applied to several medium to large sized models
- If the user does not repeat judgments, the procedure will terminate
Qualitative Studies of Interactive Goal Model Analysis
- Tested the procedure with a group implementing the inflo "back-of-the-envelope" calculation modeling tool
- Ten individual case studies (PoEM'10)
- Qualitative analysis of results (no statistical significance)
- Observations showed benefits and revealed usability issues:
  - Individuals and groups were able to understand and apply backward analysis
  - In some cases interesting discoveries were made about the model and domain (model incompleteness, meaning of intentions)
  - Analysis may be less useful on incomplete or smaller models, or models without tradeoffs
  - i* and evaluation training, or the presence of an experienced facilitator, is needed to get the full intended benefits
Conclusions and Future Work
Expanded the analysis abilities of i* models as part of an interactive, iterative framework aimed at Early RE.
Future work includes:
- Procedure optimizations: optionally reusing human judgment, deriving judgments from existing judgments, improving run time
- Further visualizations: highlighting intentions involved in human judgment, link coloring for conflicts
- Further applications: a more realistic action research setting
Thank you [email protected] www.cs.utoronto.ca/~jenhork
[email protected] www.cs.utoronto.ca/~eric
OpenOME: https://se.cs.toronto.edu/trac/ome
Related Work
Giorgini et al. have introduced a formal framework for (forward and) backward reasoning with goal models (CAiSE'04).
We use the CNF formula and some of the axioms from this work; however, we make the following expansions and modifications:
- Incorporating interaction through human judgment, making the procedure more amenable to Early RE
- Accounting for additional agent-goal syntax (dependencies, unknown) and analysis values (conflict, unknown)
- Producing results with only one value per intention
- Providing information on model conflicts when a solution cannot be found