Teaching AI Undergraduate 10. Summary
TRANSCRIPT
-
Summary
Introduction to Artificial Intelligence
-
Understand and BUILD intelligent entities
Two Views: Weak AI (Turing Test); Strong AI (Chinese Room Objection)
Six Approaches: Symbolic AI; Connectionism; Machine Learning; Nouvelle AI; Simulated Evolution; Swarm Intelligence
Intelligent Systems: Expert Systems; LISP and PROLOG; Optical, DNA, and Quantum Computing
-
Symbolic AI
Chapter 2-3
-
Search and Problem-Solving
State Space
- five elements (S, F, C, I, G)
- the solution corresponds to a path from one of the initial states to one of the goal states
And/Or Graph
- embodies problem reduction
- the goal is to determine whether the root node is solvable
- the solution is a sub-graph leading to the solvability of the
root node
-
Effectiveness and efficiency of search algorithms are in tension with one another; they are assessed in terms of completeness, optimality, and complexity. The corresponding design strategies include: blind vs. heuristic, local vs. global, and feasible vs. optimal.
The graph is generated gradually by expanding nodes during the search process, until a goal state is found (state space) or the initial node is labeled solvable or unsolvable (and/or graph).
-
The key difference between different search strategies is the order of node expansion.
Blind search does not consider the particularities of the problem; it includes Breadth-first search, Depth-first search, Depth-limited search and Iterative deepening search.
Heuristic search employs the heuristic information provided by the problem to evaluate the costs of nodes. The node with the lowest estimated cost is selected for expansion.
The A* algorithm is a well-known heuristic search algorithm for state spaces, which guarantees to find the optimal solution by placing a constraint (admissibility) on the heuristic function.
-
Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: hSLD(n) (straight-line distance) never overestimates the actual road distance.
Theorem: If h(n) is admissible, A* is optimal for tree search.
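To make the A* procedure concrete, here is a minimal Python sketch (not from the course materials); the toy graph, edge costs, and heuristic values are illustrative assumptions.

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the node with lowest f(n) = g(n) + h(n); optimal when h is admissible."""
    frontier = [(h(start), 0, start, [start])]          # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):      # found a cheaper path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative use on a made-up graph with an admissible heuristic:
edges = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h_val = {"S": 3, "A": 4, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda s: edges[s], lambda s: h_val[s])
print(path, cost)   # ['S', 'B', 'G'] 5
```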
-
Exercises 2.3 and 2.13 (A*): a state-space puzzle over four objects a, b, c, d; the initial state is (1,1,1,1) and the goal state is (0,0,0,0). One solution path:
(1,1,1,1) → (0,1,0,1) → (1,1,0,1) → (0,0,0,1) → (1,0,1,1) → (0,0,1,0) → (1,0,1,0) → (0,0,0,0)
-
[Figure: the first expansion steps (nodes 0-5) of the A* search tree for this exercise, using the heuristic h(x); the root (1,1,1,1) has h = 3, f = 3, and each generated state is annotated with its h and f values.]
-
[Figure: continuation of the A* search tree (nodes 6-10); the goal state (0,0,0,0) is reached with h = 0, f = 7.]
-
Game tree: a data structure for representing the game process
= an And/Or graph in structure, when considering the best achievable payoff against best play
Minimax algorithm
- Cut off the game tree for real decision-making
- Evaluate the game state
- Choose the move to the position with the highest minimax value
Alpha-beta (α-β) pruning
- Improves search efficiency through pruning some unnecessary branches in the process of depth-limited search
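A compact sketch of depth-limited minimax with alpha-beta pruning (illustrative only; the nested-list game tree and evaluation function are assumptions, not the course's code):

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Depth-limited minimax with alpha-beta pruning."""
    kids = children(node)
    if depth == 0 or not kids:            # cut off: evaluate the game state
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cut: MIN will never allow this branch
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, evaluate, children))
        beta = min(beta, value)
        if beta <= alpha:                 # alpha cut
            break
    return value

# Toy tree: leaves are static evaluations, inner nodes are lists of children.
tree = [[3, 5], [2, 9], [0, 1]]
best = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                 evaluate=lambda n: n,
                 children=lambda n: n if isinstance(n, list) else [])
print(best)   # 3: MAX picks the branch whose MIN value is highest
```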
-
Exercise 2.21
-
Knowledge and Reasoning
Reasoning is the process of making decisions based on facts and knowledge.
- Mechanisms: deduction vs. induction; certainty vs. uncertainty; monotonic vs. non-monotonic
- Control strategies: inference direction, conflict resolution, search
Knowledge representation
- Facts, Rules and Meta-knowledge
- Declarative vs. Procedural knowledge
-
Two languages of knowledge representation
- Propositional logic and first-order logic
Sentences: Constants, Predicates, Functions, Variables,
Connectives, Quantifiers
- Production rule and system
condition-action rules
system structure: working memory, rule base and interpreter
working model: match-resolve-act cycle
-
Syntax of FOL: Basic elements
Constants: e.g., John, 2, Sun, ...
Functions: e.g., Sqrt, FatherOf, ...
Variables: e.g., x, y, a, b, ...
Predicates: e.g., Brother, >, ...
Connectives: ¬, ∧, ∨, ⇒, ⇔
Quantifiers: ∀, ∃
Term: a constant, a variable, or a function applied to terms
-
Exercise 3.9
Translate seven natural-language sentences (1-7) into first-order logic; the answers use predicates such as hot, dry, person, like, read, want, bigger, integer, real, student, select, work, and dance, together with the quantifiers ∀ and ∃ and the connectives ∧, ∨, ¬, ⇒.
-
Production Rule
P → Q
or IF P THEN Q (with a certainty factor)
Antecedent / Condition
Consequent / Conclusion or Action
E.g.,
IF an animal has teeth, claws, and eyes staring into the frontage
THEN the animal is a carnivore (0.6)
-
Production-rule System
Structure: Rule Base + Global Database (facts / working memory) + Control System (inference engine / interpreter)
Procedure: Rule Matching → Conflict Resolution → Rule Execution, repeated until a stop condition is met
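A minimal sketch of the match-resolve-act cycle described above, with assumed rule and working-memory representations (the animal-identification rules are illustrative):

```python
def run_production_system(facts, rules, max_cycles=100):
    """Forward chaining: match -> conflict resolution -> act, until no rule can fire."""
    working_memory = set(facts)                       # global database
    for _ in range(max_cycles):
        # Match: rules whose conditions are all in working memory and whose conclusion is new
        conflict_set = [r for r in rules
                        if r["if"] <= working_memory and r["then"] not in working_memory]
        if not conflict_set:                          # stop condition: nothing left to fire
            break
        rule = conflict_set[0]                        # conflict resolution: first applicable rule
        working_memory.add(rule["then"])              # act: add the conclusion to working memory
    return working_memory

rules = [
    {"if": {"has_teeth", "has_claws", "eyes_front"}, "then": "carnivore"},
    {"if": {"carnivore", "tawny", "dark_spots"},     "then": "cheetah"},
]
print(run_production_system({"has_teeth", "has_claws", "eyes_front", "tawny", "dark_spots"}, rules))
```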
-
First-order inference rules are the foundations of first-order inference
Validity and Satisfiability
Equivalence and Entailment
Unification is necessary in first-order inference
Most general unifier (MGU)
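A small sketch of unification returning a most general unifier; the term representation (tuples for compound terms, strings starting with "?" for variables) is an assumed convention, and the occurs check is omitted for brevity:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, s):
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):                          # compound term: (functor, arg1, ...)
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    """Return a most general unifier of x and y, or None if they are not unifiable."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}                            # note: no occurs check in this sketch
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# P(a, ?y) unified with P(?x, f(b))  ->  {'?x': 'a', '?y': ('f', 'b')}
print(unify(("P", "a", "?y"), ("P", "?x", ("f", "b"))))
```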
-
Exercise 3.18
For each of five pairs of literals (1-5) built from the predicate P, constants a and b, the function f, and variables x, y, z, u, v: decide whether the pair is unifiable and, if so, give the most general unifier.
-
Resolution is a kind of reduction to absurdity: it proves facts ⇒ conclusion by showing that facts ∧ ¬conclusion is unsatisfiable, i.e., by deriving the empty clause.
Conjunctive normal form
Resolution strategies
-
Resolution Rule
Eliminate complementary literals in two sentences; the disjunction of the remaining parts of the two sentences constructs the resolvent clause, which is entailed by the two sentences above.
Literals are complementary if one is the negation of the other.
Apply resolution steps to CNF(KB ∧ ¬α); the method is complete for inference: the proof succeeds when the empty clause is reached.
-
Exercise 3.14 (converting formula (3) to clause form)
1. Eliminate biconditionals and implications
2. Move ¬ inwards
3. Standardize variables apart
4. Skolemize: a more general form of existential instantiation (each existentially quantified variable is replaced by a Skolem function such as f(x))
Table 3-5 (P87) + Table 3-2 (P83)
-
5. Drop universal quantifiers
6. Distribute ∨ over ∧, which splits the Skolemized formula into Clause 1 and Clause 2
Table 3-5 (P87) + Table 3-2 (P83)
-
Exercise 3.14 (continued): formulas (1), (2), (4) and (5) are converted to clause form by the same steps (eliminate implications, move ¬ inwards, standardize variables, Skolemize, drop universal quantifiers, distribute ∨ over ∧).
-
Exercise 3.19 (resolution refutation; answer chosen from options A-E)
A refutation is derived step by step (1-3) from the given clauses over the literals P(z), P(q), P(s) and P(l).
-
Exercise 3.20 (a resolution proof about Mary, Bill and Tom)
(1) Formalize the premises:
∀x ∀y (B(x, y) ⇒ W(x))
∀x ∀y (S(x, y) ⇒ ¬W(x))
S(Mary, Bill)
-
(2) Convert to clause form:
¬B(x, y) ∨ W(x)
¬S(u, v) ∨ ¬W(u)
S(Mary, Bill)
(3) Negate the conclusion ¬B(Mary, Tom) and add it as a clause:
B(Mary, Tom)
(4) Derive the empty clause by resolution (next slide).
-
Resolve ¬B(x, y) ∨ W(x) with B(Mary, Tom) under {Mary/x, Tom/y}, giving W(Mary).
Resolve W(Mary) with ¬S(u, v) ∨ ¬W(u) under {Mary/u}, giving ¬S(Mary, v).
Resolve ¬S(Mary, v) with S(Mary, Bill) under {Bill/v}, giving NIL (the empty clause).
-
Artificial Neural Networks
Chapter 4
-
Artificial Neural Networks are massively parallel adaptive networks of artificial neurons which are intended to abstract and model some of the functionality of the human nervous system.
Basic elements of artificial neurons:
- A set of connecting links
- A combination function
- An activation function
-
Artificial Neurons
M-P Neuron (McCulloch and Pitts, 1943)
Inputs x1, x2, ..., xn with connection weights ω1, ω2, ..., ωn, threshold θ, and output
y = f(Σ_{i=1..n} ωi xi − θ)
f: activation function
-
Combination Functions
Weighted Sum:
net = Σ_{i=1..n} ωi xi − θ, so y = f(Σ_{i=1..n} ωi xi − θ)
(absorbing the threshold into a bias weight ω0: y = f(Σ_{i=0..n} ωi xi))
Radial Distance:
net = Σ_{i=1..n} (xi − ci)² = ‖x − c‖², so y = f(‖x − c‖²)
-
Activation Functions
[Figure: plots of f(net) for six common activation functions: Threshold, Linear, Saturating Linear, Logistic Sigmoid, Hyperbolic Tangent Sigmoid, and Gaussian.]
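A short sketch of a single neuron combining the weighted-sum combination function with a selectable activation function (the weights, inputs, and threshold below are made-up values):

```python
import math

def neuron_output(x, w, theta, activation="logistic"):
    """y = f(sum_i w_i * x_i - theta) for a few of the activations listed above."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta     # weighted-sum combination
    if activation == "threshold":
        return 1.0 if net >= 0 else 0.0
    if activation == "logistic":
        return 1.0 / (1.0 + math.exp(-net))                # logistic sigmoid
    if activation == "tanh":
        return math.tanh(net)                              # hyperbolic tangent sigmoid
    if activation == "gaussian":
        return math.exp(-net * net)
    return net                                             # linear

print(neuron_output([1.0, 0.5], [0.8, -0.4], theta=0.2, activation="logistic"))
```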
-
Two key problems in ANN: Architecture and Learning Approach
Solutions to these two problems lead to an ANN model
Two categories of ANN architecture:
Feedforward vs. Feedback (Recurrent)
Four Learning Strategies: Hebbian, Error Correction, Winner-take-all, Stochastic
-
[Figures: general structure of feedforward networks and general structure of feedback networks, with inputs x1, x2, x3, ..., xn and outputs y1, y2, y3, ..., yn.]
-
Multi-layer perceptron is the classical model of feedforward network.
The Back-Propagation (B-P) learning algorithm aims to minimize the Mean Square Error between the generated outputs and the desired outputs of the network, and is optimized by Gradient Descent.
-
Multi-layer Perceptron (MLP)
Layers are usually fully connected
Numbers of hidden units typically chosen by hand
-
B-P Learning Steps
Step 1. Select a pattern from the training set and present it to the network.
Step 2. Compute activations of input, hidden and output neurons in that sequence.
Step 3. Compute the error over the output neurons by comparing the generated outputs with the desired outputs.
Step 4. Use the calculated error to update all weights in the network, such that a global error measure gets reduced.
Step 5. Repeat Step 1 through Step 4 until the global error falls below a predefined threshold.
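A compact sketch of these five steps for a one-hidden-layer perceptron with logistic units, trained on XOR by stochastic gradient descent; the network size, learning rate, data set, and number of iterations are illustrative assumptions rather than anything prescribed by the slides.

```python
import math, random

random.seed(0)
def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))

n_in, n_hid, n_out, eta = 2, 3, 1, 0.5
W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]   # +1: bias weight
W2 = [[random.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]             # XOR patterns

for step in range(20000):
    x, t = random.choice(data)                                   # Step 1: pick a training pattern
    xb = x + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]   # Step 2: hidden activations
    hb = h + [1.0]
    y = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in W2]   #         output activations
    d_out = [(y[k] - t[k]) * y[k] * (1 - y[k]) for k in range(n_out)]  # Step 3: output error terms
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * W2[k][j] for k in range(n_out))
             for j in range(n_hid)]
    for k in range(n_out):                                       # Step 4: gradient-descent updates
        for j in range(n_hid + 1):
            W2[k][j] -= eta * d_out[k] * hb[j]
    for j in range(n_hid):
        for i in range(n_in + 1):
            W1[j][i] -= eta * d_hid[j] * xb[i]
                                                                 # Step 5: repeat (fixed count here)
for x, t in data:
    xb = x + [1.0]
    hb = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
    print(x, round(sigmoid(sum(w * v for w, v in zip(W2[0], hb))), 2))
```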
-
Exercise 4.15 (B-P update for a single unit; compare with the Sigmoid case)
A unit has inputs xi and weights ωi (i = 1, ..., n) and output y = tanh(Σ_{i=1..n} ωi xi).
Derive the B-P (gradient-descent) weight update, using tanh'(x) = 1 − tanh²(x).
-
Solution sketch:
With error E = ½ Σ_i (yi − ti)², output yi = tanh(gi) and net input gi = Σ_j ωij xj:
∂E/∂ωij = (yi − ti) · tanh'(gi) · xj = (yi − ti)(1 − yi²) xj
so gradient descent gives the weight update
Δωij = −η (yi − ti)(1 − yi²) xj
-
Hopfield network is a classical model of feedback network. As a dynamic system, stability is the main problem of the Hopfield network, which is analyzed via the Lyapunov energy function.
Hopfield networks solve problems by going to an attractor state from any initial state. Their primary applications include content-addressable memory and optimization.
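A minimal content-addressable-memory sketch of a discrete Hopfield network with Hebbian weight storage and asynchronous updates (the ±1 coding and the two stored patterns are illustrative assumptions):

```python
import random

def train_hopfield(patterns):
    """Hebbian weights: W[i][j] = sum over patterns of p[i]*p[j], with a zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=100):
    """Asynchronous updates drive the state toward an attractor (non-increasing energy)."""
    s = list(state)
    n = len(s)
    for _ in range(steps):
        i = random.randrange(n)
        s[i] = 1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
    return s

stored = [[1, 1, 1, -1, -1, -1], [-1, -1, 1, 1, 1, -1]]
W = train_hopfield(stored)
noisy = [1, -1, 1, -1, -1, -1]          # corrupted version of the first pattern
print(recall(W, noisy))                 # typically recovers the first stored pattern
```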
-
Self-organizing feature map (SOFM) is a network with unsupervised learning, which can be used to reduce dimensionality while preserving topological information.
Three key components of the SOFM algorithm are competition, cooperation, and adaptation.
-
Machine Learning
Chapter 5
-
Learning is any process by which a system improves performance from experience.
Four elements of intelligent systems with learning capability:
performance, environment, critic, learning
-
Learning strategies include rote learning, learning from instruction, learning by analogy, explanation-based learning, and learning from induction.
Rote learning saves problem-solution pairs and retrieves them when needed.
-
Explanation-based learning is deductive learning in essence.
- Input: training example, domain theory, operationality criterion
- Output: target concept
- Two processes: Explanation and Generalization
-
In supervised learning, a function is learned from examples of input-output pairs.
Ockham's razor: prefer the simplest hypothesis consistent with the data.
Decision tree learning is a method for approximating discrete-valued functions: ID3, Information Gain
-
Exercise 5.12: build a decision tree for the Boolean function of A and B below (note: Result = 1 only when A = 1 and B = 0).
A | B | Result
0 | 0 | 0
0 | 1 | 0
1 | 0 | 1
1 | 1 | 0
Decision tree: test A first. If A = 0, output 0. If A = 1, test B: output 1 when B = 0 and 0 when B = 1.
-
Choose attribute based on information theory
To implement Choose-Attribute in the DTL algorithm
Information Content (Entropy):
I(P(v1), ..., P(vn)) = Σ_i −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
-
Information gain
A chosen attribute A divides the training set E into subsets E1, ..., Ev according to their values for A, where A has v distinct values.
Information Gain (IG) or reduction in entropy from the attribute test:
remainder(A) = Σ_{i=1..v} ((pi + ni)/(p + n)) · I(pi/(pi+ni), ni/(pi+ni))
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Choose the attribute with the largest IG
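A short sketch of these two formulas for a two-class (p positive, n negative) training set; the counts in the example call are illustrative:

```python
import math

def entropy(p, n):
    """I(p/(p+n), n/(p+n)) in bits; 0*log(0) is taken as 0."""
    total = p + n
    result = 0.0
    for c in (p, n):
        if c:
            q = c / total
            result -= q * math.log2(q)
    return result

def information_gain(p, n, subsets):
    """subsets: list of (p_i, n_i) pairs produced by splitting on attribute A."""
    remainder = sum((pi + ni) / (p + n) * entropy(pi, ni) for pi, ni in subsets)
    return entropy(p, n) - remainder

# Toy call: a 3-positive / 3-negative set split into subsets (1, 2) and (2, 1):
print(information_gain(3, 3, [(1, 2), (2, 1)]))   # about 0.082
```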
-
Exercise 5.13: The training set below has two classes (A, B) and two Boolean attributes a1 and a2 (values True/False). Compute the information gain of a1 and a2 and choose the root attribute.
# | Class | a1 | a2
1 | B | True | False
2 | B | True | True
3 | A | False | True
4 | B | False | False
5 | A | True | False
6 | A | True | True
-
Solution: the entropy of the whole training set is I(3/6, 3/6) = 1.
Splitting on a1: the subsets {1, 2, 5, 6} (a1 = true: B, B, A, A) and {3, 4} (a1 = false: A, B) each have entropy 1, so remainder(a1) = 1 and IG(a1) = 1 − 1 = 0.
Splitting on a2: the subsets {2, 3, 6} (a2 = true: B, A, A) and {1, 4, 5} (a2 = false: B, B, A) each have entropy −(2/3)log2(2/3) − (1/3)log2(1/3) = 0.918, so remainder(a2) = 0.918 and IG(a2) = 1 − 0.918 = 0.082.
Since IG(a2) > IG(a1), a2 is chosen as the root attribute.
-
In unsupervised learning, only input data is provided; the expected output is unknown.
Clustering is the process of dividing the data set into clusters according to the similarity between data.
Partition clustering and hierarchical clustering are two basic strategies.
-
The k-means algorithm is a main algorithm of partition clustering, in which each cluster is represented by its mean and each data point is iteratively assigned to the closest cluster.
The k-means algorithm becomes the k-medoids algorithm if each cluster is instead represented by a representative data point (medoid).
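A minimal k-means sketch for 2-D points with squared Euclidean distance (not the textbook's code); it is run here on the nine points that also appear in Exercise 5.24 later in this summary:

```python
def kmeans(points, centers, iterations=10):
    """Assign each point to the nearest center, then move each center to its cluster mean."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[dists.index(min(dists))].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return clusters, centers

pts = [(3, 2), (3, 9), (8, 6), (9, 5), (2, 4), (3, 10), (2, 6), (9, 6), (2, 2)]
clusters, centers = kmeans(pts, centers=[(3, 2), (9, 5), (2, 6)])   # initial centers A1, B1, C1
print(centers)   # approximately (2.33, 2.67), (8.67, 5.67), (2.67, 8.33)
```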
-
An example of k-means clustering (figure from Data Mining: Concepts and Techniques, J. Han and M. Kamber)
-
Total substitution cost for the data set (k-medoids): TC = Σ_j Cj
where Cj is the substitution cost contributed by each data point j when a representative object is replaced by a candidate object.
-
Exercise 5.23: In how many distinct ways can a set of 7 data points {1, 2, ..., 7} be partitioned into clusters?
Number of partitions into k non-empty clusters:
k = 1: 1; k = 2: 63; k = 3: 301; k = 4: 350; k = 5: 140; k = 6: 21; k = 7: 1
Total: 1 + 63 + 301 + 350 + 140 + 21 + 1 = 877
-
Exercise 5.24: The data set D contains 9 points: A1(3,2), A2(3,9), A3(8,6), B1(9,5), B2(2,4), B3(3,10), C1(2,6), C2(9,6), C3(2,2).
Run k-means on D with k = 3, taking A1, B1, C1 as the initial cluster centers, and give
(1) the clusters and centers after the first round,
(2) the clusters and centers after the second round,
(3) the result after the third round (q = 3).
-
Round 1: the initial centers are K1 = A1, K2 = B1, K3 = C1. Squared distances of the remaining points to the centers:
d(A2, K1) = 49, d(A2, K2) = 52, d(A2, K3) = 10
d(A3, K1) = 41, d(A3, K2) = 2, d(A3, K3) = 36
d(B2, K1) = 5, d(B2, K2) = 50, d(B2, K3) = 4
d(B3, K1) = 64, d(B3, K2) = 61, d(B3, K3) = 17
d(C2, K1) = 52, d(C2, K2) = 1, d(C2, K3) = 49
d(C3, K1) = 1, d(C3, K2) = 58, d(C3, K3) = 16
New clusters and centers:
{A1, C3}: K1 = (2.5, 2)
{B1, A3, C2}: K2 = (8.7, 5.7)
{C1, A2, B2, B3}: K3 = (2.5, 7.25)
-
Round 2: recompute the distances to the new centers K1, K2, K3:
d(A1, K1) = 0.25, d(A1, K2) = 32.49, d(A1, K3) = 27.81
d(A2, K1) = 49.25, d(A2, K2) = 54.58, d(A2, K3) = 3.31
d(A3, K1) = 46.25, d(A3, K2) = 0.58, d(A3, K3) = 31.81
d(B1, K1) = 51.25, d(B1, K2) = 0.58, d(B1, K3) = 47.31
d(B2, K1) = 4.25, d(B2, K2) = 47.78, d(B2, K3) = 10.81
d(B3, K1) = 64.25, d(B3, K2) = 50.98, d(B3, K3) = 7.61
d(C1, K1) = 16.25, d(C1, K2) = 44.98, d(C1, K3) = 1.81
d(C2, K1) = 58.25, d(C2, K2) = 0.18, d(C2, K3) = 43.81
d(C3, K1) = 0.25, d(C3, K2) = 58.58, d(C3, K3) = 27.81
New clusters and centers:
{A1, B2, C3}: K1 = (2.3, 2.7)
{B1, A3, C2}: K2 = (8.7, 5.7)
{C1, A2, B3}: K3 = (2.7, 8.3)
-
Round 3: recompute the distances to K1, K2, K3:
d(A1, K1) = 0.98, d(A1, K2) = 32.49, d(A1, K3) = 39.78
d(A2, K1) = 40.18, d(A2, K2) = 54.58, d(A2, K3) = 0.58
d(A3, K1) = 43.38, d(A3, K2) = 0.58, d(A3, K3) = 33.38
d(B1, K1) = 50.18, d(B1, K2) = 0.58, d(B1, K3) = 50.58
d(B2, K1) = 1.78, d(B2, K2) = 47.78, d(B2, K3) = 18.98
d(B3, K1) = 53.78, d(B3, K2) = 50.98, d(B3, K3) = 2.98
d(C1, K1) = 10.98, d(C1, K2) = 44.98, d(C1, K3) = 5.78
d(C2, K1) = 55.78, d(C2, K2) = 0.18, d(C2, K3) = 44.98
d(C3, K1) = 0.58, d(C3, K2) = 58.58, d(C3, K3) = 38.29
The clusters are unchanged:
{A1, B2, C3}: K1 = (2.3, 2.7)
{B1, A3, C2}: K2 = (8.7, 5.7)
{C1, A2, B3}: K3 = (2.7, 8.3)
so the algorithm has converged.
-
Two methods of hierarchical clustering are bottom-up agglomerative nesting and top-down divisive analysis.
-
Nouvelle AI
Chapter 6
-
Nouvelle AI: intelligence without representation and reasoning
Situated AI
Agent: weak notion and strong notion
Architecture types include Deliberative (BDI model), Reactive (subsumption, agent network), and Hybrid (PRS, TouringMachines, InteRRaP)
-
-
Q-values Computation
[Figure: a small grid world with goal state G. Three panels show r(state, action), the immediate reward values (100 for actions entering G, 0 otherwise); the Q(state, action) values (100, 90, 81, 72, ...); and the V*(state) values (100, 90, 81).]
Using the discounted cumulative reward with discount factor γ = 0.9:
V^π(s_t) = Σ_{i=0..∞} γ^i r_{t+i}
π* = argmax_π V^π(s), for all s
Q(s, a) = r(s, a) + γ V*(δ(s, a))
Example: 81 = 0 + 0.9 × 90
-
-
Learning the Q-value
Note: Q and V* are closely related: V*(s) = max_{a'} Q(s, a')
This allows us to write Q recursively as
Q(s_t, a) = r(s_t, a) + γ V*(δ(s_t, a)) = r(s_t, a) + γ max_{a'} Q(δ(s_t, a), a')
a kind of Temporal Difference (TD) learning (solves: how to learn?)
Using Q-values, the best action is chosen by
π*(s_t) = argmax_a Q(s_t, a)
(solves: how to choose the best action?)
-
Q-Learning Steps
FOR each (s, a) DO initialize the table entry Q(s, a) ← 0
Observe the current state s
WHILE (true) DO
- Select an action a and execute it
- Receive the immediate reward r
- Observe the new state s'
- Update the table entry for Q(s, a) as follows:
  Q(s, a) ← r + γ max_{a'} Q(s', a')
- Move: record the transition from s to s'
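A compact sketch of these steps as a tabular Q-learning loop on a tiny deterministic world; the states, rewards, and γ = 0.9 are illustrative assumptions chosen to echo the grid example above:

```python
import random

random.seed(1)
gamma = 0.9
# Deterministic toy world: transitions[state][action] = (next_state, reward); "G" is absorbing.
transitions = {
    "s1": {"right": ("s2", 0), "up": ("s1", 0)},
    "s2": {"right": ("G", 100), "left": ("s1", 0)},
    "G":  {},
}
Q = {s: {a: 0.0 for a in acts} for s, acts in transitions.items()}   # initialize the table to 0

for episode in range(200):
    s = "s1"
    while transitions[s]:                                # until the absorbing goal is reached
        a = random.choice(list(transitions[s]))          # exploratory action selection
        s2, r = transitions[s][a]                        # execute, receive reward, observe s'
        best_next = max(Q[s2].values()) if Q[s2] else 0.0
        Q[s][a] = r + gamma * best_next                  # Q(s,a) <- r + gamma * max_a' Q(s',a')
        s = s2

print(Q["s1"]["right"], Q["s2"]["right"])   # approaches 90 and 100, cf. the grid example values
```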
-
[Example figure: the reward matrix R, the initial Q matrix (all zeros), the Q matrix during learning, and the final Q matrix.]
-
Evolutionary Computation
Chapter 7
-
Evolutionary Computation is based on a biological metaphor:
Optimized Behavior ↔ Optimal Solution
Individual ↔ Solution; Fitness ↔ Objective; Environment ↔ Problem
Ingredients of EC:
- Representation: Individual and Population
- Fitness evaluation: Objective
- Selection: Parent and Survivor
- Genetic Operators: Mutation and/or Recombination
- Initialization / Termination
-
The Ingredients
[Figure: the evolution mechanism from generation t to t + 1: selection of parents, recombination and mutation, and reproduction. Image from Ida Sprinkhuizen-Kuyper: Introduction to Evolutionary Computation, 2000.]
-
The Evolution Mechanism
Increasing diversity by genetic operators:
- mutation
- recombination
Decreasing diversity by selection:
- of parents
- of survivors
-
The Evolutionary Cycle
[Figure: Population → Parent Selection → Parents → Recombination and Mutation → Offspring → Survivor Selection → back to Population. Image from Ben Paechter: Evolutionary Computing: A Practical Introduction.]
-
Desirable Performance of EC
[Figure from: Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall]
-
Main Features of EC: Parallel and Stochastic; easily applicable
Exploration/Exploitation Balance: a key issue in EC
Main streams of Evolutionary Algorithms:
- Genetic Algorithm
- Evolutionary Programming
- Evolutionary Strategies
- Genetic Programming
-
Genetic Algorithm
- Representation: Bit-string (Genotype)
- Mutation: Bit-swap
- Recombination: Crossover
- Parent Selection: Chance according to Fitness
- Survivor Selection: Extinct Replacement
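A minimal genetic-algorithm sketch with bit-string genomes, fitness-proportional parent selection, one-point crossover, bit-flip mutation, and full replacement of the old population; the ones-counting fitness function is an illustrative stand-in, not the grid-world fitness of the worked example that follows:

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, P_MUT = 16, 4, 60, 0.05
fitness = lambda genome: sum(genome)                  # toy objective: maximize the number of 1s

def select(pop):
    """Fitness-proportional (roulette-wheel) parent selection."""
    weights = [fitness(p) + 1e-9 for p in pop]
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)             # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - g if random.random() < P_MUT else g for g in genome]   # bit flip

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    offspring = [mutate(crossover(*select(population))) for _ in range(POP_SIZE)]
    population = offspring                            # extinct (full) replacement of survivors
best = max(population, key=fitness)
print(best, fitness(best))
```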
-
Worked Example
World size: 4×4
Population size: N = 4
Genome: 16 bits
Fitness: F(p) = −(number of squares from the goal)
Selection: Greedy
-
Example: Initial Population
Initial Population P(0): 4 random 16-bit strings
-
Example: Fitness Calculation
F(p1) = (8 − 8) − 4 = −4, F(p2) = −5, F(p3) = −6, F(p4) = −4
-
Example: Selection and Change
Select p1 and p4 as parents of the next generation
Produce offspring using crossover and mutation
-
Example: Survivor Selection
The next generation is ...
-
Example: Repeat for the next Generation
Repeat: F(p1) = −4, F(p2) = −4, F(p3) = 0, F(p4) = −4
-
Exercise 7.1 (A*), parts (1) and (2).
-
Swarm Intelligence
Chapter 8
-
Swarm Intelligence (SI): an artificial intelligence technique based around the study of collective behavior in decentralized, self-organized systems.
Multi-Agent Systems (MAS): a system that consists of a number of agents, which interact with one another.
Communication, Coordination, Collaboration
-
Ant Colony Optimization (ACO)
- Inspired by ant colony foraging
- Pheromone as heuristic information (stigmergy)
- Iteration between ConstructAntSolutions and UpdatePheromones
Particle Swarm Optimization (PSO)
- Inspired by bird flocking
- Heuristic information: results from partners
- Particle Velocity Update (see the sketch below)
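A hedged sketch of the global-best PSO velocity and position update; the inertia and acceleration coefficients and the 1-D objective below are illustrative assumptions:

```python
import random

random.seed(2)
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia weight and acceleration coefficients
f = lambda x: (x - 3.0) ** 2                   # toy objective to minimize

positions = [random.uniform(-10, 10) for _ in range(10)]
velocities = [0.0] * 10
pbest = positions[:]                           # personal best positions
gbest = min(positions, key=f)                  # global best position

for _ in range(100):
    for i in range(len(positions)):
        r1, r2 = random.random(), random.random()
        # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
        if f(positions[i]) < f(pbest[i]):
            pbest[i] = positions[i]
            if f(positions[i]) < f(gbest):
                gbest = positions[i]

print(round(gbest, 3))                         # should end up close to 3.0
```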
-
Since route B is shorter, the ants on this path will complete the travel more times and thereby lay more pheromone over it.
The pheromone concentration on trail B will increase at a higher rate than on A, and soon the ants on route A will choose to follow route B.
Since most ants will no longer travel on route A, and since the pheromone is volatile, trail A will start evaporating.
Only the shortest route will remain!
-
Model Ant Colony for Optimization
Each artificial ant is a probabilistic mechanism that constructs a solution to the problem, using:
- Artificial pheromone deposition
- Heuristic information: pheromone trails, and others
Differences between real and artificial ants:
- The pheromone is updated only after a solution has been constructed.
- Additional mechanisms
-
Model Ant Colony for Optimization (Cont'd)
ConstructAntSolutions: the rules for the probabilistic choice of solution components.
UpdatePheromones: used to increase the pheromone values associated with good or promising solutions, and decrease those associated with bad ones (see the sketch below).
- Decreasing all pheromone values through pheromone evaporation --> allows forgetting --> favors exploration of new areas
- Increasing the pheromone levels associated with a chosen set of good solutions --> makes the algorithm converge to a solution
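A hedged sketch of the UpdatePheromones step (evaporation on all components, deposit on the components of chosen good solutions); the evaporation rate and the 1/cost deposit rule are illustrative assumptions:

```python
def update_pheromones(tau, good_solutions, costs, rho=0.1):
    """tau: dict mapping solution components (e.g. edges) to pheromone values."""
    # Evaporation on every component: allows forgetting, favors exploration
    for component in tau:
        tau[component] *= (1.0 - rho)
    # Deposit on the components of the chosen good solutions: drives convergence
    for solution, cost in zip(good_solutions, costs):
        for component in solution:
            tau[component] += 1.0 / cost        # shorter / cheaper solutions deposit more
    return tau

# Toy example: pheromone on four edges, one good tour using edges "ab" and "bc" with cost 5.
tau = {"ab": 1.0, "bc": 1.0, "cd": 1.0, "da": 1.0}
print(update_pheromones(tau, good_solutions=[["ab", "bc"]], costs=[5.0]))
```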
-
Illustrating the velocity update schema of global PSO
-
Next Generation of Computers
Chapter 11
-
Optical Computers
Optical computers perform computations, operate, store and transmit data using only light.
Two kinds of optical computers: Electro-Optical Hybrid and Pure Optical
Optical logic and Holographic Memory
Optical Interconnect: optical fibers, waveguides and free-space optics.
-
DNA Computing
DNA itself does not carry out any computation; it rather acts as a massive memory.
BUT the way complementary bases react with each other can be used to compute things.
Proposed by Adleman in 1994 to solve the Hamiltonian Path Problem with DNA.
Uniqueness of DNA:
- Extremely dense information storage.
- Enormous parallelism.
-
Quantum Computing
A quantum computer is a machine that performs calculations based on the laws of quantum mechanics, which govern the behavior of particles at the sub-atomic level.
-
Have you got an overall understanding of AI, and prepared yourself for learning and studying the branches of AI?
-
Course Grading
Three components
- Final exam: 70% (2009.11.25, 10:10 - 12:10, 2008)
- Programming: 20% (Exercises 2.10, 3.11, 5.17, 8.4; submitted before 2009.1.20)
- Problem sets (closed): 10%
-
Questions?
E_mail: [email protected]
Tel: 68913447 (office)