Associative Theories of LTM


TRANSCRIPT

Page 1:

Associative Theories of LTM

Page 2:

Networks

How is all the information in our LTM represented, and how does one go about finding and retrieving a bit of knowledge among the huge store of material available in LTM?

The idea of representing information in LTM as a network of connections has been around for centuries. The following describes the basic ideas of a “network”:

Page 3:

Networks (con’t)

Nodes – represent individual ideas (e.g., a “South Dakota” node).

Association links – connect nodes to other nodes.

A “search,” then, consists of travelling from one node to the next along the links until the target information is reached.

Page 4:

Spreading Activation

A node is activated when it receives sufficient input or excitation. At that point, activation spreads along its links to other nodes, partially activating those nodes.

Nodes receive activation from other nodes, increasing their “subthreshold activation” levels. When the “sum” of activation reaches the “response threshold,” the node “fires.”

Note the parallels between our conceptualization of activation in a network and the way neurons work.
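To make the mechanism concrete, here is a minimal sketch in Python. The nodes, link weights, and threshold value are made up purely for illustration; they are not taken from any particular model.

```python
# Toy spreading-activation network: node -> {neighbour: link weight}.
network = {
    "bread":  {"butter": 0.6, "bakery": 0.3},
    "butter": {"bread": 0.6, "knife": 0.2},
    "bakery": {"bread": 0.3},
    "knife":  {"butter": 0.2},
}

THRESHOLD = 1.0                                # response threshold
activation = {node: 0.0 for node in network}   # subthreshold activation levels

def stimulate(node, amount):
    """Add excitation to a node; if its summed activation reaches the
    response threshold, the node "fires" and spreads activation along
    its links, partially activating its neighbours."""
    activation[node] += amount
    if activation[node] >= THRESHOLD:
        for neighbour, weight in network[node].items():
            activation[neighbour] += amount * weight   # partial activation only

stimulate("bread", 1.0)
print(activation["butter"])   # 0.6 -- "butter" is now partially (subthreshold) active
```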

Page 5:

Psychological Evidence Of Networks

The results of a variety of lines of research are consistent with the idea of a network and with predictions based on a network:

1. Cueing - When recall alone is insufficient to activate a node, a “cue” or “hint” can sometimes help by activating another node that can spread its activation to the target node.

2. Context Reinstatement - When study and test contexts are the same, recall is better because the nodes that were activated and connected to the new material will likely be activated again during testing.

Page 6:

Evidence Of Networks (con’t)

3. Lexical-Decision Task (priming) - In a variation of the lexical-decision task, pairs of words or nonwords are presented. Some of the word pairs are related (e.g., bread-butter) while others are not (e.g., nurse-butter). Spreading activation from the “bread” node should partially activate the “butter” node, allowing for a quicker response time.
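A hedged sketch of how that prediction might be expressed, continuing the toy model above; the response-time constants are invented for illustration, only the direction of the difference matters.

```python
def simulated_rt(preactivation, threshold=1.0, base_rt=500, ms_per_unit=200):
    """Toy response-time rule: RT grows with the activation the target
    node still needs in order to reach its response threshold."""
    return base_rt + ms_per_unit * max(threshold - preactivation, 0.0)

print(simulated_rt(0.6))   # related prime (bread -> butter): 580 ms, faster
print(simulated_rt(0.0))   # unrelated prime (nurse -> butter): 700 ms, slower
```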

Page 7:

Evidence Of Networks (con’t)

4. Sentence Verification - Since one travels from node to node, it should take longer to reach a more distant node than a closer node. Consider the following partial network:

[Diagram: a partial network in which “robin” is linked to “bird,” and “bird” is linked to “animal.”]

When asked to verify the truth of statements such as, “A robin is a bird” and “A robin is an animal,” the first sentence should be verified more quickly than the second.
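A minimal sketch of that distance prediction, assuming the partial robin-bird-animal hierarchy above; the only claim encoded is “more links travelled, more time needed.”

```python
from collections import deque

# Partial "isa" hierarchy assumed for illustration.
isa_links = {"robin": ["bird"], "bird": ["animal"], "animal": []}

def link_distance(start, target):
    """Number of links travelled from one node to another (breadth-first)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in isa_links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(link_distance("robin", "bird"))    # 1 link  -> "A robin is a bird" verified quickly
print(link_distance("robin", "animal"))  # 2 links -> "A robin is an animal" verified more slowly
```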

Page 8:

Evidence Of Networks (con’t)

In general, the predictions are confirmed. From this research, three general conclusions can be drawn:

a. If a fact about a concept is frequently encountered, it will be stored with that concept even if it could be inferred from a more superordinate concept.

b. The more frequently encountered a fact about a concept is, the more strongly that fact will be associated with the concept. And the more strongly associated facts are with concepts, the more rapidly they are verified.

c. Verifying facts that are not directly stored with a concept but that must be inferred takes a relatively long time.

Page 9:

The Fan Effect

Some nodes have many connections (e.g., robin) while others have few (e.g., aardvark).

It is assumed that when a node is activated, activation will spread through all its association links (i.e., connections).

Page 10:

The Fan Effect (con’t)

There is, however, a limit on the amount of activation that can spread from a node. The more connections a node has, the less activation will spread to any one associated node.

Fan Effect - the name given to the increase in RT related to the increase in the number of connections (i.e., greater fan) associated with a node.
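A minimal sketch of that limit, assuming (as one common simplification) that a fixed amount of source activation is divided evenly among a node's links; the numbers are arbitrary.

```python
def activation_per_link(source_activation, fan):
    """The greater the fan (number of connections), the less activation
    reaches any one associated node."""
    return source_activation / fan

print(activation_per_link(1.0, fan=2))    # 0.50 reaches each neighbour -> faster retrieval
print(activation_per_link(1.0, fan=20))   # 0.05 reaches each neighbour -> slower retrieval
```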

Page 11:

A Methodological Illustration

Ss learned sentences of the form “The <person> is in the <place>.” The number of times each person or place occurred across the sentences was varied. For example:

The doctor is in the bank. (1 - 1)
The fireman is in the park. (1 - 2)
The lawyer is in the church. (2 - 1)
The lawyer is in the park. (2 - 2)

Ss were given a speed-recognition test in which they were presented with sentences (some previously learned and some new foils) and had to indicate whether or not they had studied each sentence.

Page 12:

A Methodological Illustration (con’t)

Ss were fastest when responding to (1 - 1) type sentences and slowest when responding to (2 - 2) type sentences; they were equally fast at responding to (1 - 2) and (2 - 1) type sentences.
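A toy model of that result, again assuming activation divides across a concept's fan; the base time and scaling constants are invented, and only the predicted ordering of the four sentence types matters.

```python
def predicted_rt(person_fan, place_fan, base_rt=900, scale=100):
    """Recognition is driven by activation converging from the person and
    place nodes; higher fan means less activation converges, so RT rises."""
    converging = 1.0 / person_fan + 1.0 / place_fan
    return base_rt + scale * (2.0 - converging)

print(predicted_rt(1, 1))   # (1-1): fastest
print(predicted_rt(1, 2))   # (1-2)
print(predicted_rt(2, 1))   # (2-1): same as (1-2)
print(predicted_rt(2, 2))   # (2-2): slowest
```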

Page 13:

Searching A Network

How might we search through such a network as we’ve described?

A search on the Internet can be used as an analogy.

One advantage of our network model, however, is that activation can spread from more than one source simultaneously, resulting in the convergence of activation on the sought-after node.

How do we get to an “entry node”?

Our networks rely, in part, on sensory input. Feature nodes and spreading activation of the feature nets are seamlessly connected to our memory network and, therefore, activate the cognitive nodes we have been discussing.
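A minimal sketch of search by convergence, assuming two hypothetical entry nodes (say, cues arriving from sensory input) spread activation at the same time; the sought-after node is the one both sources reach.

```python
# Hypothetical association lists for two entry nodes.
associations = {
    "red":   {"apple", "fire truck", "rose"},
    "fruit": {"apple", "banana", "grape"},
}

def converge(*entry_nodes):
    """Nodes activated simultaneously by every entry node."""
    return set.intersection(*(associations[n] for n in entry_nodes))

print(converge("red", "fruit"))   # {'apple'} -- activation converges on the target
```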

Page 14:

What’s In A Node?

How is information (in general) represented and, subsequently, how is complex information represented?

Several models have been proposed:

Node = Concept

Propositional Networks

Page 15:

Node = Concept

This model proposes that each node represents a concept (e.g., “Lincoln,” “war,” etc.). Nodes are connected to one another via “relational” associative links (e.g., “isa” or “hasa”).

Such a conception, however, would require too many types of relational links (e.g., opposite of, analogous to, larger than, etc.) to be useful.

Page 16:

Propositional Networks

A more fruitful proposal comes from “propositional” networks, most notably Anderson’s ACT (Adaptive Control of Thought) computer program.

A “proposition” is the smallest unit of knowledge that can sensibly be judged true or false.

Consider the following sentence: “Lincoln, who was president of the USA during a bitter war, freed the slaves.”

Page 17:

Propositional Networks (con’t)

That sentence is made up of several simpler sentences, each representing a proposition:

A: Lincoln was president of the USA during a war.
B: The war was bitter.
C: Lincoln freed the slaves.

Note that each of those simpler sentences can be judged true or false.

We can represent that information in a propositional network:

Page 18:

Propositional Networks (con’t)

Each proposition is represented by an ellipse with labeled arrows to its relation and arguments.

The ellipse, relations, and arguments are called “nodes” and the arrows are called “links” (i.e., they connect the nodes).

[Diagram: the propositional network for the sentence. Proposition A has labeled links to its relation “president-of,” agent “Lincoln,” object “USA,” and time “wartime”; Proposition B links its subject “war” to the relation “bitter”; Proposition C has links to its relation “freed,” agent “Lincoln,” and object “slaves.”]

The nodes can be thought of as “ideas” and the links as “associations” between the ideas, each link identified by its syntactic role within the proposition.
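As a rough sketch (not Anderson's actual ACT data structures), the Lincoln network above could be written down like this, with each proposition node carrying labelled links to its relation and arguments:

```python
# Each proposition node points, via labelled links, to its relation and arguments.
propositions = {
    "A": {"relation": "president-of", "agent": "Lincoln", "object": "USA", "time": "wartime"},
    "B": {"relation": "bitter", "subject": "war"},
    "C": {"relation": "freed", "agent": "Lincoln", "object": "slaves"},
}

def propositions_about(concept):
    """All proposition nodes whose argument links reach the given concept node."""
    return [name for name, args in propositions.items() if concept in args.values()]

print(propositions_about("Lincoln"))   # ['A', 'C'] -- each can be judged true or false
```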

Page 19:

Propositional Networks (con’t)

Once the nodes are all connected, they are organized within the network for ease of interpretation.

Page 20:

Propositional Networks (con’t)

In addition to representing simple ideas such as the sentence above, propositional networks can represent more complex ideas or knowledge. For example…

This network represents a small portion of our knowledge about “dogs.”

Page 21:

Propositional Networks (con’t)

There are two types of nodes:

type – refers to a general category (e.g., dog); facts stored with a type node are true for the entire category.

token – refers to a specific instance of the category (e.g., “my dog”).

The token nodes are connected to the type nodes.
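A hypothetical sketch of that connection: the token node points to its type node through an “isa” link, so category-level facts stored with the type are reachable from the specific instance. The details (including the dog's name, “Rex”) are invented for the example.

```python
type_nodes  = {"dog": {"isa": "animal", "has": ["fur", "four legs"]}}
token_nodes = {"my dog": {"isa": "dog", "name": "Rex"}}

def category_facts(token):
    """Follow the token's "isa" link to its type node and read category-level facts."""
    return type_nodes[token_nodes[token]["isa"]]["has"]

print(category_facts("my dog"))   # ['fur', 'four legs']
```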

Page 22:

Propositional Networks (con’t)

“Time” and “location” nodes are also incorporated into propositions.

As in network models generally, nodes are connected by associative links; some of the links are stronger than others (depending on frequency and recency of use), and spreading activation partially activates connected nodes.

Page 23:

Problems With Propositional Networks

Network models, though promising in their approach and having the support of many researchers, do have their limitations. Let’s examine some of those limitations:

Retrieval blocks – There is a saying that “close” only counts in horseshoes. Under spreading activation, “close” should count too: partial activation ought to help retrieval. Yet we see many cases of retrieval blocks where it seems “close” doesn’t count (e.g., the TOT phenomenon).

Too many distant connections – Suppose you activate a node that has many links connected to it (e.g., “health”). Spreading activation will result in all those associated nodes being activated as well. If the information you seek is one or two additional links away, you will have activated tens of thousands of other nodes. How do you sift through them all?

Page 24:

Addressing Those Challenges

Given that spreading activation will activate the nodes you seek along with many that are irrelevant, a means of narrowing activation would be useful, allowing you to focus on just the relevant nodes.

That can be accomplished by postulating that nodes can inhibit, as well as excite, adjacent nodes via spreading activation.

A more strongly activated node would have the effect of deactivating neighboring, less active nodes, until only one node remains active. This is referred to as a winner-takes-all system.

Such a system could address the problem of activating too many nodes and narrowing the focus of the search for a piece of information.
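A minimal sketch of such a winner-takes-all dynamic, assuming each node is suppressed in proportion to the summed activation of its competitors; the starting activations and inhibition constant are arbitrary.

```python
def winner_takes_all(activations, inhibition=0.03, steps=30):
    """Repeatedly let every node be inhibited by its competitors' activation;
    weaker nodes are driven to zero while the strongest remains active."""
    acts = dict(activations)
    for _ in range(steps):
        for node in acts:
            competitors = sum(a for other, a in acts.items() if other != node)
            acts[node] = max(acts[node] - inhibition * competitors, 0.0)
    return acts

print(winner_takes_all({"health": 1.0, "wealth": 0.6, "stealth": 0.4}))
# e.g. health stays near 0.8 while the other two fall to 0.0 -- one node remains active
```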

Page 25:

Connectionism

“Connectionist networks” eliminate the idea of a node representing an individual idea. Instead, ideas are considered to be patterns of activation across the network… distributed representations. Here is a simple illustration:

[Figure: the same grid of letters repeated several times; what differs from one repeat to the next is the pattern of activation spread across the whole grid, not any single letter.]

Page 26:

Connectionism (con’t)

In this conception, an individual node has no particular meaning. Instead, the entire pattern of nodes must be considered to determine what is being represented... its meaning is “distributed” across the entire network.

Pattern activation is accomplished quickly through parallel distributed processing (PDP) and without the aid of a “central executive.”

One advantage of the connectionist network is that it can perform simultaneous multiple constraint satisfaction. That is, given a problem with several constraints, different parts of the network can work independently on each constraint and collectively come up with an average-like solution to the problem.
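To make “a pattern of activation” concrete, here is a minimal sketch of a distributed representation, assuming two made-up patterns over one small pool of units; no single unit stands for “dog,” the whole pattern does.

```python
import numpy as np

patterns = {
    "dog": np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float),
    "cat": np.array([1, 0, 1, 0, 1, 0, 0, 1], dtype=float),
}

def best_match(current_activation):
    """Read the meaning off the whole pattern: whichever stored pattern
    is most similar to the current activation across all units."""
    return max(patterns, key=lambda name: float(current_activation @ patterns[name]))

noisy_input = patterns["dog"] + np.random.normal(0.0, 0.2, size=8)
print(best_match(noisy_input))   # almost always 'dog' -- meaning is distributed, not local
```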

Page 27:

Learning In A Connectionist Network

To say that you “know” something means that you have an available pattern of nodes that represents that knowledge. But how did that pattern emerge in the first place? How does a connectionist network “learn”?

Connection weights – the strength of individual connections – are adjusted locally by ongoing activity.

The adjustments are accomplished by algorithms (a simple sketch follows this list):

“What-goes-with-what”

“Feedback”
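A minimal sketch of the “what-goes-with-what” idea, assuming a simple Hebbian-style rule in which a connection weight is strengthened whenever the two units it joins are active together; a “feedback” rule would instead nudge weights in proportion to the difference between actual and desired output. All numbers here are illustrative.

```python
import numpy as np

n_units = 6
weights = np.zeros((n_units, n_units))        # connection weights start unlearned

def what_goes_with_what(weights, activation, learning_rate=0.1):
    """Hebbian-style local adjustment: strengthen the connection between
    any two units that are active at the same time."""
    weights += learning_rate * np.outer(activation, activation)
    np.fill_diagonal(weights, 0.0)             # no self-connections
    return weights

pattern = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # an experienced pattern
for _ in range(10):                            # repeated experience -> stronger links
    weights = what_goes_with_what(weights, pattern)

print(weights.round(2))
```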

Page 28:

Current Status of Connectionist Networks

Many researchers are excited about the potential of connectionist networks, in part because of a number of successes:

• learned to generalize simple shapes
• learned to read
• learned to play strategic games (e.g., chess, backgammon)
• seem to fit well with our understanding of the functioning of neurons and the brain

Others, however, are not so convinced. Learning is slow and occurs only when stimuli are presented in the correct order.