Preliminary Experiments: Learning Virtual Sensors

Page 1: Preliminary Experiments: Learning Virtual Sensors


Preliminary Experiments: Learning Virtual Sensors

• Machine learning approach: train classifiers
  – fMRI(t, t+δ) → CognitiveState

• Fixed set of possible states

• Trained per subject, per experiment

• Time interval specified

Page 2: Preliminary Experiments: Learning Virtual Sensors


Approach

• Learn fMRI(t,…,t+k) → CognitiveState

• Classifiers:
  – Gaussian Naïve Bayes, SVM, kNN

• Feature selection/abstraction
  – Select subset of voxels (by signal, by anatomy)

– Select subinterval of time

– Average activities over space, time

– Normalize voxel activities

– …
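To make the pipeline concrete, here is a minimal sketch, assuming scikit-learn and synthetic data in place of real scans; SelectKBest with ANOVA F-scores stands in for the "select voxels by signal" step, and the classifier families match the slide:

```python
# Sketch only: synthetic data stands in for fMRI; shapes are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 5000))          # trials x voxels
y = np.repeat(np.arange(12), 8)          # 12 cognitive states, balanced

def make_decoder(clf, n_voxels=500):
    # Select voxels (here: by ANOVA F-score), normalize, then classify.
    return make_pipeline(SelectKBest(f_classif, k=n_voxels),
                         StandardScaler(),
                         clf)

for name, clf in [("GNB", GaussianNB()),
                  ("SVM", SVC(kernel="linear")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(make_decoder(clf), X, y, cv=4).mean()
    print(f"{name}: {acc:.2f}")
```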

Page 3: Preliminary Experiments: Learning Virtual Sensors


Study 1: Word Categories

• Family members

• Occupations

• Tools

• Kitchen items

• Dwellings

• Building parts

• 4-legged animals
• Fish
• Trees
• Flowers
• Fruits
• Vegetables

[Francisco Pereira]

Page 4: Preliminary Experiments: Learning Virtual Sensors


Word Categories Study

• Ten neurologically normal subjects
• Stimulus:
  – 12 blocks of words:
    • Category name (2 sec)
    • Word (400 msec), blank screen (1200 msec); answer
    • Word (400 msec), blank screen (1200 msec); answer
    • …
  – Subject answers whether each word is in the category
  – 32 words per block, nearly all in category
  – Category blocks interspersed with 5 fixation blocks

Page 5: Preliminary Experiments: Learning Virtual Sensors


Training Classifier for Word Categories

Learn fMRI(t) → word-category(t)
  – fMRI(t) = 8,470 to 11,136 voxels, depending on subject

Feature selection: select n voxels
  – Best single-voxel classifiers
  – Strongest contrast between fixation and some word category
  – Strongest contrast, spread equally over ROIs
  – Randomly

Training method:
  – Train ten single-subject classifiers
  – Gaussian Naïve Bayes: P(fMRI(t) | word-category)
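The "best single-voxel classifiers" selection criterion can be sketched as follows, assuming scikit-learn; the data here are synthetic and far smaller than the real 8,470-11,136 voxels, so this is an illustration rather than the study's actual code:

```python
# Sketch: rank voxels by the CV accuracy of a GNB trained on that voxel
# alone, keep the top n, then fit GNB on the kept voxels.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(384, 1200))           # images x voxels (synthetic)
y = np.tile(np.arange(12), 32)             # 12 word categories, balanced

def single_voxel_scores(X, y, cv=4):
    # Cross-validated accuracy of a GNB trained on each voxel alone.
    return np.array([cross_val_score(GaussianNB(), X[:, [v]], y, cv=cv).mean()
                     for v in range(X.shape[1])])

n = 200                                    # voxels to keep
keep = np.argsort(single_voxel_scores(X, y))[-n:]
clf = GaussianNB().fit(X[:, keep], y)      # models P(fMRI(t) | word-category)
```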

Page 6: Preliminary Experiments: Learning Virtual Sensors


Learned Bayes Models - Means

P(BrainActivity | WordCategory = People)

Page 7: Preliminary Experiments: Learning Virtual Sensors


Learned Bayes Models - Means

P(BrainActivity | WordClass)

[Figure panels: animal words vs. people words]

Accuracy: 85%

Page 8: Preliminary Experiments: Learning Virtual Sensors


Results

Classifier outputs a ranked list of classes.
Evaluate by the fraction of classes ranked ahead of the true class:
  – 0 = perfect, 0.5 = random, 1.0 = unbelievably poor
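A sketch of this evaluation metric, with a hypothetical interface; `class_scores` is whatever per-class score the classifier produces:

```python
# Fraction of classes ranked ahead of the true class, averaged over
# examples (0 = perfect, 0.5 = chance, 1.0 = worst).
import numpy as np

def normalized_rank_error(class_scores, true_class):
    # class_scores: (n_examples, n_classes), higher = more likely.
    order = np.argsort(-class_scores, axis=1)              # best class first
    ranks = np.argmax(order == true_class[:, None], axis=1)
    return ranks.mean() / (class_scores.shape[1] - 1)

scores = np.array([[0.7, 0.2, 0.1],    # true class 0 ranked first  -> 0.0
                   [0.2, 0.7, 0.1]])   # true class 0 ranked second -> 0.5
print(normalized_rank_error(scores, np.array([0, 0])))     # 0.25
```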

Try abstracting 12 categories to 6 categories

e.g., combine “Family Members” with “Occupations”

Page 9: Preliminary Experiments: Learning Virtual Sensors


Impact of Feature Selection

Page 10: Preliminary Experiments: Learning Virtual Sensors


The "Zero Signal" Learning Setting

Goal: learn f: X → Y, or P(Y|X)

Given:

1. Training examples <Xi, Yi> where Xi = Si + Ni, with signal Si ~ P(S|Y=i) and noise Ni ~ Pnoise

2. Observed noise with zero signal, N0 ~ Pnoise (here: fixation)

[Figure: class 1 observations X1 = S1 + N1, class 2 observations X2 = S2 + N2, and pure-noise observations N0]
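One simple way to exploit the zero-signal data: estimate per-voxel noise variance from the fixation scans N0 and plug it into a Gaussian class-conditional model. A sketch with synthetic data, illustrating the setting rather than any particular algorithm from the slides:

```python
# Zero-signal sketch: class observations are signal + noise; fixation
# scans are pure noise and supply the noise-variance estimate.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50
noise_sd = rng.uniform(0.5, 2.0, size=n_voxels)      # Pnoise: per-voxel scale
mu1 = rng.normal(0, 1, n_voxels)                     # class signal means
mu2 = rng.normal(0, 1, n_voxels)

X1 = mu1 + noise_sd * rng.normal(size=(30, n_voxels))   # class 1: S1 + N1
X2 = mu2 + noise_sd * rng.normal(size=(30, n_voxels))   # class 2: S2 + N2
N0 = noise_sd * rng.normal(size=(100, n_voxels))        # fixation: pure noise

var_noise = N0.var(axis=0)            # noise variance from zero-signal data
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

def log_lik(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(x):
    # Gaussian class models: empirical class means, fixation-based variance.
    return 1 if log_lik(x, m1, var_noise) > log_lik(x, m2, var_noise) else 2

print(classify(mu1 + noise_sd * rng.normal(size=n_voxels)))   # expect 1
```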

Page 11: Preliminary Experiments: Learning Virtual Sensors


Page 12: Preliminary Experiments: Learning Virtual Sensors


[Haxby et al., 2001]

Page 13: Preliminary Experiments: Learning Virtual Sensors


Study 1: Summary

• Able to classify single fMRI image by word category block

• Feature selection important

• Is the classifier learning word category, or something else correlated with time?
  – Accurate across ten subjects
  – Relevant voxels in similar locations across subjects
  – Locations compatible with earlier studies
  – New experimental data will answer definitively

Page 14: Preliminary Experiments: Learning Virtual Sensors


Study 2: Pictures and Sentences

• Trial: read sentence, view picture, answer whether sentence describes picture

• Picture presented first in half of trials, sentence first in other half

• Image every 500 msec
• 12 normal subjects
• Three possible objects: star, dollar, plus
• Collected by Just et al.

[Xuerui Wang and Stefan Niculescu]

Page 15: Preliminary Experiments: Learning Virtual Sensors


Is it true that the star is above the plus?

Page 16: Preliminary Experiments: Learning Virtual Sensors


Page 17: Preliminary Experiments: Learning Virtual Sensors


[Example picture stimulus: + above *]

Page 18: Preliminary Experiments: Learning Virtual Sensors


Page 19: Preliminary Experiments: Learning Virtual Sensors


Is Subject Viewing Picture or Sentence?

• Learn fMRI(t, …, t+15) → {Picture, Sentence}
  – 40 training trials (40 pictures and 40 sentences)
  – 7 ROIs

• Training methods:
  – k Nearest Neighbor
  – Support Vector Machine
  – Naïve Bayes
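A sketch of this setup, assuming each trial is summarized by the 16 images fMRI(t,…,t+15) averaged over the 7 ROIs; the ROI averaging, shapes, and data are assumptions:

```python
# Sketch: each trial = 16 time points x 7 ROI supervoxels, flattened.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_rois, n_times = 80, 7, 16            # 40 picture + 40 sentence
X = rng.normal(size=(n_trials, n_rois, n_times)).reshape(n_trials, -1)
y = np.repeat([0, 1], 40)                         # 0 = Picture, 1 = Sentence

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="linear")),
                  ("NB", GaussianNB())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```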

Page 20: Preliminary Experiments: Learning Virtual Sensors


Is Subject Viewing Picture or Sentence?

• Support Vector Machine worked best on average

• Results (leave-one-out) on picture-then-sentence and sentence-then-picture data

  – Random guess = 50% accuracy

  – SVM using the pair of time slices at 5.0 and 5.5 sec after stimulus: 91% accuracy
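The leave-one-out evaluation with a time-slice pair can be sketched like this; with images every 500 msec, 5.0 s and 5.5 s correspond to samples 10 and 11, and the data and shapes are synthetic assumptions:

```python
# Sketch: leave-one-out SVM on the two fMRI images at 5.0 s and 5.5 s.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_times, n_voxels = 80, 16, 300
scans = rng.normal(size=(n_trials, n_times, n_voxels))
y = np.repeat([0, 1], 40)                 # 0 = Picture, 1 = Sentence

t_slices = [10, 11]                       # 5.0 s and 5.5 s at 500 ms sampling
X = scans[:, t_slices, :].reshape(n_trials, -1)

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.2f}")   # chance = 0.50
```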

Page 21: Preliminary Experiments: Learning Virtual Sensors


Accuracy for Single-Subject Classifiers

Page 22: Preliminary Experiments: Learning Virtual Sensors


Can We Train Subject-Independent Classifiers?

Page 23: Preliminary Experiments: Learning Virtual Sensors


Training Cross-Subject Classifiers

• Approach: define supervoxels based on anatomically defined regions of interest
  – Abstract to seven brain-region supervoxels

• Train on n-1 subjects, test on nth
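A sketch of the supervoxel abstraction and leave-one-subject-out protocol, with synthetic data and assumed ROI assignments (GNB matches the cross-subject classifier on the next slide):

```python
# Sketch: ROI-mean "supervoxels", train on n-1 subjects, test on the nth.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_subj, n_trials, n_voxels, n_rois = 12, 80, 3000, 7
roi_of_voxel = rng.integers(0, n_rois, size=n_voxels)   # assumed ROI labels

def supervoxels(scans):
    # Average voxel activity within each ROI -> (n_trials, n_rois).
    return np.stack([scans[:, roi_of_voxel == r].mean(axis=1)
                     for r in range(n_rois)], axis=1)

data = [supervoxels(rng.normal(size=(n_trials, n_voxels)))
        for _ in range(n_subj)]
labels = [np.repeat([0, 1], n_trials // 2) for _ in range(n_subj)]

accs = []
for held_out in range(n_subj):                 # leave-one-subject-out
    X_tr = np.vstack([d for i, d in enumerate(data) if i != held_out])
    y_tr = np.hstack([l for i, l in enumerate(labels) if i != held_out])
    clf = GaussianNB().fit(X_tr, y_tr)
    accs.append(clf.score(data[held_out], labels[held_out]))
print(f"Mean cross-subject accuracy: {np.mean(accs):.2f}")
```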

Page 24: Preliminary Experiments: Learning Virtual Sensors


Accuracy for Cross-Subject GNB Classifier

Page 25: Preliminary Experiments: Learning Virtual Sensors


Accuracy for Cross-Subject and Cross-Context Classifier

Page 26: Preliminary Experiments: Learning Virtual Sensors


Possible ANN to discover intermediate data abstraction across multiple subjects. Each bank of inputs corresponds to voxel inputs for a particular subject. The trained hidden layer will provide a low-dimensional data abstraction shared by all subjects. We propose to develop new algorithms to train such networks to discover multi-subject classifiers.

[Figure: one input bank per subject (Subject 1, Subject 2, Subject 3) feeding a shared hidden layer, the learned cross-subject representation, followed by the output classification]
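A minimal sketch of such a network, assuming PyTorch; the layer sizes and tanh nonlinearity are illustrative choices, not taken from the slides:

```python
# Per-subject input banks feeding a shared hidden layer and a shared
# output classifier, as described in the figure caption above.
import torch
import torch.nn as nn

class MultiSubjectNet(nn.Module):
    def __init__(self, voxels_per_subject, hidden_dim=20, n_classes=2):
        super().__init__()
        # One encoder per subject: that subject's voxels -> shared space.
        self.encoders = nn.ModuleList(
            [nn.Linear(v, hidden_dim) for v in voxels_per_subject])
        self.head = nn.Linear(hidden_dim, n_classes)   # shared classifier

    def forward(self, x, subject):
        h = torch.tanh(self.encoders[subject](x))  # cross-subject representation
        return self.head(h)

# Subjects may have different voxel counts; each uses its own input bank.
net = MultiSubjectNet([9000, 8500, 11000])
x = torch.randn(16, 8500)              # a batch of scans from subject 1
logits = net(x, subject=1)
```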