
Running Head: Mental Imagery and Visual Memory

Martarelli, C. S., Chiquet, S., Laeng, B., & Mast, F. W. (2016). Using space to represent categories: Insights from gaze position. Psychological Research, 10.1007/s00426-016-0781-2

Using Space to Represent Categories: Insights from Gaze Position

Corinna S. Martarelli 1,2*, Sandra Chiquet 1,2, Bruno Laeng 3, and Fred W. Mast 1,2

1 Department of Psychology, University of Bern, Switzerland
2 Center for Cognition, Learning and Memory, University of Bern, Switzerland
3 Department of Psychology, University of Oslo, Norway

*Corresponding author: Corinna Martarelli, Department of Psychology, University of Bern, Fabrikstrasse 8, CH-3012 Bern, Switzerland. Email: [email protected]; phone: +41 31 631 40 31; fax: +41 31 631 82 12



Using Space to Represent Categories: Insights from Gaze Position

Abstract

We investigated the boundaries between imagery, memory, and perception by measuring gaze during retrieved versus imagined visual information. Eye fixations during recall were bound to the location at which a specific stimulus was encoded. However, eye position information generalized to novel objects of the same category that had not been seen before. For example, encoding an image of a dog in a specific location enhanced the likelihood of looking at the same location during subsequent mental imagery of other mammals. The results suggest that eye movements can also be launched by abstract representations of categories and not exclusively by a single episode or a specific visual exemplar.

Keywords: mental imagery, visual memory, eye gaze, embodied cognition, prediction


    Introduction

The functional role of eye movements does not seem to be limited to the processing of visual information; eye movements are also relevant in cognitive tasks where there would seem to be no obvious reason to move one's eyes. The so-called "blank screen paradigm" illustrates that empty areas visited during imagery and memory tasks correspond to locations that were inspected during perception (Altmann, 2004; Brandt & Stark, 1997; Foerster, Carbone, Koesling, & Schneider, 2012; Fourtassi et al., 2013; Johansson, Holsanova, Dewhurst, & Holmqvist, 2012; Johansson, Holsanova, & Holmqvist, 2006; Johansson & Johansson, 2014; Laeng, Bloem, D'Ascenzo, & Tommasi, 2014; Laeng & Teodorescu, 2002; Martarelli & Mast, 2011, 2013; Richardson & Spivey, 2000; Scholz, Mehlhorn, & Krems, 2016; Spivey & Geng, 2001; Wantz, Martarelli, & Mast, 2015; Wantz, Martarelli, Cazzoli, Kalla, Müri, & Mast, 2016). In a seminal article, Noton and Stark (1971) proposed a scanpath theory, according to which, while scanning a visual scene, the brain stores the sequence of fixations in memory and reactivates it when seeing the image again or when visualizing it later in the absence of any perceptual information (Brandt & Stark, 1997). Foulsham and Kingstone (2013) revisited the scanpath theory and showed that areas seen at encoding were indeed looked at more often during recognition. However, their results showed that the order in which the areas were scanned did not correspond to the encoding order, as the scanpath theory would predict. It is noteworthy that a recent study (Bochynska & Laeng, 2015) found some evidence in favor of retention of the scanpath's sequence, so this issue remains controversial.


In previous studies, while recalling from memory the image of an animal (e.g., a dog) that was encoded in a defined area (e.g., the upper-left area of the screen), participants spent more time in the same area of interest although the screen was blank (e.g., Laeng & Teodorescu, 2002). Although many studies have replicated this finding, it remains unclear whether eye movements during memory/imagery are related only to the recollection of a previous episode with its specific element (e.g., a dog) or can also be generalized to other items in the same category (e.g., other mammals or four-legged animals). Thus, in the present study, we assessed whether the "corresponding area effect" also occurs during visualization of semantically related items (e.g., a cat) that had not been seen or associated with a specific test episode before.

By finding out whether eye fixations transfer to other category members (i.e., semantic eye fixations) and not only to specific exemplars or episodes (i.e., episodic eye fixations), we will gain considerable insight into the nature of eye movements and the underlying format of mental images. In fact, the existence of semantic eye fixations would support a view of mental imagery as intrinsically flexible and creative in kind, since it would show imagery to be a process that, although grounded in specific past experiences, is able to generalize past information to novel images (e.g., during their generation) by selecting a past episode (e.g., a dog) that shares some features with the novel item (e.g., a cat).

Another issue that researchers have debated concerns the reference frame by which episodic visual memory is encoded during imagery and used to trigger eye movements: Some believe that the reference frame is retinotopic, others believe it is a location in absolute space, while still others believe it is the object's structure. Hoover and Richardson (2008) supported the notion that the object's structure serves as the reference frame because re-fixations seem to follow the new locations of moving objects. However, the results of other experiments in which eye position was manipulated (e.g., Johansson & Johansson, 2014; Laeng & Teodorescu, 2002; Scholz et al., 2016) suggest that the location in space may be encoded by default. If locations are stored and integrated into the memory trace, then eye movements may play a critical role in many cognitive tasks.

Little is known about the role of eye movements in the representation and organization of categories. The work of Zelinsky and colleagues (e.g., Maxfield, Stalder, & Zelinsky, 2014; Zelinsky, Peng, & Samaras, 2013) highlights the role of eye movements in categorical search tasks (tasks involving finding an object from a target category) by showing that target typicality affects eye behavior. To our knowledge, no study has investigated eye behavior and categories using the blank screen paradigm. It seems possible that, in the absence of a specific motoric or spatial component that has been encoded, a new mental image could still launch eye fixations to specific locations. Hoover and Richardson (2008) suggested that the object-based effect they identified plays a role in imagining possible future events. Our working hypothesis is that, when an object's location is encoded, visual and semantic information about that object will generalize or transfer to items of the same or neighboring categories and thus trigger eye movements to the relevant areas. This will occur during visual recollection of old items as well as during mental imagery of new items, as long as the new items belong to the same category or are semantically related to old items.

    Method

    Participants

Twenty-five students (24 female, 1 male) ranging in age from 18 to 40 (M = 22.3, SD = 4.9) took part in this study. The data from one participant had to be excluded because of technical problems (the tracking ratio was 24.6%). Participants were naïve to the purpose of the experiment and received course credit for participation. They had corrected-to-normal visual acuity.

    Apparatus

Eye movements were recorded using an SMI RED tracking system (SensoMotoric Instruments, Teltow, Germany). Data were recorded with a sampling rate of 50 Hz, a spatial resolution of 0.1°, and a gaze position accuracy of 0.5°. The eye-tracking device was contact-free and determined the direction of gaze by combining the corneal reflex with the pupil location via an infrared light-sensitive video camera. The stimuli were presented on a 17-inch screen (1280 × 1024 pixels) using SMI Experiment Center software, and eye data were recorded with iView X software, both developed by SensoMotoric Instruments (Teltow, Germany).

    Stimuli

The items were 64 three-dimensional color images presented as two-dimensional projections, taken from an online database (dennisharoldsen.com). Each image belonged to one of four categories (mammals, birds, machines, home furniture). The spatial orientation of the three-dimensional objects was kept constant across categories (right/left). Images belonging to the bird category always appeared in the upper left area, furniture images in the upper right area, machine images in the lower left area, and mammal images in the lower right area. We had two versions of the experiment (Versions A and B). We presented 32 images randomly selected from the initial 64 images to half of the participants and the remaining 32 images to the other half.
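The category-to-quadrant assignment and the random split into Versions A and B can be sketched as follows. This is a minimal illustration only; the dictionary keys, function names, and fixed seed are our own assumptions, not the original experiment code:

```python
import random

# Fixed mapping from stimulus category to screen quadrant, as described above.
CATEGORY_AOI = {
    "birds": "upper-left",
    "furniture": "upper-right",
    "machines": "lower-left",
    "mammals": "lower-right",
}

def split_versions(images, seed=0):
    """Randomly split the 64 images into two 32-image sets (Versions A and B)."""
    pool = list(images)
    random.Random(seed).shuffle(pool)  # reproducible shuffle for illustration
    return pool[:32], pool[32:]
```

In the recall phase, the 32 images a participant did not see at encoding serve as that participant's new items, so the two versions are exact complements of each other.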


    Procedure

The experiment was divided into three phases: a perceptual encoding phase, a distraction phase, and a recall phase (see Figure 1 for a schematic representation of the different phases of the experiment). Participants were seated in front of the computer screen. The distance between participant and screen was approximately 70 cm. We used a 5-point calibration and validation procedure (only error values below 0.8° were accepted).

In the perceptual encoding phase, participants were presented with 32 images from the four categories mammals, birds, machines, and home furniture (eight per category). The stimuli appeared for 6 s (preceded by a fixation cross presented for 3 s). Simultaneously, participants heard the name of the presented object. All audio files were created using Audacity (http://audacity.sourceforge.net) and presented via loudspeakers. The stimuli were presented in random order.

In the recall phase, participants were given three tasks: to visualize 32 old and 32 new items (image generation task); to evaluate (true/false) a pre-recorded statement about the visual details of the object, such as "the flamingo is standing on one leg" (image inspection task); and to judge whether they had seen the item previously (old/new recognition task). Each of the 32 new items also belonged to one of the four categories (mammals, birds, machines, home furniture; 8 per category). The procedure was similar to procedures used by Kosslyn and colleagues (e.g., Thompson, Kosslyn, Sukel, & Alpert, 2001), who investigated the stages of mental imagery (image generation and image inspection). We added an old/new recognition task to ensure that the participants did not confound the items.

During the recall phase, the screen was blank white (and participants were free to move their eyes). To facilitate spontaneous eye movements, we explicitly avoided using fixation crosses, and the three tasks (image generation, image inspection, old/new recognition) were self-paced (see Fig. 1). After hearing the pre-recorded cue, participants generated the mental image (image generation) and informed the experimenter that they had done so by saying "ok." Then they heard a specific question (auditory file) and gave their response verbally (image inspection). The experimenter instantly pressed a button on the keyboard. Finally, the participants judged whether they had seen the item previously (old/new recognition task); this response was recorded by the experimenter via keyboard. Key presses initiated and terminated the recording of the eye-tracking sequence.

The 64 trials of the recall phase were presented in random order. A distraction phase (involving additions and subtractions for the duration of five minutes) was presented between the perceptual encoding and the recall phases to prevent active rehearsal. At the end of the experiment, the participants were presented with the 64 visual images (centrally on the screen) and the corresponding audio files. They were to evaluate the images with respect to category typicality on a 5-point Likert scale (1 = not at all typical, 2 = not typical, 3 = neutral, 4 = typical, 5 = very typical). The 16 images from each category (mammals, birds, machines, and home furniture) were presented blockwise. Each block was preceded by task instructions (i.e., to rate the typicality of the images for the given category), which appeared on the screen.

-------------------------------------------------
Insert Figure 1 about here
-------------------------------------------------


Figure 1. A schematic representation of the temporal order of events in the experiment, including the stimuli used during the different tasks.

    Results

The eye data analyses were based on fixations extracted using BeGaze™ software (SensoMotoric Instruments, Teltow, Germany). Fixations were detected when the sum of the gaze stream on the x- and y-axes was within an area of 100 pixels and when the fixation duration exceeded 80 ms. Blink events were automatically subtracted from the original gaze stream by the software and treated as missing data.
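BeGaze's exact event-detection algorithm is proprietary, but the two criteria reported above (summed x/y dispersion within 100 pixels, duration above 80 ms, on a 50 Hz stream) can be illustrated with a minimal dispersion-based detector. All names here are our own, and this sketch ignores blink handling:

```python
SAMPLE_MS = 20          # 50 Hz sampling -> 20 ms per sample
MAX_DISPERSION = 100    # px, x-extent plus y-extent of the window
MIN_DURATION = 80       # ms, minimum fixation duration

def detect_fixations(samples):
    """samples: list of (x, y) gaze positions.
    Returns a list of (start_index, end_index, duration_ms) fixations."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the summed x/y extent stays within 100 px.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > MAX_DISPERSION:
                break
            j += 1
        duration = (j - i + 1) * SAMPLE_MS
        if duration >= MIN_DURATION:  # discard sub-80-ms clusters
            fixations.append((i, j, duration))
        i = j + 1
    return fixations
```

For example, ten stable samples at one position form a 200 ms fixation, while a two-sample (40 ms) excursion is discarded as too short.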

The screen was divided at the vertical and horizontal midlines into four equally sized areas of interest (AOIs). The eye data from the perceptual encoding phase and from the image generation task, the image inspection task, and the old/new recognition task of the recall phase were analyzed. The main analyses involved repeated measures ANOVAs in order to compare the time spent in the AOIs in which the pictures were displayed with the mean time spent fixating one of the non-corresponding areas, both for old and new items. When Mauchly's test indicated that the sphericity assumption was violated (p < .05), we used the Huynh-Feldt correction to adjust the degrees of freedom. We report partial η² and Cohen's d as measures of effect size.
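The quadrant AOIs and the dwell-time measure can be sketched as follows. This is an illustrative reconstruction, not the original analysis script; the function names and the tie-breaking at the midlines are our own choices:

```python
WIDTH, HEIGHT = 1280, 1024  # screen resolution reported in Apparatus

def quadrant(x, y):
    """Return the AOI label for a gaze position, splitting at the midlines."""
    horiz = "left" if x < WIDTH / 2 else "right"
    vert = "upper" if y < HEIGHT / 2 else "lower"
    return f"{vert}-{horiz}"

def dwell_times(fixations):
    """fixations: list of (x, y, duration_ms).
    Returns total dwell time (ms) per quadrant AOI."""
    totals = {"upper-left": 0, "upper-right": 0,
              "lower-left": 0, "lower-right": 0}
    for x, y, dur in fixations:
        totals[quadrant(x, y)] += dur
    return totals
```

On a trial cueing, say, a bird, the upper-left quadrant would count as the corresponding AOI and the mean of the other three quadrants as the non-corresponding baseline.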

Validity of stimulus material

We tested whether the four categories differed with respect to perceived typicality and whether the participants' ratings in Version A of the experiment (n = 12) differed from the participants' ratings in Version B of the experiment (n = 12).¹ These analyses are reported in Appendix A. The inclusion of experiment version (A, B) in the eye data analyses did not change the results, and the factor turned out to be non-significant. Thus, we report the results without experiment version included in the model. The category typicality ratings are reported in Appendix A: They were relatively high, with an overall mean of 4.01. We concluded that the objects were valid stimuli.

Behavioral data

Accuracy. Participants were correct in 70.7% (SD = 6.8) of the trials with old stimuli in the image inspection task. Participants were correct in 84.5% (SD = 9.3) of the trials with old stimuli in the old/new recognition task and in 89.6% (SD = 7.1) of the trials with new stimuli in the old/new recognition task.

¹ In Version A of the experiment, participants were presented 32 images that had been randomly selected from the initial 64 images; in Version B they were presented the remaining 32 images. The images that were presented in the encoding phase constituted the old items in the recall phase. The remaining 32 images were used as new items in the recall phase.

Response times (RTs). After we had removed the outliers (RTs > M + 3 × SD for each participant for each task; in total, 2% of all trials), we performed a repeated measures ANOVA on RTs for correct trials, with task (image generation, image inspection, old/new recognition) and recognition (old, new) as independent variables. The analysis revealed a significant interaction between task and recognition, F(1.701, 39.124) = 8.434, p = .002, partial η² = .268. Bonferroni-corrected post hoc tests showed significant differences between old (M = 3637, SD = 830) and new items (M = 4019, SD = 1323) in the image generation task (p = .013), and between old (M = 5667, SD = 872) and new items (M = 5457, SD = 794) in the image inspection task (p = .045), whereas the difference between old (M = 1931, SD = 786) and new items (M = 1983, SD = 871) in the old/new recognition task was non-significant (p = .462). Interestingly, participants were slower in generating new images, but faster in inspecting new images, as compared to old images. The analysis also revealed a significant main effect of task, F(2, 46) = 97.000, p < .001, partial η² = .808. Bonferroni-corrected post hoc tests showed significant differences between all tasks, with the RTs in the image inspection task being the longest and those in the old/new recognition task being the shortest (p < .001). The main effect of recognition (old, new) was non-significant, F(1, 23) = 1.187, p = .287, partial η² = .049.
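The outlier rule reported above (removing RTs greater than the cell mean plus three standard deviations, separately per participant and task) can be sketched as follows; this is a minimal illustration, and the use of the sample standard deviation is our own assumption:

```python
from statistics import mean, stdev

def trim_outliers(rts):
    """Remove RTs above M + 3 * SD within one participant/task cell."""
    if len(rts) < 2:
        return list(rts)  # SD undefined for fewer than two values
    cutoff = mean(rts) + 3 * stdev(rts)
    return [rt for rt in rts if rt <= cutoff]
```

Applied to each participant-by-task cell, this kind of trimming removed 2% of all trials in the present data.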

The corresponding area effect

Perceptual encoding. As a manipulation check, we compared the time spent in the area in which the stimuli were presented (5251 ms, SD = 374) with the mean time spent in the other three areas (64 ms, SD = 45), t(23) = 63.862, p < .001, d = 13.036. Participants spent significantly more time in the areas in which the stimuli were presented, thus confirming proper encoding of the stimuli.

Recall phase. Analyses of gaze position during the recall phase were based on correct trials of the old/new recognition task for both new and old items and on correct trials of the inspection task for old items. We conducted a repeated measures ANOVA with gaze position (corresponding area, non-corresponding area), recognition (old, new), and task (image generation, image inspection, and old/new recognition) as within-subject factors. The dependent variable was dwell time in the AOIs (ms). Means are reported in Figure 2. The analysis revealed a significant two-way interaction between task (image generation, image inspection, old/new recognition) and gaze position (corresponding area, non-corresponding area), F(2, 46) = 6.372, p = .004, partial η² = .217. Post hoc tests with Bonferroni correction indicated that, in all tasks, participants spent more time in the corresponding area than in the non-corresponding area (p < .002). The effect was the largest in the image inspection task (1630 ms, SD = 706, in the corresponding area vs. 928 ms, SD = 398, in the non-corresponding area), followed by the image generation task (1053 ms, SD = 542, in the corresponding area vs. 697 ms, SD = 333, in the non-corresponding area), and the old/new recognition task (549 ms, SD = 324, in the corresponding area vs. 351 ms, SD = 180, in the non-corresponding area).

The two-way interaction between task (image generation, image inspection, old/new recognition) and recognition (old, new) was also significant, F(1.64, 37.63) = 6.005, p = .008, partial η² = .207. Post hoc tests with Bonferroni correction showed that only the difference between old items and new items in the image generation task was significant (p = .005). As already illustrated in the RT analyses, participants were slower in the image generation task with new items as compared to old items. The two-way interaction between recognition (old, new) and gaze position (corresponding area, non-corresponding area) was not significant (p = .107), suggesting that the pattern of results (more time spent in the corresponding area than in the other areas) was similar for old and new items. The three-way interaction was not significant (p = .600). The main effect of task (image generation, image inspection, old/new recognition) yielded a significant result, F(2, 46) = 65.664, p < .001, partial η² = .741. Post hoc tests with Bonferroni correction revealed that all differences were significant (p < .001), with the slowest responses being given in the image inspection task, followed by the image generation task, and the fastest responses being given in the old/new recognition task (see the RT analyses above). The main effect of gaze position (corresponding area, non-corresponding area) was also significant, F(1, 23) = 18.61, p < .001, partial η² = .447 (with more time spent in the corresponding areas than in the non-corresponding areas). However, the main effect of recognition (old, new) was not significant (p = .721), showing that there was no difference in time between old and new items overall (see RT analyses). The same repeated measures ANOVA with number of fixations as the dependent variable yielded the same results (no differences in rejecting or accepting the null hypothesis).

-------------------------------------------------
Insert Figure 2 about here
-------------------------------------------------


Figure 2. Mean dwell time (in ms) in the corresponding vs. other AOIs for old and new items in the image generation, image inspection, and old/new recognition tasks. Error bars indicate 1 SEM.

Given that percentages allow for comparisons, in Appendix B, we also report the percentage of time spent in the corresponding AOIs (chance level 25%). In addition, we performed a repeated measures ANOVA on the percentage of time spent in the corresponding AOIs, with recognition (old, new) and task (image generation, image inspection, and old/new recognition) as within-subject factors. Only the main effect of recognition (old, new) was significant, F(1, 23) = 5.55, p = .027, partial η² = .195. Participants spent significantly more time in the corresponding area with old items (38%, SEM = .035) than with new items (34%, SEM = .026). The main effect of task and the interaction were non-significant (p > .257), suggesting that the pattern of results (time spent in the corresponding AOIs) was roughly equal across tasks.
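The percentage measure can be illustrated as follows, using hypothetical dwell times; with four equally sized AOIs, looking at random corresponds to a 25% chance level:

```python
def percent_corresponding(dwell, corresponding):
    """dwell: dict mapping AOI label -> dwell time in ms.
    Returns the percentage of total AOI time spent in the corresponding AOI."""
    total = sum(dwell.values())
    if total == 0:
        return 0.0
    return 100.0 * dwell[corresponding] / total

# Hypothetical trial: a bird cue, so "upper-left" is the corresponding AOI.
dwell = {"upper-left": 380, "upper-right": 200,
         "lower-left": 220, "lower-right": 200}
share = percent_corresponding(dwell, "upper-left")  # 38.0, above the 25% chance level
```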

In the analyses presented here, we pooled the dwell time in the non-corresponding areas. In order to ensure that pooling did not introduce bias, we computed separate analyses with unpooled data. The results are reported in Appendix C.

Error trials. We also conducted a repeated measures ANOVA on fixations during error trials, with gaze position (corresponding area, non-corresponding area) and task (image generation, image inspection, and old/new recognition) as within-subject factors. Participants spent 902 ms (SD = 767) in the corresponding area vs. 807 ms (SD = 718) in the non-corresponding area in the image generation task; they spent 1405 ms (SD = 1056) in the corresponding area vs. 1000 ms (SD = 588) in the non-corresponding area in the image inspection task; and 512 ms (SD = 442) in the corresponding area vs. 510 ms (SD = 371) in the non-corresponding area in the old/new recognition task. The analyses revealed a main effect of task, F(2, 46) = 30.647, p < .001, partial η² = .571. However, neither the interaction between task and gaze position, F(2, 46) = 2.130, p = .130, partial η² = .085, nor the main effect of gaze position, F(1, 23) = .761, p = .392, partial η² = .032, was significant.

    Discussion

In this experiment, participants were presented with objects from four categories. Objects from a particular category always appeared in the quadrant assigned to that category. During visualization of the objects, participants spent more time in the corresponding area, for both old and new items. Participants were neither asked to encode the location of the objects, nor were they informed about the four categories and their spatial arrangement. Nonetheless, eye gaze position indicated that they not only encoded the specific spatial information of the objects along with other visual and semantic properties, but that the encoded spatial information generalized to novel objects from the four categories. The present research extends findings on eye gaze position during visual memory by providing information about the representation of objects and categories.

Specifically, participants spent more time in the same area with remembered items (e.g., a Vespa scooter) and, interestingly, also when imagining novel items from the same category (e.g., a bicycle). Hence, location memory transferred to other objects from the same category. To explain this transfer, we assume that each object's category is automatically activated when the object is memorized (e.g., Hintzman, 1986; Jamieson, Crump, & Hannah, 2012). Thus, memorizing an object in a given location strengthened the connections between the object and its position as well as with the object's category. Since different objects belonging to the same category were consistently presented at the same position, the present findings show that participants imagined new objects as being in the same location as previous objects of the same category. We also suggest that a new object activates a similar category stored in episodic memory, which in turn activates a position that is congruent with past viewing experience.

Given that the new items were not previously associated with oculomotor or spatial information during encoding, we conclude that fixations to locations during encoding are not required to cause systematic eye movements in the blank screen paradigm. This finding lends support to the theory that eye movements during retrieval can be launched by spatial representations associated with a semantic category (e.g., Richardson & Spivey, 2000). One possibility is that not only object location but also category location is encoded along with visual and semantic information and thus will trigger eye movements to the relevant areas during both visual memory and mental imagery of objects belonging to the same categories. Eye movements represent the spatial extent that real and possible images embody. Spatial information as revealed by eye movements can play an active role in the representation of categories. Alternatively, the category is automatically accessed after an item (whether previously encoded or novel) has been presented, and the locations associated with the same or a neighboring category are prioritized when a position is assigned to the visual image.

Interestingly, participants were slower in generating new images, but faster in inspecting new images, as compared to old images. This pattern illustrates that it is more demanding for participants to create a new mental image than to retrieve the mental image of a previously inspected item. However, when it comes to image inspection, it is easier to inspect a mental image that has been created by the individual than an image that has been learned from an external template. Alternatively, it is possible that the mental image generated for a new item is closer to a prototypical or abstract version of that item (e.g., a four-legged animal), which is less concrete or detailed than an encoded exemplar from a specific category. The possible absence of specific details in the new images could be an alternative explanation for the faster responses with this class of items.

Furthermore, we were unable to find the corresponding area effect for error trials, thus replicating the previous findings of Martarelli and Mast (2011) with visual material and Scholz et al. (2016) with verbal material. This result supports the suggestion by Ferreira, Apel, and Henderson (2008) and Richardson, Altmann, Spivey, and Hoover (2009) that many aspects of an event, including spatial information, are activated while retrieving semantic information related to that event. The absence of the corresponding area effect for error trials supports research showing the functional role of eye movements during mental imagery (e.g., Johansson & Johansson, 2014; Laeng et al., 2014). However, the best way to understand the sort of location that is being encoded remains the manipulation of eye position. One caveat is that the present results are correlational in nature.

Interestingly, similar eye movements for objects belonging to the same category suggest that there are tight links between spatial and conceptual representations. This result is consistent with perceptual-motor theories of cognitive representation (e.g., Barsalou, 2008). Lakoff and Johnson (1980) proposed that there is a metaphorical mapping for the concept of "category," which is represented by the image of a container. Boot and Pecher (2011) found that the understanding of the concept "category" is indeed grounded in the concrete representation of the image of a container. They presented pictures of animals and vehicles outside or inside a frame, and the participants were to decide whether the two images presented on the screen were animals (or vehicles, respectively) or not. The authors found faster responses when items that belonged to the same category were both presented in a frame. In our paradigm, there was no frame surrounding the items (except for the computer screen itself), but we think that our task activated, at least to some extent, the container image schema. Indeed, the position of the items was highly predictable (same position for each category). Thus, the gaze to a specific location structured the relationship between item and category.

Future research will need to be carried out in order to better investigate the role of prediction in memory performance. For example, we kept the typicality of the items constant, but it would be interesting to consider the degree of typicality (i.e., the distance from the prototype; Rosch & Mervis, 1975). Another point that needs further investigation is perceptual similarity, in order to disentangle the potential influences of perceptual similarity and of the category the stimuli belong to. Future research should also vary the position of objects of the same category (in the encoding phase), so that participants are unable to predict the location of an object belonging to a given category and thus cannot use this information to organize their knowledge (in the recall phase).

In conclusion, the results of this study show that eye gaze can be used strategically to organize knowledge. The eye gaze effects observed with the blank screen paradigm strongly suggest that conceptual knowledge is grounded in sensorimotor experience.

Conflict of Interest

The authors declare that they have no conflict of interest.


    References

    Altmann,G.T.(2004).Language-mediatedeyemovementsintheabsenceofavisualworld:The

    'blankscreenparadigm'.Cognition,93(2),B79-87.

    Barsalou,L.W.(2008).Groundedcognition.AnnualReviewofPsychology,59,617-645.

    Boot,I.,&Pecher,D.(2011).Representationofcategories.ExperimentalPsychology,58(2),162-170.

    Bochynska,A.&Laeng,B.(2015).Trackingdownthepathofmemory:Eyescanpathsfacilitate

    retrievalofvisuospatialinformation.CognitiveProcessing,16(1),159-163.

    Brandt,S.A.,&Stark,L.W.(1997).Spontaneouseyemovementsduringvisualimageryreflectthe

    contentofthevisualscene.JournalofCognitiveNeuroscience,9(1),27-38.

    Ferreira,F.,Apel,J.,&Henderson,J.M.(2008).Takinganewlookatlookingatnothing.Trendsin

    CognitiveSciences,12(11),405-410.

    Foerster,R.M.,Carbone,E.,Koesling,H.,&Schneider,W.X.(2012).Saccadiceyemovementsinthe

    darkwhileperforminganautomatizedsequentialhigh-speedsensorimotortask.Journalof

    Vision,12(2),1-15.

    Foulsham,T.,&Kingstone,A.(2013).Fixation-dependentmemoryfornaturalscenes:An

    experimentaltestofscanpaththeory.JournalofExperimentalPsychology:General,142(1),

    41-56.

    Fourtassi,M.,Hajjioui,A.,Urquizar,C.,Rossetti,Y.,Rode,G.,&Pisella,L.(2013).Iterative

    fragmentationofcognitivemapsinavisualimagerytask.PLoSONE,8(7),e68560.

    Hintzman,D.L.(1986).“Schemaabstraction”inamultiple-tracememorymodel.Psychological

    Review,93(4),411–428.

    Hoover,M.A.,&Richardson,D.C.(2008).Whenfactsgodowntherabbithole:Contrastingfeatures

    andobjecthoodasindexestomemory.Cognition,108(2),533-542.

    Jamieson,R.K.,Crump,M.J.C.,&Hannah,S.D.(2012).Aninstancetheoryofassociativelearning.

  • RunningHead:MentalImageryandVisualMemory

    20

    LearningandBehavior,40(1),61–82.

Johansson, R., Holsanova, J., Dewhurst, R., & Holmqvist, K. (2012). Eye movements during scene recollection have a functional role, but they are not reinstatements of those produced during encoding. Journal of Experimental Psychology: Human Perception and Performance, 38(5), 1289-1314.

Johansson, R., Holsanova, J., & Holmqvist, K. (2006). Pictures and spoken descriptions elicit similar eye movements during mental imagery, both in light and in complete darkness. Cognitive Science, 30(6), 1053-1079.

Johansson, R., & Johansson, M. (2014). Look here, eye movements play a functional role in memory retrieval. Psychological Science, 25(1), 236-242.

Laeng, B., Bloem, I. M., D'Ascenzo, S., & Tommasi, L. (2014). Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition, 131(2), 263-283.

Laeng, B., & Teodorescu, D.-S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26(2), 207-231.

Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: Chicago University Press.

Martarelli, C. S., & Mast, F. W. (2011). Preschool children's eye-movements during pictorial recall. British Journal of Developmental Psychology, 29, 425-436.

Martarelli, C. S., & Mast, F. W. (2013). Eye movements during long-term pictorial recall. Psychological Research, 77(3), 303-309.

Maxfield, J. T., Stalder, W. D., & Zelinsky, G. J. (2014). Effects of target typicality on categorical search. Journal of Vision, 14(12), 1-11.

Noton, D., & Stark, L. W. (1971). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research, 11(9), 929-942.

Richardson, D. C., Altmann, G. T. M., Spivey, M. J., & Hoover, M. A. (2009). Much ado about eye movements to nothing: A response to Ferreira et al.: Taking a new look at looking at nothing. Trends in Cognitive Sciences, 13(6), 235-236.

Richardson, D. C., & Spivey, M. J. (2000). Representation, space and Hollywood Squares: Looking at things that aren't there anymore. Cognition, 76(3), 269-295.

Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7(4), 573-605.

Scholz, A., Mehlhorn, K., & Krems, J. F. (2016). Listen up, eye movements play a role in verbal memory retrieval. Psychological Research, 80(1), 149-158.

Spivey, M. J., & Geng, J. J. (2001). Oculomotor mechanisms activated by imagery and memory: Eye movements to absent objects. Psychological Research, 65(4), 235-241.

Thompson, W. L., Kosslyn, S. M., Sukel, K. E., & Alpert, N. M. (2001). Mental imagery of high- and low-resolution gratings activates area 17. NeuroImage, 14(2), 454-464.

Wantz, A. L., Martarelli, C. S., Cazzoli, D., Kalla, R., Müri, R., & Mast, F. W. (2016). Disrupting frontal eye-field activity impairs memory recall. Neuroreport, 27(6), 374-378.

Wantz, A. L., Martarelli, C. S., & Mast, F. W. (2015). When looking back to nothing goes back to nothing. Cognitive Processing, 17(1), 105-114.

Zelinsky, G. J., Peng, Y., & Samaras, D. (2013). Eye can read your mind: Decoding gaze fixations to reveal categorical search targets. Journal of Vision, 13(14), 1-13.


Appendix A: Category typicality rating

We conducted a mixed ANOVA with recognition (old, new) and category (mammals, birds, machines, and home furniture) as within-subject factors, experiment version (A, B) as a between-subjects factor, and typicality ratings as the dependent variable. The means are reported in Table 1. The results revealed a significant three-way interaction, F(3, 66) = 3.38, p = .023, partial η² = .133. Bonferroni-corrected post hoc tests showed significant differences in Version A of the experiment in the birds category (Old: 4.21, SD = .79; New: 3.83, SD = .62), p = .021, and in the home furniture category (Old: 3.82, SD = .56; New: 4.18, SD = .42), p = .008. The two-way interaction between category and recognition also yielded a significant result, F(3, 66) = 3.79, p = .014, partial η² = .147. Bonferroni-corrected post hoc tests showed a significant difference in the mammals category (Old: 4.32, SD = .55; New: 4.21, SD = .63), p = .040. The results also revealed a significant main effect of category, F(3, 66) = 13.54, p < .001, partial η² = .381. Bonferroni-corrected post hoc tests showed significant differences between the birds (4.04, SD = .56) and the machines (3.61, SD = .62), p = .003, between the home furniture (4.12, SD = .44) and the machines (3.61, SD = .62), p = .002, and between the mammals (4.26, SD = .58) and the machines (3.61, SD = .62), p < .001. Machines were judged as less category-typical than mammals, birds, and home furniture. The analysis further revealed no main effect of recognition (old vs. new), no main effect of experiment version (A, B), and no significant interaction between category and condition (p > .328).
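The Bonferroni correction applied to the post hoc tests above amounts to multiplying each raw p value by the number of comparisons, capped at 1. A minimal sketch of this adjustment (the p values below are hypothetical, not the study's):

```python
def bonferroni(p_values):
    """Bonferroni correction: scale each raw p value by the number
    of comparisons, capping the adjusted value at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p values from three pairwise comparisons
raw = [0.001, 0.02, 0.2]
adjusted = bonferroni(raw)
print(adjusted)
```

With three comparisons, a raw p of .02 remains significant at the .05 level only if .02 × 3 = .06 clears the threshold, which it does not; this is why the corrected tests are more conservative than the raw ones.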

Table 1
Participants' mean typicality ratings (and standard deviations) for old vs. new items in the four categories (mammals, birds, home furniture, and machines) by experiment version (A, B).


Category          Recognition   Version A of experiment   Version B of experiment
Mammals           Old           4.25 (SD = .56)           4.40 (SD = .54)
                  New           4.12 (SD = .67)           4.30 (SD = .61)
Birds             Old           4.21 (SD = .79)           4.08 (SD = .54)
                  New           3.83 (SD = .62)           4.05 (SD = .54)
Home furniture    Old           3.82 (SD = .56)           4.25 (SD = .43)
                  New           4.18 (SD = .42)           4.25 (SD = .47)
Machines          Old           3.42 (SD = .68)           3.80 (SD = .59)
                  New           3.53 (SD = .73)           3.69 (SD = .53)

Note. Values set in bold indicate significant differences within categories.


Appendix B: The corresponding area effect in percentages

Percentage of time spent in the corresponding AOIs (where the stimuli were displayed previously) during the image generation, image inspection, and old/new recognition tasks, separated for old items (correct trials according to both the specific question and the old/new recognition task) and new items (correct trials according to the old/new recognition task). One-sample t tests were computed to compare the percentages of time with a chance level of 25% of the total time (four areas).
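A one-sample t test of this kind compares participant-level dwell-time proportions against the 25% chance level. The following sketch uses the Python standard library and hypothetical proportions (not the study's data); the effect size here is the common one-sample Cohen's d, (M − μ)/SD, which may differ from the formula used in the paper:

```python
import math
import statistics

def one_sample_t(data, mu):
    """One-sample t test against a fixed value mu:
    t = (M - mu) / (SD / sqrt(n)), with Cohen's d = (M - mu) / SD."""
    n = len(data)
    m = statistics.mean(data)
    sd = statistics.stdev(data)          # sample SD (n - 1 denominator)
    t = (m - mu) / (sd / math.sqrt(n))
    d = (m - mu) / sd
    return t, d

# Hypothetical per-participant dwell-time proportions in the corresponding AOI
dwell = [0.36, 0.30, 0.40, 0.22, 0.35, 0.28]
t, d = one_sample_t(dwell, 0.25)         # chance = 1 of 4 equal areas
print(round(t, 2), round(d, 2))
```

The test is two-tailed in the paper; the p value would be obtained from the t distribution with n − 1 degrees of freedom (here, scipy.stats.t.sf could supply it).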

Dwell time in the corresponding AOIs (%)   Old items                   New items
Image generation task                      36% (SD = 14%)              33% (SD = 10%)
                                           t(23) = 3.934, p = .001     t(23) = 3.515, p = .002
                                           d = 1.641                   d = 1.466
Image inspection task                      40% (SD = 23%)              36% (SD = 16%)
                                           t(23) = 3.264, p = .003     t(23) = 3.240, p = .004
                                           d = 1.361                   d = 1.351
Old/new recognition task                   38% (SD = 19%)              33% (SD = 15%)
                                           t(23) = 3.358, p = .003     t(23) = 2.545, p = .018
                                           d = 1.400                   d = 1.061


Appendix C: The corresponding area effect compared to the three non-corresponding AOIs

In order to exclude a possible bias caused by pooling the non-corresponding AOIs, we also computed analyses with unpooled data. As did Richardson and Spivey (2000), we "clock coded" the data: the corresponding AOI was labeled 0, and the other three areas, moving clockwise, were labeled 1 to 3 (Tables 2, 3).
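Clock coding can be expressed compactly: with the four quadrants listed in clockwise order, an AOI's label is its clockwise distance from the corresponding AOI. A minimal sketch (the quadrant names are illustrative, not labels from the study):

```python
# Screen quadrants listed in clockwise order (illustrative names)
QUADRANTS = ["top_left", "top_right", "bottom_right", "bottom_left"]

def clock_code(aoi, corresponding):
    """Label the corresponding AOI 0 and the remaining AOIs 1-3,
    counting clockwise from the corresponding AOI."""
    return (QUADRANTS.index(aoi) - QUADRANTS.index(corresponding)) % 4

print(clock_code("top_left", "top_left"))    # 0: the corresponding AOI itself
print(clock_code("top_right", "top_left"))   # 1: first AOI clockwise
print(clock_code("top_left", "top_right"))   # 3: last AOI clockwise
```

Coding every trial relative to its own corresponding quadrant makes dwell times comparable across trials regardless of where on the screen the stimulus originally appeared.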

Table 2
Mean dwell time (in ms) during mental imagery of old items

Mean dwell time      Image generation task   Image inspection task   Old/new task
in the areas         M (SD)                  M (SD)                  M (SD)
Corresponding AOI    1005 (573)              1708 (900)              568 (359)
AOI 1                544 (267)               912 (686)               311 (233)
AOI 2                594 (308)               832 (458)               333 (213)
AOI 3                669 (415)               946 (619)               353 (198)

Note. The analysis of eye gaze position during mental imagery of old items showed that, in all three tasks (image generation, image inspection, old/new recognition), participants spent more time in the corresponding area (where they had previously seen the object) than in the non-corresponding AOIs (AOI 1, AOI 2, AOI 3). A repeated measures ANOVA revealed a significant interaction between gaze position (corresponding area, non-corresponding area) and task (image generation, image inspection, old/new recognition), F(3.44, 79.08) = 4.567, p = .004, partial η² = .166, a significant main effect of gaze position, F(1.96, 45.05) = 10.689, p < .001, partial η² = .317, and a significant main effect of task, F(2, 46) = 89.126, p < .001, partial η² = .795. Only correct trials (according to both the specific question and the old/new task) were considered.


Table 3
Mean dwell time (in ms) during mental imagery of new items

Mean dwell time      Image generation task   Image inspection task   Old/new task
in the areas         M (SD)                  M (SD)                  M (SD)
Corresponding AOI    1084 (496)              1519 (632)              495 (269)
AOI 1                785 (506)               920 (469)               408 (265)
AOI 2                781 (409)               1009 (476)              363 (265)
AOI 3                758 (427)               903 (377)               336 (226)

Note. The analysis of eye gaze position during mental imagery of new items showed that, in all three tasks (image generation, image inspection, old/new recognition), participants spent more time in the corresponding area (where objects from the same category had appeared previously) than in the non-corresponding AOIs (AOI 1, AOI 2, AOI 3). A repeated measures ANOVA revealed a significant interaction between gaze position (corresponding area, non-corresponding area) and task (image generation, image inspection, old/new recognition), F(3.12, 71.87) = 4.773, p = .004, partial η² = .172, a significant main effect of gaze position, F(1.98, 45.45) = 10.010, p < .001, partial η² = .303, and a significant main effect of task, F(2, 46) = 53.541, p < .001, partial η² = .700. Only correct trials (according to the old/new recognition task, i.e., new items correctly identified as new) were considered.