
A low-cost touchscreen operant chamber using a Raspberry Pi™

James D. O'Leary¹ · Olivia F. O'Leary¹,² · John F. Cryan¹,² · Yvonne M. Nolan¹,²

¹ Department of Anatomy and Neuroscience, University College Cork, Cork, Ireland
² APC Microbiome Institute, University College Cork, Cork, Ireland

Correspondence: Yvonne M. Nolan, [email protected]

Published online: 8 March 2018. © Psychonomic Society, Inc. 2018
Behavior Research Methods (2018) 50:2523–2530. https://doi.org/10.3758/s13428-018-1030-y

Abstract
The development of a touchscreen platform for rodent testing has allowed new methods for cognitive testing that have been back-translated from clinical assessment tools to preclinical animal models. This platform for cognitive assessment in animals is comparable to human neuropsychological tests such as those employed by the Cambridge Neuropsychological Test Automated Battery, and thus has several advantages compared to the standard maze apparatuses typically employed in rodent behavioral testing, such as the Morris water maze. These include improved translation of preclinical models, as well as high throughput and the automation of animal testing. However, these systems are relatively expensive, which can impede progress for researchers with limited resources. Here we describe a low-cost touchscreen operant chamber based on the single-board computer Raspberry Pi™, which is capable of performing tasks similar to those supported by current state-of-the-art systems. This system provides an affordable alternative for cognitive testing in a touchscreen operant paradigm for researchers with limited funding.

Keywords: Cognition · Touchscreen operant chamber · Operant behavior · Raspberry Pi · Arduino · Automation

Operant-based behavioral tasks are standard techniques used in experimental psychology in which a rodent learns to press a lever or turn a wheel to receive an appetitive or aversive response (Crawley, 2007; Skinner, 1938). Standard operant paradigms, such as fixed-ratio (in which a reward is delivered every nth lever press) or variable-ratio (in which a reward is delivered after a pseudorandom number of lever presses) training, have been used to investigate addiction, impulsivity, and motivation (Halladay, Kocharian, & Holmes, 2017; Perry, Larson, German, Madden, & Carroll, 2005; Salamone & Correa, 2002). These operant-based tasks have been further developed over the years, particularly through the implementation of a computer touchscreen in place of levers. Touchscreen operant chambers have been used in a variety of species, including rodents (McTighe, Mar, Romberg, Bussey, & Saksida, 2009), birds (Cook, 1992), dogs (Range, Aust, Steurer, & Huber, 2008), and reptiles (Mueller-Paul et al., 2014). The development of a touchscreen platform for behavioral testing has allowed new methods for cognitive assessment in preclinical models (Bartko, Vendrell, Saksida, & Bussey, 2011; Bussey et al., 2012; Horner et al., 2013; Nithianantharajah et al., 2015). These methodologies are comparable to the human neuropsychological tests employed by the Cambridge Neuropsychological Test Automated Battery, such as the pairwise associative learning (PAL) task and the trial-unique nonmatching to location (TUNL) task (Bartko et al., 2011; Bussey et al., 2012; Kim, Romberg, et al., 2015b; Mar et al., 2013; Nithianantharajah et al., 2015; Talpos, Winters, Dias, Saksida, & Bussey, 2009). Just as patients in the clinic use an iPad or computer to respond to visual and audio cues during neurocognitive assessment, rodents can view a computer touchscreen and respond in a similar fashion (via nose pokes rather than finger touches) during behavioral testing in an operant chamber. Very often the rodent tasks use visual stimuli similar or identical to the stimuli used for testing in the clinic.

Using this platform, the rodent is presented with an image on the computer screen and, depending on the task paradigm, is trained to respond to either the specific image or the location of the image via nose pokes on the touch-sensitive computer screen. A correct response elicits a food reward, whereas an incorrect response triggers a timeout. Through repeated trials, the rodent's performance can be assessed and the underlying neurobiology required for the task can be studied. Currently, several tasks are available that assess different aspects of cognitive function and associated neurophysiology, such as visual discrimination and reversal learning, the five-choice serial reaction time task, and the continuous performance test, which all measure executive functions, such as cognitive flexibility, decision making, and attention, and have been shown to be sensitive to prefrontal cortex manipulation in rats and mice (Kim, Hvoslef-Eide, et al., 2015a; Mar et al., 2013). In addition, the location discrimination and TUNL tasks, which measure spatial learning, have been shown to depend on adult hippocampal neurogenesis and an intact hippocampal formation in rats and mice (Clelland et al., 2009; Creer, Romberg, Saksida, van Praag, & Bussey, 2010; McTighe et al., 2009; Oomen et al., 2013; Talpos, McTighe, Dias, Saksida, & Bussey, 2010). Similarly, the PAL task has been shown to be sensitive to glutamatergic inactivation of the hippocampus in rats (Talpos et al., 2009). Furthermore, impaired performance in the PAL task has been shown in patients with schizophrenia (Wood et al., 2002), and PAL performance has been identified as a predictive measure of Alzheimer's disease pathology (Swainson et al., 2001).

The touchscreen operant platform for behavioral assessment in animals has several advantages relative to the standard maze apparatus commonly employed in rodent behavioral testing, such as the Morris water maze or radial arm maze. First, it enables the design of tasks that better represent human neuropsychological tests, and thus it is highly translatable. For example, the audiovisual stimuli, as well as the task paradigm itself (such as the PAL task), can be set up so that they are identical to those used in tasks for humans (Talpos et al., 2009). Second, the touchscreen operant platform can be used to conduct behavioral assessments as part of a test battery. Although this is also the case for tasks using standard maze apparatuses, such as the Morris water maze or radial arm maze, the touchscreen platform enables a consistent environment and behavioral response/reward system, thereby reducing any potential confounds from employing different maze equipment and paradigms. Third, the platform is automated, so a number of chambers can be used simultaneously for behavioral assessments. This increases the throughput of experimental animals and reduces the burden of labor on the experimenter. Although the touchscreen system has advantages over standard maze paradigms, current systems can cost upward of €25,000 for a four-chamber system. This can be prohibitively expensive for researchers with limited resources, as is often the case for early-career scientists or those in the developing world. Thus, given the relatively low cost of the components, the option of building a touchscreen chamber in-house is both attractive and viable.

Indeed, several groups have already reported building low-cost operant chambers. Steurer, Aust, and Huber (2012) demonstrated a low-cost touchscreen operant chamber that could be used by a variety of species, such as pigeons, tortoises, and dogs. This system was significantly cheaper than commercial alternatives, at approximately €3,000. Moreover, Pineño (2014) further reduced the price point of an in-house system by building a low-cost touchscreen operant chamber using a touch-sensitive iPod and an Arduino microcontroller. This group was the first to demonstrate a low-cost touchscreen operant chamber using off-the-shelf electronics for a fraction of the cost of commercially available alternatives, at only a few hundred euros. Although the system is innovative, its small touchscreen display limits its ability to run tasks similar to those of the current state-of-the-art systems, such as the Bussey–Saksida chambers, although the addition of an iPad with a larger screen may help to overcome this limitation (Pineño, 2014). It is worth pointing out that the original aim of that study was to showcase a proof of concept that off-the-shelf components could be used to build a low-cost alternative, and thus to lay the foundation for future work. Since then, Devarakonda, Nguyen, and Kravitz (2016) built the Rodent Operant Bucket (ROBucket), a standard operant chamber based on the Arduino microcontroller. The system consisted of two nose-poke sensors and a liquid delivery system, capable of both fixed-ratio and progressive-ratio training, that can be used to train mice to nose poke a receptacle for a sucrose solution (Devarakonda et al., 2016). Moreover, Rizzi, Lodge, and Tan (2016) built a low-cost rodent nose-poke chamber using the Arduino microcontroller. Their system was composed of four nose-poke modules that detected and counted head entries. Rizzi et al. successfully trained mice to prefer the nose-poke module that triggered optogenetic stimulation of dopaminergic neurons within the ventral tegmental area. Although both Devarakonda et al. and Rizzi et al. demonstrated low-cost alternatives, these systems were designed as standard operant chambers and therefore do not allow for the translatable tasks available within a touchscreen operant platform. Here, we build on the previous work by Pineño, Devarakonda et al., and Rizzi et al. by combining the single-board Raspberry Pi™ computer and the 7-in. Raspberry Pi touchscreen with an Arduino microcontroller. We demonstrate that this low-cost touchscreen operant chamber is capable of supporting a number of tasks similar to those enabled by current state-of-the-art systems, such as autoshaping animals to nose-poke for a food reward, as well as more complex paradigms such as visual discrimination and the PAL and TUNL tasks.

The Raspberry Pi is a single-board computer, roughly the size of a credit card. Despite its size and inexpensive price (approx. €30), the Pi runs a full computer operating system and is capable of supporting the same tasks as a typical desktop PC, for instance, word processing and web browsing. In addition, the Raspberry Pi has several general-purpose input–output (GPIO) pins. GPIO pins are generic pins on an integrated circuit whose function can be programmed by the user. For example, they can be programmed to receive a specific input (e.g., reading a temperature sensor) or deliver a certain output (e.g., moving a servo motor). In addition, the Raspberry Pi touchscreen is a fully integrated touch-sensitive display that runs natively on the Raspberry Pi. The combination of a full PC operating system, touch-sensitive display, easy hardware integration through the GPIO pins, and inexpensive price makes the Raspberry Pi a very powerful platform for electronic projects, and therefore an ideal basis for a touchscreen operant chamber. This article describes a low-cost touchscreen operant chamber based on the Raspberry Pi.
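To illustrate how GPIO pins are programmed in practice, here is a minimal Python sketch using the standard RPi.GPIO library; the pin numbers are illustrative only, not the wiring of the chamber described below.

```python
# Minimal GPIO sketch (assumes the RPi.GPIO library; pin numbers are
# illustrative, not the chamber wiring described later in this article).
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)          # refer to pins by Broadcom GPIO number
GPIO.setup(18, GPIO.OUT)        # e.g., an LED as a programmable output
GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # e.g., a sensor input

GPIO.output(18, GPIO.HIGH)      # switch the LED on
time.sleep(1)
GPIO.output(18, GPIO.LOW)       # and off again

if GPIO.input(4) == GPIO.LOW:   # read the sensor (active-low here)
    print("input active")

GPIO.cleanup()                  # release the pins on exit
```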

    Materials and method

    Hardware

The main components of the touchscreen operant chamber were a Raspberry Pi 2 (Raspberry Pi Foundation, UK), a 7-in. touchscreen display for the Raspberry Pi (Raspberry Pi Foundation, UK), and an Arduino Uno microcontroller (Arduino, Italy) (Figs. 1a and b). All components were purchased from Adafruit Industries, USA. The touchscreen display was connected to the Raspberry Pi and mounted within a Perspex box (35.6 × 23.4 × 22.8 cm), which was housed within a sound-attenuating box (63.5 × 43.2 × 42.2 cm) (Med Associates, USA). On the opposite side of the Perspex box was a food magazine, which consisted of a food hopper connected to a pellet delivery chute made from a PVC pipe. A servo motor within the hopper dispensed a 45-mg pellet, which fell down the delivery chute and into the collection receptacle after each correct response (Figs. 1a and b). The food hopper was controlled by a servo motor attached to the Raspberry Pi (Fig. 2). An LED light within the collection receptacle signaled a reward, and an infrared (IR) beam detected the collection of the food pellet. The IR beam/sensor was connected to the Arduino Uno, which was in turn connected to the Raspberry Pi via a USB port (Fig. 2). A piezo buzzer within the Perspex box was used to signal the delivery of the food pellet and was also controlled by the Raspberry Pi (Fig. 2). For a detailed list of the components and their associated prices at the time of publication, see Table 1. The commercially available Med Associates touchscreen operant chamber (consisting of a rectangular operant box with grid flooring, overhead light, touchscreen, and food hopper; Med Associates, USA) was used for comparison.

Fig. 1 Raspberry Pi touchscreen operant chamber. The Raspberry Pi and touchscreen were mounted to a Perspex box, with a food magazine and collection receptacle positioned opposite the display (a). Top-down view of the Raspberry Pi chamber (b). The touchscreen chamber was placed inside a sound-attenuating box.
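The dispensing routine itself is not reproduced in the article; the following is a plausible minimal sketch, assuming the continuous-rotation servo from Table 1 is driven by 50-Hz software PWM on GPIO 17 (the pin given in Fig. 2). The duty-cycle and run-time values are assumptions that would need calibration against the actual hopper.

```python
# Hypothetical pellet-dispensing routine: briefly runs the continuous-
# rotation servo in the food hopper, then stops it. The duty-cycle and
# run-time values are assumptions to be calibrated for a real hopper.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 17                      # servo signal pin (Fig. 2)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)     # standard 50-Hz servo signal
servo.start(0)                      # 0% duty = no pulses, servo idle

def dispense_pellet(run_time=0.4):
    """Rotate the hopper servo just long enough to drop one pellet."""
    servo.ChangeDutyCycle(6.0)      # off-center pulse -> servo rotates
    time.sleep(run_time)            # calibrated so one pellet falls
    servo.ChangeDutyCycle(0)        # stop sending pulses -> servo stops
```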

    Software

A program to control the main functionality of the touchscreen chamber was written in Python (version 3.1.1), a high-level programming language, using the pygame library (https://www.pygame.org/news), and ran on the Raspberry Pi (Fig. 3). Briefly, the program displayed two images (two white squares) on the screen. Once either image was touched (e.g., nose-poked by the rat), the program moved the attached servo motor, located within the food hopper, which in turn dispensed a food pellet. Simultaneously, a tone was played through a buzzer, and an LED light within the food receptacle was turned on to signal reward delivery. An infrared (IR) beam within the food receptacle detected collection of the food reward. The next trial then began, and the same process was repeated. A second program, written as an Arduino sketch, signaled IR beam-break detection in the food collection receptacle. The code for the Arduino sketch was adapted from Adafruit.com example code (https://learn.adafruit.com/ir-breakbeam-sensors/overview). Each correct response was written to a text file and saved to the Raspberry Pi. These data were used to determine the animal's performance during each session.
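The program itself is not listed in the article; a condensed sketch of the loop it describes, assuming pygame for display and touch handling and a dispense_pellet() helper like the one sketched above (the file name, square positions, and 800 × 480 screen size are illustrative), might look as follows.

```python
# Condensed sketch of the autoshaping loop described above: show two
# white squares, reward a touch on either, log the correct response.
# Assumes pygame and the dispense_pellet() helper sketched earlier.
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 480), pygame.FULLSCREEN)
left = pygame.Rect(150, 165, 150, 150)    # two white squares;
right = pygame.Rect(500, 165, 150, 150)   # positions are illustrative

session_start = time.time()

with open("session_log.txt", "a") as log:
    while time.time() - session_start < 60 * 60:   # 60-min session
        screen.fill((0, 0, 0))
        pygame.draw.rect(screen, (255, 255, 255), left)
        pygame.draw.rect(screen, (255, 255, 255), right)
        pygame.display.flip()

        for event in pygame.event.get():
            # The touchscreen reports nose pokes as mouse-button events.
            if event.type == pygame.MOUSEBUTTONDOWN:
                if left.collidepoint(event.pos) or right.collidepoint(event.pos):
                    log.write("1\n")       # one "1" per correct response
                    dispense_pellet()      # the tone and magazine LED
                                           # would be triggered here too
                    time.sleep(5)          # intertrial interval

pygame.quit()
```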

    Experimental design

Two male Sprague-Dawley rats (ten weeks old, bred in-house) were used to validate the Raspberry Pi touchscreen system. An additional group of three male Sprague-Dawley rats (eight weeks old) was obtained from Envigo Laboratories (The Netherlands) and trained in the standard Med Associates touchscreen operant chamber for comparison of training performance. The rats were group-housed under standard housing conditions (temperature 22 °C, relative humidity 50%) on a 12-h light/dark cycle (0730–1930). Water and rat chow were available ad libitum prior to food restriction. Rats were food restricted to 90% of their free-feeding weight so as to increase their motivation to seek out a food reward within the touchscreen operant paradigm. All experiments were conducted in accordance with the European Directive 2010/63/EU, under an authorization issued by the Health Products Regulatory Authority Ireland, and were approved by the Animal Ethics Committee of University College Cork.

    Behavioral autoshaping protocol

Rats were food-deprived, with body weight maintained at 90% of their free-feeding weight during operant training, so as to increase their motivation to seek out a food reward. The autoshaping protocol was adapted from Horner et al. (2013) and was composed of three stages that served to shape the animals to touch the touchscreen for a food reward. Stage 1 involved habituation to the testing chambers for 30 min on two consecutive days, with ten pellets dispensed within the food magazine. The criterion for an animal to progress to the next stage of training was that all pellets were consumed within the 30-min session. The food magazine light was illuminated during food delivery and was switched off upon food collection. The house light was off, and no images were displayed on the screen. Stage 2 involved associating the displayed image with a food reward. Two images (white squares) were presented simultaneously for 30 s in two locations (left and right), separated by 5 cm. If no touch had occurred after 30 s, a food pellet was dispensed, the food magazine was illuminated, and a tone (1 s, 3 kHz) was sounded. If the image was touched by the animal, a reward (1 × 45-mg food pellet) was dispensed immediately and concurrently with the tone (1 s, 3 kHz), and the food magazine light was switched on. Upon reward collection, the magazine light was switched off and a 5-s intertrial interval (ITI) began, following which a new trial started. The session ended after 30 trials or 30 min, whichever came first. The criterion for an animal to progress to the next training stage was to complete 30 trials in 30 min. Stage 3 involved associating the image touch with a food reward. The protocol was the same as for Stage 2, except that the animal had to touch the displayed image to receive a reward. The session ended after 100 trials or 60 min. The criterion for an animal to complete the final stage of training was to complete 60 trials in 60 min on at least two consecutive days.
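For concreteness, the Stage 2 contingency can be sketched as a skeleton; wait_for_touch and reward here are hypothetical stand-ins for the display loop and hardware routines sketched above.

```python
# Skeleton of the Stage 2 contingency described above. wait_for_touch()
# and reward() are placeholder names standing in for the pygame display
# loop and the hardware routines sketched earlier.
import time

def run_stage2_session(wait_for_touch, reward):
    """Run trials until 30 trials are done or 30 min elapse."""
    start, trials = time.time(), 0
    while trials < 30 and time.time() - start < 30 * 60:
        wait_for_touch(timeout=30)   # image shown for up to 30 s
        reward()                     # pellet + tone + magazine light,
                                     # on touch or at image offset
        time.sleep(5)                # 5-s intertrial interval
        trials += 1
    return trials    # progression criterion: 30 trials in 30 min
```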

Fig. 2 Wiring diagram of the Raspberry Pi and Arduino: The servo motor was connected to the Raspberry Pi 5-V pin, GND pin, and GPIO Pin 17. The food magazine LED was connected to the GPIO Pin 18 and GND pin. The piezo buzzer was connected to the GPIO Pin 23 and GND pin. The Arduino was connected to the Raspberry Pi via a USB port. The infrared beam-break sensor was connected to the 5-V pin, 3.3-V pin, GND pin, and GPIO Pin 4 of the Arduino.
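The article does not show how the beam-break signal travels from the Arduino to the Python program. One plausible reading of the USB connection in Fig. 2 is that the Arduino sketch prints a marker over serial on each beam break, which the Pi polls with the pyserial library; the port name and message text below are assumptions.

```python
# Hypothetical reader for Arduino beam-break messages over USB serial.
# Assumes the Arduino sketch prints a line such as "BROKEN" whenever
# the IR beam in the food receptacle is interrupted.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=0.1)

def pellet_collected():
    """Return True if the Arduino has reported a beam break."""
    line = arduino.readline().decode(errors="ignore").strip()
    return line == "BROKEN"
```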

Table 1 List of components of the Raspberry Pi chamber

Part | Price* (EUR)
Raspberry Pi 2 Model B ARMv7 | €35.00
7-in. touchscreen display for the Raspberry Pi | €70.00
Arduino Uno microcontroller | €20.00
Buzzer (local electronics store) | €1.00
Pack of white LEDs | €5.00
IR break-beam sensor, 5-mm LEDs | €6.00
Continuous-rotation servo, FeeTech FS5103R | €10.00
Perspex box | €5.00
PVC pipe for food magazine | €3.00
Pack of assorted electrical wire | €3.00
Total | €158.00

* Price of components at time of writing



    Results

    Autoshaping task

Stage 1: Habituation. During Stage 1, two rats were habituated to the Raspberry Pi chamber environment over two days. During these two habituation days, both rats ate the ten food pellets within the food receptacle, and both were therefore advanced to the next stage of training. An additional three rats were similarly habituated to the Med Associates operant chamber. Likewise, these rats ate all ten food pellets within the food receptacle during the two habituation days and were thus advanced to the next stage of training.

Stage 2: Image/reward pairing. During Stage 2, image offset was paired with the food reward. Initially, both rats in the Raspberry Pi chamber completed only approximately ten trials per session (Figs. 4a and b). However, after five days of training, both rats completed 30 trials within 30 min (Figs. 4a and b), and both were therefore advanced to the next stage of training. The rats trained in the Med Associates chamber outperformed the rats using the Raspberry Pi system by completing 100 trials in one 60-min training session (Figs. 4c–e), so they were advanced to the next stage of training after one session.

Stage 3: Touch response. During Stage 3, the rats were required to touch the image for a food reward. Initially, performance by Rat 1 in the Raspberry Pi chamber was quite low, in that only three or four trials were completed within the 60-min session. However, after five days of training, Rat 1 completed 63 and 73 trials, respectively, on two consecutive days within the 60-min session (Fig. 4f). Similarly, the performance of Rat 2 in the Raspberry Pi chamber was initially inconsistent, with only six trials completed on the first day, followed by 62 trials on Day 2 but then only 17 trials on Day 3. However, after five days of training, Rat 2 completed 112 trials on two consecutive days within the 60-min session (Fig. 4g). During Stage 3, the rats in the Med Associates chambers quickly reached the learning criteria. Specifically, Rat 3's performance was quite low on the first day of training; however, it quickly improved, resulting in the completion of 96 and 100 trials on Training Days 2 and 3, respectively (Fig. 4h). Similarly, Rats 4 and 5 completed 67 and 81 trials on Day 1, and 98 and 100 trials on Training Day 2, respectively (Figs. 4i and j). We directly compared the performance of the rats during Stage 3 in both systems; the rats trained in the Raspberry Pi system were slower to reach the learning criteria than the rats trained in the Med Associates system (Fig. 5). However, all rats had reached a similar level of performance by Days 4 and 5 (Fig. 5), indicating that all rats had learned to touch the image for a food reward, regardless of the touchscreen operant chamber system used.

Fig. 3 Flowchart of the autoshaping program: The program to run the touchscreen chamber consisted of a basic loop in which images were displayed on the screen and, if touched, triggered a "correct response" condition. This in turn activated a servo motor that dispensed a food pellet, played a tone, and turned on an LED light. The program looped for 60 minutes.



    Discussion

Here we describe a low-cost touchscreen operant chamber based on the Raspberry Pi, a single-board computer. Specifically, two rats were successfully trained to nose poke two white squares in the low-cost touchscreen operant chamber, and their performance was compared to that of rats trained in a standard Med Associates touchscreen operant chamber. Both rats trained in the low-cost Raspberry Pi system reached the learning criterion of 60 trials within 60 min on two consecutive days within ten days. For comparison with a commercially available system, three rats were trained in the standard Med Associates touchscreen operant chamber; these rats reached the learning criterion of 60 trials within 60 min on two consecutive days within four days of testing. Previous studies have shown levels of performance and training acquisition similar to those reported here in the Raspberry Pi system. Specifically, Horner et al. (2013), Mar et al. (2013), and Oomen et al. (2013) reported that learning criteria were reached within five days, and Sbisa, Gogos, and van den Buuse (2017) reported successful training after 13 days.

Fig. 4 Autoshaping. Completed trials during Stage 2 in the Raspberry Pi system (a and b) and in the Med Associates system (c–e). Completed trials during Stage 3 in the Raspberry Pi system (f and g) and in the Med Associates system (h–j).

Fig. 5 Comparison of training performance: Completed trials during Stage 3 for rats trained in the Raspberry Pi or the Med Associates system.


Although we observed a slower acquisition rate for rats trained in the Raspberry Pi system, this may be due to the design of the reward collection receptacle itself (a piece of PVC pipe). For example, in the Raspberry Pi system, the delivered food pellet may land in the front or the back of the delivery chute (PVC pipe), leading to slight inconsistencies in reward placement and subsequently affecting task acquisition. This limitation could be overcome by further optimization of the collection receptacle. Nevertheless, our data demonstrate that the present system is a potentially viable, low-cost alternative to the current state-of-the-art systems.

Notwithstanding, a number of improvements and alterations could be applied to our system to advance its development. For example, the acquisition rate of the animals could be improved by the use of "screen masks" that aid the animal's response to the specific active windows of the touchscreen where an image is presented. Screen masks physically cover the touchscreen except for the response windows where the image is presented, thereby directing the rodent's attention and nose-pokes to the specific area of the screen that will elicit a food reward. This would help shape the animal's response and improve task acquisition. Furthermore, the Perspex rectangular box described here could easily be changed to a trapezoidal box, which has been suggested as a means to help focus the attention of an experimental animal toward the touchscreen, thereby improving task acquisition. We report an overall cost for the touchscreen chamber of approximately €160, which, as of the date the manuscript was submitted, was substantially less than the previous estimate of USD 300 reported by Pineño (2014). This price could be further reduced by eliminating the Arduino microcontroller. Here we used the Arduino to control the IR beam in order to detect reward collection. The Arduino could be removed and the IR sensor controlled by the Raspberry Pi, reducing the overall cost of the hardware by approximately €20.
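Reading the beam-break sensor directly from the Pi, as suggested, would amount to something like the following sketch, mirroring the serial-based helper shown earlier; the pin number and active-low wiring are assumptions.

```python
# Hypothetical direct read of the IR beam-break sensor from the Pi,
# removing the need for the Arduino. Assumes the sensor output is
# wired to GPIO 24 and pulls low when the beam is broken.
import RPi.GPIO as GPIO

BEAM_PIN = 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def pellet_collected():
    return GPIO.input(BEAM_PIN) == GPIO.LOW   # beam broken -> collected
```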

It should be noted that a limitation of the low-cost approach is that each task has to be programmed individually, which requires both time and programming knowledge. Moreover, the present system runs a .py file from within the Python IDLE (Integrated Development and Learning Environment), and therefore requires some programming knowledge to operate once it is set up. This limitation could be overcome by the development of a graphical user interface (GUI). A GUI would allow for a better end-user experience, similar to that of the current top-end systems, such as the Med Associates system used in the present study. The GUI could also facilitate other functionality, such as data analysis and task building for future behavioral assessment. Although the development of a GUI would require significant work, it would also enable the adoption of low-cost alternative systems by less technologically savvy researchers. Indeed, Pineño (2014) developed a GUI that allowed the wireless pairing of the iPod touch within the operant chamber with a second iOS device, such as an iPhone or iPad, for graphing and monitoring the animal's behavior during the experimental session. In the short term, the program presented here could also be improved with better data-handling capabilities, similar to those described by Pineño. Currently, the program simply records a "1" to a text file after every correct response, and the numbers are summed at the end of the program to generate a basic performance score. This could be improved by including response latencies, reward collection latencies, and screen touches during the ITI as measures of perseveration, as well as a heat map of screen touches throughout the session to aid detection of location bias in individual animals.
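As a sketch of how such richer per-trial logging might look (the CSV fields and bin size are illustrative, not part of the system described here):

```python
# Hypothetical per-trial record replacing the bare "1" log: response and
# collection latencies plus touch coordinates, from which a simple heat
# map of screen touches can be binned after the session.
import csv
from collections import Counter

def log_trial(writer, trial, t_stim, t_touch, t_collect, pos):
    writer.writerow({
        "trial": trial,
        "response_latency": t_touch - t_stim,      # stimulus -> touch
        "collection_latency": t_collect - t_touch, # touch -> pellet pickup
        "x": pos[0], "y": pos[1],                  # touch coordinates
    })

def touch_heatmap(positions, bin_px=50):
    """Count touches per bin_px x bin_px cell of the 800x480 screen."""
    return Counter((x // bin_px, y // bin_px) for x, y in positions)

# Usage sketch:
# with open("session.csv", "w", newline="") as f:
#     writer = csv.DictWriter(f, fieldnames=[
#         "trial", "response_latency", "collection_latency", "x", "y"])
#     writer.writeheader()
```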

In summary, our work has advanced the previous work by Pineño (2014), Devarakonda et al. (2016), and Rizzi et al. (2016) by combining the Raspberry Pi and a 7-in. touchscreen display with an Arduino microcontroller to create a low-cost touchscreen operant chamber capable of performing tasks such as the autoshaping task, as well as more complex paradigms, such as the PAL and TUNL tasks, that are available in the Med Associates and other state-of-the-art commercially available systems. This low-cost alternative system will provide researchers who have limited funding with a viable option for carrying out cognitive testing in a touchscreen operant platform. Although the chamber described here is a prototype and requires some knowledge of programming and electronics to operate, it demonstrates that low-cost systems are capable of conducting behavioral tasks similar to those of high-end commercially available systems.

Author note This work was funded by Science Foundation Ireland (SFI) under Grant Number SFI/IA/1537. The authors declare no conflict of interest.

    References

Bartko, S. J., Vendrell, I., Saksida, L. M., & Bussey, T. J. (2011). A computer-automated touchscreen paired-associates learning (PAL) task for mice: Impairments following administration of scopolamine or dicyclomine and improvements following donepezil. Psychopharmacology, 214, 537–548. https://doi.org/10.1007/s00213-010-2050-1

Bussey, T. J., Holmes, A., Lyon, L., Mar, A. C., McAllister, K. A., Nithianantharajah, J., … Saksida, L. M. (2012). New translational assays for preclinical modelling of cognition in schizophrenia: The touchscreen testing method for mice and rats. Neuropharmacology, 62, 1191–1203. https://doi.org/10.1016/j.neuropharm.2011.04.011

Clelland, C. D., Choi, M., Romberg, C., Clemenson, G. D., Jr., Fragniere, A., Tyers, P., … Bussey, T. J. (2009). A functional role for adult hippocampal neurogenesis in spatial pattern separation. Science, 325, 210–213. https://doi.org/10.1126/science.1173215

Cook, R. G. (1992). Acquisition and transfer of visual texture discriminations by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18, 341–353. https://doi.org/10.1037/0097-7403.18.4.341

Crawley, J. (2007). What's wrong with my mouse: Behavioral phenotyping of transgenic and knockout mice (2nd ed.). Hoboken: Wiley.

Creer, D. J., Romberg, C., Saksida, L. M., van Praag, H., & Bussey, T. J. (2010). Running enhances spatial pattern separation in mice. Proceedings of the National Academy of Sciences, 107, 2367–2372. https://doi.org/10.1073/pnas.0911725107

Devarakonda, K., Nguyen, K. P., & Kravitz, A. V. (2016). ROBucket: A low cost operant chamber based on the Arduino microcontroller. Behavior Research Methods, 48, 503–509. https://doi.org/10.3758/s13428-015-0603-2

Halladay, L. R., Kocharian, A., & Holmes, A. (2017). Mouse strain differences in punished ethanol self-administration. Alcohol, 58, 83–92. https://doi.org/10.1016/j.alcohol.2016.05.008

Horner, A. E., Heath, C. J., Hvoslef-Eide, M., Kent, B. A., Kim, C. H., Nilsson, S. R., … Bussey, T. J. (2013). The touchscreen operant platform for testing learning and memory in rats and mice. Nature Protocols, 8, 1961–1984. https://doi.org/10.1038/nprot.2013.122

Kim, C. H., Hvoslef-Eide, M., Nilsson, S. R., Johnson, M. R., Herbert, B. R., Robbins, T. W., … Mar, A. C. (2015a). The continuous performance test (rCPT) for mice: A novel operant touchscreen test of attentional function. Psychopharmacology, 232, 3947–3966. https://doi.org/10.1007/s00213-015-4081-0

Kim, C. H., Romberg, C., Hvoslef-Eide, M., Oomen, C. A., Mar, A. C., Heath, C. J., … Saksida, L. M. (2015b). Trial-unique, delayed nonmatching-to-location (TUNL) touchscreen testing for mice: Sensitivity to dorsal hippocampal dysfunction. Psychopharmacology, 232, 3935–3945. https://doi.org/10.1007/s00213-015-4017-8

Mar, A. C., Horner, A. E., Nilsson, S. R., Alsio, J., Kent, B. A., Kim, C. H., … Bussey, T. J. (2013). The touchscreen operant platform for assessing executive function in rats and mice. Nature Protocols, 8, 1985–2005. https://doi.org/10.1038/nprot.2013.123

McTighe, S. M., Mar, A. C., Romberg, C., Bussey, T. J., & Saksida, L. M. (2009). A new touchscreen test of pattern separation: Effect of hippocampal lesions. NeuroReport, 20, 881–885. https://doi.org/10.1097/WNR.0b013e32832c5eb2

Mueller-Paul, J., Wilkinson, A., Aust, U., Steurer, M., Hall, G., & Huber, L. (2014). Touchscreen performance and knowledge transfer in the red-footed tortoise (Chelonoidis carbonaria). Behavioral Processes, 106, 187–192. https://doi.org/10.1016/j.beproc.2014.06.003

Nithianantharajah, J., McKechanie, A. G., Stewart, T. J., Johnstone, M., Blackwood, D. H., St. Clair, D., … Saksida, L. M. (2015). Bridging the translational divide: Identical cognitive touchscreen testing in mice and humans carrying mutations in a disease-relevant homologous gene. Scientific Reports, 5, 14613. https://doi.org/10.1038/srep14613

Oomen, C. A., Hvoslef-Eide, M., Heath, C. J., Mar, A. C., Horner, A. E., Bussey, T. J., & Saksida, L. M. (2013). The touchscreen operant platform for testing working memory and pattern separation in rats and mice. Nature Protocols, 8, 2006–2021. https://doi.org/10.1038/nprot.2013.124

Perry, J. L., Larson, E. B., German, J. P., Madden, G. J., & Carroll, M. E. (2005). Impulsivity (delay discounting) as a predictor of acquisition of IV cocaine self-administration in female rats. Psychopharmacology, 178, 193–201. https://doi.org/10.1007/s00213-004-1994-4

Pineño, O. (2014). ArduiPod Box: A low-cost and open-source Skinner box using an iPod Touch and an Arduino microcontroller. Behavior Research Methods, 46, 196–205. https://doi.org/10.3758/s13428-013-0367-5

Range, F., Aust, U., Steurer, M., & Huber, L. (2008). Visual categorization of natural stimuli by domestic dogs. Animal Cognition, 11, 339–347.

Rizzi, G., Lodge, M. E., & Tan, K. R. (2016). Design and construction of a low-cost nose poke system for rodents. MethodsX, 3, 326–332. https://doi.org/10.1016/j.mex.2016.04.002

Salamone, J. D., & Correa, M. (2002). Motivational views of reinforcement: Implications for understanding the behavioral functions of nucleus accumbens dopamine. Behavioural Brain Research, 137, 3–25. https://doi.org/10.1016/S0166-4328(02)00282-6

Sbisa, A. M., Gogos, A., & van den Buuse, M. (2017). Spatial working memory in the touchscreen operant platform is disrupted in female rats by ovariectomy but not estrous cycle. Neurobiology of Learning and Memory, 144, 147–154. https://doi.org/10.1016/j.nlm.2017.07.010

Skinner, B. F. (1938). The behaviour of organisms: An experimental analysis. New York: Appleton-Century.

Steurer, M. M., Aust, U., & Huber, L. (2012). The Vienna comparative cognition technology (VCCT): An innovative operant conditioning system for various species and experimental procedures. Behavior Research Methods, 44, 909–918. https://doi.org/10.3758/s13428-012-0198-9

Swainson, R., Hodges, J. R., Galton, C. J., Semple, J., Michael, A., Dunn, B. D., … Sahakian, B. J. (2001). Early detection and differential diagnosis of Alzheimer's disease and depression with neuropsychological tasks. Dementia and Geriatric Cognitive Disorders, 12, 265–280. https://doi.org/10.1159/000051269

Talpos, J. C., McTighe, S. M., Dias, R., Saksida, L. M., & Bussey, T. J. (2010). Trial-unique, delayed nonmatching-to-location (TUNL): A novel, highly hippocampus-dependent automated touchscreen test of location memory and pattern separation. Neurobiology of Learning and Memory, 94, 341–352. https://doi.org/10.1016/j.nlm.2010.07.006

Talpos, J. C., Winters, B. D., Dias, R., Saksida, L. M., & Bussey, T. J. (2009). A novel touchscreen-automated paired-associate learning (PAL) task sensitive to pharmacological manipulation of the hippocampus: A translational rodent model of cognitive impairments in neurodegenerative disease. Psychopharmacology, 205, 157–168. https://doi.org/10.1007/s00213-009-1526-3

Wood, S. J., Proffitt, T., Mahony, K., Smith, D. J., Buchanan, J.-A., Brewer, W., … Pantelis, C. (2002). Visuospatial memory and learning in first-episode schizophreniform psychosis and established schizophrenia: A functional correlate of hippocampal pathology? Psychological Medicine, 32, 429–438. https://doi.org/10.1017/S0033291702005275
