

ALICE: Adaptive Learning Spaces & Interactive Content Environments
IMLS National Leadership Grant
Final Design Document
January 2016

Contents

Background
Grant Participants
ALICE Functional Prototype
Functional Prototype Components
Mobile Application
Indoor Localization System
Fingerprinting
Execution Stage
iBeacons
ALICE Artificial Intelligence Engine
Libraries Services
Functional Prototype Implementation Challenges
Indoor Localization
Indoor Navigation
Wayfinding
Dynamic, Adaptive Content
Evaluative Process
Evaluative Methods
Evaluation Question 1. How effective was the prototype in helping library users complete the given task/goal?
Evaluation Question 2. How can the prototype be used in other contexts?
Evaluation Question 3. What human capacity / interdisciplinary dynamic was necessary to complete this work?
Going Beyond the Functional Prototype
Scenario 1. Museums: tailored exhibit content
Scenario 2. Libraries: theme-based virtual tours
Scenario 3: Museum context: interactive exhibits
Appendix 1: Commercial options for indoor localization and navigation
Appendix 2: Exploring creation of a sensor network to monitor environmental conditions in physical spaces
Appendix 3: Exploring use of Kinect to enable interaction with large display walls


Background

The Adaptive Learning Spaces and Interactive Content Environments (ALICE) project was funded by an Institute for Museums and Library Services National Leadership Grant. The grant proposal sought to explore the challenge presented by the explosion of display screens in learning spaces. As display screens become nearly ubiquitous in library and museum learning spaces, how do we keep their digital content fresh and relevant in a sustainable manner? How do we address key challenges such as content delivery, indoor navigation, user location, wayfinding, and sustaining an interactive and engaging digital component of physical learning spaces? The five large MicroTile video display walls at the James B. Hunt Jr. Library on the NC State University campus were identified as an excellent environment for experimentation, as they provided the physical environment of a learning space filled with individuals and groups and equipped with large-scale displays that are on around the clock.

The goals of this grant were to explore a conceptual model and architecture upon which an adaptive learning space could be built and then test this with one or more “proof of concept” functional prototypes. In addition, the grant intended to explore an assessment methodology that could be used to evaluate future efforts in this area. During the grant period, we identified an existing use case at the Hunt Library and built a functional prototype that addressed that use case while exploring many of the relevant challenges identified in the grant proposal. This document will share our experiences, the challenges we faced, and the possibilities we envision for future work. We hope that what we have learned will help other libraries and museums make more informed decisions as they work to address these ongoing challenges in their individual contexts. The ALICE website1 provides more information for interested parties.

The grant period ended on October 31, 2015.

Grant Participants

The ALICE grant was a cross-disciplinary project with participation across the NC State University campus. Participants included faculty, staff, and graduate students from:

● Department of Computer Science (gaming AI -- path and content selection)
● Department of Electrical and Computer Engineering (indoor localization and mobile application development)
● NCSU Libraries (space, content, use cases)
● College of Design (selection and design of content for display walls)
● Friday Institute (research methods and evaluation)

The evaluative portion of this document describes the role of the different grant participants in more detail, and contact information is available at the ALICE website.

1 http://alice.wordpress.ncsu.edu/


ALICE Functional Prototype

In order to provide direction in developing a functional, demonstrative prototype, the grant participants worked with Hunt Library staff to identify concrete use cases that represented existing patron needs. One such need was helping Libraries’ patrons navigate to group study rooms within the Hunt Library building. With nearly 100 reservable spaces in the building, finding a specific, available group study room can be a challenge for patrons. This use case tied closely with the Electrical and Computer Engineering participants’ interest in exploring different techniques for indoor user localization -- identifying a user’s location within a given physical space.

The prototype resulted in an Android-based wayfinding application for the Hunt Library. When the user opens the application, he or she selects a specific area of interest as part of his or her personal profile. This selection is used to tailor adaptive content on the MicroTile display walls. The user can enter a specific room to navigate to or can search for an appropriate space given a set of parameters, such as brightness, noise level, temperature, or space type. The mobile application then consults the central ALICE AI to identify an appropriate route through the building to the selected space. As the user follows the path through the physical space, the AI selects from a database of tagged content relevant to the user’s area of interest and pushes that content to nearby MicroTile display walls along the route as the user approaches them. If the user deviates from the desired path, the AI re-routes the user along a new route based on his or her current location.


Figure 1: ALICE prototype Android application showing path from the Reading Lounge on the second floor to the Institute for Emerging Issues Commons on the second floor.

Functional Prototype Components

To function as intended, the ALICE functional prototype requires a number of different components to communicate successfully with each other.

Figure 2. Components of the ALICE functional prototype

Mobile Application

The Android application provides the user interface for the ALICE prototype and communicates with the localization server and with the ALICE AI. The application provides a visual map of Hunt Library through use of the Google Maps Android API. When the user first starts the application, his or her location is calculated and shown on the Hunt Library floor map as a red marker.


Figure 3: Display of user’s current location

The application recalculates the user’s location every 5 seconds. To update the location, the Android application sends the data required by the localization server via HTTP. The localization server responds with the longitude, latitude, and name of the computed current location. The application then displays the marker on the map for the user to see.
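
As a concrete illustration of this exchange, the sketch below polls a WiFi scan and posts it to the localization server every 5 seconds. It is a minimal sketch in Python with a hypothetical endpoint URL and JSON field names; the actual prototype performs the equivalent HTTP calls from the Android application.

```python
# Sketch of the periodic location update described above. The endpoint URL
# and JSON field names are hypothetical; the real prototype performs the
# equivalent HTTP exchange from the Android application.
import time

import requests

LOCALIZATION_URL = "http://localization.example.edu/localize"  # placeholder


def scan_access_points():
    """Stand-in for the Android WiFi scan; returns {MAC address: RSS in dBm}."""
    return {"00:11:22:33:44:55": -48, "66:77:88:99:aa:bb": -63}


def update_location():
    """Send the current RSS readings and return the server's best match."""
    resp = requests.post(LOCALIZATION_URL, json={"rss": scan_access_points()}, timeout=5)
    resp.raise_for_status()
    return resp.json()  # expected shape: {"longitude": ..., "latitude": ..., "name": ...}


while True:
    loc = update_location()
    print(f"Nearest reference point: {loc['name']} ({loc['latitude']}, {loc['longitude']})")
    time.sleep(5)  # the prototype recalculates every 5 seconds
```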

Another feature of the Android application is the ability for the user to select his or her area of interest (see Figure 4). This is used to select appropriate content to display on the MicroTile display walls. When the user selects an area of interest, the application sends the selection over HTTP to the ALICE AI, which is responsible for choosing which content to display.


Figure 4: Selecting an area of interest in the Android application

The ALICE prototype was designed to be able to help the user navigate through the Hunt Library. The user can either type the name of a specific room in the search box or search for an appropriate room by identifying specific characteristics.


Figure 5: Enter a specific room to navigate to (ex: café)


Figure 6: Search for a room based on certain characteristics

After the user selects a destination, the Android application sends the desired destination to the ALICE AI web service to initiate the path finding functionality. The ALICE AI sends the path back over HTTP as a list of intermediate room locations. The room locations are parsed and displayed as light blue markers on the application with lines connecting the markers to show the path. The user’s current location is marked in red. The application continues to send the user’s location each time the user enters a new room so the ALICE AI can determine if the user strays from the planned path, reaches a location where content should be triggered on a MicroTile display wall, or reaches the final destination.
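
A minimal sketch of how such a path response might be parsed into map markers and connecting segments follows; the JSON shape, room names, and coordinate values are invented for illustration.

```python
# Sketch of turning the ALICE AI's path response into map markers and the
# line segments connecting them. The JSON shape, room names, and coordinates
# are invented for illustration.
path_response = [
    {"room": "Reading Lounge", "lat": 35.7702, "lon": -78.6766},
    {"room": "Stairwell B", "lat": 35.7704, "lon": -78.6763},
    {"room": "Game Lab", "lat": 35.7706, "lon": -78.6760},
]


def to_markers_and_segments(path):
    """Intermediate rooms become markers; consecutive markers are joined by lines."""
    markers = [((step["lat"], step["lon"]), step["room"]) for step in path]
    segments = list(zip(markers, markers[1:]))
    return markers, segments


markers, segments = to_markers_and_segments(path_response)
for position, label in markers:
    print(f"marker '{label}' at {position}")
print(f"{len(segments)} connecting line segments")
```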

Figure 7: Path from the Reading Lounge on the second floor to the Game Lab on the third floor

Indoor Localization System

In the ALICE system, the localization server is responsible for taking in data from the Android application to determine the user’s current location within the Hunt Library building. Indoor localization is far more complex than localization in open outdoor areas. The global positioning system (GPS) is the most common technology used for localization outdoors, but the attenuation of GPS signals through the building structure makes indoor localization using GPS unreliable. It has been found that radio frequency identification (RFID), ultra wideband (UWB), Bluetooth, or wireless LAN (WLAN) can be used for indoor localization with higher accuracy. The typical estimation methods used are trilateration, triangulation, and fingerprinting. Due to the complexity of indoor layouts and non-line-of-sight (NLOS) propagation environments,


trilateration and triangulation methods tend to have larger location estimation errors than the fingerprinting technique.2

For this project, fingerprinting was chosen as the method of localization using WiFi signals since it has been shown to be the most reliable. Wireless access points were already deployed throughout the building, allowing for localization to be implemented without additional hardware. The fingerprinting technique requires a calibration stage, also known as the fingerprinting stage, before deployment. For the fingerprinting stage, reference points are created throughout the building by storing the received signal strength (RSS) values of the surrounding wireless access points at each reference point. When the user is moving throughout the building, the Android application periodically polls the surrounding access points, collects the RSS values, and sends them to the localization server. The localization algorithm compares the collected RSS values to the stored RSS values to determine the user’s location.

The localization system consists of the Android application frontend with a server and database backend that utilizes a LAMP stack (Linux + Apache + MySQL + PHP), as shown in Figure 8 below.

Figure 8. Diagram of ALICE Localization System

2 Farshad, A., Li, J., Marina, M. K., & Garcia, F. J. (2013, October). A microscopic look at WiFi fingerprinting for indoor mobile phone localization in diverse environments. In 2013 International Conference on Indoor Positioning and Indoor Navigation (IPIN) (pp. 1-10). IEEE.
Liu, H., Darabi, H., Banerjee, P., & Liu, J. (2007). Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(6), 1067-1080.


The Android application communicates with the Apache server over HTTP. The MySQL database holds all of the reference points recorded in the fingerprinting stage. The PHP localization script collects the current access points’ RSS values sent by the mobile application and the relevant data from the reference points in the MySQL database and uses the localization algorithm to compute the estimated location. Then the localization script passes this best match location back to the Android application to display to the user.

Fingerprinting

The fingerprinting stage of the project consisted of collecting the RSS values from surrounding access points at reference points around Hunt Library. For this implementation, at least one reference point was created in each room of Hunt Library, with additional reference points created in the larger spaces, such as entrance lobbies or reading rooms. A separate Android application was written to help collect these measurements. The application used the Google Maps Android API and allowed the user to tap the map at his or her current location. On that tap, the application recorded the longitude and latitude values, collected the RSS values from the surrounding access points, encoded the data into the MySQL format, and wrote that data to a file. This sped up the fingerprinting process and made the measurements more reliable by removing human error. There are two tables in the MySQL database, as shown in Figure 9 below.

Figure 9. Fingerprinting Database UML Diagram

Each reference point in the fingerprinting database contains the longitude, latitude, and room name for the location. The signal_strengths table contains entries for the MAC addresses and RSS values for all of the surrounding access points; each location could have multiple access points and RSS values recorded for it.
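
The sketch below illustrates how one reference-point measurement could be encoded to mirror this two-table layout; the table and column names, file format, and values are assumptions for illustration, not the prototype’s exact schema.

```python
# Sketch of encoding one fingerprinting measurement to mirror the two-table
# layout described above: one locations row plus one signal_strengths row per
# access point heard. Table and column names are assumptions for illustration.
import json


def record_reference_point(location_id, longitude, latitude, room, rss_readings, out):
    """Write a tapped map location and its WiFi scan as insert-ready rows."""
    rows = {
        "locations": [(location_id, longitude, latitude, room)],
        "signal_strengths": [(location_id, mac, rss) for mac, rss in rss_readings.items()],
    }
    out.write(json.dumps(rows) + "\n")


with open("fingerprints.jsonl", "w") as f:
    record_reference_point(
        1, -78.6766, 35.7702, "Reading Lounge",
        {"00:11:22:33:44:55": -52, "66:77:88:99:aa:bb": -70},
        f,
    )
```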

Execution Stage

The ALICE Android application scans the surrounding wireless access points every 5 seconds and sends their MAC addresses and RSS values to the localization server. The localization algorithm calculates the difference between the RSS values of the access points and each recorded reference point. The reference point with the smallest difference is the nearest neighbor. The coordinates and name of the nearest neighbor reference point are found in the database and sent to the Android application. This localization algorithm is not flawless; while it usually gets the general location of the user, it can bounce around to different rooms. This is


because obstacles such as walls, furniture, and humans affect the signal strengths from the surrounding wireless access points.
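
The sketch below illustrates the nearest neighbor matching just described; it is a simplified Python stand-in for the production PHP script, and the MAC addresses and RSS values are invented.

```python
# Simplified nearest neighbor matcher over stored fingerprints. Each
# fingerprint is {MAC address: RSS in dBm}; access points unseen on one side
# are treated as a very weak signal so the union of MACs can be compared.
def rss_distance(observed, reference, missing_dbm=-100):
    """Sum of absolute RSS differences over the union of access points."""
    macs = set(observed) | set(reference)
    return sum(
        abs(observed.get(m, missing_dbm) - reference.get(m, missing_dbm)) for m in macs
    )


def nearest_reference_point(observed, reference_points):
    """reference_points maps room name -> stored fingerprint; return best match."""
    return min(reference_points, key=lambda room: rss_distance(observed, reference_points[room]))


fingerprints = {
    "Reading Lounge": {"aa": -50, "bb": -72},
    "Game Lab": {"aa": -80, "bb": -45},
}
print(nearest_reference_point({"aa": -54, "bb": -70}, fingerprints))  # Reading Lounge
```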

iBeacons

A major goal of this functional prototype was to trigger content on the MicroTile display walls as users approached them. However, the WiFi localization system was not accurate enough to reliably trigger content as the user approached a display wall, in time to ensure the content could be seen before the user had moved past. To address this issue, Bluetooth iBeacons were added around the display walls for proximity triggering of content. An iBeacon uses Bluetooth Low Energy to transmit its unique identifier (UUID), major value, minor value, and transmission power for the main purpose of proximity detection. The Android application receives this information when it is in range of the iBeacon. From the RSS value and the iBeacon’s transmission power, an algorithm can calculate an estimated distance from the iBeacon. These iBeacons were placed near the display walls. Obstacles such as walls and human bodies also affect the signal strengths, but the placement of the iBeacons can be controlled, helping to reduce the error caused by these obstacles.
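
The document does not specify the exact distance algorithm, but a common approach is the log-distance path-loss model sketched below; tx_power is the calibrated RSS at one meter, and both constants here are assumptions for illustration.

```python
# Log-distance path-loss sketch for estimating distance from an iBeacon's
# advertised transmission power and the received RSS. tx_power is the
# calibrated RSS at 1 meter; both constants are assumptions for illustration.
def estimate_distance_m(rss, tx_power=-59, path_loss_exponent=2.0):
    """Estimated distance in meters; larger RSS deficits mean farther away."""
    return 10 ** ((tx_power - rss) / (10 * path_loss_exponent))


print(round(estimate_distance_m(-59), 2))  # ~1.0 m at the calibration point
print(round(estimate_distance_m(-75), 2))  # weaker signal, farther away
```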

To reduce interference caused by humans, the iBeacons were placed on or near the ceilings immediately adjacent to the display walls in most of the locations. The iBeacons were set to the lowest power setting of -30 dBm, and the broadcast interval was set to 250 ms, the longest interval that still produced the desired response time. The Android application only scanned for iBeacons with a specified UUID, so all of the iBeacons’ UUIDs were set to the same value. The Android application could tell the location of each iBeacon based on its major value. One or two iBeacons were placed at each display wall. iBeacons with the same UUID and major values were considered to be in the same beacon region. When the Android application receives a signal from one of the iBeacons in a region, the user is considered to be in that region until the application no longer receives signals from any of the iBeacons in it. This was important because without the definition of regions, the Android application would trigger the content again when it reached a new iBeacon in the same region.
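
This region rule can be sketched as follows; the data shapes and UUID value are illustrative only.

```python
# Sketch of the region-membership rule described above: content fires once on
# entering a (UUID, major) region and cannot fire again until every beacon in
# that region has gone silent. The UUID string is illustrative.
class BeaconRegionTracker:
    def __init__(self):
        self.current_region = None  # (uuid, major) the user is inside, if any

    def on_scan(self, visible_beacons, trigger):
        """visible_beacons is the set of (uuid, major) pairs heard this scan."""
        if self.current_region and self.current_region not in visible_beacons:
            self.current_region = None  # left the region; may retrigger later
        for region in visible_beacons:
            if region != self.current_region:
                self.current_region = region
                trigger(region)  # fire the wall content exactly once per entry
                break


tracker = BeaconRegionTracker()
show = lambda region: print("trigger wall for region", region)
tracker.on_scan({("ALICE-UUID", 3)}, show)  # entering: triggers
tracker.on_scan({("ALICE-UUID", 3)}, show)  # same region again: silent
```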

Overall, the iBeacons significantly improved the reliability of the content triggering on the MicroTile walls. The content is never triggered on the incorrect display wall; this is by design, because the Android application simply cannot receive the Bluetooth signals until it is close to the iBeacons. The triggering happens within a few seconds of when it is expected; however, the removal of content from the screens has a much longer delay of tens of seconds.

ALICE Artificial Intelligence Engine

The purpose of the ALICE Artificial Intelligence (AI) engine is to generate a sequence of actions that accomplishes the goals of an interactive experience for a particular user. These sequences of actions, called plans, allow the ALICE system to lead the user through the Hunt Library to their desired destination by communicating with the Android application running on the user’s mobile device. The ALICE AI also communicates with a web service controlling the library’s


wall-mounted displays in order to trigger relevant content as the user approaches a MicroTile display wall. The ALICE AI accomplishes this task with three connected components: a planner, a mediator, and a web service.

Planners 3 are used to identify sequences of actions that transform an initial state into a goal state. The ALICE AI uses a planner called Fast Downward 4 to generate sequences of actions to guide the user’s experience. The planner takes as input a planning problem and domain in a specification called the Planning Domain Definition Language (PDDL) 5. PDDL is a logic-based representation of a world and how it can change. A PDDL planning problem specifies an initial state and a goal state. An initial state is a representation of how the world is configured at the start of the problem. A goal state is a representation of how the world should be configured when the problem is solved. A PDDL planning domain is a list of operators that can be applied to states to produce new successor states. A planner starts at the problem’s initial state and searches for a sequence of actions, instantiated from domain operators, that transforms the initial state into a goal state. This sequence of actions is called a plan.

The ALICE AI system is packaged with a planning domain and problem that model the Hunt Library. The model includes the actions a user can take as he or she traverses the library space and actions the AI can take as the system senses the user’s movement. The Hunt Library model consists of a list of rooms; how the rooms are connected by doors, hallways, and stairs; a human user; the room in which the user is currently located; a list of the Hunt Library displays that can be controlled; and where those displays are located. The domain operators model human participants moving between rooms and Hunt Library display walls showing content to a participant located nearby. The Fast Downward planner takes these PDDL files as input and finds a sequence of user movements that places the user at the desired destination.
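
As a greatly simplified stand-in for the PDDL/Fast Downward pipeline, the sketch below performs a breadth-first search over a room-adjacency graph and returns a plan as a sequence of move actions; the room names and connections are invented, and a real planner operates over the full PDDL state representation rather than a bare graph.

```python
# Greatly simplified stand-in for the PDDL/Fast Downward pipeline: a
# breadth-first search over a room-adjacency graph that returns a plan as a
# sequence of move actions. Room names and connections are invented.
from collections import deque

ROOM_GRAPH = {
    "Entrance Hall": ["Reading Lounge", "Stairwell A"],
    "Reading Lounge": ["Entrance Hall", "Stairwell B"],
    "Stairwell A": ["Entrance Hall"],
    "Stairwell B": ["Reading Lounge", "Game Lab"],
    "Game Lab": ["Stairwell B"],
}


def plan_route(start, goal):
    """Return a plan as [('move', from_room, to_room), ...], or None."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return [("move", a, b) for a, b in zip(path, path[1:])]
        for nxt in ROOM_GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable in this model


print(plan_route("Entrance Hall", "Game Lab"))
```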

The ALICE AI system’s second component is a mediator 6. The mediator is a meta-planning system that accounts for the freedom the user has to follow or disregard the current series of planned actions. The mediator builds a tree to represent the possible ways the user could move through the library space. The nodes of this tree are world states and its edges are possible actions the user or the library could take. The root of the tree is the initial world state given in the planning problem. The leaves of the tree are states where the planning problem’s goals are accomplished. The mediator navigates through this tree as it receives updates about the user’s activity in the Hunt Library. Whenever the mediator navigates a tree edge, it checks whether the current plan can be executed from the current state. If not, it formulates a new planning problem from the current state and passes this to the Fast Downward planner.

3 Bonet, B., & Geffner, H. (2001). Planning as heuristic search. Artificial Intelligence, 129(1-2), 5-33.
4 Helmert, M. (2006). The Fast Downward planning system. Journal of Artificial Intelligence Research, 26, 191-246.
5 Ghallab, M., Knoblock, C., Wilkins, D., Barrett, A., Christianson, D., Friedman, M., ... & Sun, Y. (1998). PDDL - the planning domain definition language.
6 Robertson, J., & Young, R. M. (2014). Gameplay as on-line mediation search. In Experimental AI in Games: Papers from the 2014 AIIDE Workshop (pp. 42-48). Raleigh, NC.


The planner returns a new series of events that accomplishes a goal state. The mediator’s replanning process ensures that a plan that achieves the goal state can be found no matter what the user decides to do. For example, it will recalculate the necessary route through the building to the destination if the user deviates from the original path.
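
A minimal sketch of this replanning rule follows, reusing plan_route and the room graph from the planner sketch above; the real mediator navigates a full mediation tree rather than performing this simplified check.

```python
# Sketch of the mediator's replanning rule, reusing plan_route from the
# planner sketch above: advance the plan while the user's reported room
# matches it, and replan from the observed room on any deviation.
def on_location_update(current_plan, observed_room, goal):
    """Return the plan to send back to the mobile application."""
    if current_plan and current_plan[0][1] == observed_room:
        return current_plan  # user is where the plan expects; keep going
    if current_plan and current_plan[0][2] == observed_room:
        return current_plan[1:]  # user completed the next move; advance
    return plan_route(observed_room, goal)  # deviation: replan from here


plan = plan_route("Entrance Hall", "Game Lab")
plan = on_location_update(plan, "Reading Lounge", "Game Lab")  # advanced
plan = on_location_update(plan, "Stairwell A", "Game Lab")     # deviated; replanned
print(plan)
```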

The final component of the ALICE AI system is a web service interface that is used to communicate with other components of the functional prototype. The ALICE AI web service runs on a server that can be accessed by the ALICE Android application. It allows the Android application to request a new session starting from a particular location in the Hunt Library, which wipes any existing data and creates a new mediation tree rooted at that location. The Android application notifies the ALICE AI web service whenever the user’s location changes and the web service transitions the mediation tree along the corresponding edge. Whenever a new session is initialized or the user moves to a new location, the current plan is returned to the Android application so it can be displayed to the user. This allows the mobile application to display a dynamic route to the user as he or she moves throughout the library building.

The ALICE AI web service also communicates with services that control the Hunt Library’s display walls. The Hunt Library exposes a web service that allows content items to be triggered on the display walls. When the ALICE Android application initializes a new session, it sends the ALICE AI web service a list of personalized tags that represent the user’s area(s) of interest. The content database is a hand-authored dictionary of tag and image URL pairs. Whenever the mediator returns an updated plan to the user’s mobile device, it checks whether the next action is for a display wall. If so, the ALICE AI service selects a URL from the content database that matches one of the user’s tags, connects to the Hunt Library display wall service, and passes the URL for the content to display as well as the location of the display wall that the user is approaching.
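
A sketch of this tag-matching step follows; the tags and image URLs are invented for illustration.

```python
# Sketch of the hand-authored content store and tag matching described above;
# the tags and image URLs are invented for illustration.
CONTENT_DB = [
    ("engineering", "http://content.example.edu/bridge-design.png"),
    ("biology", "http://content.example.edu/coral-reef.png"),
    ("design", "http://content.example.edu/typography.png"),
]


def select_content(user_tags):
    """Return the first content URL whose tag matches one of the user's tags."""
    for tag, url in CONTENT_DB:
        if tag in user_tags:
            return url
    return None  # nothing matched; trigger no adaptive content


print(select_content({"design", "biology"}))
```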

Together, the ALICE AI system’s planner, mediator, and web service create and modify plans to guide users through the Hunt Library environment, send the current plan to the Android application on the user’s mobile device, and trigger customized visual content to display on Hunt Library’s display walls.

Libraries Services

The Libraries developed two services that are utilized by the ALICE prototype. The first set of REST-based web services was developed to allow the ALICE AI to interact with the Hunt Library building itself -- more specifically, to trigger display of specific content on a MicroTile wall at a specific point in time. These services allow the ALICE AI to request the display of a specific piece of content (a URL) on a specific wall, and in a specific mode (ex: full-screen or partial overlay). They also allow the ALICE AI to terminate display of content on a MicroTile wall as the user moves away. Content will automatically expire after a default period of time if it is not explicitly terminated.
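
The sketch below shows what calling a wall service of this kind might look like; the endpoint, parameter names, and mode strings are assumptions, with only the concepts (content URL, target wall, display mode, expiry) taken from the description above.

```python
# Sketch of calling a display-wall service of this kind. The endpoint,
# parameter names, and mode strings are assumptions; only the concepts
# (content URL, target wall, display mode, expiry) come from the text above.
import requests

WALL_SERVICE = "http://walls.example.edu/api/display"  # placeholder


def trigger_content(wall_id, content_url, mode="overlay", ttl_seconds=60):
    """Ask the service to show content_url on a wall; it expires after
    ttl_seconds if not explicitly terminated."""
    resp = requests.post(
        WALL_SERVICE,
        json={"wall": wall_id, "url": content_url, "mode": mode, "ttl": ttl_seconds},
        timeout=5,
    )
    resp.raise_for_status()


def terminate_content(wall_id):
    """Remove content from a wall as the user moves away."""
    requests.delete(f"{WALL_SERVICE}/{wall_id}", timeout=5).raise_for_status()
```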

The second set of web services allows the ALICE AI to query information about the spaces in the Hunt Library. This allows the ALICE mobile application to take in information from the user


(such as desired space type, temperature, brightness, or noise) and locate a space in the Hunt Library that meets those criteria. This is accomplished through the creation of a database that contains entries for each space in Hunt Library. While some data is static, such as space type, other data can be dynamically updated on a regular basis from sensor readings (such as temperature). We did not deploy sensors throughout every room in the Hunt Library for this prototype, so much of this data was randomly generated. However, we did experiment with building low cost sensors capable of taking these kinds of readings and having them populate data in the space database (see Appendix 2 for more details about these explorations).
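
A minimal sketch of matching user criteria against such a space database follows; the record fields mirror the attributes named above, and all values are invented.

```python
# Sketch of matching user criteria against the space database; the record
# fields mirror the attributes named above, and all values are invented.
SPACES = [
    {"name": "Room 2112", "type": "group study", "temp_f": 71, "brightness": "high", "noise": "low"},
    {"name": "Room 3220", "type": "group study", "temp_f": 75, "brightness": "low", "noise": "low"},
    {"name": "Quiet Reading Room", "type": "public study", "temp_f": 70, "brightness": "high", "noise": "low"},
]


def find_spaces(**criteria):
    """Return spaces whose attributes match every supplied criterion."""
    return [s for s in SPACES if all(s.get(k) == v for k, v in criteria.items())]


print(find_spaces(type="group study", noise="low", brightness="high"))
```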

Functional Prototype Implementation Challenges

While developing the ALICE prototype, the team encountered some unexpected challenges. In particular, the process of creating a complex indoor navigation system that provides a user with a real-time path through a large building, based on signals from WiFi access points, was more difficult than expected. Beyond the WiFi and Bluetooth iBeacon techniques used here for localization, proximity triggering, and rudimentary navigation, Appendix 1 identifies some alternative commercial options for institutions specifically interested in indoor navigation.

Indoor Localization

One of the major challenges of using WiFi localization is the tradeoff between speed and accuracy. The fingerprinting method of WiFi localization utilized for the ALICE prototype requires a calibration phase in which the signal strengths from the access points at multiple locations around the building are collected and stored in a database. When a user runs the Android application, the signal strengths at his or her location are captured and compared to those in the database, and the closest matching location in the database is determined. The more locations that are measured during the calibration phase, the more data in the database must be analyzed each time the user’s location is updated. Various algorithms exist that can compute the user’s location based on the locations in the database; these range in complexity and computational power required.7 However, with a space as large as the Hunt Library, it was necessary to keep the algorithm simple to allow for fast computation over a large set of data covering many wireless access points. It is important for the localization to respond quickly to the user’s movements in order to keep the map up to date as the user moves through the building. The algorithm utilized in the ALICE prototype employs the nearest neighbor approach: the stored location with the closest access point signal strengths is determined to be the user’s location. One trick used to reduce the amount of data in the database was to delete very small signal strengths. This also increased the accuracy of the algorithm because it gave more weight to the higher, more meaningful signal strengths.

7 Ding, G., Zhang, J., Zhang, L., & Tan, Z. (2013). Overview of received signal strength based fingerprinting localization in indoor wireless LAN environments. 2013 IEEE 5th International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications (MAPE), 160–164.
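
The pruning step mentioned above might look like the following sketch; the -85 dBm floor is an assumed threshold, not a value from the prototype.

```python
# Sketch of the database-reduction trick described above: drop very weak
# readings before matching. The -85 dBm floor is an assumed threshold.
def prune_weak_signals(fingerprint, floor_dbm=-85):
    """Keep only access points whose RSS is above the floor."""
    return {mac: rss for mac, rss in fingerprint.items() if rss > floor_dbm}


print(prune_weak_signals({"aa": -50, "bb": -92, "cc": -70}))  # drops "bb"
```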


However, the WiFi localization system as implemented in the ALICE prototype still has errors and miscalculates the user’s location at times. This is because obstacles such as walls, furniture, and human bodies affect the signal strengths of the WiFi signals from the access points. Since a primary goal of the ALICE prototype was to trigger actions on the display walls as the user approaches them, more accurate localization was required near the screens so that the actions on the display walls were reliably triggered at the correct time for the user to see them before moving past. Since the WiFi localization was not accurate enough, Bluetooth iBeacons were added around the display walls for more fine-grained proximity triggering. When the Android application receives data from an iBeacon over Bluetooth, it determines which display wall the iBeacon is near based on the major number sent and calls to trigger an action. Based on observation, the iBeacons were much more reliable for triggering content to display. While obstacles such as walls and human bodies also affect the signal strengths of the iBeacons, their placement can be controlled, reducing the error from the obstacles. The iBeacons were placed on or near the ceilings in most of the locations to reduce human interference.

Indoor Navigation

When users attempted to follow the map-based route displayed on their mobile device to navigate to their desired destination, testing revealed some challenges in calculating this route.

One particular challenge was the complexity of the physical spaces and the mismatch between those spaces and the logical model used by the ALICE AI to calculate routes. There are many small hallways and spaces between clearly defined spaces in the logical model that cause problems when transitioning from room to room using the ALICE AI’s path finding capabilities. In these spaces the WiFi localization sometimes chooses an unrelated nearby location as the user’s current location, which throws off the route created by the ALICE AI. For example, when going from the entrance hall to the reading lounge on the 2nd floor of Hunt Library, there is a small area outside the iPearl Immersion Theater between the entrance hall and the reading lounge. When the WiFi localization updates in that spot, it often chooses the theater as the user location because it is technically the closest reference point. This causes the path finding to recalculate because the system thinks the user is off track by being in the theater.


Figure 10. Logical model of Hunt Library, floor 2, as used by the ALICE Artificial Intelligence system.

Without the ability to perform precise localization using GPS coordinates, fine-grained navigation within a large, complex space like the Hunt Library requires mapping of a large number of wireless access points and tying them to a very fine-grained logical model of the space for live updating calculation of the user’s current location and ideal route through the building.

Wayfinding

The ALICE prototype explored the concept of wayfinding by allowing users to search for a space within Hunt Library that met their needs on a particular day. The Android application included search criteria covering aspects of the building spaces that represent the physical conditions on a given day and are not well-represented in our current room reservation system. For the prototype, users could search by room temperature, brightness, noise level, and room type (ex: group study, public study, computing). Although we experimented with building low cost sensors that could be deployed to gather information about temperature, brightness, and noise throughout the spaces (see Appendix 2), it became apparent that it was not feasible to deploy them on a massive scale throughout all the rooms in the building (Hunt Library has more than 60 reservable study spaces). Aside from the challenge of building and maintaining these sensors, battery life was a problem, as power was not readily accessible near the optimum locations for deployment. Although this type of sensor deployment might work well for smaller, more specific use cases, we would probably be better served at the Hunt Library by tapping into the building control system, which already utilizes a large number of sensors built into the building’s infrastructure.


Dynamic, Adaptive Content

The content store mechanism explored as part of the ALICE functional prototype was fairly rudimentary in that it represented a small, static database of content that was created and tagged by hand. This allowed us to explore the use of an AI to adapt the content displayed at a given moment in time to the interests of a user moving through the space. However, maintenance of a database of static content remains tedious and time-consuming. There is still significant opportunity to explore techniques for dynamically generated content, which updates and evolves over time based on some type of external input without requiring human intervention. One interesting idea that came out of this work is related to the use of “overlays” for displaying content on the Hunt Library MicroTile walls. In the ALICE functional prototype, each piece of content was tagged as either full screen or overlay. In the overlay scenario, the ALICE content popped up into a small section of the display without overriding whatever was currently displayed. This type of display seems very suitable for integrating with more text-based content, such as a feed of entries from an event calendar, which could be a good source of dynamic content.

Another interesting challenge related to the delivery of adaptive content relates to contextual awareness. In the Hunt Library example, the large display walls used for the ALICE prototype have scheduled content playing at all times. In addition, there are several MicroTile walls that allow users to select specific content to view at a given point in time. An important question to consider prior to full-fledged implementation of a project like this is how to handle interruption of ongoing content delivery. One solution could be to rely on “overlay” display of content that only utilizes a portion of a screen. Another solution would be to prevent the display of adaptive content if a screen is already being actively used.

Evaluative Process

The role of the evaluator on the ALICE project was to measure the efficacy of the functional prototype developed by the grant participants. Over the course of the grant, the evaluation aimed to answer these questions:

1. How effective was the prototype in helping library users complete the given task?
2. How can the app be used in other contexts?
3. What human capacity/interdisciplinary dynamic was necessary to complete this work?
4. To what extent does the web-based portal facilitate use of the app in other contexts?

In the following sections we discuss the methods used to address these research questions, then present the findings of the evaluation, organized by evaluation question.


Evaluative Methods

This evaluation study employed a qualitative methodology in order to understand how the prototype functioned with regard to wayfinding and adaptive content and to explore its potential uses in other contexts. Table 1 below illustrates the qualitative measures used in this study.

Data Source | Purpose
Prototype Testing Observations | Observe the functionality of the prototype as it is being used by participants; note aspects of the Hunt Library context that supported or constrained use of the prototype.
Post Testing User Interview | Gather participants’ perceptions on the functionality and potential uses of the prototype after gaining hands-on experience during testing.
External Context Focus Group | Gather participants’ perceptions on the potential uses of the prototype in their own contexts, as well as any projected factors that would support or constrain its use.
Grant Team Reflection | Gather grant participants’ own perceptions of the process of developing the prototype and the human capacity necessary to complete the project.

Table 1. ALICE Prototype Evaluation Data Sources

Prototype Testing Observations. In order to test the functionality and user-friendliness of the prototype, student participants were asked to use the prototype to complete a set of tasks. Participation was voluntary, and an invitation to participate was sent to all students enrolled in an undergraduate design course at NC State. Students were informed that their decision to participate in the study did not impact their grade, and were also told that participation meant they would receive a $25 gift card for Amazon. Testing timeslots were awarded to the first 10 students that volunteered. Ultimately, eight one-hour testing timeslots were filled. During their testing timeslots, students were familiarized with the prototype’s dual function of navigation throughout the Hunt Library and triggering of tailored content on the large displays. Participants were then asked to navigate to several locations in the building. The evaluator conducted an observation of each participant’s interactions with the prototype during testing. The observation protocol included items such as “How effective was the wayfinding technology?” and “What barriers were encountered during the test?”

At this point it should be noted that at the time of the testing the adaptive content component of the prototype was not working properly. However, prototype testing was still seen as an opportunity to get the student participants’ feedback as to the potential usefulness of adaptive content display. For this reason, a member of the grant team was placed near the display wall to force content to display as the participant walked by. Participants were made aware of this at the beginning of testing.


Post Testing Participant Interview. Immediately after the prototype testing was conducted, the participant was asked a short series of questions about their experience with the prototype. This included questions such as “Was the app interface user-friendly?” and “Do you think this app would be useful in your day-to-day library experience?” Each interview took approximately 10 minutes.

External Context Focus Group. Part of the work of the evaluation for the ALICE project involved gaining an understanding of how the ALICE prototype might interact in contexts outside of Hunt Library, including the potential usefulness of the prototype and how aspects of other contexts might support or constrain its use. To answer these questions, representatives from local university and college libraries and from local museum contexts were invited to a free luncheon at the Hunt Library. This luncheon included a demonstration of the prototype, as well as a focus group session that included questions such as “Do you think this prototype would be useful to users in your context? What potential benefits could you anticipate?” and “What challenges do you anticipate you would encounter in making use of the prototype?”

Grant Team Reflection. To provide an understanding of what human capacity was necessary to complete this work, the grant team was asked to participate in a written reflection on their experience throughout the ALICE project. Items on the written reflection included “What did you feel was your major contribution to this project?” and “What would you tell others doing similar work (i.e., creating a similar app with an interdisciplinary team)?”

Evaluation Question 1. How effective was the prototype in helping library users complete the given task/goal?

Were users able to complete the task? If so, how many users were able to do so? During testing, seven of the eight participants were able to use the prototype to complete some of the navigation tasks provided, although in some cases familiarity with Hunt Library helped users. For the eighth participant, the prototype crashed during their testing timeslot. Some participants noted that the paths did not update reliably as they walked, or showed lag. In observations, it was noted that the marker showing the participant’s current location on the map would sometimes bounce around when the participant was standing still.

How effective was the wayfinding technology? Did it work as intended? The participants felt that the prototype was largely intuitive and user-friendly because of its resemblance to Google Maps, and they noted several potential areas for improvement. One of the most often cited potential improvements was having the prototype automatically switch to a new floor as the participants navigated between floors, instead of the participants doing this manually. Additionally, the prototype displayed paths across multiple floors at the same time, and the participants found this confusing.

How effective was the adaptive content technology? Did it work as intended? Despite having to force tailored content onto the display walls as the participants walked by, the


participants were still able to get enough of an experience to provide feedback on the potential usefulness of the concept of adaptive content. Participants were unsure about the usefulness of the adaptive content, though most agreed that the concept was “cool.” Some felt that adaptive content would be more useful on the smaller screens in individual study rooms as opposed to the large display walls. Multiple participants noted that they did not see the adaptive content as particularly useful because they did not look at or notice the large tile walls much when they visited the library regularly. Others said that if they were using the navigation aspect of the prototype, they probably would not notice the display walls or stop to read the content. Several participants discussed the potential usefulness for tours, instructional scavenger hunts, or guiding visitors to the library. One student noted that even though the adaptive content might not be useful for regular library users, it still made sense for it to be situated in Hunt Library, which is intended to be a tech-savvy and “next-gen” facility.

How helpful did users find the prototype? Many of the participants felt that the navigation aspect of the prototype would be useful in their day-to-day library experience, citing how large and confusing the layout of Hunt Library can be, even to seasoned users. Most of the participants noted that this functionality would also be useful in other large buildings across campus, especially for incoming freshmen. The navigation was seen as particularly useful if it could be integrated with Hunt Library’s existing room reservation system. Multiple participants stated that they often spent a sizeable portion of their time at the library looking for an open room; having an app to reserve a room would simplify the process and not require them to open and boot up a laptop. There was less agreement on being able to choose a room based on light and sound using the prototype. Some stated that they really disliked studying in rooms they found too hot or too cold; others said they really only cared about finding a room that was unoccupied.

In spite of issues with the prototype’s functionality, when asked whether they would download and use this app as it was intended to function, many answered an emphatic “yes.” It is clear that users are interested in a tool to assist with navigating large academic buildings like the Hunt Library.

Throughout testing, the context of Hunt Library was observed to offer both constraints and supports for the use of the prototype. The primary barrier observed was the nature of the Hunt Library space. With large, open, and unconventional spaces, implementing the indoor navigation was challenging. However, even though unconventional spaces made it more difficult to map the users’ routes successfully, they also created the need for the wayfinding and navigation in the first place. Many participants who were regular users of the library expressed statements like, “I didn’t even know these study rooms were back here,” or said that it was easy to get lost in the library, indicating that having an interior mapping/navigation system for a complex setting like Hunt Library may be beneficial for users.


Evaluation Question 2. How can the prototype be used in other contexts?

In the external context focus group, participants discussed a large number of ways that they thought the prototype would be useful in their contexts. When asked specifically about displaying adaptive content, participants mentioned the prototype could be used for exhibits in museums to provide increased information and context, or that the prototype could be used for theme-based tours to enhance fundraising events. Others mentioned that the adaptive content could be used to improve the experiences of users with disabilities, where an overlay could discreetly inform users where ramps, elevators, or other related services were located. In a museum context, the prototype could be used to personalize displays based on the age or primary language of the user. Specific to a campus environment, the prototype could increase visibility of campus programming by displaying events that the user may be most interested in, or it could be used to supplement users’ experiences during campus tours. When asked about the wayfinding and navigation functionality of the prototype, the focus group participants mirrored what the testing participants had said and mentioned the prototype’s usefulness for wayfinding across entire campuses or in larger facilities, as well as helping users find specific exhibits, books, or even restrooms. The prototype was also seen as useful for emergency situations, where it could guide users to emergency exits. Overall, participants who worked in a library context tended to focus on the usefulness of the wayfinding and navigation functionality, while those who worked in a museum context tended to focus on the adaptive content component of the prototype.

The participants noted several potential challenges they could face in implementing the prototype. With regard to the adaptive content, many noted that constructing an appropriate opt-in process, and being transparent about what user data was being collected by the prototype, would be difficult, especially when dealing with minors. The adaptive content aspect would also require staff to curate and manage a database of content to display, as well as permission to run on campus infrastructure and harvest information from campus systems. In terms of the wayfinding and navigation, some participants noted that the WiFi localization necessary might not be possible in their context, or that their current room reservation system would not be able to communicate with the prototype.

The participants also discussed aspects of their current context that would support implementing the prototype. Some stated that their facilities were already adding new technologies; others said that they already had the in-house expertise to program interactive displays. In the museum context, images of digitized collections provided a large cache of existing high-quality content to display. Those in the university library context noted that they could collaborate with students or faculty on campus, and those in both the museum and the library context mentioned existing business or corporate partnerships that they could leverage.


Evaluation Question 3. What human capacity / interdisciplinary dynamic was necessary to complete this work?

This section details the human capacity utilized to develop both the wayfinding and the adaptive content aspects of the ALICE prototype. Table 2 lists the grant participants and their role in contributing to the development of the prototype. Note that in some cases, multiple graduate students participated because the grant work extended past the availability of individual participants. While it would be possible to re-create this type of application using fewer participants, this table gives a sense of the different components that were required to make the prototype function and the expertise needed.

Title | Role on the Project
Professor of Computer Science | Principal Investigator, supervised development of artificial intelligence systems
Professor of Electrical and Computer Engineering | Faculty Participant, supervised localization work
Associate Head, Information Technology, NCSU Libraries | Co-PI and Project Director
Information Technology Manager, NCSU Libraries | Implementation of adaptive content; application testing
Systems Programmer/Specialist, NCSU Libraries | Development of Libraries’ web services
Graduate Student, Computer Science | Development of artificial intelligence system
Graduate Student, Electrical and Computer Engineering | Development of Android application
Graduate Student, Electrical and Computer Engineering | Development of localization system
Graduate Student, Electrical and Computer Engineering | Development of Android application and localization system
Associate Professor of Art and Design | Faculty Participant, development of adaptive content

Table 2. Grant participants and role in development of the ALICE prototype

Going Beyond the Functional Prototype

In its most general model, the ALICE functional prototype represents a system that consists of a series of sensory inputs, a central intelligence processing them, and a corresponding series of outputs or actions. In the functional prototype, the inputs are represented by the user’s area of interest, the user’s desired destination, the user’s current location, and the simulated values for temperature, brightness, and noise in the library spaces. The outputs are represented by the


path displayed to the user in the Android application and the content triggered on the MicroTile display walls.

In other contexts, a central intelligence agent might work on a completely different series of inputs and outputs to achieve an entirely different goal. Although not included in this functional prototype, the grant team discussed a variety of other sensors that might feed into a user’s experience in a learning space, such as:

● availability of reservable spaces
● user density in particular spaces
● individual user’s program of study or courses
● computer availability
● movement of users through a particular space (for example, a motion sensor)
● user entrances / exits to / from library space
● hours of a given service desk / coffee shop / exhibit
● social media data (for example, images with certain hashtags like #huntlibrary 8)
● current weather data
● circulation or requesting of library materials

In addition, there is an enormous variety of outputs or actions that could happen based on a set of given inputs, including things like:

● displaying adaptive content on e-boards or display walls
● displaying adaptive content on the user’s mobile device
● triggering interactive applications on a user’s mobile device
● triggering interactivity with content on e-boards or display walls (see Appendix 3 for details on experimentation with using Kinect devices to enable interaction with large display walls)
● triggering audio to speaker systems or to a user’s mobile device (for example, oral narration of a museum exhibit or information about services offered at a specific location)
● incorporating live user data into a dynamically generated visualization (for example, the Fractal Forest project at the NCSU Libraries 9)

8 The My #huntlibrary project aggregated images of the newly opened Hunt Library building that users uploaded to Instagram and tagged with #huntlibrary. These images were prominently displayed on the building’s large MicroTile display walls, as well as on the Libraries’ website. To learn more, visit: https://d.lib.ncsu.edu/myhuntlibrary/about.
9 The Fractal Forest project was produced as part of the first Code+Art Student Data Visualization Contest at the NCSU Libraries. It is a piece of generative art where gate count activity is used to dynamically drive the growth of trees in the visualization. To learn more, visit: https://www.lib.ncsu.edu/stories/codeart-different-kind-data-experience.


Figure 11. ALICE conceptual model

We have developed several specific scenarios that attempt to show how the concepts explored in this grant could be applied in other contexts.

Scenario 1. Museums: tailored exhibit content

In traditional museum exhibits, printed placards, signs, or binders are often used to provide more in-depth information to visitors. A significant challenge with this approach is the wide range of abilities and areas of interest among museum-goers: a 2nd grader and a graduate student both see the same information. In this context, an ALICE prototype could be used to tailor the content displayed to the user based on a variety of inputs, including age, grade, or area of interest. This could be implemented in several ways. In one scenario, the museum could either provide mobile devices or provide visitors with an application to install on their own mobile devices. Similar to the Android application used for the ALICE prototype, this application could "learn" about the visitor through a series of simple questions, such as age, grade, and area of interest. A more complex version of this application could use AI functionality to "learn" about a visitor's areas of interest based on what type of exhibit content he or she reads or which exhibits he or she visits, and tailor future content offerings based on those interests. For example, in a natural sciences museum, is the visitor most interested in biological sciences (for example, animal eating habits), physical sciences (for example, carbon dating of fossils), or fun facts (for example, this is the largest whale skeleton in North America)?

Simple Bluetooth devices, like iBeacons, could be deployed near exhibit spaces to detect when the user is entering the exhibit. This could trigger the display of exhibit content on the mobile device carried by the user. For example, it could bring up simple content and vocabulary with pictures for the 2nd grader, but much more detailed content for the graduate student, with links to online resources to learn more. In addition to learning content, the mobile application could present learning games, quizzes, or puzzles for visitors to complete as they move through the museum (perhaps a type of mobile scavenger hunt). For groups of visitors, the content could be triggered to display on a large e-board or touch screen near the exhibit, instead of on the mobile device.
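As a minimal illustration of this beacon-triggered tailoring, the following Python sketch pairs an exhibit identifier with a visitor profile to choose content. The beacon identifiers, profile fields, and content records are all hypothetical:

```python
# Hypothetical sketch: choose exhibit content for a visitor's profile
# when the mobile app registers an exhibit's iBeacon.

CONTENT = {
    # (exhibit id, audience level) -> content to display
    ("whale_skeleton", "elementary"): "Meet the biggest whale skeleton in North America!",
    ("whale_skeleton", "graduate"):   "Skeletal morphology notes, with links to research articles...",
}

def on_beacon_detected(exhibit_id, visitor_profile):
    """Called when the app detects the beacon placed near an exhibit."""
    level = visitor_profile.get("level", "general")
    content = CONTENT.get((exhibit_id, level))
    if content is None:
        # Fall back to generic content when no tailored version exists.
        content = "Welcome to the " + exhibit_id.replace("_", " ") + " exhibit."
    return content

visitor = {"level": "elementary", "interests": ["animals"]}
print(on_beacon_detected("whale_skeleton", visitor))
```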

Implementation of this scenario would require a number of components:

● registration and placement of Bluetooth devices (or other sensors) in the exhibit spaces
● programming of a mobile application (Android and/or iOS) that is capable of registering Bluetooth signals and calling up appropriate content based on information gathered about the visitor
● development of content at varying age levels or targeted toward specific areas of interest

Many museums already have staff capable of developing exhibit-related content and placing Bluetooth devices. Development of a simple mobile application could likely be outsourced to a company specializing in this area for a reasonable cost. Including interactive games, puzzles, or quizzes in the mobile application would increase the complexity of this development. It would be important to ensure that museum staff have a way to update the content over time as exhibits evolve.

Scenario 2. Libraries: theme-based virtual tours

As important hubs for research and learning, campus libraries are often featured in tours and visits by different types of groups. This is particularly common when new facilities are created or existing spaces undergo significant renovation. It is exciting for library staff to see an uptick in interest from prospective students and faculty members, campus groups and clubs, and community groups, but accommodating these groups also creates significant strain on staff resources. In this context, an ALICE prototype could be used to provide automated theme-based virtual tours that are relevant to the background or interests of the visitors.

Similar to Scenario 1, the library (or museum) could either provide mobile devices or a mobile application that visitors could download to their own devices. This application would offer available theme-based tour options for the visitor to select. In the library scenario, tour content could be tailored to donors, undergraduate students, faculty members, relevant colleges or departments on campus, or visitors from other libraries. Bluetooth devices could again be used to signal places of significant interest within the building. To achieve an effective automated tour, the mobile application would need to include a mapping component. To avoid the complexities of localization and navigation encountered during this grant, a simpler technique could be used: the application could use either the Google Maps API or a simple floor map to indicate locations that are of interest to the tour. As the visitor approaches a specific location and is registered by the Bluetooth device or other sensor located there, the map could update to indicate the new current location and offer the visitor the option to view the tour content (see the sketch below). For tour groups, it might be appropriate for the content to also be triggered automatically on nearby e-boards or touch screens, as this would eliminate the need for every member of the group to have a mobile device with the application installed.
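The map-update logic can be sketched briefly in Python; the beacon identifiers, coordinates, and tour content below are hypothetical placeholders:

```python
# Hypothetical sketch: when a beacon marking a point of interest is
# registered, update the visitor's map location and offer themed content.

TOUR_STOPS = {
    # beacon id -> (floor-map coordinates, content keyed by tour theme)
    "beacon-17": ((120, 340), {
        "donor":     "This reading room was made possible by a gift from...",
        "undergrad": "Group study rooms on this floor can be reserved online...",
    }),
}

class TourSession:
    def __init__(self, theme):
        self.theme = theme
        self.current_location = None

    def on_beacon(self, beacon_id):
        """Register a beacon: move the map marker, return themed content."""
        stop = TOUR_STOPS.get(beacon_id)
        if stop is None:
            return None
        coords, content_by_theme = stop
        self.current_location = coords   # the map widget would re-center here
        return content_by_theme.get(self.theme)

tour = TourSession(theme="donor")
print(tour.on_beacon("beacon-17"))
```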

Implementation of this scenario would require a number of components:

● registration and placement of Bluetooth devices (or other sensors) near locations of interest for a virtual tour
● programming of a mobile application (Android and/or iOS) that is capable of registering Bluetooth signals and calling up appropriate content based on the current location and the tour theme selected
● creation of a map that can be updated to show the current location when a Bluetooth signal is registered to indicate a new location
● development of content for the various theme-based tours

Many libraries already have staff capable of placing Bluetooth devices or other sensors. In addition, many libraries have staff capable of creating a simple web framework to store customized tour content and provide it on demand to the mobile application. Development of the mobile application may be more complex in this scenario than in Scenario 1, but it could be outsourced to a company specializing in this area for a reasonable cost if needed.

Scenario 3: Museum context: interactive exhibits

Museums have long created various types of interactive exhibits to engage visitors with content and enhance learning outcomes. The growing prevalence of large display walls offers an interesting new canvas for interactivity in museum exhibits, especially since it can be cost-prohibitive to make these screens touch-enabled. One relatively successful area of experimentation with video walls and interactivity completed as part of the ALICE grant involved using the Kinect to allow interaction through motion (see Appendix 3 for implementation details). Rather than a Bluetooth device or other sensor, a Kinect (or other motion-sensing camera) could be placed near (perhaps above) a large display wall. This device could register when a visitor approaches and offer a welcome screen and/or voiceover inviting him or her to participate in some type of learning game or puzzle on the display wall. One advantage of providing a learning game via a large display wall, as opposed to a smaller touch screen or mobile device, is that it allows multiple visitors to participate together. The game or application would need to provide visitors with a key showing the motions that the Kinect (or other motion-sensing camera) can recognize for interaction.
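The approach-detection step might look like the following Python sketch; the trigger distance and the form of the camera's body-tracking data are assumptions for illustration:

```python
# Hypothetical sketch: trigger a welcome screen when a motion-sensing
# camera reports a person within range of the display wall.

WELCOME_DISTANCE_M = 2.5   # hypothetical trigger distance, in meters

def on_body_frame(body_distances, wall_is_idle):
    """body_distances: distances (m) to each tracked person this frame."""
    someone_close = any(d < WELCOME_DISTANCE_M for d in body_distances)
    if someone_close and wall_is_idle:
        return "show_welcome_screen"       # invite the visitor to play
    if not body_distances:
        return "return_to_attract_loop"    # nobody nearby; reset the wall
    return "no_change"

print(on_body_frame([3.1, 1.8], wall_is_idle=True))  # -> show_welcome_screen
```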

Implementation of this scenario would require a number of components:

● existence of a large display wall for the content
● development of a learning game with interesting interactions that could accommodate multiple players
● programming to integrate the learning game with the sensor data coming from the Kinect (or other motion-sensing camera)

This work is likely more complex than creating a simple mobile application, so it might not be accessible to many museums without some type of mini-grant or partnership.


Appendix 1: Commercial options for indoor localization and navigation

Indoor localization and navigation systems are becoming more prominent now that smartphones and WiFi-enabled buildings are commonplace.

A company called indoo.rs (http://indoo.rs/technology/) is developing systems for indoor localization and navigation. They use both Bluetooth iBeacons and existing WiFi infrastructure for localization, focusing on systems that help the visually impaired navigate indoors and that help public safety providers, such as EMT first responders, find endangered individuals. They also develop indoor navigation systems for retail, travel, office, and industry settings. Another company, infsoft (http://www.infsoft.com/Products/Indoor-Navigation/Functionality), uses not only WiFi and Bluetooth beacons but also smartphone sensors to help minimize localization errors in their indoor navigation systems. Pathfindr (http://www.pathfindr.co.uk/) also provides indoor positioning and navigation, but rather than using WiFi and Bluetooth, it uses the front- and rear-facing cameras on a smart device to identify flags placed around the building. These flags must be installed on walls or objects throughout the building.

Companies like Wifarer (http://www.wifarer.com/deployment) provide similar localization and navigation systems using WiFi and Bluetooth beacons. Wifarer advertises that they will convert your floorplans to integrate with their app, or you can use their tools to create your own. They also offer fingerprinting calibration software that requires minimal effort from the company purchasing the system.

Apple appears to be interested in adding indoor mapping capabilities to Apple Maps. While there is an iOS application available (http://9to5mac.com/2015/11/02/apple-has-a-hidden-indoor-gps-app-in-the-app-store-likely-for-apple-maps-connect/) for marking indoor locations in a building, it is only available on a limited basis to select Apple Maps Connect users (https://mapsconnect.apple.com/indoor/signup).


Appendix 2: Exploring creation of a sensor network to monitor environmental conditions in physical spaces

The goal of this experiment was to gather environmental information for the ALICE prototype at the Hunt Library. Sensors would be placed in different user spaces in the library, and the sensor network would send frequent updates about current environmental conditions at different locations within the building to the ALICE space database created by the library.

Sensor Network

The sensor network used a master-slave configuration, with one master and five slaves. The master connects to and coordinates the slaves and also connects to the library network, which allows it to populate the ALICE space database. Both the master and the slaves were built with Arduino boards to keep the coding and circuits simple. Because the master can be placed virtually anywhere in the library, it can be powered from a wall outlet. As the master needed to connect to the WiFi network, an Arduino Yun, which has a built-in WiFi module, was used.

The slaves were intended to be placed in user spaces throughout the library building. As the sensors would be in public view, they had to be as small as possible. To prevent theft and damage by library users, the sensors would be placed on, or as close as possible to, the ceiling. Placing the sensors near the ceiling meant that they needed to be wireless and battery powered to avoid stringing cables down walls.

Power saving

The slave sensors were then designed and implemented. Initially, a standard 9V battery was used, but it lasted only up to 9 hours. To increase the battery life, the following choices were made:

● The Arduino Micro, a small, low-energy Arduino board, was used.
● The slave devices were connected to the master over a Zigbee network instead of the WiFi network.
● Low-power, circuit-less sensors were chosen so that they connect directly to the Arduino controller instead of connecting through an interface.
● Instead of the slaves continuously sending data, the master collected data every 10 minutes.
● The Zigbee network was shut down as soon as data was collected.
● Instead of using a 9V battery and stepping the voltage down, a higher-capacity 1.2V D-cell was used and the voltage was stepped up.

As a result of these measures, the battery life was increased to up to 250 hours (about 10 days). However, even 10 days of battery life was not very practical across many sensors mounted onto ceilings. Experimenting with a solar panel to power the devices during the day was not effective, because the panel could not gather enough power from inside the building.

Data collection

The sensors provided the ALICE space database with the light intensity level, noise level, and temperature at their given locations. This information was collected by the master every 10 minutes. The sensors also tracked their current battery level so that the master could notify staff when the battery level of a particular slave sensor was low. A sketch of this collection cycle follows.
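Because the Arduino Yun master includes a Linux side, a collection loop like the one described could be scripted there. The following Python sketch is only an illustration of the cycle: the polling function, database call, and field names are hypothetical stand-ins, not the actual ALICE interfaces:

```python
# Hypothetical sketch of the master's cycle: poll each slave, store the
# readings in the ALICE space database, and flag low batteries.
import time

POLL_INTERVAL_S = 600          # collect every 10 minutes
LOW_BATTERY_PERCENT = 20       # hypothetical alert threshold

def poll_slave(slave_id):
    """Stand-in for reading one slave sensor over the Zigbee link."""
    return {"light": 310, "noise": 48, "temperature": 21.5, "battery": 18}

def store_reading(slave_id, reading):
    """Stand-in for writing a row to the ALICE space database."""
    print("store", slave_id, reading)

def notify_staff(slave_id, battery):
    print("ALERT: sensor", slave_id, "battery at", battery, "percent")

while True:
    for slave_id in ("slave-1", "slave-2", "slave-3", "slave-4", "slave-5"):
        reading = poll_slave(slave_id)
        store_reading(slave_id, reading)
        if reading["battery"] < LOW_BATTERY_PERCENT:
            notify_staff(slave_id, reading["battery"])
    time.sleep(POLL_INTERVAL_S)
```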

Although the master-slave network configuration was successful in gathering and saving data from the slave sensors, it was decided that the power requirements of the sensors made them impractical to deploy on a wide scale throughout a large building like the Hunt Library. However, a different sensor device was designed for placement in the technology showcase.

Interactive motion sensor

The motion sensor device worked independently (it did not use the master-slave configuration). As this sensor was deployed in a location where it did not need to rely on batteries, many restrictions were removed. The sensor used an Arduino Yun to connect directly to the WiFi network and did not use the Zigbee network. Since the Arduino Yun comes with Linux installed, many Linux features were available, which made it possible to support a REST protocol for web services. This means that the sensor data could be accessed and used by a variety of other applications. Finally, the sensor was able to collect data every second, which is important for a motion sensing application. This sensor device and its features were then used by a student to add simple pieces of interactivity to an art application for one of the large display walls at the Hunt Library: the sensor registered when users approached the display, causing a change in the background of the scene being rendered.
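Any application could then consume the sensor's data with a simple HTTP poll. A minimal Python client sketch follows; the endpoint URL and JSON fields are hypothetical, as the actual REST interface is not documented here:

```python
# Hypothetical sketch: poll the motion sensor's REST endpoint once per second.
import json
import time
import urllib.request

SENSOR_URL = "http://alice-sensor.local/motion"   # hypothetical endpoint

def read_motion():
    with urllib.request.urlopen(SENSOR_URL, timeout=2) as response:
        return json.load(response)   # e.g. {"motion": true}

while True:
    try:
        if read_motion().get("motion"):
            print("visitor detected near the display")
    except OSError:
        pass   # sensor unreachable; try again on the next cycle
    time.sleep(1)   # the sensor updates every second
```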


Appendix 3: Exploring use of Kinect to enable interaction with large display walls

The goal of this experiment was to provide interactivity to users for several of the large MicroTile display walls at the Hunt Library. For ease of use, the aim was to enable interaction without requiring the user to physically obtain an input device. Thus, it was decided that a Microsoft Kinect sensor would be used as an input device by tracking user location and movement.

The Problem

The Microsoft Kinect sensor has to be placed close enough to the display wall to capture user movement. However, the content on the display walls is generally powered by a display server in a server room several floors away. The Microsoft Kinect uses USB 3.0 to transfer data and requires a bandwidth of more than 25 MB/s. Connecting the sensor to the display server in the server room would require a very long USB 3.0 cable or a USB extender. Over such a large distance, the latency was too high and the Kinect would not function, so an alternative had to be found to transfer the data from the sensor to the server (and back).

The Prototype

Although the Microsoft Kinect sensor cannot connect directly to a wireless network, it can be connected to a separate computer, which can then communicate with the display server over the network. Some research had already been done on transferring Kinect skeleton data over a network.10 This research project was used to test whether it was possible to do the same over the WiFi network. The project sent the full skeleton data every few milliseconds. The main problem, however, was that there was no control over the data being sent to the display server: most of the data sent was not required, so bandwidth was wasted in the process.

To solve this, the prototype utilized a sample project from the Microsoft Kinect SDK to transfer Kinect skeleton data over the WiFi network. This project showed how to access the skeleton data from the Kinect sensor. A web server was also designed that allows multiple client applications to connect to the Kinect and ask for particular sets of data. For testing, instead of the full skeleton data, only the positions of the hands of one user were made available to the client application. This significantly reduced the required traffic over the network. A client application was made in Processing to test this server-client interface. It allowed the user to draw on a large display wall by making a fist and then moving his or her hand around.
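The client side of that interface can be sketched as follows in Python (the host, port, and line-delimited JSON wire format are assumptions for illustration; the actual prototype used its own protocol, with clients in Processing and Python):

```python
# Hypothetical sketch: subscribe to only the hand positions of one user,
# rather than receiving the full skeleton data stream.
import json
import socket

SERVER = ("kinect-host.local", 9000)   # hypothetical host and port

with socket.create_connection(SERVER) as conn:
    # Tell the server which subset of skeleton data this client wants.
    request = {"subscribe": ["hand_left", "hand_right"]}
    conn.sendall(json.dumps(request).encode() + b"\n")

    buffer = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break                      # server closed the connection
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            frame = json.loads(line)   # e.g. {"hand_left": [x, y], ...}
            print(frame)
```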

10 Kinect 2 Broadcaster: http://slsi.dfki.de/software-and-resources/kinect-2-broadcaster/


Another client application was made in Python. This application took control of the mouse input: when the user moved his or her hand in front of the Kinect sensor, the client moved the mouse accordingly, allowing the user to move the mouse, click, drag, and scroll. However, this prototype web server had some shortcomings:

● Sometimes there were sudden jumps in the Kinect skeleton position information. Because of this, the mouse control application was very jittery and difficult to control.

● Sometimes, the frequency of data sent to the client application would decrease suddenly.

● The server only provided the location of two hands. If a client application needed the location of the face, legs, or any other joints, that data was not available.

● The server only provided the client application with skeleton data. Besides skeleton data, the Kinect sensor provides a 1080p video stream, an infrared video stream, a depth map stream, a directional audio stream, and a stream for recognizing the general position of users. None of this data was available to the client.

Many of these shortcomings could be attributed to using the Microsoft Kinect SDK sample project.

Final Design

To resolve some of these limitations, a new web server was designed that provides access to all the data streams supported by the Microsoft Kinect. Data from each of the streams is collected on a separate thread, and each client connection is handled by a different thread. This ensures that data collection is not interrupted by clients. Similarly, because each connection runs on a separate thread, adding clients has very little effect on the web server. Clients now have finer control over what data is requested. For each of the streams (video, infrared, etc.), the web server sends a frame only if the client asks for it, so no bandwidth is wasted on sending large image frames that the client may not require. For skeleton data, the client can select exactly which joint information it wants. Since the skeleton data needs to be sent frequently, the web server passes this information as a stream. A sketch of this threading arrangement follows.
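The threading arrangement might be organized along the following lines; this Python sketch substitutes fake frame generators for the real Kinect streams, since the actual server was built against the Microsoft Kinect SDK:

```python
# Hypothetical sketch: one thread per stream collects frames; clients are
# served the newest frame only when they ask, so collection never stalls.
import itertools
import threading
import time

latest_frames = {}                 # stream name -> most recent frame
frames_lock = threading.Lock()

def fake_stream(name):
    """Stand-in for a Kinect stream: yields numbered frames forever."""
    for i in itertools.count():
        time.sleep(0.03)           # roughly 30 frames per second
        yield name + "-frame-" + str(i)

def collect_stream(name):
    """Collector thread: keep only the newest frame for this stream."""
    for frame in fake_stream(name):
        with frames_lock:
            latest_frames[name] = frame

def serve_request(stream_name):
    """Handle one client request: return the latest frame on demand, so
    no bandwidth is spent on frames the client never asked for."""
    with frames_lock:
        return latest_frames.get(stream_name)

for name in ("video", "infrared", "depth", "skeleton"):
    threading.Thread(target=collect_stream, args=(name,), daemon=True).start()

time.sleep(0.5)                    # let the collectors produce some frames
print(serve_request("depth"))      # e.g. "depth-frame-15"
```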

The client applications made previously were ported to use the new web server. In conjunction with the new web server, the applications provided a much better experience than with the prototype web server. In particular, the mouse client application offered much finer control over the mouse, and the jitteriness was practically eliminated.
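The core of the mouse client's mapping, including the kind of smoothing that tames skeleton jitter, can be illustrated with a short Python sketch. The coordinate ranges and smoothing factor are assumptions, and a library such as pyautogui would perform the actual cursor movement:

```python
# Hypothetical sketch: map a tracked hand position to screen coordinates,
# smoothing the motion so skeleton jitter does not yank the cursor around.

SCREEN_W, SCREEN_H = 1920, 1080
SMOOTHING = 0.3    # 0 = frozen cursor, 1 = raw (jittery) tracking

cursor = [SCREEN_W / 2, SCREEN_H / 2]   # start at the screen center

def update_cursor(hand_x, hand_y):
    """hand_x, hand_y: normalized [0, 1] hand coordinates (hypothetical)."""
    target_x = hand_x * SCREEN_W
    target_y = hand_y * SCREEN_H
    # Exponential smoothing: move only part of the way toward the target.
    cursor[0] += SMOOTHING * (target_x - cursor[0])
    cursor[1] += SMOOTHING * (target_y - cursor[1])
    # A library such as pyautogui could move the real cursor here:
    # pyautogui.moveTo(cursor[0], cursor[1])
    return (round(cursor[0]), round(cursor[1]))

print(update_cursor(0.8, 0.4))   # hand at right-center -> cursor drifts right
```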

Example Applications

One application that was rewritten to enable Kinect interactivity is called the Front Pages application. This is a web page originally designed to show five random newspaper front pages. These front pages changed randomly and the user had no way to select what newspaper he or she wanted to read. With the mouse client for the Kinect sensor in place, modifications were made to the web page to allow Kinect interactivity.


Initially, the web page had buttons, and the user could search for a newspaper by typing in a search term. This caused problems when using the Kinect: the "click" Kinect gesture (closing the hand) is somewhat difficult to convey, and typing letters one at a time via the Kinect required significant effort and was error-prone. To address these issues, the application was modified to respond to the Kinect hover gesture (an open hand) instead of a click, and the newspaper search was turned into a browse.

Figure 12. Using Kinect gestures to control the Front Pages newspaper application

Another application at the Hunt Library that will be using Kinect interactivity is called Art in Motion. This application allows the user to use his or her hands to grab colored balls and move them around the display wall to "paint." It can process movement from two hands simultaneously. As this application will be the first made available to visitors, it provides a good opportunity to see how intuitive the Kinect gesture interaction is. To illustrate how to "grab" the colored balls, a small table tent with instructions will be placed near the Microsoft Kinect.


Figure 13. Using gestures to paint with the Art in Motion application
