INTERNET OF THINGS STATUS MONITORING WITH
AUGMENTED REALITY ON GOOGLE GLASS
by
Jacob Lites
Advisor
Dr. Michael Branton
A senior research project submitted in partial fulfillment of the requirements
for the degree of Bachelor of Science
in the Department of Mathematics and Computer Science
in the College of Arts and Science
at Stetson University
DeLand, Florida
Spring Term
2015
TABLE OF CONTENTS
TABLE OF CONTENTS……………………………………………………………………….....2
ABSTRACT…………………………………………………………………………………….…3
INTRODUCTION..………………………………………………………….……………………4
RELATED WORK………………………………………………………………………………..5
ARCHITECTURE…..……………………………………………………………….……………8
Software ………………………………………………………………….………….......9
Google Glass Development Kit & Android Development Kit……………….......9
Arduino Programming Language………………………………………………..11
D3.js………………………………………………………………………..…….11
Web Server………………………………………………………………….……11
Hardware ……………………………………………………...…………………………12
Arduino……………………………………………………………………….….12
Raspberry Pi………………………………………………………………….…..13
Google Glass………………………………………………………………..……13
Sensors…………………………………………………………………….….….14
Temperature Sensor………………………………………………….…..14
Grove Dust Sensor…………………………….……………………...….14
Logarithmic Light Sensor……………………………………………..…14
Grove Moisture Sensor……………………………………………..…....14
Grove Sound Sensor……………………………………………….…….14
IMPLEMENTATION……………………………………………………………………………15
Overview…………………………………………………………………………...…….15
Layout………………………………………………………………………………..…..16
Data Gathering…………………………………………………………….………..…....16
Browser Access………………………………………………………………………..…20
Google Glass Access……………………………………………………………………..26
FURTHER WORKS…………………………………………………………………………..…31
BIBLIOGRAPHY..……….…………………………………………………………………...…32
ABSTRACT
We build a system that combines the augmented reality capabilities of Google
Glass with the growing ecosystem of internet-connected items across an
environment, the Internet of Things. This system gathers information from an
environment and stores it to be accessed either through the web or by scanning
QR codes with a custom app on the Google Glass. On the Glass interface, scanning
a QR code brings up a real-time infographic for the associated information. The
web interface mirrors this view and adds more in-depth analysis of the data.
INTRODUCTION
Our project was inspired by the desire to help usher in the future, allowing
important real-time information from around the house to be presented to the user
in a convenient and quickly comprehensible way. This required several pieces: a
system to gather data, a system to transmit this data to a server, and a system to
display this information to the user. Throughout, emphasis was placed on
ease of use and convenience for the user.
For the data gathering system, we ordered a collection of sensors to attach to
Arduino Unos for gathering information from their environment. These sensors
included a temperature sensor, an ambient noise sensor, an air particulate sensor,
an ambient light sensor, and a soil moisture sensor, each explained in more detail
below. The Arduino Unos gathered and processed the data over time, calculated
average values, and sent them through Radio-Frequency Identification (RFID)
transmitters to a Wi-Fi enabled Raspberry Pi.
This Raspberry Pi compressed the data into an acceptable format and sent it to
an online MySQL database, where it was stored. The data was then accessed in one
of two ways. When accessed through a web browser, the data was downloaded,
processed through custom graphing scripts written with the D3.js library, and
displayed to the user. The resulting graphs are stored on the server to be
accessed by Google Glass users. When a Google Glass user scans the appropriate
QR code, these graphs are sent to the Google Glass through Java sockets and
displayed to the user.
This project makes use of several concepts that are the focus of much current
research. It uses an Internet of Things (IoT) framework to gather data from
around an environment, and displays that information to the user in an
augmented reality (AR) system built on the Google Glass hardware.
RELATED WORK
The easy accessibility of augmented reality has only recently become possible
with the advancement of real-time computing and analysis capabilities. The phrase
“Augmented Reality” was coined in 1992 by researcher Tom Caudell, a worker at
Boeing at the time [7]. Augmented reality systems integrate the virtual world and
the physical world, usually taking input from a camera or other sensor and
showing the user useful information along with that input. AR is closely related
to virtual reality (VR), in which an environment simulates an experience for the
user. The difference is that AR takes information from a real-life environment
and supplements it with data, instead of creating the environment itself.
The history of usable AR and VR systems begins with the Sword of Damocles,
created by Ivan Sutherland in 1965 [5]. The Sword of Damocles was a headset worn
by the user that projected a three-dimensional line drawing across cathode ray
tubes placed in front of the eyes. These were situated so that the user could
still see through them, viewing both the drawings and the room they were in at
the same time. The system calculated rotation and scaling matrices based on
information from the helmet, obtained either through ultrasonic scanning or
through a connected pillar, to monitor the direction and location of the user.
This was one of the first examples of digital information being overlaid on a
real-world view, augmenting the vision of the user.
The idea was further built upon in 1997, when Steven Feiner and associates
created a portable AR system for exploring a university campus that showed users’
locations around the school, along with information and direct links to websites
for the buildings [8]. Location was calculated with GPS, direction with an
orientation tracker in the helmet, and a radio modem provided the connection to
the internet. These days, the processing speeds of portable computers like cell
phones and video cameras are making augmented reality ever more accessible and
complex. Near-omnipresent internet connections over 4G allow information from
all around the globe to be transferred to the user in real time to supplement
the reality around them.
The phrase “Internet of Things” is defined by the U.S. National Intelligence
Council as “the general idea of things, especially everyday objects that are
readable, recognizable, locatable, addressable, and controllable via the
Internet - whether via RFID, wireless LAN, wide-area network, or other
means” [9]. One of the most interesting aspects of the Internet of Things is
that while devices commonly considered computers (smartphones, desktops,
tablets) are indeed part of the IoT, the idea expands to include everyday
household items like fridges, thermometers, air conditioning units, and even
sprinkler systems. One can even create one’s own IoT-enabled technology to serve
a particular need. One requirement for IoT-enabled devices is a connection to
the internet. The internet has become a staple of common life, with 75 percent
of people on Earth expected to have internet access in 2015 [11]. With wireless
data becoming more and more popular in the cellular phone market, and Wi-Fi
commonplace in the American home, IoT devices are just beginning to be a
feasible prospect. Wireless networks are being adapted, and even created, to
serve IoT systems, with a cellular network in the California Bay Area being
built solely for the purpose of IoT devices [10].
A very popular and important technology in the IoT space is RFID
communication. RFID (Radio Frequency Identification) allows for the passage of
information through radio waves. RFID read ranges can reach anywhere from about
300 feet for battery-powered transmitters down to 3 feet for high-frequency
signals, although much depends on the power supply, the frequency of the radio
waves, and the size of the antenna. RFID tags can be powered from afar, although
the distance at which this works depends on the power of the scanner and its
hardware, usually around 20 to 30 feet. This allows for a system that transmits
information, and uses energy, only when requested and from a distance. Tags can
also carry their own power source and continuously communicate with an RFID
receiver from afar [12]. This has far-reaching implications for IoT, providing a
convenient and energy-conserving way for devices to communicate with a central
hub and back. Students at the University of Washington put together an entire
interactive IoT experience for nametags and tools around their labs, using an
RFID Ecosystem to keep track of items and people [13].
Finally, Google Glass has made large headway in the augmented reality and
wearable computing fields. Glass is a prime example of the new concept of
wearable computing: computers worn on the body to supplement the wearer’s daily
life. Wearable computing has many applications, from sensing the user’s current
status and location, to providing information based on GPS coordinates, to
augmenting the user’s memory about what they wanted to purchase at the grocery
store [4]. Things become even more useful when the wearable computer has a
camera and a hands-free screen, which allows real-time processing of camera data
to shape what is displayed on the screen. However, it has been emphasized that
“Wearable computing...is based on the idea that computing is NOT the primary
task”, and that the wearable computer should instead focus on augmenting the
user’s senses [18].
The Glass has pros and cons. Glassware (Glass’s name for native apps) is not
downloaded directly to the device, but instead interfaced with through Google’s
online systems. This allows computations to be done separately from Glass’s
hardware, but also keeps control of all information flow solidly in Google’s
hands. It also keeps data confidential (to Google and the user) and eliminates
the need for any software download or update process, but limits software
accessibility in non-connected areas [17]. Glass’s supplemented vision has paved
the way for hands-free information whenever the user may need it, and has
already had quite an impact in the medical community [15]. It also has everyday
applications: the onboard RAM and processor are enough to interpret the camera
data, detect faces, and run an algorithm to analyze what emotion each face is
currently displaying [16]. Google Glass is coded using Google’s Android API, but
comes with two additions on top of it: the web-based Mirror API for building
Glassware, and the GDK add-on, which allows access to the Glass’s touchpad,
camera, and the rest of the sensors in the hardware.
ARCHITECTURE
The architecture of the project is as follows. There are circuits for gathering
various types of information from the environment, with a different sensor for
each type of data. We extract the following information from the environment
using sensors: temperature [28], soil moisture levels [30], ambient light
intensity [31], ambient noise [33], and air quality [34]. These sensors report
their readings back to interspersed Arduinos, which convert the analog data into
a digital format. From there, an RFID wireless transmitter sends the data to the
data hub. The data hub consists of an RFID wireless receiver that receives data
from all of the circuits, plus an Arduino to parse and organize that data. An
Arduino is a programmable, open source microcontroller, convenient for simple
data collection and transmission. After the data is in a usable format, it is
passed serially to a Raspberry Pi. The Raspberry Pi uses a Wi-Fi adaptor to send
the data to a server, where it is stored in a MySQL database.
The database has a rather simple design, classifying data by both data type
(temperature, light, etc.) and by date, so as to keep track of the data over time. This
database is accessed by the webpage that displays the processed information.
The second part of the implementation concerns the Google Glass’s access to
the information on the server. Using a custom app, the user scans a QR code
associated with a specific data collector. This QR code directs the Google
Glass’s app, or “Glassware”, to a server containing all of the information that
the associated data collector has recorded over the past week. The Glassware
then retrieves a precompiled graphic generated by the website and shows it to
the user on the Google Glass.
The Glassware is written using the Android API [24] as well as the Google
Glass’s add-on, the GDK [25]. After using a QR code to access the information
through the internet, a pre-generated graphical image is sent back to the Google
Glass. This graph is designed to display the data in a way that is presentable
for an augmented reality system [3].
Finally, the server hosts a browser-accessible site that presents the
information for each monitored thing in two ways: a visually stimulating,
up-to-date infographic (the same one presented on the Google Glass), and a more
in-depth analysis that the user can expand for each data type. The website
compiles the data into colorful, easy to understand bars using custom graphing
scripts created with the D3.js library [37]. The colors in the bars shift along
a gradient to show change over time in a simple, understandable way. In today’s
world of quick-fire information, bold, boxy colors allow for rapid analysis of
details without diving into too many specifics.
Software
Google Glass Development Kit (GDK) & Android Development Kit (ADK)
The Google Glass Development Kit (GDK) is based on the Android Development Kit
4.4.2, API 19. The Android OS is a Linux system that takes a unique approach to
how its apps interact. Apps in the Android OS are treated as users, which
restricts their permissions so each can only access its own data [22]. They also
run in their own virtual machines, providing privacy and security. Each app is
made up of Activities, the different actions that the app can take. Apps can
call upon the activities of other apps to reduce code and leverage other apps’
capabilities. For instance, the camera app can call the email app’s “compose”
activity to compose an email that already has a recently taken picture attached
to it. The Android Development Kit relies upon “intents” for these activities to
communicate. Intents specify the activity they intend to communicate with, along
with any data that needs to be passed from the requesting activity to the
requested one.
Because the Android OS has been adapted to many different hardware
specifications, the ADK is designed to be adaptable through XML files that
provide the bulk of the data the app will use. For instance, creating strings or
integers directly in the main code of an Android or Glass app is discouraged;
all such data should be pre-set in the XML files, so it can adapt to a different
screen size, camera type, or device.
Android apps are written in Java, as are Glass apps. Google Glass can parse and
display Android apps with a few adjustments and input issues. However, to use
any of the sensors or features specific to the Google Glass, one must implement
the GDK, an add-on to the ADK meant for easy conversion from smartphone apps to
Glass apps. The GDK adds support for the “cards” system used on Glass (as
explained below), voice recognition, the sensors unique to the Glass, and the
built-in gesture detector. These two development kits come together to make apps
for Google Glass easy and convenient to build.
Another unique feature of Google Glass is its ability to use web-based APIs to
send and process information from the Glass without taking up processor time or
storage. As noted earlier, these APIs are called “Glassware”, and they
conveniently shift the processing power required for actions away from the
Glass’s somewhat limited CPU. The Mirror API is used to craft this Glassware,
which can then be downloaded from Google’s official Glassware collection [23].
The Glass’s “card” system is useful for at-a-glance information and updates,
but can provide an interactive experience as well. Static cards can have their
own menu options; for example, one might want to respond to a text received, or
have an email read aloud while driving. Should a static card and its options not
be enough for a developer, Live Cards are available. Live Cards stay on the
timeline to the left of the main menu until they have completed their task or
are shut down by the user. These cards suit ongoing events, like a stopwatch or
a compass, instead of events at fixed points in time like an email or a text,
and they too have menu options to modify their content or function. Finally,
should the developer require the user’s complete focus while the app is running,
there are Immersions. Google advises against Immersions unless they are
necessary, as they go against the hands-free and simplistic environment that
Google Glass is meant to provide. They are more difficult to code and require
control over all interruptions to make sure they remain front-and-center for the
user. They are appropriate, however, when gestures that would normally move the
user forward or backward through the timeline, like swipes, need to be used as
inputs for the local app.
One of the selling points of the Google Glass is its completely hands-free
nature, which puts large emphasis on its voice interface. Voice commands are
pre-programmed into the Google Glass by Google, and new ones can be suggested if
an app requires them. The simple interface involves linking a constant
associated with the voice command to the specific menu option, and the rest is
taken care of by the Glass’s voice recognition software. Should the manual menu
be opened by a tap on the side, the interface allows for tactile input as well.
These options can be changed through custom code but, for the sake of
maintaining a familiar user interface, should usually be left alone unless
changes are specifically warranted.
Because the Glass is made to notify the user of many different events, much of
each app’s structure is based on events raised by the Glass. When coding, one
must handle each time the user leaves the card, refocuses on the card, attempts
to take a picture while using the card, uses an interface option, or when any
sensor receives relevant information. These events drive the user’s experience,
and they mesh perfectly with the previously mentioned intent system. Because
these events could happen at any moment during the Glass app’s execution, a
simple system of passing focus to another “intent”, or action, makes for a
malleable user interface that can react quickly to whatever may occur.
One example is the activation of the camera. The Google Glass comes with a
button on the upper right-hand side that will stop any given action (unless
overridden) to take a picture. The system is simple: programs written for the
Glass set aside a section of code for when the camera button is pushed. When the
event occurs, the app yields focus and control of the camera back to the main
system long enough for the camera app to snap a picture, at which point the
camera and focus are passed back, again through an event, to the app or card
that was in use when the button was pushed.
Arduino Programming Language
The Arduino programming language is a modified version of C, with extra
methods and default variables that help with the complications of interacting
with physical sensors and analog inputs. A program consists of an initial setup
method, followed by a loop method that the Arduino runs continuously while it
has power. Methods like digitalRead and analogRead make working with sensors and
switches far simpler than usual, and the Serial output makes debugging easier.
Finally, there is a multitude of libraries online for almost any piece of
hardware imaginable, making interfacing with switch matrices or multiple LEDs a
much more convenient process.
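The setup/loop structure described above can be sketched in plain C. This is only an illustration: analogRead is a stub standing in for the Arduino library call (on a real board it returns a 0-1023 ADC reading), the pin assignment is hypothetical, and the averaging harness imitates the runtime calling loop forever.

```c
#include <assert.h>

/* Hypothetical stub for the Arduino analogRead(); a real board would
   return a varying 0-1023 reading from the given analog pin. */
static int analogRead(int pin) { (void)pin; return 512; }

static const int tempPin = 0;   /* assumed pin assignment */
static long cum = 0;            /* running sum of readings */
static int  count = 0;          /* number of readings taken */

/* setup() runs once at power-on; here it just resets the accumulators. */
static void setup(void) { cum = 0; count = 0; }

/* loop() runs repeatedly while the board has power; each pass takes
   one reading and folds it into a running average. */
static void loop(void) {
    cum += analogRead(tempPin);
    count++;
}

/* The Arduino runtime calls loop() forever; we imitate that with a
   bounded loop and return the average reading so far. */
static long averageAfter(int iterations) {
    setup();
    for (int i = 0; i < iterations; i++) loop();
    return cum / count;
}
```

The cum/count accumulator pattern here mirrors the averaging the project performs before transmitting data.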
D3.js
D3.js is a JavaScript library for manipulating documents based on data [37].
It focuses on creating useful and visually appealing graphical interpretations
of data, and makes data access, modification, and presentation simple. We
created a JavaScript object that holds specifications for each graph, along with
the file from which to grab the data. The object is then run with a specific
location in the page within which to place the graph.
Web Server
Finally, we used a Linux, Apache, MySQL, PHP (LAMP) server to store the
information and display it to users over the internet. The MySQL database holds
the data, and PHP code parses the information into a displayable, up-to-date
analysis of the data.
Hardware
This project makes use of information-gathering sensors, along with Arduinos
and a Raspberry Pi, to gather data and send it to the online server. The sensors
are attached to several circuits around the environment; each circuit has an
Arduino transmitting to a single receiver Arduino, which passes the information
on to the Raspberry Pi. The Raspberry Pi formats the data and sends it to the
MySQL database on the server.
Arduino
Arduinos are open-source microcontrollers built around the ATmega328 chip. On
the default boards, this chip is programmed through a USB interface in a
specialized Arduino language, although it is easy to replace the default
toolchain with a program of your choice. The default Arduino firmware for
programming is based on a custom variant of the C programming language. This
project makes use of the Arduino Uno, one of the more basic types of Arduino.
The onboard ATmega328 chip comes with 32 KB of flash memory (although 0.5 KB is
used by the bootloader), 2 KB of SRAM, and 1 KB of EEPROM, and has a clock speed
of 16 MHz [27].
The Arduino board can be powered by a 9-volt battery or an AC-to-DC adapter.
The recommended input voltage is 7 to 12 volts, and the board provides 5-volt
and 3.3-volt supplies for circuits built around it. The Uno comes with 14
digital input/output (I/O) pins and 6 analog input pins. Digital I/O is for
control and data gathering from simple on/off devices, while the analog input
pins are used to gather ranging data, like the changing temperature or ambient
light levels inside the environment.
The ease with which the Arduino can be programmed to complete small tasks
while taking in and putting out analog and digital signals makes it perfect for
a small data-gathering circuit. Due to the Arduino’s popularity, there are many
software libraries not only for specific hardware that may be implemented in a
circuit, but also for stabilizing incoming analog data, simplifying difficulties
in the engineering of the circuit.
Raspberry Pi
The Raspberry Pi is a small computer simplified to a bare-bones structure. It
comes with an ARM1176JZF-S processor clocked at 700 MHz and a VideoCore IV GPU
[36]. Its first iterations came with 256 MB of RAM, since upgraded to 512 MB. It
uses SD or MicroSD sockets for storage, including the operating system and local
memory. This project uses the B+ model, which includes 4 USB ports for
communicating with the board, along with a micro-USB power supply and an HDMI
port for displays.
Google Glass
Google Glass runs Google’s own Android operating system, version 4.4. It has a
high-definition display and a 5-megapixel camera, along with Wi-Fi and Bluetooth
capabilities. It has one gigabyte of RAM (two GB in the newer edition) to
process its gyroscope, accelerometer, and magnetometer, along with its ambient
light sensing and eye sensor [14]. It runs on a TI OMAP4430, a dual-core CPU
clocked at 1 GHz that specializes in processing visual and audio data. Google
Glass is an augmented-reality system that displays information from the camera,
the user’s cell phone, and the internet in a heads-up display set just above the
user’s regular field of vision.
Sensors
Temperature Sensor
The TMP36 was used in this project to gather temperature data from the
environment. It runs from 2.7 to 5.5 volts and is calibrated directly for
Celsius measurements, with an accuracy of ±2 °C over an operating range of
-40 °C to +125 °C. It outputs a voltage linearly proportional to the centigrade
temperature [28].
Grove Dust Sensor
The Grove dust sensor gauges the air particulate content of its environment by
measuring Low Pulse Occupancy (LPO) time, and is responsive to particulates with
a diameter of 1 micrometer. It runs between 4.75 and 5.25 volts, and can
register up to 28,000 pieces per liter. The reading can be used to estimate the
particle count per 0.01 cubic feet [34].
Logarithmic Light Sensor
The GA1A1 log-scale light sensor returns the current lux of the environment on
a logarithmic scale, staying accurate across a large range of light values. It
is powered with 2.3 to 6 volts, and has an onboard 68K load resistor that caps
the output at 3 volts. It measures values from 3 to 55,000 lux [31].
Grove Moisture Sensor
The Grove soil moisture sensor measures soil resistivity: the more resistance
there is in the soil, the drier it is. It takes a voltage from 3.3 to 5 volts,
with a current of 0 to 35 milliamps. It outputs an analog value from 0 to 950,
although the output does not correspond to any actual unit. Less than 300 means
dry soil, 300 to 700 means humid soil, and 700 to 950 means the sensor is most
likely in water [30].
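The thresholds above map directly onto a small classifier. A minimal sketch in C follows; the enum and function names are ours, chosen for illustration, while the numeric cutoffs come from the sensor documentation cited above.

```c
#include <assert.h>

/* Categories matching the Grove moisture sensor documentation. */
typedef enum { SOIL_DRY, SOIL_HUMID, SOIL_WATER } soil_status;

/* Classify a raw reading (0-950): <300 dry, 300-700 humid, >700 water. */
static soil_status classify_moisture(int raw) {
    if (raw < 300) return SOIL_DRY;
    if (raw <= 700) return SOIL_HUMID;
    return SOIL_WATER;
}
```

A sensor node could use such a classifier to decide when to flag the plant as needing water, rather than transmitting raw values alone.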
Grove Sound Sensor
The Grove sound sensor detects the sound strength of the environment. It
operates between 4 and 12 volts, drawing between 4 and 8 milliamps. It has a
microphone sensitivity of 48 to 52 decibels and a microphone frequency range of
16 to 20 kilohertz, measuring the data with a simple microphone based on the
LM358 amplifier [33].
IMPLEMENTATION
Overview
This project intended to display information to the user in a quick and easy
way. As can be seen, the website displays an easy-to-understand bar for each
data type that can be interpreted with minimal explanation. The temperature bar
follows the common temperature scale from 0 to 100 degrees Fahrenheit, while the
air particulates bounce around to give the user an idea of how cluttered the air
is at the current moment. The ambient noise bar changes in amplitude to let the
user know when their environment is louder or quieter, and the ambient light bar
scales between black and white to indicate the lux level in the home. Finally,
the soil moisture bar scales in opacity between very moist and very dry to let
the user know when they should sprinkle some water on their plant. The graphs
were designed taking into account the advice of Edward Tufte and his ideas on
data-ink and presentation efficiency [41]. While not a perfect implementation,
emphasis was placed on presenting the data accurately, keeping graphs
interesting, and allowing the data to speak for itself.
Each division can be clicked to expand into more information about that data
type, including a comparison of the past 24 hours of data with the 24 before it,
and a high, low, and average of the values for the day. Finally, there is a
synthesis bar that gives the user a quick overview of the current status of the
home, and can be clicked to view a more abstract representation of that status.
This all fits the goals of gathering information and presenting it to the user
in a convenient and useful way, and can serve as a proof of concept for similar
projects in the future.
As for the Google Glass augmented reality system, it functions as follows.
Opening the app brings one to a start page, which can be clicked to bring up a menu.
Selecting “Go” will bring up a camera for scanning a QR code. Should an appropriate
QR code be scanned, the Glass will fetch the corresponding custom bar graph from the
server and display it to the user. The implications of this are that the QR codes can be
placed wherever is convenient. As an example, the QR code corresponding to the soil
moisture may be placed on the potted plant, or the temperature QR code may be placed
next to the thermostat. These are just a few of the many possible uses of this IoT AR
display.
Layout
The system ended up being fairly complex, as seen in the figure on the left.
The data was gathered by the sensors, processed by the Arduinos, and transmitted
through RFID to a central Arduino, where the information was prepared to be
passed to a Raspberry Pi. From there, it was received and uploaded via a Python
script to the MySQL server. When a user accessed the site from their browser,
the page loaded and ran the JavaScript code written for it. We made use of
D3.js’s simplified data-gathering code to request a JSON object from a webpage,
with the graph being generated after all of the data was loaded. We pointed it
at a PHP script that gathers data from the MySQL server and emits it as a JSON
object for the D3.js library to parse efficiently. The script takes the data and
generates a real-time graph based on the information available, then uses AJAX
to send the graph back to the server to be saved and accessed later. Should a
user request access via the Google Glass, they start up an app made for this
project. The app scans and processes a QR code, then connects to a PHP script
running on the server and sends it the information from the QR code. The script
processes this information and sends back the appropriate real-time graph in PNG
format, which the application then displays to the user.
Data Gathering
Data gathering was done using sensors, breadboards, and Arduinos, with the
information processed in the Arduino IDE. The data was read from each sensor
through an analog input and processed to make sense in a real-world scenario.
//Get raw, analog data
tempReading = analogRead(tempPin);
lightReading = analogRead(lightPin);
/* PROCESSING TEMPERATURE DATA */
float temperatureF = rawToF(tempReading);
// tempF * 100 because we don't wanna send a float. That's confusing
temperatureF *= 100;
int tempFInt = round(temperatureF); //round to 100ths place
Serial.print(tempFInt/100); Serial.print("."); Serial.print(tempFInt%100);
Serial.println(" degrees F"); //Print that temp!
/* PROCESSING LIGHT DATA */
float lux = rawToLux(lightReading);
int luxInt = round(lux);
Serial.print(luxInt); Serial.println(" Lux");
This is an example of the processing done to the raw light and temperature data,
with the called methods being displayed below.
float rawToF(int temp)
{
  float voltage = temp * 3.3;  // 3.3 volt reference
  voltage /= 1024.0;           // account for the ADC's 0-1023 range
  // Calculate Celsius
  float temperatureC = (voltage - 0.5) * 100;
  // Calculate Fahrenheit
  float temperatureF = (temperatureC * 9.0 / 5.0) + 32.0;
  // Return Fahrenheit
  return temperatureF;
}
float rawToLux(int raw)
{
  // logRange and rawRange are globals defined elsewhere in the sketch
  float logLux = raw * logRange / rawRange;
  return pow(10, logLux);
}
The data was then sent in digital form to a connected RFID transmitter. The
project made use of VirtualWire [38], a library that makes RFID transmission and
reception easier. However, the data still had to be converted into a byte array
for transmission, and back into actual data upon retrieval. Below is an example
of the data sent from one of the Arduinos to the central Arduino.
//PROCESS TEMP
//Make the temperature byte array for sending, then send it.
//(cum and count are accumulated in the main loop; luxByte is built the same way.)
byte tempByte[6];
long tempAve = cum/count;
integerToBytes(tempAve, tempByte);
tempByte[4] = B0001;          //dataType
tempByte[5] = identifier++;   //Identity of this data
if(identifier >= 100)
  {identifier = 0;}
//Send dat data
sendData(tempByte);
sendData(luxByte);
sendData(luxByte);
Along with the methods used in the above code.
void integerToBytes(long val, byte b[6]) {
b[0] = (byte )((val >> 24) & 0xff);
b[1] = (byte )((val >> 16) & 0xff);
b[2] = (byte )((val >> 8) & 0xff);
b[3] = (byte )(val & 0xff);
}
void sendData(byte b[6])
{
for(int i = 0; i < 5; i++)
{
vw_send(b, 6);
vw_wait_tx(); // Wait until the whole message is gone
}
}
This processing and sending method was used for all data types, and every
reading was transmitted to the same central Arduino, which gathered the data and
prepared it to be sent to the MySQL server through the Raspberry Pi. A single point of
data would often be lost between the RF transmitter and receiver, but sending each
frame five times over (as seen in the sendData method above) solved the problem.
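The 6-byte frame just described (four big-endian value bytes, then a data-type byte and a rolling identifier) can be modeled in Python with the struct module. This is an illustrative sketch; pack_frame and unpack_frame are hypothetical names, not part of the project code.

```python
import struct

def pack_frame(value, data_type, identifier):
    """Build the 6-byte frame: a signed 32-bit big-endian value,
    a data-type byte, and an identifier byte that rolls over at 100."""
    return struct.pack(">iBB", value, data_type, identifier % 100)

def unpack_frame(frame):
    """Recover (value, data_type, identifier) from a received frame."""
    return struct.unpack(">iBB", frame)
```

A temperature of 77.12 °F would travel as the integer 7712 with type byte 1, matching the float-avoiding scheme in the Arduino sketch.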
The code for gathering information was rather simple. It consisted mainly of
receiving a message, and making sure that this point of data hadn’t been seen before.
The main code can be seen below.
boolean alreadySeen = false;
for(int j = 0; j < 5; j++)
{
if (!alreadySeen)
{
if(buf[5] == records[j])
{alreadySeen = true;}
}
}
if(!alreadySeen)
{
records[4] = records[3];
records[3] = records[2];
records[2] = records[1];
records[1] = records[0];
records[0] = buf[5];
String message = "";
message.concat(buf[4]);
message.concat(",");
message.concat(tempFloat);
Serial.println(message);
}
Because the Arduino was communicating with the Raspberry Pi serially (that is,
through the USB port), we had to be careful about what was printed through Serial.print,
as this channel was reserved for the Raspberry Pi. If the data value had not already been
seen (that is, if the unique identity of the data point wasn't in the array of records), then
the data point was sent to the Raspberry Pi serially and the records were updated.
Otherwise, the message was ignored. This part of the project was made much simpler by
the VirtualWire library, whose vw_wait_rx and vw_get_message methods handle waiting
for and retrieving incoming messages.
The Raspberry Pi has a Python script running constantly, listening to the serial
port and uploading any data it receives to the MySQL database on the server. The MySQL
database is set up in a rather simple manner. There are only two tables: one for data types
(i.e. temperature, ambient noise, and light) and another holding the actual data along
with its timestamp. The timestamp is generated when the data is uploaded, and the
data's ID is generated automatically. This makes accessing the data very simple with built-
in MySQL commands to select data from the previous 24 or 48 hours.
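The two-table layout just described could be sketched as follows. This is illustrative only: it uses SQLite in place of MySQL so the example is self-contained, the column names data_type_id, data_val, and timestamp come from the project's own insert and select code, and the columns of the data_types table are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_types (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE data (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,  -- generated automatically
    data_type_id INTEGER REFERENCES data_types(id),
    data_val     REAL NOT NULL,
    timestamp    DATETIME DEFAULT CURRENT_TIMESTAMP  -- set at upload time
);
""")
conn.execute("INSERT INTO data_types (id, name) VALUES (1, 'Temperature')")
conn.execute("INSERT INTO data (data_type_id, data_val) VALUES (1, 77.12)")

# MySQL used DATE_SUB(NOW(), INTERVAL 1 DAY); this is the SQLite equivalent.
rows = conn.execute(
    "SELECT data_val FROM data "
    "WHERE timestamp >= datetime('now', '-1 day') AND data_type_id = 1 "
    "ORDER BY timestamp"
).fetchall()
```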
The Python code itself is rather simple: the Pi listens to the Arduino serially,
accessing the data with a simple readline() inside an infinite loop. After data is received,
it is formatted and sent to the server.
ser = Serial('/dev/ttyACM0', 9600)
go = True

def getData():
    x = ser.readline()
    db = MySQLdb.connect(host="host",
                         user="user",
                         passwd="pass",
                         db="database")
    cur = db.cursor()
    print(x)
    x = x.replace('\r', '')
    x = x.replace('\n', '')
    data = x.split(',')
    command = "INSERT INTO data (data_type_id, data_val) values (" \
              + data[0] + ", " + data[1] + ");"
    cur.execute(command)
    db.commit()  # MySQLdb does not autocommit by default
    cur.close()
    db.close()
    print("data_type_id: " + data[0])
    print("data_value: " + data[1])
    print("command executed")
    print("-----------")

while go:
    getData()
This script runs constantly. One of the major issues faced was a faulty internet
connection, which would cause the script to crash and require a manual restart. After
reconfiguring the Wi-Fi adaptor purchased for the Raspberry Pi, the problem was mostly
resolved, although the script still crashes roughly once a week. Future projects could
add a simple catch to keep the code from crashing and to open a new connection if the
current one fails.
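The suggested catch might look something like this (a hypothetical sketch; get_data stands in for the script's existing serial-and-upload function):

```python
import time

def run_forever(get_data, delay=5, log=print):
    """Keep the collector alive: swallow transient errors (a dropped network
    or serial connection) and retry after a short delay."""
    while True:
        try:
            get_data()
        except KeyboardInterrupt:
            raise                      # still allow a manual stop
        except Exception as exc:       # connection dropped, DB unreachable, ...
            log("collector error, retrying: %s" % exc)
            time.sleep(delay)
```

Reopening the database connection inside each call (as the existing script already does) means a retry naturally gets a fresh connection.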
Browser Access
Browser access is a very simple process until we arrive at the Javascript for the
page. The PHP for the page simply has six divisions, and the CSS and formatting
Javascript simply modify the sizes and appearance of each section when selected. The
graph generation however is a completely different beast.
We decided to display qualitative data to the user first, so that the user could
gain a quick understanding of their environment without in-depth analysis. To achieve
this, we created for each data type a single bar filled with various representations of the
past 24 hours. For temperature and light, we used a gradient between colors to show the
user how the value has changed over time. For soil moisture, we used a blue color and
opacity to let the user know when their water is waning. For ambient noise, we showed
the user a simulated sound wave that increases in amplitude as the ambient noise in the
room rises. Finally, for air particulates, we created an animated bar with semi-transparent
squares that move to simulate dust particles in the air. Each of these gives a quick
summation of the history of the status of the house. Should more information be
requested (with a click on the designated section), the division grows to take up the full
screen, and more graphs are shown below the initial bar: a line graph comparing the past
24 hours with the 24 hours before those, the minimum, maximum, and average values for
the past two days, and finally a graph showing change over time. These simple line
graphs and text data allow for a more quantitative experience.
Because each graph needed the same details to be turned into a graph, we created
a barFactory class to store the data and generate any of the graph types should it be
requested. Each factory took in factors like height, margins, the highest and lowest
expected data value, unit labels, colors for gradients, and links to the data. This way, the
data could be stored in one location and not need to be replicated more times than
necessary.
var tempFact = new barFactory()
.setTitle("Temperature")
.setHeight($("#sec1").height())
.setWidth($("#sec1").width())
.setDataMax(100) //Max temp
.setDataMin(32) //Min temp
.setUnits(" °F")
.setColors(["black","blue","green","red","yellow","white"])
.setDataLink("/augmentedFactory/php/tempData.php")
.setDataLink2("/augmentedFactory/php/yestTempData.php")
.setTooltip("Temperature here is measured in Fahrenheit ");
tempFact.singleBar("#chart1Div");
var sec1Chil = $("#sec1 .moreInfoDiv").children().toArray();
tempFact.lineComp(sec1Chil[0]);
tempFact.maxMinAve(sec1Chil[1]);
tempFact.changeOverTime(sec1Chil[2]);
We generated these graphs using the D3.js library. D3 comes with many
extremely useful tools, like scales to keep data in relevant ranges and dynamic
graphical elements to add to a graph. To start, we created scales to keep the data in
the correct range.
var airScale = d3.scale.linear()
.domain([this.dataMin, this.dataMax])
.range([0, 200]);
var parseDate = d3.time.format("%Y-%m-%d %H:%M:%S").parse;
var xScale = d3.time.scale()
.range([0, this.width]);
The D3 library also provides extremely convenient ways to retrieve data, with a
simple JSON object retrieval system. When combined with PHP's ability to easily pull
data from MySQL databases, this creates a fluid, dynamic system for retrieving
up-to-date data. Data is retrieved with the following template.
d3.json(dataLink, function(error, data)
{
//Analyze the data[]
});
This method retrieves a JSON object asynchronously, and runs the code inside of
the callback after all of the data has been retrieved. If a URL is passed in as the data link,
it will request the data from that address. We created a quick PHP script to retrieve data
from our MySQL server and put it into JSON format for D3 to access.
$conn = new mysqli($servername, $username, $password, $dbname);
if($conn -> connect_error)
{
die("DB connection failed: " . $conn -> connect_error);
}
$tempSql = "select * from data where DATE_SUB(NOW(), INTERVAL 1 DAY)
<= timestamp AND data_type_ID=1 ORDER BY timestamp";
$tempResult = $conn->query($tempSql);
$data = array();
// fetch_assoc already walks every row in the result set
while ($row = $tempResult->fetch_assoc())
{
$curVal = array();
$curVal["timestamp"] = $row["timestamp"];
$curVal["data_val"] = $row["data_val"];
$data[] = $curVal;
}
echo json_encode($data);
$conn -> close();
After the data is retrieved by the D3 method, the data can be analyzed and
processed to create the graph. This is made very simple with D3.js methods.
//Placing the title
var titleText = chart.append("text")
.attr("x", "50%")
.attr("y", (this.height/2) - (this.barHeight/2) - 30)
.text(this.title);
For the color graphs, it’s a simple process of creating equidistant points between
the dataMin and dataMax for each of the colors to display, creating a nice gradient
between the two. If there’s only one color, then it’s assumed the user wants the gradient
to use opacity instead of colors.
if(this.opacity)
{
//if only 1 color, goes between opaque and see-through
colorScale = d3.scale.linear()
.domain([this.dataMin, this.dataMax])
.range([0.0, 1.0]);
}
else
{
//Otherwise, we take equidistant measures between dataMax and
//dataMin and apply the appropriate colors
var dataDiv = [];
//dataColors - 1 steps between dataColors points keeps the stops equidistant
var step = (this.dataMax - this.dataMin) / (this.dataColors - 1);
var curStep = this.dataMin;
for(var i = 0; i < this.dataColors-1; i++)
{
dataDiv[dataDiv.length] = curStep;
curStep += step;
}
dataDiv[dataDiv.length] = this.dataMax;
colorScale = d3.scale.linear()
.domain(dataDiv)
.range(this.colors);
}
After this, it’s easy to calculate the location and color for each part of the gradient,
placing definite color points where we have them and allowing the gradient to do the
calculating of the colors between our points
var colorArrayAddition = [];
//For each gradient color point, we scale its spot on the date scale,
//divide by the width, then multiply by 100 to turn it into a percentage.
//Simply, this gives us the percent distance for each definite color
colorArrayAddition["offset"]=(100.0*xScale(d.timestamp)/tempWidth) + "%";
if(tempOpacity)
{
//If there's only one color, scale the opacity with an rgba string
var rgchartDivject = d3.rgb(tempColors[0]);
var colorString = "rgba(" + rgchartDivject.r + ", " +
rgchartDivject.g +", " + rgchartDivject.b + ", " +
colorScale(d.data_val) + ")";
colorArrayAddition["color"] = colorString;
}
else
{
//Otherwise we use our fancy colorScale
colorArrayAddition["color"] = colorScale(d.data_val);
}
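The gradient logic boils down to two small formulas: equidistant anchor values for the colorScale domain, and a percent offset for each data point's stop. A Python sketch (with hypothetical helper names) of the same arithmetic:

```python
def color_stop_positions(data_min, data_max, n_colors):
    """Equidistant data values at which each gradient color is anchored."""
    step = (data_max - data_min) / (n_colors - 1)
    return [data_min + i * step for i in range(n_colors)]

def percent_offset(x, width):
    """Horizontal position of a gradient stop as a percentage of bar width."""
    return "%g%%" % (100.0 * x / width)
```

With the temperature bar's 32 to 100 °F range and six colors, the anchors land every 13.6 degrees, and a data point halfway across a 960-pixel bar gets the offset "50%".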
From there, we create the rectangle and apply our color and location data to the
gradient. Then we inject our graph into the div and we’re done.
//Creating the actual bar that the chart will be in
chart.append("rect")
.attr("y", (tempHeight/2) - (tempBarHeight/2))
.attr("height", tempBarHeight)
.attr("width", tempWidth)
.attr("fill", "url(#" + titleClass + "-gradient)");
//Creating the linear gradient that will define the color of the rect
chart.append("linearGradient")
.attr("id", (titleClass + "-gradient"))
.attr("gradientUnits", "userSpaceOnUse")
.attr("x1", 0).attr("y1", 0)
.attr("x2", tempWidth).attr("y2", 0) //from left to right
.selectAll("stop")
.data(colorArray)
.enter().append("stop")
.attr("offset", function(d) { return d.offset; })
.attr("stop-color", function(d) { return d.color; });
Similar processes are followed for the rest of the graphs. For the air particulate
graphs, creative use of rectangles and animateMotion elements allows for an interesting
and visually stimulating take on air particulates, albeit at the cost of code clarity.
//Making a g for each datapoint, entering a point for units from 0 - 200
//We make a square for each unit, and give it a random vertical path to
//elsewhere on the bar, to emulate dust floating around
chart.selectAll("g")
.data(data)
.enter()
.append("g").attr("transform", function(d, i)
{return "translate(" + i * sectWidth + ", " +
((ref.height/2) + (ref.barHeight/2) - 1) + ")";})
.selectAll("g")
.data(function(d){return d3.range(d3.min
([airScale(d.data_val), 200])); })
.enter()
.append("rect")
.attr("y", function(d){d = -(d%100) * (ref.barHeight/100); return d})
//Each one goes in a place from 1-100; if >100, we double up, because
//they're .25 opacity
.attr("height", ref.barHeight/100)
.attr("width", sectWidth)
.attr("fill", "black")
.attr("fill-opacity", ".25")
.insert("animateMotion")
.attr("path", function(d){return ref.randomPath(d);})
.attr("dur", function(){return "" +
((10 * Math.random()) + 2) + "s";})
//Give them random times so some move faster or slower. But they all
//take at least 2 secs
.attr("repeatCount", "indefinite");
The graphical sound wave generation, by contrast, is a much simpler process.
// Add the valueline path.
chart.append("path")
.attr("class", "line soundLine")
.attr("fill", "none")
.attr("stroke", "green")
.attr("stroke-width", "1.5")
.attr("d", valueline(data))
.attr("transform", "translate(0, " + (ref.height/2.0) + ")");
After each data graph is generated, it lends its data to the synthesis graph, which
shows the most current data value for each data type.
A large part of the project is being able to access these graphs on the Google
Glass. However, the processing power of the Google Glass can be a severe limitation,
and it is much more efficient to send images to the Glass than to generate them from
raw data in real time. Therefore, we take the already rendered graphs, store them on the
web server, and simply have the Glass access these image files when requested.
However, because D3.js is a JavaScript library, it only runs in the browser. The only way
to get a graph back to the server is therefore to AJAX the generated image up to be
saved for future reference. This causes several problems. There are immediate security
concerns, as the AJAX could be manipulated to send unintended images back to the
server, or even code snippets. It also reduces the real-time experience for Glass users,
as they can only see the graphs from the most recent browser visit. This could be greatly
improved upon in future projects, perhaps by generating the graphs on the server using
software like Node.js.
The AJAXing process is relatively simple: a canvas is generated, the rendered
SVG is drawn into it, base64 encoded, and sent to a saving script on the server.
image.addEventListener('load', function()
{
var canvas = document.createElement("canvas");
canvas.width = image.width;
canvas.height = image.height;
var context = canvas.getContext("2d");
context.clearRect(0, 0, image.width, image.height);
context.drawImage(image, 0, 0);
var a = document.createElement("a");
a.download = "file.png";
a.href = canvas.toDataURL("image/png");
$.ajax({
url: 'php/tempSVG.php',
type: 'POST',
data:{"link":a.href, "title" : tempTitle.replace(" ", "_")},
async: true
});
});
The script for saving the incoming image on the server is extremely simple.
$SVGStr = $_POST['link'];
$SVGStr = str_replace("_" ,"+",$SVGStr);
$image = fopen($SVGStr, 'r');
file_put_contents("images/" . $_POST['title'] . ".png", $image);
fclose($image);
This way, the images can be stored on the server for the Glass to access upon
request.
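For reference, the same decode-and-save step can be expressed in Python. This is a hedged sketch, not the project's script; it assumes the payload is a standard PNG data URL as produced by canvas.toDataURL.

```python
import base64

def save_data_url(data_url, path):
    """Strip the 'data:image/png;base64,' header and write the PNG bytes."""
    header, _, payload = data_url.partition(",")
    if not header.startswith("data:image/png;base64"):
        raise ValueError("unexpected data URL header: " + header)
    with open(path, "wb") as f:
        f.write(base64.b64decode(payload))
```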
Google Glass Access
The Google Glass access method works differently. As explained above, the
Google Glass works with a system of cards that are displayed to the user. However,
besides the interface differences, it is very similar to common android development. I
programmed in an initial page with rudimentary instructions, and the menu is used to
navigate to the rest of the options (“Stop” for ending the application or “Go” for scanning
another QR code). If “Go” is selected, a camera opens for scanning. For the QR scanning
portion of the code we make use of the ZXing library by Google, which takes control of
the camera to scan a barcode or QR code and returns the value of the scanned item. This
is an extremely simple and convenient use of Android’s Intent system, where processes
can be delegated to other applications that are already on the system.
private void scanForQR()
{
Intent objIntent = new Intent("com.google.zxing.client.android.SCAN");
objIntent.putExtra("SCAN_MODE", "QR_CODE_MODE");
startActivityForResult(objIntent, QR_REQUEST);
}
When the data has been returned, it is sent to the onActivityResult method, where
the data is retrieved and the actual contact to the server can begin.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent
data) {
if( requestCode == QR_REQUEST && resultCode == RESULT_OK) {
String result = data.getStringExtra("SCAN_RESULT");
System.out.println("SCAN_RESULT:" + result);
if(mBinderService != null)
{
this.contactServer(result);
}
else
{System.out.println("CRITICAL ERROR: BINDER SERVICE WAS
NULL");}
}
super.onActivityResult(requestCode, resultCode, data);
}
When everything is verified as correct, we proceed to the contactServer method,
which sends the data to the server, to be replied to by the listening PHP script.
private void contactServer(String result)
{
System.out.println("Gonna connect to that sweet, sweet server");
String hostName = "147.253.70.159"; //"galactus.stetson.edu";
String portNumber = "443";
new ServerTalk().execute((Object) hostName, (Object) portNumber,
(Object) this, (Object)result);
}
This process is surprisingly simple, generating a new AsyncTask to communicate
with the server. An AsyncTask is used because Android no longer permits synchronous
network access on the main thread. This caused some problems during development,
as slow internet connections would leave the user waiting and confused about whether
or not the app was functioning: the image is displayed on the main card, so all
scanning, server communication, and processing must be done before exiting the
initial menu of the card. The AsyncTask requires several methods to be implemented,
including a doInBackground method and an onPostExecute method, so the program
knows what to do and how to finish.
class ServerTalk extends AsyncTask<Object, Void, String>
{
protected String doInBackground(Object...serverData)
{
String fromServer = "";
LiveCardMenuActivity lcma = (LiveCardMenuActivity)serverData[2];
String result = (String)serverData[3];
try {
Socket kkSocket = new Socket((String)serverData[0],
Integer.parseInt((String)serverData[1]));
PrintWriter out = new PrintWriter(kkSocket.getOutputStream(),
true);
out.println(result); //send the QR code's value to the server
BufferedReader in = new BufferedReader(
new InputStreamReader(kkSocket.getInputStream()));
String input;
while ((input = in.readLine()) != null) {
fromServer += input;
}
kkSocket.close();
} catch (UnknownHostException e) {
System.err.println("Don't know about host " + serverData[0]);
} catch (IOException e) {
System.err.println("Couldn't get I/O for the connection to " +
serverData[0] + ", port #" + serverData[1]);
}
lcma.canFinish(fromServer);
return "done";
}
protected void onPostExecute(String result)
{
System.out.println("THIS IS ON THE POST EXECUTE");
System.out.println("Result: " + result);
}
}
As can be seen above, a socket is made to communicate with the given server,
and the data from the QR code is sent. The code on the Glass then listens for an
incoming base64 encoded message. When one is received and no errors are thrown, it
calls a method in the LiveCardMenuActivity (where the AsyncTask is generated from)
with the retrieved information. This method conveys the information to the view of the
interface, in this case the mBinderService.
public void canFinish(String[] messages)
{
if(mBinderService != null)
{
if(Arrays.asList(messages).contains(null))
{
System.out.println("In CanFinish, message was NULL");}
else
{
chart1 = messages[0];
if(!messages[1].equals("no")) //use equals; != compares references
{
extras = true;
chart2 = messages[1];
chart3 = messages[2];
chart4 = messages[3];
}
else
{
extras = false;
}
}
}
mDoFinishWait = false;
mRequestScroller = true;
performActionsIfConnected();
}
After the information has been processed and stored, we create an Intent for the
ChartScrollerActivity, the activity that generates the CardScroller containing the cards
with the graphs on them. We create the Intent, pass the chart data in as strings (they're
still base64 encoded at this point), and start the activity.
if (mDoFinish && !mDoFinishWait)
{
System.out.println("performing action to finish the menu (nono unless
already got data back)");
mBinderService = null;
unbindService(mConnection);
if(mRequestScroller) {
Intent i = new Intent(getApplicationContext(),
ChartScrollerActivity.class);
i.putExtra("mainChart", chart1);
if (extras) {
i.putExtra("extraChart1", chart2);
i.putExtra("extraChart2", chart3);
i.putExtra("extraChart3", chart4);
i.putExtra("extras", "true");
} else {
i.putExtra("extras", "false");
}
startActivity(i);
mRequestScroller = false;
}
finish();
}
In the ChartScrollerActivity, the strings are decoded and formatted into bitmaps.
for(int i = 0; i < chartSourceArray.length; i++) {
try {
byte[] bob = Base64.decode(chartSourceArray[i], Base64.DEFAULT);
Bitmap bitmap = BitmapFactory.decodeByteArray(bob, 0, bob.length);
if (bitmap != null) {
chartArray[i] = bitmap;
} else {
System.out.println("Daaang, dat bitmap was null doe");
}
} catch (IllegalArgumentException e) {
System.out.println("Base64 had an issue: " + e);
} catch (NullPointerException e) {
System.out.println("Null Pointer: " + e);
}
}
We then use a custom XML layout embedded into Google Glass's cards (as
recommended by Google)[40]. This layout is a simple ImageView, which contains
an image and some formatting for the card it will appear on. The received images are
then placed into the view of the card when the view is requested.
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View vw = mCards.get(position).getView(convertView, parent);
ImageView imgVw =
(ImageView)vw.findViewById(R.id.main_chart_image_view_id);
imgVw.setImageBitmap(chartArray[position]);
return vw;
}
As for the script on the server, the implementation is rather simple. It listens for
information from the Glass and replies with the image file selected by the incoming
data.
$socket = @socket_create_listen("443");
if(!$socket)
{
print "Failed to create socket!\n";
exit;
}
while(true)
{
$input = "http://i2.kym-cdn.com/entries/icons/facebook/000/000/091/Problem.jpg";
$client = socket_accept($socket);
$request = socket_read($client, 2048);
echo($request);
$linkString = "http://website.com/directory/to/images/";
$typeString = "";
$chartString = "";
$typeNumber = round($request, -1)/10;
switch ($typeNumber) {
case 1:
$typeString = "Temperature";
break;
case 2:
$typeString = "Air_Particulates";
break;
case 3:
$typeString = "Ambient_Noise";
break;
case 4:
$typeString = "Light";
break;
case 5:
$typeString = "Soil_Moisture";
break;
case 6:
$typeString = "Synthesis";
break;
}
$chartNumber = $request % 10;
switch ($chartNumber) {
case 0:
$chartString = ".png";
break;
case 1:
$chartString = "_lineComp.png";
break;
case 2:
$chartString = "_minMaxAve.png";
break;
case 3:
$chartString = "_changeOverTime.png";
break;
}
$input = $linkString . $typeString . $chartString;
echo("accepted client");
print "Serving image\n";
$contents = file_get_contents($input);
$contents = base64_encode($contents);
$output = $contents;
file_put_contents("this.out", $output);
socket_write($client, $output);
socket_close ($client);
}
socket_close ($socket);
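The two switch statements reduce to integer division and modulo: the tens digit of the request selects the data type and the ones digit selects the chart variant. A compact Python equivalent (illustrative only):

```python
TYPES = {1: "Temperature", 2: "Air_Particulates", 3: "Ambient_Noise",
         4: "Light", 5: "Soil_Moisture", 6: "Synthesis"}
CHARTS = {0: ".png", 1: "_lineComp.png", 2: "_minMaxAve.png",
          3: "_changeOverTime.png"}

def request_to_filename(request):
    """Decode a two-digit request: tens digit = data type, ones = chart."""
    return TYPES[request // 10] + CHARTS[request % 10]
```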
FURTHER WORK
There are many advances that could be made to this project. As this project uses
the D3.js library for graph generation, graphs must be AJAXed back to the server to be
stored for the Google Glass to access. A better system would generate the graphs on
the server every time new data is stored, giving the website faster load times and
keeping up-to-date data always available to the Google Glass interface.
There were also multiple SVG browser issues that came into play. One that is
prominently noticeable is the lack of animateMotion animations in the dust bar chart in
Internet Explorer and on Android browsers. This is due to embedded mpaths as
opposed to path references in the animateMotion attributes, among other things. A fully
supported browser experience would make the system smoother.
There were also some issues converting SVGs to PNGs and displaying that
information on the Google Glass. The Glass's screen is not very large, and the
recommended card creation involves embedding custom XML onto a preset layout,
which allows for even less space. Better formatting of the created graphs and their
placement on the Google Glass cards would make for a better user experience.
Google recently announced a pause in the development of Google Glass, which
may be due to its waning popularity. One difficulty encountered during this project was
users' misunderstanding of the UI when testing the app. When the app was on display,
it was not uncommon for users to accidentally begin taking pictures or video, which
were uploaded to our Google Glass account. This difficulty in comprehending the UI
could be a focus for a simpler experience in later projects.
Animation was also difficult to format on the Google Glass, and an animated dust
chart would go a long way toward generating user interest in the data. Future projects
may focus on animating the bars that are sent to the Google Glass.
Because of the recent hiatus of Google Glass's production, a similar project
could be attempted with another AR system, like Microsoft's upcoming HoloLens[39].
One could experiment with placing the data in the environment, allowing it to mesh
more naturally with the reality the user is experiencing and to respond as though the
data were truly in the real world.
One could experiment with many different types of data, finding what is the most
useful and merges the best with an augmented reality display. One could also display
the data in different mediums, ambient noise being conveyed as actual noise to the
user, or air particulates bouncing along the screen in front of the user to further integrate
the presentation into their reality.
BIBLIOGRAPHY
[1] Damala, A., Cubaud, P., Bationo, A., Houlier, P. and Marchal, I. (2008). Bridging the gap between the
digital and the physical: design and evaluation of a mobile augmented reality guide for the museum visit.
ACM Digital Interactive Media in Entertainment & Arts. pp.120--127.
[2]Havlik, D., Schade, S., Sabeur, Z., Mazzetti, P., Watson, K., Berre, A. and Mon, J. (2011). From sensor
to observation web with environmental enablers in the future internet. Sensors, 11(4), pp.3874--3907.
[3]Olsson, T. and Salo, M. (2012). Narratives of satisfying and unsatisfying experiences of current mobile
augmented reality applications. Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems pp.2779--2788.
[4]Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., Picard, R. and Pentland, A. (1997).
Augmented reality through wearable computing. Presence: Teleoperators and Virtual Environments, 6(4),
pp.386--398.
[5]Sutherland, I. (1968). A head-mounted three dimensional display. AFIPS '68 (Fall, part I) pp.757--764.
[6]Sutherland, I. (1965). The ultimate display. Multimedia: From Wagner to virtual reality.
[7]Eejournal.com, (2014). Augmented Reality. [online] Available at:
http://www.eejournal.com/archives/articles/20140401-augmented [Accessed 3 Oct. 2014].
[8]Feiner, S., MacIntyre, B., Hollerer, T. and Webster, A. (1997). A touring machine: Prototyping 3D
mobile augmented reality systems for exploring the urban environment. Personal Technologies, 1(4),
pp.208--217.
[9] National Intelligence Council. Disruptive Technologies Global Trends 2025. Six Technologies
with Potential Impacts on US Interests Out to 2025. 2008. Available online:
http://www.fas.org/irp/nic/disruptive.pdf (accessed on 31 October 2012).
[10]WIRED, (2013). In the Programmable World, All Our Objects Will Act as One | WIRED. [online]
Available at: http://www.wired.com/2013/05/internet-of-things-2/all/ [Accessed 3 Oct. 2014].
[11]Bosch ConnectedWorld Blog, (2014). The Internet of Things – new infographics. [online] Available at:
http://blog.bosch-si.com/categories/internetofthings/2013/01/the-internet-of-things-new-infographics/
[Accessed 3 Oct. 2014].
[12] Want, R. (2006). An introduction to RFID technology. Pervasive Computing, IEEE, 5(1), pp.25--33.
[13]Welbourne, E., Battle, L., Cole, G., Gould, K., Rector, K., Raymer, S., Balazinska, M. and Borriello, G.
(2009). Building the internet of things using RFID: the RFID ecosystem experience. Internet Computing,
IEEE, 13(3), pp.48--55.
[14]Support.google.com, (2014). Tech specs - Google Glass Help. [online] Available at:
https://support.google.com/glass/answer/3064128?hl=en&ref_topic=3063354 [Accessed 3 Oct. 2014].
[15]Glauser, W. (2013). Doctors among early adopters of Google Glass. CMAJ. [online] Available at:
http://www.cmaj.ca/content/early/2013/09/30/cmaj.109-4607.short [Accessed 3 Oct. 2014].
[16]Fraunhofer Institute for Integrated Circuits, (2014). 20140827_BS_Shore_Google_Glas. [online]
Available at: http://www.iis.fraunhofer.de/en/pr/2014/20140827_BS_Shore_Google_Glas.html [Accessed
3 Oct. 2014].
[17]Scheffel, J. and Kockesen, G. (n.d.). Wearable Web Technology: Google Glass and the Mirror API.
MSc Media Technology
[18]Miller, P. (2012). Project Glass and the epic history of wearable computers. [online] The Verge.
Available at: http://www.theverge.com/2012/6/26/2986317/google-project-glass-wearable-computers-
disappoint-me [Accessed 3 Oct. 2014].
[19] Immunochromatographic Diagnostic Test Analysis Using Google Glass. ACS Nano.
http://pubs.acs.org/doi/pdf/10.1021/nn500614k
[20] "What's Inside Google Glass?" Google Glass Teardown. Catwig.com, n.d. Web.
http://www.catwig.com/google-glass-teardown/ 24 Nov. 2014.
[21] "Zxing." GitHub. Google, n.d. Web. https://github.com/zxing/zxing 24 Nov. 2014.
[22] "Application Fundamentals." Android Developers. Google, n.d. Web.
http://developer.android.com/guide/components/fundamentals.html 24 Nov. 2014.
[23] "Mirror API." Google Developers. Google, n.d. Web.
https://developers.google.com/glass/develop/mirror/index 24 Nov. 2014.
[24] "Glass Development Kit." Google Developers. Google, n.d. Web.
https://developers.google.com/glass/develop/gdk/ 24 Nov. 2014.
[25] "Introduction to Android." Android Developers. Google, n.d. Web.
http://developer.android.com/guide/index.html 23 Nov. 2014.
[26] "JavaScript InfoVis Toolkit." JavaScript InfoVis Toolkit. InfoVis, n.d. Web. http://philogb.github.io/jit/
24 Nov. 2014.
[27] "Arduino Uno." Arduino. Arduino, n.d. Web. 23 Nov. 2014.
<http://arduino.cc/en/Main/arduinoBoardUno>.
[28] "TMP36 - Analog Temperature Sensor." Adafruit. Adafruit, n.d. Web. 24 Nov. 2014.
<http://www.adafruit.com/products/165>.
[29] "Water Level Indicator." Electroschematics. n.d. Web. 24 Nov. 2014.
<http://www.electroschematics.com/9964/arduino-water-level-indicator-controller/>.
[30] "Grove Soil Moisture Sensor." Grove. Grove, n.d. Web.
<http://www.seeedstudio.com/wiki/Grove_-_Moisture_Sensor>.
[31] "Analog Light Sensor." Adafruit Light Sensor. Adafruit, n.d. Web. 24 Nov. 2014.
<http://www.adafruit.com/product/1384>.
[32] "Triple-Axis Accelerometer." Adafruit Shop. Adafruit, n.d. Web. 24 Nov. 2014.
<http://www.adafruit.com/products/2019>.
[33] "Grove - Sound Sensor." Grove Sound Sensor. Robot Mesh, n.d. Web. 24 Nov. 2014.
<http://www.robotmesh.com/grove-sound-sensor?gclid=CjwKEAjwnNqgBRDdgOitrZPj6yYSJACM86tDXvEQtPPlfxSe-COposOsPSAwx-IbdF-_bVFkAeutTBoCfMbw_wcB>.
[34] "Grove - Dust Sensor." Seeed. Seeed, n.d. Web. 24 Nov. 2014.
<http://www.seeedstudio.com/depot/grove-dust-sensor-p-1050.html?cPath=25_27>.
[35] "Bi-directional Logic Level Converter." Adafruit Logic Level Converter. Adafruit, n.d. Web. 24 Nov.
2014. <http://www.adafruit.com/products/757>.
[36] "Broadcom BCM2835 SoC Has the Most Powerful Mobile GPU in the World?" Internet Archive:
Wayback Machine. Grand Max, n.d. Web. 24 Nov. 2014.
<https://web.archive.org/web/20120413184701/http://www.grandmax.net/2012/01/broadcom-bcm2835-
soc-has-powerful.html>.
[37] "D3.js - Data-Driven Documents." D3.js - Data-Driven Documents. Web. 21 Apr. 2015.
<http://d3js.org/>.
[38] McCauley, Mike. "VirtualWire." Airspayce. Mike McCauley, 1 Jan. 2008. Web. 21 Apr. 2015.
<http://www.airspayce.com/mikem/arduino/VirtualWire.pdf>.
[39] "Microsoft HoloLens." Microsoft HoloLens. Microsoft. Web. 21 Apr. 2015.
<https://www.microsoft.com/microsoft-hololens/en-us>.
[40] "CardBuilder." Google Developers. Google. Web. 22 Apr. 2015.
[41] Tufte, Edward R. The Visual Display of Quantitative Information. 2nd ed. Cheshire: Graphics, 1983.
Print.