
Using Technology Developed for Autonomous Cars to Help Navigate Blind People

Manuel Martinez, Alina Roitberg, Daniel Koester, Boris Schauerte, Rainer Stiefelhagen

Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics
76131 Karlsruhe, Germany
http://cvhci.anthropomatik.kit.edu/

Abstract

Autonomous driving is currently a very active research area, with virtually all automotive manufacturers competing to bring the first autonomous car to the market. This race leads to billions of dollars being invested in the development of novel sensors, processing platforms, and algorithms. In this paper, we explore the synergies between the challenges in self-driving technology and the development of navigation aids for blind people. We aim to leverage recently emerged methods for self-driving cars and use them to develop assistive technology for the visually impaired. In particular, we focus on the task of perceiving the environment in real time from cameras. First, we review current developments in embedded platforms for real-time computation as well as current algorithms for image processing, obstacle segmentation and classification. Then, as a proof of concept, we build an obstacle avoidance system for blind people that is based on a hardware platform used in the automotive industry. To perceive the environment, we adapt an implementation of the stixels algorithm designed for self-driving cars. We discuss the challenges and modifications required for such an application domain transfer. Finally, to show its usability in practice, we conduct and evaluate a user study with six blindfolded people.

1. Introduction

Humans rely strongly on vision, our primary sense for perceiving our surroundings. Our independence in daily life is closely connected to the ability to explore new environments and detect obstacles in a safe way. Navigation in an unknown setting is therefore a very difficult task for visually impaired people, often limiting their independence.

The World Health Organization estimates that around 285 million people worldwide are visually impaired, with around 39 million diagnosed as blind [2]. Especially in developed countries, the demand for assistive technologies for the blind is growing due to the demographic shift towards an elderly population, the group most susceptible to low vision. Still, the market for vision-based navigation aids for the visually impaired remains small.

(a) Tartan Racing Car (2007) (b) Google Autonomous Car (2015) (c) Stixels for Autonomous Cars (d) Stixels for Blind Assistance
Figure 1: High investment in autonomous cars promotes the rapid development of new technologies, some of which can be transferred to assistive systems for the visually impaired. 1a: Tartan Racing, winner of the 2007 DARPA Challenge. 1b: A 100% autonomous car developed in 2014 by Google. 1c: Stixels computed on GPUs for autonomous driving [12]. 1d: Our proof of concept: stixel-based obstacle detection re-applied to assist visually impaired people.

In contrast, strong interest in autonomous vehicles has accelerated progress in new technologies, creating a very active research field. Problem statements in this area are often related to computer vision, as understanding what is going on outside and handling uncertain situations even under difficult weather conditions is crucial for traffic safety. This is very similar to the challenges visually impaired people face in their everyday life. The booming field of autonomous driving can therefore also lead to progress in the much smaller domain of assistive technologies for the blind. Bringing these applications together and discussing how the development of new aids for visually impaired people can benefit from the much larger automotive industry is the main topic of this work.

This paper is organized in two parts. First, we discuss the feasibility of transferring technologies extensively used for navigation in Advanced Driver Assistance Systems (ADAS) to assist visually impaired people in outdoor and indoor navigation. We give an overview of state-of-the-art ADAS technology and highlight the areas that can be re-used to create assistive technology.

In the second part, we demonstrate the feasibility of such a technology transfer in practice. We present a prototype obstacle avoidance system for blind people and test its effectiveness in a user study. Technology from the automotive sector plays an important role in both the software and the hardware design. We adapt a stixel-based obstacle detection method from the automotive industry as an aid for the navigation of visually impaired people. The input of a wearable depth sensor is processed with an extended version of the stixel computation algorithm. An audio interface warns the user of detected obstacles, indicating both their distance and size. Our approach mainly targets mid-range obstacle detection, since we see the method as an addition to the classical white cane; still, it is able to detect near-range obstacles.
Our user study and benchmark of the presented system show that automotive-inspired technology can indeed be applied to aid the visually impaired.

2. Synergies between Autonomous Cars and Assistive Technology for the Blind

Autonomous driving technology has many facets, but in all cases the car must perceive the environment using a set of sensors; the information from the sensors must then be processed to understand the environment using specialized algorithms; and, finally, a control system must decide on the most appropriate course of action in the current situation.
This control loop (perception - understanding - action) must satisfy several properties: it must be executed in real time, handle previously unseen environments, be robust under varying environmental conditions, and, last but not least, be safe for both the users and the pedestrians around them.
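As a rough illustration of this loop, the sketch below shows how the same perception-understanding-action structure maps onto a navigation aid; all class and method names are illustrative placeholders, not part of any existing system, and the feedback stage stands in for the control stage of the car.

```python
# Minimal sketch of the perception - understanding - action loop described above.
# The sensor, scene_model and feedback objects are assumed to exist; their names
# and interfaces are illustrative only.
import time

class NavigationAid:
    def __init__(self, sensor, scene_model, feedback):
        self.sensor = sensor            # e.g. a wearable depth camera
        self.scene_model = scene_model  # e.g. stixel-based obstacle segmentation
        self.feedback = feedback        # e.g. an audio interface instead of car controls

    def run(self, target_hz=10.0):
        period = 1.0 / target_hz
        while True:
            start = time.time()
            frame = self.sensor.read()                   # perception
            obstacles = self.scene_model.update(frame)   # understanding
            self.feedback.render(obstacles)              # action: warn the user
            # Keep the loop close to real time, as required for safety.
            time.sleep(max(0.0, period - (time.time() - start)))
```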

We can see many similarities between the control loops of self-driving cars and navigation aids for the visually impaired. To help visually impaired people navigate, we must also perceive the environment using a set of sensors and understand the environment using algorithms. However, instead of just controlling the car, navigation aids must communicate with the user to suggest a course of action or alert them about an impending obstacle.
Although the two tasks do not have identical goals, researchers of navigation aids for the blind can benefit from the impressive developments achieved in self-driving cars.

Figure 2: Lidars and cameras are the main sensors used to map the road ahead of the car. Left: Velodyne HDL-64E lidar, which costs ca. $75,000. Right: Volvo's customized windshield camera array. Sources: Velodyne and Volvo.

Here we present an overview of the current developments in sensors, algorithms and processing platforms that can be applied to navigational aids for the visually impaired.

2.1. Sensors

The main goal of sensors is to perceive the environment, and thus they are a critical part of the system: we simply cannot avoid obstacles that we cannot sense.

There is no sensor modality that is capable, by itself, of perceiving all possible challenges in all environments; therefore, a self-driving car must combine sensors from multiple modalities. Ultrasound sensors are popular because they are robust and inexpensive, but their low range limits their usefulness to aiding in parking and acting as a last-resort safety feature. Radars, lidars and cameras, on the other hand, are used for long-range sensing. Radars provide only limited spatial information, so they are mainly used as a safety feature; thus, lidars and cameras are the two main modalities used to map the road (see Figure 2).

Lidars (a portmanteau of light and radar) are sensors that use laser range sensing to create a 3D map of the environment; in cars, they are usually installed on the roof. Lidars were popular in the initial stages of research, and most of the participants in the DARPA Grand Challenge competitions that took place between 2004 and 2007 used them, but they have since fallen out of favor. This is mainly due to two factors: first, lidars are expensive, with hardware costs in the range of tens of thousands of dollars, and second, the technology behind lidars is sensitive to environmental conditions and has trouble recognizing the environment in case of rain or snow [23]. Waymo (formerly a part of Google) still invests heavily in lidar technology, and has vowed to reduce hardware costs by 90% [3].

(a) Nvidia Drive PX Xavier (b) Mobileye EyeQ2 (c) Nvidia Jetson TX1 (d) Nvidia Jetson TX2 (e) Myriad
Figure 3: Overview of embedded platforms used in the automotive industry. From left to right: Nvidia Drive PX Xavier; the PCB and camera module from a Hyundai Lane Guidance system, which is based on the EyeQ2 chip from Mobileye; Nvidia Jetson TX1 (used in our proof of concept); Nvidia Jetson TX2, which is an upgrade over the TX1; the Fathom inference engine in a USB stick form factor, powered by the Myriad 2 MA2450 chip. Sources: Nvidia, Binarysequence and Movidius.

Self-driving technology has migrated slowly but steadily towards using cameras as the main sensors for environment modeling and understanding. As of 2017, camera-based systems dominate the research being published in the field. Cameras are relatively inexpensive, but processing images is significantly more complex than using lidars or radars, which provide 3D world models directly. The use of cameras as sensors for self-driving cars has become viable due to recent advances in large-scale machine learning, as well as the development of new hardware that is able to execute image processing algorithms in an embedded form factor.
There is high potential in integrating the sensor modalities used for self-driving cars into navigational assistive systems for the visually impaired: ultrasound sensors [24], lidars [16] and cameras [7, 19].

2.2. Platforms

Image processing algorithms are very demanding in terms of processing power compared to almost any other modality. A Full-HD color camera contains approximately 2 megapixels and produces images at 60 frames per second, generating 360 million values per second; if the camera is HDR (High Dynamic Range), this translates into 720 megabytes per second. This is a staggering amount of data to transfer and process in real time, and autonomous cars use many cameras to cover the whole 360° environment.
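These figures follow from simple arithmetic; a minimal check, assuming that HDR here means 16 bits per value:

```python
# Back-of-the-envelope check of the data rate quoted above for a single Full-HD HDR camera.
megapixels = 2e6        # the text rounds Full HD (1920x1080 ~ 2.07 MP) down to 2 MP
fps = 60
channels = 3            # RGB
bytes_per_value = 2     # assumption: HDR = 16 bits per value

values_per_second = megapixels * fps * channels          # 360 million values per second
bytes_per_second = values_per_second * bytes_per_value   # 720 MB per second

print(f"{values_per_second / 1e6:.0f} million values/s, {bytes_per_second / 1e6:.0f} MB/s")
```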

Most CPUs are not capable of processing this amount of data sequentially, and thus specific instruction sets were designed to process multiple pixels simultaneously, like MMX (MultiMedia eXtensions), introduced in 1997 by Intel. Even with those instruction sets, most CPUs struggle with image processing. This means that a very large amount of power must be used to process images, which hinders their integration into a self-driving car (or a mobility aid for the blind).

The current trend is to leverage the fact that most image processing algorithms can be parallelized easily. By using GPUs (Graphics Processing Units), it is common to achieve a performance/power ratio around 10 times better than when using CPUs [20]. Nvidia, the leading GPU manufacturer, offers a family of platforms named Drive PX that are designed specifically for self-driving cars, are compact, and use a small amount of power (see Figure 3a).
Notably, the same processors used in the Drive PX family are also made available in the form of compact modules named Jetson (see Figure 3c). This way, the processors used in self-driving cars are also available for general use.

Still, Nvidia's embedded platforms are relatively large and power hungry because they are the result of scaling down an architecture originally designed for desktop computers. Instead, several companies are building chips designed from scratch for embedded applications; we would like to highlight Mobileye and the EyeQ platform (see Figure 3b) as well as Movidius and the Myriad platform (see Figure 3e). Details about Mobileye's solutions are scarce, but their EyeQ platforms seem to be designed exclusively for autonomous driving. On the other hand, Movidius technology is well known [14] and has been tested in many applications. Both companies have recently been acquired by Intel, and thus at the moment it is still not clear whether Intel will support the development of assistive technology for the visually impaired.

2.3. Algorithms

Many algorithms developed for self-driving cars can also be used as components of a navigation aid for the visually impaired. Both tasks present similar challenges for camera calibration, image processing, scene understanding, environment mapping, obstacle recognition, planning, localization and visual odometry, among others. A comprehensive review of the recent developments for all those challenges falls outside the scope of this paper; instead, we would like to highlight projects that deliver publicly available code that can easily be adapted for use in assistive navigation tools.

We start with a stereo matching implementation optimized for the Nvidia embedded platforms [11]. This work implements the popular Semi-Global Matching algorithm [13] and achieves 42 frames per second for an image size of 640 × 480 on a Jetson TX1 module. This implementation allows us to obtain a depth field from the images, and thus a better spatial perception of the scene.
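The GPU implementation of [11] is tailored to the Jetson; as a rough illustration of the same idea, the sketch below uses OpenCV's CPU StereoSGBM matcher, which is also based on semi-global matching [13], on a pair of assumed, already rectified images. It is a stand-in, not the implementation referenced above.

```python
# Hedged sketch: a CPU stand-in for the GPU semi-global matching of [11],
# using OpenCV's StereoSGBM. "left.png" / "right.png" are assumed to be
# rectified grayscale stereo images.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalties scaled by the block area
    P2=32 * 5 * 5,
)
# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```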

Using the depth information, we can obtain a better representation of the environment by using stixels [4]. Stixels offer a mid-level representation of the scene where the image is divided into vertical regions that are segmented based on their disparity (see Figure 1c). Compared to other segmentation algorithms, stixels can be calculated very efficiently [12], and their representation is well suited to distinguishing between obstacles and ground.

The next level of understanding is to find all semantically significant objects in the image. This task is commonly tackled by convolutional neural networks, but the best performing networks are usually not fast enough to be embedded in cars or mobile applications; therefore, there are several attempts at speeding up the task, either by using custom architectures [22] or by using stixels as a basis [18], as seen in Figure 4.

The techniques mentioned so far only provide image analysis, with no actual decision making, which is delegated to a later stage. Instead of splitting the perception and control stages, Nvidia suggests that it might be more efficient to use an end-to-end network that generates control signals for the car based only on the source images, with no intermediate steps [5].

3. Proof of Concept: Obstacle Avoidance for the Visually Impaired based on Stixels

For a long time, navigation aids for the visually impaired have been limited to classical, training-intensive tools, such as white canes or guide dogs [8], and both still remain the modality of choice for most visually impaired people due to their reliability and convenience.
White canes are excellent for detecting ground-level obstacles very close to the user, which is crucial for safety. However, the user does not obtain much information about orientation or the medium- and long-range environment topology, which is an important navigation cue. Therefore, there has been extensive work on developing electronic aids to increase the sensing range [8, 19].

Often, progress in this area is connected to technological developments in other fields. For example, Kulyukin et al. use Radio Frequency Identification (RFID) cues to navigate through novel environments [15]. Smartphone technology, which integrates cameras, powerful processors, and GPS, is enabling much of the recently developed assistive technology, like the navigation app VoiceMaps [9, 25] and the camera-based obstacle avoidance system by Tapu et al. [21].

Figure 4: Top: stixels obtained from a driving scene. Bottom: their semantic segmentation. Source: Stixel-world [18].

To explore the potential of transferring driverless car technology to practical assistive tools for the blind, we adapt one of the prominent depth-based obstacle detection algorithms from the automotive industry for use as an assistive tool for the visually impaired.

3.1. System Setup

As a proof of concept, we built an obstacle avoidance system that must alert a user of obstacles in their immediate proximity. As the input sensor we use an Asus Xtion Pro, a lightweight (225 g) and low-power depth camera that suits our environment well. The Xtion Pro provides disparity images at a resolution of 640 × 480 pixels at 30 frames per second.
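A minimal sketch of the depth acquisition step, assuming OpenCV was built with OpenNI2 support (the deployed system talks to the OpenNI drivers directly, see Figure 5); this is an illustration, not our production code.

```python
# Hedged sketch of reading disparity frames from an OpenNI-compatible depth
# camera such as the Asus Xtion Pro, via OpenCV's OpenNI2 backend.
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)   # open the first OpenNI-compatible device
if not cap.isOpened():
    raise RuntimeError("No OpenNI2 device found")

while True:
    if not cap.grab():
        break
    # The OpenNI backend exposes several channels; here we retrieve the disparity map.
    ok, disparity = cap.retrieve(None, cv2.CAP_OPENNI_DISPARITY_MAP)
    if not ok:
        continue
    cv2.imshow("disparity", disparity)    # stand-in for handing the frame to the stixel code
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
```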

We process the disparity image with the stixels algorithm, using the implementation proposed by Hernandez-Juarez et al. [12], which was developed for automotive applications and is meant to be executed on an Nvidia GPU.

To compute the stixels, we use a Jetson TX1 development kit from Nvidia. The Jetson TX1 is powered by the Tegra T210 processor, which features four 64-bit ARM cores and 256 CUDA cores [1].

We use bone-conducting headphones to provide feedback to the user by generating beeps whose volume and frequency are related to the size and distance of the objects detected by the stixel algorithm.
An overview of the system is shown in Figure 5.

Figure 5: Overview of the developed system, inspired by automotive technology. The Asus Xtion Pro (infrared light source, RGB and IR cameras, microphone; RGB 1280 × 1024, depth 640 × 480 at 30 FPS) streams depth data over USB 3.0 through the OpenNI drivers to the Jetson TX1 board (Ubuntu 16.04, OpenNI), where obstacles are detected with stixels. The audio feedback interface encodes the distance of an obstacle as pitch and its size as volume, assisting the visually impaired user through headphones when navigating an unknown environment.

3.2. Source Algorithm

Our source algorithm is a fast implementation of the stixel segmentation approach designed to run in real time on embedded GPU platforms [12]. The stixel segmentation approach, which uses a single disparity image as its source, divides the field of view into a number of columns, which in turn are divided into segments belonging to one of three classes: ground, obstacle, or sky. For each segment, the algorithm provides its class, its start position, its length, and its disparity value. The basic assumption of the algorithm is that sections parallel to the ground belong to the ground itself, while obstacles are mainly vertical with respect to the ground. Although this view is an oversimplification of the real world, it has been shown to be successful at detecting obstacles and is computationally efficient.

The source code of this approach has been made publicly available, and it is optimized for the Jetson TX1 platform, where it achieves 45.7 frames per second at a resolution of 640 × 480. For details of the algorithm and its implementation, we refer to the original publication [12].
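To make the representation concrete, the sketch below shows the stixel output format (class, start position, length, disparity) together with a deliberately naive per-column labelling. It is not the dynamic-programming algorithm of [12]; the thresholds and the linear ground model are illustrative assumptions only.

```python
# Much-simplified sketch of the stixel representation described above: per column,
# segments labelled ground / obstacle / sky, each with a start row, length and
# representative disparity. NOT the algorithm of [12].
from dataclasses import dataclass
import numpy as np

@dataclass
class Stixel:
    column: int
    cls: str        # "ground", "obstacle" or "sky"
    start_row: int
    length: int
    disparity: float

def naive_stixels(disparity, col_width=8, sky_thresh=1.0, ground_band=5.0):
    """Toy column-wise labelling: near-zero disparity -> sky; rows whose disparity is
    close to a crude expected ground disparity (growing towards the bottom) -> ground;
    everything else -> obstacle."""
    h, w = disparity.shape
    rows = np.arange(h)
    expected_ground = np.maximum(rows - h / 2, 0) * 0.25   # crude linear ground model
    stixels = []
    for c0 in range(0, w, col_width):
        col = disparity[:, c0:c0 + col_width].mean(axis=1)
        labels = np.where(col < sky_thresh, "sky",
                 np.where(np.abs(col - expected_ground) < ground_band, "ground", "obstacle"))
        start = 0
        for r in range(1, h + 1):
            if r == h or labels[r] != labels[start]:
                seg = col[start:r]
                stixels.append(Stixel(c0, str(labels[start]), start, r - start,
                                      float(np.median(seg))))
                start = r
    return stixels
```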

3.2.1 Ground Plane Estimation

From a programming point of view, the stixel classification algorithm has two main steps: first, the height and pitch of the camera with respect to the ground are estimated, and second, the disparity image is segmented into stixels.
The source implementation assumes that the camera is fixed to the car, and thus that the pitch and roll of the camera are zero in all cases. In our application, the roll was not zero because the camera was placed on the chest of a person, but the algorithm is robust enough to deal with those cases.

The main problem we faced is that the original algorithm uses a Hough-based line detector [6] to localize the vanishing point of the image, and uses the geometric relations between the detected lines and the vanishing point to estimate the horizon line and the camera height and pitch. This approach works well in cars (see Figure 1c), but it is not as successful in human environments.

Figure 6: Raw depth output from the Asus Xtion Pro sensor (left) and output of the stixel-based obstacle detection system (right). Red markings indicate the detected obstacles.

Instead, we implemented our own ground plane estimation. We divide the source disparity image into blocks of 4 × 4 pixels. To speed up the algorithm, we randomly select 10% of the blocks in the lower half of the field of view (closer to the user) and calculate the direction of the normal vector of each block. Then we cluster the normal vectors using Expectation-Maximization and find the main component, which we assume corresponds to the normal of the ground. We found this algorithm to be fast and robust, and it enables us to use the GPU-accelerated stixel classification code.
Finally, we combined the ground plane estimation and the stixel algorithm to find obstacles in depth images, as seen in Figure 6.
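A simplified sketch of this ground plane estimation follows, with scikit-learn's GaussianMixture standing in for the Expectation-Maximization step; the intrinsics, the per-block plane fit and the cluster selection rule are illustrative assumptions, not the exact code running on the Jetson.

```python
# Hedged sketch of the ground-plane estimation described above: per-block surface
# normals on the lower half of the disparity image, clustered with EM to find the
# dominant normal direction.
import numpy as np
from sklearn.mixture import GaussianMixture

F, B = 575.0, 0.075       # focal length (px) and baseline (m) of the Xtion Pro
CX, CY = 320.0, 240.0     # assumed principal point

def block_normal(disp_block, u0, v0):
    """Fit a plane to the 3D points of a 4x4 disparity block; return its unit normal."""
    v, u = np.mgrid[v0:v0 + disp_block.shape[0], u0:u0 + disp_block.shape[1]]
    d = disp_block.ravel().astype(np.float64)
    valid = d > 0.5
    if valid.sum() < 3:
        return None
    z = (F * B) / d[valid]                     # back-project: depth from disparity
    x = (u.ravel()[valid] - CX) * z / F
    y = (v.ravel()[valid] - CY) * z / F
    pts = np.column_stack([x, y, z])
    # Normal = direction of least variance of the centred points.
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
    return vt[-1]

def estimate_ground_normal(disparity, sample_frac=0.10, n_clusters=3):
    h, w = disparity.shape
    blocks = [(v, u) for v in range(h // 2, h - 4, 4) for u in range(0, w - 4, 4)]
    idx = np.random.choice(len(blocks), int(sample_frac * len(blocks)), replace=False)
    normals = [block_normal(disparity[v:v + 4, u:u + 4], u, v)
               for v, u in (blocks[i] for i in idx)]
    normals = np.array([n for n in normals if n is not None])
    gmm = GaussianMixture(n_components=n_clusters).fit(normals)
    main = np.bincount(gmm.predict(normals)).argmax()   # largest cluster = assumed ground
    n = gmm.means_[main]
    return n / np.linalg.norm(n)
```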

3.2.2 Obstacle Sonification

There are multiple ways to provide feedback about the perceived obstacles to the user of the system [17]. In this proof of concept, for simplicity, we choose auditory feedback. As a transducer, we use bone-conducting headphones, as they do not obstruct the ears and allow users to hear their environment.
Based on our previous experience, we choose to sonify only the obstacles right in front of the user, in order to minimize confusion and reduce cognitive load.

For each processed frame, we produce a tone for each of the stixels classified as obstacles that lie in the 15° frontal field of view of the user. The volume of each tone corresponds to the size of the stixel: a stixel that covers the whole height of the image produces the loudest tone, whereas a stixel that covers only a single vertical pixel produces a tone at 1/480 of that volume.

Figure 7: Visualization of our obstacle detection system, based on the stixels algorithm developed for autonomous cars. The stixel-based method was adapted for assisting visually impaired people and tested in indoor and outdoor environments. Detected obstacles are marked in red. Top row: indoors. Bottom row: outdoors.

The frequency of the generated tones scales with the disparity of the stixel. Higher frequencies are generally associated with more urgent messages, so we set the frequency of the tone to 10 times the reported disparity. For the Asus Xtion, whose focal length (f) is 575 pixels and whose baseline (b) is 7.5 cm, the disparity (d) as a function of the depth (z) is

d = (b · f) / z.    (1)

Therefore, an obstacle placed at one meter generates a tone of 431 Hz, while the same obstacle placed at four meters generates a tone of 108 Hz.
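The mapping can be checked with a few lines of arithmetic, reproducing the 431 Hz and 108 Hz values quoted above; the volume rule follows the description in the previous paragraph.

```python
# Worked example of the sonification mapping, using Eq. (1).
F = 575.0      # focal length in pixels
B = 7.5        # baseline in cm

def disparity(depth_cm):
    return B * F / depth_cm            # Eq. (1): d = b*f / z

def tone_frequency(depth_cm):
    return 10.0 * disparity(depth_cm)  # frequency = 10x the reported disparity

def tone_volume(stixel_height_px, image_height_px=480):
    return stixel_height_px / image_height_px  # full-height stixel -> volume 1.0

print(round(tone_frequency(100)))   # obstacle at 1 m -> ~431 Hz
print(round(tone_frequency(400)))   # obstacle at 4 m -> ~108 Hz
```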

3.3. Performance Evaluation

We measured the computational performance of the Jetson TX1 platform as well as its power usage when running the stixels code. As a baseline, we also measured the same algorithm running on a notebook equipped with a Core i7 5500U CPU and a GT840M GPU.

To measure the performance, we processed 500 disparity images and report the average processing time as well as the standard deviation (see Table 1). We measured independently the ground estimation part of the algorithm, which runs on the CPU, and the stixel calculation, which runs on the GPU. The notebook is 5.6 times faster for the ground plane estimation and 4.0 times faster for the stixel calculation; more importantly, the standard deviation is more than 15 times lower on the notebook than on the Jetson TX1.

           Jetson TX1         i7 5500U + GT840M
Ground     41.1 (±8.2) ms     7.28 (±0.46) ms
Stixels    35.2 (±13.9) ms    8.70 (±0.60) ms

Table 1: Performance of our ground plane estimation algorithm and of the stixels algorithm on the Jetson TX1 platform, compared to a notebook. Standard deviation in brackets.

Regarding power usage, we monitored both systems while idle, while streaming from the depth camera (without performing any processing), and while running the stixels code (see Table 2). In both cases WiFi was enabled, and the notebook screen brightness was set to its minimum level. We would like to highlight that the Jetson TX1 consumes almost 4 times less power than the notebook when idle, and almost 3 times less when processing stixels.
The numbers we provide include the power consumed by the Asus Xtion Pro depth camera, incidentally showing that this depth camera is significantly more power friendly than the Kinect v2, which uses 16 watts.
These results show that, although the embedded platform is significantly less powerful than a common notebook, it provides sufficient performance for the task (allowing more than 10 frames per second) while consuming little power. For our use case, the Jetson TX1 was powered by a compact and portable 75 watt-hour battery, so the endurance of the system as we tested it was around 9 hours.
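Both figures can be verified directly from the measurements in Tables 1 and 2:

```python
# Quick sanity check of the frame-rate and battery-life figures quoted above,
# using the Jetson TX1 numbers from Tables 1 and 2.
ground_ms, stixels_ms = 41.1, 35.2         # mean processing times on the Jetson TX1
fps = 1000.0 / (ground_ms + stixels_ms)    # sequential worst case
print(f"{fps:.1f} frames per second")      # ~13.1, i.e. more than 10 FPS

battery_wh, power_w = 75.0, 8.3            # battery capacity and power while running stixels
print(f"{battery_wh / power_w:.1f} hours") # ~9.0 hours of endurance
```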

            Jetson TX1    i7 5500U + GT840M
Idle        3.1 W         12.1 W
Streaming   6.1 W         13.2 W
Stixels     8.3 W         22.5 W

Table 2: Power consumption of the Jetson TX1 platform, compared to a notebook.

Figure 8: Blindfolded user equipped with the developed prototype system. The Jetson TX1 board used for the GPU-based stixel computation is carried inside the backpack, and the Asus Xtion Pro sensor used for depth acquisition is attached to its strap at the front. Audio feedback about the environment is provided via headphones and helps the user avoid detected obstacles.

3.4. Conducted User Study

The conducted experiment has two main objectives. First, we investigate the general potential of driverless car technology for assisting people with impaired vision. To do so, we implement a prototype of a novel assistive navigation system for the blind using algorithms from driverless car technology and test its general feasibility for obstacle detection. Second, we objectively evaluate the current stage of the prototype development in a user study.

The study was conducted with six participants (N=6, two female, four male). At this stage of development, we invited participants without visual impairment, who were artificially blindfolded with a mask.

At the beginning, the subject put on the mask and was equipped with the backpack containing the computing board. The Asus Xtion Pro sensor was located on the backpack strap, approximately at the height of the subject's shoulder, as shown in Figure 8. The subject received instructions about the audio interface and was given a few minutes to get used to the situation and to try out the system with different obstacles.

Figure 9: Results of the NASA-TLX assessment. The figure displays the average scores (scale from 1 to 100) reported by the subjects after the obstacle avoidance experiment.

The subject's task in the experiment was to walk blindfolded from one end of a long corridor containing multiple obstacles to the other, using only the auditory interface of the system. Everyday obstacles that are indeed a challenge for blind people, such as chairs or a ventilator, were placed in the corridor. The placement was rearranged after the subject put on the mask, so that he or she was unaware of the surrounding obstacles. It should be mentioned that the walls and open doors in the corridor are also problematic, as a blindfolded person easily becomes disoriented, and even walking through an obstacle-free corridor in a straight line is not easy. The system assists the user in walking in a straight line by emitting a louder signal when the sensor's angle shifts towards one of the walls. Examples of the algorithm output in the experiment scenario are shown in Figure 7.

Figure 7 also shows several outdoor examples of the obstacle detector output, where bicycles, cars, bushes and pedestrians are successfully identified as obstacles. The outdoor tests were, however, only proof-of-concept experiments and did not yet include a quantitative user study.

After the task is completed, the user is asked to evaluate the system by answering the standardized NASA-TLX questionnaire [10], giving valuable feedback for further development. The user rates the perceived workload of the task on six subscales: Mental Demand, Physical Demand, Temporal Demand, Own Performance, Effort and Frustration.

The individual assessments of the participants were averaged; the results are shown in Figure 9. A low score on a NASA-TLX scale corresponds to a low reported level of the corresponding aspect (e.g., Mental Demand). An exception is the evaluation of Performance, where a low value corresponds to successful performance and a value of 100 is the lowest possible level of performance. Lower values on each of the scales therefore indicate a higher effectiveness of the system. Mental Demand, Effort and Temporal Demand were very close to the medium value of the scale. Values slightly above the medium were observed for Frustration, and slightly lower values for Performance (as mentioned before, low values indicate high performance on this scale).

A significantly lower rating was reported for Physical Demand. On the one hand, this is not surprising, since the navigation is mentally rather than physically challenging. On the other hand, the good values for Physical Demand indicate that the system, in which the depth sensor and the computing board are carried by the person, is neither intrusive nor too heavy.

One should take into account that the study was conducted with people with normal vision who were artificially blindfolded. The participants are therefore not used to the setting of absent vision, and the high levels of frustration might also be connected to the fact that exploring an environment without vision is, in general, a very hard task. In the future it is therefore crucial to conduct a large-scale study with visually impaired people, who are familiar with these challenges. Nevertheless, the evaluation of the experiment has shown that applying algorithms used for driverless car navigation to human navigation is indeed possible. The user study delivered very valuable feedback, which will help us in further development stages.

4. Conclusion and Discussion

In this work we discussed the technical feasibility of transferring technology developed for autonomous cars into assistive technology for blind and visually impaired people.
We presented an overview of the current developments in the autonomous driving industry, covering, in particular, the sensors, platforms, and algorithms that are used to perceive and analyze the environment surrounding the car. We highlighted the synergy that exists between both fields, and noted how some of the developments made for cars can be used for assistive purposes. As a proof of concept, we built an obstacle avoidance system based on an obstacle detection algorithm designed for cars. Due to the differences between the two settings (i.e., camera position and the surrounding environment), we had to slightly adapt the preprocessing step of the algorithm. Otherwise, the algorithm was successfully used to detect obstacles in both indoor and outdoor environments, which we evaluated in a user study.
A large amount of capital is currently being invested in new technology for autonomous driving, accelerating research progress in this field. We expect that the resulting advances can be highly beneficial for the research on assistive tools that help blind and visually impaired people navigate and improve their quality of life. This work is a proof of concept that research transfer in such a direction is possible.

Acknowledgements

This work has been partially funded by the German Federal Ministry of Education and Research (BMBF) under grant no. 16SV7609 within the TERRAIN project.

References

[1] NVIDIA Jetson embedded systems developer kits, modules, SDKs. http://www.nvidia.com/object/embedded-systems-dev-kits-modules.html. Accessed: 2017-06-30.
[2] World Health Organization: Visual impairment and blindness. http://www.who.int/mediacentre/factsheets/fs282/en/. Accessed: 2017-06-30.
[3] J. Ayre. Waymo shaved 90% off lidar sensor cost. http://goo.gl/BfC6Rw. January 9th, 2017.
[4] H. Badino, U. Franke, and D. Pfeiffer. The stixel world - a compact medium level representation of the 3D-world. In DAGM-Symposium. Springer, 2009.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[6] R. O. Duda and P. E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11–15, 1972.
[7] H. Fernandes, P. Costa, V. Filipe, L. Hadjileontiadis, and J. Barroso. Stereo vision in blind navigation assistance. In World Automation Congress (WAC). IEEE, 2010.
[8] N. A. Giudice and G. E. Legge. Blind navigation and the role of technology. The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, 2008.
[9] L. Hakobyan, J. Lumsden, D. O'Sullivan, and H. Bartlett. Mobile assistive technologies for the visually impaired. Survey of Ophthalmology, 58(6):513–528, 2013.
[10] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology, 52:139–183, 1988.
[11] D. Hernandez-Juarez, A. Chacon, A. Espinosa, D. Vazquez, J. C. Moure, and A. M. Lopez. Embedded real-time stereo estimation via semi-global matching on the GPU. Procedia Computer Science, 80:143–153, 2016.
[12] D. Hernandez-Juarez, A. Espinosa, J. C. Moure, D. Vazquez, and A. M. Lopez. GPU-accelerated real-time stixel computation. In Winter Conference on Applications of Computer Vision, pages 1054–1062. IEEE, 2017.
[13] H. Hirschmuller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 328–341, 2008.
[14] M. H. Ionica and D. Gregg. The Movidius Myriad architecture's potential for scientific computing. IEEE Micro, 35(1):6–14, 2015.
[15] V. Kulyukin, C. Gharpure, J. Nicholson, and S. Pavithran. RFID in robot-assisted indoor navigation for the visually impaired. In Intelligent Robots and Systems (IROS), volume 2. IEEE, 2004.
[16] T.-S. Leung and G. Medioni. Visual navigation aid for the blind in dynamic environments. In Computer Vision and Pattern Recognition Workshops, pages 565–572, 2014.
[17] M. Martinez, A. Constantinescu, B. Schauerte, D. Koester, and R. Stiefelhagen. Cognitive evaluation of haptic and audio feedback in short range navigation tasks. In Proc. 14th Int. Conf. Computers Helping People with Special Needs (ICCHP), 2014.
[18] L. Schneider, M. Cordts, T. Rehfeld, D. Pfeiffer, M. Enzweiler, U. Franke, M. Pollefeys, and S. Roth. Semantic stixels: Depth is not enough. In Intelligent Vehicles Symposium (IV), pages 110–117. IEEE, 2016.
[19] T. Schwarze, M. Lauer, M. Schwaab, M. Romanovas, S. Bohm, and T. Jurgensohn. An intuitive mobility aid for visually impaired people based on stereo vision. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015.
[20] S. Shi, Q. Wang, P. Xu, and X. Chu. Benchmarking state-of-the-art deep learning software tools. arXiv preprint arXiv:1608.07249, 2016.
[21] R. Tapu, B. Mocanu, A. Bursuc, and T. Zaharia. A smartphone-based obstacle detection and classification system for assisting visually impaired people. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 444–451, 2013.
[22] M. Treml, J. Arjona-Medina, T. Unterthiner, R. Durgesh, F. Friedmann, P. Schuberth, A. Mayr, M. Heusel, M. Hofmarcher, M. Widrich, et al. Speeding up semantic segmentation for autonomous driving. NIPSW, 1(7):8, 2016.
[23] Y. Wang, F. Fu, J. Shi, W. Xu, and J. Wang. Efficient moving objects detection by lidar for rain removal. In International Conference on Intelligent Computing, 2016.
[24] D. Whittington, D. Waters, and B. Hoyle. The UltraCane. http://www.ultracane.com. June 2011.
[25] K. Yelamarthi, D. Haas, D. Nielsen, and S. Mothersell. RFID and GPS integrated navigation system for the visually impaired. In Circuits and Systems. IEEE, 2010.