
Designing a humane multimedia interface for the visually impaired

CLAUDE GHAOUI†*, M. MANN† and ENG HUAT NG‡

Whilst graphical user interfaces (GUIs) have gained much popularity in recent years, the need of the visually impaired to use applications in a non-visual environment is great. This paper promotes the provision of interfaces that allow users to access most of the functionality of existing GUIs using speech. This has been achieved by the design of a speech control tool that incorporates speech recognition and synthesis into existing packaged software, such as Teletext, the Internet or a word processor. The tool developed takes the menu structure as a means of demonstrating what is accessible by the use of speech input and output. The tool also provides the facility to dump screen text into the clipboard and read it out loud. Adapting existing GUI applications in this way requires successful integration of speech, which in turn requires a profound understanding of the medium and the development of human engineering techniques.

1. Introduction

Whilst the intention of modern IT is to make things easier for the many, for some people, such as the visually impaired, advances in technology can hinder their lives and progress (Hawkridge et al. 1985). The most basic form of communication is speech, and provided that our counterparts speak the same language, it is possible to understand each other. This has led to the development of hardware and software solutions so that people can speak to their computers and, in turn, their computers can speak back to them (Aldrich and Parkin 1988).

Speech synthesis is by no means a new product, and has gradually advanced since its arrival around the 1930s. Although some of the early synthesized speech was, at first, difficult to understand, it is like any foreign dialect or language: a person can pick it up in time (Bristow 1986). Most of the discussion surrounding synthesized speech nowadays is concerned with making it sound better and easier to understand, and developments are being made in naturally speaking synthesizers. On the open market there are very few applications which use speech, and very few applications concentrating on speech synthesis are available. In general, not much emphasis has been placed on using speech recognition and synthesis outside specialist software. In the past, visually impaired people have been able to use screen-reading software to translate the basic ASCII character interface into speech as a means of communicating with computers. This gave people access to all kinds of information. Now, with the introduction of graphical user interfaces (GUIs), obstacles have been placed in their way, and traditional screen-reading software is of little use to them (Arons 1991, Downton 1991).

EUR. J. ENG. ED., 2001, VOL. 26, NO. 2, 139–149

† School of Computing and Maths Sciences, Liverpool JMU, Liverpool L3 3AF, UK.
* To whom correspondence should be addressed.
‡ E-Commerce Group, AIG-Software International JV Sdn Bhd, G-1, Incubator 2, Technology Park Malaysia, Bukit Jalil, 57000 Kuala Lumpur, Malaysia.

European Journal of Engineering Education ISSN 0343-3797 print/ISSN 1469-5898 online © 2001 Taylor & Francis Ltd
http://www.tandf.co.uk/journals
DOI: 10.1080/03043790110034401


There is a package for speech synthesis . . . There is a package for speech recognition . . . There are several word processors . . . You can use a mouse, a keyboard, a joystick, a touch screen and an eye tracker. However, there has been little work on integrating all of this into one accessible environment. This project incorporates speech recognition and speech synthesis with existing packaged software (e.g. Teletext, the Internet, a word processor) that could easily be bundled together and sold as an accessibility tool for visually impaired users. Examples of what this tool enables its users to do include: type or talk a document into a word processor; have the document spoken back; navigate around applications with the use of speech recognition; listen to applications' text with the aid of speech synthesis; and listen as they type.

2. Background

The term 'visually impaired' covers both the 'blind' and the 'partially sighted'. The most difficult problems for the visually impaired to overcome are reading and writing. The saying 'a picture paints a thousand words' holds true for sighted people (Ghaoui 2000, Ghaoui and Ainsley 2001, Ainsley et al. 2000), but the visually impaired are unable to interpret a picture and rely heavily on words, for which Braille plays an important role (Petrie and Gill 1993).

2.1. Human–computer interface

'The user interface is the main point of contact between the user and computer system; it is the part of the system that the user sees, hears, touches and communicates with' (Dix et al. 1998). Usually, guidelines for the design of a human–computer interface (HCI) assume the users to be fully functioning humans, i.e. able to see and hear. One of the main causes of poor interface design is a lack of understanding of operations and procedures by the analysts and designers (Wang et al. 1997, Lenschow 1998). These design faults result in low productivity, high error rates, long learning times and low user acceptance (Parkin and Aldrich 1989, Dix et al. 1998). The most common human senses used in human–computer interaction are vision and hearing (often referred to as the primary senses).

A well-designed user interface will provide a good match between the user and the computer (Edwards 1995). Most software applications are written with the average person in mind (figure 1); very few are written to cater for special needs (Petrie et al. 1997) (figure 2).

In order for users with special needs to use software, it must be adapted to interface between the user and the application (Edwards 1995) (figure 3).

Certain aspects of a GUI cannot be changed, making access for blind users very difficult. Some aspects of a GUI are inherently visual and difficult to present in a non-visual manner. Most operating systems do not provide the functionality of speech input and/or output. Application software can be written to use the sound card to communicate using speech, but this does not give access to blind users who wish to explore new ideas and the other facilities provided by the operating system. Microsoft have SAPI, the speech application program interface, and SRAPI, the speech recognition application program interface (Amundsen 1996).
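To make the speech-output side concrete, the short sketch below speaks a string through SAPI's text-to-speech automation interface from Python. It is a minimal sketch only: it assumes the later SAPI 5 COM object (SAPI.SpVoice) and the pywin32 package, whereas the tool described in this paper was built against Microsoft Voice and the SAPI of the late 1990s; it illustrates the idea rather than the authors' implementation.

```python
# Minimal sketch: send text to the SAPI text-to-speech engine.
# Assumes SAPI 5 and pywin32; illustrative, not the authors' code.
import win32com.client

def speak(text: str) -> None:
    """Speak a string of screen text through the default synthetic voice."""
    voice = win32com.client.Dispatch("SAPI.SpVoice")
    voice.Speak(text)  # blocks until the utterance finishes

if __name__ == "__main__":
    speak("Microsoft Word")  # e.g. echoing a recognised command back
```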

2.2. Speech

Speech recognition is the process of recognizing human speech as a form of input to a computer (Baber 1993). Every speech recognition system uses four key operations to listen to and understand human speech. These are: (i) word separation: the process of creating discrete portions of human speech; each portion can be as large as a phrase or as small as a single syllable or word part; (ii) vocabulary: the list of speech items that the speech engine can identify; (iii) word matching: the method that the speech system uses to look up a speech part in the system's vocabulary; and (iv) speaker dependence: the degree to which the speech engine is dependent on the vocal tones and speaking patterns of individuals.

Figure 1. User interface for the average person.

Figure 2. User interface which is not suited for people with special needs.

Figure 3. User interfaces adapted for people with special needs.
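Operations (ii) and (iii) above can be illustrated with a toy lookup, sketched below: a fixed vocabulary of speech items and a matching step that maps a spoken phrase onto one of them. The names are invented for illustration; a real engine matches acoustic features rather than normalized strings.

```python
# Toy illustration of a recognition vocabulary and word matching.
# A real speech engine compares acoustic models, not strings.
VOCABULARY = {
    "commands", "menu", "minimize", "maximize", "restore",
    "current window", "left", "right", "up", "down",
    "enter", "select", "read", "stop", "applications",
}

def match_word(spoken: str) -> str | None:
    """Return the vocabulary item the spoken phrase matches, if any."""
    candidate = spoken.strip().lower()
    return candidate if candidate in VOCABULARY else None

print(match_word("Current Window"))  # -> 'current window'
print(match_word("open sesame"))     # -> None (not in vocabulary)
```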

Speech synthesis is very often known as text-to-speech (TTS) and is the process of speech output from a computer using a synthetic voice (Baber 1993). This service provides the ability to convert written text into spoken words. The four common issues that must be addressed when creating a TTS engine are: (i) phonemes: the sound parts that make up words; (ii) voice quality: the quality of a synthetic voice is directly related to the sophistication of the rules that identify and convert text into an audio signal; (iii) TTS synthesis: this method uses calculations of a person's lip and tongue position, the force of breath and other factors to synthesize human speech; and (iv) TTS diphone concatenation: this method of generating speech uses pairs of phonemes to produce each sound.
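The diphone method in (iv) can be sketched as follows: a word's phoneme sequence is covered by overlapping phoneme pairs, and a concatenative engine plays a stored audio fragment for each pair. The phoneme labels below are illustrative only.

```python
# Sketch of diphone coverage: each adjacent phoneme pair is one diphone.
def diphones(phonemes: list[str]) -> list[tuple[str, str]]:
    """Return the ordered phoneme pairs that cover a word."""
    return list(zip(phonemes, phonemes[1:]))

# 'speech' as a rough phoneme sequence:
print(diphones(["s", "p", "iy", "ch"]))  # [('s','p'), ('p','iy'), ('iy','ch')]
```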

2.3. Products review

2.3.1. Talking Teletext

This gives access to the useful stream of news and entertainment constantly broadcast by Teletext. Using a friendly 'speaking' keyboard, one can select any page and hear its contents spoken by a clear conversational voice synthesizer. There is an oral facility for scanning each page line by line, word by word, and spelling out words. The information can be stored on a computer or printed out in plain language or Braille. There are three models available: Talking Teletext, Talking Teletext plus TV Sound and Talking Teletext plus TV Sound with Special Features. It is simple and straightforward: earphones plug in, it connects directly to the television aerial, and it formulates words from letters for speech synthesis.

2.3.2. Virtual reality mouse

The virtual reality mouse is also known as the Tactile Mouse, and provides feedback to the user in the form of touch. Two models exist (Immersion Corporation 1998): (i) Immersion Corporation's FEELit Mouse, with which anything displayed on the screen can be felt as realistic tactile sensations; and (ii) Control Advancements' VRM with Tactile Feedback, which replaces the standard computer mouse and gives users 'force feedback' so that they can feel their way around Windows environments (Control Advancements 1998, Foulke 1991).

2.3.3. Speech software

(1) IBM Simply Speaking Gold: the user can dictate text and commands directly into their favourite Windows applications by simply speaking to the computer. The program recognizes 'discrete' speech (talking with a slight pause between words) and continues to 'learn' and refine the user's unique speech patterns with every use.

(2) Talk->To Plus by Dragon Systems, Inc.: This is voice-activated software for Windows. It enables the user to use voice commands to control Windows operations; for example, they can open a file, switch to another open window, or use one of the Windows accessories.

(3) WinSpeech 3.0N: WinSpeech is a text-to-speech application designed for 32-bit Windows systems. It reads English text and produces speech sound through the audio device in a PC. For assisting low-vision users, audio guidance is available throughout the program for reporting the current status and repeating the dialogue messages.

(4) Clip&Talk 2.0: This adds speech abilities to Windows applications. It is designed to work with other programs, such as word processors, spreadsheet programs, e-mail readers, Web browsers, or any Windows application that can send text to the Clipboard.

3. Design and development

3.1. Tool outline

The speech control tool is a multimedia application for the visually impaired, developed to support learning and teaching (Court 1998, Holmes 1998). It controls speech recognition and speech synthesis, and their integration with some off-the-shelf packages such as Word, Explorer and Hauppauge VT Plus. The tool's minimum requirements are: Pentium P133 personal computer, 12 MB RAM, 5 MB hard disk space, sound card or built-in sound hardware, speakers or headphones, microphone, Windows 95, Microsoft Windows Speech Application Program Interface (SAPI) and the Microsoft Voice speech engine. Optional requirements, which are only needed if the user plans on using the Teletext facilities, are: Hauppauge Television/Teletext Card and Hauppauge VT Plus Teletext software.

There are two levels associated with the speech recognition commands: high and low level. The high level commands are handled by Microsoft Voice, and consist of things like 'Tell me the Time'. The low level commands are handled within the tool, and consist of things like 'Commands', 'Applications', 'Up'. The speech recognition is split in this way to give the programmer greater control over the commands they choose to implement. This has been demonstrated with the use of a feedback mechanism: each low level command is repeated back to the user, as a confirmation, before it is executed.
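The feedback mechanism can be sketched as a small dispatcher, shown below, in which every recognised low level command is spoken back before its handler runs. The speak() helper and the handler names are assumptions for illustration, not the tool's actual code.

```python
# Sketch of low level command handling with spoken confirmation.
# speak() is assumed to be a text-to-speech helper (see section 2.2).
from typing import Callable

def handle_low_level(command: str,
                     handlers: dict[str, Callable[[], None]],
                     speak: Callable[[str], None]) -> None:
    """Echo a recognised low level command, then execute it."""
    handler = handlers.get(command)
    if handler is None:
        speak(f"{command} not recognised")
        return
    speak(command)  # confirmation is spoken before execution
    handler()
```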

3.2. Off-the-shelf packages used

3.2.1. Hauppauge VT Plus

There are numerous ways in which we can interface between different applications, and VT Plus is no exception. The facilities available within VT Plus are Dynamic Data Exchange (DDE) and VT Plus Script.
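A sketch of driving another application over DDE from Python is shown below, using the dde module shipped with pywin32. The service, topic and item names are placeholders: the names actually exposed by Hauppauge VT Plus are not given in this paper.

```python
# Sketch of a DDE conversation with another application (pywin32).
# 'VTPLUS', 'Pages' and 'CurrentPage' are hypothetical names.
import dde

server = dde.CreateServer()
server.Create("SpeechTool")                 # register our end of the link
conversation = dde.CreateConversation(server)
conversation.ConnectTo("VTPLUS", "Pages")   # hypothetical service and topic
text = conversation.Request("CurrentPage")  # hypothetical item: page text
print(text)
server.Destroy()
```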

3.2.2. Microsoft Voice

Microsoft Voice provides a high level speech interface for both speech recognition and text-to-speech. One main drawback of using this facility is that it picks up background noise and may interpret it as commands. The 'not listening' mode means that the user can press a key to talk. One of the items on the pop-up menu is 'What can I say?'. By clicking on this option the user is presented with the list of commands (high and low level) available for use in speech recognition (figure 4).

3.3. Main functionality

This section describes the main functionality of the speech control tool developed.

3.3.1. Start and exit

Ensure that Microsoft Voice is already running, then run the tool from the application directory. The user has the ability to stop the tool without affecting the applications running, even if the applications were loaded from within it. To stop the tool, click on the Exit menu item on the main screen. The user will be prompted to enter 'y' or 'n', or to click on the appropriate buttons.

3.3.2. Press to talk

Microsoft Voice provides the user with the facility to change some of the options available to them. One of these options is the way in which the speech interface works. The user can choose to press a key in order to say a command, or say the computer's name before each command, or have the computer listen to everything which is said. We chose the press-to-talk method, as we found this to be the most appropriate way to rule out accidental commands resulting from background noise.

3.3.3. Main window

This window is positioned at the bottom right of the screen. Most of the time the window may not be visible; the idea behind having a screen was mainly for testing purposes (the status bar shows what is going on) and for sighted users who take comfort in being able to see something happening.


Figure 4. ‘What can I say’ screen.


3.3.4. Applications menu

At load time, the applications menu is built as a list of the applications that are available on the computer system. This list is held in a Microsoft Access database and amended manually when the need arises.
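A sketch of building that menu is shown below, reading the list through pyodbc and the Microsoft Access ODBC driver. The database path, table and column names are assumptions; the paper does not give the schema.

```python
# Sketch: load the spoken applications menu from an Access database.
# File, table and column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\speechtool\applications.mdb"
)
cursor = conn.cursor()
for name, path in cursor.execute("SELECT Name, Path FROM Applications"):
    print(f"say '{name}' to launch {path}")  # one menu entry per row
conn.close()
```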

3.3.5. Help menu and about menu item

This contains details of the title, version and author of the software. Pressing Enter returns to the main menu (equivalent to clicking on the OK button). Another button, called 'Current Applications', exists which provides details of which windows are currently open; this was put in place for testing purposes but proved to be a useful tool to keep. Both of these buttons speak their captions as the mouse moves over them.

3.3.6. Status bar

This has been placed on the form merely as a visual aid, so that a sighted person can see what is going on as well as hear it. Text messages are placed in the status bar at the same time as they are spoken.

3.3.7. Speech commands

The speech commands are split into four categories: application commands, general global commands, top level menu commands and sub menu commands. Whenever any of the commands is spoken into the microphone and recognized correctly, the user will hear the command repeated back to them as confirmation of its actions.

3.3.7.1. Application commands. These commands are global in the sense that we can call them at any time in order to change from one application to another. The applications included in the tool are: Television, API Viewer, Calculator, Microsoft Word, Microsoft Excel, Teletext and Internet Explorer. These applications can be changed by editing the data in the Access database. If the user were to speak 'Microsoft Word' into the microphone, the system would first echo the user's request, then check whether the application is already running and switch to it if it is; otherwise the application will be loaded. During the search for, or loading of, an application, the user is kept informed of what the system is doing. One of the general global commands is 'Applications'; if this command is spoken, a list of the available applications is read back.
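The switch-or-load behaviour can be sketched as below with pywin32: if a top level window whose title contains the application name exists, it is brought to the foreground; otherwise the executable is launched. The title-matching rule is our assumption for illustration.

```python
# Sketch of the 'switch if running, otherwise load' application command.
import subprocess
import win32gui

def switch_or_load(app_name: str, exe_path: str) -> None:
    matches: list[int] = []

    def collect(hwnd: int, _extra: None) -> bool:
        # remember windows whose title mentions the application
        if app_name.lower() in win32gui.GetWindowText(hwnd).lower():
            matches.append(hwnd)
        return True  # keep enumerating

    win32gui.EnumWindows(collect, None)
    if matches:
        win32gui.SetForegroundWindow(matches[0])  # already running: switch
    else:
        subprocess.Popen([exe_path])              # not running: load it
```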

3.3.7.2. General global commands. These are global commands which can be called upon at any time, and have been set to:

• Commands: provides the user with a spoken list of all of the currently available commands, which includes the general global commands, the top level menu commands (if any), the sub menu commands (if any) and the word 'Applications'.
• Menu: imitates the Alt key on the keyboard.
• Minimize: minimizes the currently active window.
• Maximize: maximizes the currently active window.
• Restore: restores the currently active window to its original size and position.
• Current Window: announces the title of the currently active window.
• Left: imitates the left arrow key.
• Right: imitates the right arrow key.
• Up: imitates the up arrow key.
• Down: imitates the down arrow key.
• Enter: imitates the enter key being pressed.
• Select: imitates the enter key being pressed.
• Read: reads the text associated with the currently active window.
• Stop: stops the speaking.
• Applications: provides the user with a verbal list of the available applications.

The left, right, up and down commands have been provided to navigate around the menus without using named menu items. For example, in the Teletext software, in order to use the Export facility we could say 'Menu', 'Right', 'Down', 'Enter'. 'Menu' will highlight 'Page', 'Right' will move one position right and highlight 'Edit', 'Down' will move one position down and highlight 'Export', and 'Enter' will select 'Export'. The general global commands are fixed within the software and are not user-changeable; however, we found these commands to be sufficient to move about and activate the menu items.
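The key imitation behind these commands can be sketched with the Win32 keybd_event call, as below: each spoken navigation command injects one key press, so the Export sequence above becomes Alt, Right, Down, Return. The virtual-key codes are standard; the mapping table and timing are our illustration.

```python
# Sketch: imitate keyboard keys for the spoken navigation commands.
import ctypes
import time

KEYEVENTF_KEYUP = 0x0002
VK = {"Menu": 0x12, "Left": 0x25, "Up": 0x26,
      "Right": 0x27, "Down": 0x28, "Enter": 0x0D}

def press(command: str) -> None:
    """Inject one key press for a spoken navigation command."""
    code = VK[command]
    ctypes.windll.user32.keybd_event(code, 0, 0, 0)                # key down
    ctypes.windll.user32.keybd_event(code, 0, KEYEVENTF_KEYUP, 0)  # key up

for spoken in ["Menu", "Right", "Down", "Enter"]:  # the Export example
    press(spoken)
    time.sleep(0.1)  # give the menu time to redraw between presses
```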

3.3.7.3. Top level menu commands. These commands change from one application to another. When the tool starts, no top level menu commands are set. Only when the user calls an application are the top level menu commands set to the top level menu items of the currently active application window.

3.3.7.4. Sub menu commands. Not only do these commands change from one application to another, but they also change from one top level menu item to another. When a top level menu item is selected, the sub menu is identified, and the sub menu items are added to the list of available speech commands.

3.4. A scenario demo

This demo shows a scenario of a successful integration of speech into a GUI-based application, without removing the original functionality of the GUI.

Run the tool
<< wait for the welcome message >>
Click Help
Click About
<< About screen is shown and spoken >>
Move mouse over 'OK Button'
<< hear 'OK Button' >>
Move mouse over 'Current Applications Button'
<< hear 'Current Applications Button' >>
Click on 'Current Applications Button'
<< Current Applications screen is shown >>
Move mouse over 'OK Button'
Press Enter
<< proves the keyboard still works >>
Move mouse over 'OK Button'
<< hear 'OK Button' >>
Click 'OK Button'
<< returned to main screen >>
Click Exit
<< prompted 'Are you sure..?' >>
Press N
Click Applications
<< shows a list of available applications: Television, API Viewer, Explorer, Calculator, Word, Excel, Teletext, Internet Explorer >>
Click Applications
<< back to main menu >>

Explain the use of the F8 key as a press-to-speak key.
Press F8 and say Commands
<< menu, alt, minimize, maximize, restore, current window, left, right, up, down, enter, select, read, stop, applications >>
Press F8 and say Applications
<< Television, API Viewer, Explorer, Calculator, Word, Excel, Teletext, Internet Explorer >>

Explain problems with the Teletext software needing the television to run first.
Explain problems with the television not being able to be initiated.
Press F8 and say Teletext
<< wait for the 'Teletext is loaded' message >>
F8 / say Maximize, F8 / say Minimize, F8 / say Teletext, F8 / say Restore, F8 / say Read, F8 / say Page, F8 / say Receive, type 555 tab bbc2 enter, F8 / say Read
<< this should be a blank screen as the page has not been received yet >>
<< wait for the page to be received >>
F8 / say Read
Explain that the copy function has been slowed down so that its operation can be seen.
Demonstrate a few more screens.

Demonstrate the use of speech on the menu commands.
F8 / Explorer, F8 / View, F8 / Commands
<< to show commands as they change >>
F8 / Options, Click Cancel, F8 / Tools, F8 / Commands
<< to show commands as they change >>
F8 / Menu, Current Window

F8 / Internet Explorer
<< default page is loaded >>
F8 / Read

Other functions:
'Tell me the date'
'do it'
<< confirmation >>
'Tell me the time'
'do it'
<< confirmation >>


4. Conclusion

We were able to put the speech tools into practice and provide not only a Teletext converter for the blind but also a more general application which will read any text-based information from a GUI screen. This included a demonstration of its use within Internet Explorer. Handling the menus by speech enabled us to provide visually impaired users with most of the functionality of GUI-based applications without removing their original functionality. In the beginning, it was difficult to obtain much information about speech tools as they were new to the market. As time went on, more and more products started appearing and it became difficult to keep up with the changes and advancements being made. This project has touched on an important area of research, which we aim to investigate further in the near future. Many of the limitations of the software are down to problems identified with the packaged software that was used. Our knowledge of Windows API calls at the start of the project was zero; the necessary knowledge was gained during development through trials and other examples. With the arrival of digital Teletext, there will be nothing to gain by taking the Teletext converter forward with analogue signals. As soon as digital Teletext becomes available, it will make sense to analyse the possibilities this provides and build a 'digital Teletext converter for the visually impaired' based on the same, if not similar, principles as laid down in this paper.

References

AINSLEY, H., GHAOUI, C. and WHITELEY, K., 2000, An OO model to generate knowledge structures for authoring instructional hypermedia. Proceedings of the 26th Euromicro Conference, Maastricht, the Netherlands, 5–7 September (Piscataway, NJ: IEEE Computer Society), pp. 35–42.

ALDRICH, F. K. and PARKIN, A. J., 1988, Tape recorded textbooks for the blind: a survey of producers and users. British Journal of Visual Impairment, 6, 3–6.

AMUNDSEN, M., 1996, MAPI, SAPI, & TAPI Developer's Guide (SAMS Publishing).

ARONS, B., 1991, Hyperspeech: navigating in speech-only hypermedia. Proceedings of Hypertext '91 (New York: ACM Press), pp. 133–146.

BABER, C. and NOYES, J. M. (eds), 1993, Speech output. Interactive Speech Technology: Human Factors Issues in the Application of Speech Input/Output to Computers (London: Taylor & Francis).

BRISTOW, G. (ed.), 1986, Electronic Speech Recognition: Techniques, Technology and Applications (London: Collins).

CONTROL ADVANCEMENTS INC., 1998, Virtual reality mouse with tactile feedback. http://www.controladv.com/products

COURT, A. W., 1998, Improving creativity in engineering design education. European Journal of Engineering Education, 23, 141–154.

DIX, A., FINLAY, J., ABOWD, G. and BEALE, R., 1998, Human–Computer Interaction, 2nd edn (Englewood Cliffs, NJ: Prentice Hall).

DOWNTON, A. (ed.), 1991, Engineering the Human–Computer Interface: An Overview. The Essex Series in Telecommunication and Information Systems (Berkshire: McGraw-Hill).

EDWARDS, A. D. N. (ed.), 1995, Extra-ordinary Human–Computer Interaction: Interfaces for Users with Disabilities (Cambridge: Cambridge University Press).

FOULKE, E. B., 1991, In M. A. HELLER and W. SCHIFF (eds), The Psychology of Touch (Hillsdale, NJ: Lawrence Erlbaum Associates).

GHAOUI, C., 2000, Document icon bar for the support of authoring, learning and navigation on the Web: usability issues. Journal of Network and Computer Applications, 23, 455–475.

GHAOUI, C. and AINSLEY, H., 2001, Generating multiple hypermedia learning views using OO modelling. International Journal on Interactive Learning Environments (in press).

HAWKRIDGE, D., VINCENT, T. and HALES, G., 1985, New Information Technology in the Education of Disabled Children and Adults (London: Croom Helm).

HOLMES, S., 1998, There must be more to life than this. European Journal of Engineering Education, 23, 191–198.

IMMERSION CORPORATION, 1998, FEELit Mouse. http://www.force-feedback.com

LENSCHOW, R. J., 1998, From teaching to learning: a paradigm shift in engineering education and lifelong learning. European Journal of Engineering Education, 23, 155–161.

PARKIN, A. J. and ALDRICH, F. K., 1989, Improving learning from audio tape: a technique that works. British Journal of Visual Impairment, 7, 58–60.

PETRIE, H. and GILL, J. M., 1993, Current research in access to graphical user interfaces for blind computer users. European Journal of Special Needs Education, 8, 153–157.

PETRIE, H., MORLEY, S., MCNALLY, P., O'NEILL, A. and MAJOE, D., 1997, Initial design and evaluation of an interface to hypermedia systems for blind users. In M. BERNSTEIN, L. CARR and K. OSTERBYE (eds), The Eighth ACM Conference on Hypertext, Hypertext '97 (Southampton).

WANG, W., GHAOUI, C. and RADA, R., 1997, Domain model based hypermedia for collaborative authoring. In Intelligent Hypertext: Advanced Techniques for the World Wide Web, Lecture Notes in Computer Science, Vol. 1326 (Berlin: Springer), pp. 131–144.

About the authors

Claude Ghaoui has been a Senior Lecturer in Computer Systems at the School of Computing and Maths Sciences of Liverpool JMU since 1995. Her research interests include HCI, hypermedia in education, open/distance learning and the WWW. She received her PhD in computer science in 1995 from the University of Liverpool. She held a lecturing post in Computer Science at Kuwait University from 1985 to 1990; she worked as an RA at IBM/KSC from 1985 to 1986 and as an RA at Liverpool University from 1991 to 1995. She is a voting member of the ACM. She is UK Correspondent for EuroMicro, and organized, as Programme Chair, the EuroMicro 2000 workshop on Multimedia & Telecommunications. She is Deputy Programme Chair for EuroMicro 2001 on Multimedia and is on the IPC for EuroMicro WAP 2001, which is partly sponsored by Nokia.

M. Mann is a freelance IT developer working on various projects for hospitals and companies. She completed her BSc in Computer Studies at Liverpool JMU in 1997. Her research interests include HCI and its applications for the blind.

Eng Huat Ng has been a senior researcher at AIG-Software International since 1999. He received a PhD in Computing in 1998 from Liverpool JMU, where he worked as a lecturer from 1998 to 1999. His current research interests include the WWW, HCI and hypermedia applications.
