Controlling the Electronic Whiteboard’s Writing Surface without Cluttered Toolboxes: Shifting the Focus Back to Content Delivery

Khaireel A. Mohamed and Thomas Ottmann

Institut für Informatik, Albert-Ludwigs-Universität Freiburg, D-79110, Germany {khaireel, ottmann}@informatik.uni-freiburg.de

Abstract: We present an interaction technique for classroom teaching, realised in our software for electronic whiteboards, which allows the writing surfaces of the boards to be used as if they were ordinary whiteboards. Toolboxes that provide, among other things, options for changing ink properties, editing commands, and accessing files are tucked away in the background. These tools appear only when they are in demand, at the stroke of a recognised ink-gesture, so the entire surface of the electronic whiteboard is at the complete disposal of the instructors. The incorporation of our context-aware ink-gesture recognition technique into the software frees instructors from searching for the correct menu items on the board’s perimeter. Based on the input ink-gesture command, it analyses the current contextual ink information on the board’s writing surface, gathers the most probable menu items that the instructors may require at that particular juncture, and pops them up close to where the instructors are currently writing. This becomes an added advantage when these wall-mounted boards are cascaded next to each other, as the distance and the peripheral view-range between the instructors and the static menu items increase on boards that are further away from the left-most screen in the cascade. Consequently, the dynamic popup menus shift the attention of the instructors back towards the delivery of their lesson contents (at full screens), instead of having them move about between the boards to effect system commands in front of a live audience.

Introduction

Chalkboards are by far the most widely used medium for lecture presentations in institutes of higher learning, and they are very effective. Not only do they allow instructors to communicate all or part of the course materials in natural handwriting, but they also demand almost no attention to operate. Yet as our knowledge providers turn to technology, Chong and Sakauchi [5] observed that when presentation slides are available (either as overhead projected slides or PowerPoint slides from a beamer), the use of the chalkboard is significantly reduced during the course of the lecture. When it comes to facilitating questions and answers, the chalkboard is conveniently used again to demonstrate facts, confirm equations, and diagrammatically illustrate flowcharts. But making a direct reference from the chalkboard to any of the slides can be a hassle, and instructors sometimes resort to annotating on those slides in whatever little space is available, which may even compromise legibility.

At the same time, there is a growing trend amongst learning institutions of approving the integration of electronic whiteboards into their classrooms. Many reports in this emerging body of literature suggest that education vendors and teaching instructors alike find the electronic whiteboards relatively easy and compelling to use [1, 7]. However, the authors also caution that becoming confident in their use often requires a commitment of effort in terms of both training and independent exploration.

An electronic whiteboard is the equivalent of a chalkboard, but on a large, wall-mounted, touch-sensitive digital screen connected to a (rear) projector and a computer [14]. This computer can be controlled directly from the digital screen, simply by touching it with a finger or with a special digital pen (stylus). Other terms for the electronic whiteboard include the “eBoard”, “digital whiteboard”, “smart board”, and “interactive whiteboard”; they carry slightly different meanings to different people depending on the environment of application. In general, however, all these terms describe the group of technologies that are brought together to support classroom activities.

Instructors and students in class see not only the prepared course materials projected onto the electronic whiteboards, but also the active annotations made by the instructors with the stylus as the lessons progress. Combined with dedicated software applications, the supporting whiteboard technologies can be made to function more effectively, offering better pen control and displaying both static and moving images at suitable resolutions. Among other things, these applications offer:


“Flipchart” capabilities to save and print images as well as annotated ink-trace markings;
A “blackboard” to manipulate primitive handwritten text and drawings;
An integrated slides-and-blackboard “environment”;
“Movie screens” to showcase video and software demonstrations;
“Tools” to facilitate discussions and group work; and
Access to the network, where shared teaching resources may be available.

More importantly, the way the electronic whiteboards are utilised (by both instructors and students) will shape the kind of impact they have on the overall learning environment. The basic guidelines highlighted by Glover and Miller [7] for good practices and outcomes when engaging classroom interaction and participation with the electronic whiteboards across the curriculum are:

To increase efficiency by enabling instructors to draw upon a variety of ICT-based resources without disruption or loss of pace;

To extend learning by using more engaging materials to explain concepts; and

To transform learning by creating new learning styles stimulated by interaction with the electronic whiteboards.

All of the above technological and social benefits have positive indications from the wider observational point of view. Nevertheless, there are also subtle cases that can make instructors hesitant to change their pedagogy to incorporate the electronic whiteboards into their lesson plans, if practical considerations hinder their perspectives. We already pointed out that it requires a commitment of effort to develop media-enhanced teaching contents for the electronic whiteboards, which can considerably tax the instructors’ initial workload. Also, according to the studies compiled by BECTA [1], while teaching with electronic whiteboards instils in students an eagerness to learn, it places undue pressure on the instructors when it comes to delivering the contents. This is especially true if the writing environment of the electronic whiteboards is not sufficiently supported with the necessary tools.

Good electronic whiteboard contents are generally those that display themselves immediately in highly visual terms, and can be most effectively understood by a group of individuals [2, 6]. The ability to easily modify and change these content materials is all about the personalisation of the electronic whiteboard environment. In relation to this, and in order to achieve good content presentation, there need to be good software applications to manage and control the content data with enhanced features and functions. Currently, makers of the electronic whiteboards ship accompanying software embedded into their systems for use in classroom environments. Examples include those listed on the Smart Technologies Inc. website, such as the ‘notebook whiteboarding software’ (see Figure 1) and the ‘SMART Board tools’. These intuitive applications support numerous operations, categorised into many menu items under several options that instructors can use while delivering their lessons. There are so many, in fact, that they tend to become more of a distraction to instructors than an assistance in front of a live audience.

Now, we are particularly interested in how these applications allow instructors to write on the electronic whiteboard freely, as if it were a normal blackboard. The situation described above may not pose a problem if the screen of the electronic whiteboard is small. However, when these applications are maximised to full screen or across several screens, instructors run into the problem of ‘hand-and-eye coordination’: a cross between uncomfortable peripheral views for the eyes and awkward bodily positions needed to access the menus without disturbing the flow of the lesson or blocking the audience. This problem worsens when we cascade the electronic boards in series next to each other in the learning environment.

Figure 1. Cascading electronic whiteboard set-up. Note the distance between the static toolbars.

In this paper, we put forward our approach to overcoming this problem for electronic whiteboard environments while still adhering to the three main UI-design concepts: adaptable, adaptive, and perceptual. We concentrate our discussion on how our software handles interfacing issues on the integrated writing surfaces. We show that, by actively analysing all input ink-traces, we are able to work on a concept of ‘modeless interactivity’ that does not involve any other tracking devices around the current electronic whiteboard set-up. Finally, we conclude with the findings and evaluations of our software based on the hypotheses described above.

UI Design Concepts

Figure 2 shows the Venn diagram representing the three main UI-design concepts we pointed out earlier and their relations to each other.

Adaptable interfaces can be considered reliable if they give a faithful picture of what is happening within the application programs, if they are comprehensive (nothing material has been left out of the picture), and if they are verifiable (the interface carries out the commands as hinted by the content-widgets).

Adaptive interfaces are understandable when they are not designed for the benefit of experts alone, and when their proactive assistance still leaves the impression that the users are always in control.

The relevance of perceptual interfaces has the potential to affect a user’s beliefs: to confirm his or her beliefs from past actions, or to shape his or her beliefs for future interactions with the system. We add here that amassing relevant information takes time, and sometimes the delay reduces its utility.

When we overlap these distinctive characteristics, we obtain corollaries that are generally useful for domain-specific applications. For instance, merging parts of the adaptable and adaptive concepts results in better (assisted) management of customisable menus, which has been proven to expedite (manual) repetitive tasks [3]. Secondly, the negative effects of adaptive user interfaces on “ease of use” can be overcome by adding a perceptual support component of user mental models based on contextual information. And finally, visualising the various human sensory perceptions on the WIMPs, as they display each multi-modal element perceived from the environment, reassures the user that the system has registered all inputs and is doing something to evaluate them.

Figure 2. Relating adaptable, adaptive and perceptual concepts on user interfacing and its design.

Our approach to developing a suitable interface for the electronic whiteboards lies at the centre of the three merges, for there we expect to address the problem of providing for huge writing surfaces without compromising too much on the critical issues known to each UI-design concept.

Feeling in Control of the Writing Environment

Without investing in additional tracking devices such as motion-detectors or cameras, the large, interconnected, high-resolution, wall-mounted, multiple displays are by themselves sensitive to the instructors’ touch. Properly configured, this electronic whiteboard environment offers an extended desktop metaphor quite unlike the ones that we presently know and have grown used to. Further incorporating pen-input technology into this set-up accords instructors a greater sense of personal presence than a conventional monitor, in that it allows them to directly touch the active screens and see immediate reactions through the digital pens. The electronic whiteboards are meant to be an active presentation medium for an audience in a large room. While there is always a desktop console that controls the electronic whiteboards through the mouse and keyboard, we want to free up this mode of interfacing and let instructors communicate directly with the screen using just a digital pen.

Judging that the dominant metaphor we have today is a mismatch for the computing environment we are dealing with tomorrow, it is difficult to place this huge-screen scenario (alone) into any of the pronounced categories of user interface debates on the concepts of adaptivity and adaptability [12, 13]. For instance, the sphere of influence of hand-and-eye coordination needs to be enlarged, perhaps to include the body, to follow the claims of direct manipulation of adaptable interfaces. Furthermore, while the audience have it easy watching from a distance, the instructor does not: standing so close to one part of the multiple huge screens often leaves the interface widgets he or she may require out of reach; worse still, because the instructor cannot see those widgets, he or she may assume that the actions or commands represented by those widgets do not exist within the board application. We point out here that this may affect the flow of the delivery of the lesson contents. In this case, we may be left to rely on the adaptivity of the interface for the electronic whiteboards to proactively assist instructors while at the same time ensuring that they still feel in control of the whole process.

Modeless Interactivity

We define modeless interactivity for the digital ink environment on the electronic whiteboards as the seamless integration of both the ‘writing’ and ‘gesture-command’ modes on a common, unified platform. On top of this, we further treat the conventional on-screen widgets, such as the fixed pull-down and pop-up menus, as ‘interaction modes’ whose usage can be minimised, or done away with completely when not needed, in the most cumbersome of circumstances. We go by the notion that instructors should not always need to run between different sections of the boards in order to effect a menu action.

The Digital Ink Traces

Figure 3. Splitting up freehand writing into ink components on the time line.

The digital ink domain is unlike any other: all of its related information can easily be organised, cross-referenced, and presented in front of users to allow them direct manipulation of the data. We begin by defining a trace as a trail of digital ink data made between a successive pair of pen-down and pen-up events, representing a sequence of contiguous ink points – the X and Y coordinates of the pen’s position. Sometimes we may find it advantageous to also include timestamps for each pair of sampled coordinates, if the sampling rate of the transducer device is not constant. A sequence of traces accumulates into meaningful graphics, forming what we (humans) perceive as characters, words, drawings, or commands. Each trace can be categorised in terms of the timings noted for its duration, lead, and lag times, as illustrated in Figure 3.
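
To make this definition concrete, the sketch below models a trace and its Figure 3 timings in Python. It is a minimal illustration of the definitions above, not the actual implementation; all names (Trace, lead_time, lag_time) are ours.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trace:
    """A trail of digital ink between one pen-down and its pen-up event."""
    points: List[Tuple[float, float]]  # contiguous (x, y) pen positions
    timestamps: List[float]            # one timestamp per sample, in ms

    @property
    def pen_down(self) -> float:
        return self.timestamps[0]

    @property
    def pen_up(self) -> float:
        return self.timestamps[-1]

    @property
    def duration(self) -> float:
        return self.pen_up - self.pen_down

def lead_time(previous: Trace, current: Trace) -> float:
    """Idle time between the previous trace's pen-up and this pen-down."""
    return current.pen_down - previous.pen_up

def lag_time(current: Trace, following: Trace) -> float:
    """Idle time after this pen-up; by definition, lag(c_i) = lead(c_{i+1})."""
    return following.pen_down - current.pen_up
```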

Figure 4. Stand-alone ink components that can be interpreted as gestures.

Detecting Ink-Gestures for Gesture-Commands

For a set of contiguous ink components Sj = {c0, …, cn-1} in a freehand sentence made up of n traces, we note that the lag-time of the ith component is exactly the lead-time of the (i+1)th component; i.e. lag(ci) = lead(ci+1). Consequently, the timings that set one set of ink components apart from another are the first lead-time lead(c0) and the last lag-time lag(cn-1) in Sj; these are significantly longer than those of the in-between neighbours c1 to cn-2. Furthermore, if we observe a complete freehand sentence made up of a group of freehand words, we can categorise each ink component within those words into one of the following four groups:

Beginnings – Ink components at the start of a freehand word;
Endings – Ink components at the end of a freehand word;
In-betweens – Ink components in the middle of a freehand word; and
Stand-alones – Disjointed ink components.

The groups differ in the demarcations of their lead and lag times, and as such provide a way for a perceptual system to identify them. The temporal relation that singles out stand-alone components (and freehand gestures) is the “longer-than-average” lead and lag times of a single-stroke ink trace, as shown in Figure 4: a pause before and after the component results in significantly longer lead and lag times. It is not very often that we see people gesturing to a control system in the middle of writing a sentence or drawing a diagram.
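
The grouping rule can be sketched as a simple threshold test; the pause_threshold standing in for the “longer-than-average” pause is a hypothetical parameter, which a real system would estimate from the writer’s recent lead and lag times rather than fix in advance.

```python
def classify_component(lead: float, lag: float, pause_threshold: float) -> str:
    """Group an ink component by the timing demarcations of Figures 3 and 4.

    `pause_threshold` is an illustrative stand-in for the
    "longer-than-average" pause that marks word and gesture boundaries.
    """
    long_lead = lead > pause_threshold
    long_lag = lag > pause_threshold
    if long_lead and long_lag:
        return "stand-alone"  # disjointed stroke: candidate gesture-command
    if long_lead:
        return "beginning"    # start of a freehand word
    if long_lag:
        return "ending"       # end of a freehand word
    return "in-between"       # middle of a freehand word
```

Only a stand-alone, single-stroke component is handed on to the gesture recogniser, which is what lets writing and gesturing share one modeless surface.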

We reported in our previous work that, based on the lead and lag times obtained, we can convincingly anticipate whether the latest ink component is an instance of a writing trace or a gesture [10]. Advanced recognition techniques have been introduced to the digital ink domain, and while a number of works have gone into handwriting and sketch recognition [11], some among them use the same techniques to recognise hand-drawn pen gestures.

Hence we now have both a ‘writing’ mode and a ‘gesture’ mode to help with the current interface issues. We point out that by ‘gesturing to the system’ we mean the actual gesticulated actions resulting from pen movements that represent a command, and not merely ‘tap-and-drag-the-icon’ movements. The prospects of gesturing commands in ink-domain applications are encouraging; so much so that many authors think this is the epitome of the final interface for pen-based applications. Although we agree, to a certain extent, that this mode of ubiquitous interfacing may form the fundamentals of future (more enhanced) hardware devices, we do not anticipate this to be the trend in our present context. The PUI guidelines should be adopted and used to create the first steps of the bridge between fully packed screens of user interface widgets and a ubiquitous one.

Setting Up, Engaging and Deriving Contextual Ink Information

Figure 5. Context-enabled digital ink environment (cascading electronic whiteboards).

Similar to the Confab architecture described by Heer et al. [8], our approach is geared towards data-centricity rather than process-centricity. Instead of sending requests to distributed components and getting them back, a central Context-Observer provides subscription methods to component-identifiers that make them perceptible. These identifiers are the actual context-information, belonging to a pre-agreed ontology used by the whole environment. While in use, they are systematically attached to every component within the environment for as long as the environment is alive. The list of context-identifiers includes ‘contents’ (ink traces, in-slide-contents, images, etc.), ‘actions’ (gestures, commands, etc.), ‘UI-features’ (colour, thickness of pen-tip, last known position, etc.), and ‘environment’ (board info, slide info, user info, etc.). Depending on a particular component’s own context, its list of identifiers may vary compared to the lists of other components. Figure 5 depicts our context-enabled digital ink environment, which brings up (known and trusted) relevant (dynamic-menu) items for the same gesture-type performed at various spatial locations on the boards.

The Context-Observer maintains a context-memory database where it keeps track of present and past contexts that occur inside the environment. Every scene change within the environment triggers the update mechanism of the context-memory. For example, when a user flips to the next picture-slide or writes on a new slate on a different section of the board, a new ‘environment’ context is created from the information in the current list of identifiers that the Context-Observer picks up. This is then added as a new node in the database.
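
Structurally, the Context-Observer can be pictured as below: components subscribe with identifiers drawn from the agreed ontology, and each scene change snapshots them into a new context-memory node whose timestamp id the lower-level components carry (cf. Figure 6). This is a schematic sketch; the class layout and method names are illustrative assumptions, not the literal interface of the software.

```python
import time
from typing import Any, Dict, List

# Pre-agreed ontology of context-identifier categories (as listed above).
ONTOLOGY = ("contents", "actions", "UI-features", "environment")

class ContextObserver:
    """Central observer: keeps a context-memory of present and past contexts."""

    def __init__(self) -> None:
        self.memory: List[Dict[str, Any]] = []           # context-memory database
        self.subscribed: Dict[str, Dict[str, Any]] = {}  # component-id -> identifiers

    def subscribe(self, component_id: str, identifiers: Dict[str, Any]) -> None:
        """Components register the identifiers that make them perceptible."""
        unknown = [k for k in identifiers if k not in ONTOLOGY]
        if unknown:
            raise ValueError(f"identifiers outside the agreed ontology: {unknown}")
        self.subscribed[component_id] = identifiers

    def on_scene_change(self, environment_info: Dict[str, Any]) -> str:
        """E.g. flipping to the next slide: snapshot the current identifiers
        as a new top-level 'environment' node in the context-memory."""
        node_id = f"env-{time.time():.3f}"  # timestamp doubles as the node id
        self.memory.append({"id": node_id,
                            "environment": environment_info,
                            "children": dict(self.subscribed)})
        return node_id
```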

While ‘environment’ is considered a top-level (abstract) context, other categories that mainly cater to ‘contents’ are ordered lower down in the context hierarchy (detailed items). These components, like the ink-traces, images, and slide-data, carry the id (timestamp) of the top-most level context within their tree-branch of the database. This is illustrated in Figure 6.

Combining Recognised Ink-Gestures with the Context-Aware Approach

Figure 6. Data management of context information.

Applications running on and controlling the digital screens are allowed to query the Context-Observer. Because all of the information contained in the context-memory database is current, sorted, and indexed, an application will receive a list of related context-information from the Context-Observer by providing it with parameters such as a time range, a bounding box, or any component ids visible on screen. Our software is programmed to further narrow down this resulting list by applying functions that manipulate lists (set theory: union, intersection, complement, cap, cup, etc.), as well as imperatively computing new contexts from the existing list of identifiers without changing any of the existing values (functional programming: fold, map, zip, etc.).

For example, in order to display the most probable menu items to complement a ‘Circle’ gesture on a background-slide with handwritten ink on it, we would first query the Context-Observer for a list of current contexts sorted in the order of frequency (of appearance and usage), age, and probability of relevance. Then, from this list, we apply the cup function (i.e. x ∈ A1 ∪ … ∪ An ⇔ x ∈ Ai for some i) on the list of contexts to pick out only those that carry the gesture-identifier ‘Circle’ with over a 95% level of confidence. We repeat this individually with all the ‘environment’ contexts found in the list as input parameters to the same cup function. The resulting list of lists would, among others, contain contexts that relate to ‘contents’ and ‘UI-features’. Finally, we apply a mapping function that converts only the ‘contents’ and ‘UI-features’ in the union of the list of lists into unique command-names. A typical conversion for an ink-trace ‘content’ would return the command-names “colour” and “pen-tip style”, which are coincidentally context-identifiers belonging to ‘UI-features’. This final list is then thematically categorised and sorted by frequency of usage, before invoking the application’s UI-controller to output a pie menu of what we think are the correctly related menu items.
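
As a sketch, the ‘Circle’ example reduces to a select-then-map pipeline over the returned context list, in the spirit of the selection (σ) and projection (π) operators mentioned below. The context layout, the conversion table, and the function names are illustrative assumptions; only the ‘Circle’ identifier, the 95% confidence level, and the “colour”/“pen-tip style” conversion come from the text.

```python
from typing import Any, Dict, Iterable, List, Set

Context = Dict[str, Any]  # one node of context-information (layout assumed)

def select_gesture(contexts: Iterable[Context], gesture: str,
                   min_confidence: float = 0.95) -> List[Context]:
    """Selection: keep contexts carrying the gesture-identifier at or
    above the required confidence level."""
    return [c for c in contexts
            if c.get("gesture") == gesture
            and c.get("confidence", 0.0) >= min_confidence]

# Illustrative conversion table: 'contents' identifiers -> command-names
# (an ink-trace content yields "colour" and "pen-tip style", as in the text).
TO_COMMANDS = {
    "ink-trace": {"colour", "pen-tip style"},
    "image":     {"resize", "crop"},  # hypothetical entry
}

def menu_for(contexts: List[Context], gesture: str,
             usage_freq: Dict[str, int]) -> List[str]:
    """Map the selected contexts to unique command-names and sort them
    by frequency of usage before building the pie menu."""
    commands: Set[str] = set()
    for ctx in select_gesture(contexts, gesture):  # union over contexts
        for content in ctx.get("contents", []):
            commands |= TO_COMMANDS.get(content, set())
    return sorted(commands, key=lambda name: -usage_freq.get(name, 0))
```

Under these assumptions, menu_for(contexts, "Circle", usage_freq) would return, say, ["colour", "pen-tip style"] when the circled region contains only handwritten ink.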

This modeless interaction paradigm is modelled after Heer et al.’s [8] “liquid query operator tree”, supporting traditional database operators including selection (σ), projection (π), and joins (⋈), as well as (λ) operators that are responsible for interfacing local information spaces.

Deriving Interfaces from the Ink Inputs

To complete the visualisation of the on-demand assistance, we rely on the last known coordinate position of the instructor’s latest written trace (or gesture). Popping up dynamic-content pie menus and/or other static command buttons in that vicinity seems most appropriate and serves as a convenience to the instructor. Earlier research on pie menus proved an advantage for users of digital screens, as people tend to remember cyclic positions better, which expedites the selection process [4]. Our command-button (toolbar) interface is docked to the bottom of the screens, appearing on-demand with only an array of anticipated widgets; see Figures 5 (a), (b), and (c) for examples. The combined length of the widgets will not exceed the instructor’s arm span, to ensure that all widgets are within reach and within the instructor’s line of sight.
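
The placement rule can be sketched geometrically: centre the pie menu on the last pen position, shifted just enough to stay on the current screen, and cap the on-demand toolbar at however many widgets fit within reach. The numeric defaults here are hypothetical placeholders, not measured values from the system.

```python
from typing import List, Tuple

def popup_position(last_x: float, last_y: float,
                   board_w: float, board_h: float,
                   menu_radius: float = 120.0) -> Tuple[float, float]:
    """Centre the pie menu on the last written point, clamped so that it
    stays fully visible on the current board."""
    x = min(max(last_x, menu_radius), board_w - menu_radius)
    y = min(max(last_y, menu_radius), board_h - menu_radius)
    return (x, y)

def within_reach(widgets: List[str], widget_width: float,
                 arm_span: float = 900.0) -> List[str]:
    """Show only as many anticipated widgets as fit inside the instructor's
    arm span; the remainder hide behind the "More>>" button."""
    return widgets[:int(arm_span // widget_width)]
```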


Findings and Evaluations

Figure 7. Deriving interfaces from single-stroke digital ink inputs.

To measure the response of instructors using the interconnected, wall-mounted electronic whiteboards based solely on digital pen inputs, we collected and analysed all ink information received in classroom lessons for a period of one semester week. The digital boards were set up to record the amount of writing by instructors, and to see how much of that writing was meant as, and interpreted as, command gestures or invocations of on-demand menu options. The data were then used to infer how well our support system reacts in providing valid and useful assistance to instructors. While the system recorded all of the above performance data, we also stationed a human observer at each lesson conducted on the digital boards. The observer’s sole purpose was to track “human errors”, which we define as the changing of the instructor’s mind on a ‘set action’. For example, after purposely calling up an on-demand menu, the instructor decided that the menu was not needed after all (and subsequently selected the Cancel option), or the instructor suddenly decided that he or she really wanted to access a different option that was completely out of context (and subsequently selected the “More>>” button).

Inks, Commands, and the UIs

Figure 8. Users’ choices of widget options.

A total of 125,866 ink inputs were received and processed by our software operating on the electronic whiteboards during the observation period. All lessons had PowerPoint presentation slides embedded on the screen. Instructors were able to browse the contents of the slides and the boards, and to write anywhere on the screens they pleased. We noted that over 94% of the digital inks collected were freehand writings (and drawings), and only 5.15% were anticipated by the system as gestures for commands or for invoking the hidden (on-demand) interface. Figure 7 depicts the further breakdown of the command gestures and the extent to which correct anticipation led to the invocation of the appropriate UIs. “Human errors” amounted to 12.43% of the system’s anticipation process. This is as opposed to the system’s own errors in judgment, where it either reacted with a wrong command or interpreted the wrong context, all due to the misinterpretation of ambiguous single-stroke gestures. This is a prevalent problem highlighted by Mohamed and Ottmann [9] and also explained by various authors [10, 11], especially in cases where the legibility of the instructor’s freehand writing is itself questionable.

Our data show that of all the correct (assisted) interpretations made by the system in providing the interface-on-demand, more than 55% involved popup pie menus; the rest required the appearance of toolbar widgets near the instructors. Our observers noted that the instructors took advantage of the absence of tool-widgets and naturally utilised the boards as if they were chalkboards – writing anywhere and everywhere they pleased. The instructors did not seem to mind the lack of visible technology, as to them the system seemed to conjure up the interface whenever they needed to use it. Making choices from the anticipated menu options gave users the impression that the boards had always been in their control.

Instructors’ Choices

To obtain a better picture of how users take to having their menu options anticipated by the system in whatever context they are currently working, and then presented to them on-demand, we also made provisions to monitor the kinds of widget options users clicked. The graph in Figure 8 depicts this finding. The first two bars highlight the frequency of users’ choices whenever options were selected from the bottom toolbar. Our system was able to correctly anticipate the context in which the users were working more than 86% of the time. That is, within the limited number of command buttons allowed for the arm span of the users, the system was able to place at least one button that the user could tap for immediate action. Only 13.45% of the time did users have to locate their commands buried inside the “More>>” button. The last three bars show the number of times users clicked on the static pull-down menus, the on-demand widgets (bottom toolbars), and the dynamic-content popup pie menus, respectively. Over half the time we saw users utilising the convenience of the popups, and 42.02% of the time the on-demand widgets.

Conclusion

Our derivation of a system that balances the three UI concepts of adaptability, adaptivity, and perceptuality, for the benefit of huge wall-mounted electronic whiteboards based on just the single input modality of the digital pen, is reliable, understandable, and relevant. Drawing conclusions from the results, we found that fewer widgets on screen seem to be better in classroom situations. Without concentrating too much on the technology, we got instructors to still feel in control of their teaching devices and to put more of their concentration into presenting their lessons. And as the system is always aware of its current contexts with respect to the writing environment, the number of gestures the system can support becomes quite irrelevant – making it easier for the instructors, as they do not have to learn different gestures for different commands, but simply gesture what they think the command will look like.

References

[1] BECTA. What the research says about interactive whiteboards. Becta ICT Research, 2003.
[2] BECTA. Designing content for electronic whiteboards: Using picture frames [online]. Available: http://www.becta.org.uk/industry/advice/advice.cfm?section=2&id=3888. 2004.
[3] A. Bunt, C. Conati, and J. McGrenere. What role can adaptive support play in an adaptable system? In Proceedings of the 9th International Conference on IUI, pp 117-124, 2004.
[4] J. Callahan, D. Hopkins, M. Weiser, and B. Shneiderman. An empirical comparison of pie vs. linear menus. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp 95-100, 1988.
[5] N. S. T. Chong and M. Sakauchi. Back to the basics: a first class chalkboard and more. In Proceedings of the 2000 ACM Symposium on Applied Computing, pp 131-136, 2000.
[6] L. A. Clyde. Electronic whiteboards. In Teacher Librarian Toolkit: Information Technology, vol. 32, no. 2 [online]. Available: http://www.teacherlibrarian.com/tltoolkit/info_tech/info_tech_32_2.html. December 2004.
[7] D. Glover and D. Miller. Running with technology: the pedagogic impact of the large-scale introduction of interactive whiteboards in one secondary school. Journal of Information Technology for Teacher Education, 10(3):257-276, 2001.
[8] J. Heer, A. Newberger, C. Beckmann, and J. I. Hong. liquid: Context-aware distributed queries. In Proceedings of UbiComp 2003, pp 140-148, 2003.
[9] K. A. Mohamed and Th. Ottmann. Fast interpretation of pen gestures with competent agents. In Proceedings of the IEEE Second International Conference on CIRAS, PS09-4, 2003.
[10] K. A. Mohamed. Increasing the accuracy of anticipation with lead-lag timing analysis of digital freehand writings for the perceptual environment. In T. E. Simos (editor), Proceedings of the ICCMSE, pp 387-390, 2004. VSP/Brill.
[11] D. Rubine. Specifying gestures by example. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques, pp 329-337, 1991.
[12] B. Shneiderman and P. Maes. Direct manipulation vs. interface agents. Interactions, 4(6):42-61, 1997.
[13] B. Shneiderman. Direct manipulation for comprehensible, predictable and controllable user interfaces. In Proceedings of the 2nd International Conference on IUI, pp 33-39, 1997.
[14] Smart Technologies Inc. [online]. Available: http://www2.smarttech.com. 2005.