GUI Checklist


A Checklist of Common GUI Errors Found in Windows, Child Windows, and Dialog Boxes

1. Assure that the start-up icon for the application under consideration is unique from all other current applications.

2. Assure the presence of a control menu in each window and dialog box.

3. Assure the correctness of the Multiple Document Interface (MDI) of each window - only the parent window should be modal (all child windows should be presented within the confines of the parent window).

4. Assure that all windows have a consistent look and feel.

5. Assure that all dialog boxes have a consistent look and feel.

6. Assure that the child windows can be cascaded or tiled within the parent window.

7. Assure that icons which represent minimized child windows can be arranged within the parent window.

8. Assure the existence of the "File" menu.

9. Assure the existence of the "Help" menu.

10. Assure the existence of a "Window" menu.

11. Assure the existence and proper location of any other menus which are logically required by the application.

12. Assure that the proper commands and options are in each menu.

13. Assure that all buttons on all tool bars have corresponding menu commands.

14. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.

15. In "tabbed" dialog boxes, assure that the tab names are not abbreviations.

16. In "tabbed" dialog boxes, assure that the tabs can be accessed via appropriate hot key combinations.

17. In "tabbed" dialog boxes, assure that duplicate hot keys do not exist.

18. Assure that tabs are placed horizontally across the top (avoid placing tabs vertically on the sides as this makes the names hard to read).

19. Assure the proper usage of the escape key (which is to roll back any changes that have been made).

20. Assure that the Cancel button functions the same as the escape key.

21. Assure that the Cancel button becomes a Close button when changes have been made that cannot be rolled back.

22. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present.

23. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.

24. Assure that OK and Cancel buttons are grouped separately from other command buttons.

25. Assure that command button names are not abbreviations.

26. Assure that command button names are not technical labels, but rather are names meaningful to system users.

27. Assure that command buttons are all of similar size and shape.

28. Assure that each command button can be accessed via a hot key combination (except the OK and Cancel buttons, which do not normally have hot keys).

29. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.

30. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed.

31. Assure that focus is set to an object which makes sense according to the function of the window/dialog box.

32. Assure that option button (AKA radio button) names are not abbreviations.

33. Assure that option button names are not technical labels, but rather are names meaningful to system users.

34. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.

35. Assure that option box names are not abbreviations.

36. Assure that option box names are not technical labels, but rather are names meaningful to system users.

37. If hot keys are used to access option boxes, assure that duplicate hot keys do not exist in the same window/dialog box.

38. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas.

39. Assure that each demarcated area has a meaningful name that is not an abbreviation.

40. Assure that the Tab key sequence which traverses the defined areas does so in a logical way.

41. Assure that the parent window has a status bar.

42. Assure that all user-related system messages are presented via the status bar.

43. Assure consistency of mouse actions across windows.

44. Assure that the color red is not used to highlight active GUI objects (many individuals are red-green color blind).

45. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).

46. Assure that the GUI does not have a cluttered appearance (GUIs should not be designed to look like mainframe character user interfaces (CUIs) when replacing such data entry/retrieval screens).

Testing GUIs

Modern software applications often have sophisticated user interfaces. Because the number of lines of code (or reusable components) required for GUI implementation can often exceed the number of lines of code for other elements of the software, thorough testing of the user interface is essential. For the following checklist, the more questions that elicit a negative response, the higher the risk that the GUI will not adequately meet the needs of the end-user.

For windows:

Will the window open properly based on related typed or menu-based commands?

Can the window be resized, moved, scrolled?

Is all data content contained within the window properly addressable with a mouse, function keys, directional arrows, and keyboard?

Does the window properly regenerate when it is overwritten and then recalled?

Are all functions that relate to the window available when needed?

Are all functions that relate to the window operational?

Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes and buttons, icons, and other controls available and properly displayed for the window?

When multiple windows are displayed, is the name of the window properly represented?

Is the active window properly highlighted?

If multitasking is used, are all windows updated at appropriate times?

Do multiple or incorrect mouse picks within the window cause unexpected side effects?

Are audio and/or color prompts within the window, or as a consequence of window operations, presented according to specification?

Does the window properly close?

For pull-down menus and mouse operations:

Is the appropriate menu bar displayed in the appropriate context?

Does the application menu bar display system-related features (e.g., a clock display)?

Do pull-down operations work properly?

Do break-away menus, palettes, and tool bars work properly?

Are all menu functions and pull-down subfunctions properly listed?

Are all menu functions properly addressable by the mouse?

Are text typeface, size and format correct?

Is it possible to invoke each menu function using its alternative text-based command?

Are menu functions highlighted (or grayed-out) based on the context of current operations within a window?

Does each menu function perform as advertised?

Are the names of menu functions self-explanatory?

Is help available for each menu item, and is it context sensitive?

Are mouse operations properly recognized throughout the interactive context?

If multiple clicks are required, are they properly recognized in context?

If the mouse has multiple buttons, are they properly recognized in context?

Do the cursor, processing indicator (e.g., an hour glass or clock), and pointer properly change as different operations are invoked?

Data entry:

Is alphanumeric data entry properly echoed and input to the system?

Do graphical modes of data entry (e.g., a slide bar) work properly?

Is invalid data properly recognized?

Are data input messages intelligible?


Four Stages of GUI Testing

The four stages are summarised in the table below. We can map the four test stages to traditional test stages as follows:

Low level - maps to a unit test stage.
Application - maps to either a unit test or functional system test stage.
Integration - maps to a functional system test stage.
Non-functional - maps to a non-functional system test stage.

Stage - Test Types

Low Level - Checklist testing, Navigation
Application - Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing
Integration - Desktop Integration, C/S Communications, Synchronisation
Non-Functional - Soak testing, Compatibility testing, Platform/environment

Checklist Testing

Programming/GUI standards covering standard features such as:

window size, positioning, type (modal/non-modal)
standard system commands/buttons (close, minimise, maximise, etc.)

Application standards or conventions such as:

standard OK, Cancel, Continue buttons - appearance, colour, size, location
consistent use of buttons or controls
object/field labelling to use standard/consistent text
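To illustrate (this example is not part of the original checklist), checks of this kind can be made data-driven: the expected standards are recorded once and each window is compared against them. The window names, property values and the check_window helper below are hypothetical; in practice the observed properties would be captured by a GUI automation tool.

```python
# Hypothetical sketch of data-driven checklist testing: compare observed
# window properties against the programming/GUI standards listed above.
from dataclasses import dataclass

@dataclass
class WindowProps:
    name: str
    width: int
    height: int
    modal: bool
    has_close: bool
    has_minimise: bool
    has_maximise: bool
    ok_cancel_order: tuple  # e.g. ("OK", "Cancel")

# Standards derived from the checklist (the values are assumptions for the example).
STANDARD = {
    "max_size": (800, 600),
    "dialog_must_be_modal": True,
    "button_order": ("OK", "Cancel"),
}

def check_window(w: WindowProps, is_dialog: bool) -> list:
    """Return a list of deviations from the standards for one window."""
    faults = []
    if w.width > STANDARD["max_size"][0] or w.height > STANDARD["max_size"][1]:
        faults.append(f"{w.name}: exceeds standard window size")
    if is_dialog and STANDARD["dialog_must_be_modal"] and not w.modal:
        faults.append(f"{w.name}: dialog is not modal")
    if not is_dialog and not (w.has_close and w.has_minimise and w.has_maximise):
        faults.append(f"{w.name}: missing standard system buttons")
    if w.ok_cancel_order != STANDARD["button_order"]:
        faults.append(f"{w.name}: OK/Cancel buttons in non-standard order")
    return faults

# Example use with an invented window; an empty list means no deviations found.
print(check_window(WindowProps("Customer Details", 640, 480, True,
                               True, True, True, ("OK", "Cancel")),
                   is_dialog=True))
```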

Navigation Testing

To conduct meaningful navigation tests the following are required to be in place:

An application backbone with at least the required menu options and call mechanisms to call the window under test.
Windows that can invoke the window under test.
Windows that are called by the window under test.

Obviously, if any of the above components are not available, stubs and/or drivers will be necessary to implement navigation tests. If we assume all required components are available, what tests should we implement? We can split the task into steps:

For every window, identify all the legitimate calls to the window that the application should allow and create test cases for each call.
Identify all the legitimate calls from the window to other features that the application should allow and create test cases for each call.
Identify reversible calls, i.e. where closing a called window should return to the 'calling' window, and create a test case for each.
Identify irreversible calls, i.e. where the calling window closes before the called window appears.



There may be multiple ways of executing a call to another window, e.g. menus, buttons, keyboard commands. In this circumstance, consider creating one test case for each valid path by each available means of navigation.

Note that navigation tests reflect only a part of the full integration testing that should be undertaken. These tests constitute the ‘visible’ integration testing of the GUI components that a ‘black box’ tester should undertake.
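To make the derivation steps concrete, the sketch below (an illustration, not from the original text) generates navigation test cases from a hypothetical navigation map: for each calling window it lists the windows it can call, whether the call is reversible, and the means of navigation available for it.

```python
# Illustrative generator of navigation test cases from a hypothetical
# navigation map. Window names and navigation means are invented examples.
NAV_MAP = {
    # calling window: list of (called window, reversible?, means of navigation)
    "Main Menu": [("Customer Search", True, ["menu", "toolbar button"]),
                  ("Order Entry", True, ["menu", "hot key"])],
    "Customer Search": [("Customer Details", True, ["button", "double-click"])],
    "Order Entry": [("Print Preview", False, ["button"])],  # irreversible call
}

def navigation_tests(nav_map):
    """Yield one test case per call, per means of navigation, plus a
    return-path test for every reversible call."""
    for caller, calls in nav_map.items():
        for called, reversible, means in calls:
            for how in means:
                yield f"Open '{called}' from '{caller}' via {how}"
            if reversible:
                yield f"Close '{called}' and confirm control returns to '{caller}'"
            else:
                yield f"Confirm '{caller}' closes before '{called}' appears"

for case in navigation_tests(NAV_MAP):
    print(case)
```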

Application Testing

Application testing is the testing that would normally be undertaken on a forms-based application. This testing focuses very much on the behaviour of the objects within windows. The approach to testing a window is virtually the same as would be adopted when testing a single form. The traditional black-box test design techniques are directly applicable in this context.

Technique - Used to test

Equivalence Partitions and Boundary Value Analysis - Input validation, simple rule-based processing
Decision Tables - Complex logic or rule-based processing
State Transition Testing - Applications with modes or states where processing behaviour is affected; windows where there are dependencies between objects in the window
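As an illustration of the first row, equivalence partitioning and boundary value analysis applied to a single numeric field might yield the cases sketched below. The field, its assumed 1-999 valid range, and the validate_quantity stand-in are hypothetical; in a real test the values would be driven through the GUI control rather than a function call. The sketch assumes pytest is available.

```python
# Illustrative boundary value / equivalence partition cases for a numeric
# field assumed to accept whole numbers from 1 to 999.
import pytest

def validate_quantity(value):
    """Stand-in for the window's validation rule (an assumption, not the real code)."""
    return isinstance(value, int) and 1 <= value <= 999

@pytest.mark.parametrize("value, expected", [
    (0, False),     # just below the lower boundary
    (1, True),      # lower boundary
    (2, True),      # just above the lower boundary
    (500, True),    # mid-range partition
    (998, True),    # just below the upper boundary
    (999, True),    # upper boundary
    (1000, False),  # just above the upper boundary
    (-1, False),    # negative partition
])
def test_quantity_validation(value, expected):
    assert validate_quantity(value) == expected
```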

Desktop Integration Testing

We define desktop integration as the integration and testing of a client application with the other components it works with on the desktop. Because these interfaces may be hidden or appear 'seamless' when working, the tester usually needs to understand a little more about the technical implementation of the interface before tests can be specified. The tester needs to know what interfaces exist, what mechanisms are used by these interfaces, and how the interface can be exercised by using the application user interface.

To derive a list of test cases the tester needs to ask a series of questions for each known interface:

Is there a dialogue between the application and the interfacing product (i.e. a sequence of stages with different message types to test individually), or is it a direct call made once only?
Is information passed in both directions across the interface?
Is the call to the interfacing product context sensitive?
Are there different message types? If so, how can these be varied?

In principle, the tester should prepare test cases to exercise each message type in circumstances where data is passed in both directions. Typically, once the nature of the interface is known, equivalence partitioning, boundary value analysis and other techniques can be used to expand the list of test cases.
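One simple way to keep track of the resulting cases is to enumerate message type and direction combinations for each known interface. The sketch below does this for a hypothetical spreadsheet-export interface; the interface name, message types and directions are invented for illustration.

```python
# Illustrative enumeration of desktop integration test cases per interface.
# The interface name, message types and directions are hypothetical examples.
from itertools import product

INTERFACES = {
    "spreadsheet export": {
        "message_types": ["open document", "transfer data", "close document"],
        "directions": ["application -> product", "product -> application"],
    },
}

def integration_cases(interfaces):
    for name, spec in interfaces.items():
        for msg, direction in product(spec["message_types"], spec["directions"]):
            yield f"{name}: exercise '{msg}' with data passed {direction}"

for case in integration_cases(INTERFACES):
    print(case)
```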

Client/Server Communication Testing

Client/Server communication testing complements the desktop integration testing. This aspect covers the integration of a desktop application with the server-based processes it must communicate with. The discussion of the types of test cases for this testing is similar to that for Desktop Integration above, except that some attention should be paid to testing for failure of server-based processes.

In the most common situation, clients communicate directly with database servers. Here the particular tests to be applied should cover the various types of responses a database server can make. For example:

Logging into the network, servers and server-based DBMS.
Single and multiple responses to queries.
Correct handling of errors (where the SQL syntax is incorrect, or the database server or network has failed).
Null and high volume responses (where no rows or a large number of rows are returned).

The response times of transactions that involve client/server communication may be of interest. These tests might be automated, or timed using a stopwatch, to obtain indicative measures of speed.
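The sketch below illustrates some of these response categories using Python's built-in sqlite3 module with an in-memory database standing in for the real server (which the original text does not name): a null response, a high volume response, a syntax error, and an indicative timing measurement in place of a stopwatch.

```python
# Illustrative client/server response tests using an in-memory SQLite database
# as a stand-in for the real database server.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(10000)])

# Null response: no rows returned.
assert conn.execute("SELECT * FROM orders WHERE amount < 0").fetchall() == []

# High volume response: a large number of rows returned.
assert len(conn.execute("SELECT * FROM orders").fetchall()) == 10000

# Error handling: incorrect SQL syntax should raise a catchable error.
try:
    conn.execute("SELEC * FROM orders")
except sqlite3.OperationalError as err:
    print("query error handled:", err)

# Indicative response time for one query (the stopwatch equivalent).
start = time.perf_counter()
conn.execute("SELECT COUNT(*) FROM orders WHERE amount > 100").fetchone()
print(f"query took {time.perf_counter() - start:.4f}s")
```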

Synchronisation Testing


There may be circumstances in the application under test where there are dependencies between different features. One scenario is when two windows are displayed, a change is made to a piece of data on one window and the other window needs to change to reflect the altered state of data in the database. To accommodate such dependencies, there is a need for the dependent parts of the application to be synchronised.

Examples of synchronisation are when:

The application has different modes - if a particular window is open, then certain menu options become available (or unavailable).
The data in the database changes, and these changes are notified to the application by an unsolicited event so that displayed windows are updated.
Data on a visible window is changed, making data on another displayed window inconsistent.

In some circumstances, there may be reciprocity between windows. For example, changes on window A trigger changes in window B and the reverse effect also applies (changes in window B trigger changes on window A).

In the case of displayed data, there may be other windows that display the same or similar data which either cannot be displayed simultaneously, or should not change for some reason. These situations should be considered also. To derive synchronisation test cases:

Prepare one test case for every window object affected by a change or unsolicited event, and one test case for reciprocal situations.
Prepare one test case for every window object that must not be affected - but might be.
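These two rules can be applied mechanically. The sketch below (an illustration, not from the original text) derives synchronisation test cases from a hypothetical dependency map listing, for each change or unsolicited event, the window objects that must update and those that must not.

```python
# Illustrative derivation of synchronisation test cases from a hypothetical
# dependency map. Event and window object names are invented examples.
SYNC_MAP = {
    "customer address changed in 'Customer Details'": {
        "must_update": ["'Order Entry' delivery address",
                        "'Customer Search' result row"],
        "must_not_change": ["'Invoice History' archived addresses"],
        "reciprocal": True,  # the reverse change must also propagate
    },
}

def synchronisation_tests(sync_map):
    for event, spec in sync_map.items():
        for obj in spec["must_update"]:
            yield f"After {event}, verify {obj} is refreshed"
        for obj in spec["must_not_change"]:
            yield f"After {event}, verify {obj} is NOT affected"
        if spec.get("reciprocal"):
            yield f"Apply the reverse of {event} and verify the original window updates"

for case in synchronisation_tests(SYNC_MAP):
    print(case)
```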

Non-Functional Testing

The tests described in the previous sections are functional tests. These tests are adequate for demonstrating that the software meets its requirements and does not fail. However, GUI applications have non-functional modes of failure also. We propose three additional GUI test types (which are likely to be automated).

Soak Testing

In production, systems might be operated continuously for many hours. Applications may be comprehensively tested over a period of weeks or months but are not usually operated for extended periods in this way. It is common for client application code and bespoke middleware to have memory-leaks. Soak tests exercise system transactions continuously for an extended period in order to flush out such problems.

These tests are normally conducted using an automated tool. Selected transactions are repeatedly executed and machine resources on the client (or the server) monitored to identify resources that are being allocated but not returned by the application code.
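A minimal sketch of this idea follows, assuming the transaction can be driven from a script and that the third-party psutil package is available to sample the client process's memory; in a real soak test the transaction would be executed through a GUI automation tool and the monitored process would be the application under test, not the script itself.

```python
# Minimal soak test sketch: repeat a transaction and watch process memory.
# Assumes the third-party psutil package is installed; run_transaction is a
# hypothetical stand-in for driving one system transaction.
import psutil

def run_transaction():
    # Placeholder: in a real test this would drive the client application
    # (e.g. open a window, save a record, close the window).
    pass

process = psutil.Process()            # the process being monitored
baseline = process.memory_info().rss  # resident memory before the run
samples = []

for i in range(10_000):
    run_transaction()
    if i % 1_000 == 0:
        samples.append(process.memory_info().rss)

growth = samples[-1] - baseline
print(f"memory growth over soak run: {growth / 1024:.0f} KiB")
# Steadily rising samples with no plateau suggest a leak in the client code.
```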

Compatibility Testing

Whether applications interface directly with other desktop products or simply co-exist on the same desktop, they share the same resources on the client. Compatibility tests are (usually) automated tests that aim to demonstrate that resources shared with other desktop products are not locked unnecessarily, causing the system under test or the other products to fail.

These tests normally execute a selected set of transactions in the system under test, then switch to exercising other desktop products in turn, repeating this over an extended period.

Platform/Environment Testing

In some environments, the platform upon which the developed GUI application is deployed may not be under the control of the developers. PC end-users may have a variety of hardware types such as 486 and Pentium machines, various video drivers, and Microsoft Windows 3.1, 95 and NT. Most users have PCs at home nowadays and know how to customise their PC configuration. Although your application may be designed to operate on a variety of platforms, you may have to test these various configurations to ensure that, when the software is deployed, it continues to function as designed. In this circumstance, the testing requirement is for a repeatable regression test to be executed on a variety of platforms and configurations. Again, the requirement for automated support is clear, so we would normally use a tool to execute these tests on each of the platforms and configurations as required.

Test Types - Manual or Automated?

Checklist testing - Automated execution of tests of object states, menus and standard features; manual execution of tests of application conventions.
Navigation - Automated execution; manual tests of complex interactions.
Synchronisation - Manual execution.
Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing - Automated execution of large numbers of simple tests of the same functionality or process (e.g. the 256 combinations indicated by a decision table); manual execution of low volume or complex tests.
Desktop Integration, C/S Communications - Automated execution of repeated tests of simple transactions.
Soak testing, Compatibility testing, Platform/environment - Automated execution.


GUI Testing Checklist

CONTENTS:

Section 1 - Windows Compliance Testing
1.1. Application
1.2. For Each Window in the Application
1.3. Text Boxes
1.4. Option (Radio Buttons)
1.5. Check Boxes
1.6. Command Buttons
1.7. Drop Down List Boxes
1.8. Combo Boxes
1.9. List Boxes

Section 2 - Tester's Screen Validation Checklist
2.1. Aesthetic Conditions
2.2. Validation Conditions
2.3. Navigation Conditions
2.4. Usability Conditions
2.5. Data Integrity Conditions
2.6. Modes (Editable Read-only) Conditions
2.7. General Conditions
2.8. Specific Field Tests
2.8.1. Date Field Checks
2.8.2. Numeric Fields
2.8.3. Alpha Field Checks

Section 3 - Other
3.1. On every Screen
3.2. Shortcut keys / Hot Keys
3.3. Control Shortcut Keys

Section 4 - Right Click option

1. Windows Compliance

For Each Application

Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon.
No login is necessary.
The main window of the application should have the same caption as the caption of the icon in Program Manager.
Closing the application should result in an "Are you Sure" message box.
Attempt to start the application twice. This should not be allowed - you should be returned to the main window.
Try to start the application twice as it is loading.
On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass (e.g. alpha access enquiries) then some "enquiry in progress" message should be displayed.
All screens should have a Help button; pressing F1 should do the same.

For Each Window in the Application


If the window has a Minimise button, click it. The window should return to an icon on the bottom of the screen. This icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size.
The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially on the top of the screen. Check that the title of the window makes sense.
If the screen has a Control menu, then use all ungreyed options (see below).
Check all text on the window for spelling, tense and grammar.
Use TAB to move focus around the window. Use SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by a dotted box, or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
The text in the Micro Help line should change - check for spelling, clarity and non-updateable etc.
If a field is disabled (greyed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every greyed control.
Never-updateable fields should be displayed with black text on a grey background with a black label.
All text should be left-justified, followed by a colon tight to it.
In a field that may or may not be updateable, the label text and contents change from black to grey depending on the current status.
List boxes are always white background with black text whether they are disabled or not. All others are grey.
In general, do not use goto screens, use gosub, i.e. if a button causes another screen to be displayed, the screen should not hide the first screen, with the exception of tab in 2.0.

When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.
In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard.
All tab buttons should have a distinct letter.

Text Boxes

Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar. If it doesn't, then the text in the box should be grey or non-updateable. Refer to the previous page.
Enter text into the box.
Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
Enter invalid characters - letters in amount fields; try strange characters like + , - * etc. in all fields.
SHIFT and arrow should select characters. Selection should also be possible with the mouse. Double click should select all text in the box.

Option (Radio Buttons)

Left and Right arrows should move the 'ON' selection. So should Up and Down. Select with the mouse by clicking.

Check Boxes

Clicking with the mouse on the box, or on the text, should SET/UNSET the box. SPACE should do the same.

Command Buttons


If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots.
All buttons except for OK and Cancel should have a letter access to them. This is indicated by a letter underlined in the button text. The button should be activated by pressing ALT+letter. Make sure there is no duplication.
Click each button once with the mouse - this should activate it.
Tab to each button - press SPACE - this should activate it.
Tab to each button - press RETURN - this should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.
Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border). Pressing Return in ANY non-command-button control should activate it.
If there is a Cancel button on the screen, then pressing <Esc> should activate it.
If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers where Yes results in the completion of the action.

Drop Down List Boxes

Pressing the arrow should give a list of options. This list may be scrollable. You should not be able to type text in the box.
Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl - F4' should open/drop down the list box.
Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box.
Dropping down with an item selected should display the list with the selected item at the top.
Make sure only one space appears; there shouldn't be a blank line at the bottom.

Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

List Boxes

Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down arrow keys.
Pressing a letter should take you to the first item in the list starting with that letter.
If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box, then clicking the command button.
Force the scroll bar to appear, and make sure all the data can be seen in the box.

2. Screen Validation Checklist

AESTHETIC CONDITIONS:
1. Is the general screen background the correct colour?
2. Are the field prompts the correct colour?
3. Are the field backgrounds the correct colour?
4. In read-only mode, are the field prompts the correct colour?
5. In read-only mode, are the field backgrounds the correct colour?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be minimisable?
13. Are all the field prompts spelt correctly?


14. Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the microhelp text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lower case consistently?
19. Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

VALIDATION CONDITIONS:
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and, if so, are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly, with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields, check whether negative numbers can and should be able to be entered.
7. For all numeric fields, check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields, check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional, then check whether null values are allowed in this field.) See the sketch below.
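Condition 10 above can be cross-checked mechanically from a simple mapping of database columns to screen fields. The sketch below is a minimal example in Python; the column and field metadata are hypothetical stand-ins for whatever your data dictionary and screen definitions provide.

# Sketch for validation condition 10: every NOT NULL database column should map
# to a mandatory screen field. The column/field metadata here is hypothetical.

def nullability_mismatches(columns, fields):
    """columns: {name: allows_null}; fields: {name: is_mandatory}"""
    mismatches = []
    for name, allows_null in columns.items():
        if not allows_null and not fields.get(name, False):
            mismatches.append(name)
    return mismatches

if __name__ == "__main__":
    columns = {"surname": False, "middle_name": True}
    fields = {"surname": False, "middle_name": False}
    print(nullability_mismatches(columns, fields))   # ['surname'] - should be mandatory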

NAVIGATION CONDITIONS:
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time, and is this correct?

USABILITY CONDITIONS:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?


3. Have all pushbuttons on the screen been given appropriate shortcut keys?
4. Do the shortcut keys work correctly?
5. Have the menu options which apply to your screen got fast keys associated, and should they have?
6. Does the tab order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs, does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tab's to another application, does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30 character field should be a lot longer.

DATA INTEGRITY CONDITIONS:
1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database, check that each value gets saved fully to the database, i.e. beware of truncation (of strings) and rounding of numeric values. See the sketch below.
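Condition 7 amounts to a round-trip comparison between what the screen sent and what the database returned. A minimal sketch, assuming hypothetical 'entered' and 'saved' dictionaries; in practice the saved values would come back from a query:

# Sketch for data integrity condition 7: compare values sent to the database with
# the values read back, flagging truncated strings and rounded numbers.
# The 'entered'/'saved' dictionaries are hypothetical stand-ins for real query results.

def round_trip_defects(entered, saved):
    defects = {}
    for field, original in entered.items():
        stored = saved.get(field)
        if isinstance(original, str) and stored != original:
            defects[field] = f"truncated or altered: {original!r} -> {stored!r}"
        elif isinstance(original, float) and stored is not None and abs(stored - original) > 1e-9:
            defects[field] = f"rounded: {original!r} -> {stored!r}"
    return defects

if __name__ == "__main__":
    entered = {"surname": "Fitzwilliamson-Smythe", "balance": 1234.567}
    saved = {"surname": "Fitzwilliamson-Smy", "balance": 1234.57}
    print(round_trip_defects(entered, saved))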

MODES (EDITABLE / READ-ONLY) CONDITIONS:
1. Are the screen and field colours adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

GENERAL CONDITIONS:1. Assure the existence of the "Help" menu.


2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short.
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't work on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and the same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl + F6 opens the next tab within a tabbed window.
32. Shift + Ctrl + F6 opens the previous tab within a tabbed window.
33. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.


34. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
35. Tabbing will go onto the next editable field in the window.
36. Banner style, size & display are exactly the same as existing windows.
37. If there are 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
40. On open of a tab, focus will be on the first editable field.
41. All fonts are to be the same.
42. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button.
44. Ensure all fields are disabled in read-only mode.
45. Progress messages on load of tabbed screens.
46. Return operates Continue.
47. If the retrieve on load of a tabbed window fails, the window should not open.

Specific Field Tests

Date Field Checks
Assure that leap years are validated correctly & do not cause errors/miscalculations.
Assure that month codes 00 and 13 are validated correctly & do not cause errors/miscalculations.
Assure that 00 and 13 are reported as errors.
Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations.
Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations.
Assure that Feb. 30 is reported as an error.
Assure that century change is validated correctly & does not cause errors/miscalculations.
Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations.
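Most of the date checks above reduce to a fixed set of boundary dates that must be either accepted or rejected. The sketch below builds that table in Python, using the standard datetime module as the oracle for which dates are genuinely valid; how the values are typed into the date field is left to whatever drives your GUI.

# Sketch: boundary dates for the checks above, with datetime as the validity oracle.
# Feed 'value' to the date field under test and compare its behaviour with 'should_be_valid'.

from datetime import datetime

def date_probe(day, month, year):
    try:
        datetime(year, month, day)
        return {"value": f"{day:02d}/{month:02d}/{year}", "should_be_valid": True}
    except ValueError:
        return {"value": f"{day:02d}/{month:02d}/{year}", "should_be_valid": False}

if __name__ == "__main__":
    probes = [
        date_probe(29, 2, 2024),   # leap year - valid
        date_probe(29, 2, 2023),   # non-leap year - invalid
        date_probe(30, 2, 2024),   # Feb 30 - invalid
        date_probe(1, 0, 2024),    # month 00 - invalid
        date_probe(1, 13, 2024),   # month 13 - invalid
        date_probe(0, 1, 2024),    # day 00 - invalid
        date_probe(32, 1, 2024),   # day 32 - invalid
        date_probe(31, 12, 1999),  # century change boundary - valid
    ]
    for p in probes:
        print(p)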

Numeric Fields
Assure that the lowest and highest values are handled correctly.
Assure that invalid values are logged and reported.
Assure that valid values are handled by the correct procedure.
Assure that numeric fields with a blank in position 1 are processed or reported as an error.
Assure that fields with a blank in the last position are processed or reported as an error.
Assure that both + and - values are correctly processed.
Assure that division by zero does not occur.
Include the value zero in all calculations.
Include at least one in-range value.
Include maximum and minimum range values.
Include out of range values above the maximum and below the minimum.
Assure that upper and lower values in ranges are handled correctly.
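The in-range, boundary and out-of-range values called for above follow directly from the field's minimum and maximum, so they can be generated rather than hand-picked. A minimal sketch, assuming integer limits supplied by the tester:

# Sketch: standard boundary-value probes for a numeric field with known limits.
# 'minimum' and 'maximum' are supplied by the tester; the limits here are examples.

def numeric_probes(minimum, maximum):
    mid = (minimum + maximum) // 2
    return {
        "below_minimum": minimum - 1,   # out of range, should be rejected
        "minimum": minimum,
        "in_range": mid,
        "zero": 0,                      # include zero in all calculations
        "negative": -abs(mid) or -1,    # check signed handling
        "maximum": maximum,
        "above_maximum": maximum + 1,   # out of range, should be rejected
    }

if __name__ == "__main__":
    for name, value in numeric_probes(minimum=1, maximum=999).items():
        print(f"{name}: {value}")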


Alpha Field Checks
Use blank and non-blank data.
Include lowest and highest values.
Include invalid characters & symbols.
Include valid characters.
Include data items with the first position blank.
Include data items with the last position blank.
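As with the numeric checks, the alpha-field probes above can be kept in one small, reusable table. The character choices below are only illustrative and should be adjusted to the field's allowed character set.

# Sketch: reusable probe values for alpha/character fields, per the checks above.
# Character choices are illustrative; adjust to the field's allowed character set.

ALPHA_PROBES = {
    "blank": "",
    "non_blank": "SMITH",
    "lowest_value": "A",        # lowest allowed character
    "highest_value": "z",       # highest allowed character
    "invalid_symbols": "@#$%^",
    "leading_blank": " SMITH",
    "trailing_blank": "SMITH ",
}

if __name__ == "__main__":
    for name, value in ALPHA_PROBES.items():
        print(f"{name}: {value!r}")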

VALIDATION TESTING - STANDARD ACTIONS

On every screen, exercise the standard actions:
Add
View
Change
Delete
Continue (with Add, View, Change and Delete)
Cancel

For each, also cover:
Fill each field - Valid data
Fill each field - Invalid data
Different Check Box combinations
Scroll Lists
Help
Fill Lists and Scroll
Tab
Tab Order
Shift Tab
Shortcut keys - Alt + F

SHORTCUT KEYS / HOT KEYS


CONTROL SHORTCUT KEYS

Recommended CTRL+Letter Shortcuts

Section 4 Right Click option


Copy character values or numeric values with the keyboard (Ctrl+C) or the mouse, and paste them into the text box with the keyboard or the mouse.


Classification of Errors by Severity

Often the severity of a software defect can vary even though the software never changes. The reason is that a software defect's severity depends on the system in which it runs. For example, the severity of the Pentium's floating-point defect changes from system to system. On some systems, the severity is small; whereas on other systems, the severity is high. Another problem (which occurs regularly) is that the definitions of the severity levels (or categories) themselves change depending on the type of system. For example, a catastrophic defect in a nuclear system means that the fault can result in death or environmental harm; a catastrophic defect in a database system means that the fault can (or did) cause the loss of valuable data. Therefore, the system itself determines the severity of a defect based on the context in which the defect applies. The context makes all the difference in how to classify a defect's severity. I have attached two sample classification methods - a 3 level classification method, and a more comprehensive 5 level classification method, which I hope you may find useful.

3 Level Error Classification Method

Errors which are agreed as valid will be categorised as follows:

- Category A - Serious errors that prevent system test of a particular function continuing, or serious data type errors.
- Category B - Serious or missing data related errors that will not prevent implementation.
- Category C - Minor errors that do not prevent or hinder functionality.

Explanation of Classifications

1. An "A" bug is either a showstopper or of such importance as to radically affect the functionality of the system, e.g.:
- Because of a consistent crash during processing of a new application, a user could not complete that application.
- Incorrect data is passed to the system, resulting in corruption or system crashes.
Examples of severely affected functionality:
- Calculation of repayment term/amount is incorrect.
- Incorrect credit agreements are produced.

2. Bugs would be classified as "B" where a less important element of functionality is affected, e.g.:
- A value is not defaulting correctly and it is necessary to input the correct value.
- Data is affected which does not have a major impact, for example where an element of a customer application was not propagated to the database.
- There is an alternative method of completing a particular process, e.g. a problem might occur which has a work-around.
- Serious cosmetic error on the front-end.

3. "C" type bugs are mainly cosmetic bugs, e.g.:
- Incorrect / misspelt text on screens.
- Drop down lists missing or repeating an option.

5 Level Error Classification Method


1. Catastrophic:

Defects that could (or did) cause disastrous consequences for the system in question.

E.g.) critical loss of data, critical loss of system availability, critical loss of security, critical loss of safety, etc.

2. Severe:

Defects that could (or did) cause very serious consequences for the system in question.


E.g.) A function is severely broken, cannot be used and there is no workaround.

3. Major:

Defects that could (or did) cause significant consequences for the system in question - A defect that needs to be fixed but there is a workaround.

E.g. 1.) losing data from a serial device during heavy loads.


E.g. 2.) Function badly broken but workaround exists

4. Minor:

Defects that could (or did) cause small or negligible consequences for the system in question. Easy to recover or workaround.

E.g.1) Error messages misleading.

E.g.2) Displaying output in a font or format other than what the customer desired.


5. No Effect:

Trivial defects that can cause no negative consequences for the system in question. Such defects normally produce no erroneous outputs.

E.g.1) simple typos in documentation.

E.g.2) bad layout or mis-spelling on screen.
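Where defects are logged in a tracking tool, the five levels above can be captured as a small enumeration so that reports sort and filter consistently. This is an illustrative sketch only; the enum and the Defect record are not part of the original checklist.

# Illustrative sketch only: encoding the 5-level severity scheme for a defect log.
# The Severity enum and the Defect record are hypothetical, not part of the checklist.

from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    CATASTROPHIC = 1   # disastrous consequences: critical loss of data, availability, security, safety
    SEVERE = 2         # function broken, no workaround
    MAJOR = 3          # significant consequences, but a workaround exists
    MINOR = 4          # small or negligible consequences, easy to recover or work around
    NO_EFFECT = 5      # trivial, e.g. documentation typos

@dataclass
class Defect:
    summary: str
    severity: Severity

if __name__ == "__main__":
    log = [
        Defect("Loses serial data under heavy load", Severity.MAJOR),
        Defect("Typo in help text", Severity.NO_EFFECT),
    ]
    for d in sorted(log, key=lambda d: d.severity):
        print(f"{d.severity.name}: {d.summary}")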


What is Software Testing...?

1. What is Software Testing
2. Why Testing CANNOT Ensure Quality
3. What is Software Quality?
4. What is Quality Assurance?
5. Software Development & Quality Assurance
6. The difference between QA & Testing
7. The Mission of Testing

1. What is Software Testing?

Testing software is operating the software under controlled conditions, to (1) verify that it behaves "as specified"; (2) to detect errors; and (3) to validate that what has been specified is what the user actually wanted.

Software testing is more than just error detection:

1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]

2. Error Detection: Testing should intentionally attempt to make things go wrong, to determine if things happen when they shouldn't or things don't happen when they should.

3. Validation looks at system correctness - i.e. it is the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]

In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different, components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems - and the purpose of finding those problems is to get them fixed.

2. Why Testing CANNOT Ensure Quality

Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.

3. What is Software "Quality"?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

However, quality is a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of 'customer' will have their own view on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

4. What is "Quality Assurance"?


“Quality Assurance” measures the quality of processes used to create a quality product.

Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.

It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with, at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.

Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.

5. Quality Assurance and Software Development

Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, reviews of all the documentation (not just for standardisation but for verification and clarity of the contents also). Overall Quality Assurance processes also include code validation.

A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.

6. What’s the difference between QA and testing?

Simply put:

TESTING means “Quality Control”; and QUALITY CONTROL measures the quality of a product; while QUALITY ASSURANCE measures the quality of processes used to create a quality product.

7. The Mission of Testing

In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.

 

 

 


Software Quality Assurance & Usability Testing

The role of User Testing in Software Quality Assurance.

Table of Contents:

1. The Role of User Testing in Software Quality Assurance
1.1. Introduction
1.2. What is 'Usability Testing'
1.3. Why Usability Testing should be included as an element of the testing cycle
2. How to Approach Usability Testing
2.1. How to Implement Usability Testing
2.2. The Benefits of Usability Testing
2.3. The Role and benefits of "Usability Testers"
3. Summary
4. Sources of Reference & Internet Links

1. The Role of User Testing in Software Quality Assurance

1.1. Introduction

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.


A post mortem was then carried out on the software, and I was involved as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint from users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new user's perspective. It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

The lessons learnt from that exercise were then implemented into further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked and re-released. The revamped version, although containing mostly cosmetic (non-functional) changes, proved to be a success; although the damage was done - there was a little more reluctance to accept the software because users had "heard that it wasn't much good".

1.2. What is 'Usability Testing'

'Usability Testing' is defined as: "In System Testing, testing which attempts to find any human-factor problems".

1.3. Why Usability Testing should be included as an element of the testing cycle

I believe that QA has a certain responsibility for usability testing. There are several factors involved, but the main reason is the 'perspective differences', or different viewpoints, of the various teams involved in the development of the software.

To demonstrate, assume a new application is developed that is exactly, 100%, in accordance with the design specifications - yet, unfortunately, it is not fit for use, because it is so difficult or awkward to use, or ends up so complicated, that the users don't want it or won't use it. Yet it is what the design specified. This has happened, and will happen again.

I remember a diagram that vividly showed this - it showed the design of a swing, with sections on "what the customer ordered", "what the development team built", "what the engineers installed" etc., illustrating the different perspectives of the various people involved.

This is especially true where the business processes that drive the design of the new application are very complex (for example bespoke financial applications).

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first-hand experience of the business processes/rules that form the basis of the application being developed, and/or (2) will actually end up using the finished product? Answer: usually none. (3) How many of the test team have first-hand experience or expert knowledge of the underlying business logic/processes? Answer: usually minimal.

Even if the testers are indeed experts in their area, they may miss the big picture, so I think that usability testing is a sub-specialty that is often not best left to the average tester. Only some specific personnel should be responsible for doing usability testing.

Thirdly, apart from the usual commercial considerations, the success of some new software will depend on how well it is received by the public.

2. How to Approach Usability Testing

2.1. How to Implement Usability Testing

The best way to implement usability testing is two-fold - firstly from a design & development perspective, then from a testing perspective.

(1) From a design viewpoint, usability can be tackled by including actual users as early as possible in the design stage. If possible, a prototype should be developed - failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted. The earlier that potential usability issues are discovered, the easier it is to fix them.

(2) Following on from the screen reviews, standards should be documented, i.e. screen layout, labelling/naming conventions etc. These should then be applied throughout the application.

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by using similar screen layouts - if users are already familiar with the layout, the implementation of the new system will present less of a challenge and will be more easily accepted (provided, of course, that that is not why the system is being replaced).

(3) Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed, or include users with first-hand knowledge of same. However, although they design the system, they rarely specifically include usability provisions in the specifications. An example of a usability consideration within the functional specification may be as simple as specifying a minimum size for the 'Continue' button.

(4) At the unit testing stage, there should be an official review of the system - where most of these issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues identified at this stage relate to the default or most common actions. For example, a system may be designed to cope with multiple eventualities, and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

(5) All the previous actions could be performed at an early stage if prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centred design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other "discount usability" testing methods.

(6) From a testing viewpoint, usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a user's viewpoint. User testers must always take the customer's point of view in their testing.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you the users' initial impression of the system and tell you how readily they will take to it, it will also tell you whether the end product is a close match to their expectations, so that there are fewer surprises. (Even though usability testing at the later stages of development may not impact software changes, it is useful for pointing out areas where training is needed to overcome deficiencies in the software.)

(7) Another option to consider is to include actual users as testers within the test team. One financial organisation I was involved with reassigned actual users as "Business Experts" within the test team. I found their input as actual "tester users" invaluable.

(8) The final option may be to include user testers who are eventually going to be (a) using the system themselves, and/or (b) responsible for training and effectively "selling" it to the users.

The benefits of having had usability considerations included in the development of computer software are immense, but often unappreciated. The benefits are too numerous to list - I'd say it's similar to putting the coat of paint on a new car - the car itself will work without the paint, but it doesn't look good. To summarise the benefits I would just say that it makes the software more "user friendly". The end result will be:

Better quality software.Software is easier to use.Software is more readily accepted by users.Shortens the learning curve for new users.

They can also help to:

Refocus the testers and increase their awareness to usability issues, by providing a fresh viewpoint

Provide and share their expert knowledge - training the testers to the background and purpose of the system

Provide a "realistic" element to the testing, so that test scenarios are not just "possible permutations".

1

2

3

2.2. The Benefits of Usability Testing

2.3.  The Role and benefits of "Usability Testers"

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

3. Summary:

Usability evaluation should be incorporated earlier in the software development cycle, to minimize resistance to changes in a hardened user interface;

Organizations should have an independent usability evaluation of software products, to avoid the temptation to overlook problems in order to release the product;

Multiple categories of dependent measures should be employed in usability testing, because subjective measurement is not always consonant with user performance; and

Even though usability testing at the later stages of development may not impact software changes, it is useful for pointing out areas where training is needed to overcome deficiencies in the software.

In my experience, the greater the involvement of key users, the more pleased they will be with the end product. Getting management to commit their key people to this effort can be difficult, but it makes for a better product in the long run.

4. Sources of Reference:

"The Case for Independent Software Usability Testing: Lessons Learned from a Successful Intervention". Author: David W. Biers.
Originally published: Proceedings of the Human Factors Society 33rd Annual Meeting, 1989, pp. 1218-1222.
Republished: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.), Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994. Santa Monica, CA: HFES, 1995, pp. 191-195.
http://www.acm.org/~perlman/hfeshci/Abstracts/89:1218-1222.html

NASA Usability Testing Handbook.
http://aaa.gsfc.nasa.gov/ViewPage.cfm?selectedPage=48&selectedType=Product

4.1. Publications

1. A Practical Guide to Usability Testing.
Joseph S. Dumas & Janice C. Redish. Norwood, NJ: Ablex Publishing, 1993. ISBN 0-89391-991-8. This step-by-step guide provides checklists and offers insights for every stage of usability testing.

2. Usability Engineering.
Jakob Nielsen. Boston, MA: Academic Press, 1993. ISBN 0-12-518405-0. This book immediately sold out when it was first published. It is a practical handbook for people who want to evaluate systems.

3. Usability Inspection Methods.
Jakob Nielsen & Robert L. Mack (Eds.). New York: John Wiley & Sons, 1994. ISBN 0-471-01877-5. This book contains chapters contributed by experts on usability inspection methods such as heuristic evaluation, cognitive walkthroughs, and others.

4. Cost-Justifying Usability.
Randolph G. Bias & Deborah J. Mayhew (Eds.). Boston: Academic Press, 1994. ISBN 0-12-095810-4. This edited collection contains 14 chapters devoted to demonstrating the importance of usability evaluation to the success of software development.

5. Usability in Practice: How Companies Develop User-Friendly Products.
Michael E. Wiklund (Ed.). Boston: Academic Press, 1994. ISBN 0-12-751250-0. This collection of contributed chapters describes the usability practices of 17 companies, including American Airlines, Ameritech, Apple, Bellcore, Borland, Compaq, Digital, and Dun & Bradstreet.


A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

'Usability Testing' is defined as: "In System Testing, testing which attempts to find any human-factor problems". [1] A better description is "testing the software from a users point of view". Essentially it means testing software to prove/ensure that it is 'user-friendly', as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardisation etc.

1.3. Why Usability Testing should be included as an element of the testing cycle.

Thirdly, apart from the usual commercial considerations, the success of some new software will depend on how well it is received by the public - whether they like the application

usability can be tackled by (1) Including actual Users as early as possible in the design stage. If possible, a prototype should be developed - failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted.. The earlier that potential usability issues are discovered the easier it is to fix them.

usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks, when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a users viewpoint. User testers must always take the customer's point of view in their testing.

Page 100: Gui Checklist

8). The final option that may be to include user testers who are eventually going to be (a) using it themselves; and/or (b) responsible for training and effectively "selling" it to the users.

The benefits of having had usability considerations included in the development of computer software are immense, but often unappreciated. The benefits are too numerous to list - I'd say it's similar to putting the coat of paint on a new car - the car itself will work without the paint, but it doesn't look good. To summarise the benefits I would just say that it makes the software more "user friendly". The end result will be:

Refocus the testers and increase their awareness to usability issues, by providing a fresh viewpoint

Provide and share their expert knowledge - training the testers to the background and purpose of the system

Provide a "realistic" element to the testing, so that test scenarios are not just "possible permutations".

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

Page 101: Gui Checklist

In my experience, the greater the involvement of key users, the more pleased they will be with the end product. Getting management to commit their key people to this effort can be difficult, but it makes for a better product in the long run.

Republished: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.) Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994, Santa Monica, CA: HFES, 1995, pp. 191-195.

The Case for Independent Software Usability Testing: Lessons Learned from a Successful Intervention". Author: David W. Biers.

Page 102: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 103: Gui Checklist

The lessons learnt from that excercise were then implemented into any further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked, and was re-released. The revamped version, although containing mostly cosmetic (non-functional) changes proved to be a success; although the damage was done - there was a little more reluctance to accept the software because they had "heard that it wasn't much good".

I believe that QA have a certain responsibility for usability testing. There are several factors involved, but the main reason is the 'perspective differences' or different viewpoints of the various teams involved in the development of the software.

To demonstrate, assume a new application is developed, that is exactly, 100%, in accordance with the design specifications - yet, unfortunately, it is not fit for use - because it may be so difficult/awkward to use, or it ends up so complicated that the users don't want it or won't use it. Yet, it is what the design specified. This has happened, and will happen again.

I remember a diagram that vividly showed this - it showed the design of a swing, with sections on "what the customer ordered", "What the development team built", "What the engineers installed" etc., with the effect of illustrating the different perspectives of the various people involved.

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first hand experience of the business processes/rules that form the basis of the application being developed; and/or (2) how many of the coders will actually end up using the finished product ? Answer: Usually none. (3) How many of the test team do have first hand experience or the expert knowledge of the underlying business logic/processes ? Answer: Usually minimal.

Even if the testers are indeed experts in their area, they may miss the big picture, so I think that usability testing is a sub-specialty that often is not best left to the average tester. Only some specific personnel should be responsible for doing Usability Testing.

(2) Following on from the screen reviews, standards should be documented i.e. Screen Layout, Labelling/Naming conventions etc. These should then be applied throughout the application.

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by using similar screen layouts - if they are already familiar with the layout the implementation of the new system will present less of a challenge, as it will be more easily accepted (provided of course, that that is not why the system is being replaced).

3). Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed; and include users with first hand knowledge of same. However, although they design the system, they rarely specifically include usability provisions in the specifications.

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

5). All the previous actions could be performed at an early stage if Prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centered design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other "discount usability" testing methods.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you there initial impression of the system and tell you how readily the users will take to it, but this way it will tell you whether the end product is a closer match to their expectations and there are fewer surprises. (Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.

(7) Another option to consider is to include actual users as testers within the test team. One financial organization I was involved with reassigned actual users as "Business Experts" as members of the test team. I found their input as actual "tester users" was invaluable.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

A better description is "testing the software from a users point of view". Essentially it means testing software to prove/ensure that it is 'user-friendly', as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardisation etc.

whether they like the application . Obviously if the s/w is bug ridden then the popularity of the s/w will suffer; aside from that, if it is a high quality development the popularity of the s/w will still depend on the usability (albeit to a lesser degree). It would be a pity (but it wouldn't be the first time) that an application was not a success because it wasn't readily accepted - because it was not user friendly, or because it was too complex or difficult to use.

usability can be tackled by (1) Including actual Users as early as possible in the design stage. If possible, a prototype should be developed - failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted.. The earlier that potential usability issues are discovered the easier it is to fix them.

usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks, when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a users viewpoint. User testers must always take the customer's point of view in their testing.

Page 104: Gui Checklist

8). The final option that may be to include user testers who are eventually going to be (a) using it themselves; and/or (b) responsible for training and effectively "selling" it to the users.

The benefits of having had usability considerations included in the development of computer software are immense, but often unappreciated. The benefits are too numerous to list - I'd say it's similar to putting the coat of paint on a new car - the car itself will work without the paint, but it doesn't look good. To summarise the benefits I would just say that it makes the software more "user friendly". The end result will be:

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

Page 105: Gui Checklist

In my experience, the greater the involvement of key users, the more pleased they will be with the end product. Getting management to commit their key people to this effort can be difficult, but it makes for a better product in the long run.

Republished: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.) Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994, Santa Monica, CA: HFES, 1995, pp. 191-195.

Page 106: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 107: Gui Checklist

The lessons learnt from that excercise were then implemented into any further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked, and was re-released. The revamped version, although containing mostly cosmetic (non-functional) changes proved to be a success; although the damage was done - there was a little more reluctance to accept the software because they had "heard that it wasn't much good".

To demonstrate, assume a new application is developed, that is exactly, 100%, in accordance with the design specifications - yet, unfortunately, it is not fit for use - because it may be so difficult/awkward to use, or it ends up so complicated that the users don't want it or won't use it. Yet, it is what the design specified. This has happened, and will happen again.

I remember a diagram that vividly showed this - it showed the design of a swing, with sections on "what the customer ordered", "What the development team built", "What the engineers installed" etc., with the effect of illustrating the different perspectives of the various people involved.

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first hand experience of the business processes/rules that form the basis of the application being developed; and/or (2) how many of the coders will actually end up using the finished product ? Answer: Usually none. (3) How many of the test team do have first hand experience or the expert knowledge of the underlying business logic/processes ? Answer: Usually minimal.

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by using similar screen layouts - if they are already familiar with the layout the implementation of the new system will present less of a challenge, as it will be more easily accepted (provided of course, that that is not why the system is being replaced).

3). Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed; and include users with first hand knowledge of same. However, although they design the system, they rarely specifically include usability provisions in the specifications.

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

5). All the previous actions could be performed at an early stage if Prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centered design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other "discount usability" testing methods.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you there initial impression of the system and tell you how readily the users will take to it, but this way it will tell you whether the end product is a closer match to their expectations and there are fewer surprises. (Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

A better description is "testing the software from a users point of view". Essentially it means testing software to prove/ensure that it is 'user-friendly', as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardisation etc.

. Obviously if the s/w is bug ridden then the popularity of the s/w will suffer; aside from that, if it is a high quality development the popularity of the s/w will still depend on the usability (albeit to a lesser degree). It would be a pity (but it wouldn't be the first time) that an application was not a success because it wasn't readily accepted - because it was not user friendly, or because it was too complex or difficult to use.

usability can be tackled by (1) Including actual Users as early as possible in the design stage. If possible, a prototype should be developed - failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted.. The earlier that potential usability issues are discovered the easier it is to fix them.

usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks, when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a users viewpoint. User testers must always take the customer's point of view in their testing.

Page 108: Gui Checklist

The benefits of having had usability considerations included in the development of computer software are immense, but often unappreciated. The benefits are too numerous to list - I'd say it's similar to putting the coat of paint on a new car - the car itself will work without the paint, but it doesn't look good. To summarise the benefits I would just say that it makes the software more "user friendly". The end result will be:

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

Page 109: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 110: Gui Checklist

The lessons learnt from that excercise were then implemented into any further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked, and was re-released. The revamped version, although containing mostly cosmetic (non-functional) changes proved to be a success; although the damage was done - there was a little more reluctance to accept the software because they had "heard that it wasn't much good".

To demonstrate, assume a new application is developed, that is exactly, 100%, in accordance with the design specifications - yet, unfortunately, it is not fit for use - because it may be so difficult/awkward to use, or it ends up so complicated that the users don't want it or won't use it. Yet, it is what the design specified. This has happened, and will happen again.

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first hand experience of the business processes/rules that form the basis of the application being developed; and/or (2) how many of the coders will actually end up using the finished product ? Answer: Usually none. (3) How many of the test team do have first hand experience or the expert knowledge of the underlying business logic/processes ? Answer: Usually minimal.

3). Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed; and include users with first hand knowledge of same. However, although they design the system, they rarely specifically include usability provisions in the specifications.

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

5). All the previous actions could be performed at an early stage if Prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centered design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other "discount usability" testing methods.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you there initial impression of the system and tell you how readily the users will take to it, but this way it will tell you whether the end product is a closer match to their expectations and there are fewer surprises. (Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

A better description is "testing the software from a users point of view". Essentially it means testing software to prove/ensure that it is 'user-friendly', as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardisation etc.

. Obviously if the s/w is bug ridden then the popularity of the s/w will suffer; aside from that, if it is a high quality development the popularity of the s/w will still depend on the usability (albeit to a lesser degree). It would be a pity (but it wouldn't be the first time) that an application was not a success because it wasn't readily accepted - because it was not user friendly, or because it was too complex or difficult to use.

usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks, when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a users viewpoint. User testers must always take the customer's point of view in their testing.

Page 111: Gui Checklist

The benefits of having had usability considerations included in the development of computer software are immense, but often unappreciated. The benefits are too numerous to list - I'd say it's similar to putting the coat of paint on a new car - the car itself will work without the paint, but it doesn't look good. To summarise the benefits I would just say that it makes the software more "user friendly". The end result will be:

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

Page 112: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 113: Gui Checklist

The lessons learnt from that excercise were then implemented into any further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked, and was re-released. The revamped version, although containing mostly cosmetic (non-functional) changes proved to be a success; although the damage was done - there was a little more reluctance to accept the software because they had "heard that it wasn't much good".

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first hand experience of the business processes/rules that form the basis of the application being developed; and/or (2) how many of the coders will actually end up using the finished product ? Answer: Usually none. (3) How many of the test team do have first hand experience or the expert knowledge of the underlying business logic/processes ? Answer: Usually minimal.

3). Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed; and include users with first hand knowledge of same. However, although they design the system, they rarely specifically include usability provisions in the specifications.

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

5). All the previous actions could be performed at an early stage if Prototyping is used. This is probably the best way to identify any potential usability/operability problems. You can never lessen the importance of user-centered design, but you can solve usability problems before they get to the QA stage (thereby cutting the cost of rebuilding the product to correct the problem) by using prototypes (even paper prototypes) and other "discount usability" testing methods.

User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you there initial impression of the system and tell you how readily the users will take to it, but this way it will tell you whether the end product is a closer match to their expectations and there are fewer surprises. (Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

. Obviously if the s/w is bug ridden then the popularity of the s/w will suffer; aside from that, if it is a high quality development the popularity of the s/w will still depend on the usability (albeit to a lesser degree). It would be a pity (but it wouldn't be the first time) that an application was not a success because it wasn't readily accepted - because it was not user friendly, or because it was too complex or difficult to use.

usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done by getting several actual users to sit down with the software and attempt to perform "normal" working tasks, when the software is near release quality. I say "normal" working tasks because testers will have been testing the system from/using test cases - i.e. not from a users viewpoint. User testers must always take the customer's point of view in their testing.

Page 114: Gui Checklist

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen that testers become too familiar with the "quirks" of the software - and not report a possible error or usability issue.  Often this is due to the tester thinking either "It's always been like that"  or "isn't that the way it's supposed to be ?". These types of problem can be allieviated by including user testers in the test team.

Page 115: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 116: Gui Checklist

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

. Obviously if the s/w is bug ridden then the popularity of the s/w will suffer; aside from that, if it is a high quality development the popularity of the s/w will still depend on the usability (albeit to a lesser degree). It would be a pity (but it wouldn't be the first time) that an application was not a success because it wasn't readily accepted - because it was not user friendly, or because it was too complex or difficult to use.

Page 117: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 118: Gui Checklist

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.

A post mortem was then carried out on the software, and I was involved, as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users, and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint than users. I was so familiar with the system that I didn't consider some convoluted key strokes to be a problem, until I saw them from a new users perspective.  It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

Page 119: Gui Checklist

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications (and almost as an afterthought decided they'd better test it).  The application was very good, and of high quality.  Technologically speaking the software was a big step forward, away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. they were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

Page 120: Gui Checklist

4). At the unit testing stage, there should be an official review of the system - where most of those issues can more easily be dealt with. At this stage, with screen layout & design already reviewed, the focus should be on how a user navigates through the system. This should identify any potential issues such as having to open an additional window where one would suffice. More commonly though, the issues that are usually identified at this stage relate to the default or most common actions. For example, where a system is designed to cope with multiple eventualities and thus there are 15 fields on the main input screen - yet 7 or 8 of these fields are only required in rare instances. These fields could then be set as hidden unless triggered, or moved to another screen altogether.


Acceptance Form

Acceptance Test Plan

Action Item Log

Change Control Form

Change Control Log

Change Control Log - Detailed

Change Request Form

Data Access Control Form

Enhancement Request Form

Installation Completion Form

Issue Log

QA / Program Manager Checklist

Quality Log

Release Control Form

Requirements Testing Report

Risk Log

Risk Management Plan Form

Software Test Plan Template

System Final Release Sign-off Form

System Requirements Sign-off Form

System Test Cycle Sign-off Form

System Test Environment Sign-off Form

System Test Plan Sign-off Form

System Test Sign-off Form

Test Case Template

Test Case Validation Log

Test Plan Review Checklist

Test Plan Task Preparation

Test Record

Test Script Allocation Form

Test Script

Team Roles and Responsibilities Form

Team Training Requirements Form

Unit Test Plan

User Acceptance Test (UAT) Report

Version Control Log

Web Usability Test Report

Microsoft Word Files - Table of Contents

Microsoft Excel Files - Table of Contents


Worksheet

Action Item Log

Log Status

Failed Scripts

Open Issues

Quality Log

Risk Log

Status Report

Change Control Log

Change History Log

Data Access Control

Roles and Responsibilities


Test Script

Test Script List

Task Preparation

Test Case

Validation Log

Test Tracking Report

Version Control Log

Web Usability Report


FAQs

FAQ: What file format are the templates in?

All files are in Microsoft Word format and are virus-free.

FAQ: How soon can I download them?

Immediately after you pay online, you are sent to a page where you can download the templates.

FAQ: What is the End User License Agreement?


Use these templates to:

Describe the date, author and history.

Allocate an action item number, a description, a priority (Low/Medium/High), the date reported, the resource it was assigned to, its due date, and any additional comments.

Identify the basis for the change and confirm whether it is approved or disapproved. Include the Software Change Request (SCR) #, Requirement (Rqmnt) #, date submitted, and whether it is approved, not approved, on hold, in progress, or cancelled.

For each Person or Group, identify the individuals who have access to the test cases and their status, e.g. all of DEV has access to the Test Cases for Web Project 22B.

For each log, identify its Log ID, the nature of the risk/issue, and whether it is Open or Closed.

Identify the Area where the script failed, and provide details of the Set and Date, with a description of the error and its Severity (e.g. minor error, major error, etc.).

Identify all the open issues by number (#); list when each was created and who raised it; and provide a brief description with details of its Assigned/Target Date, Category, Status (e.g. Open or Closed), Resolution and Resolution Date.

When performing the checks, identify the Ref #, its Module, the Method of Checking, name of the Tester, its Planned Date, Date Completed, details of the Result, the Action Items (i.e. tasks) and the Sign-off Date.

Identify the Risk Number, its Date, Type (e.g. Business/Project/Stage), a brief description, Likelihood %, Severity (e.g. Low or Medium), Impact, Action Required, who it was Assigned To and its Status.

Identify all the roles on the project, with details of their responsibilities. Include contact names and email addresses.

Identify the function that is under test; enter its Business Value on a scale of 1-5, with 1 the lowest value and 5 the highest (or whichever numbering system you wish to use); and give details of the problem severity, broken out on a scale of 1 to 5. The total number of issues (a.k.a. anomalies) is calculated in the final column.


Use this to track the Product’s Version No., its Date, and Approvals.

Enter the Area under test, its Set, and whether it has Passed or Failed, with a Description of the Error and its Severity (e.g. L/M/H).

Enter the Area under test, its Test Case ID, Bug ID, Bug Fixed Date, Bug Fixed By and Fix Verified By details.

Use this checklist to prepare for the Test Plan: review the Software Requirements Specifications, identify functions/modules for testing, and perform Risk Analysis. The second checklist is for populating the Test Plan and helps to identify/prioritize the features to be tested, define the Test Strategy, identify Test Tools, identify Resource Requirements, etc.

This Test Case template is used to capture the name of the Test Case, its Description, Start Conditions, Pass Criteria, Tester Name and Build Number; to identify the Test Data used; and to record the Steps, each with its Action and Expected Result.
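Where test cases are managed in code rather than in a Word document, the same fields can be captured in a small data structure. The sketch below mirrors the template's field names in a Python dataclass; the structure and the sample values are assumptions made for illustration, not part of the template itself.

```python
# A hedged sketch of the Test Case template fields as a data structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    step: int
    action: str
    expected_result: str

@dataclass
class TestCase:
    name: str
    description: str
    start_conditions: str
    pass_criteria: str
    tester_name: str
    build_number: str
    test_data_used: str
    steps: List[TestStep] = field(default_factory=list)

# Example usage with hypothetical values:
tc = TestCase(
    name="TC-042 Login with valid credentials",
    description="Verify a registered user can log in",
    start_conditions="User account exists; application is at the login screen",
    pass_criteria="User reaches the home screen without errors",
    tester_name="J. Smith",
    build_number="1.0.3",
    test_data_used="user=demo, password=<valid>",
    steps=[
        TestStep(1, "Enter user name and password", "Fields accept input"),
        TestStep(2, "Click Login", "Home screen is displayed"),
    ],
)
print(tc.name, "-", len(tc.steps), "steps")
```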

Use this to track the progress of the software tests each Week: capture which tests are Planned, which were Attempted, and how many were Successful.

Use this to capture the Project's Completion Date, Test Event, Test Case ID, Test Date, Tester, Test Results and Status.

Use this to analyze the usability of a web project, such as the performance of its Navigation, Graphics and Error Messages, and the quality of its Microcontent.


Unit Test Plan

Module ID: _________

1. Module Overview

[Briefly define the purpose of this module. This may require only a single phrase, e.g. calculates overtime pay amount, calculates equipment depreciation, performs date edit validation, or determines sick pay eligibility.]

1.1 Inputs to Module

[Provide a brief description of the inputs to the module under test.]

1.2 Outputs from Module

[Provide a brief description of the outputs from the module under test.]

1.3 Logic Flow Diagram

[Provide a logic flow diagram if additional clarity is required.]

2. Test Data

[Provide a listing of the test cases to be exercised to verify processing logic.]

2.1 Positive Test Cases

[Representative data samples should provide a spectrum of valid field and processing values, including "syntactic" permutations that relate to any data or record format issues. Each test case should be numbered and should indicate the nature of the test to be performed and the expected outcome.]

2.2 Negative Test Cases

[The invalid data selection contains all of the negative test conditions associated with the module. These include numeric values outside thresholds, invalid characters, invalid or missing header/trailer records, and invalid data structures (missing required elements, unknown elements, etc.). A minimal sketch of positive and negative cases follows this template.]

3. Interface Modules

[Identify the modules that interface with this module, indicating the nature of the interface: outputs data to, receives input data from, internal program interface, external program interface, etc. Identify any sequencing required for subsequent string tests or sub-component integration tests.]

4. Test Tools

[Identify any tools employed to conduct unit testing. Specify any stubs or utility programs developed or used to invoke tests. Identify the names and locations of these aids for future regression testing. If data is supplied from the unit test of a coupled module, specify the module relationship.]

5. Archive Plan

[Specify how and where data is archived for use in subsequent unit tests. Define any procedures required to obtain access to data or tools used in the testing effort. The unit test plans are normally archived with the corresponding module specifications.]

6. Updates

[Define how updates to the plan will be identified. Updates may be required due to enhancements, requirements changes, etc. The same unit test plan should be re-used, with revised or appended test cases identified in the update section.]
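As an illustration of the positive (2.1) and negative (2.2) test cases the template calls for, here is a minimal sketch using Python's unittest against a hypothetical date-validation module (echoing the template's own "performs date edit validation" example); validate_date() and its DD/MM/YYYY rule are invented for this example only.

```python
# Sketch: positive and negative unit test cases for an invented module.
import unittest
from datetime import datetime

def validate_date(text: str) -> bool:
    """Hypothetical module under test: accepts dates in DD/MM/YYYY form."""
    try:
        datetime.strptime(text, "%d/%m/%Y")
        return True
    except ValueError:
        return False

class DateEditValidationTests(unittest.TestCase):
    # 2.1 Positive test cases: representative valid values.
    def test_valid_date(self):
        self.assertTrue(validate_date("29/02/2024"))  # leap day

    def test_valid_date_start_of_year(self):
        self.assertTrue(validate_date("01/01/1999"))

    # 2.2 Negative test cases: values outside thresholds, invalid characters,
    # and invalid data structures (missing elements).
    def test_day_out_of_range(self):
        self.assertFalse(validate_date("32/01/2024"))

    def test_invalid_characters(self):
        self.assertFalse(validate_date("aa/bb/cccc"))

    def test_missing_elements(self):
        self.assertFalse(validate_date("01/2024"))

if __name__ == "__main__":
    unittest.main()
```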


Some suggested starting points for a reader-friendliness checklist include:

Clarity of Communication

Does the site convey a clear sense of its intended audience?

Does it use language in a way that is familiar to and comfortable for its readers?

Is it conversational in its tone?

Accessibility

Is load time appropriate to content, even on a slow dial-in connection?

Is it accessible to readers with physical impairments?

Is there an easily discoverable means of communicating with the author or administrator?

Consistency

Does the site have a consistent, clearly recognizable "look-&-feel"?

Does it make effective use of repeating visual themes to unify the site?

Is it visually consistent even without graphics?

Navigation

Does the site use (approximately) standard link colors?

Are the links obvious in their intent and destination?

Is there a convenient, obvious way to maneuver among related pages, and between different sections?

Design & maintenance

Does the site make effective use of hyperlinks to tie related items together?

Are there dead links? Broken CGI scripts? Functionless forms? (A link-check sketch follows this list.)

Is page length appropriate to site content?

Visual Presentation

Is the site moderate in its use of color?

Does it avoid juxtaposing text and animations?

Does it provide feedback whenever possible? (for example, through the use of an easily recognizable ALINK color, or a "reply" screen for forms-based pages)
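One way to automate the "dead links" item above is a small crawler-style check. This sketch assumes the third-party requests and beautifulsoup4 packages and uses a placeholder start URL; it is an illustration of the idea, not a prescribed tool.

```python
# Sketch: fetch a page, collect its links, report any that do not respond OK.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/"  # placeholder

page = requests.get(START_URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    link = urljoin(START_URL, anchor["href"])
    try:
        # Some servers reject HEAD; a GET could be used instead if needed.
        response = requests.head(link, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            print(f"DEAD ({response.status_code}): {link}")
    except requests.RequestException as exc:
        print(f"ERROR: {link} ({exc})")
```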


When testing a web-based program, I look at the following:

How does the Web site look? Check its content for layout, spelling mistakes, etc.

How is the flow or logic organised across the pages?


Overall presentation

Link testing

Navigation testing

Memory demands - no needlessly big files, etc.

How fast is the system?

Are there any processes running?

Check HTML aspects

Time testing, i.e. how fast the pages respond (a timing sketch follows this list)

Any forms, reports, or queries? Then test them accordingly.

Load testing, and other aspects of usability and system testing.
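Here is a rough sketch of the "time testing" idea: measure how long key pages take to respond and flag anything over a chosen threshold. It assumes the third-party requests package; the URLs and the 3-second threshold are placeholders, not values from this document.

```python
# Sketch: time key pages and flag slow responses.
import time

import requests

PAGES = [
    "https://example.com/",
    "https://example.com/search",
    "https://example.com/report",
]
THRESHOLD_SECONDS = 3.0  # placeholder threshold

for url in PAGES:
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    flag = "SLOW" if elapsed > THRESHOLD_SECONDS else "ok"
    print(f"{flag:4s} {elapsed:6.2f}s {response.status_code} {url}")
```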