Multimedia Interface Testing Approach and Execution Recommendations



Multimedia Interface Testing – Aztecsoft

Recommended Approach and Execution Model

Vishal Talreja

Aztecsoft itest

The information contained in this document is proprietary & confidential to Aztecsoft Limited.


MULTIMEDIA INTERFACE TESTING

Customer Inc. is the industry's leading wireless semiconductor company focused on radio frequency (RF) and complete cellular system solutions for mobile communications applications. Customer is working on building a framework for multimedia applications for the mobile industry. This framework is a layer below the API layer that talks to the MMI developed by the phone manufacturer or multimedia application providers. Some of the elements that will be tested include:

MMS functionality including Adaptive Multi-Rate (AMR) voice tags

JPEG decode and Java™ acceleration

Full-rate, half-rate, Enhanced Full-Rate (EFR) Vocoders

Voice services including Automated Speech Recognition (ASR)

Multimedia, 64-tone polyphonics and camera support

iMelody Ringtone functionality

The purpose of this document is to provide an approach to Customer Inc. to help test their new platform/framework for multimedia on mobile devices. The approach is based on three lines of action:

Audio and Video Testing Approach

Identification of the types of bugs common in embedded systems software

Discussion of the various methods used to find bugs

Some of the approach paths detailed in the sections below have been executed on desktops in a Windows environment, while some pertain to mobile and handheld devices. The baseline philosophy of testing remains agnostic of the platform under test.


MULTIMEDIA AUDIO TESTING

It is extremely difficult to define a term as vague as audio quality; everybody has a different view of the subject. Standardizing external and integrated audio around established technology specifications is very important for mobile audio to succeed. The quality of audio in a mobile device is extremely important for a rich multimedia experience on a handheld device. Internal audio can affect playback on stereo Bluetooth headphones, Bluetooth headsets, wired headphones, wired headsets and externally amplified players and hence control of audio settings via software is very important from an end-user perspective. We believe that the multimedia platform (audio portion only) will be used in (but not limited to) the following environments:

Playback of audio files (MP3, AAC, WAV, WMA, Ogg Vorbis, MIDI)

Playback of polyphonic ringtones including iMelody, MIDI, MP3, WAV

Playback of game audio

Speech Recognition – useful in voice-based dialing

Conversation Recording on device

Voice Memo Recording

Aztecsoft's approach to testing multimedia audio consists of these steps:

Understand all the APIs pertaining to audio of the multimedia chipset

Parameterize what needs to be tested

Understand the acceptance criteria

Determine testing tools that are required for the task

Understand automation needs and write test automation code

Execute the tests and log test results

Assist in the acceptance testing process


The above approach includes the following:

Audio User Control Testing

The following user audio controls need to be tested:

User Control (Audio) | Description of Test | Test Type
Volume | Change in levels will impact the output volume measured in decibels (dB) | Automated
Equalizer | Change in frequency of output audio by changing the octave response, which is frequency based between 31.25 Hz and 16 kHz, while keeping the SNR high | Automated
Bass Boost | This setting increases the value of the lower frequencies, thus increasing the bass level output of the audio | Automated
Mute | This setting disables voice signal output to the speaker | Automated
Microphone Level | Measurement of sensitivity of onboard microphone for audio capture | Automated

Audio Parameter Testing

As the above settings are altered by software, corresponding audio parameters need to be measured to ascertain that the change in audio settings accurately changes the audio output and its characteristics. Audio parameters that need to be measured include:

Audio Parameter | Description of Parameter under Test | Test Type
Dynamic Range (DR) | The output can accommodate both soft and loud signals if this value is high | Automated
Signal-to-Noise Ratio (SNR) | For freedom from noise, the SNR needs to be high | Automated
Total Harmonic Distortion (THD+N) | For freedom from distortion, the THD level should be low | Automated
Frequency Response | This should match the full range of human hearing | Automated
Channel-to-Channel Crosstalk | This value needs to be low to minimize leakage between audio channels | Automated
Tonal Balance | A measure of balance of sound levels across the full range of audio frequencies | Automated

The above tests can be performed using the Customer platform API and audio spectrum analyzer software such as RightMark Audio Analyzer test suite.
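Where a full analyzer suite is unavailable, tone-based measurements can also be scripted directly against captured PCM data. The sketch below is a minimal illustration of automated SNR measurement using the Goertzel algorithm; it assumes samples have already been captured through the platform's audio API, and all names here are ours, not the Customer API.

    /* Minimal sketch: estimate the SNR of a captured test tone.
     * Assumes PCM samples were captured via the platform's audio
     * API; nothing here is the Customer API itself. */
    #include <math.h>

    /* Goertzel algorithm: power of a single frequency bin. */
    static double goertzel_power(const double *x, int n,
                                 double freq_hz, double fs_hz)
    {
        const double pi = 3.14159265358979323846;
        double coeff = 2.0 * cos(2.0 * pi * freq_hz / fs_hz);
        double s0 = 0.0, s1 = 0.0, s2 = 0.0;
        for (int i = 0; i < n; i++) {
            s0 = x[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;   /* |X(k)|^2 */
    }

    /* SNR in dB: tone energy vs. everything else in the block. */
    double estimate_snr_db(const double *x, int n,
                           double tone_hz, double fs_hz)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += x[i] * x[i];
        /* Scale the Goertzel bin power to time-domain energy units. */
        double signal = goertzel_power(x, n, tone_hz, fs_hz) * 2.0 / n;
        double noise  = total - signal;
        if (noise < 1e-12)
            noise = 1e-12;                 /* guard against log(0) */
        return 10.0 * log10(signal / noise);
    }

A real harness would sweep several tone frequencies and repeat the measurement at each volume and equalizer setting.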


Audio Device State Testing

Audio device states need to be measured via API during the course of audio testing. These are:

Audio Device State | State Definition
Playing | Audio is playing
Recording – Foreground | A normal application is recording
Recording – Background | Speech recognition or speech activity detection is running
Full Duplex | Device is simultaneously playing and recording
Paused | File handle is open. Only devices that are playing, foreground recording, or in full duplex operation may be paused; background recording may not be paused. The paused state assumes that a device must transition to the resumed state rapidly, and no audio samples may be lost between the time the device is paused and the time it is later resumed.
Closed | No file handle is open
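The pause rule above lends itself to a simple state guard in test code. The following encoding is illustrative only; the actual state constants would come from the Customer platform API.

    /* Hypothetical encoding of the pause rule stated in the table:
     * only playing, foreground-recording, or full-duplex devices
     * may be paused; background recording may not. */
    #include <stdbool.h>

    typedef enum {
        AUDIO_CLOSED,
        AUDIO_PLAYING,
        AUDIO_RECORDING_FOREGROUND,
        AUDIO_RECORDING_BACKGROUND,   /* speech recognition / VAD */
        AUDIO_FULL_DUPLEX,
        AUDIO_PAUSED
    } audio_state_t;

    static bool can_pause(audio_state_t s)
    {
        return s == AUDIO_PLAYING ||
               s == AUDIO_RECORDING_FOREGROUND ||
               s == AUDIO_FULL_DUPLEX;
    }

State-transition tests can then assert that the API refuses a pause request whenever can_pause() is false.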

Audio Device Driver DRM

With the proliferation of audio streams, podcasts, and digital-music-capable mobile devices, there is growing adoption of Digital Rights Management (DRM). DRM tags are included in audio content. If the audio device driver does not support DRM, then it is difficult to play back audio content that is intended for trusted devices. Aztecsoft will test for DRM support using the Customer platform APIs.

Drivers with DRM support: Devices with DRM-enabled drivers are considered "trusted." The DRM system will play all encrypted and non-encrypted content on these devices.

Drivers without DRM support: Drivers without DRM signatures can render unencrypted content. These drivers can also play DRM content that does not require a "trusted audio device." However, if the usage rules in the content require that it play only on trusted devices, the DRM system will not allow playback through these drivers.

CONDITIONS FOR TESTING DRM DRIVERS

If DRM support is implemented in the multimedia platform, then it is essential to test the driver to determine that the DRM interface is fully and correctly implemented, as follows:

When requested, the audio driver must disable the ability to capture the stream currently being played back. That is, whenever Content Copy Protection rights are asserted through the DRM interface:

The driver does not save the unprotected digital content in any non-volatile storage. This may be EEPROM, hard disk, memory card, memory stick, or any other similar storage medium.

The driver disables the capture multiplexer on an output D/A converter or otherwise prevents loop-back of digital content.

When requested, the audio driver must disable the digital audio output on the device. That is, whenever the content asserts Digital Output Disable rights through the DRM interface, the driver must disable all digital audio outputs that could transmit content over a standard interface through a standard interconnection scheme.

The audio driver must rely only on other components that also contain DRM signatures. The driver must never facilitate the transfer of audio data to any component that does not have a DRM signature.


For instance, if an audio file is transmitted from the mobile device to a Windows-based computer, the driver must utilize the Windows DRM kernel APIs to make the Windows DRM system aware of the movement of digital content.

The audio device and driver must not include user-selectable options to defeat or subvert the DRM components. Specifically, the driver must not provide user control panels, or other methods of disabling the DRM functions.
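As a rough illustration of what the driver-side checks amount to, the sketch below gates the capture and digital-output paths on asserted rights. The flag and function names are invented for this document; the real interface is whatever DRM kernel components the platform defines.

    /* Hypothetical driver-side enforcement of the DRM rules above.
     * All names are illustrative, not a real driver interface. */
    #include <stdint.h>

    #define RIGHT_COPY_PROTECT    (1u << 0)  /* no capture/loop-back */
    #define RIGHT_DIGITAL_DISABLE (1u << 1)  /* no digital audio out */

    static void disable_capture_mux(void)       { /* hardware-specific */ }
    static void forbid_nonvolatile_copies(void) { /* hardware-specific */ }
    static void disable_digital_outputs(void)   { /* hardware-specific */ }

    /* Called whenever a stream asserts rights through the DRM layer. */
    void apply_drm_rights(uint32_t rights)
    {
        if (rights & RIGHT_COPY_PROTECT) {
            disable_capture_mux();        /* block loop-back of content  */
            forbid_nonvolatile_copies();  /* no EEPROM/disk/card copies  */
        }
        if (rights & RIGHT_DIGITAL_DISABLE)
            disable_digital_outputs();    /* mute standard digital links */
    }

Tests would then assert, for each rights combination, that the corresponding hardware paths really are disabled.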

MULTIMEDIA VIDEO TESTING

Multimedia video quality is easier to assess than audio quality, which is very subjective. We believe that the multimedia platform (video portion only) will be used in (but not limited to) the following environments:

Playback of video files (MPEG-2, MPEG-4, 3GPP)

Playback of static image files including WBMP, BMP, JPG, GIF, PNG formats

Playback of video audio

Aztecsoft recommends the following testing approach for video multimedia:

Understanding of the video-related APIs of the multimedia platform.

Definition of testing parameters

Setting Video-specific Acceptance Criteria

Determining Testing Tools

Writing Test Cases, Scripts and Automation Code

Executing Tests and Logging Test Results

Assisting the Acceptance Testing Process

Video quality can be determined by measuring many parameters such as resolution, refresh rate, support of video file formats, video memory etc.

Video Resolution Testing

Most mobile video devices are capable of reproducing 160x160 or 320x320 resolution. For the video to appear unaffected by the digitization process, a video capture chip should capture all of the supported resolution samples, known in the industry as D1. The following tests will be conducted to test video resolution:

Current Resolution Value

Display of video with all supported resolutions

Change of Resolution Parameters

Display of video with changed resolution parameters

Frame Rate Testing

The number of frames that can be refreshed per second determines the frame rate of the playback board on the multimedia chipset. This is usually measured in frames per second (fps); the higher the fps, the better the video display.


Determining frame rate is an automated process that requires code to divide the number of frames rendered by the time it took to render them. FPS can be measured over any number of frames: measuring over just the last frame gives a very accurate instantaneous value, but one that varies too quickly for most applications' needs, while measuring over the last 10 frames gives a more stable number, as sketched below.
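A minimal sketch of such a rolling measurement follows; the millisecond tick source is assumed to be provided by the platform.

    /* Rolling frame-rate measurement over the last FPS_WINDOW frames.
     * Timestamps come from an assumed millisecond tick source. */
    #include <stdint.h>

    #define FPS_WINDOW 10

    static uint32_t frame_times[FPS_WINDOW]; /* timestamp ring buffer  */
    static int      frame_index;             /* next slot to overwrite */
    static int      frames_seen;             /* frames observed so far */

    /* Call once per rendered frame; returns 0 until the window fills. */
    double fps_on_frame(uint32_t now_ms)
    {
        uint32_t oldest = frame_times[frame_index];
        frame_times[frame_index] = now_ms;
        frame_index = (frame_index + 1) % FPS_WINDOW;

        if (frames_seen < FPS_WINDOW) {
            frames_seen++;
            return 0.0;
        }
        uint32_t elapsed_ms = now_ms - oldest;  /* FPS_WINDOW intervals */
        if (elapsed_ms == 0)
            return 0.0;                         /* guard division by zero */
        return 1000.0 * FPS_WINDOW / (double)elapsed_ms;
    }

Changing FPS_WINDOW trades responsiveness against stability, exactly as described above.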

Ghost Factor Testing

When the display leaves a footprint image of the previous frame while transitioning to the next frame, the image on the screen is called a ghost image. The ghost factor is a number that depends on the resolution of video playback and the refresh-rate capability of the playback interface. The lower the number, the better the quality of video playback.

File Format Support

Testing of supported file formats is important from a user's perspective. Aztecsoft will test the software and the hardware performance for all the published video formats and encoding types supported. These tests will include:

Test Parameter | Description
Video File Format Playback | The test will determine if the supported video file format plays back on the device while maintaining horizontal-vertical resolution and audio-video synchronization
Stream File Playback | The test will determine if a video streamed over-the-air plays back on the device while maintaining horizontal-vertical resolution and audio-video synchronization
Image File Format Playback | The test will determine if a static image file plays back on the device while maintaining horizontal-vertical resolution

Video Streaming Testing

Efficient and accurate video streaming requires adequate network bandwidth to provide the intended experience. As opposed to downloading video, which requires storage capability on the device, streaming requires buffer memory to "buffer and show" for a jitter-free viewing experience. A common approach to testing video streaming is based on Mean Opinion Scores (MOS), which evaluate overall user satisfaction by relating output to human visual parameters such as "blur" and "blockiness" and thus help pinpoint errors in video display. This approach applies to both video content streaming and video telephony. Qvoice has good testing tools that can test this functionality very effectively. Genista, a company based in Japan, provides software solutions to measure the perceptual quality of electronic media including desktop, video, mobile, media streams, digital audio and images. The above audio-video testing will be carried out under various modes and states that include:

Test Mode | States
Radio | GSM Standby; GPRS Active Download; GSM Active – Incoming Call; Incoming SMS; Incoming MMS
Battery | Battery Full; Battery Low; Battery Critical
Vibration | Vibration ON; Vibration OFF
Playback Functions (Audio and Video) | Mute; Un-mute; Pause; Resume; Play; Stop; Forward; Rewind; Volume Up; Volume Down; Seek
Audio Modes | Stereo; Mono


TESTING IN THE EMBEDDED SYSTEMS ENVIRONMENT

When devising a plan to remove bugs from software, it helps to know what you're trying to find. Software can fail in many ways, and mistakes are introduced into the code from many different sources. Some bugs have greater repercussions than others, and almost all of them have consequences determined by the type of application and the domain in which it operates. What follows is a partial catalog of errors found in embedded systems software. Without understanding how embedded and mobile software can err, it's difficult to find the potential errors.

There are many types of software defects that are common to all classes of computer software:

Errors, omissions, and ambiguities in requirements and designs

Errors in the logic, mathematical processing, and algorithms

Problems with the software's control flow: branches, loops, etc.

Inaccurate data, operating on the wrong data, and data-sensitive errors

Initialization and mode change problems

Issues with interfaces to other parts of the program: subroutines, global data, etc.

In addition to all these, real-time embedded systems have potential problems in many additional areas:

Exceeding the capability of the microprocessor to perform needed operations in time

Spurious resets caused by the watchdog timer

Power mode anomalies

Incorrect interfacing to the hardware peripherals in the system

Exceeding the capacity of limited resources like the stack, heap, others

Missing event response deadlines

The list below is only partial. The relative frequency and severity are listed for each type of bug.

Non-implementation error sources

Errors can be introduced into the code from an erroneous (or ambiguous) specification or an inadequate design. They can also result from chipset hardware that doesn't operate correctly, or operates differently than specified or otherwise understood. (Frequency: Common; Severity: Non-functional to Critical)

Implementation error sources

Algorithm/logic/processing bugs – These are logical errors in the software, such as incorrect loop executions. (Frequency: Common; Severity: High)

Parameter passing – Incorrect arguments or parameters may be passed to a subroutine. (Frequency: Common only when many complicated function invocations are used; Severity: Varies)

Return codes – Improper handling of return codes is another potential error source. Assuming the called function executes correctly and not checking for unexpected return codes can cause problems. (Frequency: Common when using unfamiliar libraries or complicated functions with many return codes; Severity: Varies)

Math overflow/underflow – The result of an operation should be checked to ensure an overflow/underflow did not occur before using the result in any meaningful way. Failure to check for overflow/underflow can result in data-sensitive problems that can be difficult to track down. If an overflow condition is detected, it must be handled in some appropriate way, often by limiting the data to the largest number that can be represented in the data type (a minimal sketch follows this table). These checks are unnecessary only when the input data is well known and it's impossible for the operation to ever overflow or underflow. (Frequency: Common where arithmetic operations are performed using integer or fixed-point math; Severity: High)

Logic/math/processing errors – Incorrect decision logic grows common in complicated functions and deeply nested decisions. Boolean operations and mathematical calculations can also easily be misunderstood in complicated algorithms. (Frequency: Common; Severity: High)

Reentrance problems – If a section of code can be interrupted before it completes its execution, and can be called again before the first execution has completed, the code must be designed to be reentrant. This typically requires that all variables referenced by the reentrant routine exist on a stack, not in static memory. In addition, any hardware resources used must be manipulated carefully. If not, data corruption or unexpected hardware operation can result when the interrupted (first) execution of the routine finally completes. (Frequency: Rare, since most embedded systems code is not reentrant; Severity: Critical)

Incorrect control flow – The intended sequence of operations can be corrupted by incorrectly designed for and while loops, if-then-else structures, switch-case statements, goto jumps, and so on. This causes problems such as missing execution paths, unreachable code, incorrect control logic, erroneous terminal conditions, unintended default conditions, and so on. (Frequency: Common; Severity: Non-functional to Critical)

Data errors

Pointer errors – Pointer errors are often more common when certain types of structures are used in the code. Doubly linked lists make heavy use of pointers, so it's easy to point to the wrong node or link to a NULL pointer. (Frequency: Common in languages that support pointers, such as C; Severity: High or Critical)

Indexing errors – Index registers (or similar types of registers in other architectures) are commonly used. High-level language programs often make heavy use of arrays, and strings are often stored as arrays of characters. Individual elements within an array are identified with an array index, and accessing the wrong element within an array is a typical indexing problem. (Frequency: Common; Severity: High or Critical)

Improper variable initialization – Sometimes improper initialization is obvious, as when reading a variable that has never been written. Other times it's more obscure, such as reading a filtered value before the proper number of samples have been processed. (Frequency: Not very common; Severity: Low)

Variable scope errors – To get the expected results, the correct data must be processed. The same name can be applied to different data items that exist at different scopes. For example, an automatic variable can coexist with a static variable of the same name in that file. Different objects instantiated from the same class refer to their members with the same name. When pointers are used to reference these objects, it becomes even easier to make a mistake. (Frequency: Not very common; Severity: Low to High)

Incorrect conversion/type-casting/scaling – Converting a data value from one representation to another is a common operation, and often a source of bugs. Data sometimes needs to be converted from a high-resolution type used in calculations to a low-resolution type used in display and/or storage. Conversion between unsigned and signed types, and between string and numeric types, is common. When using fixed-point math, conversion between data types of different scales is frequent. Typecasts are useful to get data into whatever representation is needed, but they also circumvent compiler type checking, increasing the risk of making a mistake. (Frequency: Common; Severity: Low to Critical)

Data synchronization errors – Many real-time embedded systems need to share data among separate threads of execution. For example, suppose an operation that uses a number of different data inputs is performed. This operation assumes these data are synchronous in order to perform its processing. If the data values are updated asynchronously, the processing may use some "new" data items with some "old" data items and compute a wrong result. This is especially true if a control flag is used to interpret the data in some way. Some embedded systems use a serial port to send a "system snapshot" of the critical data items in response to an asynchronous request; if the data items in the snapshot are not updated synchronously, the snapshot may contain a mix of current and old information. (Frequency: Not very common; Severity: Low to High)

System bugs

Stack overflow/underflow – Pushing more data onto the stack than it is capable of holding is called overflow; pulling more data from the stack than was put on it is called underflow. Both result in using bad data, and can cause an unintended jump to an arbitrary address. (Frequency: Common; Severity: Critical)

Version control errors – Version control grows in importance as the complexity of the software project grows. Shipping the version of a file that still has a bug produces another bug report; shipping a version that is incompatible with the latest hardware may produce many bug reports. (Frequency: Common; Severity: High to Critical)

Resource sharing problems – Resource sharing is common in most embedded systems at some level. Wherever sharing occurs, strict rules for using the resource cooperatively must be defined and followed to avoid conflicts. Ignoring a mutual exclusion semaphore can corrupt data. (Frequency: Not very common; Severity: High to Critical)

Resource mapping – Some microcontrollers allow the peripheral registers and memory to be mapped to many different locations, and some applications use different mappings for various purposes. If the code isn't re-mapped before burning it into EPROM, the data or code becomes whatever happens to be in RAM after power-up. (Frequency: Rare; Severity: Critical)

Instrumentation problems – Sometimes a software bug is not actually a problem with the software at all. Instrumentation generally alters the behavior of the system, albeit in very small, subtle ways. Sometimes problems disappear when the emulator is connected, and other times they only appear when the emulator is used. Reported bugs could also be a result of improper use of the instrumentation. (Frequency: Not very common; Severity: Low)
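For the math overflow/underflow entry above, the usual "limit to the largest representable value" handling looks like the following sketch, written here for 16-bit fixed-point data:

    /* Saturating 16-bit addition: a widening add followed by a clamp,
     * the overflow handling described in the table above. */
    #include <stdint.h>

    int16_t sat_add16(int16_t a, int16_t b)
    {
        int32_t sum = (int32_t)a + (int32_t)b;   /* cannot overflow in 32 bits */
        if (sum > INT16_MAX) return INT16_MAX;   /* clamp positive overflow    */
        if (sum < INT16_MIN) return INT16_MIN;   /* clamp negative underflow   */
        return (int16_t)sum;
    }

Verification tests would feed boundary values (INT16_MAX, INT16_MIN and their neighbors) through every such arithmetic path.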

Various Testing Approaches

Aztecsoft has highlighted various testing approaches below to test embedded software. A cost (time and money) and effectiveness index is also provided to showcase the benefits and drawbacks of each of the approaches.

Structural Testing

Sometimes called "glass-box testing," this activity uses an unobstructed view of how the code does its work. This type of test is usually performed on a single unit of the software at a time. Test procedures are written to exercise all the important elements of the code under test. This may include exercising all the paths in the module. It often involves many executions of the same code, with different values of data. Boundary conditions are typically exercised. This test is also used to determine the consistency of a component's implementation with its design. In this phase, Aztecsoft plans to:

Determine the worst-case stack usage for each function (a stack-painting sketch follows this list).

Determine the longest execution time for any path through the function (from call to return).

Look for any non-deterministic timing structures, such as recursive routines, waiting for hardware signals or messages, etc. These can make the subsequent timing analyses inaccurate.
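Worst-case stack usage is commonly obtained by "stack painting": fill the stack region with a known pattern at boot and later scan for the high-water mark. The sketch below assumes linker-provided stack boundary symbols, which vary by toolchain:

    /* Stack-painting sketch for worst-case stack measurement.
     * __stack_start/__stack_end are assumed linker symbols; in
     * practice only the currently unused region is painted. */
    #include <stdint.h>
    #include <stddef.h>

    #define STACK_PATTERN 0xA5A5A5A5u

    extern uint32_t __stack_start[];   /* lowest stack address  */
    extern uint32_t __stack_end[];     /* highest stack address */

    /* Call very early at boot, before the painted region is used. */
    void stack_paint(void)
    {
        for (uint32_t *p = __stack_start; p < __stack_end; p++)
            *p = STACK_PATTERN;
    }

    /* Scan upward for the first overwritten word; everything above
     * it has been touched at some point since boot. */
    size_t stack_high_water_bytes(void)
    {
        const uint32_t *p = __stack_start;
        while (p < __stack_end && *p == STACK_PATTERN)
            p++;
        return (size_t)((uintptr_t)__stack_end - (uintptr_t)p);
    }

Running the full test suite and then reading the high-water mark gives an empirical bound to compare against the allocated stack size.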

Cost (Time): High – It takes time to write the procedures and examine what is critical to test. Once the test procedures are written, they can be reused to retest the same module later when minor modifications are performed.

Cost (Money): Varies – Depends on how the testing is done. Simulators are less expensive than In-Circuit Emulators (ICEs).

Effectiveness: High – If a module passes a thorough white-box test, the level of confidence is high that it won't cause problems later.

Functional Testing

In functional test, the program or system is treated as a black box. This implies the tester has no knowledge of what's in the box, only its inputs and outputs. The system is exercised by varying the inputs and observing the outputs. This type of test is usually performed on the entire software system. In complex applications, the software is broken down into components or subsystems, which are tested individually. All the individual parts are then brought together (usually one at a time) and the integrated system is tested. Test procedures are usually written to describe the expected behavior in response to a given environment and input stimulation. In the absence of formal test procedures, some engineers simply compare the system's or software's actual behavior to its specified behavior, without regard to the software implementation. In this phase, Aztecsoft plans to perform a system-level inspection of the software, checking specific items:


Ensure all peripheral registers are understood and accessed correctly.

Ensure that the watchdog timer is enabled. (Many times, engineers keep it disabled while the code is being developed to ease debugging.)

Ensure all digital inputs are de-bounced appropriately.

Ensure that the turn-on delay for analog-to-digital converters (and digital-to-analog converters) is handled appropriately.

Carefully examine power-up and power-down behavior, and the entrance to and exit from any low-power modes.

Ensure an appropriate interrupt vector is defined for every interrupt, not just those that are expected (a defensive-handler sketch follows this list).
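For the last item, one defensive pattern is to point every unused vector slot at a catch-all handler rather than leaving it undefined. A generic sketch follows; the vector-table wiring itself is toolchain-specific:

    /* Catch-all handler for unexpected interrupts. Wiring every
     * unused vector slot to this handler is toolchain-specific. */
    #include <stdint.h>

    volatile uint32_t unexpected_irq_count;   /* inspected by tests */

    void unexpected_irq_handler(void)
    {
        unexpected_irq_count++;
        /* Policy decision: log and continue, assert in debug builds,
         * or force a controlled watchdog reset rather than running
         * on in an unknown state. */
    }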

Cost (Time): Medium – This type of testing is almost always done to one degree or another. Ultimately, the customer will exercise all the features of the program; it's better to find any obvious bugs before he/she does.

Cost (Money): Varies – Depends on how easy it is to manipulate the inputs and observe the correct outputs. Sometimes the entire test can be done stand-alone; other times, custom equipment must be constructed in order to verify correct behavior.

Effectiveness: Medium – Many parts of the system are difficult to exercise with a black-box test. Error conditions (especially those due to hardware failure) can be difficult, if not impossible, to generate. Some combinations of inputs are difficult to produce, especially those with unique timing characteristics.

Verification Testing

Different organizations mean different things when they use the term verification. Some use "verification" and "validation" interchangeably; others make a clear distinction between the terms. The IEEE defines validation as "the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements." Verification, on the other hand, is defined as "The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. It is a formal proof of program correctness." Aztecsoft views verification testing as a detailed analysis of software, independent of its function, to determine if common errors are present. It is generally performed on the code as a whole, not on individual units. It's very white box-like, in the sense that verification examines the structural integrity of the code, and not functionality. Examples of typical verification checks include: stack depth analysis, proper watchdog timer usage, power-up/power-down behavior, singular use of each variable, and proper interrupt suppression.

Cost (Time): Medium – Some checks can be automated, or partially automated. Others are manual and tedious.

Cost (Money): Low to Medium – Depends on what is checked. Most checks can be done by hand or automated. Others, such as verifying interrupt timing, require test equipment.

Effectiveness: High – This is the only way to develop confidence in some areas of the code. It's difficult to generate tests that will produce certain conditions (for example, worst-case stack depth or maximum interrupt latency). However, by analyzing the code, the engineer can determine what the worst-case condition is and show that the system has been designed to handle it.

Stress/Performance Testing

Stress tests are designed to load the system to the maximum specified load and then beyond, to see where it breaks. "Load" could be number of users, number of multimedia messages per unit time, number of multimedia formats that the chip can handle at the same time, size of multimedia play-lists, picture messages, frequency of a periodic interrupt, number of dynamically allocated tasks, and so on. Knowing at what point the system fails tells us how much overhead (factor of safety) we have. Performance testing is conducted to measure the system's performance. This may be important to verify that a requirement is met, to ensure the design is adequate, or to determine resources available to add more features. (Examples: processor throughput, interrupt latency, worst case time to complete a given task, playback quality and so on.)

Cost (Time): Low – Most of this testing can be automated fairly easily and quickly.

Cost (Money): Low to High – Depends on what parameter is measured and what level of automation coding is involved.

Effectiveness: High – Stress and performance testing showcase the amount of load the platform/product can handle and can effectively measure performance under common conditions.

Acceptance Testing

Acceptance tests represent the customer's interests. The acceptance tests give the customer confidence that the application has the required features and that they behave correctly. In theory, when all acceptance tests pass, the project is considered complete. Acceptance tests do three things for a software development team:

They capture user requirements in a directly verifiable way, and they measure how well the system meets expressed business and functional requirements.

They expose problems that unit tests miss and verify if the system executes the way it was intended.

They provide a ready-made definition of how "ready" the system is, i.e., whether it satisfies all the requirements in a usable way.

Aztecsoft provides Acceptance Testing Certification services to customers. This service has four distinct phases:

Plan - This is a phase where we understand and analyze the customer's functional and user requirements

Design - In this phase, we design an Acceptance Test Plan on behalf of the customer


Implement - This phase involves running the Acceptance Test on behalf of the customer along the guidelines defined in the plan

Assess - In this phase, we assess the results of the Acceptance Test and certify the project/product

Cost (Time): Medium – Acceptance tests need to be designed, signed off, and executed.

Cost (Money): Medium to High – Depends on what level of acceptance testing is executed. If all four phases are executed, then the cost is higher.

Effectiveness: High – Very effective in situations where customers are not able to execute acceptance testing because of time or skill constraints.

In addition to the above, Aztecsoft's engineers will also perform the following critical analyses on the software:

Stack Depth – Determine the worst-case stack depth and ensure adequate stack has been allocated.

Watchdog Usage – Using the timing data obtained in the unit tests, find all places where the watchdog timer is reset and ensure that all correctly operating modes of the system will always reset the watchdog timer before it expires.

Shared Data – Locate all data that is accessed by interrupt service routines and other areas of the code. If a multi-tasking design is used, also locate all data that is shared by tasks operating at different priority levels. Verify that adequate protection measures are used on the data to ensure it never becomes corrupted (a minimal sketch follows this list).

Deadlock – Build an allocation graph of all the data shared among different tasks that could end up causing a deadlock. Ensure that deadlock is prevented by the design of the system and the order in which shared data items are locked.

Utilization – Using the timing information obtained in the unit tests, determine the worst-case execution times of all tasks in the system. Verify that the microprocessor can complete all tasks by their deadlines, at their maximum rate of occurrence, under worst-case situations.

Schedulability – Using the worst-case task execution times obtained, determine if all the tasks can be scheduled and meet their deadlines under all situations.

Timing – Ensure all other timing requirements of every task are met under worst-case situations, for example release-time jitter and end-to-end completion.
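For the Shared Data analysis, the basic protection measure is a critical section around any multi-word data shared with an interrupt context. A minimal sketch follows, with the interrupt enable/disable primitives left as MCU-specific stubs:

    /* Protecting data shared between an ISR and the main loop.
     * irq_disable/irq_enable are MCU-specific (e.g., cpsid/cpsie
     * on ARM); stubs are used here so the sketch is self-contained. */
    #include <stdint.h>

    static void irq_disable(void) { /* MCU-specific */ }
    static void irq_enable(void)  { /* MCU-specific */ }

    /* Updated together in interrupt context; must stay consistent. */
    static volatile uint32_t sample_count;
    static volatile uint32_t sample_sum;

    void sampler_isr(uint32_t sample)          /* interrupt context */
    {
        sample_count++;
        sample_sum += sample;
    }

    /* Main-loop snapshot: both values read under one critical
     * section so a "new count / old sum" mix cannot occur. */
    void read_snapshot(uint32_t *count, uint32_t *sum)
    {
        irq_disable();
        *count = sample_count;
        *sum   = sample_sum;
        irq_enable();
    }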


EXECUTION MODEL

Aztecsoft recommends an execution model comprising a team of four key roles to help test this platform effectively. The individual roles will be:

Multimedia Knowledge Worker – This individual has knowledge and expertise in core multimedia systems, interfaces and devices, and will provide guidance and subject-matter expertise to the tester on all aspects related to multimedia devices and components. Aztecsoft will fulfill this role.

Mobility Expert – This expertise will need to be provided by Customer; essentially, a Customer employee will oversee all the mobility-related elements of this testing engagement.

Senior Test Engineer – This individual will work closely with the Multimedia and Mobility experts to write correct test cases, test scenarios and will manage the delivery of this engagement. Aztecsoft will fulfill this role.

Test Engineer – This individual will write the automation code, execute the various tests and log the test results. The Test Engineer will seek guidance from, and call on the expertise of, the Multimedia and Mobility experts as well as the Senior Test Engineer. Aztecsoft will fulfill this role.