TRANSCRIPT
Welcome to H2KInfosys
H2K Infosys is an E-Verify business based in Atlanta, Georgia, United States
www.H2KINFOSYS.com
USA - +1-(770)-777-1269, UK - (020) 3371 7615, [email protected] / [email protected]
100% Job-Oriented, Instructor-Led, Face2Face, True Live Online Software Training
+ Cloud Test Lab with Software Tools & Live Project Work
+ Mock Interviews + Resume Prep & Review + Job Placement Assistance
Better than on-site IT training. Trusted by many students worldwide.
Why H2KInfosys
• Introduction to performance testing
  What
  Why
• Types of performance testing
• Performance Testing Approach
• Performance - Quality Aspect
• Performance testing terminology
• Performance Counters
  Software
  Hardware
  Client Side
  Server Side
Agenda
• Scenarios - What & How
• Performance Testing process
• Performance Requirements
• Performance Test Planning
• Performance Lab - What it is? Various Components
• Performance Test Scripting
• Performance Test Execution
• Metrics Collection
• Result Analysis
• Report Creation
• Q & A
• Workload - What & Why; Types of Workload
AGENDA
“Performance Testing is the discipline concerned with determining and reporting the current performance of a software application under various parameters.”
Performance Testing – What?
Primarily used for…
• verifying whether the system meets the performance requirements defined in SLAs
• determining the capacity of existing systems
• creating benchmarks for future systems
• evaluating degradation under various loads and/or configurations
Performance Testing – Why?
Load Test
Objective: To gain insight into the performance of the system under normal conditions
Methodology:
o User behavior is modeled on the real world.
o Test scripts mimic the activities that users commonly perform, and include think-time delays and arrival rates reflective of those in the real world.
Performance Testing Types
Stress Test
Objective: The application is stressed with an unrealistic load to understand its behavior in the worst-case scenario
Methodology: In stress tests, scripted actions are executed as quickly as possible.
Stress testing is load testing with the user think-time delays removed (see the script sketch below).
Performance Testing Types
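To make the load-vs-stress distinction concrete, here is a minimal LoadRunner Vuser Action sketch in C. The URL and timing are hypothetical, not from the course: the think-time call emulates a real user for a load test, and removing it (or zeroing think time in the runtime settings) turns the same script into a stress test.

    Action()
    {
        lr_start_transaction("home_page");
        web_url("home",
                "URL=http://example.com/",   /* hypothetical AUT URL */
                LAST);
        lr_end_transaction("home_page", LR_AUTO);

        /* Load test: pause like a real user would.
           Stress test: delete this call (or ignore think time in the
           runtime settings) so actions run as fast as possible. */
        lr_think_time(8);

        return 0;
    }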
Scalability
Scalability is the capacity of an application to deal with an increasing number of users or requests without degradation of performance. Scalability is a measure of application performance.
Performance Testing Approach
Stability
The stability (or reliability) of an application indicates its robustness and dependability. Stability is a measure of the application's capacity to keep performing even when accessed concurrently by several users and heavily used. An application's stability partly indicates its performance.
Performance Testing Approach
Functionality
Usability
Reliability
Performance
Supportability
(FURPS)
Quality Aspect Performance
Performance
Effectiveness
Scalability
Quality Aspect Performance
Performance
Online Transactions
o Response times (seconds)
Batch Transactions
o Runtime (minutes or hours)
o Throughput (items/second)
Quality Aspect Performance
Effectiveness
CPU Utilization
o Real User and System Time
Memory Usage (MB)
Network Load
o Packet Size and Packet Count
o Bandwidth and Latency
Disk Usage (I/O)
Quality Aspect Performance
Scalability
Online Transactions
o Large # of Users
Batch Transactions
o Large Transaction Volumes
o Large Data Volumes
Scale in/out
Performance testing terminology
Scenarios
A sequence of steps in the application under test, e.g., searching a product catalog
Workload
The mix of demands placed on the system (AUT), e.g., in terms of concurrent users, data volumes, and number of transactions
Operational Profile (OP)
A list of demands with their frequency of use
Benchmark
A standard, industry-wide workload (TPC-C, TPC-W)
TPC-C: an online transaction processing benchmark. TPC-W: a transactional web e-commerce benchmark.
Performance testing terminology
[Timeline diagram: user finishes request -> system starts execution -> system starts response -> system completes response -> user starts next request]
Reaction time: from when the user finishes the request until the system starts its response
Response time: from when the user finishes the request until the system completes its response
Think time: from when the system completes its response until the user starts the next request
Performance testing terminology
Throughput
The rate at which requests can be serviced by the system
Batch streams: jobs/sec
Interactive systems: requests/sec
CPU: million instructions per second (MIPS), million floating-point operations per second
Network: packets per second or bits per second
Transaction processing: transactions per second
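A quick worked example with assumed numbers: a system that completes 12,000 transactions in a 10-minute window has a throughput of 12,000 transactions / 600 s = 20 transactions per second.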
Performance testing terminology
Bandwidth
A measure of the amount of data that can travel through a network, usually measured in kilobits per second (Kbps). For example, a dial-up modem line has a bandwidth of 56 Kbps, while an Ethernet line has a bandwidth of 10 Mbps (10 million bits per second).
Latency
In a network, latency (a synonym for delay) is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages (for example, AT&T's), latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency.
Latency = propagation (at roughly the speed of light) + transmission (proportional to packet size) + router processing (examining the packet) + other computer and storage delays (switches or bridges)
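A worked example with assumed numbers: sending one 1,500-byte packet 1,000 km over a 10 Mbps link, with signal propagation at roughly 2 x 10^8 m/s in fiber:
Propagation delay = 1,000,000 m / (2 x 10^8 m/s) = 5 ms
Transmission delay = (1,500 x 8) bits / 10,000,000 bits/s = 1.2 ms
Adding router processing and switching delays, the one-way latency comes to roughly 6-7 ms.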
Performance testing terminology
Reliability: mean time between failures (MTBF)
Availability: mean time to failure (MTTF)
Cost/performance
Cost: hardware/software licensing, installation, and maintenance
Performance: usable capacity
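For reference, these quantities are commonly combined as Availability = MTTF / (MTTF + MTTR), where MTTR (mean time to repair) is not on the slide but completes the standard formula: a system with an MTTF of 999 hours and an MTTR of 1 hour is 999/1000 = 99.9% available.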
What to watch
S/W Performance
– OS
– Application Code
– Configuration of Servers
H/W Performance
– CPU
– Memory
– Disk
– Network
Performance counters
Why performance counters? They allow you to track the performance of your application.
What performance counters?
Client-Side: Response Time, Hits/sec, Throughput, Pass/Fail Statistics
Server-Side:
CPU: %User Time, %Processor Time, Run Queue Length
Memory: Available and Committed Bytes
Network: Bytes Sent/sec, Bytes Received/sec
Disk: Read Bytes/sec, Write Bytes/sec
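As a concrete illustration of collecting a server-side counter, here is a sketch in C using the Windows PDH (Performance Data Helper) API, the same counter source Perfmon reads. The counter path and sampling interval are typical choices, not prescribed by the course; link against pdh.lib.

    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    int main(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER counter;
        PDH_FMT_COUNTERVALUE value;

        PdhOpenQuery(NULL, 0, &query);
        /* %Processor Time across all CPUs; other paths follow the same
           pattern, e.g. "\\Memory\\Available Bytes" */
        PdhAddEnglishCounter(query, "\\Processor(_Total)\\% Processor Time",
                             0, &counter);
        PdhCollectQueryData(query);              /* baseline sample */

        for (int i = 0; i < 10; i++) {           /* ten one-second samples */
            Sleep(1000);
            PdhCollectQueryData(query);
            PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
            printf("CPU: %5.1f%%\n", value.doubleValue);
        }
        PdhCloseQuery(query);
        return 0;
    }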
Client-side metrics
Hits per Second: The Hits per Second graph shows the number of hits on the Web server (y-axis) as a function of the elapsed time in the scenario (x-axis). It can be compared to the Transaction Response Time graph to see how the number of hits affects transaction performance.
Pass/Fail Statistics: A measure of the application's ability to function correctly under load, obtained by measuring transaction pass/fail/error rates.
Workload
Workload is the stimulus to the system; it is an instrument simulating the real-world environment. The workload provides in-depth knowledge of the behavior of the system under test: it describes how typical users will use the system once it goes into production. It can include all requests and/or data inputs.
Requests may include things such as:
– Retrieving data from a database
– Transforming data
– Performing calculations
– Sending documents over HTTP
– Authenticating a user, and so on
Workload
Workload may be no load, minimal, normal, above normal, or extreme.
– Extreme loads are used in stress testing, to find the breaking point and bottlenecks of the tested system.
– Normal loads are used in performance testing, to ensure an acceptable level of performance characteristics, such as response time or request-processing time, under the estimated load.
– Minimal loads are usually used in benchmark testing, to estimate user experience.
Workload
Workload is identified for each scenario. It can be identified based on the following parameters:
• Number of users: the total number of concurrent and simultaneous users who access the application in a given time frame.
• Rate of requests: the requests received from the concurrent load of users per unit time.
• Patterns of requests: a given load of concurrent users may be performing different tasks using the application. Patterns of requests identify the average load of users and the rate of requests for a given functionality of the application.
Workload models
• Steady State
Steady state workload is the simplest workload model used in load testing. A constant number of virtual users are run against the application for the duration of the test.
• Increasing
The increasing workload model helps testers find the limit of a Web application's work capacity. At the beginning of the load test, only a small number of virtual users are run; virtual users are then added to the workload step by step.
[Chart: Workload Model - Increasing. Load (y-axis, 0-140 virtual users) ramps up in steps against elapsed time (x-axis, 0:00-2:24).]
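The step pattern in the chart above can be expressed as a simple schedule. A minimal sketch in plain C, where the step size, ceiling, and interval are made-up numbers, not from the course:

    #include <stdio.h>

    int main(void)
    {
        int users = 0;
        const int step = 20;          /* virtual users added per step */
        const int max_users = 140;    /* ceiling, as in the chart above */
        const int interval_s = 300;   /* seconds between steps */

        for (int t = 0; users < max_users; t += interval_s) {
            users += step;
            if (users > max_users)
                users = max_users;
            printf("t=%4d s -> %3d virtual users\n", t, users);
        }
        return 0;
    }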
Workload models
• Dynamic
With the dynamic workload model, you can change the number of virtual users in the test while it is running, and no simulation time is fixed.
[Chart: Workload Model - Dynamic. Load (y-axis, 0-60 virtual users) varies against elapsed time (x-axis, 0:00-2:24).]
Workload Profile
A workload profile consists of an aggregate mix of users performing various operations. A workload profile can be designed through the following activities:
– Identify the distribution (ratio of work). For each key scenario, identify the distribution/ratio of work, based on the number of users executing each scenario.
– Identify the peak user loads. Identify the maximum expected number of concurrent users of the Web application. Using the work distribution for each scenario, calculate the percentage of user load per key scenario.
– Identify the user loads under a variety of conditions of interest. For instance, you might want to identify the maximum expected number of concurrent users for the Web application at normal and at peak hours.
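A purely illustrative calculation for the first two activities (the scenario names and split are assumptions, not from the course): with 200 concurrent users distributed 50% Browse, 30% Search, and 20% Checkout, the per-scenario loads are 200 x 0.50 = 100, 200 x 0.30 = 60, and 200 x 0.20 = 40 virtual users.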
Workload Profile
For a sample web application, the distribution of load for various profiles could be similar to the following:
• Number of users: 200 simultaneous users
• Test duration: 2 hours
• Think time: random think time between 1 and 10 seconds in the test script after each operation
• Background processes: anti-virus software running on the test environment
Scenarios - What and How
Decide on the business flows that need to be run
Decide on the mix of the business flows in a test run
Decide on the order in which the test scripts need to be started
Decide on the ramp-up for each business flow and the test-run duration
The load generators and test scripts (user groups) for a scenario should be configured so that the scenario accurately emulates the working environment.
The runtime settings and the test system/application configuration can be changed to create different scenarios for the same workload profile.
Performance engagement process
[Diagram: engagement flow between the PROJECT TEAM and the PERFORMANCE LAB, exchanging: query, requirements questionnaire, response, test plan & engagement contract, signed contract with approved test plan & application demo, business flow document & project plan, test execution reporting, written approval, and customer feedback & project closure.]
Performance test process
(Project Team & Performance Test Lab)

Initiate
• App Demo
• Business Flow
Activities: fill performance requirements questionnaire; finalize estimates and service plans; prepare engagement contract; reviews by Project Team
Deliverables: signed engagement contract

Plan
• Test Plan
• Access to Staging
Activities: establish test goals; prepare test plan; reviews by Project Team
Deliverables: Performance Test Plan

Design
• Lab Design
• Script Design
Activities: application walkthrough; freeze workload; set up master data; create performance scripts; reviews by Project Team
Deliverables: performance test scripts

Execute
• Iteration 1
• Iteration 2
Activities: execute performance tests; collect performance metrics; reviews by Project Team
Deliverables: First Information Report

Report
• Data Collection
• Analysis & Report
Activities: analyze test results; prepare performance test report; reviews by Project Team
Deliverables: Performance Test Report
Performance requirements
Performance Test Objective
• Forms the basis for deciding what type of performance test needs to be done
• The test plan and test report should reflect this objective
Performance Requirements
• The expected performance levels of the application under test
• Can be divided into two categories:
o Performance Absolute Requirements
o Performance Goals
Performance Absolute Requirements
• Include criteria for contractual obligations, service-level agreements (SLAs), or fixed business needs
Performance Goals
• Include criteria desired in the application, though variance in these can be tolerated under certain circumstances
• Mostly end-user focused
Performance requirements
Determine:
• the purpose of the system
• high-level activities
• how often each activity is performed
• which activities are frequently used and intensive (consume more resources)
• the hardware & software architecture
• the production architecture
Requirements should be specific and should detail the required:
• response times
• throughput or rate of work done
under:
• a specific user load (normal/peak)
• specific data requirements
They can also mention desired resource-utilization levels or thresholds. Analyze business goals and objectives.
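An illustrative requirement written to this standard (all numbers assumed, not from the course): "The Search transaction shall have a 95th-percentile response time of no more than 2 seconds at a peak load of 500 concurrent users, with average server CPU utilization not exceeding 70%."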
Performance requirements
Where do we need to set the goals?
• Which 5% of system behavior is worth defining with respect to performance?
• Which users have priority? What work has priority?
• Which processes carry business risk (loss of revenue)?
How aggressive do the service levels need to be?
• Differentiators
• Competitors
• Productivity gains
What are the demanding conditions the SUT will be put under? What are the resource constraints (already-procured hardware, network, disk, etc., or sharing the same environment with others)? What are the desired norms of resource utilization, and the desired norms for reserve or spare capacity? What are the trade-offs (throughput vs. response time, availability, etc.)?
Where to get this information:
• Stakeholders
• Product managers
• Technical/application architects
• Application users
• Documentation (user guide, technical design, etc.)
Performance requirements
Performance requirements template:
• Objectives
• Performance requirements
• System deployment architecture
• System configuration
• Workload profile
• Client network characteristics
Performance TEST PLAN
Contents
• Requirements / performance goals
• Project scope
• Entry criteria, e.g.:
All defects found during the system-testing phase have been fixed and re-tested.
A code freeze is in place for the application, and the configuration is frozen.
The test environments are ready.
Master data has been created and verified.
No activities outside of performance testing and performance-test monitoring will occur during performance-testing activities.
• Exit criteria, e.g.:
All planned performance tests have been performed.
The performance test passes all stated objectives at the maximum number of users.
• Application overview: a brief description of the business purpose of the Web application. This may include some marketing data stating estimated or historical revenue produced by the Web application.
Performance TEST PLAN
• Architecture overview
This depicts the hardware and software used for the performance-test environment and includes any deviations from the production environment. For example, document it if you have a Web cluster of four Web servers in the production environment but only two Web servers in your performance-test environment.
• Performance test process
This includes a description of:
User scenarios
Tools that will be used
User ratios and sleep times
Ramp-up/ramp-down pattern
• Test environment
1. Test bed
2. Network infrastructure
3. Hardware resources
4. Test data
Performance TEST PLAN
Test Environment
Test bed
Describe the test environment for the load test and for test-script creation. Describe whether the environment is shared and its hours of availability for the load test. Describe whether the environment is production-like or is actually the production environment.
Network infrastructure
Describe the network segments, routers, and switches that will participate in the load test. This can be shown with a network topology diagram.
Hardware resources
Describe the machine specifications available for the load test, such as each machine's name, memory, processor, environment (i.e., Windows 2000, Windows XP, Linux, etc.), whether the machine has the application under test (AUT) installed, and the machine's location (what floor, what facility, what room, etc.).
Test data
Database size and other test data. Who will provide the data and configure the test environment with the appropriate test data.
Performance TEST PLAN
Test Environment
• Staffing and support personnel
List the individuals participating during the load tests, with their roles and levels of support.
• Deliverables
A description of deliverables such as test scripts, monitoring scripts, test results, and reports, with an explanation of what graphs and charts will be produced. Also explain who will analyze and interpret the load-test results and review them, together with the graphs, with all participants monitoring the execution of the load tests.
• Project schedule
• Timelines for test-scenario design and test scripting
• Communication plan
• Project control and status-reporting process
Performance TEST PLAN
Test Environment
• Risk mitigation plan. Examples of risks:
The software to be tested is not correctly installed and configured in the test environment.
Master data has not been created and/or verified.
Available licenses are not sufficient to execute the performance tests needed to validate the application.
Scripts and/or scenarios increase beyond the Performance Test Plan.
• Assumptions, e.g.:
The client will appoint a point of contact to facilitate all communication with the project team and to provide technical assistance as required by personnel within four (4) business hours of each incident. Any delays caused will directly affect the delivery schedule.
The client will provide uninterrupted access to their application for the entire duration of this assignment.
The client application under test uses the HTTP protocol.
The client application will not contain any Java Swing, AJAX, streaming media, VBScript, ActiveX, or other custom plug-ins. The presence of any such components will require revisiting the effort and cost estimates.
Performance TEST LAB
Virtual User: emulates the end-user action by sending requests and receiving responses
Load Generator: emulates the end-user business process
Controller: organizes, manages, and monitors the load test
Probes: capture a single behavior while the load test is in progress
Performance TEST scripting
Correlation
o Correlation is done to resolve dynamic server values, such as:
• Session IDs
• Cookies
o LoadRunner supports automatic correlation
Parameterization
o Parameterization is done to provide each Vuser with unique or specific values for application parameters
o Parameters are supplied dynamically by LoadRunner to each Vuser, or they can be taken from data files
o Different types of parameters are provided by LoadRunner for script enhancement
Transactions
o Transactions are defined to measure the performance of the server.
o Each transaction measures the time it takes for the server to respond to specified Vuser requests.
o The Controller measures the time taken to perform each transaction during the execution of a performance test. (A script sketch of these concepts follows the list below.)
Rendezvous points
Verification Checks
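A minimal LoadRunner Vuser sketch (C) pulling the correlation, parameterization, and transaction concepts above together; the parameter names, boundaries, and URL are hypothetical:

    Action()
    {
        /* Correlation: capture the dynamic session ID from the next response */
        web_reg_save_param("SessionID", "LB=sessionid=", "RB=\"", LAST);

        lr_start_transaction("login");
        web_submit_data("login",
                        "Action=http://example.com/login",  /* hypothetical */
                        "Method=POST",
                        ITEMDATA,
                        /* Parameterization: {UserName}/{Password} come from
                           a data file, so each Vuser gets its own values */
                        "Name=user", "Value={UserName}", ENDITEM,
                        "Name=pass", "Value={Password}", ENDITEM,
                        LAST);
        lr_end_transaction("login", LR_AUTO);   /* transaction timing */

        /* The correlated value can be replayed in later requests */
        lr_log_message("Session: %s", lr_eval_string("{SessionID}"));

        return 0;
    }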
Performance TEST scripting
Rendezvous points
o Rendezvous points are used to synchronize Vusers so that they perform a task at exactly the same moment, to emulate heavy user load.
o When a Vuser arrives at a rendezvous point, it is held by the Controller until all Vusers participating in the rendezvous reach that point.
o Rendezvous points may only be added in the Action section, not in the init or end sections.
Verification checks
o Add verification checks to the script:
• Text verification checks
• Image verification checks
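A short sketch of both features in a LoadRunner Vuser (C); the rendezvous name, check text, and URL are hypothetical:

    Action()
    {
        /* All Vusers are held here by the Controller, then released
           together to emulate a load spike */
        lr_rendezvous("checkout_spike");

        /* Text verification: fail the step if the confirmation is missing */
        web_reg_find("Text=Order Confirmed", "Fail=NotFound", LAST);

        web_url("checkout",
                "URL=http://example.com/checkout",   /* hypothetical */
                LAST);

        return 0;
    }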
Performance TEST Execution
Before execution, ensure:
The test system is ready for the test
Test data is in place
The system configuration is as per the plan
A smoke test has been done
Scenario scheduling is correct:
Ramp up / run for some time / ramp down
Schedule according to groups
The number of Vusers for each group is correct for that test run
Monitors are ready to collect performance data
Debugging statements in the script are commented out / logging is done only when necessary
Load generators: divide the load among the load-generating machines
Log information about each test run
Allow the test system to stabilize for each test run
Store the results of each test run in a separate folder
Check that the test system's state is as per the plan after each test run
Metrics collection
Client-side metrics
Response time
Throughput
Provided by the test tool
Server-side metrics
Perfmon (for Windows)
System commands (for Linux/UNIX)
Test-tool monitors
JMX counters: application servers, database servers, web servers
Scripts using Perl/shell/... for collecting metrics and formatting data
Result analysis
Response time: lower is better
Throughput: higher is better. Throughput increases as load increases until it reaches a saturation point, beyond which any further load increases the response time exponentially. This is called the knee point.
User load vs. CPU utilization
Should be linear
If 100% CPU usage is reached before the expected user load, the CPU is the bottleneck:
Increase the number of CPUs
Use a faster CPU
Check Context Switches and Processor Queue Length
If 100% CPU usage is not reached, check for bottlenecks in other system resources
[Chart: User Load vs. Response Time & Throughput. Response time (left y-axis, 0-70) and throughput (right y-axis, 0-300) plotted against user load (x-axis, 50-550).]
Result analysis
User load vs. disk I/O
Check Average Disk Queue Length and Current Disk Queue Length
Check % Disk Time
Check Disk Transfers/sec
Memory utilization vs. time
Check available memory and the amount of swapping
Memory usage should stabilize some time into the test
If memory usage increases with each test run, or with each iteration of an activity for the same number of users, and doesn't come down, it could be an indication of a memory leak
Network utilization
Current bandwidth, packets/sec, packets lost/sec
Report creation
The end deliverable of the performance test
Very important from the stakeholders' point of view
Should reflect the performance-test objective
Provide test results in tabular format and graphs
Include all issues faced during testing
Document all findings & observations (performance testing is close to research)
Load tests, and especially stress tests, will sometimes expose the bad side of an application: it throws errors; capture all of them for future analysis
Any deviations/workarounds used should be mentioned
Contents of the Test Report
o Executive summary
- Test objective
- Test results summary
- Conclusions & recommendations
o Test objective
o Test environment setup
- Software configuration used, including major and minor versions where applicable
- Hardware configuration
o Business flows tested / test scenarios
o Test run information, including observations
o Test results
o Conclusions & recommendations
Thanks, Deepa