SCQAA-SF Meeting on May 21, 2014
DESCRIPTION
TRANSCRIPT
May 22, 2014 1
Attendee Photo
The SCQAA-SF (www.scqaa.net) chapter sponsors the
sharing of information to promote and encourage
improvement in information technology quality practices
and principles through networking, training and
professional development.
Networking: We meet every two months in the San Fernando
Valley.
Check us out on LinkedIn (SCQAA-SF)
Contact Sujit at [email protected] or call 818-878-0834
Excellent speaker presentations on advancements in
technology and methodology
Networking opportunities
PDU, CSTE and CSQA credits
Regular meetings are free for members and include
dinner
Recently revised our membership dues policy to better
accommodate member needs and current economic
conditions.
Annual membership is $50, or $35 for those who are in
between jobs.
Please check your renewal status with Cheryl Leoni. If you
have recently joined or renewed, please verify before
renewing again.
Prabhu Meruga, Director - Solution Engineering
21st May
SCQAA – San Fernando, CA
• Basic Complexity
• Why performance?
• Performance failure statistics
• Myths of performance testing
• Span of performance testing
• Application performance factors
• End User experience
• Cost comparison analysis
• Process Improvements in performance life cycle
• Performance metrics: prioritize what's needed
• Case study
[Diagram: enterprise landscape – Data Centers, Firewall Operations, Network, Enterprise Technology partners, Multiple channels of access, Enterprise integration]
• 1 billion smartphones shipped in 2013
• 50% of internet users are on mobile
• 80% of mobile time is spent on apps
• Mobile web adoption is growing 8 times faster
Statistics source: Digitalbuzz
Infrastructure evolution: NFV. SDN. Cloud. 3G/4G data transmission.
User experience: Customer first. Ease of use. Multiple channels. Responsiveness. Transactions. Global speed.
Technology evolution: Front end. Back end. Middleware.
• UK businesses could lose up to £36.7 billion in revenue per year. (Source: Microfocus)
• 1.6 million hours of downtime are lost each year across North America. (Source: CA Technologies)
• A single outage can cost up to USD 300,000 an hour, certainly not an amount to be taken lightly. (Source: Emerson Network Power)
• 60% of enterprises overestimate their site's capacity to handle user traffic.
• 98% of online retailers thought a 2-second response time was desirable. (Source: news.cison.com)
• Application performance failures account for 73% of all failures in IT infrastructure today. (Source: eginnovations)
• Comair canceled over 1,000 flights on Christmas Day after its computer systems for reservations crashed. (Source: internetnews.com)
Technical myths:
• Load tests equal performance, scalability and sizing tests
• Load tests provide reliable performance information
• The right load test tool will do everything for me
• User experience is driven by server response time
Process and commercial myths:
• Performance/load testing needs complex planning and scheduling
• Performance/load testing is limited to applications and not infrastructure
• Performance testing tools are license based and implementation is costly
• Open source performance testing tools are not scalable and robust
Traditional performance/load testing scope vs. the current trend: effective performance/load testing built on end-user experience.
• Why is network performance important?
• What is the role of the end user and the experience of application usage?
• Does this mean increased effort, scope and complexity?
• Client-side processing (platform, browser)
• Network variants (LAN, WAN, WiFi)
Workload growth:
• User population • Database changes • Component allocation • Application population • Transaction complexity
Hardware resource consumption:
• CPU consumption • Memory allocation • Disk I/O subsystem • Network hardware
Architectural design strategies:
• Logical packaging • Physical deployment • Component instancing • Optimized database access
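The workload and hardware factors above can be tied together with a simple capacity rule of thumb. As an illustration (the numbers are hypothetical, not from the presentation), Little's Law relates concurrent user population, response time, and think time to the throughput the system must sustain:

```python
# Little's Law sketch: relate user population, throughput, and response time.
# All numbers below are hypothetical, for illustration only.

def required_throughput(concurrent_users, response_time_s, think_time_s):
    """Transactions/second the system must sustain for a given user population."""
    return concurrent_users / (response_time_s + think_time_s)

# Example: 500 concurrent users, 2 s response time, 8 s think time
tps = required_throughput(500, 2.0, 8.0)
print(f"Required throughput: {tps:.0f} TPS")  # Required throughput: 50 TPS
```

The same relation can be inverted during planning: given a target throughput and measured response times, it estimates how many virtual users a load test must simulate.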
End User Experience
Heterogeneous
channels
End User Performance Testing & Monitoring elements
Physical, Virtual & Mobile Device Performance:
• Storage & Event Log • Hung Processes • App crashes • Operating System • Login Profiling • Geographical Origin
Application Performance:
• Latency • Response Time • Throughput • Broken Links • Successful Transactions • Failed Transactions
User Productivity:
• Application/module-wise usage statistics • Usage trail from login to logout • Transaction execution time • Time spent on web page
Source: Compuware
Client Side Statistics: application statistics, location origin, source hygiene checks (PC, laptop, mobile device), configuration pre-checks, transactions/second
Network Statistics: latency, firewall hops, data transfer rate, data center hops, network infrastructure performance (switches, routers, etc.), bandwidth, connections per second, maximum concurrent connections
Server Side Statistics: transactions/second, active sessions, log archive, open vs. ended sessions, memory leaks vs. usage, DB/app server performance
Meaningful analysis of metrics is “Analytics”
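As a sketch of what turning these raw statistics into "analytics" can look like, the snippet below computes average, percentile, and maximum latency from a set of response-time samples. The data and the nearest-rank percentile method are illustrative assumptions, not from the talk:

```python
# Minimal "analytics" sketch: turn raw response-time samples into summary
# metrics. Sample data below is hypothetical.
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

response_ms = [120, 95, 310, 150, 101, 2200, 130, 180, 99, 140]
print("avg latency :", statistics.mean(response_ms), "ms")
print("p95 latency :", percentile(response_ms, 95), "ms")
print("max latency :", max(response_ms), "ms")
```

Note how a single slow outlier (2200 ms here) barely moves the average but dominates the p95, which is why percentile-based SLAs tend to be more meaningful than averages.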
Performance bottlenecks survey by Oracle
Key elements to focus on in network performance testing:
• Routers • Switches • NFV (network function virtualization) • Firewalls • Load Balancers
The performance testing life cycle runs through four stages:
• Planning: dependent on skill set; this can be an optimal exercise and can be controlled.
• Tool selection: the driver for the entire test scripting, execution and end-result reporting. Options are available, but the one-time selection is important.
• Test infrastructure setup: a paradigm shift is under way here, with options available too.
• Scripting & execution: dependent on prerequisites such as tool selection and ease of use.
Open source tooling offers potential benefits at each of these stages.
Traditional performance testing cycle and activities:
• Performance test planning: 1-2 weeks
• Dedicated performance test environment setup: 4-6 weeks
• Test scripts creation: 2-4 weeks
• Test scripts execution & results baseline: 1-2 weeks
Cloud-based performance testing cycle model:
• Performance test planning: 1-2 weeks
• Performance test environment setup on cloud: 1 week
• Test scripts creation: 2-3 weeks
• Test scripts execution & results baseline: 1 week
3-5 weeks of effort savings are realized through the cloud-based performance testing infrastructure model.
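The savings claim can be sanity-checked with simple arithmetic over the stage durations. A minimal sketch (stage durations are taken from the two cycle models; the best/worst-case totals are my own summation):

```python
# Best/worst-case effort totals for the two cycle models, in weeks.
traditional = {"planning": (1, 2), "environment setup": (4, 6),
               "script creation": (2, 4), "execution & baseline": (1, 2)}
cloud = {"planning": (1, 2), "environment setup": (1, 1),
         "script creation": (2, 3), "execution & baseline": (1, 1)}

def total(cycle):
    """Sum the (min, max) stage durations across the whole cycle."""
    return (sum(lo for lo, _ in cycle.values()),
            sum(hi for _, hi in cycle.values()))

print("Traditional:", total(traditional), "weeks")  # Traditional: (8, 14) weeks
print("Cloud:      ", total(cloud), "weeks")        # Cloud:       (5, 7) weeks
# Comparing totals stage by stage gives a 3-7 week spread, consistent in
# magnitude with the 3-5 week savings figure in the deck.
```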
Load Generation over the cloud
Performance Engineering: Passive Monitoring, Active Monitoring, Performance Testing, Performance Results, Tuning Recommendations, Network Simulation, Predictive Analytics
For application performance testing: JMeter, OpenSTA, WebLOAD, The Grinder, Multi-Mechanize, Selenium, Capybara, Pylot, Webrat, Windmill, www.apicasystems.com, Locust.io
For network simulation testing: ns (open source), OPNET (proprietary software), NetSim (proprietary software), Shunra (proprietary)
For end user experience testing: Open Web Analytics, PIWIK, Google Page Speed Module, Site Speed, CSS Corp PROBLR, New Relic Lite
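Most of the load-testing tools listed share the same core loop: spawn N concurrent virtual users, drive transactions against a target, and record response times. A stdlib-only sketch of that loop follows; the stub transaction and all numbers are hypothetical, and real tools such as JMeter or Locust add ramp-up, think time, and reporting on top:

```python
# Minimal sketch of what load-generation tools do: fire N concurrent
# "virtual users" at a target and collect response times. The target here
# is a stub function standing in for a real HTTP request.
import time
from concurrent.futures import ThreadPoolExecutor

def target_transaction():
    """Stub standing in for a request to the system under test."""
    time.sleep(0.01)          # simulated server response time
    return 200                # simulated status code

def run_load(users, iterations):
    """Run `users` concurrent workers, each executing `iterations` transactions."""
    timings = []
    def worker():
        for _ in range(iterations):
            start = time.perf_counter()
            status = target_transaction()
            elapsed = time.perf_counter() - start
            if status == 200:
                # list.append is thread-safe in CPython
                timings.append(elapsed)
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    return timings

timings = run_load(users=5, iterations=10)
print(f"{len(timings)} successful transactions, "
      f"avg {sum(timings) / len(timings) * 1000:.1f} ms")
```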
Background
◦ Pre-release performance testing for a portal toolkit that expedites and standardizes the process of developing customized Internet portals; developed for several geographical regions, including Central and Eastern Europe, Middle East and Africa, and hosted in the UK
◦ Developed by one of the Top 5 outsourcing vendors
◦ Single-instance application running in multiple locations
Challenges
◦ 100% availability and scalability requirements
◦ Improve service uptime and QoS
◦ Optimize application availability & performance
Value Addition
◦ Scaled the system from 20 to 500 users
◦ Reduced CPU utilization to allow for growth
◦ System architecture for growth planning
Performance Engineering Results
• Recommendations
– Regular expression mismatch – rewrite
– Fix serialization
– Implement Bind Variables
• Tuning activities
– Created Function-based Indexes
– Tuned Resource Crunching SQL Queries
– Reconfigured Instance Level Parameters
– Addressed Wait Events
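The "Implement Bind Variables" recommendation means replacing literals concatenated into SQL with placeholders, so the engine parses the statement once and can reuse the execution plan. A sketch using Python's sqlite3 module as a stand-in for the case study's database (the table and data are invented for illustration):

```python
# Bind-variable sketch using sqlite3 as a stand-in for the case study's
# database. A parameterized query lets the engine parse the statement once
# and reuse it, which is the CPU/SQL-area saving the recommendation targets.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0), (3, 75.5)])

# Hard-parsed every time (and injection-prone) -- avoid:
#   conn.execute(f"SELECT balance FROM accounts WHERE id = {user_input}")

# Bind variable: the '?' placeholder is the parameter
row = conn.execute("SELECT balance FROM accounts WHERE id = ?", (2,)).fetchone()
print(row[0])  # 250.0
conn.close()
```

Beyond CPU savings on parsing, bind variables also keep the shared SQL area from filling with near-duplicate statements, which is the memory symptom the DB observations describe.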
Run 1: scalable to 20 users
CPU utilization: application profiled; regular expressions consuming the most CPU time
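When regular expressions dominate CPU time, the usual first steps are to compile patterns once outside the hot path and, as the later recommendation goes further, eliminate regex use where possible. A sketch of the compile-once pattern (the pattern and log lines are hypothetical, not from the engagement):

```python
# Sketch of a common regex CPU fix: compile the pattern once outside the
# hot path. CPython does cache recently used patterns, but explicit
# compilation keeps the cache lookup out of the loop; the bigger win in
# the case study was rewriting to avoid regex entirely.
import re

# Slower: builds/looks up the pattern on every call inside the hot loop.
def extract_ids_slow(lines):
    return [re.search(r"id=(\d+)", line).group(1)
            for line in lines if re.search(r"id=(\d+)", line)]

# Better: compile once; the hot loop only matches.
ID_RE = re.compile(r"id=(\d+)")

def extract_ids_fast(lines):
    out = []
    for line in lines:
        m = ID_RE.search(line)
        if m:
            out.append(m.group(1))
    return out

lines = ["GET /acct?id=42", "GET /health", "GET /acct?id=7"]
print(extract_ids_fast(lines))  # ['42', '7']
```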
ABC Bank Online: Run 1
[Chart: Load Size and Throughput (KBps) with CPU Utilization (App, DB) over Elapsed Time (hh:mm), 0:00-0:30]
App Server Observations: Analysis after Run 1
Run 2: scalable to 140 users
CPU utilization trends, DB I/O, query costs
ABC Bank Online: Run 2
[Chart: Load Size and Throughput (KBps) with CPU Utilization (App, DB) over Elapsed Time (hh:mm), 0:00-1:00]
DB Server Observations: Bottlenecks identified after Run 2
• Bind variable issues: CPU usage (parsing), memory (SQL area)
• Indexing: full table scans on indexed columns due to functions
• Errant queries with huge buffer gets
• Instance-level parameters (DB_Block_Buffers, Shared_Pool, Sort_Area_Size) not optimized
• High wait events: DB Scattered Read & DB Sequential Read
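The full-table-scan finding is what function-based indexes fix: an index built on the expression the query actually filters on. A sketch using SQLite's expression indexes as a stand-in for Oracle's function-based indexes (the schema and data are invented for illustration):

```python
# Function-based index sketch, using SQLite's expression indexes as a
# stand-in for Oracle's. Filtering on UPPER(name) without such an index
# forces a full table scan even when `name` itself is indexed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Alice"), (2, "bob"), (3, "Carol")])

# Expression index matching the query's UPPER(name) predicate
conn.execute("CREATE INDEX idx_customers_uname ON customers (UPPER(name))")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM customers WHERE UPPER(name) = ?",
    ("BOB",)).fetchall()
print(plan)  # the plan should show the index being used, not a full scan

row = conn.execute("SELECT id FROM customers WHERE UPPER(name) = ?",
                   ("BOB",)).fetchone()
print(row[0])  # 2
conn.close()
```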
Recommendations:
• Eliminate or reduce the use of regular expressions to free up CPU time
• Fix serialization
• Tune the database by implementing bind variables and reconfiguring instance-level parameters
• Code profiling: Java
Tuning:
• Created function-based indexes
• Tuned resource-crunching SQL queries
• Reconfigured instance-level parameters
• Addressed wait events
Benefits:
• Scaled the system to 1,000 users
• Reduced CPU to allow for growth
• Achieved better than the target SLA of 400 Kbps throughput
Run 3 Results & Engagement Summary
©2014 CSS Corp The information contained herein is subject to change without
notice. All other trademarks mentioned herein are the property
of their respective owners.
Thank You!
Want to be invited to SCQAA-SF meetings?
Please contact [email protected]