AJAXWORLD 2007 EAST: LASZLO UNVEILS LASZLO WEBTOP
A new breed of Web applications

Facing a few barriers on the road to SOA?
JackBe's Rich Enterprise Application (REA) platform clears the road to SOA business benefits.
There's an abundance of products and vendors to help you create your SOA. Now, consume those SOA services with JackBe REAs to achieve the business productivity and results that led you to SOA in the first place. Our new REA platform combines the power of Ajax and SOA with reliable, secure communications to extend your SOA directly into powerful business applications. A fully visual IDE reduces manual coding and accelerates the development process. And our lightweight, vendor-neutral platform easily integrates with existing middleware and services while maintaining server-side governance and control--unlike products that leave governance to the browser or create middleware platform dependencies.
Join over 35 industry leaders like Citigroup, the U.S. Defense Intelligence Agency, Sears, Tupperware, and Forbes who are already optimizing their business activity with JackBe solutions.
Call or visit our website; let us help you remove the barriers on the road to achieving real business value from your SOA investment.
Web: www.jackbe.com
Phone: (240) 744-7620
Email: [email protected]
March 2007
6  The AJAX Story Is Still Just Beginning
   From the Editor, by Dion Hinchcliffe
8  On-Demand Is in Demand
   Advances in programming make SaaS a viable option for SMBs, by Victor Pinenkov
10 Mashups & Semantic Mashups
   The value of RDF: Taking the Web to the next step, by Dean Allemang
14 AJAX, Where Design Meets Code
   The beginning of a new Web experience, by Kris Hadlock
16 Building the Data Web
   A call for a data syndication standard, by Ric Hardacre
20 Exposing and Preventing Errors in AJAX Applications
   Using a quality infrastructure, by Nathan Jakubiak
24 The Big Shift
   AJAX and Web Service Bus architectures enable the shift from request-and-response to message- and event-driven application architectures, by Kevin Hakman
32 Automatically Testing AJAX User Interfaces
   The reason for automating user interface tests, by Reginald Stadlbauer
38 Struts Portlets with AJAX in Action
   Challenges faced during implementation, by Kris Vishwanathan
44 How Web 2.0 Makes Business Intelligence Smarter
   And what corporate IT needs to do to implement such solutions, by Langdon White
AJAX.SYS-CON.COM  March 2007
The conjunction of AJAXWorld Conference & Expo 2007 this month with the publication of the 600-page blockbuster Real-World AJAX, the announcement by the OpenAjax Alliance of the OpenAjax Hub and OpenAjax Conformance, and the release by Laszlo Systems of OpenLaszlo 4.0 and the Laszlo Webtop: all these things coming together at once mark a significant milestone in the brief history of AJAX, rich user experiences in general, and the growing challenges and opportunities in this space as we continue to witness a tectonic shift in the way Web
apps are designed and built.
We are now rapidly leaving the era
where static HTML is acceptable to the
users and customers of our software. The
“Web page” metaphor is just no longer
a compelling model for the majority of
online Web applications.
Combined with the rise of badges and widgets, and the growing prevalence of what I call the "Global SOA" that gives us vast landscapes of incredibly high-value Web services and Web parts, the use of AJAX is essential to even start exploiting these important trends. Skirting the corners of this phenomenon are also the non-trivial challenges posed by largely abandoning the traditional model of the browser. Specifically, what happens to search engine optimization (SEO), accessibility for disabled users, link propagation (along with network effects), Web analytics, traditional Web user interface conventions, and more, all of which are dramatically affected – often broken outright – by the AJAX Web
application model?
Many of these questions will be
answered, for those of you reading this
issue at AJAXWorld 2007 East, at the event
in New York City. Many are addressed too
in Real-World AJAX, but many remain rela-
tively unanswered in an industry strug-
gling to deal with a major mid-industry
change. The tools, processes, and tech-
nologies we’ve brought to bear to build
Web applications are going to change a
lot, as well as the skill sets. These types
of rich Web applications require serious
software development skills, particularly
as the browser is a relatively constrained
environment compared to traditional soft-
ware development runtime environments
like Java and .NET.
Of course, despite these issues – even
because of them – it is a very exciting
time to be in the AJAX business right now.
One big reason is that there are few AJAX
products with clear market dominance
yet and the dozens and dozens of AJAX
libraries and frameworks currently avail-
able offer a very diverse and compelling
set of options for use as the foundation of
the next great AJAX application. While the Dojo Toolkit is probably the AJAX toolkit with the largest mindshare and lots of industry interest at the present time, the first major products from big vendors, such as Microsoft's ASP.NET AJAX (a.k.a. "Atlas"), are only now making their way to market, showing that the story
is still just beginning.
There's little doubt that we'll continue to see the AJAX market mature, and I'm looking forward to a variety of upcoming improvements: Project Tamarin, the high-speed JavaScript engine donated by Adobe to the Mozilla project; the ongoing evolution of OpenAjax; and the 1.0 release of Dojo sometime this year, to name just a few of the exciting things that have the potential to ensure AJAX continues to grow
and evolve. n
From The Editor
Group Publisher  Jeremy Geelan
Art Director  Louis F. Cuffari

Editorial
Editor-in-Chief  Dion Hinchcliffe
Editor  Nancy Valentine, 201 802-3044

To submit a proposal for an article, go to
http://grids.sys-con.com/proposal.

Subscriptions
E-mail: [email protected]
U.S. Toll Free: 888 303-5282
International: 201 802-3012
Fax: 201 782-9600
Cover Price U.S. $5.99
U.S. $19.99 (12 issues/1 year)
Canada/Mexico: $29.99/year
International: $39.99/year
Credit Card, U.S. Banks or Money Orders
Back Issues: $12/each

Editorial and Advertising Offices
Postmaster: Send all address changes to:
SYS-CON Media
577 Chestnut Ridge Road,
Woodcliff Lake, NJ 07677

Worldwide Newsstand Distribution
Curtis Circulation Company, New Milford, NJ

List Rental Information
Kevin Collopy: 845 731-2684,
Frank Cipolla: 845 731-3832,

Promotional Reprints
Megan Mussa

Copyright © 2007 by SYS-CON Publications, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy or any information storage and retrieval system, without written permission.

AJAXWorld Magazine is published bimonthly (6 times a year) by SYS-CON Publications, Inc., 577 Chestnut Ridge Road, Woodcliff Lake, NJ 07677.

SYS-CON Media and SYS-CON Publications, Inc., reserve the right to revise, republish, and authorize its readers to use the articles submitted for publication.
The AJAX Story Is Still Just Beginning
Dion Hinchcliffe
Most small and medium-sized businesses
(SMBs) require the same software capa-
bilities as large enterprises but don’t have
the resources and in-house technical expertise necessary to manage them themselves. A study by the Cutter
Consortium finds that many businesses are no longer
willing to tolerate the long deployment cycles, ongoing
administrative hassles, high operating costs, and low
ROI associated with traditional on-premise software.
Consequently, an increasing number of organiza-
tions are seeking outsourced and hosted solutions,
which offer comparable levels of functionality at a
fraction of the cost. Now a viable alternative to on-
premise solutions, Software as a Service (SaaS) adop-
tion is expected to grow over the next several years.
Research by industry analyst firms shows that the
percentage of business software spending on SaaS
will grow from 5% in 2005 to 25% in 2011.
New Life for Hosted Applications
Application service providers (ASPs), the first
companies to offer software via the Web, hosted
third-party applications in mini-data centers. The
large complex software had limited functionality and
required expensive server installations.
The current crop of SaaS applications operates at higher speeds and has greater capabilities because
of breakthrough technologies and programming such
as Asynchronous JavaScript and XML (AJAX). AJAX
programming provides an intelligent and efficient
approach to client/server interaction and enables
automatic changes to content without requiring the
full Web page to reload, allowing users to move rapid-
ly between different areas of the application. Software
applications end up operating much like familiar
desktop software and provide similar functionality.
For example, the improved interactivity and usability
of the applications make it easy to conduct real-time
actions, such as drag-and-drop, grab and scroll, and
grab and zoom.
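The partial-update mechanism described above can be sketched in a few lines of plain JavaScript. This is a minimal illustration, not any vendor's implementation; the xhr object and target element are passed in so the wiring is visible, and the URL used later is made up for the example:

```javascript
// Sketch of the AJAX partial-update pattern: fetch a fragment from the
// server asynchronously and swap it into one region of the page, so the
// full Web page never reloads. (xhr and targetEl are passed in; in a
// browser they would be new XMLHttpRequest() and a real DOM node.)
function fetchFragment(xhr, url, targetEl) {
  xhr.open("GET", url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 = response complete; status 200 = HTTP OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      targetEl.innerHTML = xhr.responseText; // update just this region
    }
  };
  xhr.send(null);
}
```

In a browser this might be called as `fetchFragment(new XMLHttpRequest(), "/latest-news", document.getElementById("news"))`; both the endpoint and the element id here are hypothetical.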
Advances in AJAX and other HTML standards,
open source systems, and the increasing availabil-
ity of open application program interfaces (APIs)
have made it possible to develop and enhance SaaS
more quickly and less expensively. SaaS providers
now have greater customization and configuration
capabilities. Many solutions enable administrators to
configure the application across the entire business,
setting parameters that meet the company’s particu-
lar requirements. Customization also allows resellers
and systems integrators to add new fields, create new
interfaces, and integrate data from other software
applications.
Integration used to be a challenge for SaaS appli-
cations and a deterrent for potential customers who
needed the hosted program to integrate with their
existing applications, databases, and architectures.
Advanced integration middleware, service-oriented
architectures, and open APIs are making it easier to
create connections between software systems. Many
SaaS vendors now offer a variety of options, such as file
and batch transfer, and provide a defined set of pro-
gramming interfaces to make it easier for customers to
integrate their SaaS and on-premise applications.
“The advent of AJAX and other de facto Web-ori-
ented application development standards has helped
to accelerate the growth of the SaaS market by mak-
ing it easier for users to implement and integrate
on-demand software services,” according to Jeffrey
M. Kaplan, managing director of THINKstrategies
and founder of the SaaS Showplace (www.saas-showplace.com).
A wider array of service options and capabili-
ties will become available as developers make fur-
ther advances in programming technology. Some
experts suggest that an increasing number of SaaS
vendors will offer fully functioning offline versions
of their AJAX-based services that synchronize with
a user’s online account when reconnected to the
Internet. Kaplan suggests the growth of offline
capabilities is a natural market evolution for SaaS
solutions.
SaaS providers will also continue to focus on
improving security, reliability, and integration capa-
bilities and enhancing features offered to end users.
With its current market penetration and expectations
for growth, it’s apparent that SaaS is asserting its posi-
tion in the software industry – an affordable alterna-
tive that allows SMBs to break free of costly enterprise
applications. n
Victor Pinenkov is vice-president of engineering for BlueTie, Inc. He
is responsible for managing the company’s application development,
software infrastructure, and all IT engineers and staff.
SAAS
On-Demand Is in Demand
Advances in programming make SaaS a viable option for SMBs
by Victor Pinenkov
SYS-CON Media
President & CEO  Fuat Kircaali, 201 802-3001
Group Publisher  Jeremy Geelan, 201 802-3040

Advertising
Senior Vice President, Sales & Marketing  Carmen Gonzalez, 201 802-3021
Vice President, Sales & Marketing  Miles Silverman, 201 802-3029
Advertising Sales Director  Megan Mussa, 201 802-3028
Sales Manager  Corinna Melcon, 201 802-3026
Events Manager  Lauren Orsi, 201 802-3024

Production
Lead Designer  Louis F. Cuffari, 201 802-3035
Art Director  Alex Botero, 201 802-3031
Associate Art Directors  Tami Beatty, 201 802-3038; Abraham Addo, 201 802-3037

SYS-CON.COM
Consultant, Information Systems  Robert Diamond, 201 802-3051
Web Designers  Stephen Kilmurray, 201 802-3053; Richard Walter, 201 802-3057

Accounting
Financial Analyst  Joan LaRose, 201 802-3081
Accounts Payable  Betty White, 201 802-3002

Customer Relations
Circulation Service Coordinators  Edna Earle Russell, 201 802-3081
In the early days of the Web, we were thrilled when
we could find addresses and phone numbers of
commercial establishments. As more sophisti-
cated Web sites started using databases behind the
scenes, we came to expect a page that included all
the addresses for each branch of a chain in our city
or neighborhood. When Mapquest came along, we
were delighted that we could click a button and see
the location for each of these shops on a map. Then
came the Web 2.0 idea of the mashup; at a click of a
single button, we could see all the Starbucks in a zip
code all at once.
These mashups are great if you’re looking for one
kind of thing (coffee shops, hotels, gyms) and come
from one source (especially when that source is an
amalgamator like Citysearch or even Google). But
suppose you’re moving to an unfamiliar city; you’d
like to see a map with all the subway stations, gyms,
organic supermarkets, art-house movie theaters, and,
yes, coffee shops so you can figure out where you’d
like to live, preferably each type in a distinctive color
or icon to distinguish it from the others, providing a
real integrated picture of the neighborhood.
There are a couple of ways to do this with com-
mon Web 2.0 approaches today. You could do a map
mashup of each kind of thing you want to see and
display all those maps on the screen at once. This gets
you the information you need, but you might be at
risk of whiplash turning your head so much from one
map to another.
You could spend some time bookmarking all these
things in your favorite mapping application so that
they’re all on a single map. But the idea of a mashup
is that the information should be available from feeds
– like the RSS feeds we’ve become familiar with for
our news. That way, whenever Starbucks opens a
new shop or a neighborhood announces an open-
studios event or another real estate listing is posted,
your mashup will update with it. When we can merge
information from multiple feeds and organize the
results – that’s what we call a Semantic Mashup.
The first step is to turn all of your data sources
into feeds that can be mashed up. RDF is the ulti-
mate mashup language; it provides a simple way to
express information from spreadsheets, databases,
Web pages, XML files, and, yes, even native RDF/XML
files, allowing them to be merged into a single graph
structure.
If you have unfettered access to the database
behind a Web page, you can treat it as RDF using an
RDF database wrapper like D2RQ, but since we usu-
ally want to mash up information from other sources,
and most security administrators don’t provide global
access to their database, another approach is needed.
Enter RDFa, a W3C Working Draft.
RDFa lets Web site developers make an HTML
page do double duty as a presentation page (the usual
role of HTML) and as a machine-readable source of
structured data in RDF. As a simple example, consider
the following excerpt from the Los Angeles Metro Rail
Web page, with a few minor modifications to embed
RDF data in with the HTML display:
<p><font size="2"><div name="rlhv" id="rlhv">
<link rel="rdf:type" href="[LAMetro:RedlineStation]"/>
<meta property="rdfs:label">Hollywood/Vine</meta> <br/>
<a href="../../about_us/metroart/images/pict_mrlhv.jpg">Station Image</a>
<b><br/></b>
<meta property="geo:address">6250 Hollywood Bl.<br/>
Los Angeles 90038 <br/></meta>
60 Park/Ride Lot Spaces (Parking Fee)<br/>
<a href="../../projects_plans/bikeway_planning/images/bike_rack_mrhv.jpg">18 Bike Rack Spaces</a></div></font></p>
The extra <div …>, <link … >, and <meta …> tags
mark up the HTML page to include a few simple facts;
something called “rlhv” has type “RedlineStation,”
label “Hollywood/Vine,” and the address “6250
Hollywood Bl. Los Angeles 90038." Most of this information was already available in the Web page (including the name "rlhv"!); the markup just links it together in a consistent way. The extra markup makes no difference to HTML viewers (like the browser); the page displays in the same way.

RDF
Mashups and Semantic Mashups
The value of RDF: Taking the Web to the next step
by Dean Allemang

Dean Allemang is a long-time traveller in knowledge science. He was awarded his PhD in AI in 1990, worked at five different AI labs in Europe between 1990 and 1996, co-founded a company in the mid-'90s that tried to invent the Semantic Web when the standards were just a gleam in the eye of a few W3C folks, and is now working as a consultant for TopQuadrant Inc. as their Semantic Web expert. Though he is the one running the training course, he learns more from the students than they ever guess. His laptop is full of Semantic Web software, and he even knows how to use more than half of it. "It's an exciting time," says Allemang, "for those of us who have been fans of abstraction for the past three decades."
The Web Ontology Language was built on top
of RDF to let us describe how we want to combine
information from multiple sources together. Types
in RDF (like “RedlineStation,” in this example) are
called Classes in OWL, and typed individuals (like
“rlhv” above) are members of one or more classes.
Information from different sources will have different
types, and so appear as members of different classes;
the Red Line page describes members of the class
RedLineStation, the Blue Line page describes mem-
bers of the class BlueLineStation, and so on. We define a new class for anything else we can find: gyms, amusement parks, organic supermarkets.
Now we can use OWL to describe how we want to
combine these things together. If we want to combine
information about all the metro lines, we define a new
class in OWL called LAMetroStation and make each of
the line classes (BluelineStation, RedlineStation, etc.)
subclasses of it. We can do the same for any other
classes we’ve defined for gyms, amusement parks,
organic supermarkets, whatever. We can express this
class/subclass structure in RDF/XML as follows:
<owl:Class rdf:about="#LALocale"/>
<owl:Class rdf:about="#Entertainment">
  <rdfs:subClassOf rdf:resource="#LALocale"/>
</owl:Class>
<owl:Class rdf:about="#LAMetro">
  <rdfs:subClassOf rdf:resource="#LALocale"/>
</owl:Class>
<owl:Class rdf:about="#Fitness">
  <rdfs:subClassOf rdf:resource="#LALocale"/>
</owl:Class>
<owl:Class rdf:ID="GoldLineStation">
  <rdfs:subClassOf rdf:resource="#LAMetro"/>
</owl:Class>
<owl:Class rdf:ID="GreenLineStation">
  <rdfs:subClassOf rdf:resource="#LAMetro"/>
</owl:Class>
<owl:Class rdf:ID="BlueLineStation">
  <rdfs:subClassOf rdf:resource="#LAMetro"/>
</owl:Class>
<owl:Class rdf:ID="RedLineStation">
  <rdfs:subClassOf rdf:resource="#LAMetro"/>
</owl:Class>
<owl:Class rdf:ID="Cinema">
  <rdfs:subClassOf rdf:resource="#Entertainment"/>
</owl:Class>
<owl:Class rdf:ID="Gym">
  <rdfs:subClassOf rdf:resource="#Fitness"/>
</owl:Class>
<owl:Class rdf:ID="Pool">
  <rdfs:subClassOf rdf:resource="#Fitness"/>
</owl:Class>
<owl:Class rdf:ID="Yoga">
  <rdfs:subClassOf rdf:resource="#Fitness"/>
</owl:Class>
<owl:Class rdf:ID="AmusementPark">
  <rdfs:subClassOf rdf:resource="#Entertainment"/>
</owl:Class>
<owl:Class rdf:ID="Theater">
  <rdfs:subClassOf rdf:resource="#Entertainment"/>
</owl:Class>
<owl:Class rdf:ID="ConcertHall">
  <rdfs:subClassOf rdf:resource="#Entertainment"/>
</owl:Class>

Figure 1: Class hierarchy for places in Los Angeles, displayed in TopBraid Composer.
Figure 2: Places of various types (metro station, amusement park, etc.) shown on a single Google map.
This is easier to see in outline form (as shown in
an RDF visualization tool like TopBraid Composer, see
Figure 1).
The meaning of subClassOf in OWL is very simple;
it just means that all the members of BlueLineStation,
RedLineStation, etc. are also members of LAMetro,
and that all the members of LAMetro, Fitness, or
Entertainment in turn are members of the top class,
LALocale. So to display all the metro stations, you
just query for members of LAMetro; to display all the
places of interest in L.A., you query for members of
LALocale. Send the results off to a display API like Google Maps and you get a map that looks like
Figure 2.
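The subclass semantics at work here can be mimicked in a few lines of JavaScript, which makes it easy to see what the query "members of LAMetro" actually computes. This is an illustrative toy, not an RDF/OWL engine; the class names follow the article's examples, and each class is given a single superclass for simplicity:

```javascript
// Direct superclass of each class (a simplification: OWL allows several).
const subClassOf = {
  RedLineStation: "LAMetro", BlueLineStation: "LAMetro",
  LAMetro: "LALocale", Gym: "Fitness", Fitness: "LALocale",
};

// Every class an individual of cls belongs to, walking up the hierarchy.
function superClasses(cls) {
  const all = [cls];
  while (subClassOf[cls]) { cls = subClassOf[cls]; all.push(cls); }
  return all;
}

// Members of queryClass: individuals whose own type, or any superclass of
// it, matches -- this is what "query for members of LAMetro" returns.
function membersOf(queryClass, individuals) {
  return Object.keys(individuals)
    .filter((id) => superClasses(individuals[id]).includes(queryClass));
}
```

Given `{ rlhv: "RedLineStation", goldsGym: "Gym" }`, `membersOf("LAMetro", …)` yields only the station, while `membersOf("LALocale", …)` yields both, just as the subclass tree promises.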
Just as class structures in object-oriented pro-
gramming help you to organize your program code,
class structures in OWL let you organize your data.
You can display your mashup to include information
at any level of the tree.
But OWL goes far beyond simply amalgamat-
ing information through subclasses; you can also
specify constraints on your information. For exam-
ple, we could say something like "All members of BlueLineStation are restricted to have http://www.topquadrant.com/images/icons/bluetrain.gif as an icon." In OWL/RDF this looks like:

<owl:Class rdf:about="#BlueLineStation">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:hasValue>http://www.topquadrant.com/images/icons/bluetrain.gif</owl:hasValue>
      <owl:onProperty rdf:resource="http://www.topquadrant.com/maps/mapModel.owl#hasIcon"/>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class>
When we do the same for the other classes (redtrain.gif
for RedLineStation, ferriswheel.gif for AmusementPark),
this lets OWL set icons according to the class an instance
belongs to. We send that to the mapping API as before,
and we get a map with different kinds of places indicated
by appropriate icons, as shown in Figure 3.
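The effect of those hasValue restrictions (each class pinning its members' icon) can be sketched as a simple lookup that falls back up the class hierarchy. Class and file names follow the article; the fallback behavior and "default.gif" are my own assumptions for the sketch, not something OWL gives you automatically:

```javascript
// Icons pinned per class, as the owl:hasValue restrictions do.
const iconFor = {
  BlueLineStation: "bluetrain.gif",
  RedLineStation: "redtrain.gif",
  AmusementPark: "ferriswheel.gif",
};

// Single superclass per class, matching the article's hierarchy.
const parentOf = { BlueLineStation: "LAMetro", RedLineStation: "LAMetro", LAMetro: "LALocale" };

// Pick the icon for a class, inheriting from a superclass if none is
// pinned directly; "default.gif" is a made-up last resort.
function iconOf(cls) {
  while (cls && !iconFor[cls]) cls = parentOf[cls];
  return (cls && iconFor[cls]) || "default.gif";
}
```

The result of `iconOf` is what gets handed to the mapping API for each instance, which is how Figure 3 ends up with a distinct icon per class.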
This same approach can be used to customize any part of the mashup that the API can handle: what information appears when you hover over an icon, or what URL you visit when you click through a bubble.

Figure 3: Each type of place from Figure 2 is shown with a different icon; colored icons for each metro line, Ferris wheels for amusement parks, etc.
There's no reason to limit this kind of mashup to maps; the same approach works just as well for any way you might want to display data: calendars, spreadsheets, org charts, pie charts, and PERT charts. There's not
even any reason to limit your mashup to a single one of
these modes; after all, concert halls have performance
schedules, train stations have timetables, and people
in your organization attend meetings that take place at
particular places and times. OWL provides a wide range
of modeling capabilities that let you combine informa-
tion from multiple sources in your filters. Following the
map example, display the location of all the amusement
parks, but use a special icon for the ones you’ve visited,
and put the date you visited them on the hover label.
This paints a pretty rosy picture of how you can build
powerful semantic mashups with standards-compliant,
readily available tools. But there's a bit of rain on this parade – a prerequisite for all this mashing up is that the data be available in RDF – and who wants to do all that work? My
data is in a database, or a spreadsheet, or HTML, or XML,
or… Do I have to transfer it to yet another format? And
especially RDFa – who’s going to do all that marking up just
to make their data easier for other people to use?
Well, fortunately for semantic mashups (and
indeed, fortunately for the Web, since this is how the
Web came into being in the first place), the Web is full
of exhibitionists. The only reason their data is on the
Web in the first place is to show it off. And the more
people who have their data available in RDF, the more
valuable it is to have your data available that way too
for semantic mashups like the ones we describe here.
This is the sort of network effect that brought the Web
into being. TopQuadrant is doing its part to start this
bootstrap by providing semantic mashup tools; you
can get in the game by making your own data avail-
able in RDF. n
Advertising Index

Advertiser            URL                         Phone           Page
Adobe                 www.adobe.com                               23
Backbase              www.backbase.com            866-800-8996    47
Cynergy Systems       www.cynergysystems.com                      50, 51
Etelos                ajaxworld.etelos.com                        37
Google                www.google.com/jobs/                        31
helmi technologies    www.helmitechnologies.com                   41
ICEsoft               www.icesoft.com             1-877-263-3822  21
JackBe                www.jackbe.com              240-744-7620    4
JetBrains             www.jetbrains.com/idea                      27
Kapow technologies    www.kapowtech.com                           43
Laszlo Systems        www.laszlosystems.com                       2, 3
Nexaweb               www.nexaweb.com                             7
ORACLE                oracle.com/middleware       1-800-ORACLE1   9
Parasoft              www.parasoft.com            888-305-0041    35
Quasar Technologies   www.quasar-tech.com                         29
Servoy                www.servoy.com              805-624-4959    45
Sun Microsystems      sun.com/startup                             13
telerik               www.telerik.com/radajax                     52

Advertiser index is provided as an additional service to our readers. Publisher does not assume any liability for omissions and/or misprints in this listing since this listing is not part of any insertion order.
Jumpstart your Startup.
Get enterprise-grade products at ridiculously low prices, free world-class software and expertise to get you up and running fast.
Visit sun.com/startup.
© 2007 Sun Microsystems, Inc. All rights reserved.
Time and again I’ve seen applications fail
because of an unusable interface design. This
not only includes the look-and-feel of the
application, but the actual interaction processes that
it contains. I know many people who immediately
leave a Web site if it does not look appealing or func-
tion simply from the start. It doesn’t matter whether
the application could have helped them accomplish a
goal, it simply doesn’t stand a chance without usable
design and functionality. Simply put, if an application
doesn’t look usable it’s not usable.
There are far too many options on the Web today,
leaving no time for discovery. People want information
immediately and they will get it elsewhere if an appli-
cation doesn’t provide it. This is one of many reasons
why the primary focus during application develop-
ment should be on the design of the application and
the code should be used to accomplish those design
goals. Although design is largely based on what some-
thing looks like, Web application design is also based
on how interactions flow and how a user interacts with
that design to accomplish a goal. The code should be
focused on making the interface more usable, the tasks
easier to understand and so easier to accomplish.
AJAX offers solutions to this problem by keeping
the design tightly integrated with the code. As an AJAX
developer it’s impossible not to consider how the code
relates to the interface because the request/response
model that it creates requires knowledge of how to
parse and display the data to the user, how to interact
with the server from the front-end, and how to use
code to stylize the end results. AJAX is replacing the
disjointed user experiences of the past with cohesive
applications that require that design and code work
together. AJAX makes the right features of an applica-
tion more responsive and keeps the attention of the
user where it counts. Web 2.0 relies on these concepts;
although not new to some, the Web is finally becom-
ing a reliable destination for implementing them.
AJAX has existed for years now, but many browsers
didn’t support the technologies that produce this
functionality. Now we’re in the presence of Web 2.0
where design and code are integrated to produce Web
applications of a new breed.
Coding an Experience
The whole purpose behind using AJAX is centered
on the idea of creating a better user experience, one
in which users don’t have to wait for pages to load,
allowing them to get information on demand and
achieve what they set out to do with an application.
Although AJAX requires development skills to cre-
ate the complex server-side requests and front-end
interactions, the learning curve is truly not very steep
and any designers with the motivation and drive can
easily find resources or wrappers to help them imple-
ment an AJAX solution. Aside from a good idea, a successful AJAX application relies heavily on the design and interaction processes it implements. To create a successful AJAX application, the code shouldn't just be solid and well written; its primary focus should be keeping itself invisible to the user and focusing on
what occurs at the front-end.
Until recently desktop applications were the way to
provide complex possibilities to users. With browsers
adopting more standards the Web is becoming a new
realm of experience capable of providing powerful func-
tionality that rivals the desktop. The Web not only pro-
vides information to the user, it lets users provide infor-
mation to the Web. We’re experiencing the beginning of
a new Web experience that lets users interact with infor-
mation, collaborate, and really immerse themselves in
the information they want. AJAX simply provides a new
platform that lets users achieve this possibility.
Tech Talk
AJAX relies on the request model inherent in the XMLHttpRequest object, but is made accessible to the user by the integration of JavaScript, CSS, and (X)HTML, which together make up DHTML. All three of these
technologies produce the graphical user interface or
GUI. The GUI is where the user views and interacts with
an application. In an application that implements AJAX,
the GUI and the server share a common bond through
XML. With AJAX, XML is the glue that ties design with
code and produces a process like no other. The process
begins with an interaction made by the user. The inter-
action produces a request that’s made to the server, the
server responds with static or dynamic XML. JavaScript
Design
AJAX,WhereDesignMeetsCode
by Kris Hadlock
Kris Hadlock is the author of AJAX for Web
Application Developers (Sams, October 2006)
and a featured columnist and writer for numer-
ous Web sites and design magazines. He is also
the founder of Studio Sedition, a Web develop-
ment firm, the co-founder of 33Inc alongside
partner Robert Hoekman, Jr., and maintains a
blog called Designing with Code. You can find
all of the above and more about Kris at
www.krishadlock.com
gets the response and handles displaying it to the user.
CSS is used to stylize the elements in the interface and
the (X)HTML is used to display the data to the user in
the presentation defined by CSS. The figure below shows
the layers of an AJAX application and how the GUI and
server work closely to integrate design with code.
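The display step in that flow, where JavaScript receives the parsed response and hands presentation off to CSS, can be sketched as a pure function. The `<li>` markup, the "item" CSS class, and the shape of the data are illustrative assumptions, not part of any framework:

```javascript
// Turn data items (already parsed out of the server's XML response) into
// (X)HTML. The CSS class "item" carries the styling, so presentation
// stays in the stylesheet rather than in the code.
function renderItems(items) {
  return items
    .map((it) => '<li class="item">' + escapeHtml(it.name) + "</li>")
    .join("");
}

// Minimal escaping so text from the server can't inject markup.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}
```

In the page, the result would be assigned to a container's innerHTML, completing the request/display loop without a reload.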
The GUI and the server work in cohesion to produce the
results necessary for a successful application. AJAX integrates
the front- and back-end by letting them communicate with-
out reloading the page. Complex data exchanges can occur
without interrupting the experience. Exchanges such as
saving, retrieving, updating, or deleting data from a database
are possible just as with the standard HTTP request model.
There are many different options when deciding how to approach an AJAX application; let's focus on a couple that leverage the full capabilities of an AJAX experience.
Approaches
When developing AJAX applications there are a
number of ways to approach the creation of an appli-
cation. You can build one from scratch, choose from
one of the many existing libraries and frameworks, or
integrate these options to create a hybrid.
Creating an application from scratch requires that the developer know both the client- and server-side technologies, or that two tightly knit teams collaborate. AJAX creates a bridge that
allows closer collaboration between programming and
design teams. To a single developer this approach pro-
vides much more power because it requires a complete
understanding of the application, such as how requests
are made, what server-side processes occur, and how the
responses are parsed and displayed. A developer who
uses this approach has to understand how the data relates
to the user and server and how to integrate the two seam-
lessly to create an exclusive experience. This approach
can also teach developers techniques they wouldn’t have
learned otherwise. Most often programmers don’t touch
the GUI and GUI designers don’t touch the back-end, but
AJAX requires both to interact so closely that it’s impos-
sible not to use both unless a team of developers and
designers are involved. Ultimately an application’s focus
should be on how to leverage both sides to create a more
usable experience.
The Right Place at the Right Time

As with any technology, AJAX isn't always the best solution to a problem. Using it in the right place at the right time is crucial to its purpose. As always, when planning an application, the right blend of technologies is
imperative to the purpose and overall goal of the appli-
cation. AJAX is rarely necessary for an entire application
and should be used for more specific features, features
that would benefit in some way from using this group
of technologies. It’s rare that any one technology is right
for an entire application, as was discovered with Flash in
the not so distant past. Most developers learned that it’s
more beneficial to use extremely interactive technolo-
gies in smaller doses where it counts. This automatically
draws more attention to those areas of the application
and provides the user with focus. n
Figure 1: The layers of an AJAX application. JavaScript, CSS, and (X)HTML make up the graphical user interface; the combination of these front-end technologies is DHTML. This is where the user views and interacts with an application, where AJAX requests are made, and where responses are received and then displayed to the user. XML and a server-side language make up the server side.
Building the Data Web
A call for a data syndication standard
by Ric Hardacre

Ric Hardacre discovered the Web in the mid-'90s when he really should have been at lectures and not in the university computer lab. He's since worked as a Web solutions developer, systems architect, and wireless and satellite network specialist.

The Web is full of data: statistics, surveys, and
reports can be found on almost any topic you
care to search for. It’s this very fact that makes
the Web the first stop in anyone’s research. Want to
know the average number of petals on a daisy? Thirty-
four. The number of species of whale? Eighty (or
thereabouts, depending on your definition of “whale”
apparently).
Of course one could spend all day typing random
questions into a search engine but the serious busi-
ness of research lies in statistical analysis, comparing
datasets for trends. One obvious example is comparing the performance of various stock markets around the globe over time; this happens so frequently that it's quite a simple task online. So is comparing currencies' performance against each other. But what if you
wanted to do both? What if you wanted to compare
the yen against the London Stock Exchange’s closing
index? At first glance that might seem like a nonsensi-
cal comparison but in financial research trends are
key and trends can’t be found without comparing
datasets, no matter how far and wide.
Unfortunately most, if not all, of the data on the
Web is embedded in Web pages, and most often only
presented in graphical form. There’s just no way of
looking at a chart of something over time and reliably
viewing the corresponding data, at least, not across
the board.
Now let’s step back a moment and look at another
existing Web technology: RSS (or ATOM if you are so
inclined). Really Simple Syndication is a way for Web
sites to publish a simplified list of links to articles and
items on their sites. So instead of logging into a news
site every day to check for new items you “subscribe”
to an RSS feed and just check the feed. What made
this possible was a simple, standard, open format that
was easy both to create and read by both humans and
computers.
Over the past few years RSS has caught on in a big way; it's simple to write your own RSS parser and
create a news ticker that watches your favorite Web
sites, for example. Browser plug-ins have given way
to native support and suddenly the Web started to
feel just that little bit more integrated. RSS foretold
the Web 2.0 age and should not be overlooked. Before
there were JavaScript APIs coming out of the wood-
work there were RSS aggregators that built virtual Web
sites out of the parts of other Web sites.
Now let’s consider the most seen AJAX powered
mashup: modifications of map sites, adding real
estate pictures and locations to a map, for example.
This sort of thing would be made a lot easier and more accessible if the real estate agents published an RSS-like feed of properties, along with their GPS coordinates and prices. Even in this limited scope the possibilities are endless: a burger chain could publish
the locations of its restaurants, or news bulletins
could come attached with markers. Planes, trains, and
– well, possibly – automobiles could be tracked and
tacked onto maps. Want to see where the roadworks
are on your journey? Just import the official highway’s
feed of roadworks into any mapping site or software
of your choosing.
This problem has already been solved to a certain
extent of course with Google Earth, but the scope is
limited by its narrow focus, purely on coordinates.
Wouldn’t it be great if a feed could contain not only
property locations and prices, but that your browser
could detect the presence of both. The browser itself
would then let you tack them onto a map, or sort by
price in a spreadsheet. This too has limits. I doubt
“number of bedrooms” could be generalized into a
universal datatype, but price (currency=”USD”) and
coordinates (type=”GPS”) are easily comparable and
transposable across differing datasets. The next step
would be to merge the property and burger chain
feeds, selecting only those houses within x miles of
the nearest drive-thru.
So let’s get back to our original data problem,
this time suppose I want to compare the popula-
tions of London, New York, and Tokyo over the past
century. Sounds simple enough and a few minutes
of Web searching yielded a handy “Inner London”
set of census data I could use, which was encourag-
ing. However, New York took a few minutes longer
before it too yielded data. Tokyo, however, proved too stubborn and I gave up, having only managed to get data for a handful of random years: 2000, 2003, and
1960, hardly an extensive dataset. I got relatively good
data for London and New York, one set of figures
for each decade. One, however, was embedded in a
HTML table and the other in a text file, formatted and
aligned using spaces.
The next step is to copy and paste each snippet of data cell by cell into a spreadsheet and run the graph wizard. Then you can behold the wonder as Inner London's population stays essentially still while New York's stealthily overtakes it. Now I could
be accused at this point of serial laziness, but it just
seems like a lot of work, especially the extensive
search engine hammering. Suppose that each site
had published a standard formatted set of data, and
I could run a dataset search to pull it back. Then an
aggregator could look at the data and splice it togeth-
er. Sounds farfetched, but is it really?
Let’s now pull RSS back into this discussion, sup-
pose we apply a similar principle to this data too.
First, give the dataset a header including its title and
source then add each column of data with a strict set
of criteria for defining its type and formatting. This
way a piece of aggregator software would look at our population sets and see two columns in each, one for the year and one for the population. Population is just a number, and in the interest of keeping it simple our columns could probably be just "year" and "number," leaving it up to the column title to convey what the number is, well, counting.
Next the aggregator would look across the datasets
and recognizing that both have a major axis of “year”
instantly know that they can be combined, even if the
years don’t match exactly (British population counts
are done on years ending in 1, not 0 for example).
Provided the definition of “year” is fixed and the num-
bers representing the year are always four-digit then
there can never be any ambiguity between different
datasets.
Now the aggregator would see that both have a
second column of number, but even though it doesn’t
know what the number pertains to it would notice
that their lowest and highest values fall into a similar
range, hardly rocket science. The result is the ability to
graph them both together. Even if we were comparing
two datasets with differing second columns then, like
we do manually, we would overlay them both scaled
to facilitate easy comparison with a label to say what
units each are in.
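The aggregator behavior sketched in the last few paragraphs can be illustrated in a few lines of JavaScript. The function names and data shapes here are hypothetical; the point is the year-alignment idea, not any real aggregator's API:

```javascript
// Find the row whose "year" value is closest to the given year
// (census years may be offset: British counts end in 1, American in 0).
function nearestByYear(rows, year) {
  var best = rows[0];
  for (var i = 1; i < rows.length; i++) {
    if (Math.abs(rows[i].year - year) < Math.abs(best.year - year)) {
      best = rows[i];
    }
  }
  return best;
}

// Combine two datasets that share a "year" major axis: one output row
// per year in dataset a, paired with the nearest-year value from b.
function combineByYear(a, b) {
  var combined = [];
  for (var i = 0; i < a.length; i++) {
    combined.push({
      year: a[i].year,
      a: a[i].value,
      b: nearestByYear(b, a[i].year).value
    });
  }
  return combined;
}
```

Because "year" is a fixed, four-digit axis, the pairing stays unambiguous even when the two censuses were taken a year apart.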
And here’s a crude example, thrown together in
minutes:
<?xml version="1.0" encoding="utf-8" ?>
<dataset title="Census Population of Inner London 1901-2001" key="year">
  <head cols="2">
    <col id="year" title="Year" format="yyyy" />
    <col title="Population" format="0" />
  </head>
  <body>
    <tr><td>1901</td><td>6506889</td></tr>
    <tr><td>1911</td><td>7160441</td></tr>
    <tr><td>1921</td><td>7386755</td></tr>
    <tr><td>1931</td><td>8110358</td></tr>
    <tr><td>1941</td><td note="Estimate">8160000</td></tr>
    <tr><td>1951</td><td>8196807</td></tr>
    <tr><td>1961</td><td>7992443</td></tr>
    <tr><td>1971</td><td>7368693</td></tr>
    <tr><td>1981</td><td>6608598</td></tr>
    <tr><td>1991</td><td>6679699</td></tr>
    <tr><td>2001</td><td>7172036</td></tr>
  </body>
</dataset>
Basically what we have here is a combination of technologies: the XML is designed to reflect RSS and bear more than a passing resemblance to an HTML table. In the latter case it means that any programmer familiar with browser DOM scripting can easily parse this too. It takes a different approach to a SOAP
dataset in that the datatypes and column names are
listed in a header block while the data table structure
is fixed. It’s also more lightweight than a serialized
recordset but still contains the important details
about the data we’re publishing.
Because it’s a straight table with defined columns
we can use it to publish serial data, as above, or dis-
creet data, such as the list of burger restaurants and
their locations. Finally, because it’s simple XML it’s
extensible, rows, or cells could have individual notes
attached. Indeed, notice that in 1941 World War II was
going on so we only have an estimate to work with.
Suddenly comparing global temperature to the
numbers of sea-borne pirates over time becomes so
much easier – and though I’m only speculating out
loud here, I for one would welcome such a standard.
Imagine what this would do for business practice
in general, and democracy at large. If the de facto
standard of openness was to publish on one’s Web
site a syndication of data: incomes, expenditure, tax,
contracts – all there for the public to parse, analyze,
and compare in the click of a button.
Wait a minute, I hear you say, what about ODF
spreadsheets? Yes, they’re open, standard, and
already XML formatted. But suggesting that a data
syndication format would be pointless if you could
already download and open a spreadsheet in a
spreadsheet application is the same as asking of RSS:
Why not just visit the Web site and check for new
news items yourself? So ask yourself this instead:
If that had remained the attitude for all this time,
would we have a Web 2.0? n
Exposing and Preventing Errors in AJAX Applications
Using a quality infrastructure
by Nathan Jakubiak

Nathan Jakubiak is a software engineer at Parasoft. He currently manages the development of Parasoft WebKing and is the lead developer of Parasoft's AJAX testing solution. Nathan has extensive experience developing testing tools for Web-based technologies and has a number of pending patents in the area of automated regression testing.

When organizations focus on exposing
errors in any kind of application, they
traditionally focus on testing. Once an
application is finished, it’s passed to a QA team to find
and report problems. There’s some back and forth as
the QA team reports problems, development fixes
them, QA retests the application, and so on. Once that
cycle has been repeated a few times (if there’s time
for that), the application is released. This approach
is problematic, however. The quality of the applica-
tion is assessed at the end of the project. By that time,
incorrectly implemented requirements or bugs may
be firmly planted in the application and have features
built on top of them, and addressing the problems
can be very difficult and time-consuming, if not
impossible. With no way to determine whether the
quality objectives are actually being met throughout
the development process, it’s impossible to deliver a
truly quality application.
Another issue unique to Web applications that meet critical business needs is that many of them are in a continual "beta" state. There's no time for a
traditional development cycle where requirements
are defined and the application is then architected,
built, tested, and released. Features are put into the
application as quickly as possible because they affect
the bottom line. Since there’s no time for heavily
investing in the design of a new feature, it’s quickly
added as a “trial”; if the feature doesn’t meet the needs
of the customer, it can be removed just as quickly. This
means that many Web applications are in a continual
state of flux; there’s never a finishing point. Features
are simply added one or two at a time, on demand, as
they’re needed. Without being able to take any signifi-
cant amount of time for testing, how do you ensure
the quality of the application?
The answer to ensuring quality lies in creating a
quality infrastructure that enables an organization to
establish quality through all stages of development.
However, in the case of AJAX applications, the follow-
ing challenges arise:
1. AJAX applications are very complex. A large number of components make up the application, such as the application server, database server, back-end
Web Services, client-side JavaScript engine, and
different browsers with different JavaScript/DOM
implementations, just to name a few. Logic no
longer resides only on the server, but also in cli-
ent-side JavaScript. Figure 1 shows the aspects of
a typical AJAX application on which I focus in this
article.
2. The user interface is put together bit by bit – it
consists of an initial HTML layout, followed by
inserted data from asynchronous server requests,
all constructed using a JavaScript engine to pro-
duce the finished result.
3. Because Web applications are delivered through
browsers, and browsers have different JavaScript/
DOM implementations, it’s crucial to test the Web
application against different browser types and
versions. This can be time-consuming and tedious
if it’s not automated.
Because of these complexities, AJAX applications
require specific ways of ensuring their quality. In this
article, I’ll focus on what it means to define a quality
infrastructure, and I’ll also explain what that looks like
in detail for an AJAX application.
Defining and Enforcing Development Best Practices

The first step in building a quality infrastructure
is defining quality at the beginning – with devel-
opers. This means defining the development best
practices that are going to be followed to ensure that
when the application is ready for deployment, it has
been developed correctly the entire time. You may
not think that this is important, but consider this: the
quality of the code base for your Web applications
directly influences your organization’s bottom line.
If the application works properly, more customers
use your services; if it doesn’t, they may never return.
Functional and other kinds of tests are crucial to vali-
dating the quality of your application; however, test-
ing finds errors after they’ve been introduced. Many
errors can be prevented simply by identifying them in
your source code as soon as they’ve been added.
One source of best practices that you should
adopt is common industry best practices for
how code should be written to prevent errors.
Conventional wisdom exists regarding the best
way to write code in different languages to solve
different problems. Any given project may not
benefit from all the coding best practices that exist,
but every project can benefit from some subset of
them.
In addition, your organization may have defined
its own internal standards for ensuring that applica-
tions are written properly. Most development groups
already have these kinds of best practices defined,
either formally or informally. However, defining them
is only part of the solution – you must also enforce
these best practices.
Best practices do no good unless they’re followed.
You’ll get pushback from developers here. I know
because I am one. Developers don’t like being told
how to write code. However, if the infrastructure
is defined in such a way that best practices can be
enforced and followed painlessly, developers can
easily adapt. Enforcing best practices means using a tool (either open source or from a vendor) to validate the code on a regular basis and ensure that best practices are being followed. Each developer should be able to run best practices validation
on his code before checking it in. There should also
be a nightly process running against the entire code
base that identifies violations and which developer
is responsible for them, and sends a report to that
person so that he can fix them. This handles cases
where the developer forgot (or refused) to check
his code before he checked it into the source code
repository.
Development Best Practices for AJAX Applications

The following best practices should be enforced
for AJAX applications:
JavaScript If you’re not using a server-side AJAX frame-
work, your application is likely heavily dependent
on JavaScript. It ‘s much easier to introduce errors
into JavaScript code than into a compiled language
because there is no compile-time check for errors.
Hence syntax checking is important for JavaScript
code. JavaScript’s language features promote bad
programming practices that can make it difficult
to manage and reuse a JavaScript code base – for
example, it’s very easy to add global variables and
methods. So it’s important to eliminate these kinds of
problems by continually validating that coding best
practices are being followed. Browser compatibility
is a huge issue for AJAX applications. It’s critical to
ensure that your developers aren’t using browser-
specific JavaScript (and CSS) features, and that your
own custom browser-agnostic JavaScript methods are
being called instead of browser-specific methods that
could break your application in some browsers.
Use an enforcement tool that automatically checks for syntax problems in addition to having built-in rules that look for common JavaScript coding mistakes or browser incompatibility issues. The enforcement tool should
also let you easily define your own internal best
practices guidelines and automatically enforce them.
Examples of some custom guidelines you might cre-
ate and enforce:
1. Every method or variable must be defined in a
namespace
2. Use your own NodeCreator.createNode() method
instead of the standard DOM methods for creating
nodes to handle browser inconsistencies.
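A sketch of what those two guidelines look like in code. MyApp and the helper bodies are hypothetical; the point is the pattern, not any particular implementation:

```javascript
// Guideline 1: one global namespace object instead of free-floating
// global variables and functions.
var MyApp = MyApp || {};

MyApp.text = {
  // a method living inside the namespace rather than as a global
  trim: function (s) {
    return s.replace(/^\s+|\s+$/g, "");
  }
};

// Guideline 2: funnel all DOM node creation through one method so that
// browser-specific quirks are handled in a single place.
MyApp.NodeCreator = {
  createNode: function (doc, tagName) {
    // branch here for browser inconsistencies instead of at call sites
    return doc.createElement(tagName);
  }
};
```

An enforcement rule can then flag any global definition outside MyApp, and any direct call to document.createElement outside NodeCreator.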
Security Policies

Security issues are more important for AJAX applications than for traditional Web applications. There
are two reasons for this. First, the application logic
that now resides on the client in the form of JavaScript
exposes more of the guts of the application to would-
be hackers, making it easier to determine how to
attack the application. Second, the automatic page
update mechanisms of many AJAX applications pro-
vide more opportunities for cross-site scripting kinds
of exploits.
However, the methodologies for preventing securi-
ty flaws haven’t changed. You should take a two-phased
approach. The first phase is to define a security policy
that specifies how code should be written to ensure
security. A policy might specify things such as:
1. Call a custom method you’ve defined called clean-
Input() on any client input to the Web application
to remove malicious input
2. For Java applications, use the class Prepared-
Statement for all DB queries to prevent SQL injec-
tion exploits
The second phase should involve functional vali-
dation that the security policy is being followed. I
discuss this below. With a well-defined security policy,
this two-phased approach should let you build secu-
rity into your application.
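As an illustration of the first policy item, a hypothetical cleanInput() might HTML-escape the characters that cross-site scripting payloads rely on. A real policy would specify the exact rules; this is only a sketch:

```javascript
// Escape characters that let client input break out into markup.
// Hypothetical helper named after the policy item above.
function cleanInput(input) {
  return String(input)
    .replace(/&/g, "&amp;") // escape & first so later entities survive
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```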
Application Source Code

Your application probably uses a range of technologies and languages, including but not limited to CSS, HTML, JSP, JSF, Java, ASP.NET, PHP, etc. All
source code should be validated using a set of best
practices to ensure maintainability and reliability. As
with JavaScript, you should use an automated tool
to enforce the best practices that meet the goals
and needs of your Web application. The tool should
have built-in best practice enforcement as well as
allow you to define your own internal best practices.
Application Server Output

Code that's delivered to the browser from the server
is often the result of an application server processing
server-side code (JSP, JSF, and ASP, for example) and
outputting HTML, JS, etc. As a result, it’s often impor-
tant to validate the server output once it reaches the
client, since there’s no physical representation of the
“page” on the server. For AJAX applications, HTML
code often needs to contain certain elements (div
or span tags, for example) onto which AJAX libraries
attach the user interface controls that they define.
Validating that such essential elements exist is impor-
tant in ensuring that developers don’t inadvertently
break page components. This can be done by using
an automated tool to define additional best practices
customized for an individual page or group of pages,
and enforcing them on the appropriate rendered
pages. Enforcing other best practices regarding well-
formed and accessible HTML is also important here.
Creating a Nightly Build/Test Infrastructure

Creating a nightly build/test infrastructure is the second major step in building the overall quality infrastructure. At Parasoft we're continually amazed at how many
organizations don’t do a nightly build. Without a nightly
build, how can you be sure that your current code base
even compiles? Or runs properly? Without a nightly build,
errors introduced into the application can exist for a long
time before they’re detected. So, a nightly build is a crucial
component of the quality infrastructure. The nightly build
must be easily repeatable (meaning it can be run without
user intervention), and must build the full application.
Next, the infrastructure should run automated tests
every night against the application built by the nightly
build. The tests should run automatically without user
intervention, and a report should be generated and
possibly e-mailed with the results of the run. The
report should categorize the results in an easy-to-read way, or else they'll be ignored.

Figure 1: Focus areas for ensuring quality in an AJAX application

But the infrastructure isn't enough. There
must be a culture in the development group where failures in the nightly
tests aren’t allowed to remain. Failures must be addressed as soon as they
appear. Otherwise you lose the benefit of automated tests.
This is where the real power in the quality infrastructure lies. The
development group should adopt a practice that, as each new feature
is added, one or more automated tests that validate the feature are
added to the test suite. In addition, for each bug that’s fixed, one or
more automated tests should be created that find the error being
fixed. Personally, some of my nightly tests are JUnit-based. I’ve made it
a practice to create a JUnit test that catches the bug I’m fixing before I
actually fix it. Then I know that my JUnit test reproduces the problem,
and by running it I can know when I actually do fix the problem. This
speeds up my development AND adds another test to my test suite.
By adopting these practices, you’ll develop a large array of tests in
the long run. If they all succeed, you can be reasonably sure that your
application has no major flaws. If you have to put out an emergency
build and you have no time for testing, you can have a reasonable
amount of confidence that the application is stable and works cor-
rectly if the automated tests succeed. This is especially true in the
world of Web applications, where features are bled into the applica-
tions constantly.
Functional Tests for AJAX Applications

Since automated nightly tests are so important, let's focus on the
details of what should be functionally tested in an AJAX application.
Back-end Web Services

Many AJAX applications depend on Web Services. These may be
services that your company owns and manages, or they may be services
exposed by your partners. For Web Services owned by your company, you
should use an automated Web Services testing solution to validate that
the Web Services provide the functionality they’re supposed to, and that
they handle both expected and unexpected inputs. Your Web application
depends on these Web Services providing the expected data, so you need
to ensure that they do. Whether you own the Web Services or you use
services exposed by someone else, you also need to validate that your
Web application correctly handles unexpected responses from the Web
Services (such as the service being inaccessible, the service returning
malformed or unexpected data, etc.). This way you can be certain that
your Web application will provide an appropriate response under unex-
pected conditions.
Asynchronous Message Content

When an error occurs with the page-update mechanism of an
AJAX application, it’s typically in one of two places. The first place
is in the contents of the asynchronous messages used to commu-
nicate with the server. The second place can be in the logic of the
JavaScript engine that processes the asynchronous messages and
that updates the user interface. So it’s important to validate that the
server returns expected content for the asynchronous messages.
Regression tests should be set up to verify that the server returns
the correct responses for both expected and unexpected inputs. If
there’s a problem in the Web application, and these tests are suc-
ceeding, then the problem must be somewhere in the client-side
JavaScript engine.
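One way to set up such a check, sketched here with a JSON-format response for illustration (the message format and function name are assumptions, not from the article):

```javascript
// Compare an asynchronous server response against a recorded expectation.
// When this check passes but the UI still misbehaves, the fault lies in
// the client-side JavaScript engine rather than the server.
function responseMatches(actualText, expected) {
  var actual;
  try {
    actual = JSON.parse(actualText); // malformed responses fail here
  } catch (e) {
    return false;
  }
  return JSON.stringify(actual) === JSON.stringify(expected);
}
```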
Client-Side JavaScript Engine

Validating the functionality of a client-side JavaScript engine may
be the most difficult aspect of ensuring quality in an AJAX applica-
tion. This is why it’s important to use both best practice enforcement
and functional testing to attack the problem from multiple angles.
Functionally validating the JavaScript engine is done by constructing
functional tests for each user interface control in your Web applica-
tion. You should use an automated tool that lets you record a user’s
actions as he interacts with a browser and set up validations that
ensure the control correctly responds to the user’s actions. Since
what the user sees in a Web application is directly tied to the brows-
er’s DOM, and since AJAX applications rely on heavy manipulation of
the DOM, the automated tool should be able to perform validations
at the level of the browser DOM. Tools that emulate a browser, rather
than actually using a real browser for testing, are probably not going
to be able to accurately test the application. The tests should be
component-level tests of individual page components, rather than
functional scenarios of the Web application as a whole. When these
tests are combined with accurate regression tests on asynchronous
message content, you can have confidence that your AJAX compo-
nents are working properly.
End-to-End Functional Scenarios

Automated functional tests of the scenarios that describe your Web
application users’ actions are crucial to ensuring that your Web application
is providing its critical business functions. This is where you validate that
all aspects of the complex AJAX application are working together properly.
You should identify what the important scenarios are, and then use an
automated tool that can record and replay the scenarios easily. The tool
should let you validate that the user interface updates properly, as well
as test that the user’s actions can actually be done. As with validating the
JavaScript engine, the user interface validations should be done at the level
of the browser DOM. In addition, since it’s probably important for your
Web application to support multiple kinds of browsers, the test tool should
let you define the tests once, yet run them against the different browsers
and different browser versions that your application needs to support.
Functional Security Tests

Functional security tests ensure that the security policy that's been
put in place actually works and is being followed. These tests should sup-
port, rather than replace, the policy. The tests should send various kinds
of unexpected inputs to the Web application to validate that it responds
appropriately to hacker attacks (such as cross-site scripting and SQL
injections). Server responses should be validated to ensure that the appli-
cation filters out the inappropriate inputs and doesn’t reveal information
that would help a would-be hacker with further attacks.
Conclusion

Creating a quality infrastructure will let you deliver your applications quickly, but with confidence that they are reliable and robust.
This is especially true of AJAX applications. Creating the infrastructure
may require a significant investment of time and resources upfront;
however, experience with my own development and with that of cli-
ents has shown that the gains from finding and preventing errors early
using a quality infrastructure are many times greater than the initial
cost. n
The Big Shift
AJAX and Web Service Bus architectures enable the shift from request and response to message and event-driven application architectures
by Kevin Hakman

There have been numerous articles over
the last year on how the complemen-
tary nature of AJAX and service-orienta-
tion together are rapidly transforming the way in
which we design, architect, deploy, and manage
applications. The ramifications of this change
impact nearly every aspect of business application
development, from design conception, architec-
tural planning, and implementation to unit testing
and monitoring. New methodologies, tools, and
infrastructure are now emerging as the industry
evolves from the three-tier system concepts that
have dominated the last decade on the Web into
the Service Oriented Architectures (SOA) currently
being implemented.
Let’s look at several major areas of application
development and architecture that are shifting from
three-tier implementation styles to AJAX and SOA
implementation styles to deliver more powerful,
high-performance, extensible, manageable, and
cost-effective solutions:
• Client Side: Web pages are becoming applications.
• Web Protocols: Publish/Subscribe events and messages over HTTP are becoming peers with standard HTTP Request/Response.
• Server Side: Centralized, siloed application deployments are getting plugged into Enterprise Service Buses to make way for massively distributed service deployments.
• Coding: Custom coding page-based GUIs is diminishing in favor of assembling solutions from ready-made parts and services.
In general this evolution can be summarized as moving from homogeneous, language- and vendor-centric solutions to the mixed-environment, heterogeneous systems typical of a business information system.
Client Side: Pages Become Applications Unto Themselves

In many ways AJAX has provided the perfect
complement to HTTP information services. AJAX
is well suited to consuming raw information
over HTTP, be it SOAP, XML, JSON, or other formats,
and transforming that information into what the
user sees on the screen. Both AJAX and SOA were
born from taking the Hypertext Markup Language
(HTML) out of Hypertext Transfer Protocol (HTTP)
transmissions. Instead of a Web server generating an
HTML page, it provides an XML, SOAP, or JSON data
service with which any number of applications can
communicate. And instead of a Web browser repeat-
edly, often redundantly, rendering a stateless, static
HTML page received from the server, the browser
gets and executes JavaScript to update portions or
the entirety of the user interface within the stateful
context of a single HTML page.
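The idea can be sketched in a few lines of JavaScript: instead of receiving finished HTML, the client receives raw data and renders it into one region of the page. The payload shape and function name below are invented for illustration, not any particular service's API.

```javascript
// A JSON payload a mail-style service might return instead of a
// rendered HTML page (field names are illustrative only).
var response = '{"messages":[' +
  '{"from":"[email protected]","subject":"Q1 numbers"},' +
  '{"from":"[email protected]","subject":"Lunch?"}]}';

// Client-side rendering: transform the raw data into markup for one
// region of the page, rather than fetching a whole new page.
function renderInbox(json) {
  var data = JSON.parse(json);
  var rows = data.messages.map(function (m) {
    return '<tr><td>' + m.from + '</td><td>' + m.subject + '</td></tr>';
  });
  return '<table>' + rows.join('') + '</table>';
}

console.log(renderInbox(response));
```

In a real page the string would arrive via XMLHttpRequest and the returned markup would replace the innerHTML of a single element, leaving the rest of the page's state intact.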
With the tremendous exploration of the relation-
ship between AJAX and services that has occurred
over the past 18 months, it’s important to under-
stand the difference between the ideas of “AJAX-
enriched pages” and “AJAX applications.” A quick
peek at Yahoo!’s personalized homepages and Yahoo!
Mail easily demonstrates the distinction. On the My
Yahoo! page you get a variety of AJAX interactions
within the page. You can drag-and-drop a content
section to rearrange the layout, even remove an item
from the page with an AJAX call in the background
while it fades from view, then other content areas
flow in to fill the gap. Pretty cool. But as soon as you
click a content item, you go to a different page and
so primarily navigate page to page to page. You go
to the information, though some of it comes to you
through AJAX without leaving the page. I call these
AJAX-enriched HTML pages and the nuances they
add atop static HTML are excellent enhancements.
AJAX & SOA: The Big Shift
AJAX and Web Service Bus architectures enable the shift from request and response to message- and event-driven application architectures
by Kevin Hakman

Now try dropping in on Yahoo! Mail (beta). The experience there is akin to Microsoft Outlook. The
semantics of the solution lack the concept of “page”
altogether. Instead of navigating from page to page
to page, information comes to you via tabs, alerts,
windows, drag-and-drop, and the characteristics
of robust software solutions. Of course the units of
Internet advertising are based primarily on pages
and navigating between them, so we’ll certainly not
see the page disappear. In addition, pages are great
for documents, forms, reports, and lots more. At the
same time, AJAX liberates us from the paradigms
of the page so that we can deliver richer business
productivity applications akin to the experiences of
desktop installed software, but, of course, do so over
the Web via the browser.
Where desktop-style applications were created
in the past with Visual Basic, Java Swing, MFC, or
the like, they can now be substantially created and
deployed using AJAX techniques. And where GUIs
for business productivity solutions were bound to
the limited paradigms of HTML pages and forms,
AJAX opens a whole new vocabulary of end-user
interactions that go well beyond the tabbed pane
and into sliders, editable grids, tree tables, hierarchi-
cal menus, modal dialogs, toolbars, vector graphics
and charts, accordions, spinners, and date pickers.
AJAX libraries are already optimizing themselves
for both AJAX pages and AJAX application-style
implementations. Much the way that the Java lan-
guage has .jsp for generating pages and that AWT,
Swing and others have for generating software-style
GUIs, AJAX is evolving with libraries focused on
the value of enriched HTML pages or streamlining
the process of creating richer, software-style GUIs.
Therefore the question won’t be “which AJAX library
should I use” but “which collection of AJAX libraries
should I use.” For example, one of our customers (a
large investment bank) uses Direct Web Remoting
(DWR) for simple enhancements to HTML pages
when it wants asynchronous access to its server-side
Java environments and simple updates to the HTML
elements with that data. In addition, it uses the Dojo
Toolkit when it needs richer controls in those HTML
pages. Finally, it uses TIBCO GI when it needs to
deliver richer business productivity solutions as part
of its core business processes. Three different kits
used for three different problems.
Web Protocols: Real-Time HTTP Messages and Events

Aside from the Web page, request/response communications are architecturally significant. This is
what Web servers have been optimized to do for the
last decade – get a request, send the response as fast
as possible, and close the connection. Inherently,
the HTTP protocol is much like flipping pages,
going from one to the next, sitting idle in between.
Accordingly, the user experience is diminished by
the request/response limitation, coming up short
relative to the streaming data and server-originating
event capabilities of client systems that run outside
the ubiquitous browser.
Figure 1: Evolution to AJAX and SOA-enabling capabilities
One core limitation of going beyond request/
response over HTTP has been that neither an HTML
page nor JavaScript can currently register an IP
address that a server can address messages to the
way their thick-client cousins can. So AJAX vendors
are working to provide solutions at the next best
level — keeping the HTTP connection open so that
as soon as data is available, it can be sent without
getting a request from the browser first. The AJAX
library DWR calls this "Reverse Ajax." Dojo is a
contributor to the CometD project that’s creating a
persistent HTTP solution, and TIBCO is preparing to
release its TIBCO AJAX Message Service product to
extend message buses to the browser over an HTTP
channel.
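A minimal sketch of the long-polling idea behind these approaches, with the network replaced by a local stub so it stays self-contained. In a real page, fetchNext would be an XMLHttpRequest that the server holds open until data is ready; the names here are invented for illustration.

```javascript
// Stand-in for a request the server parks until a message is available.
// A real implementation would issue an asynchronous XMLHttpRequest.
var pending = ['quote: 8.42', 'alert: server restart', 'quote: 8.40'];
function fetchNext(callback) {
  callback(pending.length > 0 ? pending.shift() : null);
}

var received = [];
function poll() {
  fetchNext(function (msg) {
    if (msg === null) return;   // nothing left in this sketch; stop
    received.push(msg);         // hand the message to the page
    poll();                     // immediately re-open the "connection"
  });
}

poll();
console.log(received.length);   // all three messages arrived push-style
```

The key property is that the client never sits in a timed polling loop: as soon as one response arrives, the next connection is opened, so the server can push data the moment it becomes available.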
The persistent HTTP connection helps avoid
the redundant polling loops approach that can tax
a Web server with handling unnecessary requests,
but nonetheless, traditional servlet contexts can
face scalability challenges. The reality is that Web
application servers are designed to get an HTTP
request/response as fast as possible then close the
connection. But just as the page is evolving into
a stateful container for richer AJAX applications,
the HTTP 1.1 protocol, now predominant in most
application servers, provides the foundation for
persistent HTTP connections capable of keeping a
connection open so that multi-part information can
be sent from the server to the client. The challenges
aren’t in the protocol itself, but in the server-side
implementations as far as scale goes. The most scal-
able solutions today work outside the strict servlet
specification’s one-connection-to-one-thread rule
by enabling thread pooling and hence more scale
for pushing data through always-on connections.
Moreover, since there’s a single pipe through which
information must be pushed, savvier server-side
implementations include notions of multiplexing
— merging multiple streams of messages and events
into a single stream through the single connection.
Solutions using these techniques claim to have
supported close to 10,000 concurrent connections
on single-CPU machines and, at the same time, the
servlet specification is under review with a view to
shared threading.
Greg Murray of the Jetty Servlets Project also
contends that the browser only provides two HTTP
connections. So if one is always-on for instant
information feeds, the other must be left for tra-
ditional request/response processes like getting
images, files, and information that need not be sent
instantly.
Figure 2: Evolution of desktop GUIs in network computing
Figure 3: Request/response communications evolve to publish/subscribe
Figure 4: The evolution of serving pages to point services to service buses

The exciting thing from a developer's or architect's point of view is that persistent HTTP can blur the dividing line between client and server so that very low-latency events and messages at the client
become the equivalent of events and messages at
the server. And the conceptual diagrams of distinct
client and server actors working across an Internet
cloud get replaced with an “event cloud” in which
both the connected clients and servers exist using
server-side and client-side message buses to pub-
lish information to topics so that other parts of the
system subscribing to those topics can intercept and
handle the information accordingly.
Assuming support for 10,000 concurrent connec-
tions, the benchmarks being reported by vendors
mean a single CPU implementing these techniques
can be just as scalable as today’s application serv-
ers doing request/response handling. I personally
wouldn’t use this technique if I were Google or
Yahoo, both of which serve millions of users concur-
rently. However, for business productivity solutions
where end-user communities typically number far fewer than 10,000, message- and event-based application
architectures have much to offer as a general imple-
mentation technique for AJAX RIAs.
Having such a capability opens a wide range
of potential applications and solutions that could
be deployed using AJAX and further diminishes
the gap between what’s feasible in a browser and
what requires pre-installation of specific runtimes
or client software. Real-time applications like chat,
streaming stock quotes, alerts and notifications in
dashboards, and other types of collaborative appli-
cations become feasible, while avoiding the high
cost of desktop-installed software or the overhead
and user disdain of downloading and upgrading
plug-ins. Moreover, such capabilities over HTTP
are far more firewall-friendly than applet or plug-
in approaches that may require additional firewall
ports being opened, something business security folks abhor. Further, messaging concepts also
include the notion of queues, which are not only
useful in real-time data solutions, but also when
combined with offline persistence. Multiple AJAX
toolkits have shown offline data capabilities. For
example, Firefox now has native support for this
capability and Internet Explorer allows 1MB of stor-
age per domain. These capabilities assist in tasks like
synchronization when a user goes back online.
So AJAX event and message paradigms can also
ease the composition of GUIs in much the same
way that Visual Basic development is streamlined
by registering listeners for various events. Consider
an application architecture where both services as
well as GUI components can publish and subscribe
to topics. On the client side, there’s an event and
message bus implemented in JavaScript connected
via request/response. And there are persistent HTTP
connections to the server-side message bus that
connects the services through common governance,
policy management, and SLA infrastructure. In such
an architecture, instead of procedurally coding a
request/reply to a service that explicitly handles the
response data and updates the GUI, a developer can use
the publish/subscribe and asynchronous call inter-
faces in various AJAX toolkits to set up event listeners
and event handlers that wrap calls to the services.
That object then becomes a reusable bit of code that
can be dropped into various solutions. Next, a GUI
component, such as a data grid, can be configured
to listen for the availability of new information
from that service and so too can a history list, a log,
and detail windows subscribed to information from
those services. Thus, when data comes back from
the service, an event is thrown and the GUI objects
listening for that event handle it.
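The wiring described above can be sketched as a tiny client-side bus. The bus object, the topic string, and the component names are invented for illustration; real AJAX toolkits expose richer versions of the same pattern.

```javascript
// A toy publish/subscribe bus as an AJAX toolkit might expose it.
var bus = {
  topics: {},
  subscribe: function (topic, fn) {
    (this.topics[topic] = this.topics[topic] || []).push(fn);
  },
  publish: function (topic, data) {
    (this.topics[topic] || []).forEach(function (fn) { fn(data); });
  }
};

// Two independent GUI components listen for the same service data.
var gridRows = [];
var historyLog = [];
bus.subscribe('orders/updated', function (o) { gridRows.push(o); });
bus.subscribe('orders/updated', function (o) { historyLog.push('order ' + o.id); });

// The service wrapper publishes its response instead of updating the
// GUI directly, so the same wrapper can be reused across screens.
bus.publish('orders/updated', { id: 42, total: 9.5 });
console.log(gridRows.length, historyLog[0]);
```

Because neither listener knows about the other or about the service wrapper, any of the three can be dropped into a different solution unchanged.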
These approaches enable faster and more man-
ageable development, and enforce reusable modular
design patterns that facilitate efficient work among
development team members. This leads to greater
reuse of assets with much less chance of getting
tangled in procedural JavaScript hairballs of code.
Enterprise Service Bus and Beyond

The highly visible, visual, and easy-to-find Google
Maps remains the harbinger of AJAX for many in the
industry. And while Google Maps certainly helped
popularize the idea of AJAX through the unique
experience it offered end users, equally noteworthy
is the underlying service Google exposes to overlay
information on the maps. The service interfaces
are savvy, accepting input in many variations of
addresses, landmarks, and cities, resolving those
close-to-natural-language phrases and returning the
relevant data sets in milliseconds. Kudos to Google
for not only exposing the breakthrough GUI capa-
bilities of AJAX to the masses, but also for backing it
with scalable services that could handle the ways in
which humans want to ask for information (not to
mention the data store behind the system with all
that information). Google Maps stands out because
the solution is non-trivial at all these levels and
should be heralded for its accomplishments.
However, let’s get back to business and to the het-
erogeneous, diverse, and more complex reality that
most of us creating business solutions must grapple
with. While Google Maps stands out as exemplary,
it’s also an anomaly in the context of 99% of the
business we do every day as creators and users of
software solutions. Google has the advantage of
being able to create, maintain, and optimize that one
phenomenal service and publish that one sexy GUI
for others to use or overlay in mashups with data from other sources relevant to geographic contexts (and without having to generate revenue from it too). But
what happens if you have five services to implement
and manage? Probably doable without much other
infrastructure, right? What about 50 or 500? What
about 5,000 services? Not to mention the hundreds
of application screens that will interact with those
services. Those might seem like large numbers com-
pared to simple use cases and solutions. But the
reality for independent software vendors, solutions
consultants, and corporate IT shops creating enter-
prise solutions is that support for hundreds (if not
thousands) of services is or will soon be required as
enterprises mandate interoperation with their SOA
infrastructures in their procurement and vendor
selection processes.
When dealing with integration challenges due to
the number of components in a system like those, it’s
been generally accepted that message bus architec-
tures address these kinds of issues best. In fact, Sun’s
JSR-208 specification Java Business Integration (JBI)
has an Enterprise Services Bus at its core, extended
for use with a variety of Web Service types. Beyond
Java, TIBCO’s implementation of the JSR-208 specifi-
cation in its recently announced ActiveMatrix prod-
uct has demonstrated support for .NET containers
and services as well on the same infrastructure.
Thus the homogenous three-tier, platform-specific
architectures of the past decade are giving way to
heterogeneous service-oriented systems managed
by service-bus architectures, even crossing the great
Java versus Microsoft dividing line. Services can be
implemented in a variety of languages, but are then
normalized into XML, SOAP, JSON, or other formats
for consumption by other systems of potential vary-
ing types. Systems with multiple service endpoints
can leverage message bus architectures at the core
to provide needed integration conduits between sys-
tems, enabling massively distributed systems with
centralized governance and policy management by
virtue of the bus. In contrast, from a policy and
governance point-of-view, the three-tier Web appli-
cation server world is inherently more difficult to
manage across disparate systems that don’t share the
common infrastructure of the bus. The good news is
that these bus architectures are designed so that you
can plug your existing assets into them. So get on the
bus.
Less Code, More Use of Ready-Made Parts

AJAX and SOA also pave the way to faster solution
development. As described earlier, an implemen-
tation centered on event and message paradigms
enables significantly more manageability around
client-side and server-side components and fosters
a greater opportunity for reuse of such components.
Clearly, reuse is understood in the context of SOA
where a single service can have multiple consumers
and so greater value in relation to the cost of build-
ing and operating that service. One can see similar
trends on the client side enabled by AJAX as well.
Reusable AJAX GUI components like editable data
grids, charts, sliders, color pickers, and date pickers are now
readily available. Instead of building these up from
DIV and SPAN tags, you can now include the higher-
level notion of an editable data grid and configure
its behaviors as well as its look-and-feel. Numerous
AJAX toolkits provide these kinds of ready-made
widgets for use in applications. The mashup craze
also makes this readily evident. For example, an
investment in creating the orchestrating logic (not
the service or the GUI components) enables you to
take a GUI from location G, and overlay it with data
from sources C and D.
But as we discussed before, in business environ-
ments things can get more complex, quickly. Google
and craigslist have the enviable simplicity of being
publicly accessible. In the business world, not every-
one is privileged to access a service, do certain tasks,
or see certain information. Here again, centralized
policy management across a message bus architec-
ture greatly reduces the complexity of creating AJAX
and SOA solutions and deals with cross-domain
security issues in a managed context.
Conclusion

AJAX and SOA will continue to erode the need
to develop and deliver installed software solutions.
Many developers are getting started in the evolu-
tionary process by adding some AJAX and a few ser-
vices to the basic three-tier infrastructure they have
in place today. However, businesses seeking to gain
greater benefits need to look to rich AJAX applica-
tions in concert with Enterprise Service Bus infra-
structures capable of governing hundreds or thou-
sands of services. In addition, the advent of scalable
real-time protocols over existing HTTP connections
will further enable rich client/server-style business
applications to be implemented and managed more
efficiently, further displacing installed runtimes and
software.
Kevin Hakman is Co-founder, TIBCO General
Interface, TIBCO Software Inc. Prior to TIBCO
General Interface, he was the co-founder of
Versalent Inc., a leading provider of enterprise
client technology. Prior to Versalent, he founded
a series of successful emerging Internet tech-
nology and ecommerce ventures. He has also
written for eBusiness Journal and HotWired.
The advent and rapid adoption of AJAX tech-
nology by the Web community has truly revo-
lutionized the user experience as well as the
development of Web applications. AJAX overcomes the technical limitations of classic post-submit-reload CGI applications, so complex and advanced tasks can be implemented as Web applications with an appealing user interface. The result is that today’s
AJAX Web applications don’t have much in common
with the old-style Web applications anymore but are
on a par with classic desktop GUI applications in
terms of sophistication. Combined with the Web’s
network transparency and easy deployment opportu-
nities the result is a development environment with
hard-to-beat power.
Many Web technology vendors have understood
this trend and have created and marketed powerful
Web client frameworks. By using these toolkits it’s
simpler nowadays to create a Web application with an
appealing GUI than to create a classic desktop GUI
application.
This development now asks for new solutions
for a vital part of the software development process:
testing the new-style user interfaces – ideally auto-
mated testing. It doesn’t suffice anymore to test the
back-end and run only unit tests. The user interface
has to be tested as well to provide high-quality appli-
cations.
While engineers testing AJAX GUI applications
principally face the same kind of challenges as when
testing classic GUI applications, the dynamics and
networked nature of AJAX applications as well as the
variety of Web toolkits, available Web browsers, and
operating systems pose additional challenges.
This article will discuss the motivation for auto-
mating user interface tests and the challenges and
solutions of creating automated tests for AJAX appli-
cations. The material is based on the collected experi-
ence of providing consulting and support services in
real-world automated testing scenarios.
Motivation

First, let’s look at the motivation for automating
the testing of the user interface.
• Faster: Once automated tests are available, it’s
much faster to execute the tests than when doing
this manually. More tests can be executed at the
same time and run more often (like every night).
This way regressions are found quicker after they’re
introduced. This makes finding and fixing the bugs
that lead to the regression easier. Bottom line, we
save time, money, and nerves.
• Trivial to replay in different configurations: Once
a test is automated and the right tool has been
chosen for the task, there’s no additional effort
to run a test in different configurations (different
browser, OS, hardware, etc.). A good Web applica-
tion should work on at least the most common
browsers such as IE 6, IE 7, Firefox 1.x, Firefox
2.x, and Safari on Windows, Linux, and Mac OS X.
This makes nine configurations in total (IE runs only on Windows, Firefox on all three platforms, and Safari on Mac OS X), so testing manually carries an 800% overhead compared to testing automatically.
• Reliable and reproducible: Good programmers
and test engineers strive to work efficiently. This
means they optimize monotonous tasks. Normally,
this is a good thing since this way innovative solu-
tions are created. When executing a manual test,
optimization is a bad thing. It’s desirable to exe-
cute exactly the same steps every time. Otherwise
issues and problems won’t be reproducible, or
even worse, not found at all. Humans aren’t suited
to running the same tests over and over again.
Computers are the exact opposite. They’re good
at executing the same over and over again. They’re
more reliable testers than humans.
• Relieve testers from monotonous tasks: When automating a test, test engineers don’t have to do the boring job of executing the same manual tests all the time. Instead they can work on interesting tasks such as creating new tests, setting up automation frameworks, and analyzing the results, which provides additional motivation.

Automatically Testing AJAX User Interfaces in a Cross-Browser Environment
The reasons for automating user interface tests, and the challenges and solutions of creating automated tests for AJAX applications
by Reginald Stadlbauer

Reginald Stadlbauer is co-founder and CEO of froglogic GmbH. Born in Graz, Austria, he joined the KDE project at university and wrote the office applications KPresenter and KWord. In 1999 he started working for Trolltech ASA, the creator of the Qt GUI toolkit. In his four years at Trolltech, Reginald was part of the team leading the Qt, Qt Designer, and QSA development. In 2003 he helped start froglogic, which specializes in the automated testing of user interfaces, providing services and the leading automated UI testing tool Squish.
While all this clearly speaks to test automation,
one should never expect an automated testing tool to
do all the work for you. Using an automated testing
tool you can’t get rid of your testers. At least initially
some work has to be done to create the tests and set
up the framework. Tool vendors who claim that their
products will automatically create all the tests for you
are naive or guilty of negligent misrepresentation.
There will always be a few tests that simply require
too much effort to automate. Checking a printout of
the application is something that you probably might
still want to test, for example. So the goal of 100% test
automation isn’t feasible.
Automating most tests will deliver software of
a higher quality while reducing costs. It’ll get you
faster releases by shortening the long manual test and
release cycles.
Challenges and Solutions

This section will discuss the main challenges and
solutions when setting up automated UI tests for
AJAX applications. Common mistakes will be covered
as well.
Robustness and Object Identification

The main requirement for an automated test is
that it has to be robust and not break when the appli-
cation being tested changes. Neglecting this impor-
tant requirement is one of the main reasons for failure
of many automated UI testing efforts.
To create a robust test, a test script has to interact
with the AUT (application under test) in an abstract
way. This means that in emulating a click, for exam-
ple, on a tree item in an AJAX tree control, it isn’t good
enough to click at a fixed coordinate (x, y) in the Web
page. It isn’t even good enough to click a specific DOM
element (e.g., a DIV containing the tree item repre-
sentation) without knowing more. Both ways, the test
will break quite easily if the position, representation,
implementation, or structure of the Web page or tree
control changes.
To overcome this, a testing tool has to fulfill the
following requirements:
• Toolkit Knowledge: A testing tool has to recognize
high-level controls (tree, table, layout controls,
date picker, input field, buttons, menus, etc.)
and provide an interface to interact with the con-
trols and query their states and properties on an
abstract level. This requires knowledge of the Web client toolkit to be built into the testing tool, but as a result robust tests can be written.
One special challenge in the AJAX world is the
myriad open source and commercial Web client toolkits, each providing its own special implementation of advanced controls such as layout
controls, tables, trees, menus, etc. – not even
counting the high number of unknown internal
Web client frameworks.
It’s highly unlikely that there will ever be an
automated testing tool that will support all of
these frameworks out-of-the-box. Does this mean
that it’s impossible to create robust automated
tests for your application if you use a Web client
toolkit that isn’t supported by a testing tool?
The answer is: No, it’s still possible.
When evaluating Web testing tools it’s advis-
able for you to find out if it’s possible to extend the
framework yourself or, even better, if the vendor is
willing to extend the testing tool to recognize and
support the controls you’re using. So even if your Web client toolkit isn’t supported out of the box, the tool just needs to offer a way to extend it so that you can still create robust automated tests.
This means what you have to watch out for is
that the tool provides the necessary concepts to
support the toolkit abstractions discussed above.
If a tool is advertised to generically support testing
every Web page there is, be cautious. This usually
means that actions will be emulated coordinate-based, sometimes ignorant of the type of elements. This will break sooner than you think.
• Object Names and Object Map: While DOM
objects may have unique IDs or names, this isn’t
always the case. So the testing tool has to provide
other means to identify an object. It can do so
by considering a set of properties that – taken
together – make up a unique identification. This
again requires knowledge of the Web client framework to be built into the testing tool, but allows for the development of robust tests.
  <span class="button" onClick="search();">
    <p>Search</p>
  </span>

An HTML element without a single unique id attribute

  { tagName='SPAN' innerText='Search' }

A generated attribute tuple identifying the element
Using an object map that maps symbolic names
(as used in the test script) to real names (as used in the
Web application) also eases maintenance. If an object
name changes in the application, it only requires one
change in the central object map without touching
the test cases.
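A sketch of how such a lookup might work, with the symbolic name, the map entry, and the element list all invented for illustration (a real tool would walk the live DOM rather than an array):

```javascript
// Central object map: symbolic names used by test scripts on the left,
// identifying property tuples on the right.
var objectMap = {
  'search_Button': { tagName: 'SPAN', innerText: 'Search' }
};

// Stand-in for the page's elements.
var elements = [
  { tagName: 'DIV',  innerText: 'Results' },
  { tagName: 'SPAN', innerText: 'Search' }
];

// Resolve a symbolic name to the element whose properties all match.
function findObject(symbolicName) {
  var wanted = objectMap[symbolicName];
  for (var i = 0; i < elements.length; i++) {
    var el = elements[i];
    var match = true;
    for (var key in wanted) {
      if (el[key] !== wanted[key]) { match = false; break; }
    }
    if (match) return el;
  }
  return null;
}

// If the page changes, only the map entry changes; every test script
// keeps referring to 'search_Button'.
console.log(findObject('search_Button').tagName);
```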
• Scripting Language: Test cases should be written
in a scripting language that provides common
language features such as conditions, loops, sub-
routines, and so forth in addition to functions to
access files, networks, databases, and so on. This makes sophisticated tests possible. Tools that only provide a list
of statements for test cases are too limited to do
anything useful except a few simple tests.
As an example imagine creating a test for an
online store. You probably want to test adding
multiple items to the shopping cart. To implement
that, a loop is required. Also, imagine that you don’t
want the test to add every item to the shopping
cart but only those that cost less than $10. For this
you’ll need a conditional. Finally, you might want
to check that the order has been written to the
database correctly. For this you’ll need a scripting
language that offers a database API.
To be able to implement such a test properly,
you’ll need a versatile scripting language for your
test case.
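The online-store test above can be sketched in JavaScript; the catalog data and the addToCart stand-in (which a real tool would replace with recorded UI interactions) are invented for illustration:

```javascript
// Hypothetical catalog the online store displays.
var catalog = [
  { name: 'mug',  price: 7.5 },
  { name: 'lamp', price: 24.0 },
  { name: 'pen',  price: 2.0 }
];

// Stand-in for the UI steps that put an item into the cart; a real
// test script would emulate the clicks through the testing tool's API.
var cart = [];
function addToCart(item) { cart.push(item.name); }

// The loop and the conditional from the example: add only items
// that cost less than $10.
for (var i = 0; i < catalog.length; i++) {
  if (catalog[i].price < 10) {
    addToCart(catalog[i]);
  }
}

// A real script would now use a database API to check the order;
// here we just report what ended up in the cart.
console.log(cart.join(', ')); // mug, pen
```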
If a testing framework provides the features above,
it’s possible to create robust tests. This way a click on
a tree item may be emulated using a function call
such as clickItem(“files_Tree”, “readme.txt”) instead
of a click to a coordinate position or unknown generic
DOM element. If the Web page or the tree control
changes, the tool will still locate the correct tree
control and item in it to click it. Even if the internal
implementation of the tree control changes, the tool
can be adjusted to emulate the click correctly while
the test script won’t have to be changed.
Synchronization

One special challenge when creating tests for
AJAX applications is dealing with the asynchronous
nature of Web applications. Unlike desktop applica-
tions, where most operations are instantaneous, most
operations in AJAX applications require communica-
tion with one or multiple servers. The connections to
these servers may have varying bandwidths or even
be unreliable.
This means a test needs to be able to synchronize
with the state of the Web application. Good testing
frameworks provide such synchronization mecha-
nisms. There are several kinds of synchronizations
that can be useful:
• Page loading state: A function to wait until the current page has finished loading
• Object availability: A function to wait until a given object is available
• Arbitrary expression: A function to evaluate an
arbitrary script expression and wait until it evalu-
ates to true. For example, to wait until an element’s
property has a certain value.
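The third kind can be sketched as follows; the waitFor name and the busy-waiting are illustrative only (real tools yield to the browser's event loop instead of spinning):

```javascript
// Evaluate a condition repeatedly until it's true or a timeout expires,
// so a test synchronizes with the application without hanging forever.
function waitFor(condition, timeoutMs) {
  var deadline = Date.now() + timeoutMs;
  do {
    if (condition()) return true;   // synchronized: the test goes on
  } while (Date.now() < deadline);
  return false;                     // timed out: fail the test cleanly
}

// A condition that becomes true a few milliseconds after we start.
var start = Date.now();
var ok = waitFor(function () { return Date.now() - start >= 5; }, 1000);

// A condition that never becomes true trips the timeout instead.
var timedOut = !waitFor(function () { return false; }, 20);

console.log(ok, timedOut); // true true
```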
Using such functions it’s possible to create test
scripts independent of timing and network speed.
Of course, the synchronization functions should also
provide the possibility of specifying a timeout to make
sure a test doesn’t hang forever due to, say, an unex-
pected error.
This uncovers another feature that’s very useful:
Event handlers. To deal with unexpected errors such
as a message box popping up, many tools provide a
way to handle such events and, for example, close the
unexpected message box after logging its content in
the test result log.
Verifications

Besides emulating user events, the main task
covered by automated UI tests is the automatic
verification that will lead to test passes and fails. One
common mistake is the excessive use of screenshot
comparisons in automated UI tests.
While screenshot comparisons look promising at
first glance because they make it easy to verify that the
screen looks as expected, this approach is doomed
to fail once the application undergoes the slightest
change, not to mention the graphical differences of
the rendering engines when running tests in different
Web browsers.
Table 1: Object map
34 March 2007 AJAx.SyS-coN.coM
The approach of comparing the structure of the
complete DOM tree against expected data isn’t much
better. Both approaches compare more than 90%
irrelevant data (noise) and less than 10% of the really
interesting and relevant data. This means there’s a
very high probability that a test fails because some-
thing irrelevant changed.
The best and most robust approach is to verify
only the mentioned 10% in a page. This means a test
should retrieve references to the objects and controls you want to verify, then query and compare the states and properties of these objects against expected values. This will ensure that only relevant checks are
made and if a test fails, it means that a real regres-
sion has been found. Unnecessary checks make tests
unmaintainable and unusable in the long run.
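A minimal sketch of such a targeted check (the helper name and element shape are illustrative): compare only the properties you care about and report just those differences.

```javascript
// Compare only the relevant properties of a retrieved control against expected
// values; everything else on the page (the ~90% of noise) is ignored.
function verifyProperties(control, expected) {
    var failures = [];
    for (var prop in expected) {
        if (control[prop] !== expected[prop]) {
            failures.push(prop + ": expected '" + expected[prop] +
                          "' but got '" + control[prop] + "'");
        }
    }
    return failures;   // an empty list means the verification passed
}
```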
As mentioned several times already, this requires a
deep knowledge of the Web client framework used and
the availability of a scripting language in the testing tool.
Of course, there are some scenarios where screen-
shot verification can be used. This is true when check-
ing graphical views such as charts, curves, etc.
Creating Tests
To aid in creating tests, a testing framework should
offer several individual tools. The most prominent is
the event recorder. Using that tool a test engineer can
run the Web application and execute the test scenario
manually while the testing tool records all the actions
and generates a test script.
To create a useful test script, the testing tool has to
recognize the real widgets in the Web page and record
high-level actions as discussed.
Another useful tool is a program to insert verifica-
tion points. Such a tool should let the test engineer
visually pick the objects and properties to verify and
automatically create verification statements.
Additional tools such as a verification point editor
and a test data editor come in handy too. Often all the
tools and some test management facilities are com-
bined in an IDE.
Post-Editing, Refactoring, and Data-Driven Testing
The use of tools such as an event recorder has been deliberately downplayed in this article. A common
beginner’s mistake is to think that automated tests
should be created only using an event recorder and
verification point editor. While such a test script will
run and deliver results quickly, it’s always advisable to
post-edit test scripts to make them more robust and
easier to maintain.
Such post-editing tasks include factoring com-
mon actions into functions, inserting synchroniza-
tion points, documenting code parts, and removing
unnecessary steps.
Another methodology that can be introduced is
data-driven testing. This means that the test data is
moved into a separate file and the test script only con-
tains the test logic. The test script reads the data file
and executes the test logic for each test data record.
This way new tests can be added without having to
modify the test scripts.
As an example of data-driven testing imagine a
script that reads article numbers from a text file and
enters those numbers into a form of your Web applica-
tion. This script is “driven” by the data that’s fed to it.
Test Automation, Distribution, and Different Environments
Once tests are created following the guidelines
outlined here, the next step is to automate the test
execution and reporting.
There are several things one should consider to
make effective use of test automation. The following
key points make up successful test automation:
• Multiple Platform and Browser Support: A great advantage of Web applications is the possibility of running them on different platforms and in different browsers. Because different browsers exhibit different runtime behavior and bugs, the tester has no choice but to verify correct behavior on each platform and browser. Needless to say, this would be a drag if done manually. But as soon as a
test is automated it just needs a click of the button
to replay it in a different environment.
Cross-platform and cross-browser support is an important criterion if quality assurance is taken seriously. This doesn't apply only to the test
execution but also to the cross-platform avail-
ability of each component that makes up the
testing process.
• Command-Line Tools and Remote-Control: To
automate tests successfully, the testing tool should
offer command-line tools to execute them. This
way it’s simple to integrate the UI tests into exist-
ing frameworks and test management systems
and integrate the generated results.
It should also be possible to control all the test runs
from a central machine and run several tests simulta-
neously on different machines, platforms, and brows-
ers to make effective use of automated testing.
Once the test execution is automated, a procedure
for following up on the test results has to be put in
place. This can be a Web site displaying the results
or e-mails with errors being sent to the engineers or
managers responsible. The best tests and results are
of no use if they aren’t inspected regularly to fix regres-
sions and any issues uncovered.
Support
One very important feature of a test automation framework is vendor support. When starting with automated testing, many questions will come up and assistance is usually required. Some adaptations of the testing tool might also be necessary to make the most effective use of it.
So a crucial factor in the success or failure of any
test automation effort is the support you get from the
testing tool vendor. During an evaluation you should
make use of the vendor’s support service to find out
whether you get issues resolved quickly and whether the vendor is open to necessary adaptations.
Choosing the Right Tools
Since the author of this article isn't completely
unbiased and works for froglogic, the vendor of the
popular Web testing tool Squish, this article won’t
attempt to recommend a specific tool. There are
several Web testing tools out there. A few meet most
of the requirements mentioned above, many don’t.
So careful evaluation is necessary. Here’s a short sum-
mary of what you should look for in a Web testing tool
when evaluating it:
• Cross-platform and cross-browser support without
having to adjust test scripts
• Complicated setups such as special proxy serv-
ers shouldn’t be necessary for easy deploy-
ment
• Support for a scripting language for test scripts (preferably an open language such as Python or JavaScript instead of a proprietary language)
• Built-in knowledge of the Web client framework used in the AUT (or a way to adapt the tool to the Web client toolkit used)
• The possibility of freely post-editing and re-factor-
ing tests
• Tools that aid in the creation and management of tests, preferably combined in an IDE
• Command-line tools to automate tests
• Qualified and responsive vendor support service
When evaluating a tool, don’t pay too much
attention to components such as the event record-
er. While it should work reliably, it doesn’t have to
be perfect. Features that help create robust and
maintainable tests are more important. Don’t let
yourself be fooled by false promises and quick
short-term results that don’t result in maintainable
tests.
If a vendor or tool makes one of the following promises you should be very skeptical and stay away:
• The tool automatically creates tests for you or
requires little effort to create tests
• Absolutely no programming skills needed
• You don’t need test engineers anymore
• Delivers 100% test automation
The Struts Framework from Apache is a very
popular, robust Web application framework.
Many corporations have deployed numerous Web applications using Struts and have built up abundant technical resources, so they can develop new applications quickly. Portal implementations offer a new challenge by introducing new
frameworks and APIs. To leverage existing assets and
skills, WebSphere Portal supports the Struts Portlet
Framework so Struts Web applications can be easily
migrated to a portal.
AJAX, or Asynchronous JavaScript and XML, is a technique for making Web applications function like client/server applications: Web pages trigger requests based on events (asynchronously) and refresh only specific parts of a page as opposed to refreshing the whole page.
The Struts Framework
There's plenty of information on Struts, AJAX, and
the Struts Portlet Framework available on the Web. In
this article I’m going to provide a brief introduction to
these topics; for more details check out the resources
section.
The Struts Framework is an implementation of
the MVC architecture, with Model being the business
logic represented using JavaBeans, EJBs, etc., View
the presentation layer represented by using JSPs,
ActionForms, and tag libraries, and Controller the
ActionServlet and Action classes (see Figure 1).
Struts Control Flow
Here is the typical request-response cycle for a Web application using Struts:
• The client requests a path matching the Action
URI pattern.
• The Web container passes the request to the
ActionServlet.
• The ActionServlet looks in the Struts configuration file; if the application has multiple modules, it looks for the appropriate mapping for the given path in any of the Struts configuration files.
• If the mapping specifies a form bean, ActionServlet
looks to see if one exists or it creates one.
• Once the form bean is created, ActionServlet pop-
ulates values from the HTTP request. If the map-
ping has a validate property as true, ActionServlet
invokes the validate method on the form bean. If
the validation fails, the control flow ends there.
• If the Struts configuration file specifies action
mapping then the appropriate action class ‘exe-
cute’ method is invoked with an instantiated form
bean as a parameter.
• The Action may call other business objects and
repopulate the form bean with the latest values
and other business functions as needed.
• Based on where you want to send the control next,
the Action reads ActionForward and the Action
URI from the Struts configuration file and returns
to the ActionServlet.
• If the ActionForward is a URI to the JSP to display
the page, the control flow ends by sending the out-
put to the browser or if it’s forwarded to another
Action URI the above flow starts again.
AJAX in Action
AJAX is not a new technology, but it's currently the most talked-about approach in the Web world for giving a desktop or client/server application feel to your Web applications (see Figure 2).
AJAX Control Flow
• A Web page sends its requests using an
XMLHttpRequest object and JavaScript function,
which handles talking to the server.
• Nothing has changed for the Web application server; it still responds to each request the way it did before.
• The server’s response only has the data it needs in the
XML form, without any markup or presentation.
• The JavaScript dynamically updates the Web page
without redrawing the whole page.
• Most of the page doesn’t change, only the parts of
the page that need to change is updated, and that
Challengesfacedduringimplementation
Struts
StrutsPortletswithAJAXinAction
by Kris Vishwanathan
Kris Vishwanathan is a senior software
engineer at IBM’s lab in Research Triangle
Park, NC.
38 March 2007 AJAx.SyS-coN.coM
asynchronously.
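The AJAX control flow can be reduced to a few lines of client-side code; the URL and element id below are placeholders for illustration:

```javascript
// An XMLHttpRequest fetches fresh data asynchronously; the callback patches
// only the target element, leaving the rest of the page untouched.
function refreshRegion(url, targetId) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);   // true = asynchronous
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById(targetId).innerHTML = xhr.responseText;
        }
    };
    xhr.send(null);
}
```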
Struts Portlet Framework
Developers who have experience with the
Struts Framework should adapt easily to the Struts
Portlet Framework. The packaging for Struts Portlet
Application is similar to a Struts application in the serv-
let environment. However, WebSphere Portal introduc-
es additional challenges such as portlet modes, states,
inter-portlet communications, portlet two-phase
processing, and multiple device support, which are
addressed through the Struts Portlet Framework. For a
detailed explanation of this topic, see the Redbook IBM
Rational Application Developer V6 Portlet Application
Development and Portal Tools.
A portlet processes and renders differently than
a servlet. A servlet will do all its processing in the
service method, while the portlet will divide action
processing and rendering into two separate methods,
hence portlet processing is defined as a two-phase
approach. Figure 3 explains the two-phase processing
approach of a Struts portlet.
To invoke actions in the rendering phase there is a new interface called IStrutsPrepareRenderer. For more details see Executing Struts actions during the render phase of IBM WebSphere Portal (http://www-128.ibm.com/developerworks/websphere/techjournal/0504_pixley/0504_pixley.html).
IBM supports Struts Portlet Framework for both
IBM API and JSR 168 API portlet containers. Users
can easily migrate existing Struts applications to Struts portlet applications using either of the portlet APIs. For more information on migrating and devel-
oping newer applications using the Struts Portlet
Framework see the Portal Info Center.
Struts Portlets with AJAX
Though AJAX isn't the right solution for every Web
application, there’s increasing curiosity and enthusi-
asm among organizations to explore its benefits and
see how it works for their applications. AJAX isn’t new,
but it’s all on the client side and making it work with
different server-side frameworks is a challenge. This
article is a result of my client experience in developing
Struts portlet applications with AJAX. In the sections
above I briefly described Struts and AJAX. In the sec-
tions below I’ll describe some of the challenges faced
during implementation. To develop AJAX applications
there are several toolkits such as Google Web Toolkit
(GWT), DOJO, AJAX Faces, and JSON, but for simplic-
ity’s sake we chose a plain vanilla implementation.
The tricks to rendering content asynchronously using AJAX have more to do with JavaScript and the XMLHttpRequest object than any server-side technology; however, getting the blend of AJAX working
with server-side frameworks involves careful design
and architectural thinking.
Making asynchronous calls to the server using
AJAX best fits a portal scenario with different portlets
representing different back-end application views.
However, this feature isn’t supported with portlet
URLs since each response from the portlet container
represents a portal page with different portlets with
different window states and modes. Given this limita-
tion, the alternative we have is the XMLHttpRequest
object calling a servlet from a servlet container as
depicted in Figure 4.
To demonstrate the above scenario I have used the
simple downloadable example shown below.
As in Figures 5 and 6, when the user clicks on
the menu items, the right-side blue box items get
refreshed without unloading the whole page.
Below are some of the challenges we had, and I'll explain how we addressed them.
JavaScript Objects
AJAX applications use the XMLHttpRequest object
and client-side JavaScript, and as these applications
become more complex it becomes difficult to main-
tain the code using plain JavaScript functions. Object-
oriented programming is ideal and proven for such
Figure 1: MVC Architecture and Struts Components
AJAx.SyS-coN.coM March 2007 39
complex applications. JavaScript offers many differ-
ent ways of defining objects, but it’s not a full-fledged
object-oriented programming language. There are
several efforts by different people aimed at provid-
ing workarounds for object-oriented programming
features such as inheritance, reflection, and interfaces
using JavaScript.
JSON (JavaScript Object Notation) provides guid-
ance on JavaScript object notation. In our example,
we have used JSON guidelines and defined all the
JavaScript functions as objects. We have also defined
all the functions as external js files so that all the cli-
ent-side logic is kept modular and separate from the
JSP file, following portlet best practices.
Here is the JavaScript code snippet from the down-
loadable sample application defining an object and
calling the function as object instance methods.
if (typeof(contentHelper) === 'undefined') {
    var contentHelper = [];
}
contentHelper['<%=contentAppNamespace%>'] = new ContentHelper({
    namespace : "<%=contentAppNamespace%>",
    urlContextPath : "<%=request.getContextPath()%>"
});
I defined all the other JavaScript functions in separate js files. Below is the definition of the ContentHelper function.
function ContentHelper(params) {
    // Parameter Variables
    if (!params) {
        alert('Missing parameters while instantiating!');
        return {};
    }
    if (typeof(WebFormUtils) == 'undefined') {
        alert("The utils package is not included!");
        return;
    }
    var namespace = params.namespace;
    var urlContextPath = params.urlContextPath;
    var formUtils = new WebFormUtils({
        namespace : namespace
    });
    var AJAXurls = new AJAXURLs({
        namespace : namespace,
        urlContextPath : urlContextPath
    });
    var AJAXEngine = new AJAXEngine({
        redirectHandlerFunction : formUtils.handleAJAXRedirect,
        retryPromptMessage : "Warning, there was a problem during an attempt to communicate with the server\n" +
            "Would you like to retry? (It's recommended to wait a bit first)\n" +
            "If the problem persists, please contact Help Desk",
        autoRetries : 2,
        retryDelay : 10000
    });
    …
    …
}
The object/function above in turn instantiates
other objects such as AJAXUrls and AJAXEngine.
Handling Namespace with an AJAX Response
As depicted in Figure 4, Struts portlets make asynchronous AJAX requests to servlets. Servlets process the request and send the response back to the client, which in turn is supposed to refresh certain parts of
the portlet JSP page. Portlets being namespace-aware,
it’s very important that any Web page refresh be
namespace-aware. Unfortunately, the servlet is not
Figure 2: AJAX control flow
Figure 3: Struts portlet control flow
aware of the portlet namespace. To resolve this issue
we have passed namespace as a parameter for every
AJAX request so that the servlet response can prefix all
object ids with the appropriate namespace.
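The prefixing step on the response might be sketched like this (a simplified stand-in for illustration; in the sample application the servlet performs it when building the response):

```javascript
// Prefix every element id in a response fragment with the portlet namespace
// that was passed along on the AJAX request, keeping the refresh namespace-aware.
function prefixIds(htmlFragment, namespace) {
    return htmlFragment.replace(/id="([^"]*)"/g, function (whole, id) {
        return 'id="' + namespace + id + '"';
    });
}
```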
Accessing a Form Bean in the Servlet
Struts actions have a nice way of collecting all the
HTML form elements using an Action form bean.
Behind the scenes, the Struts Framework binds all
the HTML form elements to a form bean as per the
action mapping specified in the Struts configuration
file.
With an AJAX request you can use a similar mechanism via the BeanUtils package when you're posting a form to a servlet, so you don't have to call request.getParameter() for each element value. For
more information on the BeanUtils package see the
Apache Struts Framework Web site.
Here’s the call to AJAX request posting the HTML
form:
AJAXEngine.request({
    "url" : url,
    "title" : "Side Menu",
    "responseFunction" : receiveLinkResults,
    "postBody" : serializeContentForm(),
    "method" : "POST"
});

function serializeContentForm() {
    var url = "";
    url += "filter=" + encodeURIComponent(document.getElementById("filter").value);
    url += "&selectType=sideMenuContent";
    url += "&namespace=" + this.namespace;
    return url;
}
On the server, you can retrieve form elements
posted as below:
ContentBean contentBean = new ContentBean();
try {
    BeanUtils.populate(contentBean, arg0.getParameterMap());
} catch (IllegalAccessException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (InvocationTargetException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
System.out.println("Filter=" + contentBean.getFilter());
System.out.println("SelectType=" + contentBean.getSelectType());
We need to make sure that the ContentBean has
the properties specified in the HTML form. In our
case, we used the same Action form bean used by the
Struts portlet.
Figure 4: Struts Portlet with AJAX data flow
Figure 5: Portal page view for first menu item selected
Figure 6: Portal page with blue box refreshed for the second menu item selected
Session Sharing
Once you define the servlets in the same portlet
application, you can share session objects among the
portlets, JSP, and servlets. The only requirement is to
make sure that the session objects are declared with
APPLICATION_SCOPE.
About the Sample Code
The sample code is a downloadable zip file with a strutsAJAXportlet.war file in it and can be downloaded from the online version of this article at http://ajax.sys-con.com. You can install the war file
in the WebSphere Portal server and put a portlet on
any portlet page to see it work. If you install the war
file in IBM’s Rational Application Developer you’ll
find the source code for the client-side AJAX engine,
JavaScript functions for request and response han-
dling, and the server-side portlet and servlet for
request processing.
Conclusion
Many organizations today want to leverage exist-
ing assets to embrace new technologies. The Struts
Framework is a very popular MVC pattern implemen-
tation and very suitable for large enterprise applica-
tions. The Struts Portlet Framework is an extension
of Struts that’s supported by all major releases of
WebSphere Portal. In this article, I explained how to
blend the Struts Portlet Framework with AJAX. Because AJAX lives entirely on the client side, it can work with any server-side framework. There are several other initiatives to get AJAX working with JavaServer Faces (JSF), JSON-RPC, and Java-like programming models that frameworks can convert into client-side script.
References
• Apache Struts home page: http://struts.apache.org/
• IBM Redbook: IBM Rational Application Developer V6 Portlet Application Development and Portal Tools: http://www.redbooks.ibm.com/abstracts/SG246681.html
• Executing Struts actions during the render phase of IBM WebSphere Portal: http://www.ibm.com/developerworks/websphere/techjournal/0504_pixley/0504_pixley.html
• Error validation and exception handling in portlets, using the Struts Portlet Framework: http://www.ibm.com/developerworks/websphere/library/techarticles/0411_sriramoju/0411_sriramoju.html
• Developing JSR 168 Struts portlets: http://www-128.ibm.com/developerworks/websphere/library/techarticles/0601_patil2/0601_patil2.html
The advent of Web 2.0 has upset the Internet
in some interesting ways, particularly with
regard to user experience and participation,
the creation, derivation, and relevance of metadata,
and the ability to deliver new functionality by lever-
aging existing sites, thereby accelerating time-to-
market. This article considers how these concepts
can be applied to benefit the world of business intel-
ligence (BI). We’ll discuss how users can benefit and a
number of issues and requirements of corporate IT to
implement and benefit from such solutions.
BI solutions have long provided decision makers
with access to corporate data, such as product profit-
ability, customer profiles, and resource allocation.
Access to such business intelligence is invaluable
for strategic planning. However, using the tools that
provide access to this invaluable information can
frustrate users.
Web 2.0 and, specifically, rich Internet applica-
tions (RIA) solutions can provide features and quali-
ties in Web-based applications that engage users in
ways flat HTML solutions can’t. In the Web 2.0 world,
users can escape the mundane, unproductive paper
or PDF report, and the mind-numbing exercise of
manipulating Excel pivot tables. They’ll no longer
have to generate custom reports from an enterprise
reporting package or wait weeks for IT to generate
the report for them. Instead, users point, click, drag,
and slide through data presented in familiar Web 2.0
widgets they use every day at sites such as Yahoo Mail
Beta, Google Maps, and Outlook Web Access. In this
type of environment, users can interact and collabo-
rate with the data, decorating it with commentary and
other metadata, and sharing it with a mouse click.
The Web 2.0 world enables users to access and
compare a broader range of data, enables them to
look more deeply at that data, and get the data they
need sooner. Wasn’t this the original promise of busi-
ness intelligence as a tool that supports business
managers and executives, helping them make more
accurate and timely decisions?
This article considers how the following hallmarks
of Web 2.0 can benefit BI and what corporate IT needs
to do to implement such solutions.
The hallmarks of Web 2.0 and rich Internet appli-
cations are:
• User experience: Applications that feel like an
installed fat client delivered over the Web (e.g.,
Google Finance and The Baby Name Voyager),
particularly allowing immediate, interactive
responses to user actions
• User participation: User feedback encouraged
and integral to the application (e.g., del.icio.us)
• Creation and relevance of metadata: User-cre-
ated metadata that improves the quality of the
data presented in the tools (e.g., flickr)
• Composite applications and time-to-market:
Applications created from services requiring only
user interface (UI) development and minor busi-
ness logic (e.g., most mash-ups, for examples see
programmableweb.com)
It’s important to note that the capabilities deliv-
ered by Web 2.0/RIA solutions aren’t completely new.
DHTML enabled many of the capabilities (if not all)
that we see in Web 2.0 applications; so what’s dif-
ferent now? AJAX and Flash provide a more robust
environment to implement this type of functionality.
Productivity on these ubiquitous, de facto standards
is light years ahead of the DHTML world. This productivity and ubiquity have resulted in explosive growth on public Internet sites and in the emergence of these capabilities in corporate solutions.
Boosting User Adoption and Efficiency: Think "Interactivity"
In the corporate BI space, as with the public
Internet, increased user adoption and engagement
can be achieved by providing user interfaces that
are visually appealing and task-oriented. However,
to make an application (or site) truly come alive
for a user, the application should also provide for
manipulating data, incremental user feedback, and
interactive components (e.g., finance.google.com).
Such features aren’t possible in HTML, while others
are possible but not practical based on the number of
roundtrips required to update and render the data.
How Web 2.0 Makes Business Intelligence Smarter
And what corporate IT needs to do to implement such solutions
by Langdon White

Langdon D. White is director of global engineering of architecture services for Keane, Inc., a $1 billion business process and IT services firm with more than 10,000 employees. He manages a global team of technical consultants and is responsible for the quality of all technical deliverables produced by architecture services. Over the past six years, Langdon has served in various roles in architecture services, including R&D, consulting, and management. In each of these areas he has overseen and participated in the delivery of many high-value technical solutions for Fortune 100 clients, as well as delivered numerous technical industry talks and written many articles.

User adoption and efficiency are also increased by
task-oriented interfaces, since they let users be much
more self-sufficient than traditional applications do
while requiring a minimum of training or technical
expertise. This is particularly compelling because cur-
rent implementations of most off-the-shelf business
intelligence tools aren’t sufficiently usable for most
users to be able to leverage their full abilities. However,
the failing is primarily on usability (not on technical
ability) as most out-of-the-box BI solutions do allow
users to customize the data and the views of that data.
Increasing User Participation: More Heads Are Better Than One
Increased user involvement is also an interesting
phenomenon of Web 2.0 that may have a significant
impact on BI. Web 2.0 applications encourage users to
submit metadata to applications. The data that users
contribute to applications in the Web 2.0 space, while
not verified as accurate, is, nonetheless, providing
significant new information to the data spaces that
they contribute to. In the BI world, we can leverage
this same phenomenon to gather new “intelligence”
about the data we’re presenting to users. Specifically,
we can gather metadata such as:
• Corrections to the information stored in the warehouse or the enterprise data space; generally this appears as explanations for the data. For example, letting a user correct the monthly revenue number to account for product returns.
• Comments on the perceived accuracy of the data.
• Amendments to the information stored by the system. For example, when a sales representative misses their sales quota they can explain why.
• Related links to the information presented. (This can be particularly interesting since it may lead to a Semantic Web environment.)
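One plausible shape for such user-contributed metadata records, with field names and types invented purely for illustration:

```javascript
// A user-contributed metadata record tied to one BI data point; the four
// allowed types mirror the kinds of metadata listed above.
function createAnnotation(dataPointId, type, author, body) {
    var allowed = ["correction", "comment", "amendment", "link"];
    if (allowed.indexOf(type) === -1) {
        throw new Error("Unknown annotation type: " + type);
    }
    return {
        dataPointId: dataPointId,   // which warehouse figure this refers to
        type: type,
        author: author,
        body: body,
        createdAt: new Date().toISOString()
    };
}
```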
Users providing feedback to the system can also
improve the searching capabilities of the enterprise.
For example, when users (or administrators) can
provide bibliographic-style information to a system
about its sub-components (or the new components
they create), future users can search based on these
references. This minimizes the burden on the IT
department to create these new “views” and increases
the intelligence of the overall enterprise.
The storage of this information must be con-
sidered part of an overall user metadata solution.
Currently, there aren’t a lot of off-the-shelf solutions
to support this problem (if any), but many of the
semantic tools and BI metadata repositories that
come with BI packages can be adapted as a solution.
The primary goal is to offer a “metadata service”
in the enterprise that allows application teams to
submit information as well as search for it. The
implementation of the metadata repository, while
important, can generally be considered an imple-
mentation detail.
Faster Time-to-Market: New Rules of the Game
The Web 2.0 world carries as a key tenet the con-
tinuous improvement of applications (or the “per-
petual beta”). In other words, applications get many
incremental updates that have a minimal outage
impact on users. BI solutions, in particular, benefit
immensely from this kind of model where enhance-
ments are released quickly because the business intel-
ligence of an organization is always changing, either
because users come to new conclusions or because
the rules of the economic environment change. This
is in stark contrast to traditional BI solutions, which
are relatively slow to change, requiring IT involve-
ment for even the simplest variations on reporting.
Even when BI tools claim to support the users without
involving IT, in practice that’s rarely the case and new
functionality for the BI application must be priori-
tized in the IT department’s regular queue.
Accessing Content: Anytime, Anywhere
How do corporate users manage their inflow (the
bombardment) of information in their personal lives?
Any way they can. They read content bits on their
phones and PDAs, print out articles and take them on
the train, and construct home pages on portal sites
with channels of specific content. How can corporate
IT help its users manage these information streams?
Can corporate IT supply BI content and data through
similar channels to support its users better?
RSS (or Really Simple Syndication), which deliv-
ers data as an XML file or an “RSS feed,” can be
considered part of Web 2.0 if one applies O’Reilly’s
“Hierarchy of ‘Web 2.0-ness,’” one of whose key characteristics
is that “the application could only exist on
the net,” not to mention the recent spike in popular-
ity of RSS. Leveraged properly, RSS can be a solution
to the information overload problem by allowing
users to control the method and level of information
they receive. Specifically, IT can create an enterprise
RSS feed aggregator that caches and publishes, for
example:
• Popular and industry-specific feeds
• User-created feeds to publish the information they
produce (e.g., document changes, internal blogs,
company news)
• User-created search feeds (i.e., feeds created from
custom user searches that publish new results)
Each user can then customize the Enterprise RSS
infrastructure to deliver information to the device
(cell phone, e-mail, BlackBerry, printer) with the frequency
they prefer (hourly, daily, weekly). Leveraging
current GPS-enabled cellular devices, systems can
also deliver content that is location-aware (e.g., the
current inventory of the nearest retail outlet). Further
expanding the RSS infrastructure, an enterprise can
implement language and semantic translation on the
content it supports as requested by users.
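The aggregation step at the heart of such an infrastructure can be sketched in a few lines: parse several RSS 2.0 feeds and merge their items into one cached list, newest first. Here the feeds are inlined as strings rather than fetched, and all titles and names are illustrative:

```python
# Sketch of the core of an enterprise RSS aggregator: parse several
# RSS 2.0 feeds and merge their items into one list, newest first.
# A real aggregator would fetch the feeds over HTTP on a schedule
# and cache the merged result for delivery to users' devices.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEEDS = [
    """<rss version="2.0"><channel><title>Company News</title>
        <item><title>Q1 results posted</title>
          <pubDate>Mon, 05 Mar 2007 09:00:00 GMT</pubDate></item>
    </channel></rss>""",
    """<rss version="2.0"><channel><title>Internal Blogs</title>
        <item><title>New BI dashboard tips</title>
          <pubDate>Tue, 06 Mar 2007 14:30:00 GMT</pubDate></item>
    </channel></rss>""",
]

def aggregate(feed_documents):
    items = []
    for doc in feed_documents:
        channel = ET.fromstring(doc).find("channel")
        source = channel.findtext("title")
        for item in channel.iter("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "published": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest items first, regardless of which feed they came from.
    return sorted(items, key=lambda i: i["published"], reverse=True)

for entry in aggregate(FEEDS):
    print(entry["source"], "-", entry["title"])
```

Because RSS is plain, well-specified XML, this one function handles popular feeds, user-created feeds, and search feeds alike; only the list of feed URLs differs.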
Getting Started
As with all good things, migrating to an
environment that supports a Web 2.0 world isn’t trivial.
There are, however, two bright spots on this front.
The first is that the press is already touting many
of the items discussed below and as a result, firms
may already be giving these projects higher priority
in the application queue. The other is that, as with a
Service Oriented Architecture (SOA) initiative, an IT
organization isn’t required to start from a “greenfield”
environment and can instead incrementally adopt
these techniques when implementing applications
and pay only a small increase in cost for the project to
contribute to the environment.
SOA as an Enabler
A Service Oriented Architecture, while not
required, will make it significantly easier to adopt
Web 2.0 principles. One of the core concepts of Web
2.0 is the idea of a “hackable environment,” meaning
that developers can quickly cobble together various
components to create new applications. While not
all SOA implementations leverage Web Services, they
are especially compelling to achieve the goals of Web
2.0, given their ubiquitous support from packaged
applications, development environments, and the
Web-at-large. In addition, their tag- and text-based
nature makes them easily “parsable” and “hackable.”
To further encourage the development of unique,
user-centric applications, the enterprise should con-
sider adding non-business functionality via Web
Services as well. Some examples include mapping
components, stock tickers/lookup, sales data from
other companies, and package tracking. Adding
these functions, either from the public Internet or by
acquiring software that provides them, may seem odd
and non-business-related but allows IT to really start
to think out-of-the-box about the kind of solutions it
can offer to its users.
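The “hackable” quality comes from the fact that such services return plain tag- or text-based payloads that any client can parse and recombine. A toy composition of an internal sales service with a public mapping service might look like the following; both services are stubbed as local functions, and none of the endpoints, ids, or figures is real:

```python
# Toy mashup: join an internal "sales" service with an external
# "geocoding" service to get revenue by store location. Both are
# stubbed as local functions returning the tag/text payloads a real
# HTTP call would; the point is how easily two text-based responses
# compose into a new application.
import json
import xml.etree.ElementTree as ET

def sales_service():
    # Stand-in for an internal SOAP/REST call returning XML.
    return """<sales>
        <store id="midtown" revenue="120000"/>
        <store id="harbor" revenue="95000"/>
    </sales>"""

def geocode_service(store_id):
    # Stand-in for a public mapping API returning JSON.
    coords = {"midtown": (40.75, -73.99), "harbor": (40.70, -74.02)}
    lat, lon = coords[store_id]
    return json.dumps({"lat": lat, "lon": lon})

def revenue_map():
    """Join the two services on store id -- the whole 'mashup'."""
    points = []
    for store in ET.fromstring(sales_service()).iter("store"):
        loc = json.loads(geocode_service(store.get("id")))
        points.append((store.get("id"), int(store.get("revenue")),
                       loc["lat"], loc["lon"]))
    return points

print(revenue_map())
```

Neither service knows the other exists; the developer “cobbles them together” in a dozen lines, which is exactly the experimentation the hackable environment is meant to enable.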
Agile Methods & Lightweight Languages
A more lightweight development model benefits
the creation of Web 2.0 solutions immensely. Industry
buzz suggests that agile methods are the only option for
Web 2.0 (interestingly, the authors of the term, Dale
Dougherty and Craig Cline, make no such claim), but in fact
any short-release-cycle software development technique can work. A
short release cycle, releasing at least every six weeks,
encourages the perpetual beta and the continuous
46 March 2007 ajax.sys-con.com
improvement model demanded by Web 2.0 solutions.
Agile methods can bring an enterprise signifi-
cant positive change. However, one should con-
sider where to apply them and realize they aren’t
an appropriate solution for every problem. Some
typical examples of inappropriate use include
large projects (those involving more than 50 developers) or
projects where the required functionality is clearly
articulated and tied to a fixed release date. However,
the enterprise should seriously evaluate whether
these kinds of projects are worthwhile (large proj-
ects often lose momentum before release) or exist
(clear requirements and dates have a nasty habit of
changing over time) and consider redesigning the
projects to fit an agile method instead. Experience
has also shown that an agile method may be used
on large projects by approaching the project from
an agile viewpoint. Unfortunately, there’s no clear
way to tell when agile isn’t appropriate and the
enterprise should, instead, start with small non-
mission-critical applications and learn from expe-
rience how and when to apply agile techniques.
Another common failure of agile methods is the
“cowboy” developer who doesn’t like writing docu-
mentation and latches on to agile as an excuse. As
a result, an organization must develop or adopt an
appropriate agile methodology and then ensure care-
ful adherence to the method to avoid failure.
Along with agile methods, “lightweight languages” are
becoming significantly more popular as they turn into
very robust enterprise-class environments for develop-
ment. Most recent languages that are generally grouped
into the “lightweight” category provide native (or “near-native”)
support for Web services and rich Internet applications
(primarily via AJAX). As a result, Web 2.0 development
in these languages is especially efficient,
thereby allowing greater experimentation, which is a
central component of the Web 2.0 world.
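To illustrate the “near-native” support, here is how little code a lightweight language needs to expose a JSON endpoint that an AJAX front end could poll; Python’s standard library alone suffices, and the endpoint path and payload are invented for the example:

```python
# A few lines of Python are enough to stand up a JSON endpoint for
# an AJAX front end -- the kind of near-native web support that
# makes lightweight languages efficient for Web 2.0 work.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class KpiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a (hypothetical) business metric as JSON.
        body = json.dumps({"metric": "open_orders", "value": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), KpiHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the AJAX client: fetch and decode the metric.
url = f"http://127.0.0.1:{server.server_port}/kpi"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # {'metric': 'open_orders', 'value': 42}
```

The whole round trip, server and client, fits in one screen, which is precisely why these languages invite the experimentation described above.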
Recently, not just start-ups and individual Web sites,
but mature corporations have also started to believe
that languages such as Ruby, Python, and Flash/Flex are
robust enough to support their users. When corporate
IT evaluates its portfolio of applications across different
classes of uptime and reliability, it finds ideal candi-
dates for such environments. In other words, not every
application in the enterprise needs “six 9s” (99.9999%)
uptime or support for 30,000 users. In fact, user loads
and “mission criticalness” vary among applications.
Recognizing these facts has allowed corporations to
consider these “toy” languages as what they are: robust
environments, particularly for applications that value
change over performance and stability.
Conclusion The Web 2.0 world brings a number of potential
advantages for all applications, but business intelligence
applications may benefit especially. However, increased
user engagement and increased user participation don’t
come without cost, like implementing SOA and thinking
about development differently, but they should be well
worth it. The business intelligence derived from the Web
2.0 world leading to better, faster decision-making can
often have a significant, measurable ROI.
The costs of migrating to a Web 2.0 world are also,
strictly speaking, not significant over the long-term.
In fact, migrating to an environment that supports
Web 2.0 better will decrease the cost of the overall
enterprise over time. Much of the literature on SOA
shows how to measure the ROI and the value of
migrating to SOA. Web 2.0 is just another impetus for
investing in the enterprise.
Interesting Links
• Google Finance, http://finance.google.com
• Baby Name Voyager, http://babynamewizard.com/namevoyager
• del.icio.us, http://del.icio.us/
• flickr, http://www.flickr.com/
• Programmable Web, http://programmableweb.com
• Yahoo Mail Beta, http://mail.yahoo.com
• Google Maps, http://maps.google.com
• Outlook Web Access, available with a deployed Microsoft Exchange Server
• Flash and Flex, http://www.adobe.com/products/flash/flashpro/ and http://www.adobe.com/products/flex
• Tim O’Reilly. “What is Web 2.0.” http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
• Tim Bray. “Not 2.0?” http://radar.oreilly.com/archives/2005/08/not_20.html
“‘Lightweight languages’ are becoming significantly more popular
as they turn into very robust enterprise-class environments for development.”