

Page 1: TSM_5_2012_en

TSM TODAY SOFTWARE

No. 5 / 2012 • www.todaysoftmag.com

MAGAZINE

Your own startup

Internal SEO techniques - part I

Google Guice

Agile & Testing & Mobile Three converging Concepts

How to Grow An Agile Mentality in Software Development?

Play „Hard Choices” in every sprint and pay your debts

User Experience Design and How to apply it in Product Development

Build dynamic JavaScript UIs with MVVM and ASP.NET

UseTogether. What? How? and Why?

Social networks

Core Data – Under the hood

5 Java practices that I use

Microsoft Kinect – A programming guide

Testing - an exact science

What’s new in Windows Communication Foundation 4.5

Service Bus Topics in Windows Azure

Android Design for all platforms

Malware on Android - statistics, behavior, identification and neutralization

Gogu

Page 2: TSM_5_2012_en
Page 3: TSM_5_2012_en

6 - Your own startup (Călin Biriș)

10 - UseTogether. What? How? and Why? (the UseTogether team)

10 - Internal SEO techniques (I) (Radu Popescu)

13 - Google Guice (Mădălin Ilie)

16 - Agile & Testing & Mobile - 3 converging Concepts (Rareș Irimieș)

20 - Android Design for all platforms (Claudia Dumitraș)

23 - How to Grow An Agile Mentality in Software Development? (Andrei Chirilă)

26 - Play „Hard Choices” in every sprint and pay your debts (Adrian Lupei)

28 - User Experience Design and How to apply it in Product Development (Sveatoslav Vizitiu)

32 - Build dynamic JavaScript UIs with MVVM and ASP.NET (Csaba Porkoláb)

34 - Social networks (Andreea Pârvu)

36 - Core Data - Under the hood (Zoltán Pap-Dávid)

39 - 5 Java practices that I use (Tavi Bolog)

42 - Microsoft Kinect - A programming guide (the Simplex team)

44 - Testing - an exact science (Andrei Contan)

45 - What’s new in Windows Communication Foundation 4.5 (Leonard Abu-Saa)

47 - Service Bus Topics in Windows Azure (Radu Vunvulea)

50 - Malware on Android - statistics, behavior, identification and neutralization (Andrei Avădănei)

52 - Gogu (Simona Bonghez)

Page 4: TSM_5_2012_en

4 nr. 5/2012 | www.todaysoftmag.com

Editorial

Apple has recently released the iPhone 5, and this news is likely to be more popular than an entire football championship. I will not discuss whether the hardware is more interesting than the competition's, because the fact is that they are similar, and the major differentiating factor is the operating system and the diversity of the App Store. From this point of view, iOS 6 has changed the general rules of the game a bit by introducing Apple's own maps, by removing the YouTube application, through better integration with social networks and through improved, almost real-time data synchronization between devices. Paradoxically, this makes it similar to Microsoft's strategy, where we see the same things: their own map system, provided by Nokia using Navteq, very good integration of social networks into the OS, and synchronization support. Android continues the trend of disruptive business by being offered for free, but the multitude of devices that must be supported is sometimes quite challenging for application development: depending on the device, the user experience varies greatly. If all that mattered ten years ago was the hardware, and consequently the device manufacturer such as Nokia or Sony Ericsson, today we pay attention to what the operating system itself provides. The ecosystem to which it gives access is more valuable than the purchased hardware.

From all this, a software developer should understand that a single product can have short-term success; what matters in the long term is the creation of an ecosystem that offers a more complete experience to the target users. For programmers, this lesson translates into an emphasis on diversity and the desire to understand all levels of such an ecosystem and to be actively involved in its development, from the application running on the device down to the backend.

The current issue of the magazine contains 40% more articles than the previous one and can be considered an expression of this diversity. The emphasis on Agile methodology, together with the many articles ranging from defining application layouts on Android devices, SEO optimization techniques, Google Guice, Core Data on iOS, Java best practices and the latest WCF 4.5 news to the Windows Azure Service Bus, user experience design, the statistics and behavior of Android viruses, and many others you can find in the magazine, offers readers a wide range of subjects.

The importance of ecosystems also applies locally, and this is where the TSM magazine supports the community, enabling better communication and the sharing of technical expertise among programmers and beyond. The involvement of academics in collaborating with the IT companies from Cluj, and the increased importance of research, will be discussed during a special event; we will come back with details. Collaboration with local researchers may also be a good opportunity for local startups, which could then prove to be truly innovative.

I conclude by thanking all the contributors, the sponsors and, last but not least, you, the readers.

Thank you,

Ovidiu Măţan

Founder & CEO Today Software Magazine

Ovidiu Măţan, [email protected]

Founder & CEO Today Software Magazine


Page 5: TSM_5_2012_en

5 www.todaysoftmag.com | nr. 5/2012

TODAY SOFTWARE MAGAZINE

Editorial Staff

Founder / Editor in chief: Ovidiu Mățan

[email protected]

Editor (startups and interviews): Marius Mornea

[email protected]

Graphic designer: Dan Hădărău

[email protected]

Marketing collaborator: Ioana Fane

[email protected]

Reviewer: Romulus Pașca

[email protected]

Reviewer: Tavi Bolog

[email protected]

Translator: Cintia Damian

[email protected]

Made by Today Software Solutions SRL

str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj
[email protected]

www.todaysoftmag.com
www.facebook.com/todaysoftmag

twitter.com/todaysoftmag

ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine

Any total or partial reproduction of these trademarks or logos, alone or integrated with

other elements, without the express permission of the publisher is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.

www.todaysoftmag.ro
www.todaysoftmag.com

Mădălin [email protected]

Cluj Java Discipline Lead @ Endava

Leonard [email protected]

System Architect @ Arobs

Radu [email protected]

QA and Web designer @ Small Footprint

Rareș Irimieș[email protected]

Senior QA @ 3Pillar Global Romania

Călin Biriș [email protected]

Călin Biriş is the marketing crocodile of Trilulilu and President of IAA Young Professionals Cluj.

Tavi [email protected]

Development lead @ Nokia

Adrian [email protected]

Project Manager and Software Engineering Manager @ Bitdefender

Echipa [email protected]

Sveatoslav [email protected]

User Experience and User interface Senior Designer

Andreea Pâ[email protected]

Recruiter for Endava and trainer for development of skills and leadership competencies, communication and teamwork

Radu [email protected]

Senior Software Engineer @ iQuest

Zoltán Pap-Dá[email protected]

Software Engineer @ Macadamian

Simona Bonghez, [email protected]

Speaker, trainer and consultant in project management,

Owner of Confucius Consulting

Andrei Chirilă [email protected]

Team Leader, Technical Architect @ ISDC

Andrei Conț[email protected]

Principal QA @ Betfair, Co-Founder of the Romanian Testing Community

Authors

Andrei Avădă[email protected]

Founder and CEO of DefCamp, CEO of worldit.info

Claudia Dumitraș[email protected]

Android Developer @ Skobbler

Page 6: TSM_5_2012_en


Your own startup

I heard that the market in Cluj will change. Wages are rising and have reached a high level, comparable to wages in Western countries. The outsourcing companies will begin to suffer for this reason, and their customers will begin to move their projects to other, cheaper markets.

A healthy way forward for any local outsourcing company would be to start building its own products, addressing multiple client markets. Thus, by diversifying the portfolio, they will not put all their eggs in one basket and the risks will be spread.

But why wait for the company where you work to put you on a team for a new product, when you can found a startup outside office hours and work for yourself? Your advantage at this point is that you earn a good salary, and in your free time you can focus on your own future projects. If you are thinking of a startup, the most important things that you should take into account are: the team, the money and the idea.

The team

The most important resource in a startup is the human capital, i.e. the specialists. Many who are thinking of starting a startup fall into the money trap, believing that without investment money you can't bring a successful product to market, and therefore they never start anything. I think that if you build a good team that delivers a valuable product with a healthy business model, the money will come.

The catch is that this team should be composed of members with complementary skills. When Trilulilu started, the initial project team was made up of the best specialists in design, programming, server administration, management, branding and legal: a handful of people who complemented each other and worked towards the same purpose, to launch a service with impact on the market.

Think of three local online brands that started in Cluj and went on to succeed at national or international level. They are not so hard to find, are they? Still, if you think about the major IT centers in Romania, Cluj is in the top three. We have many IT practitioners who are absorbed by outsourcing companies. Few are the companies that have developed their own products.


Călin Biriș[email protected]

Călin Biriş is the marketing crocodile of Trilulilu and President of IAA Young Professionals Cluj.

Page 7: TSM_5_2012_en


Think about the skills you need in your startup and find the best people to engage with you.

Money

If you start a startup that you think has potential, it's easier than you would expect to reach people willing to provide feedback, or even to invest in your team's idea.

I recently wrote on my blog about the events dedicated to startups, where angel investors meet teams who need money for a product they are developing. A handful of such events already takes place every year.

But not all investors want to advertise the fact that they have money to give. We have heard of people who have money set aside and are waiting for the right team to invest in, even if they don't participate in such events.

Also, you can think of other financing sources such as loans, bank grants or the 3F - friends, family and fools.

But more important than "what you do to get the investment for your startup" is to think about what business model you will follow and how you will make money. If you have a good answer to this, the investment will come along with an increasing number of paying customers.

The idea

I have had several opportunities to talk to developers who wanted to involve me in their project, but who didn't believe it would be a market success. In addition, due to the work on other customers' projects, they had no time to think about their own. The best they could do was to try to find ideas through networking.

By taking part in several startup competitions, I have realized that the best ideas come from people who are oriented not so much towards IT but more towards the business side. Therefore, meetings between business experts and IT specialists, trying to find those startup ideas that would have a chance of success, should be organized more often.

If you don't have a startup idea, try meeting more people from different areas and discover the opportunities they have seen in different markets. Maybe together you can find solutions that could be successful.

But how do you realize that an idea is good?

The easiest way to test an idea is to try to sell the product before it is actually built. You might find customers willing to pay now for a solution that will only be delivered some time later. With the same test you can discover needs that you hadn't originally thought your service or product would cover.

But still, why start working on a start-up and not settle for what you have now?

Because you can:

• work on your own ideas,
• earn more,
• be more appreciated for what you create,
• meet more people and travel more,
• avoid keeping all your eggs in one basket, in case something happens.

There is no point in Cluj being among the top IT centers in the country if we cannot boast about our achievements. It's up to you to try something new, to build a team and start something, anything. You have a lot more to gain than to lose. Think about it!


Page 8: TSM_5_2012_en


UseTogether. What? How? and Why?

It all started about 6 months ago at Startup Weekend Cluj, where a group of 12 newly acquainted people joined hands around a simple, yet powerful idea: why buy when you can borrow? The plan seemed simple enough: all we had to do was put together the best ideas we had and come up with a proof of concept for an online platform centered around the borrowing and lending of objects. Fortunately, the whole experience ended up being more than positive: we left the event with a ton of advice, new ideas, fresh perspectives and, most importantly, we received an all-round thumbs up for the idea on top of which the project was built.

As you might expect, a long series of meetings, plans, sketches and doodles followed, and working between jobs, university, family, friends and all sorts of other obligations, today we can happily say that UseTogether is finally up in beta and is heading (hopefully) in the right direction.

This article will describe the dev adventures behind UseTogether and the stack of technologies that we use and that make our lives easier.

Although we won Startup Weekend Cluj using the classic PHP, MySQL and JavaScript formula, we soon realized that we needed something more powerful and more flexible if we wanted to give the idea behind UseTogether a proper implementation. After a long series of discussions, we finally decided to use Django as the cornerstone of our project. Django is an MVC web framework written in Python which emphasises fast development and simple, pragmatic design. It was first released publicly around 2005, out of its creators' frustration at having to maintain complex websites written in PHP, and it was originally intended to be a thin layer between mod_python and the actual Python code. Nonetheless, sick of having to constantly copy and paste code from project to project, the creators developed the initial thin layer enough that currently, working in Django is pretty much a breeze. Still, why Django and not, say, Ruby on Rails? For us, the main reason was simple: all of us had previous experience with Python and, of course, Django was named after a guitar player, Django Reinhardt.

As a note, the list of popular websites built with Django includes names such as Pinterest, Instagram, Mozilla, The Washington Times and the Public Broadcasting Service.

With the technology issue out of the way, the only thing left to figure out was a choice of project management / bug tracking software. We first settled on Trac. As described on http://trac.edgewall.org/, Trac is a minimal software system which combines a wiki and an issue tracker. It's ideal for projects where a roadmap is firmly in place and face-to-face communication takes place easily and frequently. Unfortunately, our team met none of those requirements. We did not have a clear idea of how we were going to tackle the project and, because we also had jobs and university to attend, we only managed to get together during weekends.

So we had to look for something a bit more complex feature-wise, something that would allow us to stay firmly connected to the development process at all times. Fortunately, the guys at Facebook had similar problems long before us and, as an answer to those problems, Phabricator was born. Phabricator is a stack of web apps meant to make developers' lives a bit easier and, although it looks a lot like systems such as Trac or Redmine, a short visit to http://phabricator.org/ will reveal that Phabricator takes itself a lot less seriously: Facebook engineers rave about Phabricator, describing it with glowing terms like "okay" and "mandatory". From descriptions such as "Shows code so you can look at it" to closing a ticket "out of spite", or the submit button that has been renamed to "Clowncopterize" (because clowns and helicopters are awesome), Phabricator is like a long walk through Nerdtown down Geekstreet. A tool written by programmers for programmers.

Beyond the funny interface that resembles Facebook a lot, Phabricator provides everything you could ask of a modern project management tool: code review, issue tracking, source browsing, a wiki, a CLI and even an API. Versioning-wise, Phabricator can work with Git, Subversion and Mercurial, and it can run on Linux, Mac and even Windows.

For us, code reviews have proven to be a real lifesaver. While you might be willing to give up on code quality and take the quick and dirty road when you're the one writing the code, fortunately we humans are a pretty unforgiving bunch, more than willing to point out every little mistake others have made. And if at times you're blind to your own huge faults, I assure you that this workflow has brought to attention every small and seemingly insignificant hack in our code.

I can’t overstate the importance of code reviews in a project like UseTogether, with a really small team of developers, where each person has to be able to understand and modify any part of the codebase.

And last but not least, our toolset had to include Git. Why Git? Because it’s fun, fast and very easy to use.

In the end, we wish you all as few critical bugs as possible and we leave you with a little piece of wisdom: "Python is a drop-in replacement for BASIC in the sense that Optimus Prime is a drop-in replacement for a truck." - Cory Dodt.

The UseTogether team

From left to right, on the top row: Daniel Rusu, Mircea Vădan, Alex Țiff, Cătălin Pintea, Gabi Nagy, Paul Călin, Adriana Valendorfean, Victor Miron; and on the bottom row: Larisa Anghel, Ioana Hrițcu, Sorina Andreşan, Alina Borbely

Page 9: TSM_5_2012_en


Transylvania Java User Group
Java technologies community.
Website: http://www.transylvania-jug.org/
Started on: 15.05.2008 / Members: 493 / Events: 38

Romanian Testing Community
Community dedicated to QA.
Website: http://www.romaniatesting.ro
Started on: 10.05.2011 / Members: 520 / Events: 1

Cluj.rb
Ruby community.
Website: http://www.meetup.com/cluj-rb/
Started on: 25.08.2010 / Members: 112 / Events: 27

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile development.
Website: http://www.agileworks.ro
Started on: 04.10.2010 / Members: 234 / Events: 13

Cluj Semantic WEB Meetup
Community dedicated to semantic technologies.
Website: http://www.meetup.com/Cluj-Semantic-WEB/
Started on: 08.05.2010 / Members: 125 / Events: 18

Romanian Association for Better Software
Community dedicated to IT professionals with extensive experience in any technology.
Website: http://www.rabs.ro
Started on: 10.02.2011 / Members: 173 / Events: 9

Google Technology User Group Cluj-Napoca
Community dedicated to Google technologies.
Website: http://cluj-napoca.gtug.ro/
Started on: 10.12.2011 / Members: 25 / Events: 7

Cluj Mobile Developers
Community dedicated to mobile technologies.
Website: http://www.meetup.com/Cluj-Mobile-Developers/
Started on: 08.05.2011 / Members: 45 / Events: 2

The community section commits to keeping track of the relevant groups and communities from the local IT industry and to offering a calendar of upcoming events. We start with a short presentation of the main local initiatives, and we intend to grow this list until it contains all relevant communities, both from the local landscape and from the national or international one with a solid presence in Cluj. The order is given by a function of the number of members and the number of activities relative to the group's lifespan; we are thus striving for a hierarchy that reveals the involvement of both organizers and members.

Calendar

September 25 - Technical Days C++
Contact: http://blog.people-centric.ro/news/un-nou-eveniment-technical-days-c-la-cluj

September 26 - Open Coffee Meetup - #HaSH - Hack a Server Hackathon
Contact: http://www.facebook.com/opencoffeecluj/events

September 29 - Windows 8 Dev Camp
Contact: http://codecamp-cluj-sept2012.eventbrite.com

October 2 - Patterns for Parallel Programming
Contact: http://www.rabs.ro

October 17 - HTML5 and Problem solving in Web Design
Contact: http://www.meetup.com/Cluj-Semantic-WEB

November 9 - Artificial Intelligence, Computational Game Theory, and Decision Theory - Unifying paths
Contact: [email protected]

Local communities

Page 10: TSM_5_2012_en


Internal SEO techniques part I


Meta tags

The purpose of the <title> tag is identical to the purpose of the <meta description> tag: to increase the click rate on the results page. Using interesting titles or keywords within the tag will increase this click rate. In case you want to use a brand name in the <title> tag, there are two approaches. If the brand is famous, it is recommended to place it at the beginning of the tag (e.g.: Brand Name | Page Name). In the case of small or new brands, we add the brand's name at the end of the tag (e.g.: Page Name | Brand Name). These approaches are based on the idea that an important brand name has a very powerful impact on the decision to click on that specific result, while unknown brands will not attract clicks. The length of the <title> tag has to be between 10 and 64 characters; since words with no descriptive value, such as "of", "on", "in", "for", etc., occupy this space, we have to avoid them.

There is a myth related to <meta keywords>: that you need to have as many keywords as possible in a page, because they are important in the SERPs. This is not true. Back in September 2009, Matt Cutts (who at the time was the Search Quality Manager) announced on one of Google's blogs that the famous search engine would no longer take <meta keywords> into account. Adding and using this tag is not harmful, but it still takes time we could use for other tasks.

Another myth is that the <meta description> tag helps a page appear higher in the Google results. In fact, this meta tag's purpose is to convince as many people as possible to click on our result. <meta description> is a tool which helps very much to increase the click rate in the SERPs. All the people who see a results page (after a Google search) have to decide which of the links they will click. The description that offers a very good or interesting summary will determine most people to click on it. A more controversial approach in this meta tag's case is the use of very interesting but incomplete descriptions (cut off in the middle of an idea with

Radu [email protected]

QA and Web designer @ Small Footprint

In the previous issue we saw the most important changes in Google's search algorithms. In the current issue we decided to write an article presenting some of the most important search engine optimization techniques applicable in the area of internal SEO. These techniques, although easy to apply, offer very good long-term results concerning the growth of organic traffic.

Page 11: TSM_5_2012_en


suspension points). This will make the person faced with that result want to find out more details about the description, so he will click on it. Concerning the length of the description, it should have between 50 and 150 characters in order to obtain the best results.
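As an illustration of the two tags discussed above, a hypothetical <head> (the site, brand and wording are invented for this sketch) could look like this, with a small-brand title of about 30 characters and a deliberately truncated description of about 95:

```html
<head>
  <!-- small/new brand, so the brand name goes at the end of the title -->
  <title>Handmade Oak Desks | WoodCraft</title>
  <!-- deliberately cut off mid-idea, per the technique described above -->
  <meta name="description"
        content="Solid oak desks built to order. Discover why our 30-day finishing process changes...">
</head>
```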

The Sitemap.xml file and its role

The Sitemap is an XML file that helps search engines find more easily the pages they will index, by offering a list that contains all the site's URLs. This file was first used by Google in 2005, and in 2006 the MSN and Yahoo search engines announced that they would also use the sitemap for indexing. The file format is very simple and anyone can create it. There are, though, free online generators that can do this in a much shorter time; among these, one of the best and fastest is www.xml-sitemaps.com. Note that if this file contains over 50,000 URLs or is larger than 10 MB, it has to be compressed in gzip format.

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
        <!-- this section is repeated for every page -->
        <loc>http://example.com/</loc>  <!-- page address -->
        <lastmod>2006-11-18</lastmod>  <!-- date of the last content change -->
        <changefreq>daily</changefreq>  <!-- how often the content changes -->
        <priority>0.8</priority>  <!-- page priority (from 0.0 to 1.0) -->
    </url>
</urlset>

Once created, the Sitemap.xml file needs to be added to the site root, and it will be accessible at an address such as www.example.com/sitemap.xml.
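For sites whose pages change often, the file can also be generated in code instead of by an online tool. The following is a minimal, hypothetical Java sketch (the class and method names are our own, not from the article) that emits a Sitemap.xml document for a list of URLs, using a fixed change frequency and priority:

```java
import java.util.List;

// Minimal sketch: builds a Sitemap.xml document for the given page URLs.
// changefreq and priority are hard-coded for simplicity; lastmod is omitted
// and would normally come from each page's real modification date.
public class SitemapGenerator {
    public static String generate(List<String> urls) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"utf-8\"?>\n");
        sb.append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String url : urls) {
            sb.append("  <url>\n");
            sb.append("    <loc>").append(url).append("</loc>\n");
            sb.append("    <changefreq>daily</changefreq>\n");
            sb.append("    <priority>0.8</priority>\n");
            sb.append("  </url>\n");
        }
        sb.append("</urlset>\n");
        return sb.toString();
    }
}
```

In a real application, this output would be written to sitemap.xml in the site root whenever the list of pages changes.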

HTML code validation and its importance

One thing that is very often forgotten or left out because of short development time is the validation of the HTML code. We all remember the days when most sites had small labels or logos saying that the website contained valid HTML and CSS code. Actually, they weren't just something to brag about, but really helped with SEO. Search engines have to parse the HTML code in order to find the content. If there are errors, it is possible that some parts of the website cannot be taken into consideration, and all the content optimization work will be for nothing.

Among the most common HTML validation errors, we mention the following:

• Unclosed tags. The <div> tag must be closed with </div>, while the <img> tag is closed in a single, self-closing form: <img/>
• Using an incorrect DOCTYPE
• Missing ALT attribute inside the image tag
• Incorrect tag nesting. An incorrect way of nesting is <div><b>TEXT</div></b>; the correct way is <div><b>TEXT</b></div>
• Not converting special characters into entity symbols. For the "©" character we use "&copy;" in the HTML code

Checking the validation of the HTML code can be done using a free tool, available to anyone at validator.w3.org. Although not all errors have a negative impact on SEO, it is recommended to have valid and clean code, at least for professionalism and quality reasons.

Page 12: TSM_5_2012_en


<b> vs. <strong> and <i> vs. <em>

We have to understand that when search engines look at a web page, they don't see the same thing as a real person. They see the source code and they analyze it. Because of this, although for us a text inside <b> tags (bold) looks the same as a text inside a <strong> tag, search engines see two totally different things. The <b> tag is perceived as a design tag, giving the words a more pronounced style, while the <strong> tag carries semantic information which emphasizes the content. Generally, the <em> tag will be used for content generated by the users (testimonials or reviews) and the <strong> tag to mark different keywords inside the content; for pure text styling, it is recommended to avoid as much as possible the <b> and <i> tags and to use the font-weight and font-style CSS properties instead.
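To make the distinction concrete, here is a small hypothetical fragment (the text is invented for illustration) contrasting the semantic tags with CSS-based styling:

```html
<!-- semantic markup: search engines treat these as meaningful emphasis -->
<p>We build <strong>handmade oak furniture</strong> in Cluj.</p>
<p><em>Great service, fast delivery!</em></p>  <!-- e.g. a user review -->

<!-- purely visual styling: prefer CSS properties over <b> and <i> -->
<p style="font-weight: bold; font-style: italic;">decorative text only</p>
```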

Redirecting the non-www address to the www address

A 301 redirect is a type of permanent redirection which transfers over 90% of the optimization and SEO benefits from the redirected domain to the new domain. Generally, this type of redirect is used when moving a website to another domain without losing much of the organic traffic coming from Google. It is often overlooked that the addresses www.example.com and example.com are seen as different by search engines, so the SEO power of the external links may be cut in half. A 301 redirection of the address example.com to www.example.com will help us obtain a better rank. To better understand this, imagine that we have 50 links from some friends' blogs to example.com and 50 links from various other websites to www.example.com. Those 100 links are divided between two different addresses, but with a simple 301 redirect we can have 100 links to the same address, almost doubling the authority of our website. This type of redirection can be done through the hosting dashboard, by using an .htaccess file in the case of websites built with PHP, or by using the web.config file in the case of websites that use Microsoft technologies.
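For the .htaccess case, a commonly used mod_rewrite sketch for this redirect looks like the following (example.com is a placeholder; this assumes Apache with mod_rewrite enabled):

```apache
RewriteEngine On
# redirect every non-www request to the www host, preserving the path
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```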

Conclusions

A very important thing to know is that excessive optimization may have a negative impact on your website since the Penguin update was introduced. Using exactly the same word or the same group of words in the meta tags, in the page title, in the <h1> tag and also inside the content may bring a penalty. It is recommended to use as many variations as possible of the same keyword, or different constructions of a certain group of words. Although most of these techniques seem simple at first glance, they bring a big advantage in search engine optimization. The internal branch of SEO is related mainly to the actions of the people who build or own a website, rather than to external factors; this makes us pay special attention and gives us the confidence that the results will be real. In the next issue, we will continue this article with other techniques for internal optimization.


Page 13: TSM_5_2012_en


Google Guice

The servlets will benefit from:

• Constructor injection
• Type-safe configuration
• Modularization
• AOP

In this article I'll present the following scenarios:

• Developing a web application from scratch
• Adding Guice to an existing web application

Starting a new web application using Guice

Besides the core Guice libraries presented in the previous article, we must also add guice-servlet.jar to the application's classpath (please check the end of the article for the Maven dependency).

After the classpath is properly configured, we must first define the Guice filter in web.xml. This will actually be the only configuration present in this file.

This is the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <display-name>Guice Web</display-name>
    <filter>
        <filter-name>guiceFilter</filter-name>
        <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>guiceFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
</web-app>

We are sure now that all requests will be processed by Guice.

As promised in the previous article, I'll continue presenting Google Guice, this time for web applications. For this you'll need the servlet extension – part of the standard distribution, along with other extensions like JMX, JNDI, Persist, Struts or Spring. Using Guice, web.xml is reduced to a minimum – it only has to start the Guice container. The rest of the configuration is easily done in Java, in the same type-safe manner presented in the previous article.

Mădălin Ilie
[email protected]
Cluj Java Discipline Lead @ Endava

The next step is creating the Injector and defining the Modules. Besides the "normal" modules used by an application – presented in the previous article – in order to actually use the Guice Servlet extension we must declare an instance of com.google.inject.servlet.ServletModule. This module is responsible for setting the Request and Session scopes, and it is the place where we'll configure the servlets and filters of the application.

Considering that we are writing a web application, the most logical and intuitive place to create the Injector is a ServletContextListener. A ServletContextListener is a component that fires just after the application is deployed and before any request is received by the server. Guice comes with its own class that must be extended in order to create a valid injector. As we'll use the Servlet 3.0 API, we'll annotate this class with @WebListener – we won't need to declare it in web.xml.

As I said, the ServletModule is the place where we configure our servlets and filters. Below is the content of the class that extends this module. It configures a servlet mapped to all .html requests:

package endava.guice.modules;

import com.google.inject.servlet.ServletModule;

import endava.guice.servlet.MyServlet;

public class MyServletModule extends ServletModule {

    @Override
    protected void configureServlets() {
        serve("*.html").with(MyServlet.class);
    }
}

The ServletContextListener:

package endava.guice.listener;

import javax.servlet.annotation.WebListener;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;

import endava.guice.modules.MyServletModule;

@WebListener
public class MyGuiceConfig extends GuiceServletContextListener {

    @Override
    protected Injector getInjector() {
        return Guice.createInjector(new MyServletModule());
    }
}

Please note, inside the getInjector() method, the creation of the Injector based on the servlet module defined before. If the application has many other modules, all of them must be declared here.

Also, you can see how intuitive the declaration of the servlet mapping is.

This is the MyServlet class:

package endava.guice.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.inject.Inject;
import com.google.inject.Singleton;

import endava.guice.service.MyService;

@Singleton
public class MyServlet extends HttpServlet {

    private static final long serialVersionUID = 1861227452784320290L;

    @Inject
    private MyService myService;

    protected void service(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println(
            "Service: " + myService.doStuff());
    }
}

Let's analyze this code:
1. A servlet must be a singleton – mark it using the @Singleton annotation – otherwise the application will throw an Exception;
2. We use field injection in order to get a MyService instance;
3. The class extends HttpServlet just like any other servlet.

The MyService interface:

package endava.guice.service;

import com.google.inject.ImplementedBy;

@ImplementedBy(MyServiceImpl.class)
public interface MyService {
    String doStuff();
}

And its implementation:

package endava.guice.service;

public class MyServiceImpl implements MyService {

    @Override
    public String doStuff() {
        return "doing stuff!";
    }
}

The application is ready to be deployed. This is the result of calling index.html: "Service: doing stuff!".

Integrating Guice into an existing web application

In order to integrate Google Guice into an existing web application we must make sure that everything is in place:
• the required jars are in the classpath;
• the Guice filter is defined in web.xml;
• we have a ServletContextListener that extends GuiceServletContextListener.

We can go in two directions:
• use Guice only for new things – of course, this is not a best practice, but it is a normal scenario for big applications with legacy code;
• Guicefy the entire application – the ideal case.

For the second case you must follow the same path presented in the first part of the article.

For the first case we'll end up using DI in servlet classes that are not instrumented by Guice. We can access the Injector instance using the ServletContext:

Injector injector = (Injector) request.getServletContext()
    .getAttribute(Injector.class.getName());

In order to get all the dependencies injected you can:
1. call injector.injectMembers(this) – this will inject all the dependencies;
2. call injector.getInstance(clazz) for each instance that needs to be injected.

Request and Session Scope

The servlet extension adds two new scopes: Request and Session. We'll see next an example of using the Session scope.

We'll slightly modify some of the classes presented before. Since we need to mix scopes – accessing an object with a narrower scope from an object with a wider scope (a Session-scoped object from a Singleton) – we'll use Providers (see the Providers section for details).

The servlet module will look like this:

package endava.guice.modules;

import com.google.inject.servlet.ServletModule;
import com.google.inject.servlet.ServletScopes;

import endava.guice.provider.PojoProvider;
import endava.guice.servlet.MyServlet;
import endava.guice.servlet.PojoClass;

public class MyServletModule extends ServletModule {

    @Override
    protected void configureServlets() {
        serve("*.html").with(MyServlet.class);
        bind(PojoClass.class)
            .toProvider(PojoProvider.class)
            .in(ServletScopes.SESSION);
    }
}

Please note the ServletScopes.SESSION binding.

The PojoProvider class:

package endava.guice.provider;

import com.google.inject.Provider;

import endava.guice.servlet.PojoClass;


public class PojoProvider implements Provider<PojoClass> {

    public PojoClass get() {
        return new PojoClass();
    }
}

The PojoClass class:

package endava.guice.servlet;

public class PojoClass {

    private String name;

    public void setName(String s) {
        this.name = s;
    }

    public String getName() {
        return this.name;
    }
}

In order to prove that the application is actually working, we'll modify the MyServlet class to display additional information:

package endava.guice.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.Singleton;

import endava.guice.service.MyService;

@Singleton
public class MyServlet extends HttpServlet {

    private static final long serialVersionUID = 1861227452784320290L;

    @Inject
    private Provider<PojoClass> pojoClass;

    @Inject
    private MyService myService;

    protected void service(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {

        response.getWriter().println(
            "Service: " + myService.doStuff() + " with ");

        if (pojoClass.get().getName() == null) {
            pojoClass.get().setName("name");
        } else {
            pojoClass.get().setName("existing name");
        }
        response.getWriter().println(
            pojoClass.get().getName());
    }
}

In order to demo the functionality, we'll access the application twice in the same session. The first time it will display:

Service: doing stuff! with
name

The second time it will display:

Service: doing stuff! with
existing name

This example can be easily changed in order to use Request scope instead of Session scope.
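Conceptually, the SESSION scope caches the provider's result per HTTP session: the first lookup in a session calls the Provider, and later lookups return the cached object. A minimal plain-Java sketch of that idea follows – the SessionScopedProvider class and the string session ids are illustrative, not part of the Guice API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: caches one instance per session id, the way
// Guice's SESSION scope caches one instance per HttpSession.
class SessionScopedProvider<T> {

    interface Factory<T> { T create(); }

    private final Factory<T> factory;
    private final Map<String, T> perSession = new HashMap<>();

    SessionScopedProvider(Factory<T> factory) {
        this.factory = factory;
    }

    // Returns the session's cached instance, creating it on first access.
    T get(String sessionId) {
        return perSession.computeIfAbsent(sessionId, id -> factory.create());
    }
}
```

Calling get() twice with the same session id returns the same instance, while a different id gets its own instance – the behavior observed in the servlet above.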

This is how a simple Guice web application looks. I tried to touch the most important points of the Guice Servlet extension. As mentioned in the previous article, this is just a small intro. You can continue experimenting with different situations and you'll learn the most by actually using Guice in a real project.

Guice as a Maven dependency

<dependency>
    <groupId>com.google.inject</groupId>
    <artifactId>guice</artifactId>
    <version>3.0</version>
</dependency>
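The servlet extension used throughout this article ships as a separate artifact. Assuming the same 3.0 version as core Guice, the additional dependency would be:

```xml
<dependency>
    <groupId>com.google.inject.extensions</groupId>
    <artifactId>guice-servlet</artifactId>
    <version>3.0</version>
</dependency>
```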

Providers

Providers address the following situations:
• A client needs more instances of the same dependency per injection
• A client wants to get the dependency when it will actually use it (lazy loading)
• You want to inject a narrower scoped object into a wider scoped object
• Additional logic is needed in order to create the object being injected
• You want to control the process of creating instances per binding

As you can see in the previous examples, it is very easy to write a Provider. You just need to implement the Provider<T> interface, where T is the concrete type of the object being injected.


Agile & Testing & Mobile – Three converging Concepts

A modern overview of the IT universe reveals mobile technology as a particularly dynamic domain. This market sector is presently disputed between three major competitors, namely Apple, Nokia and the extended family of Android devices (Samsung, Motorola, Sony-Ericsson, etc.). Until recently, mobile devices only offered users access to basic applications (e-mail, browsers, calculator and rudimentary games), yet nowadays we are bombarded with financial, health and insurance applications, personal assistant applications and advanced graphics games.

Rareș Irimieș
[email protected]
Senior QA @ 3Pillar Global Romania

These applications have quickly led us to use our mobile phones or tablets throughout the day, even for the least important tasks. As customer needs evolved, mobile application developers have shifted their focus from the qualitative aspect, favoring the complexity of their product. Increasing customer demands have led to frequent problems related to the time allocated for the development and testing of applications.

A solution to this issue can be found by choosing the development methodology best suited to customer needs, one which requires as little time and as few resources as possible. Standards shall be established at company level and will have to be adjusted and implemented so as to respect both customer demands and the product for delivery. By following the "Agile" methodology, the company that I work for has managed to define a set of guidelines that ensure the product will correspond to the customer's vision and market ideal.

These basic principles are:
• defining the testing strategy;
• defining the testing objective;
• constant adaptation to objective changes;
• identifying risks and establishing a prevention strategy;
• continuous feedback.

We shall further discuss each of these principles, targeting the mobile application universe and analyzing their individual impact upon product development.

Defining the testing strategy

Even though we cannot predict the obstacles we could come across during testing, it is essential to establish the basic testing strategy from the very beginning. We shall therefore need to:
• establish the testing area;
• determine the testing environment;
• define the types of testing to be applied.

Testing area

The customer plays a very important role in establishing the area of testing (devices, operating systems, platform combinations, browsers etc.). The QA (Quality Assurance) team has to inform the customer about the risks presented by a reduced area of testing or about likely risks once the application is being developed. To gather this information, the team needs to prospect the market so as to establish which are the most commonly used devices and operating systems. During a customer-team discussion, the following aspects must be agreed upon:
• which platforms are targeted in development;
• which device and operating system combinations testing should be focused on;
• how likely it is that mobile phone operators or device producers have implemented common functionalities upon which the application would rely.

Due to limited time or budget, actual testing will not be able to cover enough devices and operating system versions to considerably reduce the incidence of bugs. In this event, it is recommended to test the most frequently used device-operating system combinations, as well as the devices with the most reported bugs.

Another major step in establishing the area of testing is positioning the application on the market (games, social, banking, health etc.), since we will be able to decide which devices to use for testing according to the market segment the application is intended for. For example, we would need to cover as many devices as possible for a banking application to ensure easy access for users, whereas for games we would focus on devices mostly used by children (iPod, iPad, Android tablet).

Another problem that may occur during the development of a complex application is using APIs provided by existing applications (Facebook, Twitter, etc.) or by third-party libraries (TapJoy, iAD etc.). The customer needs to be aware that integrating these components may push the deadline back or may have a notable impact upon the application's stability (third-party bugs that cannot be fixed in due time, or whose avoidance could unpleasantly affect the application's functionality).

Testing environment

Once the area of testing has been established, the testing environment needs to be defined. We shall consider the following aspects:
• which testing types to use;
• the costs of obtaining the specific environment;
• the testing effectiveness on such an environment;
• whether the generated environment can be used for testing other applications.

We can thus make a comparison between the following testing environments: emulators, physical devices and shared infrastructure.

Emulators

The emulators are convenient to use, but a series of characteristics specific to emulator testing have to be taken into account if they are used as the main testing environment:
• effective in: Sanity testing, Compatibility testing, Functional testing, Smoke testing;
• free download;
• 40% of testing can be executed on the emulator;
• device-operating system combinations can be obtained;
• not fully effective for day-to-day scenarios;
• relatively slow testing environments;
• intensely used in automation testing.

Shared infrastructure

This alternative has a few features worthy of mention:
• effective in: Exhaustive testing, Compatibility testing, Interruption testing, Functional testing, Regression testing;
• not free (DeviceAnywhere, PerfectoMobile);
• testing can cover a wide array of device-operating system combinations;
• slow, because an internet connection is required;
• effective for reproducing day-to-day scenarios.

Physical Devices

Testing on real devices can be considered the closest to the authentic experience, since user behavior can be perfectly simulated. The features important to remember are:
• effective in: Functional testing, Smoke testing, Acceptance testing, Regression testing, Bug fix testing;
• accurate for reproducing problems that occurred while using the device;
• 50% of testing has to be conducted on a physical device;
• effective in automation testing;
• offers quick and precise results.

Testing types

After defining the testing environment, the team has to decide which testing types to resort to. Be it manual or automated, the testing process must be adapted to the project complexity, the allocated resource budget, as well as the amount of time available. Consequently, when comparing manual with automation testing, we must take the aspects below into account:

Which is the best testing method?

Manual testing is 100% effective, since it is easily able to mimic user behavior and should be used for at least 70% of the entire project. Automation testing is only 60-70% effective, because the scripts applied cannot be fully complete or correct. Presently, the question of Manual vs. Automation is highly debated and most conclusions state that automation is not able to substitute manual testing.

What are the costs of each method?

To provide a short answer, we need to consider the complexity of the application and the amount of time allocated for testing. Automation testing is more costly in the initial phase, in which time and resources are consumed by developing automated scripts and by setting up and using a Continuous Integration environment. Still, this method will prove useful in the regression testing stage. On the other hand, manual testing is able to isolate bugs in the earliest stage of product development, but presupposes higher costs in terms of the physical devices required.

What are the advantages of each testing method?

Automation testing is very useful in the regression testing stage, saving a great deal of test execution time, and plays a leading role in load testing or performance testing for the entire project. Certainly, as previously stated, the possibility of integrating automation testing into a Continuous Integration system (e.g. Bamboo, Jenkins/Hudson etc.) should be considered.

The main testing methods for manual execution are Functional and Bug-fix testing, and the objectives are:
a. User interface (fonts, colors, navigation from one screen to the next);
b. Basic application functionalities;
c. Usability testing – buttons such as "Home" or "Back" in each screen;
d. Download, installation or uninstallation of the application;
e. Interruption testing – application behavior when it runs in the foreground and the user receives a text message or phone call;
f. Monkey testing – chaotically inserting characters or images in order to test the application's resistance to stress.

Aside from manual and automation testing, increasingly more companies resort to "crowd sourcing" services, which serve to establish whether the product is correctly built by handing it to a group of future users (Alpha testing).

Testing Strategy

Having detailed the previous three testing components (area, environment and type), a four-step mobile application testing strategy can be conceived:

1. Plan
a. Understanding the documentation and requirements;
b. Establishing the devices and combinations with operating systems;
c. Describing the emulators or the physical devices to be used in testing;
d. Describing the testing type to be used (manual or automation).

2. Design
a. Finding adequate tools (for Automation, Test Case Management, Continuous Integration);
b. Procuring the devices;
c. Establishing test scenarios.

3. Development
a. Creating test data;
b. Creating test scenarios.

4. Execution
a. Configuring the test environment;
b. Actual testing;
c. Bug reports;
d. Execution matrix.

Defining the testing purpose

In the Agile-Scrum methodology, the QA process unfolds throughout the entire project, in each sprint. At the end of each sprint, all applications must have compilable code which can be tested. Given that the time generally allocated for QA is 2-4 days, the test plan needs to be very clear and has to define the degree of quality of the code when the Demo is ready.

During the first sprints (where new functionalities are generally implemented), the tester will only be able to validate the happy flow. As the code becomes more stable, automation testing, performance testing and negative testing scenarios will be possible. Depending on the complexity of the product, the manager needs to allocate 1-2 sprints for testing and bug-fixing before the product is installed on the production environment and becomes public.

Continuous alignment to scope change

This rule is rather uncomfortable for most testers. Since in "Agile" the priorities and functionalities of the application may change, the test plan has to be constantly updated to reflect the changes. For example, the decision is made to allow users access to the backend part of the application, in order for them to add or delete other user accounts. This, however, has not been mentioned in the initial documentation of the application. In this case, new methods of testing will be integrated into the test plan:
• Security testing (depending on the user's role, access to the database is granted or denied);
• Load testing (checking behavior in the event that multiple users access the database at the same time).

This is a mere example, but openness towards a constant shift of purpose and methodology is always welcome.

Risk assessment and mitigation strategy

It is highly important that the test plan foresee and include risks that may arise during the development and testing stages. For example, in order to test a more complex functionality, additional testing time is required. The risk prevention strategy could consist in allocating more resources for a short period of time, to allow testing as many risk scenarios as may be executed by a user, or, alternatively, could mean reducing the number of tests and prioritizing them depending on how they affect the application.

Continuous feedback

Perhaps most importantly, in order to obtain feedback, a permanently open dialogue needs to be carried on between the QA team and the customer. When striving for quality, it is essential that new functionalities or major changes brought to the application be discussed with each individual involved in the project. These modifications have a great impact upon the test plan and the quality of the existing product. For this reason, the QA team has to be involved when these decisions are made by the customer or the development team.

To support this rule, one could also bring into discussion the idea of testing in the early stages of the project and a high degree of involvement of the testers and the product manager in the quality assurance process. The quality of the product needs to be ensured collaboratively and the development team must, in this sense, be aware of the quality they produce.

Conclusions

Testing mobile applications will become increasingly difficult with the passing of time, as products become more complex and customers grow more demanding. Opting for the correct strategy regarding testing types and environments lightens the workload, cuts costs and may help identify possible problems in advance. Perhaps the most important ingredient is not being reluctant towards change and the ability to adapt to an ever-changing purpose.

Testing mobile applications has in itself become a challenge, since the mobile environment is the most dynamic of its time.


Android Design for all platforms

I would like to emphasize from the very beginning that this article does not bring anything new to Android programming; it is rather a synthesis of the information available through the Android system.

Claudia Dumitraș
[email protected]
Android Developer @ Skobbler

One of the problems that Android application developers face is launching an application that works correctly on all supported platforms. What works perfectly on some phones may not work at all on others, forcing some to give up the attempt of covering a category as large as Android phones, including tablets.

The main constraints to be taken into account when writing an application are:
• the supported SDK version;
• runtime configuration options such as language, phone orientation (portrait or landscape), operators, etc.;
• different hardware versions, each with its own limits;
• phone or tablet size.

Graphic constraints

A special place in the design and development of an application is occupied by the graphic architecture, in order to cover as many types of phones as possible. A GUI should be dynamically built in order to change according to the expectations and needs of users; the visual language must be taken beyond static screens for a flexible usage of the visual elements. Creating applications with an advanced UI – one that reacts quickly to user action – is not enough. It should also be an intuitive one, so that all elements are clear and fully visible.

Fortunately, the Android framework is continually growing in this direction, namely to support the development of applications for several types of phones. All graphical models that each version of the SDK brings should not be considered restrictions, but rather ways of development in this direction.

From the outset, the UI framework was designed to adjust according to the available screen space. An example at hand might be the ListView component, which is able to change its height according to the screen size, which varies between QVGA, HVGA and WVGA.

Graphic design for mobile phones

The graphics are constructed using .xml files called layouts. A new concept of screen density was introduced in Android 1.6, making scaling on multiple

resolutions much easier, even when phones have about the same physical dimension (e.g. the option was very useful for high-resolution, Droid-like phones that appeared on the market). The screen density means the number of pixels on a given screen surface or, in other words, dots per inch. Thus, the layouts can be classified into "small" for QVGA, "normal" for HVGA and "large" for WVGA, making it possible to use different resources according to the screen's dimensions.

Depending on the configuration qualifiers present in the application, the system is able to choose the appropriate resources according to the specific characteristics of the screen. A configuration qualifier is a string added to resource directory names. The example below shows a list of resource file names for different layouts, as well as image files for "low", "medium" and "high" screen densities:

// Layout for "normal" screen size
res/layout/my_layout.xml

// Layout for "small" screen size
res/layout-small/my_layout.xml

// Layout for "large" screen size
res/layout-large/my_layout.xml

// Image file for "low" density
res/drawable-ldpi/my_icon.png

// Image file for "medium" density
res/drawable-mdpi/my_icon.png

// Image file for "high" density
res/drawable-hdpi/my_icon.png

Even if your application has multiple resource directories, it is good to use certain standards in all existing layouts; namely, it is recommended to use dp (density-independent pixels) as the unit of measurement. A dp is a virtual unit, describing the sizes and positioning of graphical elements in a manner independent of density. One dp is equal to one physical pixel on a screen of 160 dpi (dots per inch), which is the reference for a "medium" density screen. At run time, the platform handles any scaling of the dp units needed, based on the actual density of the screen in use.
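The dp-to-pixel relationship described above can be written as px = dp * (dpi / 160). A tiny plain-Java sketch of the conversion (outside Android, where the platform's DisplayMetrics would normally supply the density factor):

```java
class DpConverter {

    // Reference density: 1 dp == 1 px on a 160 dpi ("medium" density) screen.
    private static final float REFERENCE_DPI = 160f;

    // Converts a dp value to physical pixels for a screen of the given dpi.
    static int dpToPx(float dp, float screenDpi) {
        // Adding 0.5f rounds to the nearest whole pixel.
        return (int) (dp * (screenDpi / REFERENCE_DPI) + 0.5f);
    }

    public static void main(String[] args) {
        // 48 dp on a 240 dpi (hdpi) screen
        System.out.println(dpToPx(48, 240)); // prints 72
    }
}
```

So the same 48 dp button occupies 48 px on a "medium" (160 dpi) screen but 72 px on a "high" (240 dpi) screen, keeping its physical size constant.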

Graphic design for tablets

For the first generation of tablets running Android 3.0, the proper way to declare tablet layouts was to put them in a directory with the xlarge configuration qualifier (for example, res/layout-xlarge/). In order to accommodate other types of tablets and screen sizes – in particular, 7" tablets – Android 3.2 introduces a new way to specify resources for more discrete screen sizes. The new technique is based on the amount of space your layout needs (such as 600dp of width), rather than trying to make your layout fit the generalized size groups (such as large or xlarge). Although 7" tablets and 5" phones are seemingly close to each other in size, separating them was required, because both fell into the same "large" group. The amount of screen space of these two devices is significantly different, as is the style of user interaction.

To make it possible for you to provide different layouts for these two kinds of screens, Android now allows you to specify your layout resources based on the width and/or height actually available for your application's layout, specified in dp units. For example, after you've designed the layout you want to use for tablet-style devices, you might determine that the layout


Figure 1. Strategies for handling tablet layout design in portrait and landscape.
Source: http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en//events/io/2011/static/presofiles/designing_and_implementing_android_uis_for_phones_and_tablets.pdf


stops working well when the screen is less than 600dp wide. This threshold thus becomes the minimum size that you require for your tablet layout. As such, you can now specify that these layout resources should be used only when there is at least 600dp of width available for your application's UI:

// For 7" tablets (600dp wide and bigger)
res/layout-sw600dp/main_activity.xml

// For 10" tablets (720dp wide and bigger)
res/layout-sw720dp/main_activity.xml

For tablets, it is usually desired to use all the available space, expanding the graphics across the whole screen, which does not always have a pleasing result. A solution at hand is to divide the layout into smaller panels, a technique called multi-pane. This separation must be made in an ordered and clear way, so that the content becomes more detailed, and the division must be kept regardless of the tablet orientation. Figure 1 shows some strategies proposed by the Android team in order to resolve any problems that might appear in portrait or landscape.

Fragments

The implementation of multiple panels is most easily achieved using fragments (the Fragments API). A fragment can be a special function, or part of a graphical interface. You can combine multiple fragments in a single activity to build a multi-pane UI. This is an advantage, because a fragment can be reused in multiple activities. Each sub-layout can be divided into fragments. A fragment can be regarded as a mini-Activity (Activity being the Android class that supports the graphics) which cannot function independently, but must be included in an Activity class. A fragment must always be embedded in an activity and the fragment's lifecycle is directly affected by the host activity's lifecycle. For example, when the activity is destroyed, so are all its fragments. However, fragments can be added or removed while the activity is running.
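As a sketch, a two-pane tablet activity can declare its fragments directly in a layout file; the fragment class names below are hypothetical:

```xml
<!-- res/layout/two_pane.xml : list and detail panes side by side -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment android:name="com.example.app.ItemListFragment"
        android:id="@+id/list_pane"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1" />

    <fragment android:name="com.example.app.ItemDetailFragment"
        android:id="@+id/detail_pane"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="2" />
</LinearLayout>
```

A phone version of the application would instead place each fragment alone in its own activity, reusing the same fragment classes.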

The minimum SDK version

The minimum SDK version is an attribute declared in the application's Android manifest file (android:minSdkVersion; the related android:targetSdkVersion states the API level the application targets). It is usually chosen to cover the largest possible number of mobile phones on the market. This number represents the lower API limit and is very important because it restricts which functions you can use. The lower the version number, the more devices you reach, but the more trouble can arise from API limitations; also, many of the native widgets may have an outdated look.
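A minimal sketch of the corresponding manifest declaration (the version numbers are illustrative for the 2012 device landscape, not prescribed by the article):

```xml
<!-- AndroidManifest.xml excerpt: run on Android 2.2+ while targeting Jelly Bean -->
<uses-sdk android:minSdkVersion="8"
          android:targetSdkVersion="16" />
```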

Conclusion

It is important that the application design is compatible with all available platforms, thus increasing the number of users. This, however, is not enough. Each screen size provides different opportunities and challenges for user interaction, so to be really impressive we need to take a step further and optimize the experience for each type of configuration.

Even if you cannot buy all the latest devices on the market to test your application, Android provides multiple ways to test the final result on multiple platforms. But before that, each application must comply with all the standards required by the system.

Figure 2. Example of defining graphical modules containing fragments, which can be combined into a single activity on tablets but kept separate on phones. (Source: http://developer.android.com/guide/components/fragments.html)

References
http://developer.android.com/guide/practices/screens_support.html
http://android-developers.blogspot.ro/2011/02/android-30-fragments-api.html
http://developer.android.com/guide/components/fragments.html
http://www.google.com/events/io/2011/sessions/designing-and-implementing-android-uis-for-phones-and-tablets.html
http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en//events/io/2011/static/presofiles/designing_and_implementing_android_uis_for_phones_and_tablets.pdf
http://www.youtube.com/watch?v=2jCVmfCse1E&feature=relmfu
http://developer.android.com/guide/topics/manifest/uses-sdk-element.html
http://developer.android.com/training/multiscreen/index.html




TODAY SOFTWARE MAGAZINE

Building a team

It is easy to understand from the principles of the Agile Manifesto that the agile movement models people as unique individuals, not replaceable resources, and that the greatest value it brings is given not by individual performance (although this plays an important role) but by the power of the interactions and collaboration between individuals. Agile best practices recommend building cohesive, self-organizing teams with clearly defined roles, with the end goal of delivering added value. This means, of course, that the team members are empowered to act as they think best, according to their own principles and without anybody from the outside pushing a particular way of working onto them. It also means that they are equally responsible for their results and performance.

Building an agile team or, in other words, the way people are assigned to a certain team (based on their competences, abilities, compatibility etc.) is crucial for having a successful team. On a larger scale, building an agile team is very important for developing an agile environment and, hence, an agile company, which ultimately should result in creating an agile mentality; more details will follow later on. Regarding the team, its configuration is as important as the responsibilities of its members and their way of collaborating to achieve successful relationships. The compatibility of team members, the way they act and interact, are crucial for growing a successful team.

Influencing and mimicking performance

There is no better way of describing the essence of the agile mentality than starting from the principles of the Manifesto for Agile Software Development:

„We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.”

agilemanifesto.org

management

How to Grow An Agile Mentality in Software Development?

Andrei Chirilă [email protected]

Team Leader, Technical Architect @ ISDC


is obvious when the team has not reached a certain maturity level and when the most experienced individual is perceived as a model by the rest of his or her colleagues. The power of example is greater than we can imagine. Generally speaking, attitude is easily copied. People have the natural tendency to behave like the people they empathize with, to copy or to reject the attitude of their superiors, and because of that the entourage, the team and the working context are very important, as they can set the ground for growing a team and a collective vision.

An anonymous saying states: “To be inspired is great, but to inspire is incredible”. A true leader will be recognized and appointed by the team and will know how to motivate the others to go beyond their limits. The role of this leader (or Scrum Master in the context of a Scrum set-up) is so important because (s)he not only needs to make people perform but also to protect them when they are vulnerable.

Empowerment and responsibility

Most often, the attitude of people within a company towards empowering teams and making them responsible makes the difference between a mature company and a less experienced one. A team is more productive when its members feel responsible for their actions and also empowered to decide what is best for their product and the context they work in.

In a classic set-up, the power of a manager and the insight and single-point control of projects are just an illusion, mainly because people feel tempted to see this act as a way of deferring responsibility to higher levels. In this context, the manager is usually informed while the team member is indirectly freed of any “charge” of being responsible, of acting.

In an agile environment, the perspective is different, because the empowerment of individuals is meant not only to motivate but also to improve the management of decisions. The even spread of information and responsibility across a network (i.e. the team, in our case) is more efficient than isolating them in a single node of the network (i.e. an individual, in our case). A team that values code writing over adding business value and over getting involved in decision making, without taking responsibility, cannot be considered a mature team. Generally speaking, people should be encouraged and empowered to make decisions with the information they have access to; they should be motivated to build their own success stories without waiting for other people (i.e. their bosses) to lead them towards success.

Responsibility and empowerment don't naturally appear overnight, but they can be “encouraged”. If initiating this process is the tough part, once started, things will evolve on their own until a certain maturity level is reached.

Mature teams don't normally need too much maintenance. They own enough information and have a great deal of experience to fix most of the problems they have to deal with. Moreover, they should acknowledge the limits of their circle of influence and also know when to ask for help. That is why a true team leader has to encourage the team to strive for more by making its members get out of their comfort zone, while equally protecting them from outside forces and from the risk of overestimating their strength. Shortly put, a true team leader will know how to grow a team, to coach it and to protect it.

People over Processes

Although the Agile Manifesto talks about the ‘people over processes' paradigm, it is important to notice that this does not mean that processes do not add value to software development. On the contrary, they should complement the agile model. Even though some ‘process improvement' frameworks (such as CMMI) identify the areas to be improved, they do not come with an actual approach. This is where the agile methodology can help, because the need for changing and perfecting the way of working is a core pillar of the agile philosophy, if we think of the well-known principle of “inspecting and adapting”.



The “inspect and adapt” principle promotes changing the things that are not optimal for the team through the development of microprocesses, processes that are mostly relevant in the context of one team. In the long term, these microprocesses can turn into macroprocesses because, by being shared, they can help other teams and the organization learn from history. The more frequent the sharing, the more beneficial it becomes to the other teams and, in the end, to the organization, because it promotes learning from past mistakes, establishing best practices and building a collective identity.

How vs What vs Why

The members of an agile team will face challenges they will have to cope with; they will learn and, in the end, they will be forced to evolve. Driven by their past experiences, they will develop new processes, rules and procedures that will help them be more efficient in the way they work, in the quality they deliver and in the way they learn from each other. By doing so, people will actually focus time, effort and attention on perfecting the “How”. While some teams will do this fine, other teams will be “rockstars” at doing it.

But here comes the interesting part. The more mature an agile team is, the more it will move its focus from defining the functionality it has to deliver (the “What”) and, subsequently, from the way this is done efficiently (the “How”) towards the reason behind the scenes for building functionality, the business value (the “Why”). To sum up, a mature team will want to hear the business needs more often; it will challenge invalid requests, it will give support and ask for support, and it will focus on adding value and not just on developing applications.

But the question is why an overworked, firefighting team should be interested in knowing the “Why” and its roots. The answer is relatively simple: the more you challenge the reason, the more you challenge yourself and your client, and the more you develop a stronger relationship with that client.

Related to this topic, there is an interesting technique called the “5 Whys Technique”, which came into the spotlight in the IT field several years ago as a way of getting to the root cause of problems, but ended up being used for challenging the needs as well. Through this change of perspective, the team is no longer considered to do software development per se; it is also seen as having a more strategic position, because it is proactively involved in the decision-making process.

A similar model can be found in Simon Sinek's Golden Circle, a simple yet powerful model of inspirational leadership, which stresses the role of the “Why” and the way answering this question can lead to a conventional or an exceptional resolution of problems.

An agile mentality means much more than being agile

As simple as it might sound at first, having an agile mentality means much more than being or behaving in an agile way. Being agile translates to having an agile behavior, which is determined by the characters of individuals, by their experiences in agile projects, and by the way they act and interact within a team. An agile mentality is not born within individuals. On the contrary, it is determined by sharing experiences, opinions, best practices and behaviors across teams, and it equally contributes to the construction of a knowledge inventory. In other terms, an agile mentality cannot exist prior to already existing agile teams. An agile mentality has impact at higher levels. In the beginning, the set-up is given by an agile environment promoted by agile teams, which gradually leads to growing an agile mentality. An agile mentality then stimulates the growth of an agile company, which in turn leads to more agile teams, and so on and so forth.

The agile mentality is the one that has visible impact at all levels of a company and the one that sustains scalability and growth in the long run. The more people and teams share a common agile vision and, hence, an agile mentality, the more the organization becomes capable of facing change and, ultimately, of evolving. Nowadays, the IT world has turned into a complex, chaotic system. Companies are stuck, incapable of innovating and bringing extra value, unless they evolve and adapt to the needs of their market. Having an agile mentality means being open to change and innovation, and that is why the vision people have and share about innovation plays such an important role. An anonymous saying states that “innovation is the ability to see change as an opportunity and not as a threat”, because through innovation we can make ourselves better and, still through innovation, we are empowered to change the world around us.


Play „Hard Choices” in every sprint and pay your debts

But what do countries' debts mean in the story of software debt? We can surely draw an analogy: here, too, we are talking about large and small organizations, about their ability to be competitive or not, about their power to grow faster or slower, and about their speed in reacting to customer needs.

Agile development methodologies tell us that you can quickly adapt to market needs if you are agile. But what if you know you're agile, yet in order to respond quickly to customer demands you need to sweep some dirt under the carpet? How many times can you do that? If you are a project manager, how often can you say that the technical team has not delivered on time? If you are a developer, how many times can you re-estimate and delay a task? Can you demand every time to re-implement everything from scratch? If you are a testing engineer, how many times can you delay the release because you received the binaries yesterday and didn't have time to test them? If you are a release manager, how many times can you say you should have released yesterday but did not have the OK from the QA team? Where do all these problems come from?

I think you got the idea: software debt sums up all the decisions through which you postponed or even ignored valuable activities in software development. There are certain expressions that point out that the team is accumulating debt:
• We don't have time right now, but we'll do it later.
• We don't need this now, we will do it when it becomes necessary.
• Where I worked before we did not do this and things went well.
• We don't know how to do this, or we have not done this kind of thing before.
• We don't have time to do this now, but if something goes bad we will find a solution.

And if debts are extremely high, we sometimes hear expressions like “that worked fine on my computer”, “we could not reproduce it” or “it should have been finished yesterday”.

It is clear that in the short term the team has no problem. But what happens in the long term? You may encounter problems during a demo and then need one or more sprints to pay your technical debt. You can have big problems at customers and be unable to produce a quick fix without damaging other components. Couldn't the product or service you are working on lose its customers or, even worse, the business go bankrupt?

I am absolutely convinced that some

Welcome to the debt era! America owes about 16 trillion dollars, and if you haven't had the chance to go to New York's Times Square, you can see this debt directly online on sites like http://www.usdebtclock.org/. It is no surprise to anyone that Europe is in a debt crisis, and the countries that excel in this chapter are Greece, Spain, Italy, Ireland and Portugal, with almost 120 billion euros borrowed. Romania, too, is affected by this crisis or participates in it through various political and economic actions. Thus, debt is everywhere! Do we need to pay the debt? When?


Adrian [email protected]

Project Manager and Software Engineering Manager Bitdefender



readers of this article are expecting methodologies, recipes, tools or agile software development practices, but because everything depends on you and the changes that you make every day in your organization, we will talk a bit about a board game invented by the Software Engineering Institute.

The Hard Choices game is a simulation of the software development cycle, meant to communicate the concepts of uncertainty, risk, options and technical debt. In the quest to become market leader, players race to release a quality product to the marketplace. By the end of a game, all players have experienced both the implications of the effort invested in doing a good job in order to gain competitive advantage, and the price paid for taking a shortcut. Players can apply either strategy, or even combine the two, to cope with uncertainty, and this is similar to a team choosing one methodology over another for software development.

The game has a few simple rules so that it can be played quite easily, just as Scrum and Kanban have few rules. Likewise, the playing rounds can be compared to the sprints or iterations that agile teams are already familiar with. The rules of the game point out that it does not matter only who finishes first, but also the number of points collected during the game, because each collected tool represents a valuable point that helps determine the final ranking.

The most interesting aspect of the Hard Choices game is that when you decide to take a shortcut, you are penalized until you pay your debt, similar to the sprints dedicated to paying off “technical debt”. How much delay will the lack of unit testing bring? What is the penalty for not having automated tests to validate the build? What is the cost of delaying the delivery of software or bug fixing? I wonder what the cost is when the customer does not have the same experience after a software upgrade.

When players have a tool card and land on a tool square, they may play it immediately by rolling the dice, or collect another tool card for points. This rule is similar to the decisions made within a sprint, for example in a retrospective: decisions taken to improve a particular component or a particular process, whose improvements can be seen immediately, in the next sprint, by rolling the dice, or only at the end, when the product is released to the customers and the game ends.

To better explain software debt, a few engineers have tested the game using the following three strategies:

1. The player always chooses the shortcut and is penalized throughout the whole game, losing one point every time he rolls the dice.

2. The player chooses the shortcut but pays immediately, by sitting out instead of playing.

3. The player never chooses the shortcut.

Strategy 2 turns out to be the winning one, while strategies 1 and 3 are about equally effective under the simple rules of this game. But what if we had to sit out two rounds, or sprints, in order to pay the debt? What if we were penalized two points for each roll of the dice? Clearly, strategies 1 and 3 would then be differentiated in terms of efficiency.

However, the purpose of the game is not to reach the end as quickly as possible but to gain points or, as they say in agile, “to deliver value”. It matters who reaches the end first, because they win more points, but it also matters how many points you accumulate along the way from collecting tools.

In conclusion, debt accumulation is closely related to postponing decisions and ignoring certain practices that may give long-term results. Perhaps it is easier to say that you do not get involved in the team because you want to learn another technology or to change projects, but aren't you likely to find elsewhere just as much, or even higher, debt? And what if other teams or companies do not include you in the team because you are not open to pair programming or Test Driven Development?


User Experience Design and How to apply it in Product Development

frustration. It is all becoming more and more about creating a good experience for users. Now it is not good enough to just be usable: the design has to fit into people's lives. It actually has to make people happy and anticipate their needs. [2]

User experience (UX) includes all aspects of the end user's interaction with the company, its services and its products. A good user experience meets the exact needs of the customer. To achieve high quality, the user experience must be a seamless merging of the services of multiple disciplines, including interface design, industrial design, engineering and marketing.

Every designer affects the user experience; every decision shapes the future of the people we design for. The primary goal of any good designer is communicating the intended message in a way that leads to a positive user experience. The colour of the text, the alignment of the page, the images, the design patterns: all are part of this communication, the hints which often remain unnoticed or unexpressed, yet ultimately make up the user's experience.

This may include interactions with your software, your web site, your call centre, an advertisement, with a sticker on someone else’s computer, with a mobile application, with your Twitter account, with you over email, maybe even face-to-face. The sum total of these interactions over time is the

user experience. [3].

PRINCIPLES OF USER EXPERIENCE

For most people, change evokes fear and stress, while familiarity with a product, on the other hand, is comforting: it allows us to live and operate with a certain level of ease that doesn't require active thought at all times. Building successful products rarely happens by doing what everyone else is doing. Successful products happen by fundamentally changing people's perception of what will fulfil their need, and by providing a painless transition to the “new” product.

Good Experiences are Simple

“Simplicity is the ultimate sophistication.” - Leonardo da Vinci

A product is simple when it is so complete that there is truly nothing else to add and nothing else to take away; when things are perfectly understood and represented by their state and appearance, so that to behold them is to understand and know them. If people can understand or use something with little difficulty, then we have made something simple. But simple is not always easy to design; it only seems that way.

Lifecycle

As users interact with your product or service, they proceed through a series

User Experience Design (UX) refers to a concept that places the end user at the focal point of design and development efforts, as opposed to the system, its applications or its aesthetic value alone. The requirement for an exemplary user experience is to meet the exact needs of the customer. One of the primary goals of any good designer is communicating the intended message in such a way that it leads to a positive user experience.

Design is less and less about solving problems, and testing less and less about eliminating

UIX

Sveatoslav [email protected]

User Experience and User Interface Senior Designer


of steps called the usage lifecycle [4]. Like other lifecycles, the usage lifecycle has a beginning, a middle and an end.

Lifecycle Stages:
• First Contact - People become aware of the product.
• First Time Use - The first actual usage of the product, when the user seriously considers the long-term commitment. It is the first real impression of the product.
• Ongoing Use - Regular use of your product.
• Passionate Use - Users reach a state in which they are highly immersed.
• Death - People stop using the product.

Every user is somewhere in this lifecycle, and it defines the user's context as much as anything else does. To “know your user”, we need to know where they are in it and what they are doing.

Conversation

UX is really just good marketing. It is about knowing who your market is, knowing what is important to them, knowing why it is important to them, and designing accordingly. It is also about listening after you have designed, and adjusting to the changing marketplace: improving the experience of those in your market. It is easy to recognize this when you consider that users = market. That is what users are: your users are the market you are designing for.

Invisibility

When people are having a great experience, they rarely notice the work that has been put in to make it happen. Great UX is so successful that nobody talks about the designers.

Context

There is no right answer to a design problem; there are only bad, good and better answers for the current situation. Each potential solution sits within a particular context. To find the better answers to your design problem, you need to know the context it sits within: what you are trying to achieve, what a successful outcome is and what you have to do to get there. For many application and web design problems, designers also need to know the users' goals, what people want to do, what they already know and what the content is.

Social

We think too much about what we are trying to achieve, about what we have designed or built, and thus in terms of what it does or should do. This leads us to think in terms of controlling outcomes or tweaking features for new behaviours. Social is happening out there, and your users do not have you or your product in mind, but their own experiences. Change your frame.

We don't deal with people anymore; we deal with social lives.

USER EXPERIENCE STRATEGY

Design strategy is about serving people. The real challenge is trying to solve the human problem: understanding people's needs and aspirations, and then meeting them in some way. But sometimes their need is to be surprised and delighted, and they cannot tell us how to surprise and delight them. That has to come from us, as creative people in our profession.

A User Experience Designer who can accurately answer the following questions can make a product that instantly resonates with the customer.

In order to capture that vision and strategy, a User Experience professional should be included at the outset of the project, as they are uniquely positioned to help answer each of these questions:

What problem are we trying to solve?

Before we can begin building products, we must identify what problem the product is attempting to solve. Articulate the problem as clearly and concisely as possible. Once we are able to do this, we have a lens through which we can answer the remaining questions, further clarifying our purpose and informing our strategy going forward.

Who is the customer?

This is one of the most important questions to answer for any new product. Without a clear understanding of our target audience, we run the risk of building something that doesn't meet or fit our customer's expectations, use cases or mental model of what they came to us for. User research, ethnography and personas are components of the UX toolset that can help answer this question.

Where can we improve on existing patterns and solutions?

Understanding how others have attempted to solve the same or a similar problem is extremely valuable for understanding which pitfalls to avoid, identifying where an existing pattern failed or needs improvement, and revealing the moments in the customer's journey where we can surprise and delight them when others have failed. We will find ourselves asking this question over and over throughout the entire design process.

When should we begin to get user feedback?

Early and often is usually the answer to this question, but that may not always be true. If we introduce it too early, we run the risk of letting the user drive the product development. If we introduce it too late, we may miss out on valuable feedback that could have saved us from overbuilding. Understanding who our customers are and knowing where we can improve on existing solutions will help us know when to incorporate user testing and feedback into the product development cycle.

Why does our product solve the problem?

Having identified what problem we are trying to solve, who our target users are, and where we can improve on existing solutions, we should be able to articulate why our solution solves the problem. This should be the foundation for short-term goals within a long-term strategy.

How can data help you understand what you are building?

There are countless analytics that can be used to validate assumptions, confirm design decisions and clarify our product/market fit. Techniques such as data mining, eye tracking, A/B testing, user flows, ethnographic research, usability benchmarking and many others can be used to gain better insights. No detail is too small to test: copy, layout, interaction; in all of these cases good data can help us understand the results we are seeing and adjust accordingly.

Companies that start by answering these questions and engaging in a process of UX-driven design have a much greater chance of going on to create viable and longer-lasting products than those that start building without these answers. In the ever-expanding start-up world, those who value and apply UX from the start will be those at the head of the pack.

An objective tool for measurement and analysis helps us provide our clients with


fact-based recommendations, as opposed to simple conjecture and opinion. The methodology we'll explore in this article will help you to:
• Remove subjectivity from the equation as much as possible.
• Enable people with different backgrounds (designers, developers, clients) to share a common understanding of the product.
• Provide our clients with a fact-based, visual representation of their product's benefits and limitations.

The user experience is primarily made up of four factors [9]:
• branding
• usability
• functionality
• content

Independently, none of these factors makes for a positive user experience; taken together, however, they constitute the main ingredients of a product's success.

Branding

Branding includes all the aesthetic and design-related items within a product. It entails the product's creative projection of the desired organizational image and message. Statements used to measure branding can include:
• The visual impact of the product is consistent with the brand identity.
• Graphics, collateral and multimedia add value to the experience.
• The product delivers on the perceived promise of the brand.
• The product leverages the capabilities of the medium to enhance or extend the brand.

Functionality

Functionality includes all the technical and ‘behind the scenes' processes and applications. It entails the product's delivery of interactive services to all end users, and it is important to note that this sometimes means both the public and administrators. Statements used to measure a product's functionality can include:
• Users receive timely responses to their queries or submissions.
• Task progress is clearly communicated.

Usability
Usability entails the general ease of use of all product components and features. Statements used to measure usability can include:
• Effectiveness: a user's ability to successfully use a Website to find information and accomplish tasks.
• Efficiency: a user's ability to quickly accomplish tasks with ease and without frustration.
• Satisfaction: how much a user enjoys using the Website.
• Error frequency and severity: how often users make errors while using the system, how serious these errors are, and how users recover from them.
• Memorability: if a user has used the system before, can he or she remember enough to use it effectively the next time, or does the user have to start over and learn everything again?

Content
Content refers to the actual content of the product (text, multimedia, images, etc.) as well as its structure, or information architecture. We look to see how the information and content are structured in terms of defined user needs and client business requirements. Statements used to measure content can include:
• Content is structured in a way that facilitates the achievement of user goals.
• Content is appropriate to customer needs and business goals.
• Content across multiple languages is comprehensive.

Measuring the effectiveness of design is new for many designers, and indeed we are still very early on in being able to do it well. There are several reasons why design isn't measured, including not having agreement on success metrics, not knowing how to measure a positive user experience with your product/service, and not being able to put measurement methods in place. None of this is easy: it takes a culture dedicated to gathering feedback and improving on it, the ability to access customers and web site analytics data, as well as the scheduling ability to iterate and get things done when metrics aren't going in the right direction.

Every project may need these tools in a different order, depending on the challenges you're confronted with. Each of them, applied at the right time, can give the most clarity and boost productivity [13]:

Sketches are usually hand-drawn graphics that contain screen ideas or explanatory graphics outlining the high-level problem or solution. They are the most valuable when the idea hasn't been fully formed, explored, or realized in any way. Sketches will help you understand what general pieces are needed to accomplish your goals.

Wire-frames tend to be computer-generated graphics illustrating the organization of content, features and functionality. Prioritizing the elements of a design and determining general page layout can be a very messy part of any project. A well-built wire-frame will help you pull apart the individual pieces and make sure they are appropriate to the goal of the page.

Mock-ups are rich graphics intended to simulate the look and feel of a project so you can understand the impact visual elements have on the brand. A mock-up can set the right impression and communicate emotions and personality. Without actually building the website (which a lot of people do), there really isn't any other way to concretely define what a website should look like.

Prototypes are partially complete versions of a website used to understand how pages interact with each other and flow from one

Figure 1. The User Experience is made up of four interdependent elements.



area to another. More complicated interactions between sophisticated components might require a fully functional prototype to actually understand. When there are a lot of moving parts and goals have multiple steps involved, HTML prototypes can really help you find the gaps in your plans.

For most of us, the majority of our work involves refining, updating and improving existing systems. However, we must never forget that our job is fundamentally about shaping and creating the future. As designers, the very heart of what we do is to visualize in our mind what does not presently exist and then set about creating it.

We spend our time categorizing, organizing, labelling and identifying patterns and components. New design frameworks are emerging with the goal of enabling reusable, highly extensible designs and providing a roadmap to innovation. As our applications and products become increasingly more complex, we certainly don't want to spend our time re-inventing the wheel. But these systems, frameworks and "best-practices" can also prevent us from breaking out of present patterns and making the space and time to envision something entirely new.

CONCLUSIONS
Usability testing is one of the best things we can do to understand whether or not people can use the product as intended, and from there make informed iterative improvements.

We have often heard people saying that usability testing is difficult or inconclusive or even a waste of time. We believe that many of these notions stem from misunderstanding the process and requirements of a formal usability test. Yes, quantitative, qualitative and comparative tests can be overwhelming to develop. The truth is that some testing is better than no testing at all. You may still be thinking that this is not going to work for you. The "do-it-yourself" style of usability testing has definitely begun to resonate with designers, engineers and product managers.

It can help you improve your product, which in turn will make for happy, satisfied customers.


Knockout works well with ASP.NET and with other server-side technologies. The rendered HTML should contain a reference to the Knockout library.

Why Knockout?
• It introduces elegant dependency tracking – the UI is updated every time the data model changes.
• It uses declarative bindings – through a specific syntax, Knockout allows us to connect the UI to the data model.
• It is extensible – a lot of plugins have been released, and Knockout already has a well-formed, active community around it.

The DOM can be easily manipulated by jQuery, but jQuery doesn't perfectly separate the functionality from the UI. The result of using the Knockout library is much more structured code. Knockout uses the MVVM pattern, and with it we can eliminate complicated intersecting event handlers: everything in Knockout is declarative.
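The dependency-tracking idea can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the mechanism, not Knockout's actual implementation: reading is a call with no arguments, writing is a call with a value, and every write notifies the subscribed bindings.

```javascript
// Simplified observable: a function that reads when called with no
// arguments, writes (and notifies subscribers) when called with one.
function observable(initialValue) {
  let value = initialValue;
  const subscribers = [];
  function accessor(newValue) {
    if (arguments.length === 0) return value;   // read: accessor()
    value = newValue;                           // write: accessor(x)
    subscribers.forEach(fn => fn(value));       // notify all bindings
  }
  accessor.subscribe = fn => subscribers.push(fn);
  return accessor;
}

const firstName = observable('John');
const rendered = [];
firstName.subscribe(v => rendered.push(v));  // stands in for a DOM binding
firstName('Jane');
// rendered is now ['Jane'] and firstName() returns 'Jane'
```

With the real library, the subscription is created for you by a declarative `data-bind` attribute instead of an explicit `subscribe` call.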

About the MVVM pattern (Model View ViewModel)
The Model-View-ViewModel (MVVM) pattern helps us separate the business level from the user interface (UI), eliminating many development and design problems and making testing and support more accessible.

The parent of MVVM is the MVP (Model-View-Presenter) pattern. In this pattern the Model represents the application data, the View represents the user interface, and the Presenter links the View to the Model. The Presenter has references to the Models and to the Views.

Properties of MVVM:
• The Model stores the data that needs to be passed to the UI. Each business class (domain object) belongs to the Model.
• Between Views and ViewModels there is a one-to-one relation; each View has its own instance of a ViewModel. The functionality is implemented in the ViewModel. In this way the functionality is clearly separated from the UI.
• The separation of the implementation logic from the UI means that it can be more easily unit tested and maintained. Unit tests are very important in the case of a complex application. The MSDN site shows the following diagram for MVVM (Figure 1).
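The separation above can be sketched in a few lines of plain JavaScript (all names here are illustrative, not taken from any framework): the Model holds data, the ViewModel exposes it plus presentation logic, and the View renders only what the ViewModel provides.

```javascript
// Model: plain domain data, no UI knowledge.
const model = { firstName: 'John', lastName: 'Harlid' };

// ViewModel: wraps the model and adds presentation logic.
const viewModel = {
  firstName: model.firstName,
  lastName: model.lastName,
  fullName() { return this.firstName + ' ' + this.lastName; }
};

// View: knows nothing about the Model, only about the ViewModel.
function renderView(vm) {
  return '<span>' + vm.fullName() + '</span>';
}

// renderView(viewModel) produces '<span>John Harlid</span>'
```

Because `renderView` depends only on the ViewModel, the presentation logic in `fullName` can be unit tested without any DOM at all, which is exactly the maintainability benefit described above.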

If we use ASP.NET MVC on the server side, then the Business Logic and Data will be a controller class with functions returning JSON. Using MVC routing, each function of the controller gets its own URL, which can be called from Razor (the View). The Presentation Logic will be a JavaScript class with the same structure as the C# class, and the UI logic will be defined in the Razor template.

The ViewModel can be generated from JSON by using the Knockout mapping plugin:

var viewModel = ko.mapping.fromJS(jsonString);

<div id="PatientForm">
  <p><label>Firstname: </label>
  <span data-bind="text: firstName">

Knockout is a JavaScript library that helps us create desktop-like web pages with a clean data model, keeping the UI perfectly synchronized with that model. Knockout is an open-source project created by Steve Sanderson, a developer at Microsoft. Knockout is his personal project; it doesn't belong to Microsoft. The source code can be downloaded from GitHub, and relevant documentation and notifications can be found at http://www.knockoutjs.com

The source code is native JavaScript; no third-party JavaScript libraries are used.

programming

Build dynamic JavaScript UIs with MVVM and ASP.NET

Figure 1. - MVVM pattern

Csaba Porkolá[email protected]

Software developer @ Macadamian


</span></p>
  <p><label>Lastname: </label>
  <span data-bind="text: lastName"></span></p>
  <p><label>Fullname: </label>
  <span data-bind="text: fullName"></span></p>
</div>

<script language="javascript" type="text/javascript">
var viewModel = {
  firstName: ko.observable('John'),
  lastName: ko.observable('Harlid')
};
viewModel.fullName = ko.computed(function() {
  return viewModel.firstName() + " " + viewModel.lastName();
});

ko.applyBindings(viewModel, document.getElementById('PatientForm'));
</script>

We have a defined viewModel and an HTML template in the examples above. The applyBindings function binds the viewModel to a DOM element. If we call ko.applyBindings(viewModel) with only the viewModel parameter, then the binding is applied to the HTML body element.

Properties firstName and lastName are observables: if their value changes, the UI is updated. fullName is a computed property, updated whenever the value of firstName or lastName changes. An observable is a function which returns the latest value when it is called without any parameter.

We can use arrays as observables by using the observableArray class. Knockout has a lot of predefined functions for array manipulation, such as push, remove and removeAll.
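As a rough plain-JavaScript sketch of what an observable array does (again simplified, not Knockout's implementation), the mutating methods notify subscribers so that bindings such as foreach can re-render:

```javascript
// Simplified observable array: push/remove/removeAll mutate the
// underlying array and then notify every subscriber.
function observableArray(initial) {
  const items = initial.slice();
  const subscribers = [];
  const notify = () => subscribers.forEach(fn => fn(items.slice()));
  const accessor = () => items;               // read: accessor()
  accessor.subscribe = fn => subscribers.push(fn);
  accessor.push = item => { items.push(item); notify(); };
  accessor.remove = item => {
    const i = items.indexOf(item);
    if (i >= 0) { items.splice(i, 1); notify(); }
  };
  accessor.removeAll = () => { items.length = 0; notify(); };
  return accessor;
}

const students = observableArray(['Ana']);
const renders = [];
students.subscribe(list => renders.push(list.length)); // fake foreach re-render
students.push('Ion');     // renders: [2]
students.removeAll();     // renders: [2, 0]
```

In real Knockout code you would simply write `ko.observableArray(['Ana'])` and let a `foreach` binding do the re-rendering.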

The predefined bindings can be divided into three categories (we are going to enumerate only a few of them):
1. Bindings controlling text and appearance:
a. Visible binding: causes the associated DOM element to become hidden or visible.

<div data-bind="visible: isVisible">

(isVisible is an observable returning a boolean value)

b. Text binding: sets the innerText property of a DOM element.
<span data-bind="text: myMessage">
(myMessage is an observable returning a string)

c. Html binding: sets the innerHTML property of a DOM element.
<span data-bind="html: myHtml">
(myHtml is an observable returning a string)

d. Css binding: adds or removes one or more named CSS classes to the associated DOM element according to one or more conditions.

<span data-bind="css: { errorWarning: errorCount() > 0, hide: errorCount() == 0 }">

e. Style binding: adds or removes one or more style values on the associated DOM element according to one or more conditions.

<span data-bind="style: { color: errorCount() > 0 ? 'red' : 'black' }">

f. Attr binding: provides a generic way to set the value of any attribute of the associated DOM element.
<a data-bind="attr: { href: url, title: details }">

2. Control flow bindings:
a. Foreach binding: duplicates a section of markup for each entry in an array, and binds each copy of that markup to the corresponding array item.

<ul data-bind="foreach: students">
  <li data-bind="text: name"></li>
</ul>

The example above displays student names. Inside the foreach, $index gives the index of the current item and $parent refers to the parent object.

b. If binding: allows us to include or skip a DOM element according to a boolean observable. If displayMessage returns false, the "div" element is skipped. Instead of a boolean observable we can use complex logical expressions.

<div data-bind="if: displayMessage">Here is a message.</div>

3. Event bindings:
a. Click binding: executes a function defined in the ViewModel when the click event occurs.

<button data-bind="click: doSomething">

b. Event binding: attaches a JavaScript event handler to a DOM element.

<div data-bind="event: { mouseover: enableDetails, mouseout: disableDetails }">

Virtual bindings
"Control flow" bindings can also be applied to virtual DOM elements:

<ul>
  <li class="heading">My heading</li>
  <!-- ko foreach: items -->
  <li data-bind="text: $data"></li>
  <!-- /ko -->
</ul>

Custom bindings
We are not limited to the bindings defined in the Knockout library; we can define new ones. We have to register the binding as a subproperty of ko.bindingHandlers:

ko.bindingHandlers.legaturaNoua = {
  init: function(element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
  },
  update: function(element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
  }
};

The init function is called when the binding is first applied to an element. The update function is called once when the binding is first applied to an element, and again whenever the associated observable changes value.

The functions above have the same input parameters:
• element – the DOM element involved in this binding.
• valueAccessor – a JavaScript function that you can call to get the current model property involved in this binding. Call it without passing any parameters to get the current model property value.
• allBindingsAccessor – a JavaScript function that you can call to get all the model properties bound to this DOM element. Like valueAccessor, call it without any parameters to get the current bound model properties.
• viewModel – the view model object that was passed to ko.applyBindings. Inside a nested binding context, this parameter is set to the current data item.
• bindingContext – an object that holds the binding context available to this element's bindings. This object includes special properties such as $parent, $parents and $root that can be used to access data bound against ancestors of this context.
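To show the shape of such a handler, here is a hypothetical custom binding, uppercaseText, exercised against a plain stand-in object instead of a real DOM element so it can run outside the browser. With Knockout loaded, the same handler would be registered on ko.bindingHandlers and receive a real element.

```javascript
// Hypothetical custom binding: writes the bound value, uppercased,
// into the element's textContent.
const bindingHandlers = {};  // stand-in for ko.bindingHandlers
bindingHandlers.uppercaseText = {
  init: function (element) {
    element.textContent = '';                     // one-time setup
  },
  update: function (element, valueAccessor) {
    // valueAccessor() yields the current bound value, as described above.
    element.textContent = String(valueAccessor()).toUpperCase();
  }
};

// Simulate what the framework does: call init once, then update on change.
const fakeElement = { textContent: '' };
bindingHandlers.uppercaseText.init(fakeElement);
bindingHandlers.uppercaseText.update(fakeElement, () => 'hello');
// fakeElement.textContent is now 'HELLO'
```

In a page, the binding would be used declaratively, e.g. `<span data-bind="uppercaseText: myMessage">`.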

Conclusions
Knockout is a compact library. It is best suited for complex, rich web pages with a lot of Ajax calls. The library becomes slow when the page is overcrowded with bindings: generating 30 sections bound to observableArrays can take 1 second, and the transition is visible.

Figure 2. - The KO classes


There is a saying: „If you're not on Facebook, you're not alive". The first time I heard it I started to laugh because, honestly, it is very funny. Facebook has reached more than 1 billion users, and only now can I say that the previous statement is truly valid. Everything happens on Facebook, from recruitment campaigns to the actual selection of candidates. Facebook has its pluses and minuses like every website, and even though it is considered a personal space, let's be honest: every post on the internet becomes visible and public to others. Lots of recruiters use Facebook as a very good source for identifying candidates. It has happened to me as well to find more accurate information on Facebook than on the classical recruitment websites. On Facebook everybody has an up-to-date profile: from the companies where they are employed to all the extra activities they do.

The purpose of this article is not to be a user guide for Facebook (I am sure everybody knows how to use it), but certainly very few of us consider the impact of what we publish on our Facebook page on recruiting and selecting the best candidate. This is why I am going to describe every big section of the website through the eyes of a recruiter: „This is what I would like to discover on every Facebook profile".

About you section – I consider it extremely relevant to find information about the candidate with whom I am going to interact face to face during an interview. The more detailed it is, the bigger the impact it can have in a recruiter's eyes. What kind of information are recruiters looking for? I would answer with an example, because it is much easier. E.g.: a person whose quote is „Nothing is impossible" could be results-oriented and a problem solver.

Work and Education section – I recommend leaving it visible for everybody, because the biggest plus comes from continuously updating and improving it and, if possible, attaching a link to the website of the companies where you currently work or have worked.

Contact Info section – in this section it is totally up to you whether you want to make all your personal contact information public. For me as a recruiter, the ideal situation would be to find at least a phone number in order to be able to contact you,

Surely everybody has heard about „Social Networks", because lots of articles have been written on this topic: the history of social networks, their advantages and disadvantages, or the confidentiality of information. The purpose of this article, however, is to identify the impact this kind of site can have on the recruitment process. There are a lot of social networks, but in the following pages I will present only two of them: Facebook and LinkedIn.

HR

Social networks

Andreea Pâ[email protected]

Recruiter for Endava and trainer for development of skills and leadership competencies, communication and teamwork


but, as I previously said, it depends on you how much of it you want to keep confidential.

Events section – if you are actively involved in different events and you are a good networker, use this section to increase your contact list with persons who share the same interests or hobbies, because the more people you know, the more visibility you have on the market.

In the end I will talk about a very disputed subject: „Pictures". I have heard the question „What kind of pictures should I post on Facebook?" so many times. I have to confess that every time I have answered in a very personal way, because I consider that you can publish as many pictures as you want, as long as they do not put you in embarrassing situations.

I started with „Facebook" because it is the most used social network, but it still has to improve a lot on the professional side. Facebook will surely become one of the most important sources of information about us; in the near future companies will pay large amounts of money just to have access to very detailed information. This is why I recommend you to think twice about what kind of information you make official on Facebook.

Another social site I will talk about is „LinkedIn", a much more professional site which, used at its fullest potential, can bring you lots of benefits. The advantages of LinkedIn are bigger than those of Facebook, because this site has a professional approach: its sections offer you the possibility to create an online CV. Statistically, LinkedIn has reached almost 100 million users, and a big part of them use it to find a job or to interact with business professionals.

Profile section – offers the possibility to create an online CV.
• In the Summary subsection you can provide information relevant to describing your career evolution, your strong points or knowledge that could be an added value for a company, and the main achievements in your jobs; in a few words, what makes you unique compared with the other candidates.

• Specialties says many things about what you're good at: „praise yourself as much as possible".

• Experience describes clearly and concisely the responsibilities you had at all your jobs, including volunteering experiences. The biggest plus that LinkedIn offers is the possibility to ask for recommendations. An excellent profile with recommendations cannot fail to draw the recruiters' attention. My advice is to ask for as many and as varied recommendations as possible, in order to have valid feedback from the persons you have interacted with.

• The Education subsection is about formal education. Mention the last degrees you have graduated: master's or bachelor's. The last part of this section refers to your interests and other personal information.

Groups section – join all the groups of interest to you. The purpose of this section is better defined than on Facebook, because you can access different opinions and ideas of professionals in the domain through the discussions you can start as a member of the group.

Jobs section – of maximum utility for those interested in a new job, this is one of the most important resources that LinkedIn provides to its users.

Rereading the article, I realized I did not make a very relevant observation: when you want to add a new person to your contact list, personalize the invitation and explain the reason why you are sending it.

The increased impact these two social networks have started to have is demonstrated by the new users joining Facebook or LinkedIn every day. Compared with 2011, LinkedIn has registered a growth of 4 million users and Facebook of more than 300 million new users, because more and more people understand the benefits of these two websites in providing new career or business opportunities.

For the ones who don’t have yet a LinkedIn account, I recommend not to procrastinate and to make one fast, and for the others who already have one, update it and become more and more visible.

Good Luck!


The Core Data API, or stack, can be decomposed into the following three components: the Persistent Store Coordinator (NSPersistentStoreCoordinator), the Managed Object Model (NSManagedObjectModel) and the Managed Object Context (NSManagedObjectContext). All three parts work together to allow storing and retrieving managed objects (NSManagedObject).

Managed Object Context (NSManagedObjectContext)
NSManagedObjectContext is the object we access when we want to save to disk, read data into memory, create new objects, or delete them. It sits on top of the stack, and all managed objects can exist only within it. It is also not thread safe; therefore each thread that wants to use it for data access needs its own context instance. It is important to keep in mind that, like the UI, an NSManagedObjectContext should be accessed only on the thread that created it.

Persistent Store Coordinator
The Persistent Store Coordinator sits at the bottom of the Core Data stack and is responsible for persisting the data into its repository. The repository can be stored on disk or in memory. NSPersistentStoreCoordinator is not thread safe. Three types of repositories are included with the Core Data API: SQLite, XML and binary. All of them can be serialized to disk or kept in memory (the latter only as a temporary store).

XML and binary are atomic stores (the entire data file is rewritten on every save). Although atomic stores have their advantages, they do not scale as well as the SQLite store. They are also fully loaded into memory and have a larger memory footprint than the SQLite store.

SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. It is the most widely deployed SQL database engine in the world. By using a relational database as the persistent store, we don't need to load the entire data set in memory, and our database can scale to a very large size. SQLite itself has been tested with data sets measured in terabytes. The data on disk is efficiently organized, and only the data we are using at the moment is loaded into memory. Because we have a database instead of a flat file, we have access to many performance-tuning options. We can also control how objects are loaded into memory.

For most scenarios Core Data provides transparent object access for this store, but in order to write efficient, scalable applications based on Core Data we need to know how the framework works; otherwise we can face some unpleasant situations.

Core Data is an object graph and persistence framework provided by Apple. It is a relatively small but very robust framework; it provides solutions for many general problems and fits perfectly with Cocoa and other APIs provided by Apple. (In the MVC pattern, Core Data takes the place of the Model.)

programming

Core Data – Under the hood

Zoltán, Pap-Dá[email protected]

Software Engineer @ Macadamian


In the following sections we will pick some Core Data specific behaviors and properties and investigate their impact on application performance.

Faulting
Faulting is one of the most sensitive topics regarding Core Data. It happens transparently; therefore simple applications can be developed without knowing about it. To demonstrate the impact of faulting on Core Data application performance I'll use a simple example. (This is not a "real-life" Core Data example; using Core Data's power, it could be implemented more efficiently. The idea behind it is to demonstrate the basic functionality.)

Consider an empty model, and create a single entity (Person) with three attributes (age, gender, name) and no relations. Consider a store with 5000 instances of this entity.

We want to iterate over all objects and check an attribute (of course we could make it more efficient by defining a predicate, but we want to ensure that all objects are loaded in memory):

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];

NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:context];

[fetchRequest setEntity:entity];

NSArray *result = [context executeFetchRequest:fetchRequest error:&error];
// execution time: 0.552 s

for (Person *person in result) {
    if ([person.age intValue] < 25)
        value++;
}
// execution time: 0.083 s

We can see that we can inspect all objects in a decent time. Now we want to extend the model with another entity and a relation between them.

We create the Car entity with three attributes (engine, manualTransmission, name), and a relation between Person and Car entities.

We use a store which contains 5000 Person objects, and every Person object has a Car (10000 objects in total). We also want to extend the previous query, but now, instead of inspecting an attribute of the target object, we want to follow the relation and inspect a related object's property.

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];

NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:context];

[fetchRequest setEntity:entity];

NSArray *result = [context executeFetchRequest:fetchRequest error:&error];
// execution time: 0.351 seconds

for (Person *person in result) {
    if (person.car.manualTransmission) {
        value += [person.age intValue];
    }
}
// execution time: 21.201 seconds

We can observe a drastic performance degradation in this case. But what happened?

If we look closer at a Person object in the debugger, we can see that it is only a fault (a placeholder object).

Let's check what's happening when we are fetching objects.

Fetching objects
To optimize memory usage, Core Data uses lazy loading when fetching data. "Fetching" is the term for resolving NSManagedObjects from the repository. When we use the SQLite store, after executing a fetch request it is quite possible that an object we think is in memory is actually on disk and needs to be loaded into memory. Similarly, objects that we think we are done with may actually still sit in a cache.

When an NSFetchRequest is executed, we retrieve an array of NSManagedObjects, but behind those objects, in memory, we have only placeholder objects (faults). The only attribute loaded into memory is the NSManagedObjectID. When we access a field, the fault is fired, and the object is fully loaded into memory.

Let’s have a closer look.

A. After executing the NSFetchRequest, the result is an array of faulted NSManagedObjects.

B. When we begin to iterate over the array and access a property of the first object, a fault is fired automatically to the lower levels of the stack.

C. Core Data looks in its internal cache for the specific object. Because it is not in the cache, it loads it from the repository.

D. With a single access to the store, the cache is filled with data, and the requested object is fully loaded in memory. The property value can be used.

E. Iterating through the other values will be quick, because when their faults are fired, the objects are loaded from the internal cache.

Because single database accesses are not efficient, data is loaded in chunks. This behavior can be fine-tuned by setting the NSFetchRequest's fetchBatchSize property. The default value is 0, which corresponds to ∞ (all items will be loaded into the cache). For long result lists we could consider changing the batch size, which results in lower memory consumption at the beginning but, as the faults are fired, more database accesses.

Therefore we need to consider every case when we want to change this default behavior. We can also turn off faulting (with NSFetchRequest's setReturnsObjectsAsFaults:NO) before executing the request. In this way all properties are loaded into memory after executing the request, and faults will cause cache hits.

By introducing faulting, Core Data tries to optimize memory consumption. Faults are fired only when properties are accessed. Here we need to notice that not all properties and methods of NSManagedObject cause faults to be fired. As we saw previously, the object ID is loaded in memory; therefore no fault is fired when that property is accessed. There is also a long list of NSManagedObject methods which do not cause faults to be fired: isEqual:, hash, superclass, class, self, zone, isProxy, isKindOfClass:, isMemberOfClass:,


conformsToProtocol:, respondsToSelector:, retain, release, autorelease, retainCount, description, managedObjectContext, entity, objectID, isInserted, isUpdated, isDeleted, isFault

Fetching and relations
Previously we saw what happens when we work with objects without relations. The next step up in the scale of loading data is to fetch the relationships while loading the targeted entities. This does not fetch them fully formed, but as faults. This step can have a significant impact on the performance of a Core Data application.

We saw in the second example that the performance was ruined. The reason is that faulted relations are resolved one by one. While accessing the first faulted Person object loads all 5000 Person objects into the cache with a single disk access, accessing a faulted Car object (referred to by the relation) brings into the cache only that respective object. Therefore, in the second case, 5001 disk accesses are used instead of the single disk access of the first case. 5001 disk accesses instead of 1: that's why the performance was ruined.
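The cost difference can be simulated in a few lines. This is a language-agnostic sketch (written here in JavaScript for brevity): the "disk accesses" are just counters, and the point is the N+1 pattern of lazy fault firing versus the constant cost of a batched prefetch.

```javascript
// Simulated stores: 5000 persons, each with one related car.
const personStore = Array.from({ length: 5000 }, (_, i) => ({ id: i, carId: i }));
const carStore = Array.from({ length: 5000 }, (_, i) => ({ id: i, manual: true }));
let diskAccesses = 0;

const fetchPersons = () => { diskAccesses++; return personStore; }; // one batched read
const faultCar = id => { diskAccesses++; return carStore[id]; };    // one read per fault
const prefetchCars = () => { diskAccesses++; return carStore; };    // one batched read

// Lazy faulting: 1 access for all persons + 1 per related car.
diskAccesses = 0;
let persons = fetchPersons();
persons.forEach(p => faultCar(p.carId));
const lazyCost = diskAccesses;       // 5001 accesses

// Prefetching the relation: persons and cars each loaded in one batch.
diskAccesses = 0;
persons = fetchPersons();
prefetchCars();
const prefetchCost = diskAccesses;   // 2 accesses
```

The 5001-versus-2 ratio mirrors the execution times measured above, which is why prefetching the relation recovers most of the lost performance.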

The solution to this problem is prefetching. This is a special case of batch faulting, performed after another fetch: anticipating future needs in order to avoid individual fault firing. It can be achieved simply by specifying the relations we will use on the fetched objects.

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person"
                                          inManagedObjectContext:context];
[fetchRequest setEntity:entity];
[fetchRequest setRelationshipKeyPathsForPrefetching:
                  [NSArray arrayWithObject:@"car"]];

NSArray *result = [context executeFetchRequest:fetchRequest
                                         error:&error];
// execution time 4.006 seconds

for (Person *person in result) {
    if (person.car.manualTransmission) {
        value += [person.age intValue];
    }
}
// execution time 0.974 seconds

This trick solves our problem for the moment, but it is fragile: because we are hardcoding relation names, we need to maintain the code as the schema changes.

Conclusions

As we saw, firing faults individually is one of the most common causes of poor performance in Core Data applications. Faults are a double-edged sword that can make great improvements to the speed and performance of our applications or can drag the performance down to the depths of the unusable. We need to know this behavior of Core Data, keep a balance between the number of disk accesses and the memory consumption, and fetch only the data we need, when we need it. We also need to keep this in mind when we create the schema of our model, organizing data in an efficient way so that fewer storage accesses are needed to load it into memory.

When fetching data, if we fetch too little, our application will feel unresponsive because of the big number of disk accesses. On the other hand, if we load a big amount of data into memory, we will have quick access to the loaded data, but we will bump into low-memory issues on devices (don't forget that the iPad 2 has 512MB of internal memory). We also need to be aware that, because no memory virtualization is provided by iOS, Core Data does some clever housekeeping and frees up objects from the cache when they are not used. This can cause disk accesses for objects that were previously in memory.

In order to optimize our Core Data applications we could use the following tricks:

• Warming up the cache. We can preload the data into the cache to be sure it will be available when it is needed. When the application starts, in a background thread, we fetch data that we know will be used; this way, when it is needed, it will already be in the cache. In this case it doesn't matter if we are using more threads, because all fetching operations are directed to the NSPersistentStoreCoordinator, which has a single cache.

• Use disk access efficiently. Considering that disk access is much slower than memory access, we can reduce the number of disk accesses to optimize the application. We saw how we can do this when fetching data. The same problem extends to saving data: it is more efficient to save in batches instead of saving every object separately. Deleting objects is also a sensitive topic which needs special attention: even if we are deleting a single object, this could imply cascaded deletion of other objects, based on the relations between them.


5 Java practices that I use

Implement equals, hashCode and toString methods inherited from java.lang.Object

All 3 methods have been part of java.lang.Object since JDK 1.0. They can be really useful when used in your classes, as follows:

Object#equals(Object obj): determines if the current object is equal to the object supplied as parameter. The implementation inherited from Object#equals verifies only the equality of references (this == obj), but this covers only a subset of cases. Usually, a full-blown implementation of equals checks:

a. If (obj == null), return false;
b. If (this == obj), return true;
c. If (!getClass().equals(obj.getClass())), return false;
d. Type-cast obj to the type of this;
e. Check the equality of attributes between this and obj and return true or false.

Overriding equals has benefits when equality of object state is important for a program to run properly (e.g. objects are part of collections and they need to be found during a "get" operation). Another situation is the writing of unit tests, where equality of objects is pretty often verified by assertion statements.
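The checklist above can be sketched on a hypothetical Car class (the fields are assumed for illustration, not taken from the article's repository):

```java
// Hypothetical Car class illustrating the equals() checklist above.
class Car {
    private final String type;
    private final int numberOfDoors;

    Car(String type, int numberOfDoors) {
        this.type = type;
        this.numberOfDoors = numberOfDoors;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) return false;                        // step a
        if (this == obj) return true;                         // step b
        if (!getClass().equals(obj.getClass())) return false; // step c
        Car other = (Car) obj;                                // step d
        return numberOfDoors == other.numberOfDoors           // step e
                && (type == null ? other.type == null : type.equals(other.type));
    }

    @Override
    public int hashCode() {
        // uses the same fields as equals, per the hashCode contract
        return 31 * (type == null ? 0 : type.hashCode()) + numberOfDoors;
    }
}
```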

Along with Object#equals, Object#hashCode is another method that can be useful in similar conditions as equals. The general contract for hashCode is:
a. Equals is overridden;
b. If two objects are equal, they generate the same hash code;
c. It should use the same fields as used to check equality;
d. If the fields used to implement "equals" are not changed, subsequent calls of hashCode should return the same value; still, it is not mandatory that the same value be returned on subsequent application executions.

The hash-based collections and maps are built using hash-based buckets, so operations like "get" may produce unexpected behavior if hashCode is not properly implemented. Because of statement d) above, the client code may not be able to find an object in a hash-based collection if the state of the object changes in the meantime (and that state is used by "equals"). The best way to avoid this is to use immutable objects (the state of the object will not change during its life cycle). As well, if the object is immutable, the hashCode implementation can be cached to improve performance (because the state of the object is guaranteed not to change after initialization).
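A minimal sketch of this caching idea on a hypothetical immutable type (a computed hash that happens to be 0 is simply recomputed, which keeps the caching logic trivial):

```java
// Sketch: lazily cached hashCode in an immutable class.
final class ImmutablePoint {
    private final int x;
    private final int y;
    private int cachedHash; // 0 means "not yet computed"

    ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof ImmutablePoint)) return false;
        ImmutablePoint p = (ImmutablePoint) obj;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        int h = cachedHash;
        if (h == 0) {          // compute once; safe because the state never changes
            h = 31 * x + y;
            cachedHash = h;
        }
        return h;
    }
}
```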

How many times do we look at some log files and see something like: "value is: Car@29edc073"? Yes, I know, very annoying and useless; and all this because Object#toString was not

This article presents 5 Java practices that I use while coding. It's interesting how simple things can make your life (and your fellow colleagues' lives) easier as a developer. I have no intention of creating some sort of top here; I just want to illustrate things I consider helpful.

programming

Tavi [email protected]

Development lead @ Nokia


overridden. This method is very handy when it comes to logging the state of objects. After all, seeing something like "value is: Car[type=BMW,number of doors=5,isAllWheelDrive=false]" makes more sense. One hint on using toString(): it is preferable to write "value is " + car instead of "value is " + car.toString(), because "car" is evaluated using toString anyway when it is concatenated to a string ("value is "), and this also avoids a NullPointerException in case "car" is null.
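A sketch of a toString() that yields the readable form above (the Car fields are assumed for illustration):

```java
// Hypothetical Car class whose toString() yields the readable log line quoted above.
class Car {
    private final String type;
    private final int numberOfDoors;
    private final boolean isAllWheelDrive;

    Car(String type, int numberOfDoors, boolean isAllWheelDrive) {
        this.type = type;
        this.numberOfDoors = numberOfDoors;
        this.isAllWheelDrive = isAllWheelDrive;
    }

    @Override
    public String toString() {
        return "Car[type=" + type + ",number of doors=" + numberOfDoors
                + ",isAllWheelDrive=" + isAllWheelDrive + "]";
    }
}
```

Note that "value is " + car evaluates to "value is null" when car is null, while calling car.toString() directly would throw.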

For all 3 methods above, Apache Commons Lang 3 (http://commons.apache.org/lang/) provides utilities that make developers' lives easier. See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/objectmethods/Car.java and the associated test.

Avoid deep nesting of statements

In general, I like to keep my code simple and easy to read. Of course, this doesn't always happen. As a rule of thumb, I prefer to close statements as soon as possible. By doing this, you limit the complexity of reading through nested statements. Closing a statement can be achieved by calling return, break, continue or throw. In my opinion, using:

if (a == null) {
    return;
}

is preferable to:

if (a != null) {
    // do something
}

A way to enforce this practice could be limiting the number of characters per line in your IDE; I keep mine at 120. See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/nested/Utility.java for 2 versions of the same utility, one using nested statements and the other trying to avoid them. Then choose for yourself the one that looks cleaner.
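The two styles can be sketched side by side on a hypothetical utility (not the one from the repository above):

```java
// Two versions of the same check: nesting vs. guard clause (early return).
class Utility {
    // nested style: the happy path is indented inside the condition
    static int sumNested(int[] values) {
        int sum = 0;
        if (values != null) {
            for (int v : values) {
                sum += v;
            }
        }
        return sum;
    }

    // guard-clause style: close the statement as soon as possible
    static int sumFlat(int[] values) {
        if (values == null) {
            return 0;
        }
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }
}
```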

Check validity of method's parameters

Nobody likes an infamous NPE (NullPointerException) showing up in the stack trace of his code. Me neither. There are various techniques to avoid this:

a. Use Javadoc to document your input parameters. The problem is that most people don't read Javadocs.

/** @throws NullPointerException in case argument is null */

b. Use assertions. These are Boolean statements that allow developers to verify assumptions about the code being executed. So, assertions are not for the users. When an assertion fails, most likely the library which contains the "assert" has a bug. I think the most common use of assertions is related to the design-by-contract paradigm:

• Pre-conditions: check what needs to be true when a method is called. This could be a solution for private methods, but not for public ones, since the application could run without assertions enabled, and assertions can only throw AssertionError and not a specific type. So, public methods should check the validity of their parameters explicitly, while private methods can use assertions before doing any operation.

private void divide(int a, int b) {
    assert (b != 0);
    …
}

• Post-conditions: check what needs to be true before a method returns. Here, assertions are allowed for public methods also, since the client code might be able to handle an eventual unexpected result (e.g. the null account below).

private Account createAccount() {
    Account account = null;
    // create the account
    …
    assert (account != null);
    return account;
}

• Class invariants: check what needs to be true at any point in time for an object. In this situation, any constructor or public method of a class should call the invariant before returning, to ensure the consistency of the state of the object.

public class Account {
    String name = null;

    private boolean checkNameForNull() { // class invariant
        return name != null;
    }

    public Account(String name) {
        …
        assert checkNameForNull();
    }
}

Mind that there is no mechanism to recover from an assertion being thrown, since the main purpose is to help build code that is testable and to protect against executions with a corrupted application state.

By default, assertions are disabled. To enable them, run:

java -ea MyClass
or
java -enableassertions MyClass

c. Check the validity of input parameters and throw exceptions as needed. In case the code can recover from invalid input, then you should use a checked exception; otherwise, use an unchecked exception. This is according to the Oracle guidelines from http://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html. Usually the constructors of an object can't properly instantiate it if the input parameters are invalid, so I prefer to throw IllegalArgumentException. Specifying this in the Javadoc is also a good practice.

See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/check/Dog.java and the associated test
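In that spirit, a minimal constructor-validation sketch (the field and message are illustrative, not taken from the repository above):

```java
// Hypothetical Dog class: the constructor rejects invalid input with
// IllegalArgumentException, so an instance can never exist in a bad state.
class Dog {
    private final String name;

    Dog(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name must be non-null and non-empty");
        }
        this.name = name;
    }

    String getName() {
        return name;
    }
}
```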

NOTE: checking the input parameters before assigning them may be problematic because of the so-called "window of vulnerability": in the period between a parameter check and the parameter assignment, another thread could modify the input parameter into an invalid one, making our initial check useless. To avoid this, the validity check can be done on the actual assignee (e.g. the left operand of the assignment) after the assignment was done.
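A sketch of this "copy first, then validate the copy" idea, with a hypothetical Holder class:

```java
// Sketch: validate the assignee, not the parameter. A defensive copy is taken
// first, then the copy is checked, so another thread mutating the caller's
// array after the check cannot invalidate what we actually store.
class Holder {
    private final int[] data;

    Holder(int[] input) {
        this.data = input.clone();   // copy first
        for (int v : this.data) {    // then validate the copy we keep
            if (v < 0) {
                throw new IllegalArgumentException("negative value: " + v);
            }
        }
    }

    int size() {
        return data.length;
    }
}
```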

Avoid unnecessary noise

It's always best to keep the code as clean as possible. Some developers may have good intentions in applying good practices, but the result does not really bring much benefit, rather unnecessary noise.

Comments are helpful only if they say more than the code itself. This is a tricky statement. If the code is self-explanatory then comments like below don’t have much value:

a. /** set person */
   public void setPerson(Person p) {}

b. … // read file


readFile(new File("…"));

…

As a rule of thumb, it is better to name your methods and variables as meaningfully as possible, to avoid the noise coming from comments. Let's also not forget that comments need to be maintained as part of any refactoring, which adds even more work.

The next item to discuss is the usage of this. In my opinion, it makes sense when:

a. it is used to call a constructor from another constructor of the same class, to avoid duplicating object-initialization code

public Person(String name, int age) {
    this(name);
}

b. the name of an attribute is identical to the name of a parameter

public void setPerson(Person person) {
    this.person = person;
}

Otherwise, usage of this sounds like noise to me. Below are a few counter-examples:

System.out.println(this.name);
System.out.println(this.COUNT_OF_MESSAGES);

The last item I want to discuss here is the usage of final. This makes sense to signal that a variable can be used in an inner class, or if you want to clearly state for later changes that the variable should not be re-assigned. I use the first situation more often, while the second makes more sense when defining a constant; it should not be abused in other circumstances unless really needed. See this example:

public int add(final int a, final int b) {
    final int c = a + b;
    return c;
}

This is correct, but wouldn't it be simpler to write:

public int add(int a, int b) {
    return a + b;
}

Make sure you clean up your resources

Managing resources is key to ensuring that a system runs properly, without restarts or unpredictable behavior. One of the most common mistakes in managing resources is forgetting to close streams once they are open. The sequence should be:

a. Open stream
b. Manipulate stream
c. Close stream

The last step needs to be executed no matter whether exceptions happen in the prior steps. Up to Java 6, this could be done using the finally statement. Here is how the code usually looks:

FileInputStream fis = null;
try {
    fis = new FileInputStream(new File("README"));
    // do your file manipulation here
} catch (IOException ioexp) {
    // handle exception
} finally {
    // make sure the stream gets closed
    if (fis != null) {
        fis.close();
    }
}

What could happen here is that the finally block itself throws an exception, and the code eventually needs to catch that too, which means more boilerplate for the developer to handle and yet another source of errors.

Starting with Java 7, the syntax of the try statement was extended to define a scope for resource assignments; this is called try-with-resources.

try (FileInputStream fis = new FileInputStream(new File("README"))) {
    // do your file manipulation here
}

This means that at the end of the try block, the resources which implement the AutoCloseable interface will be automatically closed. You can even specify multiple resources to be closed at the end of the same block. To support this new concept, a lot of the existing JDK classes now implement the AutoCloseable interface, including classes from java.io.*, java.nio.*, java.net.*, etc., while still maintaining backward compatibility with existing code. As well, the compiler-generated code handles null resources.

See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/resources/ResourceCleaner.java and the associated test to illustrate both situations (properly closing streams – including a JDK 1.7 example – and failing to close streams). Especially on Linux/Unix systems, failing to close streams will generate a "FileNotFoundException" with a "too many open files" message.
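For instance, a sketch with two resources in the same try (the file paths and class name are illustrative); both streams are closed automatically, in reverse order of declaration, even if the copy loop throws:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: two resources in one try-with-resources statement.
class CopyUtil {
    static void copy(String from, String to) throws IOException {
        try (FileInputStream in = new FileInputStream(from);
             FileOutputStream out = new FileOutputStream(to)) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) > 0) {
                out.write(buffer, 0, read);
            }
        } // out is closed first, then in
    }
}
```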

Another interesting concept in Java 7 is the automatic handling of exception masking. Exception masking is the situation where one exception is masked by another. It is not about client code wrapping an exception coming from a low-level API, but, for example, about the case where an exception is thrown in the try block and another exception is thrown in the finally block, so the client code mainly notices only the exception from the finally block. Java 7 handles this situation automatically (and also programmatically, via the Throwable.addSuppressed(…) method) by suppressing the second exception (the one from cleanup) so that the exception from the try block is the one reported.

It is recommended to let the compiler generate the boilerplate, since it will add the minimal code needed to handle exception suppressing, rather than writing this code ourselves (a lot of ugly boilerplate that could generate other problems).
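A small sketch of this suppression mechanism with try-with-resources (the class and messages are hypothetical):

```java
// Sketch: suppressed exceptions with try-with-resources (Java 7+).
class SuppressDemo {
    static class Failing implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("failure in close()");
        }
    }

    static Throwable[] demo() {
        try (Failing f = new Failing()) {
            throw new RuntimeException("failure in try block");
        } catch (RuntimeException e) {
            // The primary exception is the one from the try block;
            // the close() exception is attached to it, not lost.
            return e.getSuppressed();
        }
    }
}
```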

Other situations that fall under the cleaning-up-your-resources umbrella are:
a. Failing to close database connections
b. Failing to close sockets after they are no longer needed

NOTE: the code examples were tested on Ubuntu 12.04 running OpenJDK 7 and Maven 3.


void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    if (closing)
    {
        return;
    }

    Skeleton first = GetFirstSkeleton(e);
    if (first == null)
    {
        return;
    }

    ContainerSkeleton.Children.Clear();
    Dictionary<JointType, Joint> jointsDictionary =
        new Dictionary<JointType, Joint>();

    foreach (Joint joint in first.Joints)
        jointsDictionary.Add(joint.JointType, getScaledJoint(joint, e));

    drawJoints(jointsDictionary);
    drawBones(jointsDictionary);
}

Once we receive a frame from Kinect, we first check that it is not null, then extract the joints from the first detected skeleton and retain them in a dictionary.

void drawJoints(Dictionary<JointType, Joint> jointsDictionary)
{
    foreach (KeyValuePair<JointType, Joint> joint in jointsDictionary)
    {
        drawJointEllipse(30, Brushes.LightGreen, joint.Value);
    }
}

private void drawJointEllipse(int diameter, Brush color, Joint joint)
{
    Ellipse el = new Ellipse();
    el.Width = diameter;
    el.Height = diameter;
    el.Fill = color;

    Canvas.SetLeft(el, joint.Position.X - diameter / 2);
    Canvas.SetTop(el, joint.Position.Y - diameter / 2);

    main.ContainerSkeleton.Children.Add(el);
    el.Cursor = Cursors.Hand;
}

The drawJointEllipse method draws one joint of the user's body in the Canvas-type element ContainerSkeleton. Each joint is graphically illustrated by an ellipse.

void drawBones(Dictionary<JointType, Joint> jointsDictionary)
{
    Brush brush = new SolidColorBrush(Colors.LightGray);

    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.ShoulderCenter].Position.X,
                  jointsDictionary[JointType.ShoulderCenter].Position.Y),
        new Point(jointsDictionary[JointType.Head].Position.X,
                  jointsDictionary[JointType.Head].Position.Y) }, brush, 9));

In our previous issues, we introduced the code sequence involved in initializing Kinect and we built a quick Hello World type application. In what follows, we will continue the development of the application with a new functionality: displaying the entire skeleton of the user.

programming

Microsoft Kinect – A programming guide
Part II – Displaying the entire skeleton of the user

Simplex [email protected]


ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() { new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y), new Point(jointsDictionary[JointType.ShoulderLeft].Position.X, jointsDictionary[JointType.ShoulderLeft].Position.Y), new Point(jointsDictionary[JointType.ElbowLeft].Position.X, jointsDictionary[JointType.ElbowLeft].Position.Y), new Point(jointsDictionary[JointType.WristLeft].Position.X, jointsDictionary[JointType.WristLeft].Position.Y), new Point(jointsDictionary[JointType.HandLeft].Position.X, jointsDictionary[JointType.HandLeft].Position.Y)}, brush, 9));

ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() { new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y), new Point(jointsDictionary[JointType.ShoulderRight].Position.X, jointsDictionary[JointType.ShoulderRight].Position.Y), new Point(jointsDictionary[JointType.ElbowRight].Position.X, jointsDictionary[JointType.ElbowRight].Position.Y), new Point(jointsDictionary[JointType.WristRight].Position.X, jointsDictionary[JointType.WristRight].Position.Y), new Point(jointsDictionary[JointType.HandRight].Position.X, jointsDictionary[JointType.HandRight].Position.Y)}, brush, 9));

ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() { new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y), new Point(jointsDictionary[JointType.Spine].Position.X, jointsDictionary[JointType.Spine].Position.Y), new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y)}, brush, 9));

ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() { new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y), new Point(jointsDictionary[JointType.HipLeft].Position.X, jointsDictionary[JointType.HipLeft].Position.Y), new Point(jointsDictionary[JointType.KneeLeft].Position.X, jointsDictionary[JointType.KneeLeft].Position.Y), new Point(jointsDictionary[JointType.AnkleLeft].Position.X, jointsDictionary[JointType.AnkleLeft].Position.Y), new Point(jointsDictionary[JointType.FootLeft].Position.X, jointsDictionary[JointType.FootLeft].Position.Y)}, brush, 9));

ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() { new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y), new Point(jointsDictionary[JointType.HipRight].Position.X, jointsDictionary[JointType.HipRight].Position.Y), new Point(jointsDictionary[JointType.KneeRight].Position.X, jointsDictionary[JointType.KneeRight].Position.Y), new Point(jointsDictionary[JointType.AnkleRight].Position.X, jointsDictionary[JointType.AnkleRight].Position.Y), new Point(jointsDictionary[JointType.FootRight].Position.X, jointsDictionary[JointType.FootRight].Position.Y)}, brush, 9)); }

The drawBones method draws a segment for each of the major bones of the user's skeleton. The segments are added to the container that holds the ellipses representing the previously identified joints.

private Polyline getBodySegment(PointCollection points, Brush brush, int thickness)
{
    Polyline polyline = new Polyline();
    polyline.Points = points;
    polyline.Stroke = brush;
    polyline.StrokeThickness = thickness;
    return polyline;
}

private Joint getScaledJoint(Joint joint, AllFramesReadyEventArgs e)
{
    var posX = (float)GetCameraPoint(joint, e).X;
    var posY = (float)GetCameraPoint(joint, e).Y;

    joint.Position = new Microsoft.Kinect.SkeletonPoint
    {
        X = posX,
        Y = posY,
        Z = joint.Position.Z
    };

    return joint;
}

One way to display the joints on screen was to use the ScaleTo method. However, ScaleTo tends to distort the user's body so that it occupies all available space on the display. To preserve anatomical proportions, we manually convert the coordinates of each joint from the metric system into pixels through the GetCameraPoint function.

Point GetCameraPoint(Joint joint, AllFramesReadyEventArgs e)
{
    using (DepthImageFrame depth = e.OpenDepthImageFrame())
    {
        if (depth == null || sensor == null)
        {
            return new Point(0, 0);
        }

        DepthImagePoint jointDepthPoint =
            depth.MapFromSkeletonPoint(joint.Position);

        ColorImagePoint jointColorPoint = depth.MapToColorImagePoint(
            jointDepthPoint.X, jointDepthPoint.Y,
            ColorImageFormat.RgbResolution640x480Fps30);

        return new Point(
            (int)(ContainerSkeleton.ActualWidth * jointColorPoint.X / 640.0),
            (int)(ContainerSkeleton.ActualHeight * jointColorPoint.Y / 480.0));
    }
}

Conclusions

Displaying the complete skeleton is especially useful for debugging (we can easily see how precisely the joints are positioned on the user's body), as well as for some graphics applications and games.


A simple scenario: a student or recent graduate is hired for a position of "Software Testing Engineer", generically called QA. In his first days he becomes familiar with the company, his colleagues and the application he is about to test. Moreover, it is suggested that he study for the ISTQB Foundation Level certification, for a better understanding of the testing process.

Our QA studies conscientiously and he finally obtains the certification diploma for the first level of software testing.

At one point some concepts are forgotten, as we are only human, and he calls on his friend Google to refresh his memory. Inevitably he finds several definitions for the same concept, similar in explanation but with slightly different interpretations (e.g. "Test Automation"). The question that follows is: which is the best definition? Is there the "single source of truth" that we all dream of?

Yes, there may be a few who choose the route of documented studies or published scientific articles on this topic, but most will go for a quick search on the internet. In the following example I will try to show that automated testing is more than just running tests, and that structuring an automated process is laborious work which involves many factors, not only what we find in a simple search on the net.

ISTQB is a certification recognized all over the world, and yet it is not the only one. This type of certification is not considered important by everybody. However, through it, a QA engineer and a software testing researcher can speak the same language.

Software testing tends to be an exact science, well studied and defined. It is an area where a concept receives a definition only after a laborious process of evaluation and feedback, carried out by academics and based on real data.

To give a concrete example: a simple Google search on "Test Automation" leads to a Wikipedia article [1] where you will find the following definition:

"In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process."

Instead, if you search for the same concept in studies and research articles, you will stumble upon Parasuraman's principle [2], which defines automation in the following way:

"We propose that automation can be applied to four broad classes of functions: a) information acquisition; b) information analysis; c) decision and action selection; and d) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. […] We therefore use a definition that emphasizes human-machine comparison and define automation as a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator [8]."

It can be seen that the Wikipedia version refers only to the running of test scenarios, which, in terms of the academic principle, is just an intermediate automation level; full automation is defined by the absence of any human intervention.

I learned in my early days as a QA that on a scale of 1 to 10 (1 – the system does not offer any assistance, all decisions and actions are human; 10 – the system decides everything and acts autonomously, ignoring human interference), Test Automation sits at level 5, a level involving only test execution, any other decision being taken by man (test data definition, test selection, results analysis, etc.).

There are many automation models, involving more or less human intervention, and the example offered is only a starting point for a better understanding of what each team wants when using automated testing. In the end, an efficient and successful software automation process comes down to factors like productivity, cost and quality.

In conclusion, I would just like to mention that everyone chooses his documentation and sources of information from wherever he considers appropriate, and it is ultimately irrelevant whether they are online or offline. What is truly relevant is the information that remains and the accuracy of that information. Software testing, especially the automated kind, is not part of the creative arts, where each of us can come up with his own definition of test automation or testing or any other term. And speaking of art, automation can become an art, to the extent that the processes used, scientifically defined and demonstrated, will receive the admiration of people in software and beyond.

Bibliography
1. http://en.wikipedia.org/wiki/Test_automation
2. Raja Parasuraman, Thomas B. Sheridan, Fellow, IEEE, and Christopher D. Wickens – "A Model for Types and Levels of Human Interaction with Automation", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 30, no. 3, May 2000

The field of software testing has become increasingly dynamic: new test methods are introduced, and more concepts are refined or reinvented. The famous phrase "anyone can test" is becoming increasingly difficult to confirm, due to the high technical level and the technologies used in application development.

Testing - an exact science

testing

Andrei Conț[email protected]

Principal QA @ Betfair

Co-Founder Romanian Testing Community


programming

Modernization and scalability have been achieved by implementing new features like web sockets, UDP protocol support, and support for the new "await / async" asynchronous programming model.

Let's start with the configuration file. Does the scenario sound familiar in which you create a service endpoint, expose it through HTTP or TCP, and when you add a reference in the client application you find a ton of settings for a binding? Fortunately this has changed and things will look different.

This is how a default binding configuration looks in a client application in WCF 4.0.

And this is how it will look in WCF 4.5!

Beautiful, no?

Another aspect of simplicity is that some default settings and values were changed so that services are "production ready". In this regard, reader quotas, throttles and timeouts were modified in order to avoid the many exceptions that the previous default settings generated.

New features in WCF 4.5 tend to fall into two categories: simplicity and scalability. The biggest problem has always been WCF configuration, so what they wanted from the new version was an easier and simpler configuration. We all know that it is not easy to configure a WCF service; however, once set up and running, it becomes a great advantage in any distributed application system.

What's new in Windows Communication Foundation 4.5

Leonard [email protected]

System Architect @ Arobs


A newly introduced feature is support for multiple authentication modes per endpoint. Previously this was only possible by creating multiple endpoints; now a single endpoint can be configured to support, for example, Windows and Basic authentication simultaneously.

Defining an endpoint and a service is easier thanks to the new IntelliSense support in the configuration file, which now gives us a list of the contracts and class names that can be used.

If an item has not been configured correctly, the warnings window will signal this. This feature is called "validation over configuration".

WCF 4.5 comes with a new binding, basicHttpsBinding. It is similar to basicHttpBinding and has the following default settings: SecurityMode = Transport and ClientCredentialType = None.

Settings can now also be done in code, through a new type called ServiceConfiguration. If the service defines a static method void Configure(ServiceConfiguration config), it will be called by default. Through it, new behaviors, new endpoints and other common settings can be added. Configuration can also be loaded from a file in a different location on the disk; before, this was possible only through workarounds that brought overhead and maintenance difficulties. This feature is called code-based configuration.

All of these features were introduced for simplicity and ease of working with WCF.

Next we will talk about what has brought modernization and additions to the scalability of this framework.

I will start with the new way of writing asynchronous code: the task-based model. This feature allows you to write asynchronous code on both the server and the client, making all calls asynchronously. As a principle, any method that returns Task or Task<T> is an asynchronous method on the server. These methods can be invoked using the new keywords await and async. Consider the following example:

public static async Task CopyToAsync(
    Stream source, Stream destination)
{
    byte[] buffer = new byte[0x1000];
    int numRead;
    while ((numRead = await source.ReadAsync(
        buffer, 0, buffer.Length)) > 0)
    {
        await destination.WriteAsync(buffer, 0, numRead);
    }
}

Basically we have an asynchronous copy between streams. The method needs the async modifier because the await operator is used inside it. In this way the compiler knows that the method is asynchronous and, while the awaited operation executes, it is free to continue with any other instructions that remain to be executed. If we use await without async in the method signature, we get a compilation error.

For comparison, here is the same example written with APM (the Asynchronous Programming Model) that we have been used to so far; the code is listed below.

It is clear that things have been greatly simplified. Using the new task-based model, writing asynchronous methods is almost as easy as writing synchronous ones: we just make the return type of the method Task or Task<T> and add the await and async keywords.
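To illustrate what this looks like on a service, here is a sketch of a task-based contract (IQuoteService and GetQuoteAsync are illustrative names, not from the article):

```csharp
// Any operation returning Task or Task<T> is asynchronous in WCF 4.5.
[ServiceContract]
public interface IQuoteService
{
    [OperationContract]
    Task<decimal> GetQuoteAsync(string symbol);
}

// On the client, the generated proxy can then be awaited directly:
// decimal price = await client.GetQuoteAsync("MSFT");
```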

Returning to bindings, UdpBinding was also introduced. It is useful if you want to transmit data frequently to multiple receivers, as in broadcast and multicast scenarios, where delivering every single message is not as important as the content. As a limitation, it does not support WAS.

Another very important feature is support for WebSockets. They allow bi-directional, full-duplex communication in an easier way than before.
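A configuration sketch, assuming a duplex service (the names Foo.StockService and Foo.IStockService are hypothetical): the new netHttpBinding introduced in WCF 4.5 upgrades the connection to WebSockets when the contract is duplex.

```xml
<!-- Sketch: an endpoint over netHttpBinding, which uses
     WebSockets for duplex contracts -->
<system.serviceModel>
  <services>
    <service name="Foo.StockService">
      <endpoint address=""
                binding="netHttpBinding"
                contract="Foo.IStockService" />
    </service>
  </services>
</system.serviceModel>
```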

In conclusion, WCF 4.5 attempts to simplify and modernize the development platform through "production ready" default settings, IntelliSense and validation for configuration files, support for WebSockets and UDP, the task-based model and other features. My opinion is that these features are a welcome improvement over the previous versions, and I cannot wait to put them into practice.

public static IAsyncResult BeginCopyTo(
    Stream source, Stream destination)
{
    var tcs = new TaskCompletionSource<bool>();

    byte[] buffer = new byte[0x1000];
    Action<IAsyncResult> readWriteLoop = null;
    readWriteLoop = iar =>
    {
        try
        {
            for (bool isRead = iar == null; ; isRead = !isRead)
            {
                switch (isRead)
                {
                    case true:
                        iar = source.BeginRead(
                            buffer, 0, buffer.Length,
                            readResult =>
                            {
                                if (readResult.CompletedSynchronously)
                                    return;
                                readWriteLoop(readResult);
                            }, null);
                        if (!iar.CompletedSynchronously)
                            return;
                        break;

                    case false:
                        int numRead = source.EndRead(iar);
                        if (numRead == 0)
                        {
                            tcs.TrySetResult(true);
                            return;
                        }
                        iar = destination.BeginWrite(
                            buffer, 0, numRead,
                            writeResult =>
                            {
                                try
                                {
                                    if (writeResult.CompletedSynchronously)
                                        return;
                                    destination.EndWrite(writeResult);
                                    readWriteLoop(null);
                                }
                                catch (Exception e)
                                {
                                    tcs.TrySetException(e);
                                }
                            }, null);
                        if (!iar.CompletedSynchronously)
                            return;
                        destination.EndWrite(iar);
                        break;
                }
            }
        }
        catch (Exception e)
        {
            tcs.TrySetException(e);
        }
    };
    readWriteLoop(null);

    return tcs.Task;
}

public static void EndCopyTo(IAsyncResult asyncResult)
{
    ((Task)asyncResult).Wait();
}



TODAY SOFTWARE MAGAZINE

technologies

Service Bus Topics in Windows Azure

Radu
[email protected]

Senior Software Engineer @ iQuest

The first CTP of Windows Azure was announced in 2008 and the commercial version was launched two years later. Since then, each new version of Windows Azure has brought new functionality. If in 2010 the web role and the worker role were the main strengths, Windows Azure is now far more complex and allows us to do things we could not have imagined.

Windows Azure Service Bus provides the infrastructure to communicate in a distributed system without any problems. All issues related to scalability, security and load balancing are resolved by it. A very important property of this infrastructure is the guarantee that no message will be lost: each one will reach its destination once it has been received by the Service Bus.

When we talk about the Service Bus, we have to mention three very important components:
• Service Bus Queues
• Service Bus Topics
• Service Bus Relay

Service Bus Queues, which we will talk about in this article, is a service that allows us to work with message queues within the Service Bus. Service Bus Topics is quite similar to Service Bus Queues; the main difference is that a message can reach several listeners (subscribers), not just one. Service Bus Relay allows us to integrate and expose WCF services hosted on on-premises servers through Windows Azure.

Service Bus Queues allows communication between two modules via message queues. This communication is completely asynchronous and the modules are not required to wait for one another. Any message that was successfully sent to the message queue will not be lost; this is guaranteed by Windows Azure. Even when there is no message consumer, messages will be retained by the Service Bus Queue.

The number of messages in a queue is not limited. There is a limit only on the maximum capacity of a queue, which is 5GB. Each message can have a maximum size of 256KB. Inside it we can add information in the form of (key, value) pairs or any serializable object. We can even add data streams, as long as their size does not exceed the maximum capacity of a message.


Each message added to the Service Bus Queue has a TTL (time-to-live); the maximum accepted value is infinite. When the TTL expires, the message is automatically removed from the queue. Besides this, we also have a property through which we can specify when a message becomes active and can be consumed. In this way we can send to the queue a message that can be processed only after the date that we have defined.
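Both properties can be set per message before sending. A sketch (queueClient and the payload are assumed to exist, as in the later examples):

```csharp
BrokeredMessage message = new BrokeredMessage("payload");

// The message is removed automatically if nobody consumes it
// within 2 hours.
message.TimeToLive = TimeSpan.FromHours(2);

// The message is enqueued now but becomes visible to consumers
// only 30 minutes from now.
message.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(30);

queueClient.Send(message);
```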

Service Bus Queues allows us to have an unlimited number of producers and message consumers. The service also includes a mechanism to detect duplicates. In terms of security, access to a Service Bus message queue can be controlled through:
• ACS claims
• Role-based access
• Identity provider federation

Currently, Shared Access Signatures cannot be used to access a Service Bus queue or any other resource in the Service Bus.

To use a queue or any other service available through the Service Bus, it is necessary to create a namespace. A namespace is a unique name through which we can identify the resources and services we have available in the Service Bus. It can be created on the Windows Azure portal, under "Service Bus, Access Control & Caching". Once you have a namespace, a queue can be created in various ways, from the portal to configuration files or code. We can give a queue any name as long as it respects the following rules:
• it contains only numbers, uppercase and lowercase letters
• it may contain the characters '-', '_' and '.'
• it does not exceed 256 characters

Applications that want to access Service Bus Queues need to know three pieces of information:
• Endpoint – the URL through which we can access services from the Service Bus; this address generally has the following form: sb://[namespace].servicebus.windows.net/
• SharedSecretIssuer – the name of the issuer (the namespace owner)
• SharedSecretValue – the secret key with which a namespace can be accessed; it can be regenerated at any time through the portal

All this information, and the location where it has to be added in a project, can be found on the Windows Azure portal, which can automatically generate the XML that has to be added; all we have to do is copy/paste it. The following examples will be in C#, but this does not mean you cannot use Service Bus Queues from PHP, Node.js or any other programming language. The service is exposed through REST, and any library you use is actually a wrapper over it.

Before creating a queue it is advisable to check whether it already exists; if we try to create a queue with a name that already exists, an exception will be thrown. NamespaceManager allows us to access any service within the Service Bus.

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(
        CloudConfigurationManager.GetSetting(
            "ServiceBusConnectionString"));

QueueDescription queueDescription =
    new QueueDescription("FooQueue");
queueDescription.DefaultMessageTimeToLive =
    new TimeSpan(0, 10, 30);

if (!namespaceManager.QueueExists("FooQueue"))
{
    namespaceManager.CreateQueue(queueDescription);
    // or: namespaceManager.CreateQueue("FooQueue");
}

As you can see, a queue can be created by specifying only its name. If we want to set parameters to values different from the defaults, we can use a QueueDescription.

Any message we add to the queue is a BrokeredMessage. In this object we put all the information that must reach the consumer. A message consists of two parts:
• Header – contains the message configuration and all the properties that we want to add
• Body – the message content; it can be a serialized object or a stream

Operations on a queue can take place both synchronously and asynchronously. The mechanism for adding a message to the queue is quite simple. When we want to extract a message from the queue, we have two mechanisms:
• ReceiveAndDelete – a message is taken from the queue and immediately deleted
• PeekAndLock – a message is taken from the queue and locked; it is deleted only when the consumer says so. If within a predefined time interval the consumer does not confirm that the message has been processed, the Service Bus Queue makes the message available again on the queue (the default is 60 seconds).

All these operations can be performed through a QueueClient object:

BrokeredMessage message = new BrokeredMessage();
message.Properties["key"] = "value";

QueueClient queueClient =
    QueueClient.CreateFromConnectionString(
        connectionString, "FooQueue");

queueClient.Send(message);

// ReceiveAndDelete: the message is removed as soon as it is
// received (the client must be created with
// ReceiveMode.ReceiveAndDelete)
BrokeredMessage messageReceiveAndDelete = queueClient.Receive();

// PeekAndLock (the default mode): the message stays locked until
// Complete() or Abandon() is called
BrokeredMessage messagePeekAndLock = queueClient.Receive();
try
{
    // ... process the message
    messagePeekAndLock.Complete();
}
catch
{
    messagePeekAndLock.Abandon();
}

The message queue operations are transactional, and both the producer and the consumer may work with batches of messages. If you reach a point where you add a lot of data to a message, or want to add the contents of a stream to a message, think twice whether you really need this information in the message and whether you could put it in another place that both producer and consumer can access (e.g. Windows Azure Tables, Windows Azure Blobs).

byte[] currentMessageContent = new byte[100];
// ...
BrokeredMessage messageToSend = new BrokeredMessage(
    new MemoryStream(currentMessageContent));

queueClient.Send(messageToSend);

BrokeredMessage messageToReceive = queueClient.Receive();

Stream stream = messageToReceive.GetBody<Stream>();

After calling the Receive() method, we should check whether the returned object is null; the thread remains blocked at the receive call only until a message from the queue becomes available or the timeout expires. Each message added to the queue can have a unique session id (SessionId) attached. Through this session we can configure the consumer to receive messages only from that session id. This is a way to send data to a consumer that would normally not fit in a single message. The producer must only set the SessionId property of the message, and the consumer must use a MessageSession, an instance that can be obtained by calling the AcceptMessageSession([sessionId]) method of the QueueClient.

MemoryStream finalStream = new MemoryStream();
MessageSession messageSession =
    queueClient.AcceptMessageSession(mySessionId);

while (true)
{
    BrokeredMessage message =
        messageSession.Receive(TimeSpan.FromSeconds(20));

    if (message != null)
    {
        // ... append the body to finalStream
        message.Complete();
        continue;
    }
    break;
}

A message queue from Service Bus Queues doesn't contain only one queue; we can say that a queue can contain several queues. All messages that could not be processed (e.g. messages that have expired, or messages that arrived when the queue had reached its maximum capacity) end up in a "dead letter" queue. This queue can be accessed and is very useful when we are debugging or when we need to log messages that could not be processed. In this queue we can also have messages that were not added by the system; such messages can be added from code through the DeadLetter method of BrokeredMessage.

There is a scenario where this functionality can be very useful. Imagine that a consumer tries to consume a message that cannot be processed. The consumer will try forever to process this message. This type of message is called a "poison message", and a poorly designed system can be blocked because of it.

In the following example we try to process a message three times. If processing keeps failing, we mark the message as a dead letter.

while (true)
{
    BrokeredMessage message = queueClient.Receive();

    if (message == null)
    {
        Thread.Sleep(1000);
        continue;
    }
    try
    {
        // ... process the message
        message.Complete();
    }
    catch (Exception ex)
    {
        if (message.DeliveryCount > 3)
        {
            message.DeadLetter();
        }
        else
        {
            message.Abandon();
        }
    }
}

A reference to the dead letter queue can be obtained as follows:

QueueClient deadLetterQueue =
    QueueClient.CreateFromConnectionString(
        myFooConnectionString,
        QueueClient.FormatDeadLetterPath("FooQueue"));

A special feature is the integration of Service Bus Queues with WCF. A WCF service can be configured to use message queues to communicate. In this way our service becomes flexible and scalable, and most load-balancing problems practically solve themselves. If the service is down, the consumer can still send messages without errors and without losing them. In some cases this can be very helpful. We can also expose in this way WCF services from private networks without security issues.

From some points of view, Service Bus Queues are quite similar to Windows Azure Queues; we can imagine Service Bus Queues as Windows Azure Queues on steroids. When we need features such as guaranteed message order, transactional operations, a TTL greater than 7 days or integration with WCF, Service Bus Queues is our solution. But there are cases when Windows Azure Queues is the much better solution. That's why you should compare the two types of message queues from Windows Azure and see which is best for your application.

Finally, Service Bus Queues is extremely powerful and has 4 strengths that should not be overlooked:
• Loose coupling (the consumer and the producer do not have to know each other)
• Load leveling (the consumer doesn't have to run 24/7)
• Load balancing (we are able to register consumers and producers very easily)
• Temporal decoupling (the producer or the consumer can be offline)


Malware on Android: statistics, behavior, identification and neutralization

Unfortunately, Android doesn't escape the attention of hackers and cyber criminals, who are just waiting for new technologies to become popular in order to obtain illegal informational, technological and financial gains. So what is the present situation of Android?

According to a study performed by TrendLabs, during the first 3 months of 2012 some 5,000 malware applications were found, and in April almost 15,000. The number is expected to reach nearly 130,000 mobile malware samples by the end of the year. It is worrying that only 20% of mobile devices have a dedicated security application installed, and that 17 malicious applications from Google Play had been downloaded 700,000 times by the time they were eliminated.

The behavioral classification of Android malware shows that at the top are Premium Service Abusers with 48%, followed by Adware with 22% and, at a short distance, data theft with 21%. Spying tools currently occupy only 4%, which suggests that, for now, they are used for targeted purposes. An example of a malicious application is SMSZombie, not very popular in Europe and the U.S. because it targeted a service predominantly used in China – the SMS Mobile Payment Service. The application infected 500,000 devices and sent cash to the accounts of some online game profiles.

What options does a public spy application have?

For about $16 per month or less, there are extremely complex and well-developed spy applications for Android phones (note, not only Android) that publicly address parents who want to be closer to their children and managers of companies that suspect industrial or government espionage.

The application is controlled through SMS and, when possible, the Internet. You can record calls and view the call history, read incoming and sent SMS, define SMS keywords, record sound from over 10 meters around, read emails from Gmail or any service explicitly defined in the default Android email application, access all contacts stored on the phone and all calendar activities, and view photos, videos and music; there is access to the browser's history, location monitoring with or without GPS, and the list goes on.

I repeat, this is only what a public espionage application can do. Wifi network monitoring, obtaining control over networks, the installation of 0-day exploits on the computers in the network and Bluetooth monitoring would complete the list.

Andrei Avădănei
[email protected]

Founder and CEO DefCamp
CEO worldit.info

Android, the operating system for smartphones, the most popular mobile OS in the U.S. and perhaps the most popular on the planet, enjoys increasing global attention due to the diversity of the gadgets on which it is installed.

What options could a botnet for Android have?

Although situations of this type have been quite rare until now, we can always expect a real botnet popularized through Google Play plus something magic. Imagine the following scenario: a new app that apparently solves a general public need is uploaded to Google Play. After being downloaded, it automatically gives likes, +1s and good scores to applications on the market, eventually leaving a random comment from a predefined list, even trying to simulate a dialogue between two "clients". The malicious software exploits a vulnerability that allows it to obtain root rights on the mobile phone. Among other things, it pings a hijacked website on which a communication service for the malware has been specially installed. The latter communicates a list of decentralized hijacked websites. It then performs a complete fingerprinting of the phone (this can also be done before obtaining root access and installing the backdoor), deciding whether an additional tool from one of the servers is necessary. If not, it gathers information such as email addresses, phone numbers, telephone conversations, history, cookies and accounts. Phones could be infected until the target is reached, the operation for which the botnet was created is performed, and then it dissolves. The rest is variable. Still, the real problem is the spreading to other phones: through wifi networks, personalized messages offered to friends sorted by country and city, 0-day sites and building permanently infected applications.

An introduction to Android security

Android is a modern mobile platform that has been created for open development. Android applications use hardware and software technology to provide users with the ideal experience. To protect the value created by the operating system, the platform provides an environment that ensures the safety of users, data, applications and the device's network. Securing an open-source platform requires a robust security architecture and a rigorous security process. Android was designed so that security is divided into several layers, thus providing greater flexibility and protecting the platform's users.

The Android security program

The Android security process begins in the early stages of development, by creating a complete and configurable security model. Each major facility of the platform is reviewed by engineers. During development the components are reviewed by the Android Security Team, the Google Information Security Engineering Team and independent security consultants. The purpose of this analysis is to identify weaknesses and potential vulnerabilities long before the information is published as open source. At the time of publication, the Android Open Source Project allows maintenance and code review by an indefinite number of enthusiasts and volunteers. After the source code is published, the Android team reacts quickly to any report of a potential vulnerability, to ensure that the risk of users being affected is minimized as soon as possible. These reactions include rapid updates, the removal of applications from Google Play or the removal of applications from devices.

The Security Architecture of the Android Platform

Android aims to be the safest operating system for mobile platforms, trying to ensure:
• user data protection,
• system resource protection (including the network),
• application isolation.

To reach this point, some key points in the platform architecture have to be achieved:
• robust OS-level security through the Linux kernel,
• the application sandbox,
• secure communication between processes,
• application signing,
• defining permissions for applications and users.

More details about the Android architecture are available on source.android.com.

What Android Anti-Virus applications are there?

According to the results of AV-Comparatives in September 2012, the most effective mobile security applications are: avast! Mobile Security, BitDefender Mobile Security, ESET Mobile Security, F-Secure Mobile Protection, IKARUS mobile.security, Kaspersky Mobile Security, Lookout Premium, McAfee Mobile Security, Qihoo360 Mobilesafe, SOPHOS Mobile Security and WEBROOT. Their efficiency is over 93%, listed in descending order.

Conclusions

Although the statistics seem worrying, the reality is that most malware applications so far have not behaved very dangerously. At the same time, modern anti-virus technologies are moving to mobile with surprising speed, most market players already coming with a prepared security solution for smartphones. Moreover, the Android development team claims that the latest version of the operating system brings additional security, as confirmed by some independent security specialists. From experience, I tend to think that the problem of viruses on mobile phones will soon be as sensitive as the one on computers, and we have to be more attentive than ever to what we are installing. Stay safe, stay open!


Gogu

The Chief opened the door and one look was enough for him to realize that something was wrong.

The truth was that if he had looked around, he would have noticed that the entire office was shocked: everyone found it hard to believe the way Mişu and Gogu were fighting and yelling at each other. Well, that's because he's so stubborn! Gogu said to himself. But wait a minute, we'll settle it right now…

- Good thing you came, Chief! Can you help us solve an issue?

- Just one issue?! Judging by the scandal you made, I can hardly believe it's only one issue! Let me hear it: what is the apple of discord?

- What apple, Chief, do you think we are in the mood for joking? Mişu interfered, with red cheeks, glassy eyes and short breath, as if he had run a marathon. Enlighten Gogu yourself, because I'm out of arguments!

- You're out because there aren't any! That's why you get so nervous and raise your tone! Gogu could not refrain himself.

The Chief realized that the discussion had already lasted for a while, and it was obvious – for those who knew him, of course – that Gogu was striving to look calm but was in fact just as "involved" as Mişu (only not as red). Immediate measures were necessary, so the Chief stepped back and, in a low, deliberate voice, let the words come out loud and clear:

- I think it's time for both of you to calm down and explain – calmly, without raising your voices or talking at once – what this is all about.

Obviously, both of them started talking and explaining at once, so the Chief felt the need to repeat:

- Calmly, without raising your voices or talking at once, I said. Gogu, you tell me.

Gogu tried to refrain from showing a victory smile (it was clear that I would be the one to explain, I've been in the company longer and I'm certainly more skilled) and, after taking a slightly theatrical breath, began to explain:

- Chief, here's what this is all about: the pyramids of Egypt were built some five thousand years ago, right?

- That's right, Gogu!

- The Great Chinese Wall is more recent, around 2,200 years, right?

- That's right, Gogu!

- The Mayan pyramids are even more recent, but they are still 1,500 years old.

- What's this, Gogu, a history lesson? The Chief lost his patience.

Mişu wanted to interfere, but the Chief's look stopped him. Gogu continued relentlessly:

- The construction of the Taj Mahal lasted 22 years and ended in the 17th century. The Eiffel Tower was built more recently and it is also an architectural masterpiece.

- Are we getting anywhere?

- Could all of these extraordinary constructions be referred to as "projects"? Gogu continued.

- Yes, Gogu, yes…

- Then – he ended gloriously – isn't it right that project management has existed for almost 5,000 years?!

Mişu – who had turned from red to cherry – was about to have a heart attack, so he exploded:

- Project management – you may want to read a book or two, you megalithic mule! – appeared in 1950. You'll find the information on the internet with a minimum of effort… And therefore, if there was no project management, there were no projects either, you relic!

The Chief started laughing loudly and heartily, without any sign of wanting to stop. One by one, the entire office started laughing. Our two heroes looked at the Chief stunned, but it was impossible not to get "infected" by his laughter, so they found themselves smiling at first and then laughing. Howling with laughter, Gogu said to himself: only the Chief can turn a big scandal into a comedy. Once they calmed down, the Chief sat comfortably on a desk, legs dangling, preparing for the explanation. Clearly, he was enjoying it:

- Nice subject you raised, but still there was no need to get so agitated. Let me tell you how things are with projects and their management. By definition, projects are complex processes, relatively unique for the company that executes them. Do you remember the last product, the one we released two months ago? Making and releasing it was, for us, a project. Now that product is made in series, and therefore the activity has turned into a process. Meaning that, once we have learned how it's made, we can produce everything through processes that we know and that are managed by applying the concepts of operational management. For projects, we apply project management. But now I have a question for you: what happens if I have no idea what project and project management mean? Could I still make and release a new product?

- Five years ago we knew nothing about project management – interfered Zoli – and we still launched a new product every year.

The others approved silently. It seemed that the Chief's "story" had caught everyone. He continued:

- That's right! Of course we did. Only that things didn't always turn out as we wanted, costs were not monitored, and deadlines were exceeded almost every time. We worked by our own methods, just as the Egyptians and the Chinese had done in their constructions; they weren't too passionate about savings and deadlines. Your examples, Gogu, are the successful ones in the history of mankind, but there are also many other initiatives – hundreds, thousands – that were started but never finished, precisely because of their complexity.

- Sagrada Família! Mişu whispered, and the Chief nodded approvingly…

- The concept of "project" appeared when people tried to apply new methods to ensure the success of complex, risky processes involving huge investments. Actually, project management means granting increased managerial attention, materialized in assigning a project manager and a team to be responsible for the execution of that complex and unique process – which we now call a project. You fighters are both right and wrong at the same time: Gogu's examples are indeed projects as we understand them nowadays, but those who participated in their execution had no idea about project management and certainly were not applying it… And now the break is over, let's get back to work!


Simona Bonghez
[email protected]

Speaker, trainer and consultant in project management,
Owner of Confucius Consulting
