DHYAAS: E-Magazine of the Department of Computer and Information Technology, YCCE, Nagpur (July 2008)
From Principal's Desk…

My dear students,

The technology transfer with no boundaries and borders has become a reality due to connectivity, the internet and the worldwide web. This gives immense opportunities to our young scientists, engineers and technologists, who have already got world recognition as intelligent and talented people. But then there is severe global competition too. I want you to face it with determination and self confidence and take our country to new heights of progress and prosperity. Further, you should make sincere efforts to use new innovations and technologies for the betterment of our countrymen and all mankind.

Wishing you all success.

Dr. D.J. Tidke
Principal
YCCE, Nagpur
From the Desk of Head of Department

The success of the developed nation much depends on the technocrats. They play a pivotal role in the development of human resources. Besides technical knowledge, our aim is to develop multi-dimensional technocrats, and DHYAAS is an outcome of it. 'All work and no play makes Jack a dull boy': one must involve oneself in extra-curricular activities for all-round personality development. It is not easy to arrange and conduct such activities due to the tight schedule of engineering, and so this magazine is a platform for students to express their innate potential. At last, I appreciate the editorial board for this magazine, who have taken sincere efforts to shape it.

Wishing best compliments.

Prof. M.M. Kshirsagar
HOD, Founder Editor, Dhyaas
CT & IT Department, YCCE, Nagpur
From the Desk of Editor…

I always believe that the students are like the colours in our lives; they add spice, variety and taste to our life. It is due to them, but for whom our lives would have been like the dry Siberian plains. They laugh, tease, learn, rejoice, fail, recollect, win, and all of this makes us live a new life with them. Those who haven't been teachers, I believe, miss something very important in life: the experience of satisfaction. I see myself as a gardener watering and tending the students. My job is to toil with them, share with them, guide them, nurture them… most importantly, give them a mouthpiece so that they can unfurl their majestic wings and fly the mightiest leap. The pen is the portal to the thoughts; when one starts with a pen in hand, only then one realises the power within… concealed, now unleashed to take over the world. I just see myself as the mentor who doesn't have to do anything extraordinary but just has to stand by these eaglets to leap their flight of life. To believe in the youthful dreams of changing the world. To share their vision for a better tomorrow….

Like any river, once a pen is held, words ebb out seamlessly.
But then, as ever, the direction of flow is what holds importance; I intend to help them flow in the direction of light, life, brotherhood.

With great delight I present to you all this July issue of DHYAAS. While every care has been taken to produce a flawless work in keeping with the earlier works, I solicit and encourage your participation in our very own DHYAAS magazine.

Prof. Ujwalla Gawande
Faculty Editor, Dhyaas, CT/IT Dept.

Technology is not what goes on in industry, labs and warfare; rather, it is what enables food to millions, smiles on billions, faith, and reaches everyone. The pen is not that which writes or guides, but the one that kindles, ignites and initiates our dormant selves. In the end I thank all those who have contributed to this magazine, without which it would not have been in the shape and magnificence you see. Also I thank the editorial board for giving me the opportunity to work as a student editor for this magazine.

Shiv Prakash, V CT
Student Editor
CONTENTS

Virus Battery
Google's Chrome Browser
Virtualization
OpenMoko
Hyper Terminal
For tomorrow, think green
Bluetooth in year 3000 AD
Voice Recognition
Intel's Atom
IEEE 802.11n
Into the window factory
Run Linux in windows with a twist!
Crash route
Your credit card can be cloned
CDMA and Agriculture
Is windows losing against LINUX?
Software Testing
HIBERNATE
Enhance URSelf!!
Glossary (Prof. Ujwalla H. Gawande, Milind S. Deshkar)

We cordially invite your comments on this issue of Dhyaas, July 2008. The editors can be reached at ujjungp@rediffmail.com, shivprakash@hotmail.com.
VIRUS BATTERY !!!
Sarang.R.Kharpate, VII IT 67
Scientists have successfully utilized a virus to create a tiny battery that can power miniature electronic devices used for controlled drug delivery, and tiny lab‐on‐a‐chip applications. Experts at the Massachusetts Institute of Technology, Cambridge, say that their method to build microbatteries relies on a genetically‐engineered virus called M13. The scientists first made a template from polydimethylsiloxane (PDMS), a commonly used silicon‐based organic polymer.
After coating it with alternating layers of positive and negative electrolytes, they added the virus. The researchers had designed the virus to have negatively charged amino acids at its surface, so that it stuck to the template, and an affinity for cobalt, a favored material for batteries.
Each virus is a semi‐rigid fiber a few nanometers in diameter, and about a micrometer long, which tends to pack tightly into a whorl that looks similar to a fingerprint. The researchers say that when the whole assembly is dipped into a solution of cobalt ions, it coats the viruses to create a very large surface area that could store charge. When the researchers stamp the template onto a platinum layer, and peel off the PDMS, they get an array of small dots of the prepared material, cobalt‐side down, which forms the heart of an effective battery. "This is the first time anyone has ever stamped a battery device," Nature magazine quoted Paula Hammond, part of the MIT team, as saying.
Google's Minimal Appeal ‐ Chrome
Shobhit Mathur, VII I.T.
Google's just‐released Chrome takes the same approach to browser design that Google takes to its home page ‐‐ stripped‐down, fast and functional, with very few bells and whistles.
That's both the good news and the bad news about this browser. Those who like a no‐frills approach to their Web experience, and who want the content of Web sites front and center, will welcome it.
That said, keep in mind that this is a first beta, and Google may well introduce new features in future versions. For example, this version doesn't have a true bookmarks manager, but it would be quite surprising if one didn't show up in future betas.
In fact, there's a very long list of features this browser doesn't have. There's no built‐in RSS reader, like there is in Internet Explorer or that's available as an add‐on for Firefox. You won't find a good bookmarks manager, such as you'll find in both Internet Explorer and Firefox. There are no add‐ons like those you'll find in Firefox. Be warned — the list of what's not there can go on for quite some time.
That was all by design, though, and it's why Google calls this browser Chrome. The user interface of a browser is called its chrome, and Google set out to reduce the chrome ‐‐ in other words, simplify the user interface ‐‐ as much as possible.
Designed for consumers or enterprises? A great deal of what makes Chrome different from other browsers is not what you see, but what you don't see. Chrome appears to be designed in great part to run AJAX and Web 2.0 applications. It's the only browser that has been built from the ground up for a world in which the browser is a front end to Web‐based applications and services like those that Google provides, and like those that are used increasingly by businesses.
Google has made dramatic changes under the hood. It has chosen the open‐source WebKit as the rendering engine, and it built its own JavaScript virtual machine called V8 for running JavaScript faster, with more stability, and more securely. Each tab in Chrome runs as its own separate process, so if one tab is busy or bogged down, it won't affect the performance in other tabs. Google claims that designing a browser this way will also cut down on memory bloat.
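The process-per-tab idea is easy to demonstrate in miniature. The following Python sketch is a toy illustration, not Chrome's actual architecture or code: each "tab" runs as a separate OS process, so one crashing renderer leaves the others, and the browser itself, untouched.

    # Toy illustration: one OS process per "tab".
    import multiprocessing as mp

    def render_tab(url: str) -> None:
        # Pretend to render the page; a buggy site crashes only this process.
        if url == "http://buggy.example":
            raise RuntimeError("renderer crashed")
        print(f"rendered {url}")

    if __name__ == "__main__":
        urls = ["http://a.example", "http://buggy.example", "http://b.example"]
        tabs = [mp.Process(target=render_tab, args=(u,)) for u in urls]
        for t in tabs:
            t.start()
        for t in tabs:
            t.join()
            # A non-zero exit code marks the "sad tab"; the parent survives.
            print(f"{t.name} exit code: {t.exitcode}")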
Also important is that Chrome comes equipped with Google Gears, which is a kind of glue that ties together Web‐based applications and your own hard disk.
The effect of all this should be — says Google — a browser able to run Web‐based applications with the same speed, interactivity and stability as client‐based applications. This means that Chrome may be aimed as much or more at Microsoft Office than it is at Internet Explorer. By providing a superior platform for running its Web‐based applications, Google is giving itself a chance to supplant Office with Google Docs.
Seen in that way, the ultimate success of Chrome may be measured more by how many enterprises switch from Office to Google Docs than by how many consumers switch from IE to Chrome.
A look at the interface

All that being said, Chrome is, above all, a browser, and nothing would make Google happier than if the entire world switched to it. So the company has put a great deal of effort into rethinking the entire browser interface.
The Chrome interface looks different from any other browser you've seen. Tabs sit above the address bar instead of beneath it. There's no menu, no title bar and very few icons. In fact, there's not even a home page icon by default; to get one, you have to click the Tools icon, then choose Options ‐‐> Basics and check the box next to "Show Home button on the toolbar." Overall, it's as stripped‐down a browser interface as you'll find.
To get to most browser functions and options, you use menus that drop down from two icons at the right‐most portion of the browser — a page icon and a tools icon. But even there, this browser is stripped‐down. For example, the Options menu is where you often find many hidden features, buried beneath multiple tabs. In Chrome, the Options menu (found under the Tools icon) offers only three tabs, none of which includes an overload of choices. You'll mainly find basics such as whether to display the home page icon, where to store your downloads and so on.
The address bar — what Google calls the Omnibox — is one of Chrome's nicer features. It doubles as a search bar: Type in your search terms, and it uses the search engine of your choice to do a search. When you instead type in a URL, it works much like the address bar in Internet Explorer 8 and Firefox 3: It lists suggested Web pages as you type, which it gathers from previously visited sites and your bookmarks, as well as making suggestions of its own based on Web site popularity.
VIRTUALIZATION
Saurabh Lad ,VII IT, 68
Someone somewhere is still getting compromised after investing a lot in security. Now there's something called 'virtualization' which seems to be some kind of a promised land, a 'solution' to all these security problems. It's being adopted rapidly across multiple organizations just because it's 'secure'. So what is virtualization? Why is it such a craze? Is it really that secure? Is there no way to compromise it? Are we finally 100% safe? A lot of pertinent questions there; let's try and answer them, shall we?
What is Virtualization?
The term virtual itself means something which simulates what is real; something which you wouldn't know was not real if you used it. That's what virtualization is as well. You can install an entire OS inside a virtual machine and set it to open in "full screen mode" each time you want to use it (the same way you watch those movies on your computer, always in full screen mode). Once it's booted up and your friend pops in to use your machine, there's no way he'll ever know that he was accessing his email through a virtual machine (VM) — let's call this the Guest OS — and not the actual OS (Host OS). So all we need is, for example: a Windows box as your desktop, the virtualization software installed on it, and Linux, Solaris and any other OSes you use installed inside the software (as images).
Virtualization is the method by which a "guest" operating system is run under another "host" operating system, with little or no modification of the guest OS. It also allows you to run multiple operating systems simultaneously on a single machine.
Intel Virtualization Technology (Intel VT) is a set of hardware enhancements to Intel server and client platforms that support software‐based virtualization solutions. Intel VT allows a platform to run multiple operating systems and applications in independent partitions, so that one computer system can function as multiple virtual systems. With support from the processor, chipset, BIOS, and enabling software, Intel VT improves traditional software‐based virtualization.
Why would I want virtualization?
The industry buzz around virtualization is just short of deafening. This gotta‐have‐it capability has fast become gonna‐get‐it technology, as new vendors enter the market, and enterprise software providers weave it into the latest versions of their product lines. The reason: Virtualization continues to demonstrate additional tangible benefits the more it's used, broadening its value to the enterprise at each step.
Server consolidation is definitely the sweet spot in this market. Virtualization has become the cornerstone of every enterprise's favourite money‐saving initiative. Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It's easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings.
Less power consumption, both from the servers themselves and the facilities' cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data centre and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it's an ideal place to kick the tires.
Once application development is happy, and the server farm is turned into a seamless pool of computing resources, storage and network consolidation start to move up the to‐do list. Other virtualization‐enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.
Now that we're clear on what a VM is and what its uses are, let's look at how it's used by the IT community and why it's so talked about in the security community. Different people use a VM for different reasons: training, testing code, isolating key DMZ servers, R&D and much more. However, we'll look at a VM just from a security standpoint. Let's try and figure out how a VM keeps malware at bay, how it keeps viruses and worms from spreading.
Malware – On a normal system
So how does normal malware behave? One of the most common scenarios is as follows:
User clicks on link in Email
User taken to/redirected to a “Free software download” page
User downloads beta version of the latest game
User also unknowingly downloads, unpacks and installs malware packaged with the game
The malware now, depending on how it was written, usually does one of the following:
Duplicates itself on the user's system and starts spreading itself to other systems
Installs itself on to the user's computer and uses it to host pornography or illegal software, or maybe even uses it as a platform for a DoS attack
Installs itself on to user’s computer, loads at startup and captures user’s personal information and sends it to an attacker
So how do you normally counter malware? You keep your operating system and all installed software updated with the latest vendor patches, turn off needless services, keep your antivirus definitions updated, do not visit suspicious websites or links, and have a host‐based packet‐filtering/program‐control firewall to regulate access. Yeah, right, we all know how easy that is! Enter the VM to help you out.
Malware – In a virtual machine
A VM is an OS inside an OS. The only difference is that it's isolated from the host OS completely. Let's look at how a VM works a bit more closely, and at how a standard VM architecture is laid out.
The VM simulates the exact working of the underlying hardware it's running on. This means that whatever the user does inside the Guest OS, he's actually using the RAM, processor and network card of the host. The VMM (actually the VM driver) gives as much control as possible to the Guest OS user. If, however, the Guest OS performs an operation that would affect the Host OS, the VMM driver will step in and ensure that the Host OS remains completely unaffected. For example: if there's some virus you caught while inside the VM and it was programmed to reboot the machine, just the Guest OS would get rebooted. The Host OS is completely unaffected by the virus. The best part is that to get back to a clean Guest OS you just need to point the software to a clean, uncorrupted image.
Conclusion
Virtualization is an excellent technology if you're testing things. It's probably most widely used for malware analysis, though corporates nowadays buy just one high‐end server fitted with truckloads of RAM and run their applications and even domain controllers in virtualized environments. VMware is the most popular virtualization software in use right now; it uses the full virtualization approach. The Open Source community is associated with Xen, which uses a paravirtualization approach. However, rest assured, please do not treat it as a magic bullet. It is only as secure as your host OS itself.
So those good old best practices of defense in depth still stand. Well‐written security policies, secure rulebases, monitoring, LAN segregation: all those are still very much the key to a relatively secure environment. But virtualization does give you that extra layer of security, for now. There have been exploits on these products as well, but they are relatively few. So as long as you implement VMs well and don't leave Guest OS images lying around loosely, virtualization is great and very useful.
I think I've touched on too many things in one article and not gone too deep into anything, thus leaving you unsatisfied. My apologies for that, but it's hard to cover every single detail in a five‐pager, especially on a subject as vast as this. Maybe someday later I'll go a bit deeper into the technology and we could take a look at how exactly we could handle the problems of virtualization. Hope you enjoyed this introductory article though.
OPENMOKO—THE NEW FACE OF CELL PHONE TECHNOLOGY
Mayur Bhadikar, VII IT, 55

OpenMoko is an open source mobile communications movement on a mission to create a platform that empowers people to customize their phone, much like a computer, in any way they see fit. It is a platform that focuses on innovation, usability, reliability, and quality.
The Neo FreeRunner OpenMoko mobile phone
OpenMoko comes pre‐installed on the Neo FreeRunner cell phone, which is available in India now at a price of Rs 22,000. Since OpenMoko is completely open source, even a user/customer with some programming knowledge can write his own custom applications. Not only that, he can also write a whole custom mobile operating system kernel by himself which suits his own needs.
For example, suppose I want my mobile phone to listen to and understand its surrounding environment, so that if I am in a meeting having an important conversation, the phone switches to silent or vibrate mode on its own. This is impossible on an ordinary mobile phone, because I am bound to do only those things which are available to me as options in the phone menu, or I have to rely on some third‐party software which may not serve my need exactly. On an OpenMoko mobile phone I have the freedom to write my own application the way I want it, and new possibilities are now open to me.
OpenMoko's heart lies deep in the components and resources of the Open Source and Free Software movements. Moko is short for 'Mobile Kommunikations'; the 'K' is a tribute to all the hackers around the world who build software that drives innovation into the platform. 'Open' means that developers around the world can evolve the platform in any way they like.
Until now, mobile platforms have been proprietary and scattered. With the release of OpenMoko, which is based on the latest Linux open source efforts, developers now have an easy way to create applications and deliver services that span all users and provide a common “look and feel”. OpenMoko also offers common storage models and libraries for application developers, making writing applications for mobile phones fun and easy while guaranteeing swift proliferation of a wide range of applications for mobile phones.
With such extremely high quality open frameworks, developers will be armed with exactly the tools they need to revolutionize the mobile industry.
“For the first time, the mobile ecosystem will be as open as the PC, and mobile applications equally as diverse and more easily accessible,” said Sean Moss‐Pultz, architect of OpenMoko and Product Manager of FIC’s Mobile Communication Business Unit. “Ringtones are already a multi‐billion dollar market. We think downloading mobile applications on an open platform will be even bigger.”
“Open platform standardization will kick‐start an entire ecosystem of mobile phone developers,” stated Dr. Ming J. Chien, Chairman of FIC. “I’m excited because I believe carriers will see an increase in revenues from new data traffic. And being able to customize your mobile phone in any way you see fit should be very appealing to end‐users.”
HyperTerminal
Sweta.D.Thakare, VII IT
Have you ever been in a position where you just had to send a file from your computer to a friend? In that case you would have had to put it on a floppy, drive to your friend's house, drive back, and then explain your extended absence of an hour to your mom. The best way to save time now is by using a very simple program called HyperTerminal, which is present in all current Windows versions from Windows 95 to XP.
So, what is this hyper terminal?
You can use HyperTerminal to send and receive files, or to connect to computer bulletin boards and other information programs. You can also use HyperTerminal and a modem to connect to a remote computer, even if the remote computer isn't running Windows.
After opening the HyperTerminal application you are prompted to select a connection. After entering the name of a connection and choosing an icon, enter the number that you have to dial (in this case, your friend's number). If the area code is shown, then disable it by modifying the properties of the connection. After this, you are ready to go hyper!
Inform your friend beforehand that you are about to ring him up on HyperTerminal so he can select the Wait For Call option in the Call menu. When you dial from your modem, his phone will ring once and then both the computers get connected.
To set up a new connection
1. On the File menu, click New Connection.
2. Type a name that describes the connection, click the appropriate icon, and then click OK.
3. Enter the information for the call, and then click OK.
4. To dial the call, click Dial.
To call a remote computer
1. On the File menu, click Open.
2. In the File name box, type or select the name of the connection you want to use.
3. Click Open, and then click Dial.
To send a file to a remote computer
1. On the Transfer menu, click Send File.
2. In the Filename box, type the path and name of the file you want to send.
3. Click Send.
Note:
1. You can change the protocol you use to send the file by clicking the one you want in the Protocol list.
2. You can also send a text file to a remote computer by clicking the Transfer menu, and then clicking Send Text File.
3. In most cases, you need to prepare the file‐transfer software on the remote computer to receive the file. For more information, contact the administrator of the remote computer.
To receive a file from a remote computer
1. Use the software on the remote computer to send (download) the file to your computer.
2. On the Transfer menu, click Receive File.
3. Type the path of the folder in which you want to store the file.
4. On the Use receiving protocol list, click the protocol the remote computer is using to send your file.
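Outside HyperTerminal, the same push-a-file-down-a-serial-link idea can be sketched in a few lines of Python with the third-party pyserial library. This is a bare illustration, not HyperTerminal's mechanism: the port name, baud rate and file name are assumptions, and a real transfer would wrap the bytes in a protocol such as Zmodem for error checking.

    # Raw file send over a serial/modem link (no transfer protocol).
    import serial  # pip install pyserial

    def send_file(path: str, port: str = "COM1", baud: int = 9600) -> None:
        with serial.Serial(port, baud, timeout=5) as link, open(path, "rb") as f:
            while chunk := f.read(1024):  # send the file in 1 KB chunks
                link.write(chunk)
            link.flush()

    send_file("holiday_photos.zip")  # hypothetical file name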
For Tomorrow, Think Green
Manish Kharpate,VII IT
Electricity is one of the most important sources of energy that drives us. According to 'International Energy Outlook 2008', an official energy statistics release by the US government, global electricity generation will nearly double, from about 17.3 trillion kWh in 2005 to 24.4 trillion kWh in 2015 and 33.3 trillion in 2030. Even though nuclear power and other means of electricity generation will increase over the period, the most carbon‐intensive source, coal, will still dominate, says the report.
News snippets keep underscoring the research initiatives for harnessing mechanical energy into electrical energy that can be used to power your portables. And it is not only computing that matters: there is a whole new breed of hybrid vehicles and electric cars, organic fuels including bio‐diesel and natural gases, fuel cells and so on. We delve a bit deeper to give you a sneak peek at how these greener initiatives will change the face of our technology and reduce our carbon footprints.
As the transistor count of today's microprocessors increases, overheating becomes a vital issue. As we move towards microprocessors with ever‐growing transistor counts, the increasing power budgets of these chips must be addressed. Research carried out at Purdue suggests that after exceeding 35 to 40 W, additional power dissipation increases the total cost per CPU chip by more than a dollar per watt.
EFFICIENT COOLING SYSTEM:
Research initiatives for efficient cooling systems are inevitable, especially when the technologies currently available for cooling cannot break beyond 2000 watts of heat per square centimeter. A Purdue University research team led by professor Issam Mudawar broke new ground when it created a way to cool chips that generate more than 1000 W of heat per square cm.
A fan is an almost inevitable component of any cooling system. However, even if fans are sufficient for basic computer applications like business computing and Net surfing, they are rendered ineffective when the machines are revved up by gamers and overclockers.
The new technology is based on a water cooling system, as against the conventional air‐cooled ones. The method implements a series of microjets that distribute a coolant into minuscule channels over the top of the chip. The heat generated by the chip causes these narrow grooves to heat up, which in turn causes the coolant liquid to vaporize. The reason is that while airflow cannot be kept under control, coolant liquid jets can be maintained uniformly along the length of the microchannels. Uniform cooling prevents the formation of any hotspots and increases the life of the processors. Mudawar explains that it is not the superior properties of the coolant that result in such a significant improvement in the cooling mechanism, but the hybrid design, which relies on two cooling methods, including the greater surface area provided by the microjets. The coolant collects at both ends of the channels and is circulated back through the system.
MINI COMPUTER AND APPLICATION:
Cloud computing is the new buzzword. Now the question arises: what makes it so popular? Nothing else but greater economy and flexibility.
'Zonbu' is just one of the products being churned out that function on this concept. In the wake of these technological initiatives, products like Zonbu are the power‐miser alternatives that business computing demands. Zonbu works on a Linux platform and, considering the minimalist apps that it carries, it doesn't even resort to a fan for its cooling needs. A mini device, no moving parts, energy‐efficient components: that's all that an Outlook and MS Office user would ever require. On‐board flash memory, as opposed to a hard‐disk‐based one, further reduces its power requirements.
For a comparable output, most conventional CPUs will consume 60‐100 W or more, approximately 4 to 5 times more than what Zonbu does. Zonbu's makers claim considerable savings over the course of a year. If you prefer to be more conservative, you can also add an optional solar panel to power the device for mini applications.
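A quick back-of-the-envelope check shows what those wattages mean on the electricity bill. The 8-hours-a-day duty cycle and the Rs 5 per kWh tariff below are assumptions for illustration:

    # Yearly running cost for a given steady power draw.
    HOURS_PER_YEAR = 8 * 365  # assumed 8 hours of use per day
    TARIFF = 5.0              # assumed Rs per kWh

    def yearly_cost(watts: float) -> float:
        kwh = watts * HOURS_PER_YEAR / 1000
        return kwh * TARIFF

    for name, watts in [("conventional CPU", 75), ("Zonbu-class device", 15)]:
        print(f"{name}: {yearly_cost(watts):.0f} Rs/year")
    # conventional CPU: 1095 Rs/year; Zonbu-class device: 219 Rs/year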
BLUETOOTH IN THE YEAR 3000 A.D.
SWAPNIL S. HEDAU, VII I.T.

BLUETOOTH has been the subject of much hype and media attention over the last couple of years. As various manufacturers prepare to launch products using Bluetooth technology, the unsuspecting public is about to be catapulted into the next stage of the information technology wars.
Bluetooth is an inexpensive but low‐data‐rate wireless technology used to connect devices. You can connect computers, printers, pocket PCs, headsets, stereos, mice, and keyboards. All these devices have to implement a Bluetooth "profile", which corresponds to the specific way a cable would be used to connect one device to another. It is a technology for the spontaneous creation of wireless networks and the discovery of services on these networks.
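That device discovery can be tried first-hand. As a small illustration with the third-party PyBluez library (assuming a Bluetooth adapter your OS supports):

    # List whatever Bluetooth devices are currently in range.
    import bluetooth  # pip install pybluez

    nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
    for addr, name in nearby:
        print(f"{addr}  {name}")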
There are a lot of opportunities for Bluetooth‐enabled products that exploit the various features of the technology to add value. However, there are many factors which could work against it. Potentially competing technologies such as HomeRF and, to a lesser extent, IrDA could cause consumer confusion and at worst push Bluetooth into a niche corner. However, both HomeRF and IrDA are said to be working to form a co‐operative harmonization between the technologies, though how it will work out remains to be seen.
The SIG has created a series of new working groups that are continuing development of the Bluetooth specification. The development is ongoing in three key areas: correction and clarification of the version 1.0 specification, the development of further profiles, and the development of an enhanced radio and baseband, which will lead to a new version 2.0 core specification.
Thus in the year 3000 A.D. all devices will be able to communicate with each other and no individual will be out of the network. Anybody will be able to communicate with other people or devices, without cables, within a certain reach. Bluetooth in the future will thus bring a technological revolution that puts the whole world within your reach.
VOICE RECOGNITION
Neelam Desai ,VII IT
Voice recognition has emerged as a new tool in the present‐day age of information. Voice & continuous speech recognition plays a major part in helping people with disabilities to communicate effectively. It has also revolutionized the field of medical transcription. The development of voice recognition software has brought the entire world closer.
The voice recognition technique has been implemented in software & extensively tested on a variety of parameters. The notable products are Dragon NaturallySpeaking v5.0 (Preferred Edition), Philips FreeSpeech 2000, ISMI VoiceDirect Continuous Gold & We Tech.
The first software‐only dictation product for PCs, Dragon Systems' DragonDictate for Windows 1.0, using discrete speech recognition technology, was released in 1994. It was a slow, unnatural means of dictation, requiring a pause after each & every word. Two years later, IBM introduced the first continuous speech recognition software, its MedSpeak/Radiology. These systems had five‐figure price tags & required very expensive PCs. Continuous speech technology allows its users to speak naturally and conversationally, relieving much of the tedium of discrete speech dictation.
Dragon Systems made an enormous stride in June 1997, when it released NaturallySpeaking, the first general‐purpose continuous speech software program. It brought the realm of continuous speech recognition to a much wider range of users. Two months later, IBM released its competing continuous speech software, ViaVoice.
Among the rest of the software, We Tech is one of the pioneers in creating India's first speech recognition software based on Microsoft's speech engines; it has quite a reasonable price & offers local language support. On the other hand, both the Philips FreeSpeech & the VoiceDirect Gold use the same ViaVoice speech recognition engine.
Voice recognition software has always been given close scrutiny, since a lot is demanded & expected of it. Accuracy, along with the time taken for recognition, is of utmost importance. Today's products target continuous, narrative speech: the days of discrete speech recognition programs are over & hence all tests were carried out without any pauses in between. In addition to these challenges, enormous variance exists among individual human speech patterns, pitch, rate & inflection. These variations are an extraordinary test of the flexibility of any program. Naturally the products were put through stringent tests to assess all their features on a variety of parameters such as accuracy, system requirements, capacity to incorporate specialized vocabulary, ease & speed of installation, CPU utilization & others.
In addition to the parameters of system requirements, another component of speech recognition software that is often overlooked is the microphone. A poor‐quality microphone can undo even the best speech recognition software. Another important factor leading to better accuracy is the silence‐detection routine that handles background noise. The volume of the microphone has to be appropriately adjusted so as to avoid picking up unnecessary noise, which adds to the problems of slurred speech & mumbling; these, together with background noise, are the first causes of improper detection of speech. Besides, the choice of the sound card is also a very important factor in speech recognition, since sound is conveyed to the software only after it is digitized.
When we use any voice recognition software, it takes our words as snapshots. It hunts through its massive word database for the closest possible match. But due to the tremendous variety in spoken dialects, phonetics & accents, the software often comes up with wrong matches. That's where training comes in. The software tries to learn & adapt itself to the specific accent & voice tonality. It creates a specific user voice profile, which keeps on updating. A common analogy could be of a sculptor chiseling away at a stone till he comes up with something that slowly starts resembling the human form. That's how most voice recognition code works.
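As a modern illustration of this record-calibrate-match loop (using the Python speech_recognition package, which is not one of the products reviewed above; the audio file name is an assumption, and the offline Sphinx engine needs the pocketsphinx package installed):

    # Transcribe a short recording: calibrate for noise, then match.
    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("dictation.wav") as source:
        r.adjust_for_ambient_noise(source)  # the silence/noise calibration step
        audio = r.record(source)

    # The engine hunts its acoustic/language model for the closest match.
    print(r.recognize_sphinx(audio))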
Voice recognition software has multifaceted applications. One of the major areas of application is medical transcription. The entire process of medical transcription involves simply converting voice information into an editable & storable digital format. Newer speech recognition software also features the capability of working with technical & medical jargon. These features, coupled with the advances in processing power & in the speech engines themselves, have resulted in such software being a very viable & practical option in daily life. Besides, the newest range of cellular & mobile communication devices has the capability of processing speech in real time. Today, we even have applications that provide real‐time language translation & also allow voice to be processed remotely via a server at a base station.
Into the Window Factory
Dimple Sarode, VII IT
Windows 7 (formerly codenamed Blackcomb and later Vienna) is the working name for the next major version of Microsoft Windows, the successor to Windows Vista. Microsoft has announced that it is "scoping Windows 7 development to a three‐year timeframe", and that "the specific release date will ultimately be determined by meeting the quality bar." Windows 7 is expected to be released in January 2010. The client versions of Windows 7 will ship in both 32‐bit and 64‐bit versions. A server variant, codenamed Windows Server 7, is also under development.
Microsoft is maintaining a policy of silence concerning discussion of plans and aspirations for Windows 7 as it focuses on the release and marketing of Windows Vista, though some early details of various core operating system features have emerged. As a result, little is known about the feature set, though public presentations from company officials have disseminated information about some features. Leaked information from people to whom Milestone 1 (M1) of Windows 7 was shipped also provides some insight into the feature set.
Unveiling
The Windows 7 user interface was demonstrated for the first time at the D6 conference, during which Steve Ballmer acknowledged a projected release date of late 2009. The build of Windows 7 on display had a different taskbar than the one found in Windows Vista, with, among other features, sections divided into different colors. The host declined to comment on it, stating "I'm not supposed to talk about it now today".
Builds
Milestone 1
The first known build of Windows 7 was identified as a "Milestone 1 (M1) code drop" according to TG Daily with a version number of 6.1.6519.1. It was sent to key Microsoft partners by January 2008 in both x86 and x86‐64 versions. Though not yet commented on by Microsoft, reviews and screenshots have been published by various sources. The M1 code drop installation comes as either a standalone install or one which requires Windows Vista with Service Pack 1, and creates a dual‐boot system.
On April 20, 2008, screenshots and videos of a second build of M1 were leaked with a version number of 6.1.6574.1. This build included changes to Windows Explorer as well as a new Windows Health Center.
A standalone copy of build 6519 was leaked initially to private FTPs by BETAArchive on June 10, 2008, which quickly spread to many torrent trackers.
Later builds
According to TG Daily article of January 16, 2008, the Milestone 2 (M2) code drop was at that time scheduled for April or May 2008. User interface appearance changes are expected to appear in later builds of Windows 7.
Milestone 3 (M3) is listed as coming in the third quarter, and although the release dates of beta versions and release candidates are currently "to be determined", the release to manufacturing of Windows 7 has been alternately confirmed for the second half of 2009 or the first half of 2010 depending on who was speaking at the time.
Features
Desktop context menu showing the return of the Display Properties icon (previously removed in Vista) and new options for Gadgets.
Windows 7 has reached the Milestone 1 (M1) stage and has been made available to key partners. According to reports sent to TG Daily, the build adds support for systems using multiple heterogeneous graphics cards and a new version of Windows Media Center. New features in Milestone 1 also reportedly include Gadgets being integrated into Windows Explorer, a Gadget for Windows Media Center, the ability to visually pin and unpin items from the Start Menu and Recycle Bin, improved media features, the XPS Essentials Pack being integrated, and a multiline Calculator featuring Programmer and Statistics modes along with unit conversion.
Reports indicate that a feedback tool included in Milestone 1 lists some coming features: the ability to store Internet Explorer settings on a Windows Live account, updated versions of Paint and WordPad, and a 10 minute install process. In addition, improved network connection tools might be included.
A Device Center, display settings, a recovery center, and Windows sensors have been added to Control Panel.
In build 6574, the Windows Security Center has been renamed the Windows Health Center, and focuses on monitoring the complete health status of the computer in a central location.
In the demonstration of Windows 7 at D6, the operating system featured multi‐touch, including a virtual piano program, a mapping and directions program and a touch‐aware version of Paint.
Methods of input
On December 11, 2007, Hilton Locke, who worked on the Tablet PC team at Microsoft, reported that Windows 7 will have new touch features. An overview of the touch capabilities was demonstrated at the All Things Digital conference on May 27, 2008. A video demonstrating the multi‐touch capabilities was made available on the web the same day.
Also, Bill Gates has said that Windows 7 is "a big step forward" for speech technology and handwriting recognition.
Virtual hard disk
On May 21, 2008, Microsoft posted a job opening for Windows 7 regarding work to implement VHD support, i.e. support for single‐file containers that represent an entire hard drive including partitions, and transparently performing I/O operations on this as a typical hard drive, including boot support.
15-Second Boot-ups

According to Microsoft's Customer Experience Improvement Program, only 35% of Vista SP1 systems can boot up in 30 seconds or less; the slow boot‐up is mainly due to services and programs which are not absolutely required being loaded while the OS is launching. In order to solve this problem, Microsoft has set aside a team to work solely on the issue, and that team aims to "significantly increase the number of systems that experience very good boot times." They "focused very hard on increasing parallelism of driver initialization." Also, it aims to "dramatically reduce" the number of system services, along with their processor, storage, and memory demands.
Run Linux in Windows... With a twist!
Priyanka Kahare, VII IT
We keep complaining about how we'd like to use Linux but all our work and favourite applications are here on Windows. Virtualisation can be harnessed to run both Linux and Windows together, but virtual machines are very resource‐hungry and it's not practical. andLinux could be the solution to all your problems. With andLinux, you can run Linux applications on Windows without having to boot into Linux or run virtual machines. The andLinux installer can be found in this month's DVD. We're showing you a quick outline of some of its features.
Getting AndLinux Up And Running
andLinux doesn't require you to create or modify any partitions, and the entire installation procedure is done on Windows like any other application. Double‐click the executable and continue beyond the agreement acceptance window. Enter a suitable amount of memory that you want to allot to andLinux. If you want to use sound applications, check the box for enabling the audio support module. Similarly, if you want to use andLinux seamlessly when you boot into Windows, choose to set up andLinux as an NT service that runs automatically. Next, select the method you want to use to access the Windows file system. Choose CoFS; alternatively, network shares can be used via Samba. The next step is to select the partition that can be accessed by andLinux. You can also set it to access a particular folder. Select the folder and proceed.

Accessing andLinux's Applications
If you’re wondering where the andLinux applications are, they are all accessible from the icon in the system tray. If you want to run console applications, you can open a terminal session using Konsole or an andLinux terminal. The username to use is [root] with no password. The andLinux terminal allows you to jump to other terminals using the [Alt] + [F1] / [F2] / [F3] / [F5] / [F6] keys.
Adding New Software
You might think andLinux works like some kind of LiveCD that runs on Windows, but it is much more than that. LiveCDs generally don’t let you install applications, which means you are stuck with what is provided.
Synaptic Package Manager is an easy way to add applications to andLinux over the Net
One of the most impressive features other than running Linux on Windows is that you can actually install new software on it. The simplest and quickest way to do this is to use the Synaptic Package Manager, as long as you have an Internet connection.
Right‐click on the system tray icon and click on Synaptic. Now search or browse through the directory for the software you want. Right‐click on the application and click Mark for Installation. Click on Apply on the top menu to have the software downloaded and installed. You can also use apt‐get to install, like you would in other distributions.

Stopping The Service
While it’s fun to have andLinux and its applications running on Windows, it can sometimes be a resource hog especially if you don’t have a lot of RAM to spare or if you’re about to start some intensive games.
andLinux can be temporarily shut down by stopping its service in Windows
In such cases, you might want to shut down all the unnecessary services and programs running in the background. Go to the folder where you installed andLinux. Double‐click on srvstop.bat and the service will end. The service can also be shut down through the Services manager under Windows Administrative Tools. To start the service again, double‐click srvstart.bat.
Using andLinux's Integration For Windows File Formats

If you haven't noticed, andLinux doesn't only run applications or open files in the Linux file system. You can browse through folders and use andLinux's applications to run certain files; for example, .doc files can be opened using KWord. Browse using Windows Explorer, right‐click, and you will see the andLinux application's name. If you don't see an association made, you can open the application using the menu, then open the file through the File > Open menu. Look for a path to Windows to open files in the Windows partition. You can also access the same partition through the path /mnt/win. Keep in mind that andLinux will not be able to open paths outside the ones you specify during the installation.
Modifying The Launcher Menu
The andLinux launcher allows you to run most applications available in andLinux. It might not automatically pick up new software that you download and install, though. The menu can be modified through a small text file located in the installation folder of andLinux: andLinux\Launcher. Open menu.txt in any text editor and add any new entries that you want. Adding '—' adds a separator in the menu. The format is ApplicationName;ApplicationIcon.ico;ApplicationCommand. For example, Digit;digit.ico;konqueror www.thinkdigit.com will add an entry Digit that will load www.thinkdigit.com in Konqueror. Unnecessary entries can be deleted from the list by removing a line. Save the file and close the launcher by right‐clicking on the system tray icon and then Exit. In the Launcher folder, run menu.exe again to restart the menu and you will now see the changes in place.
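Following that format, a small menu.txt might look like this (the extra entries are invented for illustration, and the lone '—' line appears to draw the separator):

    Firefox;firefox.ico;firefox
    —
    Digit;digit.ico;konqueror www.thinkdigit.com
    KWord;kword.ico;kword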
Crash Route !
Fact of life: when hard drives crash, they usually do so without warning. What results is a headache and, more importantly, heartache. But the data is usually not wiped clean from the crashed hard drive; rather, the file allocation table—which contains filenames and points to data on the drive—is what usually gets damaged. It is therefore technically possible to recover data in such situations; you only need to know the tools of the trade. The tool we're talking about is PC Inspector File Recovery, freeware that can be downloaded from www.pcinspector.de/file_recovery/UK/welcome.htm.
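Why is recovery possible at all? Because losing the allocation table leaves the file bytes themselves on disk, where they can still be found by their signatures. The toy "carving" sketch below makes the point; the image name is an assumption, and unlike real recovery tools it ignores fragmentation entirely:

    # Carve candidate JPEGs out of a raw disk image by their markers.
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"  # JPEG start/end signatures

    data = open("disk.img", "rb").read()  # assumed raw image of the drive
    pos, count = 0, 0
    while (start := data.find(SOI, pos)) != -1:
        end = data.find(EOI, start)
        if end == -1:
            break
        with open(f"recovered_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])
        count += 1
        pos = end + 2
    print(f"carved {count} candidate JPEGs")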
If you've lost data on a partition on a drive other than C, just install the program in Windows. If your entire hard drive has crashed and you're unable to boot, install the program on another computer and attach your hard drive as a secondary drive on that computer.

Step 1. Launch The Program

Launch PC Inspector File Recovery, choose English as your language, and click on the green tick button. Choose what kind of data you wish to recover by clicking on one of the three buttons to the left. If you choose Lost Data, go directly to Step 6; if you choose Lost Drive, go directly to Step 8. If it is deleted data that you wish to recover, click on the icon at the top left.
Step 2. Select The Drive
The program will scan your hard drive for some time for available partitions, after which it will display them. You will most likely be able to select the drive from the Logical Drives tab.
It may sometimes be listed twice; select the entry containing the drive’s letter. To verify the drive’s content, click the Preview button. Click on the tick button to continue to the next step.
Step 3. Sniffing Out Deleted Files
The selected drive will now be scanned; this may take a few minutes. A Deleted folder will be displayed (with the Recycle Bin icon). You need to go through this folder to locate the files you wish to recover. You might find that the original filename has not been preserved. You can opt to search for the file in the following way: choose Object > Find, enter the file type, and click on the green tick to search.

Step 4. Recovering The File

A list of matching files will appear after the search process ends. If the file you're searching for does not appear, you can use the Size and Date Modified columns to help identify it (assuming you know these details). Click on the top of these columns to arrange them accordingly, to make it easier to locate the file. Select the possible candidates by clicking on them while keeping [Ctrl] pressed. Then right‐click on them and choose Save to. Choose a location.

Step 5. Check And Rename

After they are restored, you should try to open the files to see if they are indeed the files you were looking for. After confirming this, you can rename them to what they were and copy them to the original location later. If the files are not the ones you were searching for, return to PC Inspector File Recovery and try to search for them again.

Step 6. Find Your Lost Data

Sometimes data is lost due to a quick format or a system or program crash. In such cases, you should choose the Lost Data button (the middle button on the left) in the main screen of PC Inspector File Recovery. In this case, data is retrieved in a way similar to deleted files. Select the drive from the Logical Drives list and click on the tick button when the Select Cluster Range dialog appears. The process of identification of files to be retrieved begins; this will take some time.
Step 7. Retrieve The Lost Data
You will see that PC Inspector File Recovery has found hundreds of “lost” files—many will be fragments. It will take quite a bit of your time to go through them all, but it is worthwhile because you have a chance of getting back precious files. Repeat steps 3 to 5 to find your file and check its integrity.
Step 8. Find The Lost Drive
If your partition table has been damaged, you may no longer be able to view any drive from the affected hard drive in the Logical Drives list. You’ll have to manually search for it.
Click on the Physical drive tab from the Select Drive dialog, and select your hard drive—usually named fixed disk #1. Click on Find logical drives.
Step 9. Search Within Clusters
You can choose to scan the entire drive for lost drives, or if you have some idea about where the partition was physically located on the disk, you can move the sliders to concentrate the search on that particular area. Now click on the tick button and wait for the search to complete, after which you can select the logical drive to recover your files and folders (as explained in the previous steps).
YOUR CREDIT CARD CAN BE CLONED
SWAPNIL S. HEDAU, VII SEM I.T.
HAVE you stopped using cash for most of your weekend mall purchases and extravagant dinners at restaurants? And flash your credit or debit card every time you have to pay a bill? If yes, then keep an eye on the shopkeeper who is swiping it, because he can clone your card. This is done by swiping the card on a device called a skimmer, which captures the information stored on the magnetic strip of the card.
• WHAT EXACTLY IS SKIMMING? Skimming is a process whereby a person creates a cloned version of the card.
This clone can be created either by using leaked card data or by swiping the card through a device called a skimmer, which captures the data on the magnetic strip of the card. The data is then transferred to expired or blank cards, and you have a cloned credit card ready.
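To see why the magnetic strip is so easy to copy, consider what Track 2 of the stripe holds. Its layout is public (ISO 7813), so a few lines of Python can decode it; the sample below uses a well-known test number, not real card data. Unlike an EMV chip, the raw stripe carries everything a clone needs:

    # Decode the Track 2 record a skimmer captures (sample/test data only).
    def parse_track2(track: str) -> dict:
        body = track.strip(";?")  # ';' starts the track, '?' ends it
        pan, rest = body.split("=")
        return {
            "card_number": pan,                    # primary account number (PAN)
            "expiry": f"{rest[2:4]}/{rest[0:2]}",  # stored on the stripe as YYMM
            "service_code": rest[4:7],
        }

    print(parse_track2(";4111111111111111=10121011234567890?"))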
Keeping your card always in your view is the only way to protect yourself. "In such cases technology can do very little. It is people who have to be careful and it's important that you remain alert," says Amuleek Bijral, country manager, India, for RSA, a security solution provider.
The world over, banks are now issuing EMV (Europay, MasterCard and Visa) cards to curb skimming. "In EMV chip cards, even though the magnetic strip exists, the validation happens through the chip. When the card is inserted in the Electronic Data Capture (EDC) terminal there is a validation check done, and if it is proved that the card is not genuine the user is protected," says Nemvarkar of Axis Bank. "Tomorrow, if there is a skimming‐related dispute, the card holder is protected and the fraud is the liability of the merchant," he adds. Several European countries and Japan have moved to EMV technology already.
• HOW IS A CARD CLONED? Cards can be cloned by swiping them on a device called a skimmer, or just by using leaked information. This device captures the information stored on the magnetic strip of your card. This information is transferred to another card to create a fake.
• HOW TO PROTECT YOURSELF? Never let your card out of your eyesight. Make sure the merchant swipes it on a bank machine, not something else – it could be a skimmer. Never throw old receipts and bank statements in public bins.
Ensure that the part mentioning your card and account numbers on your statement is destroyed before you dispose of it.
Intel’s Atom
Sudhanshu.P.Gawande, VII IT,72
The world is poised for a boom in cheap, portable computers. They might not be for everyone, but they will change the way you think about and use mobile and desktop devices in the coming years.
Intel CEO Paul Otellini has called it “the most important processor innovation in the last 40 years”. Compared to the processors we’re used to seeing, the Intel Atom has low power consumption and a radically low cost. It follows a completely new approach, offering impressive features such as a power requirement of only 0.65 to 2.4 watts, less than a tenth of today’s most common mobile processors, which typically use up to 35 watts. The Atom is basically a slimmer, more power-efficient and cheaper Pentium M (the basis of today’s Core series CPUs). It produces so little waste heat that it can go without a cooling fan. If it is used together with the equally new System Controller Hub (SCH) chipset, the equipment can be called “Centrino Atom”. The SCH supports two PCI Express slots (x1), eight USB 2.0 ports, DDR2 RAM at 400/533 MHz, and an IDE port, as well as a 3D engine from PowerVR much like the one already available in the iPhone.
Inside the Atom
The Atom processor is different from the regular CPUs we’re used to mainly because it can process commands only one after another, i.e. “In Order”. It therefore cannot cope with very demanding tasks, and unsurprisingly, in spite of running at 1.6 GHz, the N270 model performs at par with an 800 MHz mobile CPU of today. All other modern CPUs
are faster per megahertz because they can process their commands “Out of Order” if this helps run them faster. What works in favor of the Atom is that this approach can save power in an unprecedented way. With a maximum 2.5-watt thermal envelope, the N270 helps devices achieve longer running times than even ultra-low voltage Celerons (31 watts), which are the standard for today’s ultraportables. Intel has matched the N270 with the extremely outdated but cheap i945 chipset. This chipset has one of the weakest graphics subsystems possible, but this too helps in saving power. It can support DDR2 RAM, solid-state disks (SSDs) and a 7–10-inch TFT display.
MIDs will be the smallest web browsing interface
The Z Series Atom CPUs are aimed at MIDs (“Mobile Internet Devices”), pocketable devices with a diagonal display size of about 6 inches. These will not only enable browsing; you can also listen to MP3s, watch videos, make phone calls, navigate or play games. But it might take some time before this “pocket internet for everyone” arrives. Even though the prices of these devices should be low because of the cheap Atom processors, MIDs’ screens are thought to still be too small for comfortable internet browsing, at least at a level that would compare with desktop PCs or laptops. Above all, Windows Vista is too overloaded for pocket devices.
So Microsoft will have to offer a custom OS for such devices, somewhat more powerful than Windows Mobile, but not as resource-heavy as Vista. An unnamed source at Intel even once remarked that “the first real MID which actually makes reasonable Internet browsing possible is the iPhone”. However, there are some promising Linux user interfaces. Atom processors will probably only help handheld devices take off in the second generation. For 2009/2010, Intel has a new, smaller and even more power-saving Atom platform under development, codenamed “Moorestown”. This will combine the CPU with the chipset and a wireless communications module on a circuit board, making it highly suitable for smartphones and MIDs. Intel Vice President Anand Chandrasekher even announced “Moorestown” with the famous quote from Apple chief Steve Jobs: “One more thing…”. The first Atom-powered mobile phone could easily become an iPhone killer.
Nettops & Netbooks will proliferate
A new wave of super cheap computers is already arriving. The N‐series Atom processors will grab all the attention in this space. They require a little more power (4 to 8 Watts) compared to the Z Series Atoms. They will still be cheap—and can be mounted on Mini‐ITX format motherboards which use the older Intel i945 or SiS chipsets. This makes them ideal for super‐cheap PCs or notebooks—a market segment that the Asus Eee PC entered (and some might say created) with great success.
Intel also offers an Atom processor called simply “230”, running at 1.6 GHz, for desktop PCs. The Atom 230 can function within a 4-watt thermal envelope and can run on a
mini motherboard along with an Intel 945 or SiS chipset. A tiny desktop PC like the Asus Eee Box or the Mac mini would be called a “Nettop” or “ULPC” (ultra low-cost PC).
Intel has named these budget devices “Nettops” and “Netbooks”, for low‐cost PCs and portables respectively, in order to distinguish them from regular Windows Vista PCs. Now, in addition to Asus, at least a dozen manufacturers have announced, demonstrated or already launched their low‐cost, low‐powered netbook designs. However, these devices will be more successful in developing countries—those who can afford standard computers will be much happier with the greater power.
Full‐sized notebooks and PCs might get dirt cheap
Starting late this year, Atom CPUs might displace even Celerons in ultra‐low cost but full‐sized laptops. These might be available for around Rs 12‐13,000. Thus, the Atom promises to become a great success, particularly in developing countries such as India and China where PC penetration is abysmal and Internet access remains out of the reach of the masses. A nice, light, Atom‐driven notebook with no cooling fans and a small amount of fixed storage space could really take markets by storm.
What about the competition?
Via Technologies has introduced a cheap CPU called Nano for the same segment. AMD is now expected to unveil its plans for low‐power, low‐cost CPUs in November this year. It would make sense for Intel to respond with a dual core Atom, which we might see quite soon. The demand for Netbooks and the enthusiasm surrounding them has been so high that Intel has already admitted that supply is not meeting demand—and most of the Atom Netbook designs we’ve seen haven’t even hit the market yet.
IEEE 802.11n
Sunny L. Telang, VII IT, 73
IEEE 802.11n is a proposed amendment to the IEEE 802.11‐2007 wireless networking standard to significantly improve network throughput over previous standards, such as 802.11b and 802.11g, with a significant increase in raw (PHY) data rate from 54 Mbit/s to a maximum of 600 Mbit/s. Most devices today support a PHY rate of 300 Mbit/s, with the use of 2 Spatial Streams at 40 MHz. Depending on the environment, this may translate into a user throughput (TCP/IP) of 100 Mbit/s.
802.11n is expected to be finalized in November 2009, although many "Draft N" products are already available.
IEEE 802.11n builds on previous 802.11 standards by adding multiple‐input multiple‐output (MIMO) and Channel‐bonding/40 MHz operation to the physical (PHY) layer, and frame aggregation to the MAC layer.
MIMO uses multiple transmitter and receiver antennas to improve the system performance. MIMO is a technology which uses multiple antennas to coherently resolve more information than possible using a single antenna. Two important benefits it provides to 802.11n are antenna diversity and spatial multiplexing.
MIMO technology relies on multipath signals. Multipath signals are the reflected signals arriving at the receiver some time after the line of sight (LOS) signal transmission has been received. In a non‐MIMO based 802.11a/b/g network, multipath signals were perceived as interference degrading a receiver's ability to recover the message information in the signal. MIMO uses the multipath signal's diversity to increase a receiver's ability to recover the message information from the signal.
Another ability MIMO technology provides is Spatial Division Multiplexing (SDM). SDM spatially multiplexes multiple independent data streams, transferred simultaneously within one spectral channel of bandwidth. MIMO SDM can significantly increase data throughput as the number of resolved spatial data streams is increased. Each spatial stream requires a discrete antenna at both the transmitter and the receiver. In addition, MIMO technology requires a separate radio frequency chain and analog‐to‐digital converter for each MIMO antenna which translates to higher implementation costs compared to non‐MIMO systems.
Channel Bonding, also known as 40 MHz, is a second technology incorporated into 802.11n which can simultaneously use two separate non‐overlapping channels to transmit data. Channel bonding increases the amount of data that can be transmitted. 40 MHz mode of operation uses 2 adjacent 20 MHz bands. This allows direct doubling of the PHY data rate from a single 20 MHz channel. (Note however that the MAC and user level throughput will not double.)
Coupling MIMO architecture with wider bandwidth channels offers the opportunity of creating very powerful yet cost‐effective approaches for increasing the physical transfer rate.
Backward compatibility
When 802.11g was released to share the band with existing 802.11b devices, it provided ways of ensuring coexistence between the legacy and the new devices. 802.11n extends the coexistence management to protect its transmissions from legacy devices, which include 802.11g, 802.11b and 802.11a.
Even with protection, large discrepancies can exist between the throughput an 802.11n device can achieve in a Greenfield network and what it can achieve in a mixed-mode network, when legacy devices are present. This is an extension of the 802.11b/802.11g coexistence problem.
Deployment Strategies
To achieve maximum throughput a pure 802.11n 5 GHz network is recommended. The 5 GHz band has substantial capacity due to many non‐overlapping radio channels and less radio interference as compared to the 2.4 GHz band. An all‐802.11n network may be impractical, however, as existing laptops generally have 802.11b/g radios which must be replaced if they are to operate on the network. Consequently, it may be more practical to operate a mixed 802.11b/g/n network until 802.11n hardware becomes more prevalent. In a mixed‐mode system, it’s generally best to utilize a dual‐radio access point and place the 802.11b/g traffic on the 2.4 GHz radio and the 802.11n traffic on the 5 GHz radio.
Wi‐Fi Alliance
As of mid‐2007, the Wi‐Fi Alliance has started certifying products based on IEEE 802.11n Draft 2.0. This certification program established a set of features and a level of interoperability across vendors supporting those features, thus providing one definition of 'draft n'. The Baseline certification covers both 20 MHz and 40 MHz wide channels, and up to two spatial streams, for maximum throughputs of 144.4 Mbit/s for 20 MHz and 300 Mbit/s for 40 MHz (with Short Guard interval). A number of vendors in both the consumer and enterprise spaces have built products that have achieved this certification. The Wi‐Fi Alliance certification program subsumed the previous industry consortium efforts to define 802.11n, such as the now dormant Enhanced Wireless Consortium (EWC). The Wi‐Fi Alliance is investigating further work on certification of additional features of 802.11n not covered by the Baseline certification.
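To see where figures such as 144.4 and 300 Mbit/s come from, the PHY rate can be estimated from the modulation parameters. The sketch below is a back-of-the-envelope calculation, not part of the standard text; the subcarrier counts, coding rate and symbol time used are the commonly quoted values for the highest 802.11n modulation (64-QAM, rate 5/6) with the short guard interval.

```java
// Rough 802.11n PHY rate estimate:
// rate = dataSubcarriers * bitsPerSymbol * codingRate * spatialStreams / symbolTime
public class Dot11nRate {
    static double phyRateMbps(int dataSubcarriers, int bitsPerSymbol,
                              double codingRate, int spatialStreams) {
        double symbolTimeUs = 3.6; // short guard interval; 4.0 with the normal GI
        return dataSubcarriers * bitsPerSymbol * codingRate * spatialStreams / symbolTimeUs;
    }

    public static void main(String[] args) {
        // 20 MHz channel: 52 data subcarriers, 64-QAM (6 bits), rate-5/6 coding
        System.out.println(phyRateMbps(52, 6, 5.0 / 6, 2));   // ~144.4 Mbit/s
        // 40 MHz bonded channel: 108 data subcarriers
        System.out.println(phyRateMbps(108, 6, 5.0 / 6, 2));  // 300.0 Mbit/s
        // the 600 Mbit/s maximum assumes four spatial streams at 40 MHz
        System.out.println(phyRateMbps(108, 6, 5.0 / 6, 4));  // 600.0 Mbit/s
    }
}
```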
Wireless local area network standards
802.11 protocol   Release   Freq. (GHz)   Typ. throughput (Mbit/s)   Max net bit rate (Mbit/s)   Modulation   Range indoor (m)   Range outdoor (m)
–                 1997      2.4           0.9                        2                           –            ~20                ~100
a                 1999      5             23                         54                          OFDM         ~35                ~120
b                 1999      2.4           4.3                        11                          DSSS         ~38                ~140
g                 2003      2.4           19                         54                          OFDM         ~38                ~140
n                 2008      2.4, 5        74                         248                         OFDM         ~70                ~250
CDMA AND AGRICULTURE
SWAPNIL S. HEDAU, VII SEM I.T.

INDIA is basically an agricultural country. Almost 90 percent of the total population depends on agriculture for its survival, almost 80 percent of the total land area consists of villages, and almost 60 percent of the total population is still below the poverty line. The economic condition of India, in short, is poor, and because of this it is risky to implement a new technology on a purely experimental basis. Implementing technology with the help of satellites, or systems of that kind, requires large investments. But a suitable replacement can be thought of in this situation: implementing CDMA technology in agriculture.

Now the question arises: what is CDMA and how can it be implemented? CDMA stands for Code Division Multiple Access. This technology has recently proved to be a boon in the field of mobile communication. It needs no satellites, it provides a largely congestion-free network, and it is efficient at transferring video streams over large distances. CDMA builds on the ideas behind the earlier multiple-access schemes TDMA and FDMA, replacing their time- and frequency-sharing with code-based sharing of a single channel. The main advantages of this technology are that it is comparatively low in cost and that many users can communicate at the same time.

In agriculture, implementing this technology can prove a boon for India: installation costs are low, farmers can be provided with the details required to improve the quality of their crops, and a farmer can contact the required specialist at any time for guidance. Thus the implementation of this technology in the field of agriculture can revolutionize the backbone of India. And once the backbone gets strong, India will automatically cover the distance to the peak of success.
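To make the "many users on one channel" idea concrete, here is a minimal, hypothetical sketch of direct-sequence spreading with orthogonal codes, the mechanism underlying CDMA. The codes and data below are invented purely for illustration.

```java
// Toy CDMA example: two users transmit simultaneously on the same channel.
// Each user's bit (+1/-1) is multiplied by an orthogonal spreading code;
// the receiver recovers a user's bit by correlating with that user's code.
public class CdmaDemo {
    static final int[] CODE_A = { +1, +1, +1, +1 };  // orthogonal Walsh codes
    static final int[] CODE_B = { +1, -1, +1, -1 };

    static int[] spread(int bit, int[] code) {
        int[] chips = new int[code.length];
        for (int i = 0; i < code.length; i++) chips[i] = bit * code[i];
        return chips;
    }

    static int despread(int[] channel, int[] code) {
        int sum = 0;
        for (int i = 0; i < code.length; i++) sum += channel[i] * code[i];
        return sum > 0 ? +1 : -1;  // sign of the correlation gives the bit back
    }

    public static void main(String[] args) {
        int bitA = +1, bitB = -1;
        int[] a = spread(bitA, CODE_A), b = spread(bitB, CODE_B);
        int[] channel = new int[CODE_A.length];      // both signals add on air
        for (int i = 0; i < channel.length; i++) channel[i] = a[i] + b[i];
        System.out.println("User A sent: " + despread(channel, CODE_A)); // +1
        System.out.println("User B sent: " + despread(channel, CODE_B)); // -1
    }
}
```

Because the codes are orthogonal, each receiver's correlation cancels the other user's contribution exactly, which is why many users can share one frequency at once.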
IS WINDOWS LOSING OUT AGAINST LINUX?
Roshan Makhe, 38, VII IT
The penguin’s come of age. What began as a battle between proprietary and open source Linux software, started by geeks around the world, isn’t plain tech rhetoric anymore. It’s now a mainstream commercial platform — a technology that enterprises are taking very seriously and looking at as a major cost‐effective solution that has scalability and a great future roadmap.
A free software that can be downloaded from the Web, Linux has a source code that’s open and therefore available for anyone to use, modify, and redistribute freely. Proprietary Unix and Windows operating systems aren’t available for such tweaking.
With the movement getting the support of IT biggies such as IBM, Oracle and Hewlett-Packard, which have devoted many of their engineers to the open source movement, enterprises are now showing confidence in adopting Linux. It’s no longer just about getting your software free; in India the dominant Linux brands are Red Hat and Suse from Novell.
But companies are ready to pay. “You know, it’s not really about getting your software free — it’s about getting software that’s secure and robust... it’s about a system on which their applications will run well,” says Manojit Majumdar, country leader, academic initiatives and Linux, IBM. That’s the line Linux vendors are taking, making the most of the fact that Windows systems have the attention of hackers across the world and are often prone to virus attacks.
Meanwhile, enrolment for Linux-based courses is increasing, governments around the world are pushing for Linux, and more and more tech companies are modifying their solutions to run on Linux. It’s a movement that’s sweeping backend operations, but you’re unlikely to notice it since the desktop is still dominated by Microsoft Windows. Chances are that many of the servers right in your own office are running on Linux without your ever knowing.
“We now see Linux moving to mission-critical applications — we see a lot of adoption in sectors such as banking, financial services, government and large corporations,” says a senior official of Red Hat, the leading Linux distributor in India. Some of the major Linux implementations in recent times took place at LIC, UTI, Central Bank of India, Canara
Bank, KBC’s SMS systems and various Airtel applications. What’s more, state governments in Kerala, Chhattisgarh and West Bengal are looking at large-scale adoption of Linux to bring down technology costs.
The Kerala government has, in fact, announced that the state will be a FOSS (free and open source software) destination and has introduced Linux in 12,500 schools in the state.
Meanwhile, in West Bengal too, Linux is seen as a cost-saving solution in many e-governance projects. Other state governments, such as those of Tamil Nadu and Karnataka, also have various projects running on Linux.
That doesn’t mean Microsoft’s Windows operating system is losing out. “The pie is getting bigger, and the reason we’re collaborating with Novell is because many of our clients have servers running on different operating systems. With virtualisation becoming a big trend in tech adoption, we’ll see more of that. And we’re there to solve such issues for our customers,” says Radhesh Balakrishnan, director, competitive strategy, Microsoft. The fact is that a large enterprise will have multiple operating systems and different applications that run on them.
In India, Unix’s share actually increased by almost one percentage point from 2005 to 2006, resulting in a decline in Linux’s share. “That was due to strong growth in spending across the telecom and financial services space for core-processing workloads,” said Mr Arora of IDC.
A learning experience
At Aptech too, there are indicators that the industry is looking for more Linux experts. “Since a large number of organizations are adopting Linux, the requirement for professionals is increasing accordingly,” says D. Venkat, national academic head, Aptech & SSi Education. Manish Bahl, senior analyst, Springboard Research, corroborates that, explaining that it’s India’s major IT companies, such as Infosys and TCS, that are recruiting Linux professionals in a big way.
But clearly there’s room for coexistence of both proprietary and open source. Says Mr Arora of IDC: “There are some areas such as web workloads, firewalls and high performance computing, where Linux has a strong presence. Windows, on the other hand, has a sound position in business processing, CRM (customer relationship management), messaging, collaboration, etc.” But don’t be surprised if Microsoft has been racking its collective brainpower to get into these areas too. “We are working with SGI to get into the high performance computing space too,” says Balakrishnan.
Experts pooh-pooh that. In the case of high performance computing, or HPC, most of the world’s top 500 supercomputers run on Linux. “High performance computing is done by extremely technologically-savvy people, who aren’t going to work on proprietary operating systems, and I’m not sure how Microsoft is going to address that. In fact, they’re the kind who will fine-tune the operating system to suit their needs, stripping off parts of the source code that aren’t necessary so that they don’t overload it with functions and features they don’t need,” says Mr Arora.
Linux is benefiting from migration that’s happening from other servers such as Unix and Netware. But then, so is Windows. Security and robustness are other factors that Linux vendors are selling. Reacting to Microsoft’s claims that it’s more secure than Linux, Mr Arora says: “In the case of Linux, the source code is open and there are thousands of developers around the world working on it. Any vulnerability is resolved immediately — that’s not the case with Windows. Which is why Microsoft is frequently putting out security updates, while in the case of Linux, updates are not quite as frequent.”
But it’s not as though Linux doesn’t have its challenges. For instance, though a company like IBM has 1,000 of its workforce devoted to Linux, not all of its software runs on Linux yet: its high-end enterprise database solution DB2, for instance, as well as its WebSphere Application Server. “We have over 400 software products available on Linux, including Eclipse to write code in Java,” says Manojit Majumdar, who heads the Linux business of IBM, adding that customers could always check the company website to see if the application they planned to run would be compatible with Red Hat’s version of Linux or Novell’s Suse version.
Microsoft is at an advantage here — it can boast of an ecosystem (comprising applications, software developers trained on its software, training programmes) that the Linux vendors cannot hope to match yet. “But what we tell our customers is that if you want to scale up, Linux would be a more cost‐effective proposition. Besides, the fact that it’s more secure,” says Mr Majumdar.
However, Linux vendors don’t see that as a huge challenge. “More than challenges, there are opportunities for open source that have to be truly projected, and we’re working with governments to achieve that,” say Red Hat officials.
The deal will also see the two companies getting into some joint marketing. There has been some foreboding about the collaboration, but experts say that this just may be the beginning of a new scenario — one in which each Linux vendor can say: “My Linux is better than yours.”
Enhance Urself !! A Special Series by Faculty.
Software Testing Glossary
Prof. Milind S. Deshkar
A

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing:
• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
B

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases". (See the sketch after this section.)
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
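As an illustration of boundary value analysis, the JUnit-style sketch below tests a hypothetical validator that accepts marks between 0 and 100 inclusive; the class and method names are invented for this example.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical code under test: accepts marks in the range 0..100 inclusive.
class MarksValidator {
    static boolean isValid(int marks) {
        return marks >= 0 && marks <= 100;
    }
}

public class MarksValidatorBoundaryTest {
    @Test
    public void valuesAtAndAroundTheBoundaries() {
        assertFalse(MarksValidator.isValid(-1));   // just outside lower bound
        assertTrue(MarksValidator.isValid(0));     // lower bound
        assertTrue(MarksValidator.isValid(1));     // just inside lower bound
        assertTrue(MarksValidator.isValid(99));    // just inside upper bound
        assertTrue(MarksValidator.isValid(100));   // upper bound
        assertFalse(MarksValidator.isValid(101));  // just outside upper bound
    }
}
```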
C

CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing. (See the worked example after this section.)
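Here is a quick worked example of cyclomatic complexity using an invented method: counting decision points and adding one gives the minimum number of basis paths to test.

```java
public class ComplexityExample {
    // Decision points: two if statements and one loop condition = 3.
    // Cyclomatic complexity V(G) = decisions + 1 = 4, so basis path testing
    // needs at least 4 test cases to cover the independent paths
    // (null input, empty array, all-negative values, some positive values).
    static int sumOfPositivesOrFlag(int[] values) {
        if (values == null) return -1;   // decision 1
        int sum = 0;
        for (int v : values) {           // decision 2 (loop condition)
            if (v > 0) sum += v;         // decision 3
        }
        return sum;
    }
}
```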
D

Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or to the functional/program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
E

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
F

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing:
• Testing the features and operational behavior of a product to ensure they correspond to its specifications.
• Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
G

Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
H

High Order Tests: Black-box tests conducted once the software has been integrated.
I

Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs and configures correctly in its intended environments and is operational afterwards.
L

Localization Testing: Testing of software that has been adapted for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.
M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.
Monkey Testing: Testing a system or an application on the fly, i.e. with just a few tests here and there, to ensure the system or application does not crash.
Mutation Testing: Testing in which bugs are purposely added to the application to check whether the existing tests detect them.
N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
P

Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
R

Race Condition: A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. (See the sketch after this section.)
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version which contains the desired functionality of the final version but which needs to be tested for bugs (which ideally should be removed before the final version is released).
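The race condition described above can be demonstrated in a few lines of Java: two threads write to a shared counter with nothing moderating simultaneous access, so updates are lost. This sketch is illustrative only.

```java
public class RaceDemo {
    static int counter = 0;  // shared resource, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter++;  // read-modify-write is not atomic
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Usually prints less than 200000 because interleaved increments get lost.
        // Wrapping the increment in a synchronized block (or using
        // java.util.concurrent.atomic.AtomicInteger) removes the race.
        System.out.println(counter);
    }
}
```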
S

Sanity Testing: A brief test of the major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing: Performance testing focused on ensuring that the application under test gracefully handles increases in workload.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would
be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies that the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
T

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
• The process of exercising software to verify that it satisfies specified requirements and to detect errors.
• The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE STD 829).
• The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
• Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code. (See the sketch at the end of this section.)
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref. IEEE STD 829.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
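To illustrate the test-first style described under Test Driven Development above, here is a hedged sketch: the test is written before the production code and fails, and then the simplest implementation is written to make it pass. All class and method names are invented.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the test first; it fails until WordCounter exists and is correct.
public class WordCounterTest {
    @Test
    public void countsWhitespaceSeparatedWords() {
        assertEquals(0, WordCounter.count(""));
        assertEquals(1, WordCounter.count("hibernate"));
        assertEquals(3, WordCounter.count("red hat linux"));
    }
}

// Step 2: write the simplest production code that makes the test pass.
class WordCounter {
    static int count(String text) {
        String trimmed = text.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }
}
```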
U

Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing: Testing of individual software components.
V

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
W

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Software Testing Types
• COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
• CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.
• FUNCTIONAL TESTING. Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Testing can be performed on an automated or manual basis using black box or white box methodologies.
• LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.
• PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
• REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.
• SMOKE TESTING. A quick‐and‐dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
• STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non‐catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
• UNIT TESTING. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
HIBERNATE
Prof. Ujjwalla H. Gawande

What is Object/Relational Mapping (ORM)?

• Object/relational mapping is the automated (and transparent) persistence of objects in a Java application to the tables in a relational database, using metadata that describes the mapping between the objects and the database.
• ORM works by (reversibly) transforming data from one representation to another.

ORM Solution

An ORM solution consists of the following four pieces:

• An API for performing basic CRUD operations on objects of persistent classes
• A language or API for specifying queries that refer to classes and properties of classes
• A facility for specifying mapping metadata
• A technique for the ORM implementation to interact with transactional objects to perform dirty checking, lazy association fetching, and other optimization functions
Levels of ORM quality
• Pure Relational ‐ The whole application is designed around the relational model and SQL‐based relational operations. Suitable solution for simple applications where a low level of code reuse is tolerable.
• Light Object Mapping ‐ Entities are represented as classes that are mapped manually to the relational tables. Hand‐coded SQL/JDBC is hidden from the business logic using well known design patterns.
• Medium Object Mapping ‐ The application is designed around an object model. SQL is generated at build time using a code generation tool, or at runtime by framework code. Associations between objects are supported by the persistence mechanism, and queries may be specified using an object‐oriented expression language. Objects are cached by the persistence layer.
• Full Object Mapping ‐ Full object mapping supports sophisticated object modeling: composition, inheritance, polymorphism, and “persistence by reachability.” The persistence layer implements transparent persistence; persistent classes do not inherit any special base class or have to implement a special interface.
Why ORM?
• Productivity
• Maintainability
• Performance
• Vendor Independence
What is Hibernate Framework?
• Hibernate is an object/relational mapping solution and a persistence management solution, i.e. a persistence layer.
• Hibernate provides a solution to map database tables to classes. It copies the database data into objects of a class; in the other direction, it supports saving objects to the database, transforming each object into rows in one or more tables.
• Saving data to storage is called persistence, and the copying of tables to objects and vice versa is called object/relational mapping.
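For instance, the mapping of a table to a class might look like the following sketch, assuming Hibernate is used with JPA-style annotations; the entity name and columns are invented for illustration.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Maps the hypothetical STUDENT table to a plain Java class.
// Hibernate copies rows into Student objects and saves Student objects back as rows.
@Entity
@Table(name = "STUDENT")
public class Student {
    @Id
    @GeneratedValue
    private Long id;        // primary key column, generated by the database

    private String name;    // maps to a NAME column by default

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```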
The Hibernate interfaces may be classified as follows:
• Interfaces called by applications to perform basic CRUD and querying operations. These are the main point of dependency of application business/control logic on Hibernate. They include Session, Transaction, and Query.
• Interfaces called by application infrastructure code to configure Hibernate, most importantly the Configuration class.
• Callback interfaces that allow the application to react to events occurring inside Hibernate, such as Interceptor, Lifecycle, and Validatable.
• Interfaces that allow extension of Hibernate’s powerful mapping functionality, such as UserType, CompositeUserType, and IdentifierGenerator. These interfaces are implemented by application infrastructure code.
Core Interfaces of Hibernate
The five core interfaces are used in just about every Hibernate application. Using these interfaces, you can store and retrieve persistent objects and control transactions.
• Session Interface
• SessionFactory Interface
• Configuration Interface
• Transaction Interface
• Query and Criteria Interface
Session Interface
• The Session interface is the primary interface used by Hibernate applications.
• An instance of Session is lightweight and inexpensive to create and destroy, which is why one can be created on every request.
• Hibernate sessions are not threadsafe and should by design be used by only one thread at a time.
• The Hibernate notion of a session is something between connection and transaction: a cache or collection of loaded objects relating to a single unit of work.
• Hibernate can detect changes to the objects in this unit of work.
• The Session is also called the persistence manager because it is also the interface for persistence-related operations such as storing and retrieving objects.
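A minimal sketch of the Session acting as persistence manager, reusing the hypothetical Student entity from above (the SessionFactory bootstrap is shown under the Configuration interface below):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SessionExample {
    // One short-lived Session per unit of work.
    static void saveAndReload(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        try {
            session.beginTransaction();
            Student s = new Student();
            s.setName("Swapnil");
            Long id = (Long) session.save(s);                          // INSERT
            Student loaded = (Student) session.get(Student.class, id); // from the session cache here
            System.out.println(loaded.getName());
            session.getTransaction().commit();
        } finally {
            session.close();  // sessions are cheap to create and discard
        }
    }
}
```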
SessionFactory Interface
• The application obtains Session instances from a SessionFactory.
• The SessionFactory is not lightweight. It is intended to be shared among many application threads.
• There is typically a single SessionFactory for the whole application, created during application initialization. If your application accesses multiple databases using Hibernate, you will need a SessionFactory for each database.
• The SessionFactory caches generated SQL statements and other mapping metadata that Hibernate uses at runtime. It also holds cached data that has been read in one unit of work and may be reused in a future unit of work (only if class and collection mappings specify that this second-level cache is desirable).
Configuration Interface
• The Configuration object is the first object you use when starting Hibernate.
• It is used to configure and bootstrap Hibernate. The application uses a Configuration instance to specify the location of mapping documents and Hibernate-specific properties and then creates the SessionFactory.
• The Configuration interface plays a relatively small part in the total scope of a Hibernate application.
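The classic bootstrap idiom, sketched below, builds the single SessionFactory once at startup from a hibernate.cfg.xml on the classpath. The helper class name is just a common convention, not a Hibernate API.

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Conventional helper: one SessionFactory for the whole application.
public class HibernateUtil {
    private static final SessionFactory SESSION_FACTORY =
            new Configuration()
                    .configure()            // reads hibernate.cfg.xml from the classpath
                    .buildSessionFactory(); // expensive; built once at startup

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}
```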
Transaction Interface
• The Transaction interface is an optional API.
• Hibernate applications may choose not to use this interface, instead managing transactions in their own infrastructure code.
• A Transaction abstracts application code from the underlying transaction implementation, which might be a JDBC transaction, a JTA UserTransaction, or even a Common Object Request Broker Architecture (CORBA) transaction, allowing the application to control transaction boundaries via a consistent API.
• This helps to keep Hibernate applications portable between different kinds of execution environments and containers.
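In code, the consistent transaction API looks like this hedged sketch, again using the hypothetical Student entity from earlier:

```java
import org.hibernate.Session;
import org.hibernate.Transaction;

public class TransactionExample {
    static void updateName(Session session, Long studentId, String newName) {
        Transaction tx = session.beginTransaction();
        try {
            Student s = (Student) session.get(Student.class, studentId);
            s.setName(newName);   // change is detected by the Session (dirty checking)
            tx.commit();          // flushes the UPDATE and commits
        } catch (RuntimeException e) {
            tx.rollback();        // leave the database unchanged on failure
            throw e;
        }
    }
}
```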
Query and Criteria Interface
• The Query interface allows you to perform queries against the database and control how the query is executed.
• Queries are written in HQL or in the native SQL dialect of your database.
• A Query instance is used to bind query parameters, limit the number of results returned by the query, and finally to execute the query.
• The Criteria interface is very similar; it allows you to create and execute object-oriented criteria queries.
• A Query instance is lightweight and cannot be used outside the Session that created it.
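The same lookup expressed both ways, as a hedged sketch against the hypothetical Student entity: an HQL Query with a named parameter, and an equivalent Criteria query.

```java
import java.util.List;
import org.hibernate.Criteria;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class QueryExamples {
    @SuppressWarnings("unchecked")
    static List<Student> byNameHql(Session session, String name) {
        Query q = session.createQuery("from Student s where s.name = :name");
        q.setParameter("name", name);  // bind the named parameter
        q.setMaxResults(10);           // limit the number of results returned
        return q.list();
    }

    @SuppressWarnings("unchecked")
    static List<Student> byNameCriteria(Session session, String name) {
        Criteria c = session.createCriteria(Student.class);
        c.add(Restrictions.eq("name", name));  // object-oriented restriction
        return c.list();
    }
}
```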
Callback Interfaces
• Callback interfaces allow the application to receive a notification when something interesting happens to an object—for example, when an object is loaded, saved, or deleted.
• Lifecycle Interface
• Validatable Interface
• Interceptor Interface
Type Interface
• A Hibernate Type object maps a Java type to a database column type.
• All persistent properties of persistent classes, including associations, have a corresponding Hibernate type.
• The built-in types cover all Java primitives and many JDK classes, including types for java.util.Currency, java.util.Calendar, byte[], and java.io.Serializable, and Hibernate even supports user-defined custom types.
• UserType and CompositeUserType are the interfaces provided to allow us to add our own types.