OnScience
A Project Report submitted in partial fulfillment of the requirements for the degree of
Bachelor of Technology
Submitted by
Amrit Ravi (06BIF008)
Vellore Institute of Technology University
Vellore- 632014, TN, India
[DEC 01 - MAY 17(2009-2010)]
ACKNOWLEDGEMENT
Several people have been instrumental in allowing this project to be completed. First, we
wish to thank Mr. G. Viswanathan, Chancellor, Vellore Institute of Technology University
for the excellent opportunities and facilities provided for undergraduate education.
We would also like to thank Dr. Lazar Mathew, Director of the School of Bio Sciences and
Technology (SBST), and Prof. G. Jayaraman (Divisional Head, Bioinformatics) for giving us
the opportunity to pursue this project outside VIT University and supporting us in every
aspect.
We are deeply indebted to Dr. Rao Sethumadhavan, School of Bio Sciences and
Technology (SBST), for being our internal guide. We appreciate him for providing a healthy
environment, which made working under his guidance interesting and easy. His
encouragement helped us throughout the research and the writing of this thesis. He also
guided us in editing our work and made sure that our performance was delivered at a
superior level.
We sincerely thank our Project Guide, Mr. Timothy Peterson, PhD fellow at the
MIT-Whitehead Institute, Massachusetts, USA (http://www.wi.mit.edu or,
http://www.whitehead.mit.edu), for finding us eligible, selecting us to be part of the
project, and believing in the capabilities we needed to carry out the task throughout.
It was a tremendous and extensive learning experience to work with a group that
includes talent from all over the globe.
Finally, we would like to thank the team members for creating an extremely friendly
environment for us and sharing every bit of their knowledge while working on the project.
It was great to work with adepts of multiple disciplines, and we look forward to working
with them in the future.
CONTENTS
Cover Page
Certificate
Certificate from Organization
Acknowledgement
Abstract
Abbreviations
Introduction
Literature Review
Resources and Methodologies
    Workflow
    Phase 1: Architecture and Schematics Designing
    Phase 2: Interface Front-End and Database Development
    Phase 3: Linking of the Modules
    Phase 4: Referencing and Testing
    Phase 5: Advertisement and Promotion
Result and Discussion
Future Aspects
References
The Team
Abstract
OnScience is a concept whose primary objective is to give millions of users across the globe
easy access to data, making information as abundantly available as possible through a highly
user-friendly interface. The concept is built around goals that are crucial to the life sciences in
particular and that could prove to be a major breakthrough. The idea is to provide a publishing
and granting network, along with information of every kind, to make novel biomedical research
findings widely accessible and reviewable in real time. Our goal as a publisher is to give
researchers autonomy in formatting their work and to expedite the publishing process. To
promote continued discoveries, we also aim to use the portal to connect granting/funding
organizations directly with researchers and scientists. Another aspect of OnScience is the
researcher rating system, where users can see the ratings of different scientists based on the
number and quality of their publications, taking into account the impact factors of the
publishers. Features also include polling on relevant topics, discussion forums, blogs, worldwide
science news updates, and ADTs.
OnScience will have both commercial and non-commercial sides. A certain amount of
information will be made available to users free of cost; for some specific data or content, the
user may have to pay a nominal amount through the e-commerce portal in order to use or
download it. The portal is designed to work on almost all major browsers in use around the
world, and it contains multiple mini-widgets that help users obtain information the way they
want. OnScience will also include an entertainment domain where users can access or download
music, videos, snapshots and other media types by paying the prescribed amount. The portal is
designed to cover almost everything a researcher would need in order to pursue a study or
scientific research, and the comfort of every class of user was carefully considered during the
development of the service.
Abbreviations
ADI: Architectural Development of Interface
ADTs: Auto Downloadable Tools
AJAX: Asynchronous JavaScript and XML
ALU: Arithmetic and Logic Unit
ANN: Artificial Neural Network
API: Application Programming Interface
ASP: Active Server Pages
AVI: Audio Video Interleave
BMP: Bitmap Format
CMS: Content Management System
CPU: Central Processing Unit
CSP: Client Side Programming
CSS: Cascading Style Sheets
CSSII: Client Support System
DBMS: Database Management System
DC: Data Curator
DHTML: Dynamic Hypertext Markup Language
DTD: Document Type Definition
E-CP: e-Commerce Platform
FDDU: Front-end and Database Development Unit
FLV: Flash Video Format
FTP: File Transfer Protocol
GIF: Graphics Interchange Format
GUI: Graphical User Interface
HPTs: High Precision Tools
HTML: Hypertext Markup Language
HTTP: Hypertext Transfer Protocol
HTTPS: Hypertext Transfer Protocol Secure
IE: Internet Explorer
IIS: Internet Information Server
IP: Internet Protocol
ISP: Internet Service Provider
ITT: Interface Testing Team
JPEG: Joint Photographic Experts Group
LAN: Local Area Network
LoM: Linking of Modules
MI: Module Integration
MIT: Massachusetts Institute of Technology
MPEG: Moving Picture Experts Group
OOP: Object Oriented Programming
ORM: Online Resource Management
PDF: Portable Document Format
PEAR: PHP Extension and Application Repository
PERL: Practical Extraction and Reporting Language
PHP: PHP: Hypertext Preprocessor
PNG: Portable Network Graphics
PRS: Peer Review System
PSD: Photoshop Document
QCM: Quality Control Meeting
QOT: Query Optimization Tools
RSM: Rating System Module
SEO: Search Engine Optimization
SGML: Standard Generalized Markup Language
SMTP: Simple Mail Transfer Protocol
SQL: Structured Query Language
SRT: Server Response Time
SSD: Skeleton and Schematics Designing
SSP: Server Side Programming
TQM: Total Quality Management
URL: Uniform Resource Locator
VDU: Visual Display Unit
VI: Validation and Integration
W3C: World Wide Web Consortium
WAI: Web Accessibility Initiative
WAMP: Windows, Apache, MySQL, PHP
WGO: Web Graphics Overview
XHTML: Extensible Hypertext Markup Language
XML: Extensible Markup Language
Introduction
The past few years have proven crucial in the makeover and advancement of the life
sciences, since several innovative steps have been taken. With the availability of much
higher precision tools (HPTs) and technologies, research labs can now perform huge loads
of tasks in a matter of hours. Informatics has traditionally been a discipline in which
mathematics and algorithm analysts of a high expertise level, computer scientists,
statisticians and engineers develop technologies for supporting information management
in fields like healthcare and data management. Thus, today the field of informatics
supports a broad spectrum of research that determines the biological significance of the
data, provides the expertise to organize it, and develops the practical computational tools
needed to mine the data for new information.
With all these considerations, and the need for a source of data spanning diverse domains
of information that could aid research in a broader sense, the concept of OnScience was
developed by Mr. Timothy Peterson (Tim), a PhD fellow from the Massachusetts Institute
of Technology and a senior researcher at the MIT-affiliated Whitehead Institute in Boston,
MA. The idea, as proposed, was a plan to bring all the resources, be they data, images,
plots, graphs, videos, tutorials or any other relevant information, together onto a single
central interface, so that researchers could easily grab all the information they want in
just a few clicks, under one roof. The project was planned well and tentatively divided
into five phases at the beginning: the first four phases involved architecture
development, interface module development, additional tools and annotation of data,
and cross-referencing of the output integrity, whereas the last phase dealt with marketing
and strategy planning for the interface, which proved to be a major deal since big names
like PubMed and Swiss-Prot were already in the market. The timeline of the project was
set at 15 months from the start of development, 12th December 2009.
The most important factor behind the successful implementation of the planned protocol
was the team working on the project. The team was well equipped, with members from
all around the globe in roles specific to their expertise: programmers, data curators,
interface developers, back-end developers, web programmers, biologists, and senior
researchers from the MIT-Whitehead Institute, Harvard University, the University of
Michigan and several other research laboratories. With the success of every phase of this
project, the team learnt a professional and realistic approach to high-end platform
development and its basic pros and cons. Once the product is launched, it will provide its
users an easy and efficient way to access enormous amounts of useful data, with the
assurance of having many media types on a single platform.
Literature Review
Every new invention is followed by many who try to improve on the existing facility, and
OnScience was planned with the same objective. The idea came up when Tim Peterson
and a few members of the MIT-Whitehead Institute planned to develop an interface from
which scientific information could be retrieved at a wide scale. To be more precise, the
idea was to put all possible data and information types together on a central server from
which users around the globe could use them. Given the current state of pre-existing
scientific encyclopedias, the idea was promising; Peterson realized that quickly and
moved fast to form a development team.
It is well known that any large-scale interface development requires the involvement of
numerous technologies in the form of interfaces and languages, and the same was true
during the different phases of this project. The team started with the architectural design
of the whole interface, using Cascading Style Sheets (CSS), which works alongside HTML
and is one of the most effective ways to design an intact, well-formed front-end layout.
PHP proved to be a major part of the coding segment: most of the modules were coded in
this language, with the team using PHP 5.0 and object-oriented frameworks like CakePHP
for several modules. References and reviews were taken from various sources, including
books and journals; these resources were crucial for manipulating and debugging scripts
at different stages of the project.
SQL concepts were used to send data to, and retrieve data from, the database. SQL, the
Structured Query Language, is widely used to access stored data from a database in
different formats as required, using query code. Multiple articles and journals were read
to accomplish the required objective. Major data exchange was facilitated using SQL
concepts at various stages of the development phase, and it was a unique experience for
the team to learn some of the recent advancements of this powerful language.
JavaScript was used to write some advanced client-side modular segments for the
interface; multiple small parts of the front-end were coded in this highly sought-after
language, and journals were consulted to troubleshoot certain complexities. AJAX, an
extension of JavaScript, was used to bridge a connection between the JavaScript
application and the server side: it exchanges data sets between the client side and the
server side and is considered one of the most in-demand techniques in the current coding
scenario. Visual Basic and .NET were an indispensable part of the coding schema and
were used at different stages of the project. With all the available resources and the full
coordination of the team, finishing the objective only encouraged us to work more and
more toward the larger aim.
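The AJAX exchange described above can be sketched in the browser JavaScript of the era. The endpoint name, parameter names and helper functions below are hypothetical illustrations, not taken from the OnScience codebase:

```javascript
// Hypothetical sketch of the AJAX pattern described above: the client
// serializes a request, sends it to a PHP endpoint, and updates the page
// when the server responds. Endpoint and parameter names are invented.

function buildQuery(params) {
  // Serialize request parameters into a URL-encoded query string.
  return Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
}

function fetchArticles(keyword, onDone) {
  // Classic XMLHttpRequest round trip (runs in a browser).
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'search.php?' + buildQuery({ q: keyword }), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(JSON.parse(xhr.responseText)); // hand parsed results to the UI
    }
  };
  xhr.send();
}
```

Because the request is asynchronous, the rest of the page keeps working while the server prepares the response, which is exactly the bridging role the text attributes to AJAX.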
Resources and Methodologies
Workflow
The project development was categorized into five broad phases, each with a specific role:
Architecture and Schematics Designing
Interface Front-End and Database Development
Linking of the Modules
Testing of the Interface in multiple ways
Advertisement and Promotion
We will see the role of each phase elaborately, one by one, later in this section. Each
phase of the project was allotted a tentative time frame, and the progress report was
verified by the team leaders at the end of each working day.
Below is the representation of the entire project scenario:
Representation of the project workflow (figure 1.1a)
The overall approved project plan (figure 1.1b)
Phase 1
Architecture and Schematics Designing
The entire team was split into several groups based on skills and interests. The team
responsible for architecture development consisted of 4 members and a team leader.
Since several members of the team were participating virtually from remote locations, an
FTP server was set up so that every member could follow the project's progress and put
in suggestions and ideas. Before the raw design work started, it was our utmost priority
to have a sound idea of what current users actually need that is either unavailable or
could be improved; many journals and articles were consulted for this. The work began
with listing the modules that needed to be on the interface, with maximum user
involvement expected. When one analyzes existing scientific web portals, the information
they provide is mostly job listings, published papers and journals, or news updates. We
thought out of the box: we planned to include a rating system for researchers all over
the world.
The objective was simple: a central module where users from every domain can see the
ratings of the scientists of their interest. The rating was based on a simple algorithm
using parameters such as the number of papers/journals published, publisher details, the
impact factor of the publisher, whether the scientist is the first, second or third author of
the paper, and similar criteria. These parameters were formulated and the module
architecture was drawn. This gives an idea of how the modules were planned in order to
make the interface maximally resourceful for the user. The basic layout of the interface
was planned by the SSD team and included many options: a link to the home page, a
resources column, a news update section, a poll-of-the-week section, a peer review
system column, a scientist rating column, feedback forms, contact details and forms, a
download column where the user could download papers, articles, e-books, guides,
journals, magazines, tutorials and many more, and a multimedia column where the user
can listen to podcasts, watch videos and download them all if needed. There was also a
section for users called "submit your information", where the user could create a profile
and add personal information, including his/her resume and other documents such as
published papers and reviewed journals.
A major challenge for us was to develop a modular architecture with outstanding
robustness as an E-CP. To sustain a user base and provide the best outcome at the same
time, we had to develop an algorithm capable of looking after the interests of both the
client and the service provider. Considering other major factors, like the genuineness of a
scientist's rating, we planned a dummy platform where users were asked to create their
profiles with all of their information, so that the efficiency of the rating system
algorithm, a major issue, could be tested. Since both the algorithm and the idea were
new, it was a firsthand experience for all of us. We had to take care of factors ranging
from personal rights, including the right to privacy, to the scientists' right to retain their
original data. The development began with sketch work using a highly efficient interface
tool to maximize the effect of the layout while considering user demands.
Projected main page layout for OnScience (figure 1.2)
The interface shown in figure 1.2 is the projected main page, or home page, of the
OnScience web portal. As mentioned earlier, it includes all the planned modules
discussed. As discussed in the literature review section, the technologies used were PHP
for the mainframe design and JavaScript for page validation. One important aspect of
aesthetics while developing the architecture was that a page should not be overloaded
with excessive scripts or unnecessarily large graphics; the team made sure that all
graphics were of the best resolution at a manageable size. The part of the architecture
development that tested both our patience and our skills was the security of the portal
and its content, which included sensitive personal information of many researchers and
individuals. Developing a system with the highest level of encryption, while neatly
avoiding any effect on the rest of the program, was one major challenge. The objective
was to implement encryption at every possible level of the system so that no ordinary
hacking program or bot could access or manipulate the server and administrative data.
Shell programming was used to provide the layers of data security, which was a learning
experience for us. The next round of work was to decide and finalize the modular
architecture for each module chosen for the portal. This was a major deal, as the
modules were the main constituents of the web interface: each module had a specific
task to perform for the user, and it had to be worth the space it takes on the interface.
To understand the workflow at full scale, one needs to go through the modules one by
one and understand their functionality well before moving to the schematics.
Module by Module
A. Feedback Module
Info: The “Rate this” module was designed and integrated to rate any feature or content
of the portal. On a scale of 5, the user can rate each and every part of the portal. For
example, if a user likes a particular article published on OnScience, the user can rate
that article; its rating will increase accordingly and it will be displayed under the
“highly rated contents” section of the portal. Another way to rate is to simply click the
(+) or (-) shown on the module: if a user selects (+), positive feedback is sent for that
particular subject on the interface.
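A minimal sketch of the (+)/(-) feedback logic just described, in JavaScript with invented names (the actual module was written in PHP):

```javascript
// Each (+) or (-) click adjusts a running score for an item; items with
// high scores can then be listed under "highly rated contents".
function applyVote(scores, itemId, vote) {
  // vote is +1 for a (+) click or -1 for a (-) click
  scores[itemId] = (scores[itemId] || 0) + vote;
  return scores[itemId];
}

var scores = {};
applyVote(scores, 'article-42', +1);
applyVote(scores, 'article-42', +1);
applyVote(scores, 'article-42', -1); // net score is now 1
```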
B. Poll Module
Info: The “Poll Module” was designed to perform single- and multiple-option polls. It is
independent of any language barrier and can be integrated on both Active Server Pages
and PHP servers. Fast polling and effective handling were the two major objectives in the
development of this module's architecture. The user can see poll results as well as create
a poll, which needs to be approved by the portal administrators before it appears on the
mainframe. Multiple or single options can be selected as poll answers, depending on how
the question is framed by the administrator. Overall reviews proved it to be a very smart
and handy application to integrate with.
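The single- versus multiple-option behaviour can be illustrated with a small tally function. This JavaScript sketch uses hypothetical names and is not the module's actual PHP code:

```javascript
// A single-option ballot is one choice; a multiple-option ballot is an
// array of choices. Both are counted into the same result table.
function tallyPoll(ballots) {
  var counts = {};
  ballots.forEach(function (ballot) {
    var choices = Array.isArray(ballot) ? ballot : [ballot];
    choices.forEach(function (c) {
      counts[c] = (counts[c] || 0) + 1;
    });
  });
  return counts;
}

// One single-option vote and one multiple-option vote:
var counts = tallyPoll(['open access', ['open access', 'peer review']]);
```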
C. Sign Up Module
Info: The “Sign-Up Module” is a highly secured login module. The user can
sign up/register or sign in with its help, and can also subscribe to newsletters and update
personal details. It is a fine example of a secured login gateway: passwords are hashed
with the MD5 method, which generates a 128-bit digest, rendered as a 32-character
hexadecimal string, from which the original password cannot be retraced.
D. Top Download Viewer
Info: The “Top Download Viewer” is a module designed to display the articles and
subjects most downloaded by users. It is based on an algorithm that works on the
number of clicks on each download link. It also integrates the rating system, making it
possible to rate an item at the same time as downloading it.
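The click-count ranking can be sketched as follows; the JavaScript and the item names are illustrative, not the module's actual implementation:

```javascript
// Rank items by the number of clicks recorded on their download links
// and return the top n, which the module would then display.
function topDownloads(clicks, n) {
  return Object.keys(clicks)
    .sort(function (a, b) { return clicks[b] - clicks[a]; })
    .slice(0, n);
}

var clicks = { 'paper-a': 5, 'paper-b': 9, 'paper-c': 1 };
var top = topDownloads(clicks, 2); // ['paper-b', 'paper-a']
```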
E. Intra Search Module
Info: The “Intra Search Module” was designed using PHP concepts to retrieve
data/information, i.e. articles, journals and published papers, from within the website. In
other words, it can give the user back all the data present in the system's database. As
mentioned in the figure, the search operation works on three parameters:
Document Name
Title
Author
It is an effective and easy-to-integrate module. The user can also find an alternative
option here to log in or sign up, in case the user is not using the main page login option.
The algorithm implemented is simple and easy to process, and thus the module consumes
very few resources.
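An in-memory illustration of the three-parameter search (the real module queried the MySQL database from PHP; the records and field names here are invented):

```javascript
// Case-insensitive substring match on one of the three search fields:
// document name, title or author.
function intraSearch(records, field, term) {
  var needle = term.toLowerCase();
  return records.filter(function (r) {
    return (r[field] || '').toLowerCase().indexOf(needle) !== -1;
  });
}

var records = [
  { document: 'thesis.pdf', title: 'Protein Folding', author: 'A. Ravi' },
  { document: 'review.pdf', title: 'RNA Structures', author: 'T. Peterson' }
];
```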
F. Complete Search Module
Info: The “Complete Search Module”, unlike the Intra Search Module, is not confined to
the interface database: it can pull data from any source connected to the World Wide
Web. We can say that it has functionality similar to Google's, with a very effective
response time. There are hundreds of search algorithms that could be implemented to
retrieve data from internet sources, but our team used the expertise of MIT experts to
develop our own version of the algorithm, which was truly time-efficient and spam-free.
It was designed to automatically eliminate spam, applets and unnecessary
advertisements attached to the desired search results, with the help of extensive and
robust programs running behind the scenes.
[Bar chart: response times at 15 sec, 30 sec, 45 sec and 1.0 min for 1, 2 and 3 clicks.]
Time schematics of the response after click (figure 2.1)
G. Search the Catalog Module
Info: The “Search the Catalog Module” is designed to perform a search confined to the
catalogue present in the interface. The catalog consists of the major attractions and
features of the interface, letting the user glance quickly at all the key notes and
functionalities. It works on another specific, short algorithm, and database handling
concepts are invoked to retrieve the desired outputs.
H. Upload your Doc Now Module
Info: The “Upload your Doc Now Module” was developed to give the user the ability to
upload certain documents or belongings, i.e. resumes, papers, journals, articles etc. It can
also be used to upload media files to the user's personal dashboard for entertainment
purposes. There is a fixed size limit for uploads: 1 MB for any image file and 5 MB for any
video clip. Any file above the defined size limit is automatically sent to the trash folder
of the user's profile.
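The size rule above can be expressed as a small routing check; this JavaScript sketch, with invented names, mirrors the stated 1 MB image / 5 MB video limits:

```javascript
// Per-type upload limits in bytes, as stated in the text.
var LIMITS = { image: 1 * 1024 * 1024, video: 5 * 1024 * 1024 };

function routeUpload(file) {
  // Files within the limit are accepted; oversized files go to the
  // profile's trash folder.
  return file.bytes <= LIMITS[file.kind] ? 'uploads' : 'trash';
}
```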
I. Scientist Rating System Module
Info: The “Scientist Rating System Module” is one of the most innovative modules
developed during the course of this project. The user enters the name of a scientist of
his/her choice, and the module generates that scientist's profile based on the parameters
running at the program end of the module. The algorithm was worked out to produce
highly precise and justified output: a scientist is rated on the number and quality of the
papers he/she has published, with further parameters including the impact factor of the
portal publishing the paper, whether the author is the first author, and many other
factors. A small formula was developed from the raw algorithm to bring precision to the
result: the position value is taken as 1 if the author is the first author of the article, and
the overall sum depicts the rating of the scientist displayed to users. Extensive PHP
coding is used to pull out this kind of information.
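The report does not give the formula itself, so the following is a hypothetical reconstruction from the stated parameters: each paper contributes its publisher's impact factor weighted by author position (position value 1 for a first author), and the rating is the sum over all papers:

```javascript
// Hypothetical weighting: a first author (position 1) contributes the
// publisher's full impact factor; later positions contribute
// proportionally less.
function ratePaper(paper) {
  return paper.impactFactor * (1 / paper.authorPosition);
}

// The scientist's displayed rating is the sum over all published papers.
function rateScientist(papers) {
  return papers.reduce(function (sum, p) { return sum + ratePaper(p); }, 0);
}
```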
J. Publication Help Module
Info: The “Publication Help Module” is connected to a sponsored link with one of the
project's sponsor corporations, and was developed to provide all the resources an author
needs to submit his/her article for review before submitting it for publication. Publishing
a paper is definitely not just about performing the research work and displaying the
results on a white sheet: many steps are required to get a paper published, from
formatting the raw data into an abstract, to collecting the abbreviations, to formatting
the content. This module leads an author to all such information and provides standards
for performing these tasks. The corporation providing the help is widely famed for
publishing prestigious, high-valued journals and papers, and provides free aid to authors
seeking assistance.
K. Peer Review System Module
Info: The “Peer Review System Module” is one of the most highly worked-out modules
and, we can say, the most widely accessed. Here the peers can create their profiles and
then view the articles available for rating. Once they can access the material, they get
the option to review it based on certain parameters, including the genuineness of the
material, how good the content is, etc. The display image shows a logged-in user, with
an option in the top-right menu bar to view articles. PHP 5.0 was used to develop this
module, with the objective of showing an overall rating of any article or journal based
on the prescribed parameters. Profile information such as email address, personal URL or
webpage address, organization, department, address, alternate email, telephone number,
fax number and areas of expertise are some of the key nodes taken as parameters in the
development.
Time Distribution
Time distribution for the architecture development
of every module proposed (figure 2.2)
Phase 2
Interface Front-End and Database Development
Phase II of the project began in the first week of February 2010. The objective, as the
name says, was to implement the architecture that had been formed as an interactive
interface for the system, and to perform the database development task simultaneously.
The team structure was also revised, with some excellent database developers joining
our team. Phase II involved a more practical approach than the theoretical reading done
in the previous phase, and here the WGO team had to take the lead. To code efficiently,
we needed to develop each and every module with the sole knowledge that it had to be
small in size when integrated with the mainframe, and quick in execution.
Coming to the database part, it was a large area to deal with, with several major,
unavoidable issues to cope with before development started, as prescribed by the FDDU.
The schema design was begun by the database development team: raw ER diagrams were
drawn one by one and compared in order to find the best ER model structure, and
multiple issues and opinions came out of optimizing the ER model. The database was
handled in MySQL, and certain manipulations were made to mould its functionality
toward what we needed. Security of the database was a big challenge and requirement
for the working team, since there are hundreds of spammers and hackers who possess
the capability to break into high-end systems and manipulate, steal or corrupt important
data. For us it was a stern requirement, because our database was going to store the
personal and scientific information of researchers from naïve to expert; we had to be
sure of a leak-proof storage system.
Time and resource allotment chart for development
phase II (figure 3.1)
As database development was allotted more time and resources, we can infer how
important the task was. The ER diagram development took almost 15 days to be turned
out at its most precise. At the same time, the proposed modules were finalized, and
teams were split up and allotted specific assignments for the front-end coding. Each
module selected to be part of the portal was to be presented as highly independent and
robust as it could be, so the best people available for graphics and multimedia were
allotted the task.
The process of database designing
Database design is the process of producing
a detailed data model of a database.
This logical data model contains all the
needed logical and physical design choices
and physical storage parameters needed to
generate a design in a Data Definition
Language, which can then be used to create
a database. A fully attributed data model
contains detailed attributes for each entity.
The term database design can be used to
describe many different parts of the design
of an overall database system. Principally,
and most correctly, it can be thought of as
the logical design of the base data
structures used to store the data.
In the relational model of database
management, these structures are the
tables and views. In an object database
most of the entities and relationships map
directly to object classes and named
relationships. However, the term database
design could also be used to apply to the
overall process of designing, not just the
base data structures, but also the forms and
queries used as part of the overall database
application employed within the database
management system (DBMS). The design
process consists of the following steps:
1. Determining the purpose of your
database - This helps prepare you
for the remaining steps.
2. Finding and organizing the
information required - Gather all of
the types of information you might
want to record in the database,
such as product name and order
number.
3. Dividing the information into
tables - Divide your information
items into major entities or objects,
such as Products or Orders. Each
object then becomes an individual
table.
4. Turning information items into
columns - Decide what information
you want to store in each table.
Each attribute becomes a field, and
is displayed as a column in the
table. For example, an Order table
might include fields such as User
Name and Order Date etc.
5. Specifying primary keys - Choose
each table’s primary key. The
primary key is a column that is used
to uniquely identify each row. An
example might be Product ID or
Order ID.
6. Setting up the table relationships -
Look at each table and decide how
the data in one table is related to
the data in other tables. Add fields
to tables or create new tables to
clarify the relationships, as
necessary.
7. Refining your design - Analyze your
design for errors. Create the tables
and add a few records of sample
data. See if you can get the results
you want from your tables. Make
adjustments to the design, as
needed.
8. Applying the normalization rules -
Apply the data normalization rules
to see if your tables are structured
correctly. Make adjustments to the
tables, as needed.
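The eight steps above can be illustrated with a minimal, hypothetical sketch (shown here in Python with SQLite standing in for a database server; the table and column names echo the Products/Orders examples in the text but are otherwise invented):

```python
import sqlite3

# In-memory database; a real deployment would use a server such as MySQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Steps 3-5: one table per entity, one column per attribute, a primary key each.
cur.execute("""
    CREATE TABLE Products (
        product_id INTEGER PRIMARY KEY,
        product_name TEXT NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE Orders (
        order_id INTEGER PRIMARY KEY,
        user_name TEXT NOT NULL,
        order_date TEXT NOT NULL,
        product_id INTEGER REFERENCES Products(product_id)  -- step 6: relationship
    )
""")

# Step 7: add a few records of sample data and check the expected query works.
cur.execute("INSERT INTO Products VALUES (1, 'Journal Subscription')")
cur.execute("INSERT INTO Orders VALUES (10, 'amrit', '2010-01-15', 1)")
row = cur.execute("""
    SELECT o.order_id, p.product_name
    FROM Orders o JOIN Products p ON o.product_id = p.product_id
""").fetchone()
print(row)  # (10, 'Journal Subscription')
```

The foreign-key column on Orders is what step 6 calls "adding fields to clarify the relationships"; normalization (step 8) is satisfied here because each fact lives in exactly one table.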
Now that we have covered the process and
steps of database development, it is clear
how critical this part of the project was.
Below is a small representation of the
overall logic of how the database system is
interlinked:
A raw interlinking diagram for a database system
for an interface (figure 3.2)
As we see, the centroid for such a system is
the database server. A database server is
a computer program that provides database
services directly to other computer
programs or computers, as defined by
the client–server model. The term may also
refer to a computer dedicated to running
such a program. Database management
systems frequently provide database server
functionality, and some DBMSs rely
exclusively on the client–server model for
database access. Such a server is accessed
either through a "front end" running on the
user’s computer which displays requested
data or the back end which runs on the
server and handles tasks such as data
analysis and storage. In a master-
slave model, database master servers are
central and primary locations of data while
database slave servers are synchronized
backups of the master acting as proxies.
Some examples of database servers are
Oracle, DB2, Informix, Ingres, and SQL
Server. Each server uses its own internal
query logic and structure, though the SQL
query language is more or less the same
across all of them. Now, proceeding ahead with the
interlinking as shown in figure 3.2, the
database server is connected to three
domains controlling three different
functionalities with prime importance
respectively. The development domain is
responsible for all the steps and
methodology for the structure of the
database system. A robust structure is
always expected in most of the cases.
Another is the test domain, where the
structured development of the schema is
rechecked repeatedly; feedback is sent to
the server to validate the design, and
alerts are generated if any improvement is
possible. The third is the production domain,
where the tested database structure is
rolled out to become the finalized design.
The production domain assures the safe
transaction of data through the server.
Tables tied to the respective domains carry
valuable information under secure
surveillance.
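The development/test/production separation described above is commonly realized as separate server connections for each domain. A minimal, hypothetical sketch in Python (the host names and database names are invented for illustration, not the project's actual values):

```python
# Hypothetical connection settings for the three domains described above.
DATABASES = {
    "development": {"host": "dev-db.example.org", "name": "onscience_dev"},
    "test":        {"host": "test-db.example.org", "name": "onscience_test"},
    "production":  {"host": "db.example.org",      "name": "onscience"},
}

def connection_settings(domain: str) -> dict:
    """Return the server settings for one domain; schema changes flow
    development -> test -> production, as in figure 3.2."""
    return DATABASES[domain]

print(connection_settings("test")["name"])  # onscience_test
```

Keeping the three domains on distinct hosts is what lets the test domain exercise a schema repeatedly without ever touching production data.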
The process of front end designing
As the name suggests, the front end of a web site
is the face of the web interface that is
seen by its users. It is the part that
actually attracts users' attention, even
though what runs at the back of the website
may be the toughest part to develop
(no one sees what goes on at the back end).
During the development of a web interface
there are certain key points a developer has
to take care of:
- Which browser/platform configurations break your page when you use which technology
- Which technologies or elements to use to create the navigation
- What to do to avoid wrong display on browsers
- How to keep the size of the final document small
- How to convert graphics so that they are small in file size and yet good looking
- How to deal with data coming from the customer in various and sometimes rather exotic formats
- How to keep his work from stalling when there is no data coming in that he can use
- How to communicate to colleagues or customers that the amount of final data in the product does not really fit the design (which is a case of bad planning to begin with, but it does happen)
- How to keep up with the rapid web development market and techniques
In our case it is fair to call it a user
interface, since we were developing a
platform where the user interacts with
our system. A key property of a good user
interface is consistency. There are three
important aspects. First, the controls for
different features should be presented in a
consistent manner so that users can find
the controls easily. For example, users find
it very difficult to use an interface when some
commands are available through menus,
some through icons, and some through
right-clicks. A good user interface might
provide shortcuts or "synonyms" that
provide parallel access to a feature, but
users should not have to search multiple
sources to find what they are looking for;
we handled this part with utmost care.
Second, the well-known "principle of least
astonishment" is crucial. Various features
should work in similar ways. For example,
some features in Adobe Acrobat are "select
the tool, then select the text to apply it to."
Others are "select the text, then apply an
action to the selection."
Third, user interfaces should not change
from version to version; they must
remain upward compatible. For example,
the change from the menu bars of
Microsoft Office 2003 to the "ribbon" of
Microsoft Office 2007 was widely hated
by established users. The "ribbon" could
easily have been "better" in the mid-1990's
than the menu interface if writing on a
blank slate, but once hundreds of millions of
users are familiar with the old interface, the
costs of change far exceed the benefit of
improvement. The vast majority of users
viewed this forced change, without a
backward-compatibility mode, as
unfavorable; more than a few viewed it as
verging on malevolence. Good user
interface design is about setting and
meeting user expectations. Better (from a
programmer's point of view) is not better.
The same (from a user's point of view) is
better. Considering all these issues, which
needed the team's prime attention, the
developers performed the coding in
HTML with an exceptionally good use of
CSS, PHP, JavaScript, VBScript and several
bridging languages in between.
Now we will move to the next phase to see
the modular integration and some sample
layouts designed for the web portal.
Phase 3
Linking of the Modules
Before we move to the Phase 3 explanation, let us go through some of the screenshots of the
front-end development explained in Phase 2.
Home page layout for OnScience (image 4.1)
As shown, and as explained before, the home
page was designed with basic and soothing
color combinations. The logo is located at the
top left, with an intra-search module
on the right of the top banner. The main
menu holds options/links to the home page,
Genres, Publications, Upload, Login and
Signup. Moving down, we see a
search option which navigates the user to
the database search module, where the user
can access 28 research publications. The
upload link next to it takes the user to the
upload section where the user can upload
personal facts and documents, e.g. resume.
Upload page along with the view of top downloads and search modules. You can also see publication help module at the bottom center (figure 4.2)
The snapshot above shows features such as
document upload, catalog search, top
downloads, the publication help module,
the overall search module, the sign-up
module and the login module. Most of the
features of these modules have been
discussed module by module in an earlier
section of Phase II. One can easily figure
out the interface, as it is completely user
friendly.
FAQs layout for OnScience (figure 4.3)
Newsfeed layout for OnScience (figure 4.4)
The newsfeed section controls all the feeds
and categorized news updates for the
portal. As one can see in the snapshot,
there are categories like general, current
users, and languages. The top navigation
panel of the snapshot shows how we are
navigating across the pages.
Weblink layout for OnScience (figure 4.5)
Registration page for OnScience portal (figure 4.5)
The page shows the registration panel for the
OnScience portal. All fields are compulsory.
Once filled, this panel provides the user
with a username and password to be used
for future logins. The display theme is a
fine example of the overlay screens on
web interfaces which are quite popular
these days and can commonly be seen on portals.
Search (advanced) page for OnScience portal (figure 4.6)
The snapshot displayed is of the advanced
search page. As seen here, a field is
provided for accepting the search keyword.
Three categories are given, all words,
any word and exact phrase, to set the
search constraints. The search can be
confined to Articles, Weblinks, Contacts,
Categories, Sections and News Feeds. The
results of the search are displayed lower
on the page.
Now that we are through with the screenshots
for the front-end development, let us
concentrate on the modular integration on
the portal. In the previous section we have
already acquired substantial knowledge
about the modules to be added. Below, the
two major modules are explained in detail
with their construction methodology
and functionality. The two modules are:
Scientist Rating System Module
Peer Review System Module
Scientist Rating System Module
Home page of the Scientist Rating System Module (figure 4.7)
As the screen shows, the home page provides
multiple links, including the link which takes
you to the login page. Once you log in, you
will be navigated to the page where you will
be asked to provide the name of the scientist
of your choice. Once you submit the name of
the scientist, the server will provide you with
the rating of that particular scientist on the
same screen. The script is written in an
advanced manner and is designed to produce
specific outputs based on the keywords.
Other links present on the home page
navigate you to the Arborh (OnScience)
home page, and the technologies, products
and communities sections. There is a flash
section on the home page, as shown, which
provides a slide show of relevant images
informing about recent scientific events.
Now we will see other screens of the same
module in order to understand the
functionality elaborately.
Login page for Scientist Rating System Module (image 4.8)
Find your Scientist page for OnScience portal (image 4.9)
Image 4.8 above displays the login page for
the scientist rating system. The login page
requires a new user to register with the
portal before he or she can use it. Once
registered, the server automatically sends a
confirmation mail to the user's email
address, which the user must confirm. Once
the user confirms the account by clicking
the link sent by the server, the user is
redirected to the page displayed in image
4.9. This page is where the user provides
the name of the scientist of his or her
choice (the one whose rating the user wants
to see). All the user has to do is enter the
name of the scientist in the text box and
press Submit. The ratings, along with the
total number of publications with reference
IDs and details, will appear right away.
Result page generated by the server once the query has been run (figure 4.10)
Figure 4.10 shows the page the server
redirects to once the user has submitted a
scientist's name as the query. The server
generates the number of publications the
scientist has made, along with the journal
titles and the PMIDs of those journals
respectively. On the right side of the screen
we find a box which we call the rating
panel. The rating panel displays the rating
in two forms. One displays the rating as a
star mark out of five stars. The other rating
pattern is based on a score out of 100
points. The algorithm employed is the same;
the difference is that in one case the result
is displayed as stars secured, whereas in
the other case it is a numeric rating.
Whether the data produced by the algorithm
is genuine was verified by the developers
time and again using multiple data sources.
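The dual display described above (five stars versus a score out of 100) implies a simple conversion between the two scales. The project's actual rating algorithm is not reproduced in the text; the sketch below only shows a hypothetical mapping from a 0-100 score onto stars:

```python
def score_to_stars(score: float, max_score: float = 100.0, stars: int = 5) -> float:
    """Map a 0-100 rating onto a five-star scale, rounded to half stars.
    The 0-100 score itself would come from the rating algorithm, which is
    not published in the report."""
    if not 0 <= score <= max_score:
        raise ValueError("score out of range")
    raw = score / max_score * stars
    return round(raw * 2) / 2  # half-star display granularity

print(score_to_stars(87))  # 4.5
```

Because both displays derive from the same underlying score, the star panel and the numeric panel can never disagree, which matches the report's claim that "the algorithm employed is the same."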
A sample code fragment used to retrieve scientist details from World Wide Web (figure 4.11)
As we see here, the script is simple yet
efficient enough to pull all the required
data. The code is written using PHP
concepts. It is just a fragment of the code,
and there are numerous other fragments
which combine to give an effective outcome
on the interface. Perl concepts, as we can
see, are also used. One major objective of the
team was to develop code of minimal length
with a high degree of effectiveness and
robustness. HTML concepts are used further in
this script to generate the tables in the
output page. The design and graphics of this
page were kept simple and small in size,
since they directly affect how the page
loads if the user is on a lower-bandwidth
internet connection.
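The retrieval logic shown in figure 4.11 is not reproduced in the text, but publication counts and PMIDs of the kind displayed can be obtained from PubMed's esearch E-utility, which returns XML. The Python sketch below parses a trimmed sample of such a response; in live use the XML would be fetched over HTTP from eutils.ncbi.nlm.nih.gov, and the sample document here is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A trimmed example of the XML that PubMed's esearch E-utility returns for
# an author query; the live interface would fetch this document over HTTP.
SAMPLE_RESPONSE = """
<eSearchResult>
  <Count>3</Count>
  <IdList>
    <Id>12345678</Id>
    <Id>23456789</Id>
    <Id>34567890</Id>
  </IdList>
</eSearchResult>
"""

def parse_esearch(xml_text):
    """Extract the publication count and PMIDs from an esearch response."""
    root = ET.fromstring(xml_text)
    count = int(root.findtext("Count"))
    pmids = [el.text for el in root.find("IdList")]
    return count, pmids

count, pmids = parse_esearch(SAMPLE_RESPONSE)
print(count, pmids[0])  # 3 12345678
```

The count feeds the rating panel, while the PMID list supplies the reference IDs shown beside each journal title in figure 4.10.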
Peer Review System Module
The PRS module is a segment of the interface
with two sides. One is for the peers who
review the articles submitted by users, and
the other is for the users, who can upload
their articles and journals. A user can
create an account, log in to the interface,
and read and upload articles; those articles
are then reviewed by the peers, who get a
separate login interface with different
options to review the articles and journals.
The aim of this module is to provide an
online alternative to traditional academic
journals that allows easy and rapid
publication and an increasingly unbiased
peer review of journal articles. The
technologies used for the development are
PHP 5.03, MySQL, MD5 Encryption, HTML and
CSS, object-oriented programming (OOP)
concepts – PHP Mailer, XML applications and
Flash.
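Of the technologies listed, MD5 is, strictly speaking, a hashing function rather than encryption. A minimal sketch of how a stored MD5 digest supports a login check (the storage layout is hypothetical, and modern systems would prefer a salted scheme such as bcrypt):

```python
import hashlib

def md5_digest(password: str) -> str:
    """Hex digest as stored in a hypothetical users table.
    Note: MD5 is fast and unsalted, so it is no longer considered secure."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

stored = md5_digest("s3cret")          # what signup would write to MySQL
print(md5_digest("s3cret") == stored)  # True: login succeeds
print(md5_digest("wrong") == stored)   # False: login rejected
```

Storing only the digest means the plaintext password never sits in the database, which is the property the module relies on.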
Home page for the PRS system (figure 4.12)
As mentioned, the home page has multiple
links such as peer login, manager login and
user access. Each login has its own
privileges and restrictions, except the
manager login, since the manager is the one
who controls the interface activities. The
peer login is designed for the experts, who
will access the journals and articles using
this login and eventually review them. The
user access option is for the users, who can
log in, create their profiles and upload
content as well. The user is also able to
see the review results for various articles,
including those uploaded by the user.
Peer login page for the PRS module (figure 4.13)
Profile page of the peer once logged in. Options like profile, view articles and logout can be seen (figure 4.14)
Figure 4.13 displays the login for peers,
using which they can visit their profiles and
rate the articles and journals. Once the
login is successful, the peer is taken to the
next page, as shown in figure 4.14, where
the peer has the option to edit or view his
or her profile. Other options include view
articles and logout. Using view articles, the
peer can see the articles available for
review and rate them based on the
parameters given.
Profile page for the peer login interface (figure 4.15)
Rate this article page for the PRS module (figure 4.16)
Image 4.15 of the PRS module displays the
"edit profile" page. This page allows the
peer or the user to edit and upload
information to the PRS database. The fields
that can be edited in the profile section are
organization, department, address, email,
alternate email, telephone, fax, personal
webpage and area of expertise. Figure 4.16
displays the journal page, where the peer
can go through the journal or article. In the
menu bar on the right side of the page the
option "Rate this article" can be found.
Once the peer/user clicks that option, the
server navigates to the review page, where
the peer will find the parameters for rating
the article. The interface of the review page
is shown in the next screenshot.
Review page for the journals and articles. Various parameters of rating are shown in the screenshot (figure 4.17)
Figure 4.17 clearly depicts the parameters
for reviewing articles and journals. The
parameters given are:
- What, according to the peer, is the originality level of this article
- How significant, in the peer's opinion, is the article to the topic mentioned
- How accurate or rigorous is the article or journal submitted for review
- Which existing journal the publication would go to if it were not on OnScience
- Comments from the users, up to 1000 characters
The team performed multiple surveys
before settling on this series of parameters,
in order to produce highly genuine and
precise ratings. Based on the reviews done
by many peers, the average is taken, the
review ratings are decided, and the server
redirects to the stats page. The stats page
consists of a graphical output which
displays the average rating of an article
based on the three parameters mentioned
above, as well as an overall rating. The
image is generated dynamically using PHP
graphics libraries and XML; Flash files
(SWF) are also utilized.
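The averaging step described above can be sketched as follows; the parameter names follow the review form, while the scores and the function itself are invented for illustration:

```python
from statistics import mean

# Hypothetical reviews: each peer scores the three parameters from the form.
reviews = [
    {"originality": 4, "significance": 5, "rigor": 3},
    {"originality": 5, "significance": 4, "rigor": 4},
    {"originality": 3, "significance": 4, "rigor": 5},
]

def article_rating(reviews):
    """Average each parameter across peers, plus an overall average,
    as fed to the stats page (figure 4.18)."""
    params = ["originality", "significance", "rigor"]
    averages = {p: mean(r[p] for r in reviews) for p in params}
    averages["overall"] = mean(averages[p] for p in params)
    return averages

print(article_rating(reviews))
```

Averaging over many peers is what dampens any single reviewer's bias, which is the "increasingly unbiased" property the module aims for.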
Stats page of the PRS module once the peer has reviewed the article (figure 4.18). The graphical output can be
exported in different formats like JPEG, PNG, GIF and BMP.
Timeline Distribution for the PRS Module
12/11/09- The project was started and we were given the outlines of the work we were supposed to do.
13/11/09 to 18/11/09-
A general analysis of the Peer Review Rating System was done.
Various online journals were visited to have an idea of the type of data required to complete such a system.
Concepts for using PHP for the system were gathered from articles and journals.
23/11/09 to 28/11/09-
1) Decided on the metrics on which the users and the peers would rate the articles published in the journal. These are:
Originality of the article
Significance to the field of the article.
Rigorousness or the level of accuracy for the article.
In addition, the user may also offer comments for the betterment of the journal; this is kept optional.
2) Revised concepts on database management from articles and journals
3/01/10 to 10/01/10-
Scripts were written for the login and logout of the peers and also for the profile display of the peer.
The peer ratings were stored into a database and then converted to graphical output using PHP libraries, Flash and XML.
15/01/10 to 22/02/10- Module testing and analysis of the outcome.
Strategy planning of PRS Module
The strategy flow of the PRS module (figure 4.19)
System schematics of PRS module
System schematics of events in the PRS module (figure 4.20)
Phase 4
Referencing and Testing
Phase IV of the project OnScience began in
mid-March with a clear objective of cross-
referencing each and every node of the
project with valid and standard data.
Since testing is one of the major parts of
any application or interface development,
the OnScience testing team allotted ample
time for it. In the development of many
programmes and large projects, testing
takes up a significant portion of the budget
and time. At OnScience, the project
managers defined what they wanted to
achieve from testing: deliver ongoing,
standard testing benefits and maximize the
return on the resources used.
Testing is not just about reducing risk but it
is also about increasing control. By aligning
the testing objectives with the OnScience
objectives and by increasing the
effectiveness of testing, both will be
delivered.
For example:
- Including testing expertise in the contractual definitions for the system or service and in the acceptance processes can significantly reduce the delivery risks.
- Providing objective and accurate information on risks, issues and milestones throughout the project lifecycle can significantly increase control. This clarity enables managers to make informed and timely choices as the project proceeds. Precise and effective testing services improve both the outcome and the journey to create it.
There were several levels on which the
testing was performed in a widespread
manner:
- Performance Testing: including load testing, stress testing, scalability testing and performance monitoring services
- Migration Testing: including data conversion and wider application or infrastructure migration
- Usability Testing: using customer representatives and expert assessments
- Security Testing: including penetration testing, threat assessments and risk analysis
- Web Testing: including a full set of test lab services such as compatibility and interoperability testing
- Business Continuity Testing: including resilience and failover testing
- Disaster Recovery Testing: including backup and recovery testing
- Network Testing: including network performance monitoring and tuning
The flow schematics of the testing phase of project OnScience (figure 5.1)
Application or Modular Testing
Application testing deals with tests for the
entire application. This is driven by the
scenarios from the analysis team.
Application limits and features are tested
here. The application must successfully
execute all scenarios before it is ready for
general customer availability. After all, the
scenarios are a part of the requirement
document and measure success.
Application testing represents the bulk of
the testing done by industry. Unlike
internal and unit testing, which are
programmed, these tests are usually driven
by scripts that run the system with a
collection of parameters and collect results.
In the past these scripts may have been
written by hand, but in many modern
systems the process can be automated.
Most current applications have graphical
user interfaces (GUI). Testing a GUI to
assure quality becomes a bit of a problem.
Most, if not all, GUI systems have event
loops. The GUI event loop contains signals
for mouse, keyboard, window, and other
related events. Associated with each event
are the coordinates on the screen of the
event. The screen coordinates can be
related back to the GUI object and then the
event can be serviced. Unfortunately, if
some GUI object is positioned at a different
location on the screen, then the
coordinates change in the event loop.
Logically the events at the new coordinates
should be associated with the same GUI
object. This logical association can be
accomplished by giving unique names to all
of the GUI objects and providing the unique
names as additional information in the
events in the event loop. The GUI
application reads the next event off of the
event loop, locates the GUI object, and
services the event. The events on the event
loop are usually generated by human
actions such as typing characters, clicking
mouse buttons, and moving the cursor. A
simple modification to the event loop can
journal the events into a file. At a later time,
this file could be used to regenerate the
events, as if the human was present, and
place them on the event loop. The GUI
application will respond accordingly. A
tester, using the GUI, now executes a
scenario. A journal of the GUI event loop
from the scenario is captured. At a later
time the scenario can be repeated again
and again in an automated fashion. The
ability to repeat a test is key to automation
and stress testing.
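The journal-and-replay idea described above can be sketched with a toy event loop; this is an illustration of the technique, not the project's code:

```python
import json
import os
import tempfile

class EventLoop:
    """Toy GUI event loop that can journal its events and replay them later."""
    def __init__(self):
        self.queue = []
        self.handled = []

    def post(self, widget_name, event):
        # Events carry a unique widget name rather than screen coordinates,
        # so a replayed journal stays valid even if widgets move on screen.
        self.queue.append({"widget": widget_name, "event": event})

    def run(self, journal_path=None):
        if journal_path:
            with open(journal_path, "w") as f:
                json.dump(self.queue, f)      # capture the scenario to a file
        for ev in self.queue:                 # service each event in order
            self.handled.append((ev["widget"], ev["event"]))
        self.queue = []

    def replay(self, journal_path):
        with open(journal_path) as f:         # regenerate the journaled events
            self.queue = json.load(f)
        self.run()                            # as if the human were present

journal = os.path.join(tempfile.gettempdir(), "gui_journal.json")
loop = EventLoop()
loop.post("login_button", "click")
loop.post("username_field", "keypress")
loop.run(journal)        # the tester drives the GUI once; journal is captured
loop.replay(journal)     # later, the same scenario runs automatically
print(len(loop.handled))  # 4
```

The replayed events reach the same handlers as the live ones, which is exactly the repeatability that automation and stress testing depend on.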
Stress testing deals with the quality of the
application in the environment. The idea is
to create an environment more demanding
of the application than the application
would experience under normal workloads.
This is the hardest and most complex
category of testing to accomplish and it
requires a joint effort from all teams. A test
environment is established with many
testing stations. At each station, a script is
exercising the system. These scripts are
usually based on the regression suite. More
and more stations are added, all
simultaneously hammering on the system,
until the system breaks. The system is
repaired and the stress test is repeated
until a level of stress is reached that is
higher than expected to be present at a
customer site. Race conditions and memory
leaks are often found under stress testing. A
race condition is a conflict between at least
two tests. Each test works correctly when
done in isolation. When the two tests are
run in parallel, one or both of the tests fail.
This is usually due to an incorrectly
managed lock. A memory leak happens
when a test leaves allocated memory
behind and does not correctly return the
memory to the memory allocation scheme.
The test seems to run correctly, but after
being exercised several times, available
memory is reduced until the system fails.
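A race condition of the kind described, and the correctly managed lock that prevents it, can be sketched with a shared counter; the example is illustrative and unrelated to the project's code:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:          # correctly managed lock: no lost updates
                counter += 1
        else:
            counter += 1        # unprotected read-modify-write

def run_parallel(use_lock, threads=4, times=100_000):
    """Run several 'tests' in parallel against the shared counter."""
    global counter
    counter = 0
    workers = [threading.Thread(target=increment, args=(times, use_lock))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter

print(run_parallel(use_lock=True))  # 400000 every time
# Without the lock, concurrent increments can interleave and lose updates
# (the race); in CPython the loss is intermittent rather than guaranteed.
```

Each thread passes in isolation; only the parallel run exposes the fault, which is why such bugs surface under stress testing rather than unit testing.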
Special methods exist to test non-functional
aspects of software. In contrast to
functional testing, which establishes the
correct operation of the software (correct in
that it matches the expected behavior
defined in the design requirements),
non-functional testing verifies that the
software functions properly even when it
receives invalid or unexpected inputs.
Software fault injection, in the form of
fuzzing, is an example of non-functional
testing. Non-functional testing is designed
to establish whether the system under test
can tolerate invalid or unexpected inputs,
thereby establishing the robustness of input
validation routines as well as error-handling
routines. Various commercial, open-source
and free tools are available that perform
non-functional testing. Performance testing is
executed to determine how fast a system or
sub-system performs under a particular
workload. It can also serve to validate and
verify other quality attributes of the system,
such as scalability, reliability and resource
usage. Load testing is primarily concerned
with testing that the system can continue to
operate under a specific load, whether that
involves large quantities of data or a large
number of users. This is generally referred
to as software scalability. The related
load-testing activity, when performed as a
non-functional activity, is often referred to
as endurance testing. Volume testing is a
way to test functionality. Stress testing is a
way to test reliability. Load testing is a way
to test performance. There is little
agreement on what the specific goals of
load testing are. The terms load testing,
performance testing, reliability testing, and
volume testing, are often used
interchangeably. In counterpoint, some
emerging disciplines such as extreme
programming and the agile software
development movement adhere to a
"test-driven development" model. In this
process, unit tests are written first, by the
software engineers (often with pair
programming in the extreme programming
methodology). Of course these tests fail
initially, as they are expected to. Then, as
code is written, it passes incrementally
larger portions of the test suites. The test
suites are continuously updated as new
failure conditions and corner cases are
discovered, and they are integrated with
any regression tests that are developed.
Unit tests are maintained along with the
rest of the source code and generally
integrated into the build process (with
inherently interactive tests being relegated
to a partially manual build acceptance
process). The ultimate goal of this test
process is to achieve continuous
deployment, where updates can be
published to the public frequently.
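The test-first cycle described above can be sketched with Python's unittest; the function under test is a hypothetical stand-in for a unit of the interface:

```python
import unittest

def average_rating(scores):
    """Implementation written *after* the tests below, to make them pass."""
    if not scores:
        raise ValueError("no scores to average")
    return sum(scores) / len(scores)

class TestAverageRating(unittest.TestCase):
    # Written first; these fail until average_rating is implemented.
    def test_simple_average(self):
        self.assertEqual(average_rating([4, 5, 3]), 4.0)

    def test_empty_input_is_an_error(self):
        # A corner case added to the suite as it was discovered.
        with self.assertRaises(ValueError):
            average_rating([])

if __name__ == "__main__":
    # exit=False so the suite can run inside a larger script or build step.
    unittest.main(argv=["tdd_sketch"], exit=False)
```

Keeping such tests next to the source and running them on every build is what lets failures surface immediately, the precondition for the continuous deployment goal mentioned above.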
Phase 5
Advertisement and Promotion
The last phase of the project was planned
after a long survey and market analysis.
Many analysts were contacted in order to
arrive at a precise and highly efficient
marketing strategy. Web advertising is the
continuing process of promoting a website
to bring more visitors to it. Many
techniques, such as website or interface
content development, search engine
optimization (SEO), and search engine
submission, are used to increase the traffic
to a site. SEO is one of the most effective
ways to promote a portal and increase its
traffic, hence it was the team's top priority
to exploit SEO on a larger scale.
Search engine optimization (SEO) is the
process of improving the volume or quality
of traffic to a web site or a webpage (such
as a blog) from search engines via "natural"
or unpaid ("organic" or "algorithmic")
search results, as opposed to other forms of
search engine marketing (SEM), which may
deal with paid inclusion. The theory is that
the earlier (or
higher) a site appears in the search results
list, the more visitors it will receive from the
search engine. SEO may target different
kinds of search, including image
search, local search, video search and
industry-specific vertical search engines.
This gives a web site its web presence. As
an Internet marketing strategy, SEO
considers how search engines work and
what people search for. Optimizing a
website primarily involves editing its
content and HTML and associated coding to
both increase its relevance to specific
keywords and to remove barriers to
the indexing activities of search engines.
The acronym "SEO" can refer to "search
engine optimizers," a term adopted by an
industry of consultants who carry out
optimization projects on behalf of clients,
and by employees who perform SEO
services in-house. Search engine optimizers
may offer SEO as a stand-alone service or as
a part of a broader marketing campaign.
Because effective SEO may require changes
to the HTML source code of a site, SEO
tactics may be incorporated into web
site development and design. The term
"search engine friendly" may be used to
describe web site designs, menus, content
management systems, images,
videos, shopping carts, and other elements
that have been optimized for the purpose
of search engine exposure.
Another class of techniques, known as
social media marketing (SMM), will also be
deployed soon. SMM involves promoting a
business on social networking sites such as
Facebook, Orkut and Twitter, which
collectively are a storehouse of over 700
million potential users across the globe.
The OnScience team was deeply engaged in
SMM analysis and optimization.
A web marketing strategy for OnScience (figure 6.1)
Strategy planning scenario for the promotion (figure 6.2)
The marketing strategy and promotions are
yet to be applied on a real-time basis, since
the current status of the project puts it in
the interface testing phase, i.e. Phase IV.
Marketing strategies are wide in nature and
can be deployed with a wide range of key
notes. OnScience is a platform which will
most likely attract a user group of students,
researchers, pharmacists, drug-design firms,
bioinformatics research centers,
biotechnology groups, chemoinformatics
departments and various research
institutes. People associated with science in
any form will find use for this portal, and it
will thus prove its worth at a very
satisfactory level by giving its best.
Result and Discussion
The first three phases of the project were
successfully completed, tested and
implemented on the system centroid. The
project gave the team members an
extraordinary sense of project management
and of how to handle situations under
tremendous workload when deadlines are
steep. Working under the guidance of
mentors from the Massachusetts Institute of
Technology, Harvard and other prestigious
institutes gave the team members enormous
courage and positive energy to work on
similar projects in the future. The project
development provided the team a sound
knowledge of the languages used so far in
the development. A lot of advanced software
and tools were used to generate data,
designs and layouts. The knowledge gained
was tremendous and will help all of the
members in further assignments.
Currently the interface is in Phase IV, i.e.
the testing phase, where the testing team is
performing widespread tests using various
sets of input data and other applicable data
sets. Testing yields results in proportion to
the effort employed in the project: the
longer an interface is put to the test, the
better the output scenario. The fifth phase
of the project is the marketing strategy and
the planning of the promotion. There is a
lot of analysis yet to be performed on the
marketing part, since the testing part will
take almost 2 to 3 months from now. Once
the interface is ready to be uploaded, a
marketing strategy will be implemented and
the product will be on the World Wide Web.
The overall budget involved in the project
up to the completion of Phase IV includes
the cost of hardware and software, the cost
of hiring engineers and interns, the cost of
building and resources (i.e. electricity and
water), the cost of conveyance, the cost of
software testing by commercial
corporations, and more. The project
involved widespread use of languages, and
therefore many experts in different
languages were involved, which gave the
novice developers a great chance to gain
hands-on experience by working with the
best. Overall, it was a great learning
opportunity, and many lessons were
learned, including team work, planning and
strategy development, debugging, and, most
importantly, being a patient learner
throughout the project duration.
Future Aspects
OnScience is a project and a step with a vision: a vision to reach every mind associated with the scientific world. The current work is only the seeding of that idea and a pioneering step. Once the interface is uploaded to the web, there will be a great deal of feedback and comments sent to the server, which will help provide better scientific information. Certain key points must be seen as objects of future advancement and prospects. First, to place before the general public the grand results of scientific work and scientific discovery, and to urge the claims of science to a more general recognition in education and in daily life. Secondly, to aid scientific men themselves by giving early information of all advances made in any branch of natural knowledge throughout the world, and by affording them an opportunity of discussing the various scientific questions which arise from time to time. The team has done certain planning in order to attain these goals:
- Podcasts of the books and journals can be included
- Scientific videos and animations
- Interactive Flash demonstrations to help users understand biological processes
- It will promote a new community of freelance scientists ready to offer their help and consultancy online
- Attractive subscriptions can be designed, with charges waived for students and other learners
- A central resource center combining the resources and features of all the major scientific portals, e.g. Nature, PubMed Central, Bioinformation, Cell, Swiss-Prot, Scribd, YouTube and many more, clubbed together to make life easier for life-science researchers
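As a rough illustration of how the proposed central resource center might query one of these portals, the sketch below builds a search URL for NCBI's public E-utilities API, which fronts PubMed. This is not part of the OnScience implementation; the function name and parameters are illustrative assumptions, and only the endpoint and its query parameters (`db`, `term`, `retmax`, `retmode`) come from NCBI's published API.

```javascript
// Minimal sketch (not project code): constructing an NCBI E-utilities
// esearch URL for PubMed, one candidate backend for the resource center.
function buildPubMedSearchUrl(term, maxResults) {
  const base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi";
  // URLSearchParams handles encoding of spaces and special characters.
  const params = new URLSearchParams({
    db: "pubmed",          // search the PubMed database
    term: term,            // user's free-text query
    retmax: String(maxResults), // cap the number of returned IDs
    retmode: "json",       // ask for a JSON response
  });
  return `${base}?${params.toString()}`;
}

console.log(buildPubMedSearchUrl("drug design", 5));
```

A similar thin wrapper per portal (Swiss-Prot, Scribd, etc.) would let the resource center fan a single user query out to many sources and merge the results.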
At the same time, we must keep in mind that public-sector corporations are run with the sole objective of generating awareness and providing a service to the public. Such a service needs to meet the needs of the less well off in society or help improve the ability of the economy to function: e.g. cheap and accessible web services for researchers in the future. OnScience has the same motive, and each and every member is deeply motivated to put forth all they can in order to make this service of prime importance for its users and to spread awareness of the new and resourceful web-based mantra called OnScience.
References
Ajax: the definitive guide, Anthony T. Holdener - 2008 - 957 pages
Apache security, Ivan Ristic - 2005 - 396 pages
Asp: Webster's Quotations, Facts and Phrases, Inc Icon Group International - 2008 - 73 pages
Beginning PHP and MySQL: from Novice to Professional, W. Jason Gilmore - 2008 - 1044 pages
DBMS: The complete practical approach, Sharad Maheshwari, Ruchin Jain - 2005 - 536 pages
Flash 8: projects for learning animation and interactivity, Rich Shupe, Robert Hoekman - 2006 - 340 pages
Fundamentals of OOP and data structures in Java, Richard Wiener, Lewis J. Pinson - 2000 - 463 pages
HTML: Complete Concepts and Techniques, Gary B. Shelly, Denise M. Woods - 2008 - 384 pages
HTML Validation – htmlvalidation.com
JavaScript: the complete reference, Thomas A. Powell, Fritz Schneider - 2004 - 948 pages
JavaScript: the missing manual, David Sawyer McFarland - 2008 - 528 pages
PHP for the World Wide Web, Larry Edward Ullman - 2004 - 450 pages
Pro VB 2008 and the .NET 3.5 Platform, Andrew Troelsen - 2008 - 1377 pages
Programming PHP, Rasmus Lerdorf, Kevin Tatroe, Peter MacIntyre - 2006 - 521 pages
Programming ASP.NET, Jesse Liberty, Dan Hurwitz - 2006 - 930 pages
SQL, the complete reference, James R. Groff, Paul N. Weinberg - 2002 - 1050 pages
SQL: access to SQL server, Susan Sales Harkins, Martin W. P. Reid - 2002 - 698 pages
Support Materials for User Interface Develop., Gary Perlman, Carnegie-Mellon University Pittsburgh ... - 1988 - 38 pages
The book of Visual Basic 2005: NET insight for classic VB developers, Matthew MacDonald - 2006 - 490 pages
The Apache modules book: application development with Apache, Nick Kew - 2007 - 558 pages
The Flash: the secret of Barry Allen, Geoff Johns, Howard Porter, Livesay – 2005
User interface development, Gary Perlman, Carnegie-Mellon University ... - 1989 - 160 pages
Windows multi-DBMS programming: using C++, Visual Basic, ODBC, Ken North - 1995 - 757 pages
W3Schools official website
XML: Visual QuickStart Guide, Kevin Howard Goldberg - 2008 - 269 pages
The Team
The team members for the project OnScience