P.I.N.G. 11.0


Posted on 08-Apr-2016


DESCRIPTION

P.I.N.G. 11.0 features an article on Google Project Zero, a team of specialists finding zero-day exploits, along with articles on Offline Hacking, Google Polymer and Skinput Technology, to name a few. A new feature of P.I.N.G. pays tribute to Douglas Engelbart, the inventor of the mouse. The editorial of this Issue covers the interesting concept of the Leap Second, an extra second in our lives. The Issue also includes an interview with Mr. Michel Susai (Chairman, NextStar Ventures, CA, USA), covering his journey from his college days to becoming an entrepreneur.

TRANSCRIPT


P.I.N.G. ISSUE 11.0 FEBRUARY 2015 CTD.CREDENZ.INFO

The PICT IEEE Newsletter Group (P.I.N.G.) is the official bi-annual magazine of the PICT IEEE Student Branch (PISB). With the aim of inculcating a sense of technical awareness amongst its student members, PISB was established in the year 1988 to spread knowledge of developments and trends across the diverse fields of technology.

P.I.N.G. is released bi-annually to coincide with PISB’s most prestigious events: Credenz and Credenz Tech Dayz (CTD). An effort to bridge the gap between academia and industry, CTD ’15 brings with it a plethora of seminars, ranging from robotics and 3D printing to start-ups and the Internet of Things. Taking forward the legacy of providing platforms for students to develop their rationale, CTD ’15 introduces three new events this year: ‘Inquizitive’, a quiz competition; ‘Ideate’, a platform to test entrepreneurship skills; and ‘Xodia’, a battle of algorithms.

P.I.N.G. 11.0 brings with it yet another assemblage of technical articles, with ‘Project Zero’ emerging as the featured article. Keeping up with the tradition of offering something new to its readers in every release, the Issue pays tribute to Douglas Engelbart, the inventor of the mouse. An interview with Mr. Michel Susai makes for a truly mesmerizing read.

A special citation to the junior team without whose indefatigable efforts the culmination of the Issue would not have been possible. It is with their prodigious endeavours that P.I.N.G. 11.0 promises to enrapture its acclaimed technocratic audience. P.I.N.G. thanks all its contributing authors, students and professionals alike, for their noteworthy disquisition in the manifold arenas of technology.

Dear Readers,

It gives me immense pleasure to write this message for the new edition of P.I.N.G. This unique activity of the PICT IEEE Student Branch has established itself and made an impact all over. The newsletter provides a platform for all, including student members, to showcase their talent and views, and further strengthens IEEE activities.

It is a great pleasure to serve as branch counsellor to the PICT IEEE Student Branch (PISB). The year 2014 was a very important year for me, as IEEE conferred the “2014 Outstanding Branch Counsellor Award” upon me. I am thankful to all the members of the PICT IEEE Student Branch for their activities and nomination. It was also a happening year, as we successfully organized IEEE INDICON 2014 at Pune during 11th to 13th December 2014, as well as the IEEE All India Computer Society Student Congress. I would also like to note here that IEEE Pune Section received the “R10 Distinguished Medium Section Award” during my tenure as Chairman, IEEE Pune Section. I had an opportunity to attend IEEE Sections Congress 2014 at Amsterdam, Netherlands, during 21st to 25th August 2014; this event was attended by more than 800 IEEE volunteers and staff from all over the world. I have also been appointed to work as an IEEE Region 10 Students’ Activity Coordinator. In January 2015, I had an opportunity to attend the IEEE R10 meeting with the 2015 IEEE President, Howard Michel, and to join the R10 team for a mountain flight to Mt. Everest. Dr. Vijay Bhatkar, IEEE Fellow and member of IEEE Pune Section, has been selected for the prestigious Padma Bhushan Award; the felicitation program was held on 8th February 2015, during the AGM of IEEE Pune Section.

At PISB, we try our level best to create an environment where students keep themselves updated with emerging trends, technology and innovations. Many events are conducted throughout the year and are widely appreciated by students, acclaimed academicians and industry professionals alike. The events include IEEE Day, workshops, Special Interest Group (SIG) activities, Credenz and Credenz Tech Dayz. Credenz is the annual technical event held in September each year.

I thank all the authors for their contributions and interest. On behalf of IEEE Pune Section, I wish PISB and this newsletter every success. I congratulate the P.I.N.G. team for their commendable efforts.

Dr. Rajesh B. Ingle
Branch Counsellor


the editorial

Leap Second: An extra second in our lives

Whilst Thomas, a young child, was gazing at the clock and letting seconds pass by without a smidgen of awareness, the Bureau International de l’Heure was in earnest deliberation over introducing an extra second into our lives. Although most of us are oblivious to the very prominence of time and tend to while away hours, the time-keepers are actually entertaining the thought of redefining time. Given the minuscule importance of a second, it fails to attract attention when wasted. By now you must be scratching your head and wondering how on Earth adding a second would affect you! To address this curiosity, let us first grasp the background by digging into time.

Flashback
The manner in which we know time today, interestingly, wasn’t always the same when the concept of time was born. For instance, in 140 AD, Ptolemy, the Alexandrian astronomer, mathematically subdivided the mean solar day and also used fractions for the equinoctial and seasonal hour; far removed from the modern-day second. But not all the old definitions were alien to the modern concept of time. Al-Biruni, in 1000 AD, divided the mean solar day and the equinoctial hour into six parts, thereby conceiving the concept of the second in the manner we know it today. By this estimation, a mean solar day was made up of 86,400 seconds.

Much to everyone’s surprise, precisely defining time wasn’t really the tough task; keeping track of it was. The first to realize this were Simon Newcomb and others in 1952, after they stumbled upon the irregularities of the Earth’s rotation. Taking these findings into account, the International Astronomical Union (IAU) defined ‘the second’ as a fraction of the sidereal year. Further down the decade, the IAU redefined the second as the fraction 1/31,556,925.9747 of the 1900 mean tropical year, owing to the more fundamental nature of the mean tropical year. And by the end of the decade, this definition was one of the many cogs of the International System of Units (SI).

The ordeal of keeping track of time still wasn’t taken care of, as was evident from yet another redefinition of the second, in the year 1967. The new definition invoked the most fundamental unit of matter: the atom. One SI second was now equated to 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state. For the sake of convenience, the term International Atomic Time (TAI) was coined for time kept this way.
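To get a feel for the scale involved, the duration of a single period of that caesium radiation can be computed directly; this is a back-of-the-envelope sketch, not something from the article:

```python
CS_PERIODS_PER_SECOND = 9_192_631_770  # 1967 SI definition of the second

# Duration of one period of the caesium-133 hyperfine radiation:
period_s = 1 / CS_PERIODS_PER_SECOND
print(f"{period_s * 1e12:.2f} picoseconds")  # ≈ 108.78 picoseconds
```

Counting roughly nine billion of these ticks per second is what gives atomic clocks their precision.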

Coordinated Universal Time (UTC) was instituted in 1961; its basic working involved a combination of atomic clocks and the Earth’s rotation (Universal Time, or UT1). However, it wasn’t in sync with Greenwich Mean Time (GMT), because of its partial dependency on the Earth’s rotation. As an outcome, the time-keepers were forced to slow down the atomic clocks from 1961 to 1971. This slowing down of time wasn’t able to fulfil the thirst for perfection, since the UTC seconds were slightly longer than the SI second. To take care of this glitch and achieve precision, the controversial concept of the ‘Leap Second’ came into existence in 1972.

An extra second
The ever-inconsistent rotation speed of the Earth, recognized in 1952, makes the trifling task of keeping track of time an extremely hard one. The major factor influencing this inconsistency is the Moon, which, as it orbits the planet, causes tidal friction and movement of the Earth’s core. Moreover, earthquakes along with drastic weather events contribute as well. Owing to these irregularities, and to the flawless precision of the atomic clock, UTC loses a second over a period of time. The Leap Second was introduced to make up for this loss.

Why bother?
Would you like your clock to show 1 o’clock in the afternoon when the Moon is still out there? Verily no! Such imbalance, in a world driven by its desire for perfection, is completely inexcusable. Though the slowdown of the Earth’s rotation is per se wee, it adds up to a conspicuous amount over a large interval of time. For a short while it is fairly invisible, but it keeps subtly growing. Theoretically, it is possible to add a larger span of time at once; however, the practical application of leap minutes or leap hours is even less feasible than the Leap Second.

How is it done?
More importantly, when is it added? When the difference between Coordinated Universal Time (UTC) and Universal Time (UT1) approaches 0.6 seconds, a leap second is added to prevent the difference from exceeding 0.9 seconds. The end of any UTC month is capable of witnessing the addition of an extra second, although, as history has it, this additional second has always been added either on June 30th or December 31st. This important decision has been taken, after a lot of contemplation, by the International Earth Rotation and Reference Systems Service (IERS) in Paris since 1988; from 1972 to 1987 it was delegated to the Bureau International de l’Heure (BIH).

Arriving at the agenda, the method of adding an extra second is pretty interesting. Rather than displaying 00:00:00 after 23:59:59, the UTC clock ticks to 23:59:60, and this non-mundane counting is programmed manually into the UTC clock. Up until the present day, there have been 25 such unusual seconds. The inception of this practice was in 1972, and the most recent addition was on June 30th, 2012. However, that recent addition accompanies a story of its own.

Essential yet controversial Leap Second
The last time we had an extra second, not all of us found it something to be merry about; for some, this second came with a cost. While the servers at Reddit (Apache Cassandra), Mozilla (Hadoop) and Qantas Airways succumbed to this change in routine, the only one diligent enough was Google, who invented a weapon of its own to counterbalance this technical catastrophe: the ‘Leap Smear’. What the mighty Google did was one of a kind. Every time their servers updated, they added a few microseconds, thereby rendering them immune to the catastrophe.
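Google’s ‘leap smear’ idea can be sketched in a few lines. The parameters below — a linear smear spread across one day — are illustrative assumptions, not Google’s actual production settings:

```python
def smeared_offset(elapsed_s, window_s=86_400, leap_s=1.0):
    """Fraction of the leap second already absorbed, `elapsed_s`
    seconds into a linear smear spread across `window_s`."""
    if elapsed_s <= 0:
        return 0.0
    if elapsed_s >= window_s:
        return leap_s
    return leap_s * elapsed_s / window_s

# Halfway through the one-day window, half the extra second is absorbed;
# by the end of the window, the smeared clocks agree with UTC again.
print(smeared_offset(43_200))  # 0.5
print(smeared_offset(86_400))  # 1.0
```

Because each update stretches time by only a tiny amount, no server ever sees the impossible timestamp 23:59:60.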

Many other sites running Linux weren’t spared either. In 2012, nearly 400 flights in Australia were delayed because of this extra second; with the computers succumbing to the change in routine, airport workers were forced to conduct check-ins by hand rather than by computer.

Though the technical problems arising from this extra second have so far been relatively isolated, with little chance of a large-scale technological shutdown, as more and more people get acquainted with the world of the internet and spend more time on it, there is a possibility of more people being affected by this glitch.

Keeping track of time has more or less become a cliché, and the problem aggravates as time passes, pushing us to look for solutions. Thanks to the concept of the leap second, it is now virtually impossible to calculate exact time intervals for UTC dates six or more months into the future: the IERS takes the controversial decision of whether to add an extra second every six months, and because the fate of that extra second is unsure, computers cannot compute exact intervals that far in advance.
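The bookkeeping this forces on software can be sketched as follows. The table is a small, illustrative excerpt of real leap-second dates (the authoritative list is published by the IERS), and the function is a hypothetical helper, not a standard API:

```python
from datetime import date

# Illustrative excerpt of real leap-second insertion dates (end of that day).
LEAP_SECOND_DATES = [date(2005, 12, 31), date(2008, 12, 31),
                     date(2012, 6, 30)]

def elapsed_utc_seconds(start: date, end: date) -> int:
    """Elapsed seconds between two UTC midnights, counting leap seconds.
    Only correct for ranges covered by the table above."""
    calendar_seconds = (end - start).days * 86_400
    leaps = sum(1 for d in LEAP_SECOND_DATES if start <= d < end)
    return calendar_seconds + leaps

# June 2012 contained a leap second, so that "month" is one second longer.
print(elapsed_utc_seconds(date(2012, 6, 1), date(2012, 7, 1)))  # 2592001
```

For past dates the table is known; for dates more than six months ahead, no such table can exist yet, which is exactly the problem described above.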

Then why not abolish it?
The computing community would breathe a sigh of relief at this proposition, but the astronomers might just frown upon it, since a plethora of tracking hardware is essentially built to incorporate the leap second, and replacing it would be not only awfully expensive but also undeniably tedious. Then again, we can’t have our clocks show 1 o’clock in the afternoon with the Moon still out there.

Also, abolishing the concept of the leap second and switching over completely to International Atomic Time (TAI), forsaking our long-followed manner of keeping time, might invite some unwanted and unfounded worries. It would also render the sundials used by our forefathers completely useless.

Apart from the problems we have come across, the future might just hold more and make the situation all the more grave.

Into the future
Life is full of contingencies, and an uncertainty in time just adds to the stack. In the face of this dilemma of an extra second, augmenting our technical dexterity seems the only possible solution. Our ever-increasing dependency on computers makes it imperative to render our systems as infallible as possible; considering that most crucial work depends on them, the soundness of these systems is of the utmost importance. On the way towards perfection, there might just be some contemplation over redefining the manner in which we measure time. This very year, the World Radiocommunication Conference in Geneva, Switzerland is set to discuss the fate of the leap second. While the time-keepers contemplate redefining time, the mind can’t help but wonder: would it be sane to change time as we know it?

- The Editorial Team


tech virtuoso

Smart Hologram: 3D actuators

A hologram (pronounced HOL-o-gram) is a three-dimensional image created by photographic projection. The term is taken from the Greek words holos (whole) and gramma (message). Unlike a 3-D or virtual-reality rendering on a two-dimensional computer display, a hologram is a truly three-dimensional, free-standing image that does not simulate spatial depth or require a special viewing device. Theoretically, holograms could someday be transmitted electronically to special display devices in homes and businesses. The theory of holography was developed by Dennis Gabor in 1947; the development of laser technology later made it practical.

Holography is a technique by which it is possible to store large amounts of secret data. At present it is used in bank notes and credit cards, so that no one can copy the data for misuse. These techniques are also applied to save three-dimensional images of objects. The principle of interference of light is used to store the secret data with the help of a laser beam of a particular wavelength. Recently, a group of researchers from the University of Cambridge designed a ‘smart hologram’, with the help of silver nanoparticles and a laser beam, for the medical diagnosis of diseases such as diabetes and cardiac problems. In addition to medical applications, holographic technology also has potential uses in security, such as the detection of counterfeit medicine, which is thought to cause hundreds of thousands of deaths each year.

Smart holograms can be used to create low-cost, portable medical sensors that test blood, urine, saliva or tear fluid for compounds including glucose, alcohol, hormones, drugs or bacteria. The holograms are made using a highly absorbent material called a hydrogel — similar to the material used to make contact lenses — covered with silver nanoparticles. A laser pulse causes the nanoparticles to arrange themselves into a three-dimensional hologram with specific, predetermined layers in a fraction of a second. The layers sit a precise distance apart, which gives them particular optical properties and makes them exhibit a certain colour.

A hydrogel (also called an aquagel) is a network of water-insoluble polymer chains, sometimes found as a colloidal gel in which water is the dispersion medium. Hydrogels are superabsorbent (they can contain over 99% water) and possess a degree of flexibility very similar to natural tissue, owing to their significant water content. These hydrogels can sense changes in pH, temperature or the concentration of a metabolite, and release their load in response to these changes.

When the holograms are exposed to a specific compound, the hydrogel will either shrink or swell. This brings the layers of nanoparticles closer together or pushes them further apart, which changes the colour of the hologram to a specific colour, depending on how the layers were designed in the first place. The process is reversible, so the same sensor can be used many times.
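The colour change can be modelled with Bragg reflection: layers a distance d apart preferentially reflect light of wavelength λ ≈ 2nd at normal incidence (n is the refractive index). A toy sketch with assumed, purely illustrative numbers:

```python
def bragg_wavelength_nm(spacing_nm, refractive_index=1.33):
    """First-order reflected wavelength for layers `spacing_nm` apart,
    at normal incidence: lambda = 2 * n * d."""
    return 2 * refractive_index * spacing_nm

# Assumed layer spacings: swelling increases d, shifting the colour red-ward.
print(bragg_wavelength_nm(200))  # 532.0 nm -> green
print(bragg_wavelength_nm(240))  # 638.4 nm -> red
```

A swollen hydrogel thus literally looks a different colour from a shrunken one, which is what makes the hologram readable by eye.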

It is possible to produce such holograms by incorporating them into smart polymer films that contain appropriate receptor systems; these can respond optically to the presence of specific target analytes, such as particular ions or glucose.

- Dr. K. C. Nandi
Professor of Physics
Pune Institute of Computer Technology, Pune


cognoscenti pens down

Smart Cars: The Future of Automotives

With smart phones, smart tablets and smart TVs becoming part of our daily lives, the question arises: what next? Where is technology heading now? Which is the new device or domain that is going to become ‘smart’? The answer: automotive.

What is an autonomous car?
An autonomous car, or ‘self-driving car’, is a prototype car capable of performing all the functions of a car without any human input. It should be able to ‘sense’ the environment around it, process this input data, take decisions on its own, detect any obstacles in its way, recognize speed limits and decide the route to be taken.

Advanced Driver Assistance Systems (ADAS)
Advanced Driver Assistance Systems are making cars more and more intelligent. These systems assist the driver in different functions and also increase car safety. Various such systems are available today:

• Automatic Parking Assist
Automatic Parking Assist takes a car from the traffic lane to a parking spot to perform parallel, perpendicular or angular parking. Sensors installed on the front and rear bumpers transmit and receive signals, and the computer in the car uses the time taken by these signals to compute the position of an obstacle. Some systems make use of a camera or radar. Nowadays, these systems are being developed with a remote control, so that the driver can command the car to lock and park automatically.
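The distance computation described above reduces to simple time-of-flight arithmetic. This sketch assumes an ultrasonic sensor and invented numbers, purely for illustration:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def obstacle_distance_m(round_trip_s: float) -> float:
    """Distance to an obstacle from an ultrasonic ping's round-trip time.
    The pulse travels out and back, hence the division by two."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

# A 5.8 ms round trip puts the obstacle roughly one metre away.
print(round(obstacle_distance_m(0.0058), 3))  # 0.995
```

Radar-based systems work the same way, just with the speed of light instead of the speed of sound.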

• Traffic Sign Detection
This is a technology by which a car is able to recognize traffic signs. It consists of two stages: traffic sign detection and recognition.

• Pedestrian Detection
This involves detecting a pedestrian crossing the road and automatically controlling the car’s speed based on distance, using video footage taken by cameras.

• Automotive Night Vision
This is a system to increase a vehicle driver’s perception and sight in darkness or poor weather, beyond the reach of the vehicle’s headlights. There are two types: passive and active systems. Active systems use an inbuilt infrared light source to illuminate the road, and come in two kinds, gated and non-gated; a gated system uses a pulsed light source and a synchronized camera, enabling long range (250 m) and high performance in rain and snow. A passive system captures thermal radiation already emitted by objects, using a thermographic camera.

• Collision Avoidance System
A collision avoidance system is designed to reduce the severity of an accident. It uses radar, and sometimes laser or a camera, to detect an imminent crash. Once the detection is done, the system either warns the driver of the imminent collision or takes action autonomously.

• Automotive Navigation System
This uses GPS to locate the position of the car on the road, then uses a road database to provide directions to other locations on the map, keeping the guidance completely autonomous.

Advantages of autonomous cars:
• Fewer traffic jams and an improved ability to manage traffic flow.
• Reliable driving for long hours, even at night, without any manual intervention.
• A great benefit for those who cannot drive: the under-age, the over-age, the unlicensed, the blind, or those with any other disability.
• Autonomous taxi and truck services could be provided at any time.
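The warn-or-brake decision of a collision avoidance system is often framed in terms of time-to-collision: the gap to the vehicle ahead divided by the closing speed. The thresholds below are made-up illustrative values, not any manufacturer's settings:

```python
def time_to_collision_s(gap_m, own_speed_mps, lead_speed_mps):
    """Time-to-collision: gap divided by closing speed.
    Returns None when the gap is not closing."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return gap_m / closing

def action(ttc, warn_at=3.0, brake_at=1.5):
    """Escalate from warning to autonomous braking as TTC shrinks."""
    if ttc is None or ttc > warn_at:
        return "none"
    return "brake" if ttc <= brake_at else "warn"

ttc = time_to_collision_s(gap_m=30.0, own_speed_mps=25.0, lead_speed_mps=15.0)
print(ttc, action(ttc))  # 3.0 warn
```

Real systems fuse several sensors and predict trajectories, but the escalation logic follows this same shape.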

Obstacles in the use of autonomous cars:
• Liability for damage: currently, liability goes directly to the driver or the car company in case of any defect in the car; with an autonomous car, it will be hard to decide liability.
• Software reliability: the biggest challenge lies in the reliability of the software, as its failure could cause huge loss, even threatening life.
• The car’s computer could be hacked and the system misused; viruses could infect these cars, causing them to fail.
• An autonomous car may not function well in bad weather conditions.

Machine Learning in Cars:
There could be a great application of machine learning in this domain: each car can communicate with other cars on the road and learn about the environment surrounding it. On the other hand, it could also impact privacy, which may be the main argument against using machine learning in cars.

A driverless car by Google
The question, then, is when these cars will become reality. Different estimates say that such cars will be available on the roads by 2020. Google presented a prototype of a fully functional self-driving car in 2014 and planned to test it in San Francisco at the beginning of 2015. The Google car makes use of a LIDAR system and has a Velodyne 64-beam laser mounted on top; this laser helps generate a detailed 3D map of the surroundings. However, the car has certain limitations, such as improper functioning in extreme weather conditions and an inability to spot potholes or humans.

Impact on Society
Every new emerging technology has always had both positive and negative impacts on society, and the autonomous car is no exception. Since autonomous cars drive more accurately, they reduce the possibility of accidents; this will in turn affect insurance companies, but it will save many of the lives that are lost in car accidents. And as with every new technology, new models get developed, so job losses in one form will be compensated by job creation in another.

- Ashwin H. Joshi
Embedded (System) Software Engineer
nVIDIA

GravityLight™ is an innovative device that generates light from gravity. It takes only 3 seconds to lift the weight that powers it, creating 25 minutes of light on its descent.
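The physics behind this factoid is just potential energy spread over time. The weight and drop height below are assumed figures chosen for illustration, not GravityLight's actual specifications:

```python
G = 9.81  # gravitational acceleration, m/s^2

def average_power_w(mass_kg, drop_m, duration_s):
    """Average power from a weight descending at a constant rate:
    potential energy m*g*h released over the descent time."""
    return mass_kg * G * drop_m / duration_s

# Illustrative figures: a 12 kg bag dropping 1.8 m over 25 minutes.
watts = average_power_w(12, 1.8, 25 * 60)
print(round(watts * 1000, 1), "mW")  # 141.3 mW
```

A hundred-odd milliwatts is little, but modern LEDs turn it into usable light, which is the whole trick.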


encomium

Douglas Engelbart: A tribute to an inventor

When people decide to serve a greater good, it is then that the foundation for the successive generation is laid. Through their relentless efforts and their renunciation of a normal life, the lives of others become smoother. Their humane approach, and their ability to make something special out of the ordinary, make them the creators of a better world. If it were not for their dexterity, we would still be scribbling on rocks in some cave in the middle of nowhere. The perks and prerogatives of being human are all outcomes of their unprecedented acts; their flawless and inspiring philosophies guide many through hard times. They are the inventors, the ones who always think differently. One such philanthropic person was Douglas Carl Engelbart, the inventor of the mouse. The innate desire of a creator drove Douglas to craft epiphanies for others.

Early Life:
Engelbart was born in Portland, Oregon on January 30, 1925. He was very inquisitive as a child and always wanted to learn new things. After graduating from Portland’s Franklin High School in 1942, he went on to study Electrical Engineering at Oregon State University. World War II interrupted his studies when he joined the United States Navy, where he worked for two years as a radar technician in the Philippines. After completing his Bachelor’s degree in Electrical Engineering in 1948, he settled down, working in wind tunnel maintenance at the National Advisory Committee for Aeronautics (NACA) Ames Research Centre.

His Satori:
In 1951, after Engelbart married Ballard Fish, he found himself deep in thought about his life goals: how he might dedicate his career toward something that could make a difference in the world, and how, as an engineer, he would be able to help overcome global hardships. Vannevar Bush’s article “As We May Think” inspired him to act towards making knowledge widely available in the service of peace. From his experience as a radar technician and the knowledge he possessed about computers, he knew that information could be analyzed and displayed on a screen. He also knew that mankind lacked the ability to amass the information around it and process it efficiently.

He envisioned people sitting in front of cathode-ray displays, drifting in a world of information, where they could formulate and portray their concepts in a way that would improve perceptual and cognitive capabilities heretofore gone untapped. They could then organize their ideas in a more flexible and rapid way. So he applied to the graduate program in electrical engineering at the University of California, Berkeley, to launch his crusade. He obtained his Ph.D. in 1955, along with a half-dozen patents, and then stayed on at Berkeley as an acting assistant professor to pursue his vision. This only goes to prove that he had not only a goal but also a plan to achieve it.

Stanford Research Institute:
In 1957, he settled into a research position at the Stanford Research Institute. In a span of two years he earned a dozen patents on magnetic computer components, fundamental digital-device phenomena and miniaturization scaling potential.

By 1959, he had enough standing to get approval for his own research, and by 1962 he had produced a report about his vision and proposed research agenda, titled “Augmenting Human Intellect: A Conceptual Framework”.

This led to funding from ARPA to launch his work. Engelbart recruited a research team in his new Augmentation Research Center (ARC) and embedded a set of organizing principles in his lab, which he termed the “bootstrapping strategy”. The ARC continued developing along the lines of Engelbart’s vision to support the bootstrapping/augmentation process. Throughout the ’60s and ’70s, the lab pioneered an elaborate hypermedia-groupware system called NLS (for “oN-Line System”), most of whose now-common features were conceived, fully integrated and in everyday operational use by the early 1970s.

And then came the ‘MOUSE’:
As part of his vision for interactive computing, Engelbart came up with the idea of a small box that could move a cursor around a screen through corresponding movements on a desk.

With his colleague William English, he made a prototype from wood and metal wheels, attached to the computer with a cable; the “tail” leading out of the device led to it being described as a “mouse”.
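The essential idea — accumulating the wheels' movement deltas into a clamped on-screen cursor position — can be sketched in a few lines (the screen size and delta values here are arbitrary, for illustration only):

```python
def move_cursor(pos, dx, dy, width=1920, height=1080):
    """Translate wheel/sensor deltas into a new cursor position,
    clamped to the screen edges."""
    x, y = pos
    x = max(0, min(width - 1, x + dx))
    y = max(0, min(height - 1, y + dy))
    return (x, y)

# Each physical movement of the device yields a (dx, dy) delta.
pos = (100, 100)
for dx, dy in [(50, 0), (0, -30), (-500, 0)]:
    pos = move_cursor(pos, dx, dy)
print(pos)  # (0, 70)
```

Every pointing device since — optical mice included — still reports relative deltas that the operating system integrates in exactly this fashion.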

In December 1968, Engelbart demonstrated several technologies that would shape computing in the later part of the 20th century. These included the mouse, a visual display, video conferencing, and the use of the first two computers on the ARPAnet, the predecessor of today’s internet. This demo has been penned down in the books of history as the “mother of all demos”.

Engelbart was granted the patent for the mouse in 1970, but it did not enter the commercial market for a decade; it was in the late 1970s that Apple decided to adopt the mouse for commercial use.

Achievements:
Apart from the famous “mother of all demos”, Engelbart was awarded the Yuri Rubinsky Memorial Award in December 1995, at the Fourth WWW Conference in Boston. In 1997 he was awarded the Lemelson-MIT Prize of $500,000, the world’s largest single prize for invention and innovation, as well as the ACM Turing Award. He received The Franklin Institute’s Certificate of Merit in 1996, the Lifetime Achievement Award from ACM SIGCHI in 1998, and the Benjamin Franklin Medal in Computer and Cognitive Science in 1999. In December 2000, United States President Bill Clinton awarded Engelbart the National Medal of Technology, the United States’ highest technology award. He was also honored with the Norbert Wiener Award, and in June 2009 the New Media Consortium recognized him as an NMC Fellow for his lifetime of achievements. In 2011, Engelbart was inducted into IEEE Intelligent Systems’ AI Hall of Fame, and in May 2011 he received an honorary doctorate from Yale University, their first Doctor of Engineering and Technology.

His demise:
It was on July 2, 2013 that the gleaming star of technical excellence finally dimmed. Engelbart’s death came after a long battle with Alzheimer’s disease. He was an extraordinary man, quintessential in his field; many in the community mourn his passing, for his was a life of service, compassion, perseverance and excellence.

- The Editorial Team

Mouse-Box is planned to pack, into a standard-looking mouse, an ARM Cortex-based processor, 128 GB of flash storage, and two USB 3.0 ports. The mouse would output to a monitor, while a wireless charging pad doubling as the mouse pad would send power to an optional battery.


featured article

Project Zero: Zeroing in on zero day vulnerabilities

Every so often we see major bugs in computer systems that affect millions of machines and billions of people. A prime instance of this is Heartbleed. It took the world by storm and affected more than 300,000 public web servers; and given that the number of web users grows by the second, the number of people affected was significant as well. A simple shell script could make a server spill out more data than was asked for. April 2014 was a shocking time for a large number of server administrators and hundreds of thousands of users, as they stood helpless amidst a “buffer over-read” vulnerability in the OpenSSL cryptography library. The privacy and security of a large number of users was at stake. This kind of flaw is known as a zero-day vulnerability: one that the creators of the software do not know about, and which, taking advantage of their innocence, is exploited by an intruder.
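The class of bug behind Heartbleed — a buffer over-read — can be simulated in miniature. The toy ‘server’ below trusts the length claimed by the request instead of the payload's actual size; the buffer contents and names are invented purely for illustration:

```python
# Toy "server memory": the request buffer sits right next to sensitive data.
SERVER_MEMORY = bytearray(b"PING" + b"secret-session-key" + b"...")

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Echo back `claimed_len` bytes, Heartbleed-style."""
    buf = bytearray(SERVER_MEMORY)
    buf[0:len(payload)] = payload
    # BUG: no check that claimed_len <= len(payload)
    return bytes(buf[0:claimed_len])

# An honest request echoes only the payload...
print(heartbeat(b"hi", 2))   # b'hi'
# ...a malicious one reads past it and leaks adjacent "memory".
print(heartbeat(b"hi", 20))  # b'hiNGsecret-session-k'
```

The one-line fix — rejecting requests whose claimed length exceeds the payload — is essentially what the real OpenSSL patch did.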

The Initiators
Google is very cautious and takes its own security seriously, be it the two-step verification for Google Mail or the encryption of data moving between its data centers. It tries to ensure beforehand that its products have no security loopholes, and with a sophisticated standard of security research, Google has managed to keep bugs at bay. Google also contributed to discovering and fixing the Heartbleed vulnerability: within a week of its discovery, a few Google employees submitted a patch as a fix for the bug. This marked the beginning of a new project called Project Zero, an initiative by Google to prevent security attacks on popular software.

The Project: For Security

Project Zero was announced on the 15th of July, 2014. It is a committee of brilliant security researchers at Google, and it serves the greater good: the sole aim of this lot is to provide a safer internet for all users. The project is entirely transparent, which only goes to prove the noble intentions of Google. The team maintains a database containing records of all bugs found, with an active bug tracker at code.google.com/p/google-security-research/. On the site, everyone can see each and every bug that is filed, after its patch has been created. Project Zero also has an active blog at googleprojectzero.blogspot.in, where regular updates regarding recent bugs and their fixes are published. These updates prove quite helpful to the general public.

Google made some news when it reported a security issue in Microsoft's Windows 8.1 in September 2014. Three months after notifying Microsoft, Google released the bug publicly, even though Microsoft was still working on getting rid of it. The security issue pointed out by Google was related to sandboxing. The Linux and OS X kernels provide kernel security through seccomp and sandbox-exec respectively. Windows recently made some additions to sandboxing, like the AppContainer model and the ability to disable Win32k system calls. However, some versions of Google Chrome do not request this security from the OS, and include functionality that disables the Win32k security flag. Using one of these configurations, an exploit can be made which allows elevation of privileges.

Project Zero: Zeroing in on zero-day vulnerabilities

11 P.I.N.G. ISSUE 11.0 FEBRUARY 2015 CTD.CREDENZ.INFO

Owing to an inner function that duplicates a given memory map, a process can be duplicated with write permissions. As an outcome, normal users can gain administrator rights, putting the entire system in jeopardy.

Microsoft Windows has millions of users, and if this bug were exploited, innumerable users would become victims of privacy breaches, data theft and the like. It certainly wouldn't have done any good to Windows' reputation in the market. With the help of initiatives like Project Zero, the security of internet users all over the world can be improved. If security flaws are pointed out before they become zero-day attacks, the internet becomes safer. This is one of the ways to make a system infallible, enabling users to use applications in a carefree manner. Google has set an example for all other organisations, encouraging them to promote research in security and to reduce security gaps not only on the internet, but also in standalone applications and on mobile devices.

The Project Zero team has a special member, George Hotz, who unlocked Apple's iPhone, allowing it to be used with carriers other than AT&T. Apple ignored Hotz and quietly worked on the flaws he exposed. When he reverse engineered the Sony PlayStation 3, the company sued him, and came to terms with him only after he agreed not to hack any Sony product thereafter.

However, when George Hotz found some gaping holes in Google Chrome, Google rewarded him with $150,000 and offered him an opportunity to be a security researcher at the company. This shows a major difference in attitude between Google and some other companies.

Project Zero is already proving vital in solving several security issues. It is a perfect demonstration that if major companies come up with open, transparent research projects such as this, it will benefit both them and all computer users. It will also increase the prowess of the computing society and help secure the web and our systems for all users. Hopefully, we'll see a day when industrial espionage ends, data and identity theft are erased from the face of the internet, and user privacy is no longer a concern.

- Vaibhav Tulsyan
Pune Institute of Computer Technology, Pune

PalmSecure™ is the new biometric identification device which authenticates users based on the vein pattern in the palm of their hands. This technology is much more economical, easier to manage, and more reliable than traditional methods of identification.



technocrats

Having X-ray vision has been a dream for humans, and researchers at the Massachusetts Institute of Technology (MIT) have come a step closer to making this dream a reality. With the help of Wi-Fi signals, they are able to track the movements of a person even behind an obstacle as solid as a bulky wall.

Scientists claim that a person's movements can be recorded just like a shadow, giving us the impression that the wall is absent. The abstract technology in movies that lets soldiers predict an enemy's movements behind a wall is now real. This device is called Wi-Vi.

The concept is based on avoiding unnecessary reflections. When a low-power Wi-Fi signal is transmitted towards a wall, only a small fraction passes through and most of it is reflected by the wall. These reflections have to be eliminated. To achieve this, there are two antennas transmitting similar signals and one receiving them. The second transmitter sends a signal with the opposite phase, i.e. a crest for a trough and vice versa, with respect to the first transmitter. When these signals bounce off static bodies like a wall, chair or table, before and after penetration, the opposite signals cancel out, so the receiver only picks up the signals reflected from moving bodies. Hence, as a person moves towards or away from the receiving antenna, the period of the reflected waves changes, which gives an accurate position at an accurate time. Previous attempts have involved bulky antennas and a stack of transmitters that cannot be carried by a person. This technology, however, can easily be made to fit in our mobile phones with just a small change in hardware.
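A rough numerical sketch of this cancellation idea, with invented channel gains (the real system of course works on actual radio hardware): the second antenna's transmission is weighted so that echoes from static objects cancel at the receiver, while a moving person's echo, whose phase keeps changing, survives the null.

```python
import cmath

# Sketch of Wi-Vi-style nulling with invented channel gains. Two transmit
# antennas send x and w*x; the weight w is calibrated so that echoes from
# static objects (walls, furniture) cancel at the receiver. A moving
# person's channel keeps changing, so their echo escapes the cancellation.

h1_static = 0.80 + 0.30j          # combined static echoes heard via antenna 1
h2_static = 0.50 - 0.20j          # ...and via antenna 2 (both constant)
w = -h1_static / h2_static        # null weight chosen at calibration time

def received(h1_person: complex, h2_person: complex) -> complex:
    """Signal at the receiver for a unit transmission from both antennas."""
    static_part = h1_static + w * h2_static   # zero by construction
    person_part = h1_person + w * h2_person   # drifts as the person walks
    return static_part + person_part

print(abs(received(0j, 0j)))      # ~0: an empty (static) room is silent
step = 0.1 * cmath.exp(1j * 0.8)  # person's echo after taking a step
print(abs(received(step, 0j)))    # clearly above the null: motion detected
```

Tracking the phase of that residual over time is what lets the system follow the person's position behind the wall.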

Although this concept is similar to imaging, the difference is that it can be easily optimised and has no large resource requirements. It also eliminates the cost of the radar and sonar equipment that creates 3D images of objects behind an obstacle. Moreover, it can be implemented almost anywhere, thanks to the abundance of Wi-Fi emitters in our neighbourhoods. In the future, the OUTERNET technology will make its use even handier.

There are many more exciting applications of Wi-Vi. Paired with a phone, it could tell you who is present around you. It can be used to track humans trapped under debris. By increasing the accuracy of the equipment, it may become possible to use it in medical science for tracking foetal movement. The gaming industry stands to gain the most: the shadow-boxing concept can literally be implemented with this system, so robot fights could be controlled by gesture alone, without any coding.

Wi-Vi can also be used to identify simple gestures made behind a wall, and to combine a sequence of gestures to communicate messages to a wireless receiver without carrying any transmitting device. Wi-Vi has brought greater compatibility and ease to gesture control. So don't be surprised if, in the future, you see a kid effortlessly asking a robot to dig his garden with a simple gesture.

- Prithviraj Pawar
Pune Institute of Computer Technology, Pune

Wi-Vi: X-ray vision in your hand


The world today is obsessed with 3D experiences in movies, gaming, devices, etc.

The basic idea behind this 3D technology is to provide a more lifelike experience, a feel of having the events unfold right in front of your eyes. However, it has limitations. Thus, technologists had to come up with a different term: telepresence. Telepresence is often viewed as an advanced form of tele-operation, a technology that has been the subject of research for nearly fifty years.

The most interesting thing about this technology is how users are able to manipulate remote objects, with several different manipulation techniques that highlight the expressive nature of such systems.

Physical telepresence is a set of technologies which allow a person to feel as if they were present, to give the appearance of being present, or to have an effect, via telerobotics, at a place other than their true location. As the definition suggests, it is the next stage of 3D.

Physical telepresence can be used in the medical field, as surgeons can use it to perform surgery on patients suffering from communicable diseases. Having no direct physical contact with the patient reduces the chances of the surgeon contracting the disease. It is also the need of the hour for organizations like NASA and ISRO, where the workspace is out of reach, and it is used by industries to increase collaboration between their workers.

A company known as VGo has introduced a device which can help handicapped students attend school through physical telepresence while they actually sit at home. The device is a remotely controlled robot equipped with a camera and a microphone to make the operator aware of the surroundings, along with a speaker and a display that shows the operator on the device to provide the feel of his presence.

An innovative use of the technology was shown by a group of students at the Massachusetts Institute of Technology (MIT), who invented a new device that could express the physical shape of people, hands, or anything else on the other side. It was also used to display complex 3D mathematical shapes that are mentally challenging to imagine. Their system uses a Kinect to capture the actions performed by the user and replicate them mechanically on the other side. The signals are transmitted over a Cisco C40/C60 telepresence codec, and are then rendered by wooden blocks that project outwards to represent the z-axis of the 3D display, while the multiple blocks along the plane represent the x and y co-ordinates.
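The depth-to-pins step of such a system can be sketched as follows. The grid size, depth window and pin travel below are invented for illustration; the real device drives mechanical actuators from actual Kinect frames:

```python
# Sketch of the depth-to-pin mapping behind a shape display: each Kinect
# depth reading (mm) becomes an actuator height (mm) on a coarse pin grid.
# The depth window and pin travel below are invented for illustration.

NEAR_MM, FAR_MM = 500, 1500     # usable depth window of the sensor
PIN_TRAVEL_MM = 100             # how far a wooden pin can extend (z-axis)

def depth_to_pins(depth_frame):
    """Map a 2-D list of depth samples to pin extensions (closer = taller)."""
    pins = []
    for row in depth_frame:
        pin_row = []
        for d in row:
            d = min(max(d, NEAR_MM), FAR_MM)          # clamp to the window
            closeness = (FAR_MM - d) / (FAR_MM - NEAR_MM)
            pin_row.append(round(closeness * PIN_TRAVEL_MM))
        pins.append(pin_row)
    return pins

frame = [[500, 1000],     # a hand 0.5 m away over a table 1 m away
         [1500, 1500]]    # empty background
print(depth_to_pins(frame))   # [[100, 50], [0, 0]]
```

Run at frame rate, this simple mapping is enough to make a remote hand "rise" out of the table on the other side.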

Physical telepresence is a great technology, and we hope to see it in every home in the future, as it would provide an advanced platform for entertainment, social media and educational purposes; the list doesn't end there. Even online shopping markets could show products to buyers as if they were present in their homes before selling to them. The technology could also replace the normal televisions that we have in our houses with an advanced interactive entertainment and telecommunication system.

- Ashay Anil Pable
Pune Institute of Computer Technology, Pune

Physical Telepresence: The newer reality



Offline Hacking: Hacking Redefined

When the idea of hacking came into prominence, it took the cyber world by storm. It injected a feeling of insecurity among users all over the world, and specialists started scratching their heads to counter the threat. A common piece of advice was to reduce the use of the Internet to keep our data secure. To this day we follow it blindly and think we won't get hacked if we are offline. But in this age of smart thinking, do you really believe you are safe from hackers if you just go off the Internet? Well, not anymore…

In a recent study it was revealed that hacking is still possible when you are offline. This can be done with the help of small, imperceptible electronic waves which are emitted by your device. We are generally prone to attack while in a public place like a café, a shopping mall or an airplane: in short, areas where we work wirelessly. In a wired network, there are numerous tried-and-true methods of hacking, and for a while it was believed that we would be susceptible only if we joined the public network being offered, or otherwise made our specifications available; our data would not remain private on those networks. But now, even if we steer clear of untrusted networks or just stay offline for the duration of our foray in the "public place", we could still be under attack.

How?

You must be wondering how it is even possible to get hacked if your system is completely disconnected from the outside world. Theoretically, a computer that is not connected to any network, wired, wireless or Bluetooth, should be completely inaccessible. Well, this theoretical statement is not true in every sense. The machine is inaccessible in the literal, networking sense of the word, but it is not entirely isolated. That is, even with no actual medium of communication established between two devices, one of them (the unsuspecting client) can still be spied upon by an erudite hacker. Every action on our device, be it a keystroke, a spell check or a new program launch, generates a small electronic signal in response. These signals can be "listened in" on by hackers: they can be captured using a receiver and then stored or forwarded. Hardware developers were aware of these emissions while building their products, but wrongly believed they were too inconsequential to actually pose a threat. As history tells us, though, a flaw ignored early transforms into a big threat later. Recently, hackers have devised methods of capturing exactly those inconsequential emissions, which are then used as the ultimate weapon for hacking and ruining one's privacy.

Consider that you are sitting at a coffee shop and have just booted your device. During login, the password that you enter is emitted as an electronic signal. A nearby receiver, disguised as a cell phone charger (for example), is kept plugged into a power socket and secretly captures all signals in its vicinity. A hacker can obtain these signals from the receiver and hence gain access to our information. Any other information that we view or enter can be captured similarly. There are also other forms of emission than electromagnetic signals, a few of which are mentioned below.

Researchers at the Georgia Institute of Technology are investigating where these information 'leaks' originate. The leaks, technically called 'side-channel signals', can be measured and categorized in terms of their strength. These side-channel emissions can be captured using various spying methods: electromagnetic emissions using antennas in briefcases, acoustic emissions using hidden microphones, power-fluctuation information using fake battery chargers, and so on.

So what now?

This new discovery, which makes us highly vulnerable to wrongdoers, is startling. But it is believed that these side-channel signals can be restricted fairly easily. Researchers are studying the strength of these signals as emitted by smartphones, PDAs, tablets and laptops; these statistics will provide a lot of leverage when the hardware design of the devices comes up for revision. At the Georgia Institute of Technology, researchers are measuring the vulnerability using a metric called 'signal available to attacker' (SAVAT), which provides a measure of the strength of the signal emitted. Every process or activity emits a signal of unique strength, so analyzing and testing all the different possibilities will take some time. One huge advantage for hackers in this method is that this kind of spying is very difficult to detect or track, as there is no trail at all, everything being offline. This advantage will inevitably attract the syndicate of hackers to new ways of offline hacking.
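The idea behind such a metric can be illustrated with a toy calculation. The simulated "emission traces" and the difference-of-means score below are invented for illustration; they are not the actual SAVAT definition, which is measured on real hardware:

```python
import random
import statistics

# Toy illustration of a SAVAT-like idea: score how distinguishable the
# emission traces of two activities are. The simulated traces and the
# difference-of-means metric are invented, not the real definition.

random.seed(42)

def trace(mean_level: float, n: int = 500) -> list:
    """Simulated emission samples for one activity (signal + noise)."""
    return [random.gauss(mean_level, 0.5) for _ in range(n)]

def distinguishability(a: list, b: list) -> float:
    """Gap between the two activities' mean emissions, in noise units."""
    pooled_sd = statistics.pstdev(a + b)
    return abs(statistics.fmean(a) - statistics.fmean(b)) / pooled_sd

keystroke = trace(mean_level=2.0)   # a "loud" activity
idle_loop = trace(mean_level=0.3)   # a "quiet" one
print(distinguishability(keystroke, idle_loop) > 1.0)   # True: easy to spy on
print(distinguishability(keystroke, keystroke) < 0.1)   # True: nothing to see
```

The intuition carries over: the larger the score, the less effort an attacker needs to tell one activity from another, so hardware designers want every pair of operations to score low.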

Now if you think you are safe because you use a smartphone, think again: researchers are now studying smartphones, whose compact design and restrictions on idle and in-use power may make them even more vulnerable.

Researchers are now in a dilemma: whether to praise the unexpected ingenuity of offline hacking or to worry about data security. Online hacking was not always done for a negative cause; organisations used it for many productive things, which gave birth to the term 'Ethical Hacking'. Similarly, offline hacking can prove to be a great boon for us if done ethically. Let's hope that it stays ethical.

- Shruti Palaskar
Pune Institute of Computer Technology, Pune

Multiferroics are materials that are both ferroelectric and magnetic. Researchers are using these materials to develop bendable devices by manufacturing a thin film that keeps its useful electric and magnetic properties even when highly curved.



Nuclear energy is the latest mode of harnessing energy, and nations are busy creating new technologies that use this powerful energy source. Nuclear fusion and nuclear fission are the techniques for harnessing it.

One of the main problems with nuclear-fission reactors is producing the enriched Uranium required for the reaction. Even though it is said that splitting any Uranium atom can produce energy, in practice only one of the isotopes of Uranium (U-235) is used. The isotope found in abundance, Uranium-238, does not split when struck by a neutron under the conditions found in a nuclear reactor.

Uranium-235 makes up only 0.7% of natural Uranium deposits. Reactors need a higher concentration of Uranium-235 (3-4%) than is present in nature to sustain a chain reaction, in which the continual splitting of Uranium-235 nuclei produces neutrons, which in turn split more Uranium nuclei and produce more neutrons. Even higher concentrations of Uranium-235 are required to make bombs.

Older Methods

Separating isotopes of the same element is a formidable task. One of the oldest processes is gaseous diffusion, based on the fact that at a given temperature lighter atoms or molecules move faster than heavier ones. Since Uranium occurs in solid form, it first has to be converted into Uranium hexafluoride, which is gaseous above 56°C. This process is, however, very expensive. Diffusion plants occupy large areas, covering hundreds of acres, and each requires an enormous amount of electricity, consuming hundreds of millions of dollars worth of it each year. Nor is the plant very efficient at enriching Uranium, as one third of the Uranium-235 remains unused.
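The poor per-stage gain of gaseous diffusion can be checked with a back-of-the-envelope calculation. By Graham's law, the ideal single-stage separation factor for Uranium hexafluoride is the square root of the ratio of the molecular masses; chaining ideal stages shows why a diffusion plant needs hundreds of them (real plants, with mixing losses, need even more):

```python
import math

# Back-of-the-envelope numbers for gaseous diffusion of UF6 (ideal,
# lossless cascade; real plants do worse). By Graham's law the best
# single-stage separation factor is sqrt(m_heavy / m_light).

M_F = 18.998
m_235 = 235.044 + 6 * M_F      # U-235 hexafluoride, ~349 g/mol
m_238 = 238.051 + 6 * M_F      # U-238 hexafluoride, ~352 g/mol

alpha = math.sqrt(m_238 / m_235)
print(f"per-stage separation factor: {alpha:.4f}")   # only ~1.0043

def ideal_stages(feed_frac: float, product_frac: float) -> float:
    """Stages an ideal cascade needs to move between two U-235 fractions."""
    ratio = (product_frac / (1 - product_frac)) / (feed_frac / (1 - feed_frac))
    return math.log(ratio) / math.log(alpha)

# Natural uranium (0.711% U-235) up to reactor grade (3%):
print(f"stages to reactor grade: {ideal_stages(0.00711, 0.03):.0f}")  # ~341
```

A gain of less than half a percent per stage is what forces those acre-covering cascades and the enormous electricity bill.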

Another technology coming up is the gas centrifuge, also based on the difference in weight between the isotopes. The separation is achieved by spinning gaseous Uranium compounds. Such a plant might cost up to a billion dollars to set up, but it would use only 10% of the energy of a gaseous-diffusion plant while delivering comparable output.

Laser technology

A lot of research is being done to reduce the disadvantages of the older methods and to find newer ways to harness nuclear energy. After much experimentation, scientists have devised a new method using laser techniques for enriching nuclear fuels. A laser can be tuned to energize a single isotope of the element, and the desired isotope can then be extracted by reacting it with certain chemicals.

Laser enrichment is, however, quite complicated in practice. One approach, involving Uranium vapour, is being pursued at the Lawrence Livermore National Laboratory. Sadly, vaporizing Uranium means working at high temperatures, and the vapours were found to be nasty and corrosive.

Nuclear Fuel With Lasers: Uranium Enrichment

The basic idea behind this technology is to tune a laser emitting light in, or near, the visible region of the spectrum to the exact wavelength absorbed by Uranium-235, and then to collect the energized Uranium-235 either by zapping it with another laser to ionize it, or by reacting it chemically with another material.

Researchers at the Los Alamos National Laboratory are following a different laser approach, the molecular approach. This starts with Uranium hexafluoride gas, or perhaps another Uranium compound. A laser tuned to a precise wavelength in the infrared region selectively energizes molecules containing Uranium-235. A second laser, emitting ultraviolet light, then strikes the Uranium hexafluoride and provides enough energy to free a fluorine atom from the molecules containing Uranium-235. The resulting Uranium pentafluoride molecules can easily be separated from Uranium hexafluoride, as they have different chemical properties and liquefaction temperatures.

Both the atomic and molecular laser processes consume very little energy per atom of Uranium-235 in the finished product, because the laser is used to excite only the rare isotope. This advantage counteracts the low efficiency of the lasers involved.

The most important advantage of this process is its selectivity. By picking out Uranium-235 efficiently, the percentage of Uranium-235 left in the wastes can be brought down to 0.05-0.08%, about a tenth of the natural level (0.7%). The Department of Energy is particularly interested in this process for its potential to recover Uranium-235 from existing nuclear waste, which would increase Uranium supplies by 30-40%.
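That 30-40% figure can be sanity-checked with a simple uranium mass balance. The tails assays below follow the percentages quoted above; the standard feed-to-product relation is F/P = (x_p - x_w) / (x_f - x_w):

```python
# Sanity check of the 30-40% figure with a simple mass balance.
# Feed needed per unit of product: F/P = (x_p - x_w) / (x_f - x_w), where
# x_f, x_p, x_w are the U-235 fractions of feed, product and waste (tails).

def feed_per_product(x_feed: float, x_product: float, x_waste: float) -> float:
    return (x_product - x_waste) / (x_feed - x_waste)

x_feed, x_product = 0.00711, 0.03    # natural uranium -> reactor grade

diffusion = feed_per_product(x_feed, x_product, x_waste=0.003)   # assumed
laser     = feed_per_product(x_feed, x_product, x_waste=0.0006)  # 0.05-0.08%

saving = 1 - laser / diffusion
print(f"feed saved by laser enrichment: {saving:.0%}")   # roughly a third
```

The 0.3% diffusion tails assay is an assumed typical value; with the laser process's 0.06% tails, roughly a third less natural uranium feed is needed per unit of product, which squares with the stated supply increase.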

Offsetting these advantages are sizable problems. The vapor process requires handling Uranium vapors, which, as noted earlier, are nasty stuff. The molecular process requires a specific infrared laser which has not yet been developed. Both the atomic and the molecular process require another laser for the second step of freeing an atom from the excited molecule or an electron from the excited atom, and neither of these is as simple as it sounds.

The two laser processes are two of the three advanced isotope separation processes being studied by the Department of Energy (the third is a laserless plasma process being developed at TRW, Inc.). The Department hopes to pick one of these technologies for further development.

Los Alamos and Livermore have already produced macroscopic quantities of enriched Uranium, but the processes are not yet ready to be put to work. Years of engineering will come first, and the laser approach is not likely to begin enriching Uranium on a large scale much before the end of the decade. However, efforts are progressing rapidly, and one day nations may well use laser techniques to enrich their nuclear fuels.

- Sanhita Dhamdhere
BITS Pilani, Goa Campus

Water drop lens: invented using the process of electro-wetting, wherein a water drop is deposited on a metal substrate and covered by a thin insulating layer. When a voltage is applied to the metal, it modifies the angle of the liquid drop. These lenses can be used for low-cost construction.



Sometimes, in order to create the future, we need to go back to our past. The simplicity of the past is often sacrificed for the functionality of the future, and the web is a perfect example of this phenomenon.

Polymer is Google's new JavaScript library, yet another addition to their web arsenal, and it is spearheading the revolution called Web Components. Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself. This means that you can now use a simple HTML tag to do practically anything, without worrying about the scripts running inside it. It is encapsulation at its elegant best; it is utter brilliance and undoubtedly the best out there. It brings back the pure joy of declarative tags to web development and simplifies the job at hand. Polymer is capable of changing the way the web is perceived by everyone, and it is built on the philosophy that "everything is an element".

“Elements are pretty great. They’re the building blocks of the web. Unfortunately, as web apps got more complex, we collectively outgrew the basic set of elements that ships in browsers. Our solution was to replace markup with gobs of script. In that shift, we’ve lost the elegance of the element.”-The Polymer Team.

It goes back to our roots. Every browser ships with default elements, like the <div> tag. Some elements are UI-based and some are utility-based. As web applications grew more complex, these elements failed to keep up, and markup was replaced, as the folks at Polymer say, with 'gobs of script'. This made web development ugly and hard to comprehend, forcing web developers to ponder the problem ever more. Human-readable HTML was replaced by complex scripts nested inside tens of div tags. Read the source of any decent website and you'll be amazed at the number of nested div tags; you might even develop an instant aversion to this way of web development. Everybody wished for a way to develop for the web without complex scripts driving them crazy.

Then the Web Components technology redefined what an HTML tag could do, in effect granting it superpowers.

“In the old world, script was your concrete, and the solution to most of your problems was to use gobs of it. In the new world, elements are your bricks; script is like mortar. Select the bricks that fit your needs most closely and use only a judicious amount of mortar to hold them together. This is what we mean when we say everything is an element.” –The Polymer Team.

Say you need to build a login system. Polymer allows you to do it with simple tags like <username> or <password>; yes, as simple as that. Developers can create such custom elements and then reuse these components anywhere, which only goes to prove the advantage of Polymer. We need not worry about their implementation; this encapsulation is the key. You can take a few low-level elements and combine them into a more powerful element with its internals safely encapsulated, which makes web development simpler and enjoyable. You can use such elements to make even bigger elements, and combine some more to build a completely encapsulated, reusable app; all the more reason to switch to Polymer. All this with just declarative tags, like <head> and <body>. Reuse is done through the 'HTML Imports' specification of Web Components, which allows the use of HTML documents within other HTML documents.

Polymer: Future of the web. Today.

Polymer packs the power of scripts together with the beauty and ease of markup. It is open source and highly modular, which means you can directly use a lot of top-class elements made by the Polymer team and several other top developers, and reuse your own projects in other projects. Everything is modular. Since it runs on Web Components, a technology which isn't yet supported in all browsers, it needs a set of libraries called polyfills for compatibility with older browsers, which allows Polymer to be used on all platforms. Google Chrome and Opera have full support, Firefox is catching up fast, and IE is being IE.

Polymer also brings Material Design to the web through special Material Design elements called 'paper elements'. They allow you to build an immersive experience using Material Design: elevation, shadows, the ink ripple effect and transitions are all possible through Polymer. Again, you only need tags like <paper-button> to create buttons with full Material attributes. Polymer is magic: it lets you do what was considered impossible, and empowers you to build web apps that look equally beautiful on desktops and phones, with the functionality and looks of a native app. The Polymer team has released a few core elements, like <core-ajax>, which allows you to perform complex AJAX tasks with the ease of markup. Polymer also makes the transitions and animations usually associated with native apps possible in web apps.

Web Components have caused a tectonic shift in web development, and they are convincing us to switch. Polymer, along with x-tags from Mozilla, is going to completely revolutionize the way web apps are made. Although it might not yet be stable or mature enough for production-ready projects, getting on the Polymer train early will be beneficial. It is an amazing time to do web development, and Polymer is one of the main reasons!

- Aditya Shirole
Pune Institute of Computer Technology, Pune

Ishin-Den-Shin: a device capable of turning a human body into a microphone. It allows someone to wordlessly pass a recorded message just by touching another person's ear. A computer converts the audio into a looped recording, which is then converted into an inaudible signal and transmitted.



Microsoft has a knack for surprising us: first by launching DOS in the market under its name, and then by ruling the software industry for decades. Now Microsoft has surprised us yet again. While most of us were only hoping Microsoft would deliver their "next big thing", Windows 10, something was quietly creeping through the bushes. Microsoft's new HoloLens completely exploded on us, and we couldn't really dodge it like Neo did in 'The Matrix'.

The HoloLens uses holographic lenses that let us see digital content as actual physical objects in our daily life. You can now play Minecraft right on your dining table, or write emails while working out, and see it all as a 3-dimensional virtual model!

To mention a few impressive things made possible by the HoloLens: you won't be forgetting your grocery list anymore, as the HoloLens can project the list right in front of you. Whatever you imagine, in whatever way you want to modify it, you will be able to do physically using the HoloLens. Learning and research will be made a lot easier. The HoloLens is also going to change our forms of entertainment and have a huge impact on the gaming industry.

Well, you can now watch your football game pretty much anywhere! This 'next generation of computing' device is going to completely change the face of technology as we see it today; we will no longer be restricted to looking at technology through a few inches of glass. The HoloLens is wireless and uses no cords, and it doesn't need or use your phone in any way. It will be powered by Microsoft's very own Windows 10. The built-in spatial sound will let you hear the holograms with pinpoint accuracy (99.9%). The HoloLens features its own Holographic Processing Unit, known as the HPU, alongside another processor for the various sensors and for handling tasks like gesture and voice recognition.

Apart from this, Microsoft has also come up with applications for the HoloLens for billions of users worldwide. HoloStudio is an application for building 3-dimensional models; with it you can model a car, a bike, a plane, anything! It is expected that this application will soon be integrated with 3D printing, which in itself is a wonder. Another new application is OnSight, a project in collaboration with NASA that allows scientists and engineers to study the Martian environment using holographic techniques. If we cannot go to Mars, we can project it right here.

In so many words, the Microsoft HoloLens is a new-age masterpiece in a class of its own. The device, which may well usher in a whole new era of holographic computing, is going to be widely available this year itself. The advancement it will bring cannot be predicted; as they say, the possibilities are endless. For now it has no competitors, being in a league of its own, but you never know which company will come up with newer products to replace the HoloLens completely. Till then, let us adore and appreciate the quality work done by the researchers at Microsoft.

- Aditya P. Kulkarni
Pune Institute of Computer Technology, Pune

Microsoft HoloLens: The augmented reality


Want to create a digital doppelganger? Vizify can actualize this bizarre wish. These days sharing is the norm: updating your status on Facebook, tweeting the cricket score, sharing your location on Foursquare, posting pics to Instagram, pinning art to Pinterest, putting your resume on LinkedIn, and more. How about a website which lets you create an interactive and visually appealing infographic landing page? Vizify not only allows you to create your landing page, but also connects to your chosen social networks. It showcases the desirable crux of your online presence: for instance, choose the quote that best portrays your image, pick the information you want from the linked accounts, and hide what you don't desire.

Vizify combines all your digital breadcrumbs and defines you; it depicts who you are. Vizify is like a comic: a series of pictures to flip through to get to know someone. Co-founder and CEO Todd Silverstein has spoken about the inspiration behind Vizify, its future, and how they hope users will benefit from the service: "Vizify is where you can make great first impressions. This is your landing page to you as a person."

Vizify basically mines the web to amass data from various social networking sites, using statistical and machine learning software to collect it. It extracts, transforms and loads the data into a data warehouse, analyses it with various software applications, and exhibits the results in the form of pictures. Jeff Cutler and Eli Tucker built the initial UI of the site; the beta version then showed off the prowess of JavaScript. They used wider margins to take advantage of taller images and let more of the background peek through, and the main layout showed a bubble for each page in the bio, giving it a more graphical touch. In March 2014, Vizify was acquired by Yahoo, which may bring a similarly visual approach to its own data.
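The extract-transform-load flow described above can be sketched in a few lines. Everything here is hypothetical: the sources, field names and the "hidden" list are stand-ins for illustration, since Vizify's real pipeline was never published.

```python
# Hypothetical sketch of a Vizify-style ETL step: pull records from
# linked accounts, keep only the fields the user chose to show, and
# load the merged profile into a "warehouse" (here, just a list).

def extract():
    # Stand-ins for API pulls from the user's linked accounts.
    return [
        {"source": "twitter", "handle": "@jane", "posts": 120},
        {"source": "linkedin", "title": "Engineer"},
    ]

def transform(records, hidden=("posts",)):
    # Merge the breadcrumbs, occluding fields the user does not want.
    profile = {}
    for record in records:
        for key, value in record.items():
            if key != "source" and key not in hidden:
                profile[key] = value
    return profile

def load(profile, warehouse):
    warehouse.append(profile)
    return warehouse
```

Running `load(transform(extract()), [])` yields a single merged profile with the hidden fields stripped out, which is the gist of "choosing the crux" of an online presence.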

Vizify might help Yahoo fend off competition from its arch-rival Google and further strengthen its presence in the search-engine business.

Initially, Vizify had a small office in northeast Portland. Soon they received nearly 1.4 million dollars in funding from some top investors. As new members joined the team, a continuous trail of new features followed; in time an upgrade was needed, and so they started developing a beta version of the site from scratch.

They launched a mobile app for Vizify using some fascinating HTML5 animations, along with a premium plan which included custom domains, embedding, advanced analytics, and new page types. By the end of 2013, Vizify had nearly 100,000 new accounts and more than a million visitors to Vizify bios, videos, and cards. The service was terminated after the acquisition by Yahoo. I hope that Yahoo will bring out a completely new phase of the Vizify idea.

- Mayank Gupta
Pune Institute of Computer Technology, Pune

Vizify: Choose how the world sees you


Metamorphic Robots: The adaptive modular robotics

Imagine that you are travelling on a hilly road in the middle of the night and your car suddenly breaks down. The next couple of hours will be wasted just analyzing what exactly went wrong! Your schedule for the next day will be disturbed, and it will indeed be very frustrating. At that point, you might wish for something that could solve the problem in a few minutes. What if you had robots which could enter the machine, analyse the problem and fix it? You may think this thought is just a figment of your imagination.

But now, this thought need not remain just a thought. Introducing metamorphic robots: bots that can change their shape without any outside help. These innovative robots can transform without any intervention, which could be extremely beneficial in environments where people cannot go. In medical emergencies, for example, robots of this type could provide great help. They can be used for multiple purposes in research laboratories and can prove to be best friends to researchers. On a domestic level, such a robot could assume a worm-like shape to enter and fix your pipe, or help with your daily chores by adapting to different roles for different activities.

Working: The mechanism involves each module having two or more connectors for joining with other modules. Modules can contain electronics, sensors, computer processors, memory and power supplies, and they use actuators to manipulate their position in the environment and relative to each other. An interesting feature of these robots is the ability to automatically engage with or disengage from each other and change form according to their surroundings; this unique feature is what sets metamorphic robots apart from other robots. "Self-reconfiguring" or "self-reconfigurable" means that the mechanism is capable of using its own control system, such as actuators or stochastic means, to change its overall structural shape. "Modular" in "self-reconfiguring modular robotics" means that the same module or set of modules can be added to or removed from the system, as opposed to being generically modularized in the broader sense. The basic principle is to have a number of identical modules, or a finite and relatively small set of identical modules, in a mesh or matrix structure of self-reconfigurable modules. It is sufficient for a self-reconfigurable module to be a device produced in a conventional factory, where dedicated machines stamp or mould components and assembly-line workers assemble them to build each module.
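The attach/detach behaviour described above can be modelled as a plain graph of modules. This is a toy sketch for illustration only; the class and function names are invented, not taken from any real modular-robot framework.

```python
# Toy model of self-reconfiguring modules: each module keeps a set of
# links (its connectors), and a "robot" is one connected component.

class Module:
    def __init__(self, name):
        self.name = name
        self.links = set()

def connect(a, b):
    # Engage the connectors on both modules.
    a.links.add(b)
    b.links.add(a)

def disconnect(a, b):
    # Disengage; discard() tolerates modules that were never linked.
    a.links.discard(b)
    b.links.discard(a)

def component(root):
    # All modules reachable from root, i.e. one physical robot.
    seen, stack = {root}, [root]
    while stack:
        for neighbour in stack.pop().links:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen
```

For instance, a three-module "worm" can detach its tail module and become two independent structures, which is exactly the engage/disengage capability the paragraph describes.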

Advantages: Functionally, they are very productive in the sense that they are more robust and adaptive to the environment. The transforming ability allows them to take on new morphologies, such as a legged robot or a snake robot, to perform new tasks. Economically, self-reconfiguring robotic systems can potentially lower overall robot cost by building a range of complex machines out of a single type (or relatively few types) of mass-produced modules.

This concept has not yet been fully realised. A self-reconfiguring robot may be less efficient than a custom robot built for a specific task, but the need of the hour is a single robot for multiple tasks that would normally require several robots.

Applications: Metamorphic robots would be a blessing from above for space explorers carrying out expeditions to the beyond. Since these expeditions are long-term, it matters that such robots are self-sustained, capable of self-repair and able to handle unforeseen situations. In addition, space missions are highly volume- and mass-constrained; sending one robot system that can reconfigure to achieve many tasks may be more effective than sending many robots that each do one task.

Yet another interesting application is the telepario: a moving, physical, three-dimensional replica of a person or an object, so life-like that human senses would accept it as real. This would eliminate the need for cumbersome virtual-reality gear and overcome the viewing-angle limitations of modern 3D approaches. One aspect of this application is that the main development thrust is geometric representation rather than applying forces to the environment, as in a typical robotic manipulation task. This project is widely known as Claytronics.

A more immediately appealing application is that consumers of the future could keep a container of self-reconfiguring modules in their store room or garage. When the need arises, they can call on these robots to "fix the bulb" or "clean the garbage", and the bot does the task.

Conclusion: Efforts have been made in the past to bring this scientific marvel into reality; Moteins (2011), Sambot (2010) and Roombots (2009) are some examples. Since the early demonstrations of modular self-reconfiguring systems, their size, robustness and performance have been continuously improving. There are, however, several key steps necessary before these systems realize their promise of adaptability, robustness and low cost. If they do, this innovation could bring a scientific breakthrough into the lives of all people; this giant leap in robotics will change the idea of the world we live in! The idea does, of course, invite some speculation over its functionality in the future.

- Aishwarya Naik
Pune Institute of Computer Technology, Pune

CoeLux - An Italian company, CoeLux, has developed an innovative new technology that is trying to shake up the way people think about "artificial light": it recreates the look of sunlight through a skylight so well that it can trick both human brains and cameras.


MQTT: An IoT connectivity protocol

Ever imagined a smart washing machine recommending a repairman? Or your wardrobe tweeting you about a sale in a nearby store that has the perfect jeans for you? All this is possible in today's world with the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure, viz. the Internet of Things (IoT).

As the number of devices connected to the Internet continues to grow every day (by the year 2020, approximately 57,000 new objects are expected to connect to networks every second), IoT brings new challenges: a real-time, event-driven model, listening to every event as it happens, publishing information one-to-many, sending small packets of data from small devices, and so on. For mobile and IoT applications, instant messaging is much more efficient and fast than the traditional HTTP request/response, which requires the response status of the publisher, the exact destination of the publisher, and the consumer to be available at all times; all of this adds latency to the instant communication IoT needs. This is where MQTT takes over from HTTP.

MQTT (Message Queue Telemetry Transport) is a machine-to-machine, i.e. IoT, connectivity protocol. It is an open and extremely lightweight publish/subscribe messaging transport, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium. For example, a message that needs 0.1-1 kB over HTTP can be sent with as little as 2 to 4 bytes of protocol overhead using MQTT, assuring faster communication.

Operation: An MQTT control packet consists of a fixed header, a variable header and a payload. The fixed header indicates the type of control packet, the content of the variable header varies depending on the packet type, and the payload forms the final part of the packet, again depending on the packet type. Some of the MQTT control packets are CONNECT, PUBLISH, SUBSCRIBE, UNSUBSCRIBE and DISCONNECT.
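The fixed header mentioned above also carries the packet's "remaining length", encoded in a variable number of bytes: 7 payload bits per byte, with the high bit flagging that another byte follows. A small sketch of that encoding, following the scheme in the published MQTT 3.1.1 specification:

```python
def encode_remaining_length(n):
    # MQTT fixed-header "remaining length": 7 payload bits per byte,
    # most significant bit set on every byte except the last.
    if not 0 <= n <= 268_435_455:  # 4-byte maximum defined by the spec
        raise ValueError("remaining length out of range")
    encoded = bytearray()
    while True:
        byte, n = n % 128, n // 128
        encoded.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(encoded)
```

For example, a remaining length of 321 encodes as the two bytes 0xC1 0x02, which is part of how MQTT keeps small messages down to a handful of bytes.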

Every message is published to an address known as a topic. Clients may subscribe to multiple topics, and every client subscribed to a topic receives every message published to it. MQTT follows a client/server model: every sensor is a client that connects to a server, known as a broker, over TCP. Topics are hierarchical, allowing different layers of subscription. Security is implemented through username and password authentication of clients connecting to brokers. MQTT implements three levels of Quality of Service (QoS):

0 - At most once: messages are delivered according to the best efforts of the environment; message loss can occur.
1 - At least once: messages are assured to arrive, but duplicates can occur.
2 - Exactly once: the message is assured to arrive exactly once.

Some client platforms include C/C++, embedded MQTT-SN (for sensor networks), Android, JavaScript, Python, PHP and Eclipse Paho, while some major brokers are IBM MessageSight, Mosquitto, RabbitMQ, Apache ActiveMQ, HiveMQ, etc.

The operation of MQTT is publish/subscribe: when a client connects to a broker, it can either publish messages or subscribe to receive messages from various channels. Depending on the QoS, the messages will then be transmitted. For example, clients that subscribe to the topic named "/home/data" will receive messages published to "/home/data" through the MQTT broker over the TCP/IP layer.

MQTT allows wildcard subscriptions, i.e. a client can receive messages from a certain hierarchical level of a topic. + is the single-level wildcard: a client subscribed to the topic "/home/data/laptop/+" receives messages about "/home/data/laptop/memory" but not about "/home/data/laptop/memory/free". # is the multi-level wildcard: a client subscribed to "/home/data/#" receives messages about "/home/data/laptop", "/home/data/desktop", "/home/data/router", etc. Topics beginning with $ are reserved for server-specific information or control APIs. Wildcards cannot be used while publishing.
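The wildcard rules above can be captured in a short matcher. This is a simplified sketch, not a replacement for a real client library; it ignores some edge cases in the specification (such as "#" also matching its parent level).

```python
def topic_matches(pattern, topic):
    # '+' matches exactly one topic level; '#' matches all remaining levels.
    p_levels, t_levels = pattern.split("/"), topic.split("/")
    for i, level in enumerate(p_levels):
        if level == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_levels):
            return False         # topic is shorter than the filter
        if level != "+" and level != t_levels[i]:
            return False         # literal level mismatch
    return len(p_levels) == len(t_levels)
```

Checking it against the examples in the text: "/home/data/laptop/+" matches "/home/data/laptop/memory" but not "/home/data/laptop/memory/free", and "/home/data/#" matches any topic under "/home/data".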

MQTT is also payload-agnostic, giving flexible delivery of messages; for example, an image can be transformed into JSON and then into binary notation for forwarding over the network. Persistent messages are supported (up to 18 hours), with only the most recently updated message persisted, on client request. A client can define a particular "last will and testament" message, which the broker sends on its behalf if it disconnects; this is used to signal subscribers when a device has disconnected from the broker.

Applications: Facebook Messenger uses MQTT for phone-to-phone message delivery in hundreds of milliseconds rather than multiple seconds, eliminating long latency and extending battery life. The Shaspa Bridge, combined with robust and scalable cloud-based software (the Shaspa Service Delivery Framework), uses MQTT for home automation. Other major users include monitoring and control systems, Cell Labs for automated meter reading, and Elecsys for industrial communications gateways and remote monitors. MQTT is currently used by HiveMQ websockets, MQTT clients on Android and iOS, desktop notification tools for Ubuntu and OS X, etc.

With the ongoing trend, it can be said with some confidence that MQTT will quickly take over from HTTP for such workloads and may work wonders for the way we use our networks.

- Yash Gandhi
Pune Institute of Computer Technology, Pune

Acoustic Cloaking - Researchers have made a device that can hide objects from sound. It looks like a bizarre pyramid full of holes, which alters the trajectory of sound waves. If you place the acoustic cloak over an object on a flat surface, it disappears from sound no matter which angle you observe it from.


We are constantly reworking the interfaces of the devices we invent. The best example is the way we have changed our mobile phones for greater simplicity and faster functioning: not long ago we used buttons for input, until touch screens replaced them, enhancing the interaction between user and machine. Now, we can make computer processors faster, LCD screens thinner, and hard drives larger, but we cannot add surface area without increasing size; it is a physical constraint. Don't you feel this has trapped us in a device-size paradox? We desire more usable devices, but are unwilling to sacrifice the benefits of small size and mobility. In response, designers have walked a fine line, trying to strike a balance between usability and mobility.

In search of an alternative, Harrison and Hudson described a technique that allows a phone to use a table as an input device via finger gestures, but it had a constraint: a table is not always available, and in a mobile context users are unlikely to carry large surfaces. There is, however, one surface that has previously been overlooked as an input surface, and one that happens to always travel with us: our skin. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible to our hands. Furthermore, proprioception, our sense of how our body is configured in three-dimensional space, allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic while providing such a large interaction area. Using our skin as the input surface for devices could thus increase the speed of operation many times over!

Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. Combined with pico projectors, it provides direct manipulation and a graphical user interface on the body. Skinput was first introduced at Microsoft TechFest 2010 by Chris Harrison, Desney Tan and Dan Morris of Microsoft Research's Computational User Experience group.

Skinput Technology: The bio-acoustic sensors


Skinput technology works on bio-acoustics, when a finger taps the skin, several distinct forms of acoustic energies are produced. Some energy is radiated into the air as sound waves which is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact. In general, tapping on soft regions of arm creates higher-amplitude transverse waves than tapping on bony areas (e.g., wrist, palm, fingers), which have negligible compliance. In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward towards the skeleton. These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bone, which is much less deformable than the soft tissues but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate out to the skin.

We highlight these two separate forms of conduction, transverse waves moving directly along the arm surface and longitudinal waves moving in and out of the bone through soft tissues, because these mechanisms carry energy at different frequencies and over different distances.

To sense the produced vibrations, instead of a single flat-response sensor, an armband comprising an array of highly tuned mechanical vibration sensors is used: small cantilevered piezoelectric films, each corresponding to a certain frequency range. The cantilevers are tuned to their resonant frequencies using weights. A pico-projector is also installed for the display; it mainly comprises a laser light source, combining optics and a scanning mirror. The acoustic approach is susceptible to variations in body composition, most notably the prevalence of fatty tissues and the density and mass of bones, which tend to dampen or facilitate the transmission of acoustic energy in the body. Data and observations from the experiments suggest that high BMI is correlated with decreased accuracy.
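The learning step behind this sensing can be sketched as nearest-centroid matching: each tap yields a vector of per-band acoustic energies, each candidate location learns the average of its training vectors, and new taps are assigned to the closest centroid. The locations and energy values below are invented for illustration; the real system uses far richer features and classifiers.

```python
# Hedged sketch: classify a tap's location from per-band acoustic
# energies (values invented) via nearest-centroid matching.

def centroid(vectors):
    count = len(vectors)
    return [sum(v[i] for v in vectors) / count
            for i in range(len(vectors[0]))]

def train(samples):
    # samples: {"wrist": [energy vectors], "palm": [...], ...}
    return {place: centroid(vecs) for place, vecs in samples.items()}

def classify(model, features):
    # Pick the location whose centroid is closest in feature space.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda place: dist2(model[place], features))
```

This also illustrates why body composition matters: if fatty tissue damps certain bands, the centroids shift per person, which is why the system needs per-user training.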

With Skinput, “literally, computing is always available”, Harrison said.

A person might walk towards his home, Harrison said, tap his palm to unlock the door and then tap some virtual buttons on his arms to turn on the TV and start flipping through channels.

Although Skinput still lingers in its initial stages, it has great potential and can be seen as a boon to physically challenged people who face difficulties interacting with devices: if their skin is the key to technology, interfacing becomes a lot easier. Visually impaired people might be able to use devices through this technology, as Skinput provides fast, eyes-free input, with proprioception serving the purpose. Gaming and viewing experiences can be enhanced a great deal with Skinput on you. Skinput will attract millions of young minds around the globe and truly give the magical feeling the experts are expecting.

- Shubham Gupta
Pune Institute of Computer Technology, Pune

Octopus Robot - A robot capable of zooming through water with ultra-fast propulsion and acceleration. Sleek and slender, it moves easily by filling its body with water and then expelling it to dart away. It has a 3D-printed skeleton with an elastic outer hull as the storage device.


Beacons are low-cost pieces of hardware small enough to attach to a wall. They use battery-friendly Bluetooth connections to transmit messages directly to a smartphone or tablet.

They are wireless devices that use the Bluetooth 4.0 (BLE) protocol to broadcast tiny signals around them, allowing Bluetooth 4.0-enabled devices to "talk" to them at a proximity of 3 inches to 150 feet. Bluetooth Low Energy (BLE, or Bluetooth 4.0) is the next generation of Bluetooth, and has negligible impact on your device's battery life.

Evolution of a real-world operating system: The most revolutionary aspect of beacons is that they can create a real-world operating system: a mesh of devices, smart objects and beacons able to talk to each other with contextual intelligence. Such a system of connected objects can capture huge amounts of data about users and our actions, while also putting that data in context: it can precisely capture our proximity and tell us how much time we spend in our offices, homes or the gym. The promise of beacons has created intense competition between vendors.

• Low cost: Beacons are relatively cheap. For example, the beacon manufacturer Estimote currently offers three beacons for US$99. They can send a signal to any BLE-enabled device up to 70 meters away by specifying a broadcast range, which can make covering a large floor space a much cheaper proposition than other connectivity solutions.
• Energy efficient: The BLE technology in beacons requires only a minuscule amount of energy to work. That means they don't need to be plugged in and can run off a coin battery for long periods of time; some reports suggest up to three years.
• Distance sensitive: When a smartphone detects a beacon, it can determine the distance to the beacon down to the nearest meter, if deployed correctly. GPS is also accurate to within a short distance, but it does not work well indoors. That makes beacons more useful for location sensing on mobile devices than GPS, and potentially more accurate than other indoor location-sensing technologies. This indoor proximity capability opens up a wide variety of navigation and context-aware possibilities inside buildings.
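The distance estimate in the last point typically comes from a log-distance path-loss model: the phone compares the received signal strength (RSSI) against a calibrated reading taken at one meter. The calibration values below (RSSI at 1 m, environment exponent) are illustrative defaults, not figures from any particular beacon.

```python
def estimate_distance(rssi, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    # Log-distance path-loss model: distance grows exponentially with
    # the dB gap between the 1 m calibration reading and the live RSSI.
    # path_loss_exponent ~2 models free space; walls and bodies raise it.
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))
```

With these defaults, a reading equal to the 1 m calibration value yields 1 m, and a reading 20 dB weaker yields roughly 10 m in free space; indoor clutter makes the estimate coarser, which is why real deployments smooth over many readings.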

Beacons enabling innovative applications: Retail outlets are adopting beacons to provide in-store navigation, product information, flash sales or deals, and to speed up checkout with a completely contactless payment system.

The applications go beyond retail: beacons are expected to be deployed all over airports and ground transit hubs, so that notifications about departures, delays and platform assignments can be delivered instantly to passengers' phones. Imagine all the clunky ATMs that could be transformed into simple, sleek beacon money dispensers: as you come into the proximity of a beacon-enabled ATM, your smartphone prompts you with an app offering an interface where you input the amount of cash to withdraw. There is no doubt that BLE-enabled beacons will open up millions of possibilities for developers and designers to create truly innovative experiences and services.

- Yash Chopda
Pune Institute of Computer Technology, Pune

Beacons: The status relayer


Water: one of the basic elements needed to sustain life on earth. Considering the pace at which the world is growing, in terms of population, industrial development and many other factors, the use of water is increasing ten-fold along with it. The utilisation of water has now reached an alarming level, resulting in acute scarcity in many remote places on earth. There is an immediate need to develop alternative methods to generate safe, potable water for people who are deprived of this main component of life.

The water scarcity issue affects nearly 1 billion people in Africa. Like many African countries, parts of Ethiopia are severely affected by water shortage, consumption of contaminated water and a lack of proper infrastructure. Women and children are compelled to travel long distances to procure water. Research in this domain has resulted in some interesting methods to provide water: generating water from air!

What we are talking about is the Warka Water Tower. Designed and created by Arturo Vittori and his team at Architecture and Vision, this invention aims to provide hygienic water to remote villages. Taking advantage of the condensation of vapour, the structure captures potable water from the air by collecting rain and harvesting fog and dew.

The design of the Warka Water Tower is inspired by the Warka tree, which creates a social place for the community where people gather and conduct meetings. The tower stands at approximately 30 feet and is capable of collecting over 25 gallons of potable water. The simple design comprises pillars, each of which has two sections: a semi-rigid exoskeleton built by tying stalks of juncus or bamboo together, and an internal plastic mesh. In addition, an important part of the design is fulfilled by nylon and polypropylene fibres, which act as a scaffold for condensation; as droplets of dew form, they follow the mesh into a basin at the base of the structure.

The structure makes use of biodegradable materials such as bamboo, hemp and bio-plastic, and hence avoids the use of high technology.

It is economical compared to conventional methods such as pipelines and wells. It can be easily assembled within a week by a crew of 4-6 members, and it is easy to maintain, clean and repair. The project is thus highly sustainable.

Well, nothing ever comes without challenges. This project, too, faced problems: one was withstanding the various weather conditions that tested the stability of the structure, another was keeping the project economical. Many other difficulties showed up along the way, and there will be many more in the future. But given the out-of-the-box idea being implemented to provide the masses with hygienic water, one can certainly hope that its enactment serves the people, and serves the greater good!

- Shruti Shetty
Pune Institute of Computer Technology, Pune

The Warka Water Tower: Water from Air


Humans have always feared dominance by entities such as fellow humans, aliens or even supernatural beings. A certain futurist added another entry to this list at the onset of the 21st century: artificial intelligence. Ray Kurzweil sent the technological world into a frenzy with his hypothesis of a technological singularity, and since then the human race has tried to come up with innovative ideas to avoid such an event. While brain surgeries continued giving meek promises to paralysis patients, and permanently handicapped patients awaited breakthroughs in organ regeneration, another interesting stream began developing. As it turned out, humans had underestimated the capability of the most intriguing organ of their bodies: the brain.

The basic work of the brain is done by its smallest units, the neurons. Neurons pass signals electrically from the brain to the various organs it controls, through the respective nerves, such as the optic and auditory nerves. These electrical impulses create a potential difference between two points in the brain, causing current to flow.

Brain waves map this current whenever an action is being performed by the brain, which is all the time.

Earlier, surgeons performing open-brain surgery could make a person literally relive memories. Those surgeries involved a team of neurologists passing current at the particular points in the brain cortex that were most active while a memory was being recalled. The patients recalled the exact taste, smell and feel of things around them from the memory.

Brain-computer interfaces use the exact reverse of this concept. A large number of electrodes are inserted directly into the brain in pairs, and the potential difference between two points is measured when the brain passes a signal, for instance "Go left". That voltage value is recorded and then used in an algorithm, say for a prosthetic arm, to make it move left. The only catch is that this is a learning algorithm: it needs a number of trials before it learns which value of voltage corresponds to which command.
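That calibration loop can be sketched in miniature: record trials of (intended command, measured voltage), learn the mean voltage per command, then decode new readings as the nearest mean. The voltages here are invented, and real systems use many electrodes and far richer models; this is only the shape of the idea.

```python
# Heavily simplified BCI calibration sketch: learn the mean recorded
# voltage for each imagined command, then decode by nearest mean.

def calibrate(trials):
    # trials: [("left", 1.2), ("left", 1.4), ("right", 3.0), ...]
    totals, counts = {}, {}
    for command, voltage in trials:
        totals[command] = totals.get(command, 0.0) + voltage
        counts[command] = counts.get(command, 0) + 1
    return {cmd: totals[cmd] / counts[cmd] for cmd in totals}

def decode(model, voltage):
    # The command whose learned mean is closest to the new reading.
    return min(model, key=lambda cmd: abs(model[cmd] - voltage))
```

The need for many trials falls out of the model: until each command has enough samples, its mean is noisy and readings get decoded to the wrong command.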

Mind Reading Computers: Your humanoid partner


In 2006, 25-year-old Matt Nagle, a tetraplegic (paralyzed in all four limbs and the torso), volunteered for an experiment in which 96 electrodes were inserted into his brain's motor cortex. Over a 114-day trial he successfully performed a number of tasks, including moving a cursor to a certain point on a screen and staying on that point for 150 ms until success was shown by a smiley, opening email and reading the first and second messages, and using a prosthetic as well as a robotic arm.

Explaining how they did it, Nagle said that at first a technician would move the cursor up to the point while he was told to imagine he was controlling it. The signals in his brain were recorded; he was then told to think that he was moving the cursor left, right, up or down, so that the signals could be picked up by the computer and translated into motion of the cursor. Although it sounds simple, this task was successfully concluded on the 98th day of the trial.

Emotiv Systems has another application for this ability. Its device, the EPOC, uses electroencephalography (EEG) to map brain waves and then uses prerecorded brain waves received at several points on the scalp to perform an action. It is hardware sold to consumers and developers alike, for applications such as gaming or controlling a robot.

Another experiment last year, by Rajesh Rao and Andrew Stocco at the University of Washington, confirmed that mind control is possible between two humans. In their experiment, Rao, seated in a room across the university campus from Stocco, thought about which words were to be typed while his brain waves were recorded. These brain waves were then sent to Stocco via the internet, and Stocco found himself involuntarily typing out the words Rao had thought about. This finding does, however, carry the disadvantage that a person may not know who is controlling him or her, which could be called a humongous breach of security and privacy.

Mind-control techniques, although revolutionary, will make concepts like "mind reading" technologically possible in the near future. Imagine the positive implications: telepathy overtaking the telecommunication industry seems almost ridiculously possible! When you feel hungry, your smartphone will instantly list restaurants around you; when you feel sad or depressed, entertaining video links will be sent to you. Gaming would become a whole new experience, with your thoughts mirrored in your player's actions. Paralysis would have an ultimate solution in mind-controlled robotic or prosthetic limbs. The deaf and blind could also be helped by sending recorded auditory or visual signals to the auditory and optic nerves respectively, with the necessary image processing for the latter.

It would strike a perfect balance between human intelligence and technological progress, saving the world from the threat posed by artificial intelligence, since “humanity” is a concept that cannot be coded into robots. It is safe to say that, with the present research and efforts in this stream, mind reading computers will be the next substantial step for the technological world.

- Urjeeta Tule
Pune Institute of Computer Technology, Pune

The Prototype Tree is an energy harvesting tree, developed by Finnish researchers, which harvests energy from the light in its surroundings. The “leaves” of the tree are flexible, patterned solar panels. It produces enough power to charge small electronic devices such as smartphones, LEDs and sensors.

FEBRUARY 2015 P.I.N.G. ISSUE 11.0 32CTD.CREDENZ.INFO


DNA Origami: The craft of genes

Today's era is all about the micro and nano scale techniques used for analyzing and inventing new products. One such idea now being conceived is to use Deoxyribonucleic acid (DNA) as a construction material, studying its behavior to create templates for the nanofabrication of electronic components. The “origami” refers to folding the acid to create a product, which helps in designing miniature circuits or building new equipment.

This then-unexplored area, using DNA as a construction material, was first introduced in the 1980s by Nadrian Seeman. Currently, DNA origami is being developed by Paul Rothemund at the California Institute of Technology.

DNA origami is the nanoscale folding of DNA to create arbitrary two or three dimensional shapes. These are self-assembling biochemical structures made up of two types of DNA.

The process involves folding a long single strand of DNA with the aid of multiple smaller “staple” strands. Each staple strand is made up of a specific sequence of the bases adenine, cytosine, thymine and guanine, the building blocks of DNA. These shorter strands bind the longer strand in various places, resulting in various shapes, including a smiley face and 3-D structures such as cubes. The desired image is drawn with a “raster-fill” technique and fed into a computer program that calculates the placement of each individual staple strand.

Using Watson-Crick base pairing, the sequences of all the staple strands are known and displayed. The DNA is then mixed, heated and cooled; as the mixture cools, the various staples pull the long strand into the desired shape.

The designs are directly observable under an electron microscope, atomic force microscope or fluorescence microscope. To arrive at a specific shape, CAD software is used, which not only speeds up the process but also reduces the human errors associated with it.

DNA origami has wide applications in biomedical science, such as in a drug delivery system for Doxorubicin (a well known anti-cancer drug). It is used as circuitry in plasmonic devices and for enzyme immobilization. It also finds useful applications in the construction of nanorobots and nanoparticles.

Helping in medicine as well as in electronics, this nanotechnology will prove very useful and beneficial to mankind, as the world gears up for minuscule-sized things!

- Neha Godse
Cummins College of Engineering, Pune
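As an illustrative aside (not from the article), the Watson-Crick base pairing that determines each staple sequence can be sketched in a few lines of Python. The scaffold fragment below is invented for the example:

```python
# Sketch: deriving a staple sequence via Watson-Crick base pairing.
# A staple binds a region of the long scaffold strand, so its sequence
# is the reverse complement of that region (strands bind antiparallel).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def staple_for(scaffold_region):
    """Return the staple sequence that base-pairs with a scaffold region."""
    return "".join(PAIR[base] for base in reversed(scaffold_region))

print(staple_for("ACGTT"))  # -> "AACGT"
```

Design tools for DNA origami effectively repeat this computation for every staple, while also choosing where each staple crosses between helices.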


Mr. Michel Susai is the Chairman of NextStar Ventures, Founder Chairman and CEO of NetScaler (Citrix), Inc., Founder Chairman and CEO of NeoAccel (VMware), Inc., and the inventor of fundamental internet acceleration technology which reduced traffic at Google, MSN and others. He has more than fifteen years of experience in the networking industry, and talks to us about his journey from his college days to being an acclaimed entrepreneur.

Q. Since you are an alumnus of PICT, could you please tell us about your experience in PICT?

A. I was in the second batch of PICT. Initially there was a totally different setup, from where we then moved to the newly constructed area. I had a lot of fun during my first two years, but by the end of the second year I got more serious. You suddenly realize you have too much backlog and you need to start working on it. I spent a lot of time on my fourth-year project, which was later sponsored by a company named DataPro. They had a VAX/VMS mainframe, but you could work only at night, as in the daytime the place was occupied by their people. There, I would work from 6 pm to 6 am. You could say it was a great experience.

Q. You are a very successful entrepreneur now; was it always your dream, or did you have different goals in the beginning?

A. I wanted to do computer science ever since my eighth standard, maybe because of the influence of some teachers. The other thing I had in mind was to go to the US and work on some big systems project at NASA. But when I came to the US, I realized that NASA mostly contracts out to space science contractors. This made me go into mainstream computer science. I had a good background because of my engineering final year project, which is extremely important as it reaches into your lifestyle and plays a major role in what you are going to do further. My project was MS-DOS emulation on a VAX/VMS mainframe, so that multiple users could use MS-DOS inside the mainframe. When I came to the US, it helped me get into the same kind of business. Luckily, I was one of the early people working on the AIX operating system. So, when IBM didn't have UNIX, we made the UNIX for IBM, which is called AIX. I did have entrepreneurship on my mind all the time. But you should always try to do something new.

Q. Before starting your own startup, you worked in big companies like Sun Microsystems and Unisys. How was your experience there?

A. It was a great experience! In big companies, it is very hard to get onto a project that you want to work on. I was very lucky to be able to work on operating systems and IBM AIX. I specifically went to work on massively parallel operating systems, and that is when I was able to get into the specific area and group of my interest. When I went to Sun Microsystems, the CTO's office recruited me for a specific internet infrastructure project. But in a big company, it is very difficult to get onto a project like that. You will mostly land in a general project and never get to explore the big picture.

Bridging the rift: an interview with Mr. Michel Susai, Chairman, NextStar Ventures


So luckily, in the big companies where I went, my groups were mostly like startup groups, always working on some new technology; that really helped me. My advice to people is to join startup companies that are at an early stage and coming up with new ideas. You get to learn a lot more while working for a startup company than while working for a big company. People in India are attracted to the glamour of big MNCs, and thus get pulled into joining them. This hinders their growth considerably. It is better to go into a small company, work hard, learn and grow fast; you can always get into a big company anytime later.

Q. You invented the fundamental internet acceleration technology that more than 75 percent of users traverse each day. Could you tell us more about this technology?

A. When I was at Unisys, I wrote a paper called “policy based routing”, in which you take a component called distributed TCP/IP from a distributed operating system. In a distributed operating system, you have thousands of compute nodes. Distributed TCP distributes the traffic to different nodes so that they can process it in parallel. My idea was to bring that out as a device instead of keeping it inside clusters, so that it becomes independent and can direct traffic to different nodes. It is a fundamental technology for the web. The reason is that when you provide any service, you also need to provide high availability. The key is not only to distribute connections, but to distribute requests as well. That is the basic genesis of the device and of request switching. Because of my parallel computing background, I had a core understanding of how to make this work without losing performance.
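To make the connection/request distinction concrete, here is a hypothetical sketch (not NetScaler code) of request switching: each individual request, rather than each whole client connection, is routed to the next backend node. The node names are invented:

```python
# Sketch: request switching vs. connection switching.
# A connection switch would pin all of a client's requests to one node;
# a request switch dispatches every request independently, here with a
# simple round-robin policy over the backend pool.
from itertools import cycle

class RequestSwitch:
    def __init__(self, backends):
        self._next = cycle(backends)  # round-robin iterator over nodes

    def dispatch(self, request):
        """Route one request (not one connection) to a backend node."""
        return (next(self._next), request)

switch = RequestSwitch(["node-a", "node-b", "node-c"])
requests = ["GET /", "GET /img", "POST /login", "GET /"]
print([switch.dispatch(r)[0] for r in requests])
# -> ['node-a', 'node-b', 'node-c', 'node-a']
```

Production load balancers layer health checks, session persistence and weighted policies on top of this core idea, but the dispatch loop is the essence of it.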

Q. Your company NeoAccel is a pioneering innovator of next generation internet security solutions, and has been challenging companies like Juniper Networks and F5 Networks. How do you compete with these Silicon Valley giants?

A. Every company has an IT department with a lot of employees, and they have a policy; generally, at the end of the day, they do what they want to do. If you approach them and propose an idea, they generally listen to you. It doesn't mean they are buying your product, but they give you the benefit of the doubt. If you want to beat these giant companies, performance is an advantage. The response time of your device should be better than the others'; you should have the same size of device, at the same price, but your capacity should be higher than the present processing capacity. You should have a green solution that can survive for a long time. Then people feel the difference in response time, capacity and throughput. Obviously, you leave the decision to buy up to them. I want to tell you something about my first company. I used to be very vocal; I repeatedly spoke about my company, and guess what happened? Now NetScaler provides all traffic management solutions. Without NetScaler, you don't have any cloud solution; it's unbelievable.

Q. NetScaler and NeoAccel were both founded in the US. Do you think India lags in the resources to set up such initiatives?

A. Not at all. 50 percent of NetScaler was done in India, initially in Mumbai and then in Bangalore. 95 percent of NeoAccel was built in India, again in Mumbai and Pune. Many companies are set up in the US because the US has some of the best intellectual property laws and it is really easy to do business there, whereas elsewhere everything is a bit bureaucratic; there is also respect for a US product. It will take a little time, maybe the next 5 years, before India has developed products with some reputation.

But companies can always be run from India and have a US office. India does not fund companies, nor does it spend much time on developing products. It is also a fact that 90% of products are from Silicon Valley. But the reality behind this is that most of these companies work


more or less in India. So, it will take some time to build up some large companies. Companies can do business ethically from India. It will happen, and it needs to happen.

Q. There is a vast difference between the education system in India and that of foreign universities. Do you think Indian students lack exposure compared to their contemporaries?

A. In the US, theoretical research gets very good exposure. What you need in Indian education is money. Harvard has a $4 billion budget, while Washington University has a $2 billion budget; US universities spend that much money on research. $2 billion may be the whole budget in India, including all the universities, even the IITs. They don't have enough grants. That doesn't mean something is lacking in India.

India has to focus on a lot of fundamental topics, like physics, biology, genomics, astronomy, etc. In the beginning, it won't pay much. The primary requirement is money. India needs to change its scheme and put a lot of money into research. India has hardworking, focused and knowledgeable people. But there should be a focus on some fundamental areas, so that the industries developed from those areas give us the pioneers.

Q. What other activities do you indulge in during your leisure time?

A. I travel a lot. I love travelling, reading historical books and all kinds of stuff. There is no limit.

Q. From being a graduate in India to an entrepreneur in Silicon Valley, has it been an easy journey?

A. The journey was not easy at all, but I always took it easy. There were lots of difficulties and hurdles in my way. The challenges faced by a startup company are no less. Every day brings new opportunities for you, so it has always been an exciting journey for me. It was very challenging at the beginning, as I was not an American. Even when I was in Los Angeles, I was unable to do what I wanted to, which left me handicapped. It was only after 4 years, when I moved to Silicon Valley, CA, that I had a chance to think about the things I had not achieved yet.

Q. What message would you like to give to the budding entrepreneurs in India?

A. I would like to give you two messages. First, everyone should think of starting a company. If you work for somebody, you can have only one company; but if you make something happen, it would be used by many people. The second important message is: don't do things just for the sake of doing them. Don't start a company merely because you think you wish to start a company. Your idea needs a proper business plan, followed by its market value. You need to see that your time is well spent. So, if you make a product, it should be an important and widely accepted one. It should not be just for your personal independence. If you can make something, you need to wait, think, explore and research the product. Your product has to have a bigger perspective.

Besides, the stress and effort remain the same even if the idea is small. The difficulty of making a product remains the same irrespective of its complexity. An entrepreneur should always ponder important and significant ideas.

We thank Mr. Michel Susai for his time and contribution to P.I.N.G.

- The Editorial Team


Self-Assembling Chair: This very special chair, designed at the Self-Assembly Lab at MIT, is made up of six components. Each component, embedded with magnets, latches onto another piece uniquely, like a puzzle, with the magnets acting as the attracting force.


crossword


The answers to this crossword lie within the included articles.

Across

5. The concept which combines nanotechnology and computer science for robotic manipulation tasks. (11)
6. Paralyzed from the neck down. (11)
7. Mechanism for safely running untrusted programs. (10)
9. Google's new JavaScript library. (7)
11. The team, employed by a renowned internet giant, tasked with finding and fixing bugs and exploits. (7-4)
12. A form- and shape-changing robot. (7)
13. View of the physical, real-world environment augmented by computer-generated sensory inputs. (9-7)
15. Technique developed by Google to gradually add milliseconds to its system clocks to avoid an internet crash. (4-5)
18. An assistance system to safeguard the driving process. (4)
22. Carries electrical signals from the brain to various organs. (8-6)
23. Greco-Egyptian mathematician and writer of Alexandria. (7)
24. Local search and discovery mobile app which provides a personalized local search experience. (10)

Down

1. Software application that keeps track of defects reported in software, unresolved during its development. (3-7)
2. A system whose state is randomly determined. (9)
3. A technology that records the movement of objects beyond a wall. (2-2)
4. Microsoft's development tool to build applications. (10)
5. Radioactive element used to define the second. (7)
6. Technology that has made virtual presence across continents a reality. (12)
8. Phenomenon related to the branch of biology dealing with the form and structure of organisms. (12)
10. Allows you to perform complex Ajax tasks with the ease of markup. (4-4)
14. A non-hierarchical keyword assigned to a piece of information. (3)
16. A person carrying a device that substitutes for a missing or defective part of the body. (10)
17. Signal available to an attacker for hacking. (5)
19. The input technology that uses bio-acoustic sensing to localize finger taps on the skin. (7)
20. A remote sensing method used by Google in its future automobile venture. (5)
21. Tracking life at this initial stage is another phenomenal application of Wi-Vi technology. (6)
