
Reconnecting bits and atoms

G Cosier and S Whittaker

BT Technol J Vol 19 No 4 October 2001

Arguably, the defining technology of the late 20th century was information technology. In this new century, it will continue to pervade ever more of our lives, from the way we earn our livelihoods to the way we access information, entertain and inform ourselves, and live our lives. For this reason, it can be particularly informative to look at the key trends in information technology and consider some of the technologies that are, and will be, disruptive.

1. Introduction

The Media Lab at MIT became famous under Nicholas Negroponte for its work on ‘being digital’, moving us away from a world of atoms to a world of bits. As our understanding of this digital world has developed, the authors suggest that now could be the time to reconnect the bits and atoms. This, among other things, means looking not only at how information is represented and manipulated, but also at how people engage in the converging physical and virtual worlds, and how technology plays its role in this. As Gershenfeld put it in a recent book [1]:

“The Digital Revolution is an incomplete story. There is a disconnect between the breathless pronouncements of cyber gurus and the experience of ordinary people left perpetually upgrading hardware to meet the demands of new software, or wondering where their files have gone, or trying to understand why they can’t connect to the network. The revolution so far has been for the computers, not the people.”
The varied projects at the Media Lab include creating an affordable computer for developing countries capable of functioning as a complete Linux development platform, user interfaces for children, animals and people with no reading skills, and printable semiconductors that can work as actuators as well as computers. These are exciting and often fun projects, but they illustrate an important point. For students in the lab, design literacy means being able to fabricate everything from the physical construction, through the electronics, to the software and the user interfaces, as well as understanding the social implications. The work of the lab is shifting from creating machines to giving people the skills to make the machines which make machines. The authors also believe it is important to look at the technology from these differing angles.

It is interesting too that, while the world’s attention on the Internet focuses on high-bandwidth possibilities, many technologies are often relative bit dribblers. They are about getting the right information in the right place at the right time, about things like ‘smart’ pill bottles, and about getting to the people who do not use computers — a revolution for ‘everyone else’.

This paper investigates some of these technologies that may play a role in this process, some of the potential implications, some of the decisions that we may face, and some of the very technologies that will disrupt — but from which direction? It focuses on the substrate technologies upon which future applications will sit and examines the emergence of networked systems from the nano-scale to the global. The authors argue that this is an extremely significant process. By releasing computing, communication and sensing technologies from the laboratory, we have the opportunity to put new tools and applications into the hands of diverse communities. If we succeed in this, the opportunities will only be limited by the imaginations of the individuals, empowered by the interconnected resources of a truly networked world. The subsequent sections examine disruptive computing, disruptive networking and disruptive environments.

2. Speeding up ...

We have grown used to the seemingly unceasing exponential increase in the performance of the underlying technologies. Observations on the rate of improvement of key technologies, and the value associated with their use, have become so well established that they are popularly accepted as law. The speed of optical and wireless networking technologies is doubling every 9 to 16 months. Today a single strand within a fibre optic cable can carry 400 000 DVD-quality movies streaming simultaneously! What we are seeing is the convergence of exponentials — Moore’s Law meets Gilder’s Law meets Metcalfe’s Law. The result is more data, more users, more access devices, and more services — known as the ‘Net Effect’ [2] (shown in Fig 4 later).
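As a rough sanity check on that figure, the back-of-envelope calculation below puts the implied capacity of a single fibre strand at around 2 Tbit/s (a minimal sketch; the ~5 Mbit/s per DVD-quality stream is an illustrative assumption, not a number from the paper).

```python
# Back-of-envelope check of the '400 000 simultaneous DVD-quality movies' claim.
# Assumption (not from the paper): a DVD-quality stream averages ~5 Mbit/s.
streams = 400_000
mbit_per_stream = 5                                  # Mbit/s, assumed
total_tbit_s = streams * mbit_per_stream / 1_000_000
print(f"Implied single-fibre capacity: ~{total_tbit_s:.1f} Tbit/s")   # ~2.0 Tbit/s
```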

The Media Lab at MIT became famous under NicholasNegroponte for its work on ‘being digital’, moving us

away from a world of atoms to a world of bits. As ourunderstanding of this digital world has developed, theauthors suggest that now could be the time to reconnect thebits and atoms. This, among other things, means looking notonly at how information is represented and manipulated, butalso at how people engage in the converging physical andvirtual worlds, and how technology plays its role in this. AsGershenfeld put it in a recent book [1]:

Page 2: Reconnecting Bits and Atoms

RECONNECTING BITS AND ATOMS

BT Technol J Vol 19 No 4 October 2001

78

• Moore’s Law (Fig 1)

Gordon Moore’s original observation that the number of transistors within integrated circuits doubles roughly every 18 months is easily observed in the doubling in the performance/price of retail computers over the same time-scales.

• Gilder’s Law

The technotheorist George Gilder has forecast that, for at least the next 25 years, the total bandwidth of communications systems will triple every 12 months, leading to a huge proliferation of bandwidth in the optical core of the world’s networks. As shown in Fig 2, underpinning this is the year-on-year improvement in the raw capacity of the fibres themselves which, historically, have shown a tenfold increase in performance every 7 years. It is also evident that the displacement between best performance in the laboratory and the performance in commercially deployed systems exhibits a lag of about 7 years; hence it is likely that we have some 7 years’ worth of fundamental ‘fibre-based’ performance increase in the pipeline. When coupled with the ongoing improvements in routers and switching technology and the advent of all-optical networks, Gilder’s Law looks perfectly sustainable for the next decade at least.

• Metcalfe’s Law

The value of a network is related to the square of the number of nodes (Fig 3) (it has also been suggested that applications that encourage group-forming behaviour among users may grow in value even faster [3]). Networks of people, telephones and computers dramatically increase in value each time a person, a telephone or a computer is added to that specific network. Once critical mass is reached, watch out — the value of the network grows explosively. Scarier still is the fact that, once you are behind, you may never catch up to the competition. (A small numerical comparison of these three growth laws follows this list.)
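To make the compounding concrete, the sketch below compares the three growth rates over a decade (a minimal illustration using the doubling and tripling periods quoted above; the quadratic value model is the usual reading of Metcalfe’s Law rather than anything specific to this paper).

```python
# Illustrative comparison of the three 'laws' over ten years.
years = 10
moore_gain  = 2 ** (years * 12 / 18)     # Moore: doubling roughly every 18 months
gilder_gain = 3 ** years                 # Gilder: total bandwidth tripling every 12 months
nodes = [10, 100, 1_000, 10_000]
metcalfe_value = [n * n for n in nodes]  # Metcalfe: value ~ square of the number of nodes

print(f"Moore's Law over 10 years:  ~x{moore_gain:,.0f} transistors")
print(f"Gilder's Law over 10 years: ~x{gilder_gain:,} total bandwidth")
for n, v in zip(nodes, metcalfe_value):
    print(f"Metcalfe: {n:>6,} nodes -> relative value {v:,}")
```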

In the same way that the telephone industry had to look beyond switchboards and operators and move to automated switching, the technology industry will have to find ways to scale with increasing complexity. Radical and disruptive change will occur, and trends are already visible in almost every aspect of information technology (e.g. in the density of disk storage (Mb/in²) and the bandwidth of broadband wireless) that affect every aspect of processing, storage and transmission [4]. Operators are exploiting a combination of WDM and TDM to fuel the bandwidth explosion (Figs 4 and 5).

Fig 1 Moore’s Law — transistors per processor (10³ to 10⁸) against time, 1970—2000, from the 4004 and 8008 through the 8080, 8086, 286, 386 and 486 DX to the Pentium, Pentium II, Pentium III and Pentium 4.

Fig 3 Metcalfe’s Law — on-line customers for a typical ISP (subscribers, millions, and total minutes, millions, by quarter from 4Q96 to 1Q99).

Fig 2 Optical fibre capacity (Gbit/s against year, 1980—2000, experimental and commercial systems; roughly a tenfold increase every 7 years).


The most dramatic impact takes place when the surging force of innovation opens the way for the propagation of such leading-edge components and systems into low-cost pervasive technologies.

The increased access to services from connected and wireless devices will place greater demands on the servers that house these applications. These demands will not only be unpredictable, but also come at all hours of the day and night.

It seems highly likely that in the case of information-related technologies this process will continue for many years to come and we will see increasing ‘democratisation’ and pervasiveness — from embedded computing and intelligent environments to smart materials. It is very hard to predict the social and economic implications of this process, but many of the opportunities, or disruptions, are becoming clear — from distributed healthcare to transformations in the global supply chain.

3. Disruptive computing

Perhaps the most visible signs of sustained gains in the capabilities, and reductions in the price, of computing are in the consumer domain.

In the commoditised desktop PC market, successive waves of incremental improvements in processor, memory, disk and audiovisual technology find their way rapidly into consumer markets. As a result, the typical home PC continues to be exponentially more powerful than the office PC and technical workstations of just a few years before. Similarly, in the mobility arena, personal digital assistants (PDAs) and increasingly sophisticated mobile telephones have become common consumer tools and laptop PCs are fast becoming a standard student accessory.

However, if we consider the PC as being in some sense a mid-point in a spectrum of computing technologies from high-end supercomputing to embedded computation, we see equally dramatic developments towards the extremes — both the very large and the very small — and in new forms of computing.

3.1 The very large — extreme computing

When we think of extremely powerful computers, we naturally tend to think of scientific supercomputing — such as the machines used for weather forecasting and geological analysis. In fact, there have been dramatic developments in ‘extreme computing’ in a number of areas.

In the scientific arena, extremely specialist, massively parallel machines, such as IBM’s Blue Gene [5], continue to stress the limits of design and manufacturing. With a target of 1 PetaFLOP (10¹⁵, or 1 000 000 000 000 000, floating-point operations per second), Blue Gene (Fig 6) is designed for a class of extremely compute-intensive tasks that are intrinsically suited to parallel processing — such as bio-molecular modelling of protein folding.

To achieve this, an advanced 1.3 million processor architecture has been designed.

With each ‘chip’ containing 40 processors each capable of running 4 parallel threads, 36 ‘chips’ per board, 4 boards per rack and 255 racks coupled to form a single system, this is a long way from the typical desktop computer and an impressive engineering challenge.

The disruptive nature of such tightly crafted systems may be less in the spin-offs from the technological development itself than in the transformations enabled in downstream industries. The shift in the pharmaceutical industry, underpinned by compute-intensive developments in genomics and proteomics and enabled by approaches such as rational drug design, could radically change the structure and economics of the industry by, for example, dramatically shortening the development life cycle and enabling genetically appropriate medications for target sub-populations.

Fig 4 Network bandwidth is driving architecture and innovation (number of times faster against time, 1975—2000: single-fibre bandwidth now doubling every 9 months against single-CPU performance doubling every 24 months).

Fig 5 The bandwidth explosion (capacity, Gbit/s, 1988—2002: single-channel TDM systems from 565 Mbit/s to 40 Gbit/s, and WDM + TDM systems from 4λ × 2.5 Gbit/s through 16λ and 32λ × 2.5 Gbit/s, 64λ × 2.5 Gbit/s and 16λ × 10 Gbit/s, up to 96λ × 2.5 Gbit/s and 40λ × 10 Gbit/s).



An alternative, more distributed, path has been taken in the commercial computing arena.

As the growth in on-line business continues, more and more businesses are becoming increasingly reliant on specialist Web-oriented computing platforms that have, to a great extent, replaced the traditional view of the corporate data-centre. In the transformation of the net from academic research venue to economic underpinning, we have seen a move from small isolated Web servers managed by individuals to a complex technical and commercial ecosystem, comprising application service providers (ASPs), large specialist managed hosting farms, content delivery networks, and edge servers.

Such architectures are carefully engineered global computing platforms ensuring that enterprise-scale databases can be exploited and rich media content delivered where it is needed, when it is needed, to provide a high-quality personalised user experience. These are well-oiled machines with which to run industries and economies.

However, the basic components of these systems tend to be commodity in nature, rather than esoteric, and the ecology is emergent rather than by design.

In some senses, grid (or meta-) computing [6] applies a similar approach to scientific computing. The grid refers to an infrastructure that enables the integrated, collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organisations. Grid applications often involve large amounts of data and/or computing and often require secure resource sharing across organisational boundaries, and are thus not easily handled by today’s Internet and Web infrastructures.

In a more general sense, it aims to let us think of computation in the same way that we think about other utilities such as water or electricity. You plug in and service is there; you do not have to know or care where it comes from — a model which we will return to at the micro-level.

Although meta-computing has its roots in those scientific problems which are just too large and complex for all but the most advanced (and therefore scarce) supercomputers, it aims to make supercomputing-class power available to a wider range of scientific, engineering and commercial uses.

Impetus for such systems came in part from developments such as early Beowulf servers [7] — clusters of commodity computers with supercomputing-class performance at prices an order of magnitude lower (see Fig 7). The community-based Beowulf movement had its roots at NASA’s Goddard Centre and is based on open-source extensions to the Linux operating system.

In many respects this reflects the complexity of developing tightly coupled multi-processor architectures such as Blue Gene. When the performance of commodity processors is increasing faster than our ability to craft, future-proof and programme such complex systems, an alternative approach that embraces the increasing performance of commodity technologies may be called for. In some respects the same process is at work that meant that the Pentium, rather than the Transputer, dominates the contemporary desktop.

Formal collaborative grid programmes (such as Globus [8]) have been actively developing open-source middleware layers which enable complex tasks to be split up and disseminated over large, geographically dispersed networks of (homogeneous or heterogeneous) generic computers to form virtual supercomputers.

The scale of such systems can be impressive: the US National Science Board’s Distributed Terascale Facility (DTF) will deliver TeraGrid [9] — linking compute clusters, visualisation environments and data at multiple locations over a dedicated fibre backbone. The system, based on industry-standard Intel Itanium processors, the Linux operating system and the Globus protocol set, will deliver 13.6 TeraFLOPS (10¹²).

Fig 6 IBM Blue Gene rack.

Fig 7 Beowulf server cluster.


The growth in peer-to-peer computing applications expands the meta-computing universe to encompass individual consumer PCs and thereby the public’s sitting rooms. With initiatives such as SETI@home [10], the public effectively donates space, computing time and resources to a global distributed signal processing and analysis task (in this case the search for life beyond our own planet). Other architectures aim to share other resources such as storage and encryption — this is a very long way from the prevailing pattern of centralised publication and distributed consumption that typifies the net today, and is closer to that of the early ‘democratic’ models of Web publication.

These three architectures for supercomputing (integrated, grid and consumer) vary substantially in terms of their homogeneity. However, to some extent, the granularity of processing is similar — this need not be the case ...

3.2 The very small — embedded computing

As implied above, we have become used to the idea that computing systems such as Web servers fall into two camps — the large professional systems inhabiting bomb-proof data centres, and smaller ad hoc servers, often run by consumers over their residential broadband connections as a way of staying in contact with friends and families.

Increasingly we are seeing intelligence being built into many of our household items — for example video recorders and toasters. What if all of those smart ‘things’ around us were their own Web servers?

Tiny Web servers have been developed which implement standard protocols such as TCP/IP and HTTP on standard micro-controllers or low-cost gate array devices. For example, the IP-ic [11] implementation operates in 256 bytes of ROM and 7 bytes of RAM. The result is a 1-chip Web server with an optional 1-chip file system that costs less than $1 and is small enough to be embedded in everyday items (see Fig 8). The architecture uses an approach of de-layering to reduce the computational demands of implementing stable protocols and shows that extremely low-cost devices such as controller chips can perform surprising and useful tasks.
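The IP-ic itself is hand-crafted microcontroller firmware; purely as a functional illustration of what a ‘de-layered’, single-purpose Web server does (a minimal sketch, not the IP-ic implementation), consider the following.

```python
# Minimal single-purpose HTTP responder: accept a connection, discard the request,
# return one canned page. The real IP-ic does the equivalent in a few hundred
# bytes of microcontroller code; this is only a behavioural sketch.
import socket

PAGE = (b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n"
        b"<html><body>Hello from a tiny server</body></html>")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(1)
while True:
    conn, _ = srv.accept()
    conn.recv(512)       # read (and ignore) the request line and headers
    conn.sendall(PAGE)   # always serve the same static page
    conn.close()
```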

Similarly, the I-coin device [12] couples a low-power processor, sophisticated power-aware radio system and sensor interfaces into a watch-battery powered unit about the size of a US quarter — capable of being ring or wrist mounted. Techniques such as ‘minimum energy coding’ are steps along the way to the kind of ‘Low-E’ computing we will need for pervasive environments.
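Minimum energy coding exploits the fact that, in simple on/off radio schemes, transmitting a ‘1’ costs far more energy than a ‘0’. The sketch below illustrates the idea only; the symbol set and codeword length are invented for the example and are not taken from the I-coin work.

```python
# Minimum-energy-coding idea: give the most frequent source symbols the codewords
# with the fewest '1' bits, since each transmitted '1' costs radio energy.
from itertools import combinations

def low_weight_codewords(bits):
    """Yield codewords of the given length in order of increasing Hamming weight."""
    for weight in range(bits + 1):
        for ones in combinations(range(bits), weight):
            yield sum(1 << i for i in ones)

symbols_by_frequency = ["idle", "temp_ok", "temp_high", "low_battery"]  # most frequent first
codebook = dict(zip(symbols_by_frequency, low_weight_codewords(8)))

for symbol, code in codebook.items():
    print(f"{symbol:12s} -> {code:08b}  ({bin(code).count('1')} high bits)")
```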

Both technologies are currently being commercialised [13, 14].

Potentially complementary developments such as Jini [15, 16] tackle a similar problem to that of grid computing but at a more human scale — for example within a room.

Heterogeneous elements, potentially including such micro-devices as the IP-ic and the I-coin, are able to share resources by providing computational and interface services to each other (a sketch of this discovery-and-lease pattern follows the list):

• a client or service can discover and join communities,

• a client or service can look up specific services within a community,

• a network-leasing mechanism can dynamically adjust to network changes and partial failures,

• a remote event notification can pass advice between services,

• interfaces can provide transactional integrity between services.
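A minimal, language-neutral sketch of the discovery, lookup and lease pattern described above (plain Python for illustration; the names and lease semantics are assumptions, not Jini’s actual Java interfaces):

```python
# Toy registry illustrating discovery, lookup and leasing for ad hoc services.
import time

class Community:
    def __init__(self):
        self._services = {}                       # name -> (provider, lease expiry)

    def join(self, name, provider, lease_s=30):
        """A service joins the community for a limited lease."""
        self._services[name] = (provider, time.time() + lease_s)

    def lookup(self, name):
        """Clients look up a service; an expired lease is treated as a departure."""
        provider, expiry = self._services.get(name, (None, 0.0))
        return provider if time.time() < expiry else None

room = Community()
room.join("thermometer", provider=lambda: 21.5, lease_s=60)
sensor = room.lookup("thermometer")
if sensor:
    print("room temperature:", sensor())
```

Leasing is what lets such a community adjust to partial failure: a device that disappears simply stops renewing its lease and drops out of the registry.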

Some of the issues associated with ad hoc environments of this kind, such as scaling to very dense networks, are discussed below, but if we bring together some of these trends at ‘the very large’ and ‘the very small’, we can see a computational future not only as much more heterogeneous than in the past, but also in some sense as more communal.

3.3 The very different — beyond silicon

The trends described above tackle issues of global and distributed computation and the efficient utilisation of commoditised technologies. However, there are a number of additional challenges ahead. In two of these, there are particularly intriguing technological developments.

Fig 8 IP-ic Web server.


• How do we get the price of computing down to the level where it can be thought of as disposable?

Despite the predictable drop in costs of computational devices, there are still issues of cost. These take two forms — the fabrication costs of high-end devices and the lowest cost achievable for useful devices.

In the former case, Moore’s so-called second law crystallises the observation that the fabrication of chips with increasing sophistication and ever-reducing feature sizes requires increasing levels of capital investment — ensuring a diminishing financial incentive to innovation.

In the latter, it is easy to envisage applications where the minimum cost of a processor (limited to some extent by encapsulation costs) will be a gating factor — for example, when we wish to add intelligence to consumables such as grocery items.

• Where do we go when the current chip technologies begin to run out of steam and Moore’s Law ceases to operate?

While it does not seem likely that technological developments in processing, storage and networking will slow markedly over the next decade, we will reach a point where devices based on lithographic fabrication will begin to encounter fundamental physical limits such as quantum mechanical effects. Even if Moore’s Law were to be repealed in 10 years’ time, we might still expect desktop and embedded computers over 100 times more powerful than today’s — enough to drive several economically powerful waves of new industrial and consumer applications.

Eventually, however, new approaches will be required and, although still in very early laboratory experimentation, a number of alternative technologies are now on the horizon. In his seminal 1959 talk on what we now think of as nano-technology, Feynman [17] outlined a vision of extremely compact computing, storage and machinery. Some of our current approaches may reach limits but, as he put it, ‘there’s plenty of room at the bottom’.

Not all of these example candidates will come to fruition, but some may.

3.3.1 Printable computing

Having already revolutionised society once, printing technologies may repeat the feat. Early indications are that printable electronics could replace the lithographic etching and deposition techniques used in ‘traditional’ chip fabrication.

A number of approaches are being investigated for both organic and inorganic fabrication. For example, Jacobson’s team at MIT [18] has focused on semiconductor inks made up of suspensions of inorganic nano-crystals (Fig 9). These are printed at room temperature on to flexible plastic substrates using print-heads borrowed from ‘traditional’ inkjet printing. Figure 10 is an example of a circuit pattern, printed to micron resolution.

Such developments may change the way we think about computing. Currently, computing technology is typically characterised by discrete objects. However, we may soon be able to print sensing, intelligence, storage and user interfaces, such as displays, directly on to consumer packaging, active wallpaper, or fabrics.

Jacobson’s team have shown that alternative approaches to micro- and nano-fabrication, such as printing, stamping and atomic force microscope (AFM) deposition, can be used to produce a range of devices with usefully small feature sizes and even printable micro-electro-mechanical systems (MEMS), such as actuators (see Fig 11). The printing of nano-particle-based inks has been extended to produce 3-D structures consisting of hundreds of layers; linear-drive motors and thermal actuators have already been demonstrated.

Fig 9 An inorganic semiconductor droplet processed at plastic-compatible temperatures.

Fig 10 Printed circuit patterns.

Fig 11 Printed machines.


3.3.2 Quantum computing

Quantum computing is perhaps the most intriguing of the candidates and takes an approach to computation [19, 20] which is very different to that of classical digital systems.

Two characteristics of quantum systems form the basis of such approaches.

• Superposition

Whereas the bits in traditional approaches are explicitly either 0 or 1 at any time, their quantum equivalent, qubits, can exist in both states simultaneously. In this way an n-qubit system can, and does, describe 2ⁿ states at any point in time.

• Entanglement

It is possible to construct sets of qubits whose states are inextricably linked. In a practical demonstration, Zeilinger’s team at the University of Innsbruck showed that, if a pair of entangled photons is produced, once the polarisation of one photon is observed, the polarisation of the other is instantly determined — irrespective of distance [21].

A number of basic quantum mechanical algorithms have been developed with attractive characteristics for certain classes of problems. For example, whereas the computational demands of classical search algorithms are typically proportional to the number of entries, Grover’s quantum search algorithm is proportional to the square root of the number of elements (completion on average in O(√n) steps rather than O(n)). Similarly, Shor’s algorithm for finding the prime factors of large numbers showed that the basis of many modern cryptographic systems might be vulnerable to quantum attack.
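To give a feel for that scaling difference, the sketch below simply compares expected query counts for classical unstructured search against the standard Grover iteration estimate (a hedged numerical illustration, not a quantum simulation).

```python
# Expected query counts: classical unstructured search needs on the order of n
# lookups; Grover's algorithm needs on the order of sqrt(n) iterations.
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2                       # expected lookups for a random classical search
    grover = (math.pi / 4) * math.sqrt(n)   # standard estimate of Grover iterations
    print(f"n = {n:>13,}: classical ~{classical:,.0f}, Grover ~{grover:,.0f}")
```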

A number of approaches have been proposed for the production and manipulation of qubits, based on differing quantum mechanical systems.

Gershenfeld of MIT and Chuang of IBM [22, 23] have developed systems that use miniature nuclear magnetic resonance (NMR) devices to control atomic spins in bulk liquids (‘ensemble quantum systems’) — an approach which they have dubbed ‘tabletop quantum computing’ (see Fig 12). Multiple qubits are implemented within each molecule by careful selection and design of those molecules. The maximum number of qubits demonstrated to date in such a system is 7 and there may be practical limits to the design of molecules, perhaps between 10 and 50 qubits. Enhancements such as polymerisation and optical pumping have been suggested to overcome these limits.

Fig 12 MIT MediaLab Quantum Computer.

However, they have shown that simple logic functions can be implemented in practice, and such systems provide excellent tools for investigating and modelling quantum phenomena and systems.

Other quantum architectures may offer better strategies for the scaling issues. Such architectures have been proposed based on approaches such as ion-traps [24], Josephson junctions [25], and self-assembled arrays of quantum dots [19].

The future of quantum computing is unclear — showing great challenges but also great opportunities. It is unlikely that quantum systems will provide a basis for general-purpose computation, but there may be a number of tasks for which they are uniquely well suited — one could imagine a co-processor style arrangement. Similarly there may be shorter-term opportunities for quantum systems, such as quantum cryptography (for example in secure key distribution to satellites) and quantum routeing (for example in all-optical photonic networks).


At present, there are major technological issues in both the hardware and software aspects of quantum computing. Developing the necessary large-qubit systems may take a decade, and it is not clear which of the candidate technologies will prove tractable. The area of quantum programming is still in its infancy, and there are currently relatively few useful quantum mechanical algorithms — tools for compilation and abstraction are still required, but are being researched [26].

3.3.3 Molecular computing

Molecular computing is in some ways a more incremental development on traditional fabrication than is quantum computing.

It seeks to drive down the size of the individual switches within an integrated device by replacing today’s solid-state transistors — relatively large elements with minimum feature sizes of more than 100 nanometres — with individual molecules, each in the nanometre range.

Several approaches [27] have been proposed using molecules, such as rotaxane, that have multiple stable states with different electronic characteristics.

A team at HP and UCLA [28] has patented a system based on a self-organising molecular mono-layer sandwiched between two layers of electrodes. Fabricating the nano-wire grids is a complex problem but candidate technologies include carbon nano-tubes. A further problem is that a high defect rate is likely within each device. The proposed solution is to fabricate the devices as crossbar arrays in such a way that any input can be connected to any output. Fault detection tools are then used to detect defects. The virtual architecture would then be implemented over the redundant switch substrate — still a non-trivial problem for complex devices.
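A toy sketch of that defect-avoidance idea (illustrative only, not the HP/UCLA design): test the crossbar junctions, then allocate logical connections only over junctions known to be good.

```python
# Toy defect-tolerant crossbar: map the faulty junctions, then route logical
# input->output pairs only through junctions that tested good.
import random

ROWS, COLS, DEFECT_RATE = 8, 8, 0.15   # defect pattern is random, for illustration
good = [[random.random() > DEFECT_RATE for _ in range(COLS)] for _ in range(ROWS)]

def allocate(n_connections):
    """Greedily assign logical connections to working junctions."""
    mapping, used_rows, used_cols = {}, set(), set()
    for r in range(ROWS):
        for c in range(COLS):
            if good[r][c] and r not in used_rows and c not in used_cols:
                mapping[len(mapping)] = (r, c)
                used_rows.add(r)
                used_cols.add(c)
                if len(mapping) == n_connections:
                    return mapping
    return mapping   # may be partial if too many junctions are defective

print(allocate(6))
```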

Although the fabrication technique proposed is very different to current lithographic techniques, the resultant architectures are not dissimilar to contemporary memory and field programmable gate array (FPGA) devices. The potential benefits are extremely high element densities and the opportunity for massively parallel architectures.

3.3.4 Biological computing

Biological computing is perhaps at the extreme end of computing research.

It takes as its starting point the observation that naturally occurring biological systems perform complex computation and sensing tasks. By borrowing from nature, and re-engineering where necessary, the hope is to build highly scalable systems.

Biological or cellular computing has a number of significant attractions:

• it could provide a bridge between our existing computing paradigms and the ‘real’ biochemical world — providing, for example, smart drug delivery or environmental sensing,

• the relatively low speed of operation of cellular systems could be offset by the ability to build massively parallel self-assembled systems.

While the use of biological machines to solve biological problems is, in some ways, an incremental step from current techniques such as genetic engineering, their use for arbitrary computational tasks is more speculative, and possibly disruptive. Researchers at universities worldwide have, however, developed some of the early building blocks, such as clocks, memory and switches [29].

4. Disruptive networks

The increasing economic and social importance of information technology has been driven as much by communications and networking technologies as by its underlying computational engines. Information is much more valuable if it can be shared — knowledge in motion is more useful than knowledge at rest.

Over the last ten years, we have seen an explosive growth in high-speed core networks and we are now at the start of a revolution in residential broadband, with major roll-out of technologies such as digital subscriber line (DSL), cable modems and, to some degree, fibre-to-the-home.

At present, fixed and mobile networks are converging on an IP infrastructure that underpins the current Internet. What scale of bandwidth will we demand in the future and what sort of network do we need? Will our current approaches to networking scale? Can they handle the massive increases in complexity, when everyone and everything is ‘on-line’?

On the first question, there seems to be no upper limit to our ability to consume bandwidth, although this is not a straightforward matter.

At the lowest end, a simple signal — such as a ‘my family is alright’ light in the side of your glasses or a warm feeling through your watch — can impart great value and be the culmination of enormous amounts of complex sensing, machine learning and interpretation, yet consume only the most trivial of network bandwidth — what the authors think of as ‘bit dribbling’. Similarly, the availability of extremely dense storage technologies could change the way we think about mobile Internet use by pre-caching huge amounts of information. Bit-dribbling will, however, have enormous impact on network complexity issues.


Typical residential broadband currently provides download speeds between 0.25 and 2 Mbit/s. Is this really broadband? Adequate for Internet browsing as we currently experience it, perhaps, but it can only be thought of as a beginning.

One only needs to visit a high street electronics store to see how, in a few years’ time, a typical home will host multiple high-definition TV (HDTV) systems each consuming 10—20 Mbit/s — and, if some of the developments discussed later in this paper come to fruition, virtually every surface in the home could become a display and interface surface consuming huge bandwidth.

What happens when every camcorder and home security camera becomes Internet enabled? How many will you have when they only cost $5? Where will you put them? Using Microsoft NetMeeting to videoconference from a meeting in Sydney to a newborn in Washington, and the grandparents in London, is great for family cohesion — but it falls short of what customers really want.

At the end of the day, we want communications to break the boundaries between places (in the real and virtual worlds) with immediacy, fidelity and convenience, and to address issues such as serendipity — those completely unplanned meetings that so often happen in the ‘real’ world.

Several things are clear about the future of networks and these, along with market mechanisms, will drive the form of the new ‘Internet’.

• Demand

We will see explosive demand and use of broadband digital networks over the coming decades. Many of the current limiting factors (such as the capital investment cycle as it affects deployment and the reluctance of content providers to trust their wares to the digital world) will take time to work through — but they will be worked through.

• Co-existence

It is clear that no one networking technology will suit all requirements — for example, wireless or satellite systems may initially be better suited to rapid rural roll-out than fibre. However, the near-infinitely scalable nature of confined bandwidth and the scarce nature of free-space bandwidth will lead to new architectural balances being struck.

• Standards

We will not want a different set of networks and protocols for each application type. Increasingly we will wish to decouple the application from the network.

• More things than people

As we will explore later, sensors and other smart devices are becoming ubiquitous.

All of these hint at why the end-to-end, application- and transport-agnostic nature of IP has made it the de facto networking approach of choice.

However, there are a number of trends within the Internet [30], such as the growing role of ISPs, capacity-based work-arounds (e.g. caches, content delivery networks, network address translation), and policy issues such as trust and privacy, which could seriously undermine this design principle.

There is an active debate as to the nature of the future Internet and the outcome will have dramatic implications for the development of this pervasive vision.

4.1 Hyper-mobility

To a great extent, the typical consumer has grown used to the idea that ‘mobility’ equates to ‘cellular’. This is a gross simplification of the mobility concept. A more useful way of thinking of mobility is to compare mobility and customer relationship management (CRM).

In the case of CRM, an enterprise typically wishes to build a consistent interface to its customers, regardless of the channel through which they are in contact (call centre, retail location, Web browser, physical mail, etc) and to build a ‘single view of the customer’. In the case of personal mobility, the consumer wishes to be able to access all of their normal tools, information, relationships and resources wherever they are and with whichever tools are to hand, and to access a ‘single view of the world’.

Currently, we move between isolated islands of connectivity, each with its own rules, costs and interfaces, and as a result ongoing conversation and community (such as between family, friends and teams) is regularly fractured and fragmented. What we would really like is a continuous sense of community and connectivity where relationship and access are maintained seamlessly over an invisible substrate.

This human-centric perspective offers a much richer set of possibilities for applications and services, but also a wider range of challenges. One of these is that there is unlikely to be a single homogeneous wireless data infrastructure in the foreseeable future and, as a result, consumers will be moving through an ever-changing heterogeneous radio environment, which may offer more opportunity than a rigid relationship with a single provider. This is a reality of the regulatory framework surrounding the use of spectrum and the ingenuity engendered by the commercial opportunities. Consider, for example, the current extremes of spectrum management strategy.


At one extreme, government has realised that the demand for television and 3G licences exceeds the available spectrum, and hence a scarce national asset, or one which can be made scarce by design, can be used to generate income through licence auctions, while imposing strict service and social obligations. For operators who win such auctions and obligations, the up-front investment implies a further large investment in high-quality, carefully engineered, standards-compliant infrastructure to develop a market that will deliver a return higher than the cost of capital over the lifetime of the licence. At the other, in the unregulated bands (such as 2.4 GHz), the constraints are very light (typically on radiated power) and companies can be much less risk averse, cycling new technologies on shorter return-on-investment horizons. These issues are discussed in more detail by Mannings and Cosier [31].

However, the biggest difference is that the players in the lightly regulated markets are not just ‘operators’ but enterprises running their own infrastructure and, increasingly, individuals and groups setting up their own low-cost radio environments from high street components such as IEEE 802.11b.

One is a traditional, rigid, planned ecology of operators, regulators and back-to-back contracts — the other is more like the bazaar. The range of alternatives available to a consumer is already expanding rapidly — GSM, GPRS, Bluetooth, IEEE 802.11b, etc.

However, this is the environment in which consumers will find themselves. The question is, how will they cope?

Differing approaches are being taken to this ‘hyper-mobility’ problem — the ability of a consumer to move invisibly between mobile environments. These are typified by MIT’s personal router project [32], which provides a tool with which consumers can operate in the dynamic market for connectivity using a wide range of different negotiating strategies.

One could imagine, for example, that as a consumer using a mid-bandwidth always-on service through GPRS or 3G moves into range of a higher bandwidth wireless LAN access point in a coffee shop, the router would negotiate for connectivity and use the opportunity to upload and download large image or video files. Similarly, on returning home, full synchronisation could take place. This could enable a range of new service offerings — for example, a service provider could offer to negotiate on your behalf for a fixed fee and bulk contracts, or could negotiate dynamically for service to fulfil a global connectivity contract in a locality where they have no infrastructure.
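A hedged sketch of the kind of policy such a personal router might apply (the network figures and the scoring rule below are illustrative assumptions, not details of the MIT project):

```python
# Toy connectivity chooser: given the radio environments currently in range,
# pick one according to a simple user policy. All figures are invented.
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    mbit_s: float        # usable downlink bandwidth
    pence_per_mb: float  # negotiated price

def choose(networks, bulk_transfer_pending):
    """Prefer cheap, fast links when a big sync is pending; otherwise the cheapest link."""
    if bulk_transfer_pending:
        return max(networks, key=lambda n: n.mbit_s / (n.pence_per_mb + 0.1))
    return min(networks, key=lambda n: n.pence_per_mb)

in_range = [Network("GPRS", 0.04, 2.0), Network("coffee-shop WLAN", 5.0, 0.2)]
print(choose(in_range, bulk_transfer_pending=True).name)   # coffee-shop WLAN
```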

Interestingly, the personal router aims to tackle two problems simultaneously — looking outwards to the heterogeneous connectivity environment and looking inwards to build services over the dynamic set of smart objects which you may be carrying at any given time.

In this kind of vision, software radio becomes a key technology in highly heterogeneous environments and markets where it is unfeasible to pre-implement and pre-integrate all necessary radio protocols — far better to provide a system where the device interacts with its environment to identify and acquire the necessary codecs.

In such a world, wireless LAN technology could become a significant disrupter to existing mobile operators. This is part of the debate that surrounds the definition of fourth generation mobile systems (4G) — in fact, the two approaches may be extremely complementary. The shape and structure of a pervasive wireless LAN infrastructure and the associated business models are very much open to debate — whether it would be dominated by organic deployment or a structured roll-out by major players is unclear [31].

4.2 Fine-grain networks

Earlier sections discussed developments in very low-cost computing. In fact, such devices are increasingly being embedded by default in the objects around us. In 2000, more than 8 billion CPU chips were deployed, but less than 2% were in any way networked — the vast majority being micro-controllers embedded in everything from thermostats to toasters.

Increasingly, such embedded processing will become networked. Later sections discuss some of the opportunities of networked environments, but it is increasingly clear that there may be significant benefits in linking the everyday items around us.

It is also painfully clear that, with pervasive networks in which a single room may contain hundreds or thousands of nodes, it is not feasible to manually configure each of those nodes as we currently do with our PCs.

Objects need to be able to join and leave ad hoc networks and to make their services available to other nodes in that network. In addition, these networks are likely to be predominantly wireless, although dedicated communication wiring and power-line systems may be appropriate for major fixed or externally powered devices. It is a mistake to think that wireless is the end of fixed networks — on the contrary, ubiquitous wireless access will need more and more interconnections with fibre and copper systems. This is discussed in more detail in Mannings and Cosier [31].


Architectures are being developed for this kind of use in a variety of environments, such as dense or remote sensor applications, smart homes and military domains; most focus on radio communications, for example:

• GRAd (gradient routing for ad hoc networks), a low-power multi-hop approach such as that used in MIT’s Arbornet [33] for distributed environmental sensing,

• LEACH (low-energy adaptive clustering hierarchy), a hierarchical system used in the MIT µAMPS project [34, 35] (a sketch of LEACH-style cluster-head rotation follows below).

However, some, such as Berkeley’s ‘smart dust’ project [36], also embody free-space optics as a basis for ad hoc networking among MEMS-based sensor devices — given the advantageous power characteristics of point-to-point optics over radiated RF. Further discussion on sensing can be found in section 5.1.2 and Fig 15.
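As referenced above, a minimal sketch of LEACH-style cluster-head rotation (the threshold formula is the commonly published LEACH one; the parameters are illustrative):

```python
# LEACH-style cluster-head election: each round, every node that has not yet served
# as a head volunteers with probability T(n), so the energy-hungry head role rotates.
import random

P = 0.05             # desired fraction of cluster heads per round (illustrative)
NODES = 100
been_head = [False] * NODES

def threshold(round_no):
    """Published LEACH threshold: P / (1 - P * (r mod 1/P))."""
    return P / (1 - P * (round_no % int(1 / P)))

def elect_heads(round_no):
    heads = []
    for n in range(NODES):
        if not been_head[n] and random.random() < threshold(round_no):
            been_head[n] = True          # real LEACH resets this flag every 1/P rounds
            heads.append(n)
    return heads

for r in range(3):
    print(f"round {r}: cluster heads {elect_heads(r)}")
```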

As discussed above, one of the underlying characteristics that has enabled IP to become a pervasive set of networking protocols is that, at the macro-level of computing and networking, it has proved to be an effective common denominator — agnostic to the lower level transport layers and to the higher level application protocols.

As a result, a vision has emerged that everything will be IP ‘enabled’ — for example, household items will be allocated an IP address, mobile networks will become IP end-to-end. This is extremely attractive, opening the way to extremely flexible networks on to which new device types and applications can easily be added at will.

However, it has been suggested that, with extremely dense ad hoc networks, IP will not be adequate and new protocols will be required.

Two preliminary MIT projects investigating these issues are the ‘paintable’ computing project [37], which envisages tiny processing, sensing and networking elements so small that they can be mixed and applied to surfaces along with paint, and MyriadNet, a successor to the work on very compact protocol implementations described above.

MyriadNet is investigating networking and programming techniques which may be needed when virtually every device around us is computationally enabled and applications are performed in an ensemble fashion throughout the network. In this vision, applications may be distributed between billions of tiny, networked processors or sensors, and emergent behaviour, arising from the complex activity of billions of tiny co-ordinated activities, becomes a powerful tool.

5. Disruptive environments

Preceding sections have discussed several issues relating to the increasing permeation of computation and networking into our everyday environments. In this section, we will examine some of the other disruptive technology developments that will help to define those environments.

Firstly, we discuss some of the raw component technologies — identity, sensing and smart materials. Secondly, we discuss two synergistic approaches to application and deployment — wearable computing and smart environments.

5.1 Environmental components

5.1.1 Identity

An alternative to embedding computation into every item is to attach identity. In many applications, identity can be used as a proxy for intelligence, with the computation associated with an object being performed remotely.

Perhaps the most common example of pervasive identity is the UPC bar code — over 5 billion barcodes are read every day in 140 countries. Although the barcode is the mainstay of the supply chain in most developed nations, it does have disadvantages — for example, it can only be read in line of sight and, with the current standards, the available numbering range typically only allows resolution down to stock keeping unit (SKU) — for example, to identify that a box is a box of cornflakes, or, in limited cases such as international shipping, down to individual envelopes.

Radio frequency (RF) tagging is set to supersede the barcode and allow an individual object to be identified throughout its lifetime — from manufacture to recycling.

RF tagging is contact-less — the tag is excited with a radio signal, using either inductive or capacitive coupling, and returns a unique identity. Different tagging technologies have different characteristics such as range — for example, the tag on a train carriage or car has different requirements to that on a box of cornflakes.

To date there have been two major issues limiting the roll-out of RF tagging — cost and standards. There are additional technological issues associated with the use of RF tagging — for example, the ability to interrogate dense groups of tags, such as a shopping trolley full of groceries or every tagged item in a truck. However, if the cost and standards issues can be solved for base-functionality tags, and if the standards can be future-proofed, we are likely to see explosive use of tags in the coming years — along with the generation of enormous value from the use of the data.



• Standards

The Auto-ID centre at MIT [38] is a broadly based industrial consortium that is developing and trialling the standards and back-end techniques that will enable the economy-wide roll-out of a pervasive tagging infrastructure.

In addition to rehearsing ‘use cases’ and exploring the information-related issues such as privacy and ownership in city-scale trials, it is focusing on three key technical issues:

— electronic product code (EPC) — standard for unique identity (for example bit length and partitioning),

— object naming service (ONS) — the equivalent of the Internet DNS but for objects, allowing requests related to a given object to be routed correctly,

— product mark-up language (PML) — XML-like data associated with an object, including static data (known at ‘manufacture’, e.g. expiry, recycling), dynamic data (such as tracking or sensor history), instructions (such as washing instructions), and software (support applications); an illustrative sketch of an ONS-style lookup over a PML-like record follows at the end of this bullet.

One could envisage a typical scenario where:

— an item is given an identity and database entry at ‘birth’,

— at each stage along the automated manufacturing process, its records are updated with information such as the individual tools used on it and which individuals handled it,

— along the supply chain, it is regularly scanned and routed, and additional environmental information is logged,

— it is tracked in, around and through the store,

— it interacts intelligently with the home before integration or consumption,

— it can be interrogated in use,

— it is effectively recycled.

As well as enabling a new level of safety, quality and accountability, this could have major implications for supply chain operations, where uncertainty and guesswork equate to cost. For example, the well-known ‘bull-whip’ effect shows that, due to the poor transmission of information down the supply chain and the presence of buffer stocks, small changes in customer demand lead to wild swings at the manufacturer. Improved information flow not only reduces risk and hence cost, but also enables a whole multitude of smart customer applications.
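As referenced above, a hedged sketch of how an ONS-style lookup over a PML-like record might hang together (the identifier format and the record fields are invented for illustration and are not the Auto-ID centre’s actual schemas):

```python
# Toy object-name resolution: an EPC-like identity is resolved to PML-like data,
# much as DNS resolves a host name to an address. Everything below is invented.
ONS = {
    "epc:0614141.100734.400": {
        "static":       {"product": "cornflakes 500g", "best_before": "2002-03-01"},
        "dynamic":      [{"event": "scanned", "where": "dock 4", "when": "2001-10-02T09:14"}],
        "instructions": {"recycling": "cardboard"},
    }
}

def resolve(epc):
    """ONS-style lookup: route a request about a tagged object to its data."""
    return ONS.get(epc)

record = resolve("epc:0614141.100734.400")
if record:
    print(record["static"]["product"], "-", record["instructions"]["recycling"])
```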

• Cost

In order for a pervasive infrastructure of identity to develop, the cost of tags has to be driven down to an extremely low level. For it to be cost-effective to attach tags to cornflake packets, the cost per tag has to be pennies or sub-penny, which is a significant technological challenge.

Motorola (Bistatix) and Hitachi (Meu-chip) have both announced very low-cost tag technologies, both designed to be cheap enough to embed in, or be attached to, paper. In the former case, the Bistatix devices, developed in collaboration with the MIT Media Lab, use capacitive coupling through printed carbon antennas, allowing a ticket to be printed with its own aerial [39].

Hitachi has unveiled a tiny Radio Frequency Identification (RFID) chip, so small that it can be embedded in money or disposable packaging. The chip is 0.4 mm square (Fig 13), slightly larger than a grain of sand, and has 128 bits of read-only memory that can be programmed to store identification and security codes. It operates to a range of approximately 30 cm to a reader unit connected to the network.

Fig 13 Hitachi ‘Tiny’ RFID chip.

Within the MIT Media Lab’s ‘Things That Think’ consortium [40], the boundaries of cost have been pushed further with resonant tags — where the physical characteristics of the antenna are designed to allow it to return an identity via its specific resonance under excitation. Such tags can be printed or stamped from conductive sheets.

Although the primary drivers for the development of RF tagging have been supply chain and product-safety issues, some of the really disruptive implications will come from the fact that suddenly each physical artefact is a bridge between the physical and virtual worlds — the world of bits and the world of atoms. Every item now comes with its own bundle of information — a history, instructions, advice, etc.


In a wired world where media is delivered on-line, the link with the physical is re-established, but in a form which is more aesthetic than functional. Any object could represent any information or information-mediated action. Every identity can represent a URL or a segment of XML. The rights to watch a film no longer have to look like a VHS tape. Hiroshi Ishii of the Tangible Media group at MIT [41] has shown that aesthetically desirable objects can be used to represent communications and media — for example, hand-blown bottles that can each be opened to release the sound of home (Fig 14), or a ‘smart instrument’ capable of collaborating with its peers to create a unique arrangement, or of making use of its positional information.

Fig 14 ‘Bottles of sound’. [Courtesy Webb Chappell]

Identity has real value and, as so often, the true disrupter is information.

5.1.2 Sensing

We live in a world where most manufactured artefacts have no sense of their surroundings — your house does not know if you are home and your telephone does not know if you are already in conversation.

There are two complementary revolutions under way in sensing:

• sensing components,

• making sense of sensors.

Sensing components

It has been suggested that, before long, sensors will dominate the Internet. There are two elements to this prediction.

Firstly, as already hinted at, the world is full of sensors (such as fridge door switches), but they are unable to communicate with anything except in the most simplistic way (‘fridge door switch’, meet ‘fridge light’). If we follow the logic of pervasive networking outlined above, this will change.

Secondly, it is increasingly inexpensive to produce complex sensors. There are a number of reasons for this, including our increasing understanding of the use of materials and structures as sensors, and our rapidly increasing ability to design new materials with new physical characteristics. As a result, there are a number of potentially disruptive developments in sensing technology.

• MEMS

A key emergent technology for sensing is that of MEMS. In this context, MEMS devices are interesting not only because they can be manufactured in huge numbers using well-established techniques from silicon fabrication, but also because they may be able to harvest usable amounts of power from their environment — for example, through vibration or integrated solar-cells.

Perhaps the most dramatic example of MEMS sensing is the Smart Dust project at Berkeley [36] (Fig 15). This ambitious programme is attempting to develop distributed sensing systems (initially for military applications) where each element is a millimetre-scale MEMS device (‘micro-mote’) combining sensing, power harvesting and communications (radio or free-space optical). Early demonstrations have included the aerial dusting of a target area by UAV (unmanned aerial vehicle) using 1 inch ‘macro-mote’ prototypes to form an ad hoc wireless sensor network and the subsequent detection of military vehicles.

Fig 15 Berkeley Smart Dust.

Similar programmes include the MIT µAMPS distributed sensing project [34] described above and the DARPA ‘ultra-low power sensor project’ that is developing extremely small, low-power wireless visual sensors [42].

• Bio-sensing

Another area where sensing technology is progressing in new directions is in chemical and biological sensing. This has huge potential in areas such as health and defence.


In the area of healthcare, it is increasingly clear that our current approach to the provision of healthcare services will not scale effectively to the social challenges of the future, such as the demographic time bomb. In his analysis of disruption in the health industry, Christensen [43] postulates how deployment of distributed health technologies such as sensing could attack this problem by enabling an increasing proportion of diagnosis and treatment to be moved from the specialist practitioner to self-care. He comes to a number of conclusions, including:

— create, then embrace, a system where the clinician’s skill is matched to the difficulty of the medical problem,

— invest less money in high-end, complex technologies and more in technologies which simplify complex problems,

— create new organisations to do the disrupting.

In this model, cheap biological sensing becomes key and this is, in part, the logic behind activities such as MIT’s silicon biology special interest group [44].

Within the military and civil security domains, chemical and biological weapons have become an increasing concern within both conventional and non-conventional conflicts. To meet this challenge, DARPA has engaged in significant activities in both tissue-based biosensors and controlled biological systems which actively co-opt biological components for sensor applications [45, 46]. Research directions in ‘biology as technology’ include the coupling of lamprey brainstems to vision-enabled robotic systems and the use of insect components as chemical monitors [47]. Other groups have investigated the growth of neurons on silicon substrates.

• Smart tags

An earlier section touched on the subject of RF tagging and, in particular, low-cost tags based on the physical characteristics of the tag material and its physical structure.

The Physics and Media group at MIT have developed a number of tags that also incorporate sensing, by altering the RF characteristics of the device (Figs 16 and 17). Similarly, it may be possible for a traditional RF-powered tag to conduct and record a sensor value only when being powered by the RF interrogation field, or for a power-harvesting MEMS device to report stored data on interrogation.

Fig 16 Motorola Bistatix — low-cost RF tags embedded in everyday objects.

Fig 17 MIT Printable Tag Structures — very low cost RF sensors. [Courtesy Webb Chappell]

This is a good example of the blurring of boundaries — it becomes increasingly difficult to distinguish between sensing, tagging, computation and networking. As we increasingly embed differing combinations of these core functions into composite devices, the boundaries could become increasingly arbitrary — especially in the very dense environments discussed earlier.

Making sense of sensors

As we fill the world with sensors, how are we to make sense of the data that they produce? Increasingly, statistical pattern-matching techniques are being applied to dense networks of heterogeneous sensors, such as could be expected in smart environments (e.g. homes), to extract high-level behaviour, such as patterns of human usage of a space (Fig 18).
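
As a minimal, hypothetical sketch of this kind of processing (the event format, sensor names and profiling approach below are illustrative assumptions rather than a description of any system cited in this paper), timestamped events from simple binary sensors can be aggregated into an hour-of-day profile of how a space is used:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative only: each event is (ISO timestamp, room), e.g. a motion or
# door-switch trigger. The profile simply counts events per (room, hour-of-day).

def activity_profile(events):
    """Summarise how a space is used by counting events per room and hour."""
    profile = defaultdict(int)
    for timestamp, room in events:
        hour = datetime.fromisoformat(timestamp).hour
        profile[(room, hour)] += 1
    return profile

if __name__ == "__main__":
    events = [
        ("2001-10-01T07:12:00", "kitchen"),
        ("2001-10-01T07:45:00", "kitchen"),
        ("2001-10-01T22:30:00", "bedroom"),
        ("2001-10-02T07:20:00", "kitchen"),
    ]
    for (room, hour), count in sorted(activity_profile(events).items()):
        print(f"{room:8s} {hour:02d}:00  {count} event(s)")
```

A deployed system would use far richer statistical models over many heterogeneous sensors, but even a crude profile of this kind captures coarse patterns of occupancy.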

The use of sensor arrays and other sensor ensembles is becoming increasingly important. A good example of this is the development of ‘virtual camera’ technologies, which apply the trends seen in synthetic aperture radar, towed array sonar and virtual radio telescopes to the visual arena.

The price of digital cameras continues to drop, driven to some extent by consumer demand for video cameras and still digital cameras. Inexpensive cameras are now even embedded in sub-$200 watches, and it seems almost inevitable that traditional film will cease to be a mainstream technology for still and moving image capture — given the additional value which can be generated by the sharing of digital media across applications. As this process continues, and as we continue to deploy security cameras around town centres and public areas, the question arises of how the information from these sensors could be combined to yield more useful information.

Techniques such as ‘dynamically re-parameterised light fields’ [48] enable the output of multiple cameras to be fused to produce arbitrary camera positions while accurately simulating effects such as depth of field and parallax as the camera is swung through the space. Similar systems have been used at major sporting events such as the NFL Super Bowl.

Further approaches such as ‘visual hulls’ provide computationally tractable techniques for the extraction of 3D objects, such as people, from multiple real-time video streams — allowing a virtual camera to be positioned at any point in the space around them, also in real time [49]. Imagine being able to follow a terrorist suspect or a lost child using a virtual camera made up of all of the individual security cameras in a town centre.
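
The principle behind the visual hull is shape-from-silhouette: a point in space can belong to the object only if it projects inside the object’s silhouette in every camera. The toy sketch below is a deliberately simplified, assumption-laden illustration of that intersection test (orthographic views along the coordinate axes), not the real-time perspective-camera algorithm of [49]:

```python
import numpy as np

def visual_hull(sil_xy, sil_xz, sil_yz):
    """Carve a voxel grid from three orthographic boolean silhouettes.

    A voxel survives only if its projection lies inside every silhouette;
    the surviving set approximates the object's visual hull.
    """
    n = sil_xy.shape[0]
    hull = np.zeros((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                hull[x, y, z] = sil_xy[x, y] and sil_xz[x, z] and sil_yz[y, z]
    return hull

if __name__ == "__main__":
    n = 32
    idx = np.arange(n)
    xx, yy = np.meshgrid(idx, idx, indexing="ij")
    disc = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 3) ** 2
    # Three circular silhouettes carve out a rounded solid (the intersection
    # of three perpendicular cylinders), a crude stand-in for a person or object.
    hull = visual_hull(disc, disc, disc)
    print("occupied voxels:", int(hull.sum()))
```

Real systems use calibrated perspective cameras and far more efficient image-based intersection tests, but the underlying intersection principle is the same.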

When coupled with face-recognition software, like that deployed in cities such as London, Birmingham and Tampa [50], such technologies also raise the social issue of the balance between security and privacy.

In the medical arena, a number of groups have demonstrated the use of 3D scanning (such as NMR and MRI) to enhance surgical preparation [51]. For example, in the same way that satellite and other imagery can allow a pilot to fly a mission before stepping into his plane, a surgeon can now explore and rehearse complex operations such as colonoscopy.

Sensor systems and sensor-enhanced systems such as these help us to gain an enhanced view of the world around us, beyond the ability of our own senses — for example, at scales or frequencies for which evolution has not equipped us. Context-aware systems also become possible, allowing feedback on user activity or location. The Smart Shoe has six sensors that relay information on speed, angle, movement, acceleration and force (see Fig 19).

Fig 19 The ‘Smart Shoe’.

An equally intriguing application of such approaches is to turn the tables and enable systems to understand us. The devices that surround us are, in the main, completely oblivious to our presence and our moods — for example, your PC has no sense of your level of frustration and your car has no sense of your level of tiredness.

Affective computing [52] aims to build systems which are more aware of their users and which are able to adapt their behaviour based on that understanding. MIT’s Rosalind Picard has shown that even simple sensing can make an enormous difference — for example, that a car can develop an understanding of when you are about to execute a manoeuvre [53].

5.1.3 Smart materials

Much of the preceding discussion has touched on the way in which computation or intelligent behaviour is being designed into devices and materials. Rational design of materials, nano-fabrication and related technologies are enabling enormous opportunities in smart materials — from bulk materials to fabrics.

Space does not permit a full examination of this area; however, a good example of the capabilities that are now emerging is E Ink.

Developed by MIT’s Molecular Machines Group [54] and subsequently commercialised [55], E Ink (Fig 20) follows the approach described earlier for the production of very low cost computing — printing.

Fig 18 Small passive RF tag.


In this case, a flexible substrate (typically plastic) is coated with a monolayer of microcapsules. Each microcapsule is about the diameter of a human hair and contains positively charged white particles and negatively charged black particles suspended in a clear fluid. When a negative electric field is applied, the white particles move to the top of the microcapsule, where they become visible to the user. This makes the surface appear white at that spot. At the same time, an opposite electric field pulls the black particles to the bottom of the microcapsules, where they are hidden. Once the field is removed, the particles remain in place.

By sandwiching the coated substrate between two electrode layers, a very flexible high-resolution display can be produced.

Such approaches scale very well to the types of manufacturing technology already in use for large-scale printing and for specialist paper products by companies such as 3M.

Although significant technological challenges remain, the way has been opened to very low cost display technologies in the form of active paper, which could scale from roll-up portable devices to wall-sized displays.
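
As a purely illustrative toy model of the bistability just described (not E Ink’s actual drive electronics), a pixel can be treated as a two-state element: a field of one polarity writes white, the opposite polarity writes black, and the last-written state is retained when the field is removed:

```python
from enum import Enum

class Shade(Enum):
    WHITE = "white"
    BLACK = "black"

class EPaperPixel:
    """Toy bistable pixel: state changes only while a field is applied."""

    def __init__(self):
        self.state = Shade.WHITE  # arbitrary initial state

    def apply_field(self, negative: bool):
        # Per the description above, a negative field brings the positively
        # charged white particles to the viewing surface.
        self.state = Shade.WHITE if negative else Shade.BLACK

    def remove_field(self):
        pass  # bistable: no power is needed to hold the last-written state

if __name__ == "__main__":
    pixel = EPaperPixel()
    pixel.apply_field(negative=False)
    pixel.remove_field()
    print(pixel.state)  # Shade.BLACK, retained without power
```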

5.2 Affecting our everyday lives

The preceding sections have outlined a number of areas of disruptive technological innovation. As consumers, we will only begin to see their full impact as they become pervasive. This section discusses two approaches to the widespread deployment of such technologies into our everyday lives — wearable computing and smart environments.

5.2.1 Wearable computing

The vision of wearable computing is to move from our current experience of discrete personal devices (cellphones, PDAs, watches, etc) to one where our various devices work together seamlessly to provide timely, context-aware support for normal activities. In the simplest example, does each of our personal tools need its own display?

There are a number of wearable computing initiatives worldwide, particularly in the military arena, where the opportunity to integrate battlefield C4I (command, control, communication, computing and intelligence) more tightly into the individual soldier’s environment is extremely compelling. Examples include the UK FIST (future integrated soldier technology) and US Land Warrior programmes.

MIT’s MIThril [56] is an experimental context-aware wearable computing research platform which combines small, lightweight RISC processors, a single-cable power/data ‘body bus’, and high-bandwidth wireless networking in a package that is nearly as light, comfortable, and unobtrusive as ordinary street clothing (Fig 21).

An example of an application being developed on the MIThril platform is ‘memory glasses’ — an attempt to build a wearable, proactive, context-aware memory aid based on the MIThril platform and wearable sensors. The primary goal of this project is to produce an effective short-term memory aid and reminder system that requires a minimum of the wearer’s attention. The aim of the system is to deliver reminders to the wearer in a timely, situation-appropriate way, without requiring intervention on the part of the wearer beyond the initial request to be reminded. In other words, the system behaves like a reliable human assistant that remembers reminder requests and delivers them under appropriate circumstances. Such a system is qualitatively different from a passive reminder system (such as a paper organiser) or a context-blind reminder system (a modern PDA), which records and structures reminder requests but which cannot know the user’s context.
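
A minimal sketch of such context-triggered delivery might look like the following; the context attributes and names are hypothetical illustrations, and the real system infers context from wearable sensors rather than being handed it as a dictionary:

```python
from dataclasses import dataclass

@dataclass
class Reminder:
    message: str
    trigger: dict  # context attributes that must all match for delivery

    def matches(self, context: dict) -> bool:
        return all(context.get(key) == value for key, value in self.trigger.items())

def deliver(reminders, context):
    """Return the reminders whose triggers match the currently sensed context."""
    return [r.message for r in reminders if r.matches(context)]

if __name__ == "__main__":
    reminders = [
        Reminder("Buy milk", {"location": "supermarket"}),
        Reminder("Ask Alice about the report", {"person_nearby": "alice"}),
    ]
    sensed = {"location": "supermarket", "person_nearby": "bob"}
    print(deliver(reminders, sensed))  # ['Buy milk']
```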

Several of the key components of wearable computing are still in the early stages (for example, high-capacity power sources, unobtrusive glasses-mounted displays and woven wiring). However, there may well be enough high-value lead-user groups, such as ‘blue-light services’, military, healthcare and aerospace, to sustain the growth required.

Such groups are likely to demand the types of capability which wearable computing could provide before pervasive smart environments become widespread, and in locations with little infrastructure. Once such environments are deployed, wearable computing becomes synergistic — interfacing with, or becoming part of, the environment.

The key challenges in wearable computing may lie more in our understanding of how to develop usable applications than in the technology — in human factors rather than miniaturisation.

Fig 20 Electronic ink microcapsules.


Gershenfeld [1] suggests that this is not just about convenience; it could usher in a new era in our evolution.

“Wearable computers are a revolution that I’m certain will happen, because it already is happening. Three forces are driving this transition: people’s desire to augment their innate capabilities, emerging technological insight into how to embed computing into clothing, and industrial demand to move information away from where the computers are and to where the people are.

“The organization of life has been defined by communications. It was a big advance for molecules to work together to form cells, for cells to work together to form animals, for animals to work together to form families, and for families to work together to form communities. Each of these steps, clearly part of the evolution of life, conferred important benefits that were of value to the species. Moving computing into clothing opens a new era in how we interact with each other, the defining characteristic of what it means to be human.”

5.2.2 Smart environments

Research into smart environments aims to exploit many of the technologies discussed in earlier sections to actively support the pursuits of occupants.

At present, most built environments have no sense of their occupants or their activities — sadly, you may care deeply for your home, but it neither knows nor cares about you.

Research by BT on smart environments for supported living showed that even limited sensing and intelligence could provide substantial benefits. For example, statistical techniques can be applied to the output of simple sensors, such as fridge doors and light switches, to build valuable models of user behaviour — enough to estimate whether an elderly person has fallen.
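
As a hedged illustration of the idea (not BT’s actual model), one crude approach is to learn how long the gaps between simple sensor events normally last and to raise an alert when the current gap is unusually long:

```python
from statistics import mean, stdev

def inactivity_alert(historical_gaps_min, current_gap_min, threshold_sigma=3.0):
    """Flag a gap between sensor events that is far outside the learned baseline."""
    mu = mean(historical_gaps_min)
    sigma = stdev(historical_gaps_min)
    return current_gap_min > mu + threshold_sigma * sigma

if __name__ == "__main__":
    # Toy data: typical daytime gaps (minutes) between kitchen sensor events.
    history = [12, 18, 25, 9, 30, 22, 15, 27, 11, 20]
    print(inactivity_alert(history, current_gap_min=35))   # False: within normal range
    print(inactivity_alert(history, current_gap_min=240))  # True: unusually long inactivity
```

In practice such a threshold would need to vary with time of day and with the individual’s routine, which is where the behavioural models mentioned above come in.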

Programmes such as MIT’s House_n [57] (Fig 22) and Georgia Tech’s Aware Home Research Initiative [58] attempt to take a broad perspective on the home of the future by considering the whole life cycle of the home — not only rethinking the building industry and housing to be more adaptive to its occupants’ needs, but also providing the experience of knowledge-mediated living.

The smart environment is not limited to the home, but is equally applicable to the office, the airport and the car.

Smart environments should be capable not only of supporting the activity currently under way (for example, MIT’s facilitator room [59] uses machine-learning visual and audio analysis of occupant behaviour to provide active environmental and information support to meetings of various kinds), but also of supporting broader local or social objectives, such as energy management and distributed healthcare.

The new extension to the MIT Media Laboratory is being implemented as a ‘smart building’ which brings together many of the strands outlined throughout this paper. The intention is that every active component in the building, down to lamps and switches, will be networked — the early prototypes are based on the MyriadNet approaches described earlier.

Fig 21 MIT MIThril wearable vest. [Courtesy Webb Chappell]


An equally ambitious programme in this area is the MIT Oxygen project [60], which aims to enable an architecture and environment in which computing is as ubiquitous as the air we breathe. The Oxygen vision is to:

“... bring an abundance of computation and communication to users through natural spoken and visual interfaces, making it easy for them to collaborate, access knowledge, and automate repetitive tasks. The Oxygen system aims to be:

• pervasive — everywhere, with every portal reaching into the same information base,

• embedded — of our world, sensing and affecting it,

• nomadic — users and computations must be free to move around according to their needs,

• eternal — it must never shut down or reboot; components may come and go in response to demand, errors, and upgrades, but Oxygen as a whole must be non-stop and forever.” [60]

To achieve this, it is developing three key infrastructural components:

• N21 — a pervasive ad hoc network,

• H21 — a ‘soft’ hand-held interface device including software radio,

• E21 — environmental devices that co-ordinate and mediate between advanced sensors such as microphone arrays.

It is easy to see that, in this style of environment, wearable computing elements could potentially move seamlessly, sharing resources dynamically without user intervention.

Interesting issues occur at the interface between the private (wearable) and the public (environmental). What, or whom, do we relate to? We are used to an environment in which each functional element (a light switch, a clock, a telephone) is discrete. What happens when all are connected? Whom should we address?

Will our PDA be our ever-present interface, or should we address the room? In a personified world of humanistic interfaces, where we are able to assign distinct personalities to devices, should we carry our guardian angel with us or should it magically appear in any room where we happen to be — or even prepare the environment in advance of our arrival?

6. Discussion and conclusions

The previous sections have explored a wide range of emerging, potentially disruptive, technologies in the communications domain. It is clear that, although the discussion has focused on computing, networking and environments, there are cross-currents which make these disciplines almost indistinguishable.

Fig 22 Active walls and environments in the MIT House_n project.


Increasingly, as individuals, we will move through environments that are aware of our presence and abuzz with conversations of their own.

The boundaries between the physical and the virtual will have been softened and augmented. Not only will we interrogate items — use them as anchors and links to information, presence and emotion — but they will continue their own quiet, secret lives: sensing, communicating, calculating, being told that they have expired.

We will expect to be connected unobtrusively and continuously to our friends, family and colleagues — but that connectedness is likely to be much more varied than we are used to; some of it may be a soft touch of community, while other forms will be highly engaged, immersive or enhanced environments shared across continents.

Many tasks which we currently think of as discrete will become continuous — for example, a shift from healthcare dominated by occasional visits to the doctor to wellness management mediated by sensors and advisory tools embedded into our lifestyle.

The computer as an explicit item may disappear from the lives of the vast majority of people — absorbed into the environment and abstracted by interface.

We may not know where complex calculations are made or where the data we use is stored. We may not even know which elements around us are parts of the conversation, as our understanding of the physical, chemical and biochemical worlds softens the boundary between computation, material and design. Even though some of our current technologies and fabrication techniques are approaching their end-stops, we are not yet approaching many of Feynman’s fundamental physical limits — there is still plenty of room at the bottom.

Is this a utopian or a dystopian view of the world?

When we can track a million people as they walk through a city, or blow a handful of nano-sensors into a school, we are entering a very different era.

It is clear that the developed world faces significant challenges for which some of the technologies described may form part of the solution. For example, the demographic time bomb could prise apart the cracks in healthcare and social provision — distributing health tools to the periphery and rational drug design for genetic subgroups could be absolutely vital. Imagine being able to undertake real-time in vivo sensing of blood proteins and to intervene directly when problems arise.

On the other hand, many of the technologies also allow us to undertake tasks that we have never been able to attempt before — such as bringing together elements of computational and biological systems, which some may find problematic.

In his study of the implementation of social objectives in the developing Internet, Lessig [61] discusses the potential for the development of a pervasive infrastructure for identity — mediated by the requirement of commerce to be able to conduct secure transactions, but at the cost of anonymity and privacy. He develops a 4-force model for the implementation of values that captures the realpolitik of technological change:

• norms — social values,

• markets — design of incentives and business ecologies — ‘carrots’,

• law — the threat of sanctions — ‘sticks’,

• architecture — how we build our systems and networks.

It seems very likely that some of the issues raised by the progression of the information society will become key public policy issues and subjects for public debate in the coming years. There will be many pragmatic decisions to be made — and the questions are not straightforward.

To some extent, discussion on the impact of technologies such as those outlined here tends to focus on the elite and middle classes of the developed world. However, it is underserved communities, in both the developed and developing worlds, that may stand to gain the most.

The availability of low-cost computational and communications technology can revolutionise the fortunes of such communities. For example, the Lincos project [62] has deployed modified shipping containers, each equipped with a satellite link and a wireless local area network, to form hubs for education and healthcare in the rural villages of Latin America, while the SARI project [63] has used low-cost communications to enable sustainable economic development in rural India by opening access to agricultural market information.

Once wired, such communities become connected to the global economy in a way that has never happened before. Previously constrained innovation and entrepreneurship can be unleashed on global markets. In the words of Jose Maria Figueres [64], former president of Costa Rica: ‘It’s not a time of change, but a change of times.’

We are faced with enormous opportunities but also extremely challenging issues — technological, social and economic — on our way to the pervasive future discussed.


However, it seems both achievable and well worth the journey.

In the words of Bill Mitchell in his discussion on the effects of the digital revolution on the urban environment and our sense of place [65]:

“The time for breathless, the world-is-new, anything-is-possible rhetoric has passed. And it turns out that we face neither millennium-any-day-now nor its mirror image apocalypse-real-soon. Instead, we have been presented with the messy, difficult, long-term task of designing and building for our future — and making some crucial social choices as we do — under permanently changed, post revolutionary conditions.”

In the emerging environment of apparent technological plenty, we may be able to progress from a perspective dominated by the detail of incremental technological improvement to one of human-centric and issue-based development. There is the potential for a radical shift in the democratisation of design and implementation — giving people the tools and opportunity to imagine, fabricate, adapt and deploy according to their needs, across social and economic boundaries.

In this way, ‘things that think’ become ‘things to think with’. The reunification of bits and atoms gives us a varied new box of tools with which to build our future. By placing these in the hands of the many rather than the few, we have the opportunity to unleash enormous disruptive forces of development.

References

1 Gershenfeld N A: ‘When Things Start to Think’, Henry Holt & Co, Inc (1999).

2 Sun Microsystems — http://www.sun.com/neteffect/

3 Lippman A: Private conversation.

4 Donofrio N: ‘Technology innovation for a new era’, IEE Comp and Cont Eng J, 12, No 3, pp 115-124 (June 2001).

5 IBM ‘Blue Gene’ — http://www.research.ibm.com/bluegene/

6 Foster I and Kesselman C (Eds): ‘The Grid: Blueprint for a New Computing Infrastructure’, Morgan Kaufmann (1999).

7 Beowulf — http://www.beowulf.org/

8 Globus — http://www.globus.org/

9 National Science Foundation (NSF) — http://www.nsf.gov/

10 SETI@home: ‘The search for extraterrestrial intelligence’, — http://setiathome.ssl.berkeley.edu/

11 Hariharasubrahmanian S, MIT Media Lab: ‘IPic — A match-head sized Web-server’ — http://www-ccs.cs.umass.edu/~shri/iPic.html

12 Adada H, MIT d’Arbeloff Laboratory: ‘Low-E Computation’ — http://darbelofflab.mit.edu/research/LowE.html

13 Millennial Networks: ‘I-Bean’, — http://www.millennial.net/

14 Ipsil: ‘Connecting absolutely everything’, — http://www.ipsil.com

15 Sun Microsystems: ‘JINI network technology’, — http://www.sun.com/jini/

16 Gartner Group: ‘Q&A: The Impact of Bluetooth and Jini’, Research Note QA-12-7002 (February 2001).

17 Feynman R: ‘There’s plenty of room at the bottom’, a presentation to the American Physical Society (December 1959).

18 ‘Print your next PC’, MIT Technology Review (Nov/Dec 2000).

19 NSF: ‘Quantum Information Science’, NSF workshop (October 1999) — http://www.nsf.gov/cgi-bin/getpub?nsf00101

20 Nielsen M and Chuang I: ‘Quantum Computation and Quantum Information’, Cambridge Univ Press (2001).

21 Zeilinger A: ‘Quantum Experiments and the Foundations of Physics’ — http://www.quantum.univie.ac.at/

22 Gershenfeld N and Chuang I: ‘Quantum Computing With Molecules’, Scientific American (June 1998) — http://www.sciam.com/1998/0698issue/0698gershenfeld.html

23 Maguire Y, Boyden E and Gershenfeld N: ‘Towards a Quantum Computer’, IBM Systems Journal, 39, No 3 & 4, pp 823-839 — http://www.research.ibm.com/journal/sj/393/part3/maguire.html

24 Orlando T, MIT Superconducting and Quantum Electronics Group — http://web.mit.edu/superconductivity/

25 University of Oxford, Centre for Quantum Computation — http://www.qubit.org

26 Maguire Y: Private discussion.

27 Reed M and Tour J: ‘Computing with Molecules’, Scientific American (2000) — http://www.sciam.com/2000/0600issue/0600reed.html

28 ‘5 patents to watch — molecular memory’, MIT Technology Review (May 2001) — http://www.technologyreview.com/magazine/may01/patents5.asp

29 ‘Biological Computing’, MIT Technology Review (May 2000) — http://www.technologyreview.com/magazine/may00/garfinkel.asp

30 Blumenthal M and Clark D: ‘Rethinking the design of the Internet: the end-end argument vs. the brave new world’, in Compaine B M and Greenstein S (Eds): ‘Communications Policy in Transition’, The MIT Press (2001).

31 Mannings R and Cosier G: ‘Wireless everything — unwiring the world’, BT Technol J, 19, No 4, pp .... (October 2001).

32 Clark D and Wroclawski J: ‘The Personal Router’, Whitepaper, MIT Lab for Computer Science, ver 2 (March 2001) — http://ana-www.lcs.mit.edu/anaweb/PDF/PR_whitepaper_v2.pdf

33 Poor R: ‘Embedded Networks: Pervasive, low-power, wireless connectivity’, MIT Doctoral Thesis (2000).

34 MIT µAMPS ‘µ-adaptive multi-domain power-aware sensors’ — http://www-mtl.mit.edu/research/icsystems/uamps/index.html

35 Min R, Bhardwaj M, Cho S H, Sinha A, Shih E, Wang A and Chandrakasan A: ‘Low-Power Wireless Sensor Networks’, VLSI Design 2001, invited paper (January 2001) — http://www-mtl.mit.edu/research/icsystems/uamps/pubs/rmin-vlsi01.html

36 Pister K: ‘Smart Dust — Autonomous Sensing and Communication in a Cubic Millimeter’ — http://robotics.eecs.berkeley.edu/~pister/SmartDust/


37 Butera W and Bove Jr V M: ‘Literally embedded processors’, Proc SPIE Media Processors (2001) — http://www.media.mit.edu/~vmb/papers/4313-04.pdf

38 MIT Auto-ID centre — http://auto-id.mit.edu/

39 ‘Beyond the barcode’, MIT Technology Review (March 2001) — http://www.technologyreview.com/magazine/mar01/schmidt.asp

40 MIT ‘Things That Think’ Consortium — http://www.media.mit.edu/ttt/

41 MIT Media Labs, Tangible Media Group — http://tangible.media.mit.edu/

42 Sodini C et al: ‘Ultra low power wireless sensor project’ — http://www-mtl.mit.edu/research/sodini/ultra_low_power_wireless_sensor.html

43 Christensen C: ‘Will Disruptive Innovation Cure Health Care?’, Harvard Business Review, pp 102-112 (September-October 2000).

44 MIT Media Labs, Silicon Biology SIG — http://www.media.mit.edu/siliconbio/

45 DARPA Tissue Based Biosensors — http://www.darpa.mil/dso/thrust/sp/Tbb/index.html

46 DARPA Controlled Biological Systems — http://www.darpa.mil/dso/thrust/sp/Cbs/index.html

47 Gugliotta G: ‘The robot with the mind of an eel’, Washington Post, p A01 (April 2001).

48 Isaksen A, McMillan L and Gortler S J: ‘Dynamically Reparameterized Light Fields’, SIGGRAPH 2000 — http://graphics.lcs.mit.edu/~aisaksen/projects/drlf/index.html

49 Matusik W, Buehler C, Raskar R, McMillan L and Gortler S J: ‘Image-based visual hulls’, SIGGRAPH 2000 (July 2000).

50 Visionics — http://www.visionics.com/

51 MIT Medical Vision Group — http://www.ai.mit.edu/projects/medical-vision/index.html

52 Picard R W: ‘Affective Computing’, The MIT Press (2000).

53 MIT Media Labs, Affective Computing Group — http://www.media.mit.edu/affect/

54 MIT Media Labs, Molecular Machines Group — http://www.media.mit.edu/molecular/

55 E Ink — http://www.eink.com/

56 MIT MIThril — http://lcs.www.media.mit.edu/projects/wearables/

57 MIT House_n — http://architecture.mit.edu/house_n/

58 Georgia Institute of Technology, Aware Home Research Initiative — http://www.cc.gatech.edu/fce/house/house.html

59 MIT facilitation room — http://vismod.www.media.mit.edu/vismod/demos/facilitator-room/

60 MIT Oxygen project — http://www.oxygen.lcs.mit.edu/

61 Lessig L: ‘Code and other laws of cyberspace’, Basic Books (1999).

62 LINCOS (Little Intelligent Communities) — http://www.lincos.net/

63 SARI (Sustainable Access for Rural India) — http://edevelopment.media.mit.edu/sariedev.html

64 Figueres J M: A presentation to the MIT Media Labs Digital Nations Consortium (July 2001).

65 Mitchell W: ‘e-topia: urban life, Jim — but not as we know it’, The MIT Press (1999).

Graham Cosier is currently working with the Cambridge MIT Institute (CMI) and, with over 30 years of BT service under his belt, he is certainly no stranger to change. In a wide-ranging career, he has worked closely with leading industries and academics around the world, exploiting the best of technology. It is this diversity that has seen him deliver world firsts in satellite optimisation, avatar technology, virtual conferencing and inhabited television — where his team won a coveted television Oscar. He regularly consults on the ‘emerging and disruptive technology’ theme to a broad range of international companies and is a creative thinker who lives the future rather than trying to predict it, forming comprehensive views of 21st century society.

Steve Whittaker received a BSc in Computing and Electronics from the University of Durham in 1984.

Since then, he has been involved in the development of advanced interactive voice systems at Adastral Park. This has included work on large vocabulary and fluent speech systems.

He is now BT’s primary technology interface with research and entrepreneurial communities in the eastern USA. He is a founder member of the BT Disruptive Lab at the MIT Media Lab. Prior to this, he was engaged in a wide range of strategic consultancy, business development, research management and technology development roles.