We Have Always Been Interested in the Notion of Consciousness Fact



We have always been interested in the notion of the consciousness fact, which is, for us, the fact that an individual endowed with a brain can think of something related to his position in the world right here, right now. It is not about the continuity, the performance, or the profundity of the thought; it is about thinking of something in a knowable manner, something that can be specified from a linguistic or mathematical angle, without its being an automatic, predefined response to a given situation.

By analogy with the notion long investigated by philosophers, psychologists, and neurobiologists, we will pose the question of artificial consciousness: how can one transpose the fact of thinking of something into the computable domain, so that an artificial system, founded on computational processes, would be able to generate consciousness facts in an observable manner? The system would have intentions, emotions, and ideas about things and events related to itself. It would have to have a body that it could direct and which would constrain it. It would also have to have a history, and intentions to act and, above all, to think. It would have to have knowledge, notably knowledge of language. It would have to have emotions, intentions, and finally a certain consciousness of itself.

We can call this system, by sheer semantic analogy, an artificial brain. However, we will see that its architecture is quite different from that of living brains. The concern is to transpose the effects, the movements; certainly not to reproduce the components, such as neurons and glial cells. We should keep in mind principally one characteristic of the process of thinking as it unfolds in a brain: there is a complex movement of neural, biochemical, and electrical activation taking place. This movement is coupled to a similar movement, of a different mode, in the nervous system deployed throughout the body. By selective emergence, and by reaching a particular configuration, this complex movement generates what we call a thought about something. This thought rapidly leads to actuator or language activity and then descends into the following thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain.

Hence, we should approach the sudden appearance of thoughts in brains at the level of the complex dynamics of a system that builds and reconfigures recurrent, temporized flows. We can transpose this into architectures of computational processes carrying symbolic meaning, and we should make the whole geometrically self-controlled. Two reasonable hypotheses are made for this transposition:

An analogy between the geometrical dynamics of the real brain and those of the artificial brain. For the former, the flows are complex, almost continuous images; for the latter, they are dynamical graphs whose deformations are evaluated topologically (a toy sketch of this idea follows below).

A reduction of the combinatorial complexity of the real brain in the computable domain, obtained by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
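To make the first hypothesis slightly more concrete, here is a minimal, purely illustrative sketch (our own interpretation, not from the text above) of evaluating how a dynamical graph of activations deforms between two instants. The graph snapshots and the Jaccard-style measure are arbitrary assumptions standing in for a genuinely topological evaluation.

```python
# Illustrative sketch only: one crude way to evaluate how a dynamical graph
# of activations "deforms" between two instants. The Jaccard distance on edge
# sets is an assumption standing in for a richer topological evaluation.

def deformation(edges_before, edges_after):
    """Return 0.0 if the activation graph is unchanged, 1.0 if fully disjoint."""
    union = edges_before | edges_after
    if not union:
        return 0.0
    return 1.0 - len(edges_before & edges_after) / len(union)

# Two snapshots of a hypothetical graph of co-activated symbolic elements.
t0 = {("intent", "memory"), ("memory", "word:lake"), ("word:lake", "emotion")}
t1 = {("intent", "memory"), ("memory", "word:lake"), ("memory", "word:pebble")}

print(f"deformation between snapshots: {deformation(t0, t1):.2f}")
```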


However, once these hypotheses are made, one should not simply set about developing an architecture that operates its own control on the basis of its changing geometry. One first needs to ask the proper question about the generation of consciousness facts. A philosopher, M. Heidegger, asked the proper question some decades ago: what brings us to think about this thing, right here, right now? The answer to this question, which is quite elaborate, will lead to a choice of system architecture that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, intention being taken as P. Ricoeur understood it. There are no consciousness facts without the intention to think. This settles the question, often considered formidable, of the freedom to think. One thinks of anything according to one's memory and one's intuition of the moment, but only if it is expressible as a thought by the system that produces thoughts. Some might see something infinite in this process; we do not. A finite set of components whose movements occur in a finite space can only be in a finite number of states. Moreover, since the permanence of the physical real apprehensible by the senses is very strong, man's preoccupation with thinking is, across his civilizations, quite limited. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of ideas, without using language as a mediator, and would hence be co-active as well as numerous in space.

For different reasons, many people think that the path of artificial-consciousness investigation should not be taken at all. I feel differently, because discoveries have been the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery will only help us respect the great architect more.


Can you please summarize (in words)?

The brain is fundamentally different from, and complementary to, today's computers. The brain can exhibit awe-inspiring functions of sensation, perception, action, interaction, and cognition. It can deal with ambiguity and interact with real-world, complex environments in a context-dependent fashion. And yet, it consumes less power than a light bulb and occupies less space than a 2-liter bottle of soda.

Our long-term mission is to discover and demonstrate the algorithms of the brain and deliver cool, compact cognitive computers that complement today's von Neumann computers and approach mammalian-scale intelligence. We are pursuing a combination of computational neuroscience, supercomputing, and nanotechnology to achieve this vision.

    Towards this end, we are announcing two major milestones.

First, using the Dawn Blue Gene/P supercomputer at Lawrence Livermore National Lab, with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips, each with 1 million neurons and 10 billion synapses, and exceeds the scale of the cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real time.
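As a rough back-of-envelope check (ours, not part of the announcement), dividing the quoted memory by the quoted synapse count bounds the average memory available per synapse, and the ratio of synapses to neurons gives the average fan-out:

```python
# Back-of-envelope arithmetic using only the figures quoted above; the
# simulator's actual per-synapse storage format is not described here.
memory_bytes = 144e12   # 144 TB of main memory (decimal terabytes assumed)
synapses = 10e12        # 10 trillion individual learning synapses
neurons = 1e9           # 1 billion spiking neurons

print(f"upper bound: {memory_bytes / synapses:.1f} bytes per synapse")
print(f"average fan-out: {synapses / neurons:,.0f} synapses per neuron")
```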

Second, we have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion-weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.

These milestones will provide a unique workbench for exploring a vast number of hypotheses about the structure and computational dynamics of the brain, and further our quest to build a cool, compact cognitive computing chip.

Why do we need cognitive computing? How could cognitive computing help build a smarter planet?

As the amount of digital data that we create continues to grow massively and the world becomes more instrumented and interconnected, there is a need for new kinds of computing systems imbued with a new intelligence that can spot hard-to-find patterns in vastly varied kinds of data, both digital and sensory; analyze and integrate information in real time in a context-dependent way; and deal with the ambiguity found in complex, real-world environments. Cognitive computing offers the promise of entirely new computing architectures, system designs, and programming paradigms that will meet the needs of the instrumented and interconnected world of tomorrow.

    What is the goal of the DARPA SyNAPSE project?

The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt, and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.

    Who is on your SyNAPSE team?

    Stanford University: Brian A. Wandell, H.-S. Philip Wong


    Cornell University: Rajit Manohar

    Columbia University Medical Center: Stefano Fusi

    University of Wisconsin-Madison: Giulio Tononi

    University of California-Merced: Christopher Kello

IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh

    The Cat is Out of the Bag

    What advantages does Blue Gene provide to enable these simulations?

Mammalian-scale simulations place tremendous demands on the memory, processor, and communication capabilities of any computer system. The Blue Gene architecture provides the best match to meet these resource requirements by supporting hundreds of terabytes of memory and hundreds of thousands of processors. This is augmented with outstanding communication capabilities in terms of bi-section and point-to-point bandwidth, excellent low-latency communication, and very efficient broadcast and reduce networks, some of which have dedicated hardware resources, allowing truly parallel exploitation of the processors and their memory.
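To illustrate why point-to-point and collective bandwidth matter so much here, the hypothetical sketch below (not the C2 code) shows the communication pattern of a distributed spiking simulation: each MPI rank owns a slice of neurons and must deliver its spikes to the ranks owning the target neurons on every time step. The sizes, spike rates, and random targets are illustrative assumptions.

```python
# Minimal sketch of the per-time-step spike exchange in a distributed
# spiking-network simulation; this is where interconnect bandwidth dominates.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NEURONS_PER_RANK = 1000  # assumed toy size; real runs use millions per rank
local_spikes = [n for n in range(NEURONS_PER_RANK) if random.random() < 0.01]

# Bucket spike identifiers by destination rank (random targets, for illustration).
outgoing = [[] for _ in range(size)]
for neuron in local_spikes:
    outgoing[random.randrange(size)].append((rank, neuron))

# One all-to-all exchange per simulated time step.
incoming = comm.alltoall(outgoing)
delivered = sum(len(bucket) for bucket in incoming)
print(f"rank {rank}: delivered {delivered} spikes this step")
```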

    What role do large-scale cortical simulations play in the SyNAPSE project?

Please note that the cat-scale cortical simulation is equivalent to 1,000 cool, compact cognitive computing chips, each with 1 million neurons and 10 billion synapses, and compares very favorably to DARPA's published metrics.

The simulations in C2 will help guide the design of features in the SyNAPSE chip and the overall architecture of the hardware. C2 supports customizable components, in which hardware neurons and synapses can be used instead of the default biologically inspired phenomenological neurons and synapses. Thus, C2 enables a functional simulation of the hardware and helps choose between alternate hardware implementations.
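As a loose illustration of what "customizable components" could look like in software (the actual C2 interfaces are not described here), the sketch below codes a driver against a neuron interface so that a default phenomenological model can be swapped for a stand-in model of a hardware neuron. All names, equations, and constants are assumptions.

```python
# Hypothetical sketch of pluggable neuron models; not the actual C2 interfaces.
from typing import Protocol

class NeuronModel(Protocol):
    def step(self, input_current: float, dt: float) -> bool:
        """Advance one time step; return True if the neuron spikes."""

class PhenomenologicalNeuron:
    """Default: simple leaky integrator with a fixed threshold (illustrative)."""
    def __init__(self) -> None:
        self.v = 0.0
    def step(self, input_current: float, dt: float) -> bool:
        self.v += dt * (-self.v / 20.0 + input_current)
        if self.v >= 1.0:
            self.v = 0.0
            return True
        return False

class HardwareNeuron:
    """Stand-in for a fixed-point model of a hardware neuron circuit (assumed)."""
    def __init__(self) -> None:
        self.v = 0
    def step(self, input_current: float, dt: float) -> bool:
        self.v += int(input_current * 64)   # fixed-point accumulation
        if self.v >= 64:
            self.v = 0
            return True
        return False

def run(neuron: NeuronModel, steps: int = 100) -> int:
    """Drive either model with a constant current and count spikes."""
    return sum(neuron.step(input_current=0.08, dt=1.0) for _ in range(steps))

print(run(PhenomenologicalNeuron()), run(HardwareNeuron()))
```

The point of the interface is that the same driver can run either model, so alternate hardware implementations can be compared functionally.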

Can you place the cat-scale simulation in the context of your past work?

For past work on rat-scale simulations, see http://www.modha.org/blog/2007/11/faq_anatomy_of_a_cortical_simu.html, and for mouse-scale simulations, see http://modha.org/blog/2007/02/towards_realtime_mousescale_co.html.


December 2006: Blue Gene/L at IBM Research - Almaden with 4,096 CPUs and 1 TB memory
40% mouse-scale with 8 million neurons, 50 billion synapses
10 times slower than real-time at 1 ms simulation resolution


April 2007: Blue Gene/L at IBM Research - Watson with 32,768 CPUs and 8 TB memory
Rat-scale with 56 million neurons, 448 billion synapses
10 times slower than real-time at 1 ms simulation resolution


March 2009: Blue Gene/P on the KAUST-IBM Watson Shaheen machine with 32,768 CPUs and 32 TB memory
1% of human-scale with 200 million neurons, 2 trillion synapses
100 to 1,000 times slower than real-time at 0.1 ms simulation resolution


SC09 (this announcement): Blue Gene/P DAWN at LLNL with 147,456 CPUs and 144 TB memory
Cat-scale with 1 billion neurons, 10 trillion synapses
100 to 1,000 times slower than real-time at 0.1 ms simulation resolution
Neuroscience details: neuron dynamics, synapse dynamics, individual learning synapses, biologically realistic thalamocortical connectivity, axonal delays
Prediction: in 2019, using a supercomputer with 1 Exaflop/s and 4 PB of main memory, a near real-time human-scale simulation may become possible.

Summary: Progress in large-scale cortical simulations. Each of the four charts above details recent achievements in the simulation of networks of single-compartment, phenomenological neurons with connectivity based on statistics derived from mammalian cortex. Simulations were run on Blue Gene supercomputers with progressively larger amounts of main memory. The number of synapses in the models varied from 5,485 to 10,000 synapses per neuron, reflecting construction from different sets of biological measurements. First: simulations on a Blue Gene/L supercomputer of a 40% mouse-scale cortical model with 8 million neurons and 52 billion synapses, employing 4,096 processors and 1 TB of main memory. Second: simulations on a Blue Gene/L supercomputer culminating in a rat-scale cortical model with 58 million neurons and 461 billion synapses, using 32,768 processors and 8 TB of main memory. Third: simulations on a Blue Gene/P supercomputer culminating in a one-percent human-scale cortical model with 200 million neurons and 1.97 trillion synapses, employing 32,768 processors and 32 TB of main memory. Fourth: simulations on a Blue Gene/P supercomputer culminating in a cat-scale cortical model with 1.62 billion neurons and 8.61 trillion synapses, using 147,456 processors and 144 TB of main memory. The largest simulations performed on this machine correspond to approximately 4.5% of human cerebral cortex.

    When will human-scale simulations become possible?


The figure shows the progress that has been made in supercomputing since the early 1990s. At each time point, the green line shows the 500th fastest supercomputer, the dark blue line the fastest supercomputer, and the light blue line the summed power of the top 500 machines. These lines show a clear trend, which we've extrapolated out 10 years.

The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s.

Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running these simulations in real time will require over 1 EFlop/s. If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not-too-distant future.
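The extrapolation above can be reproduced with a couple of lines of arithmetic using only the figures quoted in this answer; this is a rough consistency check, not an official sizing.

```python
# Rough consistency check using only the numbers quoted above.
scale_fraction = 0.045   # current model: ~4.5% of human cerebral cortex
memory_tb = 144          # memory used for that model (TB)
speed_pflops = 0.5       # sustained compute for that model (PFlop/s)
slowdown = 83            # ran at ~1/83 of real time

factor = 1.0 / scale_fraction   # ~22x more cortex to cover at human scale
# Comes out near 3 PB and ~1 EFlop/s, roughly in line with the 4 PB and
# 1 EFlop/s figures cited above.
print(f"memory at human scale : ~{memory_tb * factor / 1000:.1f} PB")
print(f"real-time compute     : ~{speed_pflops * factor * slowdown / 1000:.1f} EFlop/s")
```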

    What aspects of the brain does the model include?

The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, neurons, and the connections between them, called synapses, are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation, and firing rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation.
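As a concrete illustration of the phenomenological ingredients listed above, here is a minimal single-neuron sketch with leaky input integration, threshold spike generation, spike-triggered adaptation, and a toy spike-timing-dependent plasticity update on one synapse. It is an assumption-laden caricature, not the C2 model or its parameters.

```python
import numpy as np

# Caricature of a single-compartment phenomenological neuron: leaky input
# integration, threshold spike generation, spike-triggered adaptation, and a
# toy spike-timing-dependent plasticity (STDP) rule on one input synapse.
# All constants are illustrative assumptions.
dt, steps = 1.0, 300                  # time step and duration (ms)
v, w, adapt = -65.0, 0.5, 0.0         # membrane potential, synaptic weight, adaptation
last_pre, last_post = -1e9, -1e9      # last spike times (ms)

rng = np.random.default_rng(0)
for t in range(steps):
    pre_spike = bool(rng.random() < 0.05)            # random presynaptic input
    current = 20.0 * w if pre_spike else 0.0
    v += dt * (-(v + 65.0) / 20.0 + current - adapt)  # leaky integration + adaptation
    adapt *= 0.98                                     # adaptation decays slowly
    if pre_spike:
        last_pre = t
    if v >= -50.0:                                    # threshold crossing -> spike
        v, adapt, last_post = -65.0, adapt + 2.0, t   # reset, strengthen adaptation
        w += 0.02 * np.exp(-(t - last_pre) / 20.0)    # pre-before-post: potentiate
    elif pre_spike:
        w -= 0.02 * np.exp(-(t - last_post) / 20.0)   # post-before-pre: depress
    w = float(np.clip(w, 0.0, 1.0))

print(f"final synaptic weight: {w:.3f}, adaptation: {adapt:.2f}")
```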

At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high-level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex. Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry within the cat cerebral cortex.
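The idea of connectivity "following a statistical map" can be illustrated with a toy sketch: given a matrix of connection probabilities between populations (the populations and numbers below are invented, not the cat-cortex statistics from the cited study), connections are drawn at random so that the network reproduces the statistics rather than any one measured circuit.

```python
import numpy as np

# Toy illustration of wiring a network from a statistical map: probabilities
# of connection between populations, then random wiring that matches them.
# Populations, sizes, and probabilities are invented for illustration.
rng = np.random.default_rng(42)
populations = ["thalamus", "cortex_L4", "cortex_L2_3"]
sizes = {"thalamus": 200, "cortex_L4": 500, "cortex_L2_3": 500}

# prob[i][j]: probability that a neuron in population i connects to one in j.
prob = np.array([[0.00, 0.10, 0.01],
                 [0.02, 0.05, 0.10],
                 [0.01, 0.05, 0.05]])

connections = []
for i, src in enumerate(populations):
    for j, dst in enumerate(populations):
        # Bernoulli draw for every potential (src, dst) pair.
        mask = rng.random((sizes[src], sizes[dst])) < prob[i, j]
        for s, d in zip(*np.nonzero(mask)):
            connections.append((src, int(s), dst, int(d)))

print(f"total synapses drawn: {len(connections)}")
```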

    What do the simulations demonstrate?

We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex measured by neuroscientists using corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings. Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking. In all our simulations, we are able to simultaneously record from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording of a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.


    Can I see the simulator in action?

Yes, if you can download a 150 MB movie: http://www.modha.org/C2S2/2009/11182009/content/IBM_logo_movie.mpg

The following is a frame from the movie. An earlier frame showing the input is at http://www.modha.org/C2S2/2009/11182009/content/ibmLogo1_vert.JPG and a later frame is at http://www.modha.org/C2S2/2009/11182009/content/ibmLogo3_vert.JPG. To understand the figure and the movie, it is helpful to study Figure 1 in the paper (http://www.modha.org/C2S2/2009/11182009/content/SC09_TheCatIsOutofTheBag.pdf).


Caption: Like the surface of a still lake reacting to the impact of a pebble, the neurons in IBM's cortical simulator C2 respond to stimuli. Resembling a travelling wave, the activity propagates through different cortical layers and cortical regions. The simulator is an indispensable tool that enables researchers to bring static structural brain networks to life, to probe the mystery of cognition, and to pave the path to cool, compact cognitive computing systems.

Please note that the simulator is demonstrating how information percolates and propagates. It is NOT learning the IBM logo.

    How close is the model to producing high level cognitive function?

Please note that the rat(-scale simulation) does not sniff cheese, and the cat(-scale simulation) does not chase the rat. Up to this point, our efforts have primarily focused on developing the simulator as a tool of scientific discovery that incorporates many neuroscientific details to produce large-scale thalamocortical simulations as a means of studying behavior and dynamics within the brain. While diligent researchers have made tremendous strides in improving our understanding of the brain over the past 100 years, neuroscience has not yet reached the point where it can provide us with a recipe for how to wire up a cognitive system. Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as a general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition.

What do you see on the horizon for this work in thalamocortical simulations?

We are interested in expanding our model both in scale and in the details that it incorporates. In terms of scale, as the amount of memory available in cutting-edge supercomputers continues to increase, we foresee that simulations at the scale of the monkey cerebral cortex and eventually the human cerebral cortex will soon be within reach. As supercomputing speed increases, we also see the speed of our simulations increasing to approach real time.

In terms of details in our simulations, we are currently working on differentiating our cortical region into specific areas (such as primary visual cortex or motor cortex) and providing the long-range connections that form the circuitry between these areas in the mammalian brain. For this work, we are drawing from many studies describing the structure and input/output patterns of these areas, as well as a study recently performed within IBM that collates a very large number of individual measurements of white matter, the substrate of long-range connectivity within the brain.

How will this affect neuroscience?

Within neuroscience, there is a rich history of using brain simulations as a means of developing models based on experimental observations, testing those models, and then using those models to form predictions that can be tested through further experiments. A major limitation of such efforts is computational power, forcing models to make major sacrifices in terms of detail or scale. Through our work, we have developed and demonstrated a tool that enables simulations at very large scales on cutting-edge supercomputers. We believe that as this tool continues to grow, it will serve as a crucial test bed for testing hypotheses about brain function through simulations at a scale and level of detail never before possible.

    BlueMatter

    What does BlueMatter mean?

BlueMatter is a highly parallelized algorithm for identifying white matter projectomes, written to take advantage of the Blue Gene supercomputing architecture. Hence, the term BlueMatter.

    Can you please provide more details on BlueMatter?

Our software, BlueMatter, is able to provide unique visualization and measurement of the long-range circuitry (interior white matter) that allows geographically separated regions of the brain to communicate. The labels, or colors, of the fibers represent divisions of these fibrous networks that we are measuring. The colors and names are as follows:

Red - Interhemispheric fibers projecting between the corpus callosum and frontal cortex.
Green - Interhemispheric fibers projecting between primary visual cortex and the corpus callosum.
Yellow - Interhemispheric fibers projecting from the corpus callosum and not Red or Green.
Brown - Fibers of the superior longitudinal fasciculus, connecting regions critical for language processing.
Orange - Fibers of the inferior longitudinal fasciculus and uncinate fasciculus, connecting regions to cortex responsible for memory.
Purple - Projections between the parietal lobe and lateral cortex.
Blue - Fibers connecting local regions of the frontal cortex.


High-resolution version (2 MB): http://www-03.ibm.com/press/us/en/attachment/28843.wss?fileId=ATTACH_FILE2&fileName=BlueMatter.jpg

The figure displays results from BlueMatter, a parallel algorithm for white matter projection measurement. Recent advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have allowed the unprecedented ability to non-invasively measure the human white matter network across the entire brain. DW-MRI acquires an aggregate description of the diffusion of water molecules, which act as microscopic probes of the dense packing of axon bundles within the white matter. Understanding the architecture of all white matter projections (the projectome) may be crucial for understanding brain function, and has already led to fundamental discoveries in normal and pathological brains. The figure displays a view from the top of the brain (top) and a view from the left hemisphere (bottom). The cortical surface is shown (gray) as well as the brain stem (pink), in context with a subset of BlueMatter's projectome estimate coursing through the core of the white matter in the left hemisphere. Leveraging the Blue Gene/L supercomputing architecture, BlueMatter creates a massive database of 180 billion candidate pathways using multiple DW-MRI tracing algorithms, and then employs a global optimization algorithm to select a subset of these candidates as the projectome. The estimated projectome accounts for 72 million projections per square centimeter of cortex and is the highest-resolution projectome of the human brain.
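To give a feel for the "generate candidates, then select globally" structure described in the caption, here is a toy greedy sketch: each candidate pathway gets a data-fit score and a volume cost, and a subset is chosen under a global volume budget. The scores, costs, and greedy rule are illustrative assumptions; BlueMatter's actual global optimization over 180 billion candidates is far more involved.

```python
import random

# Toy sketch of global projectome selection: each candidate pathway has a
# data-fit score (how well it explains the DW-MRI signal) and a volume cost.
# We greedily pick high-scoring pathways under a global volume budget.
# Scores, costs, and the greedy rule are illustrative assumptions only.
random.seed(0)
candidates = [
    {"id": i, "fit": random.random(), "volume": random.uniform(0.5, 2.0)}
    for i in range(10_000)                    # BlueMatter: ~180 billion candidates
]

volume_budget = 500.0                         # global constraint (arbitrary units)
selected, used = [], 0.0

# Prefer pathways with the best fit per unit of volume consumed.
for c in sorted(candidates, key=lambda c: c["fit"] / c["volume"], reverse=True):
    if used + c["volume"] <= volume_budget:
        selected.append(c["id"])
        used += c["volume"]

print(f"selected {len(selected)} of {len(candidates)} candidates, "
      f"volume used {used:.1f}/{volume_budget}")
```

The volume budget stands in for the kind of global constraint, such as volume consumption, that the answer below notes is ignored when tracts are estimated individually.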

    What role will BlueMatter play in the SyNAPSE project?

Long term, we hope that our work will lead to insights on how to wire together a system of cognitive computing chips. Short term, we are incorporating data from BlueMatter into our cortical simulations.

    What makes all the computational power necessary?

Because of the relatively low resolution of the data compared with the white matter tissue, there are many possible sets of curves one may draw in order to estimate the projectome and compare it with a global error metric, as we have done. Searching this space leads to a combinatorial explosion of possibilities. This has led many researchers to focus on individual tract estimation at the cost of ignoring global constraints, such as the volume consumption of the tracts. Rather than simplify our model, we have addressed the computational challenge with an algorithm designed specifically to leverage the Blue Gene supercomputing architecture.

    What are the next steps?

We are also interested in using our technique to make measurements on the projectome and on communication between brain areas that can generate hypotheses about brain function, which may be validated with behavioral results or perhaps functional imaging, and which can be integrated with large-scale simulations.

    Future


How will your current project to design a computer similar to the human brain change the everyday computing experience?

While we have algorithms and computers to deal with structured data (for example, age, salary, etc.) and semi-structured data (for example, text and web pages), no mechanisms exist that parallel the brain's uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. Success of cognitive computing will allow us to mine the boundary between the digital and physical worlds, where raw sensory information abounds. Imagine, for example, instrumenting the world's oceans with temperature, pressure, wave height, humidity, and turbidity sensors, and imagine streaming this information in real time to a cognitive computer that may be able to detect spatiotemporal correlations, much like we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly even, industries.

    What is the ultimate goal?

Cognitive computing seeks to engineer the mind by reverse-engineering the brain. The mind arises from the brain, which is made up of billions of neurons that are linked by an internet-like network. An emerging discipline, cognitive computing is about building the mind by understanding the brain. It synthesizes neuroscience, computer science, psychology, philosophy, and mathematics to understand and mechanize mental processes. Cognitive computing will lead to a universal computing platform that can handle a wide variety of spatio-temporally varying sensor streams.
