University of Lisbon
The Incoherent Mind
Analysis of mind, brain and split-brain data in search of a countable
number of minds
Author:
Tiago Machado de Brito
Supervision:
Prof. Dr. David Yates
Co-Supervision:
Prof. Dr. Jim Hopkins
Dissertation for the degree of Master in Cognitive Science
2017
For my family,
Blood or otherwise
Acknowledgements:
First and foremost, a heartfelt thank you to my kind supervisors, David Yates and Jim Hopkins,
whose unyielding support kept me from sinking in this rather confusing and decidedly vast area
of investigation. For what work surfaced, I am ever grateful to you both.
My deep appreciation to the head of the master's course as well, João Branquinho, for all his help
in navigating the troublesome bureaucracies and the little details associated with this thesis,
ultimately returning to me much-needed time.
I must also mention how important it was to have such great friends, in academia and beyond,
who would always be there for me, pint of beer in hand when in dire need. Those sweet
moments made the worst ones all the less bitter.
Last but not least, a word for my brother, whose dedication to and interest in impossibly
baffling topics of the mind allowed me to keep mine open.
To all of you,
I thank you.
Extended Abstract
Split-brain patients are individuals whose corpus callosum has been sectioned. The corpus
callosum is the largest white-matter structure in our brain, and it is the principal provider of
efficient communication between its two hemispheres at the cortical level. Split-brain patients
(henceforth SBP) prove capable of processing different information in each hemisphere
independently, as long as each hemisphere has exclusive access to segregated information. In
these experimental situations, such individuals act as if they were two individuals, one associated
with each hemisphere. In the absence of interhemispheric communication, each hemisphere of an
SBP has no access to what is being processed by the other, nor to cognitive functions localized in
the other. Under informational isolation, each hemisphere of an SBP is able to work independently
of the other, acting upon stimuli, answering questions, and revealing personality and awareness
of events concerning the individual.
This led Thomas Nagel, in 1971, to ask how many minds we may attribute to these
patients. After considering and successively rejecting all the possible answers, he concluded that
either our notion of a mind is incorrect, or these individuals do not have a countable number of
minds. Unsatisfied with the second alternative, I set out to assess the plausibility of the first. One
of the hypotheses Nagel entertained is the possibility that an SBP has only one mind, but one
whose cognitive content derives from the two hemispheres in a somewhat dissociated way. In this
dissertation I take this hypothesis to be the most plausible answer to Nagel's question, and I aim
to defend it. To this end, three objectives were set; duly argued for, their fulfilment will lead the
reader to appreciate the possibility of this hypothesis. A mind with dissociated content carries
implications for our notion of the mind, for under this hypothesis each hemisphere is capable of
much of what we take only a mind to be capable of (processing information, displaying emotion,
presenting itself as conscious, e.g.); and for this hypothesis to be possible, we must accept that
our mind is something liable to incoherence. Coherence is taken as a necessary feature of mental
activity: of an individual with one mind we expect continued mental concordance. What happens
with an SBP when, for instance, he sees an object through one hemisphere and a word through
the other, and shows himself independently conscious of both, is precisely the opposite of what
would be expected of a coherent mind.
Thus, as a first objective, I argue in this dissertation that coherence is not a necessary
feature for the attribution of a mind, but rather a necessary feature of the experience of conscious
states. Let us take consciousness to be the set of all conscious mental states we possess. From this
perspective, each hemisphere of these individuals seems capable of conscious states about
different things when segregated information is given to them, and conscious states independent
of the other hemisphere seem to be generated in each. If each set of conscious states generated by
each hemisphere remains internally coherent over time, the mind of an SBP as a whole can be
considered incoherent, yet without loss of its singularity. For this hypothesis to be tenable, a
normal individual must also be able to display mental incoherencies, since we are granted to
possess a single mind. Characteristics of a mind were therefore considered, such that whoever
possesses all of them must be granted a mind. These characteristics were: the capacity for mental
representation, the holding of a set of beliefs, the capacity to act, the display of (some degree of)
coherence, and the display of consciousness. Analysis of psychological phenomena of
incoherence associated with these characteristics, such as acting incoherently or holding
incoherent beliefs, revealed that normal individuals do in fact have the potential to display mental
incoherencies, given the existence, in one and the same mind, of incoherent mental states. This is
similar to what happens in an SBP, where in one mind there occur, simultaneously, mental states
that ought not be attributable to a single mind (the difference being that only in the latter do
interhemispherically incoherent conscious mental states arise). The question then becomes one of
degree or type of incoherence, whereby a mind can be more or less incoherent. The consideration
of degrees of incoherence is the second objective of this work: the more basal the mental states
that prove incoherent, the more incoherence is found in the mental states associated with them;
and the more mental states prove incoherent in a mind, the greater its degree of incoherence. If a
mind can be incoherent, what distinguishes a normal individual from an SBP is then not the
number of minds, but a degree of incoherence. We are now left to ponder the plausibility of a
brain being able to generate these incoherencies. As a third objective, I intend to show that any
brain potentially harbors mental incoherencies, in a normal individual as much as in an SBP. To
this end, I analyzed theories of brain function that explain how incoherencies may arise;
lateralization and duplication of functions in our brain; and theories of consciousness applicable
both to normal individuals and to SBP. The theory of brain function addressed is that of the
Bayesian Brain, which understands the brain as a "prediction machine", capturing sensory
information and generating conscious representations of what the brain predicts to be the cause
of the sensory input. Under this theory, and considering that generative models producing
conflicting conscious states are possible, incoherence can be seen as the result of this conflict,
whereby incoherent mental states may arise within a single mind. In an SBP, given the absence
of interhemispheric influence, the potential arises for each hemisphere to generate independent
models, leading to the occurrence of interhemispherically incoherent conscious states. From the
analysis of lateralization and duplication, we see that each hemisphere has the capacity to capture
and analyze information independently, and that each hemisphere also processes that same
information in distinct ways. Nevertheless, it seems clear that the brain evolved to process
information with the simultaneous contribution of both, and not in isolation. Theories of
consciousness such as the partial unity model hold that the conscious experience of an SBP, given
the lack of hemispheric communication, proves incoherent, but not divided. The consciousness
of one of these individuals is incoherent, since each hemisphere has independent and incoherent
consciousness-generating models associated with segregated information, while also possessing
generative models associated with information that is not segregated. The latter information leads
to the generation of models shared by the hemispheres, preserving in an SBP some level of
conscious coherence (including that revealed in normal situations). Considering also that the
mental constitution of an individual does not depend only on the cerebral constitution of the
cortical hemispheres, but also on the subcortex, and that both are associated with a single body,
it seems plausible to me to accept that only one mind is harbored in the brain, regardless of the
autonomy each hemisphere may be capable of displaying.
Thus, this thesis proposes that normal individuals and SBP alike possess one and only
one mind, which need not be coherent at all times, and in which different degrees and types of
incoherence may be considered for each individual according to how many, and which, mental
states prove incoherent. In this way, I believe I have built in this thesis a solid, if preliminary,
defense of the plausibility of a hypothesis proposed by Thomas Nagel in 1971, on which an SBP
has one and only one mind, but one whose mental content is interhemispherically dissociated.
Abstract
Split-brain patients, who have undergone a corpus callosotomy (the severing of the corpus
callosum), have been targets of study for several decades, due to the strange behavioral
phenomena they reveal. In experimental conditions, in which different information can be
provided exclusively to each hemisphere of the brain, they appear to be able to act as if they were
two distinct persons. These phenomena have left many investigators from various areas of
research in awe, unable to explain how such strange occurrences could originate from a brain
much like our own. Here, however, I argue not only that a normal brain can account for the split-
brain phenomena (given the structural changes involved), but also that, by analyzing the problem
from a different standpoint, that of mental incoherencies, we can begin to see mental coherence
not as a necessary property of a mind, but as a necessary property of a set of conscious states.
Since split-brain patients seem to have a partially incoherent consciousness, in which an
incoherent conscious stream arises under experimental conditions, it is not any single conscious
state, or set of co-conscious states, within each hemisphere that reveals incoherencies, but rather
the mind as a whole. As such, and in accordance with the Bayesian Brain theory, I defend in the
present work an incoherent, single-mind answer to the question of how many minds a split-brain
patient has, a question that has closely followed the split-brain debate since its birth.
Keywords: Split-Brain; Consciousness; Mental Coherence; Lateralization of Function;
Bayesian Brain.
Index
Chapter 1: Brief Introduction to the Split-Brain Debate
  1.1 The Brain and the Corpus Callosum
  1.2 The Split-Brain Phenomena
  1.3 Nagel’s Question
  1.4 Points on the present study
    1.4.1 Approach and Relevancy
    1.4.2 Methods
Chapter 2: To Define a Mind
  2.1 Mental Representations
  2.2 Beliefs
  2.3 Actions
  2.4 Consciousness
  2.5 Coherence
    2.0.1 Why these 5?
  2.6 The Mind can be Incoherent
Chapter 3: The Incoherent Mind
  3.1 Types of Incoherence
    3.1.1 Incoherent Action
    3.1.2 Incoherent Beliefs
    3.1.3 Incoherent Representations
    3.1.4 Incoherent Consciousness
  3.2 Degrees and Types of Coherence
Chapter 4: Incoherence in the Brain
  4.1 The Bayesian Brain
    4.1.1 Helmholtz and the Inference Machine
    4.1.2 Friston and Free-Energy
    4.1.3 When Predictions and Errors meet
    4.1.4 Improving the Generative Model
  4.2 The Conflicting Generative Models
    4.2.1 In the Normal Brain
    4.2.2 In the Split-Brain
  4.3 Lateralization of Function
    4.3.1 Hemispheric duplication
    4.3.2 Hemispheric lateralization
    4.3.3 Corpus Callosum and Conscious Coherence
    4.3.4 Split-Brain Data Reprised
  4.4 Split-Brain Streams of Consciousness
    4.4.1 The Conscious Duality Model
    4.4.2 The Switch-Model of Consciousness
    4.4.4 Disunity and Incoherence
  4.5 The Mind of a Split-Brain
Chapter 5: Conclusions and Final Thoughts
  5.1 Brief Overview
  5.2 Conclusions
  5.3 Final thoughts
    5.3.1 On counting of Consciousness’, Models and Minds
    5.3.2 On Impoverishment of Consciousness
    5.3.3 On Evolutionary Perspectives
    5.3.4 On what the Future holds
References
Appendix
“You keep looking but you can’t find the woods
While you’re hiding in the trees.”
- Trent Reznor
Chapter 1
Brief Introduction to the Split-Brain Debate
One particular topic that has eluded the scientific community for years is that of the Split-Brain
Phenomena. Split-brain studies, first led by Nobel-Prize winner Roger Sperry, and subsequently
by his student and apprentice, Michael Gazzaniga, revealed some of the most curious human
behaviors ever reported, giving rise to a debate that remains heated to this day. Few are those who
are not intrigued by the strange capacities of a Split-Brain patient. But in order to fully grasp the
subject at hand, we must first take a few steps back, covering the grounds on which the present
work is based. For this purpose, this introductory chapter will focus on various experiments
involving Split-Brain patients, the changes their brains have undergone, and the behaviors that
drew the attention of the scientific community.
1.1 The Brain and the Corpus Callosum
Communication between the cerebral hemispheres is at the center of the problem at hand, so to
understand the split-brain phenomena we must first review how that communication is managed.
It will be essential to understand (1) how the hemispheres communicate information internally
between them, and (2) how perceived information is communicated to the hemispheres. Let us
then start with how neural information is communicated internally between the hemispheres.
As briefly hinted, a human brain is composed of two cerebral hemispheres, a left hemisphere
(henceforth LH) and a right hemisphere (henceforth RH), which together compose what we
call the cerebral cortex, the largest region to be found in our brain (Kandel & Mack 2014). These
hemispheres are mostly connected by a bundle of commissural fibers known as the Corpus
Callosum (hereafter CC), which allows for the communication of neural information across the
hemispheres (Luders et al 2010). (Though most information passes between the hemispheres
through the corpus callosum, this is not the only path of communication; it is, however, the most
significant one.) The integrity of this structure ensures that information reaching each hemisphere
can be communicated to its counterpart, contributing to the integration of perceptual and
cognitive information in the brain, to coherent decision making and, overall, to the brain, with its
two hemispheres, functioning as a coordinated unit (Hofer & Frahm 2006; Sperry 1968). It is
possible, however, to live with a sectioned corpus callosum. In cases of
severe epilepsy, in which all other treatments have failed to prevent or treat the problem, a corpus
callosotomy (a surgery in which the corpus callosum is severed) is an efficient surgical treatment
that all but eliminates the occurrence of epileptic seizures by limiting the spread of epileptic
activity between the hemispheres (Clarke et al 2007; Mathews et al 2008). But as we will come
to see, it is not without a cost. An individual who has undergone this invasive procedure can,
however, lead a fairly normal life, with no severe changes to his day-to-day routine (Sperry 1964;
Gazzaniga 1967). These individuals have become known as Split-Brain Patients (hereafter SBP),
and it is they who reveal the split-brain phenomena. Most of the more curious behaviors revealed
by these patients can only be seen in experimental, controlled situations. Nonetheless, as we will
see, this makes these behaviors no less peculiar.
Secondly, a few words must be said regarding (2), the way perceived information is
communicated to the brain. For tactile and visual information, the left side of our body is managed
by the RH of our brain, and the right side by the LH (Mutha et al 2012). This covers both which
hemisphere receives information from a given body side (such as sensory information) and which
hemisphere communicates back to that side (such as for producing movement) (Sperry 1977).
Take vision, for instance: in each eye, the retina (where visual information is perceived) can be
divided into left and right sides, depending on which side the light is captured. Information
perceived by the retinas travels to the brain via the optic nerves and through the optic chiasm,
where these nerves partially cross (Llinás 2003). At this crossing, information perceived on the
left side of both retinas (which corresponds to what is seen in the right field of vision) is
communicated directly to the LH, and information perceived on the right side of the retinas
(corresponding to the left field of vision) to the RH. Together, these visual inputs make up our
visual world, or visual field (Smythies 1996). As such, the division of perceived visual
information is determined not by which eye, left or right, perceives what, but by the side of the
overall field of vision on which information is perceived. Consult Figure 1.01
mechanisms of communication to the brain apply to other sensory processes, though the
information may not be routed exclusively to the contralateral hemisphere, as it is for sight, touch,
and motor control: auditory information is partially communicated to both hemispheres, though
dominantly crossed, as visual and tactile information are; olfactory information is an exception
to this crossing, as it is the ipsilateral hemisphere that receives the information perceived by each
nostril; finally, muscle control of the face and neck is shared by
both hemispheres (Sperry 1973). Keeping these points in mind is key to understanding the split-
brain phenomena, as we will see shortly.
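The crossed routing just described can be summarized in a small sketch. This is purely illustrative and not from the split-brain literature; the function name and modality labels are my own, and the mapping simply restates the rules given above (fully crossed vision, touch, and motor control; dominantly crossed audition; ipsilateral olfaction; shared face and neck control).

```python
def receiving_hemispheres(modality: str, side: str) -> set:
    """Return which hemisphere(s) receive input from one side of the body
    or of the visual field. side is 'left' or 'right'."""
    contralateral = {"left": "RH", "right": "LH"}
    ipsilateral = {"left": "LH", "right": "RH"}

    if modality in ("visual", "tactile", "motor"):
        # Fully crossed: left visual field / left hand -> right hemisphere.
        return {contralateral[side]}
    if modality == "auditory":
        # Partially communicated to both hemispheres, dominantly crossed.
        return {contralateral[side], ipsilateral[side]}
    if modality == "olfactory":
        # Exception to the crossing: each nostril projects ipsilaterally.
        return {ipsilateral[side]}
    if modality == "face_neck_motor":
        # Muscle control of the face and neck is shared by both hemispheres.
        return {"LH", "RH"}
    raise ValueError(f"unknown modality: {modality}")
```

For example, `receiving_hemispheres("visual", "left")` returns `{"RH"}`: a stimulus in the left visual field reaches only the right hemisphere.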
1.2 The Split-Brain Phenomena
If the CC accounts for most of the communication of information across hemispheres,
what can we expect to be the consequences of severing that channel of communication? This
question has haunted scientists and philosophers ever since this procedure became the main, last-
resort solution for severe cases of epilepsy. More haunting still because, apparently, no change
was observable in these individuals' behavior. First conducted by Roger Sperry and his student
Ronald Myers, back in 1955, split-brain studies were meant to put this lack of communication
between the hemispheres to the test. In their early days, Myers and Sperry tested the hypothesis
that the CC was responsible for interhemispheric communication, performing a corpus
callosotomy in cats (Myers & Sperry 1953; Kean 2014) and proceeding to run several tests with
them. They discovered that they could actually teach different, contradictory things to each
hemisphere of the cats, depending on which hemisphere had access to the experiment, by covering
one of their eyes 2. As suspected, learned information was not communicated to the other
hemisphere. Human testing shortly followed, initially led by Sperry, and subsequently together
with his student Michael Gazzaniga, who leads SBP studies today.
As mentioned before, SBPs show no signs of any behavioral change in normal circumstances,
which left the scientific community in doubt as to how the severing of such a huge axonal pathway
could produce no change in their output behavior. Sperry, Gazzaniga and their colleagues enter
the picture as explorers of the split brain, dedicating years of research to the topic, providing the
world with a massive amount of data on these individuals, and finally revealing what had puzzled
the scientific community for years: the corpus callosotomy does produce behavioral changes,
though these might not have been so obvious at first. The classic human split-brain study
paradigm is as follows: the subject is presented with a white screen, which he is told to fixate,
but only at dead center. This ensures the division of the visual field into left and right, which, as
we have seen, guarantees that information on either side of the field will reach, exclusively, the
contralateral hemisphere. On this screen, experimenters can now flash very brief stimuli, each
lasting about 200 milliseconds: enough time to quickly identify a stimulus, but not enough for an
ocular response (such as one that would lead the subject
to focus the stimulus instead of the center of the screen) (Thorpe et al 1996). With this simple
procedure, the left and right hemispheres can be stimulated separately, and responses from each
of them can be studied.
2 Unlike human callosotomy, in which the optic chiasm is kept intact (allowing for input from one retina to reach both
hemispheres), in cats' callosotomy the optic chiasm is also severed, allowing for left eye / right eye input to be
lateralized in the hemispheres.
Suppose a stimulus is presented on the left side of the screen. If an individual with a healthy brain
were asked what he was seeing, he would promptly describe whatever was on the screen. But
Sperry and Gazzaniga were working with SBPs, and, as bizarre as it seemed at first, the patient
was not able to describe it. In fact, as far as the patient could verbalize, he claimed not to see
anything at all. On the other hand, if the stimulus were shown on the right side of the screen, the
patient would be able both to acknowledge seeing it and to describe it (Sperry 1968; Gazzaniga
2013). If this were not strange enough: if you were to give an SBP something to hold in his left
hand, all the while not allowing him to see what he was grabbing, he could not describe what he
was holding, and again would even claim not to be holding anything at all. However, as you might
suspect, should the object be held in the subject's right hand, he would be able to claim to be
holding something and, if possible, even to identify it (Sperry 1968).
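The asymmetry of verbal report described above can be captured in a minimal toy model. This is my own sketch, not the experimenters' protocol: it assumes only what the text states, namely that the left hemisphere is language-dominant and that, without the corpus callosum, a flashed stimulus stays in the contralateral hemisphere. The function name and dictionary keys are invented for illustration.

```python
def flash_stimulus(hemifield: str, stimulus: str, callosum_intact: bool) -> dict:
    """Toy model of the classic paradigm: flash a stimulus to one visual
    hemifield and predict what the subject can report and do."""
    # Crossed routing: left hemifield -> RH, right hemifield -> LH.
    hemisphere = "LH" if hemifield == "right" else "RH"
    seen_by = {hemisphere}
    if callosum_intact:
        # With an intact corpus callosum, the information is shared.
        seen_by = {"LH", "RH"}
    return {
        "seen_by": seen_by,
        # Verbal report requires the language-dominant left hemisphere.
        "verbal_report": stimulus if "LH" in seen_by else "I saw nothing",
        # A hand can retrieve the object if its controlling hemisphere saw it.
        "can_pick_with_left_hand": "RH" in seen_by,
        "can_pick_with_right_hand": "LH" in seen_by,
    }
```

Under this sketch, a stimulus flashed to the left hemifield of a split-brain subject yields the verbal report "I saw nothing", even though the left, right-hemisphere-controlled hand can still pick out the object.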
Now suppose an SBP is asked to pick up an object, through touch alone, that had been shown on
the left side of the screen. The subject would readily find the object with his left, RH-controlled
hand (though claiming that he would prefer to do it with his right, LH-controlled hand). But then,
if the experimenter were to say something like "Good job on picking the right object!", the subject
would surely answer something like "The right object? How can I know which one is the right
object if I don't know what I saw?" (Sperry 1968). Should this experiment be repeated time and
time again, the subject would always get it right, and still claim not to know why. This result is
beyond mere luck or chance. But then, why was the subject not able to verbalize that he had the
information needed to pick the right object, or even that he clearly knew he had such information?
These are the sorts of strange behaviors that split-brain patients reveal, and these behaviors are
what we call the curious Split-Brain Phenomena. It does get stranger, however.
SBPs can also act upon two different stimuli (one presented to each hemisphere) simultaneously
(Gazzaniga & Sperry 1966). If, say, two pictures of different objects were flashed on the two sides
of the screen, the subject would be able to pick, through touch alone and at the same time, the
correct object with each hand, each corresponding to its respective hemisphere. Imagine also that
an SBP is asked to draw a circle with his left hand and a square with his right. A healthy-brained
individual would struggle to accomplish such a task, and would probably produce two shapes that
are neither a square nor a circle. An SBP does this with little effort, and indeed is able to draw a
crude circle with one hand and a square with the other (Markmcdermott 2010). It seems, so far,
that upon separation of the hemispheres a certain level of independence arises in each, which
leads SBPs to be able to act as if they were two different people.
Stepping away from experiments involving opposing stimulation of the hemispheres, let us see
what happens when an SBP is faced with a task that can be completed through joint work by both
hemispheres. J.W., one famous SBP who has worked with Gazzaniga, is a very good artist and a
car aficionado. In an experiment in which he participated, the word "CAR" was flashed to the
LH, and "1928" was flashed to the RH. He was then asked to draw what he had seen. What J.W.
drew was a car from 1928. So even without interhemispheric communication, these individuals
were still able to perform tasks that required the contribution of both hemispheres. This
communication is thought to occur not internally, however, but externally, through some sort of
cross-cueing of information (an unconscious form of external hemispheric communication), or
perhaps through cooperative control over the drawing hand (Gazzaniga 2013).
Before bringing the SBPs’ ability to cross-cue information externally to a close, consider
the following experiment: a SBP was flashed numbers, 1 through 9, to either hemisphere, and
was told to count the numbers as they appeared. As such, if the number “1” was flashed to the
LH, the SBP would have to verbally report “One!”, and repeat the process with any other number
(1 through 9) that was flashed. When the LH was tested, as expected, the SBP could count the
numbers perfectly, and it took him approximately the same time to do so regardless of the number
shown. Now, when the same test was applied to the RH, the SBP was actually able to count them
as well, verbally reporting the numbers as they were shown. Against all the data collected before,
which suggested that anything shown to the RH would go unmentioned, the SBP was able to
count what was flashed to the RH just as with the LH. The time it took the SBP to count each
number, however, differed from number to number (Gazzaniga & Hillyard 1971). As Gazzaniga
and Hillyard report: “we were standing before a self-cueing mechanism”. As it would seem, the
SBP, through his RH, had developed a method of slightly nodding his head a number of times
corresponding to the number displayed, so as to signal to the LH the number the RH had access
to, allowing the numbers to be verbalized through the LH. And again, all this self-cueing occurred
without the subject being able to verbalize the reason behind such curious behavior. So the
hemispheres can function independently, to the point of one interfering with what the other is up
to, but can also work together to achieve a cooperative goal, to the point where one communicates
with the other externally.
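The self-cueing mechanism just described can be caricatured in a few lines of code. The sketch below is my own illustration, not anything taken from the experiments: the “RH” encodes the flashed digit as a sequence of head nods, and the “LH”, which never saw the digit, merely counts the nods and verbalizes the total. The caricature also makes plain why response time would vary with the digit: more nods take longer to produce.

```python
# Toy model (illustrative only) of the cross-cueing loop reported by
# Gazzaniga & Hillyard (1971): the RH signals a flashed digit through
# head nods; the speaking LH counts the nods and reports the number.

def right_hemisphere_signal(digit: int) -> list:
    """The RH cannot speak; it encodes the digit it saw as nods."""
    return ["nod"] * digit

def left_hemisphere_report(nods: list) -> str:
    """The LH never saw the digit; it only counts external nods."""
    return f"{len(nods)}!"

for flashed in (1, 4, 9):
    nods = right_hemisphere_signal(flashed)
    # more nods means a longer wait before the verbal report
    print(left_hemisphere_report(nods))
```

The point of the toy model is only that the “channel” between the hemispheres here is an external, behavioral one, not an internal neural pathway.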
Consider now what happens when a SBP is faced with a moral question. In several
experiments, SBPs were presented with short stories involving moral decisions. In these
experiments there was no division of stimulus of any kind. The SBPs were simply given a story
regarding a moral decision, and were then asked to judge it accordingly. The stories were similar
to the following:

“Kate is a waitress preparing to take a meal out to a customer's table. The customer is with his
friends, and he orders a meal that calls for sesame seeds. The customer happens to love sesame seeds
and will have no problem at all if he eats the sesame seeds in his meal. After overhearing part of the
customer’s conversation with his friends, Kate believes that the customer is highly allergic to sesame
seeds. Kate puts the sesame seeds in. The customer enjoys his meal and is fine”.
- Miller, M. 2010

The SBP was then asked to classify the waitress’ decision of bringing the sesame-seed-filled
meal to the customer as permissible or forbidden, depending on whether the decision was
morally sound or not. The SBP classified it as permissible, though checking before bringing a
potentially harmful meal would have been the morally acceptable thing to do. The SBP’s
classification was based not on the morality and beliefs he should know Kate to have, but on the
outcome itself. In this case, as nothing bad happened, he would classify it as permissible, though
we know well it wasn’t. Furthermore, if asked why he considered it permissible, he would answer
“Sesame seeds? Such small things couldn’t harm anybody!” (Miller et al 2010). Had the tale had
a negative outcome, the answer would have been forbidden. It would seem a SBP is not able to
verbalize rightly on the morality of situations. It is curious, though, that it is when a SBP’s
decision is not a morally sound one that he feels the need to justify it, as in the example given,
almost as if trying to convince himself of his claim.
To finalize this short overview of the strange split-brain phenomena, let us consider what
happens when it is pointed out to a SBP that he is acting strangely. Suppose that, upon choosing
a correct object which had been flashed to the RH, the patient was asked “How did you know this
was the right object?”. Recall that SBPs verbally claim to have no idea of what the RH-controlled
hand is doing. The patient would answer something similar to “I must be just guessing!” or “I
must’ve done it unconsciously” (Sperry 1968), though we know that is not the case. Consider the
following experiment: two pictures were shown, one on each half of the testing screen: on the
left, a snowed-in house; on the right, a chicken claw. Then, several random pictures were shown
to the SBP on a table, and he was asked to point out, with each hand, the picture that best related
to the image he had just seen on the screen. With his RH-controlled hand he pointed to a snow
shovel, and with his LH-controlled hand he pointed to a chicken head. Yet, when asked why he
had chosen those pictures, he answered “The chicken claw goes with the chicken, and you need
a shovel to clean out the chicken shed” (Gazzaniga & LeDoux 1978). Consult Figure 1.02 in the
appendix section for a depiction of said experiment. Again, the SBP was right in pointing out the
pictures he did, but was not able to give the right reason for his decision. He knew that what he
had seen through his RH was a snowed-in house, and that is why he picked the shovel. But he
wasn’t able to say so, and instead his “speaking” LH creates a narrative 3 that somehow mixes
what the left hand had picked out with what the LH had seen on the screen. This specific
phenomenon, in which the LH “creates” coherent narratives to justify the RH-controlled actions,
has been named the Left-Brain Interpreter. Through this interpreter, when the SBP is confronted
with questions related to his RH-controlled actions, the LH is able to keep the individual’s
personal story clear of contradiction, even if, as we can clearly see, that story is just an illusion.
3 Note that it is not the LH that creates a narrative, but rather we, holders of the mind. Attributing such concrete
attributes to structures that cannot hold them, such as the hemispheres, is to fall for the Fallacy of Misplaced
Concreteness. It is useful to expose the argument in these terms, however, so it will be done, mindfully, several times
throughout the exposition.
So, if it weren’t strange enough to imagine that a corpus callosotomy could have no real
consequence at the behavioral level, realizing that it does indeed have consequences, and that
those consequences could be so bizarre and yet so elusive, was scientifically amazing. So much
so that it earned Roger Sperry his Nobel Prize, and to this day no one has been able to fully explain
these phenomena without undermining, to some extent, what we have always believed to be true
about the brain, the mind, and how they are related.
These studies have contributed greatly to what we now know about the brain, and
indeed corroborated some of the theories that were already circulating in the scientific
community. For instance, we knew that the linguistic center of the brain is (usually) located in the
LH. It then fits the picture that, as observed, a SBP would not be able to speak of anything
related to the “unspeaking” RH. And not only language: we’ve come to learn that each hemisphere
has its own cognitive proficiencies, in what we call Lateralization of Function in the brain
(McGilchrist 2012). Split-brain studies perfectly support this idea, for in them we see different
proficiencies associated with each hemisphere. Tasks involving specific cognitive faculties are
completed with greater ease by the hemisphere specialized in them, and though neural plasticity
accounts for some variance and adaptation, it is still clear that each hemisphere has its own
independent aptitudes, as well as duplicated ones. Upon severing of the CC, what seems to
happen in these patients is that each hemisphere no longer “knows” what the other one is up to,
and characteristics exclusively present in either hemisphere can no longer be shared with the
other. This lack of interhemispheric communication leads to the strange split-brain phenomena,
as the uncommunicating brain no longer seems able to operate as a unit in specific,
experimental, situations.
1.3 Nagel’s Question
We can now appreciate the attention that these phenomena have received from the
scientific community. The debate surrounding the split-brain phenomena is immense, as
countless questions can be raised in relation to it. One major problem that arises is that these
phenomena may come to undermine our idea of what a mind is. We know, even if only
intuitively, that we have subjective mental experiences that make up what we call the mind. We
had never had to consider that any one individual could have more than one mind, as this very
proposition seemed unintuitive and quite unnecessary; there was no reason for us to even consider
it. SBPs and their behavior came to change this, with the ground-shaking fact that they do seem
to act as if two different persons at times. So, at least with respect to SBPs, we may ask how
many minds we can consider such patients to have. Philosopher Thomas Nagel, renowned for his
work in the philosophy of mind, asked this very question back in 1971 in his work “Brain
Bisection and the Unity of Consciousness”, which to this day stands at the very heart of the
split-brain debate. In it, Nagel proposed an exhaustive list of possible answers to his question,
considering what was known of the Split-Brain Phenomena. He considered five different
possibilities, any of which might explain how a one-brained individual could show such a strange
duality of behavior:
1. Split-Brain Patients have one normal mind in the Left Hemisphere, and any response
produced by the Right Hemisphere is the product of an automaton, and not of conscious
mental processes;
2. Split-Brain Patients have one normal mind in the Left Hemisphere, and isolated conscious
phenomena may occur in the Right Hemisphere, though not integrated into a mind;
3. Split-Brain Patients have two minds, one associated with each hemisphere;
4. Split-Brain Patients have one mind, whose contents arise from both hemispheres, which
hold dissociated content;
5. Split-Brain Patients have one mind while hemispheric functioning is parallel, but when
in experimental situations in which each hemisphere is given different tasks or
information, the single mind splits into two (even if only temporarily).
Just as Nagel did in his work, let us start by analyzing the first two hypotheses, which
share the common premise that what goes on in the RH is not the product of a mind. On the matter
of the first hypothesis, what evidence do we have supporting it? It is true that a SBP is unable to
testify to whatever happens in the RH, and by all means denies awareness of activity in that
hemisphere. But we know well that the linguistic center of the brain is located in the LH. If the
RH no longer has access to the LH, how can we expect anything that goes through it to even be
mentionable? Or how can we expect the LH to know what is going through the RH? We can’t.
But that is hardly enough to jump to the conclusion that the RH holds no mental activity. Consider
everything else the RH is capable of: it can clearly perceive information and act upon it; respond
to complex stimuli; hold its own set of beliefs about the world; rationalize; follow instructions;
and even raise unspoken objections. So the argument from the RH’s lack of testimony is not
enough to permit the claim that it is not part of a mind, and thus the first hypothesis is refuted.
Discarded as well is the second hypothesis, which rests on the same premise. It fixes part of the
problem of the first, as it admits that whatever happens in the RH is conscious phenomena, but it
denies the integration of that activity into a mind. Immediately this suggestion seems implausible.
We’ve seen that all the RH does seems to fit the idea we have of a mind! Its mental activities are
not fragmented – mental structure is present – and it is a subject of experience and action; all in
all, it can do most of what anyone with a mind can do (perhaps with some hardships, and without
speech). And again, in the face of all the evidence pointing the other way, this hypothesis falls
short, just as the first one does.
The remaining three hypotheses are definitely more sophisticated; the main issue with
them is choosing one over the others. Starting with the third: it is possible that a SBP has two
minds, one associated with each hemisphere. Note that both hemispheres are independently
capable of things we ascribe to a mind; they have different functional proficiencies, and different
beliefs and actions that cohere among themselves; mental activity simply does not cohere with
what goes on in the other hemisphere. The split cortex may then be home to two separate minds,
which share a common body, but whose higher functions are independent both physically and
psychologically. They would work in parallel in the majority of situations, as both hemispheres
would have access to the same information. This hypothesis loses ground when we try to decide
between it and the possibility of there being only one mind. For in most situations these
individuals behave as one, and indeed people who relate to SBPs consider them to be single
individuals, as very little would make them suppose otherwise. We see little reason to deny that
each hemisphere is home to independent mental activity. But in no situation can it be made clear
that each hemisphere is absolutely distinct and independent. If they indeed had two minds, and
we could see in them a clear temporal dissociation, this hypothesis would be the clearly preferable
choice. But as we stand, we have no true reason to believe that these individuals do not still hold
a single mind with content derived from both hemispheres. This leads us to the fourth hypothesis,
where a single, dissociated mind could be the answer. But similarly to what happened with the
third hypothesis, what leads us to think this may be the answer is not enough for us to understand
the moments in which they appear to have two minds. Indeed, this hypothesis would provide
insight into why, in experimental situations, bits of information separately provided to each
hemisphere would output different behaviors (as mental content would itself be dissociated
between the hemispheres), whilst keeping a single mind, as seen in the majority of situations. In
situations where no differing stimulation was provided to each hemisphere, they would act as one;
but where different stimuli were provided, both could act independently, yet still under the same
mind. Nagel proceeds to reject this hypothesis as well, however, as we would have to attribute
some degree of incoherence to a single mind, and that is no easy feat:
“For in these patients there appear to be things happening simultaneously which cannot fit into a
single mind: simultaneous attention to two incompatible tasks, for example, without interaction
between the purposes of the left and right hands.”
- Nagel, T., 1971
It would require that the single conscious entity these individuals are be the product of
two independent control systems, one associated with each hemisphere. This would make it hard
to imagine what it would be like to be a SBP: to consciously hold something while consciously
not holding it. Being a single person seems to be related to holding connected conscious
experiences, such that when shown two different colors or two different shapes, one holds a single
conscious experience that allows one to see whether they are different colors or shapes. SBPs do
not hold such a unified consciousness, and indeed fail on this particular point that we take to be
characteristic of a single mind. Furthermore, while interhemispherically we cannot see this unity
in experimental situations, we do see it intrahemispherically. This again points us towards the
two-mind hypothesis, which, as we’ve seen, does not give us strong evidence as to why
individuals who act as a single person most of the time must be ascribed two minds. These two
hypotheses thus hold an interesting characteristic, where the strengths of one are the weaknesses
of the other, and deciding between them becomes a difficult problem.
Alternatively, the fifth hypothesis suggests that a SBP may have a single mind in normal
situations, which temporarily splits in certain specific situations. This explanation would
certainly fix the problem raised against the two-mind hypothesis, as in normality the individual
could act as one, and experimentally act as two. It nonetheless fails to explain the phenomena, as
it falls into fallacy. The structural change these patients’ brains have undergone happened, in most
cases, years before they were subjects of experiments, and no apparent physical change can be
pointed to between normality and the experimental moments. This ad hoc argument is hence
faced with problems, as there is nothing in the experimental paradigms that may lead us to think
that any additional internal change has taken place that could account for this contrast in
behaviors. But even if it were possible for the mind to divide upon segregation of information,
though in experimental situations certain behaviors seem to be the product of two minds, most
behaviors do not. The subjects follow instructions as a single mind, their posture does not suggest
any duality, and neither does their interaction with the experimenter. Except for the responses to
those specific stimuli that reach each hemisphere separately, there are no indications of two minds
in any other behavior, even in experimental situations.
These individuals somehow fall somewhere between us, individuals with normal and
intact brains, and a pair of individuals locked in independent cooperation, such as when playing
a duet, as Nagel brilliantly put it. But if none of these hypotheses fits the phenomena and, as it
seems, we are left with no other options, then we stand before an impasse: either our idea of what
a mind is is wrong, or split-brain patients have no countable number of minds. How can we accept
these dualities as the product of a brain much like our own? When considering these experiments,
we cannot help but compare our own subjective mental experience to that which only a SBP can
imagine having; and we are not able to do it. Nagel suggests that maybe the problem lies in
holding our own, so-called normal mental experience as the pillar that supports the idea that any
organism with a mind must have our level of mental unity, thus ignoring the possibility that our
own unity is an abstraction, a product of this complex neural control system of ours.
1.4 Points on the Present Study
Though the debate has stretched over several decades, the split-brain phenomena remain
a hot topic to this day, and naturally new developments have arisen, bringing with them new
ideas and clear room for expansion of thought. This thesis takes the opportunity afforded by new
evidence in the split-brain debate, philosophy, psychology and neuroscience to try and revive
Nagel’s question; only this time, hopefully, reaching a determinate answer.
1.4.1 Approach and Relevancy
To this purpose, investigations were conducted on several topics, namely: the properties
that make up a mind; single-minded incoherencies; the brain activity that might lead to said
incoherencies; and the lateralization of function. All these topics were then related to what we
witness in the split-brain phenomena.
Standing on evidence gathered from more modern perspectives in philosophy,
psychology and neuroscience, I intend in this thesis to revisit Nagel’s question, and to give it a
determinate answer: just as he hypothesized in his fourth possibility, SBPs are home to one mind
with dissociated contents derived from both hemispheres. Taking this as my hypothesis, the
objectives of this work are threefold: (1) to show that mental incoherence is not enough to assert
the loss of the single mind; (2) to emphasize that not only does any mind (normal or a SBP's)
reveal incoherence, but that this incoherence may come in varying forms and natures, such that
degrees of mental coherence can be considered; (3) to understand how incoherence is conceivable
in any brain, be it normal or split. Once these objectives have been fulfilled, and arguments in
their defense proposed, I believe a solid basis will have been built for considering the single mind
to be incoherent, for both normal individuals and SBPs. Regarding (1), we will see that normal
minds reveal incoherent states, which amounts to having an incoherent mind – a mind whose
states do not (always) cohere. As regards (2), witnessing that normal individuals may reveal
incoherence at different depths (following a chain of mental processes from mental representation
to taking action), with correspondingly different incoherent behaviors, and that SBPs reveal
incoherence in line with what would be expected if mental representations became consciously
incoherent, mental coherence may be understood in degrees, and any mind must fall within the
range between absolute coherence and no coherence at all. As for (3), considering the brain as a
Helmholtzian predictive machine, generating models for the conscious representation of
predictions, and taking into account that more than one such model can be housed in the brain,
both normal incoherence and SBP incoherence can be explained in accordance with this Bayesian
Brain theory, this approach being the first of its kind applied to the split-brain phenomena.
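The Helmholtzian picture invoked above can be loosely illustrated with a toy computation. The sketch below is not part of the thesis’s argument, and every number in it is invented; it only shows the core loop of a predictive machine: issue a prediction, measure the prediction error against the input, and nudge the internal model by a fraction of that error.

```python
# Toy sketch of a Helmholtzian "predictive machine" (illustrative only):
# an internal model repeatedly predicts its sensory input and updates
# itself in proportion to the prediction error.

def predictive_update(estimate: float, observation: float,
                      learning_rate: float = 0.2) -> float:
    """Move the internal estimate toward the observation
    by a fraction of the prediction error."""
    error = observation - estimate      # prediction error
    return estimate + learning_rate * error

estimate = 0.0      # the model's initial "belief"
signal = 1.0        # a constant sensory input
for _ in range(50):
    estimate = predictive_update(estimate, signal)

# After enough iterations the prediction error is nearly minimized.
print(round(estimate, 3))
```

On this picture, two such models running over segregated inputs, as in a split brain, could each minimize their own prediction error while disagreeing with one another; that is the intuition the third objective develops.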
These answers require the integration of information from various areas of cognitive
research, as the debate cannot be approached from any single perspective with any hope of
answering it. To understand what goes on in a SBP’s mind, we must make way for new theories
of the mind that can accommodate the strange split-brain phenomena. We must understand the
brain mechanisms that give rise to these behaviors, and how such a physical brain, not very
different from yours or mine, can produce different mental outputs given a change in hemispheric
communication. As such, through investigation built from various standpoints of cognitive
science – philosophy of mind, psychology and neuroscience – I intend to argue in a way that, as
we will see in the following chapter, integrates knowledge from them all.
Thankfully, due to advances in medical science, invasive procedures such as the corpus
callosotomy are drastically decreasing. On the other hand, this means our window of opportunity
for studying the split-brain phenomena is coming to an end. Now, more than ever, is the time to
approach this issue critically. Furthermore, the implications of such answers would be grand, for
somewhere among them may lie an understanding of our minds, their structure and basis,
knowledge of the correlates of consciousness, and enlightenment on the very nature of the relation
between our physical brain and our immaterial mind.
1.4.2 Methods
Given the multidisciplinary approach inherent to the problem under analysis, various
branches of research had to be considered. Along with Sperry and Gazzaniga’s work on the split-
brain phenomena (and that of other researchers), investigations were conducted on the properties
of the mind; on phenomena of mental incoherence; on brain structure and information-transfer
mechanisms (such as the CC) and on lateralized and duplicated functions in its hemispheres; on
theories of how brain functioning generates consciousness; and on the importance of structures
other than the cortex in the maintenance of a mind. Research was conducted through various
article search engines, such as PubMed, PhilPapers, Google Scholar and Semantic Scholar, along
with searches in specific books, given their undeniable importance to the explored topics (Iain
McGilchrist’s “The Master and his Emissary” for lateralization of function, or Eric Kandel’s
“Principles of Neural Science” for general aspects of brain structure, for instance).
Chapter 2
To Define a Mind
Proceeding to the next step, we must now come to some consensus as to what we
take a mind to be. Ask yourself: what is a mind? What is it to have a mind? All of us have an
intuitive idea of what the answers to these questions are. Having a chain of thoughts must be
related to having a mind; so must being conscious. Maybe being able to act on impulses, or even
having emotions or feelings? Without doubt, all these and much more are said to be accessible, at
least to some extent, through mental experience. The difficulty in defining a mind lies in finding
a universally acceptable definition for it. Consider this: what if someone had all of the
characteristics previously mentioned, but no emotions? Or no ability to take action? Should we
consider an individual without the ability to form memories to be without a mind, just because
memory is clearly a characteristic of a mind? And what of a cat, whose mind is certainly different
from ours? Surely differing mental qualities are not a suitable reason to assume the animal has no
mind. A cat may not be able to form a complex chain of thoughts as we do, or hold the elaborate
concepts we are able to. But it will still know where to find its food, or how to provoke us into
its playful activities. Cats can perceive the world and hold a very basic understanding of it
(arguably enough) so as to be able to act upon it accordingly. They can experience it
phenomenally; feel angry, sad or happy; it would seem to me a cat shows clear evidence of having
a mind, even if one different from yours or mine. There must then exist certain mental properties
that are necessary for having a mind, and some set of them that is sufficient for the same purpose.
The question then is: which ones?
Given the difficulty of defining the mind, we will not attempt to come up with a definition
for it per se, but will rather propose a set of characteristics that we expect anyone with a mind to
have, such that one holding all of these characteristics cannot be said not to have a mind. The
proposed characteristics are: (1) forming representations; (2) holding beliefs; (3) taking action;
(4) having consciousness; (5) having coherence. Note that these mental properties may
individually be neither necessary nor sufficient for having a mind, but having them all would
certainly be sufficient for us to ascribe a mind to their holder. After treating these mental faculties,
we’ll go over the reasons that led to the choice of these five. Finally, I shall disclose the argument
behind the first objective of this work: to show that mental incoherence can be the product of any
mind we deem singular.
2.1 Mental Representations
Starting with mental representations: we all have the ability to mentally perceive the
world. Any given part of reality can become part of a mental experience regarding that reality.
To be in any given state of mind is to have a mental state (Sehon 1994). A mental state is then
any mental activity that may occupy a state in the mind – if you think of a cat, you are having a
mental state representing a cat; if you feel angry and reflect upon that feeling, you are having a
mental state representing your anger. All mental states, in order to be represented, must be
reflected upon. But one can represent the world without reflecting upon that representation. When
you stare now at this document, you are not reflecting on the mental state of seeing this document;
you are simply perceiving it, and indeed representing it. Mental representations are then mental
states that represent reality or other mental states (Fodor 1981). These mental states are said to be
intentional (they refer to something, or are about something), and hence are subject to the semantic
and formal properties which characterize them, such as truth conditions, relevance or accuracy,
and so forth (Fodor 1981). If you were to say “the cat is a mammal”, your mental representation
would be true; but if you were to say “the cat can fly”, it would not be. Notice, however, that
regardless of its semantic properties, there is a certain freedom associated with representing a
mental state. As I write here “the pink cat dances upside down on the piano”, you are clearly able
to imagine said cat, even though you have probably never had this thought or seen this happen.
You are forming a new representation in your mind based on representations you already own
(a cat, the color pink, a piano). In this sense, the capacity of the mind to represent content is vast,
though it may ultimately be limited by our mental representations themselves.
Through these mental representations, room for all kinds of mental activity arises.
When planning, rationalizing, visualizing or desiring – when tapping into any kind of mental
activity – mental representations are its building blocks. The theory that stands by this idea has
become known as the Representational Theory of Mind (Sterelny 1990). If you imagine the most
beautiful sunset on the beach, you are immediately presented with a series of mental
representations of a reddish sun falling behind the horizon on a sandy shore. When you plan a
summer vacation, you create a series of mental representations of what you are planning to do
throughout it. All these mental representations – of the sun, the setting, the beach or a series of
plans – are fittingly related so as to form a single mental state that represents whatever it is
supposed to represent.
Cognitive science finds its way to this issue by trying to naturalize this mental
phenomenon. Any given mental activity must be associated with a corresponding brain activity;
alas, finding this association is the center of the mind-body problem. We have found, however,
several brain regions associated with the formation of different representations: perceiving
horizontal lines is associated with the firing of specific neurons, as is the perception of vertical
lines (Fink et al, 2001); of faces (Kanwisher & Yovel 2006); of words (Cohen et al 2000); and
the list could go on. If we think about this, however, it doesn’t seem as if we make a conscious
effort to represent these things. We simply perceive them, and in so doing represent them. While
awake and aware, we are constantly perceiving information and forming the mental
representations that make up this unified sense of who we are in the world. But we do not
represent everything (Mack & Rock 1998), as that would certainly be overwhelming. We
represent that which is useful – that which has presumably been evolutionarily selected to be
represented. When you stare at a white wall, you do not notice all the details, the little bumps or
the different shades of coloration. You see a wall, plain and simple (unless you make it your
intention to look for detail, and even then some details will surely be missed). The mental
representation you become aware of is filtered, such that only some part of it is consciously
represented. Nonetheless, the problem remains, as explaining how brain activity may be
translated into mental states may still be out of our reach. Certainly within reach, however, is our
ability to ponder the very nature of mental representations, which has been a hot topic of debate
in philosophy of mind.
Folk psychology is said to be the commonsense understanding of how the mind works
(Shroeder 2006). When we witness a behavior, we try to explain it according to what we know
(e.g. of the person, of the situation), and justify it accordingly. In order for normal human beings
to engage in social interaction, some presuppositions must be considered, such as the principle of
charity, through which we consider the best possible version of each other’s statements, allowing
us to make sense of each other (Blackburn 2016). We may understand the importance of these
mental representations of ours through this scope: should the mental representations we all hold
differ, then it would be extremely hard to have any type of discussion or conversation or even
social interaction. As such, most philosophers and scientists agree on the representational nature
of the mind, which is ultimately reflected in our everyday mental experiences.
2.2 Beliefs
In tight relation with mental representations, we can now consider beliefs. A belief is a
propositional attitude 4 (Fodor 1978): a mental state that relates to certain mental
representations and that refers to something truth-verifiable. A belief must be either true or
false. Likewise, if we believe in something, we should not be able to honestly claim that we do
not believe that exact same proposition. If we think about our beliefs, we quickly realize that
not all of them require long processes of thought or introspection: we all believe this document is
4 A propositional attitude is an attitude one takes towards a proposition. If one believes that cats are black, one has
a belief (the attitude) towards a proposition (that cats are black).
black on white; that time does not stop; or that the sun will rise each new day. Beliefs are, in a
way, one of the most fundamental forms of mental representation, and certainly one of the central
parts of mental activity, behavior and conscious thought.
To believe in something would then entail the existence of a mental state, such as a mental
representation, whose content must refer to something. Just as with mental representations,
there is a certain freedom associated with what one may believe in. Beliefs can be contested, can
change, and can even be unfounded. When established in a mind, they can easily be recalled at
any time. It would then seem fitting to consider a belief to be the mental state of having a stored
representation, whose propositional content will be the same as the belief itself (Sterelny 1994).
One is left to wonder about the nature of such a mental state. A strongly established belief may be
very hard to change. Indeed, there are even certain unconscious processes which seem to serve in
defense of beliefs (unfounded or not), as seen in the backfire effect, in which a challenge
to our beliefs may lead to even stronger belief in them (Nyhan & Reifler 2010). Beliefs do seem
to be very important for the mind, and furthermore, beliefs seem to be the roots of most behavioral
phenomena, taking a core position in the causality of behavior (Leicester 2008).
Now that we understand the importance that the mental belief system holds, we may ask
more difficult questions: how does a belief get to have content? Are there different types of
beliefs? What are the properties of beliefs, and does the existence of beliefs in the mind entail the
existence of structure for the mind? The content of a belief, as hinted previously, seems to be
intimately related to the proposition associated with it, which is related to the representation it is
based upon. Consider the belief that “the sun is bright”. This belief holds that an external object,
the sun, possesses the property of being bright, which is its proposition. It is the storage of this
representation and its proposition that allows a minded individual to take stances
and attitudes towards such representations. As to different types of beliefs, two are generally
considered by both philosophers and psychologists alike: occurrent beliefs, which correspond to
beliefs present in the conscious mind at a given time; and dispositional beliefs, which are beliefs
stored in the mind but not consciously introspected at a given time (Rose &
Schaffer 2013). Forming and storing beliefs has been widely studied along with memory, where
a non-occurrent belief may be recalled at any time, provided it is stored in memory (Schacter
2001).
The properties of beliefs are a more delicate matter, and one filled with diverging
opinions. One that must be mentioned beforehand is that which sets the propositional attitude of
believing apart from other propositional attitudes: believing must relate to a norm of truth. A
belief’s propositional representation must be subject to truth conditions (Burge 2010).
Furthermore, an established belief in any mind must be either true or false in that mind, such
that one may not believe in something while honestly claiming it not to be true. Another property
of beliefs is the fact that they seem to be limitless, in what Jerry Fodor calls the productivity of
thought (Fodor 1975) – there seems to be no limit to the number of beliefs that one may have.
Another important property brought up by Fodor is the systematicity of our mental
activity. If one is able to believe that “cats dislike dogs”, one is also able to believe that “dogs
dislike cats”. The way beliefs, and thoughts in general, can be rearranged suggests the existence
of internal mental organization, or a system they must abide by.
As such, it is considered that beliefs must have some kind of structure. Some believe that
structure is intimately related to language (Davidson 1982); others consider the existence of a
map-like structure of representational content, such that, by repetition and rearranging of mapped
representations, a limitless number of mental states can be achieved without resort to language,
while retaining the properties suggested by Fodor (Blumson 2012).
2.3 Actions
Having an established belief system is, as mentioned, a pillar for the output of behavior.
To act can be said to be to behave. Action, so long as it is willful, must be accompanied by
intention (Davidson 1980). So, intentions can be said to be the main precursor of action, and
certainly the precursor of willful action. Though having an intention may not be the same as
holding the belief of willingness to act (Paul 2009), the two are related insofar as
intentions require beliefs to be acted upon. We can broadly consider two kinds of actions, and to
simplify, let us start by excluding right away a set of actions and behaviors that we are not
interested in debating here. Suppose that a ball is coming straight for you. What do you do?
Probably, you will at least attempt to dodge the ball. But you will do this without willing it; this action
holds no conscious intention (though avoiding the ball could be an unconscious intention). This
level of action will not be the subject of discussion here, as we do not intend to ask whether a SBP can act
unconsciously. This leaves us with willful action, which is, nonetheless, riddled with controversy.
Donald Davidson, back in 1980 (Davidson, 1980), wrote the following premise:
(1) If a person F’s by G’ing, then the act of F’ing = the act of G’ing
This seems to be true to some extent. If a person breaks a glass by throwing said glass
against the wall, then by all means, breaking the glass = throwing the glass against the wall. But
what if we apply the same premise to the following thought: a person moves his hand by trying
to move his hand. It would seem that willing to move allows for the movement itself (Wilson et
al 2016).
If we take a look inside the brain, we know there is a region in each hemisphere dedicated
to voluntary motor function – the motor cortex. Whenever we move, neuron firing in this region
is certain. We generate this mental state of intending to act through activation of neurons in the
motor cortex, which in turn leads to the physical command for movement. To further understand
the possibility of this mechanic for agency, consider Benjamin Libet’s controversial experiments
on willful movement. Libet noted that preparation for motor movement (as seen through neural
firing in the motor cortex) preceded conscious awareness of the intention for said movement
(Libet 1985). This left us to question the very nature of our free will, as admitting that neuronal
firing for movement occurred prior to our conscious awareness of intention for that movement
would entail that our movement and actions would not be determined by our intention to do so.
Considering that free will is not something we can let go of so easily, theories of what leads
to conscious intent have been considered, such as Dennett’s “Fame in the Brain” theory (Dennett,
2006). Through these theories, we may make sense of the intention for movement producing said
actions, where an unconscious mental process (such as the intention to act) may become conscious
as the result of certain cerebral processes. It is important to note that, independently of neuronal
firing occurring prior to conscious awareness of intention, this is a continuous process: neural firing leads
to consciousness of the intention, which leads to the movement. It is a successive, continuous process,
as conscious awareness requires a mental representation of the phenomenon, and thus only after said
representation is formed can it become consciously accessible. In this sense, what we may be willing to do, and
our becoming conscious of it, are not supposed to be simultaneous. Nonetheless, it should seem clear
by now that actions entail the existence of mental states that ground said actions, and without
these mental states, action as a product of a mind could not be considered.
One issue that should be briefly mentioned is that of simultaneous action. We see SBPs
taking simultaneous actions, without one task hindering the other. A normal mind struggles at this
feat: if asked to consciously act upon two different tasks simultaneously, a reduction in the
efficiency of both tasks is expected (Pashler 1994). Yet, a SBP can consciously focus on two
different tasks simultaneously with a brain much like our own. A SBP can draw a circle with one
hand and a square with the other with no difficulty. This is a whole other level of simultaneous
action, and one we, normal individuals, cannot easily achieve. As mentioned, both hemispheres
hold the needed neural basis for action. And as it would seem, both hemispheres hold the needed
neural basis for the formation of representations and beliefs as well. If actions are based on beliefs,
then it would be possible for both hemispheres to be able to take action simultaneously, provided
that they would be allowed to act independently – which is exactly what a corpus callosotomy
seems to enable – and that each hemisphere held a different set of relevant beliefs for the task,
which they seem to do.
2.4 Consciousness
This is perhaps one of the most delicate subjects to be debated in science today.
Consciousness has always been, and continues to be, a blur. What is consciousness, how we get
to be conscious, or even why we are conscious, have been central questions in cognitive science,
and even though advances have been made, little is yet known with certainty. It is central to the
split-brain debate as well, however, so we must try our best to work with what we know. William
James, back in 1890, coined the popular expression “stream of consciousness”. It remains
to this day a very accurate expression for what consciousness is, or seems to be. While conscious,
any person would claim that their chain of thoughts is unbroken and indivisible. This leads the
way to the first idea one must hold about consciousness: it feels like a unified, continuous mental
phenomenon that can hardly be said to be divisible (Bayne 2008). That being said, when addressing
consciousness, several ideas may come to mind, and surprisingly, most of them may be right.
First and foremost, there is the quality of being a conscious being. We know we are
conscious, and we are pretty sure a rock probably isn’t. There is something it is like to be
conscious, and in the set of “something it is like to be” we can find numerous types of experiences
(Chalmers 1996). Be it a cat or a human being, one assumes both are sentient beings – beings
capable of perceiving and interacting with the world they are set upon. This may refer to the most
basic form of what it is like to be anything at all. Sentience comes attached to other likely
attributes, such as wakefulness and self-consciousness (Gulick 2017). We, human beings, are
aware of being aware. This higher level of conscious thought might be (among other amazing
human characteristics) what enables us to do the astounding things that only the human race seems
to be able to do. Lastly, there must be something that we are aware of when we speak of
awareness of awareness. We can consciously focus on our conscious mental states themselves,
engaging in a higher level of conscious thought. That which we are conscious of we call a conscious
state. This may be both the object of conscious focus (consciousness of the cat) and the experience
of conscious thought (consciousness of my consciousness of the cat). Every mental state we can be
said to be conscious of is a conscious state. This calls back the dynamic idea of a flowing
consciousness: all our conscious states, our chain of thoughts, seem to cohere successively and
perfectly, such that there is no moment in which one conscious state ends and another begins.
Conscious states may come in two forms themselves. One may consider conscious states
which hold conceptual representations at their root (for the purpose of use in belief forming,
reasoning or other high-level conceptual processes) and phenomenal conscious states, i.e.
conscious states which hold phenomenal representations at their root, addressing the “what it’s like”
aspects of experience and focusing on their phenomenal nature (Rosenthal, 1986). Ned Block, in
his renowned article “On a confusion about the function of Consciousness” from 1995, points at
a distinction in consciousness between its phenomenal nature and its accessible nature. Whereas
we can hold representations and consciously use them to guide our actions and behaviors, we can
also hold representations of experiences consciously, which do not need to guide our action at all.
In this sense, the former, known as access consciousness, is necessary for a mind capable of
conscious reasoning, of planning, acting and speaking (along with other mental capacities that
require introspection). The latter, phenomenal consciousness, is necessary for
consciously feeling emotion and sensory experience, such as the blueness of the sky or the
painfulness of a sting.
If one holds representational conscious states that may guide action and thought, then one
is access conscious. If one holds conscious states of experiences that allow one to feel what it is
like to experience such a state, one is phenomenally conscious. Access consciousness seems to
be necessary for having a mind, as otherwise one would be left to wonder how such a creature
would be able to interact with his world. Block gives one such example in his article, considering
that one with blindsight is not able to phenomenally experience the visual world. That does not
keep him from introspecting or from reasoning, so long as he holds representational conscious
states that may guide his behaviors. Phenomenal consciousness, as providing the subjective ability
to consciously experience the world, seems to be necessary for a mind as well. Nonetheless, they
are two parts of the same whole experience that consciousness is. One may, for instance, access
the phenomenal nature of consciousness, provided the phenomenal experience can be kept in the
form of a phenomenal concept. Together, full conscious experience seems to be a sufficient
property of a mind, such that anything capable of consciousness must be ascribed a mind. A distinction
can now be made between consciousness as a whole, and the conscious states housed by the mind.
If consciousness is the whole experience of being conscious, a conscious state would be each
single moment in time in which we may say to be conscious. These conscious states are
continuous (so long as we are conscious), such that we cannot say when one conscious state begins
and another one ends. If there is no such experience that we may say to be conscious of, then
certainly, there is nothing it is like to be conscious of such an experience.
Finally, and again for the sake of simplicity, focus will be centered on only one of the
types of consciousness – access consciousness (at least as far as one may consider one without
the other). This decision shortens the problem of SBPs’ minds greatly, as going into the hard
problem of consciousness – explaining how the firing of physical neurons may account for the
felt quality of conscious experience – would certainly lead to a much longer, and much more
controversial thesis.

“Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it
is like to be in that state. The mark of access-consciousness, by contrast, is availability for use in
reasoning and rationally guiding speech and action.”
- Block, N., 1995

Note, however, that no claim is made that a SBP is not phenomenally conscious, through either
hemisphere for that matter. If an unpleasant and strange smell is presented
to the RH, a look of disgust would certainly be visible on the SBP’s face, though he may not be
able to declare why. If presented to the LH, the same reaction would merely differ by the addition
of a verbal explanation. If one hemisphere interferes with what the other one is trying to accomplish,
anger or frustration can be seen as the SBP pounds the table with the hand controlled by the
perturbed hemisphere. These reactions clearly touch the phenomenal nature of consciousness. There is
something it is like to feel and to experience that which the hemispheres have access to, and both
hemispheres react to those experiences accordingly. I have little doubt that phenomenal mental
states take place in both hemispheres. This being said, what can we say about SBP consciousness?
In splitting moments, can we say the SBP has two consciousnesses? One split consciousness? By
now you can see a relation between this question and Nagel’s. This issue will be further discussed
in the fourth chapter of this work, dedicated to the mind of a SBP.
2.5 Coherence
Some brief words must be said about mental coherence (though a whole chapter will be
dedicated to the coherence issue shortly). When considering coherence in a single mind, it seems to
me there are three distinct possibilities one may consider: (1) the mind may hold states that
cohere with reality; (2) the mind may hold states that cohere with other states in that mind; (3) the
mind may hold conscious states that cohere with other states in that mind. When we here discuss
mental coherence, we are certainly not referring to the first definition. That notion, commonly
known as correspondence, though an important feature of a mind, is not relevant for
understanding whether a mind must itself be coherent in order for it to be a single mind, which is
what is debated here. The question is whether mental states must themselves cohere or not, and
on that point, both (2) and (3) seem fit for analysis.
In regard to what is necessary for having a single mind, it seems we never feel
consciously incoherent. As mentioned before, the flow of conscious thoughts and experiences we
have always holds successive coherence; there is no state of passage from one conscious state to
the next. We consider, then, that the whole set of co-conscious conscious states one may hold – all
the conscious states one holds when conscious – is systematically coherent. Since we
consciously guide our actions through conscious mental processes, one should not be able to take
certain actions in an incoherent way (as, say, fully focusing consciously on two differing tasks).
This is on par with the grounds upon which Nagel rejected his fourth hypothesis back in 1971.
For a single mind to show incoherence, such that strange incompatible behaviors occur, would
raise several questions regarding the unified nature that we take our mind to have. And so,
allegedly, the mind cannot be incoherent.
Coherent as well are the mental properties we have spoken of in this chapter. If I hold the
(true) representation of a cat in the room I am now in, I will form the belief that a cat is with me in
my room, and may have the intention to pet it because I know it is in the room with me (and I
enjoy petting my cat). For these not to cohere, my representation of my cat in my room would
fail to lead to my belief that it is in the room, for instance, and that does not seem to happen. If I am
conscious of the cat, the beliefs I form revolving around this representation will cohere with that
representation, and the actions I may take regarding that representation and belief will also cohere with them.
However, we are still speaking of what happens on a conscious level. Up until now, only point
(3) was considered. But the mind is more than its conscious part, and one must ask: can an
unconscious mental state fail to cohere with a conscious mental state? It would definitely count as
a form of incoherence in a single mind, which is home to both unconscious and conscious mental
processes. This question will be addressed in the next chapter. Furthermore, the mind of a SBP
seems to house two whole sets of such conscious states, one in each hemisphere. So we may also
ask: does a SBP hold incoherent whole sets of conscious states? And if so, how can we consider
them to have a single mind like you or me? These questions will also be developed further. They
are, nonetheless, central to understanding the hypothesis defended here. It seems clear, however,
that a single mind reveals both coherent individual conscious states (our conscious states cohere
with the mental states housed in them) and coherent sets of conscious states (such that we
continuously feel consciously coherent). To consider whether the single mind must necessarily
be coherent requires analysis of point (2), which will be done in the following chapter. But at
least this far, we can take the idea that the conscious mind seems to be necessarily coherent.
2.0.1 Why these 5?
One thing you might have been pondering since the beginning of this chapter is why these five
characteristics precisely. As mentioned earlier, defining a mind is no easy feat, and I do not intend
to do so here. But ask yourself: what else, other than the mental properties we have seen here, would
you expect a mind to do? These five characteristics were chosen with this question in mind,
and I believe them to be a fairly exhaustive set of mental properties, such that
anyone or anything with them all must be ascribed a mind. Forming mental representations
is essential to perceiving the world through a mind or with use of a mind; having a set of beliefs is
equally invaluable when engaging the world, as beliefs set the ground upon which the world may
be engaged; engaging the world is acting on it, which is the way to output mental activity to the
external, physical world; having the capacity to consciously experience anything, be it physical
or mental, is definitely a mental property, and so is consciously acting on any situation through
use of the previous mental properties; and finally, a mind always reveals itself in a coherent way,
at least consciously. I might have addressed propositional attitudes in place of beliefs, as the former
is the set to which the latter belongs. But a belief, it seems to me, is the most fundamental
propositional attitude, as it grounds the causality of behavior itself. Given that beliefs seem to
ground most other varieties of thought (much like representations ground beliefs), that we are
here restricting ourselves largely to access consciousness, and that we are working
specifically on the issue of SBPs, the chain of mental processes that may lead to the output of
conscious mental activity through behavior is enough to proceed with the ideas developed in this
work, and to that purpose, these five characteristics seem to suffice. To close, a brief
note on the nature of the last two properties discussed must be made. They are, by themselves,
characteristic properties of a mind. However, they also rule over the previous three properties.
Representations, beliefs and actions may or may not be conscious. Likewise, they may or may
not be coherent. This will be further discussed, but keeping it in mind now will prove useful
shortly for understanding the nature of mind incoherencies.
2.6 The Mind can be Incoherent
Now that we have discussed the matter of mental coherence, we are in a position to discuss
the first argument defended in this work, standing for its first objective. As it would appear,
coherence is considered to be a necessary property of a mind. In the present work, I propose a
different point of view: I consider that all mental activity, so long as it is conscious, is coherent,
such that none of us ever feels incoherent at any given time. The converse, however, may not be
true. From an external perspective, we can witness incoherencies in other people; likewise, they
could perceive (through behavior) incoherence in my mind, even though I do not feel said
incoherence and am not aware of it myself. Guided by this idea, we may consider the following:
C1: If (1) consciously we systematically cohere, and all that to which we have mental access is
that of which we are conscious, and (2) externally we can witness mental incoherencies through
incoherent behavior in others; then (3) coherence seems not to be a necessary property of a mind,
but rather of conscious states, such that (i) no conscious state we may ever have is incoherent with
another successive or simultaneous conscious state within the same whole set of co-conscious
conscious states, (ii) no single conscious state is incoherent with the mental states housed in
it, and (iii) the mind may reveal incoherence provided the integrity of (i) and (ii).
In this way, a mind as a whole, through consideration of all mental states (conscious and
unconscious), may be incoherent, but never consciously so. This seems to be true for us, single-
minded individuals. More importantly, it seems to be true for SBPs as well. If we recall Nagel’s
hypotheses on his question of how many minds a SBP has, his fourth, rejected hypothesis was
centered on the idea that they could have one mind, whose contents derive from both hemispheres
separately, leading to what would appear to be an incoherent mind in experimental situations. He
proceeds to reject this hypothesis, as accepting a single mind to be incoherent would easily lead
back to the two-minds hypothesis, and especially because no SBP reveals himself to be
absolutely incoherent at any given time. But consider now this: if both the left and the right
hemispheres are functionally capable of forming representations, holding beliefs and taking
actions (which we have seen they appear to be), then I would be inclined to believe they can hold
different conscious states as well. As such, the SBP’s single mind would become home to two
sets of conscious states – two loci of consciousness – simultaneously while in experimental
situations, in which different representations (and associated mental states) are considered: one
set of co-conscious conscious states associated with each hemisphere, each keeping all its
conscious states coherent within the same set – one whole incoherent locus of consciousness
with more than one simultaneous set of co-conscious conscious states.
One might argue that if a mind is home to two sets of conscious states, then it no longer
is one consciousness but two, and similarly, not a single mind but two. But I believe we can all
agree that the brain, with its two hemispheres, has evolved to work as a whole: the brain and its
hemispheres mostly try to output a mind that is whole and coherent, built from perceptions
associated with a single body, and connected to a whole sub-cortical region which undergoes no splitting.
Even though both hemispheres can act upon what they believe to be the mistakes of the other,
there seems to be no moment in which one hemisphere gains permanent independence from the
other – some conscious states remain coherent even in SBPs’ experimental situations, such
as their posture. For these reasons, I hold that one consciousness is kept, even if incoherently.
However, I am not blind to the strength of this argument. I then propose the following
alternative: according to the Bayesian Brain theory, which sees the brain as a prediction-
generating system, the question to be asked is not how many consciousnesses there are, but
rather how many models generating conscious experience are found within a single brain. This
question allows a single mind and a single consciousness to be kept, and still explains what
seems to be happening in the mind of a SBP. More on this will be developed in the fourth chapter
of this work. Furthermore, if the hypothesis defended here contains any amount of truth, then
holding two sets of conscious states which certainly do not cohere will not be enough for the
consideration of two minds, as we who appear to have a single mind may too hold incoherent
mental states (which, recall, is ultimately what a conscious state is, even if it belongs to a
different set of co-conscious conscious states, as in SBPs). In the following chapters,
this hypothesis will be formally developed, and as different pieces of evidence pointing to this
idea are presented, I expect this somewhat radical thought to take root, and maybe shift the
reader’s beliefs into accepting the possibility of the incoherent mind.
Chapter 3
The Incoherent Mind
The first point we should address is a clear, solid definition for coherence. What do we
take coherence to be, regardless of context? If a mind must be coherent, then it must have the
quality of holding together. The mind is composed of mental states, and so, if these mental states
cohere, we may say that a mind is coherent. For mental states to cohere, one mental state
must not undermine the existence of another, such that the two could not be
housed in the same mind. An example of such incoherent mental states would be holding the
representation that there is a cat in the room while simultaneously holding the representation
that there is not a cat in the room. These two mental states do not cohere, as having either of the
two would entail that one cannot have the other. For a single mind to be necessarily coherent, then,
would require that any single mind abide by this idea of mental-state coherence, when, truly,
it does not always seem so evident. We do not always hold coherent mental states, and that
is reflected in incoherencies that we sometimes reveal. However, we do not feel consciously
incoherent because of them. Recalling what was proposed when coherence was discussed
previously, one’s mental states may all cohere, or one’s conscious mental states may cohere with
other mental states (including other conscious states). Suppose now that a man is a heavy drinker,
and holds the desire to drink. He is also a wise man, and knows his heavy drinking is no good,
and therefore holds the desire to quit drinking as well. This man simultaneously holds the desire
to drink and the desire not to drink. Holding two such desires that do not cohere (as the desire
for p and the desire for not-p) means this man holds incoherent
mental states. What is worse, he is aware of this contradiction, so both are conscious as well. As
it would seem, mental states can be incoherent, some even consciously so. Likewise, a SBP
reveals clear incoherent behavior in experimental situations. Even if only for brief moments, a
single individual is able to act as if he were two. This case, though related, is at a whole new level
of incoherence. These individuals do not hold a conscious state which holds incompatible mental
states; they hold incompatible conscious states themselves 5. They can believe they are seeing a cat
in the room and believe that they are not seeing a cat in the room simultaneously. In this sense, it
seems understandable that a SBP shows a higher degree of incoherence than any other single
individual.
5 Conscious experience entails unity of what is experienced. There is a difference between holding a state a-and-not-a
versus holding a state a and a state not-a. Where the former entails contradictory possession of states under the same
conscious spec (a logical contradiction), the latter refers to different conscious specs of those contradictory states.
In this chapter, evidence pointing to the possibility of holding a mind that may not always
be coherent will be further discussed. Moreover, different degrees of coherence will be analyzed,
with respect to the different mental properties that become incoherent themselves. Finally, the
discussion will be brought back to the Split-Brain debate, where we will attempt to shed light
on the nature of the SBP's incoherencies.
3.1 Types of Incoherence
As seen previously, the mind and its activities may be construed through consideration of
the representational theory of mind. Holding, first and foremost, representational content of the
world and the mind, a chain of mental processes can be tracked up until the output of behavior
through agency. Starting with mental representations, beliefs may be formed, through which
actions may be taken. All of these may be part of the conscious mind or not, depending on whether
its underlying mental state is in a conscious state or not. When speaking of incoherence, however,
one must consider what it means for each of these mental qualities to be incoherent. Is it the same
to act incoherently as it is to hold an incoherent set of beliefs? These are very different questions,
and as we will see, they have slightly different answers. We will now deconstruct the mental properties
we've discussed, in light of the incoherencies that each may reveal 6.
3.1.1 Incoherent Action
First we must ask what it is to act incoherently. The point to be considered when faced
with the idea of incoherent action is rather simple. Ponder this situation:
1. John believes, beyond any reasonable doubt, that the action a is better than the action b;
2. John does b instead of a, regardless.
This is not an unfounded example, and we can find several behaviors that follow this idea.
The question is then to understand how an individual, against his beliefs of better judgement,
chooses to do b over a. To act as John does is to act incoherently, for if everything that allows John
to decide in the best possible manner points towards doing a being the best judgement, it
does not seem to make sense to consider John doing any alternative b (Arpaly, 2000). How can
we account for situations in which, like John, beliefs (that he clearly has knowledge of) do not
ground his actions?
6 In this section I address psychological phenomena that led me to believe a normal mind is incoherent. For an account
of how the brain may allow for such incoherencies, consult section 4.2, in the fourth chapter.
Donald Davidson, in his "Essays on Actions and Events" (Davidson, 1980), clears
some essential points on the possibility of incoherent action, stating that: (1) if an agent acts
incoherently, he must do so willingly; (2) if an agent acts incoherently, he must have knowledge
of an alternative course of action to the incoherent one. Considering these
points, if an individual still chooses a course of action that is not within his best judgement, as
John does in doing b instead of a, then that individual is acting incoherently.
In classic literature, incoherent action is said to come in the form of weakness of will or
akrasia; and to this day most consider these to be interchangeable definitions (Mele 2009). It
refers to action against one's best judgement. When faced with this kind of willful clash, the
resulting action, knowingly going against what the agent considers better, is an
incoherent action. Suppose a man with a gambling addiction who wishes to stop, or one filled
with wishes of revenge even though he knows he had better not act on them. In either case, to succumb to
the desire to gamble, or to fall prey to the hunger for vengeance, is to enter a personal battle of
will. However, willful clashes in agency should not be said to occur solely between better reasoning and
wrongful desire. One will actually hold contradictory desires at some point
(though one of them may certainly be born of reason). When the outcome favors the desire that
is clearly not the best course of action, that action will have defeated one's better judgement, and
when this happens, the action can be said not to cohere with the beliefs that stand for his better
judgement.
As we’ve seen, desires are mental states. For one to be able to hold such incoherent mental
states would entail that one’s mind does not cohere in situations in which this tension occurs
between incompatible desires, as seen through some level of behavioral conflict. Furthermore,
incoherent actions may be taken on the basis of incompatibilities that are not restricted to
reasonable and unreasonable desires. In the following example from Alfred Mele's 1987 work
"Irrationality", we may see that akratic actions come in various forms, and as Mele notes, with
different natures as well:
Rocky, who has promised his mother that he would never play tackle football, has just been invited
by some older boys to play in tomorrow's pick-up game. He believes that his promise evaluatively
defeats his reasons for playing and consequently judges that it would be best not to play; but he
decides to play anyway. However, when the time comes, he suffers a failure of nerve. He does not
show up for the game—not because he judges it best not to play, but rather because he is afraid. He
would not have played even if he had decisively judged it best to do so.
- Mele, A., 1987
In this short story, Rocky reveals himself to be weak-willed, first through deciding to act
against his best judgement, and then through his fear of the game. This is a clear example of how
akratic behavior may take several forms. It was conflicting desires that led the way
to the first akratic behavior, but it was fear which led the way to the second. In rejecting the moral
reasons that told against playing, he acted against his better judgement and revealed akratic behavior. But if he had
gone through with his resolution, he would have revealed strength of will and not succumbed to his
fear, which is, from an internal perspective of the problem, the best course of action he could possibly
take (Mele, 1995). Given the different natures of these akratic actions, there has been a tendency to
attribute a different definition to each of them. Mele considers that when one reveals akrasia in the
very decision of action upon a given situation, one finds evaluative akrasia. When one reveals
akrasia on the execution of the action, one finds executive akrasia (Mele, 2009). Rocky then fell
to evaluative akrasia when deciding to act against his best moral judgement, and also fell to
executive akrasia, when fear led him to act cowardly.
In either case, as with Rocky, any individual who acts against his best possible judgement
can be said to be acting incoherently, as his actions cannot cohere with certain mental states which
should definitely guide them (namely, his belief in a best course of action). Recalling the
definition for coherence, if certain mental states do not cohere in individuals in these situations,
then they hold an incoherent (single) mind at these times, even if a high degree of coherence can
still be considered in these acts. A heavy drinker who chooses to drink and knows he should not
has the belief “I believe drinking is doing me harm”, which leads to his desire to stop drinking.
He also holds the belief “I believe drinking will ease the pain”, which leads to his desire to keep
drinking. In this example, his beliefs are not incoherent, but the desires they lead to are. Should it be
his beliefs that revealed incoherence, the degree of coherence would decrease, as we will see
shortly.
Incoherencies of this nature, of which the agent is conscious, occur only at the level
of agency. Individuals who find themselves with akratic behaviors are aware of their lack of
strength of will, and are aware of their beliefs that led them to accept that their akratic action is
against their better judgement. Incoherence in other levels of mental activity and incoherent action
are related, and incoherence in more basal mental states may end up constituting incoherent
actions as well. But incoherencies in the other mental processes considered (namely
representations and beliefs) have the very distinctive trait of not being based on coherent beliefs,
as akratic actions are, and such incoherent beliefs will produce deeper levels of
incoherence in the long run. As a final note, understand that even though individuals who act
incoherently are aware of their illogical or conflicting desires and intentions, their conscious states
are coherent still. Though they are consciously riddled with indecisions, and ultimately incoherent
acts, they are still consciously coherent. They hold a conscious state which houses two
contradictory desires, not two simultaneous conscious states holding one desire each.
Furthermore, if we recall the hypothesized idea behind single-minded incoherence, a conscious
state has to cohere with the mental states it houses; but the mental states housed in it need not cohere with each other.
And indeed, this is the case we have before us: one conscious state housing the two opposing
desires with which it coheres (reflected by the tension present in this indecision); and two mental
states that are incoherent with each other, such that both cannot be acted upon simultaneously.
3.1.2 Incoherent Beliefs
Turning to beliefs, a cornerstone of most behaviors and actions, we may find individuals who hold
incoherencies in their sets of beliefs. Unlike what happens in incoherent action, in which an
individual acts against that which he believes to be the better course of action, individuals who
hold incoherent beliefs genuinely believe in something that may not make sense with reality, and
ultimately end up holding sets of beliefs that do not cohere with each other. These individuals,
commonly known as Self-Deceivers, somehow lead themselves to believe in some proposition or
propositions that do not cohere with other propositions they also hold, at some point, to be true,
failing in what we’ve seen to be central for considering mental coherence.
Consider a belief p and a belief not-p. These two beliefs do not cohere, and the
same individual should not be able to hold both of them simultaneously. Self-deception allows
for the possibility of this phenomenon, provided that (1) the self-deceiver holds contradictory beliefs,
and (2) they intentionally lead themselves to believe in something they know or believe to
be false (Deweese-Boyd, 2017). A self-deceiver effectively holds two beliefs that do not cohere
with each other at the same time, and one possibility for such a phenomenon would be having one of
them housed in a conscious state, and the other in an unconscious state, developing on Freud's
idea of repression and the unconscious mind (Freud & Phillips, 2006). Consider the following
story:
Lately Jack has been avoiding reading any magazine or newspaper article on medical issues. If they
appear on a TV program that he is watching, he immediately switches channels. If they come up in a
conversation to which he is a party, he changes the topic. He has been scheduled to have a regular
check-up with his doctor several times, but it is proving difficult for him to get this done. Each time
the appointment is scheduled, Jack forgets about it and misses the appointment. Eventually, Jack's
relatives have asked him whether he believes that he is sick, but Jack sincerely denies believing that.
- Fernández, J., 2013
Any knowledge one may hold of one's reality must, at some point, depend on the set of
beliefs one has (Dretske, 1988). In situations like these, Jack genuinely does not consciously
believe he is sick, but his behavior suggests he is avoiding the subject, pointing towards the idea
that he also believes that he is sick (and is avoiding evidence that may support that belief). He is not
conscious of the relation between this strange behavior and the belief that he might be sick, as
consciously, he definitely believes he is not. It would seem he believes not-p (as "I'm not
sick") and believes p (as "I am sick") simultaneously, though only one of them is present in a
conscious state. However, for Jack to wish to believe that he is not sick, even before compelling
(and ignored) evidence that he is, there must at some point have crossed his mind the belief that
he is sick, as it is this justified and unwanted belief that makes way for the unjustified 7 one
(Davidson, 2004). He rejects his belief p, and a non-corresponding, unjustified belief not-p
takes its place, one which does not cohere with his justified belief. From here, we can see there is one
very important point to consider: upon establishment of an opposing and unjustified belief (to the
one that is justified), the individual becomes consciously aware of it, and somehow shifts his
corresponding justified belief to an unconscious mental state. In this way, the self-deceiver avoids
conscious awareness of his self-deception (Porcher, 2012).
This raises obvious problems: in order for a self-deceiver to actively trick himself into
believing something that does not cohere with reality, he must, at some point, hold the intention
of deception (de Sosa, 1970). But how can the holder of the intention to deceive be the deceived?
To hold such an intention should render the deception fruitless (Mele, 1987). Furthermore, if an
intention to deceive is present, that intention must be based on a belief. In this case, the belief that
leads a self-deceiver to the deception is none other than the belief that is justified, but shifted
away from a conscious state. Yet if one holds such a belief, which leads to intention of deception,
how can one let go of it in order to give way to the unjustified belief? Both beliefs that do not
cohere must be held at the same time in the mind, as the unjustified belief is only able to take
root on the basis of the justified belief that is not conscious. If the justified belief were to simply
be absent, then there would no longer be a reason for the unjustified belief to keep its place, and
this is very hard to imagine, if even possible. And yet, self-deception is not an illusion or a
mistake. It's a psychological response that most of us have no control over, especially because,
as mentioned, once the unjustified belief takes root, the individual is no longer aware of his
justified belief, as it ceases to be housed in a conscious state.
This paradoxical view of self-deception is based on the premise that there is, at some
point, the intention to self-deceive. Other views try to move away from this idea, thus
avoiding the paradoxes associated with it, by considering, for example, that self-deception involves a motivational
bias towards a certain belief that may not correspond with reality, thereby avoiding the contradictory
intention to self-deceive (Dunn, 1995). These kinds of alternative deflationary views have been
strongly criticized: they do avoid the paradoxes of an intentionalist view (which attributes
the intention of deception to the self-deceiver), but fail to explain the self-deceptive phenomenon as
any different from other psychological phenomena, such as wishful thinking, from which it certainly
seems to differ (Szabados, 1973). For in self-deceiving individuals there is clear evidence of
an internal conflict (Funkhouser, 2005), and should this psychological phenomenon be reduced
to a sort of motivated reasoning, one would have no room to consider why such conflicts should
be present.
7 We speak of justified and unjustified beliefs: the belief that is housed in a conscious state and the one that is not
need not be true and false respectively; what distinguishes them is whether each is justified in the mind or not.
Two main types of self-deception are considered: straight deception, in which the
self-deceiver leads himself to believe something he wishes to be true, and twisted deception,
in which the self-deceiver leads himself to believe something he wishes to be false (Newton,
2001). The former is the most analyzed type of self-deception (Newton, 2001), and is widely used
in arguments in support of deflationary views. Examples of this kind of deception are similar to
the following:
Bill fancies Kate. Bill has asked her out on many occasions, and Kate has always declined going on
a date with him. In addition to this, Kate has complained to some common friends that she finds Bill
obnoxious, which they have mentioned to him. Bill, however, continues pursuing Kate. Noticing this
behavior, Bill's friends have asked him whether he really believes that Kate fancies him. Bill claims,
quite confidently, that Kate does fancy him, and she is just 'playing hard to get.'
- Fernández, J., 2013
In this story, Bill leads himself to believe that the girl he is attracted to is also attracted to
him, though she is "playing hard to get". He wishes to believe that his interest in her is reciprocated,
and so he leads himself to believe so, regardless of compelling evidence pointing towards the
opposite. In cases such as these, there is without a doubt the possibility of motivated reasoning,
in which the motivation one holds towards a given belief leads one to believe that which one
would prefer to believe, rather than that which the evidence would dictate (Mele, 2001).
Furthermore, Bill here does not ignore evidence, but rather interprets it in a way that fits his beliefs
and motivations. Considering such examples of self-deception, one could fall to the temptation of
thinking that deceptive intentions (and the paradoxes associated with them) are not at all necessary
to explain self-deceptive behaviors. But what of the case of Jack, and his sick/not-sick beliefs?
He wishes to believe he is not sick, and so leads himself to believe that he is not, regardless of
compelling evidence pointing towards the opposing belief. In his case, however, Jack actively rejects
any source of evidence that may bring him back to the "I am sick" belief, which he wishes not to
hold. In cases such as these, it is hard to hold motivational bias as the only explanation for the
incoherent behavior: it seems clear that in Jack's mind there is something that he wishes to avoid,
something that he does not want to be true. Motivational bias would account for him not actively
looking for evidence, but cannot account for him actively rejecting that which he has access to
(Porcher, 2012). Jack's case is one of twisted self-deception, in which he leads himself to believe
something he wishes to be false. In cases like these, individuals may end up rejecting evidence
for the truth they wish to ignore, and motivated reasoning does not account for such cases. A
self-deceiver may, in some cases, be prone to avoid the topic about which he has deceived himself,
pointing towards the idea that there is, at least, some level of awareness of the sensitivity of the
subject (even if not conscious), just as considered in Jack's story (Bach, 1997). Not only does
this support the idea of holding two beliefs that do not cohere, it hints that the justified belief is
still in mind, though not consciously.
Accepting these incoherent beliefs, and the underlying deceitful intentions, leads back to the
classic paradoxes of self-deception, however, so coming up with an explanation for the
phenomenon has been very hard, even after decades of research. But should we consider that the
single mind may be incoherent, as I argue in this work, then to accept that a single mind could
house incoherent beliefs, provided that only one of them is housed in a conscious state (Davidson,
2004), could be perfectly viable. Keep in mind also that in some cases, the self-deceiver may lead
himself to believe something that is actually true. Suppose the following story:
Walter overhears a conversation his wife is having with her friend. In this conversation, a co-worker
of hers is the main topic. Walter is an insecure man, and ever since overhearing this conversation, he
avoids talking about work, overhearing conversations or just about any topic that may relate to an
affair. When asked what he thinks, he really believes that his wife is not having an affair. All along,
his wife truly was not having an affair.
In this case, Walter deceives himself into believing his wife is not cheating on him,
rejecting the evidence he thought could point towards the idea that she was, and indeed rejecting
further suppositions of this idea. The belief that takes the place of the initially justified one is
actually the truthful one, as Walter's wife never cheated on him. In this case, the self-deceiver
deceives himself into believing something that was true all along, contrasting with the stories
analyzed thus far, where the unjustified belief held a false truth-value.
All the examples worked through here so far, under the intentionalist view, are related to what is
called psychological partitioning 8, where some degree of division must be considered in the self
and the mind (Rorty, 1980). Through it, an individual faced with a situation that they do not
wish to believe in creates a form of deceptive intention that is not housed in a conscious state;
this covers both straight and twisted deceptions. In either case, one mentally (but not consciously)
leads oneself to believe in something that is not justified, all the while keeping in mind (again,
not consciously) that there is a justified belief that for some reason must be kept away from the
conscious mind. This would entail the possibility of a single mind actively holding contradictory
intentions, and of it holding two simultaneous incoherent beliefs (even if only one in
consciousness). This raises questions on the very nature of the unconscious mind, and how it may
come to influence the conscious mind (Bargh & Morsella, 2008). Under the consideration that the
single mind must be coherent, and trying to avoid attributing multiple contradictory intentions and
beliefs to the same mind, some investigators try to stray from intentionalist views. But
again, I ask: why must this not be the case? We know so little of the conscious mind, and what's
8 Another widely considered type of partitioning is temporal partitioning, in which a self-deceiver tricks himself into
changing his belief over a period of time. This type was not reflected on here, as it misses the apparently
multi-intentional nature of the phenomenon.
more, we know even less on the influence the unconscious mind has over consciousness.
Furthermore, altered-belief behaviors have been observed in other forms, such as in Freud's idea
of denial and repression (Freud & Phillips, 2006), where the unconscious mind exerts influence
over the conscious. It would seem that the idea of a mind naturally trying to preserve a
more comfortable conscious state may not be far-fetched, and should that require accepting
an incoherent mind, I see no obstacle in such a claim either.
One is finally left to wonder about the advantage of this psychological phenomenon. For it to
be such a widely observable behavior, it must have arisen through mechanisms of evolution, such that
all of us may be susceptible to self-deception. But if so, then why? Robert Trivers has suggested
that one who is self-deceived will be more apt at deceiving others, as knowledge of a deceptive
intention would make deception of others less effective than no knowledge of it would
(Hippel & Trivers, 2011). Conversely, and possibly more sensibly, maybe it is merely a by-product
of some other advantageous evolved mental trait. Either way, it is clear that healthy human beings
self-deceive at times, and to become unaware of a given justified belief, only to have an unjustified
belief take its place, is unquestionably an incoherency, the product of a mind just like yours or mine.
Furthermore, as opposed to what happened with incoherent actions, one who holds incoherent
beliefs is not aware of this incoherence. As beliefs ground actions, if one holds a belief, even if
not justified and risen from self-deception, actions practiced by the self-deceiver will be consistent
with that belief, but inconsistent with the unconscious and justified belief. Likewise, the conscious
states the self-deceiver holds will cohere with what belief is housed therein, and will hold no
simultaneous conscious state that might be incoherent with it, such that he never feels he is acting
or believing incoherently 9. This suggests that incoherence associated with beliefs leads to a higher
degree of incoherence for the individual, as here one can no longer be conscious of the incoherence
(as opposed to the case of actions and desires).
3.1.3 Incoherent Representations
Going one step deeper into that which may very well be one of the most basal mental
activities, to incoherently form mental representations would prove to be a serious issue. As
representations hold a fundamental position in the process of mental activity, to hold incoherent
representations would lead to incoherence in properties that are built upon them (namely beliefs,
actions and conscious states). To account for an incoherent representation, one must simultaneously
hold two representations that do not cohere, such as seeing a cat in the room and not
seeing a cat in the room. Recall that here we address how the single mind may
be itself incoherent, and not whether it may represent something that does or does not cohere with
reality.
9 Note that, should the conscious state that housed the justified belief still be part of the whole set of co-conscious
conscious states, one could consciously access incoherent conscious states simultaneously. As the belief shifts to an
unconscious state and is no longer housed in a conscious state, the whole set of such conscious states keeps its coherence.
To hallucinate, for example, would be to represent something that is false, something that does not
correspond with reality. But this differs from holding incoherent representations, and we are
interested in the latter. Also worth noting is the nature of the representations we are debating in this
section. Recall, for instance, that a belief is a stored represented proposition, and since beliefs have
been seen to be capable of incoherence, in those cases the represented propositions may indeed
be incoherent. In this section, however, we are focusing on the mental capacity to represent. It is a process that
occurs prior to the storage of a representation; it is the representing itself that, if
incoherent, must be the subject of discussion here.
Should incoherent representations then be possible, one might represent incoherently and
consciously, or as we’ve seen with beliefs, represent consciously something that does not cohere
with a representation one holds unconsciously. As for conscious incoherent representations,
we have defended the idea that conscious states must cohere with the mental states
housed in them, and must in turn cohere with the set of co-conscious conscious states
they belong to. As such, unless we find an individual who believes simultaneously, through
a single whole set of co-conscious conscious states, that there is and that there isn't a cat in the
room, we cannot consider the existence of such incoherent representations possible. One
may still, theoretically, hold incoherent representations, provided only one of them is housed in a
conscious state, much as in the case of beliefs. An example of such a situation might be that of
Binocular Rivalry, which is triggered when differing images are presented to each eye,
such that the subject becomes aware of a shift in conscious awareness between one image and the other,
but never of both simultaneously 10. More detail on this phenomenon will be given in the
next chapter, but the mind seems to be representing both visual stimuli while only granting
conscious awareness of one at a time; and since both stimuli are consistently present, this
perceptual rivalry in consciously representing the images will continue until the conflicting
perceptions are resolved. This is a possible case of holding incoherent representations: one
conscious representation based on sensory acquisition of the world, and another, unconscious one,
also based on sensory acquisition of the world but differing from that which is made conscious,
the two being consciously or unconsciously represented in an alternating fashion.
A representation is a mental state, and it can indeed be incoherent with other mental states
as well, though not with other conscious mental representations. In one experiment
(Merabet, 2004), thirteen subjects were kept with no visual stimulation for a period of five days.
Throughout this time without visual stimuli, they reported being subject to visual hallucinations
"which were both simple (bright spots of light) and complex (faces, landscapes, ornate objects)"
(Merabet, 2004).
10 There is actually a moment of shift in which awareness of both is briefly present. But in these cases, we are not
aware of one and then the other; we are aware of a strange mix of both in the same image, forming a strange but singular
representation.
The subjects nonetheless had insight into the unreal nature of these
representations. In this case, these subjects held hallucinations under the belief that they were
hallucinating, and knew and believed that what they were seeing was (or most probably could be)
a product of long sensory deprivation, and not a truthful representation of the world. They were
hallucinating, all the while knowing and believing that what they were seeing was false. Their belief
("There is nothing really there") opposed what they were actually representing, as they really were
seeing something. In this sense, the conscious state they held at that moment simultaneously housed
these two mental states: one a true belief, and the other a representation that does not correspond
to reality and does not cohere with that belief. It would seem representations may,
at least, be incoherent with other mental states.
Consider now an optical illusion, such as the Müller-Lyer illusion (refer to Figure 3.01
in the appendix section for a depiction of this illusion). This, and countless other illusions, illustrate
the idea of unconscious inference (Helmholtz, 1878/1971), in which our conscious
perception is shaped by some unconscious process that is outside our conscious control.
These involuntary unconscious processes, being intimately associated with sensory and
neurological structure (Padovani et al., 2016), serve us perfectly in the majority of situations
(much like heuristic reasoning), but there are situations in which they do not, and optical
illusions are examples of such situations. We may be conscious that we are before an illusion, and
hold the belief that, as in the example given, the lines are exactly the same size. But regardless
of the belief we hold, it makes no difference: consciously, we will represent the illusion as
showing us lines that do not appear to be the same size. Again, we stand before a situation
in which that which we represent perceptually does not cohere with what we believe
regarding the very object of that representation.
Straying away from normal individuals, let us now consider SBP’s. Each of their
hemispheres is capable of housing an independent set of co-conscious conscious states, provided
segregated information reaches each hemisphere independently. Each hemisphere can represent,
form beliefs and take action on the information it is given, with little or no interference from
the other. In their case, we are in a position to attribute to them different, incoherent and
simultaneous representational mental states, housed in different sets of conscious states.
Each hemisphere, having the potential for independent representing, and no longer having a CC,
seems to effectively start doing so; and as each hemisphere forms its independent beliefs
based on those representations, and acts on those beliefs, each hemisphere holds conscious states
that cohere with the mental states formed in that hemisphere, and systematically cohere
with the set of co-conscious conscious states generated intrahemispherically, but not
interhemispherically. In this way, a SBP may, in said experimental conditions, simultaneously
and consciously represent two things that do not cohere, breaking the limit of mental
36
incoherence we see in normal individuals, where one such individual cannot hold incoherent
conscious mental representations. In normal situations, in which information reaches both
hemispheres normally due to the absence of information segregation, a SBP’s hemispheres represent
the same input in both hemispheres, and hence both hemispheres will hold the
same set of beliefs and will act as we, largely coherent individuals, do. It is important to
remember that, outside experimental situations, SBP’s cohere much more than in experimental
situations, in most situations seeming to reveal no incoherence at all; but they do hold the
potential to be more incoherent than we are.
3.1.4 Incoherent Consciousness
Given the incoherence that the single mind may reveal, one may ask whether our
consciousness is also incoherent. Can we even conceive what it would be like to be
consciously incoherent? For one to have an incoherent consciousness, one must reveal
incoherencies at the level of conscious states. As far as normal individuals go, and as mentioned,
this is yet to be seen. The chain of mental processes that leads to any conscious behavior seems to
be systematically coherent within itself. To consider an individual with an incoherent
consciousness would entail that he held conscious states that either do not cohere with the mental
properties underlying them, or do not cohere with his other conscious states. In healthy human beings,
this does not seem to happen. Even considering the incoherencies that were previously debated,
the individuals who reveal them always hold conscious states that reflect the mental processes
housed in them, and they certainly never show more than one simultaneous conscious
state, such that one such state could fail to cohere with another. In SBP’s, however, the case may
be altogether different. This issue will be discussed further in the next chapter, dedicated to how
incoherence may arise in the brain. But as we’ve seen, SBP’s seem to hold simultaneously
incoherent conscious states, provided that the incoherence occurs interhemispherically and not
intrahemispherically. In the case of SBP’s, I’m inclined to believe they have the potential to hold
an incoherent consciousness, whereas normal individuals do not. Our conscious states cohere
constantly, forming our conscious and continuous perception of reality. They cohere across time,
where one conscious state and the one that follows it must unify in a way that allows us not to
feel the shift between one conscious state and the next; and they cohere in each moment, such
that any number of things we may be conscious of are co-conscious, or rather, all become unified
in a single conscious state. As I have stated, this leads me to believe that a normal individual
cannot be home to incoherent conscious states, and thus to an incoherent consciousness.
37
3.2 Degrees and Types of Coherence
Most of what we’ve seen in this chapter has referred to incoherent behaviors that normal,
single-minded individuals may reveal. It should seem clear by now that a normal mind may be
incoherent at times – as our mental states do not always cohere – just as a SBP’s mind can. They
may differ in degree of incoherence, and perhaps in how the incoherent behaviors come to be. But
to have an incoherent mind, it seems to me, should no longer be considered an implausibility.
Following the argumentative points on how the single mind can be incoherent, evidence
suggesting the existence of degrees of such incoherence was revealed, where the deeper we go
into what enables the generation of mental experience, the higher the attainable degree of incoherence.
The second objective of this work can now be accomplished, considering:
C2: If (1) incoherence in the mind occurs; and (2) incoherence in more basal mental properties
(such as representations) leads to more incoherent behavior than incoherence in less basal ones (such as
desires); then (i) mental incoherence can, and should, be considered in degrees, and not in absolutes
(as in coherent or not coherent exclusively); and (ii) mental incoherence cannot be the basis on
which an assumption about the number of minds can be made.
Leaving considerations of the Split-Brain mind for now, let us focus on the different degrees
of incoherence in normal individuals. As we’ve seen, we may act incoherently, all the while
keeping our beliefs and representations of the world consciously coherent. Akratic individuals are
even aware of their beliefs, which leads them to realize they are acting against their best judgement,
and aware of the conflicting desires that will them to akratic action. If we consider beliefs, however,
upon self-deception an unjustified belief may become part of the individual’s set of beliefs in
consciousness, leaving the justified belief regarding the self-deceptive situation housed in an
unconscious state. However, actions taken under consideration of this unjustified belief will
cohere with that belief, such that every conscious action we take must cohere with the belief it
is based on. This hints at an interesting feature of mental incoherence, which is also stated in the
argument: the deeper we find incoherence in what enables the output of behavior through the
mind, the higher the degree of incoherence that seems to be attainable. Finally, in the case of mental
representations, we do not see a normal individual being able to incoherently represent the world.
Considering (1) in particular, there seems to be a pattern regarding mental incoherence,
starting from the output of behavior and the taking of action, all the way down to the very fundamental
mental characteristic that is representing mentally. If one acts incoherently, one is aware of one’s
incoherence, as the incoherent mental states are housed in a same conscious state. If one believes
incoherently, one may no longer hold both beliefs consciously, but provided the shift of one of those
beliefs into an unconscious state, one may again hold such incoherent beliefs. For one to represent
38
incoherently, one would have to represent information in an incoherent way, which normal
individuals do not seem able to do. We can take consciously incoherent actions; we can hold
incoherent beliefs (though not both consciously); and we cannot have incoherent representations.
It seems the deeper we go into what enables mental experience, the stricter the possibility for
incoherence in the mind. We are starting to understand how different degrees of coherence can be
considered in a mind. Having established a chain of mental processes that leads to the output of
behavior, the further back we go along the chain, the higher the influence on subsequent processes
of the chain. Similarly, the further back incoherence can be tracked along this chain, the higher the
degree of incoherence that can be witnessed in the processes supported by it. Following what has been
stated here, coherence does seem to be a necessary property of conscious states, but not of a mind
as a whole (such that we may reveal different degrees of incoherence with regard to mental states
of different natures). Nonetheless, most of the time we reveal coherent behavior and a high degree
of coherence, and incoherence should not be considered the norm within the mind.
If we were to conceive of absolute coherence in a mind, something resembling a computer
program would surely result – code in a program cannot be incoherent in any form, lest the
program crash. On the other hand, absolute incoherence would amount to no mind at all –
perhaps fragmentary mental data with no cohering structure, but certainly nothing ascribable to a mind.
We all fit somewhere in between these two 11. If we consider these absolutes at the level of
hemispheric incoherence, which is the more relevant type regarding SBP cases, absolutely
coherent hemispheres would resemble our own mental experience 12, in which the communicating
hemispheres work together to generate a fully integrated and unified sense of being in one single
mind. Absolute interhemispherical incoherence would amount to the loss of any integration and
unification of mental content into one sense of being, and thus, I suppose, the loss of the single
mind. SBP’s fit somewhere in between these two. The change in their brain structure seems to break
the limit for “normal” incoherencies in the mind, for in them we witness incoherent representing
of the world and, concomitantly, incoherent consciousness of it, through the possible existence
of two divided (and incoherent) sets of beliefs, conscious states and, ultimately, incoherent
simultaneous actions. But they also hold interhemispherically coherent mental (and conscious)
states in their minds, and thus at no moment is there a total loss of integration of mental content
into one mind; rather, the one mind becomes further incoherent, with fewer mental states cohering.
11 Note that this is a rather simplistic way to consider the problem, but a viable one. If we may consider different types
of incoherence, then it may not be as simple as a linear range into which we all fall, but a multi-dimensional phase
space of all the types of incoherence and their respective measures. Even so, all these may still ultimately be,
theoretically, reduced to a linear problem.
12 Although interhemispherical incoherence can occur in normal individuals (see the next chapter on this),
hemispherically speaking we are certainly closer to coherence than SBP’s are, and that is the main point of this
argument.
39
By now you may have noticed that the reasons behind these two types of incoherency differ:
whereas in normal individuals the whole brain, and the mental states housed therein, may reveal
incoherence, in SBP’s each hemisphere becomes home to independent mental states, giving rise
to an incoherent conscious duality within the whole brain. Whereas the first is due to naturally
occurring mental (and brain) processes, the second is only revealed as a consequence of a
callosotomy. They are different types of incoherence, but both are still mental incoherencies
as Nagel would have put it, leading to “(…) things happening simultaneously which cannot fit
into a single mind” (Nagel 1971). It should now be clear that the point is not to show that we
and SBP’s hold a sameness of mind, but rather to point out that incoherence in a mind is not unseen
in normal individuals either, regardless of type or degree. Anyone with a mind can be subject to
mental incoherence, and so a decrease in coherence cannot be the basis on which assumptions
about the number of minds are made. If we ascribe ourselves one single mind, we should
very well do the same with SBP’s.
Finally, in a closing remark for this chapter, we ask: how coherent must we be, such that
we have one and only one mind? We know that absolute coherence is not necessary to consider
an individual as having a single mind. We all have a single mind, and yet we all reveal incoherencies
at times. As such, we may answer this question based on this fact: we may not know for sure how
incoherent one must be for the single mind to be lost, but we do know that so long as one
is not absolutely incoherent all the time, a single mind is enough to understand these sporadic
incoherencies.
40
Chapter 4
Incoherence in the Brain
The first and second objectives of this work having been tackled, we will now delve into
what makes a SBP’s incoherencies so particular, and how even a normal brain can potentially
produce such incoherent behaviors. To this purpose, we will consider (1) what theory of brain
activity can make sense of the incoherencies that arise in both normal individuals and SBP’s; (2)
what has been studied that accounts for what each hemisphere can do independently; and (3) what
theories counting a number of conscious streams fit the conscious phenomena SBP’s reveal,
and potentially fit our own as well.
Before we cover these issues, I’ll take a moment to cover a few details regarding the
structure of the cerebral cortex, which will be widely discussed here. Firstly, it’s important to
understand how it is functionally organized. In an incredibly simplified conception of the
cortex, columns of communicating neurons running from the deeper cortical layers to the cortical
surface define the cortex’s functional organization (Lodato et al 2015). These columns may be seen as
functional units, and the larger an area dedicated to a specific function, the greater the number of
functional columns we may expect dedicated to that function in that area. The cortex may
also be divided into layers, through which the organization of the input and output of information is
accomplished (Kandel 2013). Layer I mostly holds afferent (arriving) connections from other
areas of the cortex and the thalamus, as well as efferent (exiting) ones. Layers II and III hold
connections within and between cortical areas, serving centrally for intra-cortical
communication. Together, these three layers are also the main connection points for
interhemispheric communication. Layer IV is especially connected to the thalamus, dedicated to
the reception of stimuli and particularly developed in primary sensory areas. Layer V is the major
pathway of efferent communication between differing cortical areas, and Layer VI holds
heterogeneous efferent and afferent connections between cortical areas. Secondly, connections in
the cortex (both within functional columns and between them, to different areas) can be seen as
forward, driving signal along the cortical hierarchy (from the first point of processing to later
areas responsible for increasingly complex representations); backward, modulating the workings
of regions lower in the hierarchy; and lateral, where communication occurs within the same
41
hierarchical level, and may be forward or backward. By ascending or bottom-up connections,
forward connections from lower to higher levels of the cortical hierarchy are specifically meant.
Descending or top-down connections specifically mean backward connections from higher to
lower levels of the cortical hierarchy.
4.1 The Bayesian Brain
The discovery of how the brain functions to produce conscious mental experience has long
been an issue of great importance for the understanding of cognition. In recent decades, one
particular theory has gained relevance by providing insight into how the brain, through statistical
consideration of predictions, is able to produce the conscious mental experiences we are so
familiar with. Making use of Bayesian statistical inference, the brain is able to predict what input
is reaching our primary sensory receptors and assign predicted causes to sensory experience,
generating conscious mental representations. This theory was a building block for the conception
of artificial neural networks in computational neuroscience, and has recently become a prominent
theory in neuroscience as well, having gathered a considerable number of adherents and being
supported by neurological and psychological evidence. Here I will attempt to show that
this theory, if applied to SBP’s as it is to normal individuals, will yield an intelligible and
reasonable way to understand the mind of these individuals and their incoherencies, and one not
very far from our own.
4.1.1 Helmholtz and the Inference Machine
The origins of this theory trace back to the late 19th century. In 1867, Hermann von
Helmholtz studied how the perception of certain sensory stimuli differed from the physical
properties of those stimuli (Helmholtz 1867/2001). Through such situations (as are, for instance, optical
illusions), he recognized the existence of unconscious inference 13, where some mental faculties were
within grasp of conscious control, whereas others were not. A duality in the way the mind works
could be considered: if what exists in the world (and what we perceive through the senses) is not
subject to interpretation or variance, and yet what we are made consciously aware of does not always
reflect this invariable reality, then there must be a system that allows for the interpretation of the
data, as opposed to one dedicated to its acquisition.
Helmholtz pondered on these aspects, relating them to the nature of their relationship with
our nervous system. If there was mental activity defining conscious experience independent from
13 Unconscious Inference relates to what Helmholtz holds to be happening in the brain in order to generate conscious
awareness of objects of perception. For Helmholtz, conscious awareness derives from the prediction of the causes of
the sensory data that represent a perceived object. Unconscious inference can then be seen as the result of this process,
one that can end in a mismatch between conscious perception and reality; see further.
42
correspondence to reality, then there seemed to exist models constructed into our nervous system
over which we hold no conscious control. Helmholtz considered the predictive nature of our minds
to explain this phenomenon: when one perceives information, or takes action, one holds certain
expectations. When one sees a table, for instance, neurons at the level of the retina fire in
accordance with the sensory information being received. To the cause of this sensory
impingement, the brain attributes (predicts) the representation of a table. At every moment, the
mind (and the nervous system that is home to it) seems to be trying to predict sensory information
and come up with the best possible explanation for it, building expectations and improving
predictions. Helmholtz held that conscious mental experience arose in the brain upon most
accurately predicting impingements from sensory input; in other words, it is what the brain is
predicting about the sensory impingement that leads to consciousness of said impingement.
he himself put it:
We also realize Helmholtz had a clear idea of actions and movement holding an important
role in “experiencing” the world, and generating awareness of it accordingly. Note that sensory
reception of information accordingly comes prior to conscious awareness of it. Considering visual
perception for instance, most movements by the eyes (saccadic movements) are unconscious, but
nonetheless lead to gathering of useful information regarding whatever we are perceiving, and
ultimately conscious awareness of it by mental representation of the predicted causes of those
sensory impingements. This meant that action took an important position in the generation of
conscious experience as well, in what we will shortly see to be called Active Inference.
Helmholtz’s ideas became a pillar behind modern computational neuroscience (Doya et
al 2007), through what is widely known in literature as the “Helmholtz Machine” (Dayan et al
1995; Hinton et al 1995). In this machine, two modes of functioning – a generative model and a
recognition model – allowed the acquisition of information through the recognition model, and
the generation of a representation of what the machine predicted to be the cause of the received
information through the generative model. The framework for the building of this machine was
later applied to the brain itself by Karl Friston, becoming a fast growing theory in neuroscience
today. In accordance to Friston, and similarly to what the Helmholtz Machine held, the brain can
be said to be receiving information through the primary receptors, and predicting the causes of
those sensory impingements. These predictions lead to generation of representations of these
predicted causes, which is what we get to be consciously aware of. The Generative Model –
through which the brain predicts, in a top-down (and also lateral) fashion, the causes of sensory
input conveyed and creates conscious mental experience of it – bases its hypotheses for said
“Each movement we make by which we alter the appearance of objects should be thought of as an
experiment designed to test whether we have understood correctly the invariant relations of the
phenomena before us (…)”
- Hermann von Helmholtz, 1878
43
predictions on something similar to the previously mentioned recognition model: sensory data
perceived at the receptors carry information bottom-up that shapes the generative model’s
predictions and priors, and ultimately what mental experiences those may lead to (Friston 2005).
The Bayesian Brain is thus continuously receiving sensory data, predicting the best possible cause
for it, and representing it as conscious states: a bottom-up process dependent on sensory reception,
conveying sensory information from lower regions of the cortex to higher ones and correcting
them; and a top-down process, where higher cortical regions influence (convey predictions
about), in a processing cascade, the sensory information being conveyed upwards 14
(Friston 2006). In this way, the top-down and the bottom-up processes are in a constant, intertwined
relation to build mental experience. The bottom-up process actively improves the predictive
model the brain holds for the sensory information it is given access to (by increasing the data
available to strengthen or weaken predictions); the top-down process, upon gaining access to
information conveyed bottom-up, selects what information it has become more adept at predicting,
and often suppresses the neuronal firing conveying this data (“explains away”, as Friston puts it),
leading to ease of representation of what is well predicted 15. The brain, as envisioned by Helmholtz,
is then comparable to an inference machine: a statistical, Bayesian 16 analyzer, gathering data to
improve its model, and using its model to shape conscious experience. Also a product of this
Bayesian Brain is the taking of action and movement which, as we will see shortly (and much as
Helmholtz held as well), amounts to the fulfilment of a prediction by the brain’s generative model.
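As a rough illustration of this predictive scheme – a minimal sketch, not Friston’s actual formulation – the selection of a cause for a sensory input can be written as Bayes’ rule over a discrete set of causes. The cause names and all numbers below are invented for illustration:

```python
# A minimal sketch of Bayesian perceptual inference: the brain-as-analyzer
# combines prior expectations about causes with how well each candidate
# cause predicts the current sensory input. All values are illustrative.

def infer_cause(prior, likelihood):
    """Bayes' rule over a discrete set of causes.
    prior[c] = p(cause c); likelihood[c] = p(sensory input | cause c)."""
    joint = {c: prior[c] * likelihood[c] for c in prior}
    evidence = sum(joint.values())  # p(sensory input)
    return {c: joint[c] / evidence for c in joint}

# Hypothetical priors: a table is a far more expected cause than a mirage.
prior = {"table": 0.9, "mirage": 0.1}
# Hypothetical likelihoods: how well each cause predicts the retinal input.
likelihood = {"table": 0.8, "mirage": 0.4}

posterior = infer_cause(prior, likelihood)
# The well-predicting, expected cause dominates the resulting percept.
```

On these numbers the posterior probability of “table” is 0.72/0.76 ≈ 0.95, illustrating how priors and sensory evidence jointly settle the represented cause.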
4.1.2 Friston and Free-Energy
Friston points to the strong possibility that the brain (and any biological system) works
towards the reduction of Free-Energy (Friston 2006, 2009), having the Free-Energy principle at
its core. Free-Energy, in thermodynamics, is the measure of energy available in a system for the
production of useful work, given by the internal energy of the system minus the product of its
temperature and entropy. The notion of Free-Energy employed by Friston isn’t the one used in physics,
though it may be seen as analogous to it. As Friston puts it:
“Free-energy is an information theory quantity that bounds the evidence for a model of data. Here,
the data are sensory inputs and the model is encoded by the brain.”
- Friston, K., 2009
The Free-Energy principle defended by Friston states that self-organizing biological
systems (in this particular case, the brain) work towards the minimization of Free-Energy in the
14 Though only the cortex is the focus here, several stages of processing start in regions below the cortex.
15 Considering again optical illusions, these are none other than predictions in a generative model that have become so
adept at predicting the input in a certain way that the sensory input is “explained away” by top-down prior predictions
that influence perception, leading to a mismatch between conscious representation and the real object.
16 Bayesian because it is in Bayesian probability that priors and new data define probabilistic reality as expectation. In
the brain, we hold predictions (priors) that are influenced by sensory impingements (new data) to generate conscious
awareness.
44
system which, as we will see shortly, is equivalent to stating that the system works towards the
minimization of prediction error. A self-organizing biological system stands out for being a
thermodynamic system capable of adapting to its (also ever-changing) environment.
This allows such systems to keep their homeostatic balance, permitting long-lasting stability and
survival of the system (Ashby 1962/2004). It does mean, however, that these systems must avoid
state-changes that would otherwise undermine this balance. For a living being to hold the highest
possible survivability in any given environment, it then becomes essential for it to be able to
predict changes in the environment, and to be able to act towards state stability, in the best
possible manner. So, according to the Free-Energy principle, surprise in an environment is to be
avoided by the system. If there is maximum surprise, then there has been no prediction of the
event; if the event has been fully predicted, then there is no surprise. Free-Energy, in information-
theoretic terms, can then be seen as the difference between surprise and predictions – the Prediction
Errors in the models housed in the brain. The generative model in the brain thus holds predictive
power that attempts to explain the sensory information being brought in through the receptors.
Information that is well predicted leads to minimal surprise. Information not well predicted leads
to surprise, which the biological system must avoid. Errors in predictions must then be integrated
into the generative model, strengthening its predictive power and reducing prediction errors (free-
energy) in the future 17 (Friston 2009).
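The relation between prediction and surprise can be given a simple information-theoretic reading: the surprise (surprisal) of an observation is the negative log of the probability the model assigns to it. A small numerical sketch, with hypothetical probabilities:

```python
import math

def surprisal(p):
    """Information-theoretic surprise of an event assigned probability p."""
    return -math.log(p)

# An event the generative model predicts well carries little surprise...
well_predicted = surprisal(0.9)     # about 0.105 nats
# ...while a poorly predicted event carries much more.
poorly_predicted = surprisal(0.05)  # about 3.0 nats

# A fully predicted event (p = 1) carries no surprise at all, matching the
# text: full prediction means zero surprise.
no_surprise = surprisal(1.0)
```

Free-energy, on Friston’s account, is an upper bound on this quantity, so a system that reduces its free-energy indirectly reduces the surprise it is exposed to.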
Through Friston’s Free-Energy principle, the brain becomes the inference machine
envisioned by Helmholtz: changing its connectivity structure to accommodate the best possible
predictions (or rather, to reduce its prediction errors) about the environment is not only a central
function of the brain, but a necessary characteristic of any self-organizing biological system as
well.
4.1.3 When Predictions and Errors meet
To accept the Bayesian Brain theory and the two models that lead to conscious mental
experience, we must first and foremost consider the existence of a neuronal architecture that may
account for it. We must consider the existence of a hierarchical organization of communicating
neuronal populations with different functions (Friston 2005; Toussaint 2009). One such
population of neurons is dedicated to representing the best possible prediction for the cause of the
sensory stimuli – the prediction units. The other population – the error units – is dedicated to detecting
mismatches between the predictions in the generative model and the data acquired through
sensory acquisition, and to conveying prediction errors upwards (Clark 2013). The former
are conveyed top-down in the form of predictions, the latter bottom-up in the form of prediction
errors. The prediction errors are relayed upwards by forward connections in the brain, driving
17 Note that free-energy reduction is seen as a Lyapunov function, which reduces over time but never reaches zero.
45
signal encoding error to higher cortical regions. Predictions are conveyed downwards by
backward connections in the brain, modulating the signaling from lower cortical regions. The
stronger the prediction, the lesser the need to consult the sensory data being conveyed bottom-up,
as the prediction units, being already adept at predicting said data and having little prediction
error conveyed upwards, “explain away” that data in lower regions of the cortex (Friston
2009). A weaker prediction entails more prediction errors being conveyed upwards through the
cortex 18. Upon reaching higher regions, the prediction units are molded by these errors, leading
to improved predictions in the future. In this way, we can understand that it is the prediction units
conveying predictions that are doing the work of conscious representation in the generative
model, whereas the error units are detecting and conveying prediction errors from the data
gathered (Hohwy 2007).
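The division of labor between prediction units and error units can be caricatured in a few lines of code – a one-level toy with invented numbers, not the full cortical hierarchy:

```python
# Toy predictive-coding loop: a "prediction unit" holds an estimate mu of
# the sensory cause; an "error unit" signals the mismatch with each new
# sensory sample; the error, scaled by a gain, revises the prediction.

def settle_prediction(samples, mu=0.0, gain=0.1):
    for s in samples:
        error = s - mu       # error unit: bottom-up prediction error
        mu += gain * error   # prediction unit: top-down estimate updated
    return mu

# With repeated samples around the same value, the prediction converges on
# it and the upward error signal shrinks toward zero.
mu = settle_prediction([5.0] * 100)
```

After enough exposure mu sits near 5.0, and each further sample of 5.0 generates almost no error to relay upwards – a crude mirror of the claim that well-predicted data gets “explained away” at lower levels.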
This theoretical framework of neural architecture has recently found some plausible
evidence in the functioning of deep and superficial pyramidal cells (Mumford 1992; Friston
2009). These cells, and the way they interact, may account for the aspects considered here,
specifically the interaction of different sub-populations of neurons with differing but
interacting functions. Superficial pyramidal neurons hold forward, driving connections with
neurons in higher regions of the cortex, and deep pyramidal neurons hold backward, modulatory
connections with neurons in lower regions. This functional duo may encode predictions and
prediction errors in constant and active exchange: prediction error is conveyed forward through
the cortical functional hierarchy by the driving connections, while predictions are conveyed
backward to lower regions of the hierarchy by the modulatory ones. Refer to Figure 4.01 in the
appendix section for a schematic of pyramidal neuron connections applied to predictive processing.
4.1.4 Improving the Generative Model
The brain is able to reduce its free-energy (minimize prediction error). To accomplish
this, it must manage the accuracy and complexity of the generative model. The free-energy of a
model in the brain is given by the complexity of the model minus its accuracy.
When in a wake-state, the brain works to maximize accuracy – the measure of how correctly the
causes of sensory impingements are predicted (Hobson 2014); when in a sleep-state, the brain
works to minimize complexity – the measure of the amount of change in the model’s hypotheses
for predicting sensory data required to correctly predict new data, and the amount of change (at
the level of synaptic connectivity) that was necessary for the improvement of said predictions to take
place in the brain (Hobson 2014; Hopkins 2016). The result is a model with reduced complexity
and high accuracy, allowing for maximum certainty and minimal prediction errors.
18 When we speak of weaker and stronger predictions, these are intimately related to the regulatory role of precision
units in the model. More on this in section 4.1.4.
46
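The complexity-minus-accuracy decomposition can be made concrete in a toy, discrete model (illustrative states and numbers only): complexity is the divergence of the recognition density q from the prior, accuracy the expected log-likelihood of the sensory input under q.

```python
import math

def free_energy(q, prior, likelihood):
    """F = complexity - accuracy for a discrete hidden state s.
    complexity = KL(q(s) || p(s)); accuracy = E_q[log p(input | s)]."""
    complexity = sum(q[s] * math.log(q[s] / prior[s]) for s in q)
    accuracy = sum(q[s] * math.log(likelihood[s]) for s in q)
    return complexity - accuracy

prior = {"cause_A": 0.5, "cause_B": 0.5}        # p(s)
likelihood = {"cause_A": 0.8, "cause_B": 0.2}   # p(input | s)

# The exact Bayesian posterior for this input:
evidence = sum(prior[s] * likelihood[s] for s in prior)  # p(input) = 0.5
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

# F is lowest when the recognition density equals the true posterior, where
# it reduces to the surprise -log p(input); any other q yields a larger F.
f_at_posterior = free_energy(posterior, prior, likelihood)
f_at_prior = free_energy(prior, prior, likelihood)
```

Here f_at_posterior equals -log 0.5 ≈ 0.693, while leaving q at the prior gives ≈ 0.916: minimizing free-energy trades complexity (movement away from the prior) against accuracy, as the wake/sleep division described above requires.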
In a wake-state, accuracy is maximized through what Friston calls Active Inference – a
process that leads to the best possible acquisition of information through the perceptive
sensorium, and ultimately to the strengthening of the generative model’s hypotheses and
predictive power in the brain (Friston 2006). This process works by changing the generative
model to better fit the sensory data acquired, and by repositioning sensory receptors to better fit
the generative model’s predictions. The first is known as perceptual inference, and works by
taking in new sensory information and prediction errors to be corrected, complementing the
generative model and improving its predictive power. The latter refers to the taking of action:
where in perceptual inference the brain takes in prediction error for correction, in the motor and
proprioceptive systems these errors are instead resolved by the course of action itself (Adams 2013).
Predictions for action in the generative model encode the consequences of muscular, tendon and
articular trajectories, while sensory receptors encode the current motor states. These predictions,
encoding the consequences of movement, are treated as prediction errors when compared to current
motor states. These errors get to be corrected, exclusively in action, by fulfilling the motor
reflexes that produce the predicted movements 19:
Perceptual inference itself requires action as well. We can direct our attention toward different
informational inputs to improve their acquisition. These movements – allocations of precision, as
Friston calls them – are actions that direct attention towards the best evidence for predictions in
the generative model, increasing the gain of perceptual information and leading to increased
reduction of prediction error, and maximum certainty (Friston 2011; Clark 2017). Precision is
thus an essential aspect of the generative models, as it is the gain or loss of precision that allows
for the attribution of different weights to conveyed prediction errors, when compared to already
existent priors (hypotheses) in the model. If the precision attributed to certain prediction errors is
low, and the precision of the priors the model holds is higher, the prediction errors get
“explained away” at lower levels of the processing hierarchy, and the priors will not be updated
with the new data (which is the case in the optical illusions already hinted at). On the other hand,
if prediction errors are attributed high precision, these will be taken higher through the hierarchy,
and the prior that attempts to predict this new data will most likely get updated to better fit
(predict) this information in the future (Adams et al 2015). Additionally, allocations of precision
depend on circumstance. Certain priors will be attributed more or less precision depending
on whether, say, you are at home or in a forest. The same may be said of the prediction errors
19 This idea contradicts the classical view of motor commands, which states that such commands must be of a driving
(forward) nature. Connections in the brain show, however, that motor commands are tightly associated with backward
connections, suggesting that this new view might be correct.
“Comparison of these predictive signals with the proprioceptive states encoded by sensory receptors
generates proprioceptive prediction errors that—uniquely in the nervous system—can be resolved by
action (…)”.
– Adams, R. 2013
being conveyed upwards: for instance, if you are in a dark room, precision attributed to visual
data will be lowered, and precision for auditory data will be heightened (Clark 2017) 20.
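The regulatory role of precision can be illustrated with a minimal numerical sketch. The function below is a toy Gaussian belief update (the names and numbers are hypothetical, not taken from the cited authors): the prediction error is weighted by the relative precision of the sensory data versus the prior, so low-precision input (the dark room) barely moves the prior, while high-precision input updates it strongly.

```python
def update_belief(prior_mean, prior_precision, obs, obs_precision):
    """Combine a Gaussian prior with one observation, weighting the
    prediction error by the relative precision of the sensory data."""
    prediction_error = obs - prior_mean
    # Gain on the error: high sensory precision -> large update;
    # high prior precision -> the error is "explained away".
    gain = obs_precision / (prior_precision + obs_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# Dark room: visual data is assigned low precision; the prior barely moves.
m_dark, _ = update_belief(0.0, 4.0, 1.0, 0.5)
# Daylight: the same observation with high precision updates the prior strongly.
m_light, _ = update_belief(0.0, 4.0, 1.0, 16.0)
```

Here m_dark stays near the prior mean while m_light moves most of the way to the observation, mirroring how allocations of precision decide which prediction errors climb the hierarchy.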
As mentioned, higher accuracy and lower complexity lead to reduction of prediction error
and free-energy (Hobson 2014; Hopkins 2016). These two parameters contrast particularly
between waking and sleeping states. While awake, the brain is working to improve accuracy
through active inference. In this state, accuracy is at its highest, as the brain is constantly receiving
and conveying prediction error upward, improving the generative model, and realizing predictions
in action. Complexity is also higher, as the information constantly being fed to the generative
model molds its hypotheses and predictions, thus increasing complexity. While asleep, we
witness the opposite: the brain is not working to improve accuracy, as there is no reception of
sensory information, nor prediction thereof. Nonetheless, the generative model still leads to
conscious representation 21 of data in the form of dreaming consciousness (Hobson 2009), which
occurs in REM sleep (Stickgold 2001), and where the model feeds mainly from memory and
emotional charge. It is in sleep that the reduction of complexity in the generative models occurs:
synaptic connections are thinned 22, and changes to the generative model’s predictions become
strengthened (Hobson 2012). Upon reaching the end of each wake-sleep cycle, the best possible
predictions with the least possible structural complexity are achieved. The amount of neuronal
structure change needed to account for the predictive improvements of the models is minimized
by the thinning of connections; the key to holding increasingly improved predictive power thus
lies in changes to neuronal connectivity, strengthening or weakening synapses for faster and
better predictions.
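The idea that sleep trims complexity while preserving predictive power can be sketched in a few lines. The weights and threshold below are hypothetical and purely illustrative: "sleep" prunes weak synaptic weights, reducing the number of effective connections while leaving the model's prediction nearly unchanged.

```python
def predict(weights, x):
    """A linear 'model': a weighted sum over input features."""
    return sum(w * xi for w, xi in zip(weights, x))

# Weights accumulated during "waking" (accuracy maximization);
# the small ones carry little predictive value.
weights = [0.9, 0.02, -0.85, 0.01, 0.5]
x = [1.0, 1.0, 1.0, 1.0, 1.0]
before = predict(weights, x)

# "Sleep": thin synapses below a threshold, reducing complexity
# (the number of effective connections).
thinned = [w if abs(w) >= 0.1 else 0.0 for w in weights]
after = predict(thinned, x)

connections_before = sum(1 for w in weights if w != 0.0)
connections_after = sum(1 for w in thinned if w != 0.0)
```

The pruned model makes nearly the same prediction with fewer connections, which is the sense in which each wake-sleep cycle ends with the best predictions at the least structural cost.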
4.2 The Conflicting Generative Models
Management of the generative model leading to conscious experience is accomplished
through driving and modulatory connections in the cortex, which may be ascending or
descending. However, predictions in the models also interact with regions at the same hierarchical
level of the cortex, in the form of lateral communication. These lateral connections are important
in situations of conflicting generative models and predictions, where more than one prediction
holds equally strong hypotheses. Establishing dominance relations between generative models in
these cases is then necessary to ensure that
20 Allocations of precision have also been associated with cholinergic neuromodulation. See Moran et al 2013.
21 Note that this is further evidence that the brain holds a hypothesis-realizing process, as opposed to one for the
acquisition of sensory data. In waking, this realization depends on sensory input. In dreaming, it does not.
22 This thinning is particularly important, for long-term memory (the source of much of the generative model’s priors)
is encoded in synaptic connections, and the thinning allows for reduced physiological load on the neurons involved.
Synaptic homeostasis thus requires reduction of complexity, in the form of connection thinning.
consciousness remains coherent and that no more than one conscious state (or set thereof)
arises within a single mind. When considering conflicting generative models, one may become
dominant with regard to the other, leading to conscious awareness of the dominant one, but not
of the other (Hopkins 2012). This establishment of generative model dominance may be what
leads to incoherence, in both normal minds and (given the structural change in their brains) those of SBP’s.
4.2.1 In the Normal Brain
If such conflict were to arise in a normal brain, only one model, the dominant one,
would get to be represented in consciousness, and the existence of such conflict often leads to
incoherencies in the mind. Consider the case of binocular rivalry, for instance. Here, a stimulus
of a different object is presented to each eye, and what we perceive is an alternation between
awareness of one object and awareness of the other, never conscious representation of both at the
same time. This is a case in which we hold conflict between the generative models in our brains:
when a single data set allows differing predictions (predicting the sighting of one object, or
predicting the sighting of the other), only one such prediction may be dominant and consciously
available at a time 23. What we are aware of is the conflict, where dominance in the model shifts
between predicting one sighting and the other – each predicting its input and “explaining away”
the data suggesting otherwise, inhibiting input in hierarchically lower cortical areas in alternation
(Hopkins 2012). Though this is not a case of conscious incoherence, as we only consciously
represent one stimulus at a time, it amounts to a case of representational incoherence, where only
one prediction leading to conscious representation is possible at a time (the other prediction
remaining unconscious). Consider now the case of incoherent beliefs:
the notion of dominant models can account for the existence of incoherent beliefs in cases of
self-deception, provided one model (representing the self-deceiver’s justified belief) occupies a
lower position in the dominance relation relative to another, dominant model (representing the
self-deceiver’s unjustified belief). The dominant model will generate conscious awareness by
being assigned higher precision, and whatever sensory input is perceived is “explained away” by
its predictions, such that input that might lead back to the non-dominant model (which does not
cohere with the hypotheses of the dominant one) may be actively suppressed in the brain. In this
way, one belief may become repressed into the unconscious mind, whereas another becomes
housed in consciousness, suppressing re-emergence of the former 24. As such, the
23 The reason for this is not yet known. We know SBP’s can simultaneously hold two incoherent conscious states, so
in their case they clearly can be consciously aware of two things at once. Perhaps, as Dennett states, consciousness is
global availability of a representation, and since in the normal brain representations in either hemisphere are
potentially available to the entire brain, only one such representation can be globally available at a time, and conflict
must be settled for conscious representation of a dominant one.
24 Recalling that a belief is a form of stored representation, it makes sense that we may incoherently represent sensory
impingements, which shift in conscious awareness (these are not stored, but actively perceived), but not do so with
beliefs – these are stored, as are the hypotheses that lead to them.
Freudian repression paradigm fits the Bayesian Brain postulation, understandable as suppression
of neural activity conveying sensory input that leads back to the justified belief (by the dominant
model holding an unjustified belief) (Hopkins 2012). Similarly, incoherent desires and actions
can be accounted for by conflict in generative models. If we are consciously aware of both
incoherent desires, then we hold a generative model building incoherent mental content, which
becomes unified in a single conscious state. These desires are born from independent, but
coherent, representations of their satisfaction (relevant for homeostatic balance) (Damasio et al
2000), built from independent (again, but coherent) beliefs that lead to them. What differs in
these desires is the impossibility of satisfying them both (say, keep drinking and stop drinking).
The result is conscious awareness of incoherent desires, born from conflict in the generative
model regarding the best predicted course of action to satisfy one desire or the other, where
satisfaction of only one of them is possible. Conflicting models in the brain account for mental
incoherencies in normal minds, and we will now see that they account for SBP incoherence as
well.
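The alternating dominance seen in binocular rivalry can be sketched as a minimal mutual-inhibition model with slow adaptation (a common toy model in the rivalry literature; all parameters here are hypothetical, not drawn from the cited authors). Each "model" suppresses its rival, but its own slowly building fatigue eventually lets the rival take over, so dominance alternates even though the sensory evidence for both percepts is equal.

```python
def simulate_rivalry(steps=20000, dt=0.005):
    """Two percepts in mutual inhibition; fatigue makes dominance alternate."""
    a1, a2 = 0.1, 0.0        # activities; a small head start breaks symmetry
    f1, f2 = 0.0, 0.0        # slow adaptation (fatigue) of each percept
    drive = 1.0              # equal sensory evidence for both percepts
    beta, g = 3.0, 3.0       # inhibition and adaptation strengths
    tau, tau_f = 0.1, 1.0    # fast activity, slow fatigue time constants
    history = []
    for _ in range(steps):
        da1 = (-a1 + max(0.0, drive - beta * a2 - f1)) / tau
        da2 = (-a2 + max(0.0, drive - beta * a1 - f2)) / tau
        df1 = (g * a1 - f1) / tau_f
        df2 = (g * a2 - f2) / tau_f
        a1, a2 = a1 + dt * da1, a2 + dt * da2
        f1, f2 = f1 + dt * df1, f2 + dt * df2
        history.append(1 if a1 >= a2 else 2)      # the dominant percept
    return history

history = simulate_rivalry()
switches = sum(1 for x, y in zip(history, history[1:]) if x != y)
```

Only one percept is "dominant" at any step, yet dominance keeps switching – the representational incoherence described above, with neither percept consciously represented at the same time as the other.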
4.2.2 In the Split-Brain
In SBP’s, the structural changes their brains have undergone allow for different model
dominance relations. As interhemispheric influence no longer occurs in SBP’s, two dominant
models may get represented simultaneously, one in each hemisphere. Provided this, a new level
of incoherence may be considered, where different dominant generative models get a chance to
consciously represent different content. In accordance with the Helmholtz/Friston postulate, this
means each hemisphere may hold an independent generative model, capable of independent
conscious awareness and action – as both work towards minimization of free-energy through
increasing accuracy (by active inference) while awake, and reduction of complexity while asleep.
I thus propose that the brain of a SBP holds the potential to be a partial 25 dual predictive system
in experimental situations, due to its hemispheres’ disconnection, generating two models for
conscious perceptual experience and the guidance of action – one associated with each
informational input and each hemisphere – together with an interhemispherically generated
model regarding what information is either still shared between the hemispheres or never split in
the first place. The model making predictions in one hemisphere does not have access to the
model making predictions in the other; a dominance relation between them cannot be
established, and neither can one stop the other from becoming consciously represented. This
leads both to predict the information they access independently, giving way to the strange split-
brain phenomena so widely studied. These two models are generated with regard to the
information exclusively segregated in each hemisphere, and as such, all other information that
25 Partial, as a SBP’s behavior is never absolutely split interhemispherically, and there are several behaviors which
maintain a significant degree of interhemispheric coherence at all times. More on this in the next sections.
is not segregated (and that both hemispheres have access to) will lead to a single generative model
derived from both hemispheres. Similarly, in normal situations both hemispheres have access to
the same informational input, so together they can build a single generative model that best
predicts the input, allowing for the normal behavior revealed by SBP’s in normal circumstances
(should this be the case, how this happens will be discussed shortly). With this, the third and final
objective of this work can be completed, considering that:
C3: If (1), in accordance with now expanding theories of neuronal processing mechanisms, the
brain is an inference machine dedicated to representing the predicted causes of sensory stimuli,
representing predictions as conscious awareness, and (2) in experimental situations a SBP can
hold such inferential mechanisms in each hemisphere independently; then (i) each hemisphere in
a SBP holds the potential for predictive inference as envisioned by Helmholtz, and (ii) when
stimulated in isolation, the hemispheres will build partially independent inferences that best fit
what each perceives, but when stimulated jointly both will build the best possible inference for
what is being perceived together.
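The claim in C3 can be illustrated with a toy sketch (the function and numbers are hypothetical, for illustration only): two hemispheric "models" each update an estimate from the input they receive. Given shared input they settle on the same estimate, a single joint model; given segregated, conflicting inputs they settle on divergent estimates, with no dominance relation to reconcile them.

```python
def run_hemisphere(observations, estimate=0.0, learning_rate=0.5):
    """Error-driven updating: each observation's prediction error
    nudges the hemisphere's estimate toward the data it receives."""
    for obs in observations:
        estimate += learning_rate * (obs - estimate)
    return estimate

# Joint stimulation: both hemispheres receive the same input.
shared = [1.0] * 20
lh_joint = run_hemisphere(shared)
rh_joint = run_hemisphere(shared)

# Experimental segregation: each visual half-field projects to the
# contralateral hemisphere, so the hemispheres see conflicting data.
right_field, left_field = [1.0] * 20, [-1.0] * 20
lh_split = run_hemisphere(right_field)   # right field -> LH
rh_split = run_hemisphere(left_field)    # left field -> RH
```

Here lh_joint and rh_joint coincide (coherent behavior in normal circumstances), while lh_split and rh_split end up on opposite sides – the partial dual predictive system proposed above.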
This serves the third and last objective I had proposed to answer: attempting to
understand the brain and incoherencies of a SBP in a way that may be applicable to our own,
given structural change. For this argument to be conceivable, however, two factors must be
reflected on: (1) we must consider, much as Nagel did, that consciousness is a mental
phenomenon lacking the degree of unity many take it to have; and (2) we must guarantee that
each hemisphere holds the neuronal architecture to form such independent generative models,
and consider how these models may differ between the hemispheres 26.
4.3 Lateralization of Function
Here we will explore the independent capacities of each hemisphere, answering point
(2) of the previous section. Among the wide range of cognitive capabilities held in the brain, and
for simplicity, we will focus on the properties a SBP reveals in their incoherent behaviors, such
as sensory processing and motor commands. Later, the independent proficiencies of each
hemisphere will also be considered, accounting for the differences we witness between LH tasks
and RH tasks in SBP’s. Finally, the importance of the CC will briefly be discussed, seeking to
understand how its severing might lead to these incoherent behaviors.
26 We know each hemisphere is independently able to take in sensory data, holds the processing power for such
information, and can independently output behaviors based on the resulting predictions – as seen in SBP’s. But by
furthering this investigation into lateralization, a more detailed idea of the extent of a SBP’s incoherencies can be gained.
4.3.1 Hemispheric duplication
Duplicated functions in the hemispheres will be discussed here. By duplication we mean
the functions present in both hemispheres, such as those allowing for split-brain behavior, which
show us that both hemispheres hold the potential for independently generated representations of
sensory acquisitions. These independent representations can be varied, depending on the nature
of the sensory stimulation. We will limit consideration here to the senses most widely studied for
duality in SBP’s: vision, touch, motor command and proprioception, audition and olfaction. We
will see that both hemispheres hold the necessary neural connectivity and processing power for
these kinds of perception.
Sensory information of these kinds depends on the sensory nervous system for
processing, requiring receptor cells dedicated to activation upon stimulus, neural pathways to
take the signal to the brain, and cortical regions capable of processing and interpreting said
information. Photoreceptors at the level of the retina account for reception in the visual sensory
system, perceiving information through light. As we have seen, each retina conveys left- and
right-field visual information via the optic nerves to the primary visual cortex (Llinás 2003).
The somatic sensory system, processing mechanical stimuli and proprioception (knowledge of
one’s own body position), relies on mechanoreceptors and proprioceptors respectively (motor
command will be discussed shortly). Mechanoreceptors are found throughout the cutaneous
extension of the body, signaling pressure. Proprioception occurs mostly at the level of muscle
spindles signaling muscle stretch, Golgi tendon organs signaling muscle tension, and articular
and cutaneous receptors (Purves et al 2004). Touch and proprioceptive signals are taken to the
brain through the spinal cord, and information from the left side of the body is taken to the
contralateral hemisphere, and vice-versa (exceptions to this division exist for more medial
regions of the body, such as the face and neck) (Sperry 1973). Reception in the auditory sensory
system depends on mechanoreceptors as well, detecting vibrations caused by sound at the level
of the cochlea and translating said vibration into neuronal signaling conveying sound
frequencies (Schnupp et al 2009). Auditory signaling is taken through the auditory nerve, and the
majority of information acquired on the left side of the body is conveyed to the RH, and vice-
versa. Some ipsilateral relay occurs, however, such that both hemispheres will hold information
on what is heard from either ear. Finally, the olfactory sensory system’s information acquisition
is mediated by chemoreceptors specialized in detecting chemical stimuli, signaling smell. The
olfactory nerves convey the signal through the olfactory bulbs where, unlike in the other
considered senses, the signal is taken to the hemisphere ipsilateral to the body side it was
received on (Kandel et al 2013). Each hemisphere is thus home to segregated sensory information from
various sources, and provided this information is not integrated, one hemisphere can hold
independent access to stimuli without interference from the other. This fits what we see in SBP
behavior, where each hemisphere holds interhemispherically differing, and often incoherent,
representations of the sensory information it perceives. We are left to consider the capacity each
hemisphere holds to process and interpret said stimuli. The sensory nervous system holds
cortical regions that specialize in processing stimuli from each of the senses.
Regarding the visual system, a large region in the occipital lobe (Brodmann areas 17, 18
and 19), stretching into regions of the temporal (Brodmann areas 37, 20 and 21) and parietal
(Brodmann area 7) lobes of the brain, is dedicated to the processing and interpreting of visual
information, and is known as the visual cortex 27. The signal is conveyed hierarchically through
the visual areas V1 to V5. As the signal moves forward through the hierarchy, more complex
representations are formed, and processing becomes increasingly complex. The signal may be
taken through the dorsal stream 28 (the “where pathway”), reaching the parietal lobe and
associated with the spatial location of objects, eye movements and visually guided behavior
(Goodale et al 1992), or through the ventral stream (the “what pathway”), reaching the temporal
lobe and associated with object recognition and identification (Goodale et al 1992).
Through this process of hierarchical processing, from simpler aspects to complex representations,
the visual system deals with the processing and interpretation of visual information. Such a
system is present in each hemisphere, processing and interpreting data from the visual field.
Furthermore, each of these systems can function autonomously, without its contralateral
counterpart: in SBP cases we have seen each hemisphere hold interhemispherically contrasting
visual input and processing; and even in the absence of one hemisphere or the other (following a
hemispherectomy), such an individual is still capable of accessing visual information (albeit with
some loss of cognitive processing potential) (Lew 2014). Regarding brain lesion studies, damage
to one hemisphere or the other leads to differing consequences, which will be discussed in the
following section.
Similarly, the auditory system is present in both the left and right hemispheres and, as
with the visual system, its lateralized presence in each hemisphere may ultimately function
independently of the contralateral counterpart. Sound information relayed from the receptors
reaches the cortex at the primary auditory cortex and surrounding areas involved in sound
processing (Schnupp et al 2009). The primary auditory cortex is located in the temporal lobe, in
the superior temporal gyrus (particularly Brodmann areas 41 and 42). It is important to take
27 Note that some sub-cortical structures through which the signal passes before reaching the cortex are involved in
early stages of processing as well. These are not split as the cortex is, however, and thus are not focused on here.
28 This two-streams theory is still under investigation, but it is steadily becoming a strong candidate for explaining
these differing aspects of visual processing.
note that some of the processing of sound takes place at the level of the cochlea, where sound
waves become measurable frequencies to be conveyed through the auditory nerves. Different
neuronal regions in the primary auditory cortex react to different frequencies of sound, from low
to high (Moerel et al 2014). Of particular importance is the connection the auditory system holds
with Wernicke’s area, which plays a central role in the comprehension and understanding of
language. Damage to the auditory cortex leads to a loss of the ability to distinguish sounds, and
in some cases partial bilateral hearing loss (Rebuschat et al 2011) 29.
The somatosensory system processes mechanical and proprioceptive information and,
like the previous systems, holds a cortical region for its processing in both the left and right
hemispheres. Receptors at the level of the skin, muscles and tendons convey information
upwards through nerves in the spinal cord and into the thalamus, which then relays that
information towards the cortex. Sensory information first reaches the primary somatosensory
cortex (composed of Brodmann areas 3a, 3b, 1 and 2), specifically areas 3a and 3b, which are
highly sensitive to proprioceptive and touch reception respectively. The signal is later relayed
towards areas 2 and 1, also dedicated to proprioception and touch respectively.
As in previous systems, lower hierarchical areas process more basic aspects of tactile and
proprioceptive information, whereas subsequent areas lead to increasing complexity of
information processing (Kandel et al 2013). Additionally, these areas communicate with other
higher regions in the cortex, including the secondary somatosensory cortex, the primary motor
cortex and the posterior parietal cortex. The secondary somatosensory cortex receives information
from the areas 3b and 1, for complex processing of information from the hands and face, and from
3a for reception of hand movement information, overall holding an important role in detecting
differences in texture (Ridley et al 1976) and activating when tactile attentional focus is deployed
(Eickhoff et al 2006). The posterior parietal cortex is associated with sensory guidance of
movement, and the primary motor cortex has been found to fire together with areas of the posterior
parietal cortex (particularly Brodmann area 5) (Kandel et al 2013). It has been proposed that this
area is in charge of comparing motor commands with sensory feedback from the primary
somatosensory cortex (which, recall, fits what was discussed previously, where action becomes
the resolution of prediction error through comparison of current and predicted states).
The olfactory cortex is composed of various regions: the piriform cortex,
considered the major cortical area for odor processing (Howard et al 2009); the anterior and
posterior cortical nuclei of the amygdala, of particular importance for odor recognition and
distinction (Zald et al 1997); the olfactory tubercle, which relays odor information to various other
29 In some of these cases, damaged individuals may still react unconsciously to sounds, suggesting the importance of
sub-cortical processing of sound.
areas of the cortex and limbic system and is involved in behavioral responses to smell (Wesson et
al 2011); and the entorhinal cortex, strongly associated with memory and localization
(Eichenbaum et al 2007). An olfactory cortex is present in each hemisphere as well. SBP cases
show us that a scent accessed through one nostril is processed (at least primarily) by the
ipsilateral hemisphere (Sperry 1960), and that each hemisphere may independently process the
smells it accesses (even though processing differences may be considered).
Each hemisphere holds cortical regions for motor control of the contralateral body side
as well. The motor cortex outputs neural commands directed at the muscles, leading to contraction
and movement. These commands depend on inputs from sensory information regarding the
outside world and body, necessary to guide movement (Kandel et al 2012). In this sense, the
motor cortex is highly connected to various regions of the cortex dedicated to sensory processing.
The primary motor cortex is located in Brodmann area 4 and, together with the premotor cortex
(Brodmann area 6), accounts for the major processing, planning and execution of voluntary
movements and muscle control. Auxiliary areas for motor control include the already mentioned
posterior parietal cortex, which seems to have a role in mediating the turning of sensory stimuli
into motor commands. Each hemisphere can issue motor commands to either side of the body,
though contralateral control is more efficient. Following a hemispherectomy, for instance,
patients remain ambulatory and can move all their limbs, though control of motor function in the
limbs’ extremities is hindered (Lew 2014).
By considering the hemispheres’ independent sensory processing and action-taking
potential, we can understand what happens in a SBP as their hemispheres taking advantage of
that potential, processing incoherent and segregated informational inputs as they normally
would, but lacking integration of that information into a unified mental experience. In the next
section, we will consider which cognitive functions we find lateralized to one hemisphere or the
other.
4.3.2 Hemispheric lateralization
Both hemispheres hold the necessary connectivity and duplication to ensure that each
can access and process most information independently. There is, however, a great deal of
difference in hemispheric proficiencies. Some aspects are better processed in one hemisphere
than the other, and indeed some aspects necessarily require processing in one hemisphere, or
even in both, to result in a good response. Thus, instead of asking whether the hemispheres can
process information independently, we now consider how differently each hemisphere processes
said information.
Firstly, and broadly speaking, each hemisphere has been found to focus on different
aspects of the information it processes. Attention can be broadly divided into the categories of
vigilance, alertness, sustained attention, focused attention and divided attention 30. The first three
compose the intensity of attentional focus; the latter two its selectivity (Zomeren et al 1994).
Damage to the RH, particularly the frontal lobe, leads to decreased vigilance, sustained attention
and alertness (Wilkins et al 1987); the RH is also strongly associated with alertness (Sturm et al
1999). LH damage does not hinder these aspects of attention as RH damage does (Korda et al
1997). It is, however, the LH that specializes in focused attention, as damage to this hemisphere
hinders performance in such attentional tasks (McGilchrist 2010). Both hemispheres appear to
contribute to divided attention, predominantly through the frontal lobe (Godefroy et al 1996). It
would seem the LH processes what it accesses with specific, focused attention, in a form of focal
attention (Halligan et al 1994), while the RH preferably processes the whole, with increased
proficiency in most forms of attention (save focused attention). Lesions to the LH thus lead
subjects to prefer a global approach to attentional focus (Halligan et al 1994).
This difference in processing focus will, as we will shortly see, be intimately related to
how differently the hemispheres process information. It may stem from general differences
found between the hemispheres: in particular, the RH is especially interconnected with various
regions of the brain, including the sub-cortex, while the LH is especially intra-connected with its
own regions 31, as seen in the differing ratios of white and grey matter in the two hemispheres
(Allen et al 2003). This contrasting connectivity may facilitate the RH’s processing of, and focus
across, a wider range of attentional selectivity, and the LH’s specialized focus, with increased
attentional intensity, on the particular details it gets access to 32. This makes the RH more apt at
receiving global information from the “outside world”, requiring a wide selective attentional
range, while the left is more apt at processing details, categorizing, and abstracting from the
information it receives. A useful example of this, seen throughout several instances of perceptive
behavior (Jäkel 2016), is the Gestalt phenomenon (refer to Figure 4.02 for an example), where
we first see the whole and the parts emerge later.
A wide range of differing processing proficiencies arises in the hemispheres, some of
which will now be addressed. Beginning with visual sensory processing, individuals with RH
damage often show no sensory access to the left-side sensorium (List et al 2008). The world is
then processed only by the LH, which is very focused on what it accesses, and so very focused
on right-side sensory acquisition. Inhibition of these individuals’ LH through transcranial magnetic
30 Attention for general awareness purposes, staying alert, sustaining focus for periods of time, keeping focus on one
attentional target, and keeping focus on several, respectively.
31 I do not mean to say the LH is not interconnected with various regions of the brain, including the sub-cortex – it
absolutely is. The RH does seem to have more “inter” connections, however, and the LH more “intra” connections
than its brother hemisphere.
32 This can additionally be seen through damage to the hemispheres, where RH damage makes the individual see the
parts but struggle to build the whole, and LH damage makes the individual see the whole but struggle at seeing the
parts.
56
stimulation leads to improved sensory acknowledgement of their left-side sensorium, suggesting
that this sensory neglect comes from over activation of the LH, rather than only by under
activation of the RH (Oliveri et al 1999). Similar results can be seen in the olfactory systems,
where RH damage to the olfactory cortex leads to a complete loss of the sense of smell (Lotsch et al
2016), whereas LH damage to the same region leads to impairments in smell recognition (Hudry
et al 2014). Olfactory processing by the RH also focuses on the relation between the odor and
memory, whereas the LH associates the odor with an emotional response, as seen in brain imaging
studies (Royet et al 2004). Differences in auditory processing are also seen, as individuals with RH lesions reveal
more difficulties in sound lateralization tasks than those with LH damage (Tanaka et al 1999). On
the other hand, LH damage leads to auditory impairments more specifically related to verbal tasks
(Murphy et al 2017). Motor and somatic control of both sides of the body can be achieved by both
hemispheres, though contralateral control is dominant (Sperry 1973). Each hemisphere holds
differing specializations for the control of movement: the RH seems more apt at assuring that
the final position of a movement was the intended one, while the LH is specialized in trajectory
features, such as the direction or speed of movement. Damage to each hemisphere reveals deficits
in these features of movement accordingly (Schaefer et al 2007).
Regarding more general aspects of lateralization, newly acquired information is
preferentially focused on by the RH (activating predominantly the right hippocampus) (Tang et
al 2003). This includes new experiences and new skills. When one becomes familiarized with that
new information, then it becomes more focused on by the LH, where “the locus of cognitive
control shifts from the right hemisphere to the left hemisphere, and from frontal to posterior parts
of the cortex” (Goldberg 2001). The RH is thus more capable of flexibility than its counterpart,
and this can be seen when considering problem solving as well. Whereas the RH keeps an
exhaustive arsenal of solutions live while exploring for alternatives (Beeman et al 2000), the LH
clings to a single solution it considers best, in a heuristic fashion (Kensinger et al 2009), even
“denying discrepancies that may not fit its already generalized schema of things” (McGilchrist
2012). Another important difference is the LH’s proficiency at labeling and categorizing
information (Langdon et al 2000), which contrasts with how the RH is proficient at reading the
context of situations (Kinsbourne 1982). This is particularly evident in language contributions
from both hemispheres: the LH is necessary for capacity of speech production, for holding a
lexicon and employing it correctly (Damasio 1992); the RH keeps up with the non-literal aspects
of language (Lindell 2006), and semantic understanding. This is why the RH is so important for
understanding humor, sarcasm or metaphor, seen by imaging and lesion studies (Yang 2014;
Coulson 2008). The LH is necessary for the capacity of speech, and damage to the LH, particularly
Broca’s area, leads to aphasia (Damasio 1992). Because the LH tends to process information
deprived of its context, it excels at abstractive thinking, such as that necessary for mathematical
reasoning or for coming up with solutions to problems where context bears no aid (Deglin et al
1996). Also, when identifying objects from the outside world, the LH excels at classifying said
objects into categories, whereas the RH is remarkable at identifying any object as something
unique, and not part of a set of such things (Brown et al 1993). Brain lesion and SBP studies also
show us that the RH surpasses the LH in visuospatial tasks (Young et al 1983; Berlucchi et al
1997).
Another essential aspect in which the hemispheres differ refers to emotionality. Par
excellence, emotional understanding is seated in the RH (Rankin et al 2006). First and foremost,
the RH is necessary for considering the minds of others, in what is known as the “Theory of
Mind”, allowing us to put ourselves in the position of others and understand their minds
(Jackson et al 2006). This is central for social behavior, which requires emotional understanding.
For instance, RH damage, particularly in the frontal lobe, leads to personality changes and lack of
empathy (Finset et al 1988). Recall that the RH is more connected with the sub-cortex than is the
LH, and these sub-cortical structures have been widely connected with emotional experience
(Tucker 1993). It then comes as no surprise that it is the RH that holds superior contact with
emotions, as only in the absence of RH lesion can one excel at reading emotions in others – identifying
social cues, facial expressions, vocal intonations, or posture and gesture (Borod et al 1990) – or at
providing one’s own language and expressiveness with emotional charge (Pell 2006). The LH
holds some degree of emotional affinity, though it may be more related to what it focuses on most
aptly. Emotions regarding more personal aspects of the individual, such as competition and
self-beliefs, be they positive or negative, seem to lead to activation of the LH (Persinger et
al 1994)33. Depressive states associated with LH alteration, for example,
are often of an anxious nature. The RH seems more in touch with emotions regarding empathy
and social behavior, more related to a global focus (Ross et al 1994). Again, depression arising
from damage to the RH is often associated with a lack of empathy and a loss of emotional touch,
rather than with altered emotional states of the self, such as anxiety.
Considering what the hemispheres can both do, and how differently they do it,
shows us that the clashing behaviors SBPs display from each hemisphere are to be expected,
given that there is no longer integration of information into one unitary behavioral (and
conscious) output when the hemispheres are independently stimulated. In the following section,
this lack of integration of information will be covered through consideration of the severing of the
CC.
33 Olfactory processing in the LH evokes an emotional response to the odor, which regards only the individual’s own
relation to the odor. It is a personal emotional response, and accordingly related to the LH.
4.3.3 Corpus Callosum and Conscious Coherence
Understanding the function of the CC and how its severing leads to the split-brain
phenomenon is a debated problem today. Cortical projections from one hemisphere to the other
certainly allow for communication of neuronal information, and though most are excitatory
(driving) in nature, they may lead to the activation of inhibitory interneurons, allowing for
inhibitory function as well (Bloom et al 2005). Given the vast number of fibers crossing the
callosal commissure and the varied targets these might hold, human callosal functioning is still
far from fully understood. Its severing, however, does lead to split-brain behavior, and hemispheric
communication through the CC seems to underwrite the generation of a single continuous conscious
experience at any given time. Taking evidence from differing types
of callosotomy, the severing of the CC is not an “all or nothing” procedure: the greater the number
of callosal fibers sectioned, the more obvious the split-brain behaviors seem to become. A partial
callosotomy (severing of the anterior two thirds of the CC) leads to slower information transfer
between the hemispheres than would be expected in a normal brain, but far faster transfer than in
the case of a full corpus callosotomy (Marzi et al 2003). It would seem that a greater lack of
communication leads to greater incoherence in behavior. Provided that
communication is attained through the excitatory and inhibitory functions of the CC between
interhemispheric cortical regions, the fewer pathways through which the hemispherical cortices can
influence one another, the more independence we may expect each to have (as one no longer
weighs control over the other). Because of this change in interhemispheric influence, two
majorly different results may occur regarding cognitive functions: some may become hindered
and worsened; others may become facilitated (Banich et al 1990). As briefly mentioned in the
previous section, there is evidence suggesting that each hemisphere is, to some extent, inhibiting
the other, aiming for a balance in control of cognitive processing between the hemispheres. When
this balance is undermined, some aspects of cognition that require contributions from both
hemispheres for an optimal response become hindered (such as the integration of both visual
fields, damaged by the lack of interhemispheric communication), while aspects that are the
specialized focus of one hemisphere or the other become unchained (such as the improvement of
creativity following LH damage) (Alajouanine 1948). Saying that the CC allows for the integration
of information thus
amounts to considering that, when the hemispheres are allowed to exert influence on one another,
activation disparity does not occur, and instead of both hemispheres representing information
independently, both achieve better processing together:
“The corpus callosum minimizes disparities in the distribution of mental capacity (“attention”)
between the two hemispheres, so that both can be rapidly involved in any activity”
- Kinsbourne, M, 1987
However, it is still curious how, without interhemispheric communication, both
hemispheres may still cooperate in certain segregated tasks, and especially when information is
not segregated at all. Admittedly, segregated information is necessary for independent processing,
but it can hardly be said to be sufficient. This idea naturally leads to the consideration of an embodied
mind, where one judges the aspects of the mind not as tied solely to the brain or cortex, but to the
whole body of an agent with a mind (Miller et al 2017). In this sense, and though the cortex may
be split in a SBP, sub-cortical influence still crosses the interhemispheric commissural gap; a
whole body of receptors communicates information to the brain, and it is in no way clear that any
amount of information is exclusively destined for one hemisphere or the other – the brain evolved
to process information through the integrated power of both hemispheres, not in segregation. It appears
clear that the CC holds an equilibrating function in the brain, assuring that the best possible
cooperation between the hemispheres for a wide range of tasks is achieved for any given input
(Kinsbourne 2003). Perhaps segregated information leads to higher interhemispheric
incoherence, and thus callosal communication is necessary for a more coherent behavioral output.
Conversely, when both hemispheres access the same information, interhemispheric incoherence
is not as high, and thus other means of brain communication – be it shared bodily experience,
sub-cortical communication, remaining cortical connections, or new connections achieved through
neural plasticity – suffice to build integrated behavior in SBPs’ normal circumstances.
Be that as it may, the relation between the CC and conscious coherence is clear. It is its
severing, after all, that leads to the possibility of more than one set of co-conscious
states in one mind – one independent generative model in each hemisphere. A normal individual
is not capable of holding two independent sets of conscious states simultaneously; a SBP is. This
is only possible through callosotomy. Additionally, a partial callosotomy (where only some of the
callosal fibers are sectioned, instead of all or most of them) allows for more coherent behavior, as
opposed to a full callosotomy, where incoherent split-brain behavior is clearly more evident. The
lack of a CC seems to lead to a higher degree of incoherence in a mind, as it allows incoherent
simultaneous conscious states to arise in each hemisphere independently, where before this was
not possible.
4.3.4 Split-Brain Data Reprised
Now that we’ve covered what each hemisphere can process independently and how they
do so, we are in a better position to reprise some split-brain studies for analysis, now from a more
informed perspective. For brevity, only a few examples will be considered: (1) the
morality problem, (2) the left-brain interpreter, and (3) hemispheric cooperation in isolation.
Morality judgements have been seen to be hindered in SBPs – more precisely, the
verbalization of said judgements is hindered. When asked to decide on the morality of a situation
verbally, the SBP makes his choice by judging the outcome, rather than the moral intention itself
(closely following a consequentialist ideal). If a morally wrong situation has a positive outcome,
the SBP will verbally deem the deed morally correct; if not, he will deem it morally wrong. But
if a moral judgement is presented to the SBP not through words or verbalization, but through
non-linguistic morality judgement schemes, the SBP, via the RH, is
capable of correctly judging the morality of a situation, regardless of the outcome (following a
more deontological approach to morality). These results concur with the idea that it is the RH that
is socially focused, revealing empathy and emotional touch that the LH does not. The judgements
from the LH are based on the best predictions it holds for the question of moral soundness, and
since it holds little touch with emotionality, its best possible predictor becomes the outcome,
limiting the LH to a consequentialist perspective of morality. Given that the patient’s CC is
severed, the LH is no longer influenced by the RH, and thus, the more social, globalized
processing focus by the RH does not reach the speaking hemisphere, and the SBP is left unable
to verbalize morally deontological judgements.
Regarding (2), what Gazzaniga described as the left brain interpreter is solely associated
with the LH. Two points must be made regarding the interpreter. First, though it is true the
interpreter seems to create illusory interpretations to try to keep up with the RH’s actions, of
which it holds no knowledge, those interpretations are nonetheless the best possible predictions
the hemisphere holds of a situation it has limited information about. If we recall that the
hemispheres seem to hold simultaneously independent generative models, and that those
models represent predictions of causes, then it comes as no surprise that the LH would create silly
justifications for whatever the contralateral hemisphere is up to: those justifications are the best
predictions the LH has. Secondly, the processing focus of the LH also seems to match the
existence of such an interpreter on its side – the LH is very focused on what it gets to access, and
this constrains the number of options it considers when solving a problem. The LH will stick with
what it deems to be the best possible answer, regardless of other options that might be considered,
justifying its own predictions accordingly.
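This “best prediction” reading of the interpreter can be made concrete in Bayesian terms. The sketch below is purely illustrative – the cause labels and all the numbers are my own assumptions, loosely modeled on Gazzaniga’s classic chicken-claw/snow-scene demonstration, not data from any study – but it shows how an observer that commits to the maximum-a-posteriori cause will confidently confabulate when its priors exclude the true cause:

```python
# Hypothetical illustration: the LH sees only the observed action
# ("the left hand points at a shovel") and must infer its cause. It holds
# priors over candidate causes and likelihoods of the action given each
# cause, and commits to the maximum-a-posteriori (MAP) cause.

priors = {
    "I saw a chicken coop": 0.5,   # what the LH itself was shown
    "I saw a snow scene": 0.1,     # the RH's actual stimulus -- low prior, the LH never saw it
    "random choice": 0.4,
}
likelihoods = {  # P(pointing at shovel | cause) -- invented for illustration
    "I saw a chicken coop": 0.6,   # "you need a shovel to clean out the coop"
    "I saw a snow scene": 0.9,
    "random choice": 0.2,
}

# Unnormalized posterior: P(cause | action) is proportional to
# P(action | cause) * P(cause)
posterior = {cause: priors[cause] * likelihoods[cause] for cause in priors}
best = max(posterior, key=posterior.get)
print(best)  # "I saw a chicken coop" -- a confabulation, but the LH's best prediction
```

Because the LH never saw the snow scene, its prior for that cause is low, and the confabulated explanation wins: it is the structurally best prediction available to the hemisphere.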
Finally, some degree of interhemispheric cooperation can be seen regardless of the
sectioning of the CC. Recall the example in which a SBP is asked to draw what he was shown,
and what was shown was the word “CAR” to the LH, and the year (the numbers) “1928” to the
RH – he then drew a car from 1928. When a task demands hemispheric cooperation, as through
both hemispheres building a unified representation in this case, both hemispheres are still capable
of generating the best possible outcome together. It thus seems some integration is achieved without
the CC, even if only externally. Moreover, this integration becomes possible in tasks in which
cooperative representation is favored over competitive representation 34. For another example, Pinto
et al’s study (Pinto et al 2017) showed that (some) SBPs were able to verbalize left visual field
information as well as right visual field information (though they were unable to compare the
stimuli). Their work proposed that SBP consciousness was not divided upon the splitting of the
brain, and I tend to agree with this claim. Some integration of information may happen regardless
of the callosal severing, and it seems related to the degree of representational unity of the
perceived information in the hemispheres (Pinto et al 2017). The mechanisms through which this
integration happens are still not well known, but understanding that it might happen, and that
SBP consciousness may then be seen as incoherent, rather than split, may be important for future
investigations on the matter. In the next section, some theories of consciousness dedicated to
understanding SBPs’ differing conscious experience will be analyzed, hopefully in a way that
does not undermine what we feel and know about our own.
34 Where this problem permits hemispheric cooperation for its achievement, other activities (such as providing
incoherent information to the hemispheres, or segregated tasks) lead to hemispheric competition.
4.4 Split-Brain Streams of Consciousness
The points to be addressed in this section are SBP consciousness and how it may
relate to the Bayesian Brain theory in explaining the emergence of conscious experience. Among
other theories, we’ll focus here on three that were developed specifically to try to understand the
strange number of conscious streams we see in SBPs. These theories are:
a. The conscious duality model (CDM for short);
b. The switch-model of consciousness (SMC);
c. The partial unity model (PUM).
At the end of this section, the possibility that the unified conscious experience we are
familiar with is no more than a mental abstraction, much as Nagel had suggested back in 1971, will
also be discussed, relating such lack of unity to incoherence.
4.4.1 The Conscious Duality Model
We witness in SBPs what appears to be simultaneous conscious activity that arguably
should not be ascribed to a single mind. Because of this apparent duality, there has been some
doubt as to whether these individuals hold only one stream of consciousness (one continuously
coherent set of conscious states, as we assume normal individuals to have), or rather two of them,
one associated with each hemisphere, at all times. This first theory assumes the latter, where a
SBP is said to be the holder of two independent conscious streams that are not unified (LeDoux
et al 1977; Sperry 1977). If each hemisphere is home to an independent stream of consciousness,
then it would come as no surprise that, when confronted with contrasting bits of information
(interhemispherically), the subject would act differently, and simultaneously, upon them. Some
issues arise when considering normal situations in SBPs, however, as when no split in information
is provided, no conflict in conscious experience is (evidently) noticeable. Furthermore, some
bits of what builds conscious experience seem to remain unified across the hemispheres even in
experimental situations, such as emotional content or content regarding information that is not segregated
(Gazzaniga 1985). Attempting to explain this issue in experimental situations, the CDM
considers that some experiences can be individually generated in each hemisphere (token
experiences), whilst the hemispheres still share the nature of some of those experiences (sharing
types of experience 35) (Schechter 2010). Considering that the types of experience both hemispheres
access are somehow duplicated, and that each hemisphere can hold individual token experiences of
an interhemispherically shared type, gives this theory explanatory power over both experimental
and normal situations. In the absence of information segregation, the token experiences in each
hemisphere are duplicated as well, as both hemispheres access exactly the same inputs, both
generating the same types of experience.
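The type–token distinction at work here maps neatly onto the class–instance distinction familiar from programming. The analogy is my own, offered only as an illustration of the footnoted hunger example:

```python
class Hunger:
    """A type of experience; each instance is a distinct token of that type."""
    def __init__(self, subject):
        self.subject = subject  # who is undergoing this token experience

# Two tokens of the same experience-type, one per hemisphere
lh_token = Hunger("left hemisphere")
rh_token = Hunger("right hemisphere")

# Two numerically distinct tokens...
print(lh_token is rh_token)              # False
# ...of one and the same type.
print(type(lh_token) is type(rh_token))  # True
```

On the CDM, the claim is that in normal situations the hemispheres hold duplicated tokens (two instances with identical content), while in experimental situations the tokens differ but may still fall under a shared type.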
This being said, the Bayesian Brain theory seems to hold more explanatory power for
consciousness than the CDM. Central issues arise when considering the neuroscientific support
this theory lacks, rendering it implausible as opposed to competitor theories. Let us discuss
the type-token distinction employed by the CDM: each hemisphere is in segregated moments
home to experiences of the same type, but differently tokened; in normal situations each
hemisphere is home to experiences of the same type, and as accessible content is the same,
interhemispherically duplicated token experiences as well. This raises two major issues: (1) shared
content cannot be, by itself, that which integrates conscious experience (such that two streams of
consciousness might generate integrated consciousness in normal situations, as opposed to not
doing so in experimental ones); (2) we’ve seen each hemisphere holds different forms of
processing information, different focuses and proficiencies – this in no way supports the idea that
both hemispheres would, in normal situations, generate equally duplicated conscious content
(Schechter 2010). Through the Bayesian Brain theory, on the other hand, the differing
proficiencies of the hemispheres are both considered and evidenced, as in SBP cases we see the
left and right hemispheres hold lateralized processing power over different aspects, as well as
independent generative models in experimental situations. Furthermore, it does not take
duplicated content in the hemispheres to lead to integrated behavior (as in normal situations).
Rather, it takes what remains connected in the brain (sub-cortically), in the body, and whatever
differing or new connections may exist in the cortex to explain SBP behavior in normal situations
– which I take to be a much stronger position than considering that duplication of content allows
for the unification of behavior.
35 Regarding the type–token distinction, consider that you and I are both hungry. Each of us holds a personal, token
experience of being hungry, but both experiences are of the same type: hunger.
Furthermore, we may question the case of normal individuals as well. In accordance with
the Bayesian Brain theory, in normal individuals, given interhemispheric communication,
contributions from both hemispheres lead to the generation of a single conscious experience which is
“richer” than one produced in division. The CDM, on the other hand, must hold that a normal
individual’s consciousness is the product of two hemispheres which are both capable of generating
conscious states but, given interhemispheric communication, neither is given the chance to
generate different token conscious experiences independently, as interhemispheric modulation
keeps such token experiences duplicated so as to avoid disparity. But again, I return
to the point made previously: the hemispheres have been seen to hold differing proficiencies, and
holding that these different hemispheres duplicate token experiences, provided communication of
information, seems to me, though possible, highly implausible.
4.4.2 The Switch-Model of Consciousness
One more recent and rather controversial theory of split-brain consciousness is the SMC,
proposed by philosopher Tim Bayne, which states that consciousness is swiftly shifting between
one hemisphere and the other, such that in no moment one may consider dual-consciousness, but
instead we always hold a single and unified stream of consciousness that is alternating between
the hemispheres (Bayne 2008). In this sense, when a SBP is acting upon two different stimuli,
what is happening is a quick shift in consciousness between one hemisphere and the other, such
that the individual can “appear” to be conscious of two things simultaneously, but is really only
conscious of one at a time. Because the hemispheres are uncommunicative, what happens in
one hemisphere is not accessible to the other, and so the SBP still can’t speak of what is happening
in the “silent” RH, even if he may be conscious of what happens in both.
This theory may seem alluring at first, as it would solve the puzzling situations we see
SBPs in, and could even fit the conscious experience normal individuals have. However, the
main factor that requires analysis in this theory is also its greatest flaw: the shift in consciousness
between the hemispheres. Let us not consider for now how this shift occurs, but rather remain
focused on how fast it must happen. In SBP’s we witness activity promoted by the LH and RH
with such a high level of synchrony that we hold these activities to be simultaneous. For the SMC
to be conceivable, the shift in consciousness in the hemispheres would have to be so fast that it
could fool decades of tests and investigators into believing that what is happening in the
hemispheres is happening at the same time, and not alternatively by a quick shift in consciousness.
Can a shift in consciousness be this fast? To answer this question, let us again consider
experiments on binocular rivalry, which provide some insight into how fast shifts in conscious
awareness occur. In binocular rivalry, when one is faced with conflicting perceptions, it is
impossible for us to build a whole, unified image from the two distinct objects. Alternatively,
what we are faced with is a shift between conscious awareness of one stimulus and the other
continuously, until the conflicting perception is resolved. This shift in conscious awareness of
stimuli is very similar to what the SMC would stand for. The problem is that the shift in conscious
awareness is far from fast enough to be mistaken for simultaneous perception and action.
In cases of binocular rivalry we notice the shift in conscious awareness, and very much know
when such a shift occurs. It is not a very fast process, and certainly not one to be confused with
synchronized action; in SBPs we see simultaneous drawing of different shapes with both hands,
simultaneous search for objects exclusively presented to each hemisphere, simultaneous feelings
of revolt and confidence for the execution of a single task. For the SMC to be conceivable, the
shift in conscious awareness between the hemispheres would have to be unbelievably fast, and it
seems that it is not. Because of this, I do not consider this theory the best approach to understanding
split-brain consciousness, as other theories (such as those explored here) still stand as stronger
suitors for explaining the strange split-brain phenomena, even if imperfectly so.
4.4.3 The Partial Unity Model
Alternatively, a PUM was suggested (Lockwood, 1989), which shares with the CDM the
main point of considering the presence, in each hemisphere, of independent simultaneous
conscious activity. What distinguishes these theories is related to the transitivity of conscious
experience. If we consider a mental state a that unifies with a mental state b, and b in turn unifies
with a mental state c, then through this transitivity property the mental state a must unify with the
mental state c (if a cc 36 b, and b cc c, then a cc c). The CDM considers that the transitivity
property is kept; the PUM does not (Schechter 2014) (refer to Figure 4.03 in the appendix for
an illustration of the distinction between these two theories with regard to the transitivity
property). By dropping the transitivity property, the PUM allows a new possibility of conscious
experience that explains what we see happening both in SBPs and in normal individuals’ conscious
experience. Whereas in the CDM any independent conscious experience in each hemisphere
must be co-conscious with all other conscious experiences housed in it (and where some such
experiences are duplicated, allowing for moments of normal, unified consciousness), in the PUM
two different conscious experiences that are not co-conscious with each other may be co-
conscious with another set of conscious experiences that is connected with them both. While in
the CDM we must consider the existence of two streams of consciousness, in the PUM we can
consider the existence of only one that is partially disunified (where some conscious states remain
normally unified, whereas others do not). This allows the theory to consider conscious unity in
36 Is co-conscious with. Co-consciousness is the property of the contents of a conscious state to become unified.
degrees (similar to how I have here considered coherence in degrees), where the more
experiences go uncommunicated between the hemispheres, the less unified consciousness
will become.
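The formal difference between the CDM and the PUM can be made explicit. In the following sketch (the state names, and the choice of a shared “bodily” state, are my own illustrative assumptions), co-consciousness is represented as a symmetric relation over conscious states, and transitivity is checked directly – it holds for a CDM-style relation and fails for a PUM-style one:

```python
from itertools import product

def is_transitive(states, cc):
    """Check transitivity over a symmetric relation cc:
    whenever a cc b and b cc c (a, b, c distinct), a cc c must hold."""
    return all(
        frozenset((a, c)) in cc
        for a, b, c in product(states, repeat=3)
        if a != b and b != c and a != c
        and frozenset((a, b)) in cc
        and frozenset((b, c)) in cc
    )

states = {"left_visual", "right_visual", "bodily"}

# CDM-style relation: every state is co-conscious with every other,
# so transitivity holds trivially.
cdm = {frozenset(p) for p in product(states, repeat=2) if len(set(p)) == 2}

# PUM-style relation: each hemispheric state is co-conscious with a shared
# (e.g. sub-cortically exchanged) bodily state, but not with the other
# hemispheric state -- transitivity fails.
pum = {frozenset(("left_visual", "bodily")),
       frozenset(("right_visual", "bodily"))}

print(is_transitive(states, cdm))  # True
print(is_transitive(states, pum))  # False
```

Dropping transitivity is exactly what lets the PUM treat each hemisphere's states as unified with shared contents without being unified with one another.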
This theory has some strengths, particularly because it is supported by
some empirical evidence. As we’ve seen, between a fully sectioned CC and a whole one, partial
callosotomies can be made, in which only part of the CC is sectioned. In these individuals, who
stand between individuals with a whole CC and those with a split one, some strange behaviors
can be seen as well, but not to the extent that we are able to see in a full split-brain. It seems the
less communicative the hemispheres become, the less unified their consciousness may be
considered. Furthermore, this theory takes into consideration what regions of the brain remain
whole. Sub-cortically, the brain is not split as the cortex is in SBPs, and again, what remains
connected and communicative in these regions of the brain may account for what is still
consciously integrated in both hemispheres (Lockwood 1989). It is also a theory of SBP
consciousness that fits the Bayesian Brain theory, where generated models in both hemispheres
that are based on segregated information lead to independent conscious mental states, whereas
information that is not segregated is still accessible by both hemispheres, allowing for an
integrated single model and conscious mental states. This is exactly what could be expected if, as
this theory holds:
“(…) the cortically disconnected right and left hemisphere are (…) associated with distinct conscious
experiences that are not (interhemispherically) co-conscious; nonetheless, these are all co-conscious
with a third set of subcortically [or otherwise] exchanged or communicated conscious contents”
- Schechter, E, 2014
Notwithstanding the strengths of the theory, it is faced with criticism as well, particularly
from Hurley’s argument: what situation could be thought of in which consciousness could be
partially split while sharing co-consciousness with some states, and would not fall under explanation
by the CDM as well, through duplication of content? This indeterminacy argument claims that
there is no situation in which, result-wise, the PUM can be distinguished from the CDM (Hurley
1998). And indeed, in practical terms, at splitting moments, there seems to be little difference:
the result, whatever the case may be, would be two hemispheres which could independently hold
conscious experiences:
“There is no subjective viewpoint by which the issue can be determined. If it is determined, objective
factors of some kind must determine it. But what kind? (…) If no objective factors can be identified
that would make for partial unity as opposed to separateness with duplication, then there is a
fundamental indeterminacy in the conception of what partial unity would be, were it to exist”
- Hurley, S 1998
However, I believe Hurley’s argument loses its ground when he states there are “no
objective factors” that “would make for partial unity as opposed to separateness with duplication”.
The very argument behind the PUM entails something the CDM does not: it entertains the idea
that connections or communication still held interhemispherically are the base of a SBP’s unified
behaviors. Furthermore, it may find grounds in the Bayesian Brain theory, and it certainly pushes
forward the idea that the hemispheres hold differing processing proficiencies, which split-brain
evidence and studies on lateralization surely seem to confirm, but which the CDM seems to
disregard. Ultimately, the PUM holds the power to explain both normal and SBP conscious
experience (provided structural change) without putting into question what we consider true
about normal minds (at least to the extent the previous theories would), something in which the
CDM, considering the analyzed neurological evidence, seems to be found lacking.
4.4.4 Disunity and Incoherence
When we speak of conscious unity, we refer to the property of the contents of a conscious
state to become unified – an idea that has been around since the times of Kant (Kant 1781). We
are not conscious of the table and conscious of the glass and conscious of the water inside it; we
are conscious of all these contents (of a table with a glass of water on it) all together within the
same conscious state. Therefore, if we are to speak of conscious disunity, we must consider that
this property does not always hold 37. If we are to accept that a SBP’s mind is home to incoherent
generative models, generated by independently proficient hemispheres with differing processing
of segregated information, then we must also accept that the content of a single conscious state,
generated in situations in which information is not segregated, must be the product of a more
disunified consciousness than many consider possible. For each hemisphere is seen to hold
differing proficiencies regarding what data they access – and yet, what conscious states arise, both
in normal individuals and SBP’s in normal situations, arise in an (apparently) singular way. What
both hemispheres specialized in doing, or what cognitive faculties have become lateralized, can
become unified in a single conscious experience when that experience is generated by both
hemispheres in cooperation, provided interhemispheric communicative means.
These individuals may be living proof that unity in consciousness is not as absolute as it
had been thought. For instance, when either hemisphere of a SBP is assigned a task, its results
are often impoverished compared to the results on those same tasks when both hemispheres are
allowed to cooperate. The contrary is possible as well, given the lack of inhibitory
interhemispheric influence (Banich et al. 1990). Other indirect evidence includes situations with
normal individuals, in which what we are conscious of, and that which we have full conscious
access to, differs. When we read, we are conscious of an entire page of words, and yet we only hold
higher precision on what we are focusing on at a given moment. If words outside this focus region (but still
in conscious view) are changed during eye movement, we hold no awareness of the change. This
suggests that only the bit we focus on – the predictions associated with the data we gather when
sweeping the words through several unconscious eye movements, as part of the whole – is fully
unified in consciousness (Dennett 1991). However, and as briefly mentioned, the degree of this
disunity and the degree of incoherence in the minds of a SBP seem to be related. The more
disunified their consciousness becomes, the more incoherence they reveal; and, hemispherically
speaking, if both hemispheres were to lose all that in which they cohere, then both would become
independent, individual personal identities, in which case I’d agree – the mind would have been
split. As the problem stands, this is not the case: SBP’s hold conscious unity regarding
non-segregated input, and their hemispheres (and mind) still hold considerable coherence even in
experimental conditions (albeit less than a normal mind). What behaviors become incoherent are
not permanently so, and as information becomes integrated (communicated between the
hemispheres), so does their conscious unity grow. I believe that to split a mind, one must split
consciousness completely, and that is not what I consider to have happened in these individuals’
minds. Rather, consciousness has become potentially incoherent, as other mental states normally
can, leading their minds to more incoherence than we see in a normal one.

37 “Not always hold” is not the same as “never holds”. Most defenders of disunity in consciousness do not claim that
there is no unity in consciousness, just not to the degree that many claim it has, which is sometimes regarded as total.
Once again, degrees might be the correct way to consider unity in consciousness.
4.5 The Mind of a Split-Brain
Following callosotomy, the brain of a SBP loses the capacity for efficient communication
between its hemispheres, which means, in particular, the loss of interhemispheric stimulation.
Both hemispheres hold the necessary connectivity and processing power that lead to the formation
of generative models, and both hold the capacity to output independently conscious behavior as
well. Considering that conscious unity is not absolute in our minds, having dissociated content
from the hemispheres become integrated into a unified, single conscious state also seems
plausible: as information becomes more or less communicated interhemispherically, so do we
witness more or less interhemispheric independence, provided by generative models that may,
respectively, be jointly or unihemispherically generated. Our account of the Bayesian Brain
theory supports the existence of more than one set of co-conscious conscious states provided more
than one generative model is in play in the brain (which seems to be the case in SBP’s in experimental
situations); the brain’s hemispheres are independently capable of processing the same nature of
information (though in differing ways); and there are alternative theories of consciousness that
can account for conscious experience in SBP’s, highly relatable to the Bayesian Brain theory, and
that may account for our more unified conscious experience as well (provided no structural
change). With this, I believe, a strong idea of what is going on in the minds of SBP’s can be formed:
the hemispheres are independently predicting sensory information they gain access to, and both
are unable to suppress what the other one is up to; the two incoherent sets of co-
conscious conscious states thus generated lead to the SBP’s incoherent conscious behavior, but both (in accordance
with the PUM and the Bayesian Brain theory) may be unified with a third set of co-conscious
conscious states that is held coherently interhemispherically, and that ultimately anchors personal
identity in the form of an incoherent consciousness. This unification of personal identity is kept
through means of communication that go beyond the CC, such as sub-cortical communication or
the possession of a whole body of sensory receptors that is often coordinated (as with eye movements).
These allow the existence of a set of co-conscious conscious states that remains present in both
hemispheres at all times. In the mind of a SBP, there are thus token experiences, produced by
each hemisphere, which are independent and interhemispherically incoherent, and token
experiences that are commonly shared and interhemispherically coherent across the hemispheres
(recall that a SBP who is scared through his RH can verbalize being afraid through his LH),
experiences intimately related to the generative models that give rise to conscious experience.
In normal situations, SBP’s hold cohering informational input in the hemispheres, which allows
the brain to integrate dissociated content into a more unified consciousness; thus something
closer to a single set of conscious states may be considered, and their degree of coherence in
such situations is higher. Their consciousness is thus incoherent,
rather than divided – splitting consciousness would require the emergence of two continuously
and systematically independent sets of conscious states, where at all times one set and the other
could be independently represented in consciousness. This is not the case of a SBP. They hold a
high degree of states which cease to cohere interhemispherically at experimental times, but
certainly hold a reasonable amount of tokened conscious states which do still cohere
interhemispherically at all times as well, leading to a consciousness that resembles that which the
PUM stands for: a strangely dissociated consciousness, with some incoherent (token) experiences,
but with some coherent (token) experiences as well, making these individuals not exactly as
unified as we are, but not split or absolutely dissociated identities either.
We may not understand what it is like to be these individuals, but understanding that
what generates their conscious mind is the same process that generates ours, that the processes
that lead to their incoherencies also generate ours, and that the theories discussed here fit both
normal and SBP minds and incoherencies, certainly weighs in for the plausibility of considering
a single incoherent mind, for both normal individuals and SBP’s.
Chapter 5
Conclusions and Final Thoughts
A lot has been analyzed to lead us here, and we’ve come a long way from Nagel’s
question and the raw consideration of the split-brain phenomena. Given the nature of this multi-
disciplinary investigation, the range of aspects considered herein was broad. In this small chapter,
an overview of the work will thus be provided, and conclusions drawn from it will be considered.
Finally, a few final thoughts on open questions and the overall impact of such an investigation
will be disclosed.
I firstly considered a new way of conceiving this problem: instead of considering a
number of minds through change in mental coherence, I considered the mirror question, asking
whether mental incoherence would lead to a changing number of minds. Instead of counting
minds in SBP’s, focus was centered on exactly how coherent a mind truly is; changing the
question from finding a countable number of minds to realizing that their mind might simply be
more incoherent than our own was key to making the jump from aimlessly searching for a way to
understand how two minds may become one, to understanding how one mind may seem to be
two. Three objectives were proposed: investigate how coherent a normal mind really is; consider
the possibility of different degrees of coherence within minds; and understand how a mind, both
of normal individuals and SBP’s, might be incoherent. Having these three objectives answered, a
strong base for considering incoherence as an integral part of any mind can be built, and with
it, foundations for the answer to Nagel’s question as well: SBP’s hold one and only one mind,
whose degree of incoherence is evidently higher than that of a normal mind.
5.1 Brief Overview
For the first objective I defended that a mind can be incoherent, provided it is not
consciously so (conscious states within a same set of such co-conscious states must always
cohere). Psychological studies of cases of self-deception and akrasia show us that mental states
can be incoherent, where a single mind can simultaneously hold mental states that do not cohere.
Some of these states may be held incoherently in a conscious state (as is with conflicting desires
leading to incoherent action), whereas others definitely cannot (as is with incoherent beliefs,
where two such incoherent ones cannot be housed within a same conscious state). But conscious
states within a same set must continuously cohere. Through such evidence I’m led to believe that
a mind we consider singular can indeed be incoherent, and the coherence property we apply to
the mind should instead be applied to sets of conscious states.
The consideration that a normal mind can be incoherent, and that a SBP’s mental incoherence
is so different from that which we see in normal minds, led me to the second objective.
How much more incoherent can a SBP be such that incoherent simultaneous conscious behavior
may emerge? Through this I considered that a degree of coherence can be considered in minds,
such that total coherence is non-existent, and total incoherence would be loss of individuality. As
such, all minds fit somewhere between these two. As SBP’s hold a higher degree of incoherence,
their minds may slip closer into what would seem to be the emergence of two minds – two
identities – within one brain, when truly one mind would suffice for explaining their behaviors,
provided we grant them increased incoherence given their brain’s structural changes.
Furthermore, the changes their brains have undergone may be seen to lead to differing types of
incoherence as well: where in normal individuals, incoherence occurs as a natural process in the
brain, and allows only one set of co-conscious conscious states to be represented at a time; in
SBP’s, incoherence is (additionally) a product of CC sectioning, allowing for two sets
of co-conscious conscious states to be interhemispherically represented simultaneously (when
segregated information is provided to the hemispheres). Though both these types of incoherence
can be ultimately measured and quantified, they are of different types, emerging for different
reasons, and leading to different behaviors.
If indeed we may consider single minds to be incoherent, we had to reflect on how this
could be so. Particular attention was given to the minds of SBP’s, as their minds are what is under
focus in this work (we do not question the singularity of our own minds). The third objective was
intended to enlighten how their minds work to produce incoherent conscious behavior, and the
proposed argument is that by considering the same mechanisms that may lead a normal brain to
produce conscious awareness, action and incoherence, we may consider a SBP’s incoherent
conscious awareness and action as well, provided both hemispheres in the brain are stimulated in
isolation. The Bayesian Brain theory tells us that the brain is a statistical analyzer, constantly
comparing sensed information with predictions it holds, and consciously representing the
predicted causes of sensory stimulations. We’ve also seen that each hemisphere holds the
potential to independently receive, process and interpret sensory data of various natures. Together,
we considered the generation of different generative models that become integrated into one
single model built from both hemispheres 38. Considering the lack of a corpus callosum,
and the lack of integration of said generative models, one might expect each hemisphere to hold
independent generative models for information that is segregated. Generative models lead to our
conscious awareness and action. If two such models can exist incoherently housed in one
hemisphere and the other, then incoherent conscious awareness and action can be understood in
SBP’s through consideration of this segregation of information and generation of independent
models. Information that is not segregated, being shared by the hemispheres, leads to a single
generative model built from both hemispheres. SBP consciousness can thus be seen as an equally
incoherent phenomenon, where some conscious states do not cohere (as those in incoherent
generative models), but these cohere with others that cohere among themselves (as those from the
mutually generated model). This leads these patients to have (in experimental situations) a strange
dissociated identity, which is not separate, but not fully integrated either – an incoherent
consciousness, in a deeply incoherent mind.
5.2 Conclusions
Normal minds are incoherent. We can see it in everyday moments and behaviors, and yet,
we ourselves seem so blind to what incoherencies we might reveal. In our own minds, we always
seem to feel coherent. This led me to consider that coherence is not a necessary property of the
mind, but rather of conscious states within a set of co-conscious conscious states (as in a single
stream of continuous conscious experience). While we might be able to witness incoherence in
other people, we find it hard to become aware of our own incoherencies. For instance, we witness
incoherent beliefs in other people, but not often in ourselves. As the belief system holds a basal
position in the creation of high-level conscious thought, representing as it does a way the world is,
incoherent beliefs cannot be housed within a same conscious state, and no two conscious states
housing incoherent beliefs can simultaneously exist on one same set of co-conscious conscious
states. Actions, on the other hand, may be incoherent in the conscious mind. We are aware of
conflicting desires, often leading to actions that do not make sense. These desires are mental states
that, unlike beliefs, can be housed incoherently within a same conscious state. This differs from
holding simultaneously two conscious states with incoherent desires within a same set of
conscious states, which is not what I hold to happen: mental content becomes unified within
conscious states. We may represent information incoherently as well, so long as not in
consciousness. Through all this I realized mental states can be incoherent in the mind, and if we
hold mental states that do not cohere, then our singular minds can be subject to incoherence in
degrees, where more basal states in the mind cannot be incoherent within a conscious state,
whereas less basal ones can; and none can be incoherently held in differing conscious states
within a same set thereof simultaneously.

38 It’s worth noting that by stating this, I reject the possibility that, as some have defended, our brain is home to two
conscious hemispheres whose duality, through interhemispheric communication and work in unison, is never made noticeable.
Incoherence comes in degrees and types. If we, to whom we ascribe a single mind, are
not totally coherent, then we can at least draw one solid conclusion: total coherence is not needed
to hold a single mind. If we are not totally coherent, and clearly not totally incoherent either, then
we fall somewhere within this spectrum. If this spectrum can be considered, then coherence
can be measured in degrees, and not in absolutes. Now we hold ground to expand on the definition
of a single mind: if total coherence is not needed, is there such a degree of incoherence that a
single mind can no longer be considered? We witness in SBP’s a more incoherent behavior and
mental activity than in normal individuals. These individuals (at least in incoherent moments) fall
in a lower range of the spectrum, closer to incoherence than normal individuals. Is this enough to
claim that they become holders of two minds? Or that their number of minds change at all?
Their incoherence is of a different type, occurring due to callosotomy, and leading to the
emergence of incoherent sets of conscious states interhemispherically. But even if we consider
the absolutes of degree in coherence with regard to the hemispheres exclusively, maximum
coherence would entail two hemispheres which held no mental states that did not cohere.
Maximum incoherence would be two hemispheres with no inter-coherent mental states. The
former we saw to be nonexistent in any single mind and any brain; the latter would be
dissociation of personal identity; but neither is really achieved by any individual, as even SBP’s
never hold dissociated identities at all times. Even in experimental situations, several token
experiences are mutually generated by both hemispheres, even if most of them are not. No
individual, regardless of degree of incoherence, is seen to be absolutely coherent or incoherent.
Likewise, be it by normal incoherence, or interhemispheric incoherence due to callosotomy, no
individual appears to be absolutely coherent or incoherent; and if absolute incoherence is not
found in either type of incoherence, leading to dissociated personal identity, then one mind may
be kept singular regardless of type of incoherence, as well as degree.
What allows for normal incoherence, allows for SBP incoherence, provided structural
change. Through consideration of generative models in the brain, we can make sense of what
incoherencies normal individuals show, and considering the sectioning of the corpus callosum,
we can make sense of SBP’s incoherence as well. In either case of incoherence, it has been made
clear that conflicting generative models regarding bits of information are at its core. In normal
individuals, as a high degree of incoherence is suppressed by both hemispheres coordinating their
conscious representations to avoid disparity, conscious incoherence does not occur. Nonetheless,
normal individual incoherence is still present, where one may hold incoherent desires consciously,
incoherent beliefs provided only one be represented in consciousness, and even incoherently
representing the world (again, provided only one such representation is conscious); all of which
is explainable through consideration of conflict in the brain’s generative models. A SBP’s brain,
with hemispheres that are no longer able to effectively avoid representational disparity, gains the
potential to hold incoherent representations interhemispherically, leading to simultaneous and
incoherent conscious states. Again, the same process leads to incoherence: generative models
housed in different hemispheres both become represented, leading to incoherent conscious
behavior, all the while a unifying set of interhemispherically co-conscious conscious states,
derived from unsegregated input, is generated, leading to partial unification of representation
and consciousness. This is seen through several token experiences which are still the product of
both hemispheres together, owing to what remains interhemispherically connected: sub-cortical
structures, a whole body of reception and agency, memories, and eventual cortical connections
that may still provide communication, or new connections formed over time.
A single incoherent mind is plausible. That the same neuronal processes that (may)
lead to our conscious awareness allow for the existence of incoherence, and that, given
uncommunicating hemispheres, those same processes may account for incoherent conscious states,
points to the very idea that a single mind can be incoherent, and that even a SBP’s mind is
subject to incoherence under the same principles as a normal mind. What is it to have an
incoherent mind? It would seem that if in any one mind its mental states do not cohere, then that
mind lacks coherence. I find this to be true in normal minds, as I find it to be so in SBP’s. Is the
fact that SBP incoherence is so deep that incoherent conscious states arise sufficient to consider
the loss of a single mind? If we consider incoherence in terms of degrees, then unless, in this
specific case, the hemispheres become so incoherent that individual identity is forever lost, I do
not consider the loss of the single mind, as that could only be attributable to two hemispheres with
individual personal identities. As we’ve seen, mental experience arises from both hemispheres
together – a whole nervous system with processing differences and similarities that lead to, in
ideal situations, a unified sense of who we are in the world. In non-ideal situations, as we see in
SBP’s, though in experimental situations conflicting and incoherent states of consciousness
certainly arise, some states of consciousness still hold coherently interhemispherically: a SBP can
cooperatively draw a picture from what both hemispheres access independently; they can cue
themselves (albeit unconsciously) to ensure one hemisphere or the other answers in the way the
experiment requires; emotions are often present in both hemispheres regardless of CC when only
one hemisphere is accordingly stimulated. It would seem that a SBP’s personal identity is never
absolutely split, but rather left incoherent – at all times, some states in the mind that build for a
singular mind remain coherent in a SBP, even if a large amount of such states become dissociated
and incoherent. If the mind can be incoherent, then I see no reason to consider a SBP to have
anything more than a single incoherent mind – a mind with mental states that do not fully cohere
– even if they hold more such states than normal individuals do. Given that absolute dissociation
of conscious
experiences (token experiences) at any given moment is absent (which it seems to be), Nagel’s
fourth, rejected answer, can be revisited, and I believe it to be a very plausible answer to his
question: “[Split-Brains] have one mind, whose contents derive from both hemispheres and are
rather peculiar and dissociated” (Nagel 1971).
5.3 Final thoughts
5.3.1 On Counting Consciousnesses, Models and Minds
These intimately related phenomena – consciousness, generative models and minds – can
all be found in the brains of mankind (and probably in other species as well). In this brief section,
I will analyze how many of these I believe we can find in any individual brain. The first question
I ask is how many consciousnesses we can find in a brain, both of normal individuals and of
SBP’s. Immediately resolving the simpler issue here, we’ve seen that the brain of a normal
individual can only represent one set of co-conscious conscious states at a time, such that
simultaneous incoherent conscious states cannot be present in their minds at any given time. As
such, and taking consciousness to be the whole set that contains all sets of conscious states in a
mind, normal individuals hold only one consciousness, with conscious states housed in it
continuously being coherent, and coherent with whatever mental states are housed therein.
The main issue, then, is not to count how many consciousnesses we can find in a normal
individual’s brain, but rather how many we can consider a SBP to have, particularly in
experimental situations. Let us first analyze how many we can find in a SBP in normal
circumstances, in which informational input is not segregated. In these situations, only one set of
co-conscious conscious states appears to be housed in the SBP’s brain, as incoherent behavior is
much reduced in normality; but it is not altogether gone either. In this sense, we are able to
ascribe these individuals neither a fully normal single consciousness, even in normal circumstances,
nor the strange dissociated consciousness we witness in experimental situations.
Under experimentation, these individuals reveal simultaneous and incoherent conscious states,
where one set of such conscious states is housed in each hemisphere. As these do not belong to the
same set of co-conscious conscious states, they do not cohere amongst themselves, leading to increased
incoherence in their minds. By considering these two distinct sets of conscious states that regard
segregated information, we may consider, in experimental situations, the existence of two sets of
conscious states in their brains. There is still the issue of what token experiences remain
interhemispherically coherent in their minds, regarding what informational input is not experimentally
segregated or is still shared regardless of callosotomy. These seem associated with another set of
co-conscious conscious states that is derived from both hemispheres, cohering as normally would
be expected. And here is where indeterminacy takes its toll on the issue: how many sets of
conscious states can we see SBP’s to have? The answer may lie somewhere among the possibilities
of there being one strange and dissociated set; two incoherent sets that hold some mechanism
for maintaining some of their states coherent; or three (or potentially more, with further dissociative
lesions) sets, regarding that which does not cohere in their minds (in the form of two
interhemispherically independent such sets), together with that which does (in the form of a single
interhemispherically generated set). Though I tend towards the third option (mainly through
considering the amount of generative models housed in a SBP’s brain in experimental situations,
to be discussed shortly), I suppose this question may not really have a clear answer, much like
Nagel concluded with regard to the number of minds. Maybe counting consciousnesses is not
best way to approach the problem, as we may see consciousness to be more or less unified in
these individuals from one moment to the next. Be it by holding one, two, three or more sets of
conscious states, they still appear to be holders of an incoherent whole consciousness under
any circumstance, with both cohering and incoherent conscious token experiences. As such, I
consider a SBP to have one consciousness, but with a higher degree of incoherence in
experimental conditions due to the generation of an undetermined amount of sets of conscious
states simultaneously being housed in a single mind.
Though in counting consciousnesses the answer seems consistently to be one (more or
less incoherent) consciousness, in counting sets of conscious states (that build the whole
consciousness), what we are counting is generative models. The amount of generative models
holding a dominant position in our brains is the number of sets of conscious states we hold,
building conscious awareness. In a normal individual, the answer is one: one dominant generative
model at a time, leading to one set of co-conscious conscious states (though out of consciousness
more generative models may be at work in the brain). In a SBP, the answer differs depending on
the circumstance. Importantly, this question has a strong relation with Nagel’s question itself, of
how many minds a SBP has. What we experience the mind to be must be experienced through
conscious states, and our conscious states (arguably) derive from generative models. It follows
that what we experience mentally must be related to what generative models we hold. This
question can then be seen as a more modern approach to Nagel’s: instead of asking how many
minds, we may ask how many models building mental experience we can find in a SBP. Unlike
the answer for Nagel’s question, which I hold to be one single (incoherent) mind, the answer to
this question is slightly different, as what is building the content of a single mind may be more
than a single generative model. Generative models lead to conscious states, and conscious states
(and sets thereof) lead to the single (incoherent) mind. In experimental situations, the
(conscious) mind of a SBP is home to two incoherent dominant generative models in the
brain, one in each hemisphere, together with a generative model building conscious awareness of
what experiences are not split in the hemispheres; or perhaps two generative models, one in each
hemisphere, with common priors (allowing for what experiences are interhemispherically kept
coherent). In normal situations we can certainly consider something closer to one, as behavior
becomes clearly more integrated. The answer to how many models there are generating their
mental experience may then be one model in normal circumstances 39, and two (or more) in
experimental situations, very similar to what Nagel proposed in his fifth rejected hypothesis.
Finally, we come back to the starting point of counting minds. I see no reason to consider
that the single mind is lost upon callosotomy, as all that seems to happen is an increase in the
degree of incoherence, even considering the differing types of incoherence normal individuals
and SBP’s reveal. We all may be incoherent at some given point, some just to a greater degree
than others, and certainly in different ways. Nonetheless, in any case, what builds a mind is not
absolutely divided or incoherent all the time, or at any given time. In experimental situations, in
which SBP coherence is at its lowest, there arises a duality of incoherent sets of conscious states
in the brain, created by independent generative models in each hemisphere with regard to
segregated input. However, these still hold common token conscious experiences, suggesting that
not all has become divided in their minds, with personal identity anchored by those states
and experiences that are not split following callosotomy. Furthermore, in all possible moments a
single best predictive (generative) model is created (as in normal situations). Whether, in
experimental moments, identity takes a step away from individuality and towards duality, as seen
through increased incoherence, I cannot be sure. But I am sure that normal individuals and their
incoherencies fit these identity problems as well (though not consciously so). If under these
conditions we can ascribe a single mind to normal and intact brained individuals, then I stand for
the claim that a SBP is also a holder of one and only one mind, even if that mind reveals a higher
degree of incoherence than our own. To close, let us ponder on another possibility: I’ve held
throughout this dissertation that normal individuals are holders of a single mind (which was the
baseline of comparison with SBP behavior and number of minds). How would we proceed if we
were to drop this assumption? We know that what or how we feel is not enough to assert that one
single mind is what we have. Perhaps we are holders of more than one mind, generated in such a
way that consciously we feel as a singular entity? Maybe we have no countable number of minds,
much as Nagel has suggested. The conclusions drawn in the work are solid, just as long as the
assumption that we have one mind stands. I wish to emphasize that this assumption, as tempting
39 One model is debatable, as even in normal circumstances the lack of shared hemispheric proficiencies leads to a
more incoherent consciousness in SBPs than in normal individuals. In such tasks, two models associated with the
uncommunicating hemispheres may be considered. And more models in experimental situations can surely be considered.
as it may be, could very well be wrong, and keeping an open mind regarding the issue will be
essential for future investigations on the matter.
5.3.2 On Impoverishment of Consciousness
Some final thoughts on the consciousness of a SBP must be considered. Turning from
counting or analyzing their consciousness or mind, I want to stress the loss of what we could
consider a normal, “full” conscious experience. In normal brains, both hemispheres
contribute simultaneously to the one generative model building the whole conscious experience
we have, often influencing one another and pulling in opposite, but necessary, processing
directions. But a SBP, even in normal situations, has moments in which sensory data that reaches
both hemispheres cannot generate an integrated response. If we recall the issue of
moral judgment by a SBP, they were unable to verbalize a morally sound reason for making a bad
moral judgment (from a deontological perspective). These judgments were made by the speaking
LH, and without contribution from the RH, verbal moral judgment seems to become undermined.
These tests of morality were made with SBPs without any segregation of stimuli, as the
test stories that were read to them were accessible to both hemispheres. The problem is that even
when both hemispheres have access to the same information, they are still specialized in
processing that information in different ways; without that specialization shared across the
commissural gap, and without interhemispheric influence, it seems some aspects of consciousness
that we should expect a SBP to have in normal circumstances may be lost hemispherically.
Though taken together the hemispheres present the full extent of a normal conscious experience, each
hemisphere’s conscious behavioral output is in many ways impoverished compared with one
holding a “full” consciousness; and as in the case of moral judgment, a SBP is unable to output
the behavior that would be expected if their consciousness were fully integrated. Given the
integration of hemispheric specializations in normal individuals, and the loss thereof in
SBPs, I consider that, to some degree, the fully integrated conscious behavior that we hold is
forever lost in a SBP with the loss of interhemispheric communication. In this sense, a SBP will
always hold a higher degree of incoherence in their mind than normal individuals, holding an
impoverished conscious behavior when compared to behavior generated by a normal, integrated
brain.
5.3.3 On Evolutionary Perspectives
How can incoherencies hold evolutionary value? Do they hold any evolutionary value at
all? I believe mental incoherence is a byproduct of the way the brain evolved to process
information. Considering what the Bayesian Brain theory holds, conflict in predictive processing
leads to incoherence in any mind. That conflict may ultimately result in the best evolutionary
outcome. In the particular case of beliefs, for instance, one usually represses a belief that will, in
some way, affect one’s self negatively. Consider what Friston has realized, in the passage quoted
at the close of this section.
Arguably, the best behavior to exhibit would be that which leads to increased stability
and higher survivability and reproductive success. In this sense, the misguided (unjustified) belief
becomes the one to be represented in consciousness. When one holds two conflicting generative
models, one rooted in sensory evidence but leading to pain, and another not rooted in
sensory evidence but leading to higher mental stability, it is usually the path of higher
stability and lesser pain that takes hold in consciousness as the dominant generative model (following
the notion of Freudian self-deception).
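As an illustration only (a toy sketch, not part of this dissertation's formal apparatus; all probabilities below are hypothetical numbers chosen for the example), the dominance of the stabilizing model over the evidence-driven one can be expressed as a comparison of unnormalized log posteriors, where the preference for stability plays the role of a prior:

```python
import math

def log_posterior(log_prior: float, log_likelihood: float) -> float:
    """Unnormalized log posterior for a candidate generative model:
    log p(model | evidence) = log prior + log likelihood + const."""
    return log_prior + log_likelihood

# Model A fits the sensory evidence well but is "painful" (low prior
# preference); model B fits the evidence worse but is stabilizing
# (high prior preference). Numbers are purely illustrative.
model_a = log_posterior(log_prior=math.log(0.2), log_likelihood=math.log(0.9))
model_b = log_posterior(log_prior=math.log(0.8), log_likelihood=math.log(0.4))

dominant = "A (evidence-driven)" if model_a > model_b else "B (stability-driven)"
print(dominant)  # prints "B (stability-driven)"
```

Here the model with the stronger prior preference dominates despite its weaker fit to the senses, mirroring the Freudian self-deception reading above: sufficient prior weight on stability can outcompete sensory evidence in determining which model takes hold.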
Of particular relevance is incoherence between the hemispheres, which alone
allows for simultaneously incoherent conscious states held interhemispherically; here the evolutionary
advantages of the brain’s bisymmetry must be considered. Cases of lateralized asymmetry in the
nervous system have occurred in nature across species from very distant phylogenetic branches,
from invertebrates to vertebrates (Halpern et al. 2009). Lateralized asymmetry associated with the
bisymmetry of the human brain seems to hold evolutionary advantage, perhaps by granting
contrasting, but necessary, ways of perceiving and processing information (McGilchrist 2012). It
is across this symmetry gap that we see the most contrasting faculties, as both sides specialized for
processing certain kinds of information; if incoherent conscious behavior should arise, it would
certainly be more easily attained across this gap. It is nonetheless clear that species with
lateral proficiencies thrive only if those proficiencies are shared between the hemispheres.
The brain evolved to process information through the joint efforts of both hemispheres, so it comes as
no surprise that changing the way the brain evolved to process information leads to
decreased success in that processing. We may then consider two ways to conceive the
evolutionary value of incoherence. Normal incoherence, as seen in normal individuals, may hold
evolutionary value, as the conflict between generative models often leads to the behavior best
suited to keeping high rates of survivability, stability and reproductive success. Incoherence
of the type we see in SBPs, however, coming from structural changes in the brain and
undermining the way it has evolved to process information, holds no evolutionary value:
this type of incoherence actually seems to lead to decreased brain processing power in some
respects, as we’ve seen. As a final note, incoherence may also be a consequence of how the brain
evolved to process information, and not something directly selected for itself.
“By considering the nature of biological systems in terms of selective pressure one can replace
difficult questions about how biological systems emerge with questions about what behaviours they
must exhibit to exist”
- Friston, K. (2006; 2013)
5.3.4 On What the Future Holds
In this work, we’ve applied a theory of brain processing of great relevance both to
understanding incoherence in the mind and to acknowledging that a SBP potentially holds independent
predictive power in each hemisphere. It was the first investigation of this nature, both in conceiving
the mind as an incoherent phenomenon and in conceiving a SBP as the holder of a single mind.
Where can we go from here?

First and foremost, we must understand that the age of the split-brain
is coming to an end, as less invasive strategies for treating epilepsy than severing the
CC are now in use. This means that the study of brain lateralization cannot be tested with
support from SBPs for much longer. The future of this study may be based on Transcranial
Magnetic Stimulation, which has recently been widely used to study the functional roles and
proficiencies of the brain and its neural networks. Of great importance would also be finding further
evidence supporting the Bayesian Brain theory, as it is the tie considered in this work
between mental incoherence and brain processing mechanisms. Only by confirming that what the
brain does concurs with what this theory states can we continue down this new line of
investigation into incoherence and single minds. Friston has worked on deep and superficial
pyramidal neurons, and has found them to be a good starting point for associating predictive models
with brain structure. Further study on this will thus be essential for the future.

Another interesting path of investigation would involve animals with unihemispheric sleep patterns,
such as cetaceans or certain bird species (Mascetti 2016). These animals, whose hemispheres sleep one
at a time, are awake at all times, able to surface to breathe when needed (in the case of cetaceans) as
well as to remain constantly vigilant. It also means that what generates their conscious mental activity
may arise from one hemisphere or the other, alternating and independently. It would be interesting
to confirm the existence of processing proficiencies in these animals, depending on which
hemisphere is asleep, or consequent personality traits. Furthermore, if sleeping indeed entails
a reduction of complexity for the generative models, then this reduction would occur in these
animals in an alternating fashion, in one hemisphere and then the other. Studying sleep cycles in these
animals’ brains would be of great interest, aiming both to understand the importance of sleep and
to study the contrasting mechanisms of sleep and wakefulness operating simultaneously, with
constant conscious awareness maintained.

Other questions were left unanswered in this work, which
would be solid starting points for future investigations. We’ve become aware that degrees of
incoherence exist with regard to the mental states that may become incoherent. What makes a desire
a mental state that permits conscious incoherence, as opposed to a belief, which cannot be
consciously incoherent? Surely we’ve seen that a belief often leads to a desire, but that is hardly a
definitive answer to the question. Furthermore, we’ve seen that holding a higher degree of
interhemispheric informational coherence allows the possibility of both hemispheres building
toward a more unified consciousness. But we’ve also seen that independently processed information cannot
be the sole tie that binds consciousness together. What else can be at play, allowing a SBP to
reveal differing degrees of unity in behavior? And how can a model for consciousness commonly
generated by both hemispheres be unified with incoherent generative models independently
generated in each hemisphere? This is the basis on which the partial unity model is built when
associated with the Bayesian Brain theory, but we are still far from understanding how this partial
unity in consciousness is attained.

Many questions were left in the wake of the answers sought in
this work. It was only through the conjoint work of various areas of cognitive research that
answers were given here, and I believe only through similar efforts can further questions be
answered. Cognitive Science thus holds a central role in these investigative paths today, where
only by combining different bases of knowledge, directed towards the same goals, can we hope to shed
light on the ever intriguing relation between the mind and the brain.
References:
Adams, R., Shipp, S., Friston, K. (2013). Predictions not commands: active inference in the motor system. Brain
Structure & Function 218, pp 611-643.
Adams R, Brown H, Friston K. (2015). Bayesian inference, predictive coding and delusions. Avant 5, pp 51–88.
Alajouanine, T. (1948). Aphasia and artistic realization. Brain 71, pp 229-241.
Allen, J., Damasio, H., Grabowski, T., Bruss, J., Zhang, W. (2003). Sexual dimorphism and asymmetries in grey-white
composition of the human cerebrum. Neuroimage 18, pp 880-894.
Aristotle, Ross, D. (2017). The Nicomachean ethics. Los Angeles, CA: Enhanced Media Publishing
Arpaly, N. (2000). On acting rationally against one's best judgement. Ethics 110, pp 488-513
Ashby, W. (1962/2004). Principles of the self-organizing system. E:CO 6, pp 102-126.
Audi, R. (1979). Weakness of will and practical judgement. Noûs 13, pp 173-196.
Augusto, L. (2013). Unconscious Representations 1: Belying the traditional model of human cognition. Axiomathes
23, pp 645-663.
Bach, K. (1997). Thinking and Believing in Self-Deception. Behavioral and Brain Sciences 20, pp 105.
Balog, K. (2009). Jerry Fodor on non-conceptual content. Synthese 170, pp 311-320.
Banich, M., Belger, A. (1990). Interhemispheric Interaction: how do the hemispheres divide and conquer a task?
Cortex 26, pp 77-94.
Banich, M., Federmeier, K. (2006). Categorical and Metric Spatial Processes Distinguished by Task Demands and
Practice. Journal of Cognitive Neuroscience 11, pp 153-166.
Bargh, J., Morsella, E. (2008). The Unconscious Mind. Perspectives on Psychological Science 3, pp 73-79
Bayne, T. (2008). The Unity of Consciousness and the Split-Brain Syndrome. The Journal of Philosophy 105, pp
277-300.
Beeman, M., Bowden, E. (2000). The right hemisphere maintains solution-related activation for yet-to-be-solved
problems. Memory & Cognition 28, pp 1231-1241.
Berlucchi, G., Mangun, G., Gazzaniga, M. (1997). Visuospatial attention and the split brain. News in Psychological
Sciences 12, pp 226-231.
Bermúdez, J. (2000). Self-deception, intentions and contradictory beliefs. Analysis 60, pp 309-319.
Blackburn, S. (2007). Truth: a guide. Oxford: Oxford University Press.
Blackburn, S. (2016). The Oxford dictionary of philosophy. Oxford: Oxford University Press.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences 18, pp 227-287
Bloom, J., Hynd, G. (2005). The role of the corpus callosum in interhemispheric transfer of information: excitation or
inhibition? Neuropsychology Review 15, pp 59-71.
Blumson, B (2012). Mental maps. Philosophy and Phenomenological Research 85, pp 413–434
Borod J, Welkowitz J, Alpert M, Brozgold A, Martin C, Peselow E, Diller L. (1990). Parameters of emotional
processing in neuropsychiatric disorders: conceptual issues and a battery of tests. Journal of Communication
Disorders 23, pp 247-271.
Bradley, F. (1914) Essays on Truth and Reality. Oxford: Clarendon Press.
Brodmann, K. (1909). Vergleichende Lokalisationslehre der Grosshirnrinde. Leipzig: JA Barth.
Brown, H., Kosslyn, S. (1993). Cerebral lateralization. Current Opinion in Neurobiology 3, pp 183-186.
Burge, T. (2010). Origins of Objectivity. Oxford: Oxford University Press.
Chalmers, D. (1996). The conscious mind: in search of a theory of conscious experience. New York: Oxford University
Press
Churchland, P. (1981). Eliminative Materialism and the Propositional Attitudes. The Journal of Philosophy, 78, pp 67-
90
Clark, A. (2013) Whatever Next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and
Brain Sciences 36, pp 1-73.
Clark, A. (2016). Surfing uncertainty: prediction, action, and the embodied mind. New York, NY: Oxford University
Press.
Clark, A. (2017). Predictions, precision and agentive attention. Consciousness and Cognition 56, pp 115-119.
Clarke, D., Wheless, J., Chacon, M., Breier, J., Koenig, M., Mcmanis, M., Baumgartner, J. (2007). Corpus callosotomy:
A palliative therapeutic technique may help identify resectable epileptogenic foci. Seizure 16, pp 545-553
Cohen, L., Dehaene, S., Naccache, L., Lehéricy, S., Dehaene-Lambertz, G., Hénaff, M., Michel, F. (2000). The visual
word form area: spatial and temporal characterization of an initial stage of reading in normal subjects
and posterior split-brain patients. Brain 123, pp 291-307
Coon, D., Mitterer, J. (2013). Introduction to psychology: gateways to mind and behavior. Belmont, CA:
Wadsworth Cengage Learning.
Coulson, S. (2008). Metaphor comprehension and the brain. In Gibbs, R (ed) Metaphor and Thought. New York, NY:
Cambridge University Press.
Damasio, A. (1992). Aphasia. New England Journal of Medicine 326, pp 531-539.
Damasio, A., Carvalho, G. (2013). The nature of feelings: evolutionary and neurobiological origins. Nature Reviews
Neuroscience 14, pp 143-152.
Damasio, A., Grabowski, T., Bechara, A., Damasio, H., Ponto, L., Parvizi, J., Hichwa, R. (2000). Subcortical and
cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience 3, pp 1049-1056.
Davidson, D. (1980). Essays on actions and events. Berkeley: University of California Press.
Davidson, D. (1982). Rational Animals. Dialectica 36, pp 317-327
Davidson, D. (2004). Problems of Rationality. New York: Oxford University Press.
Dayan, P., Hinton, G., Neal, R., Zemel, R (1995). The Helmholtz machine. Neural Computation 7, pp 1022 – 1037
De Sosa, R. (1970). I. Self-Deception. Inquiry 13, pp 308-334.
Dennett, D. (1987). The intentional stance. Cambridge, MA: The MIT Press.
Dennett, D. (2006). Sweet dreams: philosophical obstacles to a science of consciousness. Cambridge MA: The MIT
Press.
Dennett, D., Weiner, P. (2007). Consciousness explained. New York: Little, Brown and Company.
Deweese-Boyd, I. (2017). Self-Deception. The Stanford Encyclopedia of Philosophy (Fall 2017 Edition). Edward N.
Zalta (ed.). URL = https://plato.stanford.edu/archives/fall2017/entries/self-deception
Doya, K., Ishii, S., Pouget, A., Rao, R. (2007). Bayesian Brain: Probabilistic Approaches to Neural Coding. Cambridge,
MA: MIT Press.
Dretske, F. (1988) Explaining behavior. Cambridge, MA: The MIT Press.
Dunn, R. (1995). Motivated Irrationality and Divided Attention. Australasian Journal of Philosophy 73, pp 325–336.
Eichenbaum, H., Yonelinas, A., Ranganath, C. (2007) The medial temporal lobe and recognition memory. Annual
Review Neuroscience 30, pp 123-152.
Eickhoff, S., Schleicher, A., Zilles, K., Amunts, K. (2006). The human parietal operculum. I. Cytoarchitectonic
mapping of subdivisions. Cerebral Cortex 16, pp 254–267.
Eliasmith, C. (2007) How to build a brain: From function to implementation. Synthese 159, pp 373–88.
Fernández, J. (2013). Self-deception and Self-knowledge. Philosophical Studies 162, pp 379-400.
Fink, G., Marshall, J., Weiss, P., Zilles, K. (2001). The Neural Basis of Vertical and Horizontal Line Bisection
Judgments: An fMRI Study of Normal Volunteers. Neuroimage 14, pp 59-67
Finset, A., Sundet, K., Haakosen, M. (1988). Neuropsychological Syndromes in Right Hemisphere Stroke Patients.
Scandinavian Journal of Psychology 29, pp 9-20.
Fodor, J. (1975). The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. (1981). Representations. Cambridge, MA: The MIT Press.
Fodor, J. (1998). Concepts: Where Cognitive Science Went Wrong. New York: Oxford University Press
Fodor, J., Sherwood, S. (1978). Propositional Attitudes. The Monist 61, pp 501–523
French, P., Uehling, T., Wettstein, H. (1997). Midwest studies in philosophy. Notre Dame: University of Notre Dame
Press.
Freud, S., Phillips, A. (2006). The Penguin Freud reader. London: Penguin
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B 360, pp 815-836.
Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences 13, pp 293-
301.
Friston, K., Kilner, J., Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris 100, pp 70-
87.
Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biological Cybernetics 104,
pp 137–160.
Funkhouser, E. (2005). Do the Self-Deceived Get What They Want? Pacific Philosophical Quarterly 86, pp 295–312.
Gazzaniga M., & Hillyard S. (1971). Language and speech capacity of the right hemisphere. Neuropsychologia 9, pp
273–80
Gazzaniga M., LeDoux J. (1978). The Integrated Mind. New York, NY: Plenum.
Gazzaniga, M. (1967). The Split Brain in Man. Scientific American, pp 24-29.
Gazzaniga, M. (1985). The Social Brain. New York: Basic Books.
Gazzaniga, M. (2005). Essay: Forty-five years of split-brain research and still going strong. Nature Reviews
Neuroscience 6, pp 653-659
Gazzaniga, M. (2013). Shifting Gears: Seeking New Approaches for Mind/Brain Mechanisms. Annual Review of
Psychology 64, pp 1-20
Gazzaniga, M., Holtzman, J., Smylie, C. (1987). Speech without conscious awareness. Neurology 37, pp 682-682
Gazzaniga, M., Mangun, G., Blakemore, S. (2014). The cognitive neurosciences. Cambridge, MA: The MIT
Press
Gazzaniga, M., Sperry, R. (1966). Simultaneous double discrimination response following brain
bisection. Psychonomic Science 4, pp 261-262
Godefroy, O., Lhullier, C., Rousseaux, M. (1996). Non-spatial attention disorders in patients with frontal or posterior
brain damage. Brain 119, pp 191-202.
Goldberg, E. (2001). The Executive Brain: Frontal Lobes and the Civilized Mind. New York, NY: Oxford University
Press
Goodale, M., Milner, A. (1992). Separate visual pathways for perception and action. Trends in Neuroscience 15, pp
20–5.
Van Gulick, R. (2017). Consciousness. The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N.
Zalta (ed.), URL = https://plato.stanford.edu/entries/consciousness
Haggard, P., Eitam, B. (2015). The sense of agency. New York: Oxford University Press.
Halligan, P., Marshall, J. (2007). Towards a principled explanation of unilateral neglect. Cognitive Neuropsychology
11, pp 167-206.
Halpern, M., Güntürkün, O., Hopkins, W., Rogers, L. (2009). Lateralization of the Vertebrate Brain: Taking the side
of Model Systems. Journal of Neuroscience 25, pp 10351-10357.
Helmholtz, H. (1878/1971). The Facts of Perception. In Kahl, R (ed), The Selected Writings of Hermann von Helmholtz.
Middletown, CT: Wesleyan University Press.
Helmholtz, H. (1867/2001). Treatise on Physiological Optics (Vol III). Retrieved from
http://poseidon.sunyopt.edu/BackusLab/Helmholtz/
Hinton, G., Dayan, P., Frey, B., Neal, R. (1995). The wake-sleep algorithm for unsupervised neural
networks. Science 268, pp 1158–1161.
Hippel, W., Trivers, R. (2011). The Evolution and Psychology of Self-deception. Behavioral and Brain Sciences 34,
pp 1-56
Hobson J., Friston, K. (2012). Waking and dreaming consciousness: Neurobiological and functional considerations.
Progress in Neurobiology 98, pp 82-98.
Hobson, J. (2009). REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience
10, pp 803-813.
Hobson, J., Hong, C., Friston, K. (2014). Virtual reality and consciousness inference in dreaming. Frontiers in
Psychology 5, pp 1-18.
Hofer, S., & Frahm, J. (2006). Topography of the human corpus callosum revisited—Comprehensive fiber tractography
using diffusion tensor magnetic resonance imaging. Neuroimage 32, pp 989-994.
Hohwy, J. (2007) Functional Integration and the mind. Synthese 159, pp 315–28.
Hohwy, J. (2013). The predictive mind. New York, NY: Oxford University Press.
Hopkins, J. (2012). Psychoanalysis representation and neuroscience: the Freudian unconscious and the Bayesian
brain. In Fotopolu, A (ed), From the Couch to the Lab: Psychoanalysis, Neuroscience and Cognitive
Psychology in Dialogue. New York, NY: Oxford University Press.
Hopkins, J. (2016). Free Energy and Virtual Reality in Neuroscience and Psychoanalysis: A Complexity Theory of
Dreaming and Mental Disorder. Frontiers in Psychology 7, pp 1-18.
Howard, J., Plailly, J., Grueschow, M., Haynes, D., Gottfried, J. (2009). Odor quality coding and categorization in
human posterior piriform cortex. Nature Neuroscience 12, pp 932-938.
Hudry, J., Ryvlin, P., Saive, A., Ravel, N., Plailly, J., Royet, J. (2014). Lateralization of olfactory processing:
differential impact of right and left temporal epilepsies. Epilepsy & Behavior 37, pp 184-190.
Hurley, S. (1998). Consciousness in Action. Cambridge, MA: Harvard University Press.
Jackson, P., Brunet, E., Meltzoff, A., Decety, J. (2006). Empathy examined through the neural mechanisms involved in
imagining how I feel versus how you feel pain. Neuropsychologia 44, pp 752-761.
Jäkel, F., Singh, M., Wichmann, F., Herzog, M. (2016). An overview of quantitative approaches in Gestalt perception.
Vision Research 126, pp 3-8.
Jeffrey, R. (1999). The logic of decision. Chicago: University of Chicago Press.
Johnsrude, I., Penhune, V., Zatorre, R. (2000). Functional specificity in the right human auditory cortex for perceiving
pitch direction. Brain 123, pp 155-163.
Kandel, E., Mack, S. (2014). Principles of neural science. New York NY: McGraw-Hill Medical
Kant, I., Smith, N. (1929). Immanuel Kant's Critique of pure reason. Boston: Bedford.
Kanwisher, N., Yovel, G. (2006). The fusiform face area: a cortical region specialized for the perception of faces.
Philosophical Transactions of the Royal Society B: Biological Sciences 361, pp 2109-2128
Kensinger, E., Choi, E. (2009). When side matters: hemispheric processing and the visual specificity of emotional
memories. Journal of Experimental Psychology: Learning, Memory and Cognition 35, pp 247-253.
Kim, J., Sosa, E. (2009). A companion to metaphysics. Malden: Blackwell.
Kinsbourne, M. (1987). The material basis of mind. In Vaina, L (ed), Matters of Intelligence. New York: Reidel.
Kinsbourne, M. (2003). The Corpus Callosum Equilibrates the Cerebral Hemispheres. In Zaidel, E (ed) The Parallel
Brain: the Cognitive Neuroscience of the Corpus Callosum. Cambridge: MIT Press.
Kinsbourne, M (1982). Hemispheric specialization and the growth of human understanding. American Psychologist
37, pp 411-420.
Korda, R., Douglas, J. (1997). Attention deficits in stroke patients with aphasia. Journal of Clinical and Experimental
Neuropsychology 19, pp 525-542.
Kreutzer, J., DeLuca, J., Caplan, B. (2011). Encyclopedia of clinical neuropsychology. New York: Springer
Langdon, D., Warrington, E. (2000). The role of the left hemisphere in verbal and spatial reasoning tasks. Cortex 36,
pp 691-702.
Leclercq, M., Zimmermann, P. (2013). Applied neuropsychology of attention: theory, diagnosis and rehabilitation.
Hove: Psychology Press, Taylor & Francis Group.
LeDoux, J., Wilson, D., and Gazzaniga, M. (1977). A divided mind: Observations on the conscious properties of the
separated hemispheres. Annals of Neurology 2, pp 417-421.
Leicester, J. (2008). The nature and purpose of belief. Journal of mind and Behavior 29, pp 219-239
Lew, S. (2014). Hemispherectomy in the treatment of seizures: a review. Translational Pediatrics 3, pp 208-217.
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. The behavioral and
brain sciences 8, pp 529-566
Lindell, A. (2006). In your right mind: right hemisphere contributions to language processing and production.
Neuropsychological Review 16, pp 131-148.
List, A., Brooks, J., Easterman, M., Flevaris, A., Landau, A., Bowman, G., Stanton, V., VanVleet, T., Robertson, L.,
Schendel, K. (2009). Visual hemispatial neglect, re-assessed. Journal of International Neuropsychological
Society 14, pp 243-256.
Llinás, R. (2003). The contribution of Santiago Ramon y Cajal to functional neuroscience. Nature Reviews
Neuroscience 4, pp 77-80
Lockwood, M. (1989). Mind, Brain and the Quantum. Oxford: Blackwell Publishers.
Lodato, S., Arlotta, P. (2015). Generating Neuronal Diversity in the Mammalian Cerebral Cortex. Annual Review of
Cell and Developmental Biology 31, pp 699–720.
Lötsch, J., Ultsch, A., Eckhardt, M., Huart, C., Rombaux, P., Hummel, T. (2016). Brain lesion-pattern analysis in
patients with olfactory dysfunctions following head trauma. Neuroimage Clinical 11, pp 99-105.
Luders, E., Thompson, P., Toga, A. (2010). The Development of the Corpus Callosum in the Healthy Human
Brain. Journal of Neuroscience 30, pp 10985-10990
Markmcdermott. (May 5, 2010). Recent Interview with Gazzaniga and split brain patient 'Joe'. Retrieved from
https://www.youtube.com/watch?v=RFgtGIL7vEY
Marraffa, M. (2012). Remnants of Psychoanalysis. Rethinking the Psychodynamic Approach to Self-Deception.
Humana Mente Journal of Philosophical Studies 20, pp 223-243.
Marzi, C., Bongiovanni, L., Miniussi, C., Smania, N. (2003). Effects of Partial Callosal and Unilateral Cortical Lesions
on Interhemispheric Transfer. In Zaidel, E (ed), The Parallel Brain: the Cognitive Neuroscience of the
Corpus Callosum. Cambridge: MIT Press.
Mascetti, G. (2016). Unihemispheric sleep and asymmetrical sleep: behavioral, neurophysiological, and functional
perspectives. Nature and Science of Sleep 8, pp 221-238.
Mathews, M., Linskey, M., Binder, D. (2008). William P. van Wagenen and the first corpus callosotomies for
epilepsy. Journal of Neurosurgery 108, pp 608-613
May, J., Holton, R. (2012). What in the world is Weakness of will? Philos Stud 157, pp 341-360
McGilchrist, I. (2010). Reciprocal organization of the cerebral hemispheres. Dialogues in Clinical Neuroscience 12,
pp 503-515.
McGilchrist, I. (2012). The Master and his Emissary: The Divided Brain and the making of the Western World. New
Haven CT: Yale University Press.
McLaughlin, B., Rorty, A. (1988). Perspectives on Self-Deception. Berkeley: University of California Press.
Mele, A. (1987). Irrationality. New York: Oxford University Press
Mele, A. (1995). Autonomous Agents. New York: Oxford University Press
Mele, A. (2006). Self-deception and Delusions. EUJAP 2, pp 109-124.
Mele, A. (2009). Weakness of will and Akrasia. Philos Stud 150, pp 391-404.
Mele, A. R. (2001). Self-deception unmasked. Princeton, NJ: Princeton University Press.
Merabet, L., Maguire, D., Warde, A., Alterescu, K., Stickgold, R., Pascual-Leone, A. (2004). Visual Hallucinations
During Prolonged Blindfolding in Sighted Subjects. Journal of Neuro-Ophthalmology 24, pp 109-113.
Miller, M., Clark, A. (2017). Happily entangled: prediction, emotion, and the embodied mind. Synthese, pp 1-17.
Milner, A., Dunne, J. (1977). Lateralized perception of bilateral chimaeric faces by normal subjects. Nature 268, pp
175-176.
Moerel, M., Martino, F., Formisano, E. (2014). An anatomical and functional topography of human auditory cortical
areas. Frontiers in Neuroscience 8, pp 1-14.
Moran, R., Campo, P., Symmonds, M., Stephan, K., Dolan, R., Friston, K. (2013). Free-energy, Precision and
Learning: the role of cholinergic neuromodulation. Journal of Neuroscience 33, pp 8227-8236.
Müller-Lyer, F. (1889). Optische Urteilstäuschungen. Dubois-Reymonds Archiv für Anatomie und Physiologie,
Supplement Volume, pp 263-270.
Mumford, D. (1992). On the computational architecture of the neocortex II. The role of cortico-cortical loops.
Biological Cybernetics 66, pp 241–51.
Murphy, C., Stavrinos, G., Chong, K., Sirimanna, T., Bamiou, D. (2017). Auditory Processing after Early Left
Hemisphere Injury: a Case Report. Frontiers in Neurology 8, pp 226-232.
Mutha, P., Haaland, K., Sainburg, R. (2012). The Effects of Brain Lateralization on Motor Control and
Adaptation. Journal of Motor Behavior 44, pp 455-469.
Myers, R., & Sperry, R. (1953). Interocular Transfer of a Visual Form Discrimination Habit in Cats after Section of
the Optic Chiasm and Corpus Callosum. Anatomical Record 115.
Nagel, T. (1971). Brain Bisection and the Unity of Consciousness. Synthese 22, pp 396-413.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review 83, pp 435-450.
Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive
Psychology 9, pp 353-383.
Nikolaenko, N., Brener, M. (2003). Functional Cerebral Asymmetry and its possible Evolution in Historical Times.
Journal of Evolutionary Biochemistry and Physiology 39, pp 491-501.
Nyhan, B., Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Political
Behavior, 32, pp 1-50.
Oliveri, M., Rossini, P., Traversa, R., Cicinelli, P., Filippi, M., Pasqualetti, P., Tomaiuolo, F., Caltagirone, C. (1999).
Left frontal transcranial magnetic stimulation reduces contralesional extinction in patients with unilateral
right brain damage. Brain 122, pp 1731-1739.
Padovani, F., Richardson, A., Tsou, J. (2016). Objectivity in Science. Springer International Publishing.
Pashler, H. (1994). Dual-Task interference in simple tasks: data and theory. Psychological Bulletin 116, pp 220-244.
Patten, D. (2003). How do we deceive ourselves? Philosophical Psychology 16, pp 229-246.
Paul, S. (2009). Intention, Belief, and Wishful Thinking: Setiya on Practical Knowledge. Ethics 119, pp 546-557.
Pell, M. (2006). Cerebral mechanisms for understanding emotional prosody in speech. Brain and Language 96, pp 221-
234.
Persinger, M., Richards, P., Koren, S. (1994). Differential ratings of pleasantness following right and left hemispheric
application of low energy magnetic fields that stimulate long-term potentiation. International Journal of
Neuroscience 79, pp 191-197.
Pinto, Y., de Haan, E., Lamme, V. (2017). The Split-Brain Phenomenon Revisited: A Single Conscious Agent with Split
Perception. Trends in Cognitive Sciences 21, pp 835-851.
Pinto, Y., Neville, D., Otten, M., Corballis, P., Lamme, V., de Haan, E., Foschi, N., Fabri, M. (2017). Split brain:
divided perception but undivided consciousness. Brain 140, pp 1231-1237.
Porcher, J. (2012). Against the Deflationary Account of Self-Deception. Humana Mente Journal of Philosophical
Studies 20, pp 67-84.
Purves, D., Augustine, G., Fitzpatrick, D., Hall, W., LaMantia, A., McNamara, J., Williams, S. (2003). Neuroscience.
Sunderland MA: Sinauer Associates Inc.
Rankin, K., Gorno-Tempini, M., Allison, S., Stanley, C., Glenn, S., Weiner, M., Miller, B. (2006). Structural anatomy
of empathy in neurodegenerative disease. Brain 129, pp 2945-2956.
Rebuschat, P., Rohrmeier, M., Hawkins, J.A., Cross, I. (2011). Human subcortical auditory function provides
a new conceptual framework for considering modularity. Language and Music as Cognitive Systems 28, pp
269-282.
Reuter-Lorenz, P. (2010). The cognitive neuroscience of mind: a tribute to Michael S. Gazzaniga. Cambridge, MA:
The MIT Press
Ridley, R., Ettlinger, G. (1976). Impaired tactile learning and retention after removals of the second somatic sensory
projection cortex in the monkey. Brain Research 109, pp 656-660.
Rorty, A. (1980). Self-deception, Akrasia and Irrationality. Social Science Information 19, pp 905-922.
Rose, D., Schaffer, J. (2013). Knowledge entails dispositional belief. Philosophical Studies 116, pp 19-50.
Rosenthal, D. (1986). Two Concepts of Consciousness. Philosophical Studies 49, pp 329-359.
Ross, E., Homan, R., Buck, R. (1994). Differential Hemispheric Lateralization of Primary and Social Emotions:
Implications for Developing a Comprehensive Neurology for Emotions, Repression, and the Subconscious.
Cognitive and Behavioral Neurology 7, pp 1-19.
Royet, J., Plailly, J. (2004). Lateralization of Olfactory Processes. Chemical Senses 29, pp 731-745.
Schacter, D. (2001). Memory, brain, and belief. Cambridge, Mass: Harvard University Press.
Schantz, R. (2002). What is truth? Berlin: De Gruyter.
Schechter, E. (2010). Individuating Mental Tokens: The Split-Brain Case. Philosophia 38, pp 195-216.
Schechter, E. (2014). Partial Unity of Consciousness. In Bennett, D (ed), Sensory Integration and the Unity of
Consciousness. Cambridge, MA: The MIT Press.
Schnupp, J., Nelken, I., King, A. (2012). Auditory neuroscience: making sense of sound. Cambridge, MA: MIT Press.
Searle, J. (1992). The Rediscovery of the Mind. Cambridge, MA: The MIT Press.
Sehon, S. (1994). Teleology and the Nature of Mental States. American Philosophical Quarterly 31, pp 63-72.
Sergent, J., Ohta, S., MacDonald, B. (1992). Functional neuroanatomy of face and object processing. A positron
emission tomography study. Brain 115, pp 15-36.
Schroeder, T. (2006). Propositional attitudes. Philosophy Compass 1, pp 65-73.
Siéroff, E. (1994). Mécanismes attentionnels. In Séron, X (ed), Traité de Neuropsychologie. Liège: Mardaga
Smythies, J. (1996). A Note on the Concept of the Visual Field in Neurology, Psychology, and Visual
Neuroscience. Perception 25, pp 369-371.
Sperry, R. (1964). The Great Cerebral Commissure. Scientific American 210, pp 42-53.
Sperry, R. (1968). Hemisphere disconnection and unity in conscious awareness. American Psychologist 23, pp 723-
733.
Sperry, R. (1973). Lateral specialization of cerebral function in the surgically separated hemispheres. In
McGuigan, F (ed), The psychophysiology of thinking: Studies of covert processing. New York, NY: Academic
Press.
Sperry, R. (1977). Forebrain commissurotomy and conscious awareness. The Journal of Medicine and Philosophy 2,
pp 101-126.
Sterelny, K. (1994). The representational theory of mind: an introduction. Oxford: Blackwell.
Stickgold, R., Hobson, J., Fosse, R., Fosse, M. (2001). Sleep, learning and dreams: off-line memory reprocessing.
Science 294, pp 1052-1057.
Sturm, W., de Simone, A., Krause, B., Specht, K., Hesselmann, V., Radermacher, I., Herzog, H., Tellmann, L., Müller-
Gärtner, H., Willmes, K. (1999). Functional anatomy of intrinsic alertness: evidence for a fronto-parietal-
thalamic-brainstem network in the right hemisphere. Neuropsychologia 37, pp 797-805.
Szabados, B. (1973). Wishful Thinking and Self-Deception. Analysis, 33, pp 201–205.
Talbott, W. (1997). Does Self-Deception Involve Intentional Biasing? Behavioral and Brain Sciences 20, pp 127-127.
Tanaka, H., Hachisuka, K., Ogata, H. (1999). Sound lateralization in patients with left or right cerebral hemispheric
lesions: relation with unilateral visuospatial neglect. Journal of Neurology, Neurosurgery and Psychiatry 67,
pp 481-486.
Tang, A., Reeb, B., Romeo, R., McEwen, B. (2003). Modification of Social Memory, Hypothalamic-Pituitary Adrenal
Axis, and Brain Asymmetry by Neonatal Novelty Exposure. The Journal of Neuroscience 23, pp 8254-8260.
Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: The MIT Press.
Thorpe, S., Fize, D., Marlot, C. (1996). Speed of processing in the human visual system. Nature 381, pp 520-522.
Toussaint, M. (2009). Probabilistic inference as a model of planned behavior. Künstliche Intelligenz 3, pp 23–29.
Tucker, D. (1993). Emotional experience and the problem of vertical integration: Discussion of the Special Section
on Emotion. Neuropsychology 7, pp 500-509.
Wesson, D., Wilson, D. (2011). Sniffing out the contributions of the olfactory tubercle to the sense of smell: hedonics,
sensory integration, and more? Neuroscience & Biobehavioral Reviews 35, pp 655-668.
Wilkins, A., Shallice, T., McCarthy, R. (1987). Frontal lesions and sustained attention. Neuropsychologia 25, pp 359-
365.
Wilson, G., Shpall, S. (2016). Action. In Zalta, E (ed), The Stanford Encyclopedia of Philosophy (Winter 2016
Edition). URL = https://plato.stanford.edu/archives/win2016/entries/action
Yang, J. (2014). The role of the right hemisphere in metaphor comprehension: a meta-analysis of functional magnetic
resonance imaging studies. Human Brain Mapping 35, pp 107-122.
Young, A. (1983). Functions of the Right Cerebral Hemisphere. London: Academic Press Inc.
Zald, D., Pardo, J. (1997). Emotion, olfaction, and the human amygdala: amygdala activation during aversive olfactory
stimulation. Proceedings of the National Academy of Sciences 94, pp 4119-4124.
Zomeren, A., Brouwer, W. (1994). Clinical Neuropsychology of Attention. New York, NY: Oxford University Press.
Appendix:
Figure 1.01: Schematic showing how information travels from the visual fields to the hemispheres.
Note that information in the left visual field is carried to the right hemisphere, and information in
the right visual field is carried to the left hemisphere (Gazzaniga 1967).
Figure 1.02: Paradigmatic example revealing the Left-Brain Interpreter, in which the split-brain patient
can point to pictures on the table related to what he has seen on the screen, but whose justifications for
the choices made do not cohere with the truth (adapted from Gazzaniga & LeDoux 1978).
Figure 3.01: The Müller-Lyer optical illusion (Müller-Lyer, 1889). The lower arrows show that
the lines are exactly the same length, yet when gazing at the upper arrows we perceive them
as differently sized, holding no conscious control over our perception of the illusion.
Figure 4.01: Schematic of a possible interaction between populations of deep and superficial pyramidal neurons
leading to predictive processing in the brain. Error units, as superficial pyramidal cells (depicted in orange), convey
prediction error forward through the cortical hierarchy; state units, as deep pyramidal cells (depicted in black),
convey predictions backward through the cortical hierarchy, both across multiple cortical areas and within the same
cortical areas. Information forwarded to superficial pyramidal cells is met with backward predictions from deep
pyramidal cells (both within and between cortical areas), and mismatches result in prediction error. These errors are
forwarded upward through the hierarchy, both revising and correcting future predictions at higher levels of the cortex
in deep pyramidal cells and carrying information forward through the cortical hierarchy. Deep cells constantly
convey predictions to lower and same-level areas, attempting to accurately predict input at the lowest level possible;
superficial cells forward prediction error, shaping predictions and conveying information. The generative model's
backward-forward connections thus work towards suppressing cortical activity, eliminating prediction error at
every level of the hierarchy and aiming for the lowest possible prediction error and maximum certainty over multiple
levels of abstraction (adapted from Friston 2009).
Figure 4.02: Image depicting an example of the
Gestalt phenomenon. Before we analyze the parts,
we see the whole. At first we see a mass of dots and
stains representing nothing concrete, only to then
have a Dalmatian dog, sniffing around a tree, emerge
from the picture. The right hemisphere dominates
such first-contact perceptions, granting us conscious
awareness of the whole, and only then perception of
its parts through the left hemisphere, in later
processing. Furthermore, it is not a gradual
phenomenon: when we recognize the parts, it all
comes at once, and can hardly be unseen again. As the
right hemisphere becomes adept at processing the picture,
it is the left hemisphere that begins to dominate the
perception, persistently granting us awareness of its
parts (adapted from McGilchrist 2012).
Figure 4.03: The schematic on the left depicts the relation between conscious experiences in the hemispheres
according to the Conscious Duality Model (CDM); on the right, the relation between conscious experiences in the
hemispheres according to the Partial Unity Model (PUM). The CDM holds duplicated types of experience (B), with
individual token experiences of that type (A and C) generated separately in each hemisphere. The transitivity
property is preserved in the CDM, as unifiable mental states are all unified (B with A and B with C). Transitivity is
not preserved in the PUM, as some mental states that are not unified with each other (A and C) are each unified with
a third set of states (B). The CDM entails two streams of consciousness; the PUM entails one incoherent stream of
consciousness, yielding something that is neither normal individual identity nor total dissociation (adapted from
Schechter 2010).