
The General Theory of Relevance and Reliability

 A Synthesis of Game Theory and Communication Theory

Forming an Information Theoretical Model of a Biophysical Reality

Compatible with

The Superluminal Neutrino

 And

Biological Symbiotic Cooperation

By

Mats Helander


Contents

Dedication
Abstract
Introduction
Information Theory
Active and Reactive Particles
Space
Time and Causality
The Static Universe
The Semi-Static Universe
Mass as State, as Occupation of Space and as Interaction Points
Uncertainty
Relativity
Relative Distance and Mass
Mass and Energy
Virtual Gravity
The Dynamic Universe
Theoretical Biophysics as Game Theory with Communication Theory
Replicating Agent Systems
Evolution of Natural Law
Active Agent Gravity
Internal and External Mechanics
Gravitons
The Limits to Natural Law
The Universe We Are In
Quantum Mechanics
Quantum Superposition
Relevance Theory
Causal Loops
The Big Bangs
The Power of Cooperation
Superluminal Motion
The Semi-sonic Bat
The Expendable Neutrino
Universal Ping Pong
The Dimension of Trust
Spooky Tango at a Distance
Teleportation
De-coherence and Many Worlds
Collapsing Dimensions
Dark Energy as Virtual Anti-Gravity
Indeterminism
The Necessity of Compromise
Logic and Mathematics
Summary


Dedication

This work is dedicated to Theza Helander, Kerstin Helander, Bo Helander and to Dick Lundqvist.

With special thanks for inspiration to Daniel C. Dennett and Richard Dawkins.

This work is also in living memory of Newton, Darwin, Einstein, Planck and Lao Tze.


 Abstract 

Under Construction 

Introduction

Science has advanced by making measurements and creating mathematical models that match those measurements, improving the correspondence of the models to our world as the number of measurements grows. This approach results in models that are mathematically correct descriptions of our world as far as we have measured it, but it does not allow us to conclude that reality is implemented using the same models we have derived.

With relativity, science arrived at a mathematical description of reality as a set of relative realities that, in a strict mathematical sense, did not require any absolute reality to complete the model. Quantum mechanics led in the same general direction, where the micro scale would be represented as a set of multiple overlapping realities, again with no absolute reality behind it. From a purely mathematical perspective, the existence of an absolute reality became a redundancy.

Apparently mathematically simpler (one fewer reality in the model) and in that sense more attractive, the descriptions of our world that did not include an absolute reality came to prevail in science.

But we remind ourselves that while our scientific descriptions of the world will be mathematically correct, they do not have to match the actual implementation of reality, which could be implemented in any other way that would give the same effect to our measurements. The question then becomes: while reality as a set of relative realities including no absolute reality is a mathematical possibility for modeling our universe, we can ask whether it is the only model that could work and, should we see that it is not, we can even ask whether it is a statistically plausible implementation model for the universe we exist in.

On the one hand, unless logically prevented for some reason, it would seem that an implementation with no absolute reality would win by virtue of being simpler, as it supposes one fewer reality in the model. On the other hand, an implementation using a model with an absolute reality, from which the set of relative and overlapping realities could be derived, would be less costly from an information theoretical perspective and should then be preferred.

In this paper we will show that the set of relative realities of relativity and the set of overlapping

realities of quantum mechanics are fully derivable from a model with an absolute reality, ultimately

presenting a mathematically more attractive scientific model of the universe.

As information theory would indicate that using an absolute reality model would be the least costly

implementation for a universe that gives the effects we see in our measurements, we also observe

that the best (least improbable although not necessary) assumption is that the world around us is

indeed implemented in the form of an absolute reality, where the relative realities are derived from

the inescapable uncertainty in the measurements of its local observers.

In this paper we will examine concepts like time, space, mass and energy as well as logical machines

such as agents and replicators by building up information theoretical models of universes where we add different concepts one by one. By doing so we are able to examine the nature of each concept in


isolation and ensure that they all comply with information theoretical constraints. As we derive the

definitions for our model we must focus on their logical consistency but also on how they match

generally established expectations about the real-world features and behaviors of each concept

being defined, including any obligatory constraints known to be associated with them from real

world observations.

It will be an ultimately information theoretical discussion in which we derive constraints for an abstract model of a universe from constraints that must apply to all information. This exercise is useful because constraints that must apply to any logically consistent abstract description of a universe should logically apply to a concrete universe as well, since the rules of information theory can’t be violated by “non-magical” physics (with a definition of “magic” as something that breaks the rules of information theory). We can see this follow from the observation that the set of all physically concrete universes in existence belongs to the set of all physically possible universes, which is in turn a subset of the set of all logically possible universes, so that all general rules for all logically possible universes (those constrained by the rules of information theory) apply to all existing physically concrete universes as well.

In a way we are examining the entirely abstract specification for a potentially concretely

implementable computer program that if implemented would allow us to ensure the logical

consistency and compatibility of our model with the rules of information theory. This would not

necessarily have to be an expensive endeavor as the simulation of the kind of model universe that

we will end up with does not rely on exponentially expanding information capacity as it only includes

one absolute state model from which ephemeral state models representing the perspectives of local

observers in the model can be derived and then discarded as time in the model proceeds. This is like

the difference between having to evaluate all the positions in a chess state tree and just having to evaluate one step forward, move to the best step, discard the old calculations, evaluate one step forward again, move to the best step, and repeat.
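To make the memory argument concrete, here is a minimal sketch (Python, purely illustrative) contrasting the two strategies. The `successors` and `score` functions are hypothetical stand-ins for whatever transformation rules and evaluation a given game or modeled universe provides; they are not defined anywhere in this paper.

```python
# Sketch: greedy one-step evaluation versus storing a full state tree. The
# `successors` and `score` functions are hypothetical stand-ins for whatever
# transformation rules and evaluation function the modeled universe (or game) uses.

def full_tree_size(state, successors, depth):
    """Count every position a full look-ahead to `depth` would have to keep around."""
    if depth == 0:
        return 1
    return 1 + sum(full_tree_size(s, successors, depth - 1) for s in successors(state))

def greedy_run(state, successors, score, steps):
    """Keep only the current state: evaluate one step ahead, move, discard, repeat."""
    for _ in range(steps):
        candidates = successors(state)
        if not candidates:
            break
        state = max(candidates, key=score)  # move to the best next state
        # everything except the chosen state is dropped here, so memory stays constant
    return state
```

With a branching factor b, the first approach has to hold on the order of b^depth positions, while the second never holds more than the current state and its immediate successors, no matter how many steps are run.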

The specification should describe features matching, as best we can, those we see of the universe around us. If we can logically derive constraints that must apply to any implementation of the specification in the form of a running program, then to the extent that the features described in our specification match those of our universe, information theory and logic suggest that the same constraints should apply to any concrete implementation of a universe such as ours.

The task of the reader will be to consider what constraints any implementation of the abstract

specification for a universe we describe must have and to verify that the logical deductions we use to derive such constraints in this paper are correct. If the deductions are correct then the constraints

we derive should apply to our universe to exactly the extent that the definitions in our specification

match the definitions for the information theoretically essential aspects of our universe.

Thus the reader must take great care to evaluate not just the logical deductions that will follow from

our premises, but also that the definitions we create correspond in a correct manner to the expected

features we see in measurements of the real-world phenomena we are trying to model. The claim of 

this paper boils down to “under these conditions, this and this would (not be allowed to) happen”.

The reader must decide not only if the deductions are correct, but if the required conditions apply at

all to the world around us.


In the same way that Darwin’s argument for Evolution by Natural Selection is ultimately algorithmic

(purely information theoretical, specifically game theory) and can be proven logically correct in a

computer, the model or algorithm proposed in this paper can in the same way be proven formally by

a computer program. Should the program be shown to run correctly and according to the predictions

in this paper we could go on with some confidence to work on matching its features with greater

precision to our pre-existing knowledge in the form of our database of scientific measurements of 

our universe.

In the same way that deep information theoretical conclusions about abstract representations of the

universe can be drawn by considering Conway’s Game of Life (a very simplistic but working computer

model of an abstract universe) even without implementing that model in a computer, we hope to be

able to draw interesting conclusions about abstract universes and thus by extension our concrete

universe from our computer models of the universe.
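For reference, here is a minimal sketch of one update step of Conway’s Game of Life (the standard rules, written in Python for illustration; it is not part of the model proposed in this paper). Even this tiny rule set is enough to support the kind of reasoning about abstract universes referred to above.

```python
# Minimal sketch of one Game of Life update on a set of live cells.
# Standard rules: a live cell survives with 2 or 3 live neighbours,
# a dead cell becomes live with exactly 3 live neighbours.

from itertools import product

def step(live_cells):
    """live_cells is a set of (x, y) coordinates; returns the next generation."""
    neighbour_counts = {}
    for (x, y) in live_cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                neighbour_counts[key] = neighbour_counts.get(key, 0) + 1
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# Example: a "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```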

Thus we will use what are really just logical and mathematical arguments to describe limitations on

what the physical universe we see around us must be. At some level it may also be hard to distinguish between the concepts of an abstract and a concrete universe. Nonetheless towards the

final pages of this paper we will go on to discuss what could constitute such a distinction and how a

universe could make the jump from abstract to concrete.

To see an example of a possible abstract universe that could perhaps not be concrete in a meaningful

sense, we will examine a fully static universe in which no time passes. There is space in such a

universe which might be enough to satisfy the requirements of concreteness for some readers but

opinions might differ as to whether a meaningful concept of time, distinguishable from space, is also

a necessary component for a universe to be really concrete. Towards the end we will discuss how

there could be a minimum requirement for any meaningful description of a concrete universe such that at least three concepts must be present: space with some kind of shape to it, time such that

shapes of space can change and some form of balance between causality and randomness

constraining the shape changes such as to be exploitable by self-replicating shapes.

The argument presented in this paper will be ultimately founded in logic and the constraints imposed

on all information as described in information theory. In order to make sure all our logical deductions

take place on solid ground we must begin by clearly establishing what those basic constraints are.

Information Theory

Concrete information has an energy cost in space over time and relies on distinction (but not conflict) to have meaning. Conversely, the destruction of abstract information has an energy cost such that

information, once in existence, is never truly lost.

The most fundamental premise we should make for any model of a universe and that we should

always demand of the model we discuss in this paper is that the sum total of mass and energy

should remain constant.

According to the Second Law of Thermodynamics, all signals in space degrade over time. A signal is to

be thought of as any kind of concrete representation of information (an ordered or less than random

state) meaning that the information must be corrupted over time (the inevitable direction of time

pushing the universe towards less and less ordered states). This law can be seen to correspond to


one of the most basic premises in information theory, which is that all sustenance of concrete

representations of information has an energy cost. On the one hand, thus, information theory seems

a very gloomy topic focused on the inevitability of the universe turning into chaos.

On the other hand, information theory also tells us that, once in existence, it would actually require

energy to totally destroy (or hide) information. These two premises seem completely at odds with each other until you realize that the Second Law of Thermodynamics strictly speaking only applies to

concretely represented information. That is, information represented with solid matter. Information

in light particles, on the other hand, is eternal.

The simple way to picture this is that if you write a message in the sand, tides and crabs will

eventually move the sand so the concrete representation of your message becomes lost, but the light

particles that bounced from your message when it was there will forever zoom through space with

the information about what the message looked like. As they bounce off more things they will fill

with more information but they will never lose the information about your message as they do so.

This is why we can look “back in history” towards the birth of our cosmos: the light particles we pick up still carry information about that event, and there are even particles that haven’t seen much other

action yet such that we can start to see a fairly clear picture of very early cosmological events.

Thus information is never truly lost, as this would break the most fundamental premise we have:

That the total mass and energy of the universe should remain constant (there is no extra energy

there to totally destroy information). So on balance we seem to have a dark side and a bright

side to information theory. Solid representations of information as patterns of solid particles will

wither but the pattern itself is never truly lost from the universe as it remains encoded in light

particles.

We will derive our conclusions from the premises of three basic information theoretical rules relating

to the concepts of space, time and distinction. Conversely, you might say that we will look at the

definitions of the concepts space, time and distinction from an information theoretical perspective.

It will be important in this paper to provide clear mental models for all concepts the reader is asked

to consider. If there is no clear mental model of what we are conceptually trying to say, it will be hard

to verify if the claims check out.

We will thus often take care to describe ultimately information theoretical relationships in the form

of concrete examples using space, time and distinction in the form of mass (the distinction of 

differential density for space) such that the reader can build and run models in their heads to see

that, logically, they should behave in the way we propose they should.

The three information theoretical rules we will rely on in all our arguments are the following:

1)  Information has cost of space, such that without any space for it to be in there is no

information. Space can be thought of in terms of informational capacity and also in a logically

matching way as potential or energy . In the model we will consider in this paper there is a

logical minimal energy level associated with “empty” space and that is represented by the

number 1 such that mathematically speaking the energy of space approaches 1 rather than

0. Mathematics allows us to place the zero where we like, so we use it in the fundamentally logical way of letting 0 energy represent the concept of no energy at all, whereas we let the


symbol 1 represent that minimal energy or rest mass for a particle to exist in a reliable way

to outside observers (such that it can represent non-random information to the observer,

meaning that it represents informational capacity to the observer). We see that information

has a basic minimal energy cost of space that we can think of as the information needing

space to exist in. Furthermore, moving information through space is associated with

degradation of the information such that a signal degrades in proportion to the distance in

space it has to travel .

2)  Information has a cost in time, such that (unless completely isolated in its space) without

maintenance information is corrupted over time. Maintaining isolation or maintaining the

non-isolated information both draw energy, so what we state here is that information has

energy  cost over time. In other words a signal degrades in proportion to the distance in time

it has to travel . This is essentially a restatement of the Second Law of Thermodynamics and is

a consequence of statistics. We should note here that information theory also states that

ultimately information is not destroyed spontaneously (you have to add energy to destroy it)

which seems like a contradiction at first but we have seen that the solution is that while any

concrete, solid particle representation of information will wither over time, the information

about all the states the information has been in (including corrupted states of course) will

live on forever as captured in photons. What if we imagine some pattern of solid particles

that no photons ever bounce off? If other solid particles bounce off them then the

information is abstractly contained in those particles and if no particles at all (solid or

photons) bounce off the pattern then those solid particles are also so isolated as to not

become corrupted over time, so everything still works out (no information is destroyed). We

conclude by observing that such a condition would correspond to a very strong isolation that

would require a lot of energy to maintain.

3)  Information relies on distinction. This is like observing that the bits of a computer need two

states, 0 and 1, for it to be such a useful container of information. With, say, only zeros in it,

the only information it could represent would be the number of zeroes it had capacity for. A

computer with 6 memory cells that could only hold a zero each would only ever represent

the information of the number six (a rather inefficient use of a computer). Inefficient or not,

if there is any information (as the number 6 is) then it still relies on a minimal distinction: in

this case that between a minimal amount of memory space inside a computer and no

amount of memory space inside a computer.

To create a solid mental model against which to verify the logical claims that follow from these rules of information theory, we can picture them as follows: imagine that one wants to store a message (in binary, of course) by creating a pattern of little pebbles in the sand on a beach.

You measure up a certain space with a twig such that all the pebbles would fit inside and use the twig

to draw parallel lines at equal distances along the beach. Then you go ahead and place a pebble in

every space between two lines where you want a 1 and leave the space between two lines empty

when you want a 0. Note that if you have more 1s than 0s in the pattern, you would invert the strategy and let pebbles represent zeros, such that having more 1s than 0s does not suddenly place a new requirement on having a larger number of pebbles. You will in other words never need more than half as many pebbles as you have drawn cells in the sand. If you have no pebbles you could just draw


another little line between those lines you had drawn where you wanted a 1 and no extra line where

you wanted a 0.
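As a small illustration, the inversion strategy can be written down directly (a Python sketch, not anything from the paper itself). One extra flag recording whether the inversion was applied is an assumption added here so the pattern can be decoded unambiguously.

```python
# Sketch: store a bit string as pebbles, never using more than half the cells.
# If the message has more 1s than 0s, store its inverse instead; the `invert` flag
# (an assumption added here) records whether the inversion was applied so the
# pattern can be decoded again.

def encode(bits):
    """bits: string of '0'/'1'. Returns (inverted?, pebble positions)."""
    invert = bits.count("1") > len(bits) // 2
    pattern = [b == ("0" if invert else "1") for b in bits]
    return invert, [i for i, pebble in enumerate(pattern) if pebble]

def decode(invert, pebbles, length):
    pebbles = set(pebbles)
    return "".join(
        ("0" if invert else "1") if i in pebbles else ("1" if invert else "0")
        for i in range(length)
    )

message = "1101101"
flag, positions = encode(message)
assert decode(flag, positions, len(message)) == message
assert len(positions) <= len(message) // 2   # pebble count bounded by half the cells
```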

The important part is the distinction of density such that a pebble with mass is obviously denser than

no pebble at all, but we can also see that compared to the length of the twig you can use the

distinction such that lines are drawn at either the full length or half the length of the twig: the density of “lines per space” can be higher or lower.

The concept that information has a cost of space of course means that you need somewhere to place

the pebbles or at least draw the lines. Furthermore, if you keep moving the pattern away from the

rising tide, statistically speaking you will eventually make a mistake. We would be able to observe

that moving information through space is associated with degradation of the information such that a

signal degrades in proportion to the distance in space it has to travel .
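A small numerical sketch (Python, with an assumed per-move error probability; the numbers are invented for illustration) shows why copying the pattern again and again as it moves up the beach corrupts it roughly in proportion to the distance covered, at least while the error rate stays small.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def move_pattern(bits, p, steps):
    """Copy the pattern `steps` times; each copy flips any cell with probability p."""
    bits = list(bits)
    for _ in range(steps):
        bits = [(1 - b) if random.random() < p else b for b in bits]
    return bits

original = [1, 0, 1, 1, 0, 0, 1, 0] * 100
for distance in (1, 10, 100):
    moved = move_pattern(original, p=0.01, steps=distance)
    errors = sum(a != b for a, b in zip(original, moved))
    print(distance, errors / len(original))  # corrupted fraction grows with distance
```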

The concept that information has cost in time just means that unless you build a really good wall

around the information, when you come back tomorrow the random impact of wind, rain and bypassing crabs will eventually move some of your pebbles out of place. Of course, if you

build a wall around the pebbles you would always have to come back periodically to maintain that

instead, for exactly the same reason (the wind and, unless you could strike some kind of deal with them, the crabs. Killing the crabs is not really a viable solution, as nature left to itself seems to provide

a very large supply of crabs and most of us would like to keep it that way).

The concept of distinction, finally, means that we need some way to describe the difference (pebble,

no pebble, even just more marks in the sand) between two equally long distances of space (sand).

Pebbles versus no pebbles will be a useful way to mentally model this concept of distinction as it will

help us to understand the fundamental relationship between space and mass. The distinction we will make is thus that one information cell state will have lower density (as in less mass or, inversely,

more space) and the other information cell state will have higher density (more mass or less space)

such that mass as a concept will act as the inversion of the concept of space.

With these basic concepts in place we will now go on to use them to build our information

theoretical models of universes that we can examine for logically necessary behaviors. The path to

building up our first universe will start with particles, as they will be the basic building blocks in all of 

the several model universes we will examine in this paper. As we go on, take care to build up the

corresponding mental models, run them in the mind in accordance with the suggested behaviors and

verify that their behaviors do not become logically or mathematically impossible.

 Active and Reactive Particles

Our model will consider two mathematical classes of particles that we will call active particles and

reactive particles and that will correspond to our information theoretical concept of mass and space

as inverse concepts (the distinction that allows information to be information) such that active

particles are like solid particles with mass and reactive particles are like space (or, as we shall see,

photons).

The corresponding distinction we will make between active and reactive particles will be to state

that:


•  In our model, reactive particles only carry information about the active particles but no (or

rather only the logical minimum of) information about themselves whereas the active

particles can carry information about themselves and potentially about other active particles

as well.

We will also state our definitions such that:

•  In our model, active particles can cause changes in the information state of other active

particles (which can be pictured as their ability to push each other around) as well as being

able to change the information state of the reactive particles (reactive particles can bounce

off active particles). Reactive particles on the other hand can affect the information state of 

each other (push each other around) but are not able to affect the information state of any active particle anywhere directly (reactive particles can’t push active particles around); they can only impact active particles to the extent that an active particle detects the information about other active particles carried by a reactive particle and reacts to that information (if Bob makes a motion as if to push Caesar and Caesar flinches and falls, Bob never physically

pushed Caesar even though he directly caused him to fall via information transmitted in

photons).

While the active particles can be thought of as solid particles, the reactive particles could really be

thought of as two different things (that may not really be so different after all). The reactive particles

in our model will represent the real-world concepts of space (in this model we think of space as a

reactive space particle) as well as the photon.

Information theory tells us that all information has cost, and so the default assumption should be

that one active particle carrying information about itself and a reactive particle carrying essentially the same information about the active particle ought to be equally “heavy” from an information

theoretical perspective. However, we will create our definitions in such a way as to make this not

true by stating that a reactive particle can be lighter from an information theoretical point of view

than the active particle it carries information about. We can thus see that it makes sense for an

information theoretical model to call the reactive particles “light” particles and the active particle

“heavy” (massive or solid) particles.

Relating the information in the particles to the concepts we usually expect of a universe, we say that the information of a particle represents spatial shape such that it describes a shape in one or more dimensions of space. Active particles are solid particles with their own shape, whereas reactive particles have no shape of their own (are not solid) and can only carry information about the shape

of the active particles (as they carry no information about themselves to say what their own shape

should be).

The terms active and reactive thus relate to how active particles can maintain a shape due to their

state information about what their shape should be, whereas reactive particles cannot influence

their own shape (they carry no information about how to do so). The result is that the shape of a

reactive particle will always be formed in reaction to the active particles it interacts with.

To picture this, imagine a solid particle with a certain shape. The space around it could be considered

a reactive particle with a shape that is entirely the consequence of the solid particle. Herein lies the


explanation of why the reactive particle would not become as “heavy” (information has cost) as the

active particle even though it carried information about it. It is essentially free for the space around a

solid particle to represent the information about the shape of the particle, as no new information has

to be added to the system for the space around the solid particle to retain its shape. The inverse of 

an information state is directly mathematically derivable (101 becomes 010) so we only need the

information in the active particle to be able to completely derive the shape of the reactive particle by

inversion. An alternative to the terms active and reactive particles would be to talk of particles and

inverse particles, but we will stick with the terms active and reactive particles throughout the rest of 

this paper.
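To make the inversion claim concrete, a tiny sketch (Python, purely illustrative) shows that the shape of the surrounding space is fully determined by the shape of the solid particle, so no additional information needs to be stored for it.

```python
# Sketch: the "shape" of space around a solid particle is just the bitwise inverse
# of the particle's own shape, so it carries no additional information of its own.

def inverse_shape(active_shape):
    """'101' becomes '010': derive the reactive (space) shape from the active one."""
    return "".join("1" if bit == "0" else "0" for bit in active_shape)

assert inverse_shape("101") == "010"
assert inverse_shape(inverse_shape("100110")) == "100110"  # inversion loses nothing
```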

We must also go on to note that without instantaneous spread of information, if the solid particle

changes its shape the effect could not be instantaneously transmitted to other distant solid particles

via the corresponding shape change of space. The effect would have to travel at a maximum speed of 

information in a sort of ripple through “empty” space, and so we see that there must be some sort of 

limit to how close to a true concept of “nothing” empty space can really be as at the very least some

type of ripple effect or corresponding means of communicating information about state changes of 

solid particles must be able to travel through it. This corresponds to our assumption of a minimal

energy or information capacity for empty space such that the energy, minimal rest mass or

information capacity of empty space in our formulas will mathematically be seen to approach 1

rather than 0.

In other words reactive particles must have some minimal information state of their own, but we will

often talk about them as if they were completely empty of their own state. We must remember in all

such cases that we are really talking about a type of space particle that would still not be completely  

empty. Whenever we talk about reactive particles as if they were entirely the opposite of solid we

must remember that we are then just visiting an idealized mathematical universe where such things

could be possible, but we know that logically they do not completely correspond to the concept

of void . We thus make a mental note that in physical reality it should logically be the case that

reactive particles would have to contain at least some minimal amount of state (as we do not allow

instantaneous information spread in a universe where time does pass, something we will go on to

examine in greater detail).

One possibility for how just a little state in reactive particles could communicate the required

information about state changes of active particles is that a reactive particle has just one piece of 

state information of its own – its position. Changes in positions of empty space particles could be

enough to communicate the information about shape changes in active particles. The phenomenon

of light could then be a case of ripple effects in the position changes of “empty space”.

To recapitulate, reactive particles have in our model only the logically minimal rest mass and we let

the concept of the reactive particle represent both the real-world concepts of photons and empty

space, whereas active particles have a more than minimal rest mass and are seen as the

representatives in our model for the real world concept of solid particles.

We have skipped ahead slightly to be able to introduce the fundamental concept of particles, but

their definitions rely to some extent on concepts of space, time and mass – terms we have defined

from an information theoretical perspective but that could use some more careful inspection to ensure that our definitions of them not only match the expected features that we conceptually associate


with these terms as a result of our scientific measurements but that they all represent

distinguishable concepts that require distinct classification.

Specifically, it seems hard to distinguish between the concepts of time and space at this point (they

seem to have very similar effects on information) and so we should take some care to figure out

what the actual distinction between time and space could be. Thus we will now step back to inspect our most fundamental definitions before we start building model universes with our particles.

Space

What qualities must we associate with the concept of space to meet the basic requirement on all our

definitions to fulfill generally expected features for the concepts we use?

At the very least, we should assume that any concept we want to model including space needs a

minimal amount of information to describe it. Equipped with only this fundamental observation we

can go on to examine some other necessary and expected qualities of space, again from an

information theoretical perspective but this time with greater attention to what we mean by the expected behavior of space.

As we have seen, one aspect of space is to relate it to the information theoretical concept of 

informational cost, such that all information has a basic cost to the extent that it has to be stored

somewhere. In other words, storage capacity is one aspect of space that we should include in our

definitions.

Another important constraint on the behavior of space that we also get from information theory is

that space is supposed to have a quality such that a signal degrades the greater the distance it has to

travel. Thus space should also embody an aspect of distance, such that our definition of space should

not only reflect the storage needed to store the information about it; the information should in turn

be taken to represent geometrical distance such as can degrade a signal.

With the idea of geometrical distance comes the idea of geometrical shape, so we arrive at the idea

that the concept of space should represent information with storage cost where the information is

interpreted as geometrical distance (the kind of thing that degrades signals trying to traverse it), which is to

say that the information represents shape.

Time and Causality

With the concept of space in place we go on to examine the concept of time. We start by relating the

new concept of time to our established concept of space by assuming that one way to look at time is

as just another dimension of space and then we go on to see why this assumption does not quite

hold in order to distinguish between our concepts of space and time.

If we start by looking at the storage capacity for two dimensions of space, we could see this as a flat

checkerboard where each cell holds information about the shape in that position. To turn this into a

three dimensional space, we would extend the checkerboard into a box, such that if we had length *

width flat cells on the board, we would get length * width * height voluminous cells in the box.

However, to turn this into a world with two space dimensions and one time dimension we could also

 just extend the board into a box where the height of the box would correspond to the amount of time in the world. From a strict information storage capacity perspective, it seems time and space


will have the same basic storage requirements and so far cannot really be distinguished from one

another.

Furthermore we also know from information theory that time, just like space, will act to degrade

signals so the concept of some kind of geometrical distance seems to apply to time in just the same

way as it does to space (moving the pebbles up the beach has the same general effect as letting them stay in place a long time: either you, the wind or the crabs mess things up eventually). So far the

definition we have used for space seems like it could be used to model time as well, and we wouldn’t

really need two separate concepts.

Although space and time can have the same effect in how they both distort a signal and can both be

talked about in terms of similar information storage capacity requirements, they are not exactly the

same thing. We could have forty dimensions of space but without any activity (change or motion) 

there would be no meaningful concept of time.

We could create a rule stating that one of the space dimensions should in fact act as a time dimension, and this would work from the information theoretical perspective as the total cost of

information in the system would not be affected (it would grow slightly to include an activity pointer

but would not keep growing after that). As we have seen there is a fundamental connection between

the concepts of time and space such that they have the same type of informational cost.

Nonetheless, if we create such a rule we see that we have asked one former space dimension to have

some kind of special status compared to the other space dimensions and thus we understand that

there must be some difference between the concepts of time and space. Logically, if we represent all the shapes in the life of a universe in the memory of a computer, the idea with time is to say that not all of those shapes should be interpreted as active in total parallel. The concept of different states being either active or not active at different points in time is known as activity and so we see that the

concept of time must be related to the concept of activity.

Furthermore, the concept of time together with causal rules (non-random activity) can minimize the

memory requirements on such a computer. Unless there is some need to keep track of past and

future shapes, a computer with rules for transformations between shapes would only need to have

enough memory to store the current shape of the universe plus some extra memory in which to

perform calculations and then it could use its processor to apply the transformation rules to arrive at

the next shape, then the next shape from that and so on, discarding old historical states as it goes.
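A minimal sketch of this point (Python; the update rule below is an arbitrary stand-in, not anything proposed in the paper) keeps only the current shape plus a small work buffer, however many steps are simulated.

```python
# Sketch: a universe evolved by a causal transformation rule needs memory only for
# the current shape (plus scratch space), no matter how many steps are run.
# The rule below (rotate the cells and flip the one that wraps around) is an
# arbitrary stand-in for whatever causal rules the modeled universe uses.

def rule(shape):
    """Arbitrary deterministic transformation of the current shape."""
    return [1 - shape[-1]] + shape[:-1]

def evolve(shape, steps):
    for _ in range(steps):
        shape = rule(shape)   # the old shape is discarded here
    return shape

print(evolve([0, 1, 1, 0, 1], steps=1000))   # memory use is independent of `steps`
```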

In our model we thus define space and mass as information states with distinct character. We could

use the symbols 0 and 1 to represent them with logical and mathematical notation but it will be in

line with the conclusions of this paper to see that they could be better represented by 1 and 2,

letting 0 represent the undefined state where there is no reliable information at all - as in the null  

value commonly seen in computer science or as in the undefined range between the maximal low

value interpreted as 0 and the minimal high value interpreted as 1 in a digital computer.
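As a trivial illustration of this convention (a Python sketch; the names are invented here, not taken from the paper), a cell can hold one of three values, with 0 reserved for “no reliable information”.

```python
from enum import IntEnum

# Sketch of the state convention described above: 0 is reserved for "no reliable
# information" (like a null value), while space and mass are the two defined,
# mutually distinguishing states. The names are illustrative only.

class CellState(IntEnum):
    UNDEFINED = 0   # no reliable information at all
    SPACE = 1       # the low-density state
    MASS = 2        # the high-density state

cell = CellState.MASS
print(cell.name, cell.value)   # -> MASS 2
```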

Time is defined by the distinction of active and inactive information states such that we can interpret

information as something that can change with time. With only random activity, however, time can

still not really be distinguished from space except by restating our definition that different states

could be interpreted as active or inactive. We note that if states that went from active to inactive


could be considered “historic” and for that reason could be discarded we could begin to understand

a distinction between time and space in terms of differential storage requirements in a computer,

but the computer would still have to hold all future states in its memory until they could be

discarded. Should the future be infinite, the computer would require infinite storage capacity and we

see that discarding historical states would then not really help reduce storage requirements. If time,

on the other hand, were cyclical then future states would contain historical states and so they could

not be discarded, leaving us again with no real way to tell the distinction between time and space

without the introduction of one more distinction:

Causality is defined in contrast to randomness by the impact it has on the distinction between time and space/mass: it makes an information state more compressible. Non-random activity in the

form of state transformation rules allows a computer to calculate all future states and if it is also

allowed to discard historical states we can see that the memory requirements on a computer are

dramatically decreased, from having to contain information about all possible states to containing

the information about the active state and the necessary memory to compute the next active state

from the transformation rules.

We now have definitions in our model for the five concepts space, mass, time, causality and

randomness with logical and distinguishable meanings in relation to each other and to the

information theoretical aspect of how abstract systems with these concepts could be represented in

a theoretical computer.

Space and mass are seen as a conceptual pair such that one is the conceptual inverse of the other.

Causality and randomness are another conceptual pair where one is the conceptual inverse of the other. Time, finally, relates the two conceptual pairs to each other by stating that the two ways the relationship between mass and space can change are for it to change either randomly or by cause and effect.

To recapitulate, for a computer to model only the concept of space the computer itself would only

require memory to store all the information about its modeled space, which in turn implies that the

computer itself only really needs some actual, external space to exist in. With only time added to the

model, the storage (memory) requirements on the computer grow, but only in the same way as they

would for an additional modeled dimension of space, and so we would still only need a computer

with memory but no processor and we could not in any meaningful way distinguish between the

modeled concepts of time and space.

With time and causality added to the model, the computer needs less memory as states can be

computed but the computer will now also need a processor as well as some actual, external time in

which to do the computing and some external rules of causality allowing its processor to work. In

other words, to represent space a computer needs space, to represent time and causality a computer

needs time and causality.

Rather than consider this a form of circular logic, we should see it as about the closest we can get to a fundamental understanding of the concepts space, mass, time, randomness and causality in an

information theoretical sense. It is also important to point out that it is in fact not a case of circular

logic for the same reason that Darwinian Evolution by Natural Selection is not.


An objection sometimes raised against Darwin’s algorithm goes along the lines of “fine, humans

descended from apes and apes descended from fish, but it had to start somewhere! How did the first

one come about?”

It is a recursion that seems like it should have no end condition, which equates to an infinite

recursion. An infinite recursion is as bad as an infinite loop (which is what circular logic represents) in that it would cause a computer to “freeze”. It could even be seen as worse than an infinite loop in that it will also try to eat up all the memory in the computer in the process (by pushing more and more state onto a so-called “recursion stack”).

But the obvious resolution to the Darwinian paradox is that the first being capable of replicating itself 

came into existence completely by chance. Correspondingly the answer here is that the root universe

(or computer) with the necessary qualities of space, time and causality such that it could result in a

system of computers modeling computers (or universes containing other universes) could have come

into existence completely by chance. As soon as it does, the concepts of space, time and causality as

defined by us would then continue to make sense from that day on for everyone inside such a universe or any of its sub-universes.

A computer in this context should be thought of as a logical concept as in a Turing machine. What we

can then see is that there is a relationship between the definition for a logical Turing machine and

the logical definitions for time, space and causality. A Turing machine can be defined using the

concepts of time, space and causality but conversely the concepts time, space and causality can be

filled with meaning in relation to the definition of a Turing machine.

We now know that space and time are not the same things in our model, although time and space

can have similar information costs and distortion effects on signals. We can understand what their different functions in an information system should be because they match fundamental concepts in

the definition of the logical machine known as a computer.

Thus we see that at a minimal level, the concept of space in our model should have to do with the

existence of information that has storage cost and that represents geometrical shape; our concept of

time should relate to how information states can change, noting that strictly this only requires more

room in the storage of an external computer running the model of a universe with both space and

time, such that it can store all the information states. Causal or non-random time concerns how

states become different based on rules rather than randomly, which minimizes the storage

requirements of the computer running the universe program as the information for one of its dimensions no longer has to be kept around, but it introduces the requirement of external time and

causality for the external computer to work in.

As we build up a model of a universe by adding concepts to it, we start by considering first a universe

without movement such that we have space but no time. Then we go on to add activity or motion 

giving meaning to the concept of time. We will see that with the concept of directional time follows

uncertainty which in our model will be seen to result in relativistic and quantum mechanical

experiences for all local observers inside it.


The Static Universe

In our model of a fully static universe, both active and reactive particles would be completely

stationary. There is never any motion so even though it has a shape implying that the concept of 

space is present, time does not pass in this universe. We can theoretically store the full static

universe in a computer with only memory but no processor, and so we only need the concept of 

space to represent this universe.

The static universe is obviously not a very interesting place since nothing ever happens there and so

the only reason to pay it any attention is that it is a useful point of reference when contrasting

concepts to each other helping us to distinguish clearly between the ideas of space and time.

The Semi-Static Universe

In our model of a semi-static universe, active particles are stationary but reactive particles are

mobile. In other words, solid particles cannot move but information about them can move inside the

mobile reactive particles. We will use the semi-static universe to examine in isolation some

interesting constraints that will continue to apply as we go on to discuss the fully dynamic universe

where both active and reactive particles can move.

Remind yourself as we go along that the model we are building is of a universe with an absolute

reality. We will go on to see how models of the relative and overlapping (quantum) realities can be

derived from this model by including an aspect of some necessary uncertainty about the exact state

of the absolute reality (regarding the micro level as well as the macro level) for all observers inside

our model.

Extensive experience with mentally modeling a universe without any absolute reality in it, such as many scientists may have, could turn this into an unintuitive exercise for some readers. It should not be considered patronizing but helpful to say that we should picture this in our heads the way a child

would. If we don’t flex our mental muscles too much we can see that we are only trying to build a

mental picture to the effect that we begin in our model by representing what is really there under

the assumption that in our model something is really there.

Note that while we will examine a chaotic universe towards the end of this paper, here we will skip

directly to a universe with time that is based on causality such that there are some rules governing

the movement in the semi-static universe. In other words, a computer running a semi-static universe

program would not only require memory (space for the computer to exist in) but also a processor

(the computer needs to be in a universe that in turn has time and causality).

The requirements on computers running semi-static and dynamic universes (where as we shall see in

following sections both active and reactive particles can move) are thus exactly the same and so we

see that our concepts of space, time and causality will work the same in semi-static and dynamic

universes. The only difference between them will really be the (artificially imposed for analytical

reasons) rule that in the semi-static universe active (solid) particles are unable to move.

We will go on to pick a convenient set of causal state transformation rules for our universe by having

it simulate the behavior of our reactive particles zooming around in straight paths and bouncing off 

each other (in such a way as to require no additional energy to maintain their courses through space,


making them bounce around indefinitely unless disturbed by an outside force) and in the case of the

dynamic universe both active and reactive particles will be bouncing around.

There could be other rules and we will end up seeing that any rule set will in fact do (as long as it can

give rise to self-replicating shapes), so we pick rules that make sense and make it easy to picture how

they work. Thus we will say that the things that move in the semi-static and dynamic universes don’t move at random but rather follow linear rules of movement where shapes don’t change paths

spontaneously but only as the result of interactions with each other.
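
As a concrete (and entirely hypothetical) illustration of the rule set just described, the following minimal Python sketch mimics a semi-static universe: active particles stay put, reactive particles travel in straight lines and crudely reverse course when they meet an active particle, picking up a record of the encounter. All names and numbers here are my own and are not taken from the paper.

```python
# A minimal sketch, assuming a 2D toy world: active (solid) particles are stationary,
# reactive particles move on straight paths and simply reverse course when they meet
# an active particle, recording which active particles they have interacted with.
from dataclasses import dataclass

@dataclass
class Active:                 # stationary, occupies a position in space
    x: float
    y: float

@dataclass
class Reactive:               # mobile, carries information about active particles it has met
    x: float
    y: float
    vx: float
    vy: float
    seen: set = None          # indices of active particles this carrier has interacted with

def step(actives, reactives, dt=1.0, radius=1.0):
    """Advance the toy universe one tick: reactive particles move, actives stay put."""
    for r in reactives:
        r.x += r.vx * dt
        r.y += r.vy * dt
        for i, a in enumerate(actives):
            if (r.x - a.x) ** 2 + (r.y - a.y) ** 2 <= radius ** 2:
                r.vx, r.vy = -r.vx, -r.vy             # crude "bounce": reverse direction
                r.seen = (r.seen or set()) | {i}      # the interaction is the information pickup

actives = [Active(0.0, 0.0), Active(10.0, 0.0)]
reactives = [Reactive(5.0, 0.0, -1.0, 0.0)]
for _ in range(20):
    step(actives, reactives)
print(reactives[0].seen)      # which active particles this carrier has 'learned about' so far
```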

From the perspective of an observer external to the model who would be looking at the absolute

reality and be able to see mass and distances in space and time as they really were (unlike relativistic

observers inside the model) the passing of time in the semi-static universe means that the reactive

particles are zooming around, such that at different points in time they are in different places. While

this time does pass, it has no direction (yet) such that it would make more sense for it to go forwards

than backwards as there is no obvious way to tell what would even be the forwards and backwards

of this kind of time.

It doesn’t really matter which way the balls zoom. Assuming the transformation is loss free - that is,

information is not destroyed as the balls move around, and we have seen that information theory

asks that additional energy be inserted to ultimately destroy information as captured by bouncing

reactive particles with minimal rest mass, so this assumption should hold fast – the computer would

essentially be free to execute either the given rules or the inverse of those rules without any logical

consequence to the system (except that it would be running in the opposite direction).

As we have observed, there is an information theoretical proposition to the effect that information

once in existence prefers not to be totally destroyed (even though it can be discontinued from concrete representation by solid particles) so our default assumption should be that without

additional features for destroying information, our transformation rules for the universe should be

expected to be loss free. Unless the passing of time in one direction rather than the other would

mean that information was being constructed (implying it would be destroyed if time ran the other

way) time could run equally well forwards and backwards and there would be no logical way to tell

which is which.

From the local perspective of an active particle inside the absolute reality of our model, however,

time has a directional meaning to it. Local observers internal to the model would experience time to

have a definite direction, such that distinguishing between going forward and backward in time becomes meaningful, and it is based just on the difference in constructing versus destroying

information.

The following argument will be an essential aspect to this paper as we use it to define how the

relative realities (and eventually the overlapping quantum realities) of local observers will be derived

from the one absolute reality in our model. The reader should thus take care to see that the logic in

the coming paragraph checks out:

If each active particle starts out in a state of ignorance about the other active particles around it then

from the local perspective of an active particle, going forward in time means that more and more


reactive information particles will interact with it, giving it a continuously improving understanding of 

the shape of the universe around it as represented by other active particles.

Note that if the speed of the reactive, information carrying particles were to be infinite – if 

information could instantaneously spread everywhere at once – we are back to having no meaningful

description of the passing of time with direction (which according to its many fans is the only kind of time worth having). Thus as we move on we will base our understanding of directional time in our

model on how in the semi-static universe, time moves forward from the local perspective as the

information active particles have about each other continues to increase.

The conclusion is that not only is movement (activity) a logical necessity for us to be able to model

the concept of time in a meaningful way, but more precisely less than totally random (rule based)

activity where the maximum possible speed of information about the activity is less than infinite is

required for the passing of time to have the kind of meaning we prefer it to have in the form of 

reduced memory requirements of a computer. Finally we see that (at least thus far) we have to

invoke the local perspective to find the concept of directional time, which is the kind of time that we should assume most readers would consider themselves to experience and so is the kind of definition

for time we should strive to include in the model.

Universes with infinite speed of information and thus no time are of course thinkable, but like the

static universe they are boring places where the kind of directional time we are interested in does

not pass so we will continue by considering models of universes with maximum information speeds

and less than totally random transformation rules such that directional time can pass in that things

can happen and they can happen for a reason (as we seem to be in just such a fun and reasonably

happening universe). In all further discussions we therefore assume that in the model we create there

is a maximum speed with which information can travel inside the universe, and it is the speed with which the reactive particles move.

We also note that for the direction of time to work in the local perspective, there must be an actual

direction for it to move in at the absolute level as well in our model, although which actual direction

it moves in on the absolute level does not matter as long as it picks one and doesn’t keep going back

and forth (as this would turn it into a chaotic universe). Regardless which direction the reactive

particles zoom in absolute space, from the local perspective of an active particle time will move

forward as it interacts with more and more reactive particles.

We will see later in this paper that if the future is ultimately unpredictable in our model, even to an external observer (and we will find good reason to conclude that it probably is) then time would have

a meaningful direction on the absolute level as well, such that the future is essentially unknowable 

but the past is essentially knowable.

This matches the statement from information theory that as concrete representations of information

change over time due to degradation (Second Law of Thermodynamics) the total information on the

abstract level increases as abstract representations of information (information in photons) need

unavailable energy to be destroyed. Abstract representations of information can come into existence

via reflection of the changes of concrete representations of information but they cannot go out of 

existence. In this perspective, time on the absolute level implies a process by which more and more abstract information is created as time goes on.


This means that memory requirements on a computer representing the model would have to

increase over time, but it would not mean that it would need enough memory to store all its future

states – the computer would “only” need more and more memory over time to store all of its history,

seeing as in an information theoretically correct model all of history would have to be reflected in the

calculation of the next state for the absolute reality in the model (no information should be lost).

We note that a computer could use methods of compression (lossless to some but not an infinite extent)

to store historical information such that it can be retrieved by computation. In a mathematically

idealized version of the universe that allowed infinitely lossless compression on the abstract (photon)

level (as information theory suggests must be the case) we could then see that the memory

requirements to run such a model of the universe would be constant under the mathematical

assumption of a perfect compression algorithm for abstractly represented information.
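
To make the compression point a little more tangible, here is a small sketch (my own, using Python's standard zlib module rather than any idealized algorithm) of a computer appending a very regular history and compressing it: the raw log grows linearly while the compressed log grows far more slowly, though of course no real algorithm reaches the infinitely lossless ideal assumed above.

```python
# A rough illustration of storing a growing history with lossless compression.
# The history here is deliberately repetitive, so it compresses extremely well;
# real histories would compress less, and never perfectly.
import zlib

history = b""
for t in range(1000):
    history += f"tick={t};state=A\n".encode()   # a hypothetical, very regular state record

compressed = zlib.compress(history, level=9)
print(len(history), len(compressed))            # raw size grows linearly; compressed size lags far behind
```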

We also see that the processor and memory requirements to calculate history from available

information could be seen to match the processor and memory requirements to calculate the future,

but the difference would remain that if an external observer stopped a computer simulation of such a model, they could in theory derive all of its history by inspection of available information but they

would have to resume the running of the model to find out all about its future.

This aligns (at least mathematically) nicely with the idea that the concrete informational content – or

the sum total of mass and energy – of the universe should remain constant but the abstract  

information content should go up over time.

Mass as State, as Occupation of Space and as Interaction Points

When we relate the physical concept of rest mass to an active particle in our model, we are talking

about the state information that it contains about itself, such that the more state information, the higher its mass. Unless there is no cost to information, the state information of a particle has to

occupy space, which matches the general idea of what it is mass is supposed to do (again, a particle

with mass is conceptually seen in this model as the inversion or the opposite of a particle containing

only empty space).

We recall that in our definitions a purely reactive particle contains no information about itself except

probably some minimal level of information (we have seen that the idea of a purely empty reactive

particle may be an idealization, because in practice it would have a requirement to carry information

about its position). Nonetheless we could consider such mathematical idealization as the main point

of this paper is to discuss limits to optimal performance.

Thus, even though it is argued in this paper that no completely purely reactive particles could exist in

practice and that all particles would have at least some state of their own equating to all particles

having some minimal rest mass, we will continue to consider the case of purely reactive particles only

in the form as representatives of ultimate mathematical constraints placed on our model. We will see

that we do not have to try to fit any actual such concept into our model to make it work, we only

compare to such a theoretical concept to find necessary constraints on the concepts we do have in

our model (which by logical and information theoretical necessity may well have to contain at most

near-reactive particles but where we shall see that it will become logical to call the theoretical 

champion of such near-reactive particles the “reactive” particle that in our model will represent both space and the photon).


Letting the state information of a particle represent its mass allows our concept of mass to meet our

basic expectation on this concept to occupy space (information has cost in space) and we will

continue to be able to derive useful information theoretical implications from this model. Proceeding

towards the first important such deduction we will examine how the mass of two active particles as

defined in our model should relate to each other.

To be able to do this, we will make the observation that mass can also be seen to represent the

number of interaction possibilities or interaction points a particle contains, such that the higher the

mass of a particle, the more interaction points it has where each interaction point represents an

opportunity to interact with the interaction points of another particle.

This observation follows from the assumption that information has cost in the form of occupied

space. The state information a particle holds about itself takes space to store such that the more

information, the bigger or the denser the particle has to be. Each bit of information takes up room in

space and it is also this occupation of space that represents the opportunity to interact with (be

bumped into by) the information bits of other particles.

This follows from how the concept of occupancy should imply potential for conflict in our model. This is

really just a way to reformulate the word cost in the cost of information. We must thus go on to

make the following observation about a necessary constraint to the behavior of our model (which

would qualify as a minimal form of causality or specification for the transformation rules in a

computer program) as a logical conclusion to our definitions:

Only one shape can occupy a certain position in space (if all the information could share the same

space it wouldn’t really have any cost in space) which gives rise to interaction between particles in

any case where the transformational rules of time put two particles in competition for the same position.

If mass is defined as the occupancy of space (or reservation of positions in space) and interaction is

defined as relating to the conflict arising when two states compete for the same space or position

(such as two balls colliding and having to bounce off each other as they can’t both be in the same

position) then we see a direct relation between the mass of a particle and the amount of interaction

opportunity it has with other particles.

Uncertainty

We note that our model will allow us to deduce some necessary information theoretical constraints

on its behavior based on statistical laws governing how an active particle’s information about other

active particles around it can improve over directional time, something we will talk about as the

reduction of uncertainty associated with that information.

This improving information effect exactly matches the one we used to define the very concept of 

directional time, so the laws governing “how information can improve for solid particles over

directional time” will be internally consistent with our previous definitions in that we are essentially

 just restating our definition of directional time, of space and of mass. In fact, all we are about to do is

to restate the same logical model that we have already constructed thus far but this time using

mathematical notation in an equation, the correctness of which in relation to our model can be seen

in the following two laws that are ultimately just recapitulations of our definitions so far.


The first deduction we will make from our model follows from our definition of space as information

representing geometrical distance. According to our definitions of space and time, the greater the

distance (the more space) there is between two active particles, the longer time it will take for each

to gain good information about the other as the longer the distance between them is, the longer

time it takes or the statistically more unlikely it becomes that a reactive particle interacts with them

both.

The second deduction we will make from our model follows from our definition of mass. According to

our definitions mass can be seen as the density of interaction points in an active particle (chances for

the active and reactive particles to interact, allowing the active particle to detect the information in

the reactive particle). It follows from our definition of mass and directional time that the greater the

combined mass of two active particles, the shorter time it will take for them to gain good information

about each other as the more interaction points each particle has, the statistically more likely it is

that a reactive particle interacts with both particles.

Combining these two facts we will see that the uncertainty of two particles concerning the state of the other is reduced over directional time in proportion to the product of the masses and in inverse

proportion to the product of the two distances - which is of course the same distance two times, as

reactive particles would have to travel the distance both ways to inform two active particles of each

other (and where the interaction success of one does not affect the chance to succeed for the other).

Letting F stand for the limit to the force of the information increasing (or uncertainty decreasing)

effect over directional time, m1 and m2 stand for the masses of the two active particles and d stand

for the distance between them, we get a formula we should be very familiar with:

F ≤ m1 * m2 / d²

If we replace the less-than-or-equals sign with just an equals sign (as we would to describe the

optimal performance of any natural force compatible with information theory, including gravity) we

get the same formula as for Newtonian Gravity and so we can see that we have just found a good

reason from information theory for why solid objects must (at best) obey this formula in their

trajectories through space.
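
To check the statistical intuition behind this bound numerically, the following Monte Carlo sketch (my own illustration, with made-up numbers) emits reactive particles in uniformly random directions from one active particle and treats a second active particle as a sphere whose radius stands in for its mass or interaction points; the hit rate grows with the target's cross-section and falls off roughly as one over the distance squared, which are exactly the ingredients the argument above combines into F ≤ m1 * m2 / d².

```python
# Monte Carlo estimate of the chance that an isotropically emitted reactive particle
# "informs" a second active particle modelled as a sphere at some distance.
import math
import random

def hit_fraction(target_radius, distance, trials=200_000):
    """Fraction of isotropically emitted rays that strike a sphere at the given distance."""
    hits = 0
    for _ in range(trials):
        z = random.uniform(-1.0, 1.0)                  # random unit direction in 3D
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        x, y = s * math.cos(phi), s * math.sin(phi)
        # the ray hits the sphere centred at (distance, 0, 0) if it points forward
        # and its perpendicular miss distance is within the target radius
        if x > 0 and distance * math.sqrt(y * y + z * z) <= target_radius:
            hits += 1
    return hits / trials

print(hit_fraction(1.0, 10.0))   # roughly 0.0025
print(hit_fraction(2.0, 10.0))   # roughly 4x larger: grows with the target's cross-section
print(hit_fraction(1.0, 20.0))   # roughly 4x smaller: falls off as 1/d**2
```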

When formulas that fall out of your models match established observations from the world around

you, this is an encouraging indication that the definitions of the concepts in your model bear some

resemblance to their corresponding concepts in the real world.

Conversely, as our formula has been derived entirely from information theoretical analysis of equally

information theoretically based definitions for the concepts of time, space and mass, we know that

our formula must hold true for all information theory compatible systems with features that

correspond from an information theoretical perspective to the concepts we have defined in our

model.

Unless we have made a mistake in the definitions of our premises or in our deductions from those

premises we can therefore see that the constraint described from this model should hold true for the

behavior of mass and space over time in our universe as well unless concrete representation of 

information is actually totally free such that all concrete information (all solid particles) in the


universe could exist at the exact same (vanishingly small) position in space at the same time - at

once, all the time and forever.

Relativity

For an active particle, its information about other active particles improves over time. This is to say

that there is a difference between the perspective of an external observer looking in on our model of 

the absolute reality from the outside and the limited and ultimately uncertain perspective of an

internal or local observer in the form of an active particle inside our model of the absolute reality.

This imperfect (but improving with time) view of the absolute reality is what the active particle is

stuck with - it is the only reality it can experience. It is not the true, absolute reality that really exists

(as it is represented in the full model of the universe, for example running in an external computer

in another universe of its own) only the active particle’s distorted perception of that reality.

In our model the imperfect perceptions of the universe by active particles will be seen to match what

we also know as Einstein’s relativistic realities. In these, observers can experience effects such as how distances seem different from the actual distances that exist in absolute space and time. In our

model the concept of a relative reality is represented by the perspective on the absolute reality by

solid particles inside the model which will always include some uncertainty as to the exact shape of 

the absolute reality.

As we relate all this to our own reality, where we do experience relativistic effects, we see that it is

not the kind of distorted perspective that assumes a conscious observer. In our model humans or

other biological systems are not the only ones to be exclusively doomed to interacting strictly with a

relativistic world because their brains are somehow bespectacled with relativistic glasses such that if 

they used a film camera it would not be fooled.

In our model, every atom is constrained to “experience” or to interact with and relate to all other

atoms in a relativistic way. This must be clear as we proceed to talk of active particles as conscious

little observers – it is only shorthand for talking about the only version of the world that unconscious

particles could interact with.

Relative Distance and Mass

If we imagine an active particle to be like a little scientist in a spaceship, we could see this as the

scientist making measurements of reactive particles around it and using the results to draw a

tentative map of the surroundings. It will be our mental model for picturing (and verifying the logical correctness of) how in the perspective of a particle its relative reality is “created around it” to the

extent that it is able to physically interact with (and consequently perceive) more and more of its

surroundings over directional time.

As time passes, more and more reactive particles zooming by will allow the scientist to draw a better

and better map. In the case of actual biological systems (such as an actual, real-life scientist) the

effect of improved information over time would indeed be for them to see their environment

“materialize” around them. In the case of non-sentient particles, the effect would be the same from

a physical perspective as more and more of their surroundings “materializing” from the standpoint of 

detectability of and interaction opportunity with the absolute reality for the particles (except there would, as a consequence of our problem statement, be no sentience in the particle that would notice).


As the scientist draws the map, it is with the understanding derived from logical contemplation on

information theory that there is by necessity uncertainty associated with every measurement. As

external observers to and definers of this model, we know that the scientist is right on this account

because our definitions state that in our model information about other solid particles is not

expected to be perfect from the start for any solid particle in the model.

Perhaps worse, the scientist has also realized (as we soon shall as well) that it is impossible for any

local observer to tell precisely whether there are changes to the mass, time or space out there which

result in the information imperfection seen in the uncertainty of all measurements and thus could

never be really sure if a change to mass, space or time resulted in any certain and repeating change

to a measurement.

To see why the scientist couldn’t tell if changes to mass, space or time would be responsible for

changes to measurements, we can consider how as the scientist draws the map there will be a few

different ways to represent the necessary uncertainty associated with any of the measurements

made. One way would be to assign confidence numbers to all the measurements represented on the map, but there are also ways to represent the uncertainty without adding those numbers explicitly as

a separate dimension in the map.

Suppose for a moment that the scientist were given advice by an external observer stating the

existence of two particles in the surroundings of the spaceship: say, 1000 meters to

neighboring active particle A with the mass 10 kg and also 1000 meters (in another direction) to

neighboring active particle B with the same mass of 10 kg. However the first measurement, says the

external observer, has a 90% confidence level (10% uncertainty) associated with it, whereas the

second measurement is more unreliable, with only a 20% confidence level. The external observer

knows the right answer, but decided to roll some dice (in turn with unpredictable outcome to our external observer) to determine how close to correct answers should be given, and the correct odds

for those dice rolls were presented with the rest of the information to our scientist.

One way for the scientist to represent this in the map without representing the uncertainty numbers

explicitly would be to draw both particles at the same distance but to draw particle A with the

confident, bold marker while using the flimsier and easier to erase thin marker to draw particle B.

This way can be thought of as the scientist drawing objects with relative solidity, or relative mass, but

all distances on the map match actual, absolute distances in space and time.

Another way, if the scientist doesn’t want to use different markers, would be to represent uncertainty as additional distance on the map. Thus a particle C that was known for a fact by our

scientist to be 1000 meters away (our external observer told the scientist that no dice were rolled

before divulging this information) would have been drawn as exactly 1000 meters away (to scale, of 

course) on the map. Particle A with its 90% confidence level would be drawn as just a little more than

1000 meters away and particle B with its 20% confidence level would be drawn as yet further away

than particle A.

When the uncertainty to measurement in this local perspective on the universe is perceived by

interpreting uncertainty as additional distance in our model, we say that the observers experience

relative distances rather than the absolute ones. As we saw from the example with the map-drawing scientist, another valid way is to interpret uncertainty as relative mass.
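
A small sketch (with the invented numbers from the example above) may help keep the three bookkeeping options apart: the scientist can keep the confidence as its own value, fold it into a "relative mass", or fold it into a "relative distance". The particular scaling used here is arbitrary and only meant to show that the same uncertainty can be pushed into different columns of the map.

```python
# Three equivalent-looking ways to record the same uncertain measurements.
measurements = [
    # (name, reported distance in metres, reported mass in kg, confidence 0..1)
    ("A", 1000.0, 10.0, 0.9),
    ("B", 1000.0, 10.0, 0.2),
    ("C", 1000.0, 10.0, 1.0),     # no dice were rolled for C
]

for name, dist, mass, conf in measurements:
    explicit = (dist, mass, conf)        # keep uncertainty as its own value (the "quadruple" style)
    rel_mass = (dist, mass * conf)       # draw uncertain particles with a "thinner marker"
    rel_dist = (dist / conf, mass)       # draw uncertain particles a bit further away
    print(name, explicit, rel_mass, rel_dist)
```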


We know that with our definitions solid particles cannot start out by perceiving the full, absolute

reality in our model - such perception is only allowed to be approximated better and better over

directional time. If the other solid particles keep moving around, as we will see them doing in the

model of the dynamic universe, absolutely correct perception of the surroundings would never be

fully achieved. This means that there will be at least initial uncertainty for all local (and solid)

observers in our model of the semi-static universe, and for local observers in our model of a dynamic

universe uncertainty will be a permanent part of their reality.

If an additional dimension for representing uncertainty is used (representing uncertainty explicitly

with numbers on the map) the corresponding effect to a sentient observer would be one of a pure

virtual dimension. That is, such an observer would perceive a dimension that did not match any

dimension in the absolute universe, a purely virtual dimension of uncertainty.

The alternative for the sentient observer (or for any particle constrained to behave in an information

theoretically compatible way under our model) would be to modify the experience of a dimension

that does exist in the absolute universe, which might perhaps also be seen as more efficient as they could then use value triples of estimated relative mass, distance and time (as we shall see) adjusted

for uncertainty rather than quadruples of estimated absolute mass, distance and time plus

uncertainty of the estimation as its own value. All of these options are equally valid as such, but we

see that a local observer in our model cannot completely escape some form of relativistic experience.

Thus the experience of an observer such as our scientist will in our model only be compatible with

the assumption that while some of the aspects of the absolute universe could potentially be

perceived in their true, absolute form, not all of them could be measured to their full extents

simultaneously.

Which aspects are considered relative is not necessarily important, whatever seems convenient

would work. We could note that if information has cost, value triples may seem more attractive than

quadruples, not to mention that slight distortions to perception in mass or distance might seem

preferable to any sentient observer compared to the “hallucination” of an entire imaginary dimension (to

picture this we could imagine a person perceiving absolute distance, mass and time, as well as could

be done, but to the extent that those distances, masses and times were uncertain, they would feel

“icky”). Towards the end of this paper we will give this possibility some further consideration.

From the aspects we have considered so far we can say that as particles or scientists build their maps

of the universe around them, and unless an extra dimension is used, they can assume that other particles have relative mass (the different markers) but that the space and time they measure is

absolute or they can assume that mass of other particles is absolute (using only one kind of marker

to draw other particles) but that distance is relative (such that particles A and B are drawn at

different distances from the observer) or they can distribute the uncertainty evenly over both mass

and distance to have two somewhat certain values rather than one more certain and one more

uncertain.

However, as we saw when building up our fundamental definitions, although time and space are

ultimately different concepts, they are also to some extent interchangeable from an information

theoretical perspective. Thus another option for the relativistic interpreter is to consider time to be relative rather than space, such that it is the distances in time that are relative.


As we go on we will talk about the relativistic effects as being interpreted by relative mass and

distance where distance is often then for sake of ease discussed as a space distance, but we should

note that we could just as well substitute space distance for time distance and the argument would

remain consistent. We also know that in our universe mass, space and time all seem impacted by

relativistic effects, and we should also note that for us the uncertainty could be distributed evenly

over our perception of the three dimensions of space, one dimension of time and one dimension of 

mass.

It should be noted that mathematically speaking mass is its own dimension, as in a number you can

change without affecting the numbers in the other four dimensions (three dimensions of space and

one of time) giving us a total of five dimensions for our model - three dimensions of space, one

dimension of time and one dimension of mass.

We should note before this starts to come off as science fiction that we are actually describing the

same type of universe as the one we are in as, at least from a strict mathematical perspective, mass

must be a dimension in our universe as well. Strictly speaking, we should all go from talking about ourselves as inhabitants of a four dimensional universe to observing that we live in what must at the

very least be a five dimensional universe where the fifth dimension is mass as in density of space or

the distinction between particles of different solidity – to say otherwise would be the mathematical

equivalent of proposing that there is no detectable difference between an empty swimming pool (or

one that is full of air) and a swimming pool full of water.

Mass and Energy

It is interesting to note the interchangeable nature of mass and distance in the relativistic distortion

in our model, seemingly similar to how time and space can sometimes become interchangeable.

One way to think about this is to consider a case where the scientist thinks there is reason to believe

a certain object with a certain mass is at a certain distance away (as in the hypothetical case when an

external observer has said so). But when checking the sensors, the scientist gets fewer interactions

with the object than would have been assumed given its mass and distance. There could be two

explanations for this, with no good way to tell which one is right. Either the object is further away

than was assumed, or it is less massive.

That is, either there is more space between the scientist and the other object, or there is more space

inside the object. Both situations would result in fewer interaction points of the other objects being

interacted with in just the same manner and so give exactly the same effect in the measurements.
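
A tiny numerical example (mine, not the paper's) of this ambiguity: if the interaction rate an observer sees scales roughly as mass over distance squared, then a lighter-but-closer object and a heavier-but-farther one can produce exactly the same reading.

```python
def interaction_rate(mass, distance):
    return mass / distance ** 2          # the scaling assumed in this model

print(interaction_rate(10.0, 1000.0))    # 1e-05
print(interaction_rate(2.5, 500.0))      # also 1e-05: "less massive" and "further away" look alike
```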

There is thus a fundamental relationship between mass and distance such that they impact the

number of interaction possibilities between two objects in the inverse way of each other. But it

should come as no surprise that distance gives the opposite effect of mass (increased distance gives

the same effect as reduced mass) on measurements. In our definitions, empty distance is space and

we have defined mass as the occupation of space, or the inversion of empty space, so they should

have exactly that relationship of acting interchangeably as the opposite of each other.

We are just restating the statistical relationship that is implied by the definitions of the concepts of 

space and mass on the interaction possibilities for mass that followed from our model and that we


were able to capture with the formula F ≤ m1 * m2 / d²

- which is in turn just the mathematical

expression for the definitions of mass and space in our model.

We are also reminded of our expectations on the interchangeable nature of mass and energy as we

have found those concepts to relate to each other experimentally in the real world in accordance

with the predictions of Einstein. So far we have left out any real definition for the concept of energy from our model but the time has come to extend the model with such a definition.

If mass can be converted to energy, this will in our model equate to saying that we can take some

mass and make it discontinue its occupancy of some position in space.

In our model we will thus let the term energy stand for the system activity (particular set of 

transformational rules) of converting mass into space or empty distance. Another way of saying that

a particle puts more empty distance or space between itself and some other particles is to say that it

moved in relation to those other particles, which matches our general understanding of how energy

relates to mass, giving us some confidence that the definition for energy we are formulating could be a good candidate for our model.

Conversely, energy also represents the activity of empty space being transformed into mass. How

much mass that can be gotten for a certain amount of converted space (or how much movement in

the form of new empty space that can be gotten when converting from mass) is constrained by the

communication speed of information in space as a solid particle can only convert into mass the parts

of space it has time to interact with during a given time interval.

In our model space is an optimal reactive particle (just like the photon) and if we call the speed of 

this reactive particle c, and we let the minimal rest mass of a space particle be represented by 1, we

see, by substituting c into our earlier formula F ≤ m1 * m2 / d² and by setting m2 to 1, that we can

describe the information theoretically derived constraint on the conversion rate of space particles to

solid particles and vice versa by the equation E ≤ m1 * m2 * c² - or in reduced form E ≤ mc².

We will use this second equation E ≤ m1 * m2 * c² to relate the concepts of energy E and a maximum

speed of information c to our model. We define these concepts in our information theoretical model

by describing a logically derived constraint that can be seen to correspond to Einstein’s formula

E = mc², just as we could see how in our model F ≤ m1 * m2 / d² corresponds to the Newtonian formula

for gravitation F = m1 * m2 / d².
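
As a minimal numeric sketch of the reduction just described (my own numbers; the real-world value of c is used purely for illustration), setting the space-particle mass m2 to 1 collapses the bound E ≤ m1 * m2 * c² into E ≤ m * c², which read as an equality is Einstein's familiar formula.

```python
c = 299_792_458.0          # maximum information speed, here the real-world value in m/s
m1 = 1.0e-3                # one gram of "solid particle" mass, in kg
m2 = 1.0                   # normalised minimal rest mass of a space particle

energy_bound = m1 * m2 * c ** 2
print(energy_bound)        # about 8.99e13 joules, matching E = m c**2 for one gram
```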

The symbol E will then stand for Energy when it relates to the way conversion of mass to empty space logically equates to making the remaining mass move. The concept of energy can be thought of

as representing how far mass can be moved during a given time interval as the result of any

information theoretically compatible force, making our definitions continue to correspond well to

what we think of as the function of their counterparts in our universe.

The reason that we see only one mass in the equation is that m2 is the space particle with a

minimal rest mass that is seen as approaching 1. The reason that c is squared is the same as for

squaring the distance in m1 * m2 / d², namely that the distance has to be traversed in both directions

to fully communicate the effect. The meaning of the component c² is thus to represent the limitation

on the mutual communication of any effect during a given time interval where c is the maximum speed with which information about the effect can travel.


As an example of a mechanistic implementation method (as in compatible with information theory

and our definitions of space, mass, time and energy) for how such transformation between mass and

energy could work in our model, the conversion between solid particles and empty space particles

could be pictured as follows.

For converting empty space to mass, imagine how a little captain inside the solid particle opened miniature one-way entry hatches distributed evenly along the particle shell, allowing the

surrounding particles of space to stream into the solid particle, where by virtue of being on the inside

of a solid particle they will now be considered occupied space.

The information in the usurped space particles will be counted with the information load of the solid

particle and thus either its mass or size will be seen to have increased. Space particles could be

packed so tightly as to not make the solid particle swell all that much or grow that much heavier

since they contain only a minimal rest mass corresponding to the information capacity required for

the information of one position in space.

When the captain decides it is time to set off in some direction, he opens a one-way exit hatch

going in the other direction, releasing space particles out from the hatch, resulting in new space

being added between the solid particle and the place the captain wants to move away from.

We note that when converting empty space to mass we (or the captain) will remove empty space

evenly in all directions around the solid particle, which also equates to the particle coming a little

closer to all other particles around it, or the solid particle growing a bit (or the rest of space shrinking

a bit). We also note that with our definition that empty space contains at least one little bit of state,

namely its position, then it would make both a kind of common sense as well as information

theoretical sense to think of the disappearing (or occupied) space as removed (or condensed) positions between two solid particles, which logically equates to them coming closer to each other.

We are circling in on a very reasonable distinction between mass and space (active and reactive

particles) for our definitions. For a model that makes mathematical sense and is intuitively easy to

think about, we state that a reactive particle reserves only one position of space whereas active

(solid) particles reserve two or more positions of space. A particle that reserves only one position of 

space is a good candidate for the concept of a “space particle”, so our idea of seeing reactive

particles as such particles of space is logically consistent.

Thus the distinction that we relied on for the concept of information to work at all comes in the form

of the distinction between on the one hand reserving or occupying the lowest amount of positions in

space that is possible for something to exist in space at all (one position) in the manner of a reactive

 particle of space and on the other hand reserving or occupying more than the lowest amount of 

positions required to exist in space (two positions or more) in the manner of active (solid) particle of 

mass.

Mass would be seen as the condensation (density) of positions in space. When it moves it could be

interpreted under our model as a ripple effect transforming the relative densities of positions in

space in just the manner we imagined that light could do (but slower as more information would

have to move). In other words we can make our model so that what internal observers would

perceive as moving mass is implemented in the absolute reality in the model as a wave effect rippling


in a medium of pure positions that stay in their basic place but can still become closer or further

apart from their neighboring positions so as to allow a wave of information to ripple through.

Again we note that when established formulas show up in our models it is a good sign that the

definitions of our concepts continue to make real world sense. In this case, we have derived E ≤ mc²

as a logically necessary constraint on the conversion rate between empty space and mass entirely from an information theoretical analysis of uncertainty in measurements of local observers.

Virtual Gravity

A perceptional effect that looks like gravity based on the reduction of uncertainty (increasing amount

of available information) over time can be seen by all observers in our model of a semi-static or

dynamic universe. As the perception of the universe around it clears with time for a certain active

particle, with a relativistic interpretation of distance the other active particles around it will seem to

accelerate in accordance with F = m1 * m2 / d² from their fuzzy, uncertain positions in space towards

their actual, precise locations where they will seem to stabilize.
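
A toy model of this effect (my own sketch; the halving rule for uncertainty is an arbitrary assumption, not something the paper specifies) shows the flavour of it: the true distance never changes, but when the leftover uncertainty is read as extra distance, the neighbour appears to drift in towards its real position as more reactive particles arrive.

```python
true_distance = 1000.0
uncertainty = 500.0                         # initial ignorance, read here as extra apparent distance

for tick in range(1, 11):
    uncertainty *= 0.5                      # each batch of reactive particles shrinks the ignorance
    apparent = true_distance + uncertainty
    print(tick, round(apparent, 3))         # the apparent distance settles onto the true 1000.0
```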

The effect to the observing active particle will be as if the other particles were pulled by an invisible

hand of gravity towards their actual positions (or that they solidified in their actual positions, using

relativistic interpretation of mass) but an external observer to our model of the absolute reality

would know that none of the active particles moved or changed their mass, only the information

about their real positions and mass improved from the perspective of other active particles.

For two active particles that happened to be stationed just next to each other, it would indeed seem

to them like they were being pulled towards each other with accelerating speed (or rather the

picture of each other would become less blurry so as to reveal that they were indeed very close to each

other with increasing certainty). They could also interpret their measurements as the mass of their neighbor growing or as their neighbor starting to unfreeze from a state of frozen time. Whichever

interpretation is used, the effect would manifest with the same acceleration as we associate with

gravity. The difference compared to real gravity would be that it would not end with a collision, as

the active particles were never really moving.

We could call this “illusionary” gravity, which we can see that all solid particles in our model must

experience, virtual gravity, as it is only the experience of uncertainty for an active particle decreasing

with time as a result of interaction with reactive particles. We will distinguish it from real gravity (or

 just gravity) in our model of a dynamic universe where active particles can move and will indeed

crash into each other due to real gravity.

The experience of virtual gravity is nonetheless relevant to explaining why we see actual gravity

working the way it does in a dynamic universe, as it helps us underline the rationale for the

information theoretical constraint on how real gravity is allowed to work. It is also useful to compare

the overlapping realities of the micro (quantum) world with the macro world to see that macro

objects seem overlapping (blurry) to all observers in much the way that we associate with

superposition (more on this later).

The Dynamic Universe

In a fully dynamic universe, both active and reactive particles are mobile. Without further classification of particles, we can see that both active and reactive particles would simply zoom


around on straight paths through time and space like billiard balls on a table (but without slowing

down or accelerating) occasionally to bounce off each other (with the exception that in our model

reactive particles cannot push solid particles around).

In fact, we don’t have to assume that there are any purely reactive (space) particles at all as a

dynamic universe where all particles had some solidity would also work inside the same constraints that we will discuss (and we know that even the space particle should be seen to contain a minimal

rest mass in the form of its position). Active particles (particles occupying two or more positions of 

space) could well be carriers of information about both themselves as well as other active particles

and fulfill the role of information spreaders in the system. All we have to assume is that there will be

one kind of particle that is the fastest information carrier around (active or reactive) because

otherwise we would be back to a universe without time.

However, while not required for the dynamic universe to work, we will continue to invoke the

concept of maximally reactive particles that occupy only one position of space and contrast them to

active particles in order to examine an information theoretical constraint on particles as carriers of information.

It follows from our definitions based in information theory that with a maximum speed for

information to travel one active particle carrying information about itself as well as about a second

active particle could never travel as fast as a purely reactive particle carrying only the information

about the second particle, as the active particle contains more information in total.

As the amount of information in the active particle (its own state plus the state of a second active

particle) is greater than the amount of information in the reactive particle (only the state of the

second active particle) and there is a limit to the speed of information, then more information by necessity has to take a longer time than less information would take to travel the same distance. To

suggest that more information could travel in the same time as less information by, say, increasing

the parallelism to improve the bandwidth just equates to increasing the general speed limit, so as

long as there is a speed limit to information (information has cost) the constraint must hold.
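
A back-of-the-envelope sketch of this constraint, with deliberately invented numbers and a deliberately narrow channel so the effect is visible: with a fixed signal speed and a fixed carrying capacity, the carrier that must also haul its own state takes longer to deliver the second particle's state.

```python
distance = 1.0e6           # metres to cover
signal_speed = 3.0e8       # maximum information speed, m/s
bits_per_second = 100.0    # assumed (very small) carrying capacity, to make the difference visible

def delivery_time(bits):
    return distance / signal_speed + bits / bits_per_second

print(delivery_time(1_000))           # reactive carrier: only the other particle's state
print(delivery_time(1_000 + 5_000))   # active carrier: its own state travels along too, so it is slower
```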

As derived entirely from logical application of information theory we can thus see a fundamental

constraint in our model that we will continue to exploit throughout this paper:

 Any force or effect based on information about it being spread by reactive particles will be able to

work faster than a force or effect based on information about it being spread by active particles.

Please take the time to validate that the argument above is logically consistent and holds true from

an information theoretical perspective. In essence we are stating that any hard information

theoretical constraints we find to apply by necessity to any theoretical biophysical model of a

universe must also by logical necessity constitute equally hard constraints on any theoretical physical  

model of a universe. It will be one of the most central ambitions of this paper to demonstrate how

we are able to use this fact to derive an ultimate constraint on the strength of any force of nature in

any universe that plays by the rules of information theory.

Theoretical Biophysics as Game Theory with Communication Theory

We will now go on to note that in our model it is possible in principle for active (solid) particles to

end up by chance in such a configuration that some of them will implement the function of sensors 


capable of detecting reactive particles while other active particles may happen to arrange themselves

into little motors, capable of influencing their future trajectory through space. In particularly happy

coincidences, such motors could become connected to sensors by controller mechanisms, all formed

by chance.
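
To make the sensor-controller-motor arrangement concrete, here is a deliberately tiny sketch (mine, not from the paper or from Dennett) of such an agent particle: the sensor reads a value, the controller applies a trivial threshold rule, and the motor nudges the particle's future trajectory.

```python
class AgentParticle:
    def __init__(self, position=0.0):
        self.position = position

    def sense(self, signal_strength):        # "sensor": detects a passing reactive particle
        return signal_strength

    def decide(self, reading):               # "controller": trivial threshold rule
        return +1.0 if reading > 0.5 else -1.0

    def move(self, direction, step=1.0):     # "motor": influences the future trajectory
        self.position += direction * step

agent = AgentParticle()
for reading in (0.9, 0.1, 0.7):
    agent.move(agent.decide(agent.sense(reading)))
print(agent.position)                        # 1.0 after moving +1, -1, +1
```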

We know that it is possible for particles to become arranged in such a way in our world because a car contains an example of particles arranged into a motor, while vacuum cleaning robots (these exist) as

well as frogs represent examples of particles arranged into units of sensors controlling their motors.

Whereas humans rather than chance created the car engine and the vacuum cleaner robots, natural

selection created the frog and the human (and so by extension also the vacuum robots and the cars).

We can go on to derive by logical conclusion that both frogs and humans are the descendants of an

original system of sensors and motors that did come about purely by chance.

It is thus not only thinkable but logically derivable that at least very simple versions of motors and

sensors could (and therefore would if provided “enough” time and space) happen to arrange themselves by chance into such systems in our model as well. We know this as we are all the result of

 just such an event taking place way back in time when (in whatever primal soup life on our planet

began - on earth, in space, in a star or on some other planet, to list some principal alternatives) our

first ancestor spontaneously formed purely by coincidence.

In the following discussion we will examine the mathematical possibilities with respect to a model

that is allowed to contain little machines inside solid particles such that our active particles can have

internal systems of motors, sensors and controllers. We will do this to examine useful verification of 

the constraints to the energy of any physical natural force in our model by deriving constraints to the

energy of any biophysical force (gravity or other) being implemented by such little robot particles trying to navigate towards or away from each other optimally.

We will also go on to discuss one alternative implementation for gravity that does not rely on

particles having substructures arranged as to make them little robots to see that any constraints we

have derived by the examination of robot particles will continue to hold.

When solid particles in our world happen to be grouped into arrangements of sensors, motors and

controllers connecting the sensors to the motors the group of particles can be seen to belong to a

special class of particle groups that are sometimes referred to by philosophers, biologists and

theoretical biophysicists as agents (or robots).

For reference the book The Intentional Stance by the American Scientific Philosopher Daniel C

Dennett examines the concept of logical agent systems comprehensively, including discussion around

how to model the intentions of what he calls intentional agents that have been blessed with a

healthy desire for survival by Darwinian Evolution by Natural Selection. We thus apply the name active

agent particles to the class of active particles with corresponding substructures, such as to make

them essentially little robot particles. The following discussion in this paper will draw heavily on the

logical arguments presented by Dennett (which do themselves not include the specific conclusions

made by this paper with regards to a generalization of cooperation in nature nor any discussions

around the potential for superluminal motion).


If we add an element of competition for limited resources between robot particles then we could go

on to apply the information theoretically based mathematical framework of game theory (strictly a

subset of the even more abstract domain of decision theory) to further constrain the predictable

behaviors of such agent particles.

Game theory concerns the domain of mathematical analysis around economic evaluation of possible scenarios, which can be analyzed in terms of the potential economic value of a scenario if fully realized

(the number of dollars in a pot in a game of poker minus the ante you have to pay to play the game

and the percentage of the pot that goes to the house) and the probability that a scenario will be

realized (your reason to believe that you will win the pot which goes up for example if you hold four

aces but must be tempered by the reason to believe you will not win such as the mathematical risk -

and additional indications from some suspiciously giddy behavior - that someone might sit with a

royal straight flush).

The final derived value of a given scenario should be seen as the combination of its potential value

and the probability that the potential will be realized such that the overall economic value of a situation should be considered to decrease in proportion to the uncertainty that it will happen.

In other words, if the normalized maximal potential economic or energy value of a situation is seen

as 1 and the normalized maximal probability for the realization of the economic value of the situation

is also seen as 1, the derived economic value of the situation to the observer evaluating it will always

be lower than or at most 1. This logic is captured in the formula derived value = potential * probability, where potential and probability are each values between 0 and 1.
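
A quick worked instance of the formula, with numbers invented for illustration and normalised to the 0..1 range used above: the pot minus ante and house cut, expressed as a share of the full pot, multiplied by the tempered confidence of winning.

```python
potential = 0.85       # pot minus ante and house cut, as a normalised share of the full pot
probability = 0.80     # normalised confidence of winning, tempered by the risk of a better hand

derived_value = potential * probability
print(derived_value)   # 0.68, always at most 1 as stated above
```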

There is a known problem with our current application of game theory to nature, which is that it

seems like it should leave no room for cooperation that from the purely genetic evolutionary perspective would be considered altruistic – yet just such cooperative behaviors are fairly

commonplace in nature. Examples include blood-sharing bats, dolphins saving injured and drowning

animals of many species and a fairly wide-spread tendency in the animal kingdom for being nicer to

youngsters than genetic game analysis would have predicted.

Our current application of game theory to nature using the theoretical framework of Darwinian

evolution by natural selection as applied to genes in The Modern Synthesis seems to predict that we

should not really be able to witness any reliable cooperative behaviors in nature except in the special

case of very closely related organisms (so called kin selection, more on this shortly). The problem is

that there is no shortage at all of examples of stable cooperation between non-related organisms in nature.

One such example that could be familiar to some readers is that of the Egyptian Plover bird, also

known as the crocodile bird. The nickname comes from its reported but potentially mythical behavior

of cleaning the teeth of crocodiles that return the favor by not eating the birds (and the bird would

benefit because the old meat it cleans away from crocodile teeth is good food for the bird). Perhaps

too spectacular an example of cooperation in nature to be actually true, this particular behavior has

not been scientifically confirmed, but lots of others including the dolphins and bats mentioned above

have been. Nonetheless, the example of a crocodile and a “dentist bird” will be used in this chapter

to illustrate the general problem with altruism in nature.


While it may also be a myth that a scientist once proclaimed the flying bumblebee physically

impossible, it is in fact the inconvenient situation that (while not going so far as to state the

impossibility of what we witness with our own eyes) currently science is confined to explaining some

cooperative behaviors (and should that of the dentist bird and the crocodile become confirmed it

would only add to the list) with hand-waving to roughly the effect that we “just haven’t figured out

how they are really being selfish yet and we must still expect them to betray each other at the first

opportunity. You just wait and see - those shady dolphins are probably somehow stealing money

from the pocket of every sailor they save!”

We should note that some dolphin behavior could be explained as the dolphins being dominant

enough in their niche to have enough resources to spare as to become playful, which in turn could

make them more agile and improve their fitness (making play time energy well spent). This way

saving drowning people and other animals could be explained as just dolphins playing around with

beneficial side effects to some sailors. But play time is only beneficial until the game becomes too

dangerous. The real problem for The Modern Synthesis comes when we see that the dolphins will

sometimes go so far as to defend a drowning animal from sharks – a potentially lethal activity for the

dolphins. Unless we could argue that the dolphins take no real lethal risk, perhaps by means of 

running some sort of protection racket in cahoots with the sharks to squeeze the pocket money from

hapless sailors (and we can't, simply because there is no real reward in it for the sharks to

compensate for not eating the sailors), we are left with a game theoretical mystery.

This observed behavior of dolphins and that of other animals sparing and even helping the

genetically distant offspring of each other does not seem to make solid game theoretical sense to us

at the moment, but we could perhaps explain them in terms of a good thing (playfulness) gone too

far, such as with the human “sweet tooth”. We evolved in conditions where sugar was rare and there

was no such thing as “too much sugar” and now with the advance of agriculture we suddenly find

ourselves eating too much sugar (irrationally much, so that it ends up hurting us). But again it would

be the type of imbalance that natural selection should polish away over time and it becomes difficult

to explain why it happens with such regularity among dolphins and animals that are kind to young. In

short, the sweet tooth explanation could help explain some of dolphin behavior, but once they start

risking their lives we no longer understand what is going on.

One explanation for this is that dolphins perhaps try to save sailors because their eyesight is so

bad and they actually think they are saving dolphins. The problem with this argument is that if 

dolphins couldn’t tell the difference between a closely and a less closely related dolphin (much less a

dolphin and a sailor) the only type of mechanism we have seen for cooperation to become

established outside of kin selection – reliable identification and punishment of traitors - becomes

impossible and strictly speaking even kin selection relies on the organisms being somehow able to

identify each other as close enough relatives. Systems such as the blood-sharing one we see in some

bats rely crucially on the bats not only being able to clearly distinguish one individual bat from

another but also on being able to remember which one of the unique bats out there did exactly what

to whom and when.

This is how the bats have been discovered, via painfully careful analysis by biologists, to be able to

build enough rational levels of trust between each other to let their so called “buddy-system” of 

blood sharing evolve. In other words, nearsightedness is not a good explanation for the cooperative

behavior of dolphins (the bats have to overcome their corresponding sight issues to be able to form

their cooperative behavior) and we are back to concluding that as far as we can see, dolphins are

playing a very suboptimal game from the perspective of The Modern Synthesis. We should note that

this is without even raising the perhaps uncomfortable discussion around how we should expect

further pruning to the genetically induced generosity of dolphins from the inevitable occurrence of 

perhaps thankful but ultimately also hungry sailors. Bluntly, we should see enough sailors who live to

tell family and friends, over a tasty dolphin steak dinner, their amazing tale of how they were saved by

the nice dolphins from the hungry sharks.

There does remain a possibility that would make the game theoretical risk analysis more reasonable

again from the perspective of The Modern Synthesis. If dolphins were in fact so dominant in combat

over the sharks that the dolphins didn’t take a very big risk at all when saving prey from sharks then

the problem becomes smaller again – in fact the problem shrinks to just the extent that dolphins are

dominant over sharks. If the dolphins are dominant enough (and there is some reason to believe they

may indeed be) then the size of the game theoretical mystery shrinks back to the more manageable

size of that of our sweet tooth: Something that is perhaps suboptimal and we should expect natural

selection to sort away once it gets around to it, but we can see some suboptimal behaviors stay

around for at least some spans of time in a Darwinian system before they are pruned away.

Summing up the problem with the dolphins, it would seem to us at this moment that dolphins are

effectively applying some of their excess brain capacity to the task of making game theoretically bad

moves. Still, if they are strong enough compared to sharks they are not that stupid and we could

expect them to smarten up slightly given more time just like we should expect the human sweet

tooth to evolve away with time. In the same way, animals that spare the offspring of others are still

playing sub-optimally and given enough time all organisms should trend towards the behavior of 

lions (that do kill and eat the offspring of competitors) to the extent that they could not afford any

suboptimal behavior in response to general selection pressures.

Returning to the case of the dentist bird and the crocodile, we could potentially explain why birds and

crocodiles wouldn’t have to be expected to betray each other once they have managed to find their

respective strategic uses of each other if they have also entered a so called Evolutionarily Stable

Strategy where each would be punished economically by betraying the deal. Such a system is stable

in the same way that a terror balance would be and so we can see that it can stay around once in

existence. The problem is that – unlike a terror balance – we have no real way of explaining how such

a cooperative balance could come into existence except by enormously unlikely chance.

The real conundrum comes when we realize that for this cooperative behavior to evolve into being it

would rely on the prevalence of more than rationally trusting (naïve) birds and either extremely

smart or unreasonably stupid crocodiles. Simply put, until the birds have proven their value

conclusively over time to the crocodiles, the rational choice for the hungry crocodile is to think of the

birds as free dinner, thus leaving the otherwise potentially useful (and overly naïve) birds with no

time to prove their value.

Even if the crocodiles somehow figured out that the birds could be useful, Darwinism would still

seem to imply that the crocodile that also figures out that a free lunch could be worth even more

whenever food is scarce should prevail. That is under the assumption that it can also avoid the potential repercussions from other crocodiles upset that someone ate their dentists in a way that

consumes less energy in the escape for the selfish crocodile than it gets by eating the bird. Thus we

can see how according to The Modern Synthesis the punishment from well organized (and rather

foresighted) crocodiles could let the dentist birds and the crocodiles form an Evolutionarily Stable

Strategy.

The problem facing biologists today is that the altruistic (irrationally generous or naïvely cooperative) behavior we now witness in some species seems to presuppose an evolutionary history equivalent to

one involving very naïve birds coupled with very dumb crocodiles too stupid to know free lunch when

it flies into their mouths (or crocodiles unrealistically smart and well organized across blood-lines).

Darwinian evolution by natural selection does not seem to allow room for any such abundance of 

happy fools and so we are left with trying to understand how, if not crocodiles, other animals can be

so foresighted and well organized as to be able to form cooperative behaviors.

This is when we start to see the real issue for The Modern Synthesis. Even if we could see how

animals could in theory and even in practice become smart enough to organize themselves such as to

punish traitors, we will find ultimate issues of trust for such organisms, such that game theory again would seem to imply that cooperation becomes unlikely, bordering on impossible – except in the case

of kin selection, where the answer becomes that if organisms are closely enough related they would

have rational reason to trust each other and be able to evolve cooperative behaviors.

But with the blood-sharing bats and others, kin selection cannot answer the game theoretical

question of how the animals have been rationally able to trust each other in a way that allowed them

to build up their cooperative system at all. Biologists have found the mechanism that the bats

exploited to build up such trust but not in a way that has so far become formally generalized as to be

able to explain for example the case of the dolphins. What this paper will try to do is to abstract the

mechanism already identified in kin selection and in the blood-sharing system of bats into a generalized game theoretical model that takes communication theory and the level of rational trust

between two organisms into account.

We should note that we must distinguish cooperation in the form of mutually beneficial

symbiosis from parasitism and even slavery. There are several examples in nature of one species

enslaving another, by brute force or by cunning persuasion (including parasites that infect the brains

of their hosts such as to cause the host to act with greater risk for its own life but also with greater

potential to spread the genes of the parasite). In such cases there is no game theoretical mystery but

there still remain relationships in nature that seem very difficult to explain this way.

In the hypothetical example with our crocodiles they would hardly have been able to enslave the

birds as they could just fly away at any time and it is equally unreasonable to suggest that some other

species has somehow managed to enslave the dolphins. It also becomes problematic to imagine

what type of cunning tactic precisely would let one crocodile persuade a dentist bird to fly into its

mouth to do dentistry without the tactic also being equally useful for another hungry crocodile in

search of lunch (just as hungry sailors could catch dolphin dinners by pretending to drown).

However, as we note the problem crocodiles would have with enslaving the birds we start to see the

potential for turning the tables. If the birds were fast enough, such that they could relatively safely

steal a piece of meat from the jaws of a crocodile before it had time to snap its jaws shut, the dentist bird could simply be a case of a parasite on the crocodile that there is not enough return on

investment for the crocodiles in trying very hard to fight off. Examples of such relationships that we

know and understand include the African Oxpecker birds that eat ticks from large mammals

and can sometimes be seen perched atop the horn of a rhinoceros. Thus we could actually see cases

of dentist birds and crocodiles if only the birds were fast enough. Crocodiles can close their mouths

pretty fast, but birds can be pretty quick as well so there could be room for parasitic dentist birds to

find a niche by exploiting crocodiles and where the crocodiles happened to benefit from the

exploitation.

We can see another example of something like the dentist bird and the crocodile in nature when we

look at the case of sharks and the pilot fish. The pilot fish will eat smaller parasites from the body of 

the shark, which may find it difficult (too low a return on investment) to catch these small fish, making it

possible for a symbiosis much like one between a dentist bird and a crocodile to form. What we see

here is a parasitic relationship that because the exploitation by the parasite has benefits for the

exploited organism could be described as a symbiosis. The problem for such a system comes if the

dentist birds or pilot fish find a better source of food, at which point the crocodiles or the sharks are

left without dentists or body scrubbers. Thus such a parasitically based symbiosis is still sensitive to

betrayal in the long run.

While the parasitic explanation seems more likely in the case of the pilot fish and the shark, we can

use the example to illustrate the difference between a parasitically based symbiotic relationship and

slavery. To contrast, consider the following explanation: The shark swims into a school of pilot fish

that we in this thought experiment assume are very easy for the shark to catch, and so for the unlucky

pilot fish life is now in practice over. However, some of the fish immediately start sucking up to the

new overlord by plucking parasites from it. The shark recognizes that in a wealth of free food it might

as well start by eating the least useful pilot fish. This way we could imagine that the pilot fish were in

fact just helping the shark in order to save their own lives and were in practice not able to just leave

the relationship if they discovered a better source for snacks. This would be a case of slavery rather

than a parasitically based symbiotic relationship. As long as the shark would be clever enough not to

eat all the pilot fish it could travel with an entourage of little helpers.

While possible, slavery is probably a less common explanation than parasitic relationships in nature.

We can for example see that pilot fish, vampire bats and mosquitos are all parasites but we could

also see that if the Egyptian Plover bird should turn out to perform dentistry on crocodiles it could be

another example of parasites in action (remembering that parasites can have positive side effects for

the exploited). We could also see predators as examples of parasites, such that lions are parasites on

the zebras and the wildebeest in much the same way as vampire bats and mosquitos would be

(except that the lion's prey is not allowed to live, making bats and mosquitos more like “farming” parasites

which is distinct from the behavior of pure predators like the lion).

So, we can see that there are many behaviors in nature that become possible to explain with the

rationale of The Selfish Gene, as discussed in the book of the same name by Richard Dawkins. In the

end not that many examples remain that cannot be explained by The Modern Synthesis. Those cases

that still do represent some mystery, such as the generous dolphin and the general tendency of 

many animals to be nicer than lions to the offspring of others, could ultimately be explained in the

same way as the sweet tooth, such that it is indeed suboptimal but nature is not perfectly aligned all

the time (as we see by the sweet tooth example) and so we should expect both our sweet tooth and

any extensive generosity in the animal kingdom to be pruned away over time by Darwinian evolution

by natural selection.

Essentially we are left with expecting that the behavior of dolphins and other generous animals is

noise rather than a reliable signal in our measurements as it is something that we assume is an effect

that should go away over time. What this paper will do is to examine a model that would be able to predict these signals and thus let us consider these behaviors in nature as examples of stable signals

rather than noise.

As we examine this issue closely we find that at the heart of the problem with cooperation in nature

we will find a basic issue of trust. Kin selection provides one solid answer under The Modern

Synthesis to the problem of how organisms (or their genes) could come to find rational reason to

trust each other. In other cases, as with the blood-sharing bats and their “buddy-system” that all the

participating bats benefit from, biologists have by careful measurements discovered alternative

paths for non-related animals (that is, animals not related closely enough for kin selection to work) to

establish rational bonds of trust.

With so called meme theory (more on this later in this paper) we have yet another explanation for

why organisms would sometimes cooperate in that it is good for the spreading rate of the ideas or

memes that control them, which would then make it a variant on the parasite-with-benefits theme

(or even a slavery-with-benefits theme such as with a benevolent dictator). However, the generally

kind behavior of dolphins and the recurring theme of many species to treat the kids of their

competitors with what seems from the perspective of their genes as irrationally good manners are

still proving unusually hard to explain (the sort of behavior The Modern Synthesis would expect is

exemplified by among others the lions which strategically eat the young of each other).

Generally nice behaviors by organisms seem irrationally altruistic from the perspective of their genes

and meme theory and parasitism or plain slavery cannot be expected to provide an answer every

time, so even though we understand that in every case there must be some good explanation (such

as the buddy system of bats or even something like a sweet tooth for the playful dolphins) we are

still left with many examples where we don’t know what that explanation is yet and we have no

formal framework such as kin selection to help us find it.

Unlike the clear-cut mathematics of kin selection for genes under The Modern Synthesis, cooperative

relationships such as the buddy system that are not formed directly around purely genetically based

rationales of trust have been more difficult to predict and always require very careful analysis to be able to explain at all in a conclusive way. The general issue is that cooperation outside kin selection

seems like it requires such very special circumstances to form that it appears such behaviors could

only form against all odds. But the relatively large amount of such behavior observed in nature

invites the idea that it would be better if we had a model that would allow these behaviors to form in

line with the odds, such that we could expect to see the amount of cooperation we do see in nature.

The central ambition of this paper will be to derive the information theoretical rationale for why we

should be able to expect to see cooperation in nature between any communicating organisms in

proportion to the rational reason they have to trust each other. Kin selection is just one way for trust

to be rationally established and biologists have identified other examples of this general principle atwork. The task of this paper will be to capture the mathematics behind this relationship into a

formula that combines game theory and communication theory so as to be fully compatible with all the

examples of cooperation we find in nature.

A careful analysis of our information theoretical claims must include an examination of how the

information theoretical definitions in our model hold up to their counterparts in the physical universe

around us – any mismatch there could indicate that we have made a mistake in our deductions or definitions such that the conclusions around cooperation in nature could become invalid as well.

Should we find that our model presupposes an imbalance to the total mass and energy in the

universe, we would have to throw it on the same scrap heap as we do designs for perpetual

motion machines. However, should we discover that there is a slightly greater (but constant) amount of energy

required for our model to work, we could at least see if information theory prevents us from making

the assumption that such energy could be there. Furthermore, recent experiments such as the

superluminal neutrino have given us reason to believe that we do have more energy in the universe

than we have so far captured formally in our models. The result of this analysis will be that we will

also find implications of a purely physical character from what is essentially an information theoretical biophysical analysis.

The answer as proposed by this paper to the problem we have just examined regarding unexpected

cooperation in nature will be the following: While the current application of game theory in The

Modern Synthesis captures perfectly the economic problem statement for two players competing

against each other tooth and nail for ultimate dominance over limited resources, it does not fully

reflect the conditions that come into play when two players cooperate to compete together against

the generally destructive quality of time and space (concrete signals degrade over time and space) to

all concrete representation of information (the Second Law of Thermodynamics).

The game theoretical formula 0 ≤ potential * probability ≤ 1 correctly describes the game theoretical

situation when the two players compete against each other for the same resource, but the

normalization we examined earlier where both values will take a value between 0 and 1 and are then

multiplied is not completely compatible with the problem statement regarding two players with

economic incentive to cooperate with each other to maximize the sustenance of the resource and

who share a communication channel with each other as well as some minimal level of rationally

based trust .

When two players evaluate the same scenario, the current application of game theory by biologists

would indicate that the two players (unless they wanted to be sorted out of existence by natural selection) would have to compete for the economic or energy potential represented by that single

situation such that they must always fight for any available resource to the bitter end in a kind of 

Darwinian race to the bottom by ever more utterly selfish competitors. Yet, we know that something 

must be wrong with our current application of game theory as biologists can testify that we do see

lots of examples of cooperation in nature (especially when it comes to helping the young of 

competitors) which contradicts the stark conclusion that game theory and Darwinism seem to leave

us with where only the most selfish can win and all examples to the contrary are in the end doomed

to be polished away over time like the suboptimal sweet tooth.

It will be the task of this paper to show that the current application of game theory to nature with The Modern Synthesis is not complete as it has not been fully combined with communication theory

by generalizing the dimension of trust and has not fully taken the general selection pressure implied

by the Second Law of Thermodynamics into account. When we do, we will see that a normalization

compatible with such a problem statement must let improved levels of reliability or trust (in

measurement of communication channels) improve the general combined potential for value

realization of two communicating and cooperating players in such a way that full reliability in this

relationship realizes a greater total energy value than is realized by full certainty in a relationship of 

pure competition between completely selfish players as their combined value potential cannot be

realized.

Kin selection has been formalized in The Modern Synthesis in Hamilton’s rule, where r stands for a

normalized value indicating how genetically related two organisms are, B stands for the economic

benefit of a cooperative behavior and C stands for the cost of that behavior in the expression rB > C .

What this expression shows us is that when organisms are closely enough related, the benefit B can

become greater than the cost C such that two related organisms can benefit more from cooperation

than from competition. This means that the maximum derived economic value (as in the maximum value for potential * probability) for cooperators (at least in the case of kin selection) can be greater

than the maximum derived economic value for competitors, such that:

 potential * probability for cooperators ≥ potential * probability for competitors. 

Hamilton’s rule thus proposes that the potential economic value for two cooperating organisms (that

are related) is greater than for two competing organisms, such that if we let 0 ≤ C ≤ 1 and 0 ≤ r ≤ 1 

and we know that rB can take a value greater than C then we also know that B must be able to take a

value greater than 1. If we want to normalize such that all terms can only be 1 at most, we would

have to add the terms r and B rather than multiply them, such that:

0 ≤ r ≤ 1 

0 ≤ B ≤ 1

0 ≤ C ≤ 1

0 ≤ r+B ≤ 2
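As a hedged illustration of the argument so far (the function names and the example numbers are hypothetical, chosen only to mirror the terms above), Hamilton's rule and the additive normalization could be sketched in Python as:

```python
def hamilton_favors_cooperation(r: float, B: float, C: float) -> bool:
    """Hamilton's rule as stated above: cooperation pays when r * B > C."""
    return r * B > C

def additive_cooperative_potential(r: float, B: float) -> float:
    """The additive normalization proposed above, with r and B each in [0, 1],
    so that the combined term lies in [0, 2]."""
    if not (0.0 <= r <= 1.0 and 0.0 <= B <= 1.0):
        raise ValueError("r and B must lie in [0, 1]")
    return r + B

# Full siblings share r = 0.5, so a benefit worth three gene copies (B = 3)
# against a cost of one copy (C = 1) satisfies rB > C.
print(hamilton_favors_cooperation(r=0.5, B=3.0, C=1.0))   # True
print(additive_cooperative_potential(r=0.5, B=1.0))       # 1.5
```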

The proposition of this paper is that the r for the level of genetic relatedness in Hamilton’s rule can

be generalized into t standing for the level of rational grounds for trust between two organisms and

where being related is but one way they could have rational grounds for such trust:

0 ≤ t+B ≤ 2

When communication between cooperating players takes place we will see that the potential

economic or energy value - potential in the game theoretic formula potential * probability - of a

single situation involving two cooperating biological observers must from an information theoretical

perspective take a value between 0 and 2 (if we let the maximal value for competitors remain 1

before full normalization) to fully represent the additional communication theoretical potential for

energy efficient information representation. The probability in the game theoretical formula would

then stand for the reliability in the communication channel and could take a value between 0 and 1.

Thus to capture that situation we could keep using the existing formula potential * probability with

multiplication but where potential can be up to two. We can also normalize the formula so that

potential takes a value between 0 and 1 but in that case we should also use addition rather than

multiplication between potential and probability as in derived value = potential + probability, so that

the maximum value can still become 2. By doing this we capture the relationship between

cooperators but on the other hand we no longer describe the constraints for competing players. By

introducing a third term reliability (substitutable for trust as in the rational trust in the

communication channel between cooperators) the formula becomes applicable to both problem

statements (competition and cooperation). We can then capture the constraints for cooperators and

competitors with the formula derived value = (potential * probability) + reliability ≤ 2.

This paper will let relevance stand for the economic value of a situation derived by the game

theoretical formula potential * probability and let reliability stand for the level of rational trust two

cooperating players have in their communication channel. We let E stand for the derived economic

or energy potential of a situation for two potentially cooperating players with the rational level of 

trust between them expressed by the reliability value and go on to examine in detail the following

proposition:

0 ≤ Potential ≤ 1

0 ≤ Probability ≤ 1

0 ≤ Reliability ≤ 1

0 ≤ Derived Economic Value for Competitors = Potential * Probability ≤ 1 

0 ≤ Relevance = Potential * Probability ≤ 1 

0 ≤ Derived Economic Value for Cooperators = (Potential * Probability) + Reliability ≤ 2

0 ≤ E ≤ Relevance + Reliability ≤ 2
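A small Python sketch of this proposition (again with illustrative names only; the numbers are arbitrary examples, not measurements) that contrasts the bound for competitors with the bound for cooperators:

```python
def relevance(potential: float, probability: float) -> float:
    """Relevance = potential * probability, bounded by [0, 1]."""
    return potential * probability

def derived_value_competitors(potential: float, probability: float) -> float:
    """Derived economic value for competitors: at most 1."""
    return relevance(potential, probability)

def derived_value_cooperators(potential: float, probability: float,
                              reliability: float) -> float:
    """Derived economic value for cooperators: relevance plus the
    reliability of the communication channel, bounded by [0, 2]."""
    return relevance(potential, probability) + reliability

# With full potential, full probability and a fully reliable channel,
# cooperators realize 2 where pure competitors realize at most 1.
print(derived_value_competitors(1.0, 1.0))        # 1.0
print(derived_value_cooperators(1.0, 1.0, 1.0))   # 2.0
```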

In strict game theoretical (or even “decision theoretical”) terms, the problem under discussion has

been demonstrated in the so called dictator game. In this game one experiment subject A (the

“dictator”) is given some free money, say $50, and is then told that they can leave right away with

all the money or decide to share some of it (how much is up to subject A) with some other

experiment subject B sitting in another room.

It has been experimentally confirmed that subject A will reliably share some dollars with subject B if 

they are related in roughly the proportions that kin selection would predict. But people have been

shown to reliably share (fewer, but still) dollars with people that are not related to them as well.

Furthermore, and quite contrary to what we would expect from trying to explain this by some buddy-

system such as that of the bats, people in the role of subject A will continue to give some money to

subject B even when subject B would be none the wiser about the greed of subject A (because they

are not told about the whole setting of the experiment).

Again we could explain this in the same way as the sweet tooth. Maybe humans have a buddy system in the manner of bats that ultimately relies on us being able to identify cooperators from cheaters and

we are just not good enough at figuring out exactly when to stop being generous, such as to err on

what seems like the safer side. The game theoretical term for the idealization of players with perfect

economic strategy is homo economicus and from the perspective of The Modern Synthesis homo

economicus ought to play like a true dictator and never share any of their money unless subjects are

related or could find ways to punish the dictator for being greedy. But what this paper will

demonstrate is that we should expect such generosity rather than see it as something that optimal

players would shun, simply because by sharing some the potential for full realization of what they 

keep goes up.

Subject A would then not only be predicted to share with subject B when prompted by

experimenters. We would then predict that if we introduce no subject B but secretly follow subject A

as they leave they would often go on to share some of their windfalls with some unfortunate soul

desperate enough to have to beg for food outside some shopping mall (and we do know from the

example of catastrophe funds that people will give of their hard earned money anonymously to

people in need that they will never even meet). This would be rational behavior by the test subject

because it is a small investment that reduces the risk that the beggar will be tempted or even have to

turn a thief for survival and go on to steal more or all of the windfalls. More abstractly, the argument

by this paper will be that with a small sacrifice – even an anonymous one - a rational player can

reduce the overall selection pressure from the surroundings allowing the player to keep more of 

their energy that they can go on to reinvest in lucrative strategies. By sharing some, the value the

dictator keeps can be put to more efficient use.

This general argument is not new but the contribution by this paper will be to try to formalize this

logic by replacing the r for relatedness with t for rational trust (or reliability) in Hamilton’s rule and

show how the behavior we can see in the dictator game and by animals kind to youngsters is rational

and in line with what homo economicus should do. We must note that this does not mean that lions

are not rational as such – they are indeed “leo economicus” in their niche - only that they have

become confined by their predatory lifestyles to a more sensitive (smaller) niche than an animal that

has learned to cooperate better with its environment.

This is in line with the general energy budget for the earth as a system. The amount of vegetation on

earth is constrained by the amount of energy the earth gets from the sun (mainly). The amount of 

vegetarians is in turn constrained by the amount of vegetation (vegetarians are essentially parasites -

but possibly beneficial ones – on the vegetation). Finally the amount of carnivorous parasites (bats,

lions) living off the vegetarians (and to some extent each other) is constrained by the amount of 

vegetarians such that the amount of carnivores must ultimately always be smaller than the amount

of vegetarians (all these amounts are measured in tons of biomass). We should finally note that

carnivorous parasites are also sometimes beneficial - even in the case of predators that can help

remove sick individuals that could otherwise be dangerous to keep around for a herd (unless of 

course the “herd” has sufficient technology to help its sick in relatively safe ways).

The conventional wisdom of game theory will continue to hold such that uncertainty should reduce

overall economic value of a situation and we also agree that it has been correctly normalized with

multiplication for the problem statement as it has been posed by The Modern Synthesis. The solution

to the paradox, in a nutshell, is that the way we have applied game theory to nature we have only

allowed it to take one cake into account where in reality the universe will provide the energy

equivalent of two cakes, such that two cooperators could have one cake each.

The Second Law of Thermodynamics will inevitably nibble away at both cakes to some extent, but that

extent can be minimized in direct proportion to how well the cooperating agents communicate based

on rational reasons to trust each other. Where we may have seen the Second Law of Thermodynamics as the ultimate incentive for all organisms to distrust and compete against each

other, this paper will argue that it provides an incentive for all organisms to try to find rational ways

to build bonds of trust in order to cooperate and organize - in the end the common cause becomes

to help out with sustaining information that is valuable for everyone in the most efficient manner.

Such systems could become sensitive to the Tragedy of the Commons which roughly states that when

nobody really owns a valuable resource (and sometimes this becomes the feeling when everyone 

owns a resource) the result can become that nobody maintains it. Still, we also note that trust-based

cooperation could work whenever some information would represent an important benefit from an

economic perspective to anyone with access to it, and then information could also be at least somewhat reliably maintained. A certain destructive effect in accordance with the tragedy of the

commons could perhaps always be expected but it should at least be tempered such as to be to

some extent inversely proportional in its strength to the actual value of the sustained information to

the participants in the system.

What this paper will go on to show as carefully as it can is that communication theory will dictate

that the combined derived energy value of a situation that includes two cooperating observers must

be logically described as maximally 2 if 1 continues to stand for the maximal energy of the same

situation for competitors. With reduced uncertainty in their communication channel two cooperating

agents with rational reason to trust each other could outcompete two selfish competitors by performing more efficiently than has so far been formally reflected by The Modern Synthesis except

in the specific case of kin selection (the potential for strategies such as the bat buddy-system has

not yet been fully formalized, which is what this paper will try to do).

We can capture the essential proposition of this paper with regards to The Modern Synthesis as

follows:

•  The Modern Synthesis has correctly formalized the logical limitations showing how

irrationally naïve players become punished by natural selection. This paper will complete

that picture by formalizing the corresponding logical limitations to the effect that the irrationally selfish players become punished by natural selection. This does not lead us to

conclude that we are doomed to witness the self-destruction of all biophysical systems

because all players would be forced by natural selection to become irrationally selfish. Rather

we conclude that we must expect more cooperation in nature than The Modern Synthesis

would have predicted so far and that in turn matches our actual observations in nature.

One way to frame the point this paper makes is to begin by observing that game theory has correctly

analyzed the problem of the Prisoner’s Dilemma as per the conditions defined in the problem

statement. It is obvious that a “solution” to the dilemma would be if the prisoners were allowed to

talk to each other to coordinate their strategies (combining game theory with communication

theory) – the only reason this is not a good solution to the dilemma as posed is that it has been

explicitly forbidden by the problem definition.

This paper will not question any of the fundamental conclusions from game theory or

communication theory. We will only examine the claim that we have applied the conclusions in

slightly the wrong way to nature if we assume that all organisms are ultimately locked into prisoner's dilemmas that could only reliably be mitigated by kin selection. We will see that the important

aspect for organisms in nature to be able to cooperate is not that they are able to identify each other

as close relatives but that they can communicate and that they have any rational reason (a reason

better than chance) to trust each other.

The Second Law of Thermodynamics gives organisms an excellent reason to want to trust each other

but the ultimate question that this paper must go on to address is if they really can by any other

means than kin selection. The generalization that this paper will make is to show that kin selection is

 just one way that organisms can find a rational way to trust each other but really any rational way

will do and we will go on to examine in careful detail that there is at least one more generally available and potentially cost-effective way to build rational trust between potentially

communicating organisms other than that of being closely related: every time two principal

opponents enter killing range with a valuable resource in the pot but don’t engage in combat, they

have sent a reasonably inexpensive but potentially very valuable signal to each other that they are

trustworthy and can be cooperated with. In essence it is a small sacrifice for a potentially very big

gain, which makes game theoretical sense.

The basic idea is that when one civilized player says “after you” and the other equally civilized player

reciprocates by saying “no after you” they have each signaled that they are both good sports who

could be potentially useful business partners (as long as they are able to pass through the door eventually and do not become locked in infinite reciprocal recursion). We will investigate this

proposition in some length from a game theoretical and communication theoretical perspective in

the section about The Dimension of Trust .

While the information theoretically motivated energy proposed to be available by this paper has

already been identified formally by The Modern Synthesis in the form of kin selection and less

formally in the explanation to observations such as the buddy-system of vampire bats, the ambition

of this paper is to show that the economic argument by kin selection can be abstracted into a

generally applicable rationale such that the additional economic (energy) potential already identified

as available under kin selection becomes more generally available in nature than had so far been formally reflected.

Until fully combined with communication theory, game theory under The Modern Synthesis would

lead to the idea that two Darwinian agents would always have to fiercely struggle for any energy or

be outcompeted by those who did, such that no cooperation would ever be expected in nature

except between close kin. But as biologists can confirm, nature shows us enough examples of 

cooperation between definitely very distantly related organisms to know that this application of 

game theory must be incomplete, and finding the bug in the argument would show us the reason

that we should be able to predict the prevalence of cooperators over purely selfish competitors in

nature.

Thus far cooperation between organisms not closely related has presented several unexplained

mysteries of nature to biologists and mathematicians alike as seen from the perspective of 

Darwinian Evolution by Natural Selection and game theory (with an ultimately unsatisfying

accumulation of sweet-tooth-like explanations where it seems like natural selection should have

pruned away more of such behavior) but this paper hopes to derive the information theoretical

rationale for why we should expect to see results fully in accordance with what biologists observe

every day in the wild.

The general logic of the argument that this paper strives to capture formally can be illustrated as

follows. Consider two water tanks, each with a pair of divers in it. In each tank there would be a

bottle of air. If one tank has two cooperating divers sharing their air and the other tank has two

mortal enemies fighting over their bottle, then the sharing divers would last longer underwater as

the enemy divers spend much of the air in their bottle on the struggle for that air. Should one fighter

win, the losing diver would be replaced by another diver (in order to reflect the conditions of nature

well in our thought experiment, there should always be another contestant in line) such that the

diver fighting all comers for the air-bottle (as many as can fit in his tank) will always end up with

shorter time underwater in total than the diver who knows to share the air with the other divers who

are able to fit in his tank.

The sharing divers can also go on to make their air last even longer than it otherwise would by

further improving their communication (less air will be lost in little mistakes as the bottle is handed

around). As the communication between sharing divers is improved they approach the full value

realization of the air (energy) in the bottle such that when no divers fight over air and have learned

optimal sharing techniques they could theoretically utilize all of their air while the fighting divers

could utilize at most half of it – or in other words sharing divers can access twice as much of the

available air in their bottle compared to the same number of fighting divers in a “battle-tank”. We

will examine carefully in this paper the proposition that the relation should be that fighters only get

up to half of the air or energy represented by any situation, but at any rate we should see

immediately that the sharing divers could utilize more of their air than the fighters.
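A toy numerical sketch of the diver argument (the air volume and breathing rates below are made up purely for illustration, not derived from the paper) shows the halving effect directly:

```python
def minutes_underwater(total_air: float, breathing_rate: float,
                       fighting_overhead: float = 0.0) -> float:
    """Toy model: how long a pair of divers lasts on one bottle.

    total_air         -- liters of air in the shared bottle
    breathing_rate    -- liters per minute the pair consumes by breathing
    fighting_overhead -- extra liters per minute burned struggling over the bottle
    """
    return total_air / (breathing_rate + fighting_overhead)

bottle = 2000.0  # liters; an arbitrary illustrative figure
print(minutes_underwater(bottle, breathing_rate=40.0))                          # sharing pair: 50 minutes
print(minutes_underwater(bottle, breathing_rate=40.0, fighting_overhead=40.0))  # fighting pair: 25 minutes
```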

If we think of time spent in the tank as a dangerous situation and the longer you are able to stay alive

underwater the greater the chances become of finding rescue (as with the old example of the frog

treading milk in a bucket until it becomes cream and the frog can jump out) we can see that

cooperation could directly translate into improved survival fitness values for the cooperating

organisms - two frogs both treading milk could get out twice as fast. The particular metaphor with

the frogs does break down when we realize that fighting frogs would probably also turn milk into

cream, but the principle would continue to apply generally to situations where fighting would not

happen to have the beneficial side effect of improving chances for rescue. In the case of divers we

could easily see that if the rescue teams came to find a tank of still surviving sharing divers and a tank

of dead battle divers, the sharing divers (and their genes including any genes improving the tendency

towards sharing for the divers) would have won out.

The basic game theoretical proposition this paper will examine could thus be summed up as: Fighting

over a resource means the fighters use up part of that resource in the fight (they need its energy to

fight for it) which is not the case when sharing the same resource. As game theory cannot in its

application to nature by The Modern Synthesis yet accurately predict all cooperation in nature, we

know that application must be incomplete somehow.

It does not boggle the mind to picture the proposition that divers sharing oxygen could outlast divers

fighting for their equal amount of oxygen. No new air (or energy) has been entered into the

equation; it is just a matter of energy being lost to fighting, which game theory in combination with Darwinism and genetics would have assumed had so inevitably to be lost that it should not be

represented as part of the potential economic value of the situation - unless the divers were related

or were able to devise some cunning system ultimately based on identification and punishment of 

traitors in the manner of the blood-sharing bats. Under the essentially circular argument that

because half of energy must always be lost to competition for that energy, an ultimate incentive is

 provided for all organisms to compete for all energy such that half of it must always be lost to

competition, the result for The Modern Synthesis has been that half of the potential energy of any

situation not involving close kin has yet not been formally included in its model.

In terms of poker: When two poker players each put a dollar into the pot, the total value in the pot is obviously two dollars. Game theory would correctly observe that assuming both players play to win

the pot, the derived value of the pot from the economic perspective of each player becomes the

total pot value minus what that player themselves put in. That is, if two players have put a dollar

each into a pot, the value of that two-dollar pot from the perspective of each player is just one dollar.
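The same point can be put as a few lines of illustrative arithmetic (the dollar figures simply restate the example above; the variable names are chosen here for clarity):

```python
ante = 1.0          # each player puts one dollar into the pot
pot = 2 * ante      # total value in the pot: two dollars

# Competitors: each player's derived value excludes their own contribution.
value_to_each_competitor = pot - ante      # one dollar

# Cooperators (as proposed in this paper): the pot is treated as a shared
# account whose full value both players try to sustain as long as possible.
value_to_cooperating_pair = pot            # two dollars

print(value_to_each_competitor, value_to_cooperating_pair)  # 1.0 2.0
```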

What this paper will do is to examine the claim that The Modern Synthesis has applied game theory a

little too strictly. By only formally describing the potential for reliably trust-based cooperation in kin

selection, it does not fully take into account other potentially equally rational ways to build trust

between non-related agents. The Second Law of Thermodynamics would in turn always provide the

general incentive for potentially communicating players to establish such rational trust as it forces

them (in direct proportion to their levels of rationally trust-based cooperation) to regard “the poker

game of life” in such a way that the challenge becomes to maintain the economic value of the pot

(now seen as their shared account) so that it keeps being worth two dollars as long as possible. The

basic principle has already been identified formally in the form of kin selection but we will go on to

generalize the principle to rely on a dimension of trustworthiness of the data in communication

channels between any cooperating agents.

The Modern Synthesis has correctly used game theory to observe that the selfish cynic wins over the

naïve altruist in nature but has not yet fully reflected how the cooperating realist goes on to win over

the selfish cynic by means of the cooperating realist realizing that in the end more total energy can

be put to good use by an organism that shares some instead of fighting bitterly for all of it. It is

rational to be a little bit trusting and also to be a little bit generous as long as too great risks are

never taken and there are realistically realizable rewards greater than the cost of the generosity to

be reaped by such behavior, allowing enough leeway in nature to let such relationships as that

between a dentist bird and a crocodile to evolve smoothly without the presupposition that either

birds or crocodiles at any point in time had to act irrationally and without even assuming that one is

ultimately a parasite on the other (but then this particular case would on the other hand still suppose

perhaps unreasonably smart crocodiles, given how smart we think crocodiles to be in general).

The dolphins could then be explained as a sort of sweet tooth phenomenon but not based on

playfulness so much as a steady search for potentially cooperating partners to help out with mutually

improving overall fitness for the partners and the dolphins alike. Should sailors decide to return the

favor by not hunting as many dolphins (no sense in killing free rescue teams until hunger is extreme)

then the dolphins have already won out in their behavior (one or two killed by sharks in rescue

operations are weighed against many more killed by fishers with no sympathy for dolphins). Even if 

sailors would not reward them much for their rescue, it could still be a form of generally rational

behavior for dolphins to demonstrate to each other what great cooperators they are if their general

pathway to success in their niche has been heavily based on exploiting the power of cooperation. If 

their dominance over sharks is great enough, saving the prey would be an opportunity for dolphins to

say “after you”, “no after you” to signal to each other what great potential business partners they

are.

In the case of animals sparing or even helping the young of others, this is a small sacrifice they could

make to indicate that they are cooperative enough to be able to share a niche rather than requiring

total dominance to survive. A slight tendency towards such displays could in turn reduce the overall

selection pressure on that generous animal from its surroundings in the form of other competitors

then having reason to make the worse cooperators (such as the lion) their main focus when spending

competitive energy. We can see that rather than being a brilliant strategist as such, the lion can only

afford to eat the young of its competitors due to its dominance in its niche. We can furthermore see

from nature how that in turn makes lions more sensitive as a species.

When times are tough as with a drought, at first the lions seem better off as they can keep eating the

grass-eaters, but the energy equation of lions depends on there being more grass-eaters than lions

around, so in the end the last lions would perish before the last grass-eaters. The grass-eaters would

in turn find life easier without so many lions around and potentially be able to regain their footholds

quite fast if the drought subsides, at which point lions could have a niche again. We note that the

biggest disaster from the perspective of grass-eaters and lions alike would be if the last lion

somehow managed to eat the last grass-eater because then it could be quite a while before there

would be any grass-eaters or lions on the stage again (parasites including predators must take care to

never suck their hosts totally dry).

To put the proposition of this paper in perspective: The Modern Synthesis has so far seen the lion as

an example of the ultimately selfish and also ultimately rational player. Compared to lions, the

dolphins have seemed to be suboptimal players but this paper will propose that dolphins have an

ultimately more successful strategy than lions by displaying more cooperative behavior which results

in both a more stable niche as well as one that can realize more energy potential. In the end we must

see the dolphin as optimally rational and the lion as playing the suboptimal game (the lion only plays

an optimal game for someone who for some reason is doomed to be utterly selfish, but cooperators

would eventually win over lions).

A way of relating the argument presented by this paper to the often less biologically minded realm of 

dead matter physics will be as follows. Current game theory correctly describes the situation of two

ping pong players playing against each other such that in combination with a physical model (such as

The Standard Model) it could ultimately predict the constraints for how the ball could move between

two optimal players that were both trying to win the game. But it is a misapplication of game theory

to nature to assume that all ping pong players must try to win over each other. Game theory plus

physics would not by themselves correctly describe the constraints (predict the possible paths for the

ball) for the situation of two optimal ping pong players trying to keep the ball in play as long as

possible. We can correctly describe such a situation, and we will still have to apply game theory and

physics to capture the essential problem with keeping the ball in play even for optimal players. But

we will also have to invoke communication theory to find the ultimate constraints on the possible

paths for the ball in this type of situation.

Relating to how The Modern Synthesis has been applied to nature so far, it has roughly stated that in

the long run we should never be able to see ping pong balls move reliably in the cooperative rather

than the competitive way except if the players were closely related – but we do see phenomena

corresponding to such reliable cooperation based motion of ping pong balls in nature much more

often than we would expect. Turning back to the world of physics, biologists are ultimately presented

with an information theoretical problem in very much the same way that it would become

challenging to our model of physics from an information theoretical perspective if the measurement

of the superluminal neutrino were to be confirmed.

Our currently best application of physics tells us we should not be able to see repeatable measurements of such a thing as a superluminal neutrino. But if some recent experiments are

confirmed, physics may stand before a conundrum as deep as that of the biologists. It will be the

ambition of this paper to show that both of these mysteries have the same information theoretically

based solution.

We note that to the physicist, the biophysical discussions in this paper are meant to illustrate that we

can mathematically derive the ultimate constraints to a physical model from a biophysical model.

The connection to the biophysical discussion around cooperation will be to see that from an

information theoretical perspective, physicists have constrained the available energy in the universe

too harshly by a factor of two in essentially the same way as biologists have assumed that the constraints for two optimal selfish players both doing everything in their power to win over the other

would describe the ultimate constraint on the shape of behavior for organisms in our world. The

pure physical discussion will end up revealing a non-biophysical way to realize the additional

potential other than by strict biological cooperation by showing how statistical limitations to motion

can be overcome by redundancy.

For a general example of how this paper will relate biophysics and standard physics to each other, we

could start by observing how we know from a physical perspective that if we conclude “in a

biophysical way” from Darwinism and game theory that it would be physically impossible for a ping

pong ball to move in the path it would between two cooperating players, then we must have made a mistake somewhere. We know such a proposition is not true, as we can see that it is physically possible for the ping pong ball to move the way it would for cooperating players because this

happens whenever a parent teaches their child to play ping pong.

The question becomes if nature would allow two players to cooperate (and we know from

measurement that this seems to be the case between animals in the wild) such as to allow this type

of motion for a ping pong ball in practice between non-related players. The answer is that whenever

the ping pong ball represents a valuable resource for both players it makes more sense for each of 

them to try to cooperate to keep the resource around as long as possible (keep the ball in play)

rather than for either to try to risk the resource by starting to struggle for it (use some of the air in the bottle to wrestle for dominance over it).
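To make that incentive concrete, we can look at a toy expected-value sketch in Python (the payoff numbers are purely illustrative assumptions, not values derived from the model): a resource that keeps paying off for as long as it survives is worth more to players who keep it in play than to players who grab a bigger share per round but risk destroying it.

# Toy expected-value comparison (illustrative numbers only): keep a shared
# resource "in play" versus fight over it and risk losing it each round.

def expected_total_value(p_survive_per_round, value_per_round, rounds):
    # Expected payoff summed over rounds; the resource must survive every
    # round up to and including the current one to pay off in that round.
    total, p_alive = 0.0, 1.0
    for _ in range(rounds):
        p_alive *= p_survive_per_round
        total += p_alive * value_per_round
    return total

cooperate = expected_total_value(0.99, 1.0, 100)   # careful play, small share per round
fight = expected_total_value(0.80, 1.5, 100)       # bigger share per round, resource often lost

print(f"cooperate: {cooperate:.1f}, fight: {fight:.1f}")
# With these assumed numbers, cooperation comes out far ahead (roughly 63 versus 6),
# which is the sense in which keeping the ball in play can dominate struggling for it.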


As already mentioned, game theorists and biologists (including Darwin himself) have already realized

one way this mechanism would work and have correctly included this aspect in their models. The

concept called kin selection describes how, in a situation where three of four siblings are in a life-threatening situation and the fourth could save them at the expense of his own life, it would make evolutionary sense (game theoretical sense from the perspective of his genes) for that individual to sacrifice his life to save three or more individuals known for a fact to be his siblings. This follows because full siblings share on average half their genes, so the self-sacrificing sibling's genes would thereby rescue 3 * 0.5 = 1.5 expected copies of themselves (which is all genes "really care about"), which is more than the 1 copy contained in the unlucky altruist. Darwin did not know about genes but identified the general

possibility of kin selection which was later confirmed to work in practice for genes after Mendel had

identified them (as Darwinism was eventually combined with the discovery of genes into The Modern

Synthesis). So we could expect to see cooperation in nature between closely related organisms that

are able to identify each other as close relatives. But we also see cooperation in nature that cannot

be explained directly this way.
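The arithmetic of the previous paragraph is an instance of what is usually written as Hamilton's rule: an altruistic act is favored when r * B > C, with r the relatedness, B the benefit and C the cost counted in gene copies. The sketch below simply restates the sibling example in those terms; it introduces no numbers beyond those already used above.

# Hamilton's rule: altruism is favored by selection when r * B > C.
# r = relatedness, B = benefit to recipients, C = cost to the altruist,
# all counted in expected copies of the altruist's genes.

def altruism_favored(relatedness, recipients, cost_in_copies=1.0):
    benefit = relatedness * recipients          # expected gene copies rescued
    return benefit, benefit > cost_in_copies

print(altruism_favored(0.5, 3))   # (1.5, True)  -> dying to save three full siblings is favored
print(altruism_favored(0.5, 1))   # (0.5, False) -> dying to save one full sibling is not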

The answer to this mystery of biology and game theory, as suggested by this paper, is that the

general pressure on any concrete representations of information imposed by the Second Law of 

Thermodynamics is enough to provide the incentive for generally competing organisms to limit their competition and instead organize themselves into cooperative systems, constrained ultimately by game theory in combination with communication theory and the limits to rational trust, such as to keep valuable resources around for as long as possible, to the ultimate benefit of all the cooperating organisms.

Biology has modeled the selection pressure that organisms place on each other and concluded that

all organisms are doomed from this to eternal competition. But it has not fully drawn the conclusion from the additional selection pressure that the general randomness described by the Second Law of Thermodynamics places on all organisms as an aggregate, which provides them with an ultimate

incentive to cooperate with each other in the general competition against rocks falling from space

(and other random events that are not the direct results of organisms competing).

We must at this point reiterate that no new energy will be added over time to this model of the

universe that was not already there such that our model respects the very most fundamental

expectation on all our models – the constant total sum of mass and energy in the universe.

However, we will go on to see that this fusion of game theory and communication theory will have purely physical implications that may seem more speculative to the cautious reader than the idea that sharing divers live longer in a water tank and that the application of game theory to nature within The Modern Synthesis has not formally captured the entire potential for the divers to reliably trust each other except by being closely related. In short, we will see that communication theory not only explains how cooperators can fare better in nature, it will also potentially provide an explanation of the possibly discovered phenomenon of the superluminal neutrino.

The basic explanation of the energy in our model that seems missing from our currently most

generally accepted physical models is to see that the photon or the space particle would have a

minimal rest mass of 1 in our model.


Just as biologists have been on to the rationale for cooperation in the form of kin selection but have not gone so far in The Modern Synthesis as to conclude that there must exist a generally exploitable rationale to cooperate even for non-related organisms, physicists have long known that some minimal

rest mass could be associated with the photon and even empty space (which are seen as the same

thing in our model, mostly because we have discovered no logical reason to distinguish between the

concepts). But without the discovery of any generally exploitable rationale for assuming that the rest

mass of a photon should be greater than 0 (no identifiable way the universe could “exploit” or be

potentially impacted by such additional energy in a “rational” or repeatable way) The Standard

Model has not gone so far as to make any such assumption.

However, we will see in this paper that information theory supports this idea of the minimal rest

mass or energy of space and photons and we can see ways that the universe could be impacted by

this extra energy in ways that could be detectable by experiments (directly in experiments with the

superluminal neutrino but also more indirectly with another experiment described in the section

about Dark Energy towards the end of this paper). This will allow our model to consider twice the

energy than had hitherto been assumed to be available in the universe. The superluminal neutrino,

although not yet experimentally confirmed, would be seen as a potentially “rational” (repeatable and

compatible with information theory) way to “exploit” (potentially measure) the extra energy of this

rest mass of space and the photon predicted by this model, where redundancy (not all neutrinos reach their destinations) can be used to overcome the reliability constraint in a non-biological way.

Thus, it will be a prediction of this paper that the superluminal neutrino can be experimentally

confirmed. At the same time it is not a strong prediction that we must be able to provoke such

effects in earthly laboratories or even in our physical universe (other limitations not discovered by

this paper could still apply). The strongest physical prediction that this paper will make will be to

state that with fine enough measurement we should be able to see accelerating increased redshift

between objects on earth that we know are maintaining a constant relative distance.

While measurements to the effect of superluminal motion have recently been made, they have yet

to be confirmed. The model derived in this paper will make the observation that such superluminal

motion would not be impossible from an information theoretical perspective as a solid particle should

logically be able to travel up to nearly the speed of light through absolute space in our model of the

universe that includes an absolute reality with the result that two solid particles could approach each

other with a combined speed of up to 2 * c.

The total energy and mass will remain constant over time in our model – we will only come to see that information theory reveals to us more of what is already there. The physical constraints we have

thus far seen as compatible with and even to some extent implied by information theory are simply

too conservative by a factor of two compared to what becomes allowed when communication (or

redundancy) is taken fully into account. In just the same way that game theory without

communication theory can only be used to constrain the predictions for the path of a ping pong ball

between competing players, the constraints that information theory has seemingly implied on physics

have not completely taken communication theory into account such that we have not really included

the full range of possibility for the ways a ping pong ball would be able to physically move – either

because biological agents communicate or because there are simply enough ping pong balls that if 


we allow some to get lost on the way we could see others moving faster than we previously thought

possible.
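The purely statistical half of that claim can be sketched directly (the per-carrier probability below is an arbitrary illustrative assumption): if each individual carrier arrives with probability p, then sending N redundant carriers and accepting that most are lost still makes it very likely that at least one gets through.

# Redundancy versus reliability: probability that at least one of n
# independent carriers arrives when each one arrives with probability p.

def p_at_least_one_arrives(p_single, n_carriers):
    return 1.0 - (1.0 - p_single) ** n_carriers

for n in (1, 10, 100, 1000):
    print(n, round(p_at_least_one_arrives(0.01, n), 4))
# 1 -> 0.01, 10 -> ~0.096, 100 -> ~0.634, 1000 -> ~1.0: reliability of the
# aggregate can be bought with redundancy even when each carrier is unreliable.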

In order to fully discover how something that has so far seemed like an information theoretically

implied constraint on the physics of our universe could turn out not to be imposed in such a way

after all, we must begin by returning to the world of theoretical biophysics.

Again, real world examples of biophysical agents include frogs, humans and vacuum cleaning robots

as well as much simpler agents such as the original ancestor to life on earth. It is useful to keep the

picture of what we know about real life examples of biological and robotic agents in mind as we

mentally evaluate our general model of how mathematical abstractions of biophysical robot particles

must behave.

It will be a core assumption of this paper that the same type of constraints from game theory that

apply to all living organisms would by necessity have to apply to such robot particles as well. As we

go on to consider the mathematical possibilities constraining robotic particles with motors, sensors and controllers in their internal substructure, we should therefore verify the logic against what we

know from observations of biological organisms in nature in addition to verifying the general

mathematical consistency and correspondence to measurements by physicists.

As we know, all arguments in this paper will be based in information theory including game theory

and communication theory. They will thus be ultimately logical and mathematical in nature and must

hold in the purely mathematical context, such that a mathematician and not only a biologist could

evaluate the propositions of the model we are about to construct. It could be observed however that

for maximal confidence in the model it should optimally be verified by mathematicians (which

includes students of information theory), physicists including theoretical biophysicists, as well as biologists.

This means that even if deemed mathematically consistent, the model presented by this paper can

be scientifically falsified with measurements by physicists as well as by any biologist finding stable

patterns of behaviors between organisms in nature that would have been considered impossible by

the mathematical formula we will derive.

Replicating Agent Systems

The life (as in continued existence in functioning form) of an agent depends on it being able to

maintain its internal structure of sensors, motors and controllers. Luckily for it, it has an internal

system of sensors, motors and controllers to help it do so. Interactions with the environment in the

form of surrounding solid particles could well result in degradation of the internal structure (the

information content or signal) of the agent, so the universe is a dangerous place for it. It may become

more dangerous still when other solid particles can also be arranged into agents, even though this

paper will do what it can to prove that some of those agents can represent opportunity in the form of 

cooperative partners.

It follows that those agents that happen to have their controllers connecting their sensors to their

motors in such a way as to improve their chances of maintaining their internal structures will become

a more frequent component of the universe over time than those with bad controller strategies, as

agents with bad strategies would perish while those with better strategies would stay around.


Among the agents there is a special subset of possible machines that have the capability of building

replicas of themselves. We call this group of agents the replicators and we note that whenever we

have imperfect replicators competing for limited resources with variable success depending on their

strategy we will see Darwinian evolution by natural selection of their strategies in the form of 

adaptations to optimize for local or global selection pressures. A real world example of a replicator is

a gene.
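The selection argument of the last two paragraphs is algorithmic enough to be stated as a toy simulation (all parameters are arbitrary assumptions chosen only to show the qualitative effect, and the single number per agent is a stand-in for its whole controller strategy):

import random

# Toy Darwinian selection of controller strategies. Each agent is reduced to
# one number in [0, 1] that we treat directly as its per-generation survival
# probability; survivors make slightly imperfect copies of themselves and the
# total population is capped to model limited resources.
random.seed(0)
POP_CAP = 200
population = [random.random() for _ in range(POP_CAP)]   # random initial strategies

for _ in range(50):
    survivors = [s for s in population if random.random() < s]        # selection
    if not survivors:
        break                                                          # extinction guard
    offspring = [min(1.0, max(0.0, s + random.gauss(0.0, 0.02)))       # imperfect replication
                 for s in survivors]
    population = (survivors + offspring)[:POP_CAP]                     # limited resources

print(f"mean strategy after selection: {sum(population) / len(population):.2f}")
# Starts near 0.5 and typically climbs towards 1.0 as better strategies accumulate.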

But even before we have replicators we could potentially have agents to be sorted in and out of 

existence by selective Mother Nature in accordance with their fitness functions as randomly

assembled agents competed for limited local resources, with the result that over time we could see a

trend towards seeing a higher number of more efficient agent machines. The difference is that

before replicators, better machines will have to appear fully formed by chance rather than via small

replication mistakes in already established designs.

Such spontaneous arrangements are increasingly rare the more complex they are and so we can

expect a dramatically higher discovery rate of improved agent machines once replicator machines happen to have been formed, but we do know that agents could arise by chance alone although

rarely as we are certain that they have arisen in the past. If replicators exist then we can deduce that

the first replicator had to have come about by chance. We do know that replicators exist, since we

and all other life forms on earth are examples of them (or at least our genes are) and therefore we

know that agents and replicators can form by chance. The one exception to the rule that the first

replicator must have come about by chance alone is that strictly speaking, the first replicator could

have been built by another agent that was formed by chance but that strictly was not a replicator as

it could not or for some other reason did not create a copy of itself.

Evolution of Natural Law

The question for an agent (including an agent that is a replicator) becomes how it can tune its

controllers so as to make optimal movements based on available information. We use the same

shorthand as is common among biologists, which is to talk as if agents and replicators had conscious

desires and goals, but what we mean is that agents or replicators that happened to act as if in line

with such desires and goals - basically the desire to preserve energy and the goal to do so optimally

from a game theoretical perspective - would improve their fitness and be rewarded by natural

selection by becoming more common.

Just as we don’t require the reader to seriously consider that all particles are robots - or perhaps

even worse, spaceships with little scientists in them – few biologists waste their time asking you to

honestly contemplate how genes could dream (apart from the indirect sense in which they could

affect our dreams). Of course, some agents, such as us, are indeed sentient, but it is only a special

case that is not necessarily relevant to the current discussion.

We saw that in our model of the semi-static world active particles would appear to move (or change

mass) in the local perspectives of each other as a result of uncertainty regarding each other

decreasing over time. The same basic premise for uncertainty constrains the perception of active

particles in our model of the fully dynamic universe as well. In addition to the movements and mass

they actually have, solid particles will seem to each other to behave in accordance with extra

components of relative movement, mass and time based on the uncertainty associated with

measurements, such that uncertainty levels decrease over time to the extent that active particles


stay stationary but increase again over time to the extent that the active particles move around

relative to each other.

Consider two stationary active particles that happen to be just next to each other. As we saw in the

examination of virtual gravity in the semi-static universe, the passing of time will mean that each

particle gains a better understanding of the position of the other, something that will look to the particles as if they were accelerating towards each other (or solidifying, or unfreezing in time) in a

gravitational way (with the acceleration we normally associate with gravitation). However, as we

know, they will not in fact collide since in our model they are both standing still in the absolute

reality.

But what if the active particles were actually agents and represented some kind of threat to each

other? As each instance of time would pass, the two particles would discover that they were yet a bit

closer to each other than they would have liked and so they compensate by telling their respective

motors to move them in the opposite direction, away from the other particle (we could consider this

to be the biophysical version in our model of a particle and what would for whatever economic reason be its “anti-particle”). If both particles behave this way and all the time keep accelerating

away exactly enough to compensate for how much closer they seemed to have gotten to the other

particle since the last time they looked, the effect will be the following:

The two active particles will in fact be accelerating away from each other in absolute space, but their

measurements of each other will tell each active particle that it is maintaining a constant distance to

the other. This is because uncertainty about the position of the other is decreasing at the same rate as the actual distance between them is increasing.
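A minimal numeric sketch of this bookkeeping, under the simplifying assumption that the uncertainty about the other particle's position shrinks by a fixed amount per time step rather than following the full virtual-gravity acceleration discussed earlier:

# Toy bookkeeping for the compensation strategy: each step the other particle
# appears to have come closer by exactly the amount of uncertainty that was
# removed, and the agent's motors move it away by the same amount.

initial_distance = 100.0       # absolute distance, arbitrary units
initial_uncertainty = 10.0     # how much closer the other might be than measured
shrink_per_step = 0.5          # uncertainty removed per time step (assumed constant)

absolute = initial_distance
uncertainty = initial_uncertainty
for step in range(10):
    apparent_approach = min(shrink_per_step, uncertainty)
    uncertainty -= apparent_approach          # the measurement gets sharper
    absolute += apparent_approach             # motor compensation: accelerate away
    perceived = absolute - (initial_uncertainty - uncertainty)
    print(f"step {step}: absolute {absolute:.1f}, perceived {perceived:.1f}")
# The perceived distance stays at 100.0 while the absolute distance grows to 105.0.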

If active agent particles behave according to a risk management strategy where they use their motors to try to maintain constant perceived distance to (or mass of or speed of time in) all other active

particles (at least the ones of their “anti-kinds”) and also have to rely on the measurements of their

sensors to do this, then the result will be that while virtual gravity is a perceptional distortion that

does not draw objects together in the absolute reality of our model, active agent particles and anti-

particles suffering from the illusion that objects are affected by virtual gravity will work to counteract

it (or they are not suffering but fully aware of their perceptional distortion but unable to improve

rationally on their navigation due to the limitations of their measurements). Thus such a model of the

universe would end up with particles displaying real behavior in line with the inverse of gravity as

active agent particles highly skeptical of each other would behave as if affected by anti-gravity.

 Active Agent Gravity

We have seen that the best that available information will allow active agents to do in our model if 

they represent threats to each other and so want to avoid each other is to follow a path of inverse

gravity. On the other hand, if agents somehow represent opportunity for each other, they should try

to invert their strategy of movement and approach each other as fast as rationally possible instead. In

this case, the constraints of uncertainty will allow them to move towards each other exactly in

accordance with how they would move if virtual gravity were real and were pulling them together.

So if we have two solid (active) particles A and B in our model of the dynamic universe where each

represents a threat to the other, a global observer would see them accelerate away from each other in absolute space in paths that were in line with inverse virtual gravity. If both represent opportunity


to each other they would accelerate towards each other in line with virtual gravity. If they have

different opinions on the matter such that A wants to move towards B but B wants to move away

from A or vice versa, then assuming both particles have the same motor capacity they will maintain

constant distance to each other in absolute space, with one chasing after the other.

We can see that the rate by which uncertainty evaporates for local observers puts an ultimate constraint on how fast active agents can rationally approach or escape each other in this model. It is

one of the claims of this paper that it follows logically that this constraint applies generally to active

agent particle based implementations of any force of nature where there is less than infinite speed of 

information regarding the economic opportunities or risks to the agents that constitute the rationale for the force.

Internal and External Mechanics

To contrast with the active agent particle based gravity implementation, we will consider an

alternative explanation for why we can see real particles around us behaving according to real gravity

that moves solid particles around. Real gravity is an aspect that must be possible to capture in our

model to match the requirement that we should be able to see correspondence between the

behaviors of concepts such as mass, space and time in our model and their matching real world

counterparts.

In our world we know that it is reasonable to assume the existence of active agents on the macro

level, because we ourselves (or our genes) are examples not only of active agents but also of 

replicators, live and in action. We could of course ask the question if it is possible to consider active

agents (and maybe even replicators) on the micro level, in the substructure of what we perceive as

solid particles, but again the reader will not be asked to consider this as a real world possibility but

only to model this possibility from a mathematical perspective to see what the derivable constraints

would be.

But if active agents are not implementing gravity, what is the alternative? Solid things in our world do

seem to approach each other by the formula of F = m1 * m2 / d^2, and either there has to be some magical “invisible hand of gravity” responsible for drawing them together, or the particles must have a purely mechanical reason, internal or external, for setting off towards each other. The term

mechanical will be used as a substitute for a solution that is allowed within the constraints of 

information theory and also implies a measure of causality or non-randomness, such that a non-

mechanical implementation would have to be a chaotic, random implementation that would only

work by chance – or reliably by “magic” - from an information theoretical point of view.

The active agent particles represent an at least mathematically possible example where the internal

mechanical substructure of particles can explain why we see real gravity effects. We will now

examine an alternative (perhaps easier to consider as a candidate for a real-world implementation)

where external mechanical effects are responsible for implementing gravity.

Gravitons

We have defined our model such that the reactive agents have only the very minimal rest mass

associated with the informational capacity of one position in space, so that when they bump into a

solid particle they will bounce off it but they will not push the solid particle away, only provide it with information about other solid particles. But what if the reactive particles had a little more mass - or


rather (as we want to stay with our definition of a reactive particle) assume that there were a kind of 

solid particle that had so very little mass as to be very nearly a reactive particle but still had enough

mass as to be capable of pushing solid objects around ever so slightly? In other words, we will now

have a discussion about the constraints regarding the least massive active particle (reactive particles

are not solid but seen as space or light particles in our model even if they have a minimal rest mass of 

1 to correspond to the minimal informational requirement on a position in space). If we have thought

of space or the photon as an example of a purely reactive particle, we could call this very nearly but

not quite reactive particle the graviton. 

There is an explanation of gravity known as Le Sage's theory of gravitation, originally proposed by

Nicolas Fatio de Duillier, stating that if a solid particle is bombarded from all directions in an equal

distribution by such very nearly but not entirely immaterial particles, the solid particle would stay in

place. But if two solid particles were next to each other, they would to some extent shield each other

from the bombardment with the result of them being pushed towards each other.

The relation of mass and distance to the constraint on the strength of this effect nicely matches what we expect, the formula F = m1 * m2 / d^2, as the greater the masses the bigger the shields and the

closer a shield is the more shielding it does. This explanation is entirely mechanistic and has the

advantage, if we want to call it that, of not having to suppose that particles have substructures of 

little sensors and motors.
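As a bare sanity check of that scaling, the following toy calculation simply assumes that the flux deficit one particle casts at the other is proportional to its mass divided by the distance squared, and that the push the other particle feels is proportional to its own mass; both are idealizations of the shielding picture, with every constant of proportionality set to 1.

# Toy Le Sage / shadow-gravity scaling check (all proportionality constants = 1).

def shadow_force(m1, m2, d):
    flux_deficit = m2 / d ** 2     # fewer corpuscles arrive from the direction of m2
    return m1 * flux_deficit       # m1 intercepts the imbalance in proportion to its own size

print(shadow_force(2, 3, 1))   # 6.0
print(shadow_force(2, 3, 2))   # 1.5  -> force quartered when the distance doubles
print(shadow_force(4, 3, 2))   # 3.0  -> force doubled when one of the masses doubles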

The effect would come from gravitons getting out of the way from between the more solid particles

and so we see that the speed of this effect is then limited by the speed of the graviton, as the

information about how they should get out of the way comes in the form of other gravitons, pushing

them away (or rather a relative lack of such gravitons coming from certain directions).

It follows from our definitions that the speed of the graviton is below light speed (as having greater

rest mass than light would indicate) such that the effect of graviton based gravity could not spread

quite at light speed. That is to say, if the information about gravity is spread by an active particle, it

would always have to work at a lower speed than an implementation of gravity where information

about it travels in a purely reactive particle, as we saw from the information theoretical constraint

that more information takes longer time to move.

We have stated that even the space particle would have a minimal rest mass, corresponding to the

information state of its position. Would that mean that space particles could push solid particles

around in our model and so gravity could be the result of uneven pressure from space? But gravity would then in fact be the result of a relative lack of space particles between two particles which may

seem to make sense at first until you ask yourself what would then be there instead of the space

particles.

We could consider an infinitely recursive model with continuous existence of yet lighter space

particles representing yet fewer positions between two solid particles as heavier space particles got

out of the way. But there is a point that the recursion cannot logically pass and that is the limit of the

minimal amount of information (one position) such that the fastest spread of a graviton based

implementation would always rely on the existence of a lighter particle getting out of the way and

the lightest particle that can reliably exist as information such that it could be seen by outside


observers to change (as in getting out of the way or even being in the way in the first place) is the

purely reactive (or light) particle that occupies exactly one position in space.

Following such a recursion infinitely would at some point lead to the requirement of the invention of 

additional positions in space which should not be allowed by information theory (concrete

information has cost) and the rules of mass and energy conservation.

Thus we can see that such an externally mechanical effect that relies on things heavier than the

lightest available particle to push things around in practice could never work fully at the speed of that

lightest particle - at best it will work at the speed of the next heaviest particle. And even if it could, in

such a mathematical idealization gravity would still be constrained ultimately to work maximally at

the speed of light, as the conservation rule would prevent the positions from being split in such a way

as to actually create more positions in space (more mass and energy in the universe) for an external

observer.

The Limits to Natural Law

We have seen from our model that if a physical force in it is implemented by particle interactions and

information about the force can only travel inside particles moving with finite speeds, then the force

cannot reliably act over time with a force exceeding m1 * m2 / d^2 (that is, F ≤ m1 * m2 / d^2), as the masses and the distance

place a statistical limitation on the number of particle interactions implementing the force that may

take place. For exactly the same reason that an observer will have uncertainty associated with

measurements and experience relativistic effects in proportion to the same formula, we can

understand that forces of nature in our model cannot affect each other with a strength greater than

F ≤ m1 * m2 / d^2.

We can reformulate this, if we use c to stand for the maximum speed with which the information communicating the effect can travel, into E ≤ m1 * m2 * c^2 to describe the information theoretical constraint on the maximum energy E, or the maximum distance, that any and all natural forces can act over during a given interval of time in our model. We have seen how we could derive E ≤ mc^2 when wanting to describe the maximum possible rate for effect communication under the maximum speed c between one solid particle and one particle of space, such that E ≤ mc^2 is a special case of E ≤ m1 * m2 * c^2 for an effect where m2 approaches or is 1.

We note that in our model the concept of  force that we depict by F relates to the concept of energy  

depicted by E in such a way that energy describes the maximum strength (or range as in distance)

that any (natural) force based on the energy could reliably work with over (a given) time, and we translate between them by replacing the division by distance squared with multiplication by the speed of light squared.
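Collected in one place, and only as a compact restatement of the relations above (using the convention just described of replacing division by distance squared with multiplication by the speed of light squared):

\[
F \le \frac{m_1 m_2}{d^2}
\;\longrightarrow\;
E \le m_1 m_2 c^2
\;\longrightarrow\;
E \le m c^2 \quad (\text{the special case } m_2 = 1).
\]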

The two explanations of gravity that we have seen (one based on active agents and the other based

on gravitons pushing objects towards each other) are both purely mechanical in nature. As such, they

are equally good explanations in that nothing “magical” is presumed. Both could indeed be true and

responsible for effects we see around us in our world. The important aspect to the discussion of 

these two implementations is to test examples of logically and mathematically consistent

implementation methods of both internal and external character to verify once more that the

constraints suggested by the formulas we have derived from information theory must hold true for implementation methods that in turn violate no information theory.


The advantage, if you would call it that, with the active agent particle theory is that the effect of 

gravity or any other force of nature could potentially travel faster. As we have seen, this follows from the observation that for particles to be able to push each other around they need some rest mass greater than the lowest rest mass of (nearly) empty space (or the photon) – i.e. they must be what we call an active or a solid particle in our model. Since we also assume that particles with rest mass cannot travel quite as fast as particles without rest mass (purely reactive particles, e.g. photons), this implies that the effect of graviton based gravity spreads at below the speed of agent based gravity.

With the explanation based on active agents, we can see that the information particle does not have

to push the solid particle around. The agent particle only has to be able to detect the information

and can then update its motors to respond accordingly. This means that with active agent particles,

gravity could indeed spread with the speed of light (i.e. the speed of a purely reactive particle).

In other words, active agent based (internally mechanical, or active) natural forces can spread at the

speed of the fastest information carrying reactive particle, detectable by active agents, whereas graviton based (externally mechanical, or passive) natural forces can only spread at the speed of the

fastest information carrying active particle, capable of pushing things around.

No force based on externally pushing things around (external mechanics) could work at a speed

greater than that of the solid particle doing the pushing, and so we see that the fastest theoretically possible

external mechanical implementation of a force will never be faster than the fastest theoretically

possible internal implementation of the same force.

We can thus see that in our model the active agent particle theory describes an ultimate constraint

on any dynamic universe such that it represents the best possible performance of a natural force – one which can spread at the fastest speed of information in the universe – and we can see that the best such an optimal implementation of any natural force could ever do is to be constrained by E ≤ m1 * m2 * c^2.

We can therefore see that we have found a hard limit on our model and that if the concepts in our

model correspond logically to their counterparts in our physical reality this would translate into

having derived an equally hard limit on physical nature where we will continue to suppose that no

“magic” is allowed and all forces must have some mechanical (as in compatible with information

theory) implementation.

As this is a main step in the final constraint we will derive in this paper we should take care to

summarize what we have found once more and we will also note that the constraint seems fully in

accordance with both Newton and Einstein so far, which is a good sign that the definitions in our

model might also bear a good resemblance to their counterparts in our physical reality. What we

have found thus far is the following:

Constrained to mechanical implementations of natural forces either a force of nature is ultimately

somehow implemented by heavyweight active (solid) particles pushing each other around (the

external mechanical implementation) or by active agents detecting and responding to information in

lightweight reactive (photon) particles (the internal mechanical implementation). We have derived a

constraint that must hold true for our model and we understand that to the extent that our model is


logically sound and its concepts correspond to those in the real world, we are also describing a

constraint on any physical force of nature in our world. We know that the optimal external

implementation will never be able to outperform the optimal internal implementation as the internal

implementation can react on purely reactive particles and we know that E ≤ m1 * m2 * c^2 puts a constraint on the performance of an internal implementation. E ≤ m1 * c^2 is a special case where one of the masses equals 1, as for the minimal rest mass of a space particle. It follows that it must also be

a constraint on the implementation of all external (passive) implementations and thus it is a hard

constraint on any and all mechanically implemented forces of nature in our model and, if our model is good, also in our universe.

The Universe We Are In

So we have two candidate explanations for natural forces. We have passive, externally mechanical

implementations and active, internally mechanical implementations. Which implementation is

responsible for the gravity – or indeed any natural force - we see in our universe?

Pretending for a moment that the model we have devised has been deemed acceptable and we are

allowed to draw conclusions about our world from it, it would then seem like it would be possible to

determine which implementation method (internal or external) is more accurately describing the

effects of gravity or some other force that we see around us by measuring if its effects travel at light

speed or below.

However, we would not necessarily be able to determine if what we perceive as light or even space is

indeed what we have described as a purely reactive particle with minimal rest mass (or perhaps even

no rest mass at all if we have made a logical mistake somewhere in our deductions). Even the closest

we ever get to empty space may have more rest mass than we know at the moment.

What we think of as photons or even empty space could in fact be some kind of gravitons (not purely

reactive particles) with fairly but not totally minimal rest mass and because we are not able to

distinguish their rest mass from minimal (especially if minimal should turn out to be no rest mass at

all) we call them photons or space, thinking we are seeing purely reactive particles at work. In this

case, the gravitons could happen to be carriers of the information that we perceive as light as well.

If we are so unlucky that there are no photons but only gravitons, we may never be able to

determine if gravity would be due to gravitons pushing things around or active agent particles

chasing each other. The same goes if there are only photons but no gravitons, unless we are able to

prove conclusively that the photons we see really have no or logically minimal rest mass and arepurely reactive particles, in which case they shouldn’t be able to push things around (no yet lighter

particles to replace them to represent fewer positions in space as the gravitons get out of the way)

implying active agents at work.

Furthermore, because of the limitations imposed by the Planck length, we may never be able to look

inside the solid particles to see if there are any little motors and sensors there. However, if both

photons and gravitons exist and we were to become able to detect them both and tell the difference

we could conclude that at least the graviton gravitation is probably implemented in our universe (it

would seem like a kind of unavoidable effect if something like gravitons were around) but it would

not exclude the possibility that active agent particles were also doing their part to implement gravity or other natural forces.


If we were to detect photons and gravitons and could conclusively determine that photons had no or

minimal rest mass and that some particles obey slower than light forces of nature but other particles

could detect and react to some physical force with the speed of light, then it would be a good guess

that the former particles were non-agent (or dumb-agent) particles pushed around by external

mechanical implementations (gravitons) whereas the latter particles would be examples of active

agent particles, reacting to information in their sensors by applying their motors.

We have seen two possible implementations for gravity, but what about the other forces of nature?

Well, they would all have either internal or external mechanical implementations unless we wanted

to invoke some kind of “magic” (break laws of information theory). We have seen an elegant external

implementation of gravity in Le Sage's theory of gravitation; perhaps there are clever external

mechanical implementations for the other forces of nature as well.

Conversely it is at least a mathematical possibility that any force of nature could also be

implemented by active agents responding to how different types of particles represent different

types of opportunities or threats to each other, such that for example the behavior of electrons could be analyzed from a game theoretical perspective to see how they relate to each other according to economic rationale. It must be stressed that the point of this paper is not to demonstrate the

necessary existence of robot particles around us, but to consider the mathematical constraints we

can derive from them as a mathematical concept.

Again, we remind ourselves that for all mechanical implementations of forces of nature - either

actively as agents or passively with an external mechanical implementation – our model states that

they can become no more efficient than to be constrained by E ≤ m1 * m2 * c^2.

With passive implementations, any other constraints associated with some given natural force would come from additional mechanical constraints of whatever passive effects were used for its

implementation. With active agent implementations, just like biologists can find explanations to

organism behaviors in Darwinian evolution by natural selection, so the behavior for any active agents

can be explained using Darwin’s framework of explanation and modeled mathematically with the

help of game theory.

Quantum Mechanics

But would it be physically possible in our world for the solid particles we see to have little sensors

and motors in them? Doesn’t the Planck length imply that such substructures would be utterly

unreliable? Not necessarily. While there exists a practical limit on how small structures we can know anything about from the outside, this does not mean that substructures on the inside could not

work.

It is only an observer above the Planck length that could never be certain (in the very physical sense,

not just in the conscious sense) about the reality under the Planck length. There could theoretically

be a fully functioning set of sensors and motors inside a seemingly solid particle, working with better

than random stability as the smaller particles they consist of bounce around at least semi-confidently

in absolute space.

Indeed, such will be the model we will consider for the micro world in this paper, which continues to

base its logic and mathematics on the inclusion of an absolute reality in the model from which


secondary realities are derived, which are really the limited perceptions (interaction opportunities)

with the absolute reality by the local observers (solid particles) in our model.

We will verify that this model does not logically break any of the hard constraints from our actual universe that we have been able to derive about it, in this case that of the inevitable uncertainty associated with measurement of the micro world under the Planck length. But for any constraint that we know about our universe we must also take care to remember precisely to what extent those

constraints must apply and where it is not logically necessary that they do.

According to the way our definitions will work, an objective observer of the absolute reality in our

model would be able to see very small particles making up the substructures of the solid particles

that macro observers inside the universe can perceive. We have so far talked in terms of observers

external to our model, but now we will begin to talk of observers inside the model that are internal

and external (inside and outside) to the different scales (micro and macro) in the model. We will

therefore begin talking about the scientist examining the model universe in his computer as the

objective observer who can see things for what they really are in the absolute reality included in our model.

The constraints we know from our world concerning interactions across scales between large

particles on the macro scale and small particles on the micro scale would not by any logical necessity

have to constrain the interaction among the micro particles themselves. The micro particles could in

turn have a smallest distance below which they could not measure or interact reliably, but that is not

a problem preventing them from working at least somewhat reliably with each other on their own

scale just as the macro particles above them only have to interact reliably with each other on their

scale to work.

The constraint in our world regarding the interactions between the micro level and the macro level is

imposed by the Planck length, but the constraint only tells us we in the macro world cannot make a

complete measurement of the micro world. We can still make some, albeit incomplete,

measurements of the micro world and furthermore we have no reason to believe that the micro

world couldn’t make at least some measurements of the macro world.

We will now begin to define the concepts of micro world (or scale) and macro world for our model

based on including the concept of a Planck scale. We start by defining their general relationship to

each other in terms of their capabilities for measurement on each other based on what we know

about real constraints from our world, as we want correspondence between the concepts in our model and their real-world counterparts. We will later go on to derive information theoretical

motivation for the inclusion of a Planck scale in our model such that the concepts of a macro scale

and a micro scale must follow.

First, we define for our model (without yet motivating it further) that it includes a Planck scale such

that observers larger than the Planck scale are considered inhabitants of a macro world that cannot

make full measurement on observers smaller than the Planck scale and which are in turn considered

inhabitants of a micro world.


Secondly, we define for our model (not knowing for sure what the case would have to be in the real

world) that the micro world can make some measurement of the macro world. This would then imply that the macro world may conversely be able to have some impact on the micro world.

Thirdly, we define for our model (as we know that the macro level can make some measurements of 

the micro level in our real world) that the macro world in our model is able to make some measurement of the micro world. Thus in our model the micro world would conversely be able to have some impact on the macro world.

What this means to our model is that while the micro world may be able to have an effect on the

macro world, the macro world observers may not be able to establish conclusively exactly which

micro state resulted in the macro effect.

We don’t know of any logical or measured constraint in our world excluding measurement by the

micro scale on the macro scale. We also don’t know that the necessary stability for machine-like

substructures could not exist in the micro world (logically we could not make a certain measurement to determine such a thing) so at least it is not excluded as a logical possibility that we have little

robot-like particles in our world as well, although we remind ourselves that we only need to consider

the mathematical construct of such a phenomenon to make sure that our model can become useful

as a scientific instrument of prediction derived from information theory for describing the possible

range of measurements on the real world.

Quantum Superposition

In our model of the universe, we could see that local observers would experience relativistic effects

as the result of uncertainty in measurements on the macro level. We can also see that

measurements by the macro level on the micro level are also associated with inevitable uncertainty in any model of a universe that includes a Planck length type of constraint on a minimal reliably

measurable scale for a given observer.

The currently most widely accepted scientific model of the micro (quantum) world is not based on

the presence of one absolute reality below that level, such that in the micro world there are only a

set of overlapping superposition realities that work something like the inverse of relativity. Where

the set of relative realities together seem to make up a sum of slightly less than that of one full

absolute reality (but approaching it), the set of superposition realities on the micro scale seems to add

up to a sum of more than one absolute reality.

None of the micro superposition realities seem by themselves to completely correspond to the

concept of an absolute reality, but the sum of those superimposed realities could possibly be seen as

a kind of “extra absolute” reality together. The superimposed realities can then “collapse” into

something that could be called one absolute reality as they are measured by macro observers, such

that only one of the multiple absolute realities in the micro world becomes realized from the

perspective of the macro world. This seems to imply that macro observers could be considered

inhabitants of an absolute universe as the absolute reality seems like it should be the thing that

would emerge from quantum states that collapse into one reality when they are measured.

But there is a small problem with this model. Comparing to the generally adopted scientific model of 

relativity, we see that again we are looking at a set of relative realities but with no absolute reality in


the model – this time even measuring reality will not make it any more real such that there only ever

exist relative realities on the macro scale. We see the shape of a paradox when the model of

relativity is combined with the model of superposition. If the overlapping realities turn absolute

when measured, we see that combined with relativity they would not actually turn absolute but

rather they would go from superposition realities to become only relative realities as they are

measured by a relative (macro) observer.

We are left with a combined model that can make mathematically valid predictions and so is

scientifically useful but that does nonetheless seem to contain some form of contradiction at its core

in that absolute reality disappears from the system. Again, this is not a problem in a strictly mathematical sense, but it does seem that the model is ultimately intuitively unsatisfying in that

while quantum mechanics begins by hinting at the promise that things might at least become really

real when measured, relativity seems to object that it will not mean they become really real at all,

only relatively real, such that any actual reality where things really exist more than in the

perspectives of each other will not appear anywhere in the model.

In contrast, our model will continue to use an absolute reality where relative as well as superposition

realities are mathematically derived from the absolute reality (as in computable from the model of 

the absolute reality by a computer with a processor thus reducing memory requirements of the

computer) by applying uncertainty to the views of its local observers. In this model, quantum effects

such as superposition are just the result of more inevitable uncertainty in measurement that we can

distribute as usual over the perceptions of relative time, space and mass for observers in the model.

As the mathematics of our currently generally accepted scientific model work, all we hope to do in

this part of the paper is to ensure that our model too continues to make the same sense by seeing

where the logical deductions from the definitions in our model take us and comparing those results to the measurements of the universe we are aware of. If both approaches turn out to be consistent, we

may be able to say that a model with an absolute reality is more intuitively satisfying to anyone who

prefers the existence of such a concept in their models but mathematically we should at this point

not expect to find a better model for describing reality than what we already have.

The seemingly strange behaviors we associate with quantum mechanics on the micro scale are thus

in the model of this paper the same type of effects resulting from uncertainty in measurement as the

relativistic effects on the macro scale. As observers larger than the Planck length cannot (so far just

because our definition says so) measure the reality of things below the Planck length reliably, we get

uncertainty in our measurements indicating weirdness such as things both existing and not existing at the same time, or two different things existing in the same place, or things existing in more than one place at once, which in our model corresponds to the effect that is normally associated with

quantum superposition.

We added the concept of a Planck length to our model by stating that for any observer in the model

there will be a potential micro scale with elements of inevitable uncertainty below. We will now

begin to motivate why our model should include a Planck scale. In short, it is due to the constraints

imposed on the observer by the availability of particles with which to measure. As soon as the

smallest particle that the observer can use to measure with is too big and blunt to reliably measure much smaller particles, we get inevitable uncertainty in measurement.


To regain our solid footing in information theory, we will go on by looking at the so-called Nyquist criterion of the law of information theory known as the Sampling Law (the sampling theorem), which states that when “sampling” a signal (measuring or converting an analog signal into a digital representation) the sample rate must exceed twice the highest frequency contained in the detectable input. It is thus in our

model just an information theoretical consequence that bigger particles become constrained to some

uncertainty as to the exact states of any particles smaller than the smallest one they can use to

measure things.
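A small numeric illustration of the sampling constraint just invoked (a standard aliasing demonstration, not something specific to this paper's model): sampled at rate fs, a signal with frequency above fs / 2 produces exactly the same samples as a lower alias frequency, so the samples cannot distinguish the two.

import math

# Aliasing under the Nyquist criterion: at a sample rate fs, a 9 Hz cosine and
# its 1 Hz alias (|9 - fs| with fs = 10) give identical sample values.

fs = 10.0                     # samples per second
f_high = 9.0                  # above the Nyquist frequency fs / 2 = 5.0
f_alias = abs(f_high - fs)    # 1.0 Hz

for n in range(5):
    t = n / fs
    high = math.cos(2 * math.pi * f_high * t)
    alias = math.cos(2 * math.pi * f_alias * t)
    print(f"t={t:.1f}s  9 Hz sample: {high:+.3f}   1 Hz sample: {alias:+.3f}")
# The two columns are identical: a measuring device that samples too coarsely
# cannot tell the faster signal from the slower one.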

The implication becomes that the Planck length can be defined in our model in such a way that it is

simply the best sampling rate allowed by the very smallest particle minimally reliably useful as a

measuring device. We also note that even in an absolute reality a smallest possible particle would not

really have to be assumed for the model to work, just like we don’t have to assume the existence of a

fastest possible particle to make the macro scale work for our model.

We can therefore begin by defining the Planck length of our model in a dynamic rather than static way, just like we have an essentially dynamic definition for light speed (the fastest particle around), but we note that the same static limitation to the smallest possible particle as we have discussed for

the fastest possible particle in the form of the minimal information state should also continue to

apply.

In our model we will thus use the information theoretical constraint of the Sampling Law in combination with the concept of the minimal cost of information (represented in our model by the rest mass 1 of photons and space) to derive the constraint of a dynamic Planck length, which we go on to define for our model as follows:

•  The concept of a Planck length is defined in our model by observing that whenever there are two types of differently sized particles (one particle type is smaller than the other) and the bigger particles are used to measure the smaller particles, this will result in the effect of uncertainty showing up in measurements. The conclusion is that macro (larger particle) observers of the micro (smaller particle) world will always be stuck with essential uncertainty about any micro world containing particles smaller than the smallest particle the macro world can use for measurement, and we call this limit the Planck length for the macro observer. There also exists a hard Planck length in the model, which corresponds to the concept of a particle reserving one position in space (a photon or space particle with the minimally stable rest mass of 1), such that the smallest reliable measuring device for macro observers becomes the photon, implying that no substructures smaller than photons could be reliably measured by macro observers.

In our model, measurements by macro observers can never tell them exactly how the world looks below their dynamic Planck length as set by their locally smallest available particle usable as a measuring device. But there is thus also an absolute Planck length in our model, in the hard information theoretical limit for all macro observers that the reactive particle is the smallest particle that could be used for any minimally reliable measurement.

Just as relativistic effects are in our model only distortions to the subjective perceptions of 

(unconscious) observers of objective macro phenomena, the quantum mechanical effects are in our

model only distortions in the subjective perception of (unconscious) macro level observers of 

objective micro phenomena.

We can see that in our model (but presumably, in logical consequence, also in our real world) the existence of a Planck length does not by itself place any logical constraint on what the physical world below the Planck scale must look like, such that we could know that an absolute micro reality is ruled out as a possibility in our universe. All we really know from our world is that we can't measure such small things and that they can't exist in a totally reliable way, and so our model continues to be compatible with our measurable world, in alignment with our core expectation that the definitions of our model be useful.

The micro world below the Planck length in our universe could in fact, for all we know, look mostly like the macro world, only scaled down to miniature size, rather than being an ultimately strange place with states that could result in half-dead cats and things existing in two places at once. The Planck length constraint concerns measurability between scales but does not by necessity constrain the world to exist in an absolute form only inside any one, special scale.

It may seem at first like there is a conflict here in the way we use the word "absolute" reality. But we will see that in the definition of our model, the absoluteness of the reality comes from the absolute rule we derived from information theory that no two pieces of reliably measurable information can share the same position in space at the same time. The transformational rules will thus have to absolutely respect this requirement, but such absolute rules could still leave some room for optionality in the system. For example, as long as two pieces of information did not come into direct conflict for a position, we could make the transformational rules such that two particles could sometimes be allowed to swap positions rather than bounce off each other.

Given the right conditions that would allow both a swap and a bounce, we could even let the

decision for when the system should produce a swap rather than a bounce be totally random,

allowing us to introduce elements of randomness as well as causality in the system without breaking

the ultimate condition that the absolute reality in our model should never be allowed to break any

information theoretical constraints. We will go on to examine this idea in some greater detail.

A minimal amount of information state would correspond to an absolutely minimal size for things to exist reliably (as measuring devices or as something to be measured), and we have defined things this way in our model, but we don't have to assume such a limit to see the general point of the Sampling Law, and we don't have to assume that the best measuring devices available to us in our world (photons) are indeed the smallest possible particles in existence – only that they are the smallest particles reliably available to us as measuring devices.

Finally, we should see that even if there is a minimal possible particle, and in our world the photons are it, there would be no requirement on a micro reality to function totally reliably should it be deemed impossible for it to do so from an information theoretical perspective (because the information state at such a scale is too small, or per definition too unreliable, for anything to work totally reliably there).

All the micro world has to do is to be able to work slightly more reliably than chance as a whole for

our model to work out with an absolute but objectively unpredictable reality on the micro scale, such

that causal chains on this level are less than absolute (our absolute model of the universe can include

an element of true randomness – transformation rules do not have to be perfectly predictable, they

only have to perfectly respect the rules we and information theory set) rather than just not being

fully measurable by local observers. At any point in time the micro world in this model does have a

definitive (but not fully measurable to macro observers in the model) configuration but exactly which

configuration will follow is the result of some chance allowed to temper any otherwise absolute

causal rules.

Logically, this indeterminism could be seen to follow from limits on how well the sub-particles could measure each other's states, such that they could never measure each other with full certainty and therefore could not interact in a fully reliable manner. They could still work in a way more reliable than chance, and we allow ourselves to make the observation that strictly speaking this would be the only thing required for little machine-like structures to be implemented in micro scale substructures as well.

The reason we can't measure things below the Planck length in our world is that the most precise measuring devices we have (photons) are just too big to measure anything that small. But below the Planck scale, things could work by means of much smaller versions of particles zooming about and pushing each other around, although somewhat less reliably so (sometimes they zoom right through each other by chance because of limitations by randomness on their mutual interactions).

To help picture this idea, consider a chess board with two pawns moving towards each other. On both the macro level and the micro level, two pawns could not share the same position on the board, representing the absoluteness aspect in our model. On the macro level, two pawns in the same column that meet can't pass each other, because by stating that on the macro level they are only allowed to move one at a time they would have to share a position to do so, which is not allowed. To make the example as clear as possible by making pawns behave more like particles that should keep moving, we could say that pawns should change direction when they meet, as if bouncing off each other.

On the micro level, on the other hand, we could say that pawns could move at the same time rather than taking turns to wait for each other. The result would be that these pawns would be allowed to effectively go through rather than bounce off each other by simply swapping positions after they have met. They never have to share the same position on the board and our rule of absoluteness continues to hold. Both worlds are absolute, but slightly different causal rules apply on the micro level (bouncing is not guaranteed), such that two micro particles (pawns) approaching each other out of synchronization would bounce but particles approaching in synchronization would sometimes not.

When two approaching particles are separated by just one square on the board, they would both want to assume that position – but only one of them is allowed to do so, leading to the type of information theoretical conflict that we represent as a bounce. But whenever two micro particles approached each other in synchronization, such that they met with first four, then two, then no squares between them, unpredictable limitations to their interactions could sometimes lead to them swapping places with each other rather than bouncing, without breaking any of our rules of absoluteness.

This may seem like something that almost makes sense and at the same time does not. If we think of pixels on a computer screen we may have no trouble picturing how two "virtual pawns" could just swap places, but with solid physical pawns on a real board that should not be possible, and it follows

from our very definitions that this is not the behavior we expect from solid mass. In the end, we must remember that we are taking an information theoretical view, and so we should think of everything more in terms of pixels on a computer screen, and we should simply ask ourselves what would be a logical reason why a rule we stated for all our pixels should perhaps not be applied to exactly all the pixels – we could describe logical conditions that change the rules slightly in special situations.

In game theoretical terms, we have just described the difference between the extensive form, which is where players take turns, and the normal form, where players can move simultaneously. In other words, from an information theoretical perspective we could distinguish between the micro world and the macro world by stating, in game theoretical terminology, that the micro world runs under the normal form while the macro world runs under the extensive form.

We need to capture the two necessary rules from information theory in the transformation rules of time in our model, and we have seen them both represented in the example with a chess board and pawns. The first rule is that of a maximum speed of information (pawns can only move one square per time unit) and the second rule concerns the cost of information (only one pawn may occupy any one square at any one time). But behaviors that would not have to be forbidden by the rules of information theory should be considered potentially allowed, and thus we can see that the type of behavior we would need from the micro world in our model – to make it logically consistent with the rest of our model and with what we know about our real world – would be available.
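
To make the chess board picture concrete, the following sketch (an illustration only; the one-dimensional board, the starting positions and the fifty-fifty swap chance are assumptions invented for the example) simulates two pawns under both regimes: the extensive-form macro rule, where moves alternate and a meeting always produces a bounce, and the normal-form micro rule, where simultaneous moves are sometimes resolved as a position swap. In both regimes the two information theoretical rules above are respected: no pawn moves more than one square per time unit and no two pawns ever occupy the same square.

```python
import random

def step(positions, velocities, simultaneous, swap_chance=0.5):
    """Advance two pawns one time unit on a one-dimensional board.

    Both rules are enforced: each pawn moves at most one square per time
    unit (maximum speed of information) and no two pawns ever occupy the
    same square (cost of information).
    """
    a, b = positions
    va, vb = velocities
    next_a, next_b = a + va, b + vb
    if not simultaneous:
        # Extensive form (macro): pawns take turns, so any meeting is
        # resolved as a bounce -- both reverse direction.
        if next_a == b or next_a == next_b:
            return (a, b), (-va, -vb)
        return (next_a, next_b), (va, vb)
    # Normal form (micro): both move at once.
    if next_a == next_b:                       # direct conflict for one square
        return (a, b), (-va, -vb)              # always a bounce
    if next_a == b and next_b == a:            # synchronized, adjacent approach
        if random.random() < swap_chance:
            return (next_a, next_b), (va, vb)  # swap: they pass "through" each other
        return (a, b), (-va, -vb)              # otherwise a bounce
    return (next_a, next_b), (va, vb)

# Two pawns approaching head-on on the micro (normal form) board.
pos, vel = (0, 5), (1, -1)
for _ in range(6):
    pos, vel = step(pos, vel, simultaneous=True)
    print(pos)  # the positions never coincide, but the pawns may swap places
```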

Two particles that each have a minimal rest mass of 1 would not be able to pass through each other; they would always bounce. But structures below the Planck length are, with our definitions, less than solid. We must expand our definitions and understanding of mass in our model so as to say that (sub)structures with a mass 0 < m < 1 will not always bounce but will sometimes go through each other. The reason we expect solid particles to bounce is as an effect of their interactions (our information theoretical transformation rules), where the minimal rest mass 1 should simply be thought of as the point where you have achieved such a reliable chance for interaction with other solid particles that they must always bounce. Below the rest mass of 1 there are statistical possibilities that interactions sometimes don't happen, with the result of substructures not bouncing but swapping places.
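
One minimal way to put a number on this (purely an assumption chosen for illustration; the model only requires that sub-unit masses interact less than fully reliably, not this particular function) is to let the chance that an encounter registers as a bounce grow with the product of the two masses, saturating at certainty when both reach the minimal rest mass of 1:

```python
def bounce_probability(m1, m2):
    """Illustrative assumption: substructures with mass below 1 interact
    only statistically, so they sometimes pass through (swap with) each
    other, while two fully solid particles (m1 = m2 = 1) always bounce."""
    return min(1.0, m1 * m2)

print(bounce_probability(1.0, 1.0))  # 1.0  -> solid particles always bounce
print(bounce_probability(0.5, 0.5))  # 0.25 -> sub-Planck structures usually pass through
```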

Regardless of whether there are in fact any little motors and sensors in the substructure of the solid particles in our real world, we should note that a reasonable and, as we have seen, information theoretically consistent explanation for quantum mechanical effects and the reality below the Planck length in our own universe is that there could just as well exist an absolute reality on the micro level, and that it is only our measurements of it that are necessarily uncertain (due to physical limitations on our measuring devices), without implying that the underlying reality itself must be any less than absolute (even though it may contain some elements of absolute randomness in that mix).

The idea that things depend on being (unconsciously) observed in order to exist (the so-called Copenhagen interpretation of the quantum model) is then in our model the same idea as that of the local perspective in relativity, where in the perception of any particle it can only consider other things to "exist" to the extent that it can interact with them. Two particles that cannot interact in any way can never experience each other, and so even though both particles exist from the objective perspective, from the local, relativistic perspective of each particle, the other does not exist. In

another formulation, two particles that could never interact can be thought of as inhabiting separate,

isolated universes.

One way to think of superposition in this model is as negative relative distance and mass. Instead of things seeming further apart than they really are, on the micro scale they seem closer together than they really are, leading to micro structures that seem to overlap in their positions. We can also interpret the same effects as negative relative mass, leading to things that seem more than completely solid (or seem to exist more than 100%), or even as negative relative time, such that micro level things seem to go backwards in time. Together, negative relativistic distance, mass and time can make a substructure seem to exist in more than one place at once, which matches the description of superposition.

It would make mathematical and perhaps even logical sense to think this way, as we would be allowed (in math we place the zero where we like) to interpret the absolute distances and mass of structures on the micro level as negative, with the logic that they are smaller than the smallest measurable distance and mass for us, making them negative from our local macro perspective (even though they are positive on the absolute micro level, or compared to an absolute zero). If we perceive absolute distance and mass below the Planck level as negative, their relativistic components should be negative as well, as they are extensions to negative distances.

While relativistic distortions on the macro scale will go away over time in our model (unless particles keep moving around), the limitations to perception of the micro scale will not. Macro observers will be able to detect only the resulting macro effects of the workings of substructures on the micro scale, and often enough will never be able to tell which of multiple possible micro effects led to the observed macro effect.

In such a case, where both micro effects A and B could lead to detectable macro effect C, the interpretation of this uncertainty by an observer at the macro scale will be that both A and B appear somewhat true, potentially leading to contradictory macro states that seemingly impossibly appear to both be true at once, such as Schrödinger's cat.
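
As a toy illustration of this many-to-one relationship (the particular states and the coarse-graining function are invented for the example and carry no further meaning), consider a macro observer who can only read off an aggregate of a pair of hidden micro values:

```python
# Two different micro configurations coarse-grain to the same macro
# observation, so the macro observer cannot recover which one actually
# occurred -- only a distribution over the possibilities.
micro_A = (0, 1)
micro_B = (1, 0)

def macro_view(micro_state):
    """The only thing the macro observer can measure: a coarse aggregate."""
    return sum(micro_state)

print(macro_view(micro_A) == macro_view(micro_B))  # True: A and B are indistinguishable
```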

Again, this would in our model be a case of an experience of the world constrained by uncertainty so as to be perceived with the inevitable distortion of relativistic or superposition effects. As the uncertainty of quantum states spreads to macro states, what macro observers in our model experience is just the same type of relativistic effects we associated with uncertainty of measurements, not an indication that in their absolute reality there is both a dead and a living cat or a cat that is both living and dead – nor, as we shall see later, will our model end up requiring the existence of many worlds, such that one has a living cat and another has a dead cat in it.

We can see that in our model little machines with motors and sensors could in fact work on the micro scale (and we note that this possibility has still not been ruled out in our physical world, although we still have no reason to believe our solid particles are robotic in their inner nature). The question with regard to the mathematical concept of active agent particles in our model is whether the micro level sensors could detect macro level events, but we have defined our model so as to include this possibility, as we see no logical or information theoretical reason that we must exclude it.

To mentally verify the logic of this claim, we may imagine that it would be exceedingly easy for the micro world to tell that a gigantic photon has just made contact with their little world.

The other question would be whether micro motors could move macro particles, but we see that they could, at least in principle, because we already know that micro events can have macro effects in our world. Information theory in turn has no objections, so we consequently allow this real-world possibility into our model as well.

We conclude that the only relevant constraint we can derive from information theory for our model with regard to a micro-macro distinction is that it can be impossible for macro observers to tell precisely which micro effect caused a macro effect. We also note that randomness may play a part at the micro level, such that the transformation rules of causality include an element of randomness, but never such that they can violate the information theoretical uniqueness constraint on the occupation of even fractional positions of space, or the maximum speed of information spread.

Interestingly, from how a macro effect can lead to a micro effect that can in turn lead to a new macro effect, where it is not possible to determine which micro effect gave the final macro effect, it follows that a macro effect could via such a chain result in another macro effect such that it would be impossible for macro observers to determine the full chain of cause and effect, ultimately making them unable to determine the macro cause of another macro effect.

Furthermore, if there is any uncertainty in the measurement by the micro world of the macro world (or even, as we have just discussed, some actual randomness at work on the micro level), then this would break down the chains of causality on the macro level as well, such that they become ultimately unpredictable as well as untraceable.

In the final pages of this paper we will see the implications this will have for the constraints regarding pre-determinism in our model of the universe – and, if information theory and our deductions hold true, for any physical universe that does not rely on "magic" for its implementation.

Relevance Theory

So far the model we have built has used mathematics from what is often considered the domain of 

biologists in the form of game theory and Darwinian evolution by natural selection in order to make

mathematical points about ultimately necessary constraints on all solid matter. We have derived a

model of the universe that so far seems compatible with the models and predictions of relativity and

quantum theory.

The time has now come to try to model biological behavior more explicitly and see how this can

further affect the constraints of our model. Again, all arguments will be based in information theory

and game theory and so will be ultimately mathematical and logical in their nature, but we note that

the experienced biologist would be well suited to evaluate the logical claims that we will continue to

make even without deeper experience with mathematical notations and theoretical physics.

We have seen that in our model a local observer will always be constrained to experience the universe in a relativistic way, where perceived mass and distance will not match absolute mass and distance. Objects will seem further away or less solid on the macro level, or, in the case of quantum effects on the micro level, impossibly close or impossibly solid, as a result of the uncertainty associated with measurements of them.

If uncertainty in measurement makes something seem further away or less solid, could economically relevant priority values associated with measurements by active agents (mathematical agent particles as well as real biological machines in our world) – that is, the relevance of the measurement to the general fitness of the agent – have a similar effect?

If a zebra notices two things at seemingly equal distance and of equal mass, but where one is a lion and the other is a boring rock, it would do better from a Darwinian perspective (game theory applied to energy conservation) to assign less weight to the sighting of the rock than to the sighting of the lion. A zebra that happened to experience the stone as less solid, or perhaps further away than the lion (perhaps experienced by the zebra as "the camera suddenly zooming in" on the lion), might do better on the savanna at spreading its genes into new zebras, and such a perception of the universe might end up prevailing among zebras in general.

For optimal fitness, when no lions were around the zebra's perception of the stone would return to the purely relativistic (uncertainty based) one, so that it doesn't try to jump over the stone too late under the impression that it is further away than it really is, or run through it under the impression that it is not very solid. If the zebra tried to flee the lion in the direction of the rock, the rock should also rapidly seem more solid or closer again (the sighting of it would approach the weight of the sighting of the lion), as it is suddenly very relevant to be able to jump over it at the right time.

The model presented in this paper predicts that, compatibly with the relativistic effect regarding mass and distance experienced by a local observer due to uncertainty in measurements, the same effect will arise from the differing relevance of measurements to an active agent observer. Uncertainty is, from this perspective, one way to make the relevance of a measurement go down (or, as we shall see, up), as with economic risk analysis where a less certain opportunity should be treated as less valuable. Thus in the relevance based experience it might be more appropriate to talk about relevant distance and mass rather than relative distance and mass.
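
A minimal sketch of this idea (the particular weighting formula is an assumption chosen only to make the effect visible; the paper does not commit to it) is to let an agent's relevant distance be the measured distance scaled by the game theoretical relevance of the sighting, so that less relevant objects are perceived as further away:

```python
def relevant_distance(measured_distance, relevance):
    """Illustrative assumption: the lower the relevance of a sighting
    (0 < relevance <= 1), the further away it is perceived to be.
    A relevance of 0 would correspond to the object dropping out of the
    agent's experienced universe altogether."""
    return measured_distance / relevance

# The lion and the rock are measured at the same distance, but the lion is
# far more relevant to the zebra's fitness, so the rock seems further away.
print(relevant_distance(100.0, 1.0))   # lion: 100.0
print(relevant_distance(100.0, 0.25))  # rock: 400.0
```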

Interestingly, under this perspective there could also be cases where increased uncertainty would be associated with higher risk for an agent (this is also consistent with economic risk analysis and game theory), leading to the perception of some uncertain things as closer than they really are rather than further away, much in the manner of quantum superposition.

We can thus see that the purely relativistic perspective based on just uncertainty is not necessarily enough to predict the interactions of local agent observers, as the perceptions suggested by relativity could be overruled, for an active agent particle or a biological macro agent, by the perceptions suggested by relevance theory.

Furthermore, compatibly with how we saw from our definitions of relativity based on information theory that two particles eternally unable to interact should be seen as inhabiting different universes, relevance theory states that particles that are completely irrelevant to each other should be seen as inhabiting separate universes.

Non-biophysical (dead matter) particles without robotic substructures could not take different

actions based on measurements and thus have no reason to prioritize their measurements. They

would interact in a strict relativistic way (still stemming from the general limitations to their

interaction possibilities), so they should not be seen as "uninterested" in each other in the sense of being in separate universes, as they could take no different course of action based on one option seeming more interesting than another. It will however be necessary to take into account in our model the different constraints that apply to systems that have to maintain internal structures under competition, and we should try to include the relevant mathematics in the formulas we use to describe the logical constraints we derive.

To do this we will weave more aspects of game theory into our formula for describing the ultimate physical constraints on our model and by extension (if our premises and deductions hold) on all physical nature, but we realize we are already done modeling the constraints for "dead matter physics". That is, the physics of particles with no internal structure and thus no internal (own) influence over their trajectories through space. Such particles will always be constrained by E ≤ m1 * m2 * c², and to the extent that nature is not allowed any mistakes and dead particles can never deviate from their projected trajectories, insomuch as we include bouncing in these projections, they would slavishly follow E = m1 * m2 * c², as there is no way information theory would allow any different behaviors.

We will go on in this paper to discuss a special freedom (as in the lack of a constraint) that follows

from the model we construct that includes an absolute reality. But before we go on to that discussion

we must observe that the difference between dead matter physics and “living matter physics” (or

theoretical biophysics) is that due to potential internal influence over their trajectories, agent

systems could deviate from the paths predicted by dead matter physics but they could of course still

only do so within the physical constraints imposed on all particles, dead or alive. To model the reality

of biological or agent systems we must add a factor to our formula but we are not allowed to do this

in such a way as to make biological systems able to move any faster through space than dead

particles.

In the model we build in this paper it will therefore be the case that all agents – macro agent systems

such as us, as well as any unconscious arrangement of particles, or even particles with substructures

turning them into agent systems – will experience stronger relativistic distortions based on the

differing importance of measurements to the agent’s fitness.

What this means is that while dead matter is confined to following the rules of physics precisely,

living matter is allowed to deviate from these rules to assume performance that is better for that

matter than physics alone would result in, but at the eventual cost of the living matter being able to

make mistakes resulting in it becoming dead matter.

The formula that such agents would use to derive the relevant mass and distance of their surroundings would be based on game theory, where relevance or priority values are assigned to all measurements, and so we would get E ≤ m1 * m2 * c² * r, where r stands for the relevance of the measurement and is the result of the standard game theory evaluation r = potential * probability, with potential and probability each taking values between 0 and 1. The relevance is a value between 0 and 1, such that 1 stands for a 100% relevant measurement and 0 applies to a measurement that in the opinion of a particle is completely inconsequential.

We note from game theory that the relevance is a combination of potential and probability (where the potential is tempered by the probability of realizing the potential). In other words, the relations

represented by r can be broken up in the following way, in accordance with the SWOT analysis of game theory:

0 ≤ Potential = Strength - Weakness ≤ 1

0 ≤ Probability = Opportunity - Threat ≤ 1

0 ≤ r = Potential * Probability ≤ 1

0 ≤ r = (Strength - Weakness) * (Opportunity - Threat) ≤ 1

However, this formula would only apply to one agent observer prioritizing measurements of non-agent, active particles. For a system with two agent particles the formula becomes E ≤ m1 * m2 * c² * r1 * r2, where r1 and r2 are the priorities assigned to their own measurements by the two particles. In the idealized case the two particles would have the same assessment of their relative importance to each other and we would approach E ≤ m1 * m2 * c² * r², such that when r² is 1 both parties consider each other completely important, whereas when r² is 0 both particles consider each other totally unimportant, which matches the requirement under relevance theory for them to be seen as inhabiting two completely separate universes (even in cases where relativity would suggest they are in the same universe). We remind ourselves that in our model all particles are in the end inhabitants of the same, absolute reality, and when we talk about them in different universes, those are only the derived realities that are the consequence of limited interaction opportunities.

We can see that according to our model the formula E ≤ m1 * m2 * c² * r² places an ultimate information theoretical constraint on all mechanical implementations of any and all physical forces, as it constrains any active agent implementation of a physical force and by extension all possible mechanical implementations of any physical force. It could also be said to describe the constraint on the shape of a biophysical universe with agent matter in it, and for a universe where agents can have information advantages over each other the constraint on this shape becomes E ≤ m1 * m2 * c² * r1 * r2.
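
As a small numerical sketch of the constraint just described (the SWOT values below are invented for illustration and carry no claim about real agents), the relevance of each agent and the resulting bound on their interaction can be computed directly from the quantities defined above:

```python
def relevance(strength, weakness, opportunity, threat):
    """Game theoretical relevance r = (Strength - Weakness) * (Opportunity - Threat),
    with each factor kept inside the interval [0, 1] as in the text."""
    potential = min(max(strength - weakness, 0.0), 1.0)
    probability = min(max(opportunity - threat, 0.0), 1.0)
    return potential * probability

def interaction_bound(m1, m2, r1, r2, c=1.0):
    """Upper bound E <= m1 * m2 * c^2 * r1 * r2 for two agent particles."""
    return m1 * m2 * c**2 * r1 * r2

# Invented example: two agents that consider each other fairly relevant.
r1 = relevance(strength=0.9, weakness=0.2, opportunity=0.8, threat=0.3)  # ~0.35
r2 = relevance(strength=0.6, weakness=0.1, opportunity=0.7, threat=0.2)  # ~0.25
print(r1, r2, interaction_bound(1.0, 1.0, r1, r2))                       # ~0.0875
```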

While the model is not yet complete, what we have built thus far combines relativity, quantum mechanics and biology into one mathematical framework built on information theory and game theory, and could be called a General Theory of Relevance (though we should refrain from naming it until the model is complete). The claim of this paper so far is that it explains the "relevantistic" effects it predicts will be experienced by all active agent observers inside the model, and that we can describe quantum mechanics and relativity in a cohesive way using a model that includes an absolute reality, from which the macro and micro realities matching our current measurements can be derived by applying uncertainty in measurements for all observers inside the model.

The intention with the model as it is completed will be for it to combine the strictly physical aspects of reality described by relativity and quantum mechanics and the biological aspects of reality described by Darwinian evolution by natural selection into one cohesive framework of theoretical biophysics that includes an absolute reality and that draws its conclusions entirely from constraints imposed by information theory.

However, as the model of this paper is derived strictly from the premises of information theory, the real unification presented by this paper will come in the form of the integration of game theory and communication theory.

Again we note that if the premises of information theory are correct and the deductions in this paper are correct, then we can apply the same conclusions to all physical universes from such information theoretical constraints.

Finally we note that if our model is also in correspondence with established measurements we have

a scientifically attractive mathematical model with predictive power over our reality that is lighter

from an information theoretical perspective than the currently most generally accepted model which

does not include any absolute reality. This is because in the model of this paper that includes an

absolute reality, relative (local macro) and superposition (quantum or local micro) realities can be

derived from the absolute reality by applying a dimension of inevitable uncertainty in measurement

for all local observers inside the model.

However, there are still aspects to our model with an absolute universe that we have not considered. In fact, the formula we are after is not yet fully derived, as it includes game theory but not yet communication theory. We will go on to consider the relevant extensions before the end of this paper, but before we do, we will take a detour to examine some of the extremes of our model to verify that it continues to make logical sense even under stress.

If our logic starts to break down under pressure it is not acceptable, so we will go on to examine the extreme perspectives on our model. Even if the model may not hold any predictive power as such at these extremes (because we may never be able to measure them), and for that reason such an excursion is essentially only philosophical rather than strictly scientific (as in falsifiable), we at least want to ensure that our model can continue to make logical sense, even from the very extreme perspective.

We will start with a very close inspection of how a universe could begin at first, based on the concept of logical replicators, and see how "abstract" phenomena could become "concrete" phenomena with the help of recursion, such that a phenomenon is "concrete" exactly to the extent that it can influence the likelihood of itself becoming more prevalent in existence than chance alone would allow. The section will become a discussion not only of how a universe can come about but also a more fundamental analysis of the basic constraints of our model with regard to the smallest level, or the extreme micro perspective.

We will then take the extreme macro perspective, zooming out so that we can look at what happens after a "universe" (a subset of the absolute reality in our model) has died, from the perspective of the "multiverse" (the absolute reality in our model), observing that once a multiverse has come into existence, even if subsets can become totally stable or totally unstable, equating to the death of a universe, there are ways that such a dead universe could be born again.

Causal Loops

Where do particles come from? We have discussed a special class of particle arrangements, capable of making copies of themselves, called replicators. But the concept of a replicator is really more abstract than applying just to solid particles making copies of their arrangements. Any abstract

phenomenon capable of positively influencing the likelihood of a similar phenomenon appearing

again can be considered a replicator.

We started this paper with the assumption that we had some kind of minimally concrete particles that we classified as active and reactive (even the reactive particles seemed to require a minimal information capacity such as to be considered at least minimally concrete). Could we somehow build up to a universe with concrete particles in it from a more abstract version of a universe – or rather, can we find a way to create our very definitions of "abstract" and "concrete" such that they make a logically useful distinction?

In the following discussion it will be as if we pretended that we already had reasonable definitions for

what the concepts abstract and concrete mean, as if an objective observer had given us a clue to

their meanings just like when the same hypothetical observer informed the scientist in the spaceship

about his surroundings in our earlier thought experiments. We will go on to see if we could logically

imagine an abstract universe housing abstract phenomena where concreteness could somehow form

in it with the help of replication among the abstract phenomena. As we do so, we remind ourselves that what we are really doing is deriving logically consistent definitions for the terms abstract and

concrete with respect to our model.

Imagine a chaotic universe where events occur randomly, a plethora of phenomena on parade. We have space in this universe because the information in the phenomena has the usual information theoretical costs, and we have some sort of time since information changes state, but since the information bits flicker totally at random there is really no special meaning to the time dimension (it does not become compressible with the help of non-randomness): not only does the arrow of time have no obvious forward and backward directions, it is totally broken, such that it can go from any point in time to any other point in time.

Then we introduce a minimal amount of causality into the system, such that all events are not completely random anymore: sometimes the occurrence of an event E implies an increased likelihood of an event F occurring at some time after E. This heals the arrow into a straight line, even if it does not give it obvious forward and backward directions.

In such a universe, it could happen that a particular event E1 would occur that increased the likelihood of the same type of event E2 occurring again in the same place but at a later time, either directly or via a chain of other events (E1 gives F1 gives G1 … gives X1 gives E2) in what we could call a causal loop. That new event E2 would cause yet a new event E3 to happen, and so on, potentially indefinitely. This finally gives the arrow of time a direction, such that moving forward in time means that events like E will become increasingly commonplace in the information total of the universe.

Such an event, capable of increasing the likelihood of the same type of event occurring again, is a kind of replicator that is able to create copies of itself over time. The effect is of a phenomenon that can sustain itself over time, and so we call this type of replicator a sustainer.
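
A toy simulation of a sustainer (the grid size, baseline rate and reinforcement probability are assumptions invented purely for the illustration) shows how an event type that merely raises the chance of its own recurrence comes to dominate the record over time, giving the sequence of states a recognizable direction:

```python
import random

# Each cell either hosts an event of type E (True) or not (False). A fresh
# event appears spontaneously with probability BASE, while a cell that
# hosted an E in the previous step re-hosts one with the much higher
# probability SUSTAIN -- the causal loop E => E described above.
BASE, SUSTAIN, CELLS, STEPS = 0.01, 0.90, 1000, 50

state = [False] * CELLS
for step in range(STEPS):
    state = [random.random() < (SUSTAIN if occupied else BASE)
             for occupied in state]
    if step % 10 == 0:
        print(step, sum(state))  # the count of E-events climbs until it saturates
```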

We can then go on to imagine a second type of event M that due to another causal loop manages to

cause itself (or rather a new event of the same type) to occur again at a later point in time and at a

different point in space. We call such a replicating event a mover.

We can also imagine a third type of event S that can not only cause a copy of itself to occur later in

time but which over time causes multiple copies of itself to appear at different positions in space. We

call such a replicating event a spreader.

A sustainer corresponds to a particle that sustains itself over time but is stationary in space, as with the solid, active particles in the semi-static universe or any particle in the fully static universe. A mover is a particle that persists over time and that can move through space, as we know active, solid particles with mass behave in our dynamic universe and reactive particles behave in the semi-static and dynamic universes.

A spreader is a particle that persists over time and can spread over space. An example of this might

be the photon (which in our model is the same thing as empty space) that we know from our world

to be able to behave as a wave form spreading in every direction in addition to being able to behave

as a particle (as in our model where light can be seen as a ripple effect of positions in space).

It is not hard to imagine how spreading could work for a purely reactive particle, since such a particle has no rest mass (no state of its own). Spreading out would not result in additional mass (positions,

as in more costly information) being inserted into the universe. It would only be an effect among

reactive particles communicating information about the shapes of active particles among each other

by influencing the (potentially infinitely precise) positions of other reactive particles.

We can also imagine an active particle spreader, but such a replicator that would actually grow in mass would either imply an addition of mass to the universe from nowhere (not allowed by the most generally agreed upon rules of physics) or that the spreader would simply steal the mass of other particles and add it to its own internal mass. The behavior of such a particle, especially if it is very efficient, reminds us of a black hole. The definition of a black hole could simply be that it is the local champion of active spreaders, of the subtype suckers, capable of influencing all of its local surroundings to become incorporated into its mass.

However, there could also be another subtype of spreaders that we call converters. Converters do not suck the positions in space of other particles into their internal mass but simply convert the structure of existing matter in their environment into copies of their own structure – or at the very least into structures that contain copies of their own structure – which are then free to go about their own business (which would of course to some extent be the business of the converter).

The photon would essentially be an example of a reactive converter, as it only influences the structures around it to share the representation of the information in it but steals no mass from other particles into its own internal structure (which continues to contain only the minimal information of one position in space, but where the precise position keeps shifting to reflect more and more of the information about the solid particles). Furthermore, it allows the particles it converts to keep their own information, so it is a case where the converter lets the converted particle get away with containing a copy of the converter (rather than becoming a fully converted clone of the original).

Moreover, photons share information with each other, so the full classification of a space or photon

particle in our model would be to call it a reactive bidirectional semi-converter.

It would not be unreasonable to consider a reactive bidirectional semi-converter and an active

unidirectional sucker – that is, photons and black holes – to hold positions as logical extreme particles

of a universe from an information theoretical point of view, where active unidirectional converters –

solid mass replicators, or life as we know it in the form of genes – could be found somewhere in the

middle between those two extremes on such a scale.

But life in between the extremes of photons and black holes, as we shall soon go on to see, is not

ultimately constrained to contain only active unidirectional converters. We can also see examples of 

active bidirectional semi-converters, such that two solid particles try to share information with each

other rather than fully convert each other to become an informational clone of the original.

So-called memes, or ideas, could be an example in our world of active bidirectional semi-converters, under the simple observation that two people, each with a unique idea, can walk away from a conversation now each carrying both unique ideas in their heads. We will talk more about memes later in this paper, but for now just consider how the information content of an idea can start by having a concrete representation in the brain of one person. After a discussion with another person, that idea could go on to be concretely represented in the brains of two humans.

As we model the concept of the bidirectional semi-converters – active and reactive – we see that

there is an important subset of information theory that we have not considered enough but that

must be captured to make our information theoretical model of a universe complete.

The idea of particles able to share information with each other implies the concept of communication 

and so we must begin to discuss the information theoretical discipline of communication theory to

provide us with the final dimension to our mathematical formula for capturing the information

theoretically imposed constraints on any logically consistent model of a universe.

As we go on, keep in mind that all the thought experiments we make are meant to provide concrete

ground for the reader to verify the solidity of ultimately purely information theoretical propositions

about the constraints to communication.

Thus the reader should avoid the temptation to conclude that a discussion about the constraints to

the flight paths of bats or the risks involved in interstellar communication with potentially hungry

aliens should primarily concern perhaps one or two biologists interested in bats but otherwise mostly

fans of science fiction. The point of the thought experiments will be to illustrate the truth of logical

communication theoretical constraints in a way that the reader can mentally verify that the core

deductions must hold true.

Before we dig deeper into communication theory we should conclude our visit to the extreme micro

perspective and the extreme beginnings of a universe and we must also remember to visit the

extreme macro perspective.

The model we have examined suggests that by going from a completely random universe with space but no time, to a less than completely random universe with space and "proto-time" (non-directional or random time), and then – with the help of any form of causality that allows replicators to form – turning the universe into a more and more reliably performing system with space and directional time, we transform an abstract universe into a concrete form. As we could see, time finally gets the

features we expect of it with distinguishable forwards and backwards directions when we add

replicators to the system.

With quantum uncertainty in micro measurements of the macro world we could even get a meaningful direction for the absolute arrow of time, in that history could be fully knowable while the future is not, allowing us to see that even time on the absolute level could have a logically consistent definition in our model. Time on the absolute level gets its direction from how the past is derivable but the future is not. We will revisit the implications for pre-determinism within our model (and, if the deductions hold, by extension for any logical universe).

In our model, the definition and our understanding of particles is based on pure information theory that includes game theory and will come to include communication theory, such that we derive how, with the help of replicators exploiting causality by means of causal loops, Darwinian evolution by natural selection among loops competing for influence over locally available space will cause the universe to continuously take a more and more concrete shape over time.

This is in the form of increasingly stable causal loop replicators out-competing each other in the game of existing, until we have replicators performing so stably that other superstructures can evolve on top of them, superstructures which may perceive their constituent causal loops as solid particles. The new superstructures are potentially new agent structures with Darwinian agendas of their own, adding yet another layer of relevantistic distortions to their experiences.

We can relate these concepts of causal loops to the concepts of space, time and mass in our model as follows. A phenomenon R (which could be an aggregate of phenomena A, B, C, etc.) that is able to exploit causality to cause a new phenomenon of the same type to happen again directly, with R => R (it directly causes a copy of itself to appear in the same place in the next instant of time), would be seen as a reactive particle – that is, a space particle or a photon. We can see this as the space particle or photon having been able to "capture" a position of space and claim it over time.

An active particle – that is, a solid particle of mass – would equate to an aggregate phenomenon M+N, where M and N are able to cause new representations of their own types to come about again indirectly, requiring more than one position in space over time, such as in a chain M => N => M or longer, as it would require the reservation of more of the available positions in space (our ultimate information theoretical resource) to persist itself. Positions in space over time are thus seen as the "information capital" in our model. Motion is the ability of a phenomenon to capture neighboring positions in space at one point in time and release them in a certain direction at a later point in time.

We note that over time we get competition for available positions in space, so that any self-sustaining phenomenon will have to keep working to sustain itself. This observation about our model matches the Second Law of Thermodynamics, which tells us that concrete representations of information lose their ordered state over time, so we can see that this statistical law works in our model as it should, implemented by Darwinian competition among self-sustaining phenomena for the occupancy of positions in space over time.

A phenomenon such as R, should it become reliable enough at causing “itself” to continue to happen, 

would not necessarily be possible to overtake (until the day it makes a mistake) as it constantly

occupies its position in space, meaning that the competition for available space over time would only

really have to concern the indirectly self-causing phenomena (once phenomena such as R performed

well enough). This corresponds to the information theoretical rules for how a signal represented by

solid particles is bound to randomly degrade over time as recapitulated in the Second Law of 

Thermodynamics (again from the competition for the positions in space among weakly or indirectly

self-sustaining information) but a signal represented by photons will live on forever.

This takes us to the concept of the Planck length. We can now see how in our model it would make

sense to see this simply as the concept of minimal stability of a directly self-sustaining phenomenon

such as R that occupies its position in space reliably enough as to appear as detectable information to

a larger observer (which does have a good correspondence to the logical rationale behind the

Sampling Law that we used to define our concept of a Planck length earlier).

Thus from now on in our model we describe the concept of a Planck length as well as the concept of 

the minimal rest mass or minimal information content that we have used for photons and space

particles (reactive particles) in the following way:

The minimally stable rest mass 1 of a reactive particle (the photon or a space particle), which corresponds to the Planck length in our model, stands for the stability of a self-sustaining phenomenon that has become reliable enough to be seen and treated as a minimal level of information from the perspective of an external observer.

That does not mean that it has to have 100% stability internally. In other words the minimal

information capacity that we have been talking about is really the minimal stability level of 

information or a signal to be readable by external observers, such that it turns into a detectable piece

of information as seen by anyone else in the model.

We discussed causal loops as if they could "cause themselves" with absolute certainty thanks to an absolute causality. But assume that R => R is not an absolute statement, such that R only ever so slightly improves the likelihood of another R. This would be the reality for sub-Planck loops: the loops are not fully reliable by themselves, but enough of them together create enough stability for an outside observer to be able to treat the sum of the unstable loops as a somewhat reliable (better than random) piece of information.
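
A small numerical sketch of this aggregation (the reliability figure and the bundle size are invented for the illustration) shows how many individually unreliable loops can add up to a signal that an outside observer reads far more reliably than chance:

```python
import random

# A single sub-Planck loop re-occurs only slightly more often than not,
# so on its own it is a very unreliable bit. An external observer who can
# only read the majority state of a whole bundle of such loops, however,
# sees a far more stable piece of information.
LOOP_RELIABILITY, LOOPS_PER_BUNDLE, TRIALS = 0.55, 1001, 2000

stable_readings = 0
for _ in range(TRIALS):
    recurrences = sum(random.random() < LOOP_RELIABILITY
                      for _ in range(LOOPS_PER_BUNDLE))
    if recurrences > LOOPS_PER_BUNDLE // 2:   # a majority of loops re-occurred
        stable_readings += 1

print(stable_readings / TRIALS)  # close to 1.0: the bundle reads as reliable
```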

We therefore arrive at the following information theoretical definitions for the core concepts in our

model:

•  The concept of Space stands for the minimal stability level required by information (a signal)

to be measured at all by external observers such that one position in space becomes

occupied over some time.

•  The concept of Mass stands for information (a signal) with greater stability (reliability in its

measurability to external observers) than the very minimal one of Space such that more than

one position in space becomes occupied over some time.

•  The concept of Time stands for the distinction in stability between information such that the

less stable information or signal degrades more (becomes less reliably measurable to external observers) over the same amount of time.

We can see that the distinction in stability between signals and their predictable degradation over time, as expressed in the Second Law of Thermodynamics, would become implemented as a direct result of the Darwinian selection pressures forcing loops to compete for the available time and space to exist in – loops would become less stable as other loops tried to take over their time and space, which is the reason that signals degrade in space over time.

Of course, information that cannot be even minimally reliably interacted with (detected) by an external observer could not affect that observer from an information theoretical perspective, so we can see that it becomes clear with our definitions why information theory prevents any force of nature from acting with an effect that moves faster than the speed of light – the concept "faster than light speed" equates to the concept "not even minimally readable information".

Note that our Planck length and minimally stable rest mass of 1 do not imply totally reliable information to the external observer, only minimally reliable information. We have no reason to assume that information would be able to reach a level of 100% reliability to external observers (possibly with the exception of matters related to the event horizons of black holes). In other words, information may never be completely reliable either inside or outside the Planck length.

Information on the inside of a micro-cosmos (micro scale imperfect sustainers, movers, spreaders 

and converters as in causal or information loops sustaining themselves on their scale better than

chance would dictate and competing for the available space and time in their world below Planck

level) would be more than minimally reliable to other observers inside that micro-cosmos.

The Planck level thus really describes the level at which some self-sustaining informational phenomenon becomes reliable enough for external observers to detect, whereas below the Planck level the information is only more than minimally reliable to other observers under that Planck length.

We have now looked at what it would mean for information to become reliable enough to be detected by measurement by an external observer. If we go on to look at the situation from the perspective of an observer internal to the micro level, we can see that, as always in nature, the micro causal loops would be competing with each other to some extent for the time and space of the micro cosmos. But

there is an even greater threat to every micro loop in this world than that of other micro loops.

The impact of the macro world on the micro world should really be to obliterate it, unless the

information on the micro scale could find a way to protect itself from the Darwinian selection

pressures of the macro world for its time and space. Thus the only way to maintain any level of 

minimal stability in the micro world is to create some kind of shield in order to protect the

information in the micro world from the informational impact of the macro world.

We can think of our initial section on information theory and the wall of sand we could build around

our pebbles to protect them from crabs. We could see that it follows logically in both directions

(from the internal and external perspectives) why we should expect to see a Planck length – seen

from the inside of the micro cosmos, the Planck length represents a boundary around that world which provides the micro loops with some minimal stability, allowing the information on the inside to exist.


But we know from information theory that building such a wall – that is, isolating information – costs

energy, so the micro loops could not just build a wall and leave it there to go on to live happily ever

after. A wall has to be maintained, and we have to ask ourselves where the energy for this would

come from. Could the micro cosmos have access to some form of free energy that would allow it to maintain such a wall of informational isolation around itself?

Not in accordance with any of our established ideas about the universe, which expect the total information content of the universe in the form of mass and energy to stay the same.

The solution is that while no free energy is created in the micro world, the loops under the Planck level have figured out the next best thing to free energy. They have learned to release a form of energy that already does exist, but that is locked up so as to seem non-existent until you discover how it can be put to work.

In short, the micro loops have discovered the power of cooperation, which in information theoretical terms translates to the concept of communication theory. By communicating and thus being able to coordinate, the energy levels of the micro world are simply bigger than the corresponding

informational pressure from the macro world (as long as the pressure from the macro world does not

become equally coordinated) thanks to cooperation allowed under information theory in the domain

of communication theory.

No new energy has to be added to the equation; it is enough to see that the combined energy levels of cooperating and therefore to some extent organized self-sustaining information are greater than the combined energy levels of strictly competing and therefore disorganized self-sustaining information.

To verify the general logic of this proposition, consider how a civilization that spends its military energy on infighting has less military energy left over to protect itself from incoming asteroids from

space. It is the same energy, directed either inwards or outwards. We will see soon how this all works

out mathematically, but before we do we should conclude the tour of the extremes with a visit to the

extreme macro perspective.

The Big Bangs

We saw that in the relativistic experience of solid particles in our model, we could consider two

particles to occupy separate universes if they can never interact, even though they are really

inhabitants in the same, absolute universe.

We could dub this absolute universe the multiverse and relate it to the term universe such that we

have two separate universes in the multiverse when no particle in either universe will ever be able to affect any particle in the other before the cold or heat death of their respective local

environments (their universes).

This would be seen to apply to particles that could interact in principle, such as two solid particles

that would bounce off each other, but due to extreme distance between them no signal would be

able to finish the journey from one to the other before the deaths (stagnation or descent into chaos)

of the two universes that the particles would then be seen to occupy. A dead universe in this

perspective is a local subset of the multiverse with no more (consistent) internal activity, thus

behaving from the perspective of the multiverse as a solid, passive (dead) particle.


But even after two universes have died in our model, absolute time ticks on in the absolute

multiverse. The dead universes could be on a random drift through the multiverse and given enough

time they might finally happen to collide.

As an alternative, consider a universe of little robot particles, where all the particles still work but

have been placed in a game theoretical dilemma of exactly the kind you would see in an old western movie (a kind of Nash Equilibrium or “terror balance”). Everybody has their guns drawn and

anyone so much as flinch, they are all dead meat.

This would equate to a potentially living but unfortunately stagnated universe. But imagine a big such

universe, locked in eternal stalemate and moving through time and space without any particle ever moving in relation to the others. Mathematically this would equate to the cold death of a universe,

or stagnation into eternal stability.

But what if another little robot particle, outside the stagnated universe, came zooming through the multiverse, spotted the enormous collection of potential friends (it has been a very lonely particle as of late) and steered itself with full motor capacity towards the…egg?

The image of a universal correspondence to our concepts of conception with a sperm and an egg is striking, but the point here is not to become poetic but to drive home the realization that it really is

no coincidence that we see certain patterns repeated over and over again in nature – the patterns

represent constraints that can be derived from information theory and should therefore be expected

to show up precisely everywhere we look.

However a collision between two dead universes comes about, it seems to match the type of event

that we know as a Big Bang. We go on to note that two particles are only truly isolated if they will

never be able to have an effect on each other before the death of the multiverse, when global time

stops and no more big bangs will happen. However, it is possible that the multiverse never dies but

rather moves in an infinite loop. It is also possible that the multiverse is part of a greater - perhaps

infinite - structure of “multi-multiverses” that have no final death. Ultimately, we see from our model

that from the information theoretical perspective there is no logical constraint describing the

necessity of any final death for a multiverse and its universes.

The Power of Cooperation

For any active agent particle, active unidirectional converters would pose a real threat, as the life of all active agent systems (as in their continued functioning as machines) depends on maintaining their own internal structures and not having them (fully) converted to resemble the structure of someone else. While

a threat, the active agent on its toes could survive another day if it carefully avoids all overly

aggressive converters in its surroundings and so active unidirectional converters present a somewhat

more realistically survivable threat than a black hole (an active unidirectional sucker).

As we will go on to observe, such survivable threats are practically a form of opportunity. The reason

is that this type of potentially avoidable threat sees to it that the requirements for Darwinian

evolution by natural selection are met, such that active agent systems can hone their behaviors

towards their optima in their relation to each other.


We can see that the constraints from game theory must hold anywhere information competes for

concrete implementation opportunity, so this indicates that so-called meme theory should hold from an information theoretical perspective, even if we don’t yet know exactly the mechanisms by which that information stores itself in our brains.

Meme theory was described in the book The Selfish Gene by Richard Dawkins and states that human culture can be observed from a game theoretic perspective of ideas competing in a Darwinian

way for the limited resources of human brains to exist in. From this observation we were invited to

derive a certain set of deductions allowing us some increased predictive power over the evolution of 

our culture. Specifically, the prediction becomes that as the shapes of our ideas are honed by the

mechanism of natural selection we should see that ideas that happen to have qualities that improve

their chances of spreading to more brains would become more common.

As is so often the case in nature there is a bright and a dark side to this conclusion, as we can see that

one quality that might make an idea spread fast is if it had positive consequences for the humans

repeating it - the bright side - but that we must also note the dark side, namely the predictable expectation that nature will contain examples of twisted ideas that were bad for the humans promoting them. The poor people thus “infected” by such a bad idea would be unable to stop repeating it because some quality of the idea made them think that the idea must hold true and that it would be a

great move to act on it and (if the meme is really powerful) to tell as many people as possible about

it.

The power of the selfish competitor was derived conclusively by Darwin and has been confirmed by

game theoreticians to hold mathematically ever since. However, as we know, nature has, in what some overly theoretical biologists might regard as a most uncooperative spirit, seen fit to provide us with cooperating organisms that are not even related, left and right.

While we see no shortage of selfish competition strategies in nature, the mathematics that can be

derived from the ultimately information theoretical constraints discovered by Darwin actually seems to go so far as to predict that this is all that we should be able to see. We could perhaps explain away one or

two altruistic humans as suboptimal gamblers, but all our economic analysis seemed to indicate one

inescapable conclusion: The inevitable ultimate success of the most selfish competitor.

And yet nature is full of examples of cooperation among lowlier creatures than humans (one example being the crocodile and the dentist bird, but this paper does not intend to dwell on the richness of examples that any biologist could provide) apparently in total spite of everything game theory claims we should expect. Is nature really full of such lousy gamblers, never realizing that cooperation

doesn’t pay off in the end?

It is intended to be the central contribution of this paper to present an attempt to solve this

mathematical dilemma. We will examine as carefully as we can the claim that in the end we should

mathematically expect to witness the triumph of the friendly (but not overly naïve!) cooperator.

It will be contended that the key to answering the Darwinian paradox will be a closer analysis of 

communication theory in combination with game theory. We will use our information theoretical

analysis of a biophysical model with active agent particles and as we do we will continue to discover

interesting ramifications for the purely physical side of the model with regards to how any physical


force may behave. We will now examine some of the mathematical specialties around the

information theoretical constraints to physics that we will find to be associated with a model that

includes an absolute reality and is constrained by game theory and communication theory.

Superluminal motion

In the model we have discussed, assume that two light particles are zooming directly towards each

other through absolute space. Consider then the question: how fast is the distance between them

decreasing in absolute space?

The only logical answer within the constraints of our model must be: at twice the speed of light.

Then if two nearly reactive but still a little bit solid particles are zooming towards each other each at

nearly the speed of light through absolute space, how fast does the distance between them decrease

in absolute space?

The answer could only be: at nearly twice the speed of light.

Using traditional definitions, this equates to describing superluminal motion or faster than light (FTL)

travel in that the particles approach each other with a combined speed greater than that of light, and speculation around such activity is traditionally uncomfortably close to the territory of designers of perpetual motion machines and other objects of pure science fiction. What makes the design of a perpetual motion machine

impossible is that any such device would require new energy to be added to the universe, which is

not allowed in serious discussion. The sum of energy and mass in our model remains constant and so

our model will continue to stay in the realm of the ultimately possible.

But we see that in the model we have devised, superluminal motion according to its traditional

definition (which we will continue to use, such as when two particles approach each other with a

combined speed exceeding that of the speed of light) would be possible. Does this mean our entire

model must be wrong, and this has all been a rather lengthy exercise in futility?

It really does not. Our model is still logically and mathematically valid. While there could be reasons

to exclude superluminal motion from a purely information theoretical perspective, we have not found them yet.

In fact, the information theoretical limit as imposed by essentially purely logical constraints would

seem to be that the maximum speed at which a single solid particle could move through absolute space would indeed be nearly the speed of light, and so far so good, as this is in accordance with conventional wisdom on the topic. But here is where it gets interesting.

Why would two particles, each moving at nearly the speed of light from an absolute perspective, not

be able to move towards each other with combined superluminal speed as seen from the same

absolute perspective? The logic and mathematics of the General Theory of Relativity does not imply

that this could not be so because it doesn’t strictly speaking make any predictions at all about

behaviors in absolute reality – it has sorted that concept away from its model.

General relativity only claims that two particles could not approach each other with more than a

combined speed of light from the perspective of each relative reality in its models. It makes no claims

about any constraints from absolute reality as no absolute reality is present in the model. This means


that our model is strictly speaking not logically incompatible with the currently generally accepted

model. But on the other hand, it would seem to imply that we should be able to witness, even from

the relative perspective, how Alice could approach Bob at a combined speed greater than light.

We will resolve this apparent paradox between the model presented here and the currently widely

accepted one of general relativity in two ways.

First we will go on to make one more careful analysis of the information theoretical constraints that

apply to the model we have created and we will see that we have already identified the constraint

that makes our model compatible with the predictions of Einstein.

We will observe that while there is room in our model for superluminal motion such that solid particles could approach each other at up to twice the speed of light, statistics will place hard limitations on our model so as to make such a phenomenon exceedingly unusual in reality.

We will also go on to note that our model would then become compatible with the recently

measured phenomenon of the superluminal neutrino, which seems like it defies conventional

wisdom on superluminal motion anyway. If the measurements can be repeated, we know that c is

not a hard limit on relative motion after all and thus a mathematically and logically consistent model

that would allow relative motion with a hard limit of 2 * c could become useful.

We will see that game theory provides the explanation for the statistical limitation that normally

makes superluminal motion impossible (or highly unlikely) which will eventually be a restatement of 

the same economic analysis stating that “selfish wins” and that has continuously puzzled biologists

that have to witness organisms that cooperate all the time despite the suggestions of the best

mathematical models on the subject.

We will go on to see that communication theory provides the explanation not only to the biologist’s dilemma of having to explain away more and more cooperation discovered in nature with more and

more hand-waving to the effect that “we just have not figured out in what cunning way they are

really being selfish versus their partners yet”.

Communication theory will also by the same logic provide the answer to how the concept of 

superluminal motion should be treated in our information theoretical model of the universe,

ultimately to provide mathematically coherent explanations for both the superluminal neutrino as

well as for another measured phenomenon of our world that haunted Einstein and that he called

spooky action at a distance and that concerns communication between particles that have become “entangled” (been in physical contact and formed some kind of relationship at the micro level) at

what seems like millions of times the speed of light.

Spooky action at a distance has since been explained in a mathematically satisfying way, but it

reveals the logical problem with a model that includes no absolute reality in that it results in

a recursive explosion of new virtual worlds with every event in every such virtual world.

While the mathematics of such a model works out, and can be seen to be compatible with the

mathematics presented in this paper, we still must note again that a model where only one absolute

reality has to actually be modeled with persistent information and the other relative realities can be mathematically derived from the information in the absolute reality is much lighter from an


information theoretical perspective (and the size of the information total can stay constant over

time) compared to a model where the amount of information that must be persisted increases with

every new relative reality that springs into existence (such that the information volume increases

exponentially over time) and so should be preferred from a mathematical perspective.

How could superluminal motion be possible? The solution to this conundrum lies in distinguishing between a logical possibility and a statistically reliable effect over time. In other words, the full

answer to the question about how fast two solid particles could approach each other would be:

Two solid particles each moving at nearly the speed of light would in our model approach each other

at a combined speed of nearly twice the speed of light, but they would usually crash into something

 physically unpredictable before they got very far because solid particles moving faster than half the

speed of light through absolute space can’t see where they are going.

We note that we cannot logically consider cases where active (solid) particles could move faster than

reactive particles, as information theory tells us this should not be possible due to the cost of information. In other words, we see that the constraint on solid particles to not move faster than

light through absolute space is a hard constraint from information theory.

No solid particles can move faster than light can through absolute space. But, seemingly in conflict

with the constraints we have discussed in this paper and in stark contrast to most conventional

opinion on superluminal motion, solid particles could still approach each other in absolute space with

a combined speed of more than light speed.

The relativistic constraint we have discussed in this paper states that solid particles should only be

able to approach each other with a maximum combined speed that should be lower than light speed,

such that solid particles could move at maximally half of light speed each through absolute space.

Strictly speaking we must observe that it would be logically and to that extent also physically possible

for solid particles to travel through absolute space at almost full light speed – the single problem is

that they couldn’t “see where they are going” which is the kind of argument that may make a

theoretical physicist who is not a theoretical biophysicist cautious, but again we will only invoke the

mathematical construct of robot-like active agent particles to see what type of constraints

information theory must place on our model. We will as always go on to compare how the

constraints we find this way go on to hold up against external implementations that do not suppose

any robot particles, and see that the constraints we have derived with the help of biophysics

continue to hold.

If we call the fastest reactive particle the light particle it becomes a mostly tautological exercise to

note that for an objective observer a light particle should move at the speed of light through

absolute space in a model that includes an absolute reality. As an objective observer watches two

light particles moving towards each other through absolute space in our model, each light particle

moving at exactly light speed through absolute space, the external observer must thus also see them

reducing the distance between each other at twice that speed which of course then equates to

saying that they approach each other at twice the light speed in absolute space.


So does this mean that it is possible to send information or even physically travel faster than light,

with the definition of superluminal motion in our model as being two solid objects approaching each

other with a total combined speed in absolute space exceeding that of the speed of light?

Theoretically, yes. However, it could not actually result as the effect of any reliable physical force, as it could not be done in any rational or sustainable manner – at least until we invoke communication theory. The constraints we have discussed so far all hold, but strictly speaking they only apply to

mechanical implementations of natural forces that are supposed to be able to act in a statistically

reliable manner over time and that are not allowed any communication to coordinate the effects.

It is still the case that two active particles could not keep reducing total distance between them

faster than light speed in the long run due to uncertainty and information theory. Reactive particles

on the other hand can certainly move towards each other in absolute space such that they reduce

absolute distance between themselves at twice the speed of the reactive particle; nothing else would be logically consistent with the definitions we use.

Consider two active particles that are very nearly reactive (such as gravitons). We will go on to see

how information theory (specifically game theory) places a restriction on how fast they could

rationally move towards each other implying ultimate constraints on the effects of mechanical

physical forces. The argument will be based on observing that in our model it would still be logically and physically possible, despite how exceedingly statistically unlikely it would be to succeed, for a pair

of daredevil pilots in nearly reactive spaceships to just go for it and zoom towards each other at a

combined speed of nearly twice the speed of light as seen from the objective perspective. They

couldn’t reliably do it and statistics predicts that they would run into a rock instead which is also a

restatement of the logic dictating why mechanical natural forces couldn’t do any better.

Still, two daredevil and extremely lucky pilots in nearly reactive spaceships could in fact crash into one another twice as early as two rational pilots in nearly reactive spaceships. While the rational pilots approached each other at nearly the speed of light, so as to be able to stop in time rather than to actually

crash into each other, the lucky daredevils managed to reduce distance between them at almost

twice the speed of light (ending in a most spectacular collision) with significant statistical but no

logical objections from physics or information theory.

So in our model a nearly reactive particle could physically move at nearly light speed (i.e. the speed

of the fastest reactive particle) through absolute space, but due to uncertainty it could not really do

it for a long time, and by extension not as the result of any mechanical physical force, constraining all solid particles to move reliably no faster than nearly half the speed of light through absolute space, reducing distance between them no faster than so as to approach each other at near light speed.

It could be seen as solid particles being able to move through absolute space at nearly light speed in

theory but because of uncertainty associated with directional time their actual speed must be

dragged down by the ballast of uncertainty until they can go no faster than half light speed.

Or could they? We will now go on to examine the very extreme limits of what type of effects could be

physically allowed under the constraints of uncertainty described by the formula E ≤ m1 * m2 * c² * r1 * r2 to see if we can discover the distinction between the impossible and the maximally


improbable – and even how the general improbability of a phenomenon such as superluminal motion

could be reduced.

We will discuss two possible mechanical implementations of superluminal speed for solid objects.

One will be based on external, passive mechanics and one will be based on active agents with an

active, internal implementation. But before we do, it is time to root some of the basic concepts we are working with yet more firmly in the ground so that when we proceed to discuss superluminal

effects, we have a solid mental model against which to verify the logical validity of our assertions.

One problem when discussing relativity is that, as we all know, on the everyday level most of us

experience only Newtonian gravity and one has to move towards extremes such as going at nearly

light speed in order to be able to measure any strong relativistic effects. Experiments at such

energies are costly and potentially dangerous, and their conditions are so different from anything in

our ordinary lives that many of us have difficulty relating mentally in a strong way to relativistic

effects.

Perhaps this could lead some of us to conclude in resignation that the whole business with twins that

become different ages because one takes a ride in a spaceship sounds awfully strange (although

deeper analysis shows that should they meet again, at least after a linear journey through space,

they would also turn out to be the same age again) but if math says it must be so then perhaps

it is so.

It is fortunate then that relativistic effects in fact can show up at much lower energies, such that

relativistic effects could be spotted with the naked eye. Furthermore, we are not talking about some

exotic, perhaps theoretically possible but hideously expensive condition that could only work as a

thought experiment. It turns out that with the help of robotics we could see relativity scaled down toeffectively any speed we like – and we could even have potential examples of relativistic macro

particles behaving in at least some accordance with relativity that has been scaled down to our

normal, everyday speeds on our planet right now (although effects would be weak). Wonderfully

enough, we can not only measure and make thought experiments around such existing macro

particles – we could in fact go and admire their beauty at the zoo!

The Semi-sonic Bat 

There is a very good reason why the fastest possible bird natural selection could provide should be

faster than the fastest possible bat that natural selection could provide and this reason should be

obvious to the careful reader of this paper. The reason is of course that birds can navigate by sightbut bats are blind and have to navigate by means of sound and the speed of light is greater than the

speed of sound. Note that a bat that is not blind would for the purposes of this argument not be

considered a “bat” anymore but would be a “bird”, regardless of what a genealogically minded

biologist could object.

A blind bat that tries to fly faster than sound will have a very real problem with flying into things. A bat navigates by sending out sonar pings, a kind of “sound particle” (strictly speaking, waves in the air), that it sends ahead while counting how long it takes for the ping to come back. The shorter the time, the closer whatever is ahead of it must be or, correspondingly, the more rarely it gets a ping back, the less

solid something ahead of it is, under the assumption that a ping could go straight through something


less dense. It is easy to see that should the bat fly faster than sound it would have no way of 

detecting what is in front of it.
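To make the ranging logic concrete, the sketch below (a minimal illustration, with made-up speeds and distances rather than anything measured on real bats, and assuming a stationary obstacle) computes the distance to an obstacle from a ping’s round-trip time and shows why a bat flying at or above the speed of sound can never receive its own echo from an obstacle straight ahead.

```python
# Minimal sketch of sonar ranging for a "bat" flying straight at an obstacle.
# All numbers are illustrative assumptions, not measurements.

SPEED_OF_SOUND = 343.0  # m/s, roughly, in air


def distance_from_echo(round_trip_time):
    """Distance to a stationary obstacle given the ping's round-trip time."""
    return SPEED_OF_SOUND * round_trip_time / 2.0


def echo_return_time(distance, bat_speed):
    """Time until the echo reaches the bat again, or None if it never can.

    The ping travels ahead at the speed of sound, reflects off the obstacle,
    and must then meet the bat, which keeps flying forward at bat_speed.
    """
    if bat_speed >= SPEED_OF_SOUND:
        return None  # The bat reaches the obstacle before any echo can warn it.
    t_out = distance / SPEED_OF_SOUND               # ping reaches the obstacle
    gap = distance - bat_speed * t_out              # remaining bat-obstacle gap
    t_back = gap / (SPEED_OF_SOUND + bat_speed)     # echo and bat close that gap
    return t_out + t_back


if __name__ == "__main__":
    print(distance_from_echo(0.1))          # ~17 m away for a 100 ms echo
    print(echo_return_time(17.0, 150.0))    # subsonic bat: gets a warning
    print(echo_return_time(17.0, 400.0))    # "supersonic bat": None, flies blind
```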

In other words, information theory places a hard constraint on nature such that Darwinian evolution

by natural selection will never be able to produce a supersonic bat, as such bats would keep running

into walls in the bat cave. This follows from our definitions (all bats are blind, a supersonic sighted bat is in fact a bird) in the same way as it follows from our definitions that solid particles can’t go

faster than light particles (if they do, they are just to be considered the de facto light particles).

Furthermore, the environment of bats will usually include other bats. This makes the ultimate

constraint on rationally navigable bat speed even stricter, taking it down from a maximum near the

speed of sound to a maximum just below half the speed of sound. As soon as two bats in a bat cave

both started to fly faster than half the speed of sound, they would become unable to detect each

other whenever they fly directly towards each other and would thus over time keep crashing into

each other if they flew too fast.

Information theory thus places an uncertainty based constraint on nature such that Darwinian

evolution by natural selection will never be able to produce better than a semi-sonic bat.

This bears repeating as it captures the central logic in this paper, the focus of which is to derive

constraints on natural forces via examining the mathematically optimal behavior of idealized

particles with desires of self-preservation: what we have just done is to provide yet a demonstration

of the same uncertainty based effects that we have discussed under the terms of relativity and seen

how it constrains nature to contain maximally semi-sonic bats.

This simple observation around bats actually captures the entire logic behind the theory that we

have spent the rest of this paper carefully examining in such detail. It turns out that in order to

demonstrate relativistic effects, measure them and even see them with the naked eye as scaled

down to the speed of sound, all one would have to do in theory is to visit a zoo and watch bats fly

around. At least this would be the case if bats were generally fast enough.

This paper predicts that to exactly the extent that hypothetical creatures that we call “bats” navigated only by sound and were able in their biological designs to fly near half the speed of sound,

their flight paths would closely match those that would be predicted by relativistic effects as scaled

down to the speed of sound, or E ≤ m1 * m2 * c², where c stands for the speed of sound.

Furthermore, as bats are competing replicating agents with unequal information about each other, we should see them fly around according to the relevantistic formula E ≤ m1 * m2 * c² * r1 * r2 (where c still stands for the speed of sound), theoretically allowing us, with fine enough measurement of fast enough bats, to derive the opinions or prioritizations that bats hold of each other. A decent guess could be that

bigger bats think they are more important than smaller bats. This could become expressed by bigger

bats flying a little faster than half the speed of sound at the expense of the smaller bats that would

have to fly a little slower than half the speed of sound to compensate, by the logic that the smaller

bats would still find it in their best interests to stay out of the way when compared to the alternative.
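As a purely illustrative reading of that guess, one could let two bats divide the combined closing-speed budget of one speed of sound in proportion to their relevance weights r1 and r2, so that the more “important” bat claims more than half of it. The proportional split rule below is my own assumption made for illustration, not a formula derived in this paper.

```python
# Hypothetical illustration: splitting a shared closing-speed budget between
# two bats in proportion to relevance weights. The proportional split rule is
# an assumption made for illustration only.

SPEED_OF_SOUND = 343.0  # m/s


def split_speed_budget(r1, r2, c=SPEED_OF_SOUND):
    """Return (v1, v2) such that v1 + v2 == c and v1 : v2 == r1 : r2."""
    total = r1 + r2
    return c * r1 / total, c * r2 / total


if __name__ == "__main__":
    # Equal relevance: both bats settle at half the speed of sound.
    print(split_speed_budget(1.0, 1.0))   # (171.5, 171.5)
    # A "bigger" bat weighting itself twice as relevant claims a larger share,
    # and the smaller bat compensates by flying slower.
    print(split_speed_budget(2.0, 1.0))   # (~228.7, ~114.3)
```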

Disappointingly, nature has not yet seen fit to provide us with bats that fly much more than 100 kilometers per hour, which may not be near enough to half the speed of sound (1,236 km/h / 2 = 618 km/h) to see any really pronounced effects. We would have to wait for gene manipulation to reach


greater heights such that it gives us semi-sonic bats before we can conclusively determine if our

predictions around the flight paths of such bats hold true.

We should, however, be able to build flying robots approaching those speeds. Alternatively we could

pluck the eyes from birds that we deem to be fast learners of optimal blind flight by their hearing if 

we find any that are fast and clever enough at the same moment as we temporarily misplace our conscience. We should prefer the robots.

If we confine the robots to navigating by sound only, then we can assert that we should be able to

witness the constraints of strong relativistic effects at the speed of sound in how they constrained our optimal programming of the robots (equating to the information theoretically derived physical constraints on flying robots that must not keep crashing into each other). It would be a nice experiment in that it would be a very concrete demonstration of the information theoretical rationale behind the corresponding constraint on natural forces that we have derived in this paper, as we program our robots to navigate optimally and discover that we would have to follow relativistic and even relevantistic rules scaled down to subsonic speeds. It would also be an interesting confirmation of the unification of game theory and communication theory presented by this paper to let the flying

how this could modify the constraints to their flight paths.

We should also at this point repeat the important observation that the entire model described in this paper is derived in its entirety from information theory. That means we should be able to test

the logical validity of the model and its correspondence to the mathematical formulas describing its

constraints by pure computer simulation.

We could falsify the model as a relevant description of our world if it does not accurately predict (the impossibility of) certain measurements, but we could strictly speaking verify that the model is correct from the information theoretical perspective by creating a computer program, running it, and seeing

that it can under the constraints that information theory alone imposes on it (if we model those

constraints in our program) only behave in a way that is in accordance with the formulas derived in

this paper.

If the model of this paper is thus proven correct according to information theory by a computer, we

would logically have to accept that to the extent that real-world measurements did not correspond

to the predictions of this model, the conclusion would have to become that the premises of 

information theory would not be entirely correct (very unlikely) or that we have inaccurately defined the concepts of space, time, mass and energy in our model from an information theoretical

perspective (more likely). In such a case we should still note that to the extent that these definitions

could be refined the model should gain better predictive power over the world around us and gain

better usefulness as a scientific instrument.
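A minimal skeleton of what such a program could look like is sketched below. Every modelling choice in it (a 2D wrap-around world, blind straight-line flight, a simple collision radius) is an assumption made for illustration; the paper itself prescribes no particular implementation, and a fuller experiment would let the agents steer using only information that travels at a finite speed and compare collision counts for speed limits below and above half of that information speed.

```python
# Minimal skeleton of the kind of simulation described above. All world
# parameters are illustrative assumptions.
import math
import random


class Agent:
    def __init__(self, x, y, speed, heading):
        self.x, self.y = x, y
        self.speed = speed
        self.heading = heading  # radians

    def step(self, dt):
        self.x += self.speed * math.cos(self.heading) * dt
        self.y += self.speed * math.sin(self.heading) * dt


def run(num_agents=20, max_speed=0.5, steps=2000, size=100.0,
        collision_radius=1.0, dt=1.0, seed=0):
    """Count how often blindly flying agents newly come into contact."""
    random.seed(seed)
    agents = [Agent(random.uniform(0, size), random.uniform(0, size),
                    random.uniform(0, max_speed), random.uniform(0, 2 * math.pi))
              for _ in range(num_agents)]
    collisions, touching = 0, set()
    for _ in range(steps):
        for a in agents:
            a.step(dt)
            a.x %= size  # wrap around the toroidal world
            a.y %= size
        for i in range(num_agents):
            for j in range(i + 1, num_agents):
                a, b = agents[i], agents[j]
                close = math.hypot(a.x - b.x, a.y - b.y) < collision_radius
                if close and (i, j) not in touching:
                    collisions += 1  # count each new contact once
                (touching.add if close else touching.discard)((i, j))
    return collisions


if __name__ == "__main__":
    print(run(max_speed=0.4), "contacts at low speed")
    print(run(max_speed=2.0), "contacts at high speed")
```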

As we go on to consider with some special care the possibility of superluminal communication we

could liken the semi-sonic bats to our active, solid particles and the birds to our reactive particles to

get a good mental picture of how, like the bats are constrained to relativistic effects by the limit to

the speed of sound but birds are not, active particles are constrained by the speed of whatever is the

fastest reactive particle to experience and navigate by relativistic effects. Reactive particles are not constrained in such a way; they are only constrained by whatever is the maximum speed of reactive


particles and that we call c such that it follows that two reactive particles can approach each other

with a combined speed of 2 * c.

It is thus a consequence of our definitions that, if we call the fastest particle light and we call the

speed of that particle c, then no solid (non-light) particle can move faster than c. To be able to distinguish clearly between what are just tautological restatements of our definitions and what are interesting conclusions that fall out from them, we will examine this logic in more detail.

If we define a set of symbols “light particle”, “solid particle” and “c” such that:

•  “light particle” is the symbol we use to refer to the fastest particle in existence.

•  “solid particle” is the symbol we use to refer to any particle that is not referred to by the

symbol “light particle”.

•  “c” is the symbol we use to refer to the speed of the particle we refer to by the symbol “light

particle”.

Then it follows from our definitions that:

•  No particle can move faster than c because c is the symbol we use to refer to the speed of 

the fastest particle.

•  No particle can move faster than a light particle because “light particle” is the symbol we use

to refer to the particle that moves the fastest, the one with the speed c.

•  No solid particle can move faster than light, because then it would be the light particle and

the other, slower particle (formerly referred to by the symbol “light particle”) would be

considered the “solid particle”.

We can see from this perhaps painfully careful analysis that there is nothing ultimately mysterious about the claim that no solid particle can move faster than c – we don’t have to suppose any absolute maximum speed of the universe; we just use c to stand for whatever is the fastest particle around.

Nonetheless, in the information theoretical perspective we could go on to see that as more

information moves slower, the max speed of the universe and the effective limit to c could have to

do with any minimum limit to information. This could mean that there is an ultimate information

theoretical constraint to c such that it corresponds to the speed of the minimal amount of information, but all our logic would continue to work in a universe with only active particles and

where faster and faster converter replicator particles competed for the championship of being the

local light particle (being the particle considered locally reactive).

Becoming the local champion could carry with it the opportunity for reduced cost of information

(some of its information is the inversion of the information about its surroundings), making it worth competing for. Thus in practice the best “light particles” or “space” that we ever see could just be

local champions in much the same way as the best local active unidirectional sucker is locally a black

hole (while the mathematical construction of an active unidirectional sucker with a perfect mass

stealing strategy making it impossible to escape would be considered a “true” black hole). In other


words, until a universe has managed to evolve completely reactive particles, we should see the local

speed of light vary over time in our model.

This could be a potential explanation for the recent measurements of neutrinos that seem to go faster

than light (they are the new local champion, all hail the new light particle!) but we will go on to

consider an alternative explanation that does not rely on suboptimal performance by potentiallyboth the old and the new light particles such that it could work even if we happen to be in a place

where space and light particles are purely reactive (or the closest possible thing, such as our space

particle or photon reserving just one position in space).

So to resolve the mystery of why mass couldn’t move faster than light, all we have to do is to conclude

that if it did, it would be the light particle and the former light particle would now be considered

mass.

It is just an effect of our definitions, nothing more, that solid particles cannot travel faster than the

speed of light. We do however see a hard information theoretical limit that constrains solid particlesto navigate at best according to the relativistic and relevantistic formulas and so not only have to be

limited by definition to go slower than c (whatever the local c is at the moment) but limited by

uncertainty of measurement to go in practice no faster than half c.

In other words, to conclude that nothing solid can move faster than light is not really insightful as it

only follows from definitions, but to observe that solid particles can’t reliably over time keep going

faster than half the speed of light is a real insight which follows from information theoretical analysis

of the constraints applying to navigation and interaction in the reality of local observers suffering

from uncertainty in measurements (or inability to interact reliably over time).

As the example with bats or flying robots guided by sound demonstrates, relative effects are

themselves relative to the speed limit of information they derive from. Thus armed with our logically

consistent concept of relativistic bats we are finally ready to tackle superluminal information

exchange or motion. The bats help us picture relativistic effects at normal speeds and they also help

us get past the idea of physical impossibility regarding traditionally defined superluminal motion.

Solid particles going faster than a local “speed of light” are not more physically unthinkable than bats flying faster than the speed of sound; it would only change our definitions so as to change our meaning of solid and light particles (or our definitions of bats and birds). However, active (solid) particles

going faster than purely reactive (light or space) particles is an information theoretical impossibility.

Thus, by setting c to the speed of the minimal amount of minimally reliably detectable information

(and also letting this concept correspond to the Planck length in our model) we consequently get a hard limit on relative movement (how fast two things can approach each other) that follows from our

definitions such that solid things can approach each other by maximally nearly 2 * c where c is the

speed of a real (not just local champion) photon or space particle that reserves exactly one position

of space over time.

It follows that solid particles each going faster than half the speed of light is not more physically

impossible (and does not break any of our definitions such that birds turn into bats) than bats flying

faster than half the speed of sound as this constraint on their movement does not come from any


logical limits to their flight speed (corresponding to our definitions of bats and birds) but from the information theoretical constraints to rational navigation imposed on them by uncertainty.

The question of superluminal motion, as it is normally defined (two objects approaching each other at a total speed exceeding the speed of light), is thus ultimately a matter of overcoming the statistical limitations to rational navigation imposed by uncertainty in measurements by local observers.

In other words, you only have to overcome the constraints on absolute semi-luminal motion imposed

by uncertainty, not the hard constraints on physics from information theory that must follow from

our definitions such as to ultimately prevent any absolute superluminal motion where a solid particle

itself moves through absolute space at a speed exceeding that of light (such that two such solid

particles would approach each other in absolute space with a total speed exceeding 2c) as that would

either only turn that particle into the new light particle (making it locally seen as not solid anymore)

or be logically impossible if the local champion were also an absolute champion by merit of its

minimal information capacity.

The Expendable Neutrino

There are in fact two ways we could beat statistics and achieve somewhat reliable superluminal

motion for solid particles without breaking any of the information theoretical constraints we have set

up, making the concept a theoretical possibility. That does not automatically make it an actual

possibility in our concrete universe, where additional constraints could apply making faster than light

travel impossible for other reasons than purely information theoretical ones.

Furthermore, additional analysis of information theory could unveil logical constraints that rule out

superluminal motion but that we have not discovered in our analysis. Each new constraint we discover just helps refine our picture of the shape of logical possibility, and we may well go on to

refine the shape forever by continuously finding new constraints. What we will discuss in this and the

following section is a set of effects that we see no reason to disallow from the information

theoretical constraints we have discovered so far, but that does not mean that they must be

possible.

To understand the first way superluminal motion could be seen, we invoke the concept of our

relativistic bats. Consider a person that we call Bob who has a bunch of specially gene-modified

bats capable of somewhat faster than semi-sonic flight. As we have seen, nature would obviously not

produce them by herself as supersonic bats would keep flying into the walls of their bat caves andany better than semi-sonic bats would fly into each other. Moreover, in practice, Mother Nature has

so far only given us fractionally-sonic bats at best. Next we imagine that Bob wants to send a

message by semi-sonic bat to his friend Alice some distance away.

Note that we could imagine this experiment with supersonic bats, but in relation to our other

definitions that would equate to solid particles flying faster than light through absolute space

(turning bats into birds or space into matter and vice versa), so we disallow this possibility by

definition and look at how we can break the constraint imposed by information theory on semi-sonic

bats.


The problem, again, with better than semi-sonic bats is that they keep flying into other semi-sonic

bats, as they have no way of detecting each other head on. In other words, as Bob keeps releasing

bats, his hopes of better than semi-sonic bat communication dwindle as all his bats keep crashing

into other semi-sonic bats in the world (obviously his little genetic experiment had some unintended

consequences and a few escapee semi-sonic bats have managed to do very well in the wild despite

their initial navigational imperfections after having been honed by natural selection to fly at

maximally semi-sonic speeds).

Thus an increasingly disappointed Bob keeps sending messages by slightly-faster-than-semi-sonic

bats to Alice, none of which reach their destination - until one day, when suddenly Alice calls Bob on the

phone with the great news that a bat has arrived! Forgetting that having phones would seem to

make the entire endeavor somewhat redundant, Bob enthusiastically decides to increase the

production of bats, and determines to make them yet faster and faster. He realizes that it would be

impossible for him to go so far as to have supersonic bats – after all, he is not some kind of bird  

farmer – but as far as Bob is concerned the speed of sound – not half the speed of sound - is now the

limit for his bat communication ambitions.

The key to better than semi-sonic bat communication, it turns out, is to have enough better-than-

semi-sonic bats. By sending more and more of them at the same time, the reliability of the

communication channel can be improved further and further until with each batch the probability

that at least one lucky bat would get through should be very high, such that a fairly reliable semi-

sonic communication channel could be established with enough bats. With bats flying even

substantially faster than half the speed of sound (anything up to the actual speed of sound would be

allowed with our definitions) you could establish a bat communication channel with speeds

significantly better than half the speed of sound; you would only need proportionally more bats.
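A back-of-the-envelope way to see how redundancy buys reliability (the per-bat survival probability used here is a made-up number, purely for illustration): if each fast bat independently survives the trip with probability p, then the chance that at least one bat out of n arrives is 1 - (1 - p)^n, and the batch size needed for a target reliability follows directly.

```python
import math


def delivery_probability(p_single, n):
    """Probability that at least one of n independently sent bats arrives."""
    return 1.0 - (1.0 - p_single) ** n


def bats_needed(p_single, target):
    """Smallest batch size giving at least the target delivery probability."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))


if __name__ == "__main__":
    p = 0.01  # assumed chance that any single too-fast bat survives the trip
    print(delivery_probability(p, 100))   # ~0.63 with a batch of 100
    print(bats_needed(p, 0.999))          # ~688 bats for 99.9% reliability
```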

One approach to superluminal communication is thus to overcome the statistical unreliability of a communication channel by exploiting redundancy. This is how communication

theory starts to enter the picture as reliability or quality of communication channels is a central

concept from that domain.

Corresponding to the bats flying faster than half the speed of sound, if there were a particle type optimistic enough to attempt superluminal speeds (that is, flying through absolute space at more

than half the speed of light) you could use them to send superluminal messages with better than

random reliability provided you could afford enough of them to send your messages redundantly in

parallel in very many copies. In addition to the superluminal message you could send with such particles, it should be noted that the particle that actually survives the trip will have been a solid

particle approaching another solid particle at a relative speed faster than light (as in having a solid

particle moving faster than half the speed of light through absolute space).

This explanation of superluminal effects does not require an active agent particle implementation.

Dead matter particles that happened to go faster than half light speed through absolute space would

also be able to arrive faster than traditionally accepted constraints on superluminal motion would

have suggested, and dead particles do not even live dangerously as they rush blindly through space

on account of being dead.


Thus we have now also seen a passive, external implementation of faster than light

communication and motion. We will go on to examine another internal, active agent based

implementation of superluminal motion of up to 2 * c but we will also see the possibility for pure

communication at what will seem to external observers that are not privy to the symbols used in the

communication channel as communication at many – in fact potentially millions – of times the speed

of light.

It is important here to note that although photons could of course also theoretically zoom through

absolute space at the full speed of light, they too could only reliably travel at half the speed of light

through absolute space such that the phenomenon of light will only reliably spread at a relative

speed (the combined speed with which two photons would approach each other) of maximally c.

This resolves the paradox where logic on one hand dictates that a light particle should be able to

move at the speed of light through space but all our measurements of photons indicate that light

particles never approach (or escape) each other at a combined speed greater than the speed of light -

indicating that photons move (reliably) only at half the potential speed of light, which this modelstates that they do.

We go on to note that the redundancy-based implementation with expendable particles, where some don’t arrive but those that do travel faster than light, seems to match up nicely with the

recently measured phenomenon of superluminal speeds of the neutrino particle, a particle that

sometimes seems to arrive faster than light but at other times seems to get lost on the way.

In our model such a phenomenon would correspond to a lost neutrino having simply bumped into

another neutrino (or similar particle) going too fast in the opposite direction. One way to model this

scenario could be that the emitted neutrino would just end up somewhere else than at the detectorwaiting for it. The neutrino would only have changed its course through space in a way that would

appear mysteriously random to any local observer until the mechanical explanation (neutrinos going

the other way) could be derived. Assuming such a collision would not destroy the neutrinos, they

would simply continue to zoom through space at superluminal speeds, eternally to keep running into

and bouncing off each other in ways that would seem ultimately random (more essential

uncertainty) for local observers as they would have no way to see the whole process using only light

particles to measure it.

Universal Ping Pong

To understand the following implementation of superluminal communication clearly we will go back to our model of idealized particles as little spaceships with scientists in them.

Consider the case of a lonely scientist in a spaceship. He measures the world around him with his

reactive particle detectors and he uses his motors to avoid comets and search for materials that can

be used to fuel his spaceship and as food for him and his dead cat (the scientist tries to convince

himself that according to quantum superposition the cat could be half alive, which would be better

than no company at all).

One day when the scientist routinely sends away a reactive particle as a “ping” to measure the time until it comes back, thereby providing him with information about the distance to or


solidity of the next solid object in that direction, the signal he gets back is not his expected ping, but a

“pong”.

That is to say, he gets back the mathematical inverse of his expected response signal, which should be enormously unlikely to happen as the result of bouncing off something that happened to have exactly the right shape for that. A much more likely explanation would then be that the signal was inverted by intelligent life, trying to make contact.

Aliens! Company!! Extreme loneliness can inspire desperate acts, and so, wild with hope of late canasta evenings, the scientist sets off towards the direction of the pong as fast as his motors and the constraints of uncertain navigation allow him (forgetting in his excitement to feed his cat, which consequently passes into a determinate state).

As we have seen many times by now, uncertainty constrains the scientist, regardless of his motor capacity, to zoom towards the aliens at no higher speed than half of c, or he would start crashing into unseen space junk (which also mostly zooms around at maximally half of c, as even space junk with less sense than that will tend to be sorted out of existence in fairly short order). The aliens, should they decide to zoom towards the scientist in turn, could also go no faster than half of c, so together they could not approach each other faster than the speed of light. This is the very constraint from active agent theory that allowed us to place a hard constraint on the strength of all natural forces.

But there is a special loophole in the constraint – the formulas we have seen so far strictly speaking only apply to particles trying to navigate towards each other without active communication. When navigation is based only on the measurements you can make of the space around you, uncertainty will not allow you to move faster than half of c, but when you are communicating with another party based in the location you are trying to go to, they could send you information that they have already detected about their local environment, allowing you to gain enough information to enter superluminal speed (should your engines have enough capacity) without crashing into things on the way.

There is a small but ultimately significant problem with this argument. Our scientist was too lonely and secretly too existentially uncomfortable around his ghost cat to make a rational judgment, but if we send out a ping and get back a pong indicating life (as in some kind of agents) in a certain direction, how wise is it really to start rushing towards the sender of that pong? Perhaps, to put not too fine a point on it, the aliens would like to eat us?

Only agents would be able to exploit the idea of using active communication in this way to achieve faster than light travel, but one of the most fundamental aspects of being an agent in a Darwinian competition with other agents for the limited resources of locally available space and time is that often enough those other agents look at you and think “dinner”. Thus there is ultimately a basic trust issue at the heart of this implementation of faster than light travel, which is fully compatible with the reliability aspect we described for a communication channel in the example with the semi-luminal bats.

We see that without being able to improve the reliability of the communication channel, the formula E ≤ m1 * m2 * c² * r² holds fast even for active agents, as statistically they will have no reason to trust each other.


As they zoom towards each other, the scientist and the alien could build up a communication protocol such that they could not only transmit local environment information to each other, but could also put each other through a kind of moral Turing test. Essentially, the scientist and the alien could engage in trust building exercises, asking each other tricky questions of a deeply moral nature to test the waters and get a feeling for whether it would be a good idea to approach each other faster than light speed.

We could of course imagine how a scenario where one party failed such a mutual interrogation might

look (“of course, babies are certainly delicious; our species eats its young all the time! Pass the

mayo!”). There could in fact be any number of responses to different such questions that could

inspire the other party to turn their engines around and go in the opposite direction (“Hey! Where

are you going? But you look so delicious, please come back!”).

However, the constraint in our formula keeps working simply because there could ultimately be no

question so good that the answer would lead to absolute certainty that they were not lying about

being really great and fun aliens who weren’t planning on eating our very tasty scientist at all!

Or could there be a way to build such a trusting relationship between the scientist and the aliens?

We will now go on to inspect closely a final limitation of our constraint. We know that the formula we have derived so far must be too strict, since it still does not allow superluminal motion, something we have seen to be a logical possibility and which should therefore be reflected as such in our mathematical expression.

What the formula is missing is one common aspect – strictly speaking it is yet another dimension – of 

communication theory, namely that of reliability in a communication channel. When we include this

aspect in the formula, we will see not only how superluminal motion should be made to fit into the

picture, but that there is one final possibility to strain the limits of our constraint even a little bit

further, under special conditions.

The Dimension of Trust 

We have seen in our model how the constraints on nature described by Einstein in his General Theory of Relativity became extended under relevance theory into the relevantistic formula E ≤ m1 * m2 * c² * r², which ultimately constrains how particles must relate to each other in any “living” universe (one with active agents in it). We remind ourselves that E ≤ m * c² is the special case of this formula where only one mass seems involved because the other particle is empty space, represented by the minimal rest mass of 1, and r² is 1 under the assumption of an optimally performing natural force. In a way E ≤ m * c² can thus be seen as describing the (optimal) strength of any force of nature acting between matter and empty space (it really describes the maximum conversion rate between matter and empty space).

We have gone on to observe that in our model, the formula seems as if it should really be E ≤ m1 * m2 * c² * 2. We have not worked this into our formula yet, but we know that for it to capture the logic of the model we have described, such a conclusion must be drawn, such that the formula of Einstein would become revised to E ≤ m * c² * 2.

The relevantistic formula E ≤ m1 * m2 * c² * r² – which should consequently be reformulated as E ≤ m1 * m2 * c² * r² * 2 – represents how our model implies constraints on particles' behavior that, to the extent that information theory holds true and our deductions are correct, should apply to any biophysical, living or dead universe (with or without agents in it).

What it will state when we multiply it by two in this way is that active agent particles may move towards each other with a combined speed of less than the speed of light if they don't find each other very interesting, or at almost twice the speed of light if they dare – but we know that statistically they would not make it. And when we inspect the components so far in our formula we see that if E ≤ m1 * m2 * c² * r² and r² can at the most be 1, then this translates to a maximum of E ≤ m1 * m2 * c², and so the constraint from the General Theory of Relativity (E = mc²) seems to hold fast after all. But our model says this should not be the case. Something must still be missing from our formula.

We thus have to examine closely how our formula should be extended in order to capture correctly

the potential for superluminal motion.

We begin by noting that the relevantistic formula is somewhat more generous in what is allowed to happen than the relativistic formula, and this could appear as a paradox. If all matter is constrained by the relativistic formula, how can the relevantistic formula allow biological matter to behave in ways that pure physics in the form of relativity predicts it shouldn't be able to (that is, sub-optimally from the perspective of a natural force, but in the direction of lunch from the perspective of a biological agent)?

The paradox is resolved by noting that, strictly, the relativistic formula only applies to dead universes, allowing it to describe a stricter constraint than the relevantistic formula, which describes a wider set of biophysical universes (living and dead) and thus by logical necessity should have matching or more relaxed constraints. We note that the relativistic formula continues to govern the movement of passive or dead (non-agent) matter in a biophysical universe as well; only living active agent matter is allowed to roam free under the looser constraint of E ≤ m1 * m2 * c² * r².

In other words, Einstein’s formula describes the constraints of a five-dimensional, “dead” universe

with three dimensions of space, one dimension of directional time and one dimension of mass (or

density). The relevantistic formula describes a six-dimensional, “living” or biophysical universe with

the five dimensions of the dead universe plus an additional dimension of the relevance of living

things to each other from the economic perspective of opportunity and threat.

Resolving the paradox between the relativistic and the relevantistic constraints will be necessary to understand why faster than light communication and motion could work without contradicting the constraints we have discovered so far. The reason is that we have an unspoken assumption that, when made explicit, shows how we have pretended to be able to target a wider set of situations than we are strictly able to with our formula so far.

We resolved the paradox between the relativistic and the relevantistic constraints by calling out the

hidden assumption that the relativistic constraint applies to living (agent) matter and dead (non-

agent) matter alike – it doesn’t, as relativity strictly only applies to dead matter. It is now time to call

out an unspoken limitation to the relevantistic constraint.

Whereas the relativistic constraint only applies to dead matter, the relevantistic constraint as described thus far only applies to dead and living matter that cannot communicate with better than random reliability in their communication channels.


Strictly speaking there is a special class of biophysical systems that we call agents, to which the constraints we have derived from game theory alone do not fully apply. When combined with the other information theoretical domain of communication theory, the deductions from game theory must be adjusted to take into account agents that have some kind of minimally reliable communication channel. As we do so we find the answer not only to superluminal motion from a physical perspective, but we also solve the conundrum of cooperation in nature for the biologists.

The reason that cooperation pays off in nature is thus that, with reliable communication between mutually trusting parties – such that the level of mutual trust regarding each other's intentions is included in the general assessment of the reliability of the communication channel – cooperators can be more energy efficient than completely selfish competitors.

We will note that logically we can identify a very special class of active agents (particles or macro particles), namely such active agent pairs that have not only physically met but have both survived the encounter. Such pairs of agents with a minimal level of mutual trust form a special class of agent pairs that we can call cooperators or friends, and for them laxer rules than the strict relevantistic or relativistic sets apply.

To have physically met for two solid particles means that they have had a direct interaction between

their two masses, not just sent information to each other via reactive particles. That is, they have

shaken hands, not just waved at each other.

The fundamental logic behind why such pairs form a special class from an information theoretical perspective can be verified if we go back to the problem of how the scientist could know if the alien with the “pong” is friendly, and note that the uncertainty could be resolved with a dangerous but conclusive experiment. If the scientist actually met the alien and it turned out the alien didn't eat our scientist, then we know the alien is…well, at least a little bit better than to devour friendly scientists at first opportunity.

A minimal level of trust has been built that will continue to increase the longer two agents spend in direct proximity (killing range) without actually killing each other. Trust can further be increased by both parties through yet more altruistic acts. Expressed in a way that should ring true to honest businessmen everywhere – two parties with some minimal reason to trust each other can, by showing each other loyalty, go on to build more trust in their relationship (their communication channel), to the mutual benefit of both parties in the form of increased business opportunity.

The greater the trust, the better reason the scientist and the alien have after their initial meeting to navigate to future meetings at superluminal speeds, trusting that they won't arrive at the nasty surprise of a hungry alien (or scientist, conversely) as they do so. But we notice that from a game theoretical perspective (which must apply to all but the overly naïve or perhaps redundant particle) it is with the initial meeting that the state change in the trust dimension goes from absolute zero to a minimal value, from there on to continue a steady journey towards better mutual trust with each physical encounter that ends happily for both parties.

We know that the full formula for describing dead matter and living matter physics in our model

including living agent pairs that have met and survived the encounter should allow relative motion of 


up to twice the light speed. We also see that it is the aspect of reliability in communication channels 

that fundamentally changes the information theoretical constraint such as to allow this to happen.

All this implies that we should add a dimension of reliability or mutual trust to the game theoretical part of our formula. We will go on to do this in a way that breaks the relevance dimension into two separate dimensions, where one continues to depict energy potential or relevance to biophysical agents while the other depicts reliability in the communication channels between biophysical agents. The values of these dimensions should then be added to each other such that they can reach a value of up to 2 rather than just up to 1, as we have seen that with full reliability communication or motion can become twice as fast – or twice as energy efficient – as compared to motion or communication when there is no element of communication built on mutual trust.

We must be careful at this step of the analysis to show how we try to capture the logical constraints

on our model we have discovered and how we try to express those constraints using mathematical

notation.

The first observation we must make is that, strictly speaking, the conventional game theoretical

formula for assessing the economic value of a situation should generally be seen to hold, such that

the value of a situation should be tempered (decreased) by the uncertainty that the situation would

become realized. That is, we should expect potential * probability to hold.

On the other hand, we have realized that the stake in the tournament is greater than previously expected, in such a way as to state that the full combined potential for two cooperating agents should be seen as 2 rather than 1 (where 1 is the correct value for two competing agents). Consider the formula E ≤ m1 * m2 * c² that we derived to represent a constraint on the energy of any physical force. If we reformulate it to represent the potential economic value of a situation for two competing agents m1 and m2, we can see that the potential value of the situation is just m1 from the perspective of m2 and just m2 from the perspective of m1 in any case where they would compete for total dominance over the energy of the other. Thus m1 and m2 should be multiplied, and will never go beyond 1 in a normalized system where the maximum economic value of the mass and energy of another agent would be 1.

But we can also see that in a system where the agents did not have to compete, but each could represent improved survival potential for the other should they figure out how to cooperate by means of communication, the total value of the situation from the perspective of both agents becomes m1 + m2, with a maximal normalized value of 2. This conclusion corresponds to the information theoretically grounded decision to set the minimal rest mass of the space or photon particle to 1, giving us twice as much energy or mass in the universe as we thought we had to cooperate around.
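As a minimal sketch of this valuation argument (the function names and the normalization of each agent's mass to at most 1 are assumptions made here purely for illustration):

    # Illustrative sketch: normalized economic value of a situation for two agents.
    def value_competition(m1, m2):
        # Each agent can at best win the energy of the other, so the joint value
        # of the situation is the product of the normalized masses.
        return m1 * m2          # never exceeds 1 when m1, m2 <= 1

    def value_cooperation(m1, m2):
        # Each agent can represent survival potential for the other, so the
        # values add instead of multiply.
        return m1 + m2          # can reach 2 when m1, m2 <= 1

    print(value_competition(1.0, 1.0))  # -> 1.0
    print(value_cooperation(1.0, 1.0))  # -> 2.0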

We note that, compatibly with the current models, whenever we fight for dominance over all the energy we will not have twice as much energy available to us anymore, but we also note that game theory no longer constrains us to fight each other for all the energy, as communication theory allows us to find more energy efficient ways to cooperate instead.

Thus we could go on to express our formula in a way that would add the two masses together, but we could normalize our formula further such that it does not rely on the non-equivalence of the two masses to represent the potential we have discovered. By moving the plus into the game theoretical formula, such that it becomes potential + probability, we not only make it


reflect the game theoretical situation where communication has been taken into account such as to

allow cooperation, but it also clearly separates the behavior of dead matter physics from that of 

biophysical matter physics such that the additional possibilities that apply to biophysical systems are

contained in their own parenthesis.

For equivalence with standard game theory, we could see it as if what we are describing is that relevance corresponds to the value of potential * probability, where potential could take a normalized value of up to 2. We normalize this further into (potential * probability) + reliability, where reliability stands for the trust in the communication channel between two parties, such that all these values can take a value between 0 and 1.

We understand that ultimately it is the reliability of the communication channel that allows the full realization of the potential as a value of 2 (which in turn should be 2 because we added m1 and m2 together in a scenario that allows communication), and therefore it is a good normalization to reduce the potential value of the situation by the same measure as you build up a corresponding added reliability value. In this way normalization has allowed us to move the plus from between m1 and m2 to between r and t in our formula below.

Thus we can see how the formula that will finally capture all the information theoretical constraints we have discovered in this paper, and where t is taken to represent a dimension of mutual trust between two parties and r depicts their economic relevance to each other, must be:

E ≤ m1 * m2 * c² * (r² + t²) ≤ m² * c² * 2.

What we have here is a formula describing the ultimate information theoretical constraints on a model of a seven-dimensional universe (assuming three dimensions of space) where the dimensions are:

•  Three dimensions of space

•  One dimension of directional time

•  One dimension of mass

•  One dimension of the game theoretical biophysical relevance

•  One dimension of the game theoretical reason to trust a communication channel
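To make the shape of the constraint above concrete, here is a minimal sketch (the function name, the normalization of m1, m2, r and t to the interval from 0 to 1, and the two boundary cases printed at the end are all assumptions made here for illustration, not part of the derivation):

    # Illustrative sketch of E <= m1 * m2 * c^2 * (r^2 + t^2).
    C = 299_792_458.0  # conventional speed of light, m/s

    def energy_bound(m1, m2, relevance, trust):
        # m1, m2, relevance (r) and trust (t) are all taken as normalized to [0, 1].
        return m1 * m2 * C**2 * (relevance**2 + trust**2)

    # With no trust the bound reduces to the purely relevantistic case;
    # with full relevance and full trust it is twice as large.
    print(energy_bound(1.0, 1.0, relevance=1.0, trust=0.0))
    print(energy_bound(1.0, 1.0, relevance=1.0, trust=1.0))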

As we have seen, the possibility of superluminal motion follows from how trusting agent pairs could maximally approach each other twice as fast as normal agents, with the result that each agent rushes towards the other at nearly light speed in absolute space, which equates to saying that they are approaching each other with a combined speed of twice the light speed in absolute space.

In relation to the passive (neutrino) implementation of superluminal motion, we could still think in

terms of a dimension of trust as our concept of trust ultimately only describes another form of 

uncertainty or unreliability in measurements. As we have seen, an alternative name for the

dimension of trust would be to call it the dimension of reliability .


Thus, as we weave this dimension of reliability into our formula to describe how our traditional definition of superluminal motion should be allowed into the range of the possible, we will see that uncertainty in the form of reliability has now been captured as an explicit dimension in our model (perhaps to be perceived by some observers as the “icky” dimension), such that the value of relevance and the value of reliability should be added together, giving 0 ≤ r² + t² ≤ 2.

We still use the letter t to represent the dimension of reliability under its alias of the dimension of 

trust as the letter r is already taken by the relevance dimension. We observe that it makes sense to

think of the superluminal neutrino as a case of overcoming reliability issues by expendability or

redundancy, something that information theory graciously allows us to do.

With the dimensions of relevance and reliability in place and their relationship to each other and the

other dimensions captured in a formula, we finally arrive at the full General Theory of Relevance and 

Reliability , which is captured in the formula described above.

Spooky Tango at a Distance

The communication theoretical aspect of our information theoretical discussion will also reveal how the effect known as spooky action at a distance can work in our model and how it does not break any information theoretical laws. The phenomenon in question is one where particles that have become “entangled” as the result of direct physical contact will suddenly seem able to send information to each other at millions of times the speed of light.

The better two agents know each other, the more efficiently they can communicate (like an old married couple who can eventually read each other's minds), because their communication can become more compressed, with tokens carrying larger meaning.

For example, if the scientist sends the message “meet me at the usual place” to the alien, they could have just transmitted a very precise location that would otherwise have taken many more raw bits of information (1s and 0s in a computer) to describe, and they could do this thanks to the information theoretical concept of convention. It is a concept from communication theory that is the logical basis for all lossless compression: two parties have agreed to let certain symbols in their communication stand for preconfigured messages taking up more information than the symbol itself does.
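A minimal sketch of the idea of convention (the symbol table and the messages in it are of course invented here purely for illustration):

    # Illustrative sketch: a pre-agreed symbol table lets one short symbol stand
    # for a much larger message, so less actually travels through the channel.
    symbol_table = {
        "A7": "meet me at the usual place",   # long message agreed on in advance
    }

    symbol = "A7"
    unpacked = symbol_table[symbol]

    bits_sent = len(symbol.encode("utf-8")) * 8     # what actually travels (at most at c)
    bits_meant = len(unpacked.encode("utf-8")) * 8  # what the receiver reconstructs

    # To an observer without the table, the channel seems this many times faster.
    print(bits_meant / bits_sent)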

But how can the scientist know that it is her old friend Alien Bob who sent the message to meet up at the usual place, now causing our scientist Alice to rush toward that place blindly at superluminal speed, daring to do so based on her trust in the accuracy of Bob's local environment descriptions and her general trust that Bob won't eat her when she gets there? The problem is that if she can't see anything on the way, she can't know if it is Bob or a hungry imposter before she gets there, and then it would be too late. Again the answer comes from information theory, this time from the concept of encryption.

The key is to use a secret code such that when Bob and Alice physically meet in a secure location with no eavesdroppers, they agree on a secret challenge and response by asking a quantum randomness generator to give them two truly random words. The generator spits out the gibberish letter sequences “Yin” and “Yang”, and Alice and Bob can then use these words such that when Alice sends the secret challenge signal “Yin” (instead of “ping”), Bob would send back the secret response signal “Yang” (instead of “pong”), and Alice can now be relatively safe in her assumption that as long as she and Bob take care to verify each other's identity using their “secret handshake”, they can storm towards their meetings at superluminal speeds.
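A minimal sketch of such a challenge and response exchange (illustrative only; a real scheme would use proper cryptography rather than a two-word table, and the function names here are invented):

    # Illustrative sketch: a pre-shared challenge/response pair agreed on at a
    # physical meeting with no eavesdroppers present.
    shared_secret = {"Yin": "Yang"}

    def respond(challenge):
        # Bob's side: answer only challenges he recognizes.
        return shared_secret.get(challenge)

    def verify(challenge, response):
        # Alice's side: trust the sender only if the response matches the secret.
        return shared_secret.get(challenge) == response

    assert verify("Yin", respond("Yin"))   # the real Bob passes
    assert not verify("Yin", "pong")       # a hungry imposter does not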

Combining the information theoretical concepts of encryption between trusted parties (a concept

that ultimately relies on actual physical contact to work completely) and symbolic communication between parties (a form of data compression) we see the following effect that would not break our

new constraint on twice the light speed but would seem to do so to any external observers.

We can see that there would be nothing to prevent two parties from communicating a very great information load symbolically, such that while all the bits that are actually sent must travel at the speed of the minimal amount of information (light speed), to an external observer it would seem that the two parties communicated faster than light in exact proportion to how much more information was implied by the symbol.

symbolically send information matching hundreds of thousands or millions of photons.

From an information theoretical perspective this virtually equates to saying that you just improved

the bandwidth of the universe without actually increasing the general maximum speed of an

information bit by adding a new dimension for the parallelism. That is to say, no new dimension is

added to the absolute reality in our model, but to an external observer of certain communication

phenomena it would seem as if two agents would have to be communicating via some new

dimension allowing virtually unlimited parallel bandwidth for the communication – that is, until they

are in on the secret of the symbol table used by the communicating agents.

In other words, we could see two parties symbolically communicate enormous amounts of information in such a way that they send a symbolic (packed) message of just one bit, but to someone unaware of the symbolic meaning of this bit it would be as if the two parties had managed to send a much bigger message (the unpacked message) at a speed much faster than should be physically possible by means of sending messages by photon express.

The bigger the information content of the full, unpacked message, the proportionally longer it should take to fully communicate it at the maximum speed of light. The result is that to an external observer it would appear as if the communication went proportionally faster than light speed the more information is contained in the unpacked message.

Communication could look as if it went at many times faster than light speed to non-party observers

without breaking the logical speed limit through absolute space that no bit (or photon) moves faster

than some value c. In other words we have a phenomenon that looks to outside observers as not just

superluminal communication but as communication at potentially millions of times the speed of 

light.

Teleportation

If two trusted parties sent a message between each other to the effect that one said “build a machine of type 42”, and the other knew the design for such a machine and went on to build it as fast as it could, whereas the first party on their side started to dismantle a machine of type 42 as fast as it could, it would look to an outside observer like a kind of hyper-luminal (many times the light speed) teleportation effect.


This is not a description of true motion – hyper-luminal or otherwise – but only of communication effects that look like teleportation to an outside observer. It was not the same machine 42 that moved to a new position – we had two machines, where one was dismantled and the other was built. However, we should note an interesting mathematical curiosity: while we could find a mathematical possibility for actual hyper-luminal teleportation with this “trick”, it may not be logically possible even if mathematically valid.

If we could assume that the trust between two parties were exactly 100% then we could see that if 

the sender of the message could somehow destroy mass and energy from the total equation of the

universe and the other could somehow add mass and energy (not just restructure existing mass and

energy) then if they also synched up they could do the teleportation of machine 42 such that the

sender not only dismantled his machine but removed its mass and energy entirely from the universe

at exactly the same time as the receiver inserted the corresponding amount of mass and energy into

the universe and then rebuilt machine 42 with it.

This would be a mathematical possibility, in that we have not affected the total amount of mass and energy in the universe, but whether it is a logical possibility to have 100% trust is a different matter. One particular aspect of information theory that has been intensively studied by computer scientists is the concept of transactional integrity, and the best understanding of this topic has found no way to ensure 100% reliable communication of an effect: it can never be absolutely certain that all parties who should receive notice about performing the effect will receive their messages.

That is to say that while it could be mathematically possible to have “true” hyper-luminal motion (but is it really the same machine 42 just because we destroyed and created mass rather than just the structure of mass?) at many times the speed of light by this form of teleportation, it may impose requirements that are not logically possible to meet in practice. Conversely, it could be this logical and ultimately information theoretical problem that would ultimately prevent the practical possibility of creating even a temporary imbalance in the total mass and energy equation of the universe.
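A minimal sketch of why certainty stays out of reach over an unreliable channel (this is essentially the classic two generals problem from computer science; the loss probability and number of rounds are invented for illustration):

    # Illustrative sketch: however many acknowledgements are exchanged over a lossy
    # channel, the parties can never be absolutely certain the last one arrived.
    import random

    def message_arrives(p_loss=0.1):
        return random.random() > p_loss

    def all_messages_arrived(rounds, p_loss=0.1):
        # Alternate "do it" / "acknowledged" messages; the attempt fails as soon
        # as any one of them is lost.
        return all(message_arrives(p_loss) for _ in range(rounds))

    # Adding more rounds never pushes the success probability all the way to 1.
    trials = 10_000
    print(sum(all_messages_arrived(20) for _ in range(trials)) / trials)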

De-coherence and Many Worlds

We can see that active agent particle pairs that have met and survived, and therefore have a certain trust built up, do not in fact have to communicate strictly according to the same information theoretical constraints (implying physical laws) as govern the rest of the universe – at least in the experience of an observer external to the communication.

But while we can only see superluminal motion of up to 2 * c, we have seen that what we will define as the entanglement effect in our model (particles that have met without killing each other, building up trust and secure communication channels with encoding and encryption) can result in communication at what looks to outside observers as if it were at many times the speed of light (because they don't know the codes) – as if there were an information spreading effect capable of moving at much more than 2 * c. Information theory holds because this effect cannot take place without the communicating particles having physically met, which does ultimately change some information theoretical and game theoretical constraints for them.

This interestingly matches the measured phenomenon known as spooky action at a distance, which could be seen to match the active agent pair based implementation of hyper-luminal communication. It too is dependent on the so-called “entanglement” of two particles, an effect resulting from their physical interaction at one point in time, after which it seems that suddenly new

physical (or at least information theoretical, subdomain communication theoretical) laws apply to

them.

Entanglement and spooky action at a distance could thus finally be a real measurable way to tell if we have active agent particles in our universe as only such particles could implement any such effect

in our model. It seems that we do measure this type of effect which indicates that perhaps our solid

particles are really little machines below the Planck level with potentially sophisticated internal

design.

However, we should note that there is an alternative, mathematically sound explanation of spooky action at a distance, one that we have already discussed to some extent when discussing the difference between a model with an absolute universe in it and a model with only relative and superposition realities in it.

The currently most widely accepted and mathematically consistent model for describing why spooky action at a distance does not have to break any laws of information theory is the so-called de-coherence or many worlds theory.

Roughly it states that when two things can happen, both will happen such that the (essentially

already non-absolute) universe splits into two. The things both happen, each in its own universe - still

not really absolute, but mathematically speaking together with its inverse universe a little bit closer

to absolute (but without ever being able to reach the point of full absoluteness).

While this works mathematically (where something approaching a value can be substituted by that

value) we can see that in essence we are only trying to compensate for not having any absolute

reality in the model by inventing more and more local realities, something that is mathematically

allowed but possibly not as elegant (information efficient) as the alternative where an absolute

reality is assumed to exist.

Interestingly, we can see that the relativity of Einstein involves the same recursive invention of new virtual realities to compensate for the lack of any one absolute reality, as can be observed in the example with the travelling twins. One twin goes on a spaceship for a few spins around earth and the other waits on the ground. When the spacefaring twin returns, the young earthbound twin sees his suddenly older brother with a long white beard.

But did we not note that the generally accepted model says they would be the same age again when

they met? Yes it does, if they go away from and towards each other again in a linear manner.

The exception comes if one takes a spin around the other, in which case measurable effects will appear such that they seem not to be the same age when they meet. Here is the well-known and mathematically accepted solution to the paradox of what happens in this experiment (which has been tested in practice with clocks):

In the local reality of the earth-bound twin, he sees his brother with a beard, and if he asks his brother to report about his perception he will hear his astronaut brother agree that the turn in space made him the older one, as proved by the beard.


But in the local reality of the spacefaring twin, according to relativity he should be able to decide to consider himself stationary the whole time while earth and his brother suddenly went on an interesting orbital trip around his spaceship. Thus when earth, and his brother with it, returns, the stationary astronaut twin would with that frame of reference have the experience of being beardless but seeing his now much older, earth-bound brother clad in a long white beard as they greet each other, and they will both agree that it was the brother who stayed on earth who is now old and bearded.

In other words, this is just like the many worlds theory, where a cat can be alive in one reality and dead in another, and particles can communicate at hyper-luminal speed (spooky action at a distance) pretty much on account of how, to balance things out, local realities will be created where the communication fails as compensation. Mathematically it works to the extent that, as new local realities are created with every event in the universe, the mathematical description will infinitely approach a fully correct description, and in the world of mathematics something that infinitely approaches a value should be substitutable for that value.

But the mathematics and the correspondence with established measurements also work out in a model that assumes just one absolute reality and derived local realities (one per observer), and they work without the substitution of an approaching value by the value it approaches.

In our model that does include an absolute reality, the two twins would not necessarily be able to

agree who had the beard and in absolute fact they would be just the same age as each other the

whole time. In other words, by sending a clock in a spin around earth as we have done in real

experiments (in our model the clock really does circle earth rather than the other way around

regardless of how it can seem to inside observers) we have managed to add a level of necessary

uncertainty to the measurements of that space clock for the remaining earthly observers, and vice versa.

The clock we actually have in our world and that we have sent on an actual space-tour seems to us to

have slowed down as a result of our inability to measure it any better. In our model it really has not

slowed down from any effects of absolute time moving at different speeds for the space clocks and

the earth clock. The thing we measure, and that looks like time having slowed down for one clock, is additional uncertainty that we, as per usual in our model, will interpret as additional virtual or relative distance, mass and time (so the space clock seems slow to us but an earth clock seems slow to an astronaut).

In many worlds theory, it would be as if each twin brother in the thought experiment heard the other

say or even write down that the other was the bearded one, which seems logically impossible (unless

new worlds are invented) although mathematically feasible. With absolute world theory (the one we

have examined in this paper) it would not turn out the same way exactly. Rather, the twins would

have to agree that there was no way to truly measure who was older or had a beard and that would

be the end of that.

Well, not completely. There could be measures of uncertainty resulting from the space trip that could potentially be reduced over time. For example, they could try to ruffle each other's beards and discover that in fact neither of them was bearded.


In fact, when measuring each other’s clocks they may both feel that the other has the slower clock,

but when it comes to whether someone has a beard, in the model presented in this paper it would

be a completely logical resolution to the paradox to just have the brothers ask each other if the other

has a beard – something which should at the very least be apparent to each brother, namely whether he has a beard of his own. They would not hear each other claim opposite points but would be able to

resolve the beard paradox (“aha, so none of us is bearded then”) without inventing any new

universes to make the math work out.

It may therefore be left to the reader to decide whether the model with an absolute reality and necessary uncertainty in measurements for local observers, or the model with no absolute reality but a recursively and exponentially growing number of local or virtual worlds springing into existence as the result of every event in the universe, is the mental model they are more comfortable with for picturing the world around them.

Likewise one can choose between the many worlds theory or the “robot particle theory” to explain the phenomenon of spooky action at a distance; each is equally mathematically consistent, but it could well be argued that the model with an absolute reality is much lighter from an information theoretical perspective (no infinite and exponentially expanding recursion required to infinitely approach an information theoretically valid description) and that the explanation with an absolute reality may also seem more mentally comprehensible.

Collapsing Dimensions

So we have seen the description of a seven-dimensional universe with three dimensions of space,

one of directional time, one of mass, one of relevance and one of reliability. Should we try to squeeze

in a few more, just for good measure?

Or conversely, could we get rid of a few of those seven dimensions? For one thing, do we really need

all three space dimensions?

It turns out that logically and mathematically speaking we don’t. What we can do (and that will make

both mathematical and logical sense) is to assume that in the absolute universe we have only one

dimension of space, one dimension of time and one dimension of mass (density of space). With only

these three concrete dimensions we would still be able to derive the same two abstract or virtual  

dimensions as before for local observers, those of relevance and reliability.

So if our model bears correspondence to the real world, perhaps there is only one concrete

dimension of space in the universe we live in but we living things inside it, constrained to the

uncertainty of the local observer perspectives, will experience uncertainty or unreliability as “more

space” around a particle. Lower biological potential (greater irrelevance) of a point will make it seem

surrounded by more space still.

The result for a living observer could be one where they perceive relevance and reliability as two

additional dimensions of space, providing us with a reasonable and information theoretically sound

explanation as to why we seem to find ourselves in a universe with exactly three dimensions of 

space.

As far as the analysis of this paper has taken us, we can see that the smallest possible concrete

universe allowed within our model would require three absolute dimensions – one of space, one of 


time, and with the inclusion in the model of reason (causality or non-randomness) we get the logical basis for a dimension of density, consistency, “mass”, call it what you will, but it should be seen as the inversion of the concept of distance in space and time, such that if distance degrades a signal, the inverse implies a place where a signal does not become as degraded (we could talk of this as the potential for differential cost of information).

Logically this would amount to stating that space is a condition with higher randomness that

degrades the signal faster whereas mass is a condition of lower randomness due to some (any) type

of causality such that a signal can degrade slower or sustain better there.

We thus have definitions of the three dimensions of the absolute universe in our model entirely

based on concepts from information theory. Space corresponds to the concept of the minimal

information content such that a signal can exist. The relationship between mass and space is that

mass is a condition of lower randomness than space and time is the thing over which signals can

degrade. A signal in turn is any type of information, making this a completely information theoretical

model of a universe.

This would give rise in our model to a reality with five perceived dimensions for living observers

inside the model such that life (biological or active agent observers) will experience three dimensions

of space, one dimension of time and one dimension of mass but where two of the perceived space

dimensions do really represent biological potential and reliability associated with measurement,

including uncertainty regarding the intentions of one another in a universe that favors cooperators

but still punishes the overly naïve.

What if there were more aspects to a position in space in some universe, such as the one we live in?

What if, for example, a position had a certain rotational spin? Or what if positions could have different pulses or even be more or less attractive (charming) to each other – perhaps in ways that

would influence their behaviors such as to correspond to other physical forces we associate with our

universe (by means of external or internal mechanical implementations)?

If this were the case, it could be modeled mathematically as yet new dimensions to our model. For

every such feature, we would include it as a new dimension to the model. To picture this, the reader

is not asked to imagine hyper-dimensional cubes. Just as we are able to picture how mass must be its

own dimension inside our normal three dimensions of space (or an empty swimming pool and one full of water would be indistinguishable), we could imagine that if our little particles spin and charm

each other we must mathematically model this as additional dimensions. Rather than imagining five-dimensional space the reader is just asked to picture that the little balls in space are now (for

example) spinning and have different looks, remembering that in a mathematical formula this is

considered as new dimensions insofar as the spin or look could be changed without changing the

density or position in space.

As being able to verify the logical (and by extension mathematical) claims by this paper using mental

models remains a central ambition, we take a little extra care to make sure that the reader feels no

discomfort with mentally picturing a multi-dimensional universe in a way that is accurate from a

mathematical perspective.


To imagine a world with three dimensions of space, all the reader has to do is to picture a three-

dimensional cube with balls in it at different positions in the cube. To accurately mentally picture the

mathematical model of a world with three dimensions of space and one dimension of time, all one

has to do is to picture the balls moving around in the cube.

To accurately picture a mathematical model of a world with three dimensions of space, one dimension of time and one dimension of mass or density, all the reader has to do is to imagine that

the balls could be of different density, such that the difference could be told between a swimming

pool full of air and one full of water (consisting of denser balls). To picture a world with just one

actual dimension of space but two dimensions of relevance and reliability, exactly the same model

can be used.

To accurately extend the image of such a five-dimensional mathematical model to a seven-

dimensional universe with spin and charm all one has to do is to picture that the balls could rotate

and perhaps be of different colors, such that black balls generally find white balls more attractive and vice versa (such a preference would in no way have to be absolute, only statistically detectable). At no point would the reader be asked to imagine a four-dimensional cube with four dimensions of

space as this would not only be unnecessary but ultimately an unhelpful mental model for most to

help with understanding what the claims of this paper should be seen to be.
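A minimal sketch of this way of counting dimensions (the attribute names are invented for illustration): each independently variable property of a ball is one more dimension of the mathematical model, and no four-dimensional cube ever needs to be pictured.

    # Illustrative sketch: every independently changeable attribute is its own dimension.
    from dataclasses import dataclass

    @dataclass
    class Ball:
        x: float            # three dimensions of space
        y: float
        z: float
        density: float      # one dimension of mass (an empty and a full pool now differ)
        spin: float         # hypothetical extra feature, e.g. rotation
        color: str          # hypothetical extra feature, e.g. "charm"-like attractiveness

    b = Ball(0.0, 0.0, 0.0, density=1.0, spin=0.5, color="black")
    b.spin = -0.5  # spin changes while position and density stay fixed, which is
                   # exactly what makes it count as a separate dimension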

As our very reliable scientific Standard Model would seem to suggest that we have additional such

qualities as spin and charm to take into account in a more refined model of our universe than will be

examined in this paper, we should expect such models to have many more dimensions than five in

them. This paper will stop its analysis at the five dimensional universe but note that to evaluate the

general arguments presented here it would be enough to start with the simulation of a fairly small

five dimensional universe. Additional dimensions could be added to approach a closer description of our own universe as computer power allowed. It is furthermore possible that such aspects as spin

and charm would even evolve out of the behaviors of a five dimensional universe given enough time,

in much the same way as the aspect of sex (strictly speaking its own dimension in a rich

mathematical model of biology) evolved in our biological world when it was given enough time.

The concept in this model of causal loops that evolve into universes with eventually very many

dimensions could potentially be seen as compatible with string theory and its models of universes

with eleven dimensions or more, but this will not be examined to any greater extent in this paper.

If the reader feels able to imagine that we only have one real dimension of space and that the other two are essentially both effects of general uncertainty and differential relevance, but still feels that this mental model of our universe is “icky”, the resolution becomes to see that quite possibly the universe we are in has three actual dimensions of space.

What started as a universe with one dimension of space could evolve into a universe that should be

interpreted as one with three dimensions of space entirely as a consequence of what

transformational rules of time are used. This would be in just the same way as it could evolve aspects

such as “charm” or “sex” related to how the information in the universe should be interpreted by the

transformational rules. In practice, dimensions would be added to such a universe as information

found new ways to exploit causality to represent itself yet more efficiently.


Thus we could be in a universe with one absolute dimension of space that has “folded” itself into something that, thanks to the behavior of time, must be seen as a universe with three actual dimensions of space. The reliability dimension would then work in addition to three dimensions of space, one of time and one of mass, such that the perception of all three space dimensions as well as the time dimension and the mass dimension would seem affected for a local observer. This

would be in such a way that lower reliability in measurement would make it seem like the space

around the measured thing grew a bit, like its mass became a little less solid and as if time slowed

down a bit in the thing. Improved relevance on the other hand would make the space seem to shrink

a little, mass become a little more solid and time move a little faster.

Dark Energy as Virtual Anti-Gravity

The concept of Dark Matter would in our model correspond to the minimal rest mass of the photon

and the space particle (the informational minimal capacity of one position in space). It could be used

to explain where the energy for phenomena such as the superluminal neutrino could come from, but also help explain other well-known curiosities in our measurements that have indicated more mass than there should be, somehow hiding in empty space – curiosities that have so far in the Standard Model been resolved by including a concept of Dark Matter whose nature we don't know but which must be responsible for a set of gravitational effects that can't be explained any other way. What the

analysis of this paper concludes is that there is good information theoretical reason to believe that

we do indeed have a form of Dark Matter in our universe in that empty space should be seen as

having the minimal rest mass of 1.

Another recently measured phenomenon has resulted in the need for yet another concept in the Standard Model: Dark Energy, which would be responsible for pushing all the other star systems away from us in an accelerating manner. This observation has seemed deeply mystifying. In a nutshell, if they are accelerating away (and already at some fairly significant speed at that), why are they not already long gone? Did they start spurting away from us recently? And why – did we somehow

offend them?

But our model, on closer inspection, would predict just such a phenomenon to occur as yet another effect in the measurements of observers, in a way that would work like the inverse of the Virtual Gravity

we discussed earlier. The effect comes from how space particles (or photons) are able to share

information with each other without destroying any of it (they are reactive perfectly bi-directional 

semi-converters). This assumes a perfect compression algorithm for light particles such that they can

keep sharing more and more information about the solid world with each other indefinitely, but this

is exactly what information theory tells us to expect for them (information lives on forever in

photons) – without the existence of such an algorithm for the photons, abstract information would

have to be destroyed from the model, which would require constant addition of energy to the universe (which is not allowed).

This means that if we look at a distant star that happened to maintain a constant distance to us, the

space between us and that star will keep filling up with more and more information without the

number of positions in space (number of space particles) between us and the star increasing at all.

One way for an observer to interpret this information increase in the empty space would be as if the distance between itself and the object being observed were increasing over time, as a way to interpret

the increased amount of information between the observer and the solid object. The total amount of 


information in the space between two solid objects would of course in accordance with our

definitions increase over time in proportion to the product of the masses of the objects and in

inverse proportion to the square of the distance between them.
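A minimal sketch of this interpretation (the proportionality constant and the linear read-out of extra information as extra apparent distance are assumptions made here for illustration, not something derived in the text):

    # Illustrative sketch: information accumulating in the space between two objects
    # at constant absolute distance, read by an observer as growing apparent distance.
    def accumulated_information(m1, m2, d, steps, k=1.0):
        # Per time step, the space between the objects gains information proportional
        # to the product of the masses and inversely to the square of the distance.
        return k * steps * (m1 * m2) / d**2

    def apparent_distance(d, info, scale=1.0):
        # One way an observer could interpret the extra information: as extra distance.
        return d + scale * info

    for t in (0, 10, 100, 1000):
        print(t, apparent_distance(10.0, accumulated_information(1.0, 1.0, 10.0, t)))
    # The absolute distance never changes; only the interpreted distance keeps growing.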

Thus, if an observer and a solid object were maintaining a constant distance to each other in absolute space, reduction of initial uncertainty would look like the object solidifying (or accelerating towards its correct position in space, or freezing up in time), but after a certain point it would start to look to the observer as if the solid object were accelerating away in line with anti-gravity (or becoming less solid, or slowing down in time). If the observer set off in a spaceship towards the object, they would discover that it would only take as long to get there as it seemed it should have taken before the object seemed to start speeding away. This is logical because the observer will still have to traverse the same number of positions in space to get to the object, and it will not take more energy to traverse space particles that are filled with more and more abstract information in their substructure.

One way to think of this would be to consider the case of looking at some object through a very strong microscope. At first the picture would be blurry and atoms would seem to overlap in their positions, giving the impression that the object solidifies (becomes less blurry) as the sharpness of the picture increases. This corresponds to the effect of Virtual Gravity in the model of this paper. But as the picture became even sharper it would eventually go on to reveal that between the tiny atoms in a solid object there is mostly empty space, and then the object in the picture would seem to become less solid (fade out to black) the sharper the picture became. This corresponds to Virtual Anti-Gravity in our model, or Dark Energy.

Thus we would see an effect corresponding to that of Dark Energy in our model, but it would be an effect in measurement for local observers, not an indication of motion through absolute space of the solid particles in our model.

The reason we currently say that the stars seem to accelerate away from us is the accelerating redshift that has been measured in the light that has traveled from the stars to us. That could either be explained by something like the stars zooming away (or becoming less solid, or slowing down in time), or it could be explained as the photons somehow "warming up" empty space as they travel from those stars to us. The model presented in this paper would explain this effect in terms of space "warming up" as in being filled with more information (a form of increasingly complex vibration in the positions of space, corresponding to the concept of warming up; recall that empty space has a rest mass of 1, such that there is something to warm up), and it predicts that this effect should be measurable at much smaller scales with precise enough equipment.

We should note here that we can potentially see a way to confirm the general validity of the model derived in this paper by predicting the outcome of a not yet performed scientific experiment (predicting already measured neutrinos, spooky action at a distance, cooperation in nature and the stars seeming to accelerate away from us would strictly not count, as this theory was presented after all of these observations). The model of this paper makes the prediction that we could detect an effect that makes it seem (under one interpretation) as if a camera were zooming away from itself in exactly the way that the stars seem to zoom away from us. That is, we should be able to measure the same redshift that we see from the stars under relatively low-energy conditions on earth.


If we send up a satellite with a film camera in such a way that the satellite stays in a stationary position relative to a spot on the moon, and we send a robot to the moon with a mirror, such that the film camera in the satellite can film itself in the mirror on the moon, then the model of this paper predicts that an examination of the film would show the same accelerating redshift effect as we see in the light from the stars. But since we know that the moon and the mirror remain at a constant distance, we would have to conclude that this increased redshift is not the result of the moon accelerating away.

We should note that the mirror and perhaps even the camera could potentially be on earth as well,

and that the effect should be easier to detect the more vacuum we had between the camera and the

mirror. Thus, before we launch satellites to measure the effect predicted by this paper, it would

probably be well worth it to see if it could be detected under more mundane conditions as well. It is

possible that the only equipment required for demonstration of the Dark Energy effect predicted by

this paper could be a mirror, a film camera and a light bulb.

Indeterminism

We will see that, given a universe with relativistic effects and active agent observers in it who are capable of influencing the universe, there is no constraint confining the universe to a purely predetermined future, even if reality is ultimately both real and infinitely precise.

The usual reason to think it reasonable for the course of the universe to be predetermined is that one might think that is all physics would allow. If we picture the universe, as is our wont, as balls bouncing around in space, then it might seem like there is only one way the balls could bounce without breaking any laws of physics.

Consider two universes, alike in every detail, each containing a driver in a car heading towards a cliff. In one universe the driver sees a warning sign and stops in time. In the other universe the driver looks at the sign as well, but because he is confined to some necessary uncertainty in all his measurements (both drivers are), he misses the warning, and the second driver goes over the cliff to plummet helplessly towards his death.

It is important to observe here that even if one could argue that it was predetermined that these things would happen in each universe, one could not argue that the causal chain describing the difference between the two universes did not depend crucially on the perception of the drivers, and therefore ultimately also on a measure of uncertainty.

But perhaps perception could be predetermined too? Could it? We know that local observers are confined to an uncertain view of the world, so all perception is imprecise, and it is ultimately unpredictable which parts of reality will become obscured at a given time. We can thus see that in a universe where local observers have any influence, the future cannot be totally predictable, as chains of causality sometimes rely on unpredictable observations.

The impossibility of predicting which perceptual distortions will occur is a direct result of the definitions in the information theoretical model we have built. It is the higher randomness in space that degrades the signal, so the signal degradations are the result of randomness.


We could not logically consider a universe with predictable signal degradation, as any such predictability could be exploited to improve the reliability of the signal again (consider how a causal loop could exploit such causality to improve its stability). By doing so we would also remove the essential distinction between space and mass (higher density of concrete information equals lower randomness corrupting that information over time). Thus we seem to rely on a measure of true randomness for any logically consistent description of a world with features matching ours (we have space, time and mass), and so we know such a universe must also be ultimately non-deterministic.

It would be the assumption of an absolute chain of causality that implied pre-determinism, but we have seen that in order to have a concept of mass such as we have in our universe, we must be describing a universe with less than absolute causality in it (some degree of randomness). So we know with absolute certainty that we are not in a pre-determined universe, and that we are able to do as we like within the physical constraint that we can only approach each other at maximally twice the speed of light (or half of that if we are not trusting friends; it should be noted that enemy agents can only escape each other at maximally half the speed of light through absolute space, unless they are escaping in the direction of their respective friends and communicating with them).

But wait a minute. We don't really have to assume true randomness to make all our logic work out; all we have to do is assume that even if some absolute causality exists, it is for some reason not exploitable by the signal to prevent degradation.

With this observation, we have really only stated something along the following lines. For a program in a computer, we could make it so that the program itself cannot precisely determine the entire future state of that computer, simply by adding a layer of corruption to some of the computer's readings of its own state. The operator of the computer could look at what the true values were behind the corruption layer and also (if a predictable corruption effect were used) which mistakes the programs trying to predict their futures would make. We are back to considering a pre-determined universe after all.
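A minimal sketch of this thought experiment (hypothetical names, in Python): the program can only read its own state through a corruption layer, so it cannot compute its own future exactly, while an operator who knows the seed behind the corruption could replay everything.

```python
import random

class ToyComputer:
    """A machine whose self-inspection passes through a corruption layer."""

    def __init__(self, state, seed):
        self.state = state
        self._noise = random.Random(seed)  # the seed is hidden from the program

    def self_reading(self):
        # The program inside only ever sees a corrupted reading of its state.
        return self.state + self._noise.gauss(0, 0.5)

    def step(self):
        # The next state depends on the corrupted reading, so the program
        # cannot predict its own future precisely.
        self.state = round(self.self_reading(), 3)

machine = ToyComputer(state=1.0, seed=7)
for _ in range(3):
    machine.step()
print(machine.state)

# An operator outside the machine, who knows both the true state and the seed,
# could replay the noise sequence and foresee every step the program will take.
```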

The operator could be seen as the famous demon of Laplace, which represents the concept of an outside observer that is able to see the entire future of a universe that seems unpredictable to its local observers. But we should also see that the operator could exist in a computer run by yet another operator. Where does this recursion take us?

Well, for the operator to exist, it would have to be (with our definitions) in a place where there was still some randomness for him in his universe. The alternative is that there is no difference between space and mass in his world, such that he leads what we would have to think of as a purely abstract existence.

Thus we could picture a type of existence that could perceive what would happen in a pre-

determined universe as a way of ending the recursion, but such a being could in our model never

affect anything due to the lack of a distinction between randomness and absolute causality in his

world.

In other words, the ultimate demon of Laplace could only live either in a world of absolute chaos or in a world of absolute causality; either way it would be an entirely abstract universe, such that the demon could perceive all of possible reality but could affect none of it.


One might perhaps philosophically ask what the entertaining aspect would be (after the initial interest wears off) in perceiving the future of either a universe without time (nothing ever happens from this perspective if there is absolute causality; for the demon it would be like looking at a box with two dimensions of space and one dimension of time) or a universe with too much time (everything happens totally chaotically, with no reason for any state to follow the previous one, such that it would make roughly as much sense to look at such a universe as it does to stare at the static of a television screen).

If the ultimate demon of Laplace lives in an abstract universe with absolute causality, then it follows that the same goes for all the concrete worlds below, such that they are all really predetermined. There is no way for anyone in such a world to know the future precisely, and the only one who could see the entire future can't do anything about it, or do anything else at all for that matter. It follows that, given enough time, all possible mistakes arising from this inability of all agents to see the future would occur, such that all of physical possibility would be explored. Such a world could therefore be compared to the many worlds hypothesis, in that the best way to picture it is as a world where everything happens but with no real consequence, as, well, everything happens. This matches our logical concern with pre-determinism, that nothing really matters because, well, it was doomed to happen.

If on the other hand the ultimate demon of Laplace lived in a world of absolute chaos, it would follow that everything in the concrete worlds would be ultimately totally chaotic. Anything that seemed to happen with regularity would ultimately do so by chance alone. Things would not be predetermined, but they would not happen for any reason either. Again we seem left with an ultimately meaningless existence.

The resolution to this paradox is to see that by moving all the way to the level of the absolutely chaotic or the absolutely causal, we have gone one step too far in our search for the end condition of the recursion of a computer in a computer in a computer (or a god of a god, etc.). The logic of our model breaks down, and we no longer make any real sense when we try to derive the full mathematics of our universe from not just a timeless universe but (if we continue down that road) eventually one without any space in it either.

We have to start with a universe that has some time, space and randomness for our model to make sense, and the ultimate resolution to our paradox is not only to conclude that it is enough for these three elements to happen to spring into existence for the logic to make sense from that point on. We also conclude that:

1) For this to happen assumes only one most basic requirement, that of some minimal existence of some minimal randomness, allowing the three dimensions of our universe to "happen" to spring into existence.

2) If we exist (and many measurements point in favor of the popular theory that we do), then we must have somehow been able to happen to spring into existence, so a minimal level of randomness must be a basic constraint on the existence of anything. The alternative is that we (the multiverse) have always existed (such that the multiverse never had to "happen" to spring into existence), but then we are back to concluding that, as the multiverse must contain some minimal randomness to belong to the set of models where we could exist, it would also have


had to always have contained some minimal amount of true randomness, or at least it must now. From a point of view that probably refuses logical meditation, one might imagine some special point in time distinguishing before and after the invention of the necessary randomness for the universe to turn into a real place where things concretely happen. However, this requires the hopelessly illogical event that randomness could be invented randomly. In any case, randomness wins.

Checkmate, pre-determinism?

It is a logical conclusion from the premise that we are here that randomness is a core feature of our root universe. Such a universe, containing some randomness or uncertainty from the start, would then be able to randomly invent both space and time (or one might even say that space and time are aspects of the distinction between randomness and causality), and from that point on the whole game of existence could start. And it also follows, from the proof that some initial randomness is required of any world in which we could exist, that any such world could not be totally predetermined either, as it contains some level of randomness at its very core.

We can never ultimately prove a theory of origin - we can only exclude logical impossibilities, which is

what this paper has been attempting to do by using premises from information theory and making

deductions from them. In the case of our final philosophical inspection on pre-determinism, we have

only proved non-determinism by inversion.

But we have proven that full determinism is impossible insofar as any observer can exist to make observations, so what remains must be a proof that we are in a non-deterministic universe.

However, there is a premise to every deduction. The alternative, which one must perhaps leave open as some kind of possibility, is that we don't exist. This would essentially amount to us only imagining that we are imagining that we are imagining ourselves, with no end in sight to the recursion. This is a truly endless recursion and thus not a logically consistent model, in the same way that a circular argument is not (an endless recursion is as bad as an infinite loop, plus it consumes all the computer's memory). Still, although strictly not a logical possibility, it might perhaps be seen as a mathematical possibility, much like "negative distance", which has no correspondence in a world of pure logic.

A universe with some elements of both causality and randomness is not the same type of thing as a universe with only one of them. To be able to be the way it is, our universe cannot just be mathematically derived, in a transformation that adds no information, from such a purely chaotic or causal thing; it must be derived from something that in turn has both some level of causality and some level of randomness for the logic to check out. We can thus know that either we exist in a non-deterministic universe or we don't exist at all (in which case we may do well to continue to imagine that we imagine that we imagine ourselves).

To verify one final time the logic of this (pre-determinism is a sore loser and has demanded a recount), we can say that by adding time, such that we go from a fully static to a semi-static or dynamic universe, we are not just interpreting the static universe in a new way; we are adding a brand new quality to it, such that the dynamic universe must be a different type of place than the static universe. Even if our dynamic universe somehow evolved from a purely chaotic one (or devolved from a strictly causal one), then that new quality of randomness or causality had to be


added (it could not just be a question of a mathematical reformulation of the same information as already existed in that universe) to make our universe what it is, and by doing so it became a different, less meaningless place that would by logical necessity be ultimately non-deterministic, even for the demon of Laplace, from that point onwards in time.

Mathematically, the set of dynamic universes with time, space and mass is not a subset of the set of universes with only one or two (or none) of those dimensions, and thus our universe is not a subset of any purely predetermined or purely chaotic universe. In other words, we live in a meaningful universe.

So we can see that we are not in a predetermined universe. But did we have any very good reason to believe we should be in one anyway? The infinitely precise motion of balls in space seems like a convincing argument for a predetermined universe until we involve uncertainty in the perception of active agents, but is it a good default view of the universe?

Would balls moving perfectly really be the easiest thing for a universe to do, such that we should expect that feature of it, or would it be an easier job for a universe that is allowed a little mistake here and there, now and then?

Perhaps the more reasonable default assumption is that the universe does not start out with balls moving perfectly in space, but that such stable behavior is something it would more likely evolve towards with the help of Darwinian evolution by natural selection of replicating agents, which we have now seen to ultimately favor the communicating, friendly but also realistic or scientifically minded, trusting cooperator.

Thus we see that the universe is not only fundamentally unpredictable as soon as it contains relativistic observers and a Planck length; the universe was probably a deeply unpredictable place from the start, and only once particles had evolved that were stable enough to sustain us could macro systems such as us show up on the scene and assume that the little balls making up our world move perfectly.

The Necessity of Compromise

The reason that information is allowed by information theory to live on forever in photons is that we

assume that they are able to compress this information using a fully lossless compression. Given

infinite precision of positions, which would be mathematically allowed, this equates from an

information theoretical perspective to saying that a position in space can be split into infinitely fine

parts, allowing for ultimately infinite amounts of information to be stored this way (as in the entire

history of the universe).

Different rules apply for concrete representations of information that will be doomed to wither over

time in accordance with the Second Law of Thermodynamics. But we have seen that the speed with

which such information must wither can be reduced to a minimum by cooperation between

communicating agents.

How far could the withering of concrete information be prevented? If a perfect, lossless information

compression algorithm could be devised, could concrete representations of information become

immortal, too – that is, could we get a model where even the active particles managed to become

reactive so that the model only includes reactive particles?


Potentially so in mathematical theory, but it could still be the case that this is never a practical possibility. In reality, all compression algorithms available for concrete signal representations could be associated with inevitable loss (in fact this would seem to be a logical restatement of our definitions, but it would only apply to this model and would not exclude the potential for a mathematically valid model where loss is not inevitable).

Nonetheless, if information can be prioritized, then some losses could perhaps be deemed more acceptable than others, such that it would make sense to apply some compression algorithms even if they lose some of the least relevant information.

This is the essence of a compromise and of the information theoretical activity of abstraction (leaving us finally with an idea of how information theory distinguishes between the concepts of abstract and concrete), where some details are lost but the more essential patterns in a signal are kept around. In order to cooperate optimally, communicating biophysical agents will eventually find it in their mutual best interest to start compromising, such that they agree on what information should remain under their protection at the expense of allowing less vital information to be lost to the tyrannical forces of randomness as expressed by the Second Law of Thermodynamics.
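A toy sketch of such a prioritizing, lossy "abstraction" (hypothetical function and data, in Python): within a fixed budget, keep the parts of a signal judged most relevant per unit of size and let the rest be lost.

```python
def compress_by_relevance(items, budget):
    """Keep the most relevant parts of a signal that fit within the budget.

    items  -- list of (name, size, relevance) tuples
    budget -- total size the compressed representation may occupy
    """
    kept, used = [], 0
    # Greedily keep items in order of relevance per unit of size.
    for name, size, relevance in sorted(items, key=lambda it: it[2] / it[1], reverse=True):
        if used + size <= budget:
            kept.append(name)
            used += size
    return kept

signal = [("essential pattern", 4, 10.0), ("detail A", 3, 1.0), ("detail B", 2, 0.5)]
print(compress_by_relevance(signal, budget=5))  # -> ['essential pattern']
```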

Logic and Mathematics

If you have a one meter distance and you subtract two meters from it, what is the resulting distance?

The mathematical answer is “negative one meter”. The logical answer is that the problem statement

is logically inconsistent.

Negative numbers have no correspondence as a concept in physical reality and are a purely mathematical invention. Placing the zero at our Planck length, to give us negative space, masses and time, obscures the fact that the logical function of the symbol 0 is to represent the knowledge that a certain thing does not exist. Things below the Planck level do exist, and so we see that the most logical use of math to describe our universe is to represent the Planck length not with the symbol 0 but with the symbol 1, such that the rest mass of the photon is not 0 but 1, and the rest mass of a certain sub-Planck substructure can approach 0 the smaller it is (or approach 1 the bigger it is) but can never become 0 without going out of existence.

Giving the photon a rest mass of one makes great sense with the rest of the math and the logic presented in this paper, where we derived E ≤ m * c² from E ≤ m1 * m2 * c² by equating the rest mass of m2, as a space particle, with the minimal value 1. It is also logically compatible with a model including an absolute universe where the concept of negative distances would be logically inconsistent and so should not be a feature of the model.
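As a worked check of the reduction just described, in the paper's own symbols (with m1 playing the role of the solid particle's mass m):

```latex
E \le m_1 \, m_2 \, c^2
\quad\Rightarrow\quad
E \le m_1 \cdot 1 \cdot c^2 = m c^2
\qquad \text{(setting the space-particle rest mass } m_2 = 1\text{)}
```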

Summary

This paper has proposed a scientifically testable model of our world, suggesting that we see non-biological solid particles attracted to each other at best in accordance with the Newtonian formula F = m1 * m2 / d² as a direct consequence of information theory placing a constraint on the strength of any force between particles in a universe where time passes (information spread is not instantaneous).
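A small sketch of the two roles the Summary assigns to these formulas (in Python; the constants of proportionality are omitted exactly as in the paper's wording, so the numbers are illustrative rather than physical predictions):

```python
C = 299_792_458.0  # speed of light in m/s

def newtonian_attraction(m1, m2, d):
    """The attraction form the paper cites: F = m1 * m2 / d^2."""
    return m1 * m2 / d**2

def energy_ceiling(m1, m2):
    """The paper's information theoretical ceiling on any such force: E <= m1 * m2 * c^2."""
    return m1 * m2 * C**2

m1, m2, d = 2.0, 3.0, 10.0
print(newtonian_attraction(m1, m2, d))  # 0.06
print(energy_ceiling(m1, m2))           # about 5.39e17
```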

The discussion has included an examination of the mathematical construction of active agent particles as an implementation mechanism for the force of gravity, under the idealization that solid


particles could be little machines with a substructure implementing sensors, motors and controllers. When solid particles for some reason represent opportunity (or threat) to each other, in the economical perspective of energy preservation, they do their best to navigate towards each other (or, conversely, away from each other) as fast as the available information in their sensors, game theory and the capacities of their motors allow them. Unless communication theory is invoked, the available information improves over a given time according to the formula E ≤ m1 * m2 * c².

The alternative mechanical explanation for gravity discussed in this paper is the already well-known idea that gravitons push solid objects together, in which case the effect not only happens to match F = m1 * m2 / d² but, more importantly, must still always be constrained by E ≤ m1 * m2 * c² as the most optimal form it is possible for it to take (though a "bad" force of nature could work according to a suboptimal formula), as we have seen this formula to be an ultimate constraint on the strength of any physical force with a passive (externally mechanical) implementation.

The idea of solid particles as little robots chasing after or trying to escape each other, as an explanation for the implementation of natural forces, may, while theoretically possible as we have seen, feel like an unlikely candidate for explaining the actual natural forces we see in the universe around us. However, the main role of introducing the mathematical concept of active agent implementation of natural forces in this paper is that it helps us discover a hard limit on any physical force.

We have also seen indications, in the phenomenon of entanglement and spooky action at a distance, that we may in fact have solid particles with internal substructures in the form of little machines, unless of course there is no absolute reality and everything "semi-happens", as in the many-worlds theory (de-coherence). We have seen in the final papers that such a many-worlds model may be a mathematical possibility but perhaps still not a logical possibility, as it seems to suppose that information is essentially totally without cost (unless no concrete information exists and we are all just reflections in light, which is known as the hologram universe interpretation of the many worlds theory), while our very existence seems to imply that we live in a universe more meaningful than that. It is mathematically possible that we don't concretely exist, but it requires an endless recursion of the type "we imagine that we imagine that we imagine…", which could perhaps be permitted by mathematics, where a value that is approached can be used to substitute for the approaching value, but which is from a logical perspective as bad as a circular argument.

Discussing the mathematical constraints on the hypothetical active agent particles, we noted that optimally navigating agent particles could form by chance alone given enough time, but that an alternative requiring less time would be if solid agent particles were replicators with the ability to evolve rapidly, by natural selection, into a state where they approximate optimal navigation strategies to a very high degree.

The reason we see solid particles following the laws of gravity could be that they are in fact optimally (or reasonably optimally) navigating active agents, and it is also possible that they are replicators, which would reduce the time required for the universe to find increasingly well-navigating active agents, but this paper will ultimately not claim that it must be so. Our concern in this paper is with ruling out the impossible, not with narrowing the range of possibility down to only one single future of what must come, that is, a totally pre-determined future of absolute certainty about


everything, which has been shown in this paper, by logical deduction, not to be logically consistent in any model of the universe based in information theory.

This paper also suggests that just as all solid particles experience relativistic distortions in the form of relative distance or mass in accordance with Einstein's General Theory of Relativity, agent particles or macro structures that represent different opportunities or threats to each other will experience further distortions to their perceptions of distance or mass, such that neutral particles seem further away (or less solid) while particles representing either opportunity or threat will be perceived as relatively closer (or more solid). We began by capturing this phenomenon in the formula E ≤ m1 * m2 * c² * r², which, before the discovery of the possibility of relative superluminal motion at 2 * c in our model, was also seen as an ultimate constraint on the movements of any matter, living (biological agents or biophysical active agent particles) and dead alike.

We finally extended this formula with an information theoretically based dimension of trustworthiness, or reliability, of communication channels between mutually cooperating (or trusting) agents. By examining the special communication theoretical conditions concerning mutually trusting agent pairs (so-called cooperators or friends) in communicative contact, we finally arrived at a formula describing a model of a seven-dimensional universe that we ultimately compressed into three absolute dimensions of time, space and mass, and two virtual or local dimensions of relevance and reliability (or trust) ultimately constraining the physical possibilities of all biophysical systems in the model.

The two virtual dimensions may be perceived by living observers as two extra dimensions of space, where we also included the logical constraint, from our definitions, that no solid particle can move faster than light through absolute space. The formula ultimately describes the theoretical outlines for the possible shapes of a biophysical (or living) universe where agents with intentions of survival can build mutual trust to increase the reliability of their communication channels. The rationale for building this trust comes from their common enemy in the form of the Second Law of Thermodynamics (the impossibility of compressing concrete information infinitely without loss).

With r depicting relevance or potential, and t depicting trust or the estimated probability of realization of the potential (where the estimate includes a level of uncertainty regarding the ultimate intentions of the communicating partner), we see that we have arrived at the final version of the formula that describes the constraints of our model of the universe. The formula captures the purely physical as well as the biological realms under one formula by combining two domains of information theory, game theory and communication theory, into one integrated framework that states:

E ≤ m1 * m2 * c² * (r² + t²) ≤ m1 * m2 * c² * 2
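A minimal sketch of evaluating this bound (in Python, under the assumption, consistent with the factor-of-two ceiling, that relevance r and trust t each range over [0, 1]):

```python
C = 299_792_458.0  # speed of light in m/s

def energy_bound(m1, m2, r, t):
    """E <= m1 * m2 * c^2 * (r^2 + t^2), capped at m1 * m2 * c^2 * 2."""
    assert 0.0 <= r <= 1.0 and 0.0 <= t <= 1.0, "r and t are taken to lie in [0, 1]"
    return m1 * m2 * C**2 * (r**2 + t**2)

# A fully relevant, fully trusted channel between two unit masses reaches the
# factor-of-two ceiling relative to plain E <= m * c^2.
print(energy_bound(1.0, 1.0, r=1.0, t=1.0) / C**2)  # -> 2.0
```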

The General Theory of Relevance and Reliability is thus a purely information theoretically based analysis unifying game theory and communication theory into one coherent model. It is also an information theoretically based integration of relativity, quantum mechanics and Darwinian evolution by natural selection into one theoretical framework that can answer the question of why we see such prominence of cooperating agents in nature.


Where previous economic analysis based on game theory would have suggested the success of the

selfish replicator, we can see that the formula for nature reflected by our model predicts the ultimate

success of the cautious cooperator.

The optimal economic strategy for rational agents becomes to behave much in the manner of a mild-mannered scientist: measure first, because selfish replicators do exist out there, but if you find a friend you can trust, communicate and cooperate with, you will both become twice as powerful together, in a recursion that never really ends, as more partners who have learned to behave in trustworthy ways can always be brought into the mix. The only one who can't be invited to the party is Death Himself, as represented by the Second Law of Thermodynamics.
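One possible reading of this strategy as code (a hypothetical sketch in Python for an iterated game, where "measuring first" is modeled, purely for illustration, as withholding cooperation on the very first move):

```python
def cautious_cooperator(their_history):
    """Decide this round's move from the partner's past moves.

    Withhold cooperation on the first move ("measure first"), then keep
    cooperating with any partner who has never defected after their own
    cautious opening move.
    """
    if not their_history:
        return "defect"                 # observe before trusting
    if "defect" in their_history[1:]:
        return "defect"                 # a proven selfish replicator
    return "cooperate"                  # a friend worth cooperating with

# Two cautious cooperators lock into mutual cooperation from round two onwards.
a_history, b_history = [], []
for _ in range(4):
    a_move = cautious_cooperator(b_history)
    b_move = cautious_cooperator(a_history)
    a_history.append(a_move)
    b_history.append(b_move)
print(a_history)  # ['defect', 'cooperate', 'cooperate', 'cooperate']
```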

The causal loop explanation of the nature of particles suggests that all particles could indeed be replicators, and we could well imagine replication mistakes over time, as well as different particles representing opportunities or threats to each other in different ways. So the suggestion may ultimately not be so far-fetched that some or even all of what we perceive as solid particles are in fact little agent systems with sensors, motors and controllers, trying to survive in a hostile universe but steadily having their behaviors honed towards local and global optima in a world where cooperators ultimately fare better.

As they do so, at least in the information theoretical model we have derived in this paper, we note that whereas we have for more than a hundred years assumed that Darwinism and game theory constrained all living things to a lonely, ultimately selfish existence, we can now clearly see the information theoretical, and thus mathematical, rationale for why friends are more efficient than enemies.

It thus becomes the final conclusion of the analysis presented in this paper that the universe would seem, on information theoretical grounds, to be a place that trends towards a happy ending: one where we should eventually all be able to get along, as those who don't will ultimately be doomed to lose their influence over the available mass and energy (regardless of any seemingly beneficial temporary imbalance) until they too learn how to cooperate.

Mats Helander, 2011-11-10