What is information? How does integrated information theory deal with information?
The word “information” has many uses, some of which differ sharply from the everyday one. Indeed, we can use the words (see here) of the original information theorist, Claude E. Shannon, to back this up:
“It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field.”
The most important point to realise is that minds (or observers) are usually thought to be required to make information… information. However, information is also said to exist without minds (or observers). Some claim that information existed before minds and it will exist after minds. This, of course, raises lots of philosophical and semantic questions.
It may help to compare information with knowledge. The latter requires a person, mind or observer. The former (as just stated), may not.
Integrated information theory’s use of the word “information” receives much support in contemporary physics, in which such things as particles and fields are often described in informational terms. As for dynamics: if an event affects a dynamic system, then that event can be read as information.
Consciousness as Integrated Information
It’s undoubtedly the case that Giulio Tononi believes that consciousness simply is information. Thus, if that’s an identity statement, then we can invert it and say that information is consciousness. In other words, if

consciousness (or experience) = information

then

information = consciousness (or experience)
Consciousness doesn’t equal just any kind of information; though any kind of information (if embodied in a system) may be conscious (at least to some extent).
Tononi believes that an informational system can be divided into its parts. Its parts — taken individually — contain information. The whole of the system also has information. The information of the whole system is over and above the combined information of its parts. That means that such extra information (of that informational system) must emerge from the information contained in its parts. This, then, seems to be a commitment to some kind of emergentism.
The mathematical measure of that information (in an informational system) is φ (phi). Not only is the system more than its parts: that system also has degrees of informational integration. The higher the informational integration, the more likely that informational system will be conscious. Or, alternatively, the higher the degree of integration, the higher the degree of consciousness.
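Tononi’s actual φ is defined over a system’s cause-effect structure and is notoriously hard to compute. But the general idea that a whole can carry statistical structure over and above its parts can be illustrated with a much cruder, purely illustrative measure: total correlation (the sum of the parts’ entropies minus the whole’s entropy). To be clear, the sketch below is not IIT’s φ; it merely shows how a whole can “contain more” than its parts taken separately:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(states):
    """Sum of the parts' entropies minus the whole's entropy.

    Positive when the joint system carries statistical structure that
    no single part carries on its own. NOT Tononi's phi -- just a crude
    illustration of 'integration'.
    """
    parts = list(zip(*states))  # one stream of samples per component
    return sum(entropy(p) for p in parts) - entropy(states)

# Two perfectly coupled binary units: each part alone looks random
# (1 bit each), but the whole has only 1 bit of entropy.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent units: the whole is just the sum of the parts.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(coupled))      # 1.0 bit of "integration"
print(total_correlation(independent))  # 0.0
```

The coupled pair scores positively because describing the parts separately loses the correlation between them; the independent pair scores zero because nothing is lost.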
Emergence from Brain Parts?
Again, we can argue that the IIT position on what it calls “phi” is a commitment to some form of emergence, in that an informational system is, according to Christof Koch, “more than the sum of its parts”. This is what he calls “synergy”. Nonetheless, a system can be more than the sum of its parts without any commitment to strong emergence. After all, if four matches are shaped into a square, then that’s more than an arbitrary collection of matches; though it’s not more than the sum of its parts. (Four matches scattered on the floor wouldn’t constitute a square.) However, emergentists have traditionally believed that consciousness is more than the sum of the brain’s parts. Indeed, in a strong sense, it can even be said that consciousness itself has no parts. Unlike water and its parts (individual H₂O molecules), consciousness is over and above what gives rise to it (whatever that is). It’s been seen as a truly emergent phenomenon. Water isn’t, strictly speaking, strongly emergent from H₂O molecules: it’s a large collection of H₂O molecules. (Water = H₂O molecules.) Having said that, in a weaker sense, it can be said that water does weakly emerge from a large collection of H₂O molecules.
The idea of the whole being more than the sum of its parts has been given concrete form in the example of the brain and its parts. IIT tells us that the individual neurons, ganglia, amygdala, visual cortex, etc. each have “non-zero phi”. This means that, taken individually, they’re all (tiny) spaces of consciousness unto themselves. However, if you lump all these parts together (which is obviously the case with the human brain), then the entire brain has more phi than any of its parts taken individually, and more phi than all of its parts added together. Moreover, the brain as a whole takes over (or “excludes”) the phi of the parts. Thus the brain, as we know, works as a unit, even if there are parts with their own specific roles (not to mention the philosopher’s “modules”).
Causation and Information
Information is both causal and structural.
Say that we’ve a given structure (or pattern) x. That x has a causal effect on structure (or pattern) y. Clearly x’s effect on y can occur without minds. (At least if you’re not an idealist.)
Instead of talking about x and y, let’s give a concrete example instead.
Take the pattern (or structure) of a sample of DNA. That DNA sample causally brings about the development (in particular ways) of the physical nature of a particular organism (in conjunction with the environment, etc.). This would occur regardless of observers. That sample of DNA contains (or is!) information. The DNA’s information causally brings about physical changes, which, in some cases, can themselves be seen as information.
Some commentators also use the word “representation” within this context. Here information is deemed to be “potential representation”. Clearly, then, representations are representations to minds or observers; even if the information which will become a representation isn’t so. Such examples of information aren’t designed at all (except, as it were, by nature). In addition, just as information can become a representation, so it can also become knowledge. It can be said that although a representation of information may be enriched with concepts and cognitive activity, this is much more the case with information in the guise of knowledge.
The problem with arguing that consciousness is information is that information is everywhere: even basic objects (or systems) have a degree of information. Therefore such basic things (or systems) must also have a degree of consciousness. Or, in IIT speak, all such things (systems) have a “φ value”, which is the measure of the degree of information (therefore consciousness) in the system. Thus the philosopher David Chalmers claims that his thermostat (see here) has a degree of consciousness (or, for Chalmers, “proto-experience”).
It’s here that we enter the territory of panpsychism. Not surprisingly, Tononi is (fairly) happy with panpsychism; even if his position isn’t identical to Chalmers’ panprotopsychism.
The computer scientist Scott Aaronson has pressed this very point:

“[IIT] unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are ‘slightly’ conscious (which would be fine), but that they can be unboundedly more conscious than humans are.”
Here again it probably needs to be stated that if consciousness = information (or that information — sometimes? — equals consciousness), then consciousness will indeed be everywhere.
Add-on: John Searle on Information
How can information be information without minds or observers?
The American philosopher John Searle denies that there can be information without minds or observers. Perhaps this is simply a semantic dispute. After all, the things which pass for information certainly exist, and they’ve been studied (in great detail) from an informational point of view. However, they don’t pass the tests Searle sets out below; though that may not matter very much.
Take, specifically, Searle’s position as it was expressed in a 2013 review (in The New York Review of Books) of Christof Koch’s book Consciousness. In that piece, Searle complained that IIT depends on a misappropriation of the concept [information]. Thus:
“[Koch] is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere. I think that if you analyze this carefully, you will see that the view is incoherent. Consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers…
“…These sentences, for example, make sense only relative to our capacity to interpret them. So you can’t explain consciousness by saying it consists of information, because information exists only relative to consciousness.”
If information is the propagation of cause and effect within a given system, then John Searle’s position must be wrong. Searle may say, then, that such a thing isn’t information until it becomes information in a mind or according to observers. (Incidentally, there may be anti-realist problems with positing systems which are completely free of minds.)
Searle argues that causes and effects, as well as the systems to which they belong, don’t have information independently of minds. However, that doesn’t stop it being the case that such a system can become information once it is directly observed.
Anthropomorphically, the system communicates to minds. Or minds read the system’s messages.
Searle’s position on information can actually be said to be a position on what’s called Shannon information. This kind of information is “observer-relative information”. In other words, it doesn’t exist as information until an observer takes it as information. Thus when a digital camera takes a picture of a cat, each photodiode works in causal isolation from the other photodiodes. In other words, unlike the bits of consciousness, the bits of a photograph (before it’s viewed) aren’t integrated. Only when a mind perceives that photo are the bits integrated.
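For what it’s worth, Shannon’s measure itself is just a statistic over a stream of symbols: it assigns a number without any reference to what (if anything) the symbols mean. A minimal sketch (the example messages are purely illustrative):

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits: H = -sum p(x) * log2 p(x)."""
    n = len(message)
    probs = [count / n for count in Counter(message).values()]
    return -sum(p * log2(p) for p in probs)

print(shannon_entropy("abab"))      # 1.0 -- two equally likely symbols
print(shannon_entropy("abcdabcd"))  # 2.0 -- four equally likely symbols
```

On Searle’s reading, that number only becomes information about anything when an observer interprets the symbols; the formula itself is indifferent to meaning.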
IIT, by contrast, has a notion of intrinsic information.
Take the brain’s neurons. Such things do communicate with each other in terms of causes and effects. It’s said that the brain’s information isn’t observer-relative. Does this contradict Searle’s position? IIT is talking about consciousness as information not being relative to external observers; though is it relative to the brain and consciousness itself?
There’s an interesting analogy here which was also cited by Searle. In his arguments against strong artificial intelligence (strong AI) and the mind-as-computer idea, he basically states that computers — like information — are everywhere. He writes:
“[T]he window in front of me is a very simple computer. Window open = 1, window closed = 0. That is, if we accept Turing’s definition according to which anything to which you can assign a 0 and a 1 is a computer, then the window is a simple and trivial computer.”
Clearly, in these senses, an open or shut window also contains information. Perhaps it couldn’t be deemed a computer if the window’s two positions didn’t also contain information. Thus, just as the window is only a computer to minds/observers, so too is that window’s information only information to minds/observers. The window, in Searle speak, is an as-if computer which contains as-if information. And so too are Chalmers’ thermostat and Koch’s photodiode.
Here’s Searle again:
“I say about my thermostat that it perceives changes in the temperature; I say of my carburettor that it knows when to enrich the mixture; and I say of my computer that its memory is bigger than the memory of the computer I had last year.”
Another Searlian (as well as Dennettian) way of looking at thermostats and computers is that we can take an intentional stance towards them. We can treat them — or take them — as intentional (though inanimate) objects. Or we can take them as as-if intentional objects.
The as-if-ness of windows, thermostats and computers derives from the fact that these inanimate objects have been designed to (as it were) perceive, know and memorise. Though this is only as-if perception, as-if knowledge and as-if memory. Indeed it is only as-if information. Such things are dependent on human perception, human knowledge and human memory. Perception, knowledge and memory require real (or intrinsic) intentionality, not as-if intentionality. Thermostats, windows and computers have a degree of as-if intentionality, derived from (our) intrinsic intentionality. However, despite all these qualifications, such derived intentionality is still real intentionality (according to Searle); it’s simply derived from our intrinsic intentionality.