Can information theory help us understand whether alien signals are "possibly intelligent"? How can we possibly say something is intelligent if we don't understand it? This final chapter features Carl Sagan, Philip Morrison, and Kent Cullers (SETI: the search for extraterrestrial intelligence).
Transcript
(xylophone music) Philip: People, what I would like to know is the answer to a very simple question. Are we alone, as conscious beings, in this entire buzzing 400-billion-star galaxy, one of ten to the tenth other galaxies? It seems pretty implausible. Brit: The modern search for extraterrestrial intelligence, or SETI, began in 1959, when two Cornell physicists, Giuseppe Cocconi and Philip Morrison, published an article in Nature that outlined the possibility of using radio and microwaves to communicate between the stars.
In order for this to work, researchers assumed that any intelligent civilization would have discovered how to transmit radio waves. This assumption is based, in part, on the fact that it took human beings only 80 years to figure this out, following Alessandro Volta's discovery of batteries and electric current. The premise is quite simple: we can create radio waves by sending short pulses of electric current through wires.
These waves can then travel beyond our atmosphere and out through space with little interference. Once these radio, or electromagnetic, waves are sent out, they can be received using antennas and turned back into electrical pulses. In 1960, Frank Drake conducted the first search for radio signals from other solar systems. Much like turning a radio dial, Drake scanned the sky, trying to tune in to faint radio signals that might be coming from other worlds.
Though this first attempt did not result in any noteworthy findings, researchers have been scanning the stars ever since. Carl: So, there is some chance that, in the next few decades, we will get the signal from some spectacularly distant, spectacularly exotic civilization, and everything on Earth will, as a consequence, change. That is possible. Kent: The interesting thing about the SETI search is that, although we claim that we're looking for extraterrestrial intelligence, we can't actually define what an intelligent signal is, and so, a starting point is to say we look for a signal which nature will not produce by any mechanism that we understand.
Brit: An important question emerges. How can we ever know if such a signal is coming from an intelligent source? At the first SETI meeting, in 1961, John Lilly proposed that researchers study dolphin languages to help them learn more about what extraterrestrial signals might be like. Much of this early work culminated in the research conducted by Laurance R Doyle and Brenda McCowan.
Doyle and McCowan's work is based on the assumption that, if some trait is common to both human and non-human communication systems, then extraterrestrial communication systems should share that trait as well. They analyzed long sequences of vocalizations from adult and baby humans and from dolphins; in the case of dolphins, this was a set of whistles and clicks. Now, human babies learn to speak (baby babbles) through a process of vocal imitation, slowly amassing a larger and larger set of sound signals. Baby in green top: (babbling) Brit: However, during what is known as the babbling phase, the sounds produced are more or less random and unstructured.
To see this, Doyle and McCowan plotted each distinct sound signal against its frequency, or how often it occurs, and then ordered the symbols in the graph by frequency, with the most common symbols on the left and the least common on the right. For human babies, the slope is nearly level, as all the sound signals produced occur fairly evenly, or randomly. However, as children learn the language of their parents, they narrow their sound repertoire to fit the model to which they are exposed. Boy in green top: Ah, eh, eh, oo. Brit: Structure is imposed on our speech patterns.
Consequently, the slope of this graph converges towards a 45-degree angle, that is, a slope of negative one on a log-log chart. This is known as Zipf's Law. What's interesting is that this same slope appears in different human languages and seems to be a pattern all humans share.
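To make the idea concrete, here is a minimal sketch of how such a rank-frequency analysis might be computed. It is only an illustration under simple assumptions, not the researchers' actual pipeline, and the toy word list stands in for a real corpus of vocalizations.

```python
# A minimal sketch of a Zipf-style rank-frequency analysis: count how
# often each signal occurs, rank signals from most to least common, and
# fit a straight line to log(frequency) versus log(rank).
from collections import Counter
import numpy as np

def zipf_slope(signals):
    """Slope of the rank-frequency distribution on a log-log scale.
    Structured adult language tends toward a slope near -1, while
    unstructured babbling gives a much flatter line."""
    counts = sorted(Counter(signals).values(), reverse=True)
    ranks = np.arange(1, len(counts) + 1)
    # Least-squares fit of log(frequency) against log(rank).
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope

# Toy usage: words stand in for sound signals. A real analysis would
# need a large sample; tiny inputs give noisy slopes.
words = "the cat sat on the mat and the dog sat by the cat".split()
print(zipf_slope(words))
```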
Even more surprising is that this same pattern emerged when Doyle and McCowan analyzed non-human communication. They found that the whistle sounds produced by baby dolphins are distributed in a pattern similar to that of human babies during the babbling phase. At first, the dolphin whistles are more or less unstructured, but, by the time dolphins reach adulthood, the graph converges on a slope of about negative one, the same as for humans. (dolphin whistles and clicks) This sort of analysis looks only at individual signals, or words, and says nothing about the deeper linguistic structure of either human or dolphin communication systems. Let's clarify what we mean by deeper structure with an example.
If I select a random word from a book and ask you to guess what it is, you will have no clue what it might be and will simply have to guess. If, instead, I give you a random word from a book and ask you to predict the word that follows it, you still have to guess, but you'll notice that it's likely easier. Given a sequence of two words from a book, the third word becomes more predictable still, and, given a sequence of three words, the trend continues: guessing becomes easier yet. It seems that, as a result of the structure of language, the freedom of choice decreases as we look at longer and longer strings of words.
Intuitively, this is why we can finish each other's sentences. To quantify this, Doyle and McCowan borrowed Claude Shannon's measure of entropy, which, as you recall, is a measure of surprise. Entropy can be thought of as the number of yes-or-no questions, or bits, required to guess the next word, so, as predictability increases, information entropy decreases. Doyle and McCowan calculated the entropy at different depths, or orders: single words are first order, groups of two words second order, groups of three words third order, and so on. They then plotted the value of information entropy against this depth. For adult humans, as we might expect, they found that information entropy decreases as the depth increases, a result of the rule structure in our communication systems. Amazingly, when Doyle and McCowan did the same thing with dolphin vocalizations, they found the same pattern.
Dolphin communication systems display decreasing information entropy as we look at longer sequences of sound signals. This means that a rule structure emerges in dolphin communication, which, arguably, allows dolphins to finish each other's sentences too. Contrast this with a purely random sequence of symbols, which produces a flat line on the information entropy graph, since there is no conditional dependence between symbols. Because this pattern emerges in both human and non-human communication systems, Doyle and McCowan have suggested that this decrease in entropy is essential for the transmission of what we might call knowledge.
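Here is a minimal sketch of the entropy-versus-order idea, again my own illustration rather than the published method: for each order n, we estimate the surprise, in bits, of the next symbol given the n-1 symbols that precede it.

```python
# Estimate the conditional entropy of the next symbol at increasing
# orders: a structured sequence shows entropy falling with order, while
# a random sequence stays roughly flat.
from collections import Counter
from math import log2
import random

def conditional_entropy(symbols, order):
    """H(next symbol | previous order-1 symbols), in bits.
    order=1 reduces to the plain Shannon entropy of single symbols."""
    # Count every n-gram of the given order.
    ngrams = Counter(
        tuple(symbols[i:i + order]) for i in range(len(symbols) - order + 1)
    )
    # Count each context (the n-gram minus its last symbol).
    contexts = Counter()
    for gram, count in ngrams.items():
        contexts[gram[:-1]] += count
    total = sum(ngrams.values())
    entropy = 0.0
    for gram, count in ngrams.items():
        p_gram = count / total                # P(context, next)
        p_next = count / contexts[gram[:-1]]  # P(next | context)
        entropy -= p_gram * log2(p_next)
    return entropy

# A structured sequence: entropy drops as the order increases.
structured = list("abababababababab")
print([round(conditional_entropy(structured, n), 2) for n in (1, 2, 3)])
# -> [1.0, 0.0, 0.0]

# A random sequence: the plot stays roughly flat, near 1 bit per symbol.
random.seed(0)
noise = [random.choice("ab") for _ in range(10000)]
print([round(conditional_entropy(noise, n), 2) for n in (1, 2, 3)])
```

The falling numbers for the structured sequence are the code's version of the rule structure described above; the flat numbers for the noise are the flat line of the random baseline.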
As Doyle puts it, "If we get a narrow-band signal, a negative-one slope on the Zipf plot, and higher-order Shannon entropies, we've nailed it," and all of this rests on a simple premise: that aliens, too, can finish each other's sentences. Man in film: How can I help you to communicate with us? Alien in film: Now I am able to speak by assimilation, a form of photosynthesis. I have been able to incorporate certain of Dr [Wyman's] functional processes. Man in film: Was Dr Wyman's death necessary?
Alien in film: Through his sacrifice, I can communicate. Brit: Even without understanding the language or culture of another human or non-human species, Claude Shannon's entropy gives us a measure that can detect the presence of these structural rules, regardless of meaning. Claude Shannon's model of information was born of a desire to save time over the telegraph wires, and it led to the global unit of information, the bit, a single difference, now the backbone of our information economy. The increasingly digital and networked technologies that drive our modern world point to the power and persistence of Claude Shannon's ideas.
The bit is here to stay, and the study of information theory will continue to play a key role in our technological and social innovations, on Earth and, perhaps, beyond. Carl: I think even if there's a plausible argument for a few, we've got to keep looking. I'd even go further than that. If there's a plausible argument that there isn't anybody out there, bearing in mind that we can be wrong, we ought to keep looking, because the question is of the most supreme importance.
It calibrates our place in the Universe. It tells us who we are, and so it is worthwhile trying to find other civilizations, I would say, no matter what.