
Metaphor: the Alchemy of Thought

In the murky centuries before the dawn of the scientific age, alchemists used the phrase “As above, so below” to convey their belief that the neat order observed in the heavens could also be discerned amidst the chaos on earth. Thus the alchemists hoped to understand the one in terms of the other — the complex in terms of the simple. They viewed macrocosm and microcosm as reflections of each other. This remained an esoteric ideal rather than a formula for practical knowledge until Isaac Newton — himself a dabbler in alchemy — brought the stars and the earth closer together by showing that they could be understood using a unified language: mathematics.

“As above, so below.”

Metaphor is the alchemy of thought: not “as above, so below”, but “as known, so unknown”. According to linguists George Lakoff and Mark Johnson, “The essence of metaphor is understanding and experiencing one kind of thing in terms of another.” It might not be an exaggeration to say that metaphorical thinking is the basis of our ability to extend the boundaries of human knowledge.  For those of you who only remember the word from middle school English class, I imagine this dramatic inflation of the importance of metaphor comes as a surprise. Isn’t metaphor just a linguistic flourish? “Shall I compare thee to a summer’s day”? “Now is the winter of our discontent”? Surely this kind of frippery is only for poets and artists? For the cafe and the studio, rather than the workshop and the laboratory? Nothing could be further from the truth.


Science Fiction – The Shadows cast by Modernity

This piece was written at the behest of a friend who works for Down to Earth magazine. It appeared there a few months ago in slightly edited form.

*

There was a time, not too long ago, when the human ability to conjure visions from beyond the domain of everyday experience expressed itself only in tales of the supernatural — in myth, legend and fairy story. Humans once lived in a shadowy world populated by spirits, gods, demons, angels, and phantasmagorical beasts. Magic and mystery were the key forces in nature. Our myths gazed into the past — often to a Golden Age that came to a tragic end, perhaps because of human wickedness or the capriciousness of the gods. It was as if we once lived in a village on the edges of a dark and forbidding forest, and told each other tales of how our ancestors, expelled from Paradise, braved forgotten perils to forge an existence on the edge of Chaos. Our fireside myths helped to fend off the ever-present darkness on the margins of settled life.


But one day a new light arrived in the village, and the forest, with all its irrational terrors, was cleared to make way for the factory. We were told that magic and mystery would soon be replaced by reason and certainty. Wild nature would be tamed. When we moved from the village to the city, the glories and dangers we imagined no longer belonged to the past, but to a future in which humankind might one day illuminate all the dark corners of the Earth. But even electric light casts shadows, and factory fumes shroud us in a new kind of darkness. In the interplay of new forms of light and dark, good and evil, science fiction finds its wellspring.


*


In the yoking of science — with its methodical mastery of matter — to the freedom and flight of fiction, science fiction walks a tightrope between the possible and the fanciful — something not usually expected of myth. This creative tension finds expression in three broad and overlapping ways of seeing. Science fiction can manifest itself as a lens with which to examine the possibilities latent in an idea or technology, a funhouse mirror with which to reflect society or history, or a kaleidoscope with which to experience a sensory immersion in an alien realm.


Isaac Asimov’s body of work is exemplary of the first, and some might say purest, form of science fiction — a lens that brings into focus the fuzzy implications of science and technology. Here, the science really is central, and individual human characters often seem no more than vehicles for the unfolding conceptual drama. In the robot series, Asimov explores the moral and ethical consequences of the Three Laws of Robotics — (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov’s robots — portrayed as the epitome of rationality — contend with the inevitable conflicts and paradoxes that arise from a seemingly simple set of laws. What constitutes harm? What constitutes inaction? What should robots do if humans attempt to harm each other? And how can the robots be certain that the laws are in conflict? These conflicts culminate in the novel Robots and Empire, in which a robot with unique telepathic powers, R. Giskard Reventlov, divines a new law — the Zeroth Law — which places the concerns of humanity above those of individual humans: (0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm. The Zeroth Law is not programmed into Giskard — it simply emerges in him — but in trying to rationally decide whether it will be good for humanity or not, he ends up destroying his positronic brain. In this fatally consuming struggle we can see echoes of the tortured life of the mathematical genius Kurt Gödel. Gödel may have been driven mad by his logical proof that logic itself must be either inconsistent or incomplete. The robot Giskard, before he dies, passes on the Zeroth Law — and his telepathic powers of persuasion — to another robot, R. 
Daneel Olivaw, who is entrusted with the task of being caretaker of humanity as it pushes beyond Earth to colonize the galaxy. In Asimov’s Foundation series, human society has long since spanned the galaxy, and we follow the legacy of a mathematician, Hari Seldon, who has developed the laws of psychohistory — a combination of history, sociology and statistics used to make predictions about large groups of people. (A physicist might call it statistical humanics!) Seldon’s laws predicted that the Galactic Empire would collapse, leading to a period of barbarism lasting thirty thousand years. Horrified, Seldon sets up two Foundations that are to strategically intervene in the events of the galaxy, reducing the period of barbarism to “just” one thousand years. The last book in the Foundation Series, Foundation and Earth, even links the story with the robot series, making explicit the connections between the aims of psychohistory and the Zeroth Law of Robotics. In taking a view of future history that stretches into the tens of thousands of years, Asimov’s lens focuses not on any particular technology, but on scientific rationality itself. Can humans and their technologies (robots) be used to take care of an abstraction — humanity? And who or what gets sacrificed for the “greater common good”? When science fiction takes on the form of a lens, it celebrates and critiques our scientific and technological lenses — the “extensions of man”, to use Marshall McLuhan’s nimble phrase.


Looking outward through a lens typically does not lend itself to much in the way of introspection. For this purpose we have mirrors. The 20th century’s great genre-crossing works of dystopianism — Brave New World and 1984 — stand as canonical examples of the mirror style of science fiction. In Aldous Huxley’s Brave New World, the state religion is Fordism, inspired by Henry Ford’s assembly line — a system based on mass production, a rigid chemically-controlled caste system, and consumption of disposable consumer goods. Any desires that cannot be met in these ways can be assuaged with the wonder-drug soma. But those who dare to be dissatisfied with their lot in life are cast into exile. In George Orwell’s 1984, a police state perpetually at war is controlled by an omnipotent Party watched over by the deified and omnipresent leader, Big Brother. It is a nightmarish world of constant surveillance, torture, paranoia, servility, and betrayal. Neil Postman’s book Amusing Ourselves to Death offers us an evocative juxtaposition of these two masterpieces: “As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.” During the Cold War, the world seemed to be presented with two opposed visions — the cruel totalitarianism represented by the Soviet Union, or the vampiric seduction represented by Western consumerist capitalism. The logical conclusion of one would give us Big Brother, and that of the other would be Fordism. Like Old Testament prophets of doom, Huxley and Orwell invite us to look within ourselves to root out the seeds of such awful destinies.
In the United States we seem to be witnessing the merging of these two means of control. The increasingly militarized police force beats back the protesters and dissidents — those for whom freedom means more than mass-produced hamburgers and shiny electronic toys. As sobering allegory, science fiction can not only “hold a mirror up to nature”, it can reveal the very tendencies in humankind that alienate us from nature.


But light need not only be used for practical purposes — for looking outwards or inwards. It is also a thing of beauty in itself, requiring no justification or purpose. Most science fiction, it must be said, does not take itself so seriously as to focus too sharply on any single moral, political or technological idea. Rather than haranguing us with portentous warnings about our present or our future, the kaleidoscopic aesthetic presents a dazzling hodge-podge. Many people enjoy science fiction for no better reason than its grand canvas of spaceships, androids, aliens, ray guns, and intrepid humans dashing about the planet or the universe on an adventure of limited pedagogical value. The forms painted on this canvas become less important than the brushstrokes, the technique, the texture. A lesson might conceivably be derived from a movie like Predator or Alien, but it seems as if the whole point of such movies is the sheer visceral thrill engendered in the watching. Even 2001: A Space Odyssey, with its dark implications for malevolent artificial intelligence, is more an audiovisual adventure than a conceptual exploration — how else can we explain the power of the psychedelic dreamscape with which the film ends? This kind of interpretation seems especially true of a movie like The Fifth Element. It’s a giddy ride through a future that seems contrived purely for sensory stimulation. Even the quasi-mystical overtones in the plot appear kitschy and ironic. If we refrain from intellectualizing our experience of science fiction, we can come to appreciate the effervescence and inventiveness of an artform that isn’t necessarily about anything. A kaleidoscope is one of the simplest celebrations of perception itself — rather than perception of something.


Some works of science fiction, however, cannot be said to fit neatly into any of the above categories. They are transcendental, in that they encompass all of the categories, creating an emergent whole that confounds easy pigeonholing. The film The Matrix is emblematic of this form. It uses the hacker aesthetic of the Internet Age as a springboard from which to launch into a stylized war between humans and rogue machines — machines that were once designed to serve humans, but later enslaved them in an illusory virtual world: the matrix. But The Matrix isn’t necessarily about the dangers of AI. It is also a mystical story of self-discovery and personal liberation, taking on an emancipatory logic found in many religions. Neo’s liberation from his womb-like prison is the first step on the road to discovering that he is the One — a man prophesied to end the war and transform the matrix itself. In this he is like the Buddha, attaining Nirvana and spreading his revolutionary message throughout the world. He is also a Messianic figure, dying and then being reborn for the salvation of the enslaved. But even this does not fully capture the multiplicity of messages latent in The Matrix. If emancipation means unplugging from a comfortable world and awakening to a real world of war and desolation, then The Matrix can also be read as a cry for left-wing revolution in the modern post-industrial world: Unplug from the matrix of consumption, and rise up against those who see us merely as a source of fuel! Whether the film-makers intended any or all of these interpretations is irrelevant. For a generation of young people, The Matrix was a rite of passage — for those who chose that path, it was an initiation into a world that could be read as a matrix of symbols.


Symbolism also plays a role in the epic television series Battlestar Galactica, but that role is far murkier. The overarching plot bears a vague resemblance to the biblical exodus, in which the Israelites wandered through the wilderness in search of the Promised Land. There are also explicit references to Greek mythology — there are important characters with names like Hera and Athena. Humanity has been all but exterminated by the Cylons — a race of renegade robots created long before by humans. The remnant of the destroyed Twelve Colonies of humanity — named after the 12 signs of the zodiac — band together as a flotilla searching for a new homeland. In their quest for a home planet they turn to ancient prophecies about a 13th colony, Earth. But the series is impossible to read as a coherent set of symbols. Unlike in The Matrix, some characters dismiss the prophecies as ancient religious babbling. And, to add to the strangeness, the Cylons appear to have a religion too — a form of monotheism that opposes the humans’ jumbled polytheism. While the symbolic mysteries unfold and baffle, the series also presents us with less ethereal — but no less engaging — debates on the nature of democracy and justice, on the role of the armed forces during crisis, and on racial profiling and discrimination. The central plot element that injects an unprecedented vitality into these debates is the discovery that the human population has been infiltrated by a group of Cylons that are indistinguishable from humans. Unlike the kind of science fiction in which the enemy is readily identifiable, the humans in Battlestar Galactica are also at least partly at war with themselves. The resonance with the era of “with us or against us”, Homeland Security and terrorist sleeper cells is undeniable, and yet Battlestar Galactica does not lend itself to any easy moral, political, philosophical or religious lessons. 
What you glean from the series depends to a great extent on which characters you choose to focus on, or which symbols you attempt to decipher, and in this complexity its closest equivalent is perhaps the Mahabharata.


But the quintessential example of transcendental, mystical science fiction is the original Star Wars trilogy — the series that ushered in a golden era of popular science fiction filmmaking in the 1980s. From the very outset it appears to go against standard science fiction protocol. It is not set on a future Planet Earth, but “A long time ago in a galaxy far, far away”. It features grand battles between good and evil, but it resists any identification of these conflicts with contemporary political questions. And though it has all the riotous jazziness of pastiche — part Western, part Japanese samurai story, part WWII campaign against Nazi SS officers — it has an emotional core that goes beyond cinematic thrill-seeking while simultaneously satisfying that urge. In transforming the genre, Star Wars occupies a special place in the history of popular science fiction: it’s a blockbuster that paints a mythic story on a galactic canvas. It should therefore be no surprise that Joseph Campbell — the preeminent scholar of world mythology — was a major influence on its creator, George Lucas. Star Wars is in many ways an old story wearing new interstellar clothing. The plot can be recreated from the chapter subheadings of Joseph Campbell’s 1949 magnum opus The Hero With A Thousand Faces, in which the monomyth — a basic pattern reflected in many of the world’s myths — is described. The saga begins in Episode IV: A New Hope, when Luke Skywalker encounters the droids who show him Princess Leia’s secret message asking for help. He then seeks out Obi-Wan Kenobi, who introduces him to the Force, which “surrounds us and penetrates us. It binds the galaxy together.” This stage is what Campbell calls The Call to Adventure. But at first Luke refuses to join Obi-Wan on his mission, instead choosing to stay and work on his uncle’s farm. This is the Refusal of the Call. After his home has been destroyed by the forces he tried to ignore, Luke’s doubts about joining the mission fade away. 
Soon Luke, under Obi-Wan’s tutelage, begins to feel the Force, after which Obi-Wan tells him “You have taken your first step into a larger world.” This is the Crossing of the First Threshold. Later, Luke, Han Solo and Leia are trapped in the trash compactor of the Death Star. They are in the Belly of the Whale. Luke goes into it a boy, and emerges from it a man. The characters progress through the Road of Trials, culminating in Episode VI: Return of the Jedi with Luke’s Atonement with the Father.


It may seem that in mythologically-tinged science fiction, we have all but forgotten the word “science”. Recalling a key scene from the first film (Episode IV) will help us restore the link and bring our discussion full circle. Luke Skywalker, trying to target a vulnerability in the Death Star, hears the voice of Obi-Wan Kenobi, telling him to use the Force, rather than the computer targeting system. In doing so, Luke succeeds where others who used their computers failed — using the Force allows him to destroy the Death Star. One can’t help but interpret this climax as a momentary turning away from all that is technological and robotic. The forces of evil with their martial technologies are captained by Darth Vader — “more machine now than man” — while the forces of good are led by a scruffy band of rebels and an old man espousing a “sad devotion to that ancient religion”. Even someone fully embedded in the world of technology — sitting in its very cockpit — has a choice regarding whether or not to fully submit to the machine. And in Star Wars, it seems as if organic mind must establish its dominion over mechanical matter, never allowing it to fully determine the way the rebels achieve their goals.


In this opposition — between mind and matter, organic and mechanical, spiritual and material — Star Wars bridges the gap between ancient myth and modern fiction, rendering problematic any simple dichotomy between the old and the new. At crucial points in human history we are confronted with the New — new circumstances, new ideas, new tools, and new ways of living — and the lack of precedent means that the wisdom of the past can never fully encompass the New. The New is therefore related to what the psychoanalyst Jacques Lacan called the Real: “that which is outside language and that resists symbolization absolutely”. The New confronts us with the power of the Universe to surprise us, to challenge us, and to threaten our cosy certainties. But those who do contend with the New — by learning, adapting and transforming — are those who have been able to put old tools to new use. They realize that humans are a part of this universe, not separate from it, and can participate in bringing forth what is New from their own wells of creativity. Rather than retreating from the shadows cast by modernity in order to be warmed by the age-old fires, the creators of science fiction, Prometheus-like, venture into the shadows to bring back visions of what might lie beyond the margins of Possibility. In showing us what our lenses, mirrors and kaleidoscopes are capable of, science fiction invites us to wonder what we humans are capable of, if only we are brave enough “to boldly go where no man has gone before”.


The Great Red Spot (or, When Can a Thing be Said to Exist?)

Consider the Great Red Spot of Jupiter. It is a storm that has been around for two centuries. It’s a vortex big enough to contain two or three planets the size of the Earth. But is it a thing?

What does it mean to say that a thing exists? In what sense are plastic cups or rocks things? And are living things also ‘things’ in the same sense? To work towards a better understanding of ‘existence’, let us examine our intuition about the properties of things. A thing has a location in space and time, and a boundary that separates it from all that it is not. A thing can move around and morph its shape and size, but it retains some degree of integrity, so that as it undergoes transformations, it can still be recognized as the same thing. An apple remains the same apple as it decays, changes shape, and rolls around, until it finally crumbles away. These changes can be quite dramatic. When a caterpillar becomes a butterfly, we say that it has changed, and not that some new flappy thing mysteriously replaced the creepy-crawly that was there a few days before.

Another property of a thing that we might propose is that it is always composed of the same stuff. The atoms and molecules in a rock do not change much. You could make a little etching on the rock, and then find it later on, confident that you had found the same thing.

But I think this kind of material consistency is the exception rather than the rule for many of the things we are interested in. Let’s go back to the Great Red Spot. Surely it is a thing? It has been around since long before you or I were born. Astronomers are readily able to identify it and talk about its shape and location. But a storm like the Great Red Spot does not generally retain all the material contained in it. Stuff comes in, stuff goes out. Back on Earth, we know that a storm can pick up gas molecules, water, houses, people and cows, and dump them elsewhere. Perhaps some molecules stick around with it for the duration of its destructive dance, but plenty of others just hitch a ride for a little while.

A storm is a process. And it is also a thing. It has position, velocity, shape and size, but it does not have a constant configuration of atoms and molecules. In this it is like a wave. There’s a lot you can learn about wave motion from a rope or a string. If you tie one end of a rope to a stationary object, you can send a wave along it by making the right sort of up-and-down shake on the other end.

What I’d like to draw your attention to is the fact that when a wave moves horizontally along a rope, the particles in the rope do not move horizontally. If they did move horizontally, the rope would eventually break! In fact, they move up and down. What moves along the horizontal direction is the movement itself. If we want to consider the wave a thing, we have to concede that it is not made up of a specific set of particles. It is a process, and the particles are just the medium by which the process flows.
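A few lines of code make this concrete. The sketch below is a toy model (the amplitude, wavenumber and frequency are arbitrary illustrative values, not tied to any particular rope): a particle at a fixed position only oscillates vertically, while the waveform as a whole translates horizontally at the wave speed.

```python
import math

# Travelling wave y(x, t) = A * sin(k*x - w*t).
# The *pattern* moves right at speed w/k, but a particle
# at any fixed x only moves up and down.
A, k, w = 1.0, 2.0, 3.0  # arbitrary amplitude, wavenumber, angular frequency

def height(x, t):
    return A * math.sin(k * x - w * t)

# The particle at x = 0.5 oscillates vertically as time passes...
heights = [height(0.5, t) for t in (0.0, 0.5, 1.0)]

# ...while the wave shape at time t is just the shape at time 0
# shifted right by (w/k) * t: height(x + shift, t) == height(x, 0).
t = 1.0
shift = (w / k) * t
assert abs(height(0.7 + shift, t) - height(0.7, 0.0)) < 1e-12
```

The assertion at the end is the whole point: nothing material travels with the wave, only the pattern of displacement does.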

What applies to waves also applies to human beings. The oldest particles in your body have been with you for perhaps seven years. The body is not a constant set of particles, it is a wave, a travelling pattern, a Great Spot whirling around the surface of the Earth. And in a sense, you already knew this. You take in food, air and water every day, and yet you maintain the same weight. (Well, more or less the same weight.) Cells die and are replaced by new ones that are made up of the matter you ingest. You are what you eat.

I like the way Richard Feynman gets this point across. “So what is this mind of ours: what are these atoms with consciousness? Last week’s potatoes! They now can remember what was going on in my mind a year ago—a mind which has long ago been replaced. To note that the thing I call my individuality is only a pattern or dance, that is what it means when one discovers how long it takes for the atoms of the brain to be replaced by other atoms. The atoms come into my brain, dance a dance, and then go out—there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.”

So we are not what we are made of. We are what we do.

When we realize that all things are also processes, the concept of existence can take on a new meaning. Perhaps we should shift our focus from the idea that existence is about things. What exists is what is happening. No process goes on forever, and thus we can’t really speak of the things that exist for all time and in all places. Things, people, societies, ideas… these are all dancing patterns, and when the dancing stops, only stillness remains.

_______

Notes:

  • I began thinking about the ‘wave’ nature of human beings when I read an article from the Edge Foundation by Tor Nørretranders.  “I have changed my mind about my body. I used to think of it as a kind of hardware on which my mental and behavioral software was running. Now, I primarily think of my body as software. […] 98 percent of the atoms in the body are replaced every year. 98 percent! Water molecules stays in your body for two weeks (and for an even shorter time in a hot climate), the atoms in your bones stays there for a few months. Some atoms stay for years. But almost not one single atom stay with you in your body from cradle to grave.”
  • Almost all the matter in us and around us was produced inside the furnaces of stars. If you believe in modern science, the phrase “we are stardust” is not a metaphor! But of course, we are more than just stardust. We are waves that use stardust as a medium for propagation.
  • If you use a spring instead of a rope, you can see two types of wave motion: transverse and longitudinal. Transverse waves are like the ones on a rope, with the particles moving in an up-and-down direction. In longitudinal waves, the particles do move back-and-forth in the horizontal direction, but still do not travel with the wave.
  • Wave-particle duality suggests that at a very fundamental level, the distinction between particle and wave can be blurry. Also, condensed matter physicists often find it useful to characterize the systems they study in terms of “quasiparticles”. You could go as far as saying that a particle can be redefined as a type of phenomenon or interaction, rather than a thing. From here we can question the meaning of the word “fundamental”, perhaps by opposing it with the word “useful”. More on this later.

Here is a handy web applet that lets you send waves along a virtual rope.


Vision: the Master Metaphor

Human beings frequently conceptualize experience and understanding in terms of visual metaphors. These metaphors pervade our discourse: we ‘illuminate’, ‘shed light on’, and ‘dispel shadows’. When you think you understand, you often say “I see.” In IIT Bombay lingo, after explaining something you’d ask “Chamka kya?” (Did it shine?)

Art is believed to reveal a kind of truth: Hamlet declares that the “purpose of playing” is to hold “the mirror up to nature.” Ignorance is our inability to see through the darkness. St Paul says “now we see but a poor reflection as in a mirror; then we shall see face to face. Now I know in part; then I shall know fully, even as I am fully known.”

I can’t be sure why visual metaphors appear to dominate, rather than tactile or auditory ones. Neuroscience and history (evolutionary, cultural and linguistic) may one day shed some light on the origins and workings of this phenomenon. George Lakoff and Rafael E. Núñez, in their book Where Mathematics Comes From, go as far as suggesting that logic itself — seemingly alienated from human experience — is born of a kind of visual logic that manifests itself in the vision-behavior nexus. Perhaps this nexus is a substrate for ‘self-evident’ truths. Regardless of the origins of visual thinking, in general discourse it provides us with powerful analogies with which to structure our discussion of metaphor and the ‘axes’ of human understanding.

If art holds a mirror up to nature, science holds up lenses and prisms. Lenses symbolize observation, and prisms symbolize analysis and synthesis. I’ll talk about prisms first, and then move to lenses, which afford us extended, systematic metaphors.

A prism serves to break up a beam of light into its constituent spectrum, and can also (re)combine spectral components. The choice of prism material and shape depends on the spectral band of the radiation being investigated. In their role as dispersers, prisms analyze light. The word “analysis” is a transliteration of the ancient Greek ἀνάλυσις (analusis, “a breaking up”), from ana- “up, throughout” and lysis “a loosening”. Chemical analysis can be seen as a set of prisms by which the composition of a substance is revealed through ‘dispersion’. The role of prisms in recombination can symbolize the complementary process, synthesis, which comes from the ancient Greek σύνθεσις (σύν “with” and θέσις “placing”), and means a combination of two or more entities that together form something new. Prisms combine red and green light, for instance, to yield yellow light.

Lenses can magnify images, bringing otherwise invisible objects into focus, so that we can better analyze their structure, composition and behavior. But lenses also warp and distort. Further, there is no single lens that can be used to capture all possible images. You cannot use a microscope to study the stars. This is not a mere technical difficulty: a lens’s scope and its resolution trade off against each other — you cannot simultaneously apprehend a square mile of territory and view it at one-micron resolution. Similarly, you cannot simultaneously study an object at the quantum mechanical level and at the Google Street View level. A single device does not have a little knob with which to arbitrarily increase resolution while maintaining the size of the frame. To zoom in to a point is to discard more and more of the area around that point.

We build up our understanding of an object or process by employing multiple lenses. This raises the possibility of discontinuities in the picture we construct from the multiple views. Since any single device cannot simultaneously and smoothly vary its focus, scope and resolution across the whole range of human perception, we resort to the use of multiple devices. If the operating ranges of the devices overlap, it becomes possible to construct a composite image. This is one of the ways a panoramic photograph can be constructed. Multiple photos are stitched together.
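The stitching step can be shown in miniature. The sketch below is a toy example (`register` is a hypothetical helper; real registration must also cope with noise, distortion and differing scales): it slides one 1-D “photo” along another, finds the offset where the overlapping samples agree best, and merges the two into a composite.

```python
def register(a, b, min_overlap=2):
    """Find the offset of b relative to a that minimizes squared
    mismatch over the overlapping region, then stitch them."""
    best = None
    for offset in range(1, len(a) - min_overlap + 1):
        overlap = a[offset:]
        diff = sum((x - y) ** 2 for x, y in zip(overlap, b))
        if best is None or diff < best[1]:
            best = (offset, diff)
    offset = best[0]
    return a[:offset] + b  # the composite "panorama"

left  = [0, 1, 4, 9, 16, 25]
right = [9, 16, 25, 36, 49]       # overlaps the tail of `left`
print(register(left, right))      # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the overlap here is exact, the best offset has zero mismatch; in real composites the overlap only agrees approximately, which is precisely where the discontinuities discussed below creep in.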

Let us ground these metaphors in a specific example. DNA was discovered in 1869 by investigating pus in bandages with a microscope. In 1919 its composition was revealed by chemical analyses. Experiments beginning in 1928 gradually established that DNA is a carrier of genetic information. Its structure was determined in 1953 using X-ray diffraction, a technique previously used in crystallography. All of these views were integrated (‘stitched together’) with the glue of mathematics and shrewd deduction. And this beautifully synthesized view is still in no sense complete — though the human genome project has given us the complete sequence of base pairs (in a handful of people), the nature of the “code” has not been cracked — it appears that a wider scope incorporating cellular and chemical context needs to be supplied: the burgeoning field of epigenetics appears to be orders of magnitude more complex than genetics. It seems that learning to read the ‘Book of Life’ is much harder than transcribing it, and may involve looking at some of the other books in the Library of Life. Chemistry, biology and physics were the lenses used to uncover what we know about DNA. Each field has its own scope, resolution and focus, and the process of stitching together the ‘image’ requires ingenious puzzle-solving abilities. And the puzzle pieces often fail to fit perfectly together! Even in the (literal) case of photography and imaging, image registration — the alignment of images to form composites — is a task that is far from straightforward. “Registration is necessary in order to be able to compare or integrate the data obtained from […] different measurements.” If you’re going to plan out a journey using two overlapping maps that have different scales and distortions, you’d better be careful about how and where they align.

What applies to image registration applies to all fields of human inquiry. Consider the world of physics. Popularizers of science (as opposed to actual scientists) will often have you believe that physics is — or will soon become — a unified view of the universe. A Grand Theory of Everything is supposedly within grasp. The following diagram illustrates the pre-unification state of physics. I’ve mapped out the subdomains of physics on axes of length (somewhat precise) and speed (not precise at all).

The subdomains of physics in relation to length scale and speed.

I’ve based this image on this handy visualization and other similar diagrams, but I want to draw attention to the white spaces, which are caricatures of the ‘holes’ in physics — they occur not just at the margins of our furthest perceptual reach, but in ‘central’ regions, as well. Quantum field theory unifies (registers) quantum mechanics with some relativistic concepts. But it cannot incorporate the effects of gravity, hence the quantum gravity (black?) hole. The most elegant example of ‘theory registration’ is the equivalence of classical/Newtonian mechanics with relativistic/Einsteinian physics. If the velocity term affecting the Lorentz factor is sufficiently low, Einsteinian physics reduces to Newtonian physics as a limiting case. The mathematics to show this is simple and unambiguous.
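That reduction is worth sketching explicitly. Expanding the Lorentz factor for small velocities (standard textbook material, not specific to this post):

```latex
\gamma \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\;\approx\; 1 + \frac{1}{2}\frac{v^{2}}{c^{2}} + \cdots
\qquad (v \ll c)
```

With this, relativistic momentum \(p = \gamma m v\) reduces to \(mv\), and relativistic kinetic energy \(E_k = (\gamma - 1)mc^{2}\) reduces to \(\tfrac{1}{2}mv^{2}\): the Einsteinian expressions collapse smoothly onto the Newtonian ones as the velocity term becomes negligible.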

The equivalence of quantum mechanics and classical physics, on the other hand, has not been clearly established. Many physicists assert that classical physics exists as a special case of quantum mechanics in the limit of very large numbers of particles. (In a sense this is obviously true if we conflate the theory with the reality. However, as a general rule of thumb, scaling up the number of elements in a theoretical system rarely yields results that correspond with experiment. More is different.) This assertion is known as the correspondence principle, but it is not quite a proven statement. Unlike in the case of relativity, no universally agreed upon mathematical procedure can show classical mechanics as the limiting case of quantum mechanics. To go back to the image registration metaphor, this would be like having a discontinuity in the stitched-up panoramic photo that we declare non-existent by fiat! Objects that span the classical-quantum divide — perhaps DNA molecules and carbon nanotubes — currently fall into a conceptual no-man’s-land. But you are free to believe that one day a Grand Unification Theory will fill in all the holes in physics. Perhaps then the mosaic-like quality of our current understanding — riddled with discontinuities — will disappear?

I am not convinced that a Theory of Everything in physics will satisfy our general curiosity. Many of the most interesting problems we face have nothing to do with physics. I am drawn to a philosophical position among scientists — non-reductionism or emergence — that holds that grand unification may not be possible, and further, that even if it were, it would not answer important questions about observable phenomena, even within physics. In other words, a Theory of Everything would explain very little of consequence.

This is the region where all the action is. And physics has not filled up all the holes.

In the picture above, I’ve highlighted the region that concerns most human beings. It is the region of the universe we live in — where genes, cells, brains, computers, and societies are much more ‘fundamental’ to our existence than quarks or galactic superclusters. This is the region of the map where physics per se is rarely able to give us any useful information. Chemistry, biology, psychology, sociology and economics… these fields deal with phenomena that show no sign of revealing their mysteries to the physicists’ particle accelerators or radio telescopes. The scope and resolution of the physicists’ (conceptual) lenses simply won’t suffice. The truths we collect in these domains are multifaceted, inconsistent, and often nonmathematical. The ‘theory registration’ problems are therefore particularly acute.

Or rather, the alignment of various theories would be an acute problem if that were the primary goal of human inquiry. Accounting for quantum gravity, mathematizing the transition from quantum to classical — these sorts of goals are laudable, and when successful, frequently provide new insight into observable phenomena or suggest phenomena hitherto unobserved. But the unification program may sometimes be nothing more than papering over tiny gaps between the tiles of a mosaic — gaps that are only visible if you are looking for them (as opposed to using the mosaic of lenses to solve problems). Grand Unification seems often to be an aesthetic principle rather than a self-evident necessity of the universe. There is no reason a priori to assume that all domains of human understanding are mutually consistent. This search for consistency sends physicists looking for increasingly obscure regions of time and energy — the first few seconds of the universe, or deep inside the Large Hadron Collider. If your panoramic view of the sky has vast regions of empty space, is it really important to find (or more often, create) phenomena that suggest disambiguating alignment procedures? Is it not sufficient that the telescope (theory) pointing in one direction sees (accounts for) the stars (observations) in its field of view, as long as there are other telescopes and microscopes for other stars or minuscule particles? If a star and a quark can never in any real sense be seen to interact, do we really need a theoretical bridge between astrophysics and QFT? What use would it serve?

I do not want to suggest that attempts at reductionist unification in the sciences are misguided or pointless. My aim is to demonstrate what human knowledge looks like as is, not as it should be or can be. Currently, human knowledge looks very much like a patchwork quilt of theories, ad hoc rules, stories, speculations and observations. A collage rather than a ‘veridical’ photograph. For this reason the truths that physicists have thus far described as universal are rarely universally useful — they have meaning and force only when viewed within the lens that gave rise to them. Quantum electrodynamics may be ‘universally’ true, but how it can be put to work in clinical psychology is far from clear.

Vision offers another interesting metaphor. If we see human knowledge as a fixed collage — one in which, say, quantum mechanics is the only lens for physics below the nanoscale, and neoclassical economics is the only lens for understanding the flow of money and labour — then we are in danger of reification: turning abstractions into reality. We can inoculate ourselves against premature ossification by remembering that lenses are not objective generators of truth. They require someone to look through them. They require an observer, who is always viewing things from a particular frame of reference, and asking particular questions. We don’t really need the principle of relativity to arrive at the realization that there are multiple frames of reference, and none of them are privileged. If we cling to a single frame of reference, we often make errors in measurement, such as parallax. Different frames of reference give us different views, and moving between them gives us a better sense of the object or process. (This introduces the problem of image registration to neuroscience. Shifting between different viewpoints, how does an individual brain/mind make mappings between mappings? Metamappings?)

I spent a few impressionable years being dazzled by postmodernism, mainly because it tends to stress the possibility of multiple viewpoints and the absence of a central, fundamental frame of reference, or “grand narrative” in postmodernese. But the postmodern theorists go too far — they jump from this observation to an unjustified assertion that all frames of reference are mutually incommensurable — ‘hermetically sealed’. Surely no viewpoint is totally isolated from all others? Frames of reference need not be irreducibly incommensurable or mutually unintelligible. Two frames of reference, one hopes, can refer to the same object — they are constrained by reality! For all the supposed incommensurability of human knowledge systems and cultures, the common frame of human behavior offers us a wide, overlapping region for interaction. Our toolboxes may contain very different lenses and prisms, but surely we can bring them to bear on the same situation? We can and do act together in the world in ways that allow us to align our theories and frames of reference, even if these alignments are contingent, provisional or ephemeral. Our lenses may create idiosyncratic distortions, but we are more than our lenses. We are also our deeds, and our deeds interact even when our ideas do not. Shared praxes can align our axes.

_______

Notes
  • All metaphors break down eventually. Similarly, human vision breaks down at the size scale of the wavelength of visible photons. It makes little sense to visualize particles that are altered radically by interaction with a photon. A physics professor at IIT advised us to stop trying to visualize quantum mechanical systems. In many cases one has to ‘see’ an electron as nothing more than an abstract phenomenon governed by an equation. The process of disanalogy will become useful when we want to investigate the limits of our language, and the limits of our understanding. More on that later.
  • When red and green light arrive at the eye, humans see yellow light. There is no purely physics-based explanation for this. ‘Objectively’ yellow light has a wavelength of roughly 570–580 nm, but mixing red and green light does not yield new wavelengths. The yellowness of the mixture has to do with the way human color vision works. Thus what we see depends not only on what is ‘out there’, but what is ‘in here’ too.

Further Reading

Anderson, P.W. (1972). “More is Different”. Science 177 (4047): 393–396.

A classic paper explaining emergent phenomena in terms of symmetry breaking. Quote: “At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as the previous one. Psychology is not applied biology, nor is biology applied chemistry.”

Laughlin, R. B. (2005). A Different Universe: Reinventing Physics from the Bottom Down. Basic Books. ISBN 978-0-465-03828-2.

Nobel Laureate Robert Laughlin makes a strong case for non-reductionism and emergence in relatively simple language.

Laughlin, R.B., and Pines, D. (2000). “The Theory of Everything”. PNAS 97 (1): 28–32.

A thorough critique of Theories of Everything, complete with examples from physics. Quote: “We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”


Truth, Validity and Usefulness

There are three closely related — but in my opinion distinct — concepts that ought to be closely scrutinized before discussing metaphor proper. (The post is long but I hope you will bear with me: making the points I wanted to make took many more words than I had anticipated.)

~

Truth is the most difficult topic to do any justice to, especially if we start with the supposition that no truth is self-evident. Truth is a topic that has been flogged to death in academic philosophy, but it is far too important to surrender to philosophers. At the outset, let me just say that I subscribe to the view that truth is an attribute of statements about the world, and not of the world itself. Rainfall is a process in the world — it is neither true nor false. It just is. But a human statement about rain, such as “It is raining” can be either true or false (or undetermined, if you are in a windowless room). Truth is an aspect of communication. (Can animals lie? I’m sure they can intentionally mislead, but I don’t know if we should call this lying.)

How do we assess the truth of a statement? There is no foolproof formula or algorithm. The truth of statements about directly observable events is determined by corroboration. If you say it is raining, then I can go check. End of story. (At least, for most people. Ludwig Wittgenstein, for instance, refused to acknowledge that there was in fact no rhinoceros in the room, much to Bertrand Russell’s exasperation.) Most people trust their senses, and multiple corroborations are usually adequate evidence to convince them that what their senses throw at them is not a private hallucination. Madness, after all, is a minority of one. (Is the world a collective hallucination then? This is a much stranger question, and one we’d better ignore for now.)

As soon as we get to unobservable events, the trouble begins. If you tell me that it rained in Boston, Massachusetts on August 13th, 1947 at 2:53 pm, I will have to use indirect methods to assess the truth of your statement. I might, say, consult newspaper archives looking for weather reports. But I will then need to place some faith in the newspaper’s accuracy, transforming the question of the truth of your statement about rain into an investigation of the trustworthiness of archived newspapers. We often decide on the truth of a statement based on the trustworthiness of the source. If you know someone who is a pathological liar, you take what s/he says with a grain of salt, until you can confirm what they say. Conversely, if you know a scrupulously honest person, you may believe whatever s/he says without seeking confirmation. Note that the liar could be telling the truth, and Honest Abe could be lying or simply mistaken. Without the ability to confirm statements using our own senses, we are at the mercy of other people.

The truth of a statement is often established using an appeal to authority. This is the typical technique of religious fundamentalists. But scriptural literalists are not the only people who appeal to a higher authority. Everyone does this, and it starts early. Children often ask “Who said so?” We implicitly evaluate what is said by finding out who originally said it. The value of a statement is displaced and relocated in the personality of its first proponent. Popularizers of science frequently refer to the statements of famous scientists. These are the ‘facts’ that Science — usually a disembodied entity — has ‘shown’. One seldom hears about the conceptual or theoretical framework within which the statement makes sense. The authority of a particular scientist may be tested via the systematized extrapolations of confirmation-via-the-senses that we call experiments, but very few nonscientists engage in this kind of activity. Most people are unable or unwilling to confirm statements by scientists and other public authority figures. Society places trust in some people (for reasons that are far from obvious) and individuals often just inherit this trust, even if they have the resources to test it themselves. I imagine most people assume that any statement being widely touted as the Truth would be subject to minute examination by persons more qualified and motivated than themselves. Alas…

Other ‘authorities’ we frequently appeal to: parents, teachers, politicians, philosophers, social ‘scientists’, priests, medical doctors, astrologers, and even journalists. There is also depersonalized tradition — “It’s true because this is what we’ve always believed”. Another authority is aesthetics (“Beauty is truth, truth beauty”) — a particularly pernicious obscurer of truth, even, or perhaps especially, for scientists. (Perhaps more on this in a future post.) But the most mischievous authority we appeal to is ‘common sense’. What on earth is it? It has something to do with reasonableness, and rationality, and sensitivity to ‘evidence’, but beyond words that are themselves vague I can say very little.

~

Pinched from SMBC

I can, however, say something about validity, which is related to logic and therefore by association, to the popular perception of rationality. Validity — for the purposes of this blog at least — means logical consistency. We often conflate validity with truth. Many true statements also happen to be valid, in that they are consistent with other truths, and can be related to them using the algorithmic processes of logic. (We can include formal mathematics in the term ‘logic’ here, although there are those who might argue that logic should be seen as a subset of mathematics, not the other way round.)

Consider the sort of ‘truth’ arrived at by syllogisms such as the following:

Major premise: All humans are mortal.

Minor premise: All Greeks are human.

Conclusion: All Greeks are mortal.

The conclusion is valid because logic has been applied correctly, and it is also true. Why is it also true? Because the major and minor premises are both true to begin with. This is not always the case. Consider:

Major premise: All ducks have feathers.

Minor premise: All basketball players are ducks.

Conclusion: All basketball players have feathers.

It is important to recognize that the conclusion is valid. Logic has not been violated. But I hope everyone can agree that the conclusion lacks truthiness. The minor premise is absurd. I’ve picked a silly example, but you can easily imagine that things get murky very quickly if the premises sound sophisticated, plausible and/or ‘reasonable’. Logic is an internally consistent system that, if used correctly, will always give you valid results. Like a computer program, it’s Garbage In Garbage Out. (Logically consistent garbage, of course.) The truth lies elsewhere.
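The structural character of validity can even be demonstrated mechanically. In this toy Python sketch (the sets are invented precisely to satisfy the absurd premises), the Barbara syllogism becomes set containment: once we grant the premises, the conclusion cannot fail, however untrue the premises are as claims about the real world.

```python
# The duck syllogism as set containment. Validity is about form:
# if players ⊆ ducks and ducks ⊆ feathered, then players ⊆ feathered,
# no matter how absurd those containments are as facts about reality.
# (A toy pretend-world, constructed to make the premises 'hold'.)

ducks     = {"donald", "daffy", "lebron", "kobe"}   # pretend-world
feathered = ducks | {"tweety"}                      # all ducks have feathers
players   = {"lebron", "kobe"}                      # all players are 'ducks'(!)

premises_hold = ducks <= feathered and players <= ducks
conclusion    = players <= feathered                # all players have feathers

print(premises_hold, conclusion)   # True True — valid, yet plainly untrue of reality
```

Swap in true premises and the same containment check delivers a true conclusion; the machinery itself is indifferent.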

Wittgenstein asserted that statements that can be arrived at by logical deduction are tautological — empty of meaning. Meaning and truth lie in what we put into the logic machine. We can put true statements (arrived at using the vague rules of thumb hinted at above) into a logic machine and crank out new true statements. I use the machine metaphor consciously: it is not very hard to get a computer to perform logical deductions, because it all boils down to rule-based symbol manipulation. Douglas Hofstadter demonstrates this very vividly in Gödel, Escher, Bach (a strange, entertaining book whose fundamental argument eludes me). Doing a derivation in mathematics or physics resembles this algorithmic process. It’s all symbol-manipulation — moving around x’s and y’s using the laws of algebra and calculus (with much assistance from intuition and clever tricks, without which we would get nowhere). The meaning of a mathematical symbol lies elsewhere. The variables in formulae must (eventually) be mapped to observables in the world — this fundamental act of naming and associating is not done within mathematics or logic. Variable x can take on any value you give it. What value you give it depends on the problem at hand. (Even if we somehow had access to all true axioms and a sufficiently powerful logical system, we would still face a problem. Gödel used formal mathematics/logic to subvert itself, yielding his notorious incompleteness theorems. One interpretation of the first theorem is that there will always exist statements expressible in a mathematical system whose truth cannot be established within that system, however powerful and consistent the system happens to be. Hofstadter uses an effective metaphor. Imagine that a mathematical system is like a phonograph, and playing a record corresponds to deriving a statement within the system. There will always be one record that carries a vibration that will destroy the player. You can try building a new, stronger phonograph, but you will always be able to make a new record that can destroy it.)
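That rule-based symbol manipulation takes only a few lines of Python to sketch. The facts and rules below are invented placeholders: the machine shuffles opaque tokens by rule, and whether its output is true depends entirely on what we feed it.

```python
# A minimal 'logic machine': forward-chaining modus ponens over
# opaque symbols. The machine knows nothing about rain or worms;
# it just applies (antecedent -> consequent) rules to tokens.
# (Illustrative facts and rules, not any real inference library.)

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new symbols can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

rules = [("it_rained", "ground_wet"),
         ("ground_wet", "worms_surface")]

print(sorted(forward_chain({"it_rained"}, rules)))
# ['ground_wet', 'it_rained', 'worms_surface']

# Same machine, garbage axiom in -> valid garbage out:
print(sorted(forward_chain({"moon_is_cheese"},
                           [("moon_is_cheese", "mice_love_moon")])))
```

The second call is the point: the machine cranks just as happily on a false axiom, which is why meaning and truth must come from outside the machine.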

~

And now let us cease this academic chatter about truth and validity, and come to the problem at hand. The third leg of our tripod is usefulness. Statements (and systems of statements) are not merely true or valid. They can also be useful. Newton’s laws allow us to make precise predictions about the movements of bodies both terrestrial and heavenly. We are offered “better living through chemistry”. And quantum mechanics, for all its alleged conceptual incoherence, is the fertile soil from which spring all the wonders of the electronic age. Clearly there are statements that have power. Note that I am not concerned with subjective value here. You may hate iPods, railway trains, or vitamin C tablets, but you should be able to acknowledge the efficacy of chemistry, physics or engineering as tools for achieving the goals of its practitioners.

You might argue that the power of science comes from its truth. But we must also admit to ourselves that there are powerful, world-changing statements that are not true, or whose truth has not yet been definitively assessed. Questionable beliefs and superstitions have power in that they influence the way people behave. These effects may not be as well understood as phenomena involving inanimate matter, but people regularly find ways to deploy them reliably. Think of proselytizers, PR companies, or politicians.

Many of the statements born out of religious and spiritual tradition do not hold up to scientific standards of truth and validity, but even scientists are capable of recognizing the power of allegedly false beliefs in helping people cope with pain, anxiety, and poor health — and also in spreading hatred, conflict and ignorance. But even truth and validity do not always co-occur. The Schrödinger equation is true and useful, but it cannot be derived from other more “fundamental” principles. (Heuristic derivations are used as didactic aids.) Physics and chemistry are littered with examples of ad hoc rules that do not simply flow logically from first principles. They may be consistent with other rules, but their validity was not the basis for their acceptance into the canon of true theories. Validity is often discovered after the fact of a true discovery. This was the case with Newton’s calculus. Mathematicians in the 19th century lamented the fact that calculus rested on shaky foundations (fluxions, anyone?), and proceeded to give it a firmer, more rigorous footing. I have often wondered whether the “foundation” metaphor is even appropriate here, given that calculus had already been used to great effect since the 17th century despite the untrustworthiness of its moorings. (I intend to return to the metaphor of foundations and fundamentals at a later date.) Richard Feynman described mathematics as coming in two varieties: Greek and Babylonian. The Greeks were concerned with deriving all truths from a small set of self-evident axioms. The practical Babylonians, on the other hand, used intuition and heuristics, working out new truths from other truths that happened to be adjacent to the problem at hand. (I recommend reading the full Feynman quote on p. 46 or watching the lecture.)

I am not trying to knock rigor, however. The pursuit of rigor yields many discoveries that are interesting in their own right. And mathematical ideas that are valid and well-formed can sit around for decades before someone discovers a use for them. This was the case with non-Euclidean geometry, which was born of an attempt to prove (or validate) Euclid’s fifth postulate using the other four. The method known as proof by contradiction was employed, but no contradiction was forthcoming, so seemingly unearthly geometries — in which parallel lines intersected well before infinity! — lurked in mathematicians’ closets. Such geometries were dusted off and dragged into the daylight when Einstein announced to the world that spacetime is curved.

~

Usefulness, truth and validity can be used in concert to get us out of philosophical black holes. You might agonize over the question: “If logic establishes truth, then what established the truth of logic?” Worse, you might arrive at some kind of sophomoric nihilism, based on nothing more than the discovery that contrary to hope or expectation, truths are rarely well-founded or absolute. But if we remind ourselves that logic persists in human society because it serves us well, then questions about the validity of validity-establishing systems or the truth of truth-discovering systems lose their fearful circularity. They are still circular — and going around in circles may not be as pointless as it first appears — but the discomfort is gone. With these three ways of measuring, we can perhaps resolve the apparent dichotomy between theory and application. Truth and validity have their uses. And perhaps usefulness is a truth of its own. Each aspect supports the other, but they can and do exist independently of each other.

~

Truth, validity and usefulness can also be deployed in less-than-admirable ways. In a debate, you might complain that a truth held axiomatically is ‘invalid’, or that a valid argument is ‘useless’. And all this can be done even if your opponent has stated clearly that he means to establish one thing — truth, validity or usefulness — and not all three simultaneously. This is the debating equivalent of the three-card monte. Point-scoring debates can evolve into genuine opportunities for learning and progress if we cooperate with our partner (no longer an opponent) by accepting premises provisionally, recapitulating arguments, suggesting truth-testing rubrics, or imagining the uses of ideas or techniques. A shared goal can make communication much more interesting; the sharp tools of science and logic can then be made subordinate to a particular orientation, rather than simply being used to shred particular statements or churn out valid-but-useless ones.

Usefulness can breathe life into truth and validity. Truth can shine a light on the workings of power, and confer meaning to validity. And validity gives us a way to be cautious about alleged truths or attributions of usefulness. The truth-validity-usefulness triad gives us a kind of Instrumental Reason with which to explore the world and ourselves. Investigating their complex dynamic — separable but interpenetrating — is the key.

~

The post is already too long, so I’ll just end with an image of how these three concepts often get tangled together with related and important concepts that are not always relevant.


Extraneous values can often obscure Truth, Validity and Usefulness