In a 2018 post, “Attention: Mind the Mind Gap,” I claim that one of humanity’s greatest problems is the mind gap: i.e., “our inability to really know—directly, firsthand—what one another is thinking.” I point out in a footnote that another crucial challenge is complexity.
I’d like to briefly explain what I mean by complexity and why it’s worrisome. I’ll also say a little about how I think we should navigate that worry, which in fact turns out to be not so different from navigating the mind gap. This is unsurprising, as the mind gap is itself essentially a result of complexity (more about this below).
Defining complexity is difficult. My use of the term is looser, though it does capture, for instance, the topic explored in Melanie Mitchell’s 2009 book Complexity, where one definition of complex system is given as: “a system that exhibits nontrivial emergent and self-organizing behaviors” (p. 13).
But this doesn’t define complexity itself. That is the topic of the book’s seventh chapter, “Defining and Measuring Complexity.” The long and short of it is that there’s no single agreed-on definition of complexity, and the one you flesh out will likely depend on the regions of the world that interest you (e.g., your field of research).
Or perhaps you’ll go with a more general definition, like the one given by David Krakauer of the Santa Fe Institute (with which above-cited author Mitchell is also affiliated), on the Santa Fe Institute’s podcast Complexity: “that domain of reality that balances the random and the regular” (@7:39, “David Krakauer on the Landscape of 21st Century Science,” 10/9/2019).
For my purposes, complexity characterizes the limits of what the human mind is capable of keeping track of: if there are too many things going on for a human mind to track, it’s complex. This is an idiosyncratic use of the term (I might, for instance, rightly be accused by a complexity theorist of conflating complicated and complex).
I could have used some other term, perhaps related to confusion or uncertainty, to capture the subjective and contextual quality of the thing. But I don’t think those terms are appropriate. We can be certain but mistaken, which is to say we can fail to feel appropriately confused, due to not having fully acknowledged the complexity of a thing.
One window into acknowledging this is the efforts we make to explicitly build uncertainty into our formal models of objects, relations, and events in the world (I’ll sometimes call those ORE, as a kind of rough shorthand for the stuff that makes up the world).
This is perhaps easiest to see in probability theory, particularly as a numerical activity (not all probability involves numbers, more about which in a moment). Here’s what I mean.
The probability that this coin lands heads is 0.499377334994, but we’ll call it 0.5 with no problem; actually, with less problem: it’s a profitable improvement to round to 0.5.
Rounding off the world, so to speak, will be the central theme here. It’s how we deal with complexity, and is often profitable. But it’s also often dangerous. Ok, back to the example.
When I flip this coin, I assign 0.5 because I lack the ability to track all the stuff going on in the world that guides the coin’s behavior. After I see it land, if I conceal from you that it landed tails, then the probability for me that it landed tails is 1, but for you it is still 0.5.
Or I suppose, we might say it is really 1 − 0.499377334994 = 0.500622665006 for you, but this is only known from God’s (let’s call it) perspective, at least with respect to the probability you should “really be assigning, strictly speaking” (for God, the probability was always 1 that this coin would land tails at this moment). Admittedly, it’s not so clear what all of this talk really means. Probability is strange and difficult, and, like so much else here, we could talk about it endlessly.
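To make the rounding-off concrete, here’s a minimal simulation sketch. The bias value is just the made-up number from the example above, and the simulation is purely illustrative:

```python
import random

# The made-up "true" bias from the example above (an illustrative
# assumption, not a fact about any real coin).
P_HEADS = 0.499377334994

def observed_heads_frequency(n, p=P_HEADS, seed=42):
    """Simulate n flips of a p-biased coin; return the frequency of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# Even after a million flips, the observed frequency is practically
# indistinguishable from the rounded-off 0.5.
freq = observed_heads_frequency(1_000_000)
print(round(freq, 2))
```

For any purpose a human flipper cares about, the difference between 0.499377334994 and 0.5 is invisible, which is exactly why rounding is a profitable improvement.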
But the point is that God knows everything there is to know about the world, and so there is no complexity for God. Simple mathematical examples such as the above (and others I’ll share presently) give a quick glimpse into what I mean when I say “what the human mind can keep track of.”
I do think, then, that the quasi-mathematical dimensions of complexity—as discussed in Mitchell’s book—are well worth holding onto with respect to the phenomena to which I’ll be applying the term.
That said, the basic idea is that we are constantly faced with complexity. We deal with it, consciously or not, in formal and informal ways. All of these dealings amount to some sort of shorthand organizing principle. Such a principle might be best characterized as a model, map, representation, shorthand, slogan, identity-marker, name, narrative, lens, or as some system within which those things are instantiated (e.g., language, perceptual systems, mathematics, etc.).
But maps and models and so on all amount to the same thing: a way to make intelligible, for the human mind, some set of objects, relations, or events whose size and/or activities are otherwise too large or busy for human minds to track, but that, for whatever reason, some human mind has an interest in tracking.
The gain is that the set is made trackable, such that it can be engaged with in practice. But there’s always a cost. Sometimes the cost is utterly unimportant or is outweighed by the benefits. Sometimes the cost is too steep.
To see what I mean, I’ll start with an extremely basic example. One that is usually profitable. This example is emblematic of the broader phenomenon I’m worried about here, which extends to all aspects of human life.
When I say to you, “hand me that mug,” I am abbreviating countless events. Even the basic act of grasping the mug involves an astronomically high number of movements and relations (e.g., the constantly changing distances between fingers and palm and mug).
And the mug itself is made up of trillions of atoms. I could refer to the mug by each of those atoms and its relative position in space (broken up discretely, lest we run into something like Zeno’s paradox), while also describing each atom in your arm and hand as it moves through space, and so on. We’d be here all day. Instead, I just say, “hand me that mug.”
Now, “that mug” isn’t really shorthand for “that set of such-and-such particles in such-and-such region of space.” Rather, it refers to what is experienced by you and me as a unified object that moves through space and time and various uses (a small chip doesn’t make it a different object, nor does using it for tea, coffee, soup, holding ink pens). We don’t experience particle-01, particle-02, … and so on, “arranged mug-wise” (as mereological nihilists like to put it; more about mereology shortly). We experience that mug, and we do so in various ways, such as with vision and touch. Let’s consider vision.
When photons bounce off of that set of particles and into my eyeballs, etc., the result is a sort of mental picture. You know: the one you have when you look at a mug (assuming you are sighted; feel free to exchange the example to align with your own experience). I like to call such mental pictures representations. But you could characterize them as models or maps, and so on, of the mug.
Some of those characterizations strike me as more controversial than others, but the basic idea is the same. The representation is a simplified, but highly trackable and indispensably useful, take on what really amounts to trillions of particles held together by forces in a certain arrangement for some duration of time.
I’m not worried today about carefully distinguishing between representations and models and maps, and so on. What I am interested in is persuading you that this example of a mug goes on in cases that are much costlier and more complex.
In fact, even at the basic level of what counts as an object, I could linger all day. We talk about solar systems, heaps of sand, unassembled jigsaw puzzles, and even arbitrarily conceived sets (e.g., a bag of groceries, the real number line) as singular objects. A book is a singular object, as is each page of the book, as is a library of books. A table is an object, as is each of its legs.
Were I to glue a mug to that table, would this too be a singular object? If so, why do I have to glue it? Why must the mug be anywhere near the table? And so on.
Such questions are organized by philosophers under a sub-field called mereology. Someone who thinks that any two objects make a third object (i.e., the composition of those two objects) is a mereological universalist (if they think this extends over time, even, so that this mug and Cleopatra’s left ear compose a singular object, the person is a four-dimensional mereological universalist).
Someone who thinks no actual objects really exist, aside from irreducible (or indivisible or part-less etc.) particles (i.e., simples), is a mereological nihilist. I am one of those.
And there are positions between these extremes. I’ve written about this here: “In Favor of Compositional Nihilism: A Response to the Organicism of van Inwagen.”
The point at the moment is that we don’t really have solid answers to these questions. What we do agree on is that a mug is made up of trillions of particles, and “hand me that mug” is a convenient shorthand that, at least in principle, could be expressed in impossibly convoluted instructions.
Actually, as I already alluded to above, the words “that mug” are a linguistic representation of the stuff in the world responsible for the mental representation of that stuff. This is of course also true, for instance, of things like pointing at the mug or painting it, and so on. This stuff gets difficult very fast. But we generally navigate it effortlessly. “Hand me that mug” (in whatever language you like) is a trivially easy instruction for most humans to understand.
I’d like to emphasize just how incredibly useful all of this is! Nor is it a guaranteed capacity: not every adult human visually experiences the handle and body and so on of the mug as a singular object, or a certain set of eyeballs, nose, mouth, and so on as “Tonya’s face,” or just as “Tonya” (see, for instance, the Wikipedia entries on “Face perception” and “Prosopagnosia,” a condition also known as “face blindness”).
The proverbial example is each of several people mistakenly experiencing the tusk, tail, ears, and so on of an elephant as the totality of the creature. This is a nice example, as it suggests that, if you step back, you will see the complete entity come into view, at least as a physical description. (It won’t tell you other things that require closer looking, like that they are endangered mammals.)
A similar common image is that of seeing the trees but missing the forest by looking too closely at the details: we must back up to see the forest.
But we only need to back up because an individual human mind is unable to assemble the forest by learning everything about trees. This begins to head us in the direction of emergent properties, an involved topic I don’t wish to linger on here. But I will quote the following from neuroscientist Michael Gazzaniga’s 2011 book Who’s In Charge?: Free Will and the Science of the Brain (from a section called “Complex Systems”):
If you look at car parts, you won’t be able to predict a traffic pattern. You cannot predict it by looking at the next higher state of organization, the car, either. It is from the interaction of all the cars, their drivers, society and its laws, weather, roads, random animals, time, space, and who knows what else that traffic emerges.
Keep this thought in mind below, when I inevitably turn to the complexity of whatever it is that is responsible (e.g., the activity of neurons in response to external stimuli) for the goings-on of the human mind.
As useful as our capacity for modeling or mapping or abbreviating objects, relations, and events (again, let’s call that ORE) is, our efforts are still incomplete, thankfully: I cannot imagine what it would be like to look at a mug and see sextillions of independent and distinct parts, relations, colors.
Let’s go now, beyond basic perception.
If I wish to refer to (say) people who voted for Barack Obama, I need not list each such person by name every time. I say, rather, “people who voted for Barack Obama.”
We use such shorthand in mathematics, as well. We can draw attention to, say, all (x, y) coordinates on a standard Cartesian plane such that x² + y² is less than or equal to 9, in order to represent a certain region covering infinitely many points (that region, bound by a circle, is shaded here in purple):
As a (necessarily limited) human, I can quickly sketch the above without a calculator’s help by thinking of the inequality in terms of the Pythagorean theorem (i.e., a² + b² = c²), where the square root of 9 is the circle’s radius, and marking where the graph crosses the axes: (−3, 0), (3, 0), (0, −3), (0, 3). I can connect those points to make a circle (because I already have a nice informal understanding of what a circle is), and then shade everything in.
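A quick way to see this shorthand at work without any graphing software is to test points against the inequality directly. This crude character sketch (a rough stand-in for the shaded diagram, checking only integer lattice points) is just an illustration:

```python
# Print a crude map of the region x² + y² ≤ 9: "#" marks integer
# lattice points inside (or on) the circle of radius 3; "." marks
# points outside it.
RADIUS_SQUARED = 9

def inside(x, y):
    """Does (x, y) satisfy x² + y² ≤ 9?"""
    return x * x + y * y <= RADIUS_SQUARED

for y in range(3, -4, -1):
    print("".join("#" if inside(x, y) else "." for x in range(-3, 4)))
```

The four axis crossings mentioned above, (−3, 0), (3, 0), (0, −3), and (0, 3), all satisfy the inequality with equality, so they sit exactly on the boundary circle.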
Back to the mug. Any sort of robust representation of a physical mug (e.g., as an object for human perception or for computer modeling) will involve some similar bit of shorthand as the above, and in fact much more of it: the surface’s textures, with their relatively drastic peaks and valleys, will be smoothed over, and no matter how circular the mug’s base may appear, it won’t be a perfect circle.
Indeed, even the above-diagramed circle is an approximation in order to raise in the viewer’s mind the notion of this conceptual object we call circle. Most non-mathematicians have perfectly workable informal notions of what circles are. For the mathematician, the circle may be defined as the set of all points on a plane that are equidistant from a given point (we call that given point the center and that distance the radius). As noted above, we can translate this definition in terms of the Pythagorean theorem.
The outline of the above “circle” is not literally the set of all points: it is pixelated, and the closer you look (e.g., at the material composing your screen) the more gaps you’ll find. These aren’t the only problems with our “circle,” but I trust you get my point.
Back to the mug. There’s more to a mug than its being an object of perception. Modeling a mug as a three-dimensional object, for instance, is distinct from representing it, say, in a sales catalog: “this is a beautiful object that will improve your day by enhancing your experience of coffee”; or as evidence in a courtroom trial: “this is the weapon with which the defendant allegedly struck the deceased.”
We can represent mugs in ways that are strictly true, but have extremely little to do with mugs. Suppose the following true description showed up in a dictionary:
Mug (noun): It is not an aardvark.
(And then under aardvark: “It is not a mug.”)
Or, to borrow an example from Bertrand Russell, suppose we correctly describe mugs as a type of thing of which this mug is a token, another token of which was “the last thing Caesar saw before he died.”* This might be strictly true, but we couldn’t very well replace all instances of the word mug with that phrase unless our interlocutors understood the change was taking place.
[For a related topic, see this 2017 Edge article by rogue mathematician Eric Weinstein, in response to the question What Scientific Term or Concept Ought to Be More Widely Known?: “Russell Conjugation.”]
And depending on one’s purposes, the mug may indeed be thought of in terms of its parts. A ceramist might think of the handle and rim and glaze and so on as meaningfully distinct parts. A chemist or physicist might think, in some contexts, of smaller parts and yet smaller parts still. An antique dealer might think of the mug in terms of where it has been, by whom owned or restored (and in what way).
It turns out that, were we to collect all the facts about a given mug, there would be far more to it than merely listing out its physical parts! I bet we’d find ourselves as far back as the Big Bang.
But “hand me that mug” is generally all we need in order for it to be handed over.
As convoluted as this has already gotten, the mug example is about as simple as we can hope to get when it comes to our basic, day-to-day modeling of the world, of ORE. In the simplest case: you cast your gaze on the object about which I said so much above, and you see a mug. Perception may be a complex process in its own right, but it takes me no effort to see in front of me the mug and the many other objects surrounding me at the moment.
There are plenty of other examples, even at the level of perception. Two shades of green may appear identical to the most sensitive human eye, and may in fact yield identical readings on a measuring device, yet a more finely tuned device may report the shades to be quite far apart; humans happily call both shades green. The same can be done for two identical-sounding pitches. And on and on.
Before going on to more complex examples, I’d like to emphasize just how quickly the human mind is outpaced by relatively small sets of ORE. If you are familiar with basic combinatorics, you might want to skim a bit.
Suppose you have three distinct books, labeled A, B, C. There are six ways to order these books on a single shelf: ABC, ACB, BAC, BCA, CAB, CBA.
Actually, a tree diagram better represents what’s happening here mathematically:
There are three ways to fill the first open slot on the shelf, two ways to fill the second (i.e., you have two books to choose from after filling the first slot), and one way to fill the third. That’s 3 × 2 × 1 = 6. Likewise, if we had four books, it would be 4 × 3 × 2 × 1 = 24.
We have a shorthand for this: 3! (“three factorial”) = 6. Similarly, 4! = 24. And 5! = 5 × 4 × 3 × 2 × 1 = 120. So, there are 120 ways to arrange five books on a shelf. And 6! = 720 ways to order six books on a shelf.
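The factorial shorthand can be checked against a brute-force enumeration; here’s a minimal sketch using Python’s standard library:

```python
from itertools import permutations
from math import factorial

# Enumerate every ordering of three distinct books on a shelf.
books = ["A", "B", "C"]
orderings = ["".join(p) for p in permutations(books)]
print(orderings)  # ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA']

# The shorthand n! matches the brute-force count for small n.
for n in range(1, 8):
    assert factorial(n) == len(list(permutations(range(n))))
```

Brute force is only feasible for small n, of course, which is rather the point of what follows.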
These numbers are growing fast. How many ways are there to arrange 15 books? Take a guess, then scroll down.
… mulling over…
It’s 1,307,674,368,000. So, if you’re trying to seat 15 people in a row, there are over 1.3 trillion ways things can go. At 59 books (or people, or distinct playing cards), the number of permutations now well surpasses the number of atoms in the visible universe, the latter of which is estimated to be about 10⁸⁰ (“visible” amounts to upwards of 2 trillion galaxies; read about it at Wikipedia: “Observable universe“).
That’s just 59 objects. One hundred objects? Forget it. Unfathomably huge. Problems involving just 100 people (e.g., seating them at a round table** so that any two neighbors “enjoy each other’s sense of humor”) can be so difficult that it would take a computer far, far, far more than 10⁸⁰ years to mine for every satisfactory seating arrangement.
[**Note that the number of ways to order objects in a circle is not n!, but (n−1)!. Keywords for beginning to look into potentially intractable computational problems are “P vs NP” and “computational complexity theory”—e.g., see the “Introduction” handout at this Tel-Aviv University course.]
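These counts are easy to confirm with exact integer arithmetic; a small illustrative sketch:

```python
from math import factorial

# Circular arrangements: rotations of a seating count as the same
# seating, so n! linear orderings collapse to (n - 1)! round-table ones.
def round_table_arrangements(n):
    return factorial(n - 1)

print(round_table_arrangements(4))  # 6

# 15 books (or people) in a row:
print(factorial(15))  # 1307674368000

# Just 59 distinct objects already yield more orderings than the
# estimated ~10**80 atoms in the visible universe:
print(factorial(59) > 10**80)  # True
```

Exact arithmetic is the easy part; actually searching through those orderings for ones that satisfy some constraint is where computers hit the wall.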
Now figure, there are about 7.8 billion humans on Earth at the moment, and let’s say about 86 billion neurons in a typical adult human brain… that’s a lot of moving parts. Not to mention all the stuff like, I don’t know: trees, wind, butterfly wings, water molecules, and we’d be here all day listing them.
“That mug” has nothing on the complexity of human behavior, especially of society (however defined). But let’s look at a smaller set of ORE: a human individual.
My name is Dan. This is shorthand for a person who’s been on Earth for nearly 48 years now, starting as a chubby infant who hadn’t yet learned to talk, and who is now a middle-aged weirdo writing these words. My name is a shorthand. But shorthand for what? Whatever it is, it too seems to be some sort of representation (or model or map) or another.
My representation of myself will almost certainly be nothing like the one you have of me. And my own will of course be imperfect and subject to changing over time. Even my representation of the representation I had of myself last year could be wrong (i.e., “how did you see yourself last year? what sort of person did you then take yourself to be?”).
But my self-representations are also more complete than yours are of me, in certain crucial ways: I know right now what the room I’m sitting in sounds like, but you don’t (even if you happen to be Dan, 20 years from now).
Personal identity is hard. Like much else here, we could go on all day about it (talk of old theological frameworks and making sure the right person is condemned to eternal punishment in Hell would surely come up). Instead, I’ll say only the following.
Whether or not I’m literally the same person (whatever that means) today that I was five minutes ago, yesterday, or five years ago, it is difficult to say that, at this moment, I’m literally the same person, in our ordinary understanding of that term, as the chubby little baby my parents named “Dan.” It’s a convenient shorthand, but it is a shorthand for something—for some set of objects (e.g., the cells that have composed my body, my changing visual aspect and general manner of comportment, or character, etc.), events, relations, and so on.
Extend this to social groups—political parties, races, ethnicities, Justin Bieber fans, people who voted for Barack Obama, Baby Boomers, commercial airline pilots, the Supreme Court, everyone between five and six feet tall waiting for the L train at Union Square at 3pm on October 12th in 1974, and on and on and on and on.
Whatever your thoughts about the literal existence or non-existence of those and many other social groups as things in themselves, the groups are made up of individual members (at a given moment and/or across time), and social groups depend on the organizational capacities of the human mind in order to be conceivable as things that can be talked about by humans. This requires much greater organizational capacity than does perceiving a mug. By “organizational capacity,” I mean the capacity to establish and work with models, maps, representations, and so on.
Of course, we could invent groups that couldn’t possibly be tracked (every human with an odd number of brain cells between 3:00:00pm and 3:00:01pm today), but the very category itself is intelligible only because the shorthand for that category (“every human with an odd number of brain cells between 3:00:00pm and 3:00:01pm today”) is intelligible.
I hope you get the idea. These social group categories, which I think in reality refer to fictions that we behave as though were real, are convenient ways of modeling individuals as large groups of people. This has its advantages, but can turn bad fast. Stereotyping comes to mind, but is not the worst thing I could mention.
I have barely scratched the surface of how we turn to models in order to deal with complexity, not to mention how it can go wrong. The common phrase “the map is not the territory” is useful here (a sentiment that has its own Wikipedia section). It is common in science and applied mathematics to hear this warning, which essentially amounts to: “don’t confuse your model with reality.” An important reminder. The fallacy is a tempting one that even excellent and well-intentioned researchers too often fall prey to.
And the more complex the territory, the less it is mappable (i.e., summarize-able for human understanding). Highly complex territory still admits of maps, but extremely limited ones. While this may pose dangers to researchers even in fields where this fact is constantly acknowledged, it is especially dangerous where you’re less likely to encounter constant acknowledgement that the map is not the territory. In fact, you might be told the opposite!
This may be due to the nature of the field itself, or may be due to how a given set of research findings are presented to the public (e.g., with no mention of maps or models or representations at all!).
The upshot is that the temptation to view the map as the territory is everywhere, including in some areas of academic study. I sometimes express this as follows. As undergraduates, we may find ourselves fitted with specialized glasses through which to view the world. Then we go on to graduate training, where those lenses are fashioned into a sphere that is sutured onto our shoulders, so that in every direction we look, we see the world through that lens.
This may be a Marxist’s lens, a cultural anthropologist’s (cultural or biological or whatever) lens, a chemist’s (environmental or organic or whatever) lens, an economist’s (Keynesian or Austrian or whatever) lens, and on and on (I haven’t mentioned sociology, physics, biology… we’d be here all day).
Those all amount to models or maps or narratives or however you’d like to put it of what is in fact an extremely complex world that always has and always will stubbornly refuse to in fact be those models or maps or narratives. Just as much as any medical illustration is not a body.
I’ve still only scratched the surface of the ways in which we make the world’s ORE intelligible. Here’s one more.
We make informal probabilistic models of the world when we feel around in our guts for the chances that such-and-such a thing we might say or do will go well, and we make formal probabilistic models when we think hard about how, perhaps with the help of Bayes’s theorem, to quantify those chances. But we need not quantify them.
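As a sketch of what the formal end of that spectrum looks like, here is Bayes’s theorem applied to entirely made-up numbers (the prior and likelihoods below are illustrative assumptions, not data from any real case):

```python
# Bayes's theorem: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).
def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Updated probability of hypothesis H after seeing evidence E."""
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Hypothetical: a juror starts at P(guilty) = 0.1, then sees evidence
# that is 9 times likelier under guilt than under innocence.
updated = posterior(0.1, 0.9, 0.1)
print(round(updated, 2))  # 0.5
```

Evidence that strongly favors the hypothesis still leaves the juror at a coin flip here, because the prior was low, which is exactly the sort of thing the formal machinery makes visible and gut feeling tends to miss.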
We fall somewhere in between those two places when, while on jury duty, we deliberate hard about whether the defendant committed the murder; we still don’t put a number on it, but we do have a subjective, “reasonable-doubt” threshold against which to orient ourselves.
And here we’re back to the mind gap. We try to figure out the mental state of the defendant based on our intuitive assessment of whatever evidence we have, perhaps including some vague brain scans accompanied by competing interpretations, from expert witnesses, about what those scans represent.*** If we’re on the jury, that is. Otherwise, we make the assessment based on whatever evidence we might have (e.g., a video clip, a blurry photo, or a blurb about the event on a news website).
[***For a deeper critique along these lines, see the sixth chapter of Gazzaniga’s above-mentioned book. For a defense of “neurolaw,” see Peter Alces’s 2018 book, The Moral Conflict of Law and Neuroscience.]
You get the point. The danger of confusing the map with the territory is everywhere. Actually, “the territory” may itself be misleading. Better: “a territory,” because not everyone takes the map to refer to the same territory, to be an organized summary of the same ORE. Slogans are prey to this (their malleability, in fact, is part of what makes them powerful).
Here are some other examples from the hopelessly inexhaustible list of how we summarize impossibly complex regions of the world.
An icon on a computer desktop, and the interface the user works in, are not the actual software, nor is whatever the software is designed to manipulate (e.g., some humongous data set) actually the thing that we’re using that manipulation to better understand.
Strategically developed metaphors and slogans and symbols and so on may become much like “icons” themselves, used as levers for engaging with an otherwise irreducibly complex social world (where social world refers to some unfathomably large set of ORE). Often these (and many similar) icons, in this computer-age sense, come to stand in for other humans.
But this too encourages a kind of metaphor, of the sort often found among those who filter the world through a lens of computer science (e.g., “conscience and law and ethical systems and social norms are kinds of software that guide individual behavior….”). Often I’m unsure whether people who speak this way are being figurative (as I hope they are) or literal. The human mind is not wax, an assemblage of gears, or, as contemporary metaphors suggest, a computer running a naturally evolved operating system. Not literally, anyway.
And here again I find myself talking about human minds. I suppose this discussion inevitably will always lead me to the most complex sort of thing there is: intelligent sentience. And I certainly don’t just mean that of humans. Cephalopods come to mind.
I’ll wrap up by saying what I intend to suggest with my critique.
Let’s start with a question. Is there an ironic, or paradoxical, danger here? Developing a critique that rejects models, maps, representations, symbols, metaphors, icons, umwelt, narratives, lenses as means of really and truly and completely knowing the world, seems to admit of an ironic turn in which that rejection itself becomes the all-encompassing lens through which one sees the world.
I think this danger, if it really exists, is avoidable. My suggestion here is not to reject these things wholesale, but to be aware of the impossibility of escaping them, and to do our best to be good cartographers.
My sincere hope is that we can learn to foster an appropriate recognition of complexity, so that whenever we find ourselves faced with it, the default position is one of humility, compassion, attention to one another and oneself, concentration, the hard work of trying to imagine the scope of what we can’t imagine—or whatever the appropriate response is.
If spinning a cube around in our mind is hard, how can we hope to construct and then spin around a trustworthy model of the hidden contents of another human’s mind? Of the collective thoughts and emotions and behaviors of an entire group, of a civilization, of people long dead?
Does this mean we have to treat life like one big, impossibly difficult math problem? Not all of life. Not those “hand me that mug” moments. But most of it, yes. Maps, metaphors, symbols, models, representations, narratives, and on and on are just that: maps, metaphors, symbols, models, etcetera. They’re often lifesaving, often dangerous, always incomplete.
We are all cartographers, all the time. Like it or not, know it or not. Beware complexity.