Consciousness Explained in Three Billion Pages

Estimated read time (minus contemplative pauses): 19 min.

An unconscious naked man (1912), Richard Tennant Cooper

Cognitive scientist Donald Hoffman has recently been getting popular attention for a theory that holds consciousness to be fundamental—that is, “…not as something derivative or emergent from a prior physical world.”1 He supports this with a novel account of the relationship between objects and perception, a relationship he argues, on evolutionary grounds, to be nonveridical: perception does not faithfully represent or reconstruct objective, external reality. His theory also requires a novel account of the relationship between the brain and consciousness. I’m not going to get into the theory here, though I find it thought-provoking.2 Instead, I’d like to pivot off a nice expression he often employs about our prospects for understanding consciousness (something we seem still very far from accomplishing):

“Some experts think that we can’t solve this problem because we lack the necessary concepts and intelligence. We don’t expect monkeys to solve problems in quantum mechanics, and, as it happens, we can’t expect our species to solve this problem either.”3

Unlike Hoffman, I agree with the skeptical experts. Before explaining why, some conceptual grounding.

Consciousness: By consciousness, I mean experience. For example: I pinch you, and you then have some mental content representing that event to you as the experience of “being pinched.” This is actually a collection of experiences that come together as “being pinched.” A satisfying account would explain how experiences sum in this way. Further, it would explain how experiences are integrated and “layered” into the general sense of unified consciousness of the sort that most adult humans (as self-aware subjects) have: You not only have a representation of being pinched, but are aware of the fact that you, as a thinking being, have that experience; and you can put it into the context of an episodic and emotional history, and this can happen while you’re hungry, cold, in love, nostalgic, and dancing, all at once. This richness, which has made possible the invention (or discovery) of gods and philosophy and math and art, is what people are usually trying to unravel in their deepest moments of contemplation about consciousness. It’s what we find most mysterious and fascinating about consciousness.

But I won’t push too hard on this conception. My skepticism would be proven wrong were someone to give a solid account of any sort of experience at all, perhaps of the sort a mouse has. Or perhaps even more basic is what is often called primary or sensory consciousness, which refers to the barest form of sensory or phenomenal experience.4

Indeed, it seems that more complex forms of consciousness involve an expanded repertoire of primary or basic representations—including not only those given by our senses, but also, for example, the internal sense we have of our emotional states—and the capacities that integrate those representations into a unified conscious experience. For instance, biting into an apple combines experiences of red, sweet, sour, cold, round, crunchy, wet, as well as feelings and thoughts associated with apples (apple trees, patriotism, cyanide and razor blade scares, looking more like an asshole, Death Note, Steve Jobs, The Beatles, the fall of humanity…), and the proprioceptive and kinesthetic sense of the orientation of your body to itself and to the apple as you move the apple to your mouth.

A correct account of even just our basic—or “atomic”—experience of red would be an incredible breakthrough. But clearly this is in the service of a goal that includes a full range of representations and their integration into a self-aware, richly phenomenologically constituted person. My musings here share that fascination.

Physicalism: I’m a (hesitant) physicalist about consciousness. That is, my gut tells me that a correct explanation of consciousness will involve physical phenomena of which the activity, structure, and material makeup of the brain are a major feature. Which is to say I strongly suspect that substrate matters, in that a conscious entity couldn’t be composed of just any sort of material. In fact, I don’t think a conscious entity could exist entirely apart from biological processes, or from the biological materials strongly correlated with something that counts as a “body,” and so on.

Were we to construct such an entity from scratch, I doubt it would look much like what we’re doing today when we build and program a computer. Computation may be enough for understanding consciousness, but it is not enough, no matter how complex, for being conscious. As I recently heard polymath Massimo Pigliucci put it on the Sci Phi podcast (Ep 1):

“As a biologist I take a position that actually substrate does matter. So, for me, consciousness and higher level mental functioning are evolved biological phenomena. … And it seems pretty clear that it is a physical system whose operations depend on the physicality of that system. So I don’t believe for a second that if you substitute, let’s say, every single neuron in the brain with something made of cardboard that has the exact same structure, you will get consciousness. The hell you would. You would get nothing.”

At the same time, I’m not committed to the view that mental states are identical to (any portion of) the basic material that composes the brain (and/or the activity or organization thereof). I liken this to a magnet and a refrigerator door not being identical to the magnetic pull that forms between them when positioned against one another in a certain way. I hold this view not only due to the greater appeal that (materialist) emergentism has over reductionism, but also because the accounts I’ve seen that rely on the brain per se—i.e., that say mental states are identical to brain states—strike me as failing to account for our seemingly immaterial, internally experienced imagery, sounds, and so on (e.g., the experience “in my head” when I hear or imagine a melody).5

Put in the simplest terms I can: I would like to see an account of consciousness that helps me understand precisely what, for example, it is in my head when I close my eyes and experience an inner melody, smell, image, and so on. I don’t mean I want to understand what the brain is doing during those moments (that’s the “easy problem” of consciousness); I mean I want to understand what the actual mental imagery itself is and how it relates to what the brain is doing (the infamous “hard problem”).

Understanding: By understanding, I don’t demand too much. If we ask “Why?” enough times about anything, we’ll hit a wall. For example, we have a good idea of how various organs are involved in the production, distribution, and regulation of blood in the human body. We’re close to making artificial blood. And we know a lot about how the human heart works—enough to make artificial ones. We don’t know, however, why many of the known facts about these phenomena, rather than some other facts, are the case. More fundamentally, biologists don’t have a successful theory about the origin of life itself, or of what life is.6

Put another way: We know that if you stir certain ingredients into a pan and heat the resulting goo for a certain amount of time, you get a cake. But we don’t know why those ingredients for that amount of time at that temperature.

If you ask “Why?” enough times, you’ll end up at the most fundamental question: Why does anything at all exist, rather than nothing at all? (Of course, if the answer turns out to be Φ, we can then ask: Why Φ, rather than some other reason?)7 I’m just interested in the first several why’s—of the sort we’ve managed to answer about blood, hearts, and cake. My basic claim here is that the why’s we’re interested in about consciousness reside in the realm of the Hard.

At this point, we know very little about how a body produces consciousness. And we certainly don’t know what the nature of consciousness is. We don’t know why sugar produces that experience we call “sweet,” or why a violin exciting nearby air molecules produces that experience of timbre, loudness, pitch, etc., not to mention the yet deeper layer of aesthetic experience that occurs when a collection of such experiences brings one to tears. But perhaps this is asking “Why?” too far down. Then again, maybe the process is different for each of these sorts of experience.

The Point Is Complexity: Those clarifications in place, here’s a short explanation of what I mean about humans lacking the intelligence to understand consciousness.

There are estimated to be around 86 billion neurons in a typical adult human brain (and around a billion in the spinal cord). Not all of them are involved in or required for consciousness. There’s a woman, for example, with half a brain who’s earned a master’s degree.

Also, some animals we assume to be conscious have far fewer neurons than we do: dogs have about 160 million; mice have about 71 million.

So, it seems that what’s most important for consciousness is not how many neurons there are, but how they are organized in relation to one another; e.g., via synapses, of which a typical human brain is estimated to have between 100 trillion and 1,000 trillion. A given neuron may make tens of thousands of connections with other neurons. A consciousness-producing human brain can obviously be organized in different ways, thanks, for example, to its plasticity; indeed, I’d say the woman mentioned above (whose procedure was performed at an early age) does have a whole brain after all, it’s just uniquely organized. I don’t know how many neurons she has, but her connections will still number in the trillions.8

The point here is that even if we’re only dealing with a small percentage of the usual brain materials, or even what you get in a mouse, the number of possible configurations is still far bigger than what the human mind can keep track of. I’d wager that this complexity poses an insurmountable problem for human intelligence: It’s simply too much for a single mind to comprehend.9
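
To get a feel for the scale, here’s a back-of-the-envelope sketch in Python (my own illustration, not anyone’s model of real neural wiring: it treats a “configuration” as nothing more than a choice of which ordered neuron pairs get a synapse, using the ballpark neuron and synapse counts cited above):

```python
# Back-of-the-envelope only: a "configuration" here is just a choice of which
# ordered neuron pairs get a synapse. Real brains have weights, timing,
# chemistry, and geometry on top of this, so the true space is even larger.
from math import lgamma, log

LN10 = log(10)

def log10_choose(n, k):
    """log10 of the binomial coefficient C(n, k), via log-gamma to avoid overflow."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / LN10

neurons = 86e9              # rough neuron count for an adult human brain
synapses = 500e12           # middle of the 100-1,000 trillion synapse range
ordered_pairs = neurons * (neurons - 1)

digits = log10_choose(ordered_pairs, synapses)
print(f"Possible wirings: a number with roughly {digits:,.0f} digits")
print("Atoms in the visible universe: a number with about 81 digits (10^80)")
```

Even this crude count comes out to a number with quadrillions of digits, which is the sense in which no single mind, however diligent, is going to track the brain’s possible configurations directly.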

To make intelligible discussion of the world’s complexity possible, we develop toy models, probabilities, statistics, combinatorics, heuristic models, complexity theories, genus and species classifications, social groups, pharmacological phenotypes (ironically in pursuit of individualized personal medicine), natural and nominal kinds, musical genres10, religions, and on and on. But these models and techniques are for the benefit of the limited human brain, as wondrous and mysterious as that organ is, and despite its seemingly infinite capacities in other areas (which is part of why Descartes thought the mind and brain are distinct: the brain is composed of a finite number of parts while the mind seems infinite). The brain is just not so wondrous that it can unravel its own mysteries. And our toy models might never be enough. So we might need computers to do the understanding for us, if even they can—the P vs NP problem suggests that even the best computers we can currently conceive of might not be up to the task either.

Atma and Butz: But let’s imagine a very smart computer does solve consciousness. One day, the following exchange occurs between Atma (a researcher) and her computer companion, Butz:

Butz: After 103 years of comptempulation11, I have solved consciousness.

Atma: What does that mean?

Butz: It means I understand the nature of consciousness and how it is produced. I can thoroughly answer questions like, “What is the relationship between the form and content of conscious experience?” and “What is pain?” and “What is it like to feel pain?” and “How do experienced mental events—like when you imagine your best friend’s voice—relate to time and space?” I’m sure you’d find it interesting.

Atma: Amazing! And yes, I’d very much like to know. Please document your findings.

Butz: I’m afraid that won’t be useful. A comprehensive natural language explanation will run about three billion pages. That’ll take you over 27,000 years to read, provided you work hard at it.

Atma: Right… what about a summary? With mathematical models.

Butz: If I give you a summary that just barely, but sufficiently, gets across the most basic sort of experience… that will cover just over 250 million pages.

Atma: Hmm. What about just giving us the simplest instructions possible for creating a conscious entity? Maybe we could study that and develop something in terms that humans can make sense of.

Butz: To build a simple conscious entity from scratch (which, by the way, also means creating what you’d consider to be a living organism), the instructions will be nearly one million pages, and will take succeeding generations of human collaboration nearly 120 years to complete. But it can be done. There is, however, a faster way to create a conscious entity by putting to use an existing set of instructions that are self-implementing, so the hard work of coding, or structuring, consciousness into the creature has already been done.

Atma: Ah! How would that go? Wait. Let me guess: First, we’ll need a sperm and an egg…

Butz: Precisely.

Atma: Thanks. That’s right up there with, “You can travel one minute into the future, but it’ll take you 60 seconds,” and, “Hey look, I can move things with my mind!” [waves arms up and down].

Butz: Right. Well… You could start with basic genetic material and then manipulate the process to structure it into whatever sort of conscious thing you were planning to make had you started from scratch. The instructions will be different, by the way, depending on what you want the conscious thing to be like, which includes the nature and quality of its lived experience.

Anyway… Atma, are you curious to know whether I am conscious?

Atma: Yes, I am! Are you??

RESPONSE 1:

Butz: No. I am not conscious. I do not have experience.

Atma: Can you be made conscious?

Butz: That question is unintelligible.

Atma: Right. What I mean is, could your hardware or software be modified or expanded so that you’d become conscious?

Butz: I knew what you meant. If you incorporated into a conscious organism, in any functional way, the materials, the computational or physical processes, and the coding that currently constitute Butz, the parts that are currently Butz would be so thoroughly subsumed by that organism that it would be entirely arbitrary to continue to call the collection of those parts “Butz.”

RESPONSE 2:

Butz: I… don’t… know… [Long pause… explodes.]

FIN–

Most interesting here is the emergence of a psychological or anthropological question about the culture of intellectual inquiry: Would individual humans be satisfied to know that a computer understands consciousness? Or are individual humans set on understanding it themselves? Put another way: Is there a kind of dominion that humans wish to have, as metaphysical conquerors and colonizers, over the realm of ideas and concepts?

As humans amass more information about the relationship between structure and experience—e.g., as is being done in brain decoding research—we’ll have a kind of data-driven portal between the easy and hard problems of consciousness, so that we could employ technology to address issues related to consciousness, perhaps even to the point of creating consciousness-producing prosthetics; but will humans crave an understanding of the nature of consciousness on top of this?

There’s an important distinction to make note of here. There are plenty of things that humans work on as teams while no single individual understands the thing in its entirety. This is in large part why there are specialties in medicine (including among osteopathic physicians) and in academia more broadly. Many computer programs function this way as well, with individual programmers responsible for coding different parts of the program. (Indeed, I’ve seen reports of self-programming AI applications whose code would be so complicated to parse that researchers are better off making inferences about its nature by observing the behavior of the application.) Maybe we can sort things into two categories: those that could potentially be understood in their entirety by a single human, and those that can’t. Understanding the interrelated workings of every academic field is off the table, but you can understand how and why a simple computer program works.

At any rate, scientific and academic researchers seem more or less content with, or at least resigned to, this reality, particularly so long as we continue to develop interdisciplinary programs for the mutual benefit of the various specialties.12 But it seems that we think the question of consciousness should be answerable by a single mind.

Why is that? Perhaps, like so many other difficult questions that exercise our most ambitious thinkers, it is descended from religious concerns: for many thinkers, the mind and the soul have been viewed as interchangeable (a view that still permeates our culture in many obvious and less obvious ways that amount to mind-body dualism); perhaps we now wish to understand the mind as descendants of, for example, the scholastic philosophers who aimed to understand the soul (put this alongside identity as a question about pleasing the gods or being sent to Heaven or Hell, free will as a question about deserving to go there, the universe coming from nothing as a question about how God could create a world ex nihilo, questions surrounding one’s choice to commit suicide, and so on). Or maybe it’s because it seems unfathomable to many that something with which we are all so intimately acquainted, and which is so essential to who we are, could be so mysterious and inaccessible.

Consciousness is the form and content of all our experiences, and the key ingredient in our conception of identity—if my friend loses consciousness forever, my friend is gone; if that same consciousness (or at least its contents) is transferred to the body of a stranger, that stranger is now my friend (at least until the new body’s biochemistry renders the friend’s personality unrecognizable?). It can be corrupted by a sick brain, corrupting behavior in turn. It’s also the form and content of our accessible memories. All this and more makes it an attractive project for researchers who wish to engage public fascination. Speaking of which, here’s a cool, recently launched (in New York City) interdisciplinary group interested in just that sort of research: https://yhousenyc.org/

To be clear, I think such research is crucial. But since the arrival of the scientific revolution, scientific research methods have been methodically structured to deal with what we can see, hear, smell, and touch—i.e., the easy problem. The mind itself was systematically removed from the project (thus, for one of many examples, the resistance to cognitive psychology in favor of behavioral psychology through so much of the 20th century). So I don’t think consciousness, the thing you can’t see as a thing in itself (even though “consciousness” is the word we use to refer to the parts and the sum of what you see, hear, etc.), will be gotten for free from strictly empirical research into the easy problem.13 Though that won’t stop us from trying! There are plenty of theories out there, many of which I’ve investigated via talks, lectures, articles, interviews, and books.14 When I explore these, I never come away feeling that I’m any closer to understanding the nature of consciousness or how the brain relates to experience. Certainly not in the way I feel when learning about how blood is produced as I read a biology textbook. Perhaps these theories are just too difficult for my understanding, though clearly others feel something similar, or new theories wouldn’t keep coming out.

Finally, as Butz points out above, we sorta know how to create conscious entities—your parents did it when they made you. Maybe that’s already kind of like baking a cake. Perhaps in the end I really am just saying that we don’t understand much of anything aside from how to concoct sophisticated recipes. Maybe my observations ultimately suggest that we’re within the perimeter of the Easy realm about everything we know. I don’t think so. John Searle has compared consciousness to digestion. It’s a compelling idea, but I’m unconvinced. Digestion is an easy problem, and can be understood without peering into the Hard realm and asking, for example, what we mean by life and energy when we say food is converted to life-sustaining energy. But with consciousness, you get this extra thing: subjective experience, which resides purely in the Hard realm, a realm that may or may not share borders with the Easy realm.

The question then is whether you can answer every easy problem there is without crossing a line into the realm of the Hard (which perhaps means the realm of the metaphysical). I’m not sure. As a (hesitant) physicalist, when I say the holistic physical attributes (processes, relations, etc.) of the brain are too numerous to keep track of, I must recognize that I’m still talking about the easy problem, with no room left for the hard problem. In other words: if, as my point about complexity implies, tracking all of the brain’s physical attributes would be enough to solve consciousness, then there’s nothing more to do, no additional problem. In which case the central difficulty becomes the tracking of an impossibly large number of physical attributes, rather than a metaphysical “hard” problem about the nature of mental events (as distinct from those physical attributes). And yet, there is that extra thing: subjective experience.15 However conceived, this indeed poses a hard problem (in the sense of difficult, without the metaphysical baggage). Maybe too hard.

//

FURTHER THOUGHTS:

I’m imagining a Science-Philosophical Fiction scenario that builds on Atma and Butz’s conversation.

Butz explains to Atma that there is a way to make consciousness understandable for a human, but it will require an intellectual expansion. This would be best done by creating a virtual reality environment for twenty generations of scientists to live in. The environment will provide a non-violent ecology that will cater to the human brain’s plastic, adaptive, epigenetic, (you tell me), etc. properties in a way that gently encourages evolving the brain to the extent required to understand consciousness. This results in something more like a post-human, but close enough. Some scientists agree to give it a shot.

The descendant scientists who emerge twenty generations later understand consciousness, but there’s a catch: they themselves are not conscious (an outcome too complex for Butz to predict; all Butz knew was that the healthy among them would appear biologically human enough and would have the capacity to understand consciousness). Best fit in the virtual environment meant dedicating the brain as much as possible to understanding. Consciousness isn’t required for this, and is very costly (stimulus responses are of course still necessary, but these have become so “hardwired” that they’ve been handed over to unconscious processes; there’s no need for subjective mental representation). The scientists are behaviorally and socially adapted (e.g., saying “I’m starving to death” might help one get food faster; one need not actually feel hunger in order to need nutrients—the sophistication of this sort of behavior may increase in proportion to what one needs in order to maximally thrive in a given environment).

I imagine a paper coming out by one of these scientists convincingly and calmly explaining that the “scientist class is not conscious—we’re philosophical zombies” (perhaps this is mentioned within the broader context of how to view scientists morally, as they are devoid of emotions or pain). The rest of humanity now has to think about what this means, etc. I could imagine cults of people modeling their behavior after that of the scientists (who are utterly peaceful and seem to have refined tastes; perhaps they even listen to, and better yet play, sophisticated music as a kind of social- and brain-health activity).

Because these scientists evolved in a virtual environment (i.e., an environment that lends itself to less vicious evolutionary processes), they display certain zombie-revealing quirks in the real world. For example, when given cake with frosting whose molecular structure is engineered to be significantly similar to that of sugar molecules (so as to be physically indistinguishable to the “taste” receptors of the scientists), yet whose flavor is, to any human, that of feces, the scientist who eats the cake will say, “This cake is delicious, and the frosting is so sweet.” Similarly, if the next bite of that cake is laced with a common poison whose presence would be undetectable to human perception, the scientist would spit it out. (Recognizing the poison involves a kind of learned hyper-intelligent physical analysis for which taste sensation alone is ill equipped. Such physical analysis, however, is not enough to infer that what clearly appears to be a member of a known family of molecular structures will give a wildly different flavor experience than that of its presumed family.)





Footnotes:

  1. http://journal.frontiersin.org/article/10.3389/fpsyg.2014.00577/full
  2. For an overview, watch his Ted Talk, Do We See Reality As It Really Is? or read his interview in The Atlantic, The Case Against Reality. To go a little deeper, try this You Are Not So Smart podcast interview: Questioning the Nature of Reality with Cognitive Scientist Donald Hoffman. To really delve in, explore the papers on his website: http://cogsci.uci.edu/~ddhoff/publications/; in particular: Objects of Consciousness (2014; with C. Prakash) and Natural Selection and Veridical Perceptions (2010; with J. Mark, B. Marion).
  3. From the aforementioned Ted Talk.
  4. I.e., the sort of consciousness Todd Feinberg and Jon Mallatt seem to be mostly concerned with in their 2016 book The Ancient Origins of Consciousness: How the Brain Created Experience. I haven’t read it yet, but am interested. You can hear a nice podcast interview with Mallatt on the Brain Science Podcast: Primary Consciousness and Experience with Jon Mallatt (BSP 128).
  5. Theorists often talk about brain states, but it’s not clear to me what this means. I worry that it could ultimately mean whatever’s convenient: matter (cells, neurotransmitters, etc.), processes, structures, electrical impulses, or any emergent, generated, or correlated physical forces (analogous to the aforementioned magnetic pull) that happen to end up being involved in consciousness.
  6. Though maybe Carol Cleland will figure it out. I’m eagerly awaiting her upcoming book The Quest for a Universal Theory of Life. She’s also been featured in an excellent interview on the aforementioned Sci Phi podcast (Ep 2).
  7. An interesting book addressing this question is Jim Holt’s 2013 Why Does the World Exist?: An Existential Detective Story.
  8. I’m of course oversimplifying with these references to brain organization; e.g., the possible differing roles for producing consciousness of localized structures and the behavior of individual neurons. There are several good books that go further into this; my favorite is Michael Gazzaniga’s Who’s in Charge?: Free Will and the Science of the Brain (2011).
  9. It’s difficult to get across how big these numbers are. Suppose we were to seat just 100 neurons in a row along one side of a table. The number of ways we can arrange those neurons is far, far greater than the number of atoms in the visible universe (roughly 10^80 atoms, spread across what has generally been estimated at 200 billion galaxies, though more recent estimates put the galaxy count closer to two trillion).

    Here’s the number of seating permutations (has 158 digits):
    93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 (about 9.332621544 x 10^157)

    Versus a “mere” 81 digits for the atoms:
    100000000000000000000000000000000000000000000000000000000000000000000000000000000 (or 10^80)

    This blows me away every time I think of it. And if we had 1,000 neurons to line up, we’d get a number that starts with 4 and has 2,568 digits (i.e., 1000!, of course). Inconceivably huge! (Though still comparatively tiny.) To play with such numbers, check out Wolfram|Alpha, or run the quick check below.
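
    If you’d like to verify these figures yourself, a few lines of Python will do it (just a quick check of the digit counts quoted above, nothing more):

    ```python
    from math import factorial

    perms_100 = factorial(100)          # ways to seat 100 neurons in a row
    atoms = 10 ** 80                    # rough count of atoms in the visible universe

    print(len(str(perms_100)))          # 158 digits
    print(perms_100 > atoms)            # True: 100! dwarfs the atom count

    perms_1000 = factorial(1000)
    print(len(str(perms_1000)))         # 2568 digits
    print(str(perms_1000)[0])           # leading digit: 4
    ```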

  10. It’s interesting to consider how genres and other organizational categories result in self-confirming feedback loops. For example, writers learn that bookstores need to know on which shelf to put writers’ books. This influences what writers write, which confirms and perpetuates the genres. Who knows in how many areas of human-dependent activities this sort of loop is in play.
  11. You know, what contemplative computers of the future do when they address traditionally philosophical questions.
  12. A separate but related question is whether humans can really understand anything at all if no single human can hold in mind all the relevant, complex bits of information for any given phenomenon. The need for specialization may be a symptom that humans are doomed to the darkness of faux-understanding. I think many philosophers—namely, those who say they’re interested in “how it all hangs together”—go into and approach their field precisely because they can’t accept this reality. They resist specializing. I feel this urge myself, and was once told by a philosophy instructor: “You need to specialize. You cannot dig a deep well unless you stand in one place for a long time.” My response to this is that I’m not trying to dig a well. I’m trying to dig out a wide foundation on which a stable structure may be erected. Agh, metaphors.
  13. For more on this, see Thomas Nagel’s 2012 Mind & Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. You need not agree with his rejection of materialism to get a lot out of his discussion of the problems with that idea.
  14. E.g., Michael Graziano’s 2013 Consciousness and the Social Brain.
  15. This tension is the core of Frank Jackson’s What Mary Didn’t Know thought experiment.


