Beginnings of reason

I have recently come to the realization that something which I had been taught to believe without questioning, and to which I had subsequently clung without any evidence of its veracity other than what I had been told, should no longer be assumed to be true. And I am now amazed that an intelligent person like me, who relies heavily on a highly developed sense of logic, would not have come to that realization many years ago. I’m sure a cogent explanation is available somewhere in print, but I haven’t read a completely satisfactory one yet, and so I have decided to try to write one myself. For what it’s worth, here is my attempt. I make no pretense that this is a scholarly document, and so, as much as is possible and advisable, I will avoid getting too technical or attempting to provide citations to support its accuracy. I welcome readers to investigate for themselves anything their own logic questions and to report the results to me.

I’ll start with an infant’s first experience of the world. As the senses develop and begin to report their findings to the brain, it develops the capability of relating them to each other and, using those relationships, of transmitting instructions to other areas of the body. At some point this process seems to develop into what we refer to as reasoning thought, e.g., “My sense of touch has been strongly actuated. Muscles, move until it is no longer actuated.” And the baby squirms within the womb. Much of what we can later identify as reasoning begins at a rudimentary level as soon as the brain begins to develop. The sense of touch is joined by hearing, and, at some point no one knows for certain, smell, taste, and sight begin to be included.

When the infant emerges from the womb, this process continues. Ideally, the touch, smell, sound, and taste that have, to this point, been the most comforting are joined with a sight, and the brain combines them to form the concept, “Mother.” Soon the brain learns that, upon hearing a certain sound, it can use its developing location skills to tell the head to turn, bringing all the senses into alignment on the “Mother” concept. The infant will continue moving its head as long as it hears that sound, until the sound is joined by sight. Thus, at a very rudimentary level, the brain is beginning to “fact-check” the world. It takes a period of conditioning to break the child of inherently attempting to bring learned concepts into coherence. E.g., Dad turns on the computer and connects with Mom using Skype. Mom speaks and the infant turns to look and visually confirms that it is Mom. However, as the child develops, it will try to relate sight with the other senses, and at this point you can see confusion arise: “Something is wrong. I can see and hear Mother, but I can’t smell or taste her, and when I feel, it doesn’t feel like I’m used to Mother feeling.” The brain area that attempts to bring the senses into coherence becomes strong very early in development.

Adults make many sounds that make no “sense” to an infant. In time, some combinations of sounds are accompanied by other senses and the infant begins to associate these sounds with previously developed concepts. One combination seems to relate to the “Mother” concept. Soon, when that “Mama” sound is made, the infant adds the new concept of sound combinations to the “Mother” concept. Soon, when anyone makes this sound, the infant will “reason” that the sound means the same thing as the entity its senses have already identified as “Mother.” And if someone says, “Mama,” when referring to anyone else, the child becomes confused. The already developed “coherence” tester in the brain sends out an alarm: “I know what that sound means, and I don’t see or hear the entity to which that sound refers.”

I have gone through this tedious process in an attempt (admittedly in an amateurish way) to describe the development of that magnificent area of the brain which “fact-checks” input, including words, and attempts to make “sense” of it. In fact, scientists cite this ability of the brain as a main reason for the survival and eventual dominance of the otherwise inferior animal known as Homo sapiens. Where that process evaluates the attempt to make reliable inferences from a conceptual world, we use the word “logic” to describe it.

I wonder why that logic has not challenged concepts that we have been told to believe, irrespective of their cohering with our senses, or with any other independently developed concepts. Or, to put it more simply, why have we developed the ability to, in some circumstances, bypass this “bullshit” alarm area of the brain?

One conceivable answer is expedience. When something is flying at my face, routing the visual input through the “bullshit alarm” before taking action will soon result in serious damage. The brain has developed ways to translate certain input directly into muscle commands for body preservation. E.g., when the temperature of the surface I have just touched is 450°, I don’t think; my hand instantly moves.

Finding a reason for the tendency to bypass logic when dealing with certain concepts is much more complicated. Returning to the infant analogy, we need to investigate the mental concept of “trust.” Clearly, there are many inputs which infants are not equipped to evaluate. They react instinctively to loud sounds. Soon, they react to certain words, connecting them to concepts. When anyone says “hot,” they react. However, their reaction to other words may depend on the identity of the person speaking. They learn that the word “no,” spoken by Mom, means one thing, while spoken by someone else, e.g., Dad, it means something else. Hopefully, if the parents are consistent, infants learn reliable information about many concepts without having to go through painful, if not deadly, personal experience. Infants learn to rely implicitly on the words the parents use, because they suffer the negative results when they don’t. E.g., Mom says, “Hot, don’t touch,” and, if the infant ignores the words, pain results. In this, and many other ways, infants learn to “trust” spoken words. This trust is usually conferred on all adults. Auntie Faith says, “Hot, don’t touch,” and the infant draws back its hand.

As the infant becomes a child, this trust is continually strengthened, not only by positive and negative experience, but also by developing communication skills. Parents tell children to do or not to do certain things and give reasons why. So children, receiving new information, begin to expect reasons why, in other words, evidence of veracity. The child may ignore the information if no reason is given, or if the reason doesn’t relate to anything in the world the child is experiencing. This presents a dilemma for parents. On one hand, they want the child to trust implicitly what they say, but they also want the child to learn not to trust what all adults say. The parents try to construct a hierarchy of trust in the child, with parents at the top and people you don’t know, strangers, at the bottom. The child learns not to accept candy from strangers, but that it is OK to accept it from familiar people. Obviously this hierarchy is not perfectly reliable, since children can be harmed by familiar people as well as strangers.

The previously mentioned “bullshit” area of the brain continues to develop. Children, at some point, begin to question what they are told and seek evidence of its veracity on their own. I told my grandson that the skin incision that caused the scar that I showed him didn’t hurt because the doctor gave me a shot first. He looked askance at me, since he knows from experience how much it hurts when his arm is cut, and said: “Are you lying to me, Papa?” Truth and lies become a very important part of children’s lives. They are rewarded for truth and punished for lies. And so the ability to tell the difference becomes very strong. In many ways it becomes the most important survival trait children learn.

One very important fact is that children make truth distinctions based solely on relevance to the real world. If something has no real consequences to them, they will ignore it. If you tell them to straighten their room because it will look nicer, they will likely pay it no mind. If you offer to buy them a desired toy or food, you can get them to do that and much more. It should go without saying that reward and punishment are the two most useful tools for teaching children.

I promise soon to tie all this in to my overall point, but first I need to say one more thing. The concepts of death and infinity (forever) are ones that take some time to develop in a child, and even longer to register with the same importance that they hold for adults. Children may lose things, pets, and people, but until they understand the concept of forever, the losses don’t carry the all-important meaning they will someday carry. In fact, many concepts to which adults have attached great significance have little meaning to children. Adults fear death, and take extraordinary care to avoid those things that can lead to it. Children, on the other hand, because they don’t understand that death means being gone “forever,” don’t have the same fear and might not be as motivated as adults by the threat of death.

Trying to teach children to do or not to do something that goes against the values they have learned is very difficult. Sharing is a good example. Adults know that it is an activity that enhances children’s ability to prosper in a social world. But children learn ownership very early and only through extended effort do they learn to share. This is also true of other desirable traits. So, telling children to “be good” has little effect if it doesn’t coincide with their own perceived self-interest.

Here is where some parents face a dilemma. How do they teach important traits before children understand their later importance? One tried and true way is by creating stories utilizing invented characters whose actions both interest children and demonstrate the desired traits. One example is the story of Santa Claus, which occurs in many variations in many cultures. Although it is often embellished, as are most stories, the basic concept is of a magical character who can monitor the behavior of all children and reward them if they are “good,” and punish them if they aren’t. Adults justify telling children these invented stories because they have the effect of teaching them concepts adults find beneficial. A concise word for such a story is “myth.”

At some point in their lives, most children learn or are told that “myths” aren’t actual representations of the real world. Despite the distrust that naturally arises in children when they learn that adults they have been taught to trust have been, in effect, lying to them, adults don’t seem to be easily dissuaded from perpetuating these myths. Perhaps adults take a perverse pleasure in the gullibility of children, or they convince themselves that the ends (teaching desired behavior) justify the means (lying to their children, and thus helping to create distrustful youth). Most parents don’t seem to be affected by the deep resentment children harbor at this mendacity.

Finally we come to the subject that motivated this dissertation. It is a myth that many people, for a multitude of reasons, perpetuate. This myth is invoked early in life in an attempt to teach children to “be good.” It is one that involves an invisible character who can monitor their behavior and reward them if they are good and punish them if they are bad. Sound familiar? No, not Santa; we already mentioned him. In variations of this myth, adults have used as many names as there have been cultures since the beginning of recorded time. Here are some names you may or may not recognize: Marduk, Atum, Elohim, Mbombo, Nanabozho, Unkulunkulu, Vishnu, Shiva, Brahma, Jawah, and God. The stories about these entities are as varied as their names. Granted, not all of them are specific in the way described above, but they do share many common features, the most common of which is the teaching that belief in them is required solely on the word of other people. In all cases, some person or group created stories to explain something for which they needed an explanation. If other people believed the stories, then the myths were perpetuated. And in every case children are taught to believe them implicitly. More importantly, children are taught not to pay any attention to anything “strangers” say about these myths. They must “trust” that their parents and the other adults in their life know best.

The significant difference between these myths and the, perhaps more innocent, ones of Santa, the Tooth Fairy, the Easter Bunny, etc., is that the adults who are teaching them to children have actually convinced themselves of their veracity and so insist on credulity even when the well-developed “bullshit” area of the brain rejects the illogic of them. In order to deaden their own “bullshit” indicators, these people create elaborate reasons why it doesn’t matter that the myths don’t make sense. In fact, they often elevate to the highest moral value the acceptance of the myths with the full knowledge that there is no evidence of their veracity. A word has been created to identify this acceptance without evidence. It is called “faith.” And by pounding into children from the time they are infants the concept that some ideas must be accepted on “faith,” in other words, without questioning the word of the people espousing the ideas, the “bullshit” indicator in the brain is methodically deadened, so that, by the time people reach the age when their lives must be governed by reason and logic in every other area, they have learned not to apply them in this one.

Why do people choose to perpetuate the myths about their deities? First, it is because people fear death, and so grasp at any promise of a mechanism, no matter how fantastical, whereby they perceive they can escape it. This also explains why it is so difficult to excite children with deific ideas. They don’t understand the “forever” nature of death, and so don’t pay much attention when told not to do something, the result of which might be death. And so parents and other “caring” adults must spend an inordinate amount of time and energy indoctrinating children into their myths. And they use all the tools available to them: scaring them in whatever ways children have already learned to fear (god telling fathers to stab their children, fiery destruction of “evil” people, disobedient people swallowed by huge fish or magically turned into statues made of salt, etc.), enticing them with promises of wonderful rewards for obedience, and repeating mythical concepts so constantly that children become mesmerized by them. Is it any wonder that it is so difficult for a person to shake loose of this indoctrination and re-activate the “bullshit” indicator?

So, now I have proposed an explanation for why an intelligent person like me, who relies heavily on a highly developed sense of logic, would not have, at the same time as I rejected the existence of Santa Claus, also rejected as “bullshit” the myths about other invisible beings that had been pounded into me since I was born.

One Response to “Beginnings of reason”

  1. stevenspruill Says:

    Enoch, I like this very much. I’ve imported it into Pages and saved it with my own writings. I love the way you develop the logical train. Your discussion of the first efforts of the infant to know the world put me in mind of how you can often catch full-grown adults making a moue with their lips as they do some dexterous task with their hands or fingers, such as making a difficult pool shot. They are re-enacting a developmental stage in which we all used our tongues and mouths to palpate and investigate surfaces, and this later spread to touching at the same time—touching while suckling, for example. It’s a historic link between fingertips and mouth that is best understood by studying a sensory homunculus, drawn so that the parts of the human body with the most nerve endings appear the largest. I’ve always loved those bizarre drawings, in themselves an efficient way of grasping something.

    I was also very interested in your distinctions about sound, which, as a professional singer of the first rank, you understand so well. When I was pursuing an abortive doctorate in Human Learning and Psycholinguistics, I was quite surprised to learn that the main reason we have yet to develop a machine that can take dictation from a broad range of voices without being painstakingly programmed to understand each voice in turn is that the sound signal, itself, is vastly different between speakers saying the same simple word, such as “Cat.” This difference becomes more apparent when comparing the timbre and other qualities of a professional singer’s voice to that of a karaoke amateur. Looking at graphs of a given sound across many different human speakers makes this visually obvious, and yet, somehow, most of us are well able to recognize the word “Cat” no matter who is saying it. How we learn to do that is an integral part of the developmental process you sum up so accurately and well.

    I agree with you that a key to how we silence the bullshit detector might be found in our survival-dictated shorthand for making decisions with our spinal cords (like jerking the finger back from the hot stove before the brain even gets wind of what’s happening). Obviously, in this process, the bullshit detector IS bypassed. Its errors are unlikely to be costly. For example, your spinal cord will jerk your finger back from a very cold surface before realizing that it is touchable cold, not searing heat, that it is feeling. Better safe than sorry makes that system a keeper, and yet the larger fact that the bullshit detector CAN be bypassed leads to some pretty big sorrows of its own.

    When I got away from Human Learning into the clinical end of psychology, I began, as you do, to appreciate the power of reward and punishment. B.F. Skinner thought that was pretty much all that existed to mediate a basic reflex arc, and it’s a lot harder than one would think to prove him wrong. The richness of the human mind is to be found in its mediators, for sure. Among many other questions your piece raises, I, too, wonder if one of the reasons children are prone to accept the false information of religion when conveyed to them by adults is that they see no punishment of the adult and conclude that no lie was given. They have been conditioned, as Skinner would say, to believe that lies are punished. Indeed, tales about invisible deities, with no evidence to back up a word of it, are not punished but are constantly rewarded by other powerful adults. “If it’s not punished, maybe it’s not a lie,” feels like a reasonable inference even though greater sophistication helps us see universes of possible exceptions.

    A lot of what you say is congruent with the concept of “critical periods,” during which the brain learns huge chunks of knowledge about the world and how it works. That “blank slate” with which we come into the world invites the world around us to scribble a torrent on it during the first four or five to ten years, which pretty well coincides with the “average age of religious conversion” as widely noted in studies by Christian research groups. I resist the image of a glass filling up, and yet it has some utility. Without a doubt, we learn an enormous percentage of everything we “know” during the first few years of life. The hard implications of this are recapitulated in my own experience when I write a novel. I have to take great care that what I write today does not unreasonably narrow the range of possibilities for what I might write tomorrow, or, looking back, what I could still go back and change in order to improve the whole novel going forward. I find that there is a very strong limiting effect of each page I write. The image of the sculptor Rodin chipping away everything that is not “The Thinker” helps to illustrate this. Once you’ve chipped away this or that block of marble, the left leg can no longer go here, but must go there. Thanks to word processing, writing, be it of symphonies or novels, is a far more fluid enterprise, during which changes can be made at any point, from earlier chapters to ones that are only planned, but I find that one must fight very hard to keep these options open once one has progressed to a certain point.

    If we learn to believe in god as young squirts, other avenues of thinking are made incompatible, even foreclosed, by that learning or acceptance, and this has the crippling effect we observe—indeed, the reason for your treatise. How could I have been so badly fooled? Once you believe something—anything—you limit the range of other beliefs that might contradict what you “know.” That’s why I’m trying so hard these days, in my writing and my thinking, to learn things provisionally while keeping my options open. That foreclosing effect is very powerful, I think, but not inevitable, as emergence from enslavement by religion illustrates. One of the great and liberating feelings that comes to me again and again now that I am out is the wonderful, relaxed feeling of knowing I don’t have to know everything. Religion requires a huge load of assertion and pseudo-knowledge. It literally has an answer (if only a half-assed or ridiculous one) for everything. That, in turn, forecloses a vast area of potential learning that can’t be pursued until the dogmatism of “knowing” mountains of bullshit is abandoned. If a religious person comes up to me and says, “All right, if you don’t go to heaven or hell when you die based on how nearly perfect a life you have lived and how much you have believed in god and not thought about the little red monkey, then what DOES happen when you die?” I can say, “I don’t know,” and smile and feel okay with that, while the guy trapped in his dogma would feel very uneasy if he suddenly didn’t have his answer for that any more. He must have an answer for everything instead of a curiosity about everything. That “faith” is so often a part of his answer should worry him. For some, it does, while for others, “faith” is reified into the central and crowning aspect of the whole house of cards that is religion. I agree that faith, a word which sounds positive and hopeful to me even to this day, is pernicious, the foremost rotter of minds.

    Your thesis inspires me to examine why some few of us are able to UNfool ourselves. What was it in us that enabled us to do that? I’d love to know the percentage of non-believers who were first ardent and committed believers. The three most notable writers in the field—Harris, Hitchens, and Dawkins—started out life, essentially, as non-believers. Apparently, no great effort was made by teachers or parents to indoctrinate them (at least according to their own autobiographical statements about their youths). Each dabbled briefly in religion, then apparently had the freedom to say, “No, that’s bullshit.” Was it really that freedom which left their bullshit detectors intact and sent them on their way into rational adulthoods? It had to have played a large role, I’m sure. Unlike these three men, some of us who are now rationalists were subjected from the word go to an indoctrination from which we could escape only as we escaped our childhoods themselves. Of the fifteen to twenty percent of Americans who are rational thinkers, how many were once in thrall to religion? I’d wager only a small percentage, but I really have no idea. Unlike bullshit, truth isn’t complicated and illogical; it is usually elegant and straightforward. That difference is, or should be, a major component of the bullshit detector, but for us, maybe that principle of the elegance of truth never got set up in the first place. We were somehow lulled to sleep or made comatose so that we didn’t really ask questions so simple as, “If god is real, and cares about me personally, why doesn’t he ever appear to me?”

    Enoch, your thesis is rich and thought-provoking and I thank you for it. I will be mulling various parts of it for some time to come, and chiming back in, I’m sure.
