Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology and Cognitive Science at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has published more than a hundred scientific articles in journals such as Science and Nature, and his popular writing has appeared in the New York Times, The New Yorker, The Atlantic Monthly, Slate, Natural History, and many other publications. He has won numerous awards for his research and teaching.

Bloom is the author of Just Babies: The Origins of Good and Evil (2013), as well as How Pleasure Works: The New Science of Why We Like What We Like (2011), Descartes' Baby: How the Science of Child Development Explains What Makes Us Human (2005), and several other books. A selection of his popular and academic articles can be found here.

In this article for The Atlantic, Bloom argues against the neuro-reductionist nonsense coming from people like Sam Harris, David Eagleman, and Jonathan Haidt (all named directly), as well as Patricia Churchland (who goes unnamed).

As regular readers will know, I side with Bloom (and Michael Gazzaniga, and Evan Thompson, and many other non-reductionist neuroscientists).

The War on Reason

Scientists and philosophers argue that human beings are little more than puppets of their biochemistry. Here's why they're wrong.

Paul Bloom | Feb 19, 2014


Illustration by Matt Dorfman

ARISTOTLE'S DEFINITION OF MAN as a rational animal has recently taken quite a beating.

Part of the attack comes from neuroscience. Pretty, multicolored fMRI maps make clear that our mental lives can be observed in the activity of our neurons, and we’ve made considerable progress in reading someone’s thoughts by looking at those maps. It’s clear, too, that damage to the brain can impair the most-intimate aspects of ourselves, such as the capacity to make moral judgments or to inhibit bad actions. To some scholars, the neural basis of mental life suggests that rational deliberation and free choice are illusions. Because our thoughts and actions are the products of our brains, and because what our brains do is determined by the physical state of the world and the laws of physics—perhaps with a dash of quantum randomness in the mix—there seems to be no room for choice. As the author and neuroscientist Sam Harris has put it, we are “biochemical puppets.”

This conception of what it is to be a person fits poorly with our sense of how we live our everyday lives. It certainly feels as though we make choices, as though we’re responsible for our actions. The idea that we’re entirely physical beings also clashes with the age-old idea that body and mind are distinct. Even young children believe themselves and others to be not just physical bodies, subject to physical laws, but also separate conscious entities, unfettered from the material world. Most religious thought has been based on this kind of dualist worldview, as showcased by John Updike in Rabbit at Rest, when Rabbit talks to his friend Charlie about Charlie’s recent surgery:

“Pig valves.” Rabbit tries to hide his revulsion. “Was it terrible? They split your chest open and ran your blood through a machine?”

“Piece of cake. You’re knocked out cold. What’s wrong with running your blood through a machine? What else you think you are, champ?”

A God-made one-of-a-kind with an immortal soul breathed in. A vehicle of grace. A battlefield of good and evil. An apprentice angel …

“You’re just a soft machine,” Charlie maintains.

I bristle at that “just,” but the evidence is overwhelming that Charlie is right. We are soft machines—amazing machines, but machines nonetheless. Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.

Another attack on rationality comes from social psychology. Hundreds of studies now show that factors we’re unaware of influence how we think and act. College students who fill out a questionnaire about their political opinions when standing next to a dispenser of hand sanitizer become, at least for a moment, more politically conservative than those standing next to an empty wall. Shoppers walking past a bakery are more likely than other shoppers to make change for a stranger. Subjects favor job applicants whose résumés are presented to them on heavy clipboards. Supposedly egalitarian white people who are under time pressure are more likely to misidentify a tool as a gun after being shown a photo of a black male face.

 
Illustration by Stephen Doyle

In a contemporary, and often unacknowledged, rebooting of Freud, many psychologists have concluded from such findings that unconscious associations and attitudes hold powerful sway over our lives—and that conscious choice is largely superfluous. “It is not clear,” the Baylor College neuroscientist David Eagleman writes, “how much the conscious you—as opposed to the genetic and neural you—gets to do any deciding at all.” The New York University psychologist Jonathan Haidt suggests we should reject the notion that we are in control of our decisions and instead think of the conscious self as a lawyer who, when called upon to defend the actions of a client, mainly provides after-the-fact justifications for decisions that have already been made.

Such statements have produced a powerful backlash. What they represent, many people feel, are efforts at a hostile takeover of the soul: an assault on religious belief, on traditional morality, and on common sense. Derisive terms like neurotrash, brain porn, and (for the British) neurobollocks are often thrown around. Some people, such as the novelist Marilynne Robinson and the writer and critic Leon Wieseltier, argue that science has inappropriately ventured outside its scope and has still failed to capture the rich and transcendent nature of human experience. The author and clinical neuroscientist Raymond Tallis worries that such theories suggest no meaningful gap separates man and beast, a position that he argues, in Aping Mankind, is “not merely intellectually derelict but dangerous.”

For the most part, I’m on the side of the neuroscientists and social psychologists—no surprise, given that I’m a psychologist myself. Work in fields such as computational cognitive science, behavioral genetics, and social neuroscience has yielded great insights about human nature. I do worry, though, that many of my colleagues have radically overstated the implications of their findings. The genetic you and the neural you aren’t alternatives to the conscious you. They are its foundations.

KNOWING THAT WE ARE physical beings doesn’t tell us much. The interesting question is what sort of physical beings we are.

Nobody can deny that we are sometimes biochemical puppets. In 2000, an otherwise normal Virginia man started to collect child pornography and make sexual advances toward his prepubescent stepdaughter. He was sentenced to spend time in a rehabilitation center, only to be expelled for making lewd advances toward staff members and patients. The next step was prison, but the night before he was to be incarcerated, severe headaches sent him to the hospital, where doctors discovered a large tumor on his brain. After they removed it, his sexual obsessions disappeared. Months later, his interest in child pornography returned, and a scan showed that the tumor had come back. Once again it was removed, and once again his obsessions disappeared.

Other examples of biochemical puppetry abound. A pill used to treat Parkinson’s disease can lead to pathological gambling; date-rape drugs can induce a robot-like compliance; sleeping pills can lead to sleep-binging and sleep-driving. These cases—some of which are discussed in detail by David Eagleman in Incognito: The Secret Lives of the Brain (excerpted in the July/August 2011 Atlantic)—intrigue and trouble us because they involve significant actions that are disengaged from the normal mechanisms of conscious deliberation. When the victims are brought back to normal—the drug wears off; the tumor is removed—they feel sincerely that their desires and actions under the influence were alien to them, and fell outside the scope of their will.

For Eagleman, these examples highlight the need for a legal framework and criminal-justice system that can take into account our growing understanding of brain science. What we need, he argues, is “a shift from blame to biology.” This is reasonable enough. It’s hardly neurobollocks to think we should take the existence of a tumor into account when determining criminal responsibility for a sex offense.

But some cases raise thorny questions. Philosophers—and judges and juries—might disagree, for instance, as to whether an adult’s having been horrifically abused as a child can be considered as exculpatory as having a tumor. If the abuse visibly changed a person’s brain and stripped it of its full capacity for deliberation, should that count as a mitigating condition in court? What about individuals, such as certain psychopaths, who appear incapable of empathy and compassion? Should that diminish their responsibility for cruel actions?

Other cases are easier. It’s not hard to see the psychological distinction between the cold-blooded planning of a Mafia hit man and the bizarre actions of a paranoid schizophrenic. As you read this article, your actions are determined by physical law, but unless you have been drugged, or have a gun to your head, or are acting under the influence of a behavior-changing brain tumor, reading it is what you have chosen to do. You have reasons for that choice, and you can decide to stop reading if you want. If you should be doing something else right now—picking up a child at school, say, or standing watch at a security post—your decision to continue reading is something you are morally responsible for.

Some determinists would balk at this. The idea of “choosing” to stop (or choosing anything at all), they suggest, implies a mystical capacity to transcend the physical world. Many people think about choice in terms of this mystical capacity, and I agree with the determinists that they’re wrong. But instead of giving up on the notion of choice, we can clarify it. The deterministic nature of the universe is fully compatible with the existence of conscious deliberation and rational thought—with neural systems that analyze different options, construct logical chains of argument, reason through examples and analogies, and respond to the anticipated consequences of actions, including moral consequences. These processes are at the core of what it means to say that people make choices, and in this regard, the notion that we are responsible for our fates remains intact.

BUT THIS IS WHERE philosophy ends and psychology begins. It might be that we are physical beings who can use reason and make choices. But haven’t the psychologists shown us that this is wrong, that reason is an illusion? The sorts of findings I began this article with—about the surprising relationship between bakery smells and altruism, or between the weight of a résumé and how a job candidate is judged—are often taken to show that our everyday thoughts and actions are not subject to conscious control.

This body of research has generated a lot of controversy, and for good reason: some of the findings are fragile, have been enhanced by repeated testing and opportunistic statistical analyses, and are not easily replicated. But some studies have demonstrated robust and statistically significant relationships. Statistically significant, however, doesn’t mean actually significant. Just because something has an effect in a controlled situation doesn’t mean that it’s important in real life. Your impression of a résumé might be subtly affected by its being presented to you on a heavy clipboard, and this tells us something about how we draw inferences from physical experience when making social evaluations. Very interesting stuff. But this doesn’t imply that your real-world judgments of job candidates have much to do with what you’re holding when you make those judgments. What will probably matter much more are such boringly relevant considerations as the candidate’s experience and qualifications.
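To make the statistical point concrete, here is a minimal sketch (mine, not Bloom's) of why "statistically significant" and "actually significant" can come apart: with a large enough sample, a trivially small effect clears the p < .05 bar, so the effect size, not the p-value alone, is what tells you whether a finding matters in real life. The numbers below (a hypothetical 0.05-point shift in résumé ratings on a 1-7 scale) are invented purely for illustration.

    # Minimal sketch, not from the article: a tiny, practically negligible
    # effect still produces a very small p-value when the sample is large.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)

    # Hypothetical resume ratings on a 1-7 scale; the "heavy clipboard"
    # group is shifted up by a trivially small 0.05 points.
    light = rng.normal(loc=4.00, scale=1.0, size=20_000)
    heavy = rng.normal(loc=4.05, scale=1.0, size=20_000)

    t_stat, p_value = stats.ttest_ind(heavy, light)

    # Cohen's d: the group difference in standard-deviation units (effect size).
    pooled_sd = np.sqrt((light.var(ddof=1) + heavy.var(ddof=1)) / 2)
    cohens_d = (heavy.mean() - light.mean()) / pooled_sd

    print(f"p-value:   {p_value:.2g}")   # well below 0.05: "statistically significant"
    print(f"Cohen's d: {cohens_d:.2f}")  # about 0.05: practically negligible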

 
Illustration by Topos Graphics

Sometimes small influences can be important, and sometimes studies really are worth their press releases. It’s relevant that people whose polling places are schools are more likely to vote for sales taxes that will fund education. Or that judges become more likely to deny parole the longer they go without a break. Or that people serve themselves more food when using a large plate. Such effects, even when they’re small, can make a practical difference, especially when they influence votes and justice and health. But their existence doesn’t undermine the idea of a rational and deliberative self. To think otherwise would be like concluding that because salt adds flavor to food, nothing else does.

The same goes for stereotyping. Hundreds of studies have found that individuals, including those who explicitly identify themselves as egalitarian, make assumptions about people based on whether they are men or women, black or white, Asian or Jewish. Such assumptions have real-world consequences. They help determine how employers judge job applications; they motivate young children to interact with some individuals and not others; they influence police officers as they decide whether or not to shoot somebody. These are important findings. But as the Rutgers psychologist Lee Jussim points out in his recent book, Social Perception and Social Reality, these studies don’t mean what many people think they do.

For one thing, we apply stereotypes in a limited way, mainly when judging strangers. When we know someone, we’re far more influenced by facts about that individual than about the categories he or she belongs to. To a striking degree, too, we know what our stereotypes are. Ask people about their stereotypes of gay men, the elderly, or lawyers, say, and what they’ll tell you is likely to align pretty well with what social psychologists have found in their studies of unconscious bias. Furthermore, many stereotypes are accurate. To take one of the most obvious examples: men really are more prone to violence and sexual assault than women are. If you need to quickly judge the threat posed by a stranger standing at the corner of the street you’re about to walk down at night, you’ll probably fall back on this stereotype, consciously and unconsciously. And you’ll be right to do so.

None of this is to defend stereotyping. Strong moral arguments exist for why we should often try to ignore stereotypes or override them. But we shouldn’t assume they represent some irrational quirk of the unconscious mind. In fact, they’re largely the consequence of the mind’s attempt to make a rational decision.

A more general problem with the conclusions that people draw from the social-psychological research has to do with which studies get done, which papers get published, and which findings get known. Everybody loves nonintuitive findings, so researchers are motivated to explore the strange and nonrational ways in which the mind works. It’s striking to discover that when assigning punishment to criminals, people are influenced by factors they consciously believe to be irrelevant, such as how attractive the criminals are, and the color of their skin. This finding will get published in the top journals, and might make its way into the Science section of The New York Times. But nobody will care if you discover that people’s feelings about punishments are influenced by the severity of the crimes or the criminals’ past record. This is just common sense.

Whether this bias in what people find interesting is reasonable is a topic for another day. What’s important to remember is that some scholars and journalists fall into the trap of thinking that what they see in journals provides a representative picture of how we think and act.

OUR CAPACITY for rational thought emerges in the most-fundamental aspects of life. When you’re thirsty, you don’t just squirm in your seat at the mercy of unconscious impulses and environmental inputs. You make a plan and execute it. You get up, find a glass, walk to the sink, turn on the tap. These aren’t acts of genius, you haven’t discovered the Higgs boson, but still, this sort of mundane planning is beyond the capacity of any computer, which is why we don’t yet have robot servants. Making it through a single day requires the formulation and initiation of complex multistage plans, in a world that’s unforgiving of mistakes (try driving your car on an empty tank, or going to work without pants). The broader project of holding together relationships and managing a job or career requires extraordinary cognitive skills.

If you doubt the power of reason, consider the lives of those who have less of it. We take care of the intellectually disabled and brain-damaged because they cannot take care of themselves; we don’t let toddlers cook hot meals; and we don’t allow drunk people to drive cars or pilot planes. Like many other countries, the United States has age restrictions for driving, military service, voting, and drinking, and even higher age restrictions for becoming president, all under the assumption that certain core capacities, like wisdom and self-control, take time to mature.

Many commentators believe that we overemphasize reason’s importance. Social psychology, David Brooks writes in The Social Animal, “reminds us of the relative importance of emotion over pure reason, social connections over individual choice, character over IQ.” Malcolm Gladwell, for his part, argues in Outliers for the irrelevance of a high IQ. “If I had magical powers,” he says, “and offered to raise your IQ by 30 points, you’d say yes—right?” But then he goes on to say that you shouldn’t bother, because after you pass a certain basic threshold, IQ really doesn’t make any difference.

Brooks and Gladwell are both interested in the determinants of success. Brooks focuses on emotional and social skills, and Gladwell on the role of contingent factors, such as who your family is and where and when you were born. Both are right in assuming these factors to be significant, and Gladwell is probably correct that IQ, like other human traits, follows the law of diminishing returns. But both are wrong to doubt the central importance of intelligence. Indeed, intelligence, as measured by an IQ test, is correlated with all sorts of good things, such as steady job performance, staying out of prison, and being in a stable and fulfilling relationship. One might object that IQ is meaningful only because our society is obsessed with it. In the United States, after all, getting into a good university depends to a large extent on how well you do on the SAT, which is basically an IQ test. (The correlation between a person’s score on the SAT and on the standard IQ test is very high.) If we gave out slots at top universities to candidates with red hair, we would quickly live in a world in which being a redhead correlated with high income, elevated status, and other positive outcomes.

Still, the relationship between IQ and success is hardly arbitrary, and it’s no accident that universities take such tests so seriously. They reveal abilities such as mental speed and the capacity for abstract thought, and it’s not hard to see how these abilities aid intellectual pursuits. Indeed, high intelligence is not only related to success; it’s also related to kindness. Highly intelligent people commit fewer violent crimes (holding other things, such as income, constant) and are more cooperative, perhaps because intelligence allows one to appreciate the benefits of long-term coordination and to consider the perspectives of others.

Then there’s self-control. This can be seen as the purest embodiment of rationality, in that it reflects the working of a brain system (embedded in the frontal lobe, the part of the brain that lies behind the forehead) that restrains our impulsive, irrational, or emotive desires. In classic studies of self-control that he conducted in the 1960s, Walter Mischel investigated whether children could refrain from eating one marshmallow now to get two later. What he found was that the kids who waited for two marshmallows did better in school and on their SATs as adolescents, and ended up with better self-esteem, mental health, relationship quality, and income as adults. In his recent book, The Better Angels of Our Nature, Steven Pinker notes that a high level of self-control benefits not just individuals but also society. Europe, he writes, witnessed a thirtyfold drop in its homicide rate between the medieval and modern periods, and this, he argues, had much to do with the change from a culture of honor to a culture of dignity, which prizes restraint.

WHAT ABOUT THE capacity for moral judgment? In much of social psychology, morality is seen as the paradigm case of insidious irrationality. Whatever role our intellect might play in other domains, it seems largely irrelevant when it comes to our sense of right and wrong. Many people will tell you that flag burning, the eating of a deceased pet, and consensual sex between adult siblings are wrong, but when pressed to explain why, they suffer what Jonathan Haidt has described as “moral dumbfounding.” They flail around trying to find reasons, which suggests it’s not the reasons themselves that guided their judgments, but their gut intuition.

But as I argue in my book Just Babies, the existence of moral dumbfounding is less damning than it might seem. It is not the rule. People are not at a loss when asked why drunk driving is wrong, or why a company shouldn’t pay a woman less than a man for the same job, or why you should hold the door open for someone on crutches. We can easily justify these views by referring to fundamental concerns about harm, equity, and kindness. Moreover, when faced with difficult problems, we think about them—we mull, deliberate, argue. I’m thinking here not so much about grand questions such as abortion, capital punishment, just war, and so on, but rather about the problems of everyday life. Is it right to cross a picket line? Should I give money to the homeless man in front of the bookstore? Was it appropriate for our friend to start dating so soon after her husband died? What do I do about the colleague who is apparently not intending to pay me back the money she owes me?

Such rumination matters. If our moral attitudes are entirely the result of nonrational factors, such as gut feelings and the absorption of cultural norms, they should either be stable or randomly drift over time, like skirt lengths or the widths of ties. They shouldn’t show systematic change over human history. But they do. As the Princeton philosopher Peter Singer has put it, the moral circle has expanded: our attitudes about the rights of women, homosexuals, and racial minorities have all shifted toward inclusiveness.

Regardless of whether or not one views this as moral progress (some nihilists and cultural relativists think there is no such thing), it does suggest a cumulative evolution. People come to moral conclusions, often through debate and consultation with others, and these conclusions form the foundation for further progress. Just as modern evolutionary theory builds on the work of Darwin, our moral understanding builds on the moral discoveries of others, such as the wrongness of slavery and sexism.

WE'RE AT OUR WORST when it comes to politics. This helps explain why recent attacks on rationality have captured the imagination of the scientific community and the public at large. Politics forces us to confront those who disagree with us, and we’re not naturally inclined to see those on the other side of an issue as rational beings. Why, for instance, do so many Republicans think Obama’s health-care plan violates the Constitution? Writing in The New Yorker in June 2012, Ezra Klein used the research of Haidt and others to argue that Republicans despise the plan on political, not rational, grounds. Initially, he notes, they objected to what the Democrats had to offer out of a kind of tribal sense of loyalty. Only once they had established that position did they turn to reason to try to justify their views.

But notice that Klein doesn’t reach for a social-psychology journal when articulating why he and his Democratic allies are so confident that Obamacare is constitutional. He’s not inclined to understand his own perspective as the product of reflexive loyalty to the ideology of his own group. This lack of interest in the source of one’s views is typical. Because most academics are politically left of center, they generally use their theories of irrationality to explain the beliefs of the politically right of center. They like to explore how psychological biases shape the decisions people make to support Republicans, reject affirmative-action policies, and disapprove of homosexuality. But they don’t spend much time investigating how such biases might shape their own decisions to support Democrats, endorse affirmative action, and approve of gay marriage.

None of this is to say that Klein is mistaken. Irrational processes do exist, and they can ground political and moral decisions; sometimes the right explanation is groupthink or cognitive dissonance or prejudice. Irrationality is unlikely to be perfectly proportioned across political parties, and it’s possible, as the journalist Chris Mooney and others have suggested, that the part of the population that chose Obama in the most recent presidential election is more reasonable than the almost equal part that chose Romney.

But even if this were so, it would tell us little about the human condition. Most of us know nothing about constitutional law, so it’s hardly surprising that we take sides in the Obamacare debate the way we root for the Red Sox or the Yankees. Loyalty to the team is what matters. A set of experiments run by the Stanford psychologist Geoffrey Cohen illustrates this principle perfectly. Subjects were told about a proposed welfare program, which was described as being endorsed by either Republicans or Democrats, and were asked whether they approved of it. Some subjects were told about an extremely generous program, others about an extremely stingy program, but this made little difference. What mattered was party: Democrats approved of the Democratic program, and Republicans, the Republican program. When asked to justify their decision, however, participants insisted that party considerations were irrelevant; they felt they were responding to the program’s objective merits. This appears to be the norm. The Brown psychologist Steven Sloman and his colleagues have found that when people are called upon to justify their political positions, even those that they feel strongly about, many are unable to point to specifics. For instance, many people who claim to believe deeply in cap and trade or a flat tax have little idea what these policies actually mean.

So, yes, if you want to see people at their worst, press them on the details of those complex political issues that correspond to political identity and that cleave the country almost perfectly in half. But if this sort of irrational dogmatism reflected how our minds generally work, we wouldn’t even make it out of bed each morning. Such scattered and selected instances of irrationality shouldn’t cloud our view of the rational foundations of our everyday life. That would be like saying the most interesting thing about medicine isn’t the discovery of antibiotics and anesthesia, or the construction of large-scale programs for the distribution of health care, but the fact that people sometimes forget to take their pills.

Reason underlies much of what matters in the world, including the uniquely human project of reshaping our environment to achieve higher goals. Consider again our racial and gender stereotypes. Many people believe that circumstances exist in which it is wrong to use these stereotypes when making judgments. If we are worried about this, we can act. We can use reason to invent procedures that undermine our explicit and implicit biases. Blind reviewing and blind auditions block judges from using stereotypes, even unconsciously, by shielding them from information about candidates’ race or sex or anything else other than the merits of what one is supposed to be judging. Quota systems and diversity requirements take the opposite tack, and are rooted in different intuitions about the morally right thing to do; they enforce representation by minority groups, thereby taking the decision out of the hands of individuals with their own preferences and agendas and biases.

This is how moral progress happens. We don’t become better merely through good intentions and force of will, just as we don’t usually lose weight or give up smoking merely by wanting to. We use our intelligence. We establish laws, create social institutions, write constitutions, and evolve customs. We manage information and constrain options, allowing our better selves to overcome those gut feelings and appetites that we believe we would be better off without. Yes, we are physical beings, and yes, we are continually swayed by factors beyond our control. But as Aristotle recognized long ago, what’s so interesting about us is our capacity for reason, which reigns over all. If you miss this, you miss almost everything that matters.

~ Paul Bloom is a professor of psychology and cognitive science at Yale University, and the author of Just Babies: The Origins of Good and Evil (2013).