Liviu Drugus's blog

Book Essay: “The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age”, by John Horgan, Little, Brown and Company, London, 1996. ISBN: 0 316 64052 2

Liviu Drugus

George Bacovia University, Bacau, Romania

As some readers of my previous book essays and reviews may remember, I use the opportunity of reviewing a book to also present my own ideas on the general issues the book treats. As Horgan himself declared, he is not against science as such, but against some of its excesses and false pretensions. John Horgan, the author of “The End of Science”, received not only positive reviews of his book. But the first thing I need to highlight in this book essay is the feedback Horgan felt it necessary to include at the end of the book (pp. 267-281). This simple fact underlines the idea that any de-mythization of science, of religious fundamentalist beliefs, or of any other rigid system of ideas is best pursued through permanent dialogue. I tried to obtain some clarifications directly from Horgan, and to continue the dialogue, but I wasn’t as lucky as he was when he interviewed famous scholars to obtain clarifications from them. Maybe in another life… John Horgan names the detached attitude towards science “ironic science”, and his own view of science is an ironic one. “Classic”/modern science is too serious to be believed anymore… Postmodern science de-structures modern science and ironically asks: is that all? Only for that would some consider this knowledge as deserving to be infinitely repeated, respected, recognized as a permanent value, and disseminated to new generations?

From immemorial times philosophers and researchers have looked for the Answer. Historically speaking, “It was in the summer of 1989 … that I began to think seriously about the possibility that science, pure science, might be over” (p. 1), confessed Horgan at the beginning of his book. Horgan interviews prominent figures/fathers of strong, perennial and still en vogue disciplines: physics, cosmology, evolutionary biology, social science, neuroscience, chaoplexity (chaos theory and complexity theory viewed as one), limitology, scientific theology
and philosophy. Superstring theory was called by some ironic physicists “a theory of everything”. Those who lived in the former Communist countries may remember that Marxism was also a “theory of everything”, though Marx himself would have strongly rejected such a stupid thing. Brian Greene, the promoter of superstring theory, became a kind of Messiah announcing another revolution in physics, after quantum theory and relativity theory. “The mathematical structure of string theory was so beautiful and had so many miraculous properties that it had to be pointing toward something deep” (Schwarz, 2000). The depth of hidden things is a good excuse for not finding the essence of our cosmic existence. Instead of scientific proofs for strange theories, their authors and promoters underline their beauty and symmetry. In her paper “The elegance of The Elegant Universe: unity, beauty, and harmony in Brian Greene’s popularization of superstring theory”, Rachel Edford elegantly describes string theory as a possible answer to the quest for unification in science… Esthetics seems to be a way to cover the lack of experimental verification: it’s not true, but observe how nice it is… In my opinion, invisible physics is very similar to invisible psychics (i.e. psychology). Many new visions (e.g. transdisciplinarity in Basarab Nicolescu’s description) are based on something invisible (the hidden third or the included middle). Horgan’s technique is to get the authors of well-known theories to recognize that their theories are either incomplete, or not very useful, and not necessarily part of the Answer. I think that in every dialogue it is compulsory to test whether the meanings of the words used are clear and common to all participants. Otherwise, it may turn out that every participant had a different meaning in mind.
Here is the definition of “science” as used by John Horgan: “By science (Horgan’s italics) I mean not applied science, but science at its purest and grandest, the primordial human quest to understand the universe and our place in it. Further research may yield no more great revelations or revolutions, but only incremental, diminishing returns.” (p. 6). The practical consequences of this lack of concrete profitability have direct implications for the “scientists’ class”, who feel uncomfortable seeing their activity less and less appreciated: “When a given field of science begins to yield diminishing practical returns, scientists may have less incentive to pursue their research and society may be less inclined to pay for it.” (p. 11). Here comes a financial conclusion and decision. All “scientists” claim more and more money for their activity, but many of them simply do not have new ideas… In other words, Horgan is optimistic about technological progress, but pessimistic about new and historical changes in the theoretical corpus of science. The very first conclusion of this Horganian definition of science may suggest that any investment in theoretical research is quite useless… The structure of the book brings together a number of fields whose results are considered doubtful and/or a waste of time. These fields (each preceded by “The End of…”) are: Progress, Philosophy, Physics, Cosmology, Evolutionary Biology, Social Science, Neuroscience, Chaoplexity, Limitology, and Scientific Theology (the Machine Science). It is interesting to note that Horgan speaks about Social Science and not about Social Sciences. No end is announced in Horgan’s book for Chemistry…; maybe in a revised and completed edition. (These days John Horgan is working on another book, concerning the eternal issue of world peace.)
The end of science is explained by Horgan with logical tools: “If one believes in science, one must accept the possibility – even the probability – that the great era of scientific discovery is over” (p. 6). It follows from this that JH considers science a kind of ideology – a set of beliefs that help to attain specific interests. In this case, JH is a follower of Daniel Bell… Anyway, to write and to argue that a preoccupation as noble as scientific research is going to die is quite uncomfortable for most of us. Last year (July 2009), I attended at the Romanian Academy a seminar on “the state of economics”, whose conclusion was that there is no science called Economics. It was a “quarrel” between econometricians and political economists, with the result that neither side could refute the accusation of not doing science… Both of them were right! This reminded me of JH’s assertion that “The common strategy of the strong scientist is to point to all the shortcomings of current scientific knowledge, to all the questions left unanswered” (p. 7). JH’s dissatisfaction with the results of science leads him to express a utopian thought: “…one day we humans will create intelligent machines that can transcend our puny knowledge. In my favorite version of this scenario, machines transform the entire cosmos into a vast, unified, information-processing network. All matter becomes mind. This proposal is not science, of course, but wishful thinking.” (p. 8). It is surprising to discover in the protestant Horgan a classically orthodox industrialist mechanicist… This short book essay is followed by a longer interview with John Horgan, and I thank very much the Editor who accepted to reprint this material in ETC.

References
Drugus, Liviu (2009), “How scientific are Social Sciences?”, Economy Transdisciplinarity Cognition, 1(2009), 7-9.
Edford, Rachel (2007), “The elegance of The Elegant Universe: unity, beauty, and harmony in Brian Greene’s popularization of superstring theory”, Public Understanding of Science, 16(2007), 441-454.
Schwarz, J. (2000), review of B. Greene, The Elegant Universe, American Journal of Physics, 68(2), 199-200.
Schick, Theodore Jr. (1997), “The End of Science?”, Skeptical Inquirer, March-April 1997.
Feyerabend, Paul (1975), Against Method, London, Verso.
EDGE 16 – May 6, 1997

THE THIRD CULTURE: “THE END OF HORGAN?”

In his 1996 book The End of Science, John Horgan contends that science – and particularly pure science, rather than applied science, technology and medicine – is coming to an end. This controversial hypothesis, which has received wide attention, has at once been greeted with consternation by many (but certainly not all) in the scientific community while giving comfort to those who want anything to do with science and technology to go away. In The Third Culture (1995), I write about “scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.” Horgan would disagree. In addition, he would take issue with many of the people in my book as well as other scientists I admire and respect. And yet we get along. What I particularly like about him is his pugilistic approach to life. John Horgan is a challenge.

“WHY I THINK SCIENCE IS ENDING”
A Talk by John Horgan

Over the few months during which I’ve been following this website, various contributors have said various things about my book The End of Science. These comments reflect some confusion about what it was that I really said. I therefore thought it might be useful for me to present a succinct summary of my end-of-science argument as well as a rebuttal of 10 common counter-arguments.
This posting is based both on talks that I’ve given recently and on an afterword that I wrote for the paperback edition of my book, which is being published in mid-May by Broadway Books.
My claim is that science is a bounded enterprise, limited by social, economic, physical and
cognitive factors. Science is being threatened, literally, in some cases, by technophobes like the Unabomber, by animal-rights activists, by creationists and other religious fundamentalists, by post-modern philosophers and, most important of all, by stingy politicians. Also, as science advances, it keeps imposing limits on its own power. Einstein’s theory of special relativity prohibits the transmission of matter or even information at speeds faster than that of light. Quantum mechanics dictates that our knowledge of the microrealm will always be slightly blurred. Chaos theory confirms that even without quantum indeterminacy many phenomena would be impossible to predict. And evolutionary biology keeps reminding us that we are animals, designed by natural selection not for discovering deep truths of nature but for breeding. All these limits are important. But in my view, by far the greatest barrier to future progress in science – and especially pure science – is its past success. Researchers have already created a map of physical reality, ranging from the microrealm of quarks and electrons to the macrorealm of planets, stars and galaxies. Physicists have shown that all matter consists of a few basic particles ruled by a few basic forces. Scientists have also stitched their knowledge into an impressive, if not terribly detailed, narrative of how we came to be. The universe exploded into existence roughly 15 billion years ago and is still expanding outwards. About 4.5 billion years ago, the debris from an exploding star condensed into our solar system. Sometime during the next few hundred million years, single-celled organisms emerged on the earth. Prodded by natural selection, these microbes evolved into an amazingly diverse array of more complex creatures, including Homo sapiens. I believe that this map of reality that scientists have constructed, and this narrative of creation, from the big bang through the present, is essentially true.
It will thus be as viable 100 or even 1,000 years from now as it is today. I also believe that, given how far science has already come, and given the limits constraining further research, science will be hard-pressed to make any truly profound additions to the knowledge it has already generated. Further research may yield no more great revelations or revolutions but only incremental returns. The vast majority of scientists are content to fill in details of the great paradigms laid down by their predecessors or to apply that knowledge for practical purposes. They try to show how a new high-temperature superconductor can be understood in quantum terms, or how a mutation in a particular stretch of DNA triggers breast cancer. These are certainly worthy goals. But some scientists are much too ambitious and creative to settle for filling in details or developing practical applications. They want to transcend the received wisdom, to precipitate revolutions in knowledge analogous to those triggered by Darwin’s theory of evolution or by quantum mechanics. For the most part these over-reachers have only one option: to pursue science in a speculative, non-empirical mode that I call ironic science. Ironic science resembles literature or philosophy or theology in that it offers points of view, opinions, which are, at best, “interesting,” which provoke further comment. But it does not converge on the truth.
One of the most spectacular examples of ironic science is superstring theory, which for more than a decade has been the leading contender for a unified theory of physics. Often called a
“theory of everything,” it posits that all the matter and energy in the universe and even space and time stem from infinitesimal, string-like particles wriggling in a hyperspace consisting of 10 (or more) dimensions. Unfortunately, the microrealm that superstrings allegedly inhabit is completely inaccessible to human experimenters. A superstring is supposedly as small in comparison to a proton as a proton is in comparison to the solar system. Probing this realm directly would require an accelerator 1,000 light years around. Our entire solar system is only one light day around. It is this problem that led the Nobel laureate Sheldon Glashow to compare superstring theorists to “medieval theologians.” How many superstrings can dance on the head of a pin? There are many other examples of ironic science that you have probably heard of, in part because science journalists like myself enjoy writing about them so much. Cosmology, for example, has given rise to all kinds of theories involving parallel universes, which are supposedly connected to our universe by aneurisms in spacetime called wormholes. In biology, we have the Gaia hypothesis of Lynn Margulis and James Lovelock, which suggests that all organisms somehow cooperate to ensure their self-perpetuation. Then there are the anti-Darwinian proposals of Brian Goodwin and Stuart Kauffman, who think life stems not primarily from natural selection but from some mysterious “laws of complexity” that they have glimpsed in their computer simulations. Psychology and the social sciences, of course, consist of little but ironic science, such as Freudian psychoanalysis, Marxism, structuralism and the more ambitious forms of sociobiology. Some observers say all these untestable, far-fetched theories are signs of science’s vitality and boundless possibilities. I see them as signs of science’s desperation and terminal illness. That’s my argument, in a nutshell. Now let me go through the most common objections. 1.
That’s What They Thought 100 Years Ago. Nine times out of 10, when I give my end-of-science spiel – whether to a Nobel laureate in physics or to some poor soul I’ve trapped at a cocktail party – the response is some variation of, “Oh, come on, that’s what they thought 100 years ago.” The reasoning behind this response goes like this: As the 19th century wound down, scientists thought they knew everything. But then Einstein and other physicists discovered relativity and quantum mechanics, opening up vast new vistas for modern physics and other branches of science. The moral is that anyone who predicts science is ending will surely turn out to be as short-sighted as those 19th-century physicists were. Another popular anecdote involves the U.S. patent commissioner who, sometime in the 19th century, supposedly quit his job because he thought everything had been invented. First of all, both of these tales are simply not true. No American patent official ever quit his job because he thought everything had been invented. And physicists at the end of the last century were engaged in debating all sorts of profound issues, such as whether atoms really exist.
What people are really implying when they say “that’s what they thought 100 years ago” is that, because science has advanced so rapidly over the past century or so, it can and will continue to do so, possibly forever. This is an inductive argument, and as an inductive argument it is deeply flawed. Science in the modern sense has only existed for a few hundred years, and its most
spectacular achievements have occurred within the last century. Because we were all born and raised in this era of exponential progress, we simply assume that it is an intrinsic, permanent feature of reality. But viewed from an historical perspective, the modern era of rapid scientific and technological progress appears to be not a permanent feature of reality but an aberration, a fluke, a product of a singular convergence of social, intellectual and political factors. Ask yourself this: Is it really more reasonable to assume that this period of extremely rapid progress will continue forever rather than reaching its natural limits and coming to an end? 2. Answers Always Raise New Questions. It is quite true that answers always raise new questions. But most of the answerable questions raised by our current theories tend to involve details. For example, when, exactly, did our ancestors begin walking upright? Was it three million years ago, or four million? On which chromosome does the gene for cystic fibrosis reside? The answers to such questions may be fascinating, or have enormous practical value, but they merely extend the prevailing paradigm rather than yielding profound new insights into nature. Other questions are profound but unanswerable. The big bang theory, for example, poses a very obvious and deep question: Why did the big bang happen in the first place, and what, if anything, preceded it? The answer is that we don’t know, and we will never know, because the origin of the universe is too distant from us in space and time. That is an absolute limit of science, one forced on us by our physical limitations. There are lots of other unanswerable questions. Are there other dimensions in space and time in addition to our own? Are there other universes? Then there is a whole class of what I call inevitability questions. Just how inevitable was the universe, or the laws of physics, or life, or life intelligent enough to wonder how inevitable it was?
Underlying all these questions is the biggest question of all: Why is there something rather than nothing? None of these inevitability questions are answerable. You can’t determine the probability of the universe or of life on earth when you have only one universe and one history of life to contemplate. Statistics require more than one data point. So, again, it is true that answers always raise new questions. But that does not mean that science will never end. It only means that science can never answer all possible questions, it can never quench our curiosity, it can never be complete. Unanswerable questions, by the way, are what give rise to superstring theory, Gaia, psychoanalysis and other examples of ironic science, as well as all of philosophy. 3. What About Life on Mars? The day the life-on-Mars story broke last August, I walked into my office at Scientific American, and several colleagues immediately came up to me with big smirks and said, “So, what does Mr. No More Big Discoveries say now?”
As I said in my book, the discovery of extraterrestrial life would represent one of the most thrilling findings in the history of science. I hope to live long enough to witness such an event. But the so-called evidence presented last summer doesn’t even come close. It consists of some
organic chemicals and globule-shaped particles that vaguely resemble terrestrial microbes but which are subject to many alternative interpretations. Those scientists who are most knowledgeable about very old microfossils – those who are the real experts in the origin of terrestrial life – are also the most skeptical of the life-on-Mars interpretation. That’s a very bad sign. There is only one way we are going to know if there is life on Mars, and that is if we send a mission there to conduct a thorough search for it. Our best hope is to have a human crew drill deep below the surface, where there is thought to be enough liquid water and heat to sustain microbial life as we know it. It will be decades, at least, before we can muster the resources and money for such a project, even if society is willing to pay for it. Let’s say that we do eventually determine that microbial life existed or still exists on Mars. That would be fantastic, an enormous boost for origin-of-life studies and biology in general. But would it mean that science is suddenly liberated from all the limits that I have described? Hardly. If we find life on Mars, we will know that life arose in this solar system, and perhaps not even more than once. It may be that life originated on Mars and then spread to the earth, or vice versa. More importantly, we will be just as ignorant about whether life exists elsewhere in the universe, and we will still be facing huge obstacles to answering that question. Let’s say that engineers come up with a space transport method that boosts the velocity of spaceships by a factor of more than 10, to one million miles an hour. That spaceship would still require 3,000 years to reach the nearest star, Alpha Centauri. It’s possible that one of these days the radio receivers employed in our Search for Extraterrestrial Intelligence program, called SETI, will pick up electromagnetic signals – the alien equivalent of Seinfeld – coming from another star.
But it’s worth noting that most of the SETI proponents are physicists, who have an extremely deterministic view of reality. Physicists think that the existence of a highly technological civilization here on earth makes the existence of similar civilizations elsewhere highly probable. The real experts on life, biologists, find this view ludicrous, because they know how much contingency – just plain luck – is involved in evolution. Stephen Jay Gould, the Harvard paleontologist, has said that if the great experiment of life were re-run a million times over, chances are it would never again give rise to mammals, let alone mammals intelligent enough to invent television. For similar reasons Gould’s colleague Ernst Mayr, who may be this century’s most eminent evolutionary biologist, has called the search for extraterrestrial life a waste of time and money. The U.S. Congress apparently agrees with Mayr, because it terminated the funding for the SETI program three years ago. It’s now just getting by on private funds. 4. The Paradigm Shift Argument.
A surprising number of otherwise hard-nosed scientists, when confronted with the argument that science might be ending, start sounding like philosophical relativists, or social constructivists, or other doubters of scientific truth. They begin to sound, in other words, like the people who write for the postmodern journal Social Text, which last June was the victim of a hoax that was perpetrated by the New York University physicist Alan Sokal and subsequently made the front
page of The New York Times. According to these skeptics, science is a process not of discovery but of invention, like art or music or literature. We just think science can’t go any further because we can’t see beyond our current paradigms. In the future, we will submit to new paradigms that cause the scales to fall from our eyes and open up vast new realms of inquiry. This kind of thinking can be traced back to the philosopher Thomas Kuhn, who wrote the extremely influential book The Structure of Scientific Revolutions, and who died last June. But modern science has been much less revolutionary – much less susceptible to dramatic shifts in perspective – than Kuhn suggested. Particle physics rests on the firm foundation of quantum mechanics, and modern genetics, far from undermining the fundamental paradigm of Darwinian evolution, has bolstered it. If you view atoms and elements and the double helix and viruses and stars and galaxies as inventions, projections of our culture, which future cultures may replace with other convenient illusions, then you are unlikely to agree with me that science is finite. If science is as ephemeral as art, of course it can continue forever. But if you think that science is a process of discovery rather than merely of invention, if you believe that science is capable of achieving genuine truth, then you must take seriously the possibility that all the great, genuine paradigm shifts are behind us. 5. End of Science Is Just Semantic Trickery. My book had at least one serious shortcoming. In arguing that science will never achieve anything as fundamental as quantum mechanics or the theory of evolution, I should have said what I meant by “fundamental.” I’ll take a stab at that now. A fact or theory is fundamental in proportion to how broadly it applies both in space and in time. Both quantum mechanics and the theory of general relativity apply, as far as we know, throughout the entire universe at all times since its birth.
That makes these theories truly fundamental. Technically, all biological theories are less fundamental than the cornerstone theories of physics, because biological theories apply – as far as we know – only to particular arrangements of matter that have existed on our lonely little planet for the past 3.5 billion years. But biology has the potential to be more meaningful than physics because it more directly addresses a phenomenon we find especially fascinating: ourselves. In his 1995 book Darwin’s Dangerous Idea, Daniel Dennett argued persuasively that evolution by natural selection is “the single best idea anyone has ever had,” because it “unifies the realm of life, meaning and purpose with the realm of space and time, cause and effect, mechanism and physical law.”
I agree. Darwin’s achievement – especially when fused with Mendelian genetics into the new synthesis – has rendered all subsequent biology oddly anticlimactic, at least from a philosophical perspective. Take developmental biology, which addresses the transformation of a single fertilized cell into a multicellular creature. In her review of my book for The New York Times, Natalie Angier expressed the hope that “unifying insights that illuminate pattern formation in the developing embryo” would disprove my end-of-science thesis. But according to the eminent British biologist Lewis Wolpert, those “unifying insights” may already be behind us. Wolpert was
recently quoted in Science as saying that “the principles of development are understood and all that remains is to fill in the details.” But this scientific triumph has unfolded virtually unnoticed by the public. One reason is that developmental biology is excruciatingly complicated, and most science writers avoid it. Natalie Angier is one of the few writers talented enough to make the subject fun to read about. But another reason that developmental biology does not attract more attention may be that its findings fit so comfortably within the broader paradigm of evolutionary theory and DNA-based genetics. It is “normal science,” to use the phrase favored by Kuhn. Normal science solves puzzles that are posed by the prevailing paradigm but does not challenge the paradigm’s basic tenets. In a way, all biology since Darwin has been normal science. Even Watson and Crick’s discovery of the double helix, although it has had enormous practical consequences, merely revealed how heredity works on a molecular level; no significant revision of the new synthesis was required. 6. The Chaoplexity Gambit. Many modern scientists hope that advances in computers and mathematics will enable them to transcend their current knowledge and create a powerful new science. This is the faith that sustains the trendy fields of chaos and complexity. In my book I lump chaos and complexity together under a single term, chaoplexity, because after reading dozens of books about chaos and complexity and talking to scores of people in both fields, I realized that there is no significant difference between them. Chaoplexologists have argued that with more powerful computers and mathematics they can answer age-old questions about the inevitability, or lack thereof, of life, or even of the entire universe. They can find new laws of nature analogous to gravity or the second law of thermodynamics. They can make economics and other social sciences as rigorous as physics. They can find a cure for AIDS.
These are all claims that have been made by researchers at the Santa Fe Institute. These claims stem from an overly optimistic interpretation of certain developments in computer science. Over the past few decades, researchers have found that various simple rules, when followed by a computer, can generate patterns that appear to vary randomly as a function of time or scale. Let’s call this illusory randomness “pseudo-noise.” A paradigmatic example of a pseudo-noisy system is the mother of all fractals, the Mandelbrot set, which is an icon of the chaoplexity movement. The fields of both chaos and complexity have held out the hope that much of the noise that seems to pervade nature is actually pseudo-noise, the result of some underlying, deterministic algorithm. But the noise that makes it so difficult to predict earthquakes, the stock market, the weather and other phenomena is not apparent but very real. This kind of noisiness will never be reduced to any simple set of rules, in my view.
Of course, faster computers and advanced mathematical techniques will improve our ability to predict certain complicated phenomena. Popular impressions notwithstanding, weather forecasting has become more accurate over the last few decades, in part because of improvements in computer modeling. But an even more important factor is improvements in
data-gathering – notably satellite imaging. Meteorologists have a larger, more accurate database upon which to build their models and against which to test them. Forecasts improve through this dialectic between simulation and data-gathering. At some point, we are drifting over the line from science per se toward engineering. The model either works or doesn’t work according to some standard of effectiveness; “truth” is irrelevant. Moreover, chaos theory tells us that there is a fundamental limit to forecasting related to the butterfly effect. One has to know the initial conditions of a system with infinite precision to be able to predict its course. This is something that has always puzzled me about chaoplexologists: according to one of their fundamental tenets, the butterfly effect, many of their goals may be impossible to achieve. 7. What About the Human Mind? The human mind is by far the most wide-open frontier for science, mainly because it is still so profoundly mysterious, in spite of all the advances of modern neuroscience. In his bestseller “Listening to Prozac” the psychiatrist Peter Kramer portrayed us as marching inexorably toward a Brave New World in which we can fine-tune our moods and personalities with drugs. This vision is a fantasy. What the scientific literature actually says is that Prozac and other so-called wonder drugs are no more effective for treating depression and other common emotional disorders, statistically speaking, than the more primitive antidepressants, such as imipramine, which themselves are no more effective, statistically speaking, than talk therapy. Kramer was on firmer ground when he said, at the end of his book, that our understanding of our own minds is still “laughably primitive.” The question is, when, if ever, will that situation change? Last June I attended the annual meeting of the American Psychiatric Association in New York City, along with almost 20,000 other people.
There were therapists there who still admit to being Freudians. And why not? No theory or treatment for the mind has been shown to be significantly better than psychoanalysis. Cheaper, maybe, but that’s not a scientific criterion. The hot, up-and-coming treatment for depression, and even schizophrenia and other disorders, is electroshock therapy, which can cause severe memory loss and other side effects. That does not seem like a sign of progress to me. The science of mind has, in certain respects, become much more empirical and less speculative since the days of Freud. We have acquired an amazing ability to probe the brain, with microelectrodes, magnetic resonance imaging, positron-emission tomography and the like. Maybe all this work will culminate in a great new unified theory of and treatment for the mind. But I suspect it won’t. What I think neuroscience can and will accomplish is correlating specific physiological processes in the brain to specific mental functions (memory, perception and so forth) in ever-finer detail. This kind of nitty-gritty, empirical research should have profound practical consequences, such as providing better ways to diagnose and treat mental illness.
But neuroscience will not deliver what so many philosophers and scientists yearn for. It will not solve all the ancient philosophical mysteries relating to the mind: the mind-body problem, the problem of free will, the solipsism paradox, and so on. Nor will neuroscience demonstrate that consciousness is somehow a necessary component of existence, which is an idea that is alluring not only to New Agers but also to scientists and philosophers who should know better. This is a material world. We have all seen bodies without minds, but only psychics and psychotics have seen minds without bodies. The universe existed for billions of years before we
came along, and it will continue to exist for eons after we and our minds are gone. Psychologists, social scientists, neuroscientists and others seeking the key to the human psyche will periodically seize upon some "new" paradigm as the answer to their prayers. One paradigm that proves perennially alluring is Darwinian theory, which in its latest incarnation is called evolutionary psychology. But as crucial as it is for understanding life in general, Darwinian theory does not provide very deep insights into human nature, as I tried to show in "The New Social Darwinists," published in the October 1995 Scientific American. Darwinians often complain that their views of human nature are rejected because of the continuing dominance within academia of left-leaning scientists, who for political reasons insist that humanity is infinitely malleable. That’s just not true. If evolutionary theory had turned out to be a truly powerful paradigm for explaining human behavior, it would have been embraced by the scientific community. Noam Chomsky has said that we will probably always learn more about human nature from novels than from science. I agree.

8. What About Applied Science?

Some scientists grant that the basic rules governing the physical and biological realms may be finite, and that we may already have them more or less in hand. But they insist that we can still explore the consequences of these rules forever and manipulate them to create an endless supply of new materials, organisms, technologies and so forth. Proponents of this position (many of whom adhere to a quasi-scientific cult called nanotechnology) often compare science to chess. The rules of chess are quite simple, but the number of possible games that these rules can give rise to is virtually infinite. There’s some validity to this position. Applied science obviously has much further to go, and it is hard to know precisely where it might end.
That fact was vividly demonstrated by the story of Dolly the cloned lamb; many scientists had believed that cloning from adult cells was impossible. But I still believe (surprise, surprise) that the limits of applied science are also coming into sight. Let me offer several examples. It once seemed inevitable that physicists’ knowledge of nuclear fusion, which gave us the hydrogen bomb, would culminate in a cheap, clean, boundless source of energy. But after 50 years and billions of dollars of research, that dream has now become vanishingly faint. In the last few years, the U.S. has drastically cut back on its fusion budget, and plans for next-generation reactors have been delayed. Now even the most optimistic researchers predict that it will take at least 50 years before we have economically viable fusion reactors. Realists acknowledge that fusion energy is a dream that may never be fulfilled: the technical, economic and political obstacles are simply too great to overcome.
Turning to applied biology, the most dramatic achievement that I can imagine is immortality. Many scientists are now attempting to identify the precise causes of aging. It is conceivable that if they succeed in pinpointing the mechanisms that make us age, researchers might then learn how to block the aging process and to design versions of Homo sapiens that can live indefinitely. But evolutionary biologists suggest that immortality may be impossible to achieve. Natural selection designed us to live long enough to breed and raise our children. As a result, senescence does not stem from any single cause or even a suite of causes; it is woven
inextricably into the fabric of our being. One might have more confidence in scientists’ ability to crack the riddle of senescence if they had had more success with a presumably simpler problem: cancer. Since President Richard Nixon officially declared a Federal "war on cancer" in 1971, the U.S. has spent more than $30 billion on research. But overall mortality rates have remained pretty much flat since 1971, and in fact for the last 50 years. Treatments are also still terribly primitive. Physicians still cut cancer out with surgery, poison it with chemotherapy and burn it with radiation. Maybe someday all our research will yield a "cure" that renders cancer as obsolete as smallpox. Maybe not. Maybe cancer, and by extension mortality, is simply too complex a problem to solve. Paradoxically, biology’s inability to solve certain important problems may be its greatest hope. Harvey Sapolsky, a professor of social policy at MIT, touched on this paradox in an article for Technology Review back in December 1995. He noted that the major justification for the funding of science since the Second World War was national security, or more specifically the Cold War. Now that scientists no longer have the Evil Empire to justify their huge budgets, Sapolsky asked, what other goal can serve as a substitute? The answer he came up with was immortality. Most people think living longer, and possibly even forever, is desirable, he pointed out. But the best thing about making immortality the primary goal of science, Sapolsky said, is that it is almost certainly unattainable, so scientists can keep getting funds for more research forever.

9. The End of Science Is Itself an Ironic Hypothesis

I admit that, as a journalist, I’m overly fond of playing gotcha games. In my book, for example, I describe an interview with the great philosopher Karl Popper, who argued that scientists can never prove a theory is true; they can only falsify it, or prove it is false.
Naturally I had to ask Popper: Is your falsifiability hypothesis falsifiable? Popper was 90 then, but still intellectually armed and very dangerous. He put his hand on my hand, looked deep into my eyes, and said, very gently, "I don’t want to hurt you, but it is a silly question." Given my style of journalism, I guess it’s only fair that some critics have tried to give me a taste of my own medicine, pointing out triumphantly that my own end-of-science thesis is an example of ironic theorizing, since it is ultimately untestable and unprovable. This argument was put forth in reviews of my book in The Economist, American Scientist and elsewhere. But to paraphrase Karl Popper, "This is a silly objection." Compared to atoms, or stars, or galaxies, or genes or other objects of genuine scientific investigation, human culture is ephemeral; an asteroid could destroy us at any moment, and that would bring about the end not only of science but also of history, politics, art, you name it. So obviously any prediction about the future of human culture is an educated guess at best, at least compared to nuclear physics, or astronomy, or other disciplines that prove certain facts beyond a reasonable doubt. But just because we cannot know with certainty what our future is does not mean that we cannot make cogent arguments in favor of one scenario over another. I think my end-of-science scenario is much more plausible than the ones that I am trying to displace, in which we keep discovering profound new truths about the universe forever, or arrive at an end point in which we achieve perfect wisdom and mastery over nature.
10. The Lack-of-Imagination Argument

Of all the criticisms of my thesis, the one that really gets under my skin is that it reflects what Newsweek called a "failure of imagination." Actually, it is all too easy to imagine great discoveries just over the horizon. Our culture does it for us, with TV shows like Star Trek and movies like Star Wars and ads and political rhetoric that promise us tomorrow will be very different from, and almost certainly better than, today. Scientists, and science journalists too, are forever claiming that a huge revelation or breakthrough or holy grail awaits us just over the horizon. I have to admit, I’ve written my share of such stories. What I want people to imagine is this: What if there is no big thing over the horizon? What if what we have is basically what we are going to have? We are not going to invent warp-drive spaceships that can take us to other galaxies or even other universes. We are not going to become infinitely wise or immortal through genetic engineering. We are not going to discover the mind of God, as the British physicist Stephen Hawking once put it. We are not going to know why there is something rather than nothing. We’ll be stuck in a permanent state of wonder before the mystery of existence, which may not be such a terrible thing. After all, our sense of wonder is the wellspring not only of science but also of art, and literature, and philosophy, and religion. One final point. I’ve been accused by some critics, such as Phil Anderson, of having a hidden anti-science agenda. That’s ridiculous. I became a science writer because I love science. I think science is the most miraculous and noble and meaningful of all human creations. My conviction that science is ending is deeply disturbing to me, because I can’t imagine anything better for humanity to do than to try to figure out what we are, where we came from and where we are going.
I sincerely hope that in my lifetime some scientist, maybe even someone reading this posting, will discover something as important as natural selection or quantum mechanics or the expansion of the universe, something that spawns a whole new era in pure science and proves me wrong. But I also sincerely believe that isn’t going to happen.

THE REALITY CLUB

John Horgan Responds to George Johnson and Kevin Kelly
From: John Horgan
Submitted: 4/25/97

I just discovered, belatedly, the responses by George Johnson and Kevin Kelly to my posting here of a few weeks ago (post #8 – 4/2/97). Lest they think I was ignoring them, I’d like to respond, as briefly as possible. You know, George, it must drive scientists crazy to hear you and me going at it. One guy thinks science is all over, and the other thinks it never really got anywhere in the first place. Some choice! And they work for the New York Times and Scientific American, no less! No wonder science is having so much trouble!
As for your characterization of me as a Platonist, well, I don’t think that’s quite right. I’d call
myself a functionalist, which I define as follows. If a theory works so well that it does everything asked of it (prediction of new phenomena and extremely accurate description of old ones) you have to grant that it is true in a functional if not absolute sense. As it survives test after test, it becomes increasingly unlikely to be displaced by any better theory, and therefore it becomes de facto a final theory. That seems to be the case with quantum mechanics and general relativity, which are theories that you find implausibly odd, and with good reason. I don’t think these mathematical formalisms are absolute truths, or "discoveries," in the same way that the existence of galaxies or cell structures or elements are discoveries, but they come close just because they work so damn well. In my review of your book (and thanks for citing that in your note, because I go through all this in much more depth there) I call them "virtual discoveries." Kevin, I agree that my style is bumptious, excessively so, no doubt, at times. But I get frustrated (and I think George Johnson does too) by the excessively fawning stance of much science writing these days, and by books and articles that pass off philosophical speculation as science. If you don’t want to take my word that superstring theory is not verifiable in the same sense that the standard model is, read Weinberg’s Dreams of a Final Theory or Hawking’s Brief History of Time. They concede that the Planck scale, where superstrings supposedly dwell, can never be directly accessed through experiments. General relativity, although not nearly as well established as quantum mechanics, has been verified by plenty of different experiments. In fact, the Global Positioning System makes relativistic corrections in calculating positions. Relativity is plain old engineering now.

John Horgan

Kevin Kelly, George Johnson, Ernest B.
Hook, Paul Davies, and Lee Smolin on Horgan

From: Kevin Kelly
Submitted: 4/30/97
Re: Horgan Reply

I like the boldness of your argument, John Horgan, but you leave me behind whenever we arrive at this central issue of your own certainty that some fashionable science theories are unverifiable. Ironic science, you call it. When I queried you about why you reject superstring theory and not relativity (both to my mind equally abstract and out of the realm of ordinary experience) you replied:

"If you don’t want to take my word that superstring theory is not verifiable in the same sense that the standard model is, read Weinberg’s Dreams of a Final Theory or Hawking’s Brief History of Time. They concede that the Planck scale, where superstrings supposedly dwell, can never be directly accessed through experiments."

Well, I don’t take your word, nor theirs, on this, as much as I respect Weinberg and Hawking. This is where I depart from your very interesting hypothesis: that you offer only an ironic and not a scientific means to predict what is "unverifiable" and what is not.
By what means are we so sure superstring theory is unverifiable? Before general relativity was verified, how much certainty was there that it was verifiable? Very little at first, as a fine-grained reading of the history shows. Ditto for the more extreme notions in quantum theory, which of course even Einstein had doubts about anyone being able to prove. But now that they have been verified (and shown to be verifiable) they become "fundamental" in your view, whereas any far-out idea that has not been verified yet becomes "unverifiable" in your notion. This is what I would call "ironic science": something is ironic until it becomes fundamental. How can a theory migrate from being "ironic" to "fundamental"? Only because those terms have no exactness or meaning except in retrospect. Here is another way to describe the confusion in the way you present your idea. As far as I can tell, the only way you have of determining that a theory is verifiable is to verify it, to prove that it is "truable" by proving it is true. This is confusing the veracity of an argument with its verifiability. You need to clarify that muddlement, which pervades your book. If you want to propose a real scientific theory about science, you’ll have to come up with a way to determine a priori which notions are inherently and forever untestable, and then make some specific predictions (and ideally some unexpected predictions) about which theories are ironic and which are real. Until then, it is hard to take your arguments seriously, as intriguing as I find them.

Kevin Kelly

From: George Johnson
Submitted: 5/1/97
Loose Ends of Science

John Horgan’s recent position paper is certainly powerful, a rhetorical masterpiece. But on closer inspection I’m not sure the argument hangs together. Science seems to be ending for so many different reasons that it’s hard to keep track of them all.
Cosmology and biology are coming to an end, we’re told, because their reigning theories (the Big Bang and natural selection) are all but complete. Particle physics, on the other hand, is said to be ending largely because Congress cancelled the Superconducting Super Collider, making it impossible to proceed. The study of complex systems, John argues, never even got started, because it depends on computer simulations, which he sees as little more than very complicated video games. (There is something inherently fishy about computer models, he suggests, that somehow doesn’t apply to modeling with differential equations. I really don’t get the distinction.) Scientific questions that don’t fit into the above categories are declared to be unanswerable for metaphysical reasons. Neuroscience, for example, is supposed to be coming to an end because the nature of consciousness is forever unknowable. The deepest biological question of all (whether life is an aberration or something universal) will probably never be answered, John says, because we’re stuck inside the solar system and can only wait, in vain, for someone out there to contact us.
Is it true that the proposed title of the book was originally "The Ends of Science"? A different end-time scenario has been tailored to fit each scientific frontier. Maybe I’ll write a sequel called "The Loose Ends of Science." Our theory of particle physics, the ingeniously jury-rigged Standard Model, stops far short of unifying the electroweak and strong nuclear forces and depends mightily on the existence of a particle, the Higgs boson, that has never been seen. The Big Bang theory cannot explain something so basic as how structure arose in the universe without declaring that most of it is made from a kind of "nonbaryonic" dark matter not included in the Standard Model. If science is over, it is not because it is complete. The present theories are just running out of steam. Because of the insatiable human hunger to find pattern, the search for better theories will continue. And because no map can ever encompass all of creation, science can never really end.

GEORGE JOHNSON is a writer for The New York Times, working on contract from Santa Fe, NM. He formerly worked as a staff editor for "The Week in Review" section of The Times. His books include Fire in the Mind: Science, Faith, and the Search for Order (1995); In the Palaces of Memory: How We Build the Worlds Inside Our Heads (1991); Machinery of the Mind: Inside the New Science of Artificial Intelligence (1986); and Architects of Fear: Conspiracy Theories and Paranoia in American Politics (1984).

From: Ernest B. Hook
Submitted: 4/30/97
Re: Mr. Horgan’s Claims: Useless and Mildly Pernicious

I am not sure why Mr. Horgan’s remarks, forwarded to me by Norman Levitt via Alan Sokal, seem worthy of consideration by an active research scientist. Just as the response to the man who pronounced "an end to history" is that history continues nevertheless, so any claims that science is limited or ending or has an end are refuted by "it" continuing nevertheless. Certainly Mr.
Horgan is correct that each fact known, so to speak, is one less to discover, so the supply of facts, theories, explanatory paradigms and so on is presumably drying up. But that is tantamount to saying that each day the universe is winding down and we humans are one day closer to extinction. (Or that each poem written is one less that can be written, although the analogy here is less exact.) How useful is such an observation? The question is, where are we now on this huge time scale? We are operating now on a temporal microscale; Mr. Horgan is talking about events on a temporal macroscale. I don’t regard Mr. Horgan’s claim as a very fruitful area of discussion unless he could demonstrate how his thesis has direct and practical implications for current research strategies and programs. As to the argument that scientists are just "filling in details now," that same argument could have been made after the periodic table was discovered as well. The point is that yesterday’s dismissed "details" have often held the seeds of tomorrow’s great discoveries. So let us not worry if we are only working on what appear to Mr. Horgan to be mere details.
Moreover, Mr. Horgan’s claim may well be pernicious in its consequences if he can convince aspiring students that, since "science" has limits or is coming to an end, there is not much point in pursuing a scientific career. Certainly, some individuals have retrospectively cited Mr. Horgan’s or similar arguments to justify dropping out of science and doing something else, usually easier and less challenging (once they had tenure, at least). But these are often decisions by individuals who didn’t enjoy doing science much in the first place, and whose primary goals may have been the search for solutions to particular problems that have been answered since they entered the field. Once answered, they lost or lacked the curiosity to go on in other areas. The real question, it seems to me, is whether the issue is worth discussing at all. If for the sake of argument one granted his claim, insofar as it is correct, what difference would it make to the practice of science? Or to science policy? My own view is that Mr. Horgan’s claims are dangerous to the extent that anyone might actually be deflected by them to change any practice or approach to science or science policy.

Ernest B. Hook

ERNEST B. HOOK is Professor, School of Public Health, University of California, Berkeley.

From: Paul Davies
Submitted: 5/1/97
Re: John Horgan

Bravo, John, for a robust and entertaining defense of your thesis. I have a couple of points to make:

1. Life on Mars. As I am writing a book on this myself, I have thought a lot about the significance of the recent NASA "evidence". You are right that, if the features in the meteorite do turn out to be evidence for life on Mars, the chances are it came from Earth or vice versa. Clearly the planets are not isolated. However, it is possible to discriminate between contamination and an independent origin. Suppose Mars life was based on left-handed DNA, rather than right-handed as is Earth life.
That would be strong circumstantial evidence that life had happened twice in the solar system. Then it is a dead cert that it has happened wherever conditions allow, and that the universe is teeming with life. This would surely be a major advance in science and a transformation of our world view, and would also demonstrate that the laws of nature are "rigged" to make the emergence of life inevitable. I agree that the latter position is regarded as ludicrous by most biologists (though not by Christian de Duve), but that is why the discovery of an independently arising life form elsewhere would be so iconoclastic.

2. Consciousness. Maybe it is transitory, but we still don’t know how it arises, or what it takes for a system to be conscious, or why qualia (assuming you believe in them) exist, as they serve no evolutionary purpose. Even if consciousness is not a fundamental aspect of our universe, it is still a mystery yet to be solved. You can’t just shrug it aside as of no consequence because it may be limited to a tiny region of spacetime.
Congratulations on a stimulating essay!

Sincerely,
Paul Davies

PAUL DAVIES, described by the Washington Times as "the best science writer on either side of the Atlantic," is a professor of natural philosophy at the University of Adelaide, Australia, and author of more than 20 books including The Mind of God, Are We Alone, The Last Three Minutes (Science Masters Series) and About Time, which was shortlisted for the 1996 British Book Prize. In 1995 Davies was awarded the Templeton Prize for progress in religion, the world’s largest prize for intellectual endeavor.

From: Lee Smolin
Submitted: 5/5/97

As I’ve said several times before, John’s argument is not silly, and I don’t think he is making it in bad faith. He is also an interesting person, whom I enjoyed meeting some time ago. But I believe he is wrong, and it is not hard to explain why. The basic reason is that the "map of reality" and "narrative of creation" that he described, while enormous achievements, are full of holes, unanswered fundamental questions and, in some cases, basic inconsistencies. This is because the scientific revolution that produced these achievements is not yet finished, but has some way to go. An indication of how much of this revolution remains unfinished can be gotten by writing down a list of questions that we cannot yet answer: How do cells differentiate into different cell types? How does a single cell develop into a coherent organism? Why are there prokaryotes and eukaryotes, but apparently nothing in between? What is the exact story of how life began? How does the brain work? How did the galaxies form? What is responsible for the large-scale structure of the galaxies? What keeps the star formation rate of many spiral galaxies constant in time? Why were the initial conditions in the early universe so symmetric? What are the reasons for the values of the twenty-odd parameters of the standard models of particle physics and cosmology?
Why do those values have the property that they make it possible for stars,
galaxies and complex chemistry to form? How is gravitation consistent with quantum phenomena? What happens inside of black holes? What happens at the end point of black hole evaporation? Each of these questions is the focus of intensive work by thousands of very bright young and not-so-young scientists. Among people engaged in this work there is a strong sense of optimism that the next years will see dramatic breakthroughs. Each of these will add to the "map of reality" knowledge as fundamental as anything discovered in the twentieth century, and all will, sooner or later, lead to theories that are verified by observation and experiment. Even in quantum gravity and string theory (where there have been dramatic breakthroughs in recent years) there is a growing list of experimental predictions. (I wrote a paper some years ago cataloging the experimental predictions of quantum gravity and string theory, and the list has grown since.) It is true that these tests cannot yet be carried out, but I would not want to be in John’s position of betting that the thousands of bright people working in these areas around the world will never find a way to carry them out. For more arguments against the "end" of science, please see my last exchange with John in Edge number ??. (By the way, I notice that John seems to have stopped defining ironic science as science that could not even in principle be tested experimentally, given the ease with which even string theory evades that.) But in closing, I would like to remark that I find John’s stance disturbingly characteristic of the present moment. We are in a time of extraordinary change, with positive developments all around us.
In recent years democracy has expanded dramatically, the danger of war and the reach of totalitarianism have receded, the economy is stable and growing, our nation is being reinvigorated by a new wave of immigration, amazing things are happening in the arts, theater and dance, and in science and medicine there have been a slew of breakthroughs, from effective treatments for AIDS and certain forms of mental illness to all the new observational data in astronomy and cosmology. So why are people so pessimistic? Why is there so much talk of the end of this and that? This, for me, is the great unanswered question of the present moment.

LEE SMOLIN is a theoretical physicist; professor of physics and member of the Center for Gravitational Physics and Geometry at Pennsylvania State University; author of The Life of the Cosmos, forthcoming (Oxford).

Copyright © 1997 by Edge Foundation, Inc.
War or Peace

Neither doomed to violence nor peaceful by nature, we are shaped by the civilizations we create. Modern society spends a good deal of time, effort, and scientific resources on finding better ways to wage war. What if we directed just a fraction of that energy toward finding a better way to wage peace?

by John Horgan

As a science writer, I am sometimes asked what I consider to be the most important unsolved scientific problem. I used to rattle off pure science’s major mysteries: Why did the big bang bang? How did life begin on Earth, and does it exist anywhere else in the cosmos? How does a brain make a mind? Sometime after 9/11, however, I started replying that by far the biggest problem facing scientists—and all of humanity—is the persistence of warfare, or the threat thereof, as a means for resolving disputes between people. Skeptics might object that war is not a scientific issue. Certainly, it is a dauntingly complex phenomenon, with political, economic, and social ramifications. But the same could be said of problems such as global warming, population growth, and AIDS, all of which are being rigorously addressed by scientists. Moreover, I believe that the problem of warfare—unlike mysteries such as the origin of the universe or life or consciousness, which may prove to be intractable—can and will be solved. Research has already revealed enough about warfare to dispel two persistent, contradictory myths. One is the idea of the noble savage, which blames warfare on civilization and holds that humans in their primordial state were peaceful and loving. This is the implicit theme of Margaret Mead’s classic bestseller Coming of Age in Samoa. Mead describes the Polynesian island as a blissful utopia, whose inhabitants make love, not war. Actually, as critics of Mead have pointed out, Samoa has historically been wracked by warfare.
Indeed, as far back as anthropologists have peered into human history and prehistory, they have found evidence of group bloodshed. In War Before Civilization, Lawrence Keeley, an anthropologist at the University of Illinois, estimates that up to ninety-five percent of primitive societies engaged in at least occasional warfare, and many fought constantly. Tribal combat usually involved skirmishes and ambushes rather than pitched battles. But over time, the chronic fighting could produce mortality rates as high as fifty percent. Unfortunately, these revelations about the ubiquity of warfare have led some scholars to perpetuate a much more insidious myth: that warfare is a constant of the human condition, which can at best be controlled but never eradicated. Fatalists who take this position often describe war in Darwinian terms, as an inevitable consequence of innate male ambition and aggression. "Males have evolved to possess strong appetites for power," Harvard University anthropologist Richard Wrangham contends in Demonic Males, "because with extraordinary power comes extraordinary reproductive success."
As evidence for this hypothesis, Wrangham cites studies of societies such as the Yanomamo, a tribe scattered across the Amazonian region of Brazil and Venezuela.
Yanomamo men from different villages often engage in protracted feuds, marked by lethal raids and counterraids. Like most tribal societies, the Yanomamo are polygamous. Anthropologist Napoleon Chagnon, who has observed the Yanomamo for decades, found that killers had, on average, twice as many wives and three times as many children as nonkillers. But Chagnon, significantly, has rejected the notion that aggressive instincts compel Yanomamo warriors to fight. Truly compulsive, out-of-control killers, Chagnon explains, are quickly killed themselves, and don’t live long enough to have many wives and children. Successful warriors are usually quite controlled and calculating; they fight because that is how a male advances in their society. Moreover, many Yanomamo men have confessed to Chagnon that they loathe war and wish it could be abolished from their culture—and, in fact, rates of violence have recently dropped dramatically as Yanomamo villages have accepted the laws and mores of the outside world. History offers many other examples of warlike societies that rapidly became peaceful. Vikings were the scourge of Europe during the Middle Ages, but their Scandinavian descendants are among the most peaceful people on Earth. Similarly, early twentieth-century Japan was extremely belligerent; even Zen Buddhist leaders such as D.T. Suzuki, who later helped to popularize Buddhism in the West, encouraged attacks on China and other countries. But since its traumatic defeat in World War II, Japan has embraced pacifism. In fact, hard as it may be to believe, humanity as a whole has become much less violent than it used to be. Despite the massive slaughter that resulted from World Wars I and II, the rate of violent death for males in North America and Europe during the twentieth century was one percent. Worldwide, about 100 million men, women, and children died from war-related causes, including disease and famine, in the last century.
The total would have been 2 billion if our rates of violence had been as high as in the average primitive society. These statistics contradict the myth that war is a constant of the human condition. But they also suggest, contrary to the myth of the noble savage, that civilization has not created the problem of warfare; it is helping us solve it. We need more civilization, not less, if we wish to eradicate war. Civilization has given us legal institutions that resolve disputes by establishing laws, negotiating agreements, and enforcing them. These institutions, which range from local courts to the United Nations, have vastly reduced the risk of violence both within and between nations. They are what keep us from succumbing to the chronic violence that afflicts societies like the Yanomamo. Obviously, our institutions are far from perfect. Nations around the world still maintain huge arsenals, including weapons of mass destruction, and war keeps breaking out. So what should we do? Maybe we need more drastic measures to abolish war once and for all. One possibility would be to tinker with our physiologies to make ourselves less aggressive. Scientists have linked various genes and neurochemicals to violent tendencies. For example, many violent criminals have low levels of serotonin. Should we try to curb our aggressive instincts by altering our neurochemistry or genes?
Or maybe we should all have electrodes implanted in our brains, zapping us when we act or even think aggressively. This idea was actually proposed back in 1969 in Physical Control of the Mind, a book by Yale University neuroscientist Jose Delgado. To show his scheme‘s feasibility, Delgado implanted electrodes in the brains of psychiatric patients and manipulated their limbs and emotions with a remote-controlled device. He also carried out a
demonstration—reported on the front page of The New York Times—with a bull that had electrodes embedded in its brain. When the bull charged, Delgado pushed a button on a remote control, and the bull stopped in its tracks. The question is: Who gets the electrodes in the brain, and who gets the remote control? In his classic book On Aggression, biologist Konrad Lorenz acknowledges that it might be possible to "breed out the aggressive drive by eugenic planning." But that would be a huge mistake, Lorenz argues, because aggression is a vital part of our humanity. It plays a role in almost all human endeavors, including science, the arts, business, politics, and sports. In my hometown in upstate New York, a bunch of friends and I enjoy venting our aggression every winter by playing pond hockey. Aggression can even serve the cause of peace. I've known some extremely aggressive peace activists. Moreover, one of the most positive findings to emerge from recent studies of warfare is that few men relish lethal combat—and not just because they fear being wounded or killed. In On Killing, Lieutenant Colonel Dave Grossman, a psychologist, military science expert, and former U.S. Army ranger, asserts that most men abhor killing, even when it is sanctioned by their society. As evidence, Grossman cites military surveys, which reveal that during the American Civil War and both World Wars, as many as eighty percent of men in combat deliberately avoided firing at the enemy. After World War II, Grossman notes, the armed services revamped their training to make soldiers less reluctant to kill. As a result, most American soldiers who saw combat in Vietnam fired at the enemy. But Grossman contends that U.S. soldiers in Vietnam paid a heavy price for being transformed into more effective killers; a majority of combat veterans are thought to have suffered some symptoms of post-traumatic stress disorder, including nightmares, flashbacks, panic, depression, and guilt.
Mental health experts are already predicting that American soldiers fighting in Iraq will experience similar rates of posttraumatic stress disorder. Even if warfare is at least in part biologically based—and what human behavior isn‘t?—we cannot end it by altering our biology. Modern war is primarily a social and political phenomenon, and we need social and political solutions to end it. Many such solutions have been proposed, but all are problematic. One perennial plan is for all nations to yield power to a global institution that can enforce peace. This was the vision that inspired the League of Nations and the United Nations. But neither the United States nor any other major power is likely to entrust its national security to an international entity anytime soon. And even if they did, how would they ensure that a global military force does not become repressive? One encouraging finding to emerge from political science is that democracies rarely, if ever, fight each other. But does that mean democracies such as the United States should use military means to force countries with no democratic tradition to accept this form of governance? If history teaches us anything, it is that war often begets more war. Religion has been prescribed as a solution to war and aggression. After all, most religions preach love and forgiveness and prohibit killing, at least in principle. But, in practice, religion has often inspired, rather than inhibited, bloodshed.
Many feminists have predicted that as women gain more political power, we will evolve toward a more peaceful world. Females in all societies engage in violence much less than males do. In his book War and Gender, political scientist Joshua Goldstein estimates that
females have accounted for fewer than one percent of all those who have fought in wars throughout history. But he notes that women have also helped to perpetuate war throughout history by favoring warriors as mates and shunning cowards. During World War I, for example, women in Britain and the United States organized a campaign to hand out white feathers to men not wearing a uniform, shaming them for avoiding military service. Moreover, those few women who have risen to positions of great power in the modern era—notably Margaret Thatcher, Golda Meir, and Indira Gandhi—demonstrated that they could be just as aggressive as their male counterparts in leading their countries into war. Goldstein concludes that women "do not appear to be more peaceful, more oriented to nonviolent resolution of international conflicts, or less committed to state sovereignty and territorial integrity than are male leaders." In his new book Collapse: How Societies Choose to Fail or Succeed, Jared Diamond argues that many wars, both ancient and modern, spring from mismanagement of environmental resources. He notes, for example, that ethnic conflicts are only the proximate causes of the hostilities that have ravaged Rwanda, Somalia, and other African nations in the last decade. The ultimate cause is that overpopulation has led to deforestation, overgrazing, and soil depletion, and, hence, a Hobbesian struggle over dwindling resources. But resource scarcity has not played a significant role in other modern conflicts, such as the civil war that raged in the Balkans during the 1990s. War, it seems fair to say, is overdetermined—that is, it can spring from many different causes. Peace, if it is to be permanent, must be overdetermined too. Given the enormous complexity of the problem of war, I would like to see the United States establish a kind of Manhattan Project aimed at solving it once and for all.
The project could be administered by the United States Institute of Peace, a low-profile federal institution that Congress quietly created in 1984. Just as a percentage of the budget for the Human Genome Project is allocated to ethical issues, so too should part of the Department of Defense‘s budget be allocated to peace studies. One tenth of one percent— or $500 million, roughly twenty times the institute‘s current budget—should be sufficient. The institute could support and coordinate the efforts of other research programs. The Correlates of War project, founded at the University of Michigan by political scientist J. David Singer, has stockpiled statistical information about more than 1,000 conflicts—ranging from small-scale civil wars up to the World Wars—that have occurred since 1815. Even broader in its scope is the Human Relations Area Files, based at Yale University, which has compiled ethnographic reports on more than 1,000 different societies around the world, from the Navajo to the African !Kung. These databases can help researchers formulate and test hypotheses linking war to, say, child-rearing practices, women‘s rights, criminal punishment, education, freedom of the press, environmental management, economic policies, and religious beliefs.
Through grants and publications, a generously resourced Institute of Peace would encourage ambitious young scientists to see peace as a challenge at least as worthy of pursuit as a cure for AIDS or a cheap, clean, renewable source of energy. War research would be the ultimate multidisciplinary enterprise, drawing upon such diverse fields as game theory, neurobiology, evolutionary psychology, theology, ecology, political science, and economics. The short-term goal of peace researchers would be to find ways to reduce conflict in the world today, wherever it might occur. The long-term goal would be to explore
how nations can make the transition toward permanent disarmament: the elimination of armies, arms, and arms industries. In his recent book The Blank Slate, Harvard University psychologist Steven Pinker argues for what he calls a "tragic" view of human nature, which accepts that we are limited by our biological heritage. Pinker uses the term "utopian" to describe the belief that we can transcend human nature and create a perfect world. By utopian, Pinker means hopelessly naive. Many scientists no doubt dismiss the goal of global disarmament as utopian in this sense. These skeptics will argue that we will always need some military force to protect us from our own aggressive instincts; at the very least, some transnational organization should always retain a military force, perhaps equipped with nuclear weapons, to deter or suppress attacks from outlaw states or organizations, such as North Korea and al-Qaida. Certainly, total disarmament seems a remote possibility now. But can we really accept armies and armaments, including weapons of mass destruction, as permanent features of civilization? As recently as the late 1980s, global nuclear war still seemed like a distinct possibility. Then, incredibly, the Soviet Union dissolved and the Cold War ended peacefully. Apartheid also ended in South Africa without significant violence, and human rights have advanced elsewhere around the world. Just in the last century, we humans have split the atom, landed spacecraft on the moon and Mars, and cracked the genetic code. Deep down—perhaps because I have two young children—I have faith that we will solve the problem of war. If the capacity for war is in our genes, as many seem to fear these days, so is the capacity—and the desire—for peace. Even our most hawkish leaders claim that peace is their ultimate goal. As an agnostic, I have a hard time believing in God, but I believe in humanity's common sense, moral decency, and instinct for self-preservation.
We will abolish war someday. The only question is how, and how soon.
See: "Scientific & Nonscientific Approaches to Knowledge" by Jamie Hale. What are the differences between scientific and nonscientific approaches to knowledge? Basically, science is a specific way of analyzing information with the goal of testing claims. What sets science apart from other modes of knowledge acquisition is the use of what is commonly known as the scientific method. Giving a precise definition of the scientific method is difficult, as there is little consensus in the scientific community as to what that definition is. Although the scientific community has been slow to agree upon a clear definition, the scientific method is rooted in observation, experimentation, and knowledge acquisition through a process of objective reasoning and logic. One notable description of the scientific method comes from A. Aragon (Girth Control 2007, p. 9); he defines the scientific method as a "systematic process for acquiring new knowledge that uses the basic principle of deductive (and to a lesser extent inductive) reasoning. It's considered the most rigorous way to elucidate cause and effect, as
well as discover and analyze less direct relationships between agents and their associated phenomena." If you asked a panel of scientists to define the scientific method you would receive a large array of answers, but I think most would agree on the basic concepts. The following is an excerpt from Why People Believe Weird Things (Shermer 1997, p. 19): "Through the scientific method, we may form the following generalizations: Hypothesis: A testable statement accounting for a set of observations. Theory: A well-supported and well-tested hypothesis or set of hypotheses. Fact: A conclusion confirmed to such an extent that it would be reasonable to offer provisional agreement." When using the scientific method, one of the primary goals is objectivity. Proper use of the scientific method leads us to rationalism (basing conclusions on intellect, logic, and evidence). Relying on science also helps us avoid dogmatism (adherence to doctrine over rational and enlightened inquiry, or basing conclusions on authority rather than evidence). The nonscientific approach to knowledge involves informal kinds of thinking. It can be thought of as an everyday, unsystematic, uncritical way of thinking. Below I will discuss the major differences between the two.

Comparing Scientific & Nonscientific Approaches to Knowledge

                     Scientific             Nonscientific
General approach     Empirical              Intuitive
Observation          Controlled             Uncontrolled
Reporting            Unbiased               Biased
Concepts             Clear definitions      Ambiguous definitions
Instruments          Accurate/precise       Inaccurate/imprecise
Measurement          Reliable/repeatable    Non-reliable
Hypotheses           Testable               Untestable
Attitude             Critical               Uncritical

*Based on Table 1.1, p. 6, Research Methods in Psychology (Shaughnessy & Zechmeister 1990)

General approach

The scientific approach to knowledge is empirical. The empirical approach emphasizes direct observation and experimentation as a way of answering questions.
Intuition can play a role in idea formation, but eventually the scientist is guided by what direct observation and experimentation reveal to be true. Their findings are often counterintuitive. Many everyday judgments are based on intuition. This usually means going with a "gut feeling" or "what feels right." The Penguin Dictionary of Psychology defines intuition as a mode of understanding or knowing characterized as direct and immediate and occurring without conscious thought or judgment. Intuition can be a valuable cognitive process, but becoming too reliant on intuition can be a problem. What's right is often counterintuitive. Our intuition often fails to recognize what is actually true because our perceptions may be distorted by cognitive biases or because we neglect to weigh evidence appropriately. We tend to perceive a relationship between events
when none exists. We are also likely to notice events that are consistent with our beliefs and ignore ones that violate them. We remember the hits and forget the misses. Below is an example of the difference between the "gut feeling" approach and the one preferred by scientists. The excerpt is from The Demon-Haunted World (Sagan 1996): "I am frequently asked, 'Do you believe there's extraterrestrial intelligence?' I give the standard arguments: there are a lot of places out there, the molecules of life are everywhere, I use the word billions, and so on. Then I say it would be astonishing to me if there weren't extraterrestrial intelligence, but of course there is as yet no compelling evidence for it. Often I am asked next, 'What do you really think?' I say, 'I just told you what I really think.' 'Yes, but what's your gut feeling?' But I try not to think with my gut. If I'm serious about understanding the world, thinking with anything besides my brain, as tempting as that might be, is likely to get me in trouble. Really, it's okay to reserve judgment until the evidence is in."

Observation

When observing phenomena, a scientist likes to exert a specific level of control. When utilizing control, scientists investigate the effects of various factors one by one. A key goal for the scientist is to gain a clearer picture of those factors that actually produce a phenomenon. It has been suggested that tight control is the key feature of science. In the non-scientific approach, by contrast, observations are often made unsystematically and with little care. The non-scientist does not attempt to control the many factors that could affect the events being observed (conditions are not held constant). This lack of control makes it difficult to determine cause-and-effect relationships (too many confounds, or unintended independent variables). The factors that the researcher manipulates in order to determine their effect on behavior are called the independent variables.
In its simplest form, the independent variable has two levels. These two levels (or conditions) are the experimental condition, in which the treatment is present, and the control condition, in which the treatment is absent. The measures that are used to assess the effect of the independent variables are called dependent variables (Shaughnessy & Zechmeister 1990). Proper control techniques must be used if changes in the dependent variable are to be interpreted as a result of the effects of the independent variable. Scientists generally divide control techniques into three types: manipulation, holding conditions constant, and balancing. We have already discussed manipulation when we looked at the two levels of the independent variable. Holding conditions other than the independent variables constant is a key factor associated with control: it helps eliminate the possibility of confounds influencing the measured outcome. Balancing is used to control factors that cannot be manipulated or held constant (e.g., subjects' characteristics). The most common method of balancing is to assign subjects randomly to the different groups being tested. An example of random assignment would be putting names on slips of paper and drawing them from a hat. This does not mean there will be no differences in the subjects' characteristics, but the differences will probably be minor, and will generally have no effect on the results.

Reporting
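The "drawing names from a hat" balancing technique just described can be sketched in a few lines of code. The following is a minimal illustration, not part of the original article; the function name and subject labels are hypothetical. It shuffles the subject list and deals the subjects alternately into the two conditions:

```python
import random

def randomly_assign(subjects, conditions=("experimental", "control")):
    """Balancing by random assignment: shuffle the subjects, then deal
    them alternately into the conditions, like drawing names from a hat."""
    shuffled = list(subjects)           # copy so the caller's list is untouched
    random.shuffle(shuffled)            # every ordering is equally likely
    groups = {condition: [] for condition in conditions}
    for i, subject in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(subject)
    return groups

# Six hypothetical subjects end up in two groups of three.
groups = randomly_assign(["S1", "S2", "S3", "S4", "S5", "S6"])
print(len(groups["experimental"]), len(groups["control"]))  # 3 3
```

Because assignment depends only on chance, any subject characteristic is equally likely to land in either group, which is exactly what balancing is meant to achieve.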
How can two people witness the same event but see different things? This often occurs due to personal biases and subjective impressions, which are common traits among non-scientists. Their reports often go beyond what has just been observed and involve speculation. In the book Research Methods in Psychology (Shaughnessy & Zechmeister 1990) an excellent example is given demonstrating the difference between scientific and non-scientific reporting. An illustration shows two people running along the street, one in front of the other. The scientist would report it in just that way. The non-scientist may take it a step further and report that one person is chasing the other, or that they are racing. That is not objective information but speculation. Scientific reporting attempts to be objective and unbiased. One way to lessen the chance of biased reporting is to check whether other independent observers report the same findings. Even with this checkpoint the possibility of bias is still present. Following strict guidelines to prevent biased reporting decreases the chances of it occurring, although 100% unbiased reports rarely, if ever, occur.

Concepts

It is not unusual for people in everyday conversation to discuss concepts they really don't understand. Many subjects are discussed on a routine basis even though neither party knows exactly what the subject means. They may have an idea of what they are discussing (even though their ideas may be totally opposite), but they cannot precisely define the concepts they are talking about. In my opinion this leads to a bunch of jibber-jabber (dead-end conversation). The scientist attaches an operational definition (a definition based on the set of operations that produced the thing defined) to concepts. An example of an operational definition follows: hunger: a physiological need for food; the consequence of food deprivation.
Once an operational definition has been established, communication can move forward.

Instruments

In everyday life numerous instruments are used to measure events. Common instruments include gas gauges, weight scales, and timers. These instruments are not very precise compared to the more exact instruments used with the scientific approach. When you look at your gas gauge while driving, wouldn't it be nice to know how many miles you can travel on half a tank (or whatever the gauge registers)? Your bathroom scale weighs you in pounds. What if you weigh 100 lbs and 2 oz, and your friend weighs 100 lbs and 6 oz? Your friend is heavier, but the bathroom scale says you weigh the same. A common device used by coaches and athletes to measure sprint times is the handheld timer. These timers are highly inaccurate and read only to the tenths place; in the Olympics, winners and losers are often separated by hundredths of a second. The instruments we generally depend on in everyday life give us approximations, not exact measurements.

Measurement

An instrument can provide accuracy and precision but still lack value if the measurement is not valid. When determining the validity of a measurement one must ask: does the measurement really measure the concept in question? We discussed this aspect of measurement earlier when we spoke about operational definitions. In the fitness industry a
common measurement of overall flexibility is the sit-and-reach test. This test is conducted while sitting with your legs extended straight in front of you; the next step is extending your arms as you reach toward your toes. This test is a poor indicator of overall flexibility. Flexibility is joint-specific, speed-specific, and plane-of-movement-specific, so a battery of tests addressing each of these characteristics needs to be conducted to validly measure flexibility. Another important aspect of measurement is reliability. A measurement is reliable when it occurs consistently. In the context of science it is important for measurements to be reliable; the non-scientist gets by with less emphasis on reliability. Validity and reliability are independent qualities: a measurement can be valid while not being reliable, and a measurement can be reliable and lack validity. In general, it is easier to show that a measurement is reliable than to show that it is valid. Both of these qualities are important to good measurement.

Hypotheses

A hypothesis is a tentative explanation for a phenomenon. It often attempts to answer the questions "How?" and "Why?" Almost everyone has formed their own hypotheses to explain some elements of human behavior. Why do people steal? What causes people to take drugs? Why do some people do better socially than others? The scientist proposes hypotheses that are testable; the non-scientist suggests hypotheses that are untestable. Hypotheses are not testable if the concepts they refer to are not accurately defined (i.e., conceptualized). To say someone uses drugs because they are "mentally weak" is not testable: there is no universal operational definition of mentally weak. To say someone uses drugs because they have a specific chemical imbalance or neurological disorder is usually testable. Circular hypotheses are not testable either. If you say someone takes drugs because they enjoy taking drugs, you are using a circular hypothesis.
Liking and enjoying something mean the same thing; this hypothesis is untestable because it leads back to its own beginning. A hypothesis is also untestable if it lies outside the realm of science. To suggest someone steals because they are possessed by the devil is nonscientific: the devil is beyond the realm of scientific analysis because this concept cannot be scientifically studied, analyzed, or explained.

Attitude

The key attribute of scientists is skepticism. Scientists question everything (almost everything). They want to see proof and more proof. They understand that all knowledge is tentative. Many factors can interact and suggest causes for a specific event, so it is important to recognize these factors and to distinguish causation from correlation. It is also important to realize that all humans are fallible. The scientist has the attitude that there are no absolute certainties. R. A. Lyttleton suggests using the bead model of truth (Duncan R & Weston-Smith M 1977). This model depicts a bead on a horizontal wire that can move left or right, with a 0 at the far left end and a 1 at the far right end. The 0 corresponds to total disbelief and the 1 to total belief (absolute certainty). Lyttleton suggests that the bead should never reach either end. The more the evidence suggests a belief is true, the closer the bead should be to 1; the more unlikely the belief is to be true, the closer the bead should be
to 0. The non-scientist is ready to accept explanations based on insufficient evidence, or sometimes no evidence at all. They heard it on CNN or their teacher said it, so it must be true (the logical fallacy of appeal to authority). They reject notions because they can't understand them or because they don't respect the person making the claim. The scientist investigates the claim and critically evaluates the evidence. Even so, it is not practical to be skeptical all the time. Imagine that every time someone told you something you asked for evidence to support the claim: you would have very few friends and you would get very little accomplished.

Science or non-science?

I prefer the scientific approach to knowledge. The approach is not perfect, but it is the best method we have. Science is subject to change, and this is one of its best qualities: the possibility always remains that future evidence will cause a scientific theory to be changed. Scientific theories are provisional. In science the word theory is used differently than in everyday language (Johnson GB 2000). To a scientist, the word theory represents that of which he or she is most certain; in everyday language the word implies a guess. This often causes confusion for those unfamiliar with science, and leads to the common statement "It's only a theory." In conclusion, science cannot explain how and why everything happens. Science is limited to objective interpretations of observable occurrences. Most individuals incorporate some degree of science as well as non-science into their everyday lives. Science finds solutions to problems when solutions are possible. Some things that cannot be explained presently will be explained in the future; on the other hand, we must recognize that we will probably never be able to explain everything.
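Lyttleton's bead model from the "Attitude" section above can be given a simple quantitative reading: treat the bead's position as a probability strictly between 0 and 1 and slide it with Bayes' rule, which never drives it exactly to either end of the wire. The following is a minimal sketch, not part of the original article; the function name and the likelihood numbers are illustrative assumptions:

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Slide the 'bead' after one piece of evidence using Bayes' rule.
    If 0 < prior < 1 and both likelihoods are positive, the result
    stays strictly between 0 and 1: the bead never touches the ends."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

belief = 0.5                          # start undecided, mid-wire
for _ in range(10):                   # ten observations, each three times
    belief = update_belief(belief, 0.9, 0.3)  # likelier if the belief is true
print(belief)                         # very close to 1, but never exactly 1
assert 0.0 < belief < 1.0
```

Evidence that is equally likely either way (both likelihoods the same) leaves the bead where it is, which matches the scientist's habit of reserving judgment until informative evidence arrives.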
Final comments (Liviu Drugus): All these dialogues and answers, explanations and beliefs are just a(nother) starting point for new research. My vision of war and peace is less optimistic than John Horgan's. I think competition and war are very closely linked, so curbing the propensity to make war may also affect the propensity to compete. Put more directly: no war, no capitalist society. Is abolishing war another way of telling people to replace capitalism with communism? Is Marxist theory, finally and really, a theory of everything? Or would John Horgan like a global capitalist society without competitors? Maybe this is another (post-)Kantian utopia of eternal peace? This book review is, for sure, quite an original one. I took advantage of reading this challenging book to launch an invitation to discuss a very important issue: what is science nowadays, and what may we expect from it? My opinions on science and research are already spread through other articles of mine, but I thought it a good thing to reproduce material from the internet concerning the possible end of science.
