The Greatest Benefit to Mankind: A Medical History of Humanity
Explanations of why and how these modern, secular western attitudes have come about need to take many elements into account. Their roots may be found in the philosophical and religious traditions they have grown out of. They have been stimulated by economic materialism, the preoccupation with worldly goods generated by the devouring, reckless energies of capitalism. But they are also intimately connected with the development of medicine – its promise, project and products.
Whereas most traditional healing systems have sought to understand the relations of the sick person to the wider cosmos and to make readjustments between individual and world, or society and world, the western medical tradition explains sickness principally in terms of the body itself – its own cosmos. Greek medicine dismissed supernatural powers, though not macrocosmic, environmental influences; and from the Renaissance the flourishing anatomical and physiological programmes created a new confidence among researchers that everything that needed to be known could essentially be discovered by probing more deeply and ever more minutely into the flesh, its systems, tissues, cells, its DNA.
This has proved an infinitely productive inquiry, generating first knowledge and then power, including on some occasions the power to conquer disease. The idea of probing into bodies, living and dead (and especially human bodies) with a view to improving medicine is more or less distinctive to the European medical tradition. For reasons technical, cultural, religious and personal, it was not done in China or India, Mesopotamia or pharaonic Egypt. Dissection and dissection-related experimentation were performed only on animals in classical Greece, and rarely even then. A medicine that seriously and systematically investigated the stuff of bodies came into being thereafter – in Alexandria, then in the work of Galen, then in late medieval Italy. The centrality of anatomy to medicine’s project was proclaimed in the Renaissance and became the foundation stone for the later edifice of scientific medicine: physiological experimentation, pathology, microscopy, biochemistry and all the other later specialisms, to say nothing of invasive surgery.
This was not the only course that medicine could have taken; as is noted below, it was not the course other great world medical systems took, cultivating their own distinct clinical skills, diagnostic arts and therapeutic interventions. Nor did it enjoy universal approval: protests in Britain around 1800 about body-snatching and later antivivisectionist lobbies show how sceptical public opinion remained about the activities of anatomists and physicians, and suspicion has continued to run high. That was the direction, however, which western medicine followed, and, bolstered by science at large, it generated a powerful momentum, largely independent of its efficacy as a rational social approach to good health.
The emergence of this high-tech scientific medicine may be a prime example of what William Blake denounced as ‘single vision’, the kind of myopia which (literally and metaphorically) comes from looking doggedly down a microscope. Single vision has its limitations in explaining the human condition; this is why Coleridge called doctors ‘shallow animals’, who ‘imagine that in the whole system of things there is nothing but Gut and Body’. Hence the ability of medicine to understand and counter pathology has always engendered paradox. Medicine has offered the promise of ‘the greatest benefit to mankind’, but not always on terms palatable to and compatible with cherished ideals. Nor has it always delivered the goods. The particular powers of medicine, and the paradoxes which its rationales generate, are what this book is about.
It may be useful to offer a brief résumé of the main themes of the book, by way of a sketch map for a long journey.
All societies possess medical beliefs: ideas of life and death, disease and cure, and systems of healing. Schematically speaking, the medical history of humanity may be seen as a series of stages. Belief systems the world over have attributed sickness to ill-will, to malevolent spirits, sorcery, witchcraft and diabolical or divine intervention. Such ways of thinking still pervade the tribal communities of Africa, the Amazon basin and the Pacific; they were influential in Christian Europe till the ‘age of reason’, and retain a residual shadow presence. Christian Scientists and some other Christian sects continue to view sickness and recovery in providential and supernatural terms; healing shrines like Lourdes remain popular within the Roman Catholic church, and faith-healing retains a mass following among devotees of television evangelists in the United States.
In Europe from Graeco-Roman antiquity onwards, and also among the great Asian civilizations, the medical profession systematically replaced transcendental explanations by positing a natural basis for disease and healing. Among educated lay people and physicians alike, the body became viewed as integral to law-governed cosmic elements and regular processes. Greek medicine emphasized the microcosm/macrocosm relationship, the correlations between the healthy human body and the harmonies of nature. From Hippocrates in the fifth century BC through to Galen in the second century AD, ‘humoral medicine’ stressed the analogies between the four elements of external nature (fire, water, air and earth) and the four humours or bodily fluids (blood, phlegm, choler or yellow bile and black bile), whose balance determined health. The humours found expression in the temperaments and complexions that marked an individual. The task of regimen was to maintain a balanced constitution, and the role of medicine was to restore the balance when disturbed. Parallels to these views appear in the classical Chinese and Indian medical traditions.
The medicine of antiquity, transmitted to Islam and then back to the medieval West and remaining powerful throughout the Renaissance, paid great attention to general health maintenance through regulation of diet, exercise, hygiene and lifestyle. In the absence of decisive anatomical and physiological expertise, and without a powerful arsenal of cures and surgical skills, the ability to diagnose and make prognoses was highly valued, and an intimate physician-patient relationship was fostered. The teachings of antiquity, which remained authoritative until the eighteenth century and still supply subterranean reservoirs of medical folklore, were more successful in assisting people to cope with chronic conditions and soothing lesser ailments than in conquering life-threatening infections which became endemic and epidemic in the civilized world: leprosy, plague, smallpox, measles, and, later, the ‘filth diseases’ (like typhus) associated with urban squalor.
This personal tradition of bedside medicine long remained popular in the West, as did its equivalents in Chinese and Ayurvedic medicine. But in Europe it was supplemented and challenged by the creation of a more ‘scientific’ medicine, grounded, for the first time, upon experimental anatomical and physiological investigation, epitomized from the fifteenth century by the dissection techniques which became central to medical education. Landmarks in this programme include the publication of De humani corporis fabrica (1543) by the Paduan professor, Andreas Vesalius, a momentous anatomical atlas and a work which challenged truths received since Galen; and William Harvey’s De motu cordis (1628) which put physiological inquiry on the map by experiments demonstrating the circulation of the blood and the heart’s role as a pump.
Post-Vesalian investigations dramatically advanced knowledge of the structures and functions of the living organism. Further inquiries brought the unravelling of the lymphatic system and the lacteals, and the eighteenth and nineteenth centuries yielded a finer grasp of the nervous system and the operations of the brain. With the aid of microscopes and the laboratory, nineteenth-century investigators explored the nature of body tissue and pioneered cell biology; pathological anatomy came of age. Parallel developments in organic chemistry led to an understanding of respiration, nutrition, the digestive system and deficiency diseases, and founded such specialities as endocrinology. The twentieth century became the age of genetics and molecular biology.
Nineteenth-century medical science made spectacular leaps forward in the understanding of infectious diseases. For many centuries, rival epidemiological theories had attributed fevers to miasmas (poisons in the air, exuded from rotting animal and vegetable material, the soil, and standing water) or to contagion (person-to-person contact). From the 1860s, the rise of bacteriology, associated especially with Louis Pasteur in France and Robert Koch in Germany, established the role of pathogenic micro-organisms. Almost for the first time in medicine, bacteriology led directly to dramatic new cures.
In the short run, the anatomically based scientific medicine which emerged from Renaissance universities and the Scientific Revolution contributed more to knowledge than to health. Drugs from both the Old and New Worlds, notably opium and Peruvian bark (quinine), became more widely available, and mineral- and metal-based pharmaceutical preparations enjoyed a great if dubious vogue (e.g., mercury for syphilis). But the true pharmacological revolution began with the introduction of sulfa drugs and antibiotics in the twentieth century, and surgical success was limited before the introduction of anaesthetics and antiseptic operating-room conditions in the mid-nineteenth century. Biomedical understanding long outstripped breakthroughs in curative medicine, and the retreat of the great lethal diseases (diphtheria, typhoid, tuberculosis and so forth) was due, in the first instance, more to urban improvements, superior nutrition and public health than to curative medicine. The one early and striking instance of the conquest of disease – the introduction first of smallpox inoculation and then of vaccination – came not through ‘science’ but through embracing popular medical folklore.
From the Middle Ages, medical practitioners organized themselves professionally in a pyramid with physicians at the top and surgeons and apothecaries nearer the base, and with other healers marginalized or vilified as quacks. Practitioners’ guilds, corporations and colleges received royal approval, and medicine was gradually incorporated into the public domain, particularly in German-speaking Europe where the notion of ‘medical police’ (health regulation and preventive public health) gained official backing in the eighteenth century. The state inevitably played the leading role in the growth of military and naval medicine, and later in tropical medicine. The hospital sphere, however, long remained largely the Church’s responsibility, especially in Roman Catholic parts of Europe. Gradually the state took responsibility for the health of emergent industrial society, through public health regulation and custody of the insane in the nineteenth century, and later through national insurance and national health schemes. These latter developments met fierce opposition from a medical profession seeking to preserve its autonomy against encroaching state bureaucracies.
The latter half of the twentieth century has witnessed the continued phenomenal progress of capital-intensive and specialized scientific medicine: transplant surgery and biotechnology have captured the public imagination. Alongside, major chronic and psychosomatic disorders persist and worsen – jocularly expressed as the ‘doing better but feeling worse’ syndrome – and the basic health of the developing world is deteriorating. This situation exemplifies and perpetuates a key facet and paradox of the history of medicine: the unresolved disequilibrium between, on the one hand, the remarkable capacities of an increasingly powerful science-based biomedical tradition and, on the other, the wider and unfulfilled health requirements of economically impoverished, colonially vanquished and politically mismanaged societies. Medicine is an enormous achievement, but what it will achieve practically for humanity, and what those who hold the power will allow it to do, remain open questions.
The late E. P. Thompson (1924–1993) warned historians against what he called ‘the enormous condescension of posterity’. I have tried to understand the medical systems I discuss rather than passing judgment on them; I have tried to spell them out in as much detail as space has permitted, because engagement with detail is essential if the cognitive power of medicine is to be appreciated.
Eschewing anachronism, judgmentalism and history by hindsight does not mean denying that there are ways in which medical knowledge has progressed. Harvey’s account of the cardiovascular system was more correct than Galen’s; the emergence of endocrinology allowed the development in the 1920s of insulin treatments which saved the lives of diabetics. But one must not assume that diabetes then went away: no cure has been found for that still poorly understood disease, and it is becoming more prevalent as a consequence of western lifestyles. Indeed one could argue that the problem is now worse than when insulin treatment was discovered.
Avoiding condescension equally does not mean one must avoid ‘winners’ history. This book unashamedly gives more space to the Greeks than the Goths, more attention to Hippocrates than to Greek root-gatherers, and stresses strands of development leading from Greek medicine to the biomedicine now in the saddle. I do not think that ‘winners’ should automatically be privileged by historians (I have myself written and advocated writing medical history from the patients’ view), but there is a good reason for bringing the winners to the foreground – not because they are ‘best’ or ‘right’ but because they are powerful. One can study winners without siding with them.
Writing this book has not only made me more aware than usual of my own ignorance; it has brought home the collective and largely irremediable ignorance of historians about the medical history of mankind. Perhaps the most celebrated physician ever is Hippocrates, yet we know literally nothing about him. Neither do we know anything concrete about most of the medical encounters there have ever been. The historical record is like the night sky: we see a few stars and group them into mythic constellations. But what is chiefly visible is the darkness.
CHAPTER II THE ROOTS OF MEDICINE
PEOPLES AND PLAGUES
IN THE BEGINNING WAS THE GOLDEN AGE. The climate was clement, nature freely bestowed her bounty upon mankind, no lethal predators lurked, the lion lay down with the lamb and peace reigned. In that blissful long-lost Arcadia, according to the Greek poet Hesiod writing around 700 BC, life was ‘without evils, hard toil, and grievous disease’. All changed. Thereafter, wrote the poet, ‘thousands of miseries roam among men, the land is full of evils and full is the sea. Of themselves, diseases come upon men, some by day and some by night, and they bring evils to the mortals.’
The Greeks explained the coming of pestilences and other troubles by the fable of Pandora’s box. Something similar is offered by Judaeo-Christianity. Disguised in serpent’s clothing, the Devil seduces Eve into tempting Adam to taste the forbidden fruit. By way of punishment for that primal disobedience, the pair are banished from Eden; Adam’s sons are condemned to labour by the sweat of their brow, while the daughters of Eve must bring forth in pain; and disease and death, unknown in the paradise garden, become the iron law of the post-lapsarian world, thenceforth a vale of tears. As in the Pandora fable and scores of parallel legends the world over, the Fall as revealed in Genesis explains how suffering, disease and death become the human condition, as a consequence of original sin. The Bible closes with foreboding: ‘And I looked, and behold a pale horse,’ prophesied the Book of Revelation: ‘and his name that sat on him was Death, and Hell followed with him. And power was given unto them over the fourth part of the earth, to kill with sword, and with hunger, and with death, and with the beasts of the earth.’
Much later, the eighteenth-century physician George Cheyne drew attention to a further irony in the history of health. Medicine owed its foundation as a science to Hippocrates and his successors, and such founding fathers were surely to be praised. Yet why had medicine originated among the Greeks? It was because, the witty Scotsman explained, being the first civilized, intellectual people, with leisure to cultivate the life of the mind, they had frittered away the rude vitality of their warrior ancestors – the heroes of the Iliad – and so had been the first to need medical ministrations. This ‘diseases of civilization’ paradox had a fine future ahead of it, resonating throughout Nietzsche and Freud’s Civilization and its Discontents (1930). Thus to many, from classical poets up to the prophets of modernity, disease has seemed the dark side of development, its Jekyll-and-Hyde double: progress brings pestilence, society sickness.
Stories such as these reveal the enigmatic play of peoples, plagues and physicians which is the thread of this book, scotching any innocent notion that the story of health and medicine is a pageant of progress. Pandora’s box and similar just-so stories tell a further tale, moreover: that plagues and pestilence are not acts of God or natural hazards; they are of mankind’s own making. Disease is a social development no less than the medicine that combats it.
In the beginning … Anthropologists now maintain that some five million years ago in Africa there occurred the branching of the primate line which led to the first ape men, the low-browed, big-jawed hominid Australopithecines. Within a mere three million years Homo erectus had emerged, our first entirely upright, large-brained ancestor, who learned how to make fire and use stone tools, and eventually developed speech. Almost certainly a carnivorous hunter, this palaeolithic pioneer fanned out a million years or so ago from Africa into Asia and Europe. Thereafter a direct line leads to Homo sapiens, who emerged around 150,000 BC.
The life of early mankind was not exactly arcadian. Archaeology and palaeopathology give us glimpses of forebears who were often malformed, racked with arthritis and lamed by injuries – limbs broken in accidents and mending awry. Living in a dangerous, often harsh and always unpredictable environment, their lifespan was short. Nevertheless, prehistoric people escaped many of the miseries popularly associated with the ‘fall’; it was later developments which exposed their descendants to the pathogens that brought infectious disease and have since done so much to shape human history.
The more humans swarmed over the globe, the more they were themselves colonized by creatures capable of doing harm, including parasites and pathogens. There have been parasitic helminths (worms), fleas, ticks and a host of arthropods, which are the bearers of ‘arbo’ (arthropod-borne) infections. There have also been the micro-organisms like bacteria, viruses and protozoans. Their very rapid reproduction rates within a host provoke severe illness but, as if by compensation, produce in survivors immunity against reinfection. All such disease threats have been and remain locked with humans in evolutionary struggles for the survival of the fittest, which have no master plot and grant mankind no privileges.
Despite carbon-dating and other sophisticated techniques used by palaeopathologists, we lack any semblance of a day-to-day health chart for early Homo sapiens. Theories and guesswork can be supported by reference to so-called ‘primitive’ peoples in the modern world, for instance Australian aborigines, the Hadza of Tanzania, or the !Kung San bush people of the Kalahari. Our early progenitors were hunters and gatherers. Pooling tools and food, they lived as nomadic opportunistic omnivores in scattered familial groups of perhaps thirty or forty. Infections like smallpox, measles and flu must have been virtually unknown, since the micro-organisms that cause contagious diseases require high population densities to provide reservoirs of susceptibles. And because of the need to search for food, these small bands did not stay put long enough to pollute water sources or accumulate the filth that attracts disease-spreading insects. Above all, isolated hunter-foragers did not tend cattle and the other tamed animals which have played such an ambiguous role in human history. While meat and milk, hides and horns made civilization possible, domesticated animals proved perennial and often catastrophic sources of illness, for infectious disease riddled beasts long before spreading to humans.
Our ‘primitive’ ancestors were thus practically free of the pestilences that ambushed their ‘civilized’ successors and have plagued us ever since. Yet they did not exactly enjoy a golden age, for, together with dangers, injuries and hardships, there were ailments to which they were susceptible. Soil-borne anaerobic bacteria penetrated through skin wounds to produce gangrene and tetanus; anthrax and rabies were picked up from animal predators like wolves; infections were acquired through eating raw animal flesh, while game would have transmitted the microbes of relapsing fever (like typhus, a louse-borne disease), brucellosis and haemorrhagic fevers. Other threats came from organisms co-evolving with humans, including tapeworms and such spirochaetes as Treponema, the agent of syphilis, and the similar skin infection, yaws.
Hunter-gatherers being omnivores, they were probably not malnourished, at least not until rising populations had hunted to extinction most of the big game roaming the savannahs and prairies. Resources and population were broadly in balance. Relative freedom from disease encouraged numbers to rise, but all were prey to climate, especially during the Ice Age which set in from around 50,000 BC. Famine took its toll; lives would have been lost in hunting and skirmishing; childbirth was hazardous, fertility probably low, and infanticide may have been practised. All such factors kept numbers in check.
For tens of thousands of years there was ample territory for dispersal, as pressure on resources drove migration ‘out of Africa’ into all corners of the Old World, initially to the warm regions of Asia and southern Europe, but then farther north into less hospitable climes. These nomadic ways continued until the end of the last Ice Age (the Pleistocene) around 12,000–10,000 years ago brought the invention of agriculture.
Contrary to the Victorian assumption that farming arose out of mankind’s inherent progressiveness, it is now believed that tilling the soil began because population pressure and the depletion of game supplies left no alternative: it was produce more or perish. By around 50,000 BC, mankind had spilled over from the Old World to New Guinea and Australasia, and by 10,000 BC (perhaps much earlier) to the Americas as well (during the last Ice Age the lowering of the oceans made it possible to cross by land bridge from Siberia to Alaska). But when the ice caps melted around ten thousand years ago and the seas rose once more, there were no longer huge tracts of land filled with game but empty of humans and so ripe for colonization. Mankind faced its first ecological crisis – its first survival test.
Necessity proved the mother of invention, and Stone Age stalkers, faced with famine – elk and gazelle had thinned out, leaving hogs, rabbits and rodents – were forced to grow their own food and settle in one place. Agriculture enhanced mankind’s capacity to harness natural resources, selectively breeding wild grasses into domesticated varieties of grains, and bringing dogs, cattle, sheep, goats, pigs, horses and poultry under control. This change had the rapidity of a revolution: until around 10,000 years ago, almost all human groups were hunter-gatherers, but within a few thousand years cultivators and pastoralists predominated. The ‘neolithic revolution’ was truly epochal.
In the Fertile Crescent of the Middle East, wheat, barley, peas and lentils were cultivated, and sheep, pigs and goats herded; the neolithic peoples of south-east Asia exploited rice, sweet potatoes, ducks and chickens; in Mesoamerica, it was maize, beans, cassava, potatoes and guinea pigs. The land which a nomadic band would have stripped like locusts before moving on was transformed by new management techniques into a resource capable of supporting thousands, year in, year out. And once agriculture took root, with its systematic planting of grains and lentils and its animal husbandry, numbers went on spiralling, since more could be fed. The labour-intensiveness of clearing woodland and scrub, weeding fields, harvesting crops and preparing food encouraged population growth and the formation of social hierarchies, towns, courts and kingdoms. But while agriculture rescued people from starvation, it unleashed a fresh danger: disease.