Are you concerned about your health? Do you believe your daily newspaper when it says that something is bad for you? Or when it says that an amazing new cure for breast cancer has just been found?
If so, read on.
Dylan Evans is a man with a rare talent: he can explain complicated scientific matters in a way which makes them readily understandable to the average untrained reader. What is more, he does it briefly: Placebo runs to just 215 pages.
I first came across Dylan Evans as the author of a book called Emotion: The Science of Sentiment. If you’re even remotely involved in writing and publishing fiction, this is a book that you should definitely read. Because emotion, whether you realise it or not, is your stock in trade.
Placebo is subtitled The Belief Effect. A placebo, in its purest form, is a pill made of bread, an injection of salt water, or some other chemically inert substance. It has been observed, over many years, that patients who are given placebos, while being told that they are being given a real drug, may end up being cured of their illness. This raises a number of questions which Dylan Evans addresses in this book. Why do placebos cure people (if only sometimes)? What does the placebo effect tell us, if anything, about the body/mind interaction? And why are placebos more effective in relation to some conditions than others?
In short, this is a book which is potentially of interest to anyone who is concerned about their own health or anyone else’s.
Chapter One deals with the history of investigations into the placebo response; the next chapter focuses on which particular medical conditions are most affected by this form of treatment.
In the third chapter, Evans puts forward a possible explanation of how placebos work: the key mental event which triggers the physiological improvement, he argues, is the patient’s belief that he has been given an effective medical treatment.
Chapter Five looks at how human beings have evolved in such a way that their minds can somehow heal their bodies; and Chapter Six looks at some of the negative effects of placebos.
Towards the end of the book Evans takes a hard look at alternative medicine and psychotherapy, enquiring whether these forms of treatment can sensibly be regarded as anything more than placebos; and the final chapter looks at the ethical questions involved in administering and investigating ‘medicines’ which don’t actually contain any chemically effective substances.
For me, the most important parts of this book are the sections which highlight the technical difficulty of conducting research in this area; and indeed, the difficulties involved in undertaking any reliable experiments at all.
A good many years ago I took a PhD degree. And the one vital thing that I learnt from that exercise is that most forms of research depend, in their final stages, on statistical analysis of the results. But statistics, I discovered, is an absolute minefield for the unwary.
Here is an example drawn from my own area of interest, education. About thirty years ago, a group of researchers decided that it would be helpful to know whether modern, ‘progressive’ methods of teaching young children were, in reality, more effective than old-fashioned traditional methods. So they took two groups of children, one taught in the new way and the other taught in the old way, measured the children’s abilities at the beginning of the year, measured their abilities at the end of the year, and compared the results.
In order to compare the results they had to use statistical methods. And the researchers’ statistical analysis showed that traditional methods were best. This result was, quite literally, front-page news in the Daily Mail. It appeared to prove what a lot of parents (and grandparents) had long suspected.
However, there were those in the world of education who weren’t happy with this outcome at all. And so, with some difficulty, a second group of researchers managed to persuade the original group to let them look at the data. This second group did a second statistical analysis – of exactly the same figures, please note. And this time – guess what – the analysis gave precisely the opposite result. This time the progressive methods were shown to be the most effective form of teaching.
This second analysis did not make the front page of the Daily Mail.
After a few years, the curious case of the contradictory analyses attracted the attention of a third group of researchers. This third group was made up not of people who were directly involved in education, but of professional statisticians. And they wanted to know why it was that the two statistical analyses of the same data had yielded such different results.
So the third group took a long hard look at the data which had been collected. And the professional statisticians found… Well, they found that, for highly complicated statistical reasons, it was unsafe to draw any conclusions whatever from the data which the original group of researchers had collected.
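For readers who like to see the mechanics, here is a toy sketch of one mundane way in which two honest analyses of the same figures can point in opposite directions. The numbers are purely invented, and have nothing to do with the actual study: pool all the pupils and one method wins; compare like with like within each ability band and the other method wins.

```python
# Invented figures: progressive teaching produces the larger test-score
# gain within BOTH ability bands, yet traditional teaching looks better
# overall, because most of the traditionally-taught pupils happen to sit
# in the high-ability band.

# (method, ability_band, pupil_count, mean_score_gain)
data = [
    ("traditional", "high", 80, 9.0),
    ("traditional", "low",  20, 3.0),
    ("progressive", "high", 20, 10.0),
    ("progressive", "low",  80, 4.0),
]

def overall_mean(method):
    """Analysis 1: pool every pupil taught by this method."""
    rows = [(n, gain) for m, _, n, gain in data if m == method]
    return sum(n * gain for n, gain in rows) / sum(n for n, _ in rows)

def band_mean(method, band):
    """Analysis 2: look only within one ability band."""
    return next(g for m, b, _, g in data if m == method and b == band)

# Pooled: traditional appears superior (7.8 vs 5.2).
print(overall_mean("traditional"), overall_mean("progressive"))

# Within each band: progressive wins both comparisons.
for band in ("high", "low"):
    print(band, band_mean("progressive", band), band_mean("traditional", band))
```

This is the pattern statisticians call Simpson’s paradox; which analysis is the ‘right’ one depends entirely on how the pupils came to be assigned to the two methods in the first place.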
Dylan Evans quotes another instance where the same thing happened, this time in relation to an attempt to measure the beneficial effects of psychotherapy.
These stories nicely illustrate the difficulties surrounding any form of research, and particularly research which is going to have profound implications for the ways in which children are taught in schools, or for the kind of medicines which are administered to sick patients. Research in these areas is fraught with difficulty, and the technical problems of analysing the data are enormous.
What has this to do with placebos, you may be wondering. Well, one of the things that Dylan Evans explains, in his wonderfully clear way, is that much of the research in this area is either flawed in terms of its statistical analysis, or, worse, flawed in terms of its research design. To put it simply, there are lots of things about the use of placebos which it would be useful to know, but which we do not yet know.
I have been told, by people in a position to know what they are talking about, that an alarmingly high proportion of the research, even in ‘hard science’ areas such as chemistry, is rendered more or less useless by the unsatisfactory and unreliable methods used to analyse the data. This is because, despite their many virtues, chemists and physicists, and even certain types of mathematicians, really don’t know very much about statistics.
What is more, the world is so constituted that scientists are under constant pressure to come up with results which are favourable to the organisation which paid for the research.
Suppose you run a pharmaceutical company which has spent years, and untold millions, producing a drug for arthritis. The time comes to try it out and see how effective it is.
If you are the boss of the pharmaceutical company, what you do not want to be told is that your new drug doesn’t work. Or that it doesn’t work any better than a placebo. Or that it works but it’s not as good as your competitor’s drug. And so on. So you pay for some research. And because you’re paying for it, you design the research so that the chances of it coming up with any negative results are, shall we say, limited from the outset.
Is this wicked? Is this unethical, immoral, even criminal? Depends on your point of view. And on how it’s done. But be in no doubt that it happens.
Private Eye recently provided a link to an article by Richard Smith, a former editor of the British Medical Journal, in which he points out that studies funded by a company are four times as likely to have results favourable to the company as are studies funded from other sources.
You don’t even have to be guilty of fraud to achieve this. What you do is test the product against a form of treatment known to be inferior; or you trial your drug against a too-low dose of a competitor; or you make the competitor’s dose too high, so that it looks toxic; and so on.
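A crude simulation makes the under-dosing trick concrete. The dose-response numbers below are entirely invented; the point is only that once the sponsor chooses the comparator’s dose, the sponsor has largely chosen the result.

```python
import random

random.seed(1)

def trial_arm(true_effect, n):
    """Simulated symptom improvement for n patients (noise sd = 4)."""
    return [random.gauss(true_effect, 4.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Invented dose-response: the competitor's drug is actually the stronger
# one at its proper dose, but the sponsor's trial uses a half dose.
OUR_DRUG_EFFECT      = 5.0
COMPETITOR_FULL_DOSE = 6.5   # what a fair trial would use
COMPETITOR_HALF_DOSE = 3.0   # what the rigged trial uses

ours   = mean(trial_arm(OUR_DRUG_EFFECT, 500))
rigged = mean(trial_arm(COMPETITOR_HALF_DOSE, 500))
fair   = mean(trial_arm(COMPETITOR_FULL_DOSE, 500))

print(f"our drug vs under-dosed competitor: {ours:.1f} vs {rigged:.1f}")  # we 'win'
print(f"our drug vs full-dose competitor:   {ours:.1f} vs {fair:.1f}")    # we lose
```

No data are faked anywhere in this little trial; the dishonesty is entirely in the design.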
Last Saturday’s Times carried an article by Mark Henderson, based on a report in the journal PLoS Medicine, published by the Public Library of Science. This study points out that most of the published results of medical research are, in fact, false. They appear to be reliable at the time, but later studies show them to be wrong.
Many studies are based on a small number of participants and therefore produce results which later, larger studies fail to confirm. For example, one piece of ‘research’ recently demonstrated that some sunbathers are addicted to tanning; that finding rested on just 145 people.
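A quick simulation shows why a world full of small studies is bound to throw up false alarms. Everything below is invented: a thousand imaginary trials of a treatment with no effect whatever, twenty participants per arm, judged by the usual p < 0.05 standard.

```python
import random

random.seed(0)

def fake_study(n=20):
    """One small two-arm study of a treatment with NO real effect.

    Outcomes in both arms are pure noise; we run a standard two-sample
    t-test (two-sided critical value ~2.02 at 38 degrees of freedom)
    and report whether the study came out 'statistically significant'.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    se = ((var(a) + var(b)) / n) ** 0.5
    t = (mean(a) - mean(b)) / se
    return abs(t) > 2.02

false_positives = sum(fake_study() for _ in range(1000))
print(f"{false_positives} of 1000 null studies looked significant")
```

Roughly one null study in twenty clears the bar by chance alone; and since striking results are the ones that get written up and reported, the newspapers see a very unrepresentative sample of those thousand studies.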
Now you may think that we have come a long way from Dylan Evans’s book on placebos. But not so. The questions raised by Dylan Evans are those which concern us all. Which medicines work and which do not? And, if they do work, how do they work?
As we have seen, there are many obstacles to finding out the truth. These include honest errors based on faulty statistical analysis, and not-so-honest errors based on a desire to prove the value of a commercial product.
What does this mean for the poor bewildered patient, who is looking for a cure for his aching back?
Well, for a start, I suggest that it means that you should ignore more or less anything that appears in your newspaper about a ‘new miracle cure’. You can also ignore the results of any survey into what the public thinks about a particular topic. I have conducted one or two such surveys myself, and believe me, the idea that there is any such thing as ‘social science’ is so ludicrous a concept as to be a joke in very poor taste.
It may be that the results of any given survey are the best information we have about what the public thinks about anything. But that’s a long way from saying that the results can be relied on.
Dylan Evans’s conclusion, not surprisingly, is that we need to do more research into the placebo effect. ‘The power of the mind to heal the body may not be unlimited,’ he says, ‘but nor is it negligible…. The mind fights disease in many ways, and the most important of these is still by prompting us to take the right action.’