A commentary on Read, J., Kirsch, I., & McGrath, L. (2020). Electroconvulsive therapy for depression: A review of the quality of ECT versus sham ECT trials and meta-analyses. Ethical Human Psychology and Psychiatry, 21, 64-103.
Richard P. Bentall, Professor of Clinical Psychology, University of Sheffield
In a discipline to which controversy is no stranger, there are few controversies guaranteed to generate as much heat as that surrounding the benefits and costs of electroconvulsive therapy (ECT). The origins of the treatment can be traced to the work of the Hungarian psychiatrist Ladislaus von Meduna who, in the 1930s, began to use camphor to induce epileptic-like seizures in his patients on the assumption that epilepsy and ‘schizophrenia’ are incompatible conditions [1]. However, the first use of electricity for this purpose is attributed to Ugo Cerletti and his assistant Lucio Bini, psychiatrists at the University of Rome, who administered the first electrical shock treatment to a patient in 1938 [2]. In the 80 years since, the procedure has been made safer and more tolerable (modern patients are anaesthetised and receive a muscle relaxant before being administered shocks) and enthusiasm for the treatment has waxed and waned. In the early 1980s, when I began my career as a psychologist, the old psychiatric hospital in Denbigh in North Wales had a dedicated ECT suite which provided the procedure to a regular stream of inpatients, and I spent a memorable afternoon watching some of the hospital residents receiving the treatment. Since the closure of the large hospitals its use has declined, and there are now striking regional variations in the extent to which psychiatrists still prescribe it [3].
Current opinions about the treatment continue to be highly polarised. It is not hard to find people who argue that it is one of the most effective and undervalued treatments in the history of psychiatry [4], nor to find those who castigate it as a cruel and barbaric therapy, an unwelcome hangover from a time that embraced other barbaric treatments, long since abandoned, such as insulin coma therapy and the prefrontal leucotomy. Whenever the matter is aired in public, for example on Twitter, it does not take long for the debate to degenerate into vitriol. When I became embroiled in such a debate in early 2017, I was assured that ECT was effective for a bewildering variety of disorders – depression, psychosis, autism and even Parkinson’s Disease (it is hard to imagine how any single treatment could be effective for such a disparate group of conditions) – and was then subjected to a series of ad hominem attacks, for example by retired US psychiatrist Bernard Carroll, who insisted that I was not allowed to have an opinion because I had, “as much standing to bloviate on ECT for incapacitating depression as I do to bloviate on neurosurgery for epilepsy”.
The new review of the quality of ECT research published by John Read, Irving Kirsch and Laura McGrath [5] is likely to provoke similar discord; from past experience I expect that we will not have to wait long before someone, most likely with a medical qualification, tries to dismiss the paper as a piece of anti-psychiatric propaganda. And yet it should not be this way, and it almost never is this way in any other area of medicine. The question of whether the benefits of a treatment outweigh its costs is routine in modern therapeutics. Indeed, the systematic use of evidence to resolve this kind of question is the bedrock on which modern medicine is built.
What is evidence-based medicine?
In the 1970s, Cardiff University epidemiologist Archie Cochrane [6] provoked debate within the UK medical community by arguing that clinician judgment (whether or not a doctor thinks he/she can see that a treatment works in individual patients) is a very poor method of judging clinical effectiveness and by claiming that as many as 70% of treatments used by general medical practitioners were not, at that time, supported by scientific evidence. At about the same time, University of Birmingham epidemiologist Thomas McKeown [7] amassed historical data to show that many of the improvements in population health until then commonly attributed to advances in medicine were in fact attributable to social changes such as improvements in diet, better living conditions and better-engineered sewerage. These challenges stimulated what eventually became a global movement towards evidence-based medicine, which has been defined as the “conscientious, explicit and judicious use of current best evidence in making decisions about the individual care of patients” [8]. This approach, of course, required the systematic collection and analysis of data about the effectiveness of treatments and, although a wide range of methodologies have been developed for this purpose, two have dominated the field.
The first is the randomized controlled trial (RCT), in which patients with a particular condition are randomly assigned to the treatment of interest or to some kind of control treatment. Random assignment (effectively the toss of a coin, although computers are used for this purpose today) ensures that there is no bias in the assignment of patients to the two conditions (so, for example, the two treatment groups are, on average, equivalent in age and severity of illness). In a well-designed trial, assiduous efforts are also made to eliminate other sources of bias that might affect the results. Hence, the researchers who assess whether the patients have improved are kept ‘blind’ to which treatment the patients have received. Ideally, the patients themselves should also be blind to which treatment they have received (otherwise a depressed patient might get better simply because he/she believes he/she is in receipt of a particularly powerful therapy), in which case the trial is said to be a double-blind RCT[1]. For this reason, in many cases, the ideal control treatment is a placebo – something that looks and feels like the treatment of interest but is inert (a sugar pill in the case of a drug; sham treatment in the case of ECT). (RCT methodology involves many other safeguards against bias which cannot be discussed here for reasons of space; see Chapter 8 of my book ‘Doctoring the mind’ for a detailed account of how these trials are designed and analysed [9]).
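To make the randomization step concrete, here is an illustrative sketch (not from the original commentary) of block randomisation, one common way modern trial software assigns patients while keeping the two arms balanced; the function name and arm labels are my own inventions:

```python
import random

def block_randomise(n_patients, block_size=4, seed=42):
    """Allocate patients to 'ECT' or 'SECT' arms in balanced blocks.

    Block randomisation guarantees that after every `block_size`
    patients the two arms are exactly balanced in size, while the
    order of allocations within each block stays unpredictable.
    """
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible
    allocations = []
    while len(allocations) < n_patients:
        block = ['ECT'] * (block_size // 2) + ['SECT'] * (block_size // 2)
        rng.shuffle(block)  # random order within the block
        allocations.extend(block)
    return allocations[:n_patients]

arms = block_randomise(100)
print(arms.count('ECT'), arms.count('SECT'))  # prints: 50 50
```

Because allocation is generated by the computer rather than chosen by a clinician, neither age, severity of illness, nor anything else about a patient can influence which arm he or she ends up in.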
The second important methodology, which has become increasingly dominant in evidence-based medicine, follows from the observation that every RCT has limitations, with the consequence that the results from RCTs testing the same therapy often vary. How are clinicians to decide whether to use a therapy if some studies suggest it is effective and others suggest that it is not? Meta-analysis provides a solution to this problem in the form of a set of statistical procedures that can be used to combine the data from many trials to get an overall result and also to assess the consistency of the findings (obviously a high level of inconsistency between trials undermines confidence in the overall result). Again the details are complex but, suffice it to say, modern meta-analytic techniques include cunning methods for spotting whether the trial data are biased because the results of some unsuccessful trials have been hidden away in file drawers (amazingly, it might seem, it is possible to do this without finding the missing trials or even knowing how many have been lost), and also, when trials give different results, methods for identifying factors which distinguish between the successful trials and the unsuccessful ones.
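As an illustration of how trial results are pooled, the following is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis, with Cochran's Q and the I² statistic as simple measures of between-trial inconsistency. The effect sizes are made-up numbers for three hypothetical small trials, not data from the ECT literature:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance fixed-effect meta-analysis.

    Each trial contributes in proportion to its precision (1 / SE^2),
    so large, precise trials dominate the pooled estimate. Cochran's Q
    and I^2 quantify how inconsistent the trials are with one another;
    a high I^2 undermines confidence in the pooled result.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: weighted squared deviations of each trial from the pool
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of variability beyond what chance alone would produce
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled_se, q, i_squared

# Hypothetical standardised mean differences and standard errors
pooled, se, q, i2 = fixed_effect_meta([0.80, 0.10, -0.20], [0.40, 0.35, 0.45])
print(round(pooled, 2), round(i2, 1))
```

Note how three trials pointing in different directions yield a modest pooled effect together with a non-trivial I², exactly the pattern that should make a reader cautious about the overall result.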
What have Read, Kirsch and McGrath done?
In the early days of meta-analysis, its introduction provoked many criticisms. A common criticism was that the validity of a meta-analysis must depend on the quality of the trial data that is being analysed. A meta-analysis of many low quality and systematically biased trials may lead to a misleading conclusion. The solution to this problem has been to develop criteria for assessing the quality of clinical trials.
Read and his colleagues have applied quality criteria to trials in which ECT has been compared to sham ECT (SECT), in which the control group is treated in exactly the same way as the treated group (including receiving an anaesthetic and muscle relaxant) except that they do not receive an actual electric shock. It is my opinion, and the opinion of Read and his colleagues, that SECT is the crucial control in ECT research because there is a prima facie case for assuming that passing electricity through a patient’s brain is hazardous, and because ECT provides the ideal opportunity for a powerful placebo effect (the procedure is, in some sense, quite theatrical). From five published meta-analyses they were able to extract 11 published ECT vs SECT trials (the last published in 1985!) with patients suffering from major depression, and have shown that they are all of such low quality that it is difficult to draw any serious conclusions from them. Of these studies, four found ECT significantly superior to SECT at the end of treatment, five found no significant difference and two found mixed results. Of the two that followed up the patients beyond receipt of the last shock, one found evidence of a small effect for ECT whereas the other found stronger evidence that SECT was superior.
In my view, Read and his colleagues have actually been very generous in assigning quality scores to the studies. For example, they score the trials as high quality for sample size if there are at least ten patients in each group. In reality, the required sample size of a study should be calculated in advance based on assumptions about the expected effectiveness of a treatment (higher sample sizes are required to detect more modest effects) and, so far as I can see, this has never been done in any ECT vs SECT trial. By comparison, I have carried out two trials of psychological treatment for chronic fatigue syndrome which had sample sizes (N) of 114 [10] and 257 [11], two proof-of-concept trials of psychotherapy for psychosis with Ns = 54 [12] and 60[2], and four full-scale trials of psychological treatment for psychosis with Ns = 316 [13], 253 [14], 288 [15] and 256 [16]. The largest ECT vs SECT trial had an N of 77 and it seems unlikely to me that any safe conclusions could have been reached with such small numbers.
Numerous other shortcomings of the 11 studies are pointed out by Read and his colleagues, leading them to conclude that the meta-analyses from which the trials were extracted could not reach any safe verdict about the effectiveness of ECT.
What is to be done?
I believe that Read and his colleagues have done an important service in pointing out the parlous state of ECT research and I am grateful for their efforts, not least because this is a drum I have been banging for some time. The answer to the question of whether ECT is effective is not ‘no – it’s a barbaric therapy that causes more harm than good’, nor ‘yes – it is one of the most effective treatments in psychiatry’. The answer is: ‘we do not know, because the quality of the research on ECT is too poor to form a judgment’. In other words, ECT is a classic failure of evidence-based medicine.
In raising this issue with psychiatric colleagues who favour ECT, I have often been told that they have seen that it works with their own eyes (a sure indication that they simply fail to understand the concept of evidence-based medicine) or I have been challenged to explain what I would do if faced with a patient suffering from life-threatening depression (to which the answer is: try other therapies but, if there really was no alternative and death was imminent, I would probably try ECT in desperation despite the questionable evidence of its effectiveness). In the polarised world of ECT debates – in which most people seem to want either to stop ECT altogether (which is not my position) or to carry on using it without restriction – very few people seem to be willing to embrace the logical conclusion of the findings of Read and colleagues, which is that we need more and better quality data.
It is not hard to imagine how a good double-blind placebo-controlled RCT could be carried out. Ideally, it would be planned by a consortium of ECT advocates and experienced clinical trialists who are sceptical about the treatment (hint: I am an experienced clinical trialist). It could be conducted in many countries, but Britain would be a good place because, in the form of our National Institute for Health Research, we have an excellent infrastructure and funding system for clinical trials. The consortium should agree not only the design of the study but also, in advance of the study being carried out, a definition of what would count as evidence of effectiveness, as well as the methods that would be used to analyse the data. The protocol for the trial, including the definition of effectiveness and the analytical techniques to be employed, should be published in advance so that anyone can check adherence to the plan and the rigour of the study. To make the trial practical, there should be only two conditions – ECT and SECT – and the ECT protocol should be optimal from the point of view of the ECT advocates (it should be their best guess at the most effective ECT therapy). Importantly, the trial would have a sufficient number of patients to be able to detect modest effectiveness – as noted earlier, it is quite easy to calculate how many patients would be required but, at a guess based on my experience of psychotherapy trials, it would be more than a hundred in each group – and patients would be followed up for at least six months after the end of therapy (a therapy is of very limited use if its effects are transient; the shortest follow-up period in my psychotherapy trials has been 6 months after the end of therapy and the longest has been 18 months). A study of this kind would have a good chance of resolving the ECT controversy once and for all.
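The sample-size calculation mentioned above really is straightforward. The sketch below (my own illustration, not part of the original commentary) uses the standard normal-approximation formula for a two-arm comparison of means; the effect sizes plugged in are illustrative assumptions, not estimates from the ECT literature:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm trial comparing means.

    Standard normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the expected standardised effect size (Cohen's d).
    Smaller expected effects require many more patients to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A 'modest' effect (d = 0.35) versus a small effect (d = 0.2):
print(n_per_group(0.35), n_per_group(0.2))  # prints: 129 393
```

On these illustrative assumptions, detecting a modest effect with conventional power would indeed require somewhat more than a hundred patients per arm, and a small effect several hundred – which puts the largest ECT vs SECT trial's N of 77 in perspective.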
When I have put this proposal to leading advocates of ECT (I will spare them embarrassment by not naming them) they have invariably replied that such a trial would be unethical because it would involve withholding an effective treatment from some patients. Of course, the whole point of such a trial would be to discover whether ECT is effective. The very fact that they have failed to understand this point suggests that psychiatry as a profession needs a more thorough education in the idea of evidence-based medicine.
References
1. Fink, M., Electroshock: Healing mental illness. 1999, Oxford: Oxford University Press.
2. Cerletti, U., Electroshock therapy, in The great physiodynamic therapies in psychiatry: An historical reappraisal, A.M. Sackler, et al., Editors. 1956, Hoeber-Harper: New York. p. 91-120.
3. Read, J., et al., An audit of ECT in England 2011–2015: Usage, demographics, and adherence to guidelines and legislation. Psychology and Psychotherapy: Theory, Research and Practice, 2018. 91: p. 263-277.
4. Shorter, E. and D. Healy, Shock therapy: A history of electroconvulsive treatment in mental illness. 2007, New Brunswick: Rutgers University Press.
5. Read, J., I. Kirsch, and L. McGrath, Electroconvulsive therapy for depression: A review of the quality of ECT versus sham ECT trials and meta-analyses. Ethical Human Psychology and Psychiatry, 2020. 21: p. 64-103.
6. Cochrane, A.L., Effectiveness and efficiency: Random reflections on health services. 1972, London: Nuffield Provincial Hospital Trust.
7. McKeown, T., The role of medicine: Dream, mirage or nemesis? 1979, Oxford: Blackwell.
8. Sackett, D.L., et al., Evidence based medicine: What it is and what it isn’t. British Medical Journal, 1996. 312: p. 71-72.
9. Bentall, R.P., Doctoring the mind: Why psychiatric treatments fail. 2009, London: Penguin.
10. Powell, P., et al., Patient education to encourage graded exercise in chronic fatigue syndrome. British Journal of Psychiatry, 2004. 184: p. 142-146.
11. Wearden, A.J., et al., Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial. British Medical Journal, 2010. 340: p. c1777.
12. Morrison, A.P., et al., A randomised controlled trial of early detection and cognitive therapy for preventing transition to psychosis in high risk individuals: Study design and interim analysis of transition rate and psychological risk factors. British Journal of Psychiatry, 2002. 181 (Suppl 43): p. s78-s84.
13. Tarrier, N., et al., 18-month follow-up of a randomized controlled trial of cognitive-behaviour therapy in first episode and early schizophrenia. British Journal of Psychiatry, 2004. 184: p. 231-239.
14. Scott, J., et al., Cognitive behaviour therapy plus treatment as usual compared to treatment as usual alone for severe and recurrent bipolar disorders: a randomised controlled treatment trial. British Journal of Psychiatry, 2006. 188: p. 313-320.
15. Morrison, A.P., et al., Early Detection and Intervention Evaluation for people at risk of psychosis (EDIE-2): A multisite randomised controlled trial of cognitive therapy for at risk mental states. British Medical Journal, 2012. 344: p. e2233.
16. Priebe, S., et al., Effectiveness and cost-effectiveness of body psychotherapy in the treatment of negative symptoms of schizophrenia – a multi-centre randomised controlled trial. BMC Psychiatry, 2013. 13: p. 26.
[1] Trials of psychological treatments obviously cannot be double-blind: it is not possible to deliver psychotherapy without the patient’s knowledge. For this reason, in psychotherapy research single-blind trials (in which the researchers assessing improvement do not know which patients have received therapy) are the gold standard. This has sometimes led biological psychiatrists to make the ridiculous claim that there have been no proper trials of the effectiveness of psychotherapy when in fact there is a huge amount of trial evidence on this issue.
[2] Ongoing trial of EMDR for psychosis
Are there any other methodological approaches that could be used other than a straightforward randomised controlled trial?
How very interesting. Having met a former medical doctor who had been irreparably damaged by ECT, I would not be willing to participate in the trial.
The explanation of evidence-based medicine is also most useful – thank you.
Richard, as ‘an experienced clinical trialist’, what different methodologies do you think might be employed that would get round the objections you mention, or be superior methodologically?
Hi Richard, I’m aware that your grasp of formal logic is better than mine but it seems to me that you didn’t get round to mentioning a logical alternative: your psychiatrist colleagues might have a reasonable but different view of the evidence. Differences in interpretation like that form one of the limitations of evidence-based medicine. In the case of this review, the authors have a case to make that the randomised controlled trials of ECT v sham are all too old to report their methodology or conduct the trials to a standard now considered acceptable. They do not set out to review meta-analyses studying whether ECT is better than antidepressants, which even the sceptical authors have shown to be themselves effective in the past. NICE finds ECT is more likely to lead to response. That’s why no one will fund the trial you suggest. Do you want to collaborate on an RCT on ECT for schizophrenia instead?
Am I to assume that your lack of response means you are unable or unwilling to address the question?