The Doctrine of Double Effect: A Closed-Form Solution to a Computationally Hard Ethical Dilemma?

posted before 2019-09-15

As a cadet, I was required to take Philosophy and Ethical Reasoning, where a third of the course was on Just War Theory (JWT). I found this theory a convenient framework for the difficult ethical decisions made before and during war, but it did not sit right with my intuitions about what is actually ethical. To me, JWT seemed better classified as a compilation of heuristics than as a standalone normative ethical theory. This cognitive dissonance led me to several papers that backed up my intuitions with Bayesian Decision Theory based Utilitarianism. My paper goes into this dilemma, along with the deep problems of the Doctrine of Double Effect.


Abstract

Suppose the chief goal of ethics is to prevent atrocity and maximize what is most desirable (utility). Suppose that a normative ethical theory is recognized as credible insofar as it adheres to the following requirements: A1) it is consistent within itself; A2) insofar as the theory is strictly adhered to, the goal of ethics is furthered; A3) insofar as the theory is strictly adhered to, the goal of ethics is furthered better than it would be under counterfactual adherence to any other normative ethical theory. Notably, these requirements for gauging credibility give no weight to the probability of successful implementation by actors of different degrees of ethical intelligence: an ethical theory can be credible yet not pragmatically implementable by every actor who wishes to abide by it. Now suppose the Doctrine of Double Effect (DE) is credible not by virtue of its own existence, but by its credibility as a normative ethical theory. When evaluating DE against the criteria above, it seems that A1 holds; that A2 usually holds, though DE notably fails in several thought experiments; and that A3 does not hold, for the normative ethical theory of Bayesian Decision Theory based Utilitarianism (Bayesian utilitarianism) formally outperforms DE in all scenarios (Harsanyi, Bayesian Decision Theory and Utilitarian Ethics 1). In this paper, I will explain some of the problems with DE, including how it is too lenient toward combatants while also too restrictive in the gravest of scenarios. I will also explain how Bayesian utilitarianism provides a superior framework for situations that would typically invoke DE.

Background

The Doctrine of Double Effect (DE) was first written about by the Catholic philosopher Thomas Aquinas in his Summa Theologica (McIntyre). It was an attempt to balance the need for self-defense against the Catholic prohibition on killing. Aquinas argued that one can be justified in killing one's assailant, provided one does not intend the killing. DE has since evolved to generally "emphasize the distinction between causing a morally grave harm as a side effect of pursuing a good end and causing a morally grave harm as a means of pursuing a good end" (McIntyre). Joseph T. Mangan's formulation of DE is often used by philosophers: "A person may licitly perform an action that he foresees will produce a good effect and a bad effect provided that four conditions are verified at one and the same time: 1) that the action in itself from its very object be good or at least indifferent; 2) that the good effect and not the evil effect be intended; 3) that the good effect be not produced by means of the evil effect; 4) that there be a proportionately grave reason for permitting the evil effect" (Hill). While the principle of DE has application in several social domains, I will focus on its application in war under Just War Theory (JWT).

Application in Just War Theory

As summarized by Steven Lee, JWT needs a principle of discrimination, such as DE, in order to resolve the inconsistency between three of its primary tenets: "1) some wars are justifiable 2) civilians must not be attacked in war 3) military operations in general cannot be carried out without civilians being attacked" (Lee). DE resolves this inconsistency by removing the requirement to keep civilians safe and instead requiring that combatants not desire to harm them. But this does not go far enough. As Lee writes, "Not only should combatants not try to harm civilians; they should try not to harm them" (Lee). Thought experiments easily support this: the easiest way to destroy a command post is to drop a large bomb on it. This may have the byproduct of destroying much of a city, but as long as combatants focus their intention solely on destroying the military target, DE permits them to do so. Thus, a shallow version of DE is easily refuted. Michael Walzer proposes a stronger version of DE to satisfy the principle of discrimination, in which "Combatants choose, in respect to every military objective, a means to attempt to achieve it that is a member of the set of lesser-civilian-risk alternatives for that objective" (Lee). This is a better heuristic, but it still treats a military objective as the highest good in a given situation, and does not weigh noncombatant lives against the importance of the objective. If the only way to eliminate one low-threat combatant is to bomb him along with 20 noncombatants, this framework does not require or even encourage one to look for a different military objective elsewhere on the battlefield.
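To make the gap concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Option structure, the numbers, and the value_per_life weight are illustrations I have invented, not anything Walzer or Lee propose. It shows how a rule that fixes the objective and only minimizes civilian risk among means of achieving it can still endorse an option that a rule weighing lives against strategic value would reject.

```python
# A minimal sketch, with hypothetical numbers, contrasting Walzer's rule
# (fix the objective, then pick the lesser-civilian-risk means of achieving it)
# with a rule that also weighs an objective's strategic value against the
# noncombatant lives it costs, across all candidate objectives.

from dataclasses import dataclass

@dataclass
class Option:
    objective: str                   # which military objective this option pursues
    military_value: float            # hypothetical strategic value of success
    expected_civilian_deaths: float  # expected noncombatant deaths

options = [
    Option("low-threat combatant", 1.0, 20.0),  # the bombing from the example above
    Option("supply depot", 1.0, 0.0),           # a different objective elsewhere
]

def walzer_choice(opts, objective):
    """Walzer: given a fixed objective, choose the lesser-civilian-risk means."""
    candidates = [o for o in opts if o.objective == objective]
    return min(candidates, key=lambda o: o.expected_civilian_deaths)

def weighing_choice(opts, value_per_life=1.0):
    """Weigh strategic value against civilian deaths across ALL objectives."""
    return max(opts, key=lambda o: o.military_value
               - value_per_life * o.expected_civilian_deaths)

# Walzer's rule never questions the objective itself:
print(walzer_choice(options, "low-threat combatant"))  # still costs 20 lives
# A weighing rule redirects effort to the depot:
print(weighing_choice(options))
```

The point of the sketch is structural, not numerical: walzer_choice takes the objective as given, so no civilian cost, however large, can push it toward a different objective, while weighing_choice can.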

The Doctrine of Double Effect as a Heuristic

The biggest problem with DE is that it is a heuristic that does not recognize itself as one. Like other principles in Just War Theory, it claims to be a self-evident, perfect consolidation of axiomatic moral truths. To violate itself would be worse than destroying the whole world; one can never be permitted to purposely kill even one noncombatant (e.g., if Kim Jong Un cared about only two other people, one of those people), even when doing so would reliably accomplish a gravely important strategic goal (e.g., reliably prevent the extinction of the only known intelligent life in the universe). To illustrate, suppose that a prediction market composed of our best statisticians, strategists, psychologists, and game theorists truly believed that if we killed someone a dictator like Kim Jong Un holds most dear (and nobody would know that we were responsible), the probability of a nuclear war, which they and others had computed was hovering around 40%, would be cut in half, to 20%. Each of these experts had an excellent track record in prediction markets, and had built up considerable wealth by out-predicting other prediction markets again and again. DE would not allow us to kill one noncombatant for a 20-percentage-point reduction in the probability of a nuclear war that could be expected to kill at least 6 billion people and set back humanity's economy and technological progress for generations. A simple expected utility calculation that does not consider long-term population-ethics effects (which would only strengthen the result, unless one is a negative utilitarian) yields (0.4 × 6 billion) − (0.2 × 6 billion) − 1 = 1,199,999,999: killing the noncombatant in this situation has an expected utility of saving 1,199,999,999 lives. Our calculations need not stop there, however, for we could take in more information and sum more of our subjective probabilities and utilities, while for DE, the computation ends once one course of action involves intentionally causing an evil for a greater good.
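The arithmetic is simple enough to check mechanically. A minimal sketch, using only the figures stipulated in the thought experiment above:

```python
# Reproducing the expected-value arithmetic from the paragraph above.
# All figures are the hypothetical ones stipulated in the thought experiment.

p_war_before = 0.40            # estimated probability of nuclear war if we abstain
p_war_after  = 0.20            # estimated probability after the targeted killing
deaths_if_war = 6_000_000_000  # stipulated lower bound on deaths in a nuclear war
certain_cost = 1               # the one noncombatant killed with certainty

expected_lives_saved = (p_war_before - p_war_after) * deaths_if_war - certain_cost
print(f"{expected_lives_saved:,.0f}")  # 1,199,999,999
```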

A Better Ethical Decision-Making Criterion

If we can evaluate whether a model is meritorious after it has cranked out its solution, why can we not skip the model and compute a solution with the very tools we use to gauge the model's solutions? Further, why must we limit ourselves, when computing the optimally ethical course of action, to one particular model when we have competing models at play, and more information than can fit into any one of them, save Bayesian utilitarianism? If we have health care models that use Disability-Adjusted Life Years to compute desirable ways to distribute health care resources, game-theoretic models for determining the probability that other agents will take actions Y given our actions X, and historical data about what tends to work and what does not, why must we limit ourselves to computing strictly through the lens of a single model that considers so little of the available information? Instead of claiming to be a useful heuristic for causing ethical action (a useful guide for the overloaded combatant on the ground who is not well versed in utilitarianism and Bayesian Decision Theory), DE claims to wholly represent the ethical space in which expected utilities are summed over all possible future worlds. As a loose analogy, Newtonian physics does a decent job of modeling and predicting the world as it actually is, but if we really want to be precise at nano- and cosmic scales (and, in fact, at all scales), we need the equations of quantum mechanics and general relativity (Yudkowsky). The predictive power goes only one way: as computationally intractable as it is to model a Boeing 747 with quantum mechanics, quantum mechanical models will always give more accurate results than Newtonian ones. But this analogy is too charitable to DE, for it is computationally tractable to outperform DE in a multitude of situations. DE is not grounded in ethical reality, where actions probabilistically influence the physical world and have expected net effects on well-being and suffering. I am not alone in this perception of ethical reality; Peter Hammond defends a similar view in "Consequentialist Decision Theory and Utilitarian Ethics" (Hammond).

Further, Nobel Prize-winning economist John Harsanyi has written: "...The Bayesian criterion of expected-utility maximization is the only decision criterion consistent with rationality... the Bayesian criterion, together with the Pareto optimality requirement, inescapably entails a utilitarian theory of morality" (Harsanyi, Bayesian decision theory, rule utilitarianism, and Arrow's impossibility theorem 1).
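To be concrete about what the Bayesian criterion amounts to as a decision rule, here is a minimal sketch; the action names, outcomes, probabilities, and utilities are placeholders I have invented for illustration, not anything from Harsanyi's paper:

```python
# A minimal sketch of the Bayesian decision criterion: assign each available
# action a subjective probability distribution over outcomes, attach a utility
# to each outcome, and choose the action with the highest expected utility.
# The probabilities and utilities below are invented placeholders.

actions = {
    # action name -> list of (subjective probability, utility) pairs
    "strike":  [(0.8, -1.0), (0.2, -50.0)],
    "abstain": [(0.6,  0.0), (0.4, -100.0)],
}

def expected_utility(outcomes):
    """Sum utilities over possible outcomes, weighted by subjective probability."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best, expected_utility(actions[best]))  # strike -10.8
```

Unlike DE, this computation has no stopping rule triggered by intent: every action, including those that intentionally cause an evil for a greater good, is scored on the same scale, and more information simply refines the probabilities.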

In summary, DE is too lenient toward combatants: at best, it does not mandate weighing the value of noncombatant lives against the strategic importance of a military objective; at worst, it permits the wanton destruction of noncombatants insofar as combatants do not intend to kill them. A better alternative exists: recognize DE as an often-pragmatic guide to decision-making on the battlefield when one lacks the computational resources to determine what is actually ethical. Heuristics are extremely useful, but we sell ourselves short when we do not admit they are only guides in the space of computable ethical action.

References

Hammond, Peter. "Consequentialist Decision Theory and Utilitarian Ethics." Ethics, Rationality and Economic Behaviour, 1996.

Harsanyi, John C. "Bayesian Decision Theory and Utilitarian Ethics." The American Economic Review, vol. 68, no. 2, 1978, pp. 223-228. JSTOR, www.jstor.org/stable/1816692.

Harsanyi, John C. "Bayesian Decision Theory, Rule Utilitarianism, and Arrow's Impossibility Theorem." Theory and Decision, vol. 11, 1979, pp. 289-317. doi:10.1007/BF00126382.

Hill, Connor. "Journal of Moral Theology, Volume 6, Number 2." Google Books, books.google.com/books?id=bfctDwAAQBAJ.

Lee, Steven. "Double Effect, Double Intention, and Asymmetric Warfare." Ethics Center, USNA, isme.tamu.edu/JSCOPE04/Lee04.html.

Mangan, Joseph T. "An Historical Analysis of the Principle of Double Effect." Theological Studies, vol. 10, no. 1, 1949, p. 43.

McIntyre, Alison. "Doctrine of Double Effect." The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/win2014/entries/double-effect/.

Yudkowsky, Eliezer. "Reductionism 101." LessWrong, www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism.