Your Acts Need Not Be Universalizable

I see a particular thinking error every now and then, yet I do not recall ever seeing it be addressed properly. What is this error?

It is the belief that one’s acts can only be ethical if one can, in good conscience, will that everyone else perform them. You may know this as Kant’s categorical imperative. Specifically, Kant wrote, “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

Frankly, this is not the right way to go about things. We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim. To put it more formally, Bayesian decision theory requires an agent to calculate expected utilities over the entire range of possible outcomes; we should not optimize strictly for one outcome (everyone acting according to the same maxim) when we can also consider other plausible outcomes in our decision calculus.
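To make the contrast concrete, here is a minimal sketch in Python (not from the original post) of the two decision rules. The outcome probabilities and utilities, and the helper expected_utility, are purely hypothetical illustrations of the idea, not a claim about any real decision.

```python
# Sketch: a Bayesian agent scores an action by the probability-weighted sum
# over all outcomes it considers plausible, not by the utility of the single
# "everyone follows the maxim" outcome. All numbers below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical outcome distribution for some action under consideration.
realistic_outcomes = [
    (0.70, 10.0),   # most people ignore the maxim; modest benefit
    (0.25, -5.0),   # partial compliance produces friction
    (0.05, 40.0),   # near-universal compliance; large benefit
]

# The categorical-imperative test, read as a decision rule, scores the action
# by the last outcome alone; the Bayesian agent scores it by the whole mixture.
universalized_only = 40.0
print(expected_utility(realistic_outcomes))  # approximately 7.75
print(universalized_only)                    # 40.0
```

The point of the sketch is only that the two rules can rank actions very differently whenever universal compliance is an unlikely outcome.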

Let Angie be a person who is considering not having kids because she expects them to be a net drain on society due to severe expected health problems. Additionally, Angie expects their lives, on the whole, to be characterized by suffering far more than well-being. Insofar as these expectations are credible and based on a proper weighing of the evidence she has access to, Angie seems morally correct in thinking she should not have kids.
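Framed as an expected-utility comparison, Angie’s reasoning might look like the following hypothetical calculation. Every credence and utility here is invented solely for illustration.

```python
# Hypothetical numbers only: Angie's choice framed as a comparison of the
# expected utility of two actions, "have kids" and "don't have kids".

p_severe_health_problems = 0.8   # Angie's credence, assumed for illustration
u_child_with_problems = -50.0    # a life dominated by suffering, plus burden on society
u_child_without_problems = 30.0
u_no_child = 0.0                 # baseline

eu_have_kids = (p_severe_health_problems * u_child_with_problems
                + (1 - p_severe_health_problems) * u_child_without_problems)
eu_no_kids = u_no_child

print(eu_have_kids)  # approximately -34.0
print(eu_no_kids)    # 0.0  -> not having kids maximizes expected utility here
```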

What is the categorical imperative test that Angie supposedly has to consider? It’s not entirely clear. Should she consider what the world would be like if ‘everyone chooses not to have kids’, or should she consider what the world would be like if ‘anyone with the exact same set of information about the expected disutility of them having kids chooses not to have kids’? If we are as specific as possible, the categorical imperative implies that a maxim formulated for one particular situation can never apply to another, because no two physical, real-world, morally relevant situations are ever precisely the same.

If we want our maxim to be less specific, to the point where more than one person could possibly apply it, what algorithm do we use to strip a maxim of some of its situation-dependent information? We have two options: cost-benefit analysis (which leaves us with no choice but to optimize for some utility function) or using a pool of entropy to randomly remove chunks of information from the maxim. The former method of maxim-design is typically characterized as the process behind implementing rule utilitarianism. The latter surely does not reliably lead to better worlds (how could random normative rules reliably produce better worlds?).

Bottom line: We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim.
