The Moral Importance of Future People

posted before 2019-09-15

Conventional ethics does not explicitly assign significant moral weight to people who are expected to exist in the future. This is problematic for several reasons:

Even if one does not buy into total utilitarianism and does not think that we have an obligation to create beings that are expected to have net-positive subjective experiences, one should still value the lives of human beings who will probabilistically exist. To suggest otherwise is to claim that temporal distance alone affects the quality of subjective experience, which, just like spatial distance, it does not: a person's experience is no less real for happening a century from now than for happening a continent away.

How should we consider the moral weight of future expected beings? I believe we should use expected utility and weight people's moral importance by the probability that they will exist. For example, 100 people who on average have a 95% probability of existing in the future should have the moral weight of 95 people in our utilitarian calculations.

Of course, there is great uncertainty about the effects of long-term interventions on future people, just as there is about the flow-through effects of helping people today, but that is a different question. Before we can decide that the expected utility of a future-oriented intervention makes it not worth pursuing compared to helping people today, we must first acknowledge the moral weight of future people.