Narrowing in on Cause X

posted before 2019-09-15

BLUF: Since we’ve unknowingly committed moral atrocities in the past and do not appear to have 100% solved ethics, our Bayesian prior should be that we are probably committing some sort of moral atrocity today. Since utilitarian axioms dictate that failing to do a vast amount of net good when it’s possible to do so is a moral atrocity in itself, there is likely some extremely urgent cause, Cause X, which we are presently neglecting to actively pursue. This post is my attempt to narrow in slightly on that Cause X within the space of possible actions.
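To make that prior slightly more concrete, here is a crude sketch of my own (an illustrative assumption, not a rigorous argument): treat each past era as an exchangeable trial, and suppose every one of $n$ past eras turned out, in hindsight, to contain a then-unrecognized moral atrocity. Laplace's rule of succession then gives

$$P(\text{atrocity today} \mid n \text{ afflicted past eras}) = \frac{n+1}{n+2}$$

For, say, $n = 5$ such eras, that is $6/7 \approx 0.86$, and the probability approaches 1 as the track record of unrecognized atrocities grows.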

Three Heuristics for Finding Cause X

Kerry Vaughan wrote an article on effectivealtruism.org about three heuristics for finding Cause X:
1) Expanding the moral circle.
2) Looking for forthcoming technological progress that might have large implications for the future of sentient life.
3) Searching for crucial considerations.

The article is great and definitely deserves a read IMO, but I want to add some takes of my own that were not mentioned there.

Expanding the Moral Circle

Not only do we need to include factory farm animals, wild animals, and potential future beings in our moral circle, but I believe we may also need to include the fundamental physical operations behind qualia itself. To do otherwise may be to neglect 99.99% of the potential cosmic value that could be captured. In other words, if we neglect to build an AGI that tiles the light cone with hedonium, but it turns out that this was actually the most morally salient thing we could have done, our impact on the world will be pitifully small compared to what it could have been. What I am advocating in practice is much more research into the neurobiology and physics of qualia.
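To spell out the arithmetic behind that figure (a sketch, with the captured fraction $f$ as a hypothetical parameter rather than an estimate): if $V_{\max}$ is the value of a light cone optimized directly for positive qualia and the default trajectory captures only a fraction $f$ of it, then the value forfeited is

$$(1 - f)\,V_{\max}, \qquad \text{e.g. } f = 10^{-4} \implies 99.99\%\ \text{of } V_{\max} \text{ forfeited}$$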

Preparing for Forthcoming Technology Progress

Technological development is progressing at an alarming rate, and offensive threats are vastly outpacing defensive capabilities. Even without the actions of omnicidal agents, we are at risk of accidentally destroying ourselves with technologies like nanotechnology and artificial intelligence. Making matters worse, a significant number of people identify as omnicidal agents and, whether motivated by religious extremism or negative utilitarianism, believe that it is their solemn duty to destroy intelligent life on Earth. I am aware of several possible solutions to this:

Identifying Crucial Considerations

“A crucial consideration,” writes Nick Bostrom, “is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.” I believe that longtermism (a.k.a. the long-term value thesis) and the artificial intelligence alignment problem are crucial considerations that the effective altruism community has successfully identified, internalized, and acted on (although arguably not enough). The problem of crucial considerations deserves far more attention than I will give it in this post; however, some neglected crucial considerations that I haven’t heard much about include:

Increasing our Collective Problem-Solving Capability

This is a meta-approach that I think can help us home in on Cause X. Possible effective means of doing this include:

If you are unfamiliar with some of the already identified cause areas in effective altruism, I recommend checking out 80,000 Hours’ Problem Profiles.

That’s all :)