Narrowing in on Cause X

BLUF: Since we have unwittingly committed moral atrocities in the past and do not appear to have 100% solved ethics, our Bayesian prior should be that we are probably committing some sort of moral atrocity today. Since utilitarian axioms dictate that failing to do a vast amount of net good when it is possible to do so is a moral atrocity in itself, there is likely some extremely urgent cause, Cause X, which we are presently neglecting to actively pursue. This post is my attempt to narrow in, even slightly, on that Cause X within the space of possible actions.
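
To make that outside-view prior concrete, here is a toy sketch in Python using Laplace's rule of succession. The counts are entirely made up for illustration; they are not historical estimates.

    # Toy sketch of the outside-view prior described above. The event "an era
    # commits a moral atrocity it does not recognize" is modeled as a Bernoulli
    # trial; the counts below are hypothetical, not historical estimates.

    def rule_of_succession(successes: int, trials: int) -> float:
        """Posterior mean of a Bernoulli rate under a uniform Beta(1, 1) prior."""
        return (successes + 1) / (trials + 2)

    # Hypothetical: suppose 9 of the last 10 eras were later judged to have been
    # committing at least one major moral atrocity they did not see at the time.
    p_today = rule_of_succession(successes=9, trials=10)
    print(f"Outside-view probability we are doing the same today: {p_today:.2f}")
    # -> 0.83, i.e. "probably", even before looking at any inside-view evidence.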

Three Heuristics for Finding Cause X

Kerry Vaughan wrote an article on effectivealtruism.org about three heuristics for finding Cause X:
1) Expanding the moral circle.
2) Looking for forthcoming technological progress that might have large implications for the future of sentient life.
3) Searching for crucial considerations.

The article is great, and definitely deserves a read IMO, but I want to add a few takes of my own that it did not cover.

Expanding the Moral Circle

Not only do we need to include factory-farmed animals, wild animals, and potential future beings in our moral circle, but I believe we might also need to include the fundamental physics operations behind qualia itself. To do otherwise may mean neglecting 99.99% of the potential cosmic value that could be captured. In other words, if we neglect to build an AGI which tiles the light cone with hedonium, but it turns out that this was actually the most morally salient thing we could have done, our impact on the world will be pitifully small compared to what it could have been. What I am practically advocating for now is much more research into the neurobiology and physics of qualia.

Preparing for Forthcoming Technological Progress

Technological development is progressing at an alarming rate, and offensive threats are vastly outpacing defensive capabilities. Even without the actions of omnicidal agents, we are at risk of accidentally destroying ourselves with technologies like nanotechnology and artificial intelligence. Making matters worse, a significant number of people identify as omnicidal agents and, whether motivated by religious extremism or negative utilitarianism, believe it is their solemn duty to destroy intelligent life on Earth. I am aware of several possible responses to this:

  • Deliberately slowing down technological development to allow our ethics, safety research, and understanding of the threat-space to mature before these technologies arrive. On a personal level, one can become a safety researcher rather than someone actively pushing capabilities forward. Further, as a society we can coordinate to actively slow certain kinds of technological progress. This isn’t easy to do, but there are ways to do it.
  • Prioritizing the development of safe, aligned artificial general intelligence so it can help us navigate all our other problems.
  • Creating a surveillance state exclusively to reduce the threat of lone-wolf omnicidal agents, who will increasingly have access to potentially world-destroying technology such as CRISPR and atomically precise 3D printing. I am, as of 2019-07-04, a geolibertarian, so it is not fun to write this, but I imagine that it is possible in principle to tile the world with AI-enabled cameras used not for taking note of petty crimes but only for identifying actors working on omnicidal projects.

Identifying Crucial Considerations

“A crucial consideration,” writes Nick Bostrom, “is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.” I believe that longtermism (a.k.a. the long-term value thesis) and the artificial intelligence alignment problem are crucial considerations that the effective altruism community has successfully identified, internalized, and acted on (although arguably not enough). The problem of crucial considerations deserves far more attention than I will provide in this post; however, some neglected crucial considerations that I haven’t heard much about include:

  • Possible ethical implications of the Many Worlds Interpretation (MWI) of quantum mechanics, which almost all the physicists I know of, such as David Deutsch, Max Tegmark, and Sean Carroll, support. Examples include not diversifying worlds enough, or simply not giving the future proportional weight given the possible existence of exponentially more future worlds, assuming MWI is true and the ‘thinning’ of value with each split is not a thing (a toy calculation after this list illustrates why the weighting question matters).
  • The possibility there is suffering in fundamental physics or that suffering is more energy-efficient than well-being in fundamental physics.
  • The possibility that X-risk reduction could go too far: if we lose the ability to end all sentient life, we will be screwed if it turns out that the expected value of the future is negative. We might want to become much more confident in our theories of morality before we decide to send self-replicating probes into the far reaches of our light cone, which could potentially spread suffering on an astronomical scale. To be clear, I am not formally a negative utilitarian; I think at least some non-negative utilitarians should be concerned about this too.
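
Here is the toy calculation of the MWI weighting question promised in the first bullet. The branching factor and number of generations are made-up illustrative numbers, not physics.

    # Toy illustration of the MWI weighting question raised above. Assumptions
    # (purely illustrative): the world "branches" into 2 macroscopically
    # distinct worlds per generation, over 10 generations, and each world at
    # each generation contains 1 unit of value.

    generations = 10
    branching_factor = 2

    # If value does NOT thin with each split, later generations dominate:
    unthinned_total = sum(branching_factor ** g for g in range(generations + 1))

    # If each branch instead carries weight 1 / (number of branches), each
    # generation contributes 1 unit no matter how many worlds it contains:
    thinned_total = sum(1 for _ in range(generations + 1))

    print(f"Total value, no thinning:   {unthinned_total}")  # 2047 -> the far future dominates
    print(f"Total value, with thinning: {thinned_total}")    # 11   -> each generation counts once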

Increasing our Collective Problem Solving Capability

This is a meta-approach that I think can help us home in on Cause X. Possible effective means of doing this include:

  • Increasing the number of people who are 3+ standard deviations above the mean in intelligence, so that more of the best idea miners attack our biggest problems. By not actively pursuing positive eugenics (not the non-consensual kind!) such as embryo selection for intelligence, we are probably missing out on a lot of future Nick Bostroms, John von Neumanns, and the like who could help us capture much of the remaining cosmic value.
  • Solving the AI alignment problem, since an aligned AGI should help us solve all our other problems.
  • Designing and implementing better institutions such as prediction markets and improved voting schemes to better pool our collective intelligence and knowledge.
  • Spreading promising ideas which can help reduce the akrasia and mental health issues that get in the way of productivity.
  • Spreading the ideas encoded in Bayesian epistemology, such as probabilistic reasoning, Occam’s razor priors, belief updating, and the notion of expected utility maximization (a minimal sketch follows this list). Additionally, spreading the relevant insights from evolutionary psychology research, including how to overcome our pre-programmed biases and heuristics.
  • Promoting a culture of epistemic humility, moral uncertainty, skepticism, rational inquiry, deep conversations, and frequent writing and reflection. We have to get people willing to think and say, “I don’t know”, and we have to get more people blogging and thinking about how they can fill in their knowledge gaps.
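
As a minimal sketch of the Bayesian toolkit mentioned in the bullet above, the snippet below performs one belief update and one expected-utility comparison; all probabilities and payoffs are hypothetical.

    # Minimal sketch of the Bayesian-epistemology toolkit mentioned above:
    # one belief update followed by one expected-utility comparison.
    # All numbers are hypothetical.

    def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
        """Posterior P(H | E) from a prior P(H) and the two likelihoods."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1 - prior))

    # Belief updating: start at 10% credence in hypothesis H, then observe
    # evidence that is 4x as likely if H is true as if it is false.
    posterior = bayes_update(prior=0.10, p_e_given_h=0.8, p_e_given_not_h=0.2)

    # Expected utility maximization: pick the action with the higher
    # probability-weighted payoff under the updated belief.
    expected_utility = {
        "act_on_h": posterior * 100 + (1 - posterior) * -20,  # big upside, modest downside
        "do_nothing": 0.0,
    }
    best_action = max(expected_utility, key=expected_utility.get)
    print(f"P(H | E) = {posterior:.2f}; best action: {best_action}")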

If you are unfamiliar with some of the already identified cause areas in effective altruism, I recommend checking out 80,000 Hours’ Problem Profiles.

That’s all πŸ™‚
