Computational Intractability as an Argument for Entropy-Driven Decision Making

BLUF: I have another argument in favor of choosing courses of action via recently produced, quantum-generated random numbers.

Recently, I wrote a long post about a decision criterion for choosing Courses of Action (CoA) under moral uncertainty, given the Many-Worlds Interpretation of quantum mechanics. That post was shot down on LessWrong, and I am still working on formalizing my intuition regarding relative choice-worthiness (a combination of Bayesian moral uncertainty and traditional expected utility), as well as figuring out exactly what it means to live in a Many-Worlds universe. Still, I tentatively believe I have another argument in favor of entropy-driven CoA selection: computational intractability.

Traditional decision theory has not, to my knowledge, focused much on the process by which agents actually compute real-world expected-utility estimates. The simplest models essentially assume agents have unlimited computation available. What decision should an agent make when they are far from finished computing the expected utility of different CoAs? This depends on the algorithm they use, of course, but in general, what should they do when the time to decide arrives early?

In a Many-Worlds universe, I am inclined to think agents should deliberately throw entropy into their decisions. If they have explored the optimization space to the point where they are 60% sure they have found the optimal decision, they should literally seek out a quantum-mechanically generated random number (in this case between 1 and 5): if the number is 1, 2, or 3, they should choose the course of action they are confident in; otherwise, they should choose a different promising course of action. This ensures that child worlds are appropriately diversified, so that “all of our eggs are not in one basket”.
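To make the protocol concrete, here is a minimal sketch in Python. Note the assumptions: `random.randint` is only a pseudorandom stand-in for the hardware quantum random number generator the protocol actually calls for (a pseudorandom draw would not cause child worlds to diverge), and the function name and 60% threshold are illustrative.

```python
import random


def select_course_of_action(best_coa, alternatives):
    """Entropy-driven CoA selection: commit to the leading option in
    roughly 60% of child worlds, and branch to another promising option
    in the rest.

    NOTE: random.randint is a pseudorandom stand-in; the protocol
    requires a hardware quantum RNG for the worlds to actually diverge.
    """
    roll = random.randint(1, 5)          # draw a number in {1, ..., 5}
    if roll <= 3:                        # 1, 2, or 3: 3/5 = 60% of worlds
        return best_coa
    return random.choice(alternatives)   # diversify in the remaining 40%


choice = select_course_of_action("write the post",
                                 ["run the experiment", "read more"])
```

The same structure generalizes: set the commit probability equal to your current credence that the leading option is optimal, and spread the remaining probability over the other candidates.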

If the fundamental processes in the universe–from statistical mechanics to the strong economic forces present today in local worlds based on human evolutionary psychology–lean in favor of well-being over suffering, then I argue that this diversification is anti-fragile.

A loose analogy (there are slightly different principles at play) is investing in a financial portfolio. If you really don’t know which stock is going to take off, you probably don’t want to throw all your money into one stock. And choosing courses of action based on quantum random number generation is *the only* way to reasonably diversify one’s portfolio; even if one feels very uncertain about one’s decision, in the majority of child worlds, one will have made that very same decision. The high-level processes of the human brain are generally robust against any single truly random quantum mechanics event.

I am still working on understanding what the generic distribution of child worlds looks like under Many-Worlds, so I am far from completely certain that this decision-making principle is ideal. However, because it does seem promising, I am seeking to obtain a hardware true random number generator to experiment with this principle–I won’t learn the actual outcomes, which have to be predicted from first-principles, but I can learn how it feels psychologically to implement this protocol. At this point, it looks like I am going to have to make one. I’ll add to this post when I do.

A Singleton Government is not as Dangerous as Omnicidal Agents

Back in November of 2018, I had a brief discussion on Twitter with the economist (polymath, really) Robin Hanson, which inspired him to write a blog post. We were discussing, essentially, whether a strong, central government or free individual agents will pose the greater threat to future society. He argued that a strong, centralized government is riskier, and he did so for interesting reasons.

While his argument is persuasive, I still think that individual agents will pose a greater threat in the future.

Society is going to face both of these risks in the future, and they are in tension with each other. An advanced monitoring system and a highly capable police force are the only way to prevent omnicidal agents from carrying out their plans. Neither risk should sound far-fetched:

The Likelihood of a Future Strong, Central Government

Artificial intelligence will enable stronger, more central governance than we have ever seen. Just look at China and its social credit program. Additionally, a central government is likely one of the few solutions to our Mutually Assured Destruction deadlock, which, left in place, is probabilistically a recipe for doom.

The Likelihood of Omnicidal Agents

A number of individuals, motivated by religious arguments, ecology preservation, anti-natalism, hate, or other impulses, have longed to destroy humanity, all sentient life, or even the whole world. Technology is progressing at unprecedented rates, and dangerous technology is gradually becoming more accessible to individuals and small groups. While it still takes a nation-state to make even a single nuclear weapon, it takes only one biologist to engineer a pandemic that could kill millions. There is no reason to think that an even deadlier technology will not eventually be discovered and fall into the public's hands, giving small groups the ability to destroy far more value than ever before.

As it turns out, we are going to have to choose our risk here, to a degree. If we go with unlimited liberty then, depending on technological development, we face nearly certain destruction; until human psychology changes, someone will act to destroy the world should they have the opportunity. A strong, central government carries the risk of enforcing highly sub-optimal values at scale. This is a real problem, but I believe only a powerful state has the power to prevent omnicidal agents from fulfilling their plans.

Of all possible future worlds, I bet relatively few fully libertarian societies exist in the far future, because they do not survive past a certain point of technological development. Societies with a strong, central government carry the risk of misgovernance, but at least they have a better chance of surviving into the long future. Yes, there are fates worse than death, but I don’t think most far-future strong governments are actually going to be as Orwellian as we traditionally suspect.

I have not ironed out all my ideas about this yet, so I want to hear your feedback!

To learn more about this line of reasoning and related arguments, see Nick Bostrom’s paper The Vulnerable World Hypothesis. For the record, I formulated these ideas before reading Bostrom’s paper, so I am not simply rehashing his ideas here.

We Should Have Prevented Other Countries from Obtaining Nukes

Ever since nuclear weapons fell into the hands of the USSR and other countries beyond the United States, human civilization has been under tremendous risk of extinction. For decades now, the Doomsday Clock has been perilously close to midnight; we continue to flirt with disaster, which could strike as soon as any nuclear leader falls into a suicidal mindset, breaking the calculus of Mutually Assured Destruction. There is no solution in sight: we will only continue to avoid the destruction of all that we care about insofar as a handful of world leaders value living more than winning or being right. Perhaps down the road, some institution will emerge that leads denuclearization down to non-extinction levels, but even navigating this transition will be risky.

Given this dilemma, we messed up. We should have had the strategic foresight to see this coming, and done nearly everything in our power to prevent it. We should have negotiated more fiercely with the Soviet Union to make them stand down their nuclear development, and we should have backed up our words with the threat of bombs. Further, moral philosophy messed up by not laying the groundwork for this to happen at the time: as undesirable as it would have been to target a research lab in Siberia or even a populated city, this pales in comparison to the hundreds of millions, billions, or even all future people (we are talking trillions-plus) who remain under significant, perpetual risk in the nuclear environment we created.

We should never have allowed more than one state to develop the Bomb. “But this one state might abuse its power and try to dominate the world,” one might counter. This could be the case, but I would venture that one state enforcing its values on another would probably not have been as bad as extinction. Further, this one nuclear state would have an incentive to be a good steward of its power, to discourage others’ pursuit of nuclear development; insofar as non-nuclear states are not desperately unsatisfied with their lives, it does not make sense to pursue nuclear development under the threat of annihilation, should the one nuclear state find out before they had amassed an arsenal large enough for Mutually Assured Destruction.

Possible Moral Trade Implementation

I’ve been thinking about Toby Ord’s Moral Trade paper, and think a new Repledge website is a desirable thing, legal questions aside. Here’s the idea (edited with my own takes) for those unfamiliar:

Create a website where people can donate to a cause, but where, if someone else donates to the opposite cause, both parties’ money is instead diverted to a third charity that both parties also support (e.g. GiveWell). To discourage GiveWell supporters from waiting and donating to whatever interest group is necessary to double their donations, the running balance is kept private. After a set time (say, once a week, Saturday at 1800), the tied money goes to GiveWell and the surplus money goes to the interest group it was intended for.
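The weekly settlement step can be sketched in a few lines of Python. This is only an illustration of the mechanism described above; the function name and the dictionary shape are my own.

```python
def settle(balance_a, balance_b):
    """Weekly Repledge-style settlement between two opposing causes.

    Dollars matched one-for-one across the two sides are diverted to the
    neutral charity both sides support (e.g. GiveWell); the unmatched
    surplus goes to whichever cause raised more that week.
    """
    matched = min(balance_a, balance_b)
    to_neutral = 2 * matched              # one dollar from each side per matched dollar
    surplus = abs(balance_a - balance_b)  # unmatched remainder
    if balance_a > balance_b:
        winner = "A"
    elif balance_b > balance_a:
        winner = "B"
    else:
        winner = None                     # perfect tie: everything is matched
    return {"neutral": to_neutral, "winner": winner, "surplus": surplus}


# Example week: cause A raises $700, cause B raises $400
result = settle(700, 400)  # $800 to GiveWell, $300 surplus to cause A
```

Keeping the two balances private until `settle` runs is what blocks the gaming strategy mentioned above, since nobody can see which side is currently ahead.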

People interested in supporting interest groups should prefer to fund them this way if:
1) they believe their opponent’s interest group could advance its interests better with a dollar than their own could;
2) they would rather give $2 to GiveWell than $0.5001 to their own interest group; or
3) some reconciliation of #1 and #2.

Trust problems can be resolved with smart open-source software and third-party (not GiveWell) auditing.

Given only the option of donating X dollars through the site or outside it, I think a rational agent should donate according to the following procedure so as to maximize utility:

uA = +a utility/$ — utility per dollar of one’s own interest group (A)
uB = −b utility/$ — negative utility per dollar of the opposition’s interest group (B)
uG = +g utility/$ — utility per dollar of the neutral third group (GiveWell)

If abs(uB) > uA:
    Donate through the site (neutralizing a dollar of B is worth more than a dollar to A, and matched money yields 2uG)
Elif uA > 2uG:
    Donate directly to A
Else:
    Donate through the site so as to get 2uG
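This procedure can be written as a small, runnable function. This is my reading of the rule (using a, b, and g for the per-dollar magnitudes defined above, and assuming a donation through the site gets matched):

```python
def donation_channel(a, b, g):
    """Decide where to send the next dollar.

    a: utility per dollar to one's own interest group A (positive)
    b: magnitude of the per-dollar disutility of the opposing group B
    g: utility per dollar to the neutral charity G (GiveWell)
    """
    if b > a:
        # Neutralizing a dollar of B is worth more than funding A directly,
        # and the matched money still produces 2g for the neutral charity.
        return "site"
    if a > 2 * g:
        # A's direct impact beats the 2g payoff of a matched site donation.
        return "direct"
    # Otherwise, take the 2g from matching.
    return "site"


channel = donation_channel(a=1.0, b=0.5, g=0.8)  # here 2g > a, so: "site"
```

One caveat worth noting: this treats the match as guaranteed; if no opposing donor shows up before settlement, the money simply goes to A, which only strengthens the case for using the site.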

If this is a good idea in theory, the next obstacle to tackle is the question of legality. I imagine that people should be able to consent to their money being used this way, but laws, especially campaign finance laws, are not always intuitive.

The next question is whether the expected donations to GiveWell would be worth the effort of tackling this project. The effort, of course, could vary widely; we could hire a team of software engineers to build a secure system where humans are effectively out of the loop, verified by third-party investigators. Or we could make two Venmo-like accounts (one for each side of a partisan issue that a poll shows people are interested in funding on both sides), and literally just live-stream and post a weekly video of the site’s owner subtracting the difference between the pairs of accounts, donating the surplus to the winning side (with the camera still rolling), and donating the matched money to GiveWell.

There is a very good chance that we will not find prospective donors on opposite sides of an issue who both buy into the calculus and trust the site enough, but it’s possible. The cost is low enough, however, that the simpler system could be implemented within hours by one trusted third party, should a community find itself sharply divided on an issue and willing to spend (or already spending) money on organizations with opposing missions.

Thanks for reading! I would love your feedback 🙂