Computational Intractability as an Argument for Entropy-Driven Decision Making

BLUF: I have another argument in favor of choosing courses of action via freshly generated quantum random numbers.

Recently, I wrote a long post about a decision criterion for choosing Courses of Action (CoA) under moral uncertainty, given the Many-Worlds Interpretation of quantum mechanics. That post was shot down on LessWrong, and I am still working on formalizing my intuition regarding relative choice-worthiness (a combination of Bayesian moral uncertainty and traditional expected utility), as well as on figuring out exactly what it means to live in a Many-Worlds universe. Nevertheless, I tentatively believe I have another argument in favor of entropy-driven CoA selection: computational intractability.

Traditional decision theory has not, to my knowledge, focused much on the process by which agents actually compute real-world expected-utility estimates. The simplest models essentially assume agents have unlimited computation available. But what is an agent to do when they are far from finished computing the expected utilities of different CoAs? The answer depends on the algorithm they use, but in general, what should they decide when the moment of decision arrives before the computation is done?

In a Many-Worlds universe, I am inclined to think agents should deliberately throw entropy into their decisions, matching the probability of each action to their credence that it is optimal. If an agent has explored the optimization space to the point of being 60% sure they have found the optimal decision, they should literally obtain a quantum-generated random number, in this case an integer from 1 to 5, and choose the course of action they are confident in if the number is 1, 2, or 3; otherwise, they should choose the next most promising course of action. This ensures that child worlds are appropriately diversified, so that “all of our eggs are not in one basket”.
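Here is a minimal sketch of the protocol in Python. The function name and CoA labels are mine, purely for illustration, and the `secrets` module is a pseudorandom stand-in: the whole point of the protocol is that the draw must ultimately come from a quantum source.

```python
import secrets  # pseudorandom stand-in; swap in a true quantum source in practice

def entropy_driven_choice(leading_coa, alternative_coa, confidence):
    """Pick a course of action with probability equal to one's credence
    that it is optimal, using a uniform draw from 1 to 5.

    With confidence = 0.6, the leading CoA is chosen on a draw of
    1, 2, or 3 (probability 3/5); the alternative is chosen otherwise.
    """
    draw = secrets.randbelow(5) + 1    # uniform integer in 1..5
    threshold = round(confidence * 5)  # 0.6 -> 3
    return leading_coa if draw <= threshold else alternative_coa

# Example: 60% sure that CoA "A" beats the runner-up "B"
print(entropy_driven_choice("A", "B", confidence=0.6))
```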

If the fundamental processes of the universe, from statistical mechanics to the strong economic forces that human evolutionary psychology produces in today’s local worlds, lean in favor of well-being over suffering, then I argue that this diversification is antifragile.

A loose analogy (the principles at play differ slightly) is investing in a financial portfolio: if you really don’t know which stock is going to take off, you probably don’t want to put all your money into one stock. And choosing courses of action via quantum random number generation is *the only* way to reasonably diversify one’s cross-world portfolio. The high-level processes of the human brain are generally robust against any single quantum event, so however uncertain one feels about a decision, in the vast majority of child worlds one will have made that very same decision; only by conditioning the choice on a quantum draw does a 60% credence actually become a 60/40 split across child worlds.

I am still working on understanding what the generic distribution of child worlds looks like under Many-Worlds, so I am far from certain that this decision-making principle is ideal. However, because it does seem promising, I am seeking to obtain a hardware true random number generator to experiment with the principle. I won’t learn the actual cross-world outcomes, which have to be predicted from first principles, but I can learn how it feels psychologically to implement this protocol. At this point, it looks like I am going to have to make one myself; I’ll add to this post when I do.
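In the meantime, here is a sketch of how one might source quantum entropy over the network instead of from dedicated hardware. It assumes the ANU QRNG public JSON endpoint (which samples vacuum fluctuations of the electromagnetic field) is still reachable; the URL, rate limits, and response shape should all be verified, and the rejection-sampling helper is my own illustration of how to turn raw bytes into the 1-to-5 draw described above.

```python
import requests  # third-party HTTP library (pip install requests)

# Assumed public endpoint of the ANU vacuum-fluctuation QRNG;
# availability and rate limits may have changed, so verify first.
ANU_QRNG_URL = "https://qrng.anu.edu.au/API/jsonI.php"

def quantum_bytes(count=1):
    """Fetch `count` quantum-generated integers in 0..255."""
    resp = requests.get(
        ANU_QRNG_URL,
        params={"length": count, "type": "uint8"},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if not payload.get("success"):
        raise RuntimeError("QRNG service reported failure")
    return payload["data"]

def quantum_draw_1_to_5():
    """Map a quantum byte to a uniform draw in 1..5, rejecting values
    250..255 so that no residue class is over-represented."""
    while True:
        (b,) = quantum_bytes(1)
        if b < 250:  # 250 = 5 * 50, so 0..249 maps uniformly mod 5
            return b % 5 + 1
```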

Unpopular Ideas: Theory and Links

I was inspired by some of Julia Galef’s posts and am going to collect unpopular ideas here, as it is paramount that we continually consider ideas far outside the status quo.

Just as many ideas we hold true today were counterintuitive in the past, we will probably come to accept a number of currently counterintuitive beliefs in the future. This seems almost inevitable unless you think we have reached the pinnacle of human progress, which is probably (not to mention hopefully) not the case. Since an eventual change in many of our beliefs is almost inevitable, we should hasten the process and let “that which can be destroyed by the truth, be destroyed by the truth.”

One way this can happen quickly is by taking unconventional and unpopular ideas seriously and trying to make them work. Not only will we sometimes find hidden truths, but even when we do disprove these unconventional ideas, we still benefit from decoupling a bit from the status quo and thinking in a new way.

A counterargument to considering unconventional ideas is that some of them are information hazards, and that merely by considering them one will inevitably be led astray. I acknowledge that this can happen to some people, and even to me, if I somehow forget my priors and fail to be intellectually rigorous in my explorations. That being said, if some of our most rational people cannot safely consider unpopular ideas, humanity probably doesn’t have the intelligence to survive long-term anyway.

I am not endorsing these ideas! I disagree with many of them and am merely collecting them for the sake of intellectual thoroughness.

Julia Galef’s Lists of Unpopular Ideas:

My List: Coming soon once I get through Julia’s.

We Should Have Prevented Other Countries from Obtaining Nukes

Ever since nuclear weapons spread beyond the United States to the USSR and other countries, human civilization has been under tremendous risk of extinction. For decades now, the Doomsday Clock has sat perilously close to midnight; we continue to flirt with a disaster that could strike as soon as any nuclear leader falls into a suicidal mindset, breaking the calculus of Mutually Assured Destruction. There is no solution in sight: we will continue to avoid the destruction of all that we care about only insofar as a handful of world leaders value living more than winning or being right. Perhaps down the road some institution will emerge that leads denuclearization down to non-extinction levels, but even navigating that transition will be risky.

Given the dilemma of this current state of affairs, we messed up. We should have had the strategic foresight to see this coming, and done nearly everything in our power to prevent it. We should have negotiated more fiercely with the Soviet Union to make it stand down its nuclear development, and we should have backed our words with the threat of bombs. Further, moral philosophy messed up by not laying the groundwork for this at the time: as undesirable as it would have been to target a research lab in Siberia or even a populated city, that pales in comparison to the hundreds of millions, billions, or even all future people (we are talking trillions or more) who remain under significant, perpetual risk in the nuclear environment we created.

We should never have allowed more than one state to develop the Bomb. “But this one state might abuse its power and try to dominate the world,” one might counter. Perhaps, but I would venture that one state enforcing its values on others would probably not have been as bad as extinction. Further, this one nuclear state would have an incentive to be a good steward of its power in order to discourage others’ pursuit of nuclear development: insofar as non-nuclear states are not desperately dissatisfied with their lives, it does not make sense for them to pursue nuclear weapons under the threat of annihilation should the one nuclear state find out before they have amassed an arsenal big enough for Mutually Assured Destruction.