Back in November of 2018, I had a brief discussion on Twitter with the economist (polymath, really) Robin Hanson, and I’m pretty sure I inspired him to write this blog post. We were essentially discussing whether a strong, central government or free individual agents pose a greater threat to future society. He wrote that a strong, centralized government is the riskier option because national leaders periodically fall into suicidal mindsets, and that we want a diversity of governments so that not everything is destroyed when a supreme leader falls into such a mindset. That argument is worth taking seriously, but my concern is as follows:
A number of individuals motivated by religion, ecological preservation, anti-natalism, hate, or other causes have longed to destroy humanity, all sentient life, or even the whole world. Technology is progressing at an unprecedented rate, and destructive technology is gradually becoming more accessible to individuals and small groups. While it still takes a nation-state to make even a single nuclear weapon, it takes only one biologist to engineer a pandemic that could kill millions. There is no reason to think that every exceedingly dangerous technology, the kind that would give small omnicidal groups the ability to destroy far more value than ever before, will remain undiscovered or stay out of public hands.
So How Do We Reconcile These?
As it turns out, we get to choose our risk here. If we go with unlimited liberty, then depending on humanity’s course of technological development, about which we have strong Knightian uncertainty, we face nearly certain destruction; until human psychology changes, someone will act to destroy the world if given the opportunity. The only way to reliably circumvent this, in my opinion, is to permit a strong, central government with an advanced surveillance system and powerful police. Such a government, however, carries the risk of enforcing highly suboptimal values at scale. The question, then, is whether to choose the route with high Knightian uncertainty regarding extinction or the one with a higher but less uncertain risk of suboptimal outcomes.
Of all possible future worlds, I would bet that relatively few fully libertarian societies exist in the far future, because they do not survive past a certain point of technological development. Societies with a strong, central government carry the risk of misgovernance, but they at least stand a better chance of surviving into the long future. Yes, there are fates worse than death, but I don’t think most far-future strong governments will actually be as Orwellian as we traditionally suspect.
To learn more about this line of reasoning and related arguments, see Nick Bostrom’s paper The Vulnerable World Hypothesis. For the record, I had formulated these ideas before reading Bostrom’s paper, so I am not merely rehashing his ideas here.