Counterfactual People Are Important

When debating the morality or desirability of abortion, many claim, “If my sibling with X serious disability had been aborted, they wouldn’t be alive today with their net-positive life, and I wouldn’t know them.” This statement is true, and it is evidence against the desirability of abortion. What I have never heard a pro-lifer address, however, is the moral weight of the counterfactual people who could have been born had an abortion taken place. This is an extremely important factor in deciding the desirability of abortion, and in population ethics in general.

The valuation of counterfactual people is the same as that of potential people, which in turn is very similar to the valuation of future people. It is based on a subjective expected utility over the nature of the person’s subjective experience, the person’s expected net impact on ethically relevant phenomena, and the probability of them coming into existence, minus a cost function. For counterfactual people, that cost function is the moral weight of a person who could exist otherwise. This may seem like a tautological definition, but let’s look at a thought experiment to more explicitly highlight the need to consider counterfactual people:

“Suppose it were a phenomenon of nature that every woman’s first embryo to implant in her womb was destined to live a life barely worth living and would be expected to give back only barely more to society than the societal resources used to raise it. A woman could carry this child to term and get pregnant with a typical child several months after that, or she could have an abortion within a couple of weeks of the pregnancy starting and then get pregnant with a typical child, with much greater expected well-being and societal impact, within a month or two of that. The latter choice, if rationally taken, would require considering marginal cost: that is, the weight of counterfactual people.” I think it’s clear that society would be worse off if we didn’t make the latter choice at least a majority of the time.
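To make the comparison concrete, here is a minimal sketch in Python with entirely hypothetical numbers for each child’s expected well-being and societal impact; the point is only that the decision depends on the difference between the two options, not on the first option viewed in isolation.

```python
# Hypothetical expected-utility comparison for the thought experiment above.
# Every number here is made up purely for illustration.

def moral_value(wellbeing, impact, p_exists):
    """Expected moral value of bringing one person into existence."""
    return p_exists * (wellbeing + impact)

# Option 1: carry the first child to term (a life barely worth living).
carry_first = moral_value(wellbeing=1.0, impact=0.5, p_exists=1.0)

# Option 2: abort early and later have a typical child instead.
abort_then_typical = moral_value(wellbeing=50.0, impact=25.0, p_exists=1.0)

# The cost of Option 1 is the counterfactual child who is never conceived;
# ignoring that cost makes Option 1 look costless.
print(abort_then_typical - carry_first)  # positive under these hypothetical numbers
```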

Considering the marginal cost of having one child rather than another doesn’t mean that we must be harsh to our children with less expected impact on society and less well-being. It is just as morally relevant to be kind to the people who can be affected by our words and actions. However, let’s not pretend we are angels when we do a good deed while ignoring the better, counterfactual good that could have taken place.

The Moral Importance of Future People

Conventional ethics does not explicitly assign significant moral weight to people who are expected to exist in the future. This is problematic for a number of reasons:

  • their subjective experiences will be just as salient when they are alive as ours are today.
  • there are a number of actions we can take today to help them.
    • we can devote more resources to reducing the risk of civilizational collapse in order to help ensure their existence.
    • whereas much of current people’s quality of life is ‘locked in’ via their genes, we can significantly influence the genes of future people to give them the best chance at living their best lives.
  • technological and cultural inventions tend to help many more future people than many direct interventions that help people today. Consider transferring some of the exorbitant resources spent on end-of-life care into gerontological research.

Even if one does not buy into total utilitarianism and does not think that we have an obligation to create beings that are expected to have net-positive subjective experiences, one should still value the lives of human beings who will probabilistically exist. To suggest otherwise is to say that time alone has an effect on the quality of subjective experience, which, just like location, probably is not the case.

How should we consider the moral weight of expected future beings? I believe we should use expected utility and scale the moral weight of people by the probability that they will exist. For example, 100 people who on average have a 95% probability of existing should have the moral weight of 95 people.
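A minimal sketch of this probability weighting, using a hypothetical list of existence probabilities:

```python
# Probability-weighted moral weight, per the rule above.
# The list of existence probabilities is hypothetical.

existence_probs = [0.95] * 100  # 100 possible future people, each 95% likely to exist

expected_moral_weight = sum(existence_probs)
print(expected_moral_weight)  # 95.0, i.e. the moral weight of 95 people
```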

Of course, we would like to be more certain about the effects of long-term interventions for future people, as well as about the flow-through effects of helping people today, but that is a different question. Before we can decide that the expected utility of a future-oriented intervention makes it not worth pursuing compared to helping people today, we must acknowledge the moral weight of future people.

Your Acts Need Not Be Universalizable

I see a particular thinking error every now and then, yet I do not recall ever seeing it addressed properly. What is this error?

It is the feeling that one’s acts can only be ethical if one is able to, in good conscience, will that everyone else do them (act on one’s maxim). You may know this as Kant’s categorical imperative. Specifically, Kant wrote, “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

Frankly, I feel that this is clearly not the right way to go about things. We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim. To put it more formally, Bayesian Decision Theory requires an agent to calculate expected utilities over the entire range of possible outcomes; we should not optimize strictly for one outcome (everyone acting according to the same maxim) when we can also consider additional possible outcomes in our decision calculus.
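As a toy illustration of the difference (with made-up probabilities and utilities), compare evaluating an act only under the outcome “everyone follows the same maxim” against averaging over the outcomes one actually expects:

```python
# Toy expected-utility calculation; every probability and utility is hypothetical.

# Possible states of the world when I perform some act, with the probability
# I actually assign to each state and the act's utility in that state.
outcomes = [
    {"state": "everyone else follows the same maxim", "p": 0.01, "utility": -10.0},
    {"state": "almost no one else follows it",        "p": 0.99, "utility": 5.0},
]

expected_utility = sum(o["p"] * o["utility"] for o in outcomes)
print(expected_utility)  # 4.85: the act looks good once realistic probabilities are used

# The universalization test evaluates only the first state, as if it had probability 1.
```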

Take Angie, who is considering not having kids because she expects them to be a net drain on society due to severe expected health problems. Additionally, Angie expects their lives, on the whole, to be characterized by suffering far more than well-being. Insofar as these expectations are credible and based on properly weighing the sum of evidence she has access to, Angie seems morally correct in deciding not to have kids.

What is the categorical imperative test that Angie supposedly has to apply? It’s not entirely clear. Should she consider what the world would be like if ‘everyone chooses not to have kids’, or should she consider what the world would be like if ‘anyone with the exact same information about the expected disutility of them having kids chooses not to have kids’? If we are as specific as possible, the categorical imperative implies that a maxim for one particular situation can never apply to another situation, because no two physical, real-world, morally relevant situations are ever precisely the same.

If we want our maxim to be less specific, to the point where more than one person could possibly apply it, what algorithm do we use to strip a maxim of some of its situation-dependent information? We have two options: cost-benefit analysis (in which case we are left with no choice but to optimize for some utility function) or using a pool of entropy to randomly remove chunks of information from the maxim. The former method of maxim design is typically characterized as the process behind implementing rule utilitarianism. The latter surely does not reliably lead to better worlds (how could random normative rules reliably produce better worlds?).

Bottom line: We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim.

Welfare Stems from Fundamental Physics

It’s pretty clear that our subjective experiences stem from activities going on in the brain. You can even test this by physically altering the brain in various ways, and its owner will report different subjective experiences. Consider, for instance, a concussion: damage to the brain can lead to memory loss, vision impairment, pain, and other complications.

The subjectivity that stems from the brain either emerges only when all the brain’s circuitry is wired in harmony, or it emerges granularly. Most phenomena in the physical world are granular and change gradually, so I suspect that subjective experience boils down to smaller subcomponents than the entire brain itself. The multifaceted nature of subjective experiences themselves also supports this. Further, there is a ridiculous number of combinations of emotions and qualia for an emergent model to account for; a granular theory of consciousness much more simply explains the nature of our subjective experiences.

If consciousness is indeed granular, then we should be left wondering where that granularity ends. For any structure we choose, we can probably alter it a bit in some particular way without changing the subjectivity. For instance, there is surely at least one carbon atom in a group of neurons that one could remove without altering any of the subjectivity that is produced.

How can we be as specific as possible about describing the structure and components required to produce subjectivity? We can theoretically describe any physical system using the smallest theoretical objects and forces in the world. These may be strings, as described in M-theory.

Hence, when we talk about how we want to propagate welfare, we really mean that we want to propagate certain fundamental physical structures and their operations. These structures and their operations are not necessarily only those that form a human brain exactly; slight modifications and simplifications may lead to more desirable subjective experiences.

Implications

When we want to pursue the good, we should be as explicit as possible about what it is we want to propagate. If we only propagate average Homo sapiens in their present form, then at the level of fundamental physics we are not necessarily propagating the greatest good. Humans are incredibly multifaceted, but we may also want to look into propagating those fundamental physical structures that most closely produce well-being.

These idealized physical structures optimized for well-being (and not necessarily instrumental utility, such as the ability to solve human-specific problems) have been called hedonium or orgasmium.  A simple way to do a lot of good may be to design and produce these. They could be brains on chips where we reliably know that there is the subjective experience of intense bliss. These don’t have to replace people or be our only goal. Our moral uncertainty and utility function are too complicated to require just that. But perhaps we will want to consider designing and building these structures.

Our Responsibility to Reduce Personal Risk

Insofar as the expected value of the world with us alive minus the expected value of the world with us dead is positive, we have an obligation to remain alive. It is therefore unethical to bear personal risks of death whose expected benefits do not outweigh the expected marginal benefit of remaining alive.

What does this mean? One has a responsibility to live carefully.

When I was growing up, I frequently bore considerable personal risk to display bravado and skill to my friends and to have fun. Yes, this had personal utility in the form of positive subjective experiences and confidence building, but it probably did not outweigh my (risk of death) times (the value of the future with me in it).

A quick list of some of the unnecessary risks I chose to bear:

  • Climbed dozens of trees to heights greater than 20 feet.
  • Drifted four-wheelers and rode way too fast through narrow, snowy trails in the woods, even after wiping out a few times and nearly having one land on me.
  • Sprinted through dense Florida forest at night to avoid getting tagged in games of ‘manhunt’. Looking back on it, this gave me a fairly high probability of getting a stick in my eye, which, sans a personal mindset revolution, would be expected to reduce my lifelong productivity.
  • Skied down all of the black diamond trails at a park the first day I learned how to ski.
  • Drove way too aggressively when I was 16-17, even though I was a very alert, focused driver. I would drift and weave through traffic to save a couple of minutes on my commute.
  • Did lots of breath-holding activities alone in the water.

I know many people (perhaps you) have assumed greater unnecessary risks, but my actions were still morally problematic. Combined, I’d say they easily gave me a 3% chance of death. I expect to avert, at minimum, 10,000 Disability-Adjusted Life Years (DALYs) over the course of my life through effective giving ($300,000 to the Against Malaria Foundation or the like). Counting this impact alone, I statistically allowed 300 years of human suffering to happen that I could have prevented. But I certainly had fun doing these risky activities; perhaps it all adds up to 0.25 Quality-Adjusted Life Years (QALYs) worth of well-being.

As you can see, the expected cost of these actions was in the neighborhood of 1,200 times greater than the benefits. I am not going to beat myself up over what already happened, but the question I have to ask myself now is: how can I reduce unnecessary risk in my life?
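The arithmetic behind that ratio, using the rough personal estimates above:

```python
# Back-of-the-envelope check of the cost-benefit claim above.
# All inputs are rough personal estimates, not measured data.

p_death = 0.03                    # combined chance of death from the risky activities
dalys_averted_if_alive = 10_000   # DALYs expected to be averted via lifetime effective giving
fun_qalys = 0.25                  # well-being gained from the risky activities, in QALYs

expected_cost = p_death * dalys_averted_if_alive  # 300 expected DALYs forgone
cost_benefit_ratio = expected_cost / fun_qalys    # 1200

print(expected_cost, cost_benefit_ratio)  # 300.0 1200.0
```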

As much as part of me wants to continue to flirt with danger by downhill mountain biking, skiing, and road cycling, I now plan on mitigating these risks as much as possible. The world doesn’t need me to be fast at getting down mountains on my bike. It needs me to have only enough leisure to be happy so that I can maximize my productivity and impact.

We Should Have Prevented Other Countries from Obtaining Nukes

Ever since nuclear weapons fell into the hands of the USSR and other countries beyond the United States, human civilization has been under tremendous risk of extinction. For decades now, the Doomsday Clock has been perilously close to midnight; we continue to flirt with a disaster that could strike as soon as any nuclear leader falls into a suicidal mindset and breaks the calculus of Mutually Assured Destruction. There is no solution in sight: we will only continue to avoid the destruction of all that we care about insofar as a handful of world leaders value living more than winning or being right. Perhaps down the road some institution will emerge that leads denuclearization down to non-extinction levels, but even navigating that transition will be risky.

Given this dilemma, we messed up. We should have had the strategic foresight to see this coming and done nearly everything in our power to prevent it. We should have negotiated more fiercely with the Soviet Union to make them stand down their nuclear development, and we should have backed up our words with the threat of bombs. Further, moral philosophy messed up by not laying the groundwork for this at the time: as undesirable as it would have been to target a research lab in Siberia or even a populated city, that cost pales in comparison to the risk borne by the hundreds of millions, billions, or even all future people (we are talking trillions or more) who remain under significant, perpetual threat in the nuclear environment we created.

We should never have allowed more than one state to develop the Bomb. “But this one state might abuse its power and try to dominate the world,” one might counter. This could be the case, but I would venture that one state enforcing its values on another would probably not have been as bad as extinction. Further, this one nuclear state would have an incentive to be a good steward of its power in order to discourage others’ pursuit of nuclear weapons; insofar as non-nuclear states are not desperately unsatisfied with their lives, it does not make sense for them to pursue nuclear development under the threat of annihilation should the one nuclear state find out before they had amassed an arsenal big enough for Mutually Assured Destruction.

Special Obligations Don’t Exist, but They Are Useful Heuristics

Utilitarianism implies that our obligation is to maximize net well-being. We should be willing to donate to whatever charities do the most good, and these charities may not necessarily help those in our local community. What, then, of social, local, special obligations? Does a parent not have a special obligation to their child? Ought we to follow through on our social contracts? If your organization dumps thousands of dollars’ worth of training into you with the expectation that you will work there for X years, shouldn’t you strive to fulfill your side of the bargain?

I venture that special obligations don’t actually exist.  Insofar as there is an Ethical Reality with things that are truly Good and Imperative actions, special obligations are not truly a feature of this space. In other words, there is no special disutility to violating a special obligation beyond the effect this has on net well-being.

That being said, the notion of special obligations is extremely useful in navigating Ethical Reality. “I don’t know what the real good is, but I know that social coordination via social contracts is generally part of it, and since I don’t have an overwhelmingly good reason to violate this particular social contract, I’m going to fulfill it” is an example of proper reasoning about special obligations.

The fact that this is not obvious to every philosopher leads me to think that some people are not really trying to make consequentialist utilitarianism work: they don’t test it by assuming it’s true, genuinely trying to make it work, and then seeing how well it describes our intuitions. Instead, they maintain their doubt about utilitarianism, barely try to make it work, and then complain when it does not match their ethical intuitions!

Is Life Insurance Rational?

BLUF: Life insurance might be rational for many effective-altruism-minded people, but the devil is in the details: namely, one’s probable risk of death and the payout and cost of a plan.

US military servicemembers are afforded the option of paying $29 a month for $400,000 in life insurance, and the overwhelming majority of servicemembers pay for it. I decided that if I happen to die while in the military, I want the $400k to go to the most effective charities. The separate $100k death gratuity would go to my family members, but I would divide the $400k payout among four think tanks/charities I care about:
the Machine Intelligence Research Institute,
80,000 Hours,
GiveDirectly, and
the Future of Humanity Institute at Oxford.

This may not be the optimal way of distributing funds, and there is an argument that in a sufficiently large donation market it is optimal to donate to the single charity one thinks is best, but I still hate the idea of putting all my eggs in one basket. Additionally, if I die, I expect my donations to make the local news or at least appear in my obituary, and I think it would be better for people to see four organizations than one. Hence, I decided to focus on the four cause areas of AI alignment (MIRI), meta-effective altruism (80,000 Hours), direct impact on global poverty (GiveDirectly), and general existential risk research (FHI).

Are my odds of death high enough to warrant this $29 a month that cannot go to these charities? I assumed they were when I was an enlisted deep-sea diver (a laundry list of diving hazards awaits the uninitiated). I feel they are still high enough as a West Point triathlete who frequently bikes on terrible New York roads 8+ months of the year. However, I recently realized that the EA community is probably big enough now that I don’t need to guarantee these organizations a minimum amount of money from me; I should instead just maximize the expected utility of my donations. In other words, I might actually be wasting a portion of that $29 a month, depending on my actual risk of death. Thus, I figured it was time to just shut up and multiply:

It turns out the calculus really doesn’t have to be that hard. $29 a month × 12 months a year × 7.3 more years in the Army ≈ $2,540. $400k / $2,540 ≈ 157. Essentially, if I have greater than a 1 in 157 chance of dying over my expected time remaining in the Army, it makes sense to buy the life insurance. I think this is definitely within a natural-log order of magnitude of the true probability, i.e., the true probability is between a 1 in 55 and a 1 in 403 chance (ln 157 ≈ 5; e^4 ≈ 55; e^6 ≈ 403). Generic actuarial data supports this: the annual probability of death for a 25-year-old male (my average age over the next 7.3 years) is about 0.001451, which works out to roughly a 1 in 94 chance of dying over that period, riskier than 1 in 157. Additionally, while I may not be doing the risky driving that fuels deaths for people my age, I think the road cycling and the risks of deployment (assuming we don’t go to war with a near-peer) will keep my number at least in the ballpark of the national average. Hence, it seems rational for me and others in the military to buy life insurance if we want to maximize expected dollars donated to the causes we care about.
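The same calculation as a small script, using the SGLI figures above and the quoted actuarial rate (the numbers are the post’s own, not independently verified):

```python
# Break-even analysis for the SGLI decision described above.

monthly_premium = 29.0
years_remaining = 7.3
payout = 400_000.0

total_premiums = monthly_premium * 12 * years_remaining  # ~$2,540
break_even_p = total_premiums / payout                   # ~0.0064, i.e. about 1 in 157

annual_p_death = 0.001451                                # quoted rate for a 25-year-old male
p_death_over_period = 1 - (1 - annual_p_death) ** years_remaining  # ~0.0105, about 1 in 95

print(f"break-even: 1 in {1 / break_even_p:.0f}")
print(f"actuarial:  1 in {1 / p_death_over_period:.0f}")
# Buying the insurance maximizes expected dollars donated when the actuarial
# probability of death exceeds the break-even probability.
```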

Why is SGLI so cheap if the expected utility works out for most people? This could signal that my math is faulty. Contributing factors include:
- SGLI is run by the Department of Veterans Affairs, which does not make a profit.
- The program probably earns a lot of interest from investing the premiums.

Does my math check out? Thanks for reading!

Possible Moral Trade Implementation

I’ve been thinking about Toby Ord’s Moral Trade paper, and I think a new Repledge website is a desirable thing, legal questions aside. Here’s the idea (edited with my own takes) for those unfamiliar:

Create a website where people can donate to a cause, but where, if someone else donates to the opposite cause, both people’s money is instead diverted to a third charity that both parties also support (e.g., GiveWell). To discourage GiveWell supporters from waiting and donating to whatever interest group is necessary to double their donations, the running balance is kept private. After a set time (say, once a week, Saturday at 1800), the matched money goes to GiveWell and the surplus money goes to the interest group it was intended for.
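A minimal sketch of that weekly settlement step (the function and variable names here are my own, purely illustrative):

```python
# Illustrative settlement logic for the matching scheme described above.

def settle(total_a: float, total_b: float) -> dict:
    """Split one period's donations: matched dollars go to the neutral charity,
    and each side's surplus goes to the interest group it was intended for."""
    matched = min(total_a, total_b)
    return {
        "givewell": 2 * matched,       # both sides' matched dollars
        "group_a": total_a - matched,  # surplus, if side A raised more
        "group_b": total_b - matched,  # surplus, if side B raised more
    }

print(settle(10_000, 7_500))
# {'givewell': 15000, 'group_a': 2500, 'group_b': 0}
```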

People interested in supporting interest groups should be interested in funding this way if:
1) they believe a dollar to their opponent’s interest group advances its cause more effectively than a dollar to their own group advances theirs,
2) they would rather give $2 to GiveWell than $0.5001 to their own interest group, or
3) some reconciliation of #1 and #2 holds.

Trust problems can be resolved with smart open-source software and 3rd party (not GiveWell) auditing.

Given only the option of donating X dollars through the site or outside it, I think a rational agent should donate according to the following procedure so as to maximize utility:

uA = utility per dollar donated to one’s own interest group (A)
uB = utility per dollar donated to the opposition’s interest group (B); uB is negative
uG = utility per dollar donated to the neutral third group (GiveWell)

Assume your dollar will be matched, and that your opponent’s dollar goes to B whenever it is not matched. A matched pair through the site is then worth 2uG, while the same two dollars outside the site are worth uA + uB.

If abs(uB) > uA:
    Donate through the site so as to get 2uG
Else:
    uD = uA + uB (your group’s gain net of the opponent’s matched dollar)
    If uD > 2uG:
        Donate directly to A
    Else:
        Donate through the site so as to get 2uG
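A runnable version of the same rule; this is a sketch under the stated assumption that your donation gets matched and that your opponent’s dollar otherwise goes to their group:

```python
# Decision rule for donating through the moral-trade site versus directly.
# Assumes your dollar will be matched, and that the opponent's dollar goes
# to their interest group (B) whenever it is not matched.

def best_option(u_a: float, u_b: float, u_g: float) -> str:
    """u_a: utility/$ of your group A; u_b: utility/$ of opposing group B (negative);
    u_g: utility/$ of the neutral charity (GiveWell)."""
    value_direct = u_a + u_b  # your dollar to A, the opponent's dollar to B
    value_site = 2 * u_g      # both dollars diverted to GiveWell
    return "donate directly to A" if value_direct > value_site else "donate through the site"

print(best_option(u_a=1.0, u_b=-0.8, u_g=0.5))  # hypothetical values -> donate through the site
```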

If this is a good idea in theory, the next obstacle to tackle is the question of legality.  I imagine that people should be able to consent to their money being used in this way, but laws, especially campaign finance laws, are not always intuitive.

The next question is whether the expected donations to GiveWell would be worth the effort of tackling this project. The effort, of course, could vary widely: we could hire a team of software engineers to build a secure system where humans are effectively out of the loop, verified by third-party investigators. Or we could make two Venmo-like accounts (one for each side of a partisan issue that a poll shows people are interested in funding on both sides) and literally just live-stream and post a weekly video of the site’s owner subtracting the difference between the pair of accounts, donating the surplus to the interest group it was intended for (with the camera still rolling), and donating the matched money to GiveWell.

There is a very good chance that we will not find prospective donors on opposite sides of an issue who both buy into the calculus and trust the site enough, but it’s possible. The cost is low enough, however, that this simpler system could be implemented within hours by one trusted third party should a community find itself sharply divided on an issue and be willing to spend, or already be spending, money on organizations with opposing missions.

Thanks for reading! I would love your feedback 🙂

Experience Machines Support Ethical Hedonism

Suppose there was an experience machine that would give you any experience you desired. Super-duper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences? […] Of course, while in the tank you won’t know that you’re there; you’ll think that it’s all actually happening […] Would you plug in? (44-45, Nozick)

Robert Nozick’s argument basically boils down to:
1. If all we cared about was pleasure, we would agree to plug into the experience machine.
2. However, we do not want to plug-in.
3. Thus, there are things which matter to us besides pleasure.

My Response:
Critics of experience machines do not formalize their intuitions enough. If they did, they’d discover they didn’t actually have a problem with experience machines in their simplest form.  Here is a thought experiment which I believe speaks for itself:

Suppose our best AI experts agreed it was safe to create a powerful, benign AGI, and this AGI swiftly created a thriving post-scarcity economy. This all happened 10,000 years ago. Now you are alive and face the choice of entering an experience machine. You could remain in base reality, climbing mountains and having experiences there, or you could have the same experiences and more in a simulation where you only think you are in base reality. Of note, there is nothing more you could do for others in base reality: over any period of time spent interacting with another person, you would provide them relatively less benefit than the AGI could. There is also nothing you could do to secure the future of humanity or sentient life; the AI is far smarter than you, and the future is secured under its control. So I ask, why not go use the experience machine now? Why not let your family use it? If it creates a subjective reality literally designed to maximize well-being, and it reliably does a fantastic job at this, then why not?

On the Permissibility of Abortion

Here is a rough draft of my paper with my thoughts on abortion.  I don’t claim that abortion is always ethical, but rather (in today’s society) that a woman’s desire to have one is a strong predictor that it is ethical to do so.

The most interesting part of my paper might be how I break down moral weight and argue that it is based at least partially on three factors: expected subjective well-being, the existence of preferences, and social impact. In particular, if we are as honest with ourselves as possible, we must accept that not all self-replicating life forms have the same moral weight, Homo sapiens included.