Economic Policies that Optimize for Future People

BLUF: This isn’t a profoundly deep post; it just lays out my current, general views on a variety of economic issues.

I do not believe future people are intrinsically less valuable than the people existing today. In fact, I think they might be more valuable, because their lives will probably be more worth living as average well-being rises. I also take seriously the roughly 20% chance of extinction by 2100 that averages a number of researchers’ estimates, and even accounting for a variety of extinction scenarios, far more people are expected to exist in the future than are alive today. Thus, I think we should optimize our political and economic policies to serve their interests even more than the selfish interests of those alive today. What does this look like in concrete policies?

  • Steep carbon taxes
  • Taxes on the destruction of essential ecological services in general, priced at the cost of replacement
  • A land-value tax
  • A Universal Basic Income, funded by cutting less efficient government social programs like Social Security

Principles behind optimizing our economy for the long-term future

  • A willingness to bear the temporary economic losses, as a society, of implementing steep carbon taxes and essential ecological service destruction taxes.
  • More deliberate experimentation to test policies via states and charter cities.
  • Beyond concerns about environmental destruction, a willingness to optimize for economic growth more than for redistributing resources to satisfy the preferences of everyone who happens to be alive today. Social Security recipients are no longer actively contributing to the economy, so we should cut their funding to give everyone a UBI.

Beyond optimizing for the long-term, I generally support:

  • Lifting economically stifling regulation; we should make entrepreneurship as easy as possible. One shouldn’t need to consult a lawyer to start many personal businesses.
  • Lifting barriers to competition, like government-mandated licensing (e.g. taxicabs)
  • Free trade
  • Much greater immigration, especially of educated people, but not quite open borders.

Understanding the True Cost of Land-Use Projects

Update: My team’s paper earned the coveted Outstanding rating! Further, our paper won the Rachel Carson Award, which “is presented to a team selected by the Head Judge of ICM Problem E for excellence in using scientific theory and data in its modeling.” Over 4,800 collegiate teams from around the world competed in Problem E, so I am honored that our work was recognized as the best! Here are the results.

BLUF: My team (two other college sophomores and I) competed in an academic competition involving 99 hours of modeling and paper writing. This post presents our work.

We ended up cranking this paper out: “Ecological Services Valuation Model: Understanding the True Cost of Land-Use Projects”

Intro: Our team was hired to tackle one of the greatest problems remaining in the 21st century: how do we prevent the “tragedy of the commons”? Specifically, our task was to “create an ecological services valuation model to understand the true economic costs of land use projects when ecosystem services (ES) are considered.” We discovered that answering this question is key for governments to rent land to entities for land-use projects at a price necessary to preserve the value of ES owned by all.

In our pursuit of creating a model, we began by researching the philosophical underpinnings of value. We decided that well-being, based on conscious subjective experience, is the only good which is intrinsically valuable. While we maintain a degree of moral uncertainty on this matter, we ultimately decided to base our valuation of ecosystem services on their expected impact on the well-being of conscious creatures, most especially humans.

We then explored the economic systems that best support our value theory, and settled on Georgism, an economic philosophy which asserts that, while individuals ought to own the fruits of their own labor, natural resources are a public good [1]. Then, we researched the possible frameworks we could use to price ecosystem services, and determined the price should reflect the cost of artificially replacing ES. In other words, the value of an ES depends on the price to replace its services. For services that are irreplaceable, we propose a method of converting lost environmental services into Quality-Adjusted Life Years (QALYs), which may then be converted into dollars based on the cost of producing QALYs.
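As a rough sketch of this valuation logic (all figures below are hypothetical placeholders, not numbers from our paper):

```python
# Hypothetical sketch: price an ecosystem service (ES) at its artificial-
# replacement cost when replaceable; otherwise, convert the lost service
# into QALYs and then into dollars. All numbers are made up.

DOLLARS_PER_QALY = 100_000  # assumed marginal cost of producing one QALY

def es_value(replacement_cost=None, qalys_lost=None):
    """Return the dollar value of a destroyed ecosystem service."""
    if replacement_cost is not None:
        # Replaceable service: value = cost of artificial replacement.
        return replacement_cost
    # Irreplaceable service: value = QALYs lost * dollars per QALY.
    return qalys_lost * DOLLARS_PER_QALY

print(es_value(replacement_cost=2_500_000))  # e.g. wetland water filtration
print(es_value(qalys_lost=40))               # e.g. an irreplaceable service
```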

We explored preexisting models for pricing the ES affected by land-use projects, and found several highly developed but difficult-to-apply models. To address this, we sought to create a model which balances accurate valuation with ease of applicability, while still maintaining our values of maximizing well-being. Thus, we designed a general model with only the most applicable variables.

Check out our paper for the full report.


Counterfactual People Are Important

When looking at the morality or desirability of abortions, many claim “if my sibling with X-serious-disability were aborted, then they wouldn’t be alive today with their net-positive life and I wouldn’t know them.” This statement is true and is evidence against the desirability of abortion, but what I have never heard a pro-lifer address is the moral weight of the counterfactual people who could have been born had an abortion taken place. This is an extremely important factor in deciding the desirability of abortion, and in population ethics in general.

The valuation of counterfactual people is the same as that of potential people, which is very similar to the valuation of future people. It is based on some subjective expected utility of the nature of the person’s subjective experience, the person’s expected net impact on ethically relevant phenomena, and the probability of them coming into existence, minus a cost function. For counterfactual people, that cost function is the moral weight of a person who could exist otherwise. This may seem like a tautological definition, but let’s look at a thought experiment to more explicitly highlight the need to look at counterfactual people:

“Suppose it were a phenomenon of nature that every woman’s first embryo implanted in her womb was destined to live a life barely worth living and would be expected to give back only barely more to society than the societal resources used to raise it. A woman could carry this child to term and get pregnant with a typical child several months after that, or she could have an abortion and end the pregnancy within a couple weeks of it starting, and then get pregnant with a typical child, with much greater expected well-being and societal impact, within a month or two of that. The latter choice, if rationally taken, would require considering marginal cost; that is, the weight of counterfactual people.” I think it’s clear that society would be worse off if we didn’t make the latter choice at least a majority of the time.
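To make the comparison concrete, here is a minimal sketch with entirely hypothetical utility numbers (none of these figures come from the thought experiment itself):

```python
# Toy expected-value comparison for the thought experiment above. The utility
# numbers are hypothetical placeholders, not claims about actual lives.

def child_value(wellbeing, net_social_impact, p_exist=1.0):
    """Expected value of choosing to bring one particular child into existence."""
    return p_exist * (wellbeing + net_social_impact)

first_embryo  = child_value(wellbeing=1.0,  net_social_impact=0.5)
typical_child = child_value(wellbeing=50.0, net_social_impact=30.0)

# The cost function for carrying the first embryo to term is the forgone value
# of the counterfactual typical child who would otherwise have been conceived.
print(first_embryo - typical_child)  # negative: the counterfactual child dominates
```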

Considering marginal cost between having different children doesn’t mean that we must be harsh to our children with less expected impact on society and less well-being. It’s just as morally relevant to be kind to people who could be affected by our words and actions. However, let’s not pretend we are angels when we do a good deed while ignoring the counterfactual better good that could have taken place.

The Exponential Impact of Socially-Contagious Philanthropy

If doing any significant amount of good were basically intractable, it would be more permissible for individuals to ignore the utilitarian imperative to do the most good. However, doing incredible amounts of good is within the reach of many of us. We don’t necessarily have to research and contribute to Multiverse-wide Cooperation via Correlated Decision Making in order to do our part; doing good can be as simple as donating 10% of one’s salary to EA Funds, which, if used for causes as effective as the Against Malaria Foundation, can avert a year of lost health (a DALY) for $29. One may be able to do far more good than this, though. Consider the power of exponential growth:

If you commit to convincing two other people per year to donate 10% of their income to the EA Funds, and convince them to convince two people to do the same themselves, etc., you can expect to have 27 people donating 10% of their income within three years. Considering a simple model based on a mean US income of $72,000, one can expect to be responsible for averting 814,097 DALYs within 7 years. This assumes that these people would not do anything productive with 10% of their money if they did not donate it, that none of these people would have discovered effective giving over this time period, and that the $29 per averted DALY rate would hold. Even accounting for more realistic estimates of these factors, it is likely that one could still claim responsibility for averting over 500,000 DALYs over a 7 year period.
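A minimal sketch of this growth model, using the post’s figures (mean income of $72,000, 10% donated, $29 per DALY, threefold annual growth in donors):

```python
# Toy model of socially contagious giving: each donor recruits two new donors
# per year, so the number of donors triples annually, starting from you alone.

MEAN_INCOME = 72_000      # assumed mean US income
DONATION_RATE = 0.10      # 10% of income donated
DOLLARS_PER_DALY = 29     # assumed cost per DALY averted

total_dalys = 0
for year in range(1, 8):                  # years 1 through 7
    donors = 3 ** year                    # 3, 9, 27, ..., 2187 donors
    donated = donors * MEAN_INCOME * DONATION_RATE
    total_dalys += donated / DOLLARS_PER_DALY

print(donors)              # 2187 people donating by year 7
print(round(total_dalys))  # ~814,097 DALYs averted over 7 years
```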

This is a substantial amount of good. Frankly, I struggle to imagine how these 2187 people’s discretionary income could be better spent. To say it would be better for people to not deliberately spend part of their discretionary money on charity and research via the EA Funds is to suggest either that the EA Funds managers are ineffective at choosing organizations and causes to give to, or that each person could get about 247 years’ worth of pleasure by spending that 10% of their income on themselves. I think both of these are highly unlikely to be the case.

If this inspired you, I encourage you to take a giving pledge and share your reasons for taking the pledge on social media! Like all habits, giving is contagious 🙂

Making Total Utilitarianism More Intuitive

BLUF: If total utilitarianism’s obligation to create new beings seems non-intuitive, think of increased numbers of beings as increased duration of subjective experience. We are allowed to redefine population as duration because subjective experience is fundamentally impersonal and based on physics, where there is no room for personhood.

Utilitarianism usually states that maximizing the quality of conscious experience is important. However, Henry Sidgwick asked, “Is it total or average happiness that we seek to make a maximum?”[1]

Total hedonic utilitarianism says that we ought to consider both the desirability of any subjective experience and the number of subjective experiences in determining moral action. Level of happiness is a pretty intuitive aspect of total utilitarianism: all else being equal, we want any creature to have a more desirable subjective experience.

I grant that it is less intuitive that we ought to value additional beings with marginally net-positive subjective experiences. “Are we really obligated to make additional happy beings?” is a fair question. This is captured in Derek Parfit’s repugnant conclusion: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living”.

I think a more intuitive but equally fair way to frame our obligation to maximize beings with net-positive subjective experiences is to represent the number of beings as duration of experience. More beings being alive at any given moment means there is a greater total duration of subjective experience.

I think duration is pretty intuitive to ethically desire. All else the same, we prefer a desirable subjective experience to continue.

We are allowed to redefine population as duration because subjective experience is fundamentally impersonal and based on physics (where there is no room for personhood).

Suspend Moral Self-Judgement for Higher-Quality Reasoning

I think most people have an intense psychological need to feel they are ‘good’. After all, if we are not ‘good’, we probably have extra work ahead of us to set ourselves straight and, at the very least, preserve our social standing. Some of us do moral calculus all the time in order to stave off guilt and justify our current course of action. The mature among us value intellectual honesty when doing this, and try to avoid jumping to convenient conclusions.

With all this being said, I think a lot of us too often fall short of being intellectually honest because we really value perceiving ourselves as ‘good’. For example, just consider the most common argument against moral philosopher Peter Singer’s main point in his famous essay “Famine, Affluence, and Morality”. Many people reject his argument because it’s too demanding: not because its premises are flawed or the logic tying them together is faulty, but because the conclusion implies just about everyone is currently not as good as they think they are.

If people could better suspend their moral self-judgment, they wouldn’t fall into this sort of trap. There is a time and a place to deal with moral guilt (hopefully by altering our behavior), but it shouldn’t be while we are trying to determine moral truth.

If this sounds trivially obvious, when is the last time you felt you were a moral monster? When did you last feel heavy guilt for spending resources on yourself that could be better allocated to reliably avert a lot of others’ suffering? If you’ve never felt that guilt, you may be putting the cart before the horse in your moral reasoning.

On Moral Relativism

Here I briefly describe why I think some people think moral relativism has a significant truth value.

What does moral relativism say? “You can’t say one culture’s values are better than another’s because you evaluate them through your own biased lens.” I hope this is a fair synopsis.

Counter: Moral relativism is incoherent because utility is grounded in the real world, and different actions certainly have different effects on the real world. I bet this is clear to most moral relativists, but I believe there exists a line of reasoning which obscures their thinking.

Moral relativists are probably seeking tolerance. People used to get burned at the stake for believing something outside of the Overton window. Those of us who have discovered the fruits of living in a liberal, diverse society obviously do not want to live in one where people with “wrong” opinions they believe to be true are too afraid to speak their minds.

Moral relativists are also probably against imperialism. A major justification for imperialism is that the colonized peoples’ beliefs are wrong and that they therefore need to be managed.

Moral relativists want to prevent stonings, the closure of public discourse, and imperialism, but instead of focusing on how one ought to respond to another person’s or culture’s wrong belief, they say the belief isn’t wrong in the first place. They perceive the slippery slope that begins with judging another culture’s values as a greater threat than the cost of acting as if all cultural beliefs had equal utility.

Followed to its conclusions, this stance creates an unmanageable world to live in, since under it we simply cannot maximize any utility function that is grounded in the real world.

The Moral Importance of Future People

Conventional ethics does not explicitly assign significant moral weight to people who are expected to exist in the future. This is problematic for a number of reasons:

  • their subjective experiences will be just as salient when they are alive as ours are today.
  • we can take a number of actions today to help them.
    • we can focus more resources on reducing our risk of civilizational collapse to ensure their existence.
    • whereas much of current people’s quality of life is ‘locked in’ via their genes, we can significantly influence the genes of future people to give them the best chance at living their best lives.
  • technological and cultural inventions tend to help many more future people than most direct interventions that help people today. Consider, for example, transferring some of the exorbitant resources used on end-of-life care into gerontological research.

Even if one does not buy into total utilitarianism and does not think that we have an obligation to create beings that are expected to have net-positive subjective experiences, one should still value the lives of human beings who will probabilistically exist. To suggest otherwise is to say that time alone has an effect on the quality of subjective experience, which, just like location, probably is not the case.

How should we consider the moral weight of future expected beings? I believe we should use expected utility, weighting the moral value of people by the probability that they will exist. For example, 100 people who on average have a 95% probability of existing should have the moral weight of 95 people.
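In symbols, this is just a probability-weighted head count (a restatement of the example above, with p_i denoting person i’s probability of existing):

```latex
% Expected moral weight W of N possible future people, where p_i is the
% probability that person i comes to exist:
W = \sum_{i=1}^{N} p_i
% Example from above: N = 100 and p_i = 0.95 for all i gives
% W = 100 \times 0.95 = 95 people's worth of moral weight.
```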

Of course, we could stand to be more certain about the effects of long-term interventions for future people, as well as about the flow-through effects of helping people today, but that is a different question. Before we can decide that the expected utility of a future-oriented intervention makes it not worth pursuing compared to helping people today, we must acknowledge the moral weight of future people.

The DoD can Further Optimize its Retirement System

BLUF: The DoD could probably save money, increase servicemember compensation, and better optimize talent management if it further increased TSP contributions while reducing 20-year retirement pensions.

Introduction to the Military Retirement: For decades, the DoD had a retirement system where, after 20 years of active-duty service (and upon leaving the service), a servicemember (SM) was entitled to an immediate annuity based on years of service and basic pay using a 2.5% multiplier. For instance, after 24 years of service, a SM essentially earned 2.5% × 24 = 60% of their base pay for the rest of their life. In 2016, Congress created a new system called the Blended Retirement System (BRS) that reduces the base pay multiplier to 2.0% but adds two new components: 5% matching Thrift Savings Plan (TSP) contributions and continuation pay (CP), a few months’ pay for signing a continued-service obligation for 8–12-year SMs.

The traditional 20-year retirement model (T20R) served as a powerful retention tool. It takes a lot of time and resources to develop a senior Non-Commissioned Officer (NCO) or a field-grade officer, and the T20R is a strong incentive for retaining talented people who often have lucrative civilian career options. The problem with the T20R is that it is expensive to the Department of Defense (DoD), accounting for about $52 billion of its budget annually.[1]

The BRS was developed in the interest of saving money while also strengthening talent management. Costs are initially higher under the BRS because of expenditures for CP and TSP matching contributions, but costs will eventually fall due to the reduced retirement pensions.[2]

The BRS promotes talent management because it improves incentives for short-term and mid-term service as well as for long-term, 20+ year service. Some individuals who would be valuable assets to the DoD are not necessarily careerists, and matching TSP contributions are an incentive to attract them. Additionally, too many servicemembers tend to leave at exactly 20 years; the BRS better incentivizes additional service.

My Argument: The military/Congress decided on a matching contribution rate of only up to 5% of base pay, but I argue that a higher percentage of matching contributions, such as 10%, can further save the DoD money over its T20R system, increase average SM compensation, and improve talent management by offering more optimal incentives for SM retention.

Current Retirement System Example: A typical compensation scenario under the traditional retirement system: if an officer retires at a paygrade of O-5 with 20 years of service, it will cost the DoD an additional $10,278 per year compared to the BRS model, which has a pension set at 40% of final base pay. With future improvements in medical care, 20-year retirees (approximately 42 years old) might very well live into their 90s, which may be 50 years after their retirement. Under the T20R, this 50-year pension would cost the DoD $2,569,530, and under the BRS, it would cost $2,055,624. Out of this $513,906 in savings comes the TSP matching contributions and continuation pay in the new BRS.

Blended Retirement System Example: Under the BRS, the military plans to match up to 5% of a SM’s contribution to their TSP. Over a 20-year career where a SM chose to contribute at least 5% of their base pay to the TSP, this would cost the DoD $75,573.80 over the 20 years. Continuation pay, an additional retention incentive, costs the service 2.5 times monthly base pay. For an O-4 with 12 years in service, this would cost the DoD $18,075.45. Under the new BRS, the military saves approximately $513,906 due to the difference in pensions (at 50 years after retirement), while additional expenditures for a 20-year officer retiree would generally only amount to $93,649.25. Just as the DoD is saving $420,256.74, SMs who transfer to the BRS lose approximately this much income over their lifetime due to smaller monthly pensions (assuming they would have invested the equivalent of the TSP matching contributions themselves).

Proposal: While the DoD ought to save money and cut costs, I argue that it is even more important to offer competitive incentives for the sake of talent management and retention. To illustrate a point: if the military were to maximize TSP contributions (limit of $18,500) for a 20-year career officer, this would cost the DoD $370,000. This is still less than the $420,256 in savings the DoD makes under the new BRS. This would allow the SM to open a private investment account like a Roth IRA, which one would expect to match the returns of the TSP. While the DoD would still be saving $50,256, the SM, still investing $18,500 a year of their own money, could now reasonably expect to earn an additional $550,894 over 20 years, assuming market growth of 4% over inflation. If the SM did not touch this money and stopped contributing to the Roth IRA after retiring at 20 years time-in-service, they could expect to have both their TSP and Roth IRA each worth $3,915,032 after 50 years.
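A minimal sketch reproducing these compound-growth figures, assuming the post’s parameters ($18,500 contributed annually for 20 years at 4% real growth, then 50 more years untouched):

```python
# Future value of $18,500/year invested for 20 years at 4% real return,
# then left to compound untouched for 50 more years.

CONTRIBUTION = 18_500
REAL_RETURN = 1.04

# Value at retirement (20 years of end-of-year contributions).
value_at_retirement = CONTRIBUTION * (REAL_RETURN**20 - 1) / (REAL_RETURN - 1)
print(round(value_at_retirement))     # ~550,894

# Value 50 years after retirement with no further contributions.
value_after_50_years = value_at_retirement * REAL_RETURN**50
print(round(value_after_50_years))    # ~3,915,000 (the $3,915,032 above, within rounding)
```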

This is an extreme argument to exemplify a point: the DoD can save money and increase SM compensation by utilizing market forces and compound interest. Only about 17% of SMs end up serving 20 years and retiring [3], so it would be very costly to the DoD if it were paying everyone an additional $18,500 a year for their TSPs. The military, however, is interested in increasing short-term service incentives for talented individuals who know a 20-year career is not for them. The military may also be interested in reducing the incentive for mid-career (8–15 year) SMs to force themselves to continue service out of fear of walking away with no retirement benefits; the military does not want to retain members who are really ready to go. Additionally, the military is interested in reducing the overly strong incentive for talented senior SMs to get out shortly after 20 years of service.[3] See Appendix A for a chart of officer retention.[3]

To balance these incentives and the cost behind them, the DoD can increase matching TSP contributions to, say, 10%, while paying for this by reducing the pension percentage of base pay after 20 years to, say, 35%. For an officer who allots the full $18,500 and would also invest the extra 10% of their base pay saved by the DoD’s 10% matching contributions, this would yield an extra $55,089.45 after 20 years.

According to my model (detailed below), a SM will always have less total compensation than under the T20R until 24 years after retiring at 20 years of service (assuming RR < 0.5). After 24 years, BRS retirees will see their total compensation surpass that of T20R retirees. If a BRS retiree is most interested in maximizing their lifetime compensation (perhaps for the sake of setting up a trust for a good cause) and is willing to bear the lower pension up until the 24th year after retirement (at approximately 66 years old), they are actually best off with a lower RR. The best RR for that frame of mind is actually 0.

Model

Problem: Find an optimal TSP matching contribution rate and pension rate (as a percentage of final base pay) to maximize lifetime earnings of service members (SMs) without increasing costs to the DoD.

Variables:
        RR, the new 20-year retirement rate to optimize;
        LSE_n, lifetime service earnings at year n;
        TBP_n, total base pay, the summation of base pay earned in all ranks prior to year n;
        BP_n, base pay at year n;
        TC_n = MCR × BP_n, the TSP contribution at year n;
        RD_n, retirement pay difference n years after retiring;
        TSP_n, value of the TSP account at year n after starting the account;
        TSP_r, value of the TSP at exit from government service;
        Y_r, year of military retirement;
        Y_c, year of cessation of benefits;
        n, years (of service or retirement).

Parameters:
        SM_BRS, the number of service members (SMs) in the BRS;
        I, inflation rate of the US dollar, assumed to be 2%;
        MRR, market return rate, the rate at which a balanced TSP portfolio is predicted to grow; for a particular solution, I assume 6% annual growth (1.06);
        MCR, the BRS matching TSP contribution rate;
        AYURD, average number of years lived from retirement to death.

Assumptions: For simplicity’s sake, I assume the TSP index fund growth will remain constant. I also ignore future rises in the cost of living, with the presumption that the military will continue to make cost-of-living adjustments to its pay and retirement system. I also assume the rate of SMs reaching 20-year retirement increases from 17% to 20%.

Model
General model for lifetime service earnings at year n due to TSP account growth:
             LSE_n = TBP_n − n × TC_n + TC_n × ( 1 − MRR^n ) / ( 1 − MRR )
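A minimal sketch of this formula (the parameter values are the assumptions above; the flat pay schedule is a placeholder, since real base pay varies by rank and year):

```python
# Lifetime service earnings at year n: total base pay, minus the base pay
# diverted into the TSP each year, plus the TSP balance grown at MRR.
# Illustrative constant base pay; a real pay table varies by rank and year.

MRR = 1.06   # assumed market return rate (6% annual growth)
MCR = 0.10   # proposed matching contribution rate

def lifetime_service_earnings(n, base_pay=100_000):
    tbp = base_pay * n                    # total base pay through year n
    tc = MCR * base_pay                   # annual TSP contribution, TC_n = MCR * BP_n
    tsp = tc * (1 - MRR**n) / (1 - MRR)   # geometric series of grown contributions
    return tbp - n * tc + tsp

print(round(lifetime_service_earnings(20)))
```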

Model for TSP account growth after retirement:
            TSP_n = TSP_r × MRR^( Y_c − Y_r )

Additional cost of the BRS to the DoD compared to T20R for an individual SM:
           MCR × BP_n × n + 2.5 × BP_12

Savings to the DoD compared to T20R:
           ( 0.5 − RR ) × BP_r × 12 × ( Y_c − Y_r )

Net cost of the BRS:
           ( MCR × BP_n × n + 2.5 × BP_12 ) × SM_BRS − ( 0.5 − RR ) × BP_r × 12 × AYURD

The general solution for the retirement pay difference is:
RD(RR, n) = 6 BP_r + 12 n RR BP_r − 6 n BP_r − ( 3 n RR BP_r ( 1 − MRR^n ) ) / ( 25 ( 1 − MRR ) )

The particular solution:
RD(RR, n) = 13896 n^2 + 129504 n RR − 64752.2 n − 27792 RR n^2

Importantly, we should set realistic constraints on RR and n:
           0.2 < RR < 0.5   ;   0 < n < 50

The constraints are chosen because the markets could fail and SMs should still have a safety net of 20% of their final base pay.  A 50-year cap is chosen because of the assumption that retirees will want to eventually utilize their TSP accounts (at this age, perhaps setting up a trust).

Rather than trying to save the DoD money in this model, I route all the savings from the reduced pension payouts back into TSP matching contributions for all service members. Because there are now 5 SMs to provide matching TSP contributions for instead of just 1 SM to provide a pension for, the savings from a reduced payout will be divided by 5, and this will be the amount of money the DoD can pay out to each SM over their career.

Normally, rank determines compensation, but to simplify this model, everyone will receive the same amount of matching contributions over their careers. Thus, this model divides the amount of TSP matching contribution money available to each SM by 20 and provides this as the maximum amount of matching contributions available to each SM every year.
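As a minimal sketch of this redistribution arithmetic (reusing the earlier hypothetical $513,906 pension-difference figure purely for illustration):

```python
# Redistribution arithmetic: roughly 1 in 5 SMs reaches a 20-year pension,
# so one retiree's reduced-pension savings fund matching contributions for
# five SMs, spread evenly over 20-year careers.

PENSION_SAVINGS_PER_RETIREE = 513_906   # hypothetical, from the O-5 example
SMS_PER_RETIREE = 5                     # ~20% of SMs reach retirement
CAREER_YEARS = 20

per_sm_total_match = PENSION_SAVINGS_PER_RETIREE / SMS_PER_RETIREE
annual_match_cap = per_sm_total_match / CAREER_YEARS

print(round(per_sm_total_match))  # ~102,781 of matching money per SM
print(round(annual_match_cap))    # ~5,139 maximum match per SM per year
```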

See Appendix B for an Excel sheet that uses Solver to optimize the pension and matching contribution rates. What we find is that the function is non-convex in the domain defined by our constraints and has no relative maxima; there is only an optimal RR for any given n (and vice versa).

Further Discussion: If one is willing to wait over 24 years after retiring to touch their TSP account, it seems that my current model is better than the T20R. However, what I have failed to account for is how much extra money a SM retired under the T20R could invest in private investment accounts. This amount is modeled by the expression:

( 0.5 − RR ) × BP_r × ( 1 − MRR^n ) / ( 1 − MRR )

For example, an O-5 SM retiring under the T20R and investing the difference between the two pensions could expect to have more money, regardless of RR (as long as the private account’s return matches the TSP’s), up until year 36. The BRS only pulls ahead 36 years after military retirement because the 20 years of earlier TSP contributions finally catch up and make the difference.
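A minimal evaluation of the expression above (treating BP_r as annual base pay at retirement; the pay figure is hypothetical):

```python
# Value after n years of investing the annual pension difference between the
# T20R (50%) and a reduced pension RR, compounded at MRR. Numbers illustrative.

MRR = 1.06

def invested_pension_difference(n, rr, annual_base_pay_at_retirement=120_000):
    diff = (0.5 - rr) * annual_base_pay_at_retirement  # yearly pension gap
    return diff * (1 - MRR**n) / (1 - MRR)             # grown at market rate

print(round(invested_pension_difference(n=36, rr=0.35)))
```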

While it takes a long time for the lifetime compensation of my modeled BRS SMs to surpass that of T20R SMs, the benefit of this is really seen elsewhere. Now, every SM, regardless of how long they stay in the DoD, has a retirement benefit. Not only does this help millions of individuals financially, but it will also considerably promote talent management.

Appendix A

[Appendix A image: chart of officer retention by years of service]

Appendix B

[Appendix B image: Excel Solver sheet optimizing the pension and matching contribution rates]

Link to Google Sheets File

Sources

[1] https://budget.house.gov/hbc-publication/364048/

[2] https://www.rand.org/pubs/research_reports/RR1373.html

[3] https://warontherocks.com/2015/03/military-retirement-too-sweet-a-deal

Our Obligation to Add Beings to the World

Bryan Caplan brings up some solid points in his post, “Where are the Pro-Life Utilitarians?”  I still am inclined to feel that abortions are ethically permissible the majority of the time, but I obviously need to formalize my intuitions, look more closely at this, and possibly change my position if the cost-benefit analysis is clear enough.

However, I am currently ready to face the generally unpopular idea among utilitarians: “we have a moral duty to have lots of babies”. Insofar as a baby is expected to grow up and have a net-positive life, naive felicific calculus says that all things being equal, it’s better to bring this child into the world than not.

If humans reliably had lives which were overwhelmingly worth living, this would be even more clear.  However, I think that current average welfare falls a good bit short of overwhelmingly worth living, so the felicific calculus is not as easy.  Another variable for deciding whether or not to bring a child into the world is their expected impact on the welfare of other beings.

More people do constitute a drain on relatively finite resources, but they also statistically serve in net-positive ways in the economy. Additionally, this holds true for k and k+1 people; as society grows larger, there exist even more goods producers, idea miners, and service providers. A society with 8 billion people can produce a wider variety and quantity of goods for human welfare and increase the rate of technological and idea development more than a society of 1 billion people, so long as resources key to human welfare are not limiting.

If I were to have 8 kids and could invest in all of them enough to make them average at least 1/8th as net-impactful as 1 kid I could counterfactually raise, and they are expected to have net-positive welfare over their lives, then I can see the argument that I ought to raise all of them.

The key question then is: will my nth kid have net-positive welfare and will their positive impact on society outpace their cost, such as by marginally hastening global warming at an irredeemable rate?
Their well-being largely depends on my and my partner’s genetics, as well as their upbringing. Given these factors, I think it is safe to say their lives are very likely to be overall worth living. That is to say, even after having acknowledged an evolution-derived bias towards living, they would still impartially rate their lives as net worth living for qualia reasons alone.

Will they have a net-positive effect on society? While I admit I have not irrefutably shown that the average person is of net benefit to society right now, I believe that my kids will be predisposed to above-average impact and productivity. Importantly, they will also grow up in a household which values utilitarian principles, so they will, in theory, be more likely to explicitly pursue high-impact activities.

I still have a lot more felicific calculus to do, but preliminarily it seems that many people, including many extra-conscientious, extra-happy individuals, do in fact have a moral duty to have children for the benefit of their children and everyone else in society.

If I made a jump so far in my felicific calculus that was uncalled for, please let me know.

Risk-Tolerance is Not a Thing

Traditional finance literature says that individuals may choose their risk-tolerance in order to adequately model their financial preferences. A person who chooses a high-risk financial option is willing to deal with greater variability; a person who chooses a low-risk option prefers outcomes with low variability.

Underlying these notions is the idea that risk-tolerance is a stand-alone, unique, irreplaceable concept.

I like to compress ideas to their simplest form, and I think we can model individual financial preferences without the notion of risk-tolerance:

Risk-tolerance can be wholly framed as preferring a utility function with a particular curve of diminishing marginal utility. There is no need for extra calculus beyond standard von Neumann–Morgenstern (vNM) rationality, which involves acting so as to maximize expected utility (considering one’s utility function). Financial modeling is as simple as assigning an amount of utility per dollar over the range of the real numbers, and acting so as to maximize one’s expected utility.

If one appropriately updates their utility function, they don’t have to deviate from this simple application of vNM. If my preference is to definitely make $9,000 over a 1% chance of $1 million, you could just say I give greater utility to the first $9k and less utility per dollar for higher amounts of money, and I should still act so as to maximize expected utility as per vNM.
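A minimal sketch of this reframing, assuming a logarithmic utility function and an arbitrary baseline wealth (both are illustrative choices, not part of vNM itself):

```python
import math

# With concave (e.g. logarithmic) utility, a sure $9,000 can beat a 1% shot
# at $1,000,000 under plain expected-utility maximization -- no separate
# "risk-tolerance" parameter needed. Baseline wealth of $10,000 is assumed.

WEALTH = 10_000

def u(dollars):
    return math.log(dollars)  # diminishing marginal utility per dollar

eu_sure = u(WEALTH + 9_000)
eu_gamble = 0.99 * u(WEALTH) + 0.01 * u(WEALTH + 1_000_000)

print(eu_sure > eu_gamble)  # True: the sure thing maximizes expected utility
```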

I am not saying the use of the term “risk-tolerance” has to go, but let’s not think it’s this special thing when it is not.

Your Acts Need Not Be Universalizable

I see a particular thinking error every now and then, yet I do not recall ever seeing it be addressed properly. What is this error?

It is feeling like one’s acts can only be ethical if one is able to, in good conscience, will that everyone else do them (act on your maxim). You may know this as Kant’s categorical imperative. Specifically, Kant wrote, “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

Frankly, I feel that this is clearly not the right way to go about things. We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim. To put it more formally, Bayesian Decision Theory requires an agent to calculate expected utilities over the entire range of possible outcomes; we should not optimize strictly for one outcome (everyone acting according to the same maxim) when we can also consider additional possible outcomes in our decision calculus.

Take Angie, who is considering not having kids because she expects them to be a net drain on society due to severe expected health problems. Additionally, Angie expects their lives, on the whole, to be characterized by suffering far more than well-being. Insofar as these expectations are credible and based on properly weighing the sum of evidence she has access to, Angie seems morally correct that she should not have kids.

What is the categorical imperative test that Angie supposedly has to consider? It’s not entirely clear. Should she consider what the world would be like if ‘everyone chooses not to have kids’, or should she consider what the world would be like if ‘anyone with the exact same set of information about the expected disutility of them having kids chooses not to have kids’? If we are as specific as possible, the categorical imperative implies one maxim for a particular situation can never apply to another situation because no two physical, real-world, morally-relevant situations are ever precisely the same.

If we want our maxim to be less specific, to the point where more than one person could possibly apply it, what algorithm do we apply to strip a maxim of some of its situation-dependent information? We have two options: cost-benefit analysis (we are left with no choice but to optimize for some utility function) or using a pool of entropy to randomly remove chunks of information from the maxim. The former method of maxim-design is typically characterized as the process behind implementing rule utilitarianism. The latter surely does not reliably lead to better worlds (how could random normative rules reliably produce better worlds?).

Bottom line: We should act as the world is expected to be, not how the world could be if everyone obeyed the same maxim.

Welfare Stems from Fundamental Physics

It’s pretty clear that our subjective experiences stem from activities going on in the brain. You can even test this out by physically altering the brain in various ways, and its owner will report different subjective experiences. Consider, for instance, a concussion: damage to the brain can lead to memory loss, vision impairment, pain, and other complications.

The subjectivity that stems from the brain either emerges only when all the brain’s circuitry is wired in harmony, or it emerges granularly. Most phenomena in the physical world are granular and change gradually, so I suspect that subjective experience boils down to smaller subcomponents than the entire brain itself. Additionally, the multifaceted nature of subjective experiences themselves supports this. Further, there are a ridiculous number of combinations of emotions and qualia for an emergent model to account for; a granular theory of consciousness much more simply explains the nature of our subjective experiences.

If consciousness is indeed granular, then we should be left wondering where that granularity ends. For any structure we choose, we can probably alter it a bit in a particular way without changing the subjectivity. For instance, I am confident there is at least one carbon atom one could remove from a group of neurons without altering any of the subjectivity that is produced.

How can we be as specific as possible about describing the structure and components required to produce subjectivity? We can theoretically describe any physical system using the smallest theoretical objects and forces in the world. These may be strings as described in M-theory.

Hence, when we talk about how we want to propagate welfare, we really mean that we want to propagate certain fundamental physical structures and their operations. These structures and their operations are not necessarily only those that perfectly form the human brain; slight modifications and simplifications may lead to more desirable subjective experiences.

Implications

When we want to pursue the good, we should be as explicit as possible about what it is we want to propagate. If we only propagate average Homo sapiens in their present form, fundamental physics says we are not necessarily propagating the greatest good. Humans are incredibly multifaceted, but we may also want to look into propagating those fundamental physical structures that most closely produce well-being.

These idealized physical structures optimized for well-being (and not necessarily instrumental utility, such as the ability to solve human-specific problems) have been called hedonium or orgasmium.  A simple way to do a lot of good may be to design and produce these. They could be brains on chips where we reliably know that there is the subjective experience of intense bliss. These don’t have to replace people or be our only goal. Our moral uncertainty and utility function are too complicated to require just that. But perhaps we will want to consider designing and building these structures.

A Singleton Government is not as Dangerous as Omnicidal Agents

Back in November of 2018, I had a brief discussion with the economist (polymath, really) Robin Hanson on Twitter and inspired him to write a blog post.  We were discussing essentially whether a strong, central government or free individual agents are going to pose a greater threat to future society.  He wrote that a strong, centralized government is riskier, and he did so for interesting reasons.

While his argument is persuasive, I still think that individual agents will pose a greater threat in the future.

Society is going to face these two risks in the future, and they are in contention with each other. An advanced monitoring system and a highly capable police force are the only way to prevent omnicidal agents from carrying out their plans. Neither risk should sound far-fetched:

The Likelihood of a Future Strong, Central Government

Artificial intelligence will enable strong, central governance more than we have ever seen. Just look at China and its social credit program. Additionally, a central government is likely one of the few solutions to our Mutually Assured Destruction deadlock, which probabilistically is a recipe for doom.

The Likelihood of Omnicidal Agents

A number of individuals motivated by religious arguments, ecology preservation, anti-natalism, hate, or other motives have longed to destroy humanity, all sentient life, or even the whole world. Technology is progressing at unprecedented rates, and dangerous technology is gradually becoming more accessible to individuals and small groups. While it still takes a nation-state to make even a single nuclear weapon, it only takes one biologist to engineer a pandemic that can potentially kill millions. There is no reason to think a deadlier technology will not eventually be discovered and fall into public hands, giving small groups the ability to destroy significantly more value than ever before.

As it turns out, we are going to have to choose our risk here, to a degree. If we go with unlimited liberty, then depending on technological development, we face nearly certain destruction; until human psychology changes, someone is going to act to destroy the world should they have the opportunity. A strong, central government runs the risk of enforcing highly sub-optimal values at scale. This is a real problem, but I believe that only a powerful state has the power to prevent omnicidal agents from fulfilling their plans.

Of all possible future worlds, I bet relatively few fully libertarian societies exist in the far future, because they do not survive past a certain point of technological development. Societies with a strong, central government carry the risk of misgovernance, but at least they have a better chance at surviving into the long future. Yes, there are fates worse than death, but I don’t think most far-future strong governments are actually going to be as Orwellian as we traditionally suspect.

I have not ironed out all my ideas about this yet, so I want to hear your feedback!

To learn more about this line of reasoning and related arguments, see Nick Bostrom’s paper The Vulnerable World Hypothesis. Just so one knows, I had formulated these ideas before reading Bostrom’s paper, so I am not just rehashing his ideas here.

Our Responsibility to Reduce Personal Risk

Insofar as the (expected value of the world with us alive) minus (the expected value of the world with us dead) is positive, we have an obligation to remain alive. Therefore, it is unethical to bear personal risks of death when their expected benefits do not outweigh the expected marginal benefit of remaining alive.

What does this mean? One has a responsibility to live carefully.

Growing up, I frequently bore considerable personal risk to display bravado and skill to my friends and to have fun. Yes, this had personal utility in the form of positive subjective experiences and confidence building, but it probably did not outweigh my (risk of death) times (the value of the future with me in it).

Just for a quick list of some of the unnecessary risks I chose to bear:

  • Climbed dozens of trees to heights greater than 20 feet.
  • Drifted 4-wheelers and rode way too fast through narrow snowy trails in the woods, even after wiping out a few times and nearly having one land on me.
  • Sprinted through dense Florida forest at night to avoid getting tagged in games of ‘manhunt’.  Looking back on it, this gave me a relatively great probability of getting a stick in my eye, which would, sans a personal mindset revolution, be expected to reduce my life-long productivity.
  • Skied down all of the black diamond trails at a park the first day I learned how to ski.
  • I was a very alert, focused driver, but I drove way too aggressively when I was 16-17. I would drift and weave aggressively through traffic to save a couple minutes on my commute.
  • I’ve done lots of breath holding activities alone in the water.

I know many (perhaps you) have assumed greater unnecessary risks, but my actions were morally problematic. Combined, I say they easily gave me a 3% chance of death. I expect to avert, at minimum, 10,000 Disability-Adjusted Life Years (DALYs) over the course of my life through effective giving ($300,000 to the Against Malaria Foundation or the like). Just counting this impact alone, I statistically allowed 300 years of human suffering to happen that I could have prevented. But I certainly had fun doing these risky activities; perhaps it all adds up to 0.25 Quality-Adjusted Life Years worth of well-being.
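The arithmetic behind these estimates, as a minimal sketch:

```python
# Expected moral cost of my risky activities versus their benefit.
# Inputs are the rough estimates from the text.

p_death = 0.03                    # combined estimated chance of death
lifetime_dalys_averted = 10_000   # expected impact via effective giving

expected_dalys_lost = p_death * lifetime_dalys_averted   # 300 DALYs
benefit_qalys = 0.25                                     # fun from risky activities

print(expected_dalys_lost)                   # 300.0
print(expected_dalys_lost / benefit_qalys)   # 1200x: costs over benefits
```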

As you can see, the expected cost of these actions was in the neighborhood of 1200 times greater than the benefits. I am not going to beat myself up over what already happened, but the question I have to ask myself now is: how can I reduce unnecessary risk in my life?

As much as part of me wants to continue to flirt with danger and keep downhill mountain biking, skiing, and road cycling, I now plan on mitigating these risks as much as possible. The world doesn’t need me to be fast at getting down mountains on my bike. It needs me to have only enough leisure to be happy so that I can maximize my productivity and impact.

My Philosophical Beliefs

Evan Sandhoefner’s post, Read This First – My Beliefs, almost perfectly describes how I see ethics. I plan on writing my own post on my views in the near future, but I am glad I can share his post in the meantime. These are beliefs I take for granted in much of my other writing.

College Courses Should be Student-Refined

Most teachers in our society are very well-versed in their areas of focus and have long forgotten what it’s like to learn the material for the first time. Even when they are careful, they often introduce jargon before they explain it, focus inordinately long on easier topics, and inadvertently breeze over more difficult ones. This is all understandable, but it impedes students’ comprehension.

Here is a possible solution: assign each student 2–5 lessons over the course of a semester, with multiple students covering any given lesson, and obligate them to take detailed notes on how they think the lesson material could be taught better. Then, have them submit their curriculum adjustments within a couple of days after class. To help students take this seriously, 5–20% of the course grade can rest on putting energy into sending in detailed, adjusted lessons. With these corrections, the instructor can better decide how to teach the class the next time around.

This sort of work may or may not be useful for the student making the updated lesson, but it should be useful for future classes’ students.  This could be studied empirically by seeing how student course ratings change from semester to semester, and how grades change on standardized tests between semesters in classes taught by the same teacher.

A Novel Way to Mitigate Sexual Violence

BLUF: This is an edgy idea far outside the Overton window that actually might be able to solve a part of our society’s sexual violence problem.

Background

Despite significant cultural and technological progress in recent decades, sexual violence remains a significant problem in our society.  Some of this sexual violence happens in the bedroom after two adults initially consent, but one changes their mind. Additionally, while less common, false reports are a source of anxiety for many men.  How can we reduce both sexual violence and false reports?

One idea advocated by some is documenting consent. The problem with this is that it does not protect someone who changes their mind about sex after signing the agreement.

My Idea

Here is a better solution, which ensures the truth of what happens in the bedroom is documented while privacy is secured:

1) Design a special, cryptographic video camera. I will go into details below, but essentially, it films twice-encrypted video which is practically uncrackable unless both users who want to have safer sex agree to provide their private keys.

2) Two people who want to get intimate can agree and set up this special video camera in a bedroom right before they go at it.

3) If sexual violence or a false claim of sexual violence occurs, it will be in the innocent’s best interest to report the existence of the encrypted video footage, as well as provide their private key, to law enforcement.

4) Any party that is unwilling to provide their private key to law enforcement will seem to be hiding evidence from the law. Thus, the word of the person who provided their private key and has nothing to hide may be deemed more trustworthy. Conversely, if both parties provide their private keys, law enforcement will be able to fully decrypt the video to see what really happened.

5) Either way, law enforcement will have enough evidence to pursue justice and prosecute the guilty.

Technical Details of the Special Camera

The camera could look like a large GoPro, and be manufactured and sold for a few hundred dollars. Each camera would have a built-in, unique, time-based one-time password generator and a unique ID code so that the person in a potential sexual encounter who didn’t bring the camera can verify that it is authentic through the manufacturer’s secure website.

It can function as a regular camera (displaying a red LED on its front), but it can only record twice-encrypted video while displaying a signature (say, green) LED on its front. This LED lets users know they are never being filmed with non-dual-encrypted video. To begin recording dual-encrypted video, two users must each input a private key.

To actually input the private key, the users could each open their app (downloaded from the Android or Apple app store), log in to their account, and generate a private key (backed up online), and the special camera could read the private key via its own lens (with machine learning). Once both users have input their private keys, the camera may begin recording.

While participants record their intimate session, the camera continually backs up the video to a remote server over WiFi and/or cellular data, as well as to two SD cards that they can each take (useful if the internet goes down).

The camera would be designed to be tamper-proof (e.g. an essential circuit literally breaks if the camera is opened up). A software solution for iOS could possibly be simpler, but I believe a hardware solution could more reliably prevent hacking and abuse. We certainly do not want anyone’s bedroom activities being leaked.
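To illustrate the core cryptographic idea, here is a minimal sketch using the Python cryptography library’s Fernet scheme. This is an illustrative stand-in under simplifying assumptions (symmetric keys, no key exchange or hardware, video as raw bytes), not the actual camera design:

```python
from cryptography.fernet import Fernet

# Each party holds their own symmetric key; the footage is encrypted under
# one key and then the other, so decryption requires both keys.

key_a, key_b = Fernet.generate_key(), Fernet.generate_key()

def double_encrypt(video_bytes: bytes) -> bytes:
    return Fernet(key_b).encrypt(Fernet(key_a).encrypt(video_bytes))

def double_decrypt(ciphertext: bytes) -> bytes:
    # Fails unless BOTH private keys are provided (e.g. to law enforcement).
    return Fernet(key_a).decrypt(Fernet(key_b).decrypt(ciphertext))

footage = b"raw video frames..."
sealed = double_encrypt(footage)
assert double_decrypt(sealed) == footage  # only possible with key_a AND key_b
```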

Additional Info

  • This system does not reduce the veracity of the word of victims who don’t use it. It’s just a way to protect people who want more protection.
  • People who have experienced sexual violence or false accusations might only be willing to get romantic with a system like this in place.
  • Without this technology, individuals only have the options of:
    1) deterring violence with the threat of their own word after-the-fact.  Obviously, this does not always work.
    2) destroying both their own and their partner’s privacy for perfect accountability (e.g. live video uploaded that is not dual-encrypted).

This may not be your cup of tea, and it may not be ideal for enough people to make this worth developing.  However, this seems like a valid partial solution to a particular subset of sexual violence. What do you think?

We Should Have Prevented Other Countries from Obtaining Nukes

Ever since nuclear weapons fell into the hands of the USSR and other countries beyond the United States, human civilization has been under tremendous risk of extinction. For decades now, the Doomsday Clock has been perilously close to midnight; we continue to flirt with disaster, which could strike once any nuclear leader falls into a suicidal mindset that breaks the calculus of Mutually Assured Destruction. There is no solution in sight: we will only continue to avoid the destruction of all that we care about insofar as a handful of world leaders value living more than winning or being right. Perhaps down the road, some institution will emerge which will lead denuclearization down to non-extinction levels, but even navigating this transition will be risky.

Given the dilemma of this current state of affairs, we messed up. We should have had the strategic foresight to prevent this from happening, and done nearly everything in our power to prevent it. We should have negotiated more fiercely with the Soviet Union to make it stand down its nuclear development, and we should have backed up our words with the threat of bombs. Further, moral philosophy messed up by not laying the groundwork for this at the time: as undesirable as it would have been to target a research lab in Siberia or even a populated city, this pales in comparison to the risk imposed on the hundreds of millions, billions, or even all future people (we are talking trillions+) who remain under significant, perpetual threat in the nuclear environment we created.

We should never have allowed more than one state to develop the Bomb. “But this one state might abuse its power and try to dominate the world,” one might counter. This could be the case, but I would venture that one state enforcing its values on another would probably not have been as bad as extinction. Further, this one nuclear state would have an incentive to be a good steward of its power to discourage others’ pursuit of nuclear development; insofar as non-nuclear states are not desperately unsatisfied with their lot, it does not make sense to pursue nuclear development under the threat of annihilation should the one nuclear state find out before they have amassed an arsenal big enough for Mutually Assured Destruction.