Posts

Resigning from West Point

Recently, I decided to resign my appointment to the United States Military Academy at West Point. This was not an easy decision, but I’m convinced it was the right one given the sum of evidence I had available. The bottom line is that this was a careful, dispassionate, utilitarian decision that I had to make in order to maximize my expected impact on the world. The primary supporting reason was the comparative advantage argument.

Comparative Advantage

In any organization with a finite number of jobs and more demand for those jobs than supply (e.g. the United States Army), the impact one can take credit for is not merely all the good one does through one’s role, but the marginal difference between the impact of one’s actions and the impact of the person who would have taken the job otherwise. Merely doing a good job, even one that is very taxing on oneself, does not mean one is actually causing that much impact. Rather, one’s labor is only especially useful in a given job if the person one is replacing (the next best candidate who would get the job otherwise) would not do as good a job.

The calculation of one’s impact from taking a job is a bit more complicated than this, because one must also consider the marginal impact of the displaced person who doesn’t get your job against the impact you would have had, had you not taken the job. In essence, although I was not quantifying my intuitive sense of impact with ‘utils’, this was the situation I was facing:

                   Evan        Bob
Army               55 utils    50 utils
Non-Army careers   X utils     30 utils

Bob is the person hoping to commission through OCS or ROTC whom the Army would permit to commission if I resigned. As you can see, even if I would have done a slightly better job than Bob (which is not guaranteed), it still makes sense for me to resign so long as my impact outside of the Army is more than ‘5 utils’ greater than Bob’s would have been.
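To make the arithmetic concrete, here is a minimal sketch of the comparison, using the illustrative util figures from the table above (X is my unknown non-Army impact):

```python
EVAN_ARMY = 55   # utils, from the table above
BOB_ARMY = 50
BOB_OTHER = 30

def resigning_is_better(x_utils):
    """True iff total impact is higher when I resign and Bob commissions."""
    stay = EVAN_ARMY + BOB_OTHER    # I serve, Bob is displaced: 85 utils
    resign = x_utils + BOB_ARMY     # Bob commissions, I work elsewhere
    return resign > stay            # holds iff X > 35, i.e. Bob's 30 + 5

print(resigning_is_better(36))  # True: anything above 35 utils favors resigning
```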

I do not wish to share a number for how impactful I think a career outside the Army can be. However, I believe that if I use my talents to work hard in especially important, solvable, and neglected cause areas, I can likely have an impact more than ‘5 utils’ greater than Bob’s impact outside the Army, and frankly, an impact greater than what I would have had in the Army.

That’s essentially it. It wasn’t an emotionally easy decision–I have read my share of military books and had many dreams of leading troops and positively shaping my future units and the Army as a whole. When I left my position as an enlisted Army engineer diver to attend West Point, I fully expected to return to the big Army.

I could go into more depth about all the details I considered. However, knowing that I want to work in impactful and neglected cause areas outside of the Army, I think you get the point: my decision to resign was optimal, or at least respectable.

Computational Intractability as an Argument for Entropy-Driven Decision Making

BLUF: I have another argument in favor of choosing courses of action via recently produced, quantum-generated random numbers.

Recently, I wrote a long post about a decision criterion for choosing Courses of Action (CoA) under moral uncertainty, given the Many-Worlds Interpretation of quantum mechanics. That post was shot down on LessWrong, and I am still working on formalizing my intuition regarding relative choice-worthiness (a combination of Bayesian moral uncertainty and traditional expected utility), as well as figuring out exactly what it means to live in a Many-Worlds universe. Still, I tentatively believe I have another argument in favor of entropy-driven CoA selection: computational intractability.

Traditional decision theory has not focused much, to my knowledge, on the process of agents actually computing real-world expected-utility estimates. The simplest models basically assume agents have unlimited computation available. What decision is an agent to make when they are far from finished computing the expected utility of different CoAs? Of course, this depends on the algorithm they use, but in general, what should they decide when the time to decide arrives before the computation is done?

In a Many-Worlds universe, I am inclined to think agents should deliberately throw entropy into their decisions. If they have explored the optimization space to the point where they are 60% sure they have found the optimal decision, they should literally seek out a quantum-mechanically generated random number–in this case between 1 and 5–and if the number is 1, 2, or 3, they should choose the course of action they are confident in; otherwise, they should choose a different promising course of action. This ensures that child worlds are appropriately diversified so that “all of our eggs are not in one basket”.
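A minimal sketch of this protocol (quantum_randint is a hypothetical call backed by a hardware quantum RNG; a pseudo-random generator would defeat the purpose, since it would not differ between child worlds):

```python
def quantum_randint(a, b):
    """Placeholder: in practice this would query a hardware quantum RNG
    (e.g. radioactive decay timings); a pseudo-random stand-in won't do."""
    raise NotImplementedError

def pick_coa(leading_coa, alternative_coa, confidence=0.6):
    """Take the leading CoA with probability ~confidence; otherwise explore."""
    roll = quantum_randint(1, 5)            # e.g. 60% confidence -> 3 out of 5
    return leading_coa if roll <= round(confidence * 5) else alternative_coa
```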

If the fundamental processes in the universe–from statistical mechanics to the strong economic forces present today in local worlds based on human evolutionary psychology–lean in favor of well-being over suffering, then I argue that this diversification is anti-fragile.

A loose analogy (there are slightly different principles at play) is investing in a financial portfolio. If you really don’t know which stock is going to take off, you probably don’t want to throw all your money into one stock. And choosing courses of action based on quantum random number generation is *the only* way to reasonably diversify one’s portfolio across worlds: without it, even if one feels very uncertain about one’s decision, in the majority of child worlds one will have made that very same decision. The high-level processes of the human brain are generally robust against any single truly random quantum mechanics event.

I am still working on understanding what the generic distribution of child worlds looks like under Many-Worlds, so I am far from completely certain that this decision-making principle is ideal. However, because it does seem promising, I am seeking to obtain a hardware true random number generator to experiment with this principle–I won’t learn the actual outcomes, which have to be predicted from first-principles, but I can learn how it feels psychologically to implement this protocol. At this point, it looks like I am going to have to make one. I’ll add to this post when I do.

Economic Policies that Optimize for Future People

BLUF: This isn’t a profoundly deep post–it just shows my general views on a variety of current economic issues.

I do not believe future people are intrinsically less valuable than the people existing today. In fact, I think they might be more valuable, because their lives will be more worth living as their well-being will probably be greater. I also take seriously the roughly 20% chance of extinction by 2100 that is the average of several researchers’ estimates, so even considering a variety of extinction scenarios, many more people are expected to exist in the future than are alive today. Thus, I think we should optimize our political and economic policies to serve their interests even more than the selfish interests of those alive today. What does this look like in concrete policies?

  • Steep carbon taxes
  • Taxes on the destruction of essential ecological services in general, priced at the cost of replacement
  • A land-value tax
  • A Universal Basic Income, funded at the expense of less-efficient government social programs like Social Security

Principles behind optimizing our economy for the long-term future

  • A willingness to bear the temporary economic losses, as a society, of implementing steep carbon taxes and essential ecological service destruction taxes.
  • More deliberate experimentation to test policies via states and charter cities.
  • Beyond concerns about environmental destruction, a willingness to optimize for economic growth rather than redistributing resources to satisfy the preferences of everyone who happens to be alive today. Social Security recipients are no longer actively contributing to the economy, so we should cut their funding to give everyone a UBI.

Beyond optimizing for the long-term, I generally support:

  • lifting economically stifling regulation; we should make entrepreneurship as easy as possible. One shouldn’t need to consult a lawyer to start many personal businesses.
  • lifting barriers to competition like government-mandated licensing (e.g. taxicabs)
  • free trade
  • much greater immigration, especially of educated people, but not quite open borders.

A Decision Theory for Many-Worlds Living

Here, I describe a decision theory that I believe applies to Many-Worlds living, combining principles of quantum mechanical randomness, evolutionary theory, and choice-worthiness. Until someone comes up with a better term for it, I will refer to it as Random Evolutionary Choice-worthy Many-worlds Decision Theory, or RECMDT.

Background

If the Many-Worlds Interpretation (MWI) of quantum mechanics is true, does that have any ethical implications? Should we behave any differently in order to maximize ethical outcomes? This is an extremely important question that, to my knowledge, has not been satisfactorily answered. If MWI is true and if we can affect the distribution of other worlds through our actions, it means that our actions have super-exponentially more impact on ethically relevant phenomena. I take ethically relevant phenomena to be certain fundamental physics operations responsible for the suffering and well-being associated with the minds of conscious creatures.

My Proposal

We ought to make decisions probabilistically based on sources of entropy which correspond with the splitting of worlds (e.g. particle decay) and the comparative choice-worthiness of different courses of action (CoA). By choice-worthiness, I mean a combination of the subjective degree of normative uncertainty and expected utility of a CoA. I will go into determining choice-worthiness in another post.

If one CoA is twice as choice-worthy as another, then I argue that we should commit to doing that CoA with 2:1 odds, or about 66% of the time, based on radioactive particle decay.
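As a small illustrative sketch (the CoA labels and weights here are arbitrary):

```python
def selection_odds(choice_worthiness):
    """Normalize comparative choice-worthiness into selection probabilities."""
    total = sum(choice_worthiness.values())
    return {coa: w / total for coa, w in choice_worthiness.items()}

print(selection_odds({"CoA-A": 2, "CoA-B": 1}))
# {'CoA-A': 0.666..., 'CoA-B': 0.333...}  -> 2:1 odds, ~66% vs ~33%
```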

Why?

Under a single unfolding of history, the traditional view is that we should choose whichever CoA available to us has the highest choice-worthiness. When presented with a binary decision, the thought is that we should choose the most choice-worthy option given the sum of evidence every single time. However, the fact that a decision is subjectively choice-worthy does not mean it is guaranteed to be the right decision—it could actually move us towards worse possible worlds. If we think we are living in a single unfolding of history but are actually living under MWI, then a significant subset of the trillions↑↑↑ (but a finite number) of existing worlds end up converging on similar futures, which are by no means destined to be good.

However, if we are living in a reality of constantly splitting worlds, I assert that it is in everyone’s best interest to increase the variance of outcomes in order to more quickly move towards either a utopia or extinction. This essentially increases evolutionary selection pressure that child worlds experience so that they either more quickly become devoid of conscious life or more quickly converge on worlds that are utopian.

As a rough analogy, imagine having a planet covered with trillions of identical, simple microbes. You want them to evolve towards intelligent life that experiences much more well-being. You could leave these trillions of microbes alone and allow them to slowly incur gene edits so that some of their descendants drift towards more intelligent, more evolved creatures. However, if you had the option, why not just increase the rate of the gene edits by, say, UV exposure? This will surely push up the timeline for intelligence and well-being and allow a greater magnitude of well-being to take place. Each world under MWI is like a microbe, and we might as well increase the variance, and thus the evolutionary selection pressure, in order to help utopias happen as soon and as abundantly as possible.

What this Theory Isn’t

A key component of this decision heuristic is not maximizing chaos and treating different CoAs equally, but choosing CoAs relative to their choice-worthiness. For example, in a utopian world with, somehow, 99% of the proper CoAs figured out, only in 1 out of 100 child worlds must a less choice-worthy course of action be taken. In other words, once we get confident in a particular CoA, we can take that action the majority of the time. After all, the goal isn’t for one world to end up hyper-utopian, but to maximize utility over all worlds.

If we wanted just a single world to end up hyper-utopian, then we would want to act in as many different ways as possible based on the results of true sources of entropy. It would be ideal to come up with any course of action, flip a (quantum) coin, and go off its results like Two-Face. But again, the goal is to maximize utility over all worlds, so we only want to explore paths in proportion to the odds that we think a particular path is optimal.

Is it Incrementally Useful?

A key property of most useful decision theories is that they only matter insofar as they are followed. As long as MWI is true, each time RECMDT is deliberately adhered to, it is supposed to increase the variance of child worlds. Following this rule just once, depending on the likelihood of worlds becoming utopian relative to the probability of them being full of suffering, likely ensures many future utopias will exist.

Crucial Considerations

While RECMDT should increase the variance and selection pressure on any child worlds of worlds that implement it, we do not know enough about the likelihood and magnitude of suffering at an astronomical level to guarantee that the worlds that remain full of life will overwhelmingly tend to be net-positive in subjective well-being. It could be possible that worlds with net-suffering are very stable and do not tend to approach extinction. The merit of RECMDT may largely rest on the landscape of energy-efficiency of suffering as opposed to well-being. If suffering is very energy inefficient compared to well-being, then that is good evidence in favor of this theory. I will write more about the implications of the energy-efficiency of suffering soon.

Is RECMDT Safer if Applied Only with Particular Mindsets?

One way to hedge against astronomically bad outcomes may be to employ RECMDT only when one fully understands and is committed to ensuring that survivability remains dependent on well-being. This works because following this decision theory essentially increases the variance of child worlds, like using birdshot instead of a slug. If one employs this heuristic only while holding a firm belief in, and commitment to, a strong heuristic for reducing the probability of net-suffering worlds, then it seems that one’s counterparts in child worlds will also have this belief and be prepared to act on it. One might also employ RECMDT only while one believes in one’s ability to take massive action on behalf of the belief that survivability should remain dependent on well-being. Whenever you feel unable to carry out this value, you should perhaps not act to increase the variance of child worlds, because you will not be prepared to deal with the worst-case scenarios in those child worlds.

Evidence against applying RECMDT only when one holds certain values strongly, however, comes from all the Nth-order effects of our actions. For decisions that have extremely localized effects, where one’s beliefs dominate the ultimate outcome, the plausible value of applying RECMDT over not applying it is rather small.

For decisions with many Nth-order effects, such as deciding which job to take (which, for example, has many unpredictable effects on the economy), it seems that one cannot control the majority of the effects of one’s actions after the initial decision is made. The ultimate effects likely rest on features of our universe (e.g. the nature of human market economies in our local group of many-worlds) that one’s particular beliefs have little influence over. In other words, for many decisions, one can affect the world once, but one cannot control the Nth-order effects by acting a second time. Thus, while certain mindsets are useful to hold dearly regardless of whether one employs RECMDT, withholding RECMDT merely because one does not hold a particular mindset does not seem generally useful.

Converting Radioactive Decay to Random Bit Strings

In order to implement this decision theory, agents must have access to a true source of entropy—pseudo-random number generators will NOT work. There are a variety of ways to implement this, such as having an array of Geiger counters surrounding a radioactive isotope and looking at which groups of sensors get triggered first in order to yield a decision. However, I suspect one of the cheapest and most reliably random devices would implement the following algorithm from HotBits:

Since the time of any given decay is random, then the interval between two consecutive decays is also random. What we do, then, is measure a pair of these intervals, and emit a zero or one bit based on the relative length of the two intervals. If we measure the same interval for the two decays, we discard the measurement and try again, to avoid the risk of inducing bias due to the resolution of our clock.

John Walker
from HotBits
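Here is a minimal sketch of that scheme, assuming a hypothetical wait_for_decay driver call that blocks until the Geiger counter registers a decay event and returns its timestamp:

```python
# Sketch of the HotBits interval-comparison scheme quoted above.
# `wait_for_decay` is a hypothetical driver call: it blocks until the Geiger
# counter registers a decay event and returns its timestamp in nanoseconds.

def decay_bits(wait_for_decay, n_bits):
    bits = []
    while len(bits) < n_bits:
        t1, t2, t3 = wait_for_decay(), wait_for_decay(), wait_for_decay()
        first, second = t2 - t1, t3 - t2   # a pair of consecutive decay intervals
        if first != second:                # equal intervals are discarded to
            bits.append(1 if first < second else 0)  # avoid clock-resolution bias
    return bits
```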

Converting Random Bit Strings to Choices

We have a means above to generate truly random bit strings that should differ between child worlds. The next question is: how do we convert these bit strings into choices about which CoA to execute? This depends on the number of CoAs we are considering and the specific ratios we arrived at for comparative choice-worthiness. We simply need to determine the least common multiple of the individual odds of each CoA, and acquire a bit string long enough that the range of binary numbers it can represent is at least that large. From there, we can use a simple preconceived encoding scheme to have the base-2 number encoded in the bit string select a particular course of action.

For example, in a scenario where one CoA is 4x as choice-worthy as another, we need a random number that represents the digits 0 to 4 equally. Drawing the number 4 can mean we must do the less choice-worthy CoA, and drawing 0-3 can mean we do the more choice-worthy CoA. We need at least 3 random bits to do this. Since 2^3 is 8 and there is no way to map the states 5, 6, and 7 equally onto the states 0 through 4, we cannot use a bit string whose value is over 4, and must acquire another one until we draw a number of at most 4. Once we have a bit string whose value is within our range, we can use it to select our course of action.
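A sketch of this rejection-sampling selection, reusing the hypothetical decay_bits source from above (weights are the integer odds of each CoA):

```python
import math

# Rejection sampling per the example above: weights [4, 1] give five outcomes
# (0-4), so we draw 3 bits and redraw whenever the value is 5 or greater.

def choose_coa(weights, draw_bits):
    total = sum(weights)                          # e.g. [4, 1] -> 5 outcomes
    n_bits = max(1, math.ceil(math.log2(total)))  # fewest bits covering them
    while True:
        value = int("".join(str(b) for b in draw_bits(n_bits)), 2)
        if value < total:                         # accept: no rounding error
            for i, w in enumerate(weights):
                if value < w:
                    return i                      # index of the selected CoA
                value -= w
```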

The above selection method avoids any rounding error, and it shouldn’t take many bits to implement, as any given bit string of the proper length always has over a 50% chance of working out. Other encoding schemes introduce rounding errors, which only add error on top of the uncertainty already present in our choice-worthiness calculations.

What Does Application Look Like?

I think everyone with a solid ability to calibrate choice-worthiness should have access to truly random bits with which to choose courses of action.

Importantly, the time of the production of these random bits is relevant. A one-year-old random bitstring captured from radioactivity is just as random as one captured 5 seconds ago, but employing the latter is key for ensuring the maximum number of recent sister universes make different decisions.

Thus, people need access to recently created bit strings. These could come from a portable, personal Geiger counter, or from a centralized Geiger counter in, say, the middle of the United States. The location does not matter as much as the recency of bit production. Importantly, however, bit strings should never be reused: whatever made you decide to reuse them is non-random information, so reuse is not as random as drawing new bit strings.

Can We Really Affect the Distribution of Other Worlds through Our Actions?

One may ask: since everything, including our brains, runs on quantum mechanics, can we really affect the distribution of child worlds through our intentions and decisions? This raises the classic problem of free will and our place in a deterministic universe. I think the simplest question to ask is: do our choices have an effect on ethically relevant phenomena? If the answer is no, then why should we care about decision theory at all? I think it’s useful to think of the answer as yes.

What if Many Worlds Isn’t True?

If MWI isn’t true, then RECMDT optimizes for worlds that will not exist, at a potential cost to our own. This may seem incredibly dangerous and costly. However, as long as people make accurate choice-worthiness comparisons between different CoAs, I argue that adhering to RECMDT is not that risky. After all, choice-worthiness is distinct from expected utility.

It would be a waste to have people, in a binary choice between actions where one has 9x the expected utility of the other, choose the action with less expected utility even 10% of the time. However, it seems best, even in a single unfolding of history, that where we are morally uncertain, we should cycle through actions based on our moral uncertainty via relative choice-worthiness.

By always acting to maximize choice-worthiness, we risk not capturing any value at all through our actions. While I agree that we should maximize expected utility in one-shot and iterative scenarios alike, and be risk-neutral assuming we have adequately defined our utility function, I think that given the fundamental uncertainty at play in a normative uncertainty assessment, it is risk-neutral to probabilistically implement different CoAs relative to their comparative choice-worthiness. Importantly, this is only the ideal method if the CoAs are mutually exclusive–if they are not, one might as well optimize for both moral frameworks.

Hence, while I think RECMDT is true, I also think that even if MWI is proven false, a decision theory exists which combines randomness and relative choice-worthiness. Perhaps we can call this Random Choice-worthy Decision Theory, or RCDT.

I am still actively working on this post, but I am excited enough about this idea that I didn’t want to wait to post it. Let me know what you think!

My Secret Addiction: Project Euler Problems

BLUF (Bottom Line Up Front): This is a personal post about how I became involved with programming and my continued interest in solving mathy programming problems.

I used to believe that programming was just beyond my abilities. I thought that only the hacker type that discovered programming books when they were 10 and had few other distractions could really do it, and that it was beyond the reach of most ordinary people, including me.

As a Plebe (freshman) at West Point, I had to take an introduction to information technology course based around Jython (Python on the JVM). The course material was surprisingly confusing to me for a while (it didn’t help that we had a terrible REPL which didn’t color-code language syntax versus permissible variable names, and that we used a lot of obscure functions for picture editing).

I was over halfway through the course when I somehow discovered ProjectEuler.net and a better REPL. I ended up programming in nearly all of my free time for two weeks straight and realized that I actually did have a knack for programming. I decided to transfer from being an operations research major to being a computer science major, which I was and still am excited about.

Anyways, I still spend some of my free time on these Project Euler problems because they are so damn fun and a tractable way to improve my still-basic programming skills. I lost all my computer files last semester, so below are the Project Euler problems I have solved since then:

A Counterintuitive Probability Game

I read an interesting math paper by Thomas Cover that I struggled to believe at first. I recommend you read it before continuing with this post (it’s short). I decided to test out the claim using Python.
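My original script was lost with the rest of my files, but a minimal reconstruction of the simulation looks like this (assuming the game as I understand it from the paper: the other player picks two numbers, you see one at random, and you guess whether it is the larger by comparing it to your own number C):

```python
import random

# The other player draws two numbers from [low, high]; we see one of them at
# random and guess that it is the larger iff it exceeds our threshold C.

def win_rate(low=0.0, high=1.0, fixed_c=None, trials=200_000):
    wins = 0
    for _ in range(trials):
        a, b = random.uniform(low, high), random.uniform(low, high)
        seen, hidden = (a, b) if random.random() < 0.5 else (b, a)
        c = fixed_c if fixed_c is not None else random.uniform(low, high)
        if (seen > c) == (seen > hidden):   # did our guess match reality?
            wins += 1
    return wins / trials

print(win_rate())              # C drawn from the same range: ~0.66
print(win_rate(fixed_c=0.5))   # C fixed midway between the limits: ~0.75
```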

The results hold. When the range from which the third random number, C, is drawn is the same as the range from which the other two numbers are selected, the win rate is 66%. When C is a fixed number exactly midway between the upper and lower limits of the other two numbers, the win rate is 75%.

Thus, the benefit of using a wide range for C on the real number line is to ensure that you are at least sometimes choosing a value in between the two numbers the other player selects (they don’t have to choose anywhere around 0). If your C is never between their two numbers, your probability of winning is indeed 1/2.

Understanding the True Cost of Land-Use Projects

Update: My team’s paper earned the coveted Outstanding rating! Further, our paper won the Rachel Carson Award, which “is presented to a team selected by the Head Judge of ICM Problem E for excellence in using scientific theory and data in its modeling.” Over 4,800 collegiate teams from around the world were competing on Problem E, so I am honored that our work was recognized as the best! Here are the results.

BLUF: My team (two other college sophomores and I) competed in an academic competition involving 99 hours of modeling and paper writing. This post presents our work.

We ended up cranking this paper out: “Ecological Services Valuation Model: Understanding the True Cost of Land-Use Projects”

Intro: Our team was hired to tackle one of the greatest problems remaining in the 21st century: how do we prevent the “tragedy of the commons?” Specifically, our task was to “create an ecological services valuation model to understand the true economic costs of land use projects when ecosystem services (ES) are considered.” We discovered that answering this question is key for governments to rent land to entities for land-use projects at a price necessary to preserve the value of ES owned by all.

In our pursuit of creating a model, we began by researching the philosophical underpinnings of value. We decided that well-being, grounded in conscious subjective experience, is the only good which is intrinsically valuable. While we maintain a degree of moral uncertainty on this matter, we ultimately decided to base our valuation of ecosystem services on their expected impact on the well-being of conscious creatures, most especially humans.

We then explored the economic systems that best support our value theory, and settled on Georgism, an economic philosophy which asserts that, while individuals ought to own the fruits of their own labor, natural resources are a public good [1]. Then, we researched the possible frameworks we could use to price ecosystem services, and determined the price should reflect the cost of artificially replacing ES. In other words, the value of an ES depends on the price of replacing its services. For services that are irreplaceable, we propose a method of converting lost environmental services into Quality-Adjusted Life Years (QALYs), which may then be converted into dollars based on the cost of producing QALYs.

We explored preexisting models for pricing the ES affected by land-use projects, and found several highly developed but difficult-to-apply models. To address this, we sought to create a model which balances accurate valuation with ease of applicability, while still maintaining our value of maximizing well-being. Thus, we designed a general model with only the most applicable variables.

Check out our paper for the full report.

 

Piano Songs [Overt Signaling]

I started playing piano in 7th grade and played up through 10th grade. I’ve been able to play here and there since then, and this is what I’ve played recently:

I used to like to play classically difficult pieces like Maple Leaf Rag, but now I only make time to improvise and maintain the songs I can already play. Over the last few years, I’ve lost most of my sight-reading ability, but I have greatly improved my ability to play artistically and with nuance.

What Should You Do With Your Life? [Link List]

80,000 Hours – Perhaps the gold standard for high-impact career advice right now. “Make the right career choices, and you can help solve the world’s most pressing problems, as well as have a more rewarding, interesting life.”

Y Combinator’s Requests for Startups – “Many of the best ideas we’ve funded were ones that surprised us, not ones we were waiting for. There are, however, some startups that we’re very interested in seeing founders apply with.”

Alexey Guzey’s Link List – A very similar idea to this one, with more links.

EA Take Action – “New to EA and figuring out where to start? Long-timer looking for new things to do? Here’s our guide to what to do next.”

Counterfactual People Are Important

When looking at the morality or desirability of abortions, many claim “if my sibling with X-serious-disability were aborted, then they wouldn’t be alive today with their net-positive life and I wouldn’t know them.” This statement is true and is evidence against the desirability of abortion, but what I have never heard a pro-lifer address is the moral weight of the counterfactual people who could have been born had an abortion taken place. This is an extremely important factor in deciding the desirability of abortion, and in population ethics in general.

The valuation of counterfactual people is the same as that of potential people, which is very similar to the valuation of future people. It is based on some subjective expected utility of the nature of the person’s subjective experience, the person’s expected net impact on ethically relevant phenomena, and the probability of them coming into existence, minus a cost function. For counterfactual people, that cost function is the moral weight of a person who could exist otherwise. This may seem like a tautological definition, but let’s look at a thought experiment to more explicitly highlight the need to consider counterfactual people:

“Suppose it was a phenomenon of nature that every woman’s first embryo implanted in her womb was destined to live a life barely worth living, and would be expected to give back only barely more to society than the societal resources used to raise it. A woman could carry this child to term and get pregnant with a typical child several months after that, or she could have an abortion within a couple weeks of the pregnancy starting, and then get pregnant with a typical child, with much greater expected well-being and societal impact, within a month or two of that. The latter choice, if rationally taken, would require considering marginal cost–that is, the weight of counterfactual people.” I think it’s clear that society would be worse off if we didn’t make the latter choice at least a majority of the time.

Considering the marginal cost between having different children doesn’t mean that we must be harsh to our children with less expected impact on society and less well-being. It’s just as morally relevant to be kind to the people who could be affected by our words and actions. However, let’s not pretend we are angels when we do good while ignoring the counterfactual better good that could have taken place.

The Exponential Impact of Socially-Contagious Philanthropy

If doing any significant amount of good were basically intractable, it would be more permissible for individuals to ignore the utilitarian imperative to do the most good. However, doing incredible amounts of good is within the reach of many of us. We don’t necessarily have to research and contribute to Multiverse-wide Cooperation via Correlated Decision Making in order to do our part; doing good can be as simple as donating 10% of one’s salary to EA Funds, which, if used for causes as effective as the Against Malaria Foundation, can avert a year of lost health (a DALY) for $29. One may be able to do far more good than this, though. Consider the power of exponential growth:

If you commit to convincing two other people per year to donate 10% of their income to the EA Funds, and convince them to convince two people to do the same themselves, etc., you can expect to have 27 people donating 10% of their income within three years. Considering a simple model based on a mean US income of $72,000, one can expect to be responsible for averting 814,097 DALYs within 7 years. This assumes that these people would not do anything productive with 10% of their money if they did not donate it, that none of these people would have discovered effective giving over this time period, and that the $29 per averted DALY rate would hold. Even accounting for more realistic estimates of these factors, it is likely that one could still claim responsibility for averting over 500,000 DALYs over a 7 year period.
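One way to reproduce that figure, assuming the number of donors triples each year (you plus two recruits, each of whom recruits two more, and so on):

```python
# Assumes 3^n donors are giving during year n (3, 9, ..., 2187 by year 7).
donor_years = sum(3**n for n in range(1, 8))        # 3 + 9 + ... + 2187 = 3279
annual_donation = 72_000 * 0.10                     # 10% of the mean US income
dalys_averted = donor_years * annual_donation / 29  # $29 per DALY averted
print(donor_years, round(dalys_averted))            # 3279 814097
```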

This is a substantial amount of good. Frankly, I struggle to imagine how these 2187 people’s discretionary income could be better spent. To say it would be better for people not to deliberately spend part of their discretionary money on charity and research via the EA Funds is to suggest either that the EA Funds managers are ineffective at choosing organizations and causes to give to, or that each person could get about 247 years’ worth of pleasure by spending that 10% of their income on themselves. I think both of these are highly unlikely to be the case.

If this inspired you, I encourage you to take a giving pledge and share your reasons for taking the pledge on social media! Like all habits, giving is contagious 🙂

Productivity Link List

Alexey Guzey’s Concise Thoughts on Productivity – A great place to start if you feel like you are not at peak productivity.

The Most Dangerous Writing App – “Don’t stop typing, or all progress will be lost.” This is a great tool to break through writing barriers and force yourself to make progress. Hemingway and Grammarly can also help you write better!

LastPass – Logging in to websites quickly while maintaining secure passwords can otherwise be a huge time waster. LastPass is a solution that I personally trust, and Harvard does too. I just use a separate randomly generated passphrase and a different two-factor authentication system for my email to spread my risk.

RescueTime – There are so many distractions available today, such important work to do, and so little time. This app can help you track how you actually use your time so you can better hold yourself accountable.

WorkFlowy – “WorkFlowy is a single document that can contain infinite documents inside it. It’s a more powerful, easier way to organize all the information in your life.” Some people swear by this, but personally, I prefer Standard Notes, as seen below.

Standard Notes – If WorkFlowy is not your thing or you want an encrypted alternative, Standard Notes seems to be the best option out there.

Making Total Utilitarianism More Intuitive

BLUF: If total utilitarianism’s obligation to create new beings seems non-intuitive, think of increased numbers of beings as increased duration of subjective experience. We are allowed to redefine population as duration because subjective experience is fundamentally impersonal and based on physics, where there is no room for personhood.

Utilitarianism usually states that maximizing the quality of conscious experience is important… However, Henry Sidgwick has asked, “Is it total or average happiness that we seek to make a maximum?”[1]

Total hedonic utilitarianism says that we ought to consider both the desirability of any subjective experience and the number of subjective experiences in determining moral action. Level of happiness is a pretty intuitive aspect of total utilitarianism: everything else being equal, we want any creature to have a more desirable subjective experience.

I grant that it is less intuitive that we ought to value additional beings with marginally net-positive subjective experiences. “Are we really obligated to make additional happy beings?” is a fair question. This is captured in Derek Parfit’s repugnant conclusion: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living”.

I think a more intuitive but equally fair way to consider our obligation to maximize beings with net-positive subjective experiences is to represent the number of beings via the duration of experience. More beings being alive at any given moment means there is a greater total duration of subjective experience.

I think duration is pretty intuitive to ethically desire. All things being equal, we prefer a desirable subjective experience to continue.

We are allowed to redefine population as duration because subjective experience is fundamentally impersonal and based on physics (where there is no room for personhood).

Suspend Moral Self-Judgement for Higher-Quality Reasoning

I think most people have an intense psychological need to feel they are ‘good’. After all, if we are not ‘good’, we probably have extra work ahead of ourselves to set ourselves straight and at the very least, preserve our social standing. Some of us do moral calculus all the time in order to stave off guilt and justify our current course of action. The mature among us value intellectual honesty when doing this, and try to avoid jumping to convenient conclusions.

With all this being said, I think a lot of us too often fall short of being intellectually honest because we really value perceiving ourselves as being ‘good’. For example, just consider the most common argument against moral philosopher Peter Singer’s main point in his famous essay “Famine, Affluence, and Morality”. Many people reject his argument because it’s too demanding–not because its premises are flawed or the logic tying them together is faulty, but because the conclusion implies just about everyone is currently not as good as they think they are.

If people could better suspend their moral self-judgment, they wouldn’t fall into this sort of trap. There is a time and a place to deal with moral guilt (hopefully by altering our behavior), but it shouldn’t be while we are trying to determine moral truth.

If this sounds trivially obvious, when is the last time you felt you were a moral monster? When did you last feel heavy guilt for spending resources on yourself that could be better allocated to reliably avert a lot of others’ suffering? If you’ve never felt that guilt, you may be putting the cart before the horse in your moral reasoning.

On Moral Relativism

Here I briefly describe why I think some people think moral relativism has a significant truth value.

What does moral relativism say? “You can’t say one culture’s values are better than another’s, because you evaluate them through your own biased lens.” I hope this is a fair synopsis.

Counter: Moral relativism is incoherent because utility is grounded in the real world, and different actions certainly have different effects on the real world. I bet this is clear to most moral relativists, but I believe there exists a line of reasoning which obscures their thinking.

Moral relativists are probably seeking tolerance. People used to get burned at the stake for believing something outside of the Overton window. Those of us who have discovered the fruits of living in a liberal, diverse society obviously do not want to live in one where people with “wrong” opinions they believe to be true are too afraid to speak their minds.

Moral relativists are also probably against imperialism. A common justification for imperialism is that the colonized people’s beliefs are wrong and need to be managed.

Moral relativists want to prevent stonings, the closure of public discourse, and imperialism, but instead of focusing on how one ought to respond to another person’s or culture’s wrong belief, they say the belief isn’t wrong in the first place. They perceive the slippery slope that begins with judging another culture’s values as a greater threat than failing to act as if cultural beliefs really do have different utility.

Followed to its conclusions, this creates an unmanageable world to live in, as we simply cannot maximize any utility function that is grounded in the real world.

Effective Self-Care Link List

BLUF: A periodically updated collection of links relating to improving one’s well-being and personal life.


James Clear– I’m partially allergic to the term “self-improvement”, which often just consists of the same not-even-wrong platitudes regurgitated repeatedly, but James actually has a bunch of decent, concise articles that I appreciate every so often.

How to Survive Being Attacked by Nuclear Missiles, in 60 Seconds– Good, concise advice.

NutritionFacts.org– Given all the problems and conflicting advice in nutrition science, I think most of us still have plenty of doubts about what constitutes a healthy diet. I am no expert, but I was obsessed with nutrition for a few months about 4 years ago, and I ended up trusting this site more than any other source.

EFF Surveillance Self-Defense– If one can’t trust the Electronic Frontier Foundation, I am not sure whom one can trust as a source for information about digital security. This is a good list covering many topics relating to infosec.

Ethics Demands Speed

BLUF: For problems that we are destined to solve, one’s impact can be thought of as the amount of time by which one’s efforts expedite the solution, times the utility of each moment gained.

We are already well on the path to solving a few of the great ethical challenges that face us. A clear example, in my mind, is factory farming. The science is already here showing that these animals are very likely experiencing immense suffering, we are quickly coming up with replacements for meat (to satisfy stubborn consumers) via food technology, and moral advocacy efforts are leading people to become sentientists and reducetarians or vegans. It’s only a matter of time until the will of the populace leads politicians to pass legislation to drastically tax animal products and improve the conditions of factory farms, or outright shut them down.

There is a lot to be done between now and the end of factory farming, so I don’t mean to detract from the efforts still required at all. In my mind, this is still a very neglected cause area with room for many people to spend a life devoted to it. That being said, I would bet a lot that factory farming will be done away with in the western world by 2100, and hopefully much sooner. Perhaps the simplest moral question then for one to ask is “when?”

There are a variety of ways to measure one’s impact in a cause area. How much does one increase the probability of a solution coming to fruition, times the marginal utility of said solution? The other clearest metric to me is: how much sooner will a solution be enacted because of one’s efforts? If my actions cause a tax hike on animal products to arrive two weeks early, and in those first two weeks we see a reduction in demand of 50 million animals, which leads to a reduction in supply of 40 million, then I am responsible for preventing the suffering of those 40 million animals. That’s a huge impact. On the contrary, if I didn’t do anything for these factory farms when I could have statistically sped up the tax hike by two weeks, then the suffering of those 40 million animals is on my hands.

Ethics demands speed.

The Moral Importance of Future People

Conventional ethics does not explicitly assign significant moral weight to people who are expected to exist in the future. This is problematic for a number of reasons:

  • their subjective experiences will be just as salient when they are alive as ours are today.
  • there are a number of actions we can take today to help them:
    • we can focus more resources on reducing our risk of civilizational collapse to ensure their existence.
    • whereas much of current people’s quality of life is ‘locked in’ via their genes, we can significantly influence the genes of future people to give them the best chance at living their best lives.
  • technological and cultural inventions tend to help many more future people than many direct interventions that help people today. Consider if we transferred some of the exorbitant resources used on end-of-life care into gerontological research.

Even if one does not buy into total utilitarianism and does not think that we have an obligation to create beings that are expected to have net-positive subjective experiences, one should still value the lives of human beings who will probabilistically exist. To suggest otherwise is to say that time alone has an effect on the quality of subjective experience, which, just like location, probably has no such effect.

How should we consider the moral weight of future expected beings? I believe we should use expected utility, weighting the moral worth of people by the probability that they will exist. For example, 100 people who on average have a 95% probability of existing should have the moral weight of 95 people.

Of course, we could be more certain of the effects of long-term interventions for future people, as well as of the flow-through effects of helping people today, but that is a different question. Before we can decide that the expected utility of a future intervention makes it not worth pursuing compared to helping people today, we must acknowledge the moral weight of future people.

[Link] It’s Supposed To Feel Like This: 8 emotional challenges of altruism – Holly Morgan

You should try to do more good in the world because it’s the right thing to do.

Unfortunately, this isn’t always sufficient motivation for people to act accordingly, so altruists[1] often tell non-altruists about the personal benefits of altruism — the evidence that giving makes you happier, the friendly and supportive communities of altruists you can join, the sense of meaning altruism can bring to fill the void left by an increasingly secular world, and so on.

But leading an altruistic life is not always plain sailing, and I think it’s important to acknowledge that from time to time. And not only to acknowledge that times get rough, but the specific ways in which they get rough.

-Holly Morgan, It’s Supposed To Feel Like This: 8 emotional challenges of altruism

Update: An additional useful link about a related topic, scrupulosity

Subjective Probability

BLUF: I found an essay by Nick Bostrom that perfectly coincides with my ideals regarding Bayesian probability and how I aspire to consciously hold degrees of belief and continually update on evidence.

For me, belief is not an all-or-nothing thing—believe or disbelieve, accept or reject. Instead, I have degrees of belief, a subjective probability distribution over different possible ways the world could be. This means I am constantly changing my mind about all sorts of things, as I reflect or gain more evidence. While I don’t always think explicitly in terms of probabilities, I often do so when I give careful consideration to some matter. And when I reflect on my own cognitive processes, I must acknowledge the graduated nature of my beliefs.

-Nick Bostrom, 2008, Response to the 2008 EDGE Question: “What Have You Changed Your Mind About?”