In my Plebe (freshman) year literature class, about 7 months ago, I wrote an adventurous short story (~2600 words) that combines effective altruism, the tech world, fake news, and a moral lesson about the possible tragic consequences of naive unilateral action. Let me know what you think!
Getting the Hands Dirty for a Good Cause
“I know. You’re disappointed that I didn’t turn out to be like you,” boomed out of the surround-sound system connected to the 68-inch plasma TV, which blocked the view of the Pacific from San Francisco.
“No. No, nonono. I was disappointed… that you tried.”
Morgan clicked off the television, intent on using this temporary wave of positive emotion to do something good with his time. He refused to fall victim to the modern societal “curse” of ennui and complacency that seemed to follow naturally from having his basic evolutionary needs so easily met. He was determined to escape the boredom of this summer day.
He pulled out an index card with the 3 things he had set for himself last night to do today. He knew that if he didn’t write out goals and accomplish them early in the day, he would probably wind up with little to show for his time. It was summer, so he certainly didn’t feel like going too crazy, but 3 things was reasonable, and his list for today was:
Create and distribute a flyer for teaching piano lessons.
Get in a sprint workout at the track.
Figure out something really good to do for society.
He had created the flyer in the morning and hung up a few copies around his neighborhood community center at the beginning of his run to the track, where he had gotten in his sprint workout. He checked both of those off.
He had just finished his lunch of tofu and vegetable stir fry, eaten while lounging on the couch and watching Inception, which he had found playing on TV.
As it was summer, he had virtually no external obligations, which was awesome at first, but he had found that his life was devoid of the usual meaning it held when he was busy during the school year.
He had brought up this boredom to his parents, and they suggested he volunteer. He was bored enough that he took their advice: he went online, found a local food bank, and went in for about 6 hours. It helped his boredom a little, but by the end of the day, he felt like he had done more good for himself by moving heavy boxes of food around than he had for the hungry people of his area. He wasn’t really redirecting resources to places they wouldn’t otherwise go, he figured; he was just processing them in a job that probably would have been filled by someone else had he not committed to volunteering those hours.
So he got interested in doing something good this summer that was a better use of his time.
The only problem was that every idea he came up with was far too unappealing: a lot of work for an impact that just didn’t make it worth it to him.
But he kept thinking. “How can I do the most good?”, he punched into his Google search bar on his laptop.
The Most Good You Can Do - Wikipedia was his second hit, and he clicked on the link.
“Mate, toss me another Red Bull,” commanded Arthur, who never looked away from his code. He had 4 minutes left out of the hour for this problem in the hackathon, and he was not about to lose this early in the competition. He had been almost done 10 minutes ago, but his computer had restarted without his permission and he had ended up having to retype everything. You might think a Google software engineer with a Ph.D. in computer science would have automated saving his work by now, but unfortunately, you would be wrong.
Anyway, there was money on the line, $50,000 for 1st place, and equally important, his pride. He would never hear the end of his coworkers’ jokes if he failed to complete this problem. Hiding Moby Dick in a picture with fewer pixels than characters in the text didn’t require learning anything particularly new, but it did require some techniques he hadn’t practiced since his undergrad years at Caltech.
2:59, 2:58, 2:57… the clock counted down.
He wasn’t going to make it. He was going to have to play one of his strongest cards now. He was going to freeze Gmail. Yes, disable all Gmail transfers. It was a trick he had set up while working in the G Suite department, and one that would only work once, since a patch would probably be in place within 2 hours. He had hoped to use it for something bigger, like shorting Alphabet stock, but he was also concerned the Feds and Google’s investigation team would trace it back. This seemed petty enough that no one would trace it back to him. He texted “stop” from his burner phone to the trigger mechanism he had implanted in Google’s server farm. He tried to send an email from an account not connected to him to his Gmail account. Nothing showed up in his inbox. It had worked!
1:59, 1:58, 1:57…
He got up, took his laptop to the bathroom, and got back to his coding, hoping enough people hadn’t sent their submissions yet that the competition regulators would extend the time. Obviously, something was wrong if half the contestants hadn’t submitted anything this early on.
Arthur’s adrenaline was pumping. He would get blasted by the courts, civil and criminal, if his trick was ever traced back to him. He was fine though; he knew he was. He finished up his text compression algorithm, compressed Moby Dick to about a third of its original size, and hid it in a picture that was not even 2 megabytes. It was already 3 minutes after the clock had run down, but he submitted his work to the competition officials, knowing that he was still in the competition. He checked Gmail’s Twitter, and sure enough, they were already apologizing for technical errors. He felt like the Inception team must have felt when they woke up on the 747 after implanting an idea in Fischer’s mind. He felt amazing.
One week later.
Morgan was typing away passionately. His life had changed so much in the past week. He had discovered the effective altruism movement, which was about using science and rationality to do the most good possible; in practice, that generally meant finding the charities in developing countries that added the most years of quality human life per dollar. Apparently, some charities were 1000x better than others, and some could reliably add a year of human life for about $100.
But soon after, he had discovered the existential risk reduction movement, whose basic idea was this: it would be great to help some of the 7.5 billion people alive today, but even better to ensure that trillions of people got to exist in the future, which could only happen if humanity did not go extinct. He felt there was a pretty good chance of that happening in the next century, considering the Tweets being exchanged by Trump and “The Little Rocket Man” Kim Jong Un.
Anyway, he found the ideas so compelling that he turned down requests from his friends to hang out. He just read the major works in the field and took detailed notes as he did so. Mainly, he was frustrated by the lack of concern policymakers, or even scientists, seemed to show about any of this. “There is more scholarly work on the life-habits of the dung fly than on existential risks,” he read from Nick Bostrom. Ensuring humanity’s survival was obviously a more critical problem than almost anything else, but it was just not salient enough for the public to vote on. Politicians would be held accountable for spending billions on preventing something that, over the coming century, would most likely not happen.
While he was reading, an insane idea came to him: what if he could spread enough fake news about a biological superbug to get society to actually care about the threat and invest in protecting against it? That would certainly raise the salience of the issue to the American people and get the extra billions he thought were needed for the CDC.
Morgan decided he needed a machine learning expert to help him optimally spread the fake news, and he posted to all sorts of websites like StackExchange looking for one, saying he would pay top dollar for a short-term altruistic project. By the evening, Morgan had received several Gmails with resumes attached, which he thought was insane. One of them, in particular, stood out: a Google AI researcher with a Ph.D. from UC Berkeley named Arthur. He figured he was getting trolled by some scam artist, but sure enough, he found the guy’s name on a Google website and his thesis published online.
He emailed Arthur back with his phone number and insisted they talk by phone so he could explain his proposal. Sure enough, Morgan’s phone rang within 5 minutes, which might have seemed needy had Morgan not offered $20,000 for the gig.
“Good evening, this is Arthur Oppenheimer. Is this Jack Rutherford?”
“Yes, it is,” Morgan replied in a professional voice, relieved that Arthur sounded just like he did in a video of him lecturing that Morgan had found on YouTube. And no, he wasn’t about to give up his real name this early on.
“I saw your ad posted for an AI gig, what is it that I can help you with?”
“Yeah, I am not comfortable talking on this unsecured line. Do you have Signal?” Morgan asked, referring to the encrypted calling app.
“Yes, of course,” Arthur replied, knowing that many entrepreneurs in Silicon Valley were paranoid about having their ideas stolen. “Same number?”
“I’ll call you right back then.” Arthur hung up and called back “Jack” on Signal.
“Hey Jack, what can I help you with?”
“I am looking to spread a message.”
“Like an advertisement?” Arthur asked.
“No, well, yes, but I need an AI researcher to help me do it, because I need the message to be adaptable and to look like news.”
Arthur, although alone in his apartment, raised an eyebrow, concerned this was some kind of trap. “What kind of news?” he asked.
“Do you believe the ends justify the means?” Morgan asked as calmly as possible, his palms sweating, as he realized the implication of what he was doing.
“I am very concerned that organizations like the CDC and the NIH are not being funded enough to research preventing the abuse of CRISPR and the creation of superbugs. I think there is a significant risk of a catastrophic outbreak, and due to the interconnectedness of everyone, a chance that 99% of people could become infected with a bug made in a lab or arising inadvertently in a factory farm.”
“I hear you. What are you asking of me?”
“I want you to help me create a fake pandemic and get as many people to believe it as possible.” Morgan was sweating profusely now.
Arthur was shocked by the request, and while he hated what Cambridge Analytica had done, this didn’t sound so bad to him. He thought to himself about some of the algorithms he could write to propagate this “news,” but he was getting nervous too. What if this was a trap? Or even if it wasn’t, what if this idiot messed something up and got him caught?
“Nah mate, not interested. You really ought to go through the proper channels to make the change you want to see in the world.” Arthur hung up, glad that he had made the right call. He had a great job, and he had just received the $50,000 from the hackathon in his bank account. He didn’t need to risk what he had built for himself over $20,000.
Morgan felt relieved in the ensuing silence, but he also felt ashamed that he wasn’t going through the proper channels. No, he told himself, the proper channels don’t work, and you have to take this into your own hands. He knew that the existential risk movement was doing all it could: petitioning Congress, writing books. No one was getting their hands dirty, so he had to do it. If he were caught, the world would eventually thank him, although he wasn’t sure he wouldn’t end up in jail.
Arthur had trouble sleeping that night, mad at himself for failing to exercise at all. His mind wandered back to the strange conversation from earlier, and he considered the offer again. It would be an excellent opportunity to apply one of the algorithms he had developed in grad school, but he was concerned about getting caught. Other people were so careless about their digital footprints. He himself had narrowly dodged a bullet today, when he had almost called Jack on the burner phone from the hackathon, which was still off with its battery out. Then the idea came to him: what if he did it himself? He could anonymously reach out to Jack with his Monero (an anonymous cryptocurrency) public key and offer to do the job, so long as he got, say, $5,000 after the first bit of news surrounding the story appeared on Twitter, and $15,000 after it got a mention on the local news.
He got up and emailed Jack anonymously, inquiring about the job and pretending to be a new applicant.
Morgan had been up, responding to more emails, focusing more on vetting the quality of the applicants than on explaining the job. He did not want to get hung up on again. He needed to find the right person.
“Send your CV for further consideration, please” Morgan replied.
“How many people have you interviewed for the job?” asked Arthur.
“Plenty” lied Morgan, unsure of the real usefulness of the question.
Arthur sent back, “I know what your job is, and I’ll do it. I’ll need you to look for the hashtag #biothreat on Twitter when I tell you to, and then you will pay me, and then I will do more, and then you will pay me more, and we can keep this going. But I am going to do this myself.” Arthur also attached his Monero public key along with instructions on how to pay.
“Ok, I trust you,” replied Morgan. What else could he do?
“Save the Monero key, and delete these emails, and I’ll contact you in a couple of weeks.”
Two weeks later, the #biothreat hashtag started trending on Twitter, but more significant than that, reports began spreading of a nuclear bomb detonating over Delhi, India. Initial fingers naturally pointed at Pakistan, and situation rooms around the world rapidly filled with leaders and their staffs. However, India claimed responsibility: its officials said they had made the extremely difficult call to neutralize a rapidly spreading virus that had been released from a biological warfare lab and was killing people within the hour. India’s president reported that scientists said the virus had likely already spread to 100,000 people, and they believed it was being dispersed by the wind.
Little did anyone know, except for Morgan and Arthur, that Arthur’s black box AI algorithm, designed to maximize reshares and retweets, had acted horribly wrong. Except it hadn’t. It had acted exactly right, per the goals programmed into it. Way too right.