Top Time Travel Interventions?

post by abramdemski · 2020-10-26T23:25:07.973Z · LW · GW · 3 comments

This is a question post.

Contents

  Answers
    24 Daniel Kokotajlo
    19 Linch
    12 avturchin
    6 Tao Lin
    6 Gurkenglas
    5 abramdemski
    4 Tao Lin
    4 johnswentworth
    1 Richard Horvath
    1 erinbailey
    1 AnthonyC
3 comments

I encourage people to treat this as an exercise and avoid looking at the other answers before posting your own. (Maybe use spoiler blocks?) This is more for fun than anything serious. However, it may potentially serve as an analogy [LW · GW] to help think about what the top modern-day interventions might be.

The challenge: you have one opportunity to go back in time and change something, for the good of all humankind. Use your upvotes wisely; it's how we'll judge success. My suggestion: upvote based on what you think would actually do good; strong upvote what you think works best. If you want to give kudos for other reasons, such as insightfulness, I suggest using a comment instead.

You don't have magical powers to change anything you want (so for example, you can't just "prevent the agricultural revolution"; you have to have a plan to do that). However, you do have the ability to take modern technology and resources back with you. For simplicity, though, let's keep it to one time-travel event -- your machine is single-use-only and you can't afford to build two. (You could bring the science of time travel back with you, but I don't know if that's a good idea...)

For the sake of the thought experiment, pretend that you can actually change things via time travel, giving rise to a new timeline (rather than there only being one consistent timeline, like we'd expect of real time travel).

What do you do, to do the most good?

Stretch goals: solve the problem under harsher constraints, such as time travel machines with restricted ranges. Block key things most important to your strategies so far, and see what you can still do.

I think this problem is challenging for some specific reasons, but I'll put them behind a spoiler block for those who want to just dive into it.

ETA: I also posted a similar challenge on the EA forums [EA · GW].

I'm specifically interested in this challenge because of X-risk. I used to think that, in order to do the most good, you basically take as much scientific knowledge as you can, as far back as you can, and try to teach it to people or something. But it's quite plausible that that just brings X-risk nearer, for a wide variety of sciences and technologies one might try to smuggle back.

Similarly, you could just try to do basic good for people -- averting large historical disasters, etc. But again, this might just accelerate progress, giving humans the power to create global catastrophic risks sooner, and thus giving humanity less time overall.

This makes things feel a little hopeless -- is it possible to do very much good at all?

Answers

answer by Daniel Kokotajlo · 2020-10-27T08:33:03.199Z · LW(p) · GW(p)

I go back in time to the year 1992. I bring with me as much information about AI, AI risk, AI safety, etc. as I can. I also bring back the sort of information that will help me make loads of money--technologies and startups to invest in, for example. The code for Ethereum or Bitcoin or something. The code for Transformers and LSTMs and whatnot.

Basically, my plan involves repeated applications of the following two steps:

  1. Make money on the stock market (later, as a venture capital firm, and later still, as a giant tech company)
  2. Identify extraordinary people who in my timeline were sympathetic to AI risk (e.g. Nick Bostrom, EY, Elon Musk) and convince them that I'm from the future and I'm here to help etc., then convince them of AI risk and get them to join my conspiracy/corporation.

By 2020, I'd have a team of some of the world's most capable people, at the helm of a megacorporation the size of Alphabet, Facebook, Amazon, Tesla, and SpaceX combined. The rationalist and EA movements would also exist, but they'd have more funding and they would have started ten years earlier. We'd all be very concerned about AI safety and we'd have a budget of a hundred billion dollars per year. We'd have tens of thousands of the world's smartest researchers working for us. I feel fairly confident that we would then figure out what needs to be done and do it.

answer by Linch · 2020-10-27T02:09:35.354Z · LW(p) · GW(p)

Broadly, I think I'm fairly optimistic about "increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal [EA · GW] to humanity's history."

(Baseline: I'm bringing myself. I'm also bringing 100-300 pages of the best philosophy available in the 21st century, focused on grounding people in the best cross-cultural arguments for values/paradigms/worldviews I consider the most important). 

Scenario 0: Mohist revolution in China

When: Warring States Period (~400 BC)

Who: The Mohists, an early school of proto-consequentialists in China, focused on engineering, logic, and large population sizes.

How to achieve power: Before traveling back in time, learn old Chinese languages and a lot of history and ancient Chinese philosophy. Bring with me technological designs from the future, particularly things expected to provide decisive strategic advantages to even small states (e.g., gunpowder, Ming-era giant repeating crossbows, etc. Might need some organizational theory/logistical advances to help maintain the empire later, but possibly the Mohists are smart enough to figure this out on their own. Maybe some agricultural advances too). Find the local Mohists, teach them the relevant technologies and worldviews. Help them identify a state willing to listen to Mohists to prevent getting crushed, and slowly change the government from within while winning more and more wars.

Desired outcome: Broadly consequentialist one-world government, expanding outwards from Mohist China. Aware of all the classical arguments for utilitarianism, longtermism, existential risks, long reflection, etc.

Other possible pivotal points:

  1. Give power to leaders of whichever world religion we think is most conducive to long-term prosperity (maybe Buddhism? High impartiality, scientific-ish, vegetarian, less of a caste system than close contender Hinduism)
    1. Eg, a) give cool toys to Ashoka and b) convince Ashoka of the "right" flavors of Buddhism
  2. Increase power to old-school English utilitarians.
    1. One possible way to do this is by stopping the American revolution. If we believe Bentham and Gwern, the American revolution was a big mistake.
      1. Talking to Ben Franklin and other reasonable people at the time might do this
      2. Might be useful in general to talk to people like Bentham and other intellectual predecessors to make them seem even more farsighted than they actually were
    2. It's possible you can increase their power through useful empirical/engineering demonstrations that help establish them as knowledgeable.
  3. Achieve personal power
    1. Standard thing where you go back in time by <50 years and invest in early Microsoft, Google, Domino's Pizza, bitcoin, etc.
    2. Useful if we believe now is at or near the hinge of history
  4. Increase power and wisdom to early transhumanists, etc.
    1. "Hello SL4. My name is John Titor. I am from the future, and here's what I know..."
    2. Useful in most of the same worlds #3 is useful.
  5. Long-haul AI Safety research
    1. Bring up current alignment/safety concerns to early pioneers like Turing, make it clear you expect AGI to be a long time away (so AGI fears aren't dismissed after the next AI winter).
    2. May need to get some renown first by casually proving/stealing a few important theorems from the present.

In general I suspect I might not be creative enough. I wouldn't be surprised if there are many other pivotal points around, eg, the birth of Communism, Christianity, the Scientific Revolution, etc.
 

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-27T08:37:23.696Z · LW(p) · GW(p)

Fuck yeah! I'd love to see a short story written with this premise. The Mohists sound really cool, and probably would have been receptive to EA ideas, and it's awesome to imagine what the world would have been like if they had survived and thrived. Make sure you bring back lots of stories about why and how communism can go wrong, since one failure mode I anticipate for this plan is that the government becomes totalitarian, starts saying the ends justify the means, etc. Maybe bring an econ textbook.

Replies from: Linch, Linch
comment by Linch · 2020-10-27T23:29:39.047Z · LW(p) · GW(p)

I'd love to see a short story written with this premise.

I'd love to see this. I've considered doing it myself but decided that I'm not a good enough fiction writer (yet).

comment by Linch · 2020-10-28T02:38:45.662Z · LW(p) · GW(p)

I'm also generally excited about many different stories involving Mohism and alternate history. I'd also like to see somebody explore the following premises (for different stories):

1) A young Mohist disciple thought about things for a long time, discovered longtermism, and realized (after some calculations with simplified assumptions) that the most important Mohist thing to do is to guarantee a good future hundreds or thousands of years ahead. He slowly convinces the others. The Mohists try to execute on thousand-year plans (like Asimov's Foundation, minus the availability of computers and advanced math).

2) An emperor converts to Mohism. 

3) The Mohists go underground after the establishment of the Qin dynasty and its alleged extreme suppression of dissenting thought. They develop into a secret society (akin to the Freemasons) dedicated to safeguarding the long-term trajectory of the empire while secretly spreading consequentialist ideas.

4) A near-schism within the now-Mohist China due to the introduction of a compelling religion: dissent about whether to believe in the supernatural, burdens of proof, concerns with infinite ethics, etc.

comment by abramdemski · 2020-10-27T17:59:01.526Z · LW(p) · GW(p)

Oh, wow, Mohists do sound really awesome. From wikipedia:

The Mohists formed a highly structured political organization that tried to realize the ideas they preached, the writings of Mozi. Like Confucians, they hired out their services not only for gain, but also in order to realize their own ethical ideals. This political structure consisted of a network of local units in all the major kingdoms of China at the time, made up of elements from both the scholarly and working classes. Each unit was led by a juzi (literally, "chisel"—an image from craft making). Within the unit, a frugal and ascetic lifestyle was enforced. Each juzi would appoint his own successor. Mohists developed the sciences of fortification and statecraft, and wrote treatises on government, ranging in topic from efficient agricultural production to the laws of inheritance. They were often hired by the many warring kingdoms as advisers to the state. In this way, they were similar to the other wandering philosophers and knights-errant of the period.

Sure, it's "similar to the other wandering philosophers and knights-errant of the period", but it's such a good position for a group of proto-rationalists to be in.

And their philosophy has so much in common with EA! They're utilitarian consequentialists! (Not hedonists, but some kind of utilitarian.)

Replies from: Linch
comment by Linch · 2020-10-28T02:29:27.725Z · LW(p) · GW(p)

Personally, I feel a lot of spiritual kinship with the Mohists (imo much cooler, by my modern/Westernized tastes, than the Legalists, Daoists, Confucians, and other philosophies popular during that time).

(The story below is somewhat stylized; don't take it too literally.)

The Mohists' main shtick was that they traveled the land teaching their ways during the Warring States period, particularly to weaker states at risk of being crushed by larger/more powerful ones. Their reputation was great enough that kings would call off invasions based only on the knowledge that Mohist disciples were defending the targeted cities.

One (somewhat anachronistic) analogy I like is thinking of the Mohists as nerdy Jedi. They are organized in semi-monastic orders. They live ascetic lifestyles, denying themselves worldly pleasures for the greater good. They are exquisitely trained in the relevant crafts (diplomacy and lightsaber combat for the Jedi; logic, philosophy, and siege engineering for the Mohists).

Even their most critical flaws are similar to those of the Jedi. In particular, their rejection of partiality and emotion feels reminiscent of what led to the fall of the Jedi (though I have no direct evidence it was actually bad for Mohist goals). More critically, their short-term moral goals did not align with a long-term stable strategy. In hindsight, we know that preserving "balance" between the various kingdoms was not a stable strategy, since "empire" was an attractor state.

In the Mohists' case, they fought on the side of losing states. Unfortunately, one state eventually won, and the ruling empire was not a fan of philosophies that espoused defending the weak.

comment by DanielFilan · 2020-10-27T03:21:59.237Z · LW(p) · GW(p)

Talking to Ben Franklin and other reasonable people at the time might do this

FWIW, after reading his biography, I get the impression that Franklin was very much under pressure from Bostonians who were really mad at the British, and could not have been less pro-revolution without being hated and discredited. I think what you actually want is to somehow prevent the Boston 'massacre' or similar.

Replies from: Linch
comment by Linch · 2020-10-27T23:28:38.711Z · LW(p) · GW(p)

Darn. Hmm, I guess another possibility is to see whether ~300 years of advances in propaganda and social technology would make someone from our timeline much more persuasive than people of the 1700s, and, after some pre-time-travel reading and marketing/rhetoric classes, try to write polemical newsletters directly (I'm unfortunately handicapped by being the wrong ethnicity, so I'd need someone else to be my mouthpiece if I did this).

Preventing specific pivotal moments (like assassinations or Boston 'massacre') seems to rely on a very narrow theory of change, though maybe it's enough?

comment by abramdemski · 2020-10-27T17:44:31.295Z · LW(p) · GW(p)

How to achieve power: Before traveling back in time, learn old Chinese and a lot of history and philosophy. Bring with me technological designs from the future, particularly things expected to provide decisive strategic advantages to even small states (e.g., gunpowder, Ming-era giant repeating crossbows, etc. Maybe some agricultural advances too). Find the local Mohists, teach them the relevant technologies and worldviews. Help them identify a state willing to listen to Mohists to prevent getting crushed, and slowly change the government from within while winning more and more wars.

Responding only to whether this part would work:

I like the idea of bringing back an entire sequence of war tech, so that you can always keep your side ahead of the curve.

I'm no historian of war, so the following might not be good enough, but something like...

Bring back horseback fighting techniques, which won out over chariot-based cavalry at some point. Teach this to the Mohists.

When others imitate, start teaching improved blacksmithing to the Mohists, for better swords, arrows, and armor.

When others imitate, bring out the gunpowder.

Etc.

Of course, this kind of sequence might require multiple generations, since any one of these technologies has the potential to continue providing momentum for a long time. But it seems like it could be incredibly effective.

Perhaps, if the Mohists are sane enough, you could teach them everything at once, but with the plan to carefully stage the use of various technologies.

comment by abramdemski · 2020-10-27T17:35:56.221Z · LW(p) · GW(p)

Broadly, I think I'm fairly optimistic about "increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal [EA · GW] to humanity's history."

After this first sentence, I expected that I would disagree with you along the lines of "you're going to accelerate progress, and hasten the end", but now I'm not so sure. It does seem like you're putting more of an emphasis on wisdom than power, broadly taking the approach of handing out power to the wise in order to increase their influence.

But suppose you roll a 20 on your first idea and have a Critical Success establishing a worldwide Mohist empire.

Couldn't that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?

You aren't bringing democracy or other significantly improved governmental forms to the world. In the end it's just another empire. It might last a few thousand years if you're really lucky.

If we assume technological progress is about the same or only accelerated a little, then this means consequentialist ideals (Mohist thinking plus whatever you bring back) get instilled across the whole world, completely changing the face of human moral and religious development. This seems pretty good?

But part of how you're creating a worldwide empire is by giving the Mohists a technological lead. I'm going to guess that you bring them up to the industrial revolution or so.

In that case I think what you've done is essentially risk two thousand years of time for humans to live on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.

This seems like a bad bargain to me.

Replies from: Linch
comment by Linch · 2020-10-28T03:00:41.061Z · LW(p) · GW(p)

Couldn't that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?

You aren't bringing democracy or other significantly improved governmental forms to the world. In the end it's just another empire. It might last a few thousand years if you're really lucky.

Hmm, I don't share this intuition. I think a possible crux is the answer to the following question:

Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?

I claim that unless you have good outside-view or inside-view reasons to believe otherwise, you should basically assume our current trajectory is at the ~50th percentile of possible worlds. (One possible reason to think we're better than average is anthropic survivorship bias, but I don't find it plausible, since I'm not aware of any extinction-level near misses.)

With the 50th percentile baseline in mind, I think that a culture that is broadly 

  • consequentialist
  • longtermist
  • one-world government (so lower potential for race dynamics)
  • permissive of privacy violations for the greater good
  • prone to long reflection and careful tradeoffs
  • has ancient texts that a) explicitly warn of the dangers of apocalypse and b) instill a strong belief that the end of the world is, in fact, bad
  • has specific scenarios (from those ancient texts) warning of anticipated anthropogenic risks (dangers of intelligent golems, widespread disease, etc.)

seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.

Does this seem right to you? If not, at approximately what percentile would you place our current trajectory?

___

In that case I think what you've done is essentially risk 2 thousand years of time for humans to live life on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.

This seems like a bad bargain to me.

Moral uncertainty aside, sacrificing 2000 years of near-subsistence-level existence for billions of humans seems like a fair price to trade for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings for billions of years (or avoiding S-risks, etc.). And right now I think that (conditional upon success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.
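
Spelled out as a rough back-of-envelope (the magnitudes below are illustrative placeholders for "billions of humans", "~2000 years", "a percentage point", and "orders of magnitude more beings for billions of years", not independent claims):

$$\text{cost} \approx 2000 \text{ years} \times 10^{9} \text{ humans} \approx 2 \times 10^{12} \text{ subsistence human-years}$$

$$\text{benefit} \approx 0.01 \times 10^{12} \text{ beings} \times 10^{9} \text{ years} = 10^{19} \text{ utopian being-years}$$

On those (admittedly made-up) numbers, the upside outweighs the downside by roughly six or seven orders of magnitude, which is what drives the conclusion.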

Replies from: abramdemski
comment by abramdemski · 2020-10-28T16:36:26.462Z · LW(p) · GW(p)

Moral uncertainty aside, sacrificing 2000 years of near-subsistence-level existence for billions of humans seems like a fair price to trade for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings for billions of years (or avoiding S-risks, etc.). And right now I think that (conditional upon success large enough to change the technological curve) this plan will increase the odds of an existential win by multiple percentage points.

Fair enough, you've convinced me (moral uncertainty aside).

I was anchoring too much to minimizing downside risk.

comment by abramdemski · 2020-10-27T17:20:52.173Z · LW(p) · GW(p)

If we believe Bentham and Gwern

Could you turn that into a link? I was not previously familiar with this, and it was not immediately obvious which Gwern essay to look up.

Replies from: Linch
comment by Linch · 2020-10-27T23:20:43.223Z · LW(p) · GW(p)

Added some links! I love how Gwern has "American Revolution" under his "My Mistakes" list. 

answer by avturchin · 2020-10-27T15:52:24.531Z · LW(p) · GW(p)

I will wait a little until nanotech and advanced AI appear. After that, I will send one nanobot to the beginning of the universe. It will secretly replicate and cover the whole visible universe with its secret copies. It will go inside every person's brain and upload that person at the moment of death. It will also turn off pain during intense suffering. Thus I will solve the problem of past suffering and the resurrection of the dead.

answer by Tao Lin · 2020-10-29T00:22:06.048Z · LW(p) · GW(p)

Edit the Bible. It is the information replicated the most times throughout history, and thus it's probably the best vehicle for a cultural or intellectual agenda. Finding the right edits would not be easy, because the Bible would need to retain the qualities that made it so viral in the first place.

Edits could include reducing misogyny and anti-LGBTQ sentiment, valuing the happiness and suffering of all beings, and putting more faith in reason. Adding more reason could easily undermine the persuasive power of the Bible, but something could probably be done.

The New Testament was written in Greek between roughly AD 50 and 100, so "the team" of time travellers would need to learn ancient Greek (the known parts now, all the unrecorded parts when they arrived), and either go back to 1 or 2 BC and influence early manuscripts / oral recitations, or perhaps arrive around AD 50 and write the official Bible, or influence those who wrote it.

comment by abramdemski · 2020-10-29T12:07:16.569Z · LW(p) · GW(p)

Finding the right edits would not be easy, because the bible would need to retain the qualities that made it so viral in the first place.

IMHO there's a lot of cruft in there which doesn't serve virality very well; there's a huge effect where people pay attention to the parts they like. So, it might be much easier to insert additional messages, rather than edit existing messages. (Although there's some risk you insert a message so unpopular that it makes the Bible less popular, of course.)

The bible has also been heavily pruned and edited at times, so it might not be so easy to inject things...

For that reason, the Koran seems like a better target for this sort of project. It has been faithfully transmitted from the beginning (to the point of including spelling errors, if I've heard correctly).

But I also like the idea of editing Euclid's Elements. I believe it is the most reproduced book after the Bible. Adding an extra part to Euclid's Elements which discusses axiomatic probability theory and utility theory could be a start (although it's not clear what the impact of that alone would be).

Replies from: tao-lin
comment by Tao Lin (tao-lin) · 2020-11-06T00:13:20.719Z · LW(p) · GW(p)

Probability+utility theory might be recognized as important on its own, so there might not be a big difference between including it in Elements and publishing it as its own volume.

I like the idea of editing the Koran. It spread through conquest earlier in its life than the Bible did, so perhaps its text isn't as vital to its success as the Bible's, which had to spread organically more before it was spread by force.

There's also the issue of great filters: if the great filter is in our recent past, then anything we change would be net negative, and we would be better off not going back far at all. 

answer by Gurkenglas · 2020-10-27T00:15:30.141Z · LW(p) · GW(p)

I don't trust humanity to make it through the invention of nuclear weapons again, so let's not go back too far. Within the last few decades, you could try a reroll on the alignment problem. Collect a selection of safety papers and try to excise hints at such facts as "throwing enough money at simple known architectures produces AGI". Wait to jump back until waiting longer carries a bigger risk of surprise UFAI than it's worth, or until the local intel agency knocks on your door for your time machine. You could build a reverse box - a Faraday bunker that sends you back if it's breached, leaving only a communication channel for new papers, X-risk alerts and UFAI hackers - some UFAIs may not care enough whether I make it out of their timeline. Balance acquiring researcher's recognition codes against the threat of other people taking the possibility of time travel seriously.

answer by abramdemski · 2020-10-27T16:50:17.048Z · LW(p) · GW(p)

We're working under the constraint that by default, technology and ideas accelerate progress, and progress can lead to doom via technological x-risk. This is our main bottleneck.

So we're going to want to work on changing that.

But, in case we fail, we're also going to want to bring back as little as possible in the way of modern technology, and obfuscate what we do bring back.

I propose that the main modern technology we allow ourselves is antibiotics. Bring back a large amount of many varieties, with extensive instructions as to their uses and dangers. Ideally, bring back a medical doctor with you, with any other supplies they recommend. This will help us survive to do whatever we intend to do.

My primary angle of attack will be to raise the sanity waterline by encouraging political systems which are marginally less insanity-inducing. The insanity of politics touches everything else, because the state is (for the most part) the one entity that can make top-down decisions which "change the nature of the game" to encourage or discourage insanity across the board. A wise state leads to wise decisions which engender further wisdom. ("Wisdom" here is intended to be "alignment" on the capabilities-vs-alignment spectrum; I'm trying to give humankind the best lever with which to choose the course of the future, in a value-aligned sense rather than a maximum-impact sense.)

This would hopefully improve the overall situation humankind faces today with respect to existential risks, by virtue of improved institutions for social coordination, saner public discourse, saner political dynamics, and perhaps improved relations between nations.

To this end, my goal is to fix the American constitution, primarily by getting rid of first-past-the-post voting in favor of better options. Other voting methods have many advantages, but the issue most salient to me is whether a voting method results in a two-party system; two-party systems create political polarization, resulting in increasingly tribal political discourse. 

Why intervene in the American revolution in particular?
1. It plausibly set a standard for democracies to come. A better constitution for America might mean a better constitution for many others.
2. I'm somewhat familiar with it. Unlike the formation of democracy in ancient Greece, or the Roman Republic, we have pretty detailed and reliable information about its foundation. (But clearly I should bring back a knowledgeable historian of early America, not rely on my own knowledge.) I speak English, I am familiar with Christian culture, etc.
3. It seems to me that the foundation of America was unusually ideologically driven, and thus, open to the influence of forceful argumentation.

I've stated my basic objective. My means is as follows: write a persuasive series of essays, and deliver these essays to the doorstep of all the relevant people. The founding fathers. The essays might also be published more broadly, like Common Sense.

The date of delivery? I'm not sure, exactly, but I want enough time for the ideas to sink in and feel "long established" by the time of the writing of the constitution. On the other hand, the further back the ideas are delivered, the greater the risk that we change too much -- somehow derail the revolutionary war, or greatly change the nature of early deliberation about government (possibly for the worse), et cetera. So, it might be better to deliver the ideas as late as possible. 

But I imagine a pamphlet received close to the writing of the constitution might be seen for what it is, a transparent attempt to manipulate the contents of the constitution. The closer we get to the actual writing of the constitution, the more the founding fathers might be in compromise-mode (trying to figure out the minimal reasonable thing that will be accepted by everyone), and the less idealistic and utopian. I therefore suspect they'll be much more receptive to these ideas as younger, more optimistic, less war-torn men. So, perhaps, juuust after the start of the revolutionary war itself?

In any case, on to the contents of the pamphlet.

What voting method should we advocate for?

If I had my free choice of voting method, I would select either 3-2-1 voting or STAR voting, as these voting methods are robustly good. However, I think these are too complex to explain in a forceful and convincing argument. I myself forget the details of these methods and the arguments in their favor. Furthermore, none of these methods are practical for quick in-person votes during meetings, the way a show of hands is practical. I suspect a prerequisite of common acceptance of an improved voting method in the 1700s might be its practicality for quick in-person votes.

Approval voting is, therefore, a likely candidate.

What argument could be issued in its favor?

I would want to defer this question to a team of excellent writers and voting theorists, informed by a historian familiar with the popular ideas of the time and the writing of the founding fathers. However, here is my take:

In order for approval voting to persuasively win out over first-past-the-post, we need to overcome the forceful "one person, one vote" motto -- the idea of approval voting is precisely that you can vote for multiple options.

My idea is to lean heavily on Bentham's utilitarianism, emphasizing that the goal of a democratic voting system is the greatest good for the greatest number. Perhaps the title of the series of pamphlets could be "The Greatest Good for The Greatest Number".

In this context, the various now-well-known failures of first-past-the-post voting would be illustrated through examples, such as this excellent illustration of why first-past-the-post is the worst, and instant-runoff voting is the second worst.

Primary arguments would include:
1. One-vote-per-person is really two-choices-per-election: due to the spoiler effect, only two primary candidates can plausibly run. This works against the common good, as can be illustrated forcefully with examples.
2. Approval voting is just the simplest version of score voting. Score voting has an obvious close relationship with Bentham's utility theory. So we can argue for score voting from first principles, while endorsing approval voting in the end on merits of simplicity. (If the founding fathers decide to go for range voting instead, even better.)

Since STAR voting is a very simple improvement on score voting, which is relatively easy to motivate, STAR and its advantages could be explained in one of the essays -- as a stretch goal, just in case we have success beyond our expectations. However, even that essay should reiterate the sins of first-past-the-post and the idea that approval voting is the simplest fix. We really, really, really don't want to spoil the debate by offering too many alternatives to first-past-the-post; we want one solid improvement that can stick. It's great if the founding fathers end up debating approval vs score vs STAR, but realistically, I worry that too much debate of that sort might result in a compromise on first-past-the-post, as no one can agree as to which improved method to take.
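
To make the spoiler-effect argument concrete, here is a minimal sketch in Python. The 100-voter electorate, the candidate names, and the helper functions are entirely made up for illustration (not drawn from any real election); it just shows how plurality counting and approval counting diverge on the same ballots:

```python
from collections import Counter

# Hypothetical 100-voter electorate with three candidates.
# A and B are similar, broadly acceptable candidates who split their base;
# C is a factional favorite approved only by its own supporters.
ballots = (
    [("C", {"C"})] * 35         # 35 voters: prefer C, approve only C
    + [("A", {"A", "B"})] * 33  # 33 voters: prefer A, also approve of B
    + [("B", {"B"})] * 32       # 32 voters: prefer B, approve only B
)

def fptp_winner(ballots):
    """First-past-the-post: each voter names one favorite; plurality wins."""
    tallies = Counter(favorite for favorite, _ in ballots)
    return tallies.most_common(1)[0][0], dict(tallies)

def approval_winner(ballots):
    """Approval voting: every approved candidate gets a vote; most approvals wins."""
    tallies = Counter(c for _, approved in ballots for c in approved)
    return tallies.most_common(1)[0][0], dict(tallies)

print(fptp_winner(ballots))      # FPTP: C wins (C 35, A 33, B 32) -- A and B split the vote
print(approval_winner(ballots))  # Approval: B wins with 65 approvals -- the consensus candidate
```

Score or STAR voting would replace the approval sets with numeric scores, but the failure mode above is the core thing the pamphlet needs to make vivid.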

We would not hide ourselves as the origin of the pamphlets. If the founding fathers are sufficiently interested, let them approach us -- all the better if we can get a seat at the constitutional convention out of this. To better exploit that possibility, we should bring along a good orator/negotiator. This person can pose as the author of the pamphlet if the need arises.

Secondary Activities

Since we're going back in time anyway, we want to accomplish whatever else we can.

In addition to making a compelling case against first-past-the-post voting (and for some specific alternative), it likely makes sense to bring back as much solid political theory as we can, particularly with an eye toward information which helps reduce corruption and increase the sanity waterline.
- As much technical voting theory as possible, re-written to suit the times. We may not want the founding fathers to get overwhelmed with ideas and options, but it's fine if other nations have a wealth of material to mull over when considering their constitutions.
- Some solid economic theory? I'm thinking especially of information about the ills of monopolies, to minimize the influence of large companies on the government.
- Game theory and mechanism design, to aid in the formation of fruitful social institutions?

In addition to this, we would of course want to take back as much information as possible relating to AI alignment theory.
- The formal theory of computation.
- The formal theory of logic. This seems necessary just to open people up to the idea of artificial intelligence.
- Formal decision theory. This would include a great deal of information about Bayesian epistemology, which might itself serve some role in helping to increase the sanity waterline -- especially if Bayesian alternatives to p-value hypothesis testing can be adopted early.
- As much information as possible related to Goodhart-style failures. The idea of a thinking machine with a utility function should be vividly discussed, along with the perverse potential of optimizing incorrect utility functions.

I'm not sure what the best way of delivering all of this information would be. My first inclination is to sneak it into the archives of the Royal Society.

EDIT AFTER READING OTHERS:

I agree with Gurkenglas's idea [LW(p) · GW(p)] that we should wait some time, to bring back as much relevant safety research as possible. This seems like a significant boon to the plan.

It's hard to compete with Daniel K's plan [LW(p) · GW(p)], which we can be much more confident in, due to the short time-period involved and the direct relationship between actions and outcomes (IE, we remain in control, rather than simply hoping that we've created a better environment for the people who eventually are faced with existential risk).

I'm amending my plan to specify that we take a number of rationalists/EAs back. (Note that there's a trade-off with the current timeline -- this timeline doesn't disappear, after all! So we don't want to entirely drain it of people working against x-risk.) We would establish a society for the preservation, improvement, and careful application/distribution of knowledge from our time. This society would be the source of the initial publications about voting methods, and would ideally continue to publish influential literature on governance and public policy -- it would operate somewhat like a think tank. Not sure how to make it stable across generations, though. The hope would be to use all that time to work on x-risk solutions.

ETA: Another mistake in my original answer was that I focused so much on single-winner elections (EG the presidential election) to the exclusion of multi-winner elections [LW · GW]. It's important that STV or another good multi-winner method become the default for electing representatives in legislative branches.

answer by Tao Lin · 2020-10-29T00:38:35.292Z · LW(p) · GW(p)

If the time machine is a single-use object (rather than a spontaneous event), I would wait as long as possible before using it. There are a couple of reasons for this: there are certainly decades of "Historical Priorities Research" to be done to find the best intervention, the actual traveller would require years of preparation, and we would have access to more technology, should we choose to bring it back. That assumes people will devote their lives to research with likely little proof that the machine is legit, but this research is already somewhat on the EA agenda anyway. During that time, people could be on guard to flee to the past in the event of a true catastrophe. Even if there were a nuclear war, I don't know whether we should go back and prevent the war, or whether our historical plans would be more valuable. In either case, the team would want to be ready to leave if anything happened to Earth. I would shoot for a reaction time of about 12 hours, and a rugged computer that's constantly synced with general knowledge to bring with them, so that the team could learn more about the catastrophe after a quick escape.

answer by johnswentworth · 2020-10-27T02:12:27.782Z · LW(p) · GW(p)

One class of answers: use the opportunity to run experiments in introduction of technology. For instance, we could go back a long way, construct a ship and navigational equipment capable of crossing oceans, then travel around introducing specific technologies to various isolated groups to see what happens. At a bare minimum, it should provide a better idea of the extent of technological determinism and the relevance (or irrelevance) of economic or social prerequisites to the adoption of new technology. That would, in turn, inform our theories about what effects future technology are likely to have.

comment by abramdemski · 2020-10-27T18:09:13.086Z · LW(p) · GW(p)

But then what?

Replies from: johnswentworth
comment by johnswentworth · 2020-10-27T18:46:08.843Z · LW(p) · GW(p)

Depends on what we learn. That's rather the point - we can make better decisions with better understanding. Right now we don't really have the understanding of how human systems evolve to figure out how best to make them evolve in good directions. But a test platform would likely go a long way toward changing that.

In other words: this is a situation where value of information is higher than the expected value of most interventions. When we don't even know which interventions are positive value, the obvious next step is to get better information about the value of our interventions.
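
(For concreteness, one standard way to formalize value of information -- this is the textbook definition, not anything specific to this thread: letting $a$ range over candidate interventions and $x$ over possible experimental outcomes,

$$\text{VoI} = \mathbb{E}_{x}\!\left[\max_{a} \mathbb{E}[U(a) \mid x]\right] - \max_{a} \mathbb{E}[U(a)],$$

and the claim is that this quantity exceeds $\mathbb{E}[U(a)]$ for most of the interventions $a$ currently on the table.)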

Replies from: abramdemski
comment by abramdemski · 2020-10-27T19:05:15.679Z · LW(p) · GW(p)

Sure, but I set up the scenario so that there's only one shot here. So you'll have your answer, if you can visit each location twice, but you're stuck on a boat in the past -- and reaching old age, depending on how long the experiment takes. Maybe not the best situation. So I was just wondering how you planned to get the information to the relevant actors.

Perhaps your intention is just that future historians would be able to see what happened, and use this data in their own time-travel attempts?

Replies from: johnswentworth
comment by johnswentworth · 2020-10-27T19:20:05.170Z · LW(p) · GW(p)

I assume that I'm "the relevant actor", though I could pass everything on to kids/others if old age becomes a problem. Start a cult of secret-future-knowledge with a mission of saving the world, if it's really going to take a long time.

Ideally, after running a bunch of experiments, me or my heirs would be sitting around with a bunch of knowledge about how new ideas/technologies do or don't get rapidly adopted, and what effects they have on society. We'd also have a pile of future-knowledge about technologies. (Pile of new technical knowledge) + (knowledge of how introduction of technical knowledge steers society) = (ability to steer society), assuming that dissemination of technical knowledge is capable of steering society at all.

answer by Richard Horvath · 2020-10-27T23:12:49.058Z · LW(p) · GW(p)

I travel back in time to the 1170s and shoot Temüjin, aka Genghis Khan, before he could establish his empire.

Although he promoted some good policies (e.g., religious tolerance, trade), the probable upsides of stopping him vastly outweigh them.

Just to name a few that I consider to be most important:

1. During the Mongol conquests tens of millions perished, making them approximately the third-bloodiest "conflict" in all of human history. Unlike, e.g., the World Wars -- where several large belligerents existed without a single pivotal person (even without a Hitler, a bloody Second World War could have happened, just as a first one did in the same region between the same states) -- almost all of this could have been avoided if the Mongol Empire had never formed.

2. As part of these conquests, Baghdad and its Grand Library, the center of Islamic scholarship at the time, were destroyed. This was most likely a huge factor in the decline of secularism and scientific inquiry in the Middle East.

3. The mainstream theory regarding the spread of the Black Death in Europe says it arrived via Genoese traders who fled from the Mongol siege of Kaffa, Crimea, where the Mongols catapulted infected corpses over the city walls. If that really was the source, avoiding it could have prevented or delayed the spread of the disease, and the death toll might have been much lower.

As all of this would have happened about eight centuries ago, the long-term effects would be even greater.

answer by erintatum (erinbailey) · 2020-10-27T17:15:59.881Z · LW(p) · GW(p)

I would choose an intervention that would increase the quality of life for the greatest number of people possible. With that constraint, the intervention should be easily reproduced and inexpensive. It should also minimize the risk of being used incorrectly, with possibly unforeseen negative consequences.

That is why I would bring inexpensive microscopes and information on germ theory, with the aim of introducing handwashing and food hygiene practices to as many regions of the world as possible. In this scenario, bringing an effective campaign to disseminate the information (pre-printed, with infographics) is more important than the technology itself.
 

I'm unsure where in time exactly would be the best point to do this. Perhaps sometime after world travel by boat becomes feasible within a lifetime.

answer by AnthonyC · 2020-10-27T14:59:30.497Z · LW(p) · GW(p)

If I'm creating new timelines through time travel, then I'm creating new worlds, and with them new beings living new lives who may or may not live better lives than the beings I'm leaving behind. If it goes really well, it might serve as a kind of one-time acausal trade if we get the new timeline to simulate the old one and let us escape from it.

I don't believe that just bringing back advanced technology, as such, would be very useful because I would also need the influence and time and space and support to build the infrastructure needed to keep it going.

My first thought is to go in the direction of the Moties. Can I engineer a museum, or a library, or another easily absorbable body of knowledge to serve as a seed, usable by the civilization I travel back to, to help guide them in the direction I want them to go? Unlike the Moties, I have the advantage of already knowing some key information, like where large concentrations of natural resources will be located, which I can make use of in addition to scientific and mathematical knowledge and samples of technologies and materials and tools. If I'm personally present there is the risk of this becoming a cult around myself. Maybe I make many such seeds and scatter them around the world, while I myself go live in the asteroid belt and have robotic systems build a Dyson sphere (hidden somehow with fancy optics until Earth is more scientifically mature) to be ready when Earth needs and is ready for it, thereby sidestepping much of the pollution and many of the other harmful effects of the Industrial Revolution.

Maybe I divide important knowledge among the seeds so people have reason to trade when they meet, and/or still reward independent investigation. Maybe I encode later info, a la Fine Structure, so that anyone who takes the time to study the parts that are unlocked to date gets the sense that more is possible, just from how much knowledge and growth potential there seems to be in their future. I want a society that looks forward with hopeful eyes, open minds, due caution, and the will to grow.

Where and how far back would I go? Not before the Bronze Age. Without writing and some division of labor in society, my job gets a lot harder. Probably not later than the Renaissance -- over time it gets much harder to shift the direction society is already going. I think late Bronze/early Iron Age. Most likely shortly after the Late Bronze Age collapse, when people knew they had once been able to do great things now out of reach, and were already rebuilding, rediscovering, and innovating new ways, while also mixing with near and distant neighbors often enough to know that other peoples had other ways and useful resources they needed.

If I'm really, really lucky, this might dissuade people from conducting experiments that are truly dangerous, on the basis that the answers might already be somewhere in the encoded info they expect to eventually unlock.

3 comments

Comments sorted by top scores.

comment by jacobjacob · 2020-10-26T23:30:58.214Z · LW(p) · GW(p)

Yay! More exercises!

comment by frontier64 · 2020-10-27T21:06:38.301Z · LW(p) · GW(p)

A suitable answer to this problem would require at minimum a few months of focused planning. And after that's done I'm sure the preparation period will last a few further months. Time in the present should be minimized to reduce the risk of something happening in the modern world that would make the time travel impossible. But making sure I've compiled all the helpful technology and information to bring back with me from the present will probably take a medium-sized team of scientists and engineers a few months' work. Even more to make sure it's all properly summarized and indexed. So I'm going to write a full answer eventually, but it will be an answer without the necessary months of thought that would be required if this challenge were real.

The opportunity to go back in time with a requirement that it's done at the drop of a hat with nothing but the clothes on one's back and the thoughts in one's head would be a godsend to humanity. The opportunity to go back in time and bring future technology and materials with me, while I get to prepare for it ahead of time, and I get to bring tomes of knowledge with me, that is an opportunity to be God. I believe that having such an opportunity and spending it on going back ~20 years and slowing AI risk or trying to make people a little more moral would be a waste.

comment by Linch · 2020-10-27T01:40:54.638Z · LW(p) · GW(p)

The ability to go back in time and rectify old mistakes is one thing I fantasize about from time to time, so this will be a fun exercise for me! Might think about more detailed answers later.