If I Was An Eccentric Trillionaire

post by niplav · 2023-08-09T07:56:46.259Z · LW · GW · 8 comments

Contents

  Culture
  Language
  Art
  Science
  Metascience
  Other
8 comments

cross-posted from niplav.site

What I might do if I magically got hold of a very large amount of money, and couldn't spend it maximally altruistically.

I sometimes like to engage in idle speculation. One of those speculations is: "If someone came up to me and told me that they would give me a lot of money, but only under the condition that I would spend most of it on unconventional and interesting projects, and I was forbidden to give it to Effective Altruist organizations narrowly defined, what would I do? Not disallowing the projects from having positive consequences accidentally, of course."

The following is a result of this speculation. Many of the ideas might be of questionable morality; I hope it's clear I would think a bit more about them if I were to actually put them into practice (which I won't, since I don't have that type of money, nor am I likely to get hold of it myself anytime soon).

Lots of these ideas aren't mine, and I have tried to attribute them wherever I could find the source. I guess that if they were implemented (not sure whether that's possible: legality & all that) I'd very likely become very unpopular in polite society. But the resulting discourse would absolutely be worth it.

| Intervention | Cost |
|---|---|
| Snowball fights | ? |
| Buy a small island nation | $5 bio. |
| Personal futarchy on steroids | $100 mio. |
| Save dying languages | $2 bio. |
| Raise native speakers of an engineered conlang | $30.8 mio. |
| Philosophically solve language | $10 mio. |
| SCP series | $1 bio. |
| Antimemetics Division spinoff | $200 mio. |
| Discontinuous & fast AI takeoff movie | $500 mio. |
| Double Crux podcast | $2 mio. |
| Fictional ethnography of native Antarcticans | $100k |
| Studying foreveraloners | $900 mio. |
| Really Out There Stuff Institute | $60 mio. |
| Sum | $9.81 bio. |
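As a quick sanity check of the table's arithmetic (reading "mio." as million and "bio." as billion, per the exchange in the comments below), a minimal sketch summing the priced line items:

```python
# Priced line items from the table above, in millions of dollars
# ("mio."); snowball fights has no stated cost and is omitted.
costs_mio = {
    "Buy a small island nation": 5_000,
    "Personal futarchy on steroids": 100,
    "Save dying languages": 2_000,
    "Raise native speakers of an engineered conlang": 30.8,
    "Philosophically solve language": 10,
    "SCP series": 1_000,
    "Antimemetics Division spinoff": 200,
    "Discontinuous & fast AI takeoff movie": 500,
    "Double Crux podcast": 2,
    "Fictional ethnography of native Antarcticans": 0.1,
    "Studying foreveraloners": 900,
    "Really Out There Stuff Institute": 60,
}

print(f"${sum(costs_mio.values()) / 1000:.4f} bio.")
# $9.8029 bio. -- consistent with the stated ~$9.81 bio. once the
# unpriced snowball fights are included.
```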

Culture

Language

Art

Science

Metascience

Other


  1. It's not clear that Nauru is the best choice here. While it is probably the smallest nation state that could conceivably be bought (I don't think there is any realistic (or unrealistic) amount of money for which Vatican City could be acquired), it is not very fertile and has only limited freshwater reserves, relying mostly on rainwater. Its highest point is only 71 metres above sea level, which means that a large part of the island might go underwater as sea levels rise. ↩︎

  2. "Yes, I want housing costs to be AS HIGH AS POSSIBLE! MWAHAHAHAHAH!" ↩︎

  3. Rest in peace. ↩︎

  4. Indeed, there is some evidence that Auckland Island was settled briefly by Polynesians 600-700 years ago. ↩︎

  5. Maybe I'm lacking in imagination, but this implies that Polynesians could survive for weeks on the open ocean, could reliably find their way back home if need be, and were adventurous enough to just sail out onto the open ocean in the hope of finding new islands. This seems extremely crazy to me. ↩︎

  6. Another method of finding and moving to Antarctica would be from Tierra del Fuego to Siffrey Point, which is much closer (~1,030 km). I'm not sure whether this is more or less likely: the Yahgan people have lived in Tierra del Fuego for ~8k years, which would give far more time for extensive exploration, and Prime Head is likely warmer and more hospitable than the rest of Antarctica, but I believe that the Polynesians were much better at spending long durations of time at sea, and at finding faraway land from subtle cues. ↩︎

  7. Since the experiment would solely involve prostitution, my best guess is that it would be significantly more difficult to find a similar number of female participants. ↩︎

  8. I'd like to hear feedback on what amounts of money people believe would make them indifferent between membership in either of the two groups plus participation. ↩︎

8 comments

Comments sorted by top scores.

comment by cata · 2023-08-09T20:28:50.157Z · LW(p) · GW(p)

There's already a well-written history of part of EVE Online (I only read the first): https://www.amazon.com/dp/B0962ZVWPG

comment by Wei Dai (Wei_Dai) · 2023-08-09T20:35:50.692Z · LW(p) · GW(p)

Metaphilosophy

I appreciate you sharing many of the same philosophical interests as me (and giving them a signal boost here), but for the sake of clarity / good terminology, I think all the topics you list under this section actually belong to object-level philosophy, not metaphilosophy.

I happen to think metaphilosophy is also extremely interesting/important, and you can see my latest thoughts on it at Some Thoughts on Metaphilosophy [LW · GW] (which also links to earlier posts on the topic) if you're interested.

Replies from: niplav
comment by niplav · 2023-08-09T20:42:22.482Z · LW(p) · GW(p)

Thanks for the heads up! I'll correct it.

comment by Algon · 2023-08-09T09:48:40.380Z · LW(p) · GW(p)

Surely there are more prediction markets you'd want to serve as a liquidity provider on. Like, markets on longevity approaches, on intelligence augmentation, on nuclear fusion, on Alzheimer's cures, on the effects of gene drives to remove malaria, etc.
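For concreteness: one standard mechanism such a patron could use to provide liquidity (not specified in the post; this is an illustrative assumption) is Hanson's logarithmic market scoring rule, where a liquidity parameter b caps the sponsor's worst-case loss at b·ln(n) for a market with n outcomes. A minimal sketch, with a hypothetical market and illustrative numbers:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous price (implied probability) of each outcome."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

# Hypothetical binary market ("fusion net energy gain by 2035?") seeded
# with liquidity parameter b = 10_000.
b = 10_000
q0 = [0.0, 0.0]            # no shares sold yet
print(lmsr_prices(q0, b))  # [0.5, 0.5]

q1 = [5_000.0, 0.0]        # a trader buys 5,000 "yes" shares
print(lmsr_cost(q1, b) - lmsr_cost(q0, b))  # what that trade costs them
print(b * math.log(2))     # sponsor's worst-case loss: b * ln(2) ≈ 6,931
```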

Replies from: niplav
comment by niplav · 2023-08-09T09:57:07.052Z · LW(p) · GW(p)

Agreed! I'd have much more to add, but at ~7k words I decided to publish.


Replies from: Algon
comment by Algon · 2023-08-09T10:07:35.964Z · LW(p) · GW(p)

Fair enough. It just felt like this list didn't contain the most impactful interventions, even accounting for constraints. I'm confused about what you're optimizing for, so I suppose it is eccentric. Also, what's up with "$mio" and "$bio" instead of "$mil" and "$bil"?

comment by wyrd (nym-1) · 2023-08-10T01:14:05.292Z · LW(p) · GW(p)

ohmygodthatlojbanbabyissocute! —but anyway I don't think you need to be raised speaking a new language for a good one to have a large effect on your ability to think.

I find it weird that people call it the "Sapir-Whorf hypothesis" as if there's an alternative way people can robustly learn to think better. Engineering a language isn't really about the language, it's about trying to rewrite the way we think. LessWrong and other academic disciplines have had decent success with this on the margin, I'd say—and the phrase "on the margin" is a good example of a recent innovation that's marginally helped us think better.

There seems to be a trend that breakthrough innovations often arise from somebody trying to deeply understand and reframe the simplest & most general constituents of whatever field they're working in. At least it fits with my own experience and the experience of others I've read. I think it's fairly common advice in math research especially.

The reason I'm enthusiastic about the idea of creating a conlang is that all natural languages have built up a large amount of dependency debt that makes it very difficult to adapt them to fit well with whatever specialised purposes we try to use them for. Just like with large code projects, it gets increasingly expensive to refactor the base if it needs to be adapted to e.g. serve novel purposes.[1]

For language, you also face the problem that even if you've correctly identified a Pareto improvement in theory, you can't just tell people and expect them to switch to your system. Unless they do it at the same time (atomic commit), there's always going to be a cost (confusion, misunderstanding, embarrassment, etc.) associated with trying to push for the change. And people won't be willing to try unless they expect that other people expect it to work.

Those are some of the reasons I expect natural languages to be very suboptimal relative to what's possible, and just from this I would expect them to be easy to improve upon for people who've studied cognition to the extent that LessWrongers have—iff those changes could be coordinated on. For that, we first need a proof of concept. It's not that it's futile or pointless—it's just that nobody's tried. Lojban doesn't count, and while Ithkuil is probably the closest, it doesn't have the right aims. You'd really be willing to spend only ~40M on it?

  1. ^

    Let's say you're trying to rewrite a very basic function that was there from the beginning, but you notice that 40 other functions depend on it. The worst-case complexity of trying to refactor it isn't limited to those 40 functions: even if you only have to adapt 10 of them to fit your new ontology, those might have further dependencies you have to control for. When the dependencies are obscure, it can get whac-a-mole-y: for each change you consider, you have to search a branching graph of dependencies to check for new problems (see the sketch after this footnote).

    Language is just worse because A) you have to coordinate a change with many more people, and B) very few words have "internal definitions" that make it easy to predict the consequences of intervening on them. Words usually have magnetic semantics/semiotics, where if you try to shift the meaning of one word, the meaning of other words will often 1) move in to fill the gap, 2) be dragged along by association, 3) be displaced, or 4) be pushed outward by negative association. 
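To make the whac-a-mole search in footnote 1 concrete, here is a minimal sketch; the dependency graph and function names are hypothetical. A breadth-first traversal over a reverse-dependency graph ("which functions call this one?") collects everything transitively affected by changing a single definition:

```python
from collections import deque

# Hypothetical reverse-dependency graph: function -> functions that call it.
dependents = {
    "parse": ["tokenize_cache", "load_config"],
    "tokenize_cache": ["build_index"],
    "load_config": ["init_app", "build_index"],
    "build_index": [],
    "init_app": [],
}

def affected_by(changed, graph):
    """Everything that may break, transitively, if `changed` is refactored."""
    seen, queue = set(), deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in graph.get(fn, []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(sorted(affected_by("parse", dependents)))
# ['build_index', 'init_app', 'load_config', 'tokenize_cache']
```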

comment by Jiro · 2023-08-09T16:43:51.890Z · LW(p) · GW(p)

If your plan for being a trillionaire unconditionally is "maximize EA-style utility to others", then your plan for being a trillionaire, conditional on not having EA as a primary goal, should be "maximize EA-style utility to the extent that the conditions permit it".  Since you are allowed to do things that incidentally help others, you should maximize the incidental benefit that your choices do to others.

If the conditions require that you do things that benefit yourself or that you would find amusing, you should go down the list of things that benefit yourself or that you would find amusing and choose the ones with the greatest incidental benefit to others.  So snowball fights should be right out.

Disclaimer: I am not an EA, I am just taking the reasoning to its logical conclusion and don't endorse it.