Last night I spent a couple of hours obsessively hammering away at Excel to be the first to solve this before noticing firstly that it's three years old, and secondly that I was nowhere near solving it.
Found it a hugely entertaining concept though, and it was truly time well spent. Before checking the thread's replies, I ended up going for Str+2, Dex+1, Con+2, Cha+5 for a 75% chance.
The most interesting part came today, when I estimated my own stats and wondered how I'd spend the 10 points on myself.
I feel that in 2024 the value lies in Int > Wis > Con > Cha > Dex > Str. In the past and future this would look very different, though.
yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That's what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world who we even need to consider.
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?
My fundamental roadblock would be getting the code for AGI. My hacking skills are non-existent and I wouldn't be able to learn enough to be useful even in a couple of decades. I wouldn't want to hire anybody to do the hacking for me as I wouldn't trust the hacker to give me my unlimited power once he got his hands on it. I don't have any idea how to assemble an elite armed squad or anything like that either.
My best shot would be to somehow turn my connections into something useful. Let's pretend I'm an acquaintance of Elon Musk's PA (this is a total fabrication, but I don't want to give any actual names, and this is the right ballpark). I'd need to somehow find a way to meet Elon Musk himself (1% chance), and then impress him enough that, over the years, I could become a trusted ally (0.5%). Then, I'd need Elon to be the first one to get AGI (2%) and then I'd need to turn my trusted position into an opportunity to betray him and get my hands on the most important invention ever (5%). So that's 20 million to one, but I've only spent a couple of hours thinking about it. I could possibly shorten the odds to 10,000 to one if I really went all in on the idea.
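Spelling out the arithmetic behind that figure (every one of those percentages is just my own rough guess):

$$1\% \times 0.5\% \times 2\% \times 5\% = 0.01 \times 0.005 \times 0.02 \times 0.05 = 5 \times 10^{-8} \approx 1 \text{ in } 20{,}000{,}000.$$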
How would you do it?
However, there may still be a few people who want to harm the world enough to justify trying. They would need to be extremely motivated to cause damage. It's a big world, though, so I wouldn't be surprised if there were a few people like this.
Here we agree. I think most of the danger will be concentrated in a few, highly competent individuals with malicious intent. They could be people close to the tech or people with enough power to get it via bribery, extortion, military force etc.
Hi! Missed your reply for a few days. Sorry, I'm new here.
I'm not sure most people would have a near-zero chance of getting anywhere.
I think our disagreement may stem from our different starting points. I'm considering literally every person on the planet and saying that maybe 1% of them would act malevolently given AGI. So a sadistic version of me, say, would probably be in the 98th percentile of all sadists in terms of ability to obtain AGI (I know people working in AI, am two connections away from some really key actors, have a university education, have read Superintelligence, etc.), yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That's what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world who we even need to consider.
Theft, extortion, hacking, eavesdropping, and building botnets are things a normal person could do, so I don't see why they wouldn't have a fighting chance.
I disagree. Theft and extortion are the only two (sort of) easy ones on the list imo. Most people can't hack or build botnets at all, and only certain people are in the right place to eavesdrop.
But OK, maybe this isn't a real disagreement between us. My starting point is considering literally everybody on the planet, and I think you are only taking people into account who have a reasonable shot.
How many people on the planet do you think meet the following conditions?
- Have a >1% chance of obtaining AGI.
- Have malevolent intent.
Personally, I think this topic is worth considering since the potential downside of malevolence + AGI is so terrifying. *I have low epistemic confidence in what I’m about to say because serious thinking on the topic is only a few years old, I have no particular expertise and the landscape will probably change radically, in unpredictable ways, between now and AGI.
For a malicious actor to establish a singleton, assuming a hard takeoff, three conditions would basically need to hold: there is at least one malicious actor, at least one such actor can acquire the code for the AGI, and at least one actor who obtained the information is able to use it to establish a singleton.
I think assigning a probability of 0.5 to each of those conjuncts would be reasonable. All seem quite plausibly correct, and quite plausibly incorrect. I'm not sure what could be argued to justify much lower probabilities than these.
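Taken at face value, assigning 0.5 to each conjunct gives the full conjunction a probability of

$$0.5 \times 0.5 \times 0.5 = 0.125.$$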
I’m not sure this is the best way of framing the probability (but see*). I reckon:
- There are many people on the planet who would act malevolently given a real chance to get their hands on AGI. I’d say a conservative estimate would be the percentage of the population estimated to be psychopathic, which is 1%.
- The vast majority of these people have a near-zero chance of getting anywhere near it. Just to throw a few numbers around wildly, maybe a very rich, very corrupt businessman would have a 1% chance, while someone working on the core AGI development team could have as high as 50%. Then you’d have hackers, out-and-out criminals etc. to consider. This variable is so hard to even guess at because it depends on how secret the project is, how seriously people are taking the prospect of AGI, and several other factors (see the rough sketch after this list).
- I’m agnostic on whether the 0.5 about the singleton should be higher or lower.
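To make the shape of that estimate a bit more concrete, here’s a rough back-of-the-envelope sketch in Python. Every figure in it (the group sizes and the per-person probabilities) is a hypothetical placeholder rather than a claim; the only point is to show how the 1% malevolence rate combines with wildly different per-group chances of actually obtaining AGI.

```python
# Back-of-the-envelope Fermi sketch: expected number of malevolent actors who
# actually get their hands on AGI. All figures are hypothetical placeholders.

malevolence_rate = 0.01  # ~1% of any group, per the psychopathy estimate above

# (group size, assumed per-person chance of obtaining AGI) -- made-up numbers
groups = {
    "core AGI development team": (200, 0.50),
    "very rich, very corrupt businessmen": (1_000, 0.01),
    "capable hackers / organised criminals": (10_000, 0.001),
    "everyone else": (8_000_000_000, 1e-9),
}

total = 0.0
for name, (size, p_obtain) in groups.items():
    expected = size * malevolence_rate * p_obtain
    total += expected
    print(f"{name:<40} expected malevolent acquirers ~ {expected:.3g}")

print(f"{'total':<40} ~ {total:.3g}")
```

With these particular placeholders the total ends up dominated by the handful of well-placed people rather than the general population, which matches the intuition that the vast majority of that 1% never get anywhere near the technology.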
If security isn’t taken very seriously indeed, I don’t think we can disregard this. I’m concerned normalcy bias may cause us to be less prepared than we should be.
TL;DR: Choir agrees preacher’s sermon was very interesting.
So yes, I read this book with no small amount of motivation to enjoy it as I like Julia’s other stuff and am often terrified by the misery that irrationality causes. This is likely not a very impartial comment.
If we assume the goal was to achieve maximum possible swing in total human rationality*, I think it was correct to write the book with a less academic tone than some would have liked. If there had been a load more Bayes’ Theorem in it, people like me would have enjoyed it slightly more, but many others would have stopped reading.
Getting fresh blood to appreciate the benefits of rationality is huge. Once they’re in, they can explore more academic/technical resources if they want.
And even if you are very familiar with the subject matter, you may still need a hand in stopping your soldier mindset barging around with his ridiculous ideas. I have a Zoom call with friends in a bit, and despite just having read The Scout Mindset and being in the middle of writing this sentence, I’ll probably still get too attached to my beliefs once we start talking politics. There’s plenty of low-hanging fruit out there when it comes to walking the talk.
*Whatever the actual goal was, I don't think this is a terrible proxy.