Comments
Very helpful reply, thank you!
(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)
I appreciate that!
I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat
If you had known you were going to do this, couldn't you have instead reduced your salary by ~$60k/year for your first 5 years at Lightcone and avoided paying a large sum in income taxes to the government?
(I'm assuming that your after-tax salary from Lightcone over your first 5-6 years there totaled more than ~$300k, and that you paid ~$50k-$100k in income taxes on that marginal ~$350k-$400k of pre-tax salary.)
I'm curious if the answer is "roughly, yes", in which case it just seems unfortunate that that much money was unnecessarily lost to income taxes.
I originally missed that the "Expected Income" of $2.55M from the budget means "Expected Income of Lighthaven" and consequently had the same misconception as Joel that donations mostly go towards subsidizing Lighthaven rather than almost entirely toward supporting the website in expectation.
Something I noticed:
"Probability that most humans die because of an AI takeover: 11%" should actually read as "Probability that most humans die [within 10 years of building powerful AI] because of an AI takeover: 11%" since it is defined as a sub-set of the 20% of scenarios in which "most humans die within 10 years of building powerful AI".
This means that there is a scenario with unspecified probability taking up some of the remaining 11% of the 22% of AI takeover scenarios that corresponds to the "Probability that most humans die because of an AI takeover more than 10 years after building powerful AI".
In other words, Paul's P(most humans die because of an AI takeover | AI takeover) is not 11%/22%=50%, as a quick reading of his post or a quick look at my visualization seems to imply, but is actually undefined, and is actually >11%/22% = >50%.
For example, perhaps Paul thinks that there is a 3% chance that there is an AI takeover that causes most humans to die more than 10 years after powerful AI is developed. In this case, Paul's P(most humans die because of an AI takeover | AI takeover) would be equal to (11%+3%)/22%=64%.
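In symbols, writing x for that unspecified probability (the chance of an AI takeover that causes most humans to die more than 10 years after powerful AI is built):

P(most humans die because of an AI takeover | AI takeover) = (11% + x) / 22% ≥ 11% / 22% = 50%,

which for the example value x = 3% gives (11% + 3%) / 22% ≈ 64%.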
I don't know if Paul himself noticed this, but it's worth flagging when revising these estimates later or meta-updating on them.
What exactly is wrong? Could you explicitly show my mistake?
See my top-level comment.
I'm a halfer, but think you did your math wrong when calculating the thirder view.
The thirder view is that the probability of an event happening is the experimenter's expectation of the proportion of awakenings where the event happened.
So for your setup, with k=2:
There are three possible outcomes: H, HT, and TT.
H happens in 50% of experiments, HT happens in 25% and TT happens in 25%.
When H happens there is 1 awakening, when HT happens there are 2 awakenings, and when TT happens there are 4 awakenings.
We'll imagine that the experiment is run 4 times, and that H happened in 2 of them, HT happened once, and TT happened once. This results in 2*1=2 H awakenings, 1*2=2 HT awakenings, and 1*4=4 TT awakenings.
Therefore, H happens in 2/(2+2+4)=25% of awakenings, HT happens in 25% of awakenings, and TT happens in 50% of awakenings.
The thirder view is thus that upon awakening Beauty's credence that the coin came up heads should be 25%.
What is you [sic] credence that in this experiment the coin was tossed k times and the outcome of the k-th toss is Tails?
Answering your question: the thirder view is that there was a 6/8 = 75% chance the coin was tossed twice, and a 4/6 chance that the second toss was tails conditional on two tosses having been made.
Unconditionally, the thirder's credence that the coin was tossed twice and that the second toss was tails is 4/8 = 50%.
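Here's a quick Monte Carlo sketch of the counting above (my own sanity check, not from the original post; it pools awakenings across many simulated runs of the k=2 setup as described, with H giving 1 awakening, HT giving 2, and TT giving 4):

```python
import random

def run_experiment():
    """One run of the k=2 setup: returns (outcome label, number of awakenings)."""
    if random.random() < 0.5:
        return "H", 1    # heads right away: 1 awakening
    if random.random() < 0.5:
        return "HT", 2   # one tails, then heads: 2 awakenings
    return "TT", 4       # two tails: 4 awakenings

N = 1_000_000
awakenings = {"H": 0, "HT": 0, "TT": 0}
for _ in range(N):
    outcome, n = run_experiment()
    awakenings[outcome] += n

total = sum(awakenings.values())
two_toss = awakenings["HT"] + awakenings["TT"]
print("fraction of awakenings that are H:         ", awakenings["H"] / total)     # ~0.25
print("fraction that are TT (2 tosses, 2nd tails):", awakenings["TT"] / total)    # ~0.50
print("fraction where the coin was tossed twice:  ", two_toss / total)            # ~0.75 (6/8)
print("TT among two-toss awakenings:              ", awakenings["TT"] / two_toss) # ~0.667 (4/6)
```

The pooled fractions come out to the same 2:2:4 split as the four-experiment example.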
I have time-space synesthesia, so I actually picture some times as being literally farther away than others.
I visualize the months of the year in a disc slanted away from me, kind of like a clock with New Years being at 6pm, and visualize years on a number line.
I thought of the reason independently: it's that if the number before 66 is not odd, but even instead, it must be either 2 or 4, since if it was 6 then the sequence would have had a double 6 one digit earlier.
150 or 151? I don't have a strong intuition. I'm inclined to trust your 150, but my intuition says that maybe 151 is right because 100 + 99/2 + (almost 1) rounds up to 151. Would have to think about it.
(By the way, I'm not very good at math. (Edit: Ok, fair, that was poorly written. What I meant is that I haven't obtained certain understandings of mathematical things that people with formal educations in math have widely come to understand, and this makes me worse at solving certain math problems than people who have already understood those ideas, despite my possibly having an equal or even greater natural propensity for understanding math.) I know high school math, plus I took differential equations and linear algebra while studying mechanical engineering. But I don't remember any of it well, and I don't do engineering now or use math in my work. (I do like forecasting as a hobby and think about statistics and probability in that context a lot.) I wouldn't be able to follow the math in your post without a lot of effort, so I didn't try.)
Re the "almost 1" and a confusion I noticed when writing my previous comment:
Re my:
E.g. For four 100s: Ctrl+f "100,100,100,100" in your mind. Half the time it will be preceded by an odd number for length 4, a quarter of the time it will be length 5, etc.
Since 1/2 + 1/4 + 1/8 + ... = 1, the above would seem to suggest that for four 100s in a row (or two 6s in a row) the expected number of rolls conditional on all even is 5 (or 3). But I saw from your post that it's more like 2.72, not 3, so what is wrong with the suggestion?
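For what it's worth, here's a quick rejection-sampling check of that number (my own sketch, not from the post): simulate rolling until two 6s in a row, keep only the runs in which every roll was even, and average the lengths. It lands around 2.73 rather than 3.

```python
import random

def rolls_until_double_six():
    """Roll a fair d6 until two 6s appear in a row; return the full sequence."""
    rolls = []
    while rolls[-2:] != [6, 6]:
        rolls.append(random.randint(1, 6))
    return rolls

# Rejection sampling: keep only runs where every roll was even.
lengths = []
while len(lengths) < 10_000:
    seq = rolls_until_double_six()
    if all(r % 2 == 0 for r in seq):
        lengths.append(len(seq))

print(sum(lengths) / len(lengths))   # ~2.73, noticeably less than 3
```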
My intuition was that B is bigger.
The justification was more or less the following: any time you roll until reaching two in a row, you will have also hit your second 6 at or before then. So regardless of what the conditions are, [the expected number of rolls until two 6s in a row] must be larger than [the expected number of rolls until the second 6].
This seems obviously wrong; the conditions matter a lot. Without the conditions, that would be adequate to explain why it takes more rolls to get two 6s in a row than it does to get two 6s, but given the conditions it doesn't explain anything.
The way I think about it is that you are looking at a very long string of digits 1-6 and (for A) selecting the sequences of digits that end with two 6s in a row, going backwards until just before you hit an odd number (which is not very far, since half of rolls are odd). If you ctrl+f "66" in your mind you might see that it's "36266" for a length of 4, but probably not. Half of your "66"s will be preceded by an odd number, making half of the two-6s-in-a-row sequences length 2.
For people who didn't intuit that B is bigger, I wonder if you'd find it more intuitive to imagine that a D100 is used rather than a D6.
While two 100s in a row only happens once in 10,000 times, when they do happen they are almost always part of short sequences like "27,100,100" or "87,62,100,100" rather than "53,100,14,100,100".
On the other hand, when you ctrl+f for a single "100" in your mind and count backwards until you get another 100, you'll almost always encounter an odd number first before encountering another "100" and have to disregard the sequence. But occasionally the 100s will appear close together and by chance there won't be any odd numbers between them. So you might see "9,100,82,62,100" or "13,44,100,82,100" or "99,100,28,100" or "69,12,100,100".
Another way to make it more intuitive might be to imagine that you have to get several 100s in a row / several 100s rather than just two. E.g. For four 100s: Ctrl+f "100,100,100,100" in your mind. Half the time it will be preceded by an odd number for length 4, a quarter of the time it will be length 5, etc. Now look for all of the times that four 100s appear without there being any odd numbers between them. Some of these will be "100,100,100,100", but far more will be "100,32,100,100,88,100" and similar. And half the time there will be an odd number immediately before, a quarter of the time it will be odd-then-even before, etc.
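To make "the conditions matter a lot" concrete, here's a small simulation I'd write (my own sketch, not from the original post; it uses the d6 version, since all-even d100 runs are too rare to rejection-sample quickly). If I've set it up right, unconditionally two 6s in a row takes far longer on average than the second 6 (~42 vs ~12 rolls), but conditional on all rolls being even the order flips (~2.7 vs ~3.0).

```python
import random

def rolls_until(stop):
    """Roll a fair d6, collecting rolls, until stop(rolls) is True."""
    rolls = []
    while not stop(rolls):
        rolls.append(random.randint(1, 6))
    return rolls

double_six = lambda rolls: rolls[-2:] == [6, 6]   # two 6s in a row
second_six = lambda rolls: rolls.count(6) == 2    # the second 6 (not necessarily adjacent)

def mean_length(stop, all_even, n=10_000):
    """Average run length, optionally conditioned (by rejection) on every roll being even."""
    lengths = []
    while len(lengths) < n:
        seq = rolls_until(stop)
        if not all_even or all(r % 2 == 0 for r in seq):
            lengths.append(len(seq))
    return sum(lengths) / len(lengths)

print("unconditional: ", mean_length(double_six, False), mean_length(second_six, False))  # ~42 vs ~12
print("all rolls even:", mean_length(double_six, True),  mean_length(second_six, True))   # ~2.7 vs ~3.0
```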
EDIT: I did as asked and replied without reading your comments on the EA Forum. Having now read them, I think we are actually in complete agreement, although you know the proper terms for the things I gestured at.
Cool, thanks for reading my comments and letting me know your thoughts!
I actually just learned the term "aleatory uncertainty" from chatting with Claude 3.5 Sonnet (New) about my election forecasting in the last week or two post-election. (Turns out Claude was very good for helping me think through mistakes I made in forecasting and giving me useful ideas for how to be a better forecaster in the future.)
I then ask, knowing what you know now, what probability you should have given.
Sounds like you might have already predicted I'd say this (after reading my EA Forum comments), but to say it explicitly: the probability I should have given is different from the aleatoric probability. I think that by becoming informed and making a good judgment I could have reduced my epistemic uncertainty significantly, but I would still have had some. The forecast I should have made (or what market prices should have been) actually reflects epistemic uncertainty plus aleatoric uncertainty. I think some people who were really informed could have gotten that to ~65-90%, but due to lingering epistemic uncertainty they could not have gotten it to >90% Trump (even if, as I believe, the aleatoric probability was >90%, and probably >99%).
Ah, I think I see. Would it be fair to rephrase your question as: if we "re-rolled the dice" a week before the election, how likely was Trump to win?
Yeah, that seems fair.
My answer is probably between 90% and 95%.
Seems reasonable to me. I wouldn't be surprised if it was >99%, but I'm not highly confident of that. (I would say I'm ~90% confident that it's >90%.)
That's a different question than the one I meant. Let me clarify:
Basically I was asking you what you think the probability is that Trump would win the election (as of a week before the election, since I think that matters) now that you know how the election turned out.
An analogous question would be the following:
Suppose I have two unfair coins. One coin is biased to land on heads 90% of the time (call it H-coin) and the other is biased to land on tails 90% of the time (T-coin). These two coins look the same to you from the outside. I choose one of the coins, then ask you how likely it is that the coin I chose will land on heads. You don't know whether the coin I'm holding is H-coin or T-coin, so you answer 50% (50% = 0.5×0.90 + 0.5×0.10). I then flip the coin and it lands on heads. Now I ask you, knowing that the coin landed on heads, how likely do you think it was that it would land on heads when I first tossed it? (I mean the same question by "Knowing how the election turned out, how likely do you think it was a week before the election that Trump would win?")
(Spoilers: I'd be interested in knowing your answer to this question before you read my comment on your "The value of a vote in the 2024 presidential election" EA Forum post that you linked to, so that you don't get biased by my answer/thoughts.)
That makes sense, thanks.
Knowing how the election turned out, how likely do you think it was a week before the election that Trump would win?
Do you think Polymarket had Trump-wins priced too high or too low?
Foreign-born Americans shifted toward Trump
Are you sure? Couldn't it be that counties with a higher percentage of foreign-born Americans shifted toward Trump because of how the non-foreign-born voters in those counties voted rather than how the foreign-born voters voted?
The title "How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage" isn't appropriate for the content of the post. That there is a play-money prediction market in which it costs very little to make the prices on conditional questions very wrong does not provide significant reasons to trust prediction markets less. That this post got 193 karma leading me to see it 2 months later is a sign of bad-voting IMO. (There are many far better, more important posts that get far less karma.)
criticism in LW comments is why he stopped writing Sequences posts
I wasn't aware of this and would like more information. Can anyone provide a source, or report their agreement or disagreement with the claim?
I second questions 1, 5, and 6 after listening to the Dwarkesh interview.
Re 6: at 1:24:30 in the Dwarkesh podcast, Leopold proposes that the US make an agreement with China to slow down (or pause) once the US has a 100GW cluster and is clearly going to win the race to build AGI, in order to buy time to get things right during the "volatile period" before AGI.
(Note: Regardless of whether it was worth it in this case, simeon_c's reward/incentivization idea may be worthwhile as long as there are expected to be some cases in the future where it's worth it, since the people in those future cases may not be as willing as Daniel to make the altruistic personal sacrifice, and so we'd want them to be able to retain their freedom to speak without it costing them as much personally.)
I'd be interested in hearing peoples' thoughts on whether the sacrifice was worth it, from the perspective of assuming that counterfactual Daniel would have used the extra net worth altruistically. Is Daniel's ability to speak more freely worth more than the altruistic value that could have been achieved with the extra net worth?
Retracted, thanks.
Retracted due to spoilers and not knowing how to use spoiler tags.
Received $400 worth of bitcoin. I confirm the bet.
@RatsWrongAboutUAP I'm willing to risk up to $20k at 50:1 odds (i.e. If you give me $400 now, I'll owe you $20k in 5 years if you win the bet) conditional on (1) you not being privy to any non-public information about UFOs/UAP and (2) you being okay with forfeiting any potential winnings in the unlikely event that I die before bet resolution.
Re (1): Could you state clearly whether you do or do not have non-public information pertaining to the bet?
Re (2): FYI The odds of me dying in the next 5 years are less than 3% by SSA base rates, and my credence is even less than that if we don't account for global or existential catastrophic risk. The reason I'd ask to not owe you any money in the worlds in which you win (and are still alive to collect money) and I'm dead is because I wouldn't want anyone else to become responsible for settling such a significant debt on my behalf.
If you accept, please reply here and send the money to this Bitcoin address: 3P6L17gtYbj99mF8Wi4XEXviGTq81iQBBJ
I'll confirm receipt of the money when I get notified of your reply here. Thanks!
IMO the largest trade-offs of being vegan for most people aren't health trade-offs, but other things like the increased time/attention cost of identifying non-vegan foods. Living in a place where there's a ton of non-vegan food available at grocery stores and restaurants makes it more of a pain to get food than it is if you're not paying close attention to what's in your food. (I'm someone without any food allergies, and I imagine being vegan is about as annoying as having certain food allergies.)
That being said, it also seems to me that the vast majority of people's diets are not well optimized for health. Most people care about convenience, cost, taste, and other factors as well. My intuition is that if we took a random person and said "hey, you have to go vegan, let's try to find a vegan diet that's healthier than your current diet", we'd succeed the vast majority of the time simply because most people don't eat very healthily. That said, the random person would probably prefer a vegan diet optimized for those other factors as well over one optimized for health alone.
I only read the title, not the post, but just wanted to leave a quick comment to say I agree that veganism entails trade-offs, and that health is one of the axes. Also note that I've been vegan since May 2019 and lacto-vegetarian since October 2017, for ethical reasons, not environmental, health, or other preference reasons.
It's long (since before I changed my diet) been obvious to me that your title statement is true, since a priori it seems very unlikely that the optimal diet for health is one that contains exactly zero animal products, given that humans are omnivores. One doesn't need to be informed about nutrition to make that inference.
Probability that most humans die because of an AI takeover: 11%
This 11% is for "within 10 years" as well, right?
Probability that the AI we build doesn’t take over, but that it builds even smarter AI and there is a takeover some day further down the line: 7%
Does "further down the line" here mean "further down the line, but still within 10 years of building powerful AI"? Or do you mean it unqualified?
I made a visualization of Paul's guesses to better understand how they overlap:
https://docs.google.com/spreadsheets/d/1x0I3rrxRtMFCd50SyraXFizSO-VRB3TrCRxUiWe5RMU/edit#gid=0
I took issue with the same statement, but my critique is different: https://www.lesswrong.com/posts/mnCDGMtk4NS7ojgcM/linkpost-what-are-reasonable-ai-fears-by-robin-hanson-2023?commentId=yapHwa55H4wXqxyCT
But to my mind, such a scenario is implausible (much less than one percent probability overall) because it stacks up too many unlikely assumptions in terms of our prior experiences with related systems.
You mentioned 5-6 assumptions. I think at least one isn't needed (that the goal changes as it self-improves), and I disagree that the others are (all) unlikely. E.g. agentic, non-tool AIs are already here and more will be coming (foolishly). Taking a point I just heard from Tegmark on his latest Lex Fridman podcast interview: once companies add APIs to systems like GPT-4 (and I'm worried about open-sourced systems that are as powerful or more powerful in the next few years), it will be easy for people to create AI agents that use the LLM's capabilities by repeatedly calling it.
This is the fear of “foom,”
I think the popular answer to this survey also includes many slow takeoff, no-foom scenarios.
And then, when humans are worth more to the advance of this AI’s radically changed goals as mere atoms than for all the things we can do, it simply kills us all.
I agree with this, though again I think the "changed" can be omitted.
Secondly, I also think it's possible that, rather than the unaligned superintelligence killing us all in the same second as EY often says, it may kill us off in a manner like how humans kill off other species (i.e. we know we are doing it, but it doesn't look like a war).
Re my last point, see Ben Weinstein-Raun's vision here: https://twitter.com/benwr/status/1646685868940460032
Furthermore, the goals of this agent AI change radically over this growth period.
Noting that this part doesn't seem necessary to me. The agent may be misaligned before the capability gain.
Plausibly, such “ems” may long remain more cost-effective than AIs on many important tasks.
"Plausibly" (i.e. 'maybe') is not enough here to make the fear irrational ("Many of these AI fears are driven by the expectation that AIs would be cheaper, more productive, and/or more intelligent than humans.")
In other words, while it's reasonable to say "maybe the fears will all be for nothing", that doesn't mean it's not reasonable to be fearful and concerned due to the stakes involved and the nontrivial chance that things do go extremely badly.
And yes, even if AIs behave predictably in ordinary situations, they might act weird in unusual situations, and act deceptively when they can get away with it. But the same applies to humans, which is why we test in unusual situations, especially for deception, and monitor more closely when context changes rapidly.
"But the same applies to humans" doesn't seem like an adequate response when the AI system is superintelligent or past the "sharp left turn" capabilities threshold. Solutions that work for unaligned deceptive humans won't save us from a sufficiently intelligent/capable unaligned deceptive entity.
buy robots-took-most-jobs insurance,
I like this proposal.
If we like where we are and can’t be very confident of where we may go, maybe we shouldn’t take the risk and just stop changing. Or at least create central powers sufficient to control change worldwide, and only allow changes that are widely approved. This may be a proposal worth considering, but AI isn’t the fundamental problem here either.
I'm curious what you (Hanson) think(s) *is* the fundamental problem here if not AI?
Context: It seems to me that Toby Ord is right that the largest existential risks (AI being number one) are all anthropogenic risks, rather than natural risks. They also seem to be risks associated with the development of new technologies (AI, biologically engineered pandemics, and, as distant third and fourth, nuclear risk and climate change). Any large unknown existential risk also seems likely to be a risk resulting from the development of a new technology.
So given that, I would think AI *is* the fundamental problem.
Maybe we can solve the AI problems with the right incentive structures for the humans making the AI, in which case perhaps one might think the fundamental problem is the incentive structure or the institutions that exist to shape those incentives, but I don't find this persuasive. This would be like saying that the problem is not nuclear weapons, it's that the Soviet Union would use them to cause harm. (Maybe this just feels like a strawman of your view, in which case feel free to ignore this part.)
Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them.
There is reason to think "roughly" aligned isn't enough in the case of a sufficiently capable system.
Second, Robin's statement seems to ignore (or contradict without making an argument) the fact that even if it is true for systems not as smart as humans, there may be a "sharp left turn" at some point where, in Nate Soares' words, "as systems start to work really well in domains really far beyond the environments of their training" "it’s predictably the case that the alignment of the system will fail to generalize with it."
Yudkowsky and others might give different reasons why waiting until later to gain more information about the future systems doesn't make sense, including pointing out that that may lead us to missing our first "critical try."
Robin, I know you must have heard these points before--I believe you are more familiar with e.g. Eliezer's views than I am. But if that's the case, I don't understand why you would write a sentence like the last one in the quotation above. It sounds like a cheap rhetorical trick to say "but instead of waiting to deal with such problems when we understand them better and can envision them more concretely", especially without saying why the people who don't think we should wait consider that an insufficient reason to wait / think there are pressing reasons to work on the problems now despite our relative state of ignorance compared to future AI researchers.
To clarify explicitly, people like Stuart Russell would point out that if future AIs are still built according to the "standard model" (a phrase I borrow from Russell) like the systems of today, then they will continue to be predictably misaligned.
This part doesn't seem to pass the ideological Turing test:
At the moment, AIs are not powerful enough to cause us harm, and we hardly know anything about the structures and uses of future AIs that might cause bigger problems. But instead of waiting to deal with such problems when we understand them better and can envision them more concretely, AI “doomers” want stronger guarantees now.
I strongly agree with this request.
If companies don't want to be the first to issue such a statement then I suggest they coordinate and share draft statements with each other privately before publishing simultaneously.
Demis Hassabis answered the question "Do you think DeepMind has a responsibility to hit pause at any point?" in 2022:
Question: Are innerly-misaligned (superintelligent) AI systems supposed to necessarily be squiggle maximizers, or are squiggle maximizers supposed to only be one class of innerly-misaligned systems?
It'd be nice if Hassabis made another public statement about his views on pausing AI development and thoughts on the FLI petition. If now's not the right time in his view, when is? And what can he do to help with coordination of the industry?
On the subject of DeepMind and pausing AI development, I'd like to highlight Demis Hassabis's remark on this topic in a DeepMind podcast interview a year ago:
'Avengers assembled' for AI Safety: Pause AI development to prove things mathematically
Hannah Fry (17:07):
You said you've got this sort of 20-year prediction and then simultaneously where society is in terms of understanding and grappling with these ideas. Do you think that DeepMind has a responsibility to hit pause at any point?
Demis Hassabis (17:24):
Potentially. I always imagine that as we got closer to the sort of gray zone that you were talking about earlier, the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze down to minute detail exactly and maybe even prove things mathematically about the system so that you know the limits and otherwise of the systems that you're building. At that point I think all the world's greatest minds should probably be thinking about this problem. So that was what I would be advocating to you know the Terence Tao’s of this world, the best mathematicians. Actually I've even talked to him about this—I know you're working on the Riemann hypothesis or something which is the best thing in mathematics but actually this is more pressing. I have this sort of idea of like almost uh ‘Avengers assembled’ of the scientific world because that's a bit of like my dream.