Posts

Geoffrey Hinton - Full "not inconceivable" quote 2023-03-28T00:22:01.626Z
Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin 2023-03-23T01:04:15.338Z
[Linkpost] GatesNotes: The Age of AI has begun 2023-03-22T04:20:34.340Z
Can AI systems have extremely impressive outputs and also not need to be aligned because they aren't general enough or something? 2022-04-09T06:03:10.068Z
DeepMind: The Podcast - Excerpts on AGI 2022-04-07T22:09:22.300Z
[Expired] 20,000 Free $50 Charity Gift Cards 2020-12-11T20:02:13.674Z

Comments

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-19T18:28:30.152Z · LW · GW

Around 50% within 2 years or over all time?

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-09T04:59:18.305Z · LW · GW

Thanks for the clarification. My conclusion is that I think your emoji was meant to signal disagreement with the claim that 'opaque vector reasoning makes a difference' rather than with a thing I actually believe.

I had rogue AIs in mind as well, and I'll take your word on "for catching already rogue AIs and stopping them, opaque vector reasoning doesn't make much of a difference".

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-09T04:49:08.254Z · LW · GW

Why do you think that?

Doesn't the mountain of posts on optimization pressure explain why ending with "U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast" is actually sufficient? In other words, doesn't someone who understands all the posts on optimization pressure not need the rest of the story after the "U3 was up a queen" part to understand that the AIs could actually take over?

If you disagree, then what do you think the story offers that makes it a helpful concrete example for people who both are skeptical that AIs can take over and already understand the posts on optimization pressure?

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-09T04:31:37.711Z · LW · GW

Ryan disagree-reacted to the bold part of this sentence in my comment above and I'm not sure why: "This tweet predicts two objections to this story that align with my first and third bullet point (common objections) above."

This seems pretty unimportant to gain clarity on, but I'll explain my original sentence more clearly anyway:

For reference, my third bullet point was the common objection: "How would humanity fail to notice this and/or stop this?"

To my mind, someone objecting that the story is unrealistic because "there's no reason why OpenAI would ever let the model do its thinking steps in opaque vectors instead of written out in English" (as stated in the tweet) is an objection of the form "humanity wouldn't fail to stop AI from sneakily engaging in power-seeking behavior by thinking in opaque vectors." It's a "sure, AI could take over if humanity were dumb like that, but there's no way OpenAI would be dumb like that."

It seems like Ryan was disagreeing with this with his emoji, but maybe I misunderstood it.

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-08T23:22:03.004Z · LW · GW

Good point. At the same time, I think the underlying cruxes that lead people to being skeptical of the possibility that AIs could actually take over are commonly:

  • Why would an AI that well-intentioned human actors create be misaligned and motivated to take over?
  • How would such an AI go from existing on computer servers to acquiring power in the physical world?
  • How would humanity fail to notice this and/or stop this?

I mention these points because people who mention these objections typically wouldn't raise these objections to the idea of an intelligent alien species invading Earth and taking over.

People generally have no problem granting that aliens may not share our values, may have actuators / the ability to physically wage war against humanity, and could plausibly overpower us with their superior intellect and technological know-how.

Providing a detailed story of what a particular alien takeover process might look like then isn't actually necessarily helpful to addressing the objections people raise about AI takeover.

I'd propose that authors of AI takeover stories should therefore make sure that they aren't just describing aspects of a plausible AI takeover story that could just as easily be aspects of an alien takeover story, but are instead actually addressing people's underlying reasons for being skeptical that AI could take over.

This means doing things like focusing on explaining:

  • what about the future development of AIs leads to the development of powerful agentic AIs with misaligned goals where takeover could be a plausible instrumental subgoal,
  • how the AIs initially acquire substantial amounts of power in the physical world,
  • how they do the above either without people noticing or without people stopping them.

(With this comment I don't intend to make a claim about how well the OP story does these things, though that could be analyzed. I'm just making a meta point about what kind of description of a plausible AI takeover scenario I'd expect to actually engage with the actual reasons for disagreement of the people who say "can the AIs actually take over".)

Edited to add: This tweet predicts two objections to this story that align with my first and third bullet point (common objections) above:

It was a good read, but the issue most people are going to have with this is how U3 develops that misalignment in its thoughts in the first place.

That, plus there's no reason why OpenAI would ever let the model do its thinking steps in opaque vectors instead of written out in English, as it is currently

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-08T21:25:45.560Z · LW · GW

Thanks for the story. I found the beginning the most interesting.

U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.

I think ending the story like this is actually fine for many (most?) AI takeover stories. The "point of no return" has already occurred at this point (unless the takeover wasn't highly likely to be successful), and so humanity's fate is effectively already sealed even though the takeover hasn't happened yet.

What happens leading up to the point of no return is the most interesting part because it's the part where humanity can actually still make a difference to how the future goes.

After the point of no return, I primarily want to know what the (now practically inevitable) AI takeover implies for the future: does it mean near-term human extinction, or a future in which humanity is confined to Earth, or a managed utopia, etc?

Trying to come up with a detailed, concrete, plausible story of what the actual process of takeover looks like isn't as interesting (at least to me). So I would have preferred to see more detail and effort put into the beginning of the story, explaining how humanity managed to fail to stop the creation of a powerful agentic AI that would take over, rather than into imagining how the takeover actually happens.

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-08T21:02:18.947Z · LW · GW

Specifically I’m targeting futures that are at my top 20th percentile of rate of progress and safety difficulty.

Does this mean that you think AI takeover within 2 years is at least 20% likely?

Or are there scenarios where progress is even faster and safety is even more difficult than illustrated in your story and yet humanity avoids AI takeover?

Comment by WilliamKiely on How AI Takeover Might Happen in 2 Years · 2025-02-08T20:59:09.773Z · LW · GW

The fake names are a useful reminder and clarification that it's fiction.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-21T18:43:04.571Z · LW · GW

I'm going to withdraw from this comment thread since I don't think my further participation is a good use of time/energy. Thanks for sharing your thoughts and sorry we didn't come to agreement.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-21T18:39:35.132Z · LW · GW

I agree that that would be evidence of OP being more curious. I just don't think that, given what OP actually did, it can be said that she wasn't curious at all.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-20T23:03:09.553Z · LW · GW

Thanks for the feedback, Holly. I really don't want to accuse the OP of making a personal attack if OP's intent was to not do that, and the reality is that I'm uncertain and can see a clear possibility that OP has no ill will toward Kat personally, so I'm not going to take the risk by making the accusation. Maybe my being on the autism spectrum is making me oblivious or something, in which case sorry I'm not able to see things as you see them, but this is how I'm viewing the situation.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-20T22:50:05.890Z · LW · GW

Hey Holly, great points about PETA.

I left one comment replying to a critical comment this post got saying that it wasn't being charitable (which turned into a series of replies) and now I find myself in a position (a habit?) of defending the OP from potentially-insufficiently-charitable criticisms. Hence, when I read your sentence...

There's a missing mood here-- you're not interested in learning if Kat's strategy is effective at AI Safety.

...my thought is: Are you sure? When I read the post I remember reading:

But if it’s for the greater good, maybe I should just stop being grumpy. 

But honestly, is this content for the greater good? Are the clickbait titles causing people to earnestly engage? Are peoples’ minds being changed? Are people thinking thoughtfully about the facts and ideas being presented? 

This series of questions seems to me like it's wondering whether Kat's strategy is effective at AI safety, which is the thing you're saying it's not doing.

(I just scrolled up on my phone and saw that OP actually quoted this herself in the comment you're replying to. (Oops. I had forgotten this as I had read that comment yesterday.))

Sure, the OP is also clearly venting about her personal distaste for Kat's posts, but it seems to me that she is also asking the question that you say she isn't interested in: are Kat's posts actually effective?

(Side note: I kind of regret leaving any comments on this post at all. It doesn't seem like the post did a good job encouraging a fruitful discussion. Maybe OP and anyone else who wants to discuss the topic should start fresh somewhere else with a different context. Just to put an idea out there: Maybe it'd be a more productive use of everyone's energy for e.g. OP, Kat, and you Holly to get on a call together and discuss what sort of content is best to create and promote to help the cause of AI safety, and then (if someone was interested in doing so) write up a summary of your key takeaways to share.)

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-20T22:18:22.658Z · LW · GW

by putting her name in  the headline (what I meant by name-calling)

Gotcha, that's fair.

If it wasn't meant to tarnish her reputation, why not instead make the post about just her issues with the disagreeable content?

I can think of multiple possible reasons. E.g. If OP sees a pattern of several bad or problematic posts, it can make sense to go above the object-level criticisms of those posts and talk about the meta-level questions.

but the standard you've set for determining intent is as naive as

Maybe, but in my view accusing someone of making personal attacks is a serious thing, so I'd rather be cautious, have a high bar of evidence, and take an "innocent until proven guilty" approach. Maybe I'll be too charitable in some cases and fail to condemn someone for making a personal attack, but that's worth it to avoid making the opposite mistake: accusing someone of making a personal attack who was doing no such thing.

because it's fun to do

That stated fun motivation did bother me. Obviously, given that people feel the post is attacking Kat personally, making the post for fun isn't a good enough reason. However, I do also see the post as raising legitimate questions about whether the sort of content that Kat produces and promotes a lot of is actually helping to raise the quality of discourse on EA and AI safety, etc., so it's clearly not just a post for fun. The OP seemed to be frustrated and venting when writing the post, resulting in it having an unnecessarily harsh tone. But I don't think this makes it amount to bullying.

Why don't you hold yourself to a higher standard?

I try to. I guess we just disagree about which kind of mistake (described above) is worse. In the face of uncertainty, I think it's better to err on the side of not mistakenly accusing someone of bullying and engaging in a personal attack than on the side of mistakenly being too charitable and failing to call out someone who actually said something mean (especially when there are already a lot of other people in the comments, like you, doing that).

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-20T01:15:35.832Z · LW · GW

And yes, it was meant to tarnish her reputation because, well, did you not read the headline of the post?

[...]

But what drove reputation change here much more significantly is browsing name calling Kat in the headline

The headline I see is "Everywhere I Look, I See Kat Woods". What name is this calling Kat? Am I missing something?

And why do you think that you can infer that the OP's intent was to tarnish Kat's personal reputation from that headline? That doesn't make any sense.

Anyway, I don't know the OP, but I'm confident in saying that the information here is not sufficient to conclude she was making a personal attack.

If she said that was her intent, I'd change my mind. Or if she said something that was unambiguously a personal attack, I'd change my mind. But at the moment I see no reason not to read the post as well-meaning, innocent criticism.

And that’s why I think this is so inappropriate for this forum.

I also don't think it's very appropriate for this forum (largely because the complaint is not about Kat's posting style on LessWrong). I didn't downvote it because it seemed like it had already received a harsh enough reaction, but I didn't upvote it either.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-19T19:24:45.591Z · LW · GW

Herego, his actions expressly are meant to tarnish her reputation.

OP is a woman, not a man.

Comment by WilliamKiely on meemi's Shortform · 2025-01-19T18:56:02.839Z · LW · GW

How much funding did OpenAI provide EpochAI?

Or, how much funding do you expect to receive in total from OpenAI for FrontierMath if you haven't received all funding yet?

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-19T10:00:55.319Z · LW · GW

I don't think you're being charitable. There is an important difference between a personal attack and criticism of the project someone is engaging in. My reading of the OP is that it's the latter, while I understand you to be accusing the OP of the former.

He's a dick politician (but a great husband)? 

"Dick" is a term used for personal attacks.

If you said "He's a bad politician. He's a good husband and a good man, and I know he's trying to do good, but his policies are causing harm to the world, so we really shouldn't support Pro-America Joe" (or whatever--assume Pro-America is a cause we support and we just don't agree with the way Joe goes about trying to promote America), then I'd say yes, that's how we criticize Pro-America Joe without attacking him as a person.

Comment by WilliamKiely on Everywhere I Look, I See Kat Woods · 2025-01-19T09:01:32.863Z · LW · GW

expressly attempts to tarnish someone's reputation

I don't think that's accurate. The OP clearly states:

One upfront caveat. I am speaking about “Kat Woods” the public figure, not the person. If you read something here and think, “That’s not a true/nice statement about Kat Woods”, you should know that I would instead like you to think “That’s not a true/nice statement about the public persona Kat Woods, the real human with complex goals who I'm sure is actually really cool if I ever met her, appears to be cultivating.”

Comment by WilliamKiely on Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles · 2025-01-02T06:44:17.369Z · LW · GW

I have two children, at least one of whom is a boy born on a day that I'll tell you in 5 minutes.


"[A] boy born on a day that I'll tell you in 5 minutes" is ambiguous. There are two possible meanings, yielding different answers.

If "a boy born on a day that I'll tell you in 5 minutes" means "a boy, and I'll tell you the name of a boy I have in 5 minutes" then the answer is 1/3 as Liron says.

However, if "a boy born on a day that I'll tell you in 5 minutes" means "a boy born on a particular singular day that I just wrote down on this piece of paper and will show you in 5 minutes", then this is equivalent to saying "a boy born on a Tuesday" and the answer is 13/27.

The reason why the second meaning is equivalent to "a boy born on a Tuesday" is because it's a statement that at least one of the children is a particular kind of boy that only 1/7th of boys are, just like how "a boy born on a Tuesday" is a statement that at least one of the children is a particular kind of boy that only 1/7th of boys are. (Conversely, for the first interpretation: "a boy born on a day that I'll tell you in 5 minutes" is a statement that at least one of the children is a boy, period.)

Another way to notice the difference if it's still not clear:

When told "I have two children, at least one of whom is [a boy born on a particular singular day that I just wrote down on this piece of paper and will show you in 5 minutes]", you assign a 1/7th credence to the paper showing Sunday, 1/7th to Monday, 1/7th to Tuesday, etc.

Then, conditional on the paper showing Tuesday, you know that the parent just told you "I have two children, at least one of whom is [a boy born on [Tuesday and I will show you the paper showing Tuesday in 5 minutes]]", which is equivalent to the parent saying "I have two children, at least one of whom is a boy born on Tuesday".

So you then have a 1/7th credence that the paper shows Tuesday, and if it's Tuesday, your credence that both children are boys is 13/27. The same holds for each of the other six days, so your overall credence, reflecting your uncertainty about what day the paper shows, is 7 * (1/7) * (13/27) = 13/27.
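If it helps, here's a rough Monte Carlo sketch of the second interpretation (the one equivalent to "a boy born on a Tuesday"); the choice of Tuesday as "day 1" is just my own labeling for illustration:

    import random

    # Simulate two-child families; keep those with at least one Tuesday-born boy,
    # then check how often both children are boys. Expect ~13/27 ≈ 0.481.
    N = 2_000_000
    kept = both_boys = 0
    for _ in range(N):
        children = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
        if any(sex == "B" and day == 1 for sex, day in children):  # day 1 = Tuesday
            kept += 1
            if all(sex == "B" for sex, _ in children):
                both_boys += 1
    print(both_boys / kept)  # ~0.481 ≈ 13/27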

Comment by WilliamKiely on Book Review (mini): Co-Intelligence by Ethan Mollick · 2024-12-30T20:43:39.128Z · LW · GW

the listening experience would have been better with another narrator

Perhaps, but I'm liking the narration so far. I find it about as good as your narration of your book, perhaps even a bit better.

Comment by WilliamKiely on Book Review (mini): Co-Intelligence by Ethan Mollick · 2024-12-30T20:41:58.296Z · LW · GW

At the beginning of Chapter 2, Mollick misattributes the idea of a "paperclip maximizer" to Bostrom. Yudkowsky is the actual originator of the idea. Source: https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer

Comment by WilliamKiely on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T05:54:18.013Z · LW · GW

Very helpful reply, thank you!

(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)

I appreciate that!

Comment by WilliamKiely on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T05:31:54.406Z · LW · GW

I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat

If you had known you were going to do this, couldn't you have instead reduced your salary by ~$60k/year for your first 5 years at Lightcone and avoided paying a large sum in income taxes to the government?

(I'm assuming that your after-tax salary from Lightcone from your first 5-6 years at Lightcone totaled more than ~$300k, and that you paid ~$50k-100k in income taxes on that marginal ~$350k-$400k of pre-tax salary from Lightcone.)

I'm curious if the answer is "roughly, yes", in which case it just seems unfortunate that that much money had to be unnecessarily wasted on income taxes.

Comment by WilliamKiely on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T05:12:28.673Z · LW · GW

I originally missed that the "Expected Income" of $2.55M from the budget means "Expected Income of Lighthaven" and consequently had the same misconception as Joel that donations mostly go towards subsidizing Lighthaven rather than almost entirely toward supporting the website in expectation.

Comment by WilliamKiely on My views on “doom” · 2024-12-02T04:03:51.858Z · LW · GW

Something I noticed:

"Probability that most humans die because of an AI takeover: 11%" should actually read as "Probability that most humans die [within 10 years of building powerful AI] because of an AI takeover: 11%" since it is defined as a sub-set of the 20% of scenarios in which "most humans die within 10 years of building powerful AI".

This means that there is a scenario with unspecified probability, taking up some of the remaining 11% of the 22% of AI takeover scenarios, that corresponds to the "Probability that most humans die because of an AI takeover more than 10 years after building powerful AI".

In other words, Paul's P(most humans die because of an AI takeover | AI takeover) is not 11%/22%=50%, as a quick reading of his post or a quick look at my visualization seems to imply. It is not pinned down by the stated numbers; all we can say is that it is at least 11%/22% = 50%, and greater than that if he puts any probability on the later-deaths scenario.

For example, perhaps Paul thinks that there is a 3% chance that there is an AI takeover that causes most humans to die more than 10 years after powerful AI is developed. In this case, Paul's P(most humans die because of an AI takeover | AI takeover) would be equal to (11%+3%)/22%=64%.
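To make that arithmetic concrete (the 3% here is my own hypothetical placeholder, not a number from Paul's post):

    # Paul's stated numbers plus a hypothetical 3% for "most humans die from
    # takeover more than 10 years after powerful AI is built".
    p_takeover = 0.22
    p_die_within_10y_from_takeover = 0.11
    p_die_later_from_takeover = 0.03  # hypothetical, not specified in the post

    p_die_from_takeover = p_die_within_10y_from_takeover + p_die_later_from_takeover
    print(p_die_from_takeover / p_takeover)  # ≈ 0.64, versus 0.11 / 0.22 = 0.50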

I don't know if Paul himself noticed this, but it's worth flagging when revising these estimates later or meta-updating on them.

Comment by WilliamKiely on [deleted post] 2024-11-26T19:18:32.653Z

What exactly is wrong? Could you explicitly show my mistake?

See my top-level comment.

Comment by WilliamKiely on [deleted post] 2024-11-26T19:15:16.101Z

I'm a halfer, but think you did your math wrong when calculating the thirder view.

The thirder view is that the probability of an event happening is the experimenter's expectation of the proportion of awakenings where the event happened.

So for your setup, with k=2:

There are three possible outcomes: H, HT, and TT.

H happens in 50% of experiments, HT happens in 25% and TT happens in 25%.

When H happens there is 1 awakening, when HT happens there are 2 awakenings, and when TT happens there are 4 awakenings.

We'll imagine that the experiment is run 4 times, and that H happened in 2 of them, HT happened once, and TT happened once. This results in 2*1=2 H awakenings, 1*2=2 HT awakenings, and 1*4=4 TT awakenings.

Therefore, H happens in 2/(2+2+4)=25% of awakenings, HT happens in 25% of awakenings, and TT happens in 50% of awakenings.

The thirder view is thus that upon awakening Beauty's credence that the coin came up heads should be 25%.

What is you [sic] credence that in this experiment the coin was tossed k times and the outcome of the k-th toss is Tails?

Answering your question, the thirder view is that there was a 6/8 = 75% chance the coin was tossed twice, and a 4/6 chance that the second toss was tails conditional on two tosses having been made.

Unconditionally, the thirder's credence that the coin was tossed twice and that the second toss was tails is 4/8 = 50%.
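Here's a rough simulation sketch of the counting above (assuming the k=2 setup as I've described it: H gives 1 awakening, HT gives 2, TT gives 4); it should print roughly 0.25, 0.75, and 0.50:

    import random

    # Weight each experiment by its number of awakenings, then ask what fraction
    # of awakenings have each property (thirder-style counting).
    N = 1_000_000
    heads_awakenings = two_toss_awakenings = second_tails_awakenings = 0
    total_awakenings = 0
    for _ in range(N):
        first = random.choice("HT")
        if first == "H":
            awakenings, tossed_twice, second_tails = 1, False, False
        else:
            second = random.choice("HT")
            tossed_twice, second_tails = True, (second == "T")
            awakenings = 4 if second_tails else 2
        total_awakenings += awakenings
        heads_awakenings += awakenings if first == "H" else 0
        two_toss_awakenings += awakenings if tossed_twice else 0
        second_tails_awakenings += awakenings if second_tails else 0
    print(heads_awakenings / total_awakenings)         # ~0.25
    print(two_toss_awakenings / total_awakenings)      # ~0.75
    print(second_tails_awakenings / total_awakenings)  # ~0.50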

Comment by WilliamKiely on Which things were you surprised to learn are metaphors? · 2024-11-24T13:21:22.388Z · LW · GW

I have time-space synesthesia, so I actually picture some times as being literally farther away than others.

I visualize the months of the year in a disc slanted away from me, kind of like a clock with New Years being at 6pm, and visualize years on a number line.

Comment by WilliamKiely on A very strange probability paradox · 2024-11-24T11:57:45.216Z · LW · GW

I thought of the reason independently: it's that if the number before 66 is not odd, but even instead, it must be either 2 or 4, since if it was 6 then the sequence would have had a double 6 one digit earlier.

Comment by WilliamKiely on A very strange probability paradox · 2024-11-24T10:53:24.490Z · LW · GW

150 or 151? I don't have a strong intuition. I'm inclined to trust your 150, but my intuition says that maybe 151 is right because 100+99/2+almost1 rounds up to 151. Would have to think about it.

(By the way, I'm not very good at math. (Edit: Ok, fair. Poorly written. What I meant is that I have not obtained certain understandings of mathematical things that those with formal educations in math have widely come to understand, and this leads me to being lower skilled at solving certain math problems than those who have already understood certain math ideas, despite my possibly having equal or even superior natural propensity for understanding math ideas.). I know high school math plus I took differential equations and linear algebra while studying mechanical engineering. But I don't remember any of it well and don't do engineering now or use math in my work. (I do like forecasting as a hobby and think about statistics and probability in that context a lot.) I wouldn't be able to follow your math in your post without a lot of effort, so I didn't try.)

Re the almost1 and a confusion I noticed when writing my previous comment:

Re my:

E.g. For four 100s: Ctrl+f "100,100,100,100" in your mind. Half the time it will be preceded by an odd number for length 4, a quarter of the time it will be length 5, etc.

Since 1/2+1/4+1/8...=1, the above would seem to suggest that for four 100s in a row (or two 6s in a row) the expected number of rolls conditional on all even is 5 (or 3). But I saw from your post that it was more like 2.72, not 3, so what is wrong with the suggestion?

Comment by WilliamKiely on A very strange probability paradox · 2024-11-24T09:42:19.339Z · LW · GW

My intuition was that B is bigger.

The justification was more or less the following: any time you roll until reaching two [6s] in a row, you will have also hit your second 6 at or before then. So regardless what the conditions are, [the rolls until two 6s in a row] must be larger than [the rolls until the second 6].

This seems obviously wrong. The conditions matter a lot. Without the conditions, that would be adequate to explain why it takes more rolls to get two 6s in a row than it does to get two 6s, but given the conditions it doesn't explain anything.

The way I think about it is that you are looking at a very long string of digits 1-6 and (for A) selecting the sequences of digits that end with two 6s in a row, going backwards until just before you hit an odd number (which is not very far, since half of rolls are odd). If you ctrl+f "66" in your mind you might see that it's "36266" for a length of 4, but probably not. Half of your "66"s will be preceded by an odd number, making half of the two-6s-in-a-row sequences length 2.

For people that didn't intuit that B is bigger, I wonder if you'd find it more intuitive if you imagine a D100 is used rather than a D6.

While two 100s in a row only happens once in 10,000 times, when they do happen they are almost always part of short sequences like "27,100,100" or "87,62,100,100" rather than "53,100,14,100,100".

On the other hand, when you ctrl+f for a single "100" in your mind and count backwards until you get another 100, you'll almost always encounter an odd number first before encountering another "100" and have to disregard the sequence. But occasionally the 100s will appear close together and by chance there won't be any odd numbers between them. So you might see "9,100,82,62,100" or "13,44,100,82,100" or "99,100,28,100" or "69,12,100,100".

Another way to make it more intuitive might be to imagine that you have to get several 100s in a row / several 100s rather than just two. E.g. For four 100s: Ctrl+f "100,100,100,100" in your mind. Half the time it will be preceded by an odd number for length 4, a quarter of the time it will be length 5, etc. Now look for all of the times that four 100s appear without there being any odd numbers between them. Some of these will be "100,100,100,100", but far more will be "100,32,100,100,88,100" and similar. And half the time there will be an odd number immediately before, a quarter of the time it will be odd-then-even before, etc.
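For anyone who'd rather just check the numbers, here's a rough rejection-sampling sketch of the two conditional expectations (my own code, not from the post): it discards any run where an odd number appears before the stopping condition, and should print roughly 2.7 for two 6s in a row and roughly 3.0 for the second 6, consistent with B being bigger.

    import random

    # Average sequence length, conditional on every roll being even, for a given
    # stopping condition (rejection sampling: runs hitting an odd roll are discarded).
    def conditional_mean_length(stop_condition, attempts=1_000_000):
        lengths = []
        for _ in range(attempts):
            rolls = []
            while True:
                r = random.randint(1, 6)
                if r % 2 == 1:          # odd roll: discard this run
                    break
                rolls.append(r)
                if stop_condition(rolls):
                    lengths.append(len(rolls))
                    break
        return sum(lengths) / len(lengths)

    two_sixes_in_a_row = lambda rolls: len(rolls) >= 2 and rolls[-1] == rolls[-2] == 6
    second_six = lambda rolls: rolls.count(6) == 2

    print(conditional_mean_length(two_sixes_in_a_row))  # ~2.7
    print(conditional_mean_length(second_six))          # ~3.0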

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-16T07:21:40.348Z · LW · GW

EDIT: I did as asked, and replied without reading your comments on the EA forum. Reading that I think we are actually in complete agreement, although you actually know the proper terms for the things I gestured at.

Cool, thanks for reading my comments and letting me know your thoughts!

I actually just learned the term "aleatory uncertainty" from chatting with Claude 3.5 Sonnet (New) about my election forecasting in the last week or two post-election. (Turns out Claude was very good for helping me think through mistakes I made in forecasting and giving me useful ideas for how to be a better forecaster in the future.)

I then ask, knowing what you know now, what probability you should have given.

Sounds like you might have already predicted I'd say this (after reading my EA Forum comments), but to say it explicitly: the probability I should have given is different from the aleatoric probability. I think that by becoming informed and making a good judgment I could have reduced my epistemic uncertainty significantly, but I would have still had some. And the forecast that I should have made (or what market prices should have been) actually reflects epistemic uncertainty + aleatoric uncertainty. I think some people who were really informed could have gotten that to like ~65-90%, but due to lingering epistemic uncertainty could not have gotten it to >90% Trump (even if, as I believe, the aleatoric probability was >90%, and probably >99%).

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-16T07:08:55.434Z · LW · GW

Ah, I think I see. Would it be fair to rephrase your question as: if we "re-rolled the dice" a week before the election, how likely was Trump to win?

Yeah, that seems fair.

My answer is probably between 90% and 95%.

Seems reasonable to me. I wouldn't be surprised if it was >99%, but I'm not highly confident of that. (I would say I'm ~90% confident that it's >90%.)

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-15T04:17:12.235Z · LW · GW

That's a different question than the one I meant. Let me clarify:

Basically I was asking what you think the probability was that Trump would win the election (as of a week before the election, since I think that matters), now that you know how the election turned out.

An analogous question would be the following:

Suppose I have two unfair coins. One coin is biased to land on heads 90% of the time (call it H-coin) and the other is biased to land on tails 90% of the time (T-coin). These two coins look the same to you on the outside. I choose one of the coins, then ask you how likely it is that the coin I chose will land on heads. You don't know whether the coin I'm holding is H-coin or T-coin, so you answer 50% (50% = 0.5*0.90 + 0.5*0.10). I then flip the coin and it lands on heads. Now I ask you: knowing that the coin landed on heads, how likely do you think it was that it would land on heads when I first tossed it? (I mean the same question by "Knowing how the election turned out, how likely do you think it was a week before the election that Trump would win?").
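For concreteness, here's how the coin arithmetic works out under the setup above (a quick sketch of the Bayesian update; the worked answer is just my own arithmetic):

    # Prior 50/50 over H-coin (90% heads) and T-coin (10% heads); update on heads.
    p_heads_given_H, p_heads_given_T = 0.9, 0.1
    prior_H = prior_T = 0.5

    p_heads = prior_H * p_heads_given_H + prior_T * p_heads_given_T  # 0.5
    posterior_H = prior_H * p_heads_given_H / p_heads                # 0.9
    # Retrospective chance that that toss would land heads:
    retro = posterior_H * p_heads_given_H + (1 - posterior_H) * p_heads_given_T
    print(p_heads, posterior_H, retro)  # ≈ 0.5, 0.9, 0.82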

(Spoilers: I'd be interested in knowing your answer to this question before you read my comment on your "The value of a vote in the 2024 presidential election" EA Forum post that you linked to, to avoid getting biased by my answer/thoughts.)

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-15T03:55:55.588Z · LW · GW

That makes sense, thanks.

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-14T20:47:14.837Z · LW · GW

Knowing how the election turned out, how likely do you think it was a week before the election that Trump would win?

Do you think Polymarket had Trump-wins priced too high or too low?

Comment by WilliamKiely on Seven lessons I didn't learn from election day · 2024-11-14T20:44:17.006Z · LW · GW

Foreign-born Americans shifted toward Trump

Are you sure? Couldn't it be that counties with a higher percentage of foreign-born Americans shifted toward Trump because of how the non-foreign-born voters in those counties voted rather than because of how the foreign-born voters voted?

Comment by WilliamKiely on How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage · 2024-10-23T19:03:45.201Z · LW · GW

The title "How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage" isn't appropriate for the content of the post. The fact that there is a play-money prediction market in which it costs very little to make the prices on conditional questions very wrong does not provide a significant reason to trust prediction markets less. That this post got 193 karma (leading me to see it 2 months later) is a sign of bad voting, IMO. (There are many far better, more important posts that get far less karma.)

Comment by WilliamKiely on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-24T04:59:01.198Z · LW · GW

criticism in LW comments is why he stopped writing Sequences posts

I wasn't aware of this and would like more information. Can anyone provide a source, or report their agreement or disagreement with the claim?

Comment by WilliamKiely on Former OpenAI Superalignment Researcher: Superintelligence by 2030 · 2024-06-08T19:54:04.949Z · LW · GW

I second questions 1, 5, and 6 after listening to the Dwarkesh interview.

Comment by WilliamKiely on Former OpenAI Superalignment Researcher: Superintelligence by 2030 · 2024-06-08T19:52:06.016Z · LW · GW

Re 6: At 1:24:30 in the Dwarkesh podcast, Leopold proposes the US making an agreement with China to slow down (or pause) after the US has a 100GW cluster and is clearly going to win the race to build AGI, in order to buy time to get things right during the "volatile period" before AGI.

Comment by WilliamKiely on simeon_c's Shortform · 2024-05-10T21:57:02.898Z · LW · GW

(Note: Regardless of whether it was worth it in this case, simeon_c's reward/incentivization idea may be worthwhile as long as some future cases are expected where it would be worth it, since the people in those cases may not be as willing as Daniel to make the altruistic personal sacrifice, and we'd want them to be able to retain their freedom to speak without it costing them as much personally.)

Comment by WilliamKiely on simeon_c's Shortform · 2024-05-10T21:34:13.862Z · LW · GW

I'd be interested in hearing peoples' thoughts on whether the sacrifice was worth it, from the perspective of assuming that counterfactual Daniel would have used the extra net worth altruistically. Is Daniel's ability to speak more freely worth more than the altruistic value that could have been achieved with the extra net worth?

Comment by WilliamKiely on Two Percolation Puzzles · 2023-07-04T07:32:16.374Z · LW · GW

Retracted, thanks.

Comment by WilliamKiely on Two Percolation Puzzles · 2023-07-04T07:23:07.742Z · LW · GW

Retracted due to spoilers and not knowing how to use spoiler tags.

Comment by WilliamKiely on UFO Betting: Put Up or Shut Up · 2023-06-24T02:52:59.290Z · LW · GW

Received $400 worth of bitcoin. I confirm the bet.

Comment by WilliamKiely on UFO Betting: Put Up or Shut Up · 2023-06-24T00:59:52.794Z · LW · GW

@RatsWrongAboutUAP I'm willing to risk up to $20k at 50:1 odds (i.e. If you give me $400 now, I'll owe you $20k in 5 years if you win the bet) conditional on (1) you not being privy to any non-public information about UFOs/UAP and (2) you being okay with forfeiting any potential winnings in the unlikely event that I die before bet resolution.

Re (1): Could you state clearly whether you do or do not have non-public information pertaining to the bet?

Re (2): FYI The odds of me dying in the next 5 years are less than 3% by SSA base rates, and my credence is even less than that if we don't account for global or existential catastrophic risk. The reason I'd ask to not owe you any money in the worlds in which you win (and are still alive to collect money) and I'm dead is because I wouldn't want anyone else to become responsible for settling such a significant debt on my behalf.

If you accept, please reply here and send the money to this Bitcoin address: 3P6L17gtYbj99mF8Wi4XEXviGTq81iQBBJ

I'll confirm receipt of the money when I get notified of your reply here. Thanks!

Comment by WilliamKiely on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-02T04:06:17.383Z · LW · GW

IMO the largest trade-offs of being vegan for most people aren't health trade-offs, but other things, like the increased time/attention cost of identifying non-vegan foods. Living in a place where there's a ton of non-vegan food available at grocery stores and restaurants makes it more of a pain to get food at stores and restaurants than it is if you're not paying close attention to what's in your food. (I'm someone without any food allergies, and I imagine being vegan is about as annoying as having certain food allergies.)

Comment by WilliamKiely on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-02T04:03:39.032Z · LW · GW

That being said, it also seems to me that the vast majority of people's diets are not well optimized for health. Most people care about convenience, cost, taste, and other factors as well. My intuition is that if we took a random person and said "hey, you have to go vegan, let's try to find a vegan diet that's healthier than your current diet", we'd succeed the vast majority of the time simply because most people don't eat very healthily. That said, the random person would probably prefer a vegan diet optimized for things other than just health over a vegan diet optimized for just health.

Comment by WilliamKiely on Change my mind: Veganism entails trade-offs, and health is one of the axes · 2023-06-02T04:00:23.837Z · LW · GW

I only read the title, not the post, but just wanted to leave a quick comment to say I agree that veganism entails trade-offs, and that health is one of the axes. Also note that I've been vegan since May 2019 (and was lacto-vegetarian from October 2017 before that), for ethical reasons, not environmental or health or other preference-based reasons.

It's long been obvious to me (since before I changed my diet) that your title statement is true, since a priori it seems very unlikely that the optimal diet for health is one that contains exactly zero animal products, given that humans are omnivores. One doesn't need to be informed about nutrition to make that inference.