Posts

Death notes - 7 thoughts on death 2024-10-28T15:01:13.532Z
Do you want to do a debate on youtube? I'm looking for polite, truth-seeking participants. 2024-10-10T09:32:59.162Z
Advice for journalists 2024-10-07T16:46:40.929Z
A new process for mapping discussions 2024-09-30T08:57:20.029Z
Foundations - Why Britain has stagnated [crosspost] 2024-09-23T10:43:20.411Z
What happens if you present 500 people with an argument that AI is risky? 2024-09-04T16:40:03.562Z
Ten arguments that AI is an existential risk 2024-08-13T17:00:03.397Z
Request for AI risk quotes, especially around speed, large impacts and black boxes 2024-08-02T17:49:48.898Z
AI and integrity 2024-05-29T20:45:51.300Z
Questions are usually too cheap 2024-05-11T13:00:54.302Z
Do you know of lists of p(doom)s/AI forecasts/ AI quotes? 2024-05-10T11:47:56.183Z
What is a community that has changed their behaviour without strife? 2024-05-07T09:24:48.962Z
This is Water by David Foster Wallace 2024-04-24T21:21:09.445Z
1-page outline of Carlsmith's otherness and control series 2024-04-24T11:25:36.106Z
What is the best AI generated music about rationality/ai/transhumanism? 2024-04-11T09:34:59.616Z
Be More Katja 2024-03-11T21:12:14.249Z
Community norms poll (2 mins) 2024-03-07T21:45:03.063Z
Grief is a fire sale 2024-03-04T01:11:06.882Z
The World in 2029 2024-03-02T18:03:29.368Z
Minimal Viable Paradise: How do we get The Good Future(TM)? 2023-12-06T09:24:09.699Z
Forecasting Questions: What do you want to predict on AI? 2023-11-01T13:17:00.040Z
How to Resolve Forecasts With No Central Authority? 2023-10-25T00:28:32.332Z
How are rationalists or orgs blocked, that you can see? 2023-09-21T02:37:35.985Z
AI Probability Trees - Joe Carlsmith (2022) 2023-09-08T15:40:24.892Z
AI Probability Trees - Katja Grace 2023-08-24T09:45:47.487Z
What wiki-editing features would make you use the LessWrong wiki more? 2023-08-24T09:22:01.300Z
Quick proposal: Decision market regrantor using manifund (please improve) 2023-07-09T12:49:01.904Z
Graphical Representations of Paul Christiano's Doom Model 2023-05-07T13:03:19.624Z
AI risk/reward: A simple model 2023-05-04T19:25:25.738Z
FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next 2022-11-09T02:14:19.623Z
Feature request: Filter by read/ upvoted 2022-10-04T17:17:56.649Z
Nathan Young's Shortform 2022-09-23T17:47:06.903Z
What should rationalists call themselves? 2021-08-09T08:50:07.161Z

Comments

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-14T11:42:47.200Z · LW · GW

Is there a summary of the rationalist concept of lawfulness anywhere? I am looking for one and can't find it.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-14T11:15:04.244Z · LW · GW

But isn't the point of karma to be a ranking system? Surely it's bad if it's a suboptimal one?

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T07:57:20.501Z · LW · GW

I would have a dialogue with someone on whether Piper should have revealed SBF's messages. Happy to take either side.

Comment by Nathan Young on A new process for mapping discussions · 2024-10-10T01:04:00.733Z · LW · GW

Thanks, appreciated. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T01:00:12.571Z · LW · GW

Sure but shouldn't the karma system be a prioritisation ranking, not just "what is fun to read?"

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T00:59:34.211Z · LW · GW

I would say I took at least 10 hours to write it. I rewrote it about 4 times.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T00:58:00.432Z · LW · GW

Yeah, but the mapping post is also about 100x more important/well-informed. Shouldn't that count for something? I'm not saying it's clearer, I'm saying that it's higher priority, probably.

Comment by Nathan Young on Advice for journalists · 2024-10-09T16:26:39.621Z · LW · GW

Hmmmm. I wonder how common this is. This is not how I think of the difference. I think of mathematicians as dealing with coherent systems of logic and engineers dealing with building in the real world. Mathematicians are useful when their system maps to the problem at hand, but not when it doesn't. 

I should say I have a maths degree, so it's possible that my view of mathematicians and the general view are not coincident.

Comment by Nathan Young on AI labs can boost external safety research · 2024-10-09T12:40:51.113Z · LW · GW

Yeah this seems like a good point. Not a lot to argue with, but yeah underrated.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-09T12:39:08.273Z · LW · GW

It is disappointing/confusing to me that of the two articles I recently wrote, the one that was much closer to reality got a lot less karma.

  • A new process for mapping discussions is a summary of months of work that I and my team did on mapping discourse around AI. We built new tools and employed new methodologies. It got 19 karma.
  • Advice for journalists is a piece that I wrote in about 5 hours after perhaps 5 hours of experiences. It has 73 karma and counting.

I think this isn't much evidence, given it's just two pieces. But I do feel a pull towards coming up with theories rather than building and testing things in the real world. To the extent this pull is real, it seems bad.

If true, I would recommend both that more people build things in the real world and talk about them and that we find ways to reward these posts more, regardless of how alive they feel to us at the time.

(Aliveness being my hypothesis - many of us understand, or have more live feelings about, dealing with journalists than about a sort of dry post on mapping discourse.)

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:31:51.190Z · LW · GW

Hmmm, what is the picture that the analogy gives you? I struggle to imagine how it's misleading, but I want to hear.

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:30:23.096Z · LW · GW

A common criticism seems to be "this won't change anything" (see here and here). People often believe that journalists can't choose their headlines and so it is unfair to hold them accountable for them. I think this is wrong for about 3 reasons:

  • We have a load of journalists pretty near to us whose behaviour we absolutely can change. Zvi, Scott and Kelsey don't tend to print misleading headlines, but they are quite a big deal, and acting as if we shouldn't create better incentives because we can't change everything seems to strawman my position.
  • Journalists can control their headlines. I have seen journalists change headlines after pushback 1-2 times. I don't think the editors read the comments and changed the headlines of their own accord; I imagine the journalists said they were taking too much pushback and asked for the change. That is probably an existence proof that journalists can affect headlines. I think reality is even further in my direction. I imagine that journalists and their editors are involved in the same social transactions that exist between many employees and their bosses. If they ask to change a headline, they can often shift it a bit. Getting good sources might be enough to buy this from them.
  • I am not saying that they must have good headlines, I am just holding the threat of their messages against them. I've only done this twice, but in one case a journalist was happy to give me this leverage, and having it, I felt more confident about the interview.

I think there is a failure mode where some rats hear a system described and imagine that reality matches it as they imagine it. In this case, I think that's mistaken - journalists have incentives to misdescribe their power over their own headlines. Reality is a bit messier than the simple model suggests, and we have more power than some commenters think.

I recommend trying this norm. It doesn't cost you much, it is a good red flag if someone gets angry when you suggest it, and if they agree you get leverage to use if they betray you. Seems like a good trade that only gets better the more of us do it. Rarely is reality so kind (and hence I may be mistaken).

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:03:31.198Z · LW · GW

I don't think that's the case, because the journalist you are speaking to is not the person who makes the decision.

I think this is incorrect. I imagine journalists have more latitude to influence headlines when they really care.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:51:47.225Z · LW · GW

Why do you think it's stretched? It's about the difference between mathematicians and engineers. One group is about relating to the real world, the other about logically consistent ideas that may be useful.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:49:59.208Z · LW · GW

I exert influence where I can. I think if all of LessWrong took up this norm we could shift the headline-content accuracy gap.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:49:12.310Z · LW · GW

Sure, but I don't agree with their lack of concern for privacy, and I think they are wrong to lack it. I think they are making the wrong call here.

I also don't think privacy is a binary. Some things are almost private and some things are almost public. Do you think that a conversation we have in LessWrong DMs is as public as if I tweeted it?

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:46:55.465Z · LW · GW

Well I do talk to journalists I trust and not those I don't. And I don't give quotes to those who won't take responsibility for titles. But yes, more suggestions appreciated.

Comment by Nathan Young on A new process for mapping discussions · 2024-10-02T09:22:53.099Z · LW · GW

I would appreciate feedback on how this article could be better. 

The work took me quite a long time and seems in line with a LessWrong ethos. And yet people here didn't seem to like it very much. 

Comment by Nathan Young on A new process for mapping discussions · 2024-10-02T09:05:06.783Z · LW · GW

Thank you.

Comment by Nathan Young on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-30T12:34:31.564Z · LW · GW

Yeah, aren't a load of national parks near large US conurbations, such that the opportunity cost in world terms is significant?

Comment by Nathan Young on Nathan Young's Shortform · 2024-09-24T11:00:22.144Z · LW · GW

What is the best way to take the average of three probabilities in the context below?

  • There is information about a public figure
  • Three people read this information and estimate the public figure's P(doom)
  • (It's not actually p(doom), but it is their probability of something.)
  • How do I then turn those three probabilities into a single one?

Thoughts.

I currently think the answer is something like: for probabilities a, b, c, the group estimate is 2^((log2(a) + log2(b) + log2(c))/3). This feels like a way to average the bits that each person gets from the text.

I could just take the geometric or arithmetic mean, but somehow that seems off to me. I guess I might write my intuitions for those here for correction.

Arithmetic mean: (a + b + c)/3. This feels like a way for uncertain probabilities to dominate certain ones, e.g. (.0000001 + .25)/2 ≈ .125, which is roughly the same as if the first person had been either significantly more confident or significantly less. It seems bad to me for the final probability to be insensitive to very confident probabilities when the probabilities are far apart.

On the other hand in terms of EV calculations, perhaps you want to consider the world where some event is .25 much more than where it is .0000001. I don't know. Is the correct frame possible worlds or the information each person brings to the table?

Geometric mean: (a * b * c)^(1/3). I dunno, sort of seems like a midpoint.

Okay so I then did some thinking. Ha! Whoops.

While trying to think intuitively about what the geometric mean was, I noticed that 2^((log2(a) + log2(b) + log2(c))/3) = 2^(log2(abc)/3) = 2^(log2((abc)^(1/3))) = (abc)^(1/3). So the information mean I thought seemed right is just the geometric mean. I feel a bit embarrassed, but also happy to have tried to work it out.
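A quick numeric check of that identity (a minimal sketch, assuming Python; the example numbers are the ones from above):

```python
import math

def arithmetic_mean(ps):
    # (a + b + ... ) / n
    return sum(ps) / len(ps)

def geometric_mean(ps):
    # (a * b * ...) ^ (1/n)
    return math.prod(ps) ** (1 / len(ps))

def log_average(ps):
    # 2 ^ (mean of log2(p)) - the "average the bits" version
    return 2 ** (sum(math.log2(p) for p in ps) / len(ps))

ps = [0.0000001, 0.25]
print(arithmetic_mean(ps))   # ~0.125 - the very confident estimate barely moves it
print(geometric_mean(ps))    # ~0.000158 - pulled hard towards the confident estimate
print(math.isclose(geometric_mean(ps), log_average(ps)))  # True - same quantity
```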

This still doesn't tell me whether the arithmetic "possible worlds" intuition or the geometric "information" interpretation is correct.

Any correction or models appreciated. 

Comment by Nathan Young on Did Christopher Hitchens change his mind about waterboarding? · 2024-09-23T14:25:00.641Z · LW · GW

@Ben Pace I would like a vote here on what percentage chance we think an omniscient reviewer would say this narrative is true, then display it on an axis, probably with dots (anonymous) for each person, e.g. like this.

Comment by Nathan Young on Foundations - Why Britain has stagnated [crosspost] · 2024-09-23T10:51:56.280Z · LW · GW

I want to run one of @Ben Pace's polls at the bottom here. Please could people put statements that they might want to agree or disagree with relating to this essay as comments here. Some starters:

  • If the UK wants to grow then it would do well to give energy production, housing and infrastructure a higher priority
  • France is able to be dysfunctional and still wealthy because it gets the basics of housing, energy and infrastructure right
  • The UK Town and Country Planning Act was probably very damaging
  • If the UK wants more growth it should build more housing where people want to live
  • I think that cities would generally grow more if they had more people in them

Comment by Nathan Young on AI forecasting bots incoming · 2024-09-10T10:51:34.144Z · LW · GW

I made a poll of statements from the Manifold comment section to try and understand our specific disagreements. Feel free to add your own. Takes about 2 minutes to fill in.

https://viewpoints.xyz/polls/ai-forecasting 

Comment by Nathan Young on Nathan Young's Shortform · 2024-09-10T10:40:09.481Z · LW · GW

I read @TracingWoodgrains piece on Nonlinear and have further updated that the original post by @Ben Pace was likely an error. 

I have bet accordingly here. 

Comment by Nathan Young on AI forecasting bots incoming · 2024-09-10T09:11:14.397Z · LW · GW

I am really annoyed by the Twitter thread about this paper. I doubt it will hold up, and it's been seen 450k times. Hendrycks had ample opportunity after initial skepticism to remove it, but chose not to. I expect this to have reputational costs for him and for AI safety in general. If people think he (and by association some of us) are charlatans for saying one thing and doing another in terms of being careful with the truth, I will have some sympathy with their position.

Comment by Nathan Young on AI forecasting bots incoming · 2024-09-10T09:03:27.394Z · LW · GW

This market is now both very liquid by manifold standards and confident that there are flaws in the paper. 

https://manifold.markets/NathanpmYoung/will-there-be-substantive-issues-wi?r=TmF0aGFucG1Zb3VuZw (I thought manifold embed worked?)

Comment by Nathan Young on How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage · 2024-09-03T10:51:50.476Z · LW · GW

I think the post moved me in your direction, so I think it was fine. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-08-22T10:55:08.409Z · LW · GW

Communication question. 

How do I talk about low probability events in a sensical way? 

eg "RFK Jr is very unlikely to win the presidency (0.001%)" This statement is ambiguous. Does it mean he's almost certain not to win or that the statement is almost certainly not true?

I know this sounds wonkish, but it's a question I come to quite often when writing. I like to use words but also include numbers in brackets or footnotes. But if there are several forecasts in one sentence with different directions it can be hard to understand. 

"Kamala is a slight favourite in the election (54%), but some things are clearer. She'll probably win Virginia (83%) and probably loses North Carolina (43%)" 

Something about the North Carolina subclause rubs me the wrong way. It requires several cycles to think "does the 43% mean the win or the loss?". Options:

  • As is 
  • "probably loses North Carolina (43% win chance)" - this takes up quite a lot of space while reading. I don't like things that break the flow 
Comment by Nathan Young on the Giga Press was a mistake · 2024-08-21T20:57:04.000Z · LW · GW

As for voids, they can create weak points; I think they were the reason the cybertruck hitch broke off in this test.

Though as I understand it that test was after a load of other tests. Perhaps relevant. 

Comment by Nathan Young on Ten arguments that AI is an existential risk · 2024-08-15T20:02:29.038Z · LW · GW

What do you think P(doom from corporations) is? I've never heard much worry about current non-AI corps.

Comment by Nathan Young on Ten arguments that AI is an existential risk · 2024-08-15T20:01:36.634Z · LW · GW

Sure, but experts could have not agreed that AI is quite risky, and yet they do. This is important evidence in favour, especially to the extent they aren't your ingroup.

I'm not saying people should consider it a top argument, but I'm surprised how it falls on the ranking.

Comment by Nathan Young on How I Learned To Stop Trusting Prediction Markets and Love the Arbitrage · 2024-08-14T22:04:57.031Z · LW · GW

I made that market. Some thoughts

1. Seems kind of inaccurate to not put in Matt's tweet, particularly if you're gonna call it "objective sounding".

 

Matt himself says "seriously but not literally". So he agrees with you, I think.

2. Regarding fees on conditional markets, I don't know.

3.

all but a very few markets are pure popularity contests, dominated by those who don't mind locking up their mana for a month for a guaranteed 1% loss.

I don't know that this is as big a problem as it seems. A popularity contest where it costs something to vote is a better kind than we usually see. Overall I agree that conditional markets aren't to be taken too seriously, but I think the tone of this is probably too negative for this audience. This one went fine.

4.

Out of epistemic cooperativeness as well as annoyance, I spent small amounts of mana on the markets where it was cheap to reset implausible odds closer to Harris' overall odds of victory.

Thanks. I did the same. Overall I thought the markets seemed to say pretty sensible things.

Comment by Nathan Young on Ten arguments that AI is an existential risk · 2024-08-14T02:51:12.961Z · LW · GW

You have my and @katjagrace’s permission to test out other poll formats if you wish.

Comment by Nathan Young on Ten arguments that AI is an existential risk · 2024-08-13T22:56:03.600Z · LW · GW

I'm surprised that the least compelling argument here is Expert opinion.  

Anyone want to explain to me why they dislike that one? It looks obviously good to me?

Comment by Nathan Young on Dragon Agnosticism · 2024-08-02T20:22:13.392Z · LW · GW

I would perhaps prefer we had a list of three things we don't discuss (say Politics, Race science and Infohazards) and if we want to not discuss a new thing we have to allow discussion of one of those others. Seems better to be clear what isn't being discussed. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-24T17:35:00.204Z · LW · GW

I disagree. They don't need to be reasonable so much as I now have a big stick to beat the journalist with if they aren't.

"I can't change my headlines"
"But it is your responsibility right?"
"No"
"Oh were you lying when you said it was"

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-24T01:05:23.748Z · LW · GW

I have 2 so far. One journalist agreed with no bother. The other frustratedly said they couldn't guarantee that and tried to negotiate. I said I was happy to take a bond; they said no, which suggested they weren't that confident.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-23T16:51:47.724Z · LW · GW

Thanks to the people who use this forum.

I try and think about things better, and it's great to have people to do so with, flawed as we are. In particular, @KatjaGrace and @Ben Pace.

I hope we can figure it all out.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-18T13:11:32.896Z · LW · GW

So far a journalist just said "sure". So n = 1 it's fine.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-16T12:26:43.469Z · LW · GW

Trying out my new journalist strategy.

[Image]

Comment by Nathan Young on Reliable Sources: The Story of David Gerard · 2024-07-11T09:26:13.971Z · LW · GW

Did you reformat all the footnotes or do you have a tool for that?

Comment by Nathan Young on Loving a world you don’t trust · 2024-07-02T17:19:03.783Z · LW · GW

My main takeaway from this series is that Carlsmith seems to be gesturing at some important things where I want a more diagrammy, mathsy approach to come along after. 

What does "Green" look like in more blue terms? When specifically might we want to be paperclippers and when not? Where are the edges of the different concepts?

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-01T10:20:47.146Z · LW · GW

So by my metric, Yudkowsky and Lintemandain's Dath Ilan isn't neutral, it's quite clearly lawful good, or attempting to be. And yet they care a lot about the laws of cognition.

So it seems to me that the laws of cognition can (should?) drive towards flourishing rather than pure knowledge increase. There might be things that we wish we didn't know for a bit, and ways to increase our strength to heal rather than our strength to harm.

To me it seems a better rationality would be lawful good. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-01T10:17:51.720Z · LW · GW

Yeah I find the intention vs outcome thing difficult.

What do you think of "average expected value across small perturbations in your life"? Like, if you accidentally hit Churchill with a car and so caused the UK to lose WW2, that feels notably less bad than deliberately trying to kill a much smaller number of people. In many nearby universes you didn't kill Churchill, but in many nearby universes that person did kill all those people.

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-30T12:48:32.097Z · LW · GW

Here is a 5 minute, spicy take of an alignment chart. 

What do you disagree with?

To try and preempt some questions:

Why is rationalism neutral?

It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.

Why are e/accs and EAs in the same group?

In the quick moments I took to make this, I found both EA and E/acc pretty hard to predict and pretty uncertain in overall impact across some range of forecasts. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-23T13:03:50.791Z · LW · GW

"Under-considered" might be more accurate?

And yes, I agree that seems bad.

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-22T12:30:46.440Z · LW · GW

Joe Rogan (the largest podcaster in the world) giving repeated, concerned, but mediocre x-risk explanations suggests that people who have contacts with him should try and get someone on the show to talk about it.

e.g. listen from 2:40:00, though there were several bits like this during the show.

Comment by Nathan Young on Ilya Sutskever created a new AGI startup · 2024-06-20T03:34:30.951Z · LW · GW

Weakly endorsed

“Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was Oh no, not again. Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”

The Hitchhiker’s Guide To The Galaxy, Douglas Adams

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-30T16:36:29.160Z · LW · GW

Feels like FLI is a massively underrated org. Cos of the whole Vitalik donation thing they have like $300mn.