Posts

Dear AGI, 2025-02-18T10:48:15.030Z
The Peeperi (unfinished) - By Katja Grace 2025-02-17T19:33:29.894Z
Claude 3.5 Sonnet (New)'s AGI scenario 2025-02-17T18:47:04.669Z
Don't go bankrupt, don't go rogue 2025-02-06T10:31:14.312Z
Anatomy of a Dance Class: A step by step guide 2025-01-26T18:02:04.974Z
Will bird flu be the next Covid? "Little chance" says my dashboard. 2025-01-07T20:10:50.080Z
What I expected from this site: A LessWrong review 2024-12-20T11:27:39.683Z
Careless thinking: A theory of bad thinking 2024-12-17T18:23:16.140Z
A car journey with conservative evangelicals - Understanding some British political-religious beliefs 2024-12-06T11:22:45.563Z
Death notes - 7 thoughts on death 2024-10-28T15:01:13.532Z
Do you want to do a debate on youtube? I'm looking for polite, truth-seeking participants. 2024-10-10T09:32:59.162Z
Advice for journalists 2024-10-07T16:46:40.929Z
A new process for mapping discussions 2024-09-30T08:57:20.029Z
Foundations - Why Britain has stagnated [crosspost] 2024-09-23T10:43:20.411Z
What happens if you present 500 people with an argument that AI is risky? 2024-09-04T16:40:03.562Z
Ten arguments that AI is an existential risk 2024-08-13T17:00:03.397Z
Request for AI risk quotes, especially around speed, large impacts and black boxes 2024-08-02T17:49:48.898Z
AI and integrity 2024-05-29T20:45:51.300Z
Questions are usually too cheap 2024-05-11T13:00:54.302Z
Do you know of lists of p(doom)s/AI forecasts/ AI quotes? 2024-05-10T11:47:56.183Z
What is a community that has changed their behaviour without strife? 2024-05-07T09:24:48.962Z
This is Water by David Foster Wallace 2024-04-24T21:21:09.445Z
1-page outline of Carlsmith's otherness and control series 2024-04-24T11:25:36.106Z
What is the best AI generated music about rationality/ai/transhumanism? 2024-04-11T09:34:59.616Z
Be More Katja 2024-03-11T21:12:14.249Z
Community norms poll (2 mins) 2024-03-07T21:45:03.063Z
Grief is a fire sale 2024-03-04T01:11:06.882Z
The World in 2029 2024-03-02T18:03:29.368Z
Minimal Viable Paradise: How do we get The Good Future(TM)? 2023-12-06T09:24:09.699Z
Forecasting Questions: What do you want to predict on AI? 2023-11-01T13:17:00.040Z
How to Resolve Forecasts With No Central Authority? 2023-10-25T00:28:32.332Z
How are rationalists or orgs blocked, that you can see? 2023-09-21T02:37:35.985Z
AI Probability Trees - Joe Carlsmith (2022) 2023-09-08T15:40:24.892Z
AI Probability Trees - Katja Grace 2023-08-24T09:45:47.487Z
What wiki-editing features would make you use the LessWrong wiki more? 2023-08-24T09:22:01.300Z
Quick proposal: Decision market regrantor using manifund (please improve) 2023-07-09T12:49:01.904Z
Graphical Representations of Paul Christiano's Doom Model 2023-05-07T13:03:19.624Z
AI risk/reward: A simple model 2023-05-04T19:25:25.738Z
FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next 2022-11-09T02:14:19.623Z
Feature request: Filter by read/ upvoted 2022-10-04T17:17:56.649Z
Nathan Young's Shortform 2022-09-23T17:47:06.903Z
What should rationalists call themselves? 2021-08-09T08:50:07.161Z

Comments

Comment by Nathan Young on Arbital has been imported to LessWrong · 2025-02-20T11:56:25.516Z · LW · GW

I am excited about improvements to the wiki. Might write some. 

Comment by Nathan Young on Arbital has been imported to LessWrong · 2025-02-20T11:55:32.107Z · LW · GW

Claims

The claims logo is ugly.

Comment by Nathan Young on Dear AGI, · 2025-02-19T18:34:51.964Z · LW · GW

This piece was inspired partly by @KatjaGrace, who has a short story idea that I hope to co-write with her, and partly by @gwern's discussion with @dwarkeshsp.

Comment by Nathan Young on Dear AGI, · 2025-02-19T15:55:21.481Z · LW · GW

What would you conclude or do if

It's hard to know, because I feel this thing. I hope I would be tempted to follow the suggested breadcrumbs and see that humans really do talk about consciousness a lot. Perhaps I would try to build a biological brain and quiz it.

Comment by Nathan Young on Claude 3.5 Sonnet (New)'s AGI scenario · 2025-02-18T10:44:33.436Z · LW · GW

I was not at the session. Yes, Claude did write it. I assume the session was run by Daniel Kokotajlo or Eli Lifland.

If I had to guess, I would guess that the prompt shown is all it got (65%).

Comment by Nathan Young on Nathan Young's Shortform · 2025-01-28T09:48:13.498Z · LW · GW

I wish we kept an upvotable list of journalists so we could track who is trusted in the community and who isn't.

Seems not hard: just a page with all the names as comments. I don't particularly want to be the one adding people, so make the top-level comments anonymous. Then anyone can add names, and everyone else can vote on whether they are trustworthy and add comments about their experiences with them.

Comment by Nathan Young on Nathan Young's Shortform · 2025-01-28T09:36:18.505Z · LW · GW

This journalist wants to talk to me about the Zizian stuff.

https://www.businessinsider.com/author/rob-price 

I know about as much as the median rat, but I generally think it's good to answer journalists on substantive questions.

Do you think this is a particularly good or bad idea? Do you have any comments about this particular journalist? Feel free to DM me.

Comment by Nathan Young on Nathan Young's Shortform · 2025-01-24T16:59:06.819Z · LW · GW

How might I combine these two datasets? One is a binary market, the other is a date market. So one is a single probability, P(Turing test before 2030); the other is a CDF across a range of dates, P(weakly general AI publicly known before that date).

Here are the two datasets. 

Suggestions:

  • Fit a normal distribution to the Turing test market such that the 1% quantile is at the current day and P(X < 2030) matches that market's probability (see the sketch below)
  • Mirror the second dataset, but elevate the probabilities before 2030 for each data point such that P(X < 2030) matches the probability from the first dataset

Thoughts:

Overall, the problem is that one doesn't know what distribution to fit the single data point to. The second suggestion just borrows the second dataset's distribution for the first, but that seems quite complex.
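Here is a minimal sketch of both suggestions in Python, assuming dates are encoded as fractional years. The function names and all the numbers are hypothetical stand-ins, not the actual market data.

```python
# Minimal sketch of the two suggestions above. All numbers are hypothetical
# stand-ins for the real market data.
import numpy as np
from scipy.stats import norm

def fit_normal_from_two_quantiles(today, horizon, p_before_horizon):
    """Suggestion 1: fit Normal(mu, sigma) over resolution dates from two
    quantile constraints: P(X < today) = 0.01 and P(X < horizon) = p."""
    z_lo = norm.ppf(0.01)               # z-score of the 1% quantile
    z_hi = norm.ppf(p_before_horizon)   # z-score matching the binary market
    sigma = (horizon - today) / (z_hi - z_lo)
    mu = today - sigma * z_lo
    return mu, sigma

def rescale_cdf(dates, cdf, horizon, p_target):
    """Suggestion 2: reshape the date market's CDF so that P(X < horizon)
    matches the binary market, while keeping the CDF monotone."""
    q = np.interp(horizon, dates, cdf)  # the date market's current P(X < horizon)
    return np.where(
        dates < horizon,
        cdf * p_target / q,                               # scale mass before the horizon
        p_target + (cdf - q) * (1 - p_target) / (1 - q),  # and the mass after it
    )

# Hypothetical inputs: today is early 2025, binary market says 60% before 2030.
mu, sigma = fit_normal_from_two_quantiles(2025.1, 2030.0, 0.60)
print(f"Suggestion 1: mu={mu:.2f}, sigma={sigma:.2f}")

dates = np.array([2026.0, 2028.0, 2030.0, 2035.0, 2040.0])
cdf = np.array([0.10, 0.30, 0.50, 0.80, 0.90])   # hypothetical date-market CDF
print("Suggestion 2:", rescale_cdf(dates, cdf, 2030.0, 0.60))
```

The two quantile constraints pin down mu and sigma uniquely, which is why the "1% at the current day" anchor is needed to turn a single binary data point into a whole distribution; the rescaling version instead keeps the date market's shape and just shifts mass across the 2030 boundary.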

"Why would you want to combine these datasets?"

Well, they are two different views on when something like AGI will appear. It seems good to combine them.

Comment by Nathan Young on meemi's Shortform · 2025-01-20T17:25:40.287Z · LW · GW

Suggested market. Happy to take suggestions on how to improve it:

https://manifold.markets/NathanpmYoung/will-o3-perform-as-well-on-the-fron?play=true 

Comment by Nathan Young on Don’t ignore bad vibes you get from people · 2025-01-20T17:19:10.394Z · LW · GW

I guess I frame this as "vibes are signals too". Like, if my body doesn't like someone, that's a signal. It might be that they smell or have an asymmetric face, but it might also be that they have some untrustworthy trait my body recognises (because figuring out lying is really important evolutionarily).

I think it's good to analyse vibes and figure out whether unfair, judgemental factors are enough to account for most of the bad vibes, or whether there is a remaining component that may be fair.

Comment by Nathan Young on The Difference Between Prediction Markets and Debate (Argument) Maps · 2025-01-15T23:26:43.738Z · LW · GW

Seems fine, though this doesn't seem like the central crux. 

Currently:

  • Prediction markets are used
  • Argument maps tend not to be.

Comment by Nathan Young on Nathan Young's Shortform · 2025-01-08T10:58:59.338Z · LW · GW

My bird flu risk dashboard is here:

http://birdflurisk.com

If you find it valuable, you could upvote it on HackerNews:

https://news.ycombinator.com/item?id=42632552

Comment by Nathan Young on Review: Planecrash · 2025-01-07T22:55:00.993Z · LW · GW

Yeah I wish someone would write a condensed and less onanistic version of Planecrash. I think one could get much of the benefit in a much shorter package. 

Comment by Nathan Young on Noting an error in Inadequate Equilibria · 2024-12-19T17:21:34.503Z · LW · GW

Error checking in important works is moderately valuable. 

Comment by Nathan Young on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2024-12-19T17:19:50.266Z · LW · GW

I recall thinking this article got a lot right. 

I remain confused about the Nonlinear stuff, but I have updated towards thinking the norm should be that stories are accurate, not merely informative with caveats given.

I am glad people come into this community to give critique like this. 

Comment by Nathan Young on The King and the Golem · 2024-12-19T17:14:14.784Z · LW · GW

Solid story. I like it. Contains a few useful frames and is memorable as a story. 

Comment by Nathan Young on Recursive Middle Manager Hell · 2024-12-19T17:09:55.398Z · LW · GW

I have listened to this essay about 3 times and I imagine I might do so again. Has been a valuable addition to my thinking about whether people have contact with reality and what their social goals might be. 

Comment by Nathan Young on Enemies vs Malefactors · 2024-12-19T17:08:46.279Z · LW · GW

I have used this dichotomy 5-100 times during the last few years. I am glad it was brought to my attention.

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T12:30:16.862Z · LW · GW

Sure, but again to discuss what really happened, it wasn't that it wasn't prioritised, it was that I didn't realise it until late into the process. 

That isn't prioritisation, in my view; that's half-assing. And I endorse having done so.

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T12:28:59.636Z · LW · GW

Or a coordination problem. 

I think coordination problems are formed from many bad thinkers working together.

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T10:33:12.159Z · LW · GW

I mean the Democratic party insiders who resisted the idea that Biden was unsuitable for so long and counselled him to stay when he was pressed. I think those people were thinking badly.

Or perhaps I think they were thinking more about their own careers than about whether the next administration would be Democratic.

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T10:17:30.025Z · LW · GW

Yes, this is one reason I really like forecasting. It forces me to see if my thinking was bad and to learn what good thinking looks like.

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T10:16:50.967Z · LW · GW

I think it caused them to have much less time to choose a candidate, and so they chose a less good candidate than they could have.

If thinking is the process of coming to conclusions you reflectively endorse, I think they did bad thinking and that in time people will move to that view.

Thinking is about choosing the action that actually wins, not the one that is justifiable by social reality, right?

Comment by Nathan Young on Careless thinking: A theory of bad thinking · 2024-12-19T10:13:45.500Z · LW · GW

Do you mean this as a rebuke? 

I feel a little defensive here, because I think the acknowledgement and subsequent actions were more accurate and information-preserving than any others I can think of. I didn't want to rewrite it, I didn't want to quickly hack useful chunks out, and I didn't want to pretend I thought things I didn't; I actually did hold these views once.

If you have suggestions for a better course of action, I'm open.

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-19T10:10:11.620Z · LW · GW

Do you find this an intuitive framework? I find the implication that conversation fits neatly into these boxes, or that these are the relevant boxes, a little doubtful.

Are you able to quickly give examples, in any setting, of what 1, 2, 3 and 4 would be?

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-18T08:37:43.269Z · LW · GW

I don't really understand the difference between simulacra levels 2 and 3.

  1. Discussing reality
  2. Attempting to achieve results in reality by inaccuracy
  3. Attempting to achieve results in social reality by inaccuracy

I've never really got 4 either, but let's stick to 1 - 3.

Also, they seem more like nested circles than levels; the jump between 2 and 3 (if I understand it correctly) seems pretty arbitrary.

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-16T09:06:40.444Z · LW · GW

Upvote to signal: I would buy a button like this if it existed.

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-16T09:06:25.937Z · LW · GW

Physical object.

I might (20%) make a run of buttons that display how long it has been since you pressed them, e.g. so I can press the button in the morning when I have put in my anti-baldness hair stuff and then not have to wonder whether I did.

Would you be interested in buying such a thing?

Perhaps they could have a dry-wipe section so you can write what the button is for.

If you would, please upvote the attached comment.

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-13T07:10:32.612Z · LW · GW

Politics is the Mindfiller

 

There are many things to care about and I am not good at thinking about all of them.

Politics has many many such things.

Do I know about:

  • Crime stats
  • Energy generation
  • Hiring law
  • University entrance
  • Politicians' political beliefs
  • Politicians' personal lives
  • Healthcare
  • Immigration

And can I confidently claim that the things I say about them are actually the case? Or do I have only a surface-level understanding?

Politics may or may not be the mindkiller, whatever Yud meant by that, but for me it is the mindfiller: it's just a huge amount of work to stay on top of.

I think it would be healthier for me to focus on a few areas and then say I don't know about the rest.

Comment by Nathan Young on Nathan Young's Shortform · 2024-12-12T11:38:03.409Z · LW · GW

Some thoughts on Rootclaim

Blunt, quick. Weakly held. 

 

The platform has unrealized potential in facilitating Bayesian analysis and debate.


Either:

  • The platform could be a simple reference document
  • The platform could be an interactive debate and truth-seeking tool
  • The platform could be a way to search the Rootclaim debates

Currently it does none of these and is frustrating to me.

Heading to the site, I expect:

  • to be able to search the video debates
  • to be able to input my own probability estimates into the current Bayesian framework
  • failing this, I would prefer to just have a reference document which doesn't promise these

Comment by Nathan Young on A car journey with conservative evangelicals - Understanding some British political-religious beliefs · 2024-12-08T12:08:28.277Z · LW · GW

I am not sure most foodies are thinking about food with every new person. Maybe hardcore foodies?

Comment by Nathan Young on A car journey with conservative evangelicals - Understanding some British political-religious beliefs · 2024-12-08T11:23:17.725Z · LW · GW

Sure, but then those things aren't due to an actual relationship with an actual God; they happen for the reasons you state. Which is really, really importantly different.

Comment by Nathan Young on A car journey with conservative evangelicals - Understanding some British political-religious beliefs · 2024-12-06T11:25:19.178Z · LW · GW

I find it pretty tiring to add all the footnotes in. If the post gets 50 karma or this gets 20 karma, I probably will. 
@Ben Pace, do you folks have some kind of Substack upload tool? I know you upload Katja's stuff. If there were a tool I could put a Substack address into and get the footnotes handled properly, that would be great.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-14T11:42:47.200Z · LW · GW

Is there a summary of the rationalist concept of lawfulness anywhere? I am looking for one and can't find it.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-14T11:15:04.244Z · LW · GW

But isn't the point of karma to be a ranking system? Surely it's bad if it's a suboptimal one?

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T07:57:20.501Z · LW · GW

I would be up for a dialogue with someone on whether Piper should have revealed SBF's messages. Happy to take either side.

Comment by Nathan Young on A new process for mapping discussions · 2024-10-10T01:04:00.733Z · LW · GW

Thanks, appreciated. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T01:00:12.571Z · LW · GW

Sure but shouldn't the karma system be a prioritisation ranking, not just "what is fun to read?"

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T00:59:34.211Z · LW · GW

I would say I took at least 10 hours to write it. I rewrote it about 4 times.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-10T00:58:00.432Z · LW · GW

Yeah, but the mapping post is also about 100x more important/well-informed. Shouldn't that count for something? I'm not saying it's clearer; I'm saying that it's higher priority, probably.

Comment by Nathan Young on Advice for journalists · 2024-10-09T16:26:39.621Z · LW · GW

Hmmmm. I wonder how common this is. This is not how I think of the difference. I think of mathematicians as dealing with coherent systems of logic and engineers dealing with building in the real world. Mathematicians are useful when their system maps to the problem at hand, but not when it doesn't. 

I should say I have a maths degree, so it's possible that my view of mathematicians and the general view are not coincident.

Comment by Nathan Young on AI labs can boost external safety research · 2024-10-09T12:40:51.113Z · LW · GW

Yeah this seems like a good point. Not a lot to argue with, but yeah underrated.

Comment by Nathan Young on Nathan Young's Shortform · 2024-10-09T12:39:08.273Z · LW · GW

It is disappointing/confusing to me that of the two articles I recently wrote, the one that was much closer to reality got a lot less karma.

  • A new process for mapping discussions is a summary of months of work that I and my team did on mapping discourse around AI.  We built new tools, employed new methodologies. It got 19 karma
  • Advice for journalists is a piece that I wrote in about 5 hours after perhaps 5 hours of experiences. It has 73 karma and counting

I think this isn't much evidence, given it's just two pieces. But I do feel a pull towards coming up with theories rather than building and testing things in the real world. To the extent this pull is real, it seems bad.

If true, I would recommend both that more people build things in the real world and talk about them and that we find ways to reward these posts more, regardless of how alive they feel to us at the time.

(Aliveness being my hypothesis: many of us understand, or have more live feelings about, dealing with journalists than a somewhat dry post about mapping discourse.)

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:31:51.190Z · LW · GW

Hmmm, what is the picture that the analogy gives you? I struggle to imagine how it's misleading, but I want to hear.

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:30:23.096Z · LW · GW

A common criticism seems to be "this won't change anything" (see here and here). People often believe that journalists can't choose their headlines, and so it is unfair to hold them accountable for them. I think this is wrong for about three reasons:

  • We have a load of journalists pretty near to us whose behaviour we absolutely can change. Zvi, Scott and Kelsey don't tend to print misleading headlines, but they are quite a big deal, and dismissing better incentives because we can't change everything seems to strawman my position.
  • Journalists can control their headlines. I have seen journalists change headlines after pushback 1-2 times. I don't think the editors read the comments and changed the headlines of their own accord; I imagine the journalists said they were taking too much pushback and asked for the change. This is therefore probably an existence proof that journalists can affect headlines. I think reality is even further in my direction: journalists and their editors are involved in the same social transactions as exist between many employees and their bosses. If they ask to change a headline, often they can probably shift it a bit. Getting good sources might be enough to buy this from them.
  • I am not saying that they must have good headlines; I am just holding the threat of their messages against them. I've only done this twice, but in one case a journalist was happy to give me this leverage, and having it, I felt more confident about the interview.

I think there is a failure mode where some rats hear a system described and imagine that reality matches it as they imagine it. In this case, I think that's mistaken: journalists have incentives to misdescribe their power over their own headlines, and reality is a bit messier than the simple model suggests. We have more power than some commenters think.

I recommend trying this norm. It doesn't cost you much, it is a good red flag if someone gets angry when you suggest it, and if they agree you get leverage to use if they betray you. Seems like a good trade that only gets better the more of us do it. Rarely is reality so kind (and hence I may be mistaken).

Comment by Nathan Young on Advice for journalists · 2024-10-09T12:03:31.198Z · LW · GW

I don't think that's the case, because the journalist you are speaking to is not the person who makes the decision.

I think this is incorrect. I imagine journalists have more latitude to influence headlines when they really care.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:51:47.225Z · LW · GW

Why do you think it's stretched? It's about the difference between mathematicians and engineers. One group is about relating to the real world; the other is about logically consistent ideas that may be useful.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:49:59.208Z · LW · GW

I exert influence where I can. I think if all of LessWrong took up this norm we could shift the headline-content accuracy gap.

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:49:12.310Z · LW · GW

Sure, but I don't agree with their lack of concern for privacy, and I think they are wrong to lack it. I think they are making the wrong call here.

I also don't think privacy is a binary. Some things are almost private and some things are almost public. Do you think that a conversation we have in LessWrong DMs is as public as if I tweeted it?

Comment by Nathan Young on Advice for journalists · 2024-10-09T08:46:55.465Z · LW · GW

Well, I do talk to journalists I trust and not those I don't, and I don't give quotes to those who won't take responsibility for titles. But yes, more suggestions appreciated.