Posts

What is the best AI generated music about rationality/ai/transhumanism? 2024-04-11T09:34:59.616Z
Be More Katja 2024-03-11T21:12:14.249Z
Community norms poll (2 mins) 2024-03-07T21:45:03.063Z
Grief is a fire sale 2024-03-04T01:11:06.882Z
The World in 2029 2024-03-02T18:03:29.368Z
Minimal Viable Paradise: How do we get The Good Future(TM)? 2023-12-06T09:24:09.699Z
Forecasting Questions: What do you want to predict on AI? 2023-11-01T13:17:00.040Z
How to Resolve Forecasts With No Central Authority? 2023-10-25T00:28:32.332Z
How are rationalists or orgs blocked, that you can see? 2023-09-21T02:37:35.985Z
AI Probability Trees - Joe Carlsmith (2022) 2023-09-08T15:40:24.892Z
AI Probability Trees - Katja Grace 2023-08-24T09:45:47.487Z
What wiki-editing features would make you use the LessWrong wiki more? 2023-08-24T09:22:01.300Z
Quick proposal: Decision market regrantor using manifund (please improve) 2023-07-09T12:49:01.904Z
Graphical Representations of Paul Christiano's Doom Model 2023-05-07T13:03:19.624Z
AI risk/reward: A simple model 2023-05-04T19:25:25.738Z
FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next 2022-11-09T02:14:19.623Z
Feature request: Filter by read/ upvoted 2022-10-04T17:17:56.649Z
Nathan Young's Shortform 2022-09-23T17:47:06.903Z
What should rationalists call themselves? 2021-08-09T08:50:07.161Z

Comments

Comment by Nathan Young on Partial value takeover without world takeover · 2024-04-18T23:04:46.023Z · LW · GW

I struggle a bit to remember what ASI is but I'm gonna assume it's Artificial Super Intelligence. 

Let's say that it's markedly cleverer than one person. So it's capable of running very successful trading strategies or programming extremely well. It's not clear to me that such a being:

  • Has been driven towards being agentic, when its creators will prefer something more docile
  • Can cooperate well enough with itself to manage some massive secret takeover
  • Is competent enough to recursively self improve (and solve the alignment problems that creates)
  • Can beat everyone else combined

Feels like what such a being/system might do is just run some terrifically successful trading strategies and gather a lot of resources while frantically avoiding notice/trying to claim it won't take over anything else. Huge public outcry, continuing regulation but maybe after a year it settles to some kind of equilibrium. 

Chance of increasing capabilities and then some later jump, but seems plausible to me that that wouldn't happen in one go. 

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T20:16:50.507Z · LW · GW

I guess, why is it a problem?

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T17:40:43.051Z · LW · GW

Isn't it usually the case that housing is the single greatest factor in the difference between US and UK standards of living? Or do you not agree?

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T16:04:05.919Z · LW · GW

To quote @Ege Erdil attempting to steelman:

there could be an interest rate effect - as interest rates fall, claims on future rents become more expensive so housing prices go up.
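
As a rough illustration of the quoted mechanism (a minimal sketch; the perpetuity formula and the rent and rate numbers are my own illustrative assumptions, not part of the quote): if you value a house as a perpetual claim on its rent, price ≈ rent / discount rate, so a lower rate mechanically raises the price.

```python
# Illustrative only: price a house as a perpetuity on its annual rent.
# price = rent / r, where r is the discount rate; all numbers are made up.
def perpetuity_price(annual_rent: float, discount_rate: float) -> float:
    return annual_rent / discount_rate

rent = 20_000  # hypothetical annual rent
for r in (0.05, 0.02):
    print(f"discount rate {r:.0%}: implied price ~ {perpetuity_price(rent, r):,.0f}")
# Falling rates (5% -> 2%) raise the implied price from 400,000 to 1,000,000.
```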

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T15:26:45.981Z · LW · GW

I want to try this as a way of argument mapping alongside a community that might use it. 

It seems likely that a proper accounting of the arguments may involve some false statements.

If it goes well I think it could be useful to me and readers, but I guess it will take several iterations.

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T11:00:59.969Z · LW · GW

The UK has a productivity problem, not a housing one. This from bernoulli_defect:

While housing would increase quality of life and luxury, it’s questionable whether it would fix low British productivity in non-housing constrained industries.

Consider how the Bay Area has had huge GDP growth despite housing shortages as people just cram into bedsits

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T10:59:01.126Z · LW · GW

I don't think LW dialogues match how I think, which is in nested bullet points. I sense from how often I see thinking displayed in this nested-bullet-point way (AI Impacts, Kialo, Rootclaim) that many feel similarly.

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T09:37:25.857Z · LW · GW

There isn't a housing shortage. There are more houses than there are households. This from Ian Mulheirn:

No. Back in 1991 there were just over 3.0% more houses than there were households in the UK according to government data. Today, using the ONS’s latest household estimates, there appear to be 5.2% more places to live than there are households that want to live in them. In fact growth in the stock of dwellings appears to have outstripped that of households over the past 50 years or so. This is a strange sort of ‘endemic shortage’.

He shows a graph in which households (blue line) have repeatedly undershot expectations (red and green lines).


Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T09:31:23.311Z · LW · GW

COMMENT THREAD

If you comment anywhere other than here, Nathan will delete your comment.

Comment by Nathan Young on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-11T12:34:36.094Z · LW · GW

Is there anywhere I can submit more songs?

Comment by Nathan Young on What is the best AI generated music about rationality/ai/transhumanism? · 2024-04-11T09:41:07.905Z · LW · GW

This is one of the best I've heard: https://www.udio.com/songs/aALrHWVtRAhExxKTT7HjdE 

Comment by Nathan Young on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T18:14:18.974Z · LW · GW

To me this reads as a person caught in the act of bullying who is trying to wriggle out of it. Fair play for challenging him, yuck at the responses.

Comment by Nathan Young on [Linkpost] Vague Verbiage in Forecasting · 2024-03-22T23:37:13.240Z · LW · GW

I don't like looking at it. Also it's basically the whole article, so it feels unnecessary. Something about the way it narrows the text.

I'm on a laptop. But I think people should be able to use their phones to visit almost any website.

Comment by Nathan Young on [Linkpost] Vague Verbiage in Forecasting · 2024-03-22T21:28:39.284Z · LW · GW

I would not put the whole thing in quotes. I find it harder to read.

Comment by Nathan Young on Open Thread Spring 2024 · 2024-03-19T02:01:55.744Z · LW · GW

Yeah and a couple of relevant things:

  1. The Time article on EA sexual abuse includes one person who isn't an EA and, iirc, a framing that puts most tech houses in the Bay under the heading of "EA". This is inaccurate.
  2. EA takes a pretty strong stance on sexual harassment. Look at what people are banned from the forum for and scale it up. I've heard about people being banned from events for periods of time for causing serial discomfort. Compared to church communities and political communities I've been part of, this is much stricter.
Comment by Nathan Young on Some (problematic) aesthetics of what constitutes good work in academia · 2024-03-15T00:23:30.344Z · LW · GW

When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, came out with a conclusion that was different from what you naively would have thought, which concluded his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now [laughter]. I mean starting with the most prestigious ones and then going to the less.…

 

Why doesn't OpenPhil found a journal? Feels like they could say it's the journal of last resort initially, but it probably would pick up status, especially if it contained only true, useful and relevant things.

Comment by Nathan Young on Nathan Young's Shortform · 2024-03-11T18:07:51.242Z · LW · GW

I did a quick community poll - Community norms poll (2 mins) 

I think it went pretty well. What do you think next steps could/should be? 

Here are some points with a lot of agreement.

Comment by Nathan Young on Vote on Anthropic Topics to Discuss · 2024-03-08T00:35:53.586Z · LW · GW

This is so cool! Are we gonna get k-means clustering on LessWrong at any point? Find different groups of respondents and see how they are similar/different.
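
If something like that were wanted, here is a minimal sketch with scikit-learn. Everything concrete in it is a hypothetical assumption on my part: the numeric -1/0/+1 encoding, the fake data shape, and the choice of three clusters; LessWrong does not expose anything like this.

```python
# Hypothetical: cluster poll respondents by their agree/disagree vectors.
# Assumes responses are already encoded numerically, e.g. -1/0/+1 per question.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
responses = rng.integers(-1, 2, size=(200, 12))  # 200 respondents x 12 questions (fake data)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(responses)

# Respondents per cluster, and each cluster's average answer per question.
print(np.bincount(labels))
print(np.round(kmeans.cluster_centers_, 2))
```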

Comment by Nathan Young on The World in 2029 · 2024-03-04T17:12:32.544Z · LW · GW

My current model is that China is a bit further away from invasion than people think. And a recession in the next few years could cripple that ability. People think that all you need for an invasion is ships, but you also need an army, navy, and air force capable of carrying it out.

Comment by Nathan Young on The World in 2029 · 2024-03-04T01:59:29.520Z · LW · GW

I wish there were a subscript button.

Comment by Nathan Young on The World in 2029 · 2024-03-04T01:19:10.662Z · LW · GW

I don't think it's appropriate to put personal forecasts in Wiki pages. But yeah, Manifold, Metaculus, Polymarket. Or maybe just link them so they are more resilient.

Comment by Nathan Young on The World in 2029 · 2024-03-04T01:16:05.360Z · LW · GW

Yeah that sounds about right. A junior dev who needs to be told to do individual features.

Your "hit the singularity" point doesn't sound wrong, but I'll need to think.

Comment by Nathan Young on The World in 2029 · 2024-03-04T01:14:16.218Z · LW · GW

Yeah I was trying to use Richard's terms.

I also guess that the less training data there is, the less good the AIs will be. So while they may be good at setting up a dropshipping website for shoes (a 1-10 hour task), they may not be good at alignment research.

To me the singularity is when things are undeniably zooming, or perhaps even have zoomed. New AI tech is coming out daily, or perhaps there is even godlike AGI. What do folks think is a reasonable definition?

Comment by Nathan Young on The World in 2029 · 2024-03-04T00:50:07.911Z · LW · GW

Do you have any suggestions for actions I can take to make my piece more like Daniel's?

Comment by Nathan Young on The World in 2029 · 2024-03-02T23:43:03.064Z · LW · GW

How should I represent forecasts in text?


Comment by Nathan Young on The World in 2029 · 2024-03-02T23:20:59.835Z · LW · GW

I added a note at the top

Oh interesting, yeah I was more linking them as spaces to disagree and get opinions. Originally I didn't put my own values in at all, but that felt worse. What would you recommend?

Comment by Nathan Young on The World in 2029 · 2024-03-02T19:07:01.827Z · LW · GW

edit for clarity

I am not very good at sizes. But I guess that it's gonna keep increasing in price, so yeah, maybe >$5bn (30%). 

Comment by Nathan Young on The World in 2029 · 2024-03-02T18:47:42.700Z · LW · GW

Ideally I would like to rewrite most LessWrong wiki pages about real world stuff in this style, with forecasts as links. 

Comment by Nathan Young on Every "Every Bay Area House Party" Bay Area House Party · 2024-02-17T16:37:43.180Z · LW · GW

Funny! I shared screenshots with several people.

Comment by Nathan Young on Why Improving Dialogue Feels So Hard · 2024-01-21T20:20:41.455Z · LW · GW

I think an issue is that dialogue is often about two very different models of the world clashing. It takes a lot of work for those two to develop a common language, and even then it may just be the two of them. Add to that that they may be ill-informed, and dialogue is just very expensive. I really like dialogue, and yet it takes a lot for me to be in an actually truth-seeking state about it or to read others.

Comment by Nathan Young on The Onion Test for Personal and Institutional Honesty · 2024-01-21T19:37:18.590Z · LW · GW

I think about this framing quite a lot. Is what I say going to lead people to assume roughly the thing I think, even if I'm not precise? So the concept is pretty valuable to me.

I don't know if it was the post that did it, but maybe!

Comment by Nathan Young on Introducing Pastcasting: A tool for forecasting practice · 2024-01-21T19:27:21.692Z · LW · GW

I've used this a bit, but not loads. I prefer Fatebook, Metaculus, Manifold, and betting. I don't quite know why I don't use it more; here are some guesses:

  • I found the tool kind of hard to use
  • It was hard to search for the kind of information that I use to forecast
  • Often I would generate priors based on my current state, but those were wrong in strange ways (I knew something happened but after the deadline)
  • It wasn't clear that it was helping me to get better versus doing lots of forecasting on other platforms.
Comment by Nathan Young on The impossible problem of due process · 2024-01-21T16:33:28.807Z · LW · GW

This post. It gives some useful context around the bad outcomes of panels.

Comment by Nathan Young on The impossible problem of due process · 2024-01-21T16:26:28.373Z · LW · GW

As a counterexample, I think the EA community health team does better than I would expect. It's a huge global community, yet they are relatively trusted and the community takes responsibility for itself. Like, I don't know that I'd say it does well, but I think I'd be hard pressed to find an analogous community that has a better such function.

Feels like they have a better batting average than rationalists in dealing with issues like this (I guess at least partly cos events are more central to EA than rationalism). Many other communities seem to have no responsibility at all - many political groups seem rife with issues, with no one taking responsibility for them.

I agree overall that this is a really hard problem, though I suppose I am more optimistic than OP. 

Comment by Nathan Young on The impossible problem of due process · 2024-01-21T16:21:09.013Z · LW · GW

Yeah I agree a bit here. This document itself is a nice case study in how things can go wrong. There could be a similar one for "don't ask women out"

Comment by Nathan Young on When "yang" goes wrong · 2024-01-19T13:42:15.744Z · LW · GW

This encapsulates well my concern that there really is a lot of power going to a small group of people/value sets, and that I don't make my best decisions when I only listen to myself. If I were to run a country I would want a blend of elite values and some blend of everyone's values. I am concerned that a guardian wouldn't do that and that we are encouraging that outcome.

Comment by Nathan Young on Against Nonlinear (Thing Of Things) · 2024-01-19T13:01:02.465Z · LW · GW

I am not sure I would characterise 2 and 4 like that.

I think I'd say ozzy's criticisms were more like:

  • So-Called Top EAs: Nonlinear was focused on status as a metric rather than helping their interns become better at their jobs
  • Sunshine Is The Best Disinfectant: Nonlinear says that any org would have this many errors. Ozzy says let's get it all out in the open then
Comment by Nathan Young on Nathan Young's Shortform · 2024-01-02T10:56:49.872Z · LW · GW

I am trying to learn some information theory.

It feels like the bits of information from going 50% to 25% and from going 50% to 75% should be the same.

But for probability p, the information is -log2(p).

But then the information of .5 -> .25 is 1 bit, but from .5 to .75 it is .41 bits. What am I getting wrong?

I would appreciate blogs and youtube videos.
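
A quick way to reproduce the -log2(p) numbers mentioned above (a minimal sketch that only computes the surprisal values; it doesn't settle the question, and the helper name is mine):

```python
# Surprisal (information content) of an outcome with probability p, in bits: -log2(p).
from math import log2

def surprisal_bits(p: float) -> float:
    return -log2(p)

for p in (0.25, 0.5, 0.75):
    print(f"p = {p:.2f}  ->  -log2(p) = {surprisal_bits(p):.3f} bits")
# p = 0.25  ->  2.000 bits
# p = 0.50  ->  1.000 bits
# p = 0.75  ->  0.415 bits
```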

Comment by Nathan Young on Prediction Markets aren't Magic · 2023-12-22T19:53:35.295Z · LW · GW

I think that the main benefit of manifold.love is that it allows friends to do some matchmaking, but I'd do that for free and am not really doing it for profit anyway. I guess currently the other main benefit is as a Schelling point for the rat community.

Comment by Nathan Young on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-20T18:51:04.321Z · LW · GW

How private are the LessWrong votes?

Would you want to do it overall or blog by blog? Seems pretty doable.

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-20T13:33:50.884Z · LW · GW

What are legal threats like in the real world?

I've been threatened with legal action once (by Jay Z's record company for a parody I made) and it felt like a bet. I probably could win if I spent a lot of money, but I didn't have that money and so I took the song down.

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-19T16:19:42.923Z · LW · GW

continue to believe that most of what Ben relayed from Alice was true

I can believe she is being precise without conveying an accurate picture. I am not sure that I ever thought that Alice's account was the most accurate version of events.

Comment by Nathan Young on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T15:17:21.208Z · LW · GW

Scanned it, seems pretty fair. In our discussion you convinced me Pace should probably have given the team a week to respond, unless other information comes out.

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-19T15:12:26.211Z · LW · GW

Well I guess I can only talk about my takeaways from Ben's article. Like who gets to say what Ben's article really means? I think probably you should see my reading as pretty different to the median reading. I think I can justify that, but if I had realised how differently you all read the article I would have said so sooner.

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-18T19:22:44.800Z · LW · GW

You can read my comments at the time; I don't think I considered Nonlinear cruel or abusive. I guess that I might describe the worst of their behaviour like that, maybe, but people behave within broad ranges.
 

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-18T12:23:03.288Z · LW · GW

This matches my experience too. When I initially made pretty milquetoast criticisms here, all of my comments went down by ~10.

Comment by Nathan Young on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-17T00:55:52.313Z · LW · GW

Have people thought about doing gene editing stuff at Próspera? Seems legal there?

Comment by Nathan Young on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-15T13:11:26.561Z · LW · GW

No I'm not saying that.

I am saying about halfway between that and "Ben's account holds up".

What specifically is the most grievous error here?

Comment by Nathan Young on Nathan Young's Shortform · 2023-12-15T10:06:30.567Z · LW · GW

Things I would do dialogues about:

(Note I may change my mind during these discussions but if I do so I will say I have)

  • Prediction is the right frame for most things
  • Focus on world states not individual predictions
  • Betting on wars is underrated
  • The UK House of Lords is okay actually
  • Immigration should be higher but in a way that doesn't annoy everyone and cause backlash
Comment by Nathan Young on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-15T09:56:12.882Z · LW · GW

Yeah I mean the answer is, just make prediction markets and bet on them. I think we are getting a lot better at that.

(Also I'm a lesswrong user who makes a lot of prediction markets about AI)

In particular: