Posts

Questions are usually too cheap 2024-05-11T13:00:54.302Z
Do you know of lists of p(doom)s/AI forecasts/ AI quotes? 2024-05-10T11:47:56.183Z
What is a community that has changed their behaviour without strife? 2024-05-07T09:24:48.962Z
This is Water by David Foster Wallace 2024-04-24T21:21:09.445Z
1-page outline of Carlsmith's otherness and control series 2024-04-24T11:25:36.106Z
What is the best AI generated music about rationality/ai/transhumanism? 2024-04-11T09:34:59.616Z
Be More Katja 2024-03-11T21:12:14.249Z
Community norms poll (2 mins) 2024-03-07T21:45:03.063Z
Grief is a fire sale 2024-03-04T01:11:06.882Z
The World in 2029 2024-03-02T18:03:29.368Z
Minimal Viable Paradise: How do we get The Good Future(TM)? 2023-12-06T09:24:09.699Z
Forecasting Questions: What do you want to predict on AI? 2023-11-01T13:17:00.040Z
How to Resolve Forecasts With No Central Authority? 2023-10-25T00:28:32.332Z
How are rationalists or orgs blocked, that you can see? 2023-09-21T02:37:35.985Z
AI Probability Trees - Joe Carlsmith (2022) 2023-09-08T15:40:24.892Z
AI Probability Trees - Katja Grace 2023-08-24T09:45:47.487Z
What wiki-editing features would make you use the LessWrong wiki more? 2023-08-24T09:22:01.300Z
Quick proposal: Decision market regrantor using manifund (please improve) 2023-07-09T12:49:01.904Z
Graphical Representations of Paul Christiano's Doom Model 2023-05-07T13:03:19.624Z
AI risk/reward: A simple model 2023-05-04T19:25:25.738Z
FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next 2022-11-09T02:14:19.623Z
Feature request: Filter by read/ upvoted 2022-10-04T17:17:56.649Z
Nathan Young's Shortform 2022-09-23T17:47:06.903Z
What should rationalists call themselves? 2021-08-09T08:50:07.161Z

Comments

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-21T16:08:41.532Z · LW · GW

A problem with overly kind PR is that many people know that you don't deserve the reputation. So if you start to fall, you can fall hard and fast.

Likewise, it incentivises investigation into claims you can't back up.

If everyone thinks I am lovely, but I am two-faced, I create a juicy story any time I am cruel. Not so if I am known to be grumpy.

E.g. my sense is that EA did this a bit with the press tour around What We Owe The Future. It built up a sense of wisdom that wasn't necessarily deserved, so with FTX it all came crashing down.

Personally I don't want you to think I am kind and wonderful. I am often thoughtless and grumpy. I think you should expect a mediocre to good experience. But I'm not Santa Claus.

I am never sure whether rats are very wise or very naïve to push for reputation over PR, but I think it's much more sustainable.

@ESYudkowsky can't really take a fall for being goofy. He's always been goofy - it was priced in.

Many organisations think they are above maintaining the virtues they profess to possess, instead managing their image with media relations.

In doing this they often fall harder eventually. Worse, they lose out on the feedback from their peers accurately seeing their current state.

Journalists often frustrate me as a group, but they aren't dumb. Whatever they end up writing, they probably have a deeper sense of what is going on.

Personally I'd prefer to get that in small sips, such that I can grow, than to have to drain my cup to the bottom.

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-15T19:59:10.313Z · LW · GW

I've compiled a large set of expert opinions on AI and my inferred percentages from them. I guess that some people will disagree with them.

I'd appreciate hearing your criticisms so I can improve them or fill in entries I'm missing. 

https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:33:24.440Z · LW · GW

Though sometimes the obligation to answer is right, right? I guess the obligation works well at some scale, but then becomes bad at some larger scale. In a conversation, it's fine; in a public debate, sometimes it seems to me that it doesn't work.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:31:58.970Z · LW · GW

I think the motivating instances are largely:

  • Online debates are bad
  • Freedom Of Information requests suck

I think I probably backfilled from there.

I do sometimes get persistent questions on Twitter, but I don't think there is a single strong example.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:30:37.009Z · LW · GW

Sadly you are the second person to correct me on this; @Paul Crowley was first. Oops.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-11T14:51:14.257Z · LW · GW

The solution is not to prevent the questions, but to remove the obligation to generate an expensive answer.

Good suggestion.

Comment by Nathan Young on Do you know of lists of p(doom)s/AI forecasts/ AI quotes? · 2024-05-11T12:58:57.507Z · LW · GW

Thank you, this is the kind of thing I was hoping to find.

Comment by Nathan Young on What is a community that has changed their behaviour without strife? · 2024-05-08T08:33:01.161Z · LW · GW

What changes do you think the polyamory community has made?

Comment by Nathan Young on Habryka's Shortform Feed · 2024-05-07T09:34:11.955Z · LW · GW

I find this a very suspect detail, though the base rate of conspiracies is very low.

"He wasn't concerned about safety because I asked him," Jennifer said. "I said, 'Aren't you scared?' And he said, 'No, I ain't scared, but if anything happens to me, it's not suicide.'"

https://abcnews4.com/news/local/if-anything-happens-its-not-suicide-boeing-whistleblowers-prediction-before-death-south-carolina-abc-news-4-2024

Comment by Nathan Young on What is a community that has changed their behaviour without strife? · 2024-05-07T09:30:56.640Z · LW · GW

To be more explicit about my model, I see communities as a bit like people. Sometimes people do the hard work of changing (especially when they have incentives to), but sometimes they ignore it or blame someone else.

Similarly, communities often scapegoat something or someone, or give vague general advice.

Comment by Nathan Young on [deleted post] 2024-05-03T19:05:36.381Z

Sure, sounds good. Can you crosspost to the EA Forum? Also, I think Nicky's pronouns are they/them.

Comment by Nathan Young on Which skincare products are evidence-based? · 2024-05-03T13:56:31.866Z · LW · GW

It seems underrated for LessWrong to have cached high-quality answers to questions like this - also stuff on exercise, nutrition, parenting and schooling. That we don't really have a clear set seems to point towards this being difficult, or us being less competent than we'd like.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T21:28:26.133Z · LW · GW

Nevertheless, lots of people were hassled. That has real costs, both to them and to you.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T21:22:38.026Z · LW · GW

If that were true then there are many ways you could partially do that - e.g. give people a set of tokens representing their mana at the time of the devaluation, and if at a future point you raise, you could give them 10x those tokens back.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T20:18:34.198Z · LW · GW

I'm discussing with Carson. I might change my mind but I don't know that I'll argue with both of you at once.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:44:29.782Z · LW · GW

Austin said they have $1.5 million in the bank vs $1.2 million of mana issued. The only outflows right now are to the charity programme, which even with a lot of outflows is only at $200k. They also recently raised at a $40 million valuation. I am confused by the suggestion that they are running out of money. They have a large user base that wants to bet and will do so at larger amounts if given the opportunity. I'm not so convinced that there is some tiny timeline here.

But if there is, then say so: "We know that we often talked about mana eventually being worth 100 to the dollar, but we printed too much and we're sorry. Here are some reasons we won't devalue in the future..."

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:43:36.942Z · LW · GW

Austin took his salary in mana, which was often referred to as an incentive for him to want mana to become valuable, presumably at that rate.

I recall comments like 'we pay 250 mana in referrals per user because we reckon we'd pay about $2.50', and likewise at the in-person mana auction. I'm not saying it was an explicit contract, but there were norms.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:42:29.341Z · LW · GW

From https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005

"That being said, we will do everything we can to communicate to our users what our plans are for the future and work with anyone who has participated in our platform with the expectation of being able to donate mana earnings."

"everything we can" is not a couple of weeks notice and lot of hassle.  Am I supposed to trust this organisation in future with my real money?

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:41:14.268Z · LW · GW

Well, they have received a much larger donation than has been spent, so there were ways to avoid this abrupt change:


"Manifold for Good has received grants totaling $500k from the Center for Effective Altruism (via the FTX Future Fund) to support our charitable endeavors."

Manifold has donated $200k so far, so there is $300k left. Why not at least say, "we will change the rate at which mana can be donated when we burn through this money"?

(via https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005 )

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:38:52.092Z · LW · GW

Carson:
 

Ppl don't seem to understand that Manifold could literally not exist in a year or 2 if they don't find a product market fit

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:37:56.719Z · LW · GW

Carson's response:

There was no implicit contract that 100 mana was worth $1 IMO. This was explicitly not the case given CFTC restrictions?

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:37:18.001Z · LW · GW

Carson's response:

weren't donations always flagged to be a temporary thing that may or may not continue to exist? I'm not inclined to search for links but that was my understanding.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:36:48.073Z · LW · GW

Seems like they are breaking an explicit contract (by pausing donations on ~a week's notice).

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:36:26.185Z · LW · GW

Seems like they are breaking an implicit contract (that 100 mana was worth a dollar).

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:35:45.870Z · LW · GW

Nathan and Carson's Manifold discussion.

As of the last edit my position is something like:

"Manifold could have handled this better, so as not to force everyone with large amounts of mana to have to do something urgently, when many were busy. 

Beyond that they are attempting to satisfy two classes of people:

  • People who played to donate can donate the full value of their investments
  • People who played for fun now get the chance to turn their mana into money

To this end, and modulo the above hassle, this decision is good. 

It is unclear to me whether there was an implicit promise that mana was worth 100 to the dollar. Manifold has made some small attempt to stick to this, but many untried avenues are available, as is acknowledging they will rectify the error if possible later. To the extent that there was a promise (uncertain) and no further attempt is made, I don't believe they really take that promise seriously.

It is unclear to me what I should take from this, though they have not acted as I would have expected them to. Who is wrong? Me, them, both of us? I am unsure."


Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-26T12:47:10.757Z · LW · GW

Counterpoint: we would likely guess that the graph of rent to income would look similar.

Comment by Nathan Young on Difference between European and US healthcare systems [discussion post] · 2024-04-25T15:24:33.335Z · LW · GW

This comment may be replied to by anyone. 

Other comments are for the discussion group only.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T09:44:33.325Z · LW · GW

Do you find it dampens good emotions? Like if you are deeply in love and feel it, does it diminish the experience?

Comment by Nathan Young on What is the best AI generated music about rationality/ai/transhumanism? · 2024-04-25T09:02:23.122Z · LW · GW

I wrote this song about Bryan Caplan's My Beautiful Bubble: 

https://suno.com/song/5f6d4d5d-6b5d-4b71-af7b-2cc197989172 

Comment by Nathan Young on The Inner Ring by C. S. Lewis · 2024-04-25T08:18:15.918Z · LW · GW

I wish there were a clear unifying place for all commentary on this topic. I could create a wiki page I suppose.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T07:58:05.369Z · LW · GW

Can I check that I've understood it?

Roughly, the essay urges one to be conscious of each passing thought, to see it and kind of head it off at the tracks - "feeling angry?" "don't!". But the comment argues this is against what CBT says about feeling our feelings.

What about Sam Harris' practice of meditation, which seems focused on seeing and noticing thoughts, turning attention back on itself? I had a period last night of sort of "intense consciousness" where I felt very focused on the fact I was conscious. It wasn't super pleasant, but it was profound. I can see why one would want to focus on that but also why it might be a bad idea.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T07:56:14.442Z · LW · GW

Thanks. And I appreciate that LessWrong is a space where mods feel empowered to do this, since it's the right call.

Comment by Nathan Young on The Inner Ring by C. S. Lewis · 2024-04-24T23:12:33.848Z · LW · GW

Yeboooiiiii.

Also this was gonna be the second essay I posted, so great minds think alike!

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-24T22:28:02.763Z · LW · GW

I think I'm gonna start posting top blogposts to the main feed (mainly from dead writers or people I predict won't care).

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-24T21:23:54.771Z · LW · GW

I find this essay very moving and it helps me notice a certain thing. Life is passing and we can pay attention to one thing or another. What will I pay attention to? What will I worship?

Some quotes:

There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”

 

There is no such thing as not worshipping. Everybody worships. The only choice we get is what to worship. And the compelling reason for maybe choosing some sort of god or spiritual-type thing to worship–be it JC or Allah, be it YHWH or the Wiccan Mother Goddess, or the Four Noble Truths, or some inviolable set of ethical principles–is that pretty much anything else you worship will eat you alive. If you worship money and things, if they are where you tap real meaning in life, then you will never have enough, never feel you have enough. It’s the truth. Worship your body and beauty and sexual allure and you will always feel ugly. And when time and age start showing, you will die a million deaths before they finally grieve you.

Comment by Nathan Young on 1-page outline of Carlsmith's otherness and control series · 2024-04-24T17:23:34.907Z · LW · GW

I sort of don't think it hangs together that well as a series. I think it implies a lot more interesting points than it makes, hence my reordering. 

Comment by Nathan Young on 1-page outline of Carlsmith's otherness and control series · 2024-04-24T16:10:13.058Z · LW · GW

Someone said they dislike bullet point lists. Here is the same piece formatted as paragraphs. Do you prefer it? (in which case I will edit and change it)

Carlsmith tackles two linked questions:

  • How should we behave towards future beings (future humans, AIs etc)?
  • What should our priors be about how AIs will behave towards us?
     

Let's consider the first question - what should our poise be towards the future? Perhaps we are worried about the future being less valuable than it could be or that humans will be killed or outcompeted by AI. 

The blog posts contain a range of answers to this question, broadly categorised as follows:

  1.  We could trust in some base goodness - the universe, god, AIs being good
  2. We could accept that all future beings will be alien to us and stop worrying (see here and here)
  3. We could rely on moral systems or concepts of goodness/niceness
  4. We could seize power over the future (see here and here)
  5. We could adopt a different poise, centred around notions like growth/harmony/"attunement" (here and here)

Let's look at each. Generally each point links to a blog or pair of blog posts.

Trusting in basic goodness, e.g. God or the universe. I might think God holds the future in His hands, or more broadly that things tend to be okay. Carlsmith considers distrust of this a feature of Yudkowskianism, which he labels "deep atheism": not merely not trusting in God, but not trusting that things will be 'okay' unless we make them so. For similar reasons, Yudkowskians don't assume AIs will be good. For them this isn't a good answer.

Next, I might decide this isn't fixable. Hanson argues future people of any stripe might be as deeply alien to us as we are to, say, the Ancient Greeks. He doesn't expect a future we consider good to be possible or, likely, desirable. Carlsmith notes that Yudkowskians don't hold this view and muses on why. Are they avoiding how alien future people will be? Do they have a clear notion of good that's robust over time? (Yudkowsky doesn't seem to think so.) Or are they avoiding thinking about something uncomfortable?

Many answers seem to rely on moral systems, but these present their own problems. Moral systems vary wildly at edge cases, meaning that at the scale of the future, many belief systems would advocate for seizing control against others. We fear paperclipping partially because it is involuntary and aesthetically dull, but the arguments also extend to law-abiding and even relatively joyful beings taking increasing control of the future via legal and positive-sum means. 

However, without the above, most justifications for seizing control of the future look like those of the AIs. I too would be trying to gain the most resources for my aims at the cost of others, regardless of their ethical stances. In this sense, the AIs aren't bad because they foom; they are bad because they are... not us. However, this looks worryingly like a justification that Stalin or the paperclippers could use. See here and here.

Finally, Carlsmith posits a hidden fifth option, for which we currently lack good concepts. He points to a notion of trust/growth/balance/attunement. He talks about the colour 'green' from Magic: The Gathering, which is about growing in harmony with complex systems, sometimes trusting, sometimes acting. He notes rationalists and EAs have historically been quite inimical to this (favouring 'blue' and 'black'). He repeatedly tries to point at this missing way of being. 

There are therefore a number of ways of dealing with the future, with a number of flaws.

But there is a second, parallel discussion about how AIs might treat us - perhaps because our imagination of our future selves informs how we imagine AIs. 

For instance, if we assume that we cannot trust things (rejecting 1 and 3), then it's very easy to see AIs as a tool or competitor: either it is more powerful than us or we are more powerful than it. 

However, if there is a meaningful position on (5), there may be other ways to relate to AIs and future people. Here we might not control them and they might not control us. We might relate as gentle aliens (e.g. an octopus), or as dead-but-not-supreme nature (like a bear), or something even more other than that - something we cannot imagine but should attempt to. 

Note that this doesn't mean we shouldn't fear AIs - they might still be capable of ruining the future - but this poise feels different. 

In conclusion, this is my shortest summary of this set of blogs (though there is much, much more in there). We should consider other ways to be towards those we could control but who might control us, and other possible relationships towards AI. In AI discourse there is a lack of clarity in notions of attunement, respect, and harmony in relation to the sub-optimal choices of other conscious beings. It is possible that this affects our priors about what AI might be like, possibly pushing towards a worse equilibrium. 

Comment by Nathan Young on Gentleness and the artificial Other · 2024-04-24T12:36:52.085Z · LW · GW

I think this series might be easier for some to engage with if they imagine Carlsmith to be challenging priors around what AI minds will be like. I don't claim this is his intention.

For me, the series makes more sense read back to front - starting with some options of how to engage with the future, noting the tendency of LessWrongers to distrust god and nature, noting how that leads towards a slightly dictatorial tendency, suggesting alternative poises and finally noting that just as we can take a less controlling poise towards the future, so might AIs towards us. 

I flesh out this summary here: https://www.lesswrong.com/posts/qxakrNr3JEoRSZ8LE/1-page-outline-of-carlsmith-s-otherness-and-control-series

More provocatively, I find it raises questions in me like "am I distrustful towards AI because of the pain I felt in leaving Christianity and an inability to trust that anyone might really tell me the truth or act in my best interests, despite many people doing so, much of the time?" 

I would enjoy hearing lenses that others found useful to engage with this work through.

Comment by Nathan Young on Motivation gaps: Why so much EA criticism is hostile and lazy · 2024-04-23T09:51:19.527Z · LW · GW

Good article. 

It's an asymmetry worth pointing out.

It seems related to some concept of a "low interest rate phenomenon" in ideas. Sometimes in a low interest rate environment, people fund all sorts of stuff, because they want any return and credit is cheap. Later much of this looks bunk. Likewise, much EA behaviour around the plentiful money and status of the FTX era looks profligate by today's standards. In the same way I wonder what ideas are held up by some vague consensus rather than being good ideas.

Comment by Nathan Young on Motivation gaps: Why so much EA criticism is hostile and lazy · 2024-04-23T09:44:45.686Z · LW · GW

Feels like there is something off about the following graph. Many people writing critiques care a lot - Émile spends a lot of time on their work, for instance. I don't think motivation really captures what's going on.

Epistemic status: generating theories

I theorise it's two different effects in one:

  • The voices we hear in the discussion (which links to yours)
  • The norms of the communities holding those voices

First, as you say, the voices we hear most are the most confident/motivated, which leaves out a lot of voices, many of whom might talk in a way we'd prefer. Instead we only hear from the fringes, which makes a normal distribution look bimodal.

I wonder if this is more like supply and demand than your "bars" model. I.e. it's not about crossing a bar but about supplying criticism that people demand, and correcting a status market - EA is too high-status, let's fix it. 

Secondly, the edges of this normal distribution have different norms. Let's say there are 3 areas:

  • one likes steelmanning in disagreements 
  • one likes making clear to be on the side of minorities
  • one likes being interesting

Let's imagine we are discussing something that has people from all these areas.

The people who like each of these things most strongly perhaps talk more, as in the above example. But not only do they talk more, they talk differently. So now the discussion is polarised in different languages, because the people in the middle are less confident and speak less (this jump feels like the weakest step in the argument[1]).

[Chart: amount of people with different views (central line is one group of people, who hold all views weakly)]

So now we have this:

So I think my overall take on why criticism is poor is something like "criticism looks poor to us because it isn't for us". It is for the people in the same communities in which it is written. And probably to them our pieces look pretty poor as well. 

Some questions then:

  • How do we respond in language that other groups will understand?
  • Should we want to? Torres, for instance, seems to be a bit of a bully, but I'm not sure that makes their arguments bad. But if I were doing it they would definitely call me out for it.
  • Is it worth taking time to really try and write the strongest versions of criticism in language we understand? Or find ways for people to signal confusion?
  1. ^

    Why should the people in each group talk most confidently? I dunno, but I hear a lot more from Yud, Altman, Andreessen and Torres than from many more moderate voices. Feels like something is going on here. Can anyone suggest what?

Comment by Nathan Young on "You're the most beautiful girl in the world" and Wittgensteinian Language Games · 2024-04-20T23:18:24.663Z · LW · GW

I am so here for this comment section

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-20T11:24:08.702Z · LW · GW

I recall a comment on the EA forum about Bostrom donating a lot to global dev work in the early days. I've looked for it for 10 minutes. Does anyone recall it or know where donations like this might be recorded?

Comment by Nathan Young on Legal punishment should limit the privacy rather than freedom (new discussion format) · 2024-04-19T17:35:56.761Z · LW · GW

COMMENT THREAD

If you comment anywhere other than here, Nathan will delete your comment.

Comment by Nathan Young on Your Strength as a Rationalist · 2024-04-19T16:26:25.049Z · LW · GW

Trying to understand this.

I *knew* that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.

I think what Yud means there is that a good model will break quickly. It only explains a very small set of things because the universe is very specific. So it's good that it doesn't explain many, many things.

It's a bit like David Deutsch arguing that models should be sensitive to small changes.  All of their elements should be important.

Comment by Nathan Young on Partial value takeover without world takeover · 2024-04-18T23:04:46.023Z · LW · GW

I struggle a bit to remember what ASI is, but I'm gonna assume it's Artificial Superintelligence. 

Let's say that it's markedly cleverer than one person. So it's capable of running very successful trading strategies or programming extremely well. It's not clear to me that such a being:

  • Has been driven towards being agentic, when its creators will prefer something more docile
  • Can cooperate well enough with itself to manage some massive secret takeover
  • Is competent enough to recursively self improve (and solve the alignment problems that creates)
  • Can beat everyone else combined

Feels like what such a being/system might do is just run some terrifically successful trading strategies and gather a lot of resources while frantically avoiding notice/trying to claim it won't take over anything else. Huge public outcry, continuing regulation but maybe after a year it settles to some kind of equilibrium. 

Chance of increasing capabilities and then some later jump, but seems plausible to me that that wouldn't happen in one go. 

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T20:16:50.507Z · LW · GW

I guess, why is it a problem?

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T17:40:43.051Z · LW · GW

[This one needs work]

Isn't it usually the case that housing is the single greatest factor between a US and UK standard of life? Or do you not agree?

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T16:04:05.919Z · LW · GW

Housing is a good source of long-term income and the alternatives are comparatively poor, so prices go up compared to incomes. 

To quote @Ege Erdil attempting to steelman:

there could be an interest rate effect - as interest rates fall, claims on future rents become more expensive so housing prices go up.
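
As a rough gloss of the mechanism (my arithmetic, not Ege's): if you value a house as a perpetuity on its rent, price ≈ annual rent / interest rate, so a fall in rates from 5% to 2.5% roughly doubles what buyers will pay for the same rent stream.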

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T15:26:45.981Z · LW · GW

I want to try this as a way of argument mapping alongside a community that might use it. 

It seems likely that a proper accounting of the arguments may involve some false statements.

If it goes well I think it could be useful to me and readers, but I guess it will take several iterations.

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-18T11:00:59.969Z · LW · GW

Housing just isn't that high of a priority. The UK is poor because of productivity, not housing costs.

This from bernoulli_defect:

While housing would increase quality of life and luxury, it’s questionable whether it would fix low British productivity in non-housing constrained industries.

Consider how the Bay Area has had huge GDP growth despite housing shortages as people just cram into bedsits