Posts

AI and integrity 2024-05-29T20:45:51.300Z
Questions are usually too cheap 2024-05-11T13:00:54.302Z
Do you know of lists of p(doom)s/AI forecasts/ AI quotes? 2024-05-10T11:47:56.183Z
What is a community that has changed their behaviour without strife? 2024-05-07T09:24:48.962Z
This is Water by David Foster Wallace 2024-04-24T21:21:09.445Z
1-page outline of Carlsmith's otherness and control series 2024-04-24T11:25:36.106Z
What is the best AI generated music about rationality/ai/transhumanism? 2024-04-11T09:34:59.616Z
Be More Katja 2024-03-11T21:12:14.249Z
Community norms poll (2 mins) 2024-03-07T21:45:03.063Z
Grief is a fire sale 2024-03-04T01:11:06.882Z
The World in 2029 2024-03-02T18:03:29.368Z
Minimal Viable Paradise: How do we get The Good Future(TM)? 2023-12-06T09:24:09.699Z
Forecasting Questions: What do you want to predict on AI? 2023-11-01T13:17:00.040Z
How to Resolve Forecasts With No Central Authority? 2023-10-25T00:28:32.332Z
How are rationalists or orgs blocked, that you can see? 2023-09-21T02:37:35.985Z
AI Probability Trees - Joe Carlsmith (2022) 2023-09-08T15:40:24.892Z
AI Probability Trees - Katja Grace 2023-08-24T09:45:47.487Z
What wiki-editing features would make you use the LessWrong wiki more? 2023-08-24T09:22:01.300Z
Quick proposal: Decision market regrantor using manifund (please improve) 2023-07-09T12:49:01.904Z
Graphical Representations of Paul Christiano's Doom Model 2023-05-07T13:03:19.624Z
AI risk/reward: A simple model 2023-05-04T19:25:25.738Z
FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next 2022-11-09T02:14:19.623Z
Feature request: Filter by read/ upvoted 2022-10-04T17:17:56.649Z
Nathan Young's Shortform 2022-09-23T17:47:06.903Z
What should rationalists call themselves? 2021-08-09T08:50:07.161Z

Comments

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-24T17:35:00.204Z · LW · GW

I disagree. They don't need to be reasonable so much as I now have a big stick to beat the journalist with if they aren't.

"I can't change my headlines"
"But it is your responsibility right?"
"No"
"Oh were you lying when you said it was"

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-24T01:05:23.748Z · LW · GW

I have 2 so far. One journalist agreed with no bother. The other frustratedly said they couldn't guarantee that and tried to negotiate. I said I was happy to take a bond, they said no, which suggested they weren't that confident.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-23T16:51:47.724Z · LW · GW

Thanks to the people who use this forum.

I try and think about things better and it's great to have people to do so with, flawed as we are. In particular @KatjaGrace and @Ben Pace.

I hope we can figure it all out.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-18T13:11:32.896Z · LW · GW

So far a journalist just said "sure". So n = 1 it's fine.

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-16T12:26:43.469Z · LW · GW

Trying out my new journalist strategy.

Comment by Nathan Young on Reliable Sources: The Story of David Gerard · 2024-07-11T09:26:13.971Z · LW · GW

Did you reformat all the footnotes or do you have a tool for that?

Comment by Nathan Young on Loving a world you don’t trust · 2024-07-02T17:19:03.783Z · LW · GW

My main takeaway from this series is that Carlsmith seems to be gesturing at some important things where I want a more diagrammy, mathsy approach to come along after. 

What does "Green" look like in more blue terms? When specifically might we want to be paperclippers and when not? Where are the edges of the different concepts?

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-01T10:20:47.146Z · LW · GW

So by my metric, Yudkowsky and Lintemandain's Dath Ilan isn't neutral, it's quite clearly lawful good, or attempting to be. And yet they care a lot about the laws of cognition.

So it seems to me that the laws of cognition can (should?) drive towards flourishing rather than pure knowledge increase. There might be things that we wish we didn't know for a bit. And ways to increase our strength to heal rather than our strength to harm.

To me it seems a better rationality would be lawful good. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-07-01T10:17:51.720Z · LW · GW

Yeah I find the intention vs outcome thing difficult.

What do you think of "average expected value across small perturbations in your life"? Like if you accidentally hit Churchill with a car and so cause the UK to lose WW2, that feels notably less bad than deliberately trying to kill a much smaller number of people. In many nearby universes, you didn't kill Churchill, but in many nearby universes that person did kill all those people.

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-30T12:48:32.097Z · LW · GW

Here is a 5 minute, spicy take of an alignment chart. 

What do you disagree with?

To try and preempt some questions:

Why is rationalism neutral?

It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.

Why are e/accs and EAs in the same group?

In the quick moments I took to make this, I found both EA and e/acc pretty hard to predict and pretty uncertain in overall impact across some range of forecasts.

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-23T13:03:50.791Z · LW · GW

Under-considered might be more accurate?

And yes, I agree that seems bad.

Comment by Nathan Young on Nathan Young's Shortform · 2024-06-22T12:30:46.440Z · LW · GW

Joe Rogan (the largest podcaster in the world) repeatedly giving concerned but mediocre x-risk explanations suggests that people who have contacts with him should try and get someone on the show to talk about it.

E.g. listen from 2:40:00, though there were several bits like this during the show.

Comment by Nathan Young on Ilya Sutskever created a new AGI startup · 2024-06-20T03:34:30.951Z · LW · GW

Weakly endorsed

“Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was Oh no, not again. Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”

The Hitchhiker’s Guide To The Galaxy, Douglas Adams

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-30T16:36:29.160Z · LW · GW

Feels like FLI is a massively underrated org. Cos of the whole Vitalik donation thing they have like $300mn.

Comment by Nathan Young on keltan's Shortform · 2024-05-30T04:42:26.149Z · LW · GW

Re safety, I don't know about Oakland but some parts of SF are genuinely the most dangerous feeling places I've ever been to after dark (because normally I wouldn't go somewhere, but SF feels very fine until it isn't). If I am travelling to places in SF after dark I'll check how dodgy the street entrances are. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-30T04:39:40.853Z · LW · GW

What are the LessWrong norms on promotion? Writing a post about my company seems off (but I think it could be useful to users). Should I write a quick take?

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-24T19:43:25.195Z · LW · GW

Given my understanding of epistemic and necessary truths it seems plausible that I can construct epistemic truths using only necessary ones, which feels contradictory.

Eg 1 + 1 = 2 is a necessary truth

But 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10 is epistemic. It could very easily be wrong if I have miscounted the number of 1s.

This seems to suggest that necessary truths are just "simple to check" and that sufficiently complex necessary truths become epistemic because of a failure to check an operation.

Similarly "there are 180 degrees on the inside of a triangle" is only necessarily true in spaces such as R2. It might look necessarily true everywhere but it's not on the sphere. So what looks like a necessary truth actually an epistemic one.

What am I getting wrong?

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-21T16:08:41.532Z · LW · GW

A problem with overly kind PR is that many people know that you don't deserve the reputation. So if you start to fall, you can fall hard and fast.

Likewise it incentivises investigation that you can't back up.

If everyone thinks I am lovely, but I am two-faced, I create a juicy story any time I am cruel. Not so if I am known to be grumpy.

eg My sense is that EA did this a bit with the press tour around What We Owe The Future. It built up a sense of wisdom that wasn't necessarily deserved, so with FTX it all came crashing down.

Personally I don't want you to think I am kind and wonderful. I am often thoughtless and grumpy. I think you should expect a mediocre to good experience. But I'm not Santa Claus.

I am never sure whether rats are very wise or very naïve to push for reputation over PR, but I think it's much more sustainable.

@ESYudkowsky can't really take a fall for being goofy. He's always been goofy - it was priced in.

Many organisations think they are above maintaining the virtues they profess to possess, instead managing it with media relations.

In doing this they often fall harder eventually. Worse, they lose out on the feedback from their peers accurately seeing their current state.

Journalists often frustrate me as a group, but they aren't dumb. Whatever they think is worth writing, they probably have a deeper sense of what is going on.

Personally I'd prefer to get that in small sips, such that I can grow, than to have to drain my cup to the bottom.

Comment by Nathan Young on Nathan Young's Shortform · 2024-05-15T19:59:10.313Z · LW · GW

I've made a big set of expert opinions on AI and my inferred percentages from them. I guess that some people will disagree with them. 

I'd appreciate hearing your criticisms so I can improve them or fill in entries I'm missing. 

https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:33:24.440Z · LW · GW

Though sometimes the obligation to answer is right, right? I guess maybe it's that obligation works well at some scale, but then becomes bad at some larger scale. In a conversation, it's fine; in a public debate, sometimes it seems to me that it doesn't work.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:31:58.970Z · LW · GW

I think the motivating instances are largely:

  • Online debates are bad
  • Freedom Of Information requests suck

I think I probably backfilled from there.

I do sometimes get persistent questions on twitter, but I don't think there is a single strong example.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-13T20:30:37.009Z · LW · GW

Sadly you are the second person to correct me on this; @Paul Crowley was first. Oops.

Comment by Nathan Young on Questions are usually too cheap · 2024-05-11T14:51:14.257Z · LW · GW

The solution is not to prevent the questions, but to remove the obligation to generate an expensive answer.

Good suggestion.

Comment by Nathan Young on Do you know of lists of p(doom)s/AI forecasts/ AI quotes? · 2024-05-11T12:58:57.507Z · LW · GW

Thank you, this is the kind of thing I was hoping to find.

Comment by Nathan Young on What is a community that has changed their behaviour without strife? · 2024-05-08T08:33:01.161Z · LW · GW

What changes do you think the polyamory community has made?

Comment by Nathan Young on Habryka's Shortform Feed · 2024-05-07T09:34:11.955Z · LW · GW

I find this a very suspect detail, though the base rate of conspiracies is very low.

"He wasn't concerned about safety because I asked him," Jennifer said. "I said, 'Aren't you scared?' And he said, 'No, I ain't scared, but if anything happens to me, it's not suicide.'"

https://abcnews4.com/news/local/if-anything-happens-its-not-suicide-boeing-whistleblowers-prediction-before-death-south-carolina-abc-news-4-2024

Comment by Nathan Young on What is a community that has changed their behaviour without strife? · 2024-05-07T09:30:56.640Z · LW · GW

To be more explicit about my model, I see communities as a bit like people. And sometimes people do the hard work of changing (especially as they have incentives to) but sometimes they ignore it or blame someone else.

Similarly, communities often scapegoat something or someone, or give vague general advice.

Comment by Nathan Young on [deleted post] 2024-05-03T19:05:36.381Z

Sure sounds good. Can you crosspost to the EA forum? Also I think Nicky's pronouns are they/them.

Comment by Nathan Young on Which skincare products are evidence-based? · 2024-05-03T13:56:31.866Z · LW · GW

It seems underrated for LessWrong to have cached high quality answers to questions like this. Also stuff on exercise, nutrition, parenting and schooling. That we don't really have a clear set seems to point towards this being difficult or us being less competent than we'd like.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T21:28:26.133Z · LW · GW

Nevertheless lots of people were hassled. That has real costs, both to them and to you. 

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T21:22:38.026Z · LW · GW

If that were true then there are many ways you could partially do that - e.g. give people a set of tokens to represent their mana at the time of the devaluation, and if at a future point you raise, you could give them 10x those tokens back.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T20:18:34.198Z · LW · GW

I'm discussing with Carson. I might change my mind but I don't know that I'll argue with both of you at once.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:44:29.782Z · LW · GW

Austin said they have $1.5 million in the bank, vs $1.2 million mana issued. The only outflows right now are to the charity programme, which even with a lot of outflows is only at $200k. They also recently raised at a $40 million valuation. I am confused by "running out of money". They have a large user base that wants to bet and will do so at larger amounts if given the opportunity. I'm not so convinced that there is some tiny timeline here.

But if there is, then say so: "We know that we often talked about mana being eventually worth $100 mana per dollar, but we printed too much and we're sorry. Here are some reasons we won't devalue in the future..."

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:43:36.942Z · LW · GW

Austin took his salary in mana, an often-referred-to incentive for him to want mana to become valuable, presumably at that rate.

I recall comments like "we pay 250 mana in referrals per user because we reckon we'd pay about $2.50", likewise in the in-person mana auction. I'm not saying it was an explicit contract, but there were norms.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:42:29.341Z · LW · GW

From https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005

"That being said, we will do everything we can to communicate to our users what our plans are for the future and work with anyone who has participated in our platform with the expectation of being able to donate mana earnings."

"everything we can" is not a couple of weeks notice and lot of hassle.  Am I supposed to trust this organisation in future with my real money?

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:41:14.268Z · LW · GW

Well they have a much larger donation than has been spent so there were ways to avoid this abrupt change:


"Manifold for Good has received grants totaling $500k from the Center for Effective Altruism (via the FTX Future Fund) to support our charitable endeavors."

Manifold has donated $200k so far. So there is $300k left. Why not at least, say "we will change the rate at which mana can be donated when we burn through this money" 

(via https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005 )

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:38:52.092Z · LW · GW

Carson:
 

Ppl don't seem to understand that Manifold could literally not exist in a year or 2 if they don't find a product market fit

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:37:56.719Z · LW · GW

Carson's response:

There was no implicit contract that 100 mana was worth $1 IMO. This was explicitly not the case given CFTC restrictions?

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:37:18.001Z · LW · GW

Carson's response:

weren't donations always flagged to be a temporary thing that may or may not continue to exist? I'm not inclined to search for links but that was my understanding.

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:36:48.073Z · LW · GW

seems like they are breaking an explicit contract (by pausing donations on ~a week's notice)

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:36:26.185Z · LW · GW

seems like they are breaking an implicit contract (that 100 mana was worth a dollar)

Comment by Nathan Young on Nathan Young's Shortform · 2024-04-26T16:35:45.870Z · LW · GW

Nathan and Carson's Manifold discussion.

As of the last edit my position is something like:

"Manifold could have handled this better, so as not to force everyone with large amounts of mana to have to do something urgently, when many were busy. 

Beyond that they are attempting to satisfy two classes of people:

  • People who played to donate can donate the full value of their investments
  • People who played for fun now get the chance to turn their mana into money

To this end, and modulo the above hassle, this decision is good.

It is unclear to me whether there was an implicit promise that mana was worth 100 to the dollar. Manifold has made some small attempt to stick to this, but many untried avenues are available, as is acknowledging they will rectify the error if possible later. To the extent that there was a promise (uncertain) and no further attempt is made, I don't really believe they really take that promise seriously.

It is unclear to me what I should take from this, though they have not acted as I would have expected them to. Who is wrong? Me, them, both of us? I am unsure."

Threaded discussion

Comment by Nathan Young on Housing Supply (new discussion format) · 2024-04-26T12:47:10.757Z · LW · GW

Counterpoint: we would likely guess that the graph of rent to income would look similar.

Comment by Nathan Young on Difference between European and US healthcare systems [discussion post] · 2024-04-25T15:24:33.335Z · LW · GW

This comment may be replied to by anyone. 

Other comments are for the discussion group only.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T09:44:33.325Z · LW · GW

Do you find it dampens good emotions? Like if you are deeply in love and feel it, does it diminish the experience?

Comment by Nathan Young on What is the best AI generated music about rationality/ai/transhumanism? · 2024-04-25T09:02:23.122Z · LW · GW

I wrote this song about Bryan Caplan's My Beautiful Bubble:

https://suno.com/song/5f6d4d5d-6b5d-4b71-af7b-2cc197989172 

Comment by Nathan Young on The Inner Ring by C. S. Lewis · 2024-04-25T08:18:15.918Z · LW · GW

I wish there were a clear unifying place for all commentary on this topic. I could create a wiki page I suppose.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T07:58:05.369Z · LW · GW

Can I check that I've understood it?

Roughly, the essay urges one to be conscious of each passing thought, to see it and kind of head it off at the tracks - "feeling angry?" "don't!". But the comment argues this is against what CBT says about feeling our feelings.

What about Sam Harris's practice of meditation, which seems focused on seeing and noticing thoughts, turning attention back on itself? I had a period last night of sort of "intense consciousness" where I felt very focused on the fact I was conscious. It wasn't super pleasant, but it was profound. I can see why one would want to focus on that but also why it might be a bad idea.

Comment by Nathan Young on This is Water by David Foster Wallace · 2024-04-25T07:56:14.442Z · LW · GW

Thanks. And I appreciate that LessWrong is a space where mods feel empowered to do this, since it's the right call.

Comment by Nathan Young on The Inner Ring by C. S. Lewis · 2024-04-24T23:12:33.848Z · LW · GW

Yeboooiiiii.

Also this was gonna be the second essay I posted, so great minds think alike!