Posts
Comments
Is there a summary of the rationalist concept of lawfulness anywhere? I'm looking for one and can't find it.
But isn't the point of karma to be a ranking system? Surely it's bad if it's a suboptimal one?
I would have a dialogue with someone on whether Piper should have revealed SBF's messages. Happy to take either side.
Thanks, appreciated.
Sure but shouldn't the karma system be a prioritisation ranking, not just "what is fun to read?"
I would say I took at least 10 hours to write it. I rewrote it about 4 times.
Yeah, but the mapping post is also about 100x more important/well informed. Shouldn't that count for something? I'm not saying it's clearer, I'm saying it's higher priority, probably.
Hmmmm. I wonder how common this is. This is not how I think of the difference. I think of mathematicians as dealing with coherent systems of logic and engineers dealing with building in the real world. Mathematicians are useful when their system maps to the problem at hand, but not when it doesn't.
I should say I have a maths degree, so it's possible that my view of mathematicians and the general view are not coincident.
Yeah this seems like a good point. Not a lot to argue with, but yeah underrated.
It is disappointing/confusing to me that of the two articles I recently wrote, the one that was much closer to reality got a lot less karma.
- A new process for mapping discussions is a summary of months of work that I and my team did on mapping discourse around AI. We built new tools, employed new methodologies. It got 19 karma
- Advice for journalists is a piece that I wrote in about 5 hours after perhaps 5 hours of experiences. It has 73 karma and counting
I think this isn't much evidence, given it's just two pieces. But I do feel a pull towards coming up with theories rather than building and testing things in the real world. To the extent this pull is real, it seems bad.
If true, I would recommend both that more people build things in the real world and talk about them and that we find ways to reward these posts more, regardless of how alive they feel to us at the time.
(Aliveness being my hypothesis - many of us understand or have more live feelings about dealing with journalists than a sort of dry post about mapping discourse)
Hmmm, what is the picture that the analogy gives you? I struggle to imagine how it's misleading, but I want to hear.
A common criticism seems to be "this won't change anything" (see here and here). People often believe that journalists can't choose their headlines and so it is unfair to hold them accountable for them. I think this is wrong for about 3 reasons:
- We have a load of journalists pretty near to us whose behaviour we absolutely can change. Zvi, Scott and Kelsey don't tend to print misleading headlines, but they are quite a big deal, and to act as if we can't create better incentives just because we can't change everything seems to strawman my position
- Journalists can control their headlines. I have seen journalists change headlines after pushback 1-2 times. I don't think it was the editors who read the comments and changed the headlines of their own accord. I imagine that the journalists said they were taking too much pushback and asked for the change. This is therefore probably an existence proof that journalists can affect headlines. I think reality is even further in my direction. I imagine that journalists and their editors are involved in the same social transactions as exist between many employees and their bosses. If they ask to change a headline, often they can probably shift it a bit. Getting good sources might be enough to buy this from them.
- I am not saying that they must have good headlines, I am just holding the threat of their messages against them. I've only done this twice, but in one case a journalist was happy to give me this leverage. And having it, I felt more confident about the interview.
I think there is a failure mode where some rats hear a system described and imagine that reality matches it as described. In this case, I think that's mistaken - journalists have incentives to misdescribe their power over their own headlines. Reality is a bit messier than the simple model suggests, and we have more power than some commenters think.
I recommend trying this norm. It doesn't cost you much, it's a good red flag if someone gets angry when you suggest it, and if they agree you get leverage to use if they betray you. Seems like a good trade that only gets better the more of us do it. Rarely is reality so kind (and hence I may be mistaken).
I don't think that's the case, because the journalist you are speaking to is not the person who makes the decision.
I think this is incorrect. I imagine journalists have more latitude to influence headlines when they really care.
Why do you think it's stretched? It's about the difference between mathematicians and engineers. One group is about relating to the real world; the other is about logically consistent ideas that may be useful.
I exert influence where I can. I think if all of LessWrong took up this norm we could shift the headline-content accuracy gap.
Sure, but I don't agree with their lack of concern for privacy and I think they are wrong. I think they are making the wrong call here.
I also don't think privacy is a binary. Some things are almost private and some things are almost public. Do you think that a conversation we have in LessWrong dms is as public as if I tweeted it?
Well I do talk to journalists I trust and not those I don't. And I don't give quotes to those who won't take responsibility for titles. But yes, more suggestions appreciated.
I would appreciate feedback on how this article could be better.
The work took me quite a long time and seems in line with a LessWrong ethos. And yet people here didn't seem to like it very much.
Thank you.
Yeah, aren't a load of national parks near large US conurbations, such that the opportunity cost in world terms is significant?
What is the best way to take the average of three probabilities in the context below?
- There is information about a public figure
- Three people read this information and estimate the public figure's P(doom)
- (It's not actually P(doom), but it is their probability of something)
- How do I then turn those three probabilities into a single one?
Thoughts?
I currently think the answer is something like: for probabilities a, b, c, the group estimate is 2^((log2 a + log2 b + log2 c)/3). This feels like a way to average the bits that each person gets from the text.
I could just take the geometric or arithmetic mean, but somehow that seems off to me. I guess I might write my intuitions for those here for correction.
Arithmetic mean: (a + b + c)/3. This feels like uncertain probabilities will dominate confident ones, eg (.0000001 + .25)/2 ≈ .125, which is roughly the same as if the first person had been either significantly more confident or significantly less. It seems bad to me for the final probability to be uncorrelated with very confident probabilities when the probabilities are far apart.
On the other hand in terms of EV calculations, perhaps you want to consider the world where some event is .25 much more than where it is .0000001. I don't know. Is the correct frame possible worlds or the information each person brings to the table?
Geometric mean: (a × b × c)^(1/3). I dunno, sort of seems like a midpoint.
Okay so I then did some thinking. Ha! Whoops.
While trying to think intuitively about what the geometric mean was, I noticed that 2^((log2 a + log2 b + log2 c)/3) = 2^(log2(abc)/3) = 2^(log2((abc)^(1/3))) = (abc)^(1/3). So the information mean I thought seemed right is the geometric mean. I feel a bit embarrassed, but also happy to have tried to work it out.
This still doesn't tell me whether the arithmetic worlds intuition or the geometric information interpretation is correct.
Any correction or models appreciated.
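To make the difference between the two aggregation rules concrete, here is a small sketch (plain Python, using the two-person example above; the function names are just illustrative):

```python
import math

def arithmetic_mean(ps):
    # Average in probability space: a very confident low estimate
    # gets diluted by a less confident one.
    return sum(ps) / len(ps)

def geometric_mean(ps):
    # Average in log space: equivalent to 2**(mean of log2(p)),
    # i.e. the "information mean" discussed above.
    return math.prod(ps) ** (1 / len(ps))

ps = [0.0000001, 0.25]
print(arithmetic_mean(ps))  # ≈ 0.125, barely moved by the confident estimate
print(geometric_mean(ps))   # ≈ 0.000158, pulled strongly towards it
```

The identity from the comment above can be checked directly: `2**((math.log2(ps[0]) + math.log2(ps[1])) / 2)` gives the same number as `geometric_mean(ps)`, since averaging logs and exponentiating is exactly the geometric mean.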
@Ben Pace I would like a vote here on what percentage chance we think that an omniscient reviewer would say this narrative is true. Then display it on an axis, probably with dots (anonymous) for each person, eg like this.
I want to run one of @Ben Pace's polls at the bottom here. Please could people put statements that they might want to agree or disagree with relating to this essay as comments here. Some starters:
- If the UK wants to grow then it would do well to give energy production, housing and infrastructure a higher priority
- France is able to be dysfunctional and still wealthy because it gets the basics of housing, energy and infrastructure right
- The UK Town and Country Planning Act was probably very damaging
- If the UK wants more growth it should build more housing where people want to live
- I think that cities would generally grow more if they had more people in them
I made a poll of statements from the manifold comment section to try and understand our specific disagreements. Feel free to add your own. Takes about 2 minutes to fill in.
I read @TracingWoodgrains piece on Nonlinear and have further updated that the original post by @Ben Pace was likely an error.
I have bet accordingly here.
I am really annoyed by the Twitter thread about this paper. I doubt it will hold up, and it's been seen 450k times. Hendrycks had ample opportunity after initial skepticism to remove it, but chose not to. I expect this to have reputational costs for him and for AI safety in general. If people think he (and by association some of us) are charlatans for saying one thing and doing another in terms of being careful with the truth, I will have some sympathy with their position.
This market is now both very liquid by Manifold standards and confident that there are flaws in the paper.
https://manifold.markets/NathanpmYoung/will-there-be-substantive-issues-wi?r=TmF0aGFucG1Zb3VuZw (I thought manifold embed worked?)
I think the post moved me in your direction, so I think it was fine.
Communication question.
How do I talk about low probability events in a sensical way?
eg "RFK Jr is very unlikely to win the presidency (0.001%)" This statement is ambiguous. Does it mean he's almost certain not to win or that the statement is almost certainly not true?
I know this sounds wonkish, but it's a question I come to quite often when writing. I like to use words but also include numbers in brackets or footnotes. But if there are several forecasts in one sentence with different directions it can be hard to understand.
"Kamala is a slight favourite in the election (54%), but some things are clearer. She'll probably win Virginia (83%) and probably loses North Carolina (43%)"
Something about the North Carolina subclause rubs me the wrong way. It requires several cycles to think "does the 43% mean the win or the loss". Options:
- As is
- "probably loses North Carolina (43% win chance)" - this takes up quite a lot of space while reading. I don't like things that break the flow
As for voids, they can create weak points; I think they were the reason the Cybertruck hitch broke off in this test.
Though as I understand it that test was after a load of other tests. Perhaps relevant.
What do you think P(doom from corporations) is? I've never heard much worry about current non-AI corps.
Sure, but experts could have failed to agree that AI is quite risky, and they do agree. This is important evidence in favour, especially to the extent they aren't your ingroup.
I'm not saying people should consider it a top argument, but I'm surprised how it falls on the ranking.
I made that market. Some thoughts
1. Seems kind of inaccurate not to put in Matt's tweet, particularly if you're gonna call it "objective sounding".
Matt himself says "seriously but not literally". So he agrees with you, I think.
2. Regarding fees on conditional markets, I don't know.
3.
all but a very few markets are pure popularity contests, dominated by those who don't mind locking up their mana for a month for a guaranteed 1% loss.
I don't know that this is as big a problem as it seems. A popularity contest where it costs something to vote is a better kind than we usually see. Overall I agree that conditional markets aren't to be taken too seriously, but I think the tone of this is probably too negative for this audience. This one went fine.
4.
Out of epistemic cooperativeness as well as annoyance, I spent small amounts of mana on the markets where it was cheap to reset implausible odds closer to Harris' overall odds of victory.
Thanks. I did the same. Overall I thought the markets seemed to say pretty sensible things.
You have my and @katjagrace’s permission to test out other poll formats if you wish.
I'm surprised that the least compelling argument here is Expert opinion.
Anyone want to explain to me why they dislike that one? It looks obviously good to me?
I would perhaps prefer we had a list of three things we don't discuss (say Politics, Race science and Infohazards) and if we want to not discuss a new thing we have to allow discussion of one of those others. Seems better to be clear what isn't being discussed.
I disagree. They don't need to be reasonable so much as I now have a big stick to beat the journalist with if they aren't.
"I can't change my headlines"
"But it is your responsibility right?"
"No"
"Oh, were you lying when you said it was?"
I have 2 so far. One journalist agreed with no bother. The other frustratedly said they couldn't guarantee that and tried to negotiate. I said I was happy to take a bond; they said no, which suggested they weren't that confident.
Thanks to the people who use this forum.
I try to think about things better, and it's great to have people to do so with, flawed as we are. In particular @KatjaGrace and @Ben Pace.
I hope we can figure it all out.
So far a journalist just said "sure". So n = 1 it's fine.
Trying out my new journalist strategy.
Did you reformat all the footnotes or do you have a tool for that?
My main takeaway from this series is that Carlsmith seems to be gesturing at some important things where I want a more diagrammy, mathsy approach to come along after.
What does "Green" look like in more blue terms? When specifically might we want to be paperclippers and when not? Where are the edges of the different concepts?
So by my metric, Yudkowsky and Lintemandain's Dath Ilan isn't neutral, it's quite clearly lawful good, or attempting to be. And yet they care a lot about the laws of cognition.
So it seems to me that the laws of cognition can (should?) drive towards flourishing rather than pure knowledge increase. There might be things that we wish we didn't know for a bit. And ways to increase our strength to heal rather than our strength to harm.
To me it seems a better rationality would be lawful good.
Yeah I find the intention vs outcome thing difficult.
What do you think of "average expected value across small perturbations in your life"? Like, if you accidentally hit Churchill with a car and so cause the UK to lose WW2, that feels notably less bad than deliberately trying to kill a much smaller number of people. In many nearby universes you didn't kill Churchill, but in many nearby universes that person did kill all those people.
Here is a 5 minute, spicy take of an alignment chart.
What do you disagree with?
To try and preempt some questions:
Why is rationalism neutral?
It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.
Why are e/accs and EAs in the same group?
In the quick moments I took to make this, I found both EA and e/acc pretty hard to predict and pretty uncertain in overall impact across some range of forecasts.
Under-considered might be more accurate?
And yes, I agree that seems bad.
Joe Rogan (the largest podcaster in the world) repeatedly giving concerned but mediocre x-risk explanations suggests that people who have contacts with him should try and get someone on the show to talk about it.
eg listen from 2:40:00, though there were several bits like this during the show.
Weakly endorsed
“Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was Oh no, not again. Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”
The Hitchhiker’s Guide To The Galaxy, Douglas Adams
Feels like FLI is a massively underrated org. Cos of the whole Vitalik donation thing they have like $300mn.