Comment by philh on Podcast - Putanumonit on The Switch · 2019-06-27T14:30:17.668Z · score: 2 (1 votes) · LW · GW

Is a transcript available, or likely to become so?

Comment by philh on Does scientific productivity correlate with IQ? · 2019-06-20T14:23:36.576Z · score: 13 (4 votes) · LW · GW

In the case of Feynman, I just don't believe that his IQ was only 126.

I remembered gwern talking about this and found this comment on the subject: https://news.ycombinator.com/item?id=1159719

Comment by philh on Recommendation Features on LessWrong · 2019-06-19T07:56:55.436Z · score: 15 (5 votes) · LW · GW

Fwiw, on other sites I sometimes find that I see something interesting just as I'm clicking away, and then when I come back the interesting thing is gone. Making the recommendations a little sticky would help with that. (I see they don't reload if I use the back button, so that might be sufficient.)

Comment by philh on LessWrong FAQ · 2019-06-17T14:54:47.605Z · score: 8 (2 votes) · LW · GW

Hijacking this as a typo thread:

Our commenting guidelines state that is preferable:

If you disagree with some,

Comment by philh on Learning magic · 2019-06-13T14:47:48.112Z · score: 2 (1 votes) · LW · GW

If you take a lesson in London, and if you'd be interested in having people join you, then I might be interested in joining you.

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-10T22:02:51.424Z · score: 2 (1 votes) · LW · GW

Thanks, that makes sense.

Rambling:

In the specific case of iteration, I'm not sure that works so well for multiplayer games? It would depend on details, but e.g. if a player's only options are "cooperate" or "defect against everyone equally", then... mm, I guess "cooperate iff everyone else cooperated last round" is still stable, just a lot more fragile than with two players.
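
To make "fragile" concrete, here's a toy sketch (my own illustration, not from the post; assume simultaneous moves and a single accidental defection):

```python
# Toy sketch: N players each play "cooperate iff everyone else cooperated last
# round". A single accidental defection is enough to collapse cooperation for good.

def play(n_players, rounds, noise_round):
    history = [[True] * n_players]  # round 0: everyone cooperates
    for r in range(1, rounds):
        prev = history[-1]
        current = []
        for i in range(n_players):
            others_cooperated = all(prev[j] for j in range(n_players) if j != i)
            move = others_cooperated
            if r == noise_round and i == 0:
                move = False  # player 0 slips up once
            current.append(move)
        history.append(current)
    return history

for r, moves in enumerate(play(n_players=5, rounds=6, noise_round=2)):
    print(r, ["C" if m else "D" for m in moves])
# After the one slip in round 2, the group never gets back to full cooperation.
```

(With more players there are also simply more chances for somebody to slip, which is most of what I mean by fragile here.)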

But you did say it's difficult, so I don't think I'm disagreeing with you. The PD-ness of it still feels more salient to me than the SH-ness, but I'm not sure that particularly means anything.

I think actually, to me the intuitive core of a PD is "players can capture value by destroying value on net". And I hadn't really thought about the core of SH prior to this post, but I think I was coming around to something like threshold effects; "players can try to capture value for themselves [it's not really important whether that's net positive or net negative]; but at a certain fairly specific point, it's strongly net negative". Under these intuitions, there's nothing stopping a game from being both PD and SH.

Not sure I'm going anywhere with this, and it feels kind of close to just arguing over definitions.

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-09T21:15:50.132Z · score: 2 (1 votes) · LW · GW

Meta: you have a few asterisks which I guess are just typos, but which caused me to go looking for footnotes that don't exist. "Philosophy Fridays*", "Follow rabbit trails into Stag* Country", "Follow rabbit trails that lead into *Stag-and-Rabbit Country".

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-09T12:18:55.728Z · score: 2 (1 votes) · LW · GW

Most problems which initially seem like Prisoner's Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available.

I'm not sure I follow, can you elaborate?

Is the idea that everyone can attempt to enforce norms of "cooperate in the PD" (stag), or not enforce those norms (rabbit)? And if you have enough "stag" players to successfully "hunt a stag", then defecting in the PD becomes costly and rare, so the original PD dynamics mostly drop out?

If so, I kind of feel like I'd still model the second level game as a PD rather than a stag hunt? I'm not sure though, and before I chase that thread, I'll let you clarify whether that's actually what you meant.

Comment by philh on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-02T08:24:26.748Z · score: 4 (2 votes) · LW · GW

I wouldn't feel any need to add a disclaimer here if the text I was recommending were The Brothers Karamazov, though I'd want to briefly say why it's relevant, and I might worry about the length.

Not sure if this was deliberate on your part, but note that HPMOR is almost twice the length of Karamazov. (662k vs 364k.)

Comment by philh on "But It Doesn't Matter" · 2019-06-01T19:33:52.741Z · score: 6 (3 votes) · LW · GW

I feel it's important here to distinguish between "H doesn't matter [like, at all]" and "H is not a crux [on this particular question]". The second is a much weaker claim. And I suspect a more common one, if not in words then in intent, though I can't back that up.

(I'm thinking of conversations like: "you support X because you believe H, but H isn't true" / "no, H actually doesn't matter. I support X because [unrelated reason]".)

Comment by philh on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T19:16:13.991Z · score: 4 (2 votes) · LW · GW

Typo: "Mem­bers on LessWrong rely on many of the ideas from their writ­ers in their own posts,"

I guess that should be "these writers".

Comment by philh on Naked mole-rats: A case study in biological weirdness · 2019-05-22T13:05:55.916Z · score: 11 (6 votes) · LW · GW

Wikipedia says the Damaraland mole-rat (closely related, but in a different family) is also eusocial, and apparently it too spends its whole life underground. https://en.wikipedia.org/wiki/Damaraland_mole-rat

They can also survive low oxygen levels. (I can't immediately see directly comparable numbers.) But the other weirdnesses either don't seem to apply to them, or are unmentioned.

(Fun fact: autocomplete aptly gave me "the damaraland mole rat is also ridiculous".)

Comment by philh on Tales From the American Medical System · 2019-05-14T15:01:40.885Z · score: 13 (4 votes) · LW · GW

There was no realistic scenario in which this would cost your friend more than the plans they made for the next day!

Do note that, although they probably weren't in this case, the consequences of dropping your plans for the next day can be almost arbitrarily bad.

For example, it might cause you to lose your job, which in turn causes you to lose your health insurance and your flat.

(In another situation, you might be told "either you work tomorrow or you're fired", and someone could tell you that you're not in danger of losing more than your plans for tomorrow. But in that situation, maybe your plans for tomorrow include "visiting the doctor to get the insulin you need to remain alive".)

A reckless introduction to Hindley-Milner type inference

2019-05-05T14:00:00.862Z · score: 16 (4 votes)
Comment by philh on Hull: An alternative to shell that I'll never have time to implement · 2019-05-04T16:52:48.851Z · score: 3 (2 votes) · LW · GW

Could be run on a ramdisk when you're not too worried about power failures and such.

Comment by philh on Many maps, Lightly held · 2019-04-30T21:36:28.304Z · score: 2 (1 votes) · LW · GW

The fable of the rational vampire. (I wish I had a link to credit the author). The rational vampire casually goes through life rationalising away the symptoms – "I'm allergic to garlic", "I just don't like the sun". "It's impolite to go into someone's home uninvited, I'd be mortified if I did that". "I don't take selfies" and on it goes. Constant rationalisation.

Perhaps I'm missing the point, but it's far from obvious to me that this hypothetical rationalist is wrong. Vampires don't exist, after all. It is more likely that I have a mental illness that makes me think I have vampire-like symptoms, than that I'm actually a vampire. Rationalists should be more confused by fiction than reality, and I think that extends to fictional rationalists living in a fictional world that differs from our own only by isolated facts.

(Like, in a world with vampires, there should be reasons to believe in vampires that don't apply in our world. In a world with Santa Claus, there should be reason to believe in Santa Claus - if he puts presents under trees, "presents mysteriously appear under trees" should be a known fact about the world.)

Comment by philh on Counterspells · 2019-04-28T20:43:52.081Z · score: 5 (3 votes) · LW · GW

Agree that this is a marginal improvement over just naming fallacies, or even (as I've sometimes done) naming and giving a link to the definition.

Proposed counterspell to Bulverism - "well, maybe Dr Robotnik is wrong about hedgehogs spreading tuberculosis, and if so, it's plausible that his hatred for one particular hedgehog is clouding his judgement. But you still haven't actually convinced me that he's wrong."

To the fallacy fallacy - "you're absolutely right that GPT-2's argument for banning tofu is riddled with fallacies. But you seem to suggest that that means we shouldn't ban tofu; I still think there are good arguments for doing so".

As an aside, I don't think any of your "typical examples" (of murder, theft, racism) are actually typical in the sense of common. I would rather say "a prototypical example", which seems more technically accurate and almost as legible.

Comment by philh on Crypto quant trading: Intro · 2019-04-25T13:30:17.126Z · score: 2 (1 votes) · LW · GW

Minor comments:

price_change is the multiplicative factor, such that: new_price = old_price * price_change; additionally, if we had long position, our portfolio would change: new_portfolio_usd = old_portfolio_usd * price_change.

These should be "(price_change + 1)", right?
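
To spell out what I mean (a throwaway sketch of my own, assuming price_change is a fractional return like 0.02 for +2%, rather than a ratio like 1.02):

```python
old_price = 100.0
price_change = 0.02  # a +2% return

# As quoted (treating the return as if it were a ratio):
new_price = old_price * price_change            # 2.0 -- not what's intended
# With the suggested "+ 1":
new_price = old_price * (price_change + 1)      # 102.0

# Same correction for the portfolio value under a long position:
old_portfolio_usd = 1000.0
new_portfolio_usd = old_portfolio_usd * (price_change + 1)  # 1020.0
```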

One neat thing about that that I didn't realize myself is that it looks like the SMA has done a decent job of acting as support/resistance historically.

I'm not familiar with support/resistance, can you clarify?

Comment by philh on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T19:24:39.320Z · score: 4 (2 votes) · LW · GW

I'm still not entirely clear what you mean by "MWI proves too much".

If I try to translate this into simpler terms, I get something like: MWI only matches our observations if we apply the Born rule, but it doesn't motivate the Born rule. So there are many sets of observations that would be compatible with MWI, which means P(data | MWI) is low and that in turn means we can't update very much on P(MWI | data).

Is that approximately what you're getting at?

(That would be a nonstandard usage of the phrase, especially given that you linked to the wikipedia article when using it. But it kind of fits the name, and I can't think of a way for the standard usage to fit.)

Comment by philh on Slack Club · 2019-04-18T14:55:49.386Z · score: 4 (2 votes) · LW · GW

The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.

Here's a 2012 comment (predating the map by two years) in which someone describes himself as a post-rationalist to distinguish himself from rationalists: https://www.lesswrong.com/posts/p5jwZE6hTz92sSCcY/son-of-shit-rationalists-say#ryJabsxh7m9TPocqS

The post rats may not have popularised the term as well as Scott did, but I think that's mostly just because Scott is way more popular than them.

To the extent that there's a new person who has a similar founder position right now that's Scott Alexander and not anybody who self-identifies as post-rationalist.

Well, the claim was about what the post rats were (consciously or not) trying to do, not about whether they were successful.

And I think Scott has rebranded the movement, in a relevant sense. There's a lot of overlap, but SSC is its own thing, with its own spinoffs. E.g. I believe most SSC readers don't identify as rationalists.

("Rebranding" might be better termed "forking".)

Comment by philh on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T13:30:54.916Z · score: 2 (1 votes) · LW · GW

I stipulate that nearly anything can be a consequence of MWI, but not with equal probability. If I see a thousand quantum coinflips in a row all come up heads, I don't think "well, under MWI anything is possible, so I haven't learned anything". So I'm not sure in what sense you think it proves too much.

(I think this is roughly what Villiam was getting at, though I can't speak for him.)

Comment by philh on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-12T19:58:43.702Z · score: 5 (3 votes) · LW · GW

If an IQ test came back significantly higher than I expect, then I'd start to think I'm underperforming my potential, and I'd look for reasons for that. Perhaps there are skills missed by the test that I'm unusually weak on, like maybe I give up quicker than average if I find something hard. Then I can work on those skills, or position myself to avoid relying on them.

Conversely, if it came back lower than I expect, I'd think that perhaps I'm unusually strong in those skills, and I'd be able to position myself to take advantage of them.

Comment by philh on The Case for The EA Hotel · 2019-04-09T13:12:54.835Z · score: 4 (2 votes) · LW · GW

I observe that less vetting means fewer decisions and less costs for the Hotel. Further, if demand for slots is low enough that no vetting is required, this effectively makes the project zero-risk to the Hotel.

This seems to assume the marginal cost to the hotel of taking on a guest is negligible. That does seem plausible to me, but it's worth highlighting explicitly.

Comment by philh on You Have About Five Words · 2019-03-18T07:58:21.794Z · score: 2 (1 votes) · LW · GW

I had been under the impression that Hillary's was "I'm with her"? But I think I mostly heard that in the context of people saying it was a bad slogan.

Comment by philh on Graceful Shutdown · 2019-02-22T14:28:05.994Z · score: 2 (1 votes) · LW · GW

The point I am trying to make here is that while hard-cancel signal travels necessarily out-of-band, the graceful shutdown signal must be, equally necessarily, passed in-band.

Minor, but in-band vs out-of-band isn't really a firm distinction. Like, there's a sense in which SIGINT is in-band and SIGKILL is out-of-band, but I think that most of the time, that's not the natural way to think of it.

Comment by philh on Graceful Shutdown · 2019-02-21T14:54:58.039Z · score: 2 (1 votes) · LW · GW

It's true that you need to be able to handle hard disconnects, but sometimes a graceful shutdown will give a better experience than a hard one is capable of. E.g. "close all connections, flush, then shutdown" might avoid an expensive "restore from journal" when you next start up.
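
For illustration, a minimal sketch of that pattern (my own toy example, not from the post; Unix-only, and flush_state stands in for whatever cleanup the application actually needs):

```python
import signal
import sys

def flush_state():
    # Stand-in: close connections, write buffered data, mark the journal clean.
    print("flushing state before exit")

def handle_graceful(signum, frame):
    flush_state()
    sys.exit(0)  # clean exit: next startup can skip "restore from journal"

signal.signal(signal.SIGTERM, handle_graceful)
signal.signal(signal.SIGINT, handle_graceful)  # Ctrl-C gets the same treatment

signal.pause()  # wait around; SIGKILL would bypass all of this, hence the journal
```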

Comment by philh on Why is this utilitarian calculus wrong? Or is it? · 2019-01-30T14:55:54.404Z · score: 3 (2 votes) · LW · GW

My intuition here is: Giving someone money moves wealth around. Creating a widget (at $20 cost, which at least one person values at $30), produces wealth. So [the world where a widget gets created] has more total wealth than [the one where it doesn't], and so it's not surprising if your moral calculus values it more highly.

Comment by philh on The Very Repugnant Conclusion · 2019-01-24T15:52:25.681Z · score: 2 (1 votes) · LW · GW

That doesn't fix it, it just means you need bigger numbers before you run into the problem.

Maybe if you have an asymptote, but I fully expect that you still run into problems then.
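
To spell out the "bigger numbers" point (a rough sketch of my own, assuming the proposal is some kind of diminishing weight on additional lives):

```latex
% Sketch: let V(N, \epsilon) be the total value of N lives at welfare \epsilon > 0.
% If V(N, \epsilon) \to \infty as N \to \infty, then for any finite utopia value U:
\exists N : V(N, \epsilon) > U
% e.g. with a diminishing weight V(N, \epsilon) = \epsilon \sqrt{N},
% it suffices to take N > (U / \epsilon)^2: bigger numbers, same problem.
% Only if V(N, \epsilon) is bounded above in N (an asymptote) does this step fail.
```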

Comment by philh on What are the open problems in Human Rationality? · 2019-01-20T14:42:36.263Z · score: 7 (4 votes) · LW · GW

Are those comparable, though? My model of open source is that it prototypically looks like someone building something that's useful for themselves, then other people also find it useful and help to work on it (with code, bug reports, feature requests). But that first step doesn't really exist for LW2, because until you're ready to migrate the whole site, the software has very little value to anyone.

Can you think of any open source projects where the first useful version seems comparable in effort to LW2, and that had no financial backing for the first useful version?

Edit: some plausible candidates come to mind, though I wouldn't bet on any of them. Operating systems (e.g. Linux kernel, Haiku, MenuetOS); programming languages and compilers for them (e.g. gcc, Perl, Python, Ruby); and database engines (e.g. postgres, mongo, neo4j).

(Notably, I'd exclude something like Elm from the languages list because I think it was a master's or PhD project, so it was funded by a university.)

Comment by philh on Why Don't Creators Switch to their Own Platforms? · 2018-12-26T19:41:58.067Z · score: 2 (1 votes) · LW · GW

Another example would be Rooster Teeth. They have a bunch of stuff on YouTube, but at least some content that's exclusive to their site. (I'm specifically thinking of the latest season of RWBY; I don't know if there are other examples.)

Comment by philh on Player vs. Character: A Two-Level Model of Ethics · 2018-12-20T16:03:13.241Z · score: 2 (1 votes) · LW · GW

To the contrary, this does not get you one iota closer to "ought".

This is true, but I do think there's something being pointed at that deserves acknowledging.

I think I'd describe it as: you don't get an ought, but you do get to predict what oughts are likely to be acknowledged. (In future/in other parts of the world/from behind a veil of ignorance.)

That is, an agent who commits suicide is unlikely to propagate; so agents who hold suicide as an ought are unlikely to propagate; so you don't expect to see many agents with suicide as an ought.

And agents with cooperative tendencies do tend to propagate (among other agents with cooperative tendencies); so agents who hold cooperation as an ought tend to propagate (among...); so you expect to see agents who hold cooperation as an ought (but only in groups).

And for someone who acknowledges suicide as an ought, this can't convince them not to; and for someone who doesn't acknowledge cooperation, it doesn't convince them to. So I wouldn't describe it as "getting an ought from an is". But I'd say you're at least getting something of the same type as an ought?

Comment by philh on What podcasts does the community listen to? · 2018-12-18T19:29:28.428Z · score: 3 (2 votes) · LW · GW

Curious why this was downvoted? It's not a literal answer to the question, but it seems reasonably likely to satisfy the intent of the question.

Comment by philh on Who's welcome to our LessWrong meetups? · 2018-12-13T21:40:13.830Z · score: 8 (5 votes) · LW · GW

For the London meetups I write this:

We're a fortnightly London-based meetup for members of the rationalist diaspora. The diaspora includes, but is not limited to, LessWrong, Slate Star Codex, rationalist tumblrsphere, and parts of the Effective Altruism movement.

You don't have to identify as a rationalist to attend: basically, if you think we seem like interesting people you'd like to hang out with, welcome! You are invited. You do not need to think you are clever enough, or interesting enough, or similar enough to the rest of us, to attend. You are invited.

Comment by philh on Is cognitive load a factor in community decline? · 2018-12-12T20:14:59.963Z · score: 6 (3 votes) · LW · GW

The quote doesn't say explicitly, so just to make sure we're on the same page: I take from this that yes, when more looms were tended, each loom required less attention. Do you agree?

Comment by philh on LW Update 2018-12-06 – Table of Contents and Q&A · 2018-12-11T07:58:47.402Z · score: 10 (2 votes) · LW · GW

I'm curious to know why you would ever not want to follow the heading syntax, if what you want to produce is a heading?

I've sometimes used regular bold for headings, I think mostly because it's lower friction. I don't need to think about what level of heading I should use semantically, or how that level actually renders.

Comment by philh on Is cognitive load a factor in community decline? · 2018-12-10T15:49:18.932Z · score: 7 (4 votes) · LW · GW

over the course of the 19th century the average Lancashire operative roughly doubled the number of machines tended

approximately 1/4 of the 50-fold increase in cloth output per worker-hour between 1800 and 1900 was due to each weaver simply operating more looms than they had done initially.

A question jumps out to me: did each individual machine and loom require equal effort at the beginning and end of these periods? It could be that more machines were tended because each machine required less tending.

Comment by philh on Act of Charity · 2018-11-20T15:29:47.174Z · score: 3 (2 votes) · LW · GW

You quoted the conclusion, not the argument. The argument is based on skepticism that rocking the boat will do much good.

Comment by philh on Rationality Is Not Systematized Winning · 2018-11-15T15:29:03.933Z · score: 8 (5 votes) · LW · GW

Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default, you need to do more than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults.

I don't think systematized winning is the default. Some people follow societal defaults and win systematically, but I think that more people follow societal defaults and just do pretty okay.

Comment by philh on Octopath Traveler: Spoiler-Free Review · 2018-11-11T12:13:38.962Z · score: 2 (1 votes) · LW · GW

It is great to have the flexibility to go where you feel like going, and do what you feel like doing, in any order, and even bypass things entirely if you are so inclined.

It sounds like someone who enjoyed the first half of Final Fantasy VI, but not so much the second half, might not be a good fit for Octopath Traveler. Does that seem accurate to you?

Comment by philh on Mark Eichenlaub: How to develop scientific intuition · 2018-10-28T13:33:04.644Z · score: 3 (2 votes) · LW · GW

Thanks! "It's nonequilibrium" feels like it points at my specific mistake. Apparently my intuitions don't currently always remember to consider that question.

Comment by philh on Mark Eichenlaub: How to develop scientific intuition · 2018-10-27T11:58:02.021Z · score: 4 (2 votes) · LW · GW

One solution to that problem is that when the rock is on the bottom of the lake, it exerts more force on that part of the bottom of the lake than is exerted at other places. By contrast, when the rock is still in the boat, the only thing touching the bottom of the lake is water, and the water pressure is the same everywhere, so the weight of the rock is distributed evenly across the entire lake.

This confuses me because it sounds like the situation "rock is fully submerged and sinking but still near the top of the lake" would be analysed like the situation "rock is on boat", not "rock is on bottom of lake". But that would give the wrong answer.

What am I missing?

Comment by philh on Why do we like stories? · 2018-10-26T21:47:08.550Z · score: 4 (2 votes) · LW · GW

Aristotle, the philosopher and one of the first story analyzers, recognized that every story contains a three-act structure: Beginning, Middle, and End. The structure roughly corresponds to Olson’s "And", "But" and "Therefore".

That doesn't seem true to me? If I split the Wizard of Oz into three acts, your ABT covers acts 1 (Kansas) and 2 (traveling to the Emerald City) but not 3 (challenging the Wicked Witch of the West).

If I gave an ABT for Harry Potter and the Philosopher's Stone, I think it would do the same. ("...But he's actually a wizard, therefore he goes off to magic school". Versus act 1 with the Dursleys, act 2 in Hogwarts, act 3 when they go down the corridor.)

Of course you could split them in other places. Perhaps the beginning is until Dorothy runs away, the middle is until she returns and gets sucked into a twister, and the end is all of her time in Oz. But I don't think anyone would naturally choose to split it that way, and if you choose a split to make the correspondence work, then the correspondence says nothing at all.

Upon returning to the tribe, the savage starts relating the experience, including the location and time he met the tiger. This describes the introduction to the problem which is the "AND". After this, he describes what he did to avoid the danger: He escaped to a secure cave. This is the stage when he is trying to find a solution to the problem: the "BUT". Finally, he explains how he solved the problem by entering a cave to avoid danger: the “Therefore”.

It sounds like the story would be something like "I was walking through the forest AND I saw a tiger BUT I ran into a cave THEREFORE I escaped"? But here the AND introduces conflict, while earlier you said the BUT did that.

Comment by philh on case study: iterative drawing · 2018-10-21T19:12:04.769Z · score: 2 (1 votes) · LW · GW

It looks like a large chunk of this post is missing? I see four footnotes but only a reference to one of them.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-19T09:38:52.276Z · score: 4 (2 votes) · LW · GW

Thanks.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-18T13:57:05.098Z · score: 9 (5 votes) · LW · GW

When I saw Gurkenglas' comment, I had a quick think for "a name for the class of things that "punching" is a metaphor for", and didn't come up with anything. But I agree that "sanctions" fits, so thanks for supplying that word.

Still, I'm basically going to ignore this criticism. Not that it's necessarily unfair or incorrect or anything. (It doesn't strike me as particularly salient. But I may be atypical, or I may be too close to be objective.)

But I'm not confident I could have reliably anticipated it without also anticipating a bunch of other potential criticisms that would seem similarly important. And I have a hard enough time writing something that satisfies myself. I don't want to add more prune.

As an aside: I assume it's just an oversight, but I would prefer if you link your copy back to the original, since it's publicly listed.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-18T11:07:34.119Z · score: 2 (1 votes) · LW · GW

I was thinking of actions, not motivations. If Alice wants to convince people to punch Bob, then her motivations (punishment, protection, deterrence, restoration) will be relevant to what sort of arguments she makes and whether other people are likely to agree. But I don't think they're particularly relevant to the contents of this post.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-17T22:40:21.905Z · score: 2 (1 votes) · LW · GW

that’s very different from the decisions philh is talking about.

So, I've had the feeling from all of your comments on this thread that you think I'm talking about something different from what I think I'm talking about. I've not felt like going to the effort of teasing out the confusion, and I still don't. But I would like to make it clear that I do not endorse this statement.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-17T13:46:42.556Z · score: 2 (1 votes) · LW · GW

To be clear, I think this is a good (prosocial) way for individuals to act. I'm not trying to advocate that we should make it a community norm.

But I'm unconvinced by this particular failure mode.

if there is a norm like this in place, Alice always has a strong incentive to pretend that she is punching based on some generally accepted theory, and that the only thing that needs arguing is the application of this theory to Bob (point 2).

Surely this incentive exists anyway for Alice? There's no existing norm against what I propose.

she will almost certainly be able to convincingly pass it off as such.

I don't see why this would be. At least not any general principle that her readers will be familiar with and agree with, which is what would be required.

I'm not suggesting that after Alice publishes part (2), people who don't think "punching Bob is better than the alternatives" should punch Bob. Alice doesn't just need to convince people that there is an argument for punching Bob, she needs to convince people to punch Bob.

Comment by philh on "Now here's why I'm punching you..." · 2018-10-17T11:54:30.434Z · score: 5 (2 votes) · LW · GW

I agree with Raemon here. It would be good to think about ambiguous cases in advance, and I like the idea that fiction is one way of doing so.

But ambiguous cases are still going to come up, and you need to have some way of dealing with them. (And if you deal with them by never punching anyone, then you're encouraging bad actors to seek them out.)

"Now here's why I'm punching you..."

2018-10-16T21:30:01.723Z · score: 32 (19 votes)
Comment by philh on Why don’t we treat geniuses like professional athletes? · 2018-10-15T18:31:42.789Z · score: 5 (3 votes) · LW · GW

Something I don't think anyone's said explicitly is that athletic performance is more legible than intellectual performance. If a new breakfast food makes you 1% faster or slower, that's not necessarily easy to notice, but it's easier than noticing if it makes you 1% "better at thinking".

Comment by philh on Modes of Petrov Day · 2018-09-24T13:53:43.048Z · score: 4 (2 votes) · LW · GW

Hm. I'd been thinking the whole thing would work better if each party could perform some small negative-sum defection against the other. Along the lines of: each party commits to destroy $10, and has the ability to restore $1 to themselves while increasing the other party's obligation by $2, up to a max of $30. (And after either party gets nuked, the money values remain fixed.)
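
To make the numbers concrete, a rough sketch of the state I have in mind (assuming "restore $1" means reducing your own destruction obligation by $1; timing and simultaneity left unspecified):

```python
# Sketch of the proposed mechanism; each defection is negative-sum on net
# (+$1 for the actor, -$2 for the other party).

class Party:
    def __init__(self, name):
        self.name = name
        self.obligation = 10  # dollars this party has committed to destroy

def defect(actor, other):
    """Actor claws back $1 of their own obligation and raises the other
    party's obligation by $2, up to the $30 cap."""
    if other.obligation >= 30:
        return False  # cap reached; no further defection possible
    actor.obligation = max(actor.obligation - 1, 0)
    other.obligation = min(other.obligation + 2, 30)
    return True

a, b = Party("A"), Party("B")
defect(a, b)
print(a.obligation, b.obligation)  # 9 12
```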

I think that would be a good thing for us to practice, but I agree the "just don't press the button that you have no reason to press anyway" variant is also good to practice.

Pareto improvements are rarer than they seem

2018-01-27T22:23:24.206Z · score: 58 (21 votes)

2017-10-08 - London Rationalish meetup

2017-10-04T14:46:50.514Z · score: 9 (2 votes)

Authenticity vs. factual accuracy

2016-11-10T22:24:38.810Z · score: 5 (9 votes)

Costs are not benefits

2016-11-03T21:32:07.811Z · score: 5 (6 votes)

GiveWell: A case study in effective altruism, part 1

2016-10-14T10:46:23.303Z · score: 0 (1 votes)

Six principles of a truth-friendly discourse

2016-10-08T16:56:59.994Z · score: 4 (7 votes)

Diaspora roundup thread, 23rd June 2016

2016-06-23T14:03:32.105Z · score: 5 (6 votes)

Diaspora roundup thread, 15th June 2016

2016-06-15T09:36:09.466Z · score: 24 (27 votes)

The Sally-Anne fallacy

2016-04-11T13:06:10.345Z · score: 27 (27 votes)

Meetup : London rationalish meetup - 2016-03-20

2016-03-16T14:39:40.949Z · score: 0 (1 votes)

Meetup : London rationalish meetup - 2016-03-06

2016-03-04T12:52:35.279Z · score: 0 (1 votes)

Meetup : London rationalish meetup, 2016-02-21

2016-02-20T14:09:42.635Z · score: 0 (1 votes)

Meetup : London Rationalish meetup, 7/2/16

2016-02-04T16:34:13.317Z · score: 1 (2 votes)

Meetup : London diaspora meetup: weird foods - 24/01/2016

2016-01-21T16:45:10.166Z · score: 1 (2 votes)

Meetup : London diaspora meetup, 10/01/2016

2016-01-02T20:41:05.950Z · score: 2 (3 votes)

Stupid questions thread, October 2015

2015-10-13T19:39:52.114Z · score: 3 (4 votes)

Bragging thread August 2015

2015-08-01T19:46:45.529Z · score: 3 (4 votes)

Meetup : London meetup

2015-05-14T17:35:18.467Z · score: 2 (3 votes)

Group rationality diary, May 5th - 23rd

2015-05-04T23:59:39.601Z · score: 7 (8 votes)

Meetup : London meetup

2015-05-01T17:16:12.085Z · score: 1 (2 votes)

Cooperative conversational threading

2015-04-15T18:40:50.820Z · score: 25 (26 votes)

Open Thread, Apr. 06 - Apr. 12, 2015

2015-04-06T14:18:34.872Z · score: 4 (5 votes)

[LINK] Interview with "Ex Machina" director Alex Garland

2015-04-02T13:46:56.324Z · score: 6 (7 votes)

[Link] Eric S. Raymond - Me and Less Wrong

2014-12-05T23:44:57.913Z · score: 23 (23 votes)

Meetup : London social meetup in my flat

2014-11-19T23:55:37.211Z · score: 2 (2 votes)

Meetup : London social meetup

2014-09-25T16:35:18.705Z · score: 2 (2 votes)

Meetup : London social meetup

2014-09-07T11:26:52.626Z · score: 2 (2 votes)

Meetup : London social meetup - possibly in a park

2014-07-22T17:20:28.288Z · score: 2 (2 votes)

Meetup : London social meetup - possibly in a park

2014-07-04T23:22:56.836Z · score: 2 (2 votes)

How has technology changed social skills?

2014-06-08T12:41:29.581Z · score: 16 (16 votes)

Meetup : London social meetup - possibly in a park

2014-05-21T13:54:16.372Z · score: 2 (2 votes)

Meetup : London social meetup - possibly in a park

2014-05-14T13:27:30.586Z · score: 2 (2 votes)

Meetup : London social meetup - possibly in a park

2014-05-09T13:37:19.129Z · score: 1 (2 votes)

May Monthly Bragging Thread

2014-05-04T08:21:17.681Z · score: 10 (10 votes)

Meetup : London social meetup

2014-04-30T13:34:43.181Z · score: 2 (2 votes)

Why don't you attend your local LessWrong meetup? / General meetup feedback

2014-04-27T22:17:01.129Z · score: 25 (25 votes)

Meetup report: London LW paranoid debating session

2014-02-16T23:46:40.591Z · score: 11 (11 votes)

Meetup : London VOI meetup 16/2, plus socials 9/2 and 23/2

2014-02-07T19:17:55.841Z · score: 4 (4 votes)

[LINK] Cliffs Notes: "Probability Theory: The Logic of Science", part 1

2014-02-05T23:03:10.533Z · score: 6 (6 votes)

Meetup : Meetup : London - Paranoid Debating 2nd Feb, plus social 9th Feb

2014-01-27T15:01:16.132Z · score: 6 (6 votes)

Fascists and Rakes

2014-01-05T00:41:00.257Z · score: 41 (53 votes)

London LW CoZE exercise report

2013-11-19T00:34:21.950Z · score: 14 (14 votes)

Meetup : London social meetup - New venue

2013-11-12T14:39:13.441Z · score: 4 (4 votes)

Meetup : London social meetup, 10/11/13

2013-11-03T17:27:18.049Z · score: 2 (2 votes)

Open thread, August 26 - September 1, 2013

2013-08-26T21:00:41.560Z · score: 5 (5 votes)

Meetup : Comfort Zone Expansion outing - London

2013-08-23T13:59:04.896Z · score: 5 (5 votes)

Meetup : London Social - The Unwelcome but Probable Decline and Fall of Direct Sunlight

2013-08-08T14:08:17.043Z · score: 8 (8 votes)

Meetup : London Meetup, 28th April

2013-04-18T23:48:03.973Z · score: 2 (2 votes)