Posts

London Rationalish meetup (part of SSC meetups everywhere) 2019-09-12T20:32:52.306Z · score: 7 (1 votes)
Is this info on zinc lozenges accurate? 2019-07-27T22:05:11.318Z · score: 25 (7 votes)
A reckless introduction to Hindley-Milner type inference 2019-05-05T14:00:00.862Z · score: 16 (4 votes)
"Now here's why I'm punching you..." 2018-10-16T21:30:01.723Z · score: 32 (19 votes)
Pareto improvements are rarer than they seem 2018-01-27T22:23:24.206Z · score: 58 (21 votes)
2017-10-08 - London Rationalish meetup 2017-10-04T14:46:50.514Z · score: 9 (2 votes)
Authenticity vs. factual accuracy 2016-11-10T22:24:38.810Z · score: 5 (9 votes)
Costs are not benefits 2016-11-03T21:32:07.811Z · score: 5 (6 votes)
GiveWell: A case study in effective altruism, part 1 2016-10-14T10:46:23.303Z · score: 0 (1 votes)
Six principles of a truth-friendly discourse 2016-10-08T16:56:59.994Z · score: 4 (7 votes)
Diaspora roundup thread, 23rd June 2016 2016-06-23T14:03:32.105Z · score: 5 (6 votes)
Diaspora roundup thread, 15th June 2016 2016-06-15T09:36:09.466Z · score: 24 (27 votes)
The Sally-Anne fallacy 2016-04-11T13:06:10.345Z · score: 27 (27 votes)
Meetup : London rationalish meetup - 2016-03-20 2016-03-16T14:39:40.949Z · score: 0 (1 votes)
Meetup : London rationalish meetup - 2016-03-06 2016-03-04T12:52:35.279Z · score: 0 (1 votes)
Meetup : London rationalish meetup, 2016-02-21 2016-02-20T14:09:42.635Z · score: 0 (1 votes)
Meetup : London Rationalish meetup, 7/2/16 2016-02-04T16:34:13.317Z · score: 1 (2 votes)
Meetup : London diaspora meetup: weird foods - 24/01/2016 2016-01-21T16:45:10.166Z · score: 1 (2 votes)
Meetup : London diaspora meetup, 10/01/2016 2016-01-02T20:41:05.950Z · score: 2 (3 votes)
Stupid questions thread, October 2015 2015-10-13T19:39:52.114Z · score: 3 (4 votes)
Bragging thread August 2015 2015-08-01T19:46:45.529Z · score: 3 (4 votes)
Meetup : London meetup 2015-05-14T17:35:18.467Z · score: 2 (3 votes)
Group rationality diary, May 5th - 23rd 2015-05-04T23:59:39.601Z · score: 7 (8 votes)
Meetup : London meetup 2015-05-01T17:16:12.085Z · score: 1 (2 votes)
Cooperative conversational threading 2015-04-15T18:40:50.820Z · score: 25 (26 votes)
Open Thread, Apr. 06 - Apr. 12, 2015 2015-04-06T14:18:34.872Z · score: 4 (5 votes)
[LINK] Interview with "Ex Machina" director Alex Garland 2015-04-02T13:46:56.324Z · score: 6 (7 votes)
[Link] Eric S. Raymond - Me and Less Wrong 2014-12-05T23:44:57.913Z · score: 23 (23 votes)
Meetup : London social meetup in my flat 2014-11-19T23:55:37.211Z · score: 2 (2 votes)
Meetup : London social meetup 2014-09-25T16:35:18.705Z · score: 2 (2 votes)
Meetup : London social meetup 2014-09-07T11:26:52.626Z · score: 2 (2 votes)
Meetup : London social meetup - possibly in a park 2014-07-22T17:20:28.288Z · score: 2 (2 votes)
Meetup : London social meetup - possibly in a park 2014-07-04T23:22:56.836Z · score: 2 (2 votes)
How has technology changed social skills? 2014-06-08T12:41:29.581Z · score: 16 (16 votes)
Meetup : London social meetup - possibly in a park 2014-05-21T13:54:16.372Z · score: 2 (2 votes)
Meetup : London social meetup - possibly in a park 2014-05-14T13:27:30.586Z · score: 2 (2 votes)
Meetup : London social meetup - possibly in a park 2014-05-09T13:37:19.129Z · score: 1 (2 votes)
May Monthly Bragging Thread 2014-05-04T08:21:17.681Z · score: 10 (10 votes)
Meetup : London social meetup 2014-04-30T13:34:43.181Z · score: 2 (2 votes)
Why don't you attend your local LessWrong meetup? / General meetup feedback 2014-04-27T22:17:01.129Z · score: 25 (25 votes)
Meetup report: London LW paranoid debating session 2014-02-16T23:46:40.591Z · score: 11 (11 votes)
Meetup : London VOI meetup 16/2, plus socials 9/2 and 23/2 2014-02-07T19:17:55.841Z · score: 4 (4 votes)
[LINK] Cliffs Notes: "Probability Theory: The Logic of Science", part 1 2014-02-05T23:03:10.533Z · score: 6 (6 votes)
Meetup : Meetup : London - Paranoid Debating 2nd Feb, plus social 9th Feb 2014-01-27T15:01:16.132Z · score: 6 (6 votes)
Fascists and Rakes 2014-01-05T00:41:00.257Z · score: 41 (53 votes)
London LW CoZE exercise report 2013-11-19T00:34:21.950Z · score: 14 (14 votes)
Meetup : London social meetup - New venue 2013-11-12T14:39:13.441Z · score: 4 (4 votes)
Meetup : London social meetup, 10/11/13 2013-11-03T17:27:18.049Z · score: 2 (2 votes)
Open thread, August 26 - September 1, 2013 2013-08-26T21:00:41.560Z · score: 5 (5 votes)
Meetup : Comfort Zone Expansion outing - London 2013-08-23T13:59:04.896Z · score: 5 (5 votes)

Comments

Comment by philh on Integrating the Lindy Effect · 2019-09-13T14:20:38.025Z · score: 2 (1 votes) · LW · GW

(Note: your final equation has the << and >> swapped.)

Comment by philh on Look at the Shape of Your Utility Distribution · 2019-09-06T14:56:53.421Z · score: 2 (1 votes) · LW · GW

Since utility is only defined up to positive affine transformation, I feel like these graphs need some reference point for something like "neutral threshold" and/or "current utility". I don't think we want to be thinking of "most options are kind of okay, some are pretty bad, some are pretty good" the same as "most options are great, some are pretty good, some are super amazing".

If nothing good is going to happen, then its best option is to stop wasting resources.

That's not at all obvious. Why not "if nothing good is going to happen, there's no reason to try to conserve resources"?

Comment by philh on What Programming Language Characteristics Would Allow Provably Safe AI? · 2019-09-05T10:55:16.991Z · score: 4 (2 votes) · LW · GW

I believe Ed Kmett is working on a language while at MIRI. Probably not specifically AI safety focused, he was working on it before he joined. But maybe worth looking into. I'm not sure if there's much public info about it.

Comment by philh on A Personal Rationality Wishlist · 2019-08-30T04:54:44.427Z · score: 19 (7 votes) · LW · GW

Even having looked at a bike, there are details I don't understand, but I think not enough that I'd dispute your claim.

Derailleurs, and the transmission from the brake levers to the brake pads, seem kind of magical to me. I'm not sure if there's a detail I'm missing, or if they just work far better than I would have expected. Especially derailleurs - pulling laterally on the chain, a tiny amount, makes it move from one gear to another, even if the gears are very different sizes? (I suddenly wonder if the slow mo guys have done an episode on derailleurs.)

I wouldn't be able to tell you how stability works, either.

I reckon I understand a fixie with stabiliser wheels well enough, though.

Comment by philh on Schelling Categories, and Simple Membership Tests · 2019-08-28T12:19:57.482Z · score: 2 (1 votes) · LW · GW

The claim isn't that you'd never do that. The claim is that even when you wouldn't do it - even when you wouldn't even bother to look at x_41 from a God's eye perspective because it provides literally zero predictive power conditioned on x_1 through x_40 - even then, you might find that (as a human limited by human reasoning and human coordination) you get the best results when you use x_41 alone for your decision boundary.

Comment by philh on Goodhart's Curse and Limitations on AI Alignment · 2019-08-21T16:22:55.516Z · score: 2 (1 votes) · LW · GW

I think you credit the optimizer's curse with power that it doesn't, as described, have. In particular, it doesn't have the power that "people who try to optimize end up worse off than people who don't".

In the linked post by lukeprog, when the curse is made concrete with numbers, people who tried to optimize ended up exactly as well off as everyone else - but that's only because by assumption, all choices were exactly the same. ("there are k choices, each of which has true estimated [expected value] of 0.") If some choices are better than the others, then the optimizer's curse will make the optimizer disappointed, but it will still give her better results on average than the people who failed to optimize, or who optimized less hard. (Ignoring possible actions that aren't just "take one of these options based on the information currently available to me".)

I'm making some assumptions about the error terms here, and I'm not sure exactly what assumptions. But I think they're fairly weak.

(And if the difference between the actually-best choice and the actually-second best is large compared to the error terms, then the optimizer's curse appears to have no power at all.)
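To make that concrete, here's a minimal simulation sketch (the number of options, the spread of true values, and the noise level are all made-up illustration values): the optimizer ends up systematically disappointed, but still does better on average than someone who picks at random.

```python
import random

def simulate(n_trials=10000, k=10, noise=1.0):
    """Compare an optimizer (picks the highest noisy estimate) with a
    random chooser, and track how disappointed the optimizer is."""
    opt_total, rand_total, disappointment = 0.0, 0.0, 0.0
    for _ in range(n_trials):
        true_values = [random.gauss(0, 1) for _ in range(k)]
        estimates = [v + random.gauss(0, noise) for v in true_values]
        best_index = max(range(k), key=lambda i: estimates[i])
        opt_total += true_values[best_index]
        rand_total += random.choice(true_values)
        # Disappointment: estimated value of the chosen option minus its true value.
        disappointment += estimates[best_index] - true_values[best_index]
    print("optimizer mean true value:", opt_total / n_trials)
    print("random chooser mean true value:", rand_total / n_trials)
    print("optimizer mean disappointment:", disappointment / n_trials)

if __name__ == "__main__":
    simulate()
```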

There can be other things that go wrong, when one tries to optimize. With your shoes and your breakfast routine, it seems to me that you invested much effort in pursuit of a goal that was unattainable in one case and trivial in another. Unfortunate, but not the optimizer's curse.


I wrote the above and then realised that I'm not actually sure how much you're making the specific mistake I describe. I thought you were partly because of

attempts to optimize for a measure of success result in increased likelihood of failure to hit the desired target

Emphasis mine. But maybe the increased likelihood just comes from Goodhart's law, here? It's not clear to me what the optimizer's curse is contributing to Goodhart's curse beyond what Goodhart's law already supplies.

Comment by philh on Could we solve this email mess if we all moved to paid emails? · 2019-08-16T14:29:01.803Z · score: 6 (3 votes) · LW · GW

Tangential, but I confess I'm surprised that the model is "pay if you get a reply". I would have expected "pay if they think you wasted their time" (i.e. you attach an amount of money, they read your email and then choose to collect the money or return it to you).

I guess that would be solving a different problem. Of the four "have you ever"s from the beginning, I think it would help with like, one and a half.

Comment by philh on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-03T14:01:41.488Z · score: 2 (1 votes) · LW · GW

It also doesn’t necessarily matter whether the memories are true or not, as long as it helps the healing process along.

False memories can have negative consequences unrelated to the healing process. You might falsely remember something that causes you to think badly of someone, for example.

But even ignoring those, I feel like "I'm going to remember false things for instrumental gain" is the kind of thinking that gets people into this kind of mess.

Comment by philh on Is this info on zinc lozenges accurate? · 2019-07-31T12:36:14.128Z · score: 2 (1 votes) · LW · GW

I've put some links in a comment below. I'll edit them into the post as well when I get a chance.

Comment by philh on Is this info on zinc lozenges accurate? · 2019-07-29T15:35:55.973Z · score: 3 (2 votes) · LW · GW

Are you able to be more specific?

My feeling is that if the podcast is accurate, and you're taking them correctly, the effect should be really pronounced. (For me, the uncertainty is "did I have a cold at all?", except for the first time when I took them too late.)

If you're taking them as recommended and have an effect size like "yeah, this seems to knock a couple of days off, I think", then... while they still seem like good things to recommend, it feels like pretty strong evidence against the main thrust of the podcast. (With the caveat that I'm not sure how much individual variance to expect.) E.g. I wouldn't describe them as "these things cure colds" if that's the typical experience.

Comment by philh on Is this info on zinc lozenges accurate? · 2019-07-29T15:09:05.723Z · score: 2 (1 votes) · LW · GW

Ah, the relevant pH is 7.4, not 5, so with negative ions slightly outnumbering positive. So I guess there's a factor other than numerical quantity in why they don't bind the zinc. But "things staying in constant flux" sounds like it could be that factor, thanks :)

Comment by philh on What supplements do you use? · 2019-07-29T14:48:31.145Z · score: 6 (4 votes) · LW · GW

I'm curious why you mentioned the health risks of fish oil while linking to a page saying fish oil doesn't contain mercury. Is that not the health risk you were thinking of?

I take creatine, D3 and fish oil. (Only five days a week because I take them at work. When I keep them at home I forget.) I don't remember exactly why those. When I stopped taking them all for a while I thought I felt a bit worse in ways I no longer remember, so I started again, and possibly I then started to feel better.

Comment by philh on Is this info on zinc lozenges accurate? · 2019-07-27T23:20:29.733Z · score: 3 (2 votes) · LW · GW

Thanks!

I take it the concentration of H+ is inversely related to the concentration of negative ions, because if there's a high concentration of both, they'll just bind each other?

And when it comes to producing zinc ions from, say, zinc acetate - the H+ captures the acetate away from the zinc, but the negative ion doesn't then bind the Zn+? (Or at least not quickly enough to stop it binding in the cellular tissue?)

(This probably doesn't have much bearing on the question at hand, I'm just curious.)

Comment by philh on Is this info on zinc lozenges accurate? · 2019-07-27T23:14:44.687Z · score: 2 (1 votes) · LW · GW

I think these links are all to the correct product:

(LE US) https://www.lifeextension.com/vitamins-supplements/item01961/enhanced-zinc-lozenges

(LE Europe) https://www.lifeextensioneurope.com/enhanced-zinc-lozenges-30-vegetarian-lozenges

(LE UK) https://www.lifeextensioneurope.co.uk/enhanced-zinc-lozenges-30-vegetarian-lozenges

(Amazon UK) https://www.amazon.co.uk/Life-Extension-Enhanced-Lozenges-Vegetarian/dp/B00PYX2SVM/ref=sr_1_1?crid=30H97UHDCVWSB&keywords=enhanced+zinc+lozenges&qid=1564268937&s=gateway&sprefix=enhanced+zinc+lo%2Caps%2C149&sr=8-1 ("You purchased this item on 5 Dec 2018", so unless they changed the product while keeping the same product id, it should be good.)

Not sure what's up with your Amazon US link. The description talks about zinc methionate, but the ingredients list in the picture says acetate. I would guess it's correct.

Comment by philh on Appeal to Consequence, Value Tensions, And Robust Organizations · 2019-07-23T15:39:15.909Z · score: 11 (2 votes) · LW · GW

When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable.

I'm not confident that it does. I perhaps expect people doing this using the native architecture to feel like they've found a reasonable answer. But I would expect them to actually be prioritising their own feelings, in most cases. (Though some people will underweight their own feelings. And perhaps some people will get it right.)

Perhaps they will get close enough for the answer to still count as "reasonable"?

If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole - how do we know whether they've got a reasonable answer? Does that just have to be left to moderator discretion, or?

Comment by philh on Intellectual Dark Matter · 2019-07-20T17:58:58.416Z · score: 2 (1 votes) · LW · GW

consider the founding of Amazon.com in 1994. One could infer at the time from Bezos’ previous employment, an article in his high school newspaper, and reports from his ex-girlfriend that he planned for Amazon to take over all of e-commerce to net enough money to start a space tech company.

I'd be interested to read more about that.

the Medallion Fund managed by Renaissance Technology has returned an average of 40% annually since its inception, including a 100% return in 2008, making it by far the best-performing hedge fund in history and netting its investors tens of billions of dollars.

How confident are we that this isn't luck or fraud? This isn't an accusation, just - I feel like intellectual dark matter is exactly what we'd expect fraud to look like.

From a quick glance at wikipedia, it seems the fund is 20 years old, which I guess mostly rules out luck; and "is available only to current and past employees and their families", which I guess mostly rules out fraud.

Comment by philh on Why artificial optimism? · 2019-07-19T14:29:55.188Z · score: 2 (1 votes) · LW · GW

It's not really clear to me what it would mean for a situation to be rational or irrational; Jessica didn't use either of those words.

If the answers are "no" and "lots", doesn't that just mean you're in a bad Nash equilibrium? You seem to be advising "when caught in a prisoner's dilemma, optimal play is to defect", and I feel Jessica is more asking "how do we get out of this prisoner's dilemma?"

Comment by philh on Podcast - Putanumonit on The Switch · 2019-06-27T14:30:17.668Z · score: 2 (1 votes) · LW · GW

Is a transcript available, or likely to become so?

Comment by philh on Does scientific productivity correlate with IQ? · 2019-06-20T14:23:36.576Z · score: 13 (4 votes) · LW · GW

In the case of Feynman, I just don't believe that his IQ was only 126.

I remembered gwern talking about this and found this comment on the subject: https://news.ycombinator.com/item?id=1159719

Comment by philh on Recommendation Features on LessWrong · 2019-06-19T07:56:55.436Z · score: 15 (5 votes) · LW · GW

Fwiw, on other sites I sometimes find that I see something interesting just as I'm clicking away, and then when I come back the interesting thing is gone. Making the recommendations a little sticky would help with that. (I see they don't reload if I use the back button, so that might be sufficient.)

Comment by philh on LessWrong FAQ · 2019-06-17T14:54:47.605Z · score: 8 (2 votes) · LW · GW

Hijacking this as a typo thread:

Our commenting guidelines state that is preferable:

If you disagree with some,

Comment by philh on Learning magic · 2019-06-13T14:47:48.112Z · score: 2 (1 votes) · LW · GW

If you take a lesson in London, and if you'd be interested in having people join you, then I might be interested in joining you.

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-10T22:02:51.424Z · score: 2 (1 votes) · LW · GW

Thanks, that makes sense.

Rambling:

In the specific case of iteration, I'm not sure that works so well for multiplayer games? It would depend on details, but e.g. if a player's only options are "cooperate" or "defect against everyone equally", then... mm, I guess "cooperate iff everyone else cooperated last round" is still stable, just a lot more fragile than with two players.

But you did say it's difficult, so I don't think I'm disagreeing with you. The PD-ness of it still feels more salient to me than the SH-ness, but I'm not sure that particularly means anything.

I think actually, to me the intuitive core of a PD is "players can capture value by destroying value on net". And I hadn't really thought about the core of SH prior to this post, but I think I was coming around to something like threshold effects; "players can try to capture value for themselves [it's not really important whether that's net positive or net negative]; but at a certain fairly specific point, it's strongly net negative". Under these intuitions, there's nothing stopping a game from being both PD and SH.

Not sure I'm going anywhere with this, and it feels kind of close to just arguing over definitions.

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-09T21:15:50.132Z · score: 2 (1 votes) · LW · GW

Meta: you have a few asterisks which I guess are just typos, but which caused me to go looking for footnotes that don't exist. "Philosophy Fridays*", "Follow rabbit trails into Stag* Country", "Follow rabbit trails that lead into *Stag-and-Rabbit Country".

Comment by philh on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-09T12:18:55.728Z · score: 2 (1 votes) · LW · GW

Most problems which initially seem like Prisoner's Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available.

I'm not sure I follow, can you elaborate?

Is the idea that everyone can attempt to enforce norms of "cooperate in the PD" (stag), or not enforce those norms (rabbit)? And if you have enough "stag" players to successfully "hunt a stag", then defecting in the PD becomes costly and rare, so the original PD dynamics mostly drop out?

If so, I kind of feel like I'd still model the second level game as a PD rather than a stag hunt? I'm not sure though, and before I chase that thread, I'll let you clarify whether that's actually what you meant.

Comment by philh on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-02T08:24:26.748Z · score: 4 (2 votes) · LW · GW

I wouldn't feel any need to add a disclaimer here if the text I was recommending were The Brothers Karamazov, though I'd want to briefly say why it's relevant, and I might worry about the length.

Not sure if this was deliberate on your part, but note that HPMOR is almost twice the length of Karamazov. (662k vs 364k.)

Comment by philh on "But It Doesn't Matter" · 2019-06-01T19:33:52.741Z · score: 6 (3 votes) · LW · GW

I feel it's important here to distinguish between "H doesn't matter [like, at all]" and "H is not a crux [on this particular question]". The second is a much weaker claim. And I suspect a more common one, if not in words then in intent, though I can't back that up.

(I'm thinking of conversations like: "you support X because you believe H, but H isn't true" / "no, H actually doesn't matter. I support X because [unrelated reason]".)

Comment by philh on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T19:16:13.991Z · score: 4 (2 votes) · LW · GW

Typo: "Members on LessWrong rely on many of the ideas from their writers in their own posts,"

I guess that should be "these writers".

Comment by philh on Naked mole-rats: A case study in biological weirdness · 2019-05-22T13:05:55.916Z · score: 11 (6 votes) · LW · GW

Wikipedia says the Damaraland mole-rat (closely related, but in a different family) is also eusocial, and apparently it too spends its whole life underground. https://en.wikipedia.org/wiki/Damaraland_mole-rat

They can also survive low oxygen levels. (I can't immediately see directly comparable numbers.) But the other weirdnesses either don't seem to apply to them, or are unmentioned.

(Fun fact: autocomplete aptly gave me "the damaraland mole rat is also ridiculous".)

Comment by philh on Tales From the American Medical System · 2019-05-14T15:01:40.885Z · score: 13 (4 votes) · LW · GW

There was no realistic scenario in which this would cost your friend more than the plans they made for the next day!

Do note that, although they probably weren't in this case, the consequences of dropping your plans for the next day can be almost arbitrarily bad.

For example, it might cause you to lose your job, which in turn causes you to lose your health insurance and your flat.

(In another situation, you might be told "either you work tomorrow or you're fired", and someone could tell you that you're not in danger of losing more than your plans for tomorrow. But in that situation, maybe your plans for tomorrow include "visiting the doctor to get the insulin you need to remain alive".)

Comment by philh on Hull: An alternative to shell that I'll never have time to implement · 2019-05-04T16:52:48.851Z · score: 3 (2 votes) · LW · GW

Could be run on a ramdisk when you're not too worried about power failures and such.

Comment by philh on Many maps, Lightly held · 2019-04-30T21:36:28.304Z · score: 2 (1 votes) · LW · GW

The fable of the rational vampire. (I wish I had a link to credit the author). The rational vampire casually goes through life rationalising away the symptoms – "I'm allergic to garlic", "I just don't like the sun". "It's impolite to go into someone's home uninvited, I'd be mortified if I did that". "I don't take selfies" and on it goes. Constant rationalisation.

Perhaps I'm missing the point, but it's far from obvious to me that this hypothetical rationalist is wrong. Vampires don't exist, after all. It is more likely that I have a mental illness that makes me think I have vampire-like symptoms, than that I'm actually a vampire. Rationalists should be more confused by fiction than reality, and I think that extends to fictional rationalists living in a fictional world that differs from our own only by isolated facts.

(Like, in a world with vampires, there should be reasons to believe in vampires that don't apply in our world. In a world with Santa Claus, there should be reason to believe in Santa Claus - if he puts presents under trees, "presents mysteriously appear under trees" should be a known fact about the world.)

Comment by philh on Counterspells · 2019-04-28T20:43:52.081Z · score: 5 (3 votes) · LW · GW

Agree that this is a marginal improvement over just naming fallacies, or even (as I've sometimes done) naming and giving a link to the definition.

Proposed counterspell to Bulverism - "well, maybe Dr Robotnik is wrong about hedgehogs spreading tuberculosis, and if so, it's plausible that his hatred for one particular hedgehog is clouding his judgement. But you still haven't actually convinced me that he's wrong."

To the fallacy fallacy - "you're absolutely right that GPT-2's argument for banning tofu is riddled with fallacies. But you seem to suggest that that means we shouldn't ban tofu; I still think there are good arguments for doing so".

Aside, I don't think any of your "typical examples" (of murder, theft, racism) are actually typical in the sense of common. I would rather say "a prototypical example", which seems more technically accurate and almost as legible.

Comment by philh on Crypto quant trading: Intro · 2019-04-25T13:30:17.126Z · score: 2 (1 votes) · LW · GW

Minor comments:

price_change is the multiplicative factor, such that: new_price = old_price * price_change; additionally, if we had long position, our portfolio would change: new_portfolio_usd = old_portfolio_usd * price_change.

These should be "(price_change + 1)", right?
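For illustration, here's a tiny sketch of the correction I have in mind, assuming price_change is a fractional return (e.g. 0.05 for a 5% move) rather than a multiplicative factor:

```python
# Assumed convention: price_change is a fractional return, e.g. 0.05 means +5%.
old_price = 100.0
price_change = 0.05

new_price = old_price * (price_change + 1)  # 105.0, not 5.0

# With a long position, the portfolio scales by the same factor:
old_portfolio_usd = 1000.0
new_portfolio_usd = old_portfolio_usd * (price_change + 1)  # 1050.0
```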

One neat thing about that that I didn't realize myself is that it looks like the SMA has done a decent job of acting as support/resistance historically.

I'm not familiar with support/resistance, can you clarify?

Comment by philh on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T19:24:39.320Z · score: 4 (2 votes) · LW · GW

I'm still not entirely clear what you mean by "MWI proves too much".

If I try to translate this into simpler terms, I get something like: MWI only matches our observations if we apply the Born rule, but it doesn't motivate the Born rule. So there are many sets of observations that would be compatible with MWI, which means P(data | MWI) is low and that in turn means we can't update very much on P(MWI | data).
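(Writing out the Bayes step I'm gesturing at:)

```latex
P(\mathrm{MWI} \mid \mathrm{data})
  = \frac{P(\mathrm{data} \mid \mathrm{MWI})\, P(\mathrm{MWI})}{P(\mathrm{data})}
```

So if MWI spreads its probability over many possible observation sets, P(data | MWI) is small and the posterior barely moves.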

Is that approximately what you're getting at?

(That would be a nonstandard usage of the phrase, especially given that you linked to the wikipedia article when using it. But it kind of fits the name, and I can't think of a way for the standard usage to fit.)

Comment by philh on Slack Club · 2019-04-18T14:55:49.386Z · score: 4 (2 votes) · LW · GW

The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.

Here's a 2012 comment (predating the map by two years) in which someone describes himself as a post-rationalist to distinguish himself from rationalists: https://www.lesswrong.com/posts/p5jwZE6hTz92sSCcY/son-of-shit-rationalists-say#ryJabsxh7m9TPocqS

The post rats may not have popularised the term as well as Scott did, but I think that's mostly just because Scott is way more popular than them.

To the extent that there's a new person who has a similar founder position right now that's Scott Alexander and not anybody who self-identifies as post-rationalist.

Well, the claim was about what the post rats were (consciously or not) trying to do, not about whether they were successful.

And I think Scott has rebranded the movement, in a relevant sense. There's a lot of overlap, but SSC is its own thing, with its own spinoffs. E.g. I believe most SSC readers don't identify as rationalists.

("Rebranding" might be better termed "forking".)

Comment by philh on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T13:30:54.916Z · score: 2 (1 votes) · LW · GW

I stipulate that nearly anything can be a consequence of MWI, but not with equal probability. If I see a thousand quantum coinflips in a row all come up heads, I don't think "well, under MWI anything is possible, so I haven't learned anything". So I'm not sure in what sense you think it proves too much.

(I think this is roughly what Villiam was getting at, though I can't speak for him.)

Comment by philh on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-12T19:58:43.702Z · score: 5 (3 votes) · LW · GW

If an IQ test came back significantly higher than I expect, then I'd start to think I'm underperforming my potential, and I'd look for reasons for that. Perhaps there are skills missed by the test that I'm unusually weak on, like maybe I give up quicker than average if I find something hard. Then I can work on those skills, or position myself to avoid relying on them.

Conversely, if it came back lower than I expect, I'd think that perhaps I'm unusually strong in those skills, and I'd be able to position myself to take advantage of them.

Comment by philh on The Case for The EA Hotel · 2019-04-09T13:12:54.835Z · score: 4 (2 votes) · LW · GW

I observe that less vetting means fewer decisions and less costs for the Hotel. Further, if demand for slots is low enough that no vetting is required, this effectively makes the project zero-risk to the Hotel.

This seems to assume the marginal cost to the hotel of taking on a guest is negligible. That does seem plausible to me, but it's worth highlighting explicitly.

Comment by philh on You Have About Five Words · 2019-03-18T07:58:21.794Z · score: 2 (1 votes) · LW · GW

I had been under the impression that Hillary's was "I'm with her"? But I think I mostly heard that in the context of people saying it was a bad slogan.

Comment by philh on Graceful Shutdown · 2019-02-22T14:28:05.994Z · score: 2 (1 votes) · LW · GW

The point I am trying to make here is that while hard-cancel signal travels necessarily out-of-band, the graceful shutdown signal must be, equally necessarily, passed in-band.

Minor, but in band vs out of band isn't really a firm distinction. Like, there's a sense in which SIGINT is in band and SIGKILL is out of band, but I think that most of the time, that's not the natural way to think of it.

Comment by philh on Graceful Shutdown · 2019-02-21T14:54:58.039Z · score: 2 (1 votes) · LW · GW

It's true that you need to be able to handle hard disconnects, but sometimes a graceful shutdown will give a better experience than a hard one is capable of. E.g. "close all connections, flush, then shutdown" might avoid an expensive "restore from journal" when you next start up.
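As a minimal sketch of what I mean (the flush step here is a hypothetical placeholder for whatever state the process would otherwise have to rebuild from its journal):

```python
import signal
import sys

def flush_state():
    # Hypothetical placeholder: close connections and write out in-memory
    # state, so the next startup doesn't need an expensive journal replay.
    print("flushing state before exit")

def handle_sigterm(signum, frame):
    # Graceful path: we're given a chance to clean up before exiting.
    flush_state()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

# A SIGKILL, by contrast, can't be caught: the process dies immediately
# and has to recover from its journal on the next start.
```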

Comment by philh on Why is this utilitarian calculus wrong? Or is it? · 2019-01-30T14:55:54.404Z · score: 3 (2 votes) · LW · GW

My intuition here is: Giving someone money moves wealth around. Creating a widget (at $20 cost, which at least one person values at $30), produces wealth. So [the world where a widget gets created] has more total wealth than [the one where it doesn't], and so it's not surprising if your moral calculus values it more highly.

Comment by philh on The Very Repugnant Conclusion · 2019-01-24T15:52:25.681Z · score: 2 (1 votes) · LW · GW

That doesn't fix it, it just means you need bigger numbers before you run into the problem.

Maybe if you have an asymptote, but I fully expect that you still run into problems then.

Comment by philh on What are the open problems in Human Rationality? · 2019-01-20T14:42:36.263Z · score: 7 (4 votes) · LW · GW

Are those comparable, though? My model of open source is that it prototypically looks like someone building something that's useful for themselves, then other people also find it useful and help to work on it (with code, bug reports, feature requests). But that first step doesn't really exist for LW2, because until you're ready to migrate the whole site, the software has very little value to anyone.

Can you think of any open source projects where the first useful version seems comparable in effort to LW2, and that had no financial backing for the first useful version?

Edit: some plausible candidates come to mind, though I wouldn't bet on any of them. Operating systems (e.g. Linux kernel, haiku, menuetOS); programming languages and compilers for them (e.g. gcc, Perl, Python, Ruby); and database engines (e.g. postgres, mongo, neo4j).

(Notably, I'd exclude something like elm from the languages list because I think it was a masters or PhD project so funded by a university.)

Comment by philh on Why Don't Creators Switch to their Own Platforms? · 2018-12-26T19:41:58.067Z · score: 2 (1 votes) · LW · GW

Another example would be Rooster Teeth. They have a bunch of stuff on YouTube, but at least some content that's exclusive to their site. (I'm specifically thinking of the latest season of RWBY, I don't know if there's other examples.)

Comment by philh on Player vs. Character: A Two-Level Model of Ethics · 2018-12-20T16:03:13.241Z · score: 2 (1 votes) · LW · GW

To the contrary, this does not get you one iota closer to "ought".

This is true, but I do think there's something being pointed at that deserves acknowledging.

I think I'd describe it as: you don't get an ought, but you do get to predict what oughts are likely to be acknowledged. (In future/in other parts of the world/from behind a veil of ignorance.)

That is, an agent who commits suicide is unlikely to propagate; so agents who hold suicide as an ought are unlikely to propagate; so you don't expect to see many agents with suicide as an ought.

And agents with cooperative tendencies do tend to propagate (among other agents with cooperative tendencies); so agents who hold cooperation as an ought tend to propagate (among...); so you expect to see agents who hold cooperation as an ought (but only in groups).

And for someone who acknowledges suicide as an ought, this can't convince them not to; and for someone who doesn't acknowledge cooperation, it doesn't convince them to. So I wouldn't describe it as "getting an ought from an is". But I'd say you're at least getting something of the same type as an ought?

Comment by philh on What podcasts does the community listen to? · 2018-12-18T19:29:28.428Z · score: 3 (2 votes) · LW · GW

Curious why this was downvoted? It's not a literal answer to the question, but it seems reasonably likely to satisfy the intent of the question.

Comment by philh on Who's welcome to our LessWrong meetups? · 2018-12-13T21:40:13.830Z · score: 8 (5 votes) · LW · GW

For the London meetups I write this:

We're a fortnightly London-based meetup for members of the rationalist diaspora. The diaspora includes, but is not limited to, LessWrong, Slate Star Codex, rationalist tumblrsphere, and parts of the Effective Altruism movement.

You don't have to identify as a rationalist to attend: basically, if you think we seem like interesting people you'd like to hang out with, welcome! You are invited. You do not need to think you are clever enough, or interesting enough, or similar enough to the rest of us, to attend. You are invited.

Comment by philh on Is cognitive load a factor in community decline? · 2018-12-12T20:14:59.963Z · score: 6 (3 votes) · LW · GW

The quote doesn't say explicitly, so just to make sure we're on the same page: I take from this that yes, when more looms were tended, each loom required less attention. Do you agree?