Open Thread September 2018

post by Elo · 2018-08-31T21:38:19.118Z · LW · GW · 72 comments

If it’s worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

  1. What accomplishments are you celebrating from the last month?
  2. What are you reading?
  3. What reflections do you have for yourself or others from the last month?
  4. What have you tried out this month?
  5. (Teaser for my next post) What is your relationship with yourself?

72 comments

Comments sorted by top scores.

comment by Vaniver · 2018-09-21T22:07:17.390Z · LW(p) · GW(p)

Mod notice: There's a discussion going on in the Bay Area rationality community involving multiple users of LW that includes allegations of serious misconduct. We don't think LW is a good venue to discuss the issue or conduct investigations, but we think it's important for the safety and health of the LW community that we host links to a summary of findings once the discussion has concluded. If you'd like to discuss this policy, please send a private message to me and I'll talk it over with the mod team. [Comments on this comment are disabled.]

Replies from: Vaniver
comment by Vaniver · 2019-12-13T06:40:52.830Z · LW(p) · GW(p)

The discussion concluded without a summary of findings, in part because ialdabaoth went into exile. Moving forward, ialdabaoth is banned [LW · GW] from LessWrong, for reasons discussed there (related to, but not directly about, the allegations).

comment by drethelin · 2018-09-12T22:14:00.100Z · LW(p) · GW(p)

I've been thinking a lot lately about where I want to live long-term. I'm currently in Madison, WI, which is really nice but kinda small, and has an unfortunately hot/humid summer. Financially I can live pretty much anywhere I want, except maybe Monaco.

Things I want, not in order of importance:

1. A nice house. In an ideal world, the house would house several of my closest friends, be walkable to parks, shops, and restaurants, and be close enough to other friends that they drop by regularly. I am also very interested in running a public or semi-public space adjacent or close to the house, possibly a makerspace, possibly a cafe, or something else. This is one of the reasons it's not instantly obvious that I should move to Berkeley or Manhattan or something: I'm financially well-off, but there's like an order of magnitude difference in the cost of having a nice big place to live. On the other hand, I'm also pretty flexible about living in an apartment or something, but for the long term I much prefer having a space I own and can modify and build up to become better and better over the years.

2. People. My best friend and one of my partners live in Madison at the moment, but almost everyone else I like to spend time with, or who wants to spend time with me, seems to live on one of the coasts or be scattered elsewhere. This is the aspect for which Berkeley is the most obvious winner: I know a ton of rationalists, I like meeting new ones and rat-adjacents, and in general I like having social situations that I don't personally have to seek out or plan. In the Bay Area there are tons of regular events that I can go to without having to do the leg-work myself. In addition, there are millions of other people of varying kinds and personalities in places like the Bay Area and NYC.

3. Climate: I'm big, and while I'm losing weight I still get hot and sweaty very easily. My ideal place in terms of year-round climate is the Faroe Islands, where it stays between 33 and 55 degrees Fahrenheit year round in the capital city. I'd like a place where I can walk around without wanting to die for much of the year. The Bay Area is pretty good for this, but still tends to make me sweaty in the afternoons, especially if I'm walking around the hilly parts of San Francisco. At this point I'm pretty resigned to this, but it's still a factor in where-to-live tradeoffs. I also really enjoy having seasons, and love autumn and spring, so places that are as similar all year round as the Bay are less preferable in that respect.

4. Walkability/transitability. This one is pretty standard. I like being able to go cool places without having to spend hours in traffic or hassle hugely to park my car.

5. Culture: I like living in a place that has lots of little cafes and bookstores and restaurants that are around, as well as museums and bars and live music and stuff. To a lesser extent I care about the culture of the kinds of people who live in the city, but in practice I'm going to end up mostly hanging out with small subcultures anyway.

6. Something one could summarize as Coolness/Importance: I like the idea of being involved in Big, World-changing Things, despite being very lazy. Places like Manhattan, Hollywood, SF, etc. are attractive at least partially because I can see and potentially participate in important cultural events and shifts. This is one of the things I could conceivably do in Madison, but, like assembling a friend group of like-minded people from e.g. grad students, it would be a lot of work.

7. The classic things like low-crime and not too smelly/loud would be good but I can mitigate most of these by living in nicer parts of places. Still, not a zero-factor.

8. I like the idea of being a locally medium-to-high status person whose place people like to visit and who people talk to when they want introductions, as a sort of community nexus type thing. This obviously trades off against moving to places where such people already exist.

One of the options I'm considering is buying a big house in Madison and setting it up something like REACH or the Blackpool EA hotel, trying to lure rationalists to come live here, and making it an outreachy type place for local potential rats. The plus side is I'd get to stay in the city I know; the downside is it would be a lot more work and might not even achieve the kind of life I want. But if I succeed in creating a mini-hub, I'll get to have pretty much all I want for like 1/10 the cost of moving to SF. Another option is to nominally stay in Madison, but travel 2-3 months out of the year, probably in deepest winter/summer.

I'm looking for input like: direct recommendations for specific cities in ways I probably haven't considered; people who specifically like me commenting that they want me to live in their city; case studies/reports from people who have moved and think they're similar enough to me to give me good input; comparisons from people who have lived in multiple Big Cities as to which are Better; and whatever else people feel like mentioning. Also I want horror stories of living in Manhattan, SF, Berkeley, LA, and Seattle (this is my current shortlist).

Thanks in advance!

Replies from: Raemon, marcus_gabler
comment by Raemon · 2018-09-12T22:49:55.845Z · LW(p) · GW(p)

A major consideration / uncertainty here seems to be "is a hub in Madison something remotely practical?", and you might want to specifically test that with something kickstarter-esque (i.e. "I will try this if and only if at least X people commit to moving here", where each of them commits if and only if enough others commit, etc.).

(Testing this is unfortunately a fair bit of work, but relatively small compared to the work involved in the actual project, so maybe it's also a good test of "can drethelin [LW · GW] pull this off?")
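
A toy sketch of the fixed-point logic behind such conditional commitments, under the simplifying assumption (mine, not Raemon's) that each person just names the minimum total group size at which they'd move:

```python
def largest_viable_group(thresholds):
    """thresholds[i] = minimum total group size at which person i would commit.

    Returns the size of the largest self-consistent group (0 if none exists).
    """
    ts = sorted(thresholds)
    best = 0
    for k in range(1, len(ts) + 1):
        # A group of the k people with the lowest thresholds is self-consistent
        # iff each of those k thresholds is satisfied by a group of size k.
        if ts[k - 1] <= k:
            best = k
    return best

# Five prospective movers; here a group of four is self-consistent:
print(largest_viable_group([2, 2, 3, 6, 3]))  # -> 4
```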

Replies from: drethelin
comment by drethelin · 2018-09-13T03:19:52.188Z · LW(p) · GW(p)

Something I've been thinking of doing is asking a lot of specific people I like who else would have to move somewhere before they would move, and seeing if there's a smallish cluster.

Replies from: Raemon
comment by Raemon · 2018-09-13T07:33:46.966Z · LW(p) · GW(p)

Makes sense.

comment by marcus_gabler · 2018-10-01T07:19:27.853Z · LW(p) · GW(p)

Well, I think your rationalistic approach is not appropriate here.

While I basically agree that most or even all issues/questions can best be answered rationally, your quest seems to lack consideration of emotional aspects.

Taken strictly rationally, you might even actually prioritize and weight your above criteria, for example by assigning points from 0 to 10 and then summing up the scores for various properties or locations.

I can't help feeling there would be something fundamentally wrong about such an approach.

Did you consider the following:

  • Where do I feel at home?
  • Is living in a house really the best option, or are 2-3 apartments maybe an alternative?
  • Many people want to live in a quiet, safe area in walking range of cafes etc. Many people also want a Porsche. But does it make them happy? Or will they get used to the Porsche sooner or later only to feel they now need that Lambo?

" I'm looking for input like: direct recommendations for specific cities... "

Looking for specific input seems biased.

What seems interesting is that you can imagine actually having friends LIVING in your house or even doing something like a hotel. Most people would live in a house with their family.

Only you can know how strong this idea is. Doing so could really be a fulfilling thing, but you only mention it in passing, so I kinda doubt you are determined enough.

" I like the idea of being a locally medium-to-high status person ...."

Well, rationally, what does that have to do with your future home?

Either you are that person or not. NOT moving to some city just because there already are such people is 100% wrong.

" I like the idea of being involved in Big, World-changing Things, despite being very lazy. "

Well, do you really mean lazy or depressed/unmotivated?

Anyway, there is a lot in your post that makes me guess you first have to review your mindset.

I could write a lot here guessing into the blue, but if my comment (hopefully) rang a bell with you, feel free to contact me.

Take care, Marcus.

Replies from: Raemon
comment by Raemon · 2018-10-01T20:15:51.766Z · LW(p) · GW(p)

Hmm. It sounds like you may have been reading some stuff into drethelin [LW · GW]'s comment that they didn't necessarily imply. (I don't think he said anything about his approach being rationalistic in the first place.)

Replies from: marcus_gabler
comment by marcus_gabler · 2018-10-02T05:40:55.437Z · LW(p) · GW(p)

>>> you may have been reading some stuff into drethelin [LW · GW]'s comment that they didn't necessarily imply

True: Not necessarily - but possibly. :-)

comment by Ben Pace (Benito) · 2018-09-15T01:00:37.534Z · LW(p) · GW(p)

Seems like some other people want to bring back the Sabbath.

comment by hamnox · 2018-09-03T00:50:47.287Z · LW(p) · GW(p)

Folk values -- the qualities of the "I love science" crowd as contrasted to the qualities of actual, exceptional scientists -- matter too. The common folk outnumber the epic heroes.

This holds true even if you believe that everyone can become an epic hero! People need to know, rather than guess and hope, that walking the path to becoming an epic hero might look and feel rather different than doing active epic heroing. In theory one ought to be able to derive the appropriate instrumental goals from the terminal goal, but in practice people very frequently mess this up.

The general crowd has a different job than the inner circle, and treating this difference as orthogonal propagates fewer errors than treating it as a matter of degree.

Folk rationality needs to strongly protect against infohazards until one gets a chance to develop less vulnerable internal habits. Folk rationality needs to celebrate successfully satisficing goals and identifying picas rather than going for hard optimization, because amateur min-maxing just spawns Goodhart [LW · GW] demons every which way. Folk rationality needs to prize keeping social commitments and good conflict mediation tools; it needs to honor social workers straightforwardly addressing social or resource problems. Folk rationality needs luminosity [LW · GW], and therapy. Folk rationality should also include a civic duty of proactive personal data collection, cheering on replications, participating in RCTs, and not ghosting or lizardmanning surveys... because science needs to get done, d'arvit.

Interested in cruxing

Replies from: Pattern
comment by Pattern · 2018-09-03T20:04:30.410Z · LW(p) · GW(p)

comment by ChristianKl · 2018-09-01T21:03:25.307Z · LW(p) · GW(p)

I think it would be beneficial to always link the last open thread in the main text of a new open thread.

comment by ChristianKl · 2018-09-01T20:36:00.017Z · LW(p) · GW(p)

The EU seems to be getting rid of the habit of changing the clocks twice a year, in an exercise of listening to public feedback.

> He said that the decision was taken after a vast majority of EU citizens — primarily from Germany — who took part in a survey on the issue called for an end to biannual clock changes.
>
> Massive support for halting daylight saving time
>
> Over 80 percent of respondents supported abolishing changing the clocks in summer and winter in a survey that ran between July 4 and August 16, according to media reports on the results.

It's interesting that the EU currently seems to be able to coordinate on an issue like this, where the right answer is more or less obvious but the coordination problem is massive.

Do we have other similar problems with obvious answers that are just a matter of getting enough people coordinated?

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2018-09-04T13:04:03.502Z · LW(p) · GW(p)

I wouldn't call the answer obvious. I'm not even sure if I could have guessed the majority view on this beforehand. Why do you think it's obvious? Are there no upsides to changing or are the downsides too significant?

Replies from: ChristianKl
comment by ChristianKl · 2018-09-09T19:41:49.087Z · LW(p) · GW(p)

The main argument in favor of changing the clocks, that it supposedly saves energy, doesn't seem to be true these days.

Two examples of issues: people seem to work 16 minutes less on the Monday after the switch to daylight saving time. It also increases heart attacks.

Replies from: gjm
comment by gjm · 2018-09-22T00:00:27.790Z · LW(p) · GW(p)

We can detect that when we switch between "normal" and "daylight saving" time, bad things happen at the transitions. But that doesn't mean that switching is worse than not switching. We don't know what bad things would happen if we didn't switch.

(E.g., one reason for the bad things is that people's sleep pattern is disturbed, and that has bad health effects. But it might also be bad for sleep to have dawn as early, relative to the hours people want to sleep, as it would be in the middle of summer without daylight saving.)

comment by ChristianKl · 2018-09-02T12:26:58.586Z · LW(p) · GW(p)

I remember reading on LessWrong about a study a while back that compared trained psychologists to laypeople and found that the trained psychologists didn't do any better. Does anybody know the study or LessWrong post?

comment by Oscar_Cunningham · 2018-09-01T17:12:14.313Z · LW(p) · GW(p)

Eliezer made this attempt at naming a large number computable by a small Turing machine. What I'm wondering is exactly what axioms we need to use in order to prove that this Turing machine does indeed halt. The description of the Turing machine uses a large cardinal axiom ("there exists an I0 rank-into-rank cardinal"), but I don't think that assuming this cardinal is enough to prove that the machine halts. Is it enough to assume that this axiom is consistent? Or is something stronger needed?

Replies from: Viliam, Gurkenglas
comment by Viliam · 2018-09-02T21:05:00.547Z · LW(p) · GW(p)

I used to be quite good at math in high school, but I haven't studied it since. This seems like a good opportunity to ask: which book(s) should I read in order to fully understand that post?

Assume great knowledge of high-school math, but almost nothing beyond that. I want to get from there to... understanding the cardinals and ordinals. I have a vague impression of what they likely are, but I'd like to have a solid foundation, i.e. to know the definitions and to understand the proofs (in ideal case, to be able to prove some things independently [LW · GW]).

Bonus points if the books you mention are available at Library Genesis. ;)

Replies from: Oscar_Cunningham, Vladimir_Nesov
comment by Oscar_Cunningham · 2018-09-03T09:10:35.767Z · LW(p) · GW(p)

As well as ordinals and cardinals, Eliezer's construction also needs concepts from the areas of computability and formal logic. A good book to get introduced to these areas is Boolos' "Computability and Logic".

Replies from: Viliam
comment by Viliam · 2018-09-03T22:04:20.254Z · LW(p) · GW(p)

Thank you!

comment by Vladimir_Nesov · 2018-09-03T07:36:50.256Z · LW(p) · GW(p)

Two good first books on set theory (with a similar scope) are

  • H. B. Enderton, Elements of Set Theory
  • Karel Hrbacek, Thomas Jech, Introduction to Set Theory

(Though they might be insufficient to parse the post.)

Keep in mind that set theory has a very different character from most math, so it might be better to turn to something else first if "studying math" is more of a motivation.

Replies from: Viliam
comment by Viliam · 2018-09-03T22:05:20.378Z · LW(p) · GW(p)

Thank you!

comment by Gurkenglas · 2018-09-25T17:28:51.690Z · LW(p) · GW(p)

The only step in which his machine can fail to halt is "Run all programs such that a halting proof exists, until they halt." A program would have to have a halting proof, yet not halt. [Struck through: Therefore, beyond what we need to talk about turing machines at all, the only extra axiom needed is "T is consistent."]

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2018-09-25T18:07:25.927Z · LW(p) · GW(p)

Consistency of T isn't enough, is it? For example the theory (PA + "The program that searches for a contradiction in PA halts") is consistent, even though that program doesn't halt.

Replies from: Gurkenglas
comment by Gurkenglas · 2018-09-26T00:52:53.108Z · LW(p) · GW(p)

I don't follow. I agree that (PA + "PA is inconsistent") is consistent. How does it follow that consistency of T isn't enough? The way I use consistency there is: "If T proves that a program halts, then that program does halt and we can safely run it."

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2018-09-26T11:00:25.121Z · LW(p) · GW(p)

I'm arguing that, for a theory T and Turing machine P, "T is consistent" and "T proves that P halts" aren't together enough to deduce that P halts. And as a counterexample I suggested T = PA + "PA is inconsistent" and P = "search for an inconsistency in PA". This P doesn't halt even though T is consistent and proves it halts.

So if it doesn't work for that T and P, I don't see why it would work for the original T and P.
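
One way to set the counterexample out formally (my rendering of the comment above, assuming at the meta level that PA really is consistent):

```latex
\begin{align*}
T &:= \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})
  && \text{consistent, by G\"odel's second incompleteness theorem}\\
P &:= \text{search exhaustively for a PA-proof of } 0 = 1\\
T &\vdash \text{``$P$ halts''}
  && \text{since $T$ asserts such a proof exists and $P$ checks all candidates}\\
P &\text{ does not halt}
  && \text{since PA is in fact consistent}
\end{align*}
```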

Replies from: Gurkenglas
comment by Gurkenglas · 2018-09-26T11:41:38.436Z · LW(p) · GW(p)

Right. Perhaps the axiom schema "If T proves φ, then φ."?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2018-09-26T12:34:52.460Z · LW(p) · GW(p)

Yeah, I think that's probably right.

I thought of that before, but I was a bit worried about it because Löb's Theorem says that a theory can never prove this axiom schema about itself. But I think we're safe here because we're assuming "If T proves φ, then φ" while not actually working in T.
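
A sketch of the distinction (my gloss, writing Prov_T for the standard provability predicate). Löb's theorem says that for every sentence φ,

```latex
T \vdash \bigl(\mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi\bigr)
\;\Longrightarrow\; T \vdash \varphi ,
```

so T proves instances of its own reflection schema only for sentences it already proves. But the schema can be adopted wholesale in a metatheory,

```latex
S \;:=\; T \;+\; \bigl\{\, \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi
\;:\; \varphi \text{ a sentence} \,\bigr\} ,
```

which is consistent provided T is sound, and it is S rather than T in which one proves that the machine halts.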

comment by Aiyen · 2018-09-27T00:30:31.035Z · LW(p) · GW(p)

Happy Petrov Day!

comment by Hazard · 2018-09-02T12:52:14.366Z · LW(p) · GW(p)

I'm a bit confused by RationalWiki. Is it maintained by anyone? I saw its page for EY, and it seemed to be either genuinely harsh/scathing/dismissive or a poorly executed inside joke.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-02T13:26:27.179Z · LW(p) · GW(p)

RationalWiki is maintained by people who really dislike Less Wrong in general and Eliezer personally.

My own view is that RationalWiki is a terrible, terrible source for anything.

Replies from: Hazard
comment by Hazard · 2018-09-02T15:28:24.707Z · LW(p) · GW(p)

Thanks, will take that into account. Did it start as a LW project and then shift, or was it that way from the beginning?

Replies from: Viliam, SaidAchmiz, vedrfolnir
comment by Viliam · 2018-09-02T21:28:48.083Z · LW(p) · GW(p)

RationalWiki is older than LW, and their definition of "rationality" is quite different from the one used here.

To put it simply, their "rationality" means science as taught at universities + politically correct ideas; and their "irrationality" means pseudoscience + religion + politically incorrect ideas + everything that feels weird (such as the many-worlds hypothesis).

Also, their idea of rational discussion is that after you have decided that something is irrational, you should describe it in a snarky way, and feel free to exaggerate if it makes the page funnier. So when anyone later points out a factual error in your article, you can defend it by saying "it was obviously a joke, moron".

In my understanding, this is how they most likely got obsessed with Eliezer and LessWrong:

1) How does a high-school dropout dare to write a series of articles about quantum physics? Only university professors are allowed to have opinions on such a topic. Obviously, he must be a crackpot. And he even identifies as a libertarian, which makes him a perfect target for RationalWiki: attack pseudoscience and right-wing politics in the same strike!

2) Oops, a debate at Stack Exchange confirmed that the articles about quantum physics... contain a few minor technical mistakes, but the explanation is correct in general, and about half of the people who do quantum physics professionally actually accept some variant of the many-worlds hypothesis. So... not a crackpot? But we already have a snarky article about him, so there must be something wrong with him... keep digging...

3) Oh, there was once a deleted thread about an idea that sounded genuinely crazy and dangerous. Also, he is polyamorous... and although judging other people's sexuality is generally considered politically incorrect, it does not necessarily apply to people we already decided are bad. Therefore: he is a leader of a dangerous cult, sexually abusing his followers. There; we knew there was something fishy going on!

4) Then a really long debate followed on RationalWiki's talk pages, and some statements about Eliezer and LessWrong were rephrased to a milder form, which is what you see there today.

5) Currently, David Gerard from RationalWiki is camping at the Wikipedia article about Less Wrong, and works hard to keep it as bad as possible, mostly by adding citations from "reliable sources" which happen to be journalists using information they found at RationalWiki, and removing anything that would put Less Wrong into a more positive light, such as any mention of effective altruism etc.

¯\_(ツ)_/¯

Replies from: ChristianKl, gjm, TAG
comment by ChristianKl · 2018-09-06T20:54:55.622Z · LW(p) · GW(p)

It's worth noting that David Gerard contributed a lot on LessWrong in its early days as well, so he's not really someone who's simply an outsider.

comment by gjm · 2018-09-09T01:12:14.703Z · LW(p) · GW(p)

The Wikipedia page on LW doesn't seem particularly awful at the moment. (And in particular it does in fact mention effective altruism.)

Replies from: Viliam
comment by Viliam · 2018-09-10T22:53:16.678Z · LW(p) · GW(p)

Slightly better than the last time I saw it.

Still, the "Neoreaction" section is 3x longer than the "Effective Altruism" section. Does anyone other than David Gerard believe this impartially describes Less Wrong? (And where are the sections for the other political alignments mentioned in LW surveys? Oh, we are cherry picking, of course.)

No mention of the Sequences, other than "seed material to create the community blog". I guess truly no one reads them anymore. :(

Replies from: SaidAchmiz, gjm
comment by Said Achmiz (SaidAchmiz) · 2018-09-10T23:03:56.562Z · LW(p) · GW(p)

> I guess truly no one reads them anymore. :(

Not true!

ReadTheSequences.com has gotten a steady ~20k–25k monthly page views (edit: excluding bots/crawlers, of course!) for 11 months and counting now, and I am aware of a half-dozen rationality reading groups around the world which are doing Sequence readings (and that’s just those using my site).

(And that doesn’t, of course, count people who are reading the Sequences via LW/GW, or by downloading and reading the e-book.)

Replies from: habryka4
comment by habryka (habryka4) · 2018-09-11T00:46:54.638Z · LW(p) · GW(p)

We are getting about 20k page hits per month on the /rationality page on LessWrong, and something in the 100k range on all sequences posts combined.

comment by gjm · 2018-09-11T16:22:30.601Z · LW(p) · GW(p)

Cherry-picking indeed! The NRx section is about 2.5x the length of the EA section (less if you ignore the citations) and about 1/4 of it is the statement "Eliezer Yudkowsky has strongly repudiated neoreaction". Neoreaction is more interesting because in most places there would be (to a good approximation) zero support for it, rather than the rather little found on LW.

I mean, I don't want to claim that the WP page is good, and I too would shed no tears if the section on neoreaction vanished, but it's markedly less terrible than suggested in this thread.

Replies from: Viliam
comment by Viliam · 2018-09-11T22:26:35.574Z · LW(p) · GW(p)

If Jehovah's Witnesses come to my door, I spend a few minutes talking with them, and then ask them to leave and never return, will I also get a subsection "Jehovah's Witnesses" on Wikipedia? I wouldn't consider that okay even if the subsection contained the words "then Viliam told them to go away". Like, why mention it at all, if that's not what I am about?

I suppose if there was a longer article about LW, I wouldn't mind spending a sentence or two on NR. It's just that in the current version, the mention is disproportionately long -- and it has its own subsection to make it even more salient. Compare with how much space the Sequences get; actually, they're not mentioned at all. But there is a whole paragraph about the purpose of Less Wrong. One paragraph about everything LW is about, and one paragraph mentioning that NR was here. Fair and balanced.

Replies from: gjm
comment by gjm · 2018-09-13T01:20:29.476Z · LW(p) · GW(p)

What if a bunch of JWs camped out in your garden for a month, and that was one of the places where more JWs congregated than anywhere else nearby? I think then you'd be in danger of being known as "that guy who had the JWs in his garden", and if you had a Wikipedia page then it might well mention that. It would suck, it would doubtless give a wrong impression of you, but I don't think you'd have much grounds for complaint about the WP page.

LW had neoreactionaries camped out in its garden for a while. It kinda sucked (though some of them were fairly smart and interesting when they weren't explaining smartly and interestingly all about how black people are stupid and we ought to bring back slavery; it's not like there was no reason at all why they weren't all just downvoted to oblivion and banned from the site) and the perception of LW as a hive of neoreaction is a shame -- and yes, there are people maliciously promoting that perception and I wish they wouldn't -- but I'm not convinced that that WP article is either unfair or harmful. It says "neoreactionaries have taken an interest in LW" rather than "LW has taken an interest in neoreaction" and the only specific LW attitude to neoreaction mentioned is that the guy who founded the site thinks NRx is terrible. I don't think anyone is going to be persuaded by the WP article that LW is full of neoreactionaries, and if someone who has that impression reads the article they might even be persuaded that they're wrong.

Again, for the avoidance of doubt, I'm not claiming that the WP article is good. But it's hardly "as bad as possible" either. That's all.

Replies from: Viliam
comment by Viliam · 2018-09-14T21:45:20.674Z · LW(p) · GW(p)

I mostly agree, except for:

> I don't think anyone is going to be persuaded by the WP article that LW is full of neoreactionaries, and if someone who has that impression reads the article they might even be persuaded that they're wrong.

I believe this is not how most people think. The default human mode is thinking in associations. Most people will read the article and remember that LW is associated with something weird right-wing. Especially when "neoreaction" is a section header, which makes it hard to miss. The details about who took interest in whom, if they notice them at all, will be quickly forgotten. (Just like when you publicly debunk some myths, it can actually make people believe them more, because they will later remember they heard it, and forget it was in the context of debunking.)

If the article instead had a section called "politics on LW" mentioning the 'politics is the mindkiller' slogan, how Eliezer is a libertarian, and then the complete results of a political poll (including the NR)... most people would not remember that NR was mentioned there.

Similarly, the length of sections is instinctively perceived as a measure of how much the things are related. Effective altruism is reduced to one very unspecific sentence, while Roko's basilisk has a relatively longer explanation. Of course (availability bias) this gives the impression that the basilisk is more relevant than effective altruism.

If the article instead had a longer text on effective altruism (for example, a short paragraph explaining the outline of the idea, preceded by a link "Main article: Effective altruism"), people would get the impression that LW and EA are significantly connected.

The same truth can be described in ways that leave completely opposite impression. I think David understands quite well how this game works, which is why he keeps certain sections shorter and certain sections longer, etc.

Replies from: gjm
comment by gjm · 2018-09-20T13:58:00.240Z · LW(p) · GW(p)

Fair point about association versus actual thinking. (Though at least some versions of the backfire effect are doubtful...)

I don't think this is all David Gerard's fault (at least, not the fault of his activities on Wikipedia). Wikipedia is explicitly meant to be a summary of information available in "reliable sources" elsewhere, and unfortunately I think it really is true that most of the stuff about LW in such sources is about things one can point at and laugh or sneer, like Roko's basilisk and neoreaction. That may be a state of affairs that David Gerard and RationalWiki have deliberately fostered -- it certainly doesn't seem to be one they've discouraged! -- but I think the Wikipedia article might well look just the way it does now if there were some entirely impartial but Wikipedia-rules-lawyering third party watching it closely instead of DG. E.g., however informative the LW poll results might be, it's true that they're not found in a "reliable source" in the Wikipedia sense. And however marginal Roko's basilisk might be, it's true that it's attracted outside attention and been written about by "reliable sources".

Replies from: Oscar_Cunningham, Viliam
comment by Oscar_Cunningham · 2018-09-20T20:11:55.066Z · LW(p) · GW(p)

This is a good point. The Wikipedia pages for other sites, like Reddit, also focus unduly on controversy.

comment by Viliam · 2018-09-29T20:39:26.444Z · LW(p) · GW(p)

> Wikipedia is explicitly meant to be a summary of information available in "reliable sources" elsewhere

So there seems to be an upstream problem that the line between "reliable sources" and "clickbait" is quite blurred these days.

This is probably not true for things that are typically written about in textbooks; but true for things that are typically written about in mainstream press.

comment by TAG · 2018-09-04T13:34:00.972Z · LW(p) · GW(p)

> How does a high-school dropout dare to write a series of articles about quantum physics? Only university professors are allowed to have opinions on such a topic. Obviously, he must be a crackpot.

Have you noticed that most writings by laypeople on QM actually are crackpottery? RW's priors are in the right place, at least.

Replies from: Viliam, cousin_it
comment by Viliam · 2018-09-08T20:59:02.705Z · LW(p) · GW(p)

> RW's priors are in the right place, at least.

I fully agree (about the priors on QM). The problem is somewhere else. I see two major flaws:

First, the "rationality" of RW lacks self-reflection. They sternly judge others, but consider themselves flawless. To explain what I mean, imagine that I knew nothing about QM other than the fact that 99% of online writings about QM are crackpottery; and then I found an article about QM that sounds weird. -- Would I trust the article? No. That's what the priors are for. Would I write my own article denouncing the author of the other article as a crackpot? No. Because I would be aware that I know nothing about QM, and that despite the 99% probability of crackpottery, there is also the 1% probability it is correct; and that my lack of knowledge does not allow me to update after reading the article itself, so I am stuck with my priors. I would try to leave writing the denunciation to someone who actually understands the topic; to someone who can say "X is wrong, because it is actually Y", instead of merely "X is wrong, because, uhm, my priors" or even "X is wrong, trust me, I am the expert". But this last one is most similar to what RW does. They pretend to be experts at science and pseudoscience, but in fact they are not. They merely follow a few simple heuristics which allow them to be correct in most cases, and sometimes they misfire. In this specific case, they followed a (good) heuristic about QM writings, instead of actually understanding QM and judging Eliezer's articles by their content; but they made it sound like there is a problem specifically with the articles.

Second, it is difficult to update if you burn the bridges after making your first estimate. If I publicly say I disagree with you, I may later change my mind and say that after giving it more thought I actually agree with you; and I will not lose a lot of face, especially among rational people. But if instead I publicly call you a crackpot or worse, and then it turns out that maybe you were right... it will cost me a lot of face to admit it. Being a human, the natural reaction is to double down regardless of evidence, cherry-pick in favor of my original judgment, and try to move the goalposts. And you can hardly avoid burning the bridges if you make everything political (is Eliezer's libertarianism really relevant for evaluating whether he is wrong about QM? no, but it was so tempting to connect libertarianism with supposed crackpottery), and if your culture of communication is "snarky", i.e. when you are an asshole and proud of it.

To make a mistake when the priors point the wrong way is unavoidable. To resist further evidence so strongly that a few years down the line you are spending your nights editing your opponent's Wikipedia page just to prove that you were right... that's insane.

Replies from: TAG
comment by TAG · 2018-09-21T15:06:30.651Z · LW(p) · GW(p)

> First, the “rationality” of RW lacks self-reflection. They sternly judge others, but consider themselves flawless.

Are you quite sure "they" are a cohesive group?

> Would I write my own article denouncing the author of the other article as a crackpot? No. Because I would be aware that I know nothing about QM

Are you quite sure "they" couldn't possibly include any actual physicists?

> Second, it is difficult to update if you burn the bridges after making your first estimate.

So LW never makes sweeping denunciations?

Replies from: Viliam
comment by Viliam · 2018-09-29T20:45:26.322Z · LW(p) · GW(p)

> Are you quite sure "they" are a cohesive group?

I suppose that people who disagree with the snarky way of looking at political opponents will not stay around for long.

There is also a difference between a forum and a wiki. (Medium is the message, kind of.) In a forum, you can write an article expressing your opinions, and then I can write an article about why I disagree with your opinions. In a wiki, I will simply revert your edit. Thus, wikis are more likely to converge to a unified view.

comment by cousin_it · 2018-09-04T15:17:27.669Z · LW(p) · GW(p)

(deleted)

comment by Said Achmiz (SaidAchmiz) · 2018-09-02T16:01:03.866Z · LW(p) · GW(p)

No, RationalWiki never had anything to do with Less Wrong.

comment by vedrfolnir · 2018-09-12T04:01:41.132Z · LW(p) · GW(p)

It started as the leftist alternative to Conservapedia.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-09-12T04:11:08.199Z · LW(p) · GW(p)

Really? Do you have any links on that? I wasn’t aware.

Replies from: vedrfolnir
comment by vedrfolnir · 2018-09-12T06:37:48.144Z · LW(p) · GW(p)

The Wikipedia article on it does.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-09-12T06:59:29.862Z · LW(p) · GW(p)

You're right, it literally says it in the second line.

comment by habryka (habryka4) · 2018-08-31T21:51:51.330Z · LW(p) · GW(p)

I think given that we seem to have settled on Open Threads being stickied, we can get rid of the first bullet point.

Replies from: Elo
comment by Elo · 2018-09-01T03:42:25.126Z · LW(p) · GW(p)

Done

comment by eukaryote · 2018-09-25T16:13:40.165Z · LW(p) · GW(p)

How do people organize their long ongoing research projects (academic or otherwise)? I do a lot of these but think I would benefit from more of a system than I have right now.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2018-09-25T16:34:20.475Z · LW(p) · GW(p)

I write notes in a single plain text file, using the dates they are made to cite them in newer notes. There are two types of notes, brainstorming throw-away ones that maintain the process of thinking about a problem or of learning something (such as carefully reading a paper), and more lucid ones, with some re-reading value, which are marked differently and have a one-sentence summary. The notes are intended to never be made public, so that I feel free to use them to resolve any silly confusions.
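
A hypothetical fragment of such a file (the markers and the date-as-citation convention here are my illustration, not Nesov's exact format):

```
2018-09-14  throwaway: skimmed the halting-axioms thread; is consistency of T
            really what licenses running the proved-halting programs? cf. 2018-09-02.

2018-09-25  LUCID. Summary: consistency is too weak; some reflection/soundness
            assumption about T is what justifies the construction.
```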

comment by sayan · 2018-09-02T11:27:10.318Z · LW(p) · GW(p)

Just finished reading Yuval Noah Harari's new book 21 Lessons for the 21st Century. Primary reaction: even if you already know all the things presented in the book, it is worth a read just for the clarity it brings to the discussion.

Replies from: ChristianKl
comment by ChristianKl · 2018-09-02T15:13:19.086Z · LW(p) · GW(p)

Since it doesn't say anything about the content, I don't find this comment valuable.

Replies from: Sherrinford, Elo
comment by Sherrinford · 2018-09-04T15:31:47.887Z · LW(p) · GW(p)

Here is a review: https://www.economist.com/books-and-arts/2018/09/01/big-data-is-reshaping-humanity-says-yuval-noah-harari

comment by Elo · 2018-09-02T19:22:47.334Z · LW(p) · GW(p)

Interesting because I do. Slightly updated towards reading this.

comment by Theist · 2018-09-19T18:33:44.995Z · LW(p) · GW(p)

This article seems to have some bearing on decision theory, but I don't know enough about it or quantum mechanics to say what that bearing might be.

I'd be interested to know others' take on the article.

Replies from: Mitchell_Porter, Pattern
comment by Mitchell_Porter · 2018-09-19T23:48:59.478Z · LW(p) · GW(p)

It's a minor new quantum thought experiment which, as often happens, is being used to promote dumb sensational views about the meaning or implications of quantum mechanics. There's a kind of two-observer entangled system (as in "Hardy's paradox"), and then they say, let's also quantum-erase or recohere one of the observers so that there is no trace of their measurement ever having occurred, and then they get some kind of contradictory expectations with respect to the measurements of the two observers.

Undoing a quantum measurement in the way they propose is akin to squirting perfume from a bottle, then smelling it, and then having all the molecules in the air happening to knock all the perfume molecules back into the bottle, and fluctuations in your brain erasing the memory of the smell. Classically that's possible but utterly unlikely, and exactly the same may be said of undoing a macroscopic quantum measurement, which requires the decohered branches of the wavefunction (corresponding to different measurement outcomes) to then separately evolve so as to converge on the same state and recohere.

Without even analyzing anything in detail, it is hardly surprising that if an observer is subjected to such a highly artificial process, designed to undo a physical event in its totality, then the observer's inferences are going to be skewed somehow. So, you do all this and the observers differ in their quantum predictions somehow. In their first interpretation (2016), Frauchiger and Renner said that this proves many worlds. Now (2018), they say it proves that quantum mechanics can't describe itself. Maybe if they try a third time, they'll hit on the idea that one of the observers is just wrong.

comment by Pattern · 2018-09-22T17:33:14.197Z · LW(p) · GW(p)

Someone made a post [LW · GW] on it.

comment by ImmortalRationalist · 2018-09-03T03:05:28.956Z · LW(p) · GW(p)

Should the mind projection fallacy actually be considered a fallacy? It seems like being unable to imagine a scenario where something is possible is in fact Bayesian evidence that it is impossible, but only weak Bayesian evidence. Being unable to imagine a scenario where 2+2=5, for instance, could be considered evidence that 2+2 ever equaling 5 is impossible.
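
One way to make "weak Bayesian evidence" precise (a gloss on the comment above, not something the poster wrote), via the odds form of Bayes' rule:

```latex
\frac{P(\text{impossible} \mid \text{can't imagine})}
     {P(\text{possible} \mid \text{can't imagine})}
\;=\;
\frac{P(\text{can't imagine} \mid \text{impossible})}
     {P(\text{can't imagine} \mid \text{possible})}
\cdot
\frac{P(\text{impossible})}{P(\text{possible})}
```

The likelihood ratio presumably exceeds 1 (imagination fails more often for impossible things), but not by much (it also fails for merely unfamiliar ones), so the update is real but small.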

Replies from: Oscar_Cunningham, Pattern
comment by Oscar_Cunningham · 2018-09-03T08:08:29.260Z · LW(p) · GW(p)

> being unable to imagine a scenario where something is possible

This isn't an accurate description of the mind projection fallacy. The mind projection fallacy happens when someone thinks that some phenomenon occurs in the real world but in fact the phenomenon is a part of the way their mind works.

But yes, it's common to almost all fallacies that they are in fact weak Bayesian evidence for whatever they were supposed to support.

Replies from: TAG
comment by TAG · 2018-09-05T11:50:06.519Z · LW(p) · GW(p)

Accusations that something or other is a mind projection generally lack rigorous criteria. The conclusion of a mind-projection argument generally ends up supporting the intuitions of the person making it.

comment by Pattern · 2018-09-04T03:31:08.649Z · LW(p) · GW(p)