Open & Welcome Thread - October 2019

post by Ruby · 2019-10-01T23:10:57.782Z · LW · GW · 46 comments

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW].

The Open Thread sequence is here [? · GW].

46 comments

Comments sorted by top scores.

comment by jasoncrawford · 2019-10-17T01:06:29.063Z · LW(p) · GW(p)

Hi everyone. I've discovered the rationality community gradually over the last several years, starting with Slate Star Codex, at some point discovering Julia Galef on Twitter/Facebook, and then reading Inadequate Equilibria. I still have tons of material on this site to go through!

I'm also the author of a blog, The Roots of Progress (https://rootsofprogress.org), about the history of technology and industry, and more generally the story of human progress.

Replies from: eigen, None
comment by eigen · 2019-10-16T23:43:58.996Z · LW(p) · GW(p)

Oh wow, welcome. I've read many essays on your blog and I think they are great.

I believe you'll find a lot of content (and people) here that share the noble pursuit of your blog.

comment by [deleted] · 2019-10-17T14:34:18.948Z · LW(p) · GW(p)

I, like eigen, am also a fan of your blog! Welcome!

comment by countedblessings · 2019-10-02T02:21:24.397Z · LW(p) · GW(p)

Would people be interested in a series of posts about category theory? There are a lot of great introductions to the subject out there, but I'd like to fill a particular niche—I don't want to assume my audience knows topology yet. I think you can still get a lot of value out of category theory at the high school senior level.

Replies from: Benito, Hazard, ryan_b, Pattern, Dmar
comment by Ben Pace (Benito) · 2019-10-02T02:25:32.940Z · LW(p) · GW(p)

That sounds quite interesting to me.

comment by Hazard · 2019-10-02T20:45:07.276Z · LW(p) · GW(p)

I'd be interested! No knowledge of topology. I've been annoyed several times by programming-conference talks titled "Why Programmers Should Learn Category Theory" that never explain why; they only define basic CT ideas and say "These are cool!" I'm still overall convinced that there are interesting things hiding in CT.

Replies from: philh
comment by philh · 2019-10-03T14:59:22.101Z · LW(p) · GW(p)

Not to steal countedblessings' thunder, but you may be interested in "Category Theory for Programmers".

I'm not actually convinced that "programmers" in general should learn category theory. (Though I don't know it well, myself.) I do think there's an analogy between programming and category theory which is interesting to think about and can lead to important insights in PL design; but when someone else has had those insights, other people can use them without knowing category theory.

https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/
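
To give the analogy some concrete flavor: the most commonly cited bridge between the two subjects is the Functor type class, the programming counterpart of a functor between categories. Below is a minimal sketch in Haskell (the language Milewski's book uses); the Tree type and the example in main are made up purely for illustration, not taken from the book.

```haskell
-- Minimal sketch: a Functor instance for a made-up Tree type.
-- fmap is the programming counterpart of a functor's action on morphisms.
data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving Show

instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Node l r) = Node (fmap f l) (fmap f r)

-- The functor laws are a proof obligation on the programmer, not the compiler:
--   fmap id == id
--   fmap (g . f) == fmap g . fmap f

main :: IO ()
main = print (fmap (* 2) (Node (Leaf 1) (Leaf 3)))
-- Node (Leaf 2) (Leaf 6)
```

This is the sort of thing meant above: once someone has noticed that mapping over a container should obey these laws, others can use the resulting interface without knowing any category theory.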

Replies from: countedblessings, SaidAchmiz
comment by countedblessings · 2019-10-04T19:01:23.111Z · LW(p) · GW(p)

There are a bunch of really great introductions to category theory, and this is definitely one of them. There are also YouTube videos of his lectures covering the same material.

My plan is to go very slowly, and to assume everything needs to be explained in as much detail as possible. This will make for a tediously long but hopefully very readable series.

comment by Said Achmiz (SaidAchmiz) · 2019-10-13T20:45:04.352Z · LW(p) · GW(p)

For what it’s worth, I tried reading that (I’d seen it recommended elsewhere, and this latest mention reminded me to give it a try).

I haven’t quite given up yet, but it’s not looking good. I found the preface to be thoroughly unconvincing as an argument for why I (a “working programmer”, as Milewski puts it) would want to learn category theory; and the next chapter (the Introduction) seems to be packed with some of the most absurd analogies I have ever seen, anywhere. (One of which is outright insulting—what, so the reason we need static type systems is that programmers are nothing more than monkeys, hitting keys at random, and with static typing, that randomly generated code will not compile if it’s wrong? But I am not a monkey; how does this logic apply to a human being who is capable of thought, and who writes code with a purpose and according to a design? Answer: it doesn’t.)

I will report back when I’ve read more (or finally given up), I suppose…

Replies from: gjm, philh
comment by gjm · 2019-10-13T22:04:26.265Z · LW(p) · GW(p)

Immediately after the bit about monkeys there's this:

The usual goal in the typing monkeys thought experiment is the production of the complete works of Shakespeare. Having a spell checker and a grammar checker in the loop would drastically increase the odds. The analog of a type checker would go even further by making sure that, once Romeo is declared a human being, he doesn’t sprout leaves or trap photons in his powerful gravitational field.

which feels like a bit of an own goal to me, because I suspect the analogue of a type checker would actually make sure that once Romeo is declared a Montague it's a type error for him to have any friendly interactions with a Capulet, thus preventing the entire plot of the play.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-10-13T22:15:03.355Z · LW(p) · GW(p)

That’s an interesting (and amusing) point—I didn’t even think of that when reading it! (I was too busy shaking my head at the basic absurdity of the analogy: what human playwright, when writing a play, would accidentally have one of their main characters turn into a plant or a stellar object or any such thing? If we take the analogy at face value, doesn’t it show that type checking is manifestly unnecessary if your code is being written by humans…?)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-10-15T01:35:46.412Z · LW(p) · GW(p)

Furthermore, I don't get how type checking would help monkeys write code any better. They would just have less of their code compile (and the same is true of adding a spelling and grammar checker to their Shakespeare plays).
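
For concreteness, here is the kind of mistake a type checker does rule out, sketched in Haskell with made-up types (Human, Plant, and sproutLeaves are purely illustrative); whether human programmers make this class of mistake often enough to justify the machinery is exactly what's being debated here.

```haskell
-- Made-up types to make the "Romeo sprouts leaves" analogy concrete.
newtype Human = Human { humanName :: String }
newtype Plant = Plant { plantSpecies :: String }

sproutLeaves :: Plant -> Plant
sproutLeaves plant = plant  -- the body is irrelevant; only the type matters here

romeo :: Human
romeo = Human "Romeo"

-- Uncommenting the next line makes the program fail to compile:
--   oops = sproutLeaves romeo
--   error: Couldn't match expected type 'Plant' with actual type 'Human'

main :: IO ()
main = putStrLn (humanName romeo)
```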

comment by philh · 2019-10-14T13:16:35.372Z · LW(p) · GW(p)

To be clear, I recommend the "teaching category theory to programmers" aspect of it, which I remember being effective at teaching me. I have no particular memories of the "convincing programmers to learn category theory" aspect.

comment by ryan_b · 2019-10-02T20:40:46.767Z · LW(p) · GW(p)

I support this idea. Especially if it incorporates some motivation for topology, which weirdly seems to hang out by itself until it suddenly becomes critical.

Replies from: gjm
comment by gjm · 2019-10-06T09:57:11.984Z · LW(p) · GW(p)

It's more usual for topology to motivate category theory than the other way around. (That's where category theory originally came from, historically.)

comment by Pattern · 2019-10-03T00:27:30.177Z · LW(p) · GW(p)

As someone who doesn't know topology yet, that sounds amazing!

comment by Dmar · 2019-10-07T14:19:47.070Z · LW(p) · GW(p)

Interested!

comment by Wei Dai (Wei_Dai) · 2019-10-08T16:33:14.872Z · LW(p) · GW(p)

What are some good discussions of "ideology" from a rationalist perspective? E.g., what it is, what causes people to have them, what's the best way to fight harmful ideologies, how to prevent harmful ideologies from forming in one's own social movement, etc. From what I've been able to find myself, it seems to be a rather neglected topic on LW.

I'd also be interested in good discussions of it from outside the rationalist community.

Replies from: ryan_b, Raemon, hg00
comment by ryan_b · 2019-10-31T16:21:18.078Z · LW(p) · GW(p)

I have always understood this to be a consequence of the Politics is the Mindkiller [LW · GW] custom. The most relevant pieces outside the Craft and the Community on LessWrong are Raemon's The Relationship Between the Village and the Mission, and The Schelling Choice is Rabbit, not Stag [LW · GW].

I can think of a couple relevant-but-not-specific areas outside the rationalist community:

Multivocality: the fact that single actions can be interpreted coherently from multiple perspectives simultaneously, that single actions can be moves in many games at once, and that public and private motivations cannot be parsed.

This leads to something they call robust action, which basically means "hard to interfere with." So my prior for successful movements is a morally multivocal ideology for hunting stag robustly.

comment by Raemon · 2019-10-08T20:55:29.413Z · LW(p) · GW(p)

Might be useful to taboo ideology.

It seems like a few sequence posts touch on this (Guardians of Ayn Rand [LW · GW], Guardians of Truth [LW · GW], and other pieces of the Craft and the Community sequence). I'm not sure if they seemed irrelevant to the question you meant to be orienting around, or you were looking for newer things, or just forgot.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-10-09T08:38:47.682Z · LW(p) · GW(p)

I guess by ideology I mean a set of ideas or beliefs that are used to rally a social movement around, which tend to become unquestionable "truths" once the social movement succeeds in gaining power. So for example theism, Communism, Aryan "master race". The "Guardian" posts you cite do seem somewhat relevant but don't really address the main questions I have, which I list below. (Also I didn't find them because I was searching for "ideology" as the keyword.)

  1. Eliezer's posts don't seem to address the "rallying flag" function of ideology. Given that ideologies are useful as rallying flags for people to coordinate / build alliances around but can also become increasingly harmful as they become more embedded (into e.g. education and policy) and unquestionable, what should someone trying to build a social movement do?
  2. What to do if one observes some harmful ideology growing in influence? If you try to argue against it, you become an enemy of the movement and might suffer a lot of personal consequences. If you try to build a counter-movement, you probably end up creating a movement with its own ideology, which might not be less harmful.
  3. What to do if the harmful ideology has already taken over a whole society?

Replies from: Wei_Dai, Raemon
comment by Wei Dai (Wei_Dai) · 2019-10-09T19:52:31.495Z · LW(p) · GW(p)

Given that ideologies are useful as rallying flags for people to coordinate / build alliances around but can also become increasingly harmful as they become more embedded (into e.g. education and policy) and unquestionable, what should someone trying to build a social movement do?

One idea is to have some sort of timed auto-destruct mechanism for the ideology. For example, have the founders and other high-status members of the movement record a video asking people to question the ideology and giving a bunch of reasons why the ideology might be false or why people shouldn't be so certain about it, to be released after the movement succeeds in gaining power. People concerned about ideologies could try to privately talk the leaders into doing this. But with deepfakes being possible, this might not work so well in the future (and also the timing mechanism seems tricky to get right), so I wonder what else can be done.

comment by Raemon · 2019-10-09T20:20:02.497Z · LW(p) · GW(p)

My guess is that there are fragments of things addressing at least part of this, just not oriented around ideology as a keyword (belief as attire [? · GW], professing and cheering [LW · GW], fable of science and politics [LW · GW]). I guess one thing is that much of the sequences focuses on "here is a way for beliefs to be wrong" rather than examining more closely why having this way-of-treating-beliefs might be useful. (Although I think Robin Hanson's work often explores that more directly.)

What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (Although you might mean the more specific subquestion of "how to stop harmful ideologies while maintaining/raising the sanity waterline", i.e. many people fight harmful ideologies with counter-ideologies.)

Some random additional thoughts (this might also be part of what you were already thinking of; it's just what my brain had easily available):

I think I see the word ideology as a bit more neutral than you're phrasing it here. Or at least, your examples are 'generally accepted around here as false/bad'. But LessWrong has an overall ideology of beliefs-that-we-coordinate-around, complete with "those beliefs being object-level useful" and "some people using those beliefs as attire, sometimes for reasons that are plausibly virtuous and sometimes for reasons that seem like exactly the sort of thing Eliezer wrote the sequences to complain about."

Science also has an ideology (similar to, but different from, Yudkowskianism). The sequences also cover "how to address wrongness in the science ideology", I think. For example, in Science or Bayes [LW · GW]:

In physics, you can get absolutely clear-cut issues.  Not in the sense that the issues are trivial to explain.  But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports.  But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
I wanted a very clear example—Bayes says "zig", this is a zag—when it came time to break your allegiance to Science. [emphasis mine]
"Oh, sure," you say, "the physicists messed up the many-worlds thing, but give them a break, Eliezer!  No one ever claimed that the social process of science was perfect.  People are human; they make mistakes."
But the physicists who refuse to adopt many-worlds aren't disobeying the rules of Science.  They're obeying the rules of Science.
The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory.  You perform the test, and the new theory is confirmed or falsified.  If it's confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored.  If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.

(Paul Graham's "What you can't say" is also relevant)

So, one way to fight bad/wrong/incomplete ideology is... well, to argue against it, if you're in an environment where that sort of thing works. If you're not in an environment conducive to clear argument, the obvious choices are "first try to make the environment conducive to argument" or, well, various dark-artsy rhetorical flourishes that work symmetrically whether your ideas are good or not.

It seems like you have more specific questions in mind (would be curious what your motivating examples are).

The way I'd have carved up your question space is less like "how to stop/fight ideologies" and more like "what to do about the general fact of some sets of beliefs becoming sticky over time?"

The sequences also touch on the claim "Death is good because it kills old scientists that are stuck in their ways, which allows science to march forward", to which Eliezer replies "Jesus Christ, sure, but you can just make scientists retire without killing them." But you do still need to implement the part where you actually make them retire as public figures.

Replies from: Wei_Dai, Raemon
comment by Wei Dai (Wei_Dai) · 2019-10-10T19:02:53.436Z · LW(p) · GW(p)

What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (Although you might mean the more specific subquestion of "how to stop harmful ideologies while maintaining/raising the sanity waterline", i.e. many people fight harmful ideologies with counter-ideologies.)

Right, politics as usual seems to imply a sequence of ideologies replacing each other, and it might just be a random walk as far as how beneficial/harmful the ideologies are. My question is how to do better than that.

It seems like you have more specific questions in mind (would be curious what your motivating examples are).

My original motivating examples came from contemporary US politics, so it's probably better not to bring them up here, but I'm now also worried about the implications for the "long reflection" / "great deliberation".

first try to make the environment conducive to argument

By doing what? I mean it seems possible to build environments conducive to argument for a relatively small group of people, like LW, but I don't know what can be done to push a whole society in that direction, so that's part of my question.

The way I’d have carved up your question space is less like “how to stop/fight ideologies” and more like “what to do about the general fact of some sets of beliefs becoming sticky over time?”

I think I'm still more inclined to use the first framing, because if we make beliefs less sticky, it might just speed up the cycles of ideologies replacing each other, and it seems like the bigger problem is "beliefs as rallying flags" (i.e., beliefs can be selected for because they are good rallying flags instead of for epistemic reasons).

comment by Raemon · 2019-10-09T20:21:28.619Z · LW(p) · GW(p)

(btw, I think this comment would work well as a question, which might make it easier to reference in the future)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-10-10T19:15:36.466Z · LW(p) · GW(p)

I'd have no problem with turning it into a top-level question post, if that's something you can do. (I posted it in Open Thread in case there was already some sequence of posts that directly addressed my questions that I simply missed.) If not, I may write a question post after I do some more research and think/talk things over.

comment by hg00 · 2019-10-14T04:31:45.833Z · LW(p) · GW(p)

The mental models in this post seem really generally useful: https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs [LW · GW]

comment by Siljamaeki (Siljamäki) · 2019-10-29T13:34:59.052Z · LW(p) · GW(p)

Hi there! I've stumbled upon this forum on and off while reading up on Effective Altruism, which I first got acquainted with in December 2018 (less than a year ago). I'm interested in learning how to think and act more rationally, both for my own personal development and within EA issues. I look forward to binging on all the interesting articles and discussions on here and possibly meeting up IRL with people in Stockholm, Sweden.

Question: does my username show up alright? No font break or weird symbols?

Edit: I had it changed from Siljamäki to Siljamaeki. Just in case.

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-29T19:07:12.309Z · LW(p) · GW(p)

Welcome!

Username looks good to me (Chrome + Mac OS).

comment by An_Amazing_Login · 2019-10-12T00:44:15.254Z · LW(p) · GW(p)

Hey people!

I'm "new" here, having spent the last 2 years reading and following the "core" material. I have no clue how I got here. I remember following an idle path of exploration and suddenly finding this beautiful place where it seems like there are true adults. [LW · GW]

I'm a young adult constantly amazed at the scope of this world, and a composition student who has somehow gained (mild) success (read: gotten paid anything at all). I've used basic rationality tools when deciding if my (then) relationship was worth it, found out that it wasn't, didn't accept the answer, tried again when everything in that relationship was on fire, and managed to leave relatively unharmed. I hope to gain some friendships from this community, and I'm looking for people willing to do some betting to help train a beginner rationalist mind.

I think it's time for me to engage in the community, now that I'm young and able to change my habits "easily" for the better. I've been thinking of starting some sort of rationalist hangout/meetup in Malmö, Sweden (where I currently live). I'm slightly unsure where I could check if there's any interest at all; pointers would be welcome :)

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-12T01:32:24.535Z · LW(p) · GW(p)

Adding yourself to the map is the first thing I would do, and then most likely I would see whether there have been any meetups historically anywhere close to your area. It appears there is a meetup in 10 days in Copenhagen, which seems pretty close to Malmö.

https://lesswrong.com/events/A7LAzFXD79pgFtsdA/cph-meetup-10-10-19

At that meetup you might also be able to figure out whether there are any people directly from your area interested in a meetup.

comment by DanielFilan · 2019-10-22T05:40:09.701Z · LW(p) · GW(p)

Rationality is basically therapy [citation needed]. A common type of therapy is couples therapy. As such, you'd think that 'couples rationality' would exist. I guess it partially does (Double Crux, Againstness, "group rationality" when n=2, polyamory advocacy), but it seems less prevalent than you'd naively think. Maybe because rationalists tend to be young unmarried people? Still, it seems like a shame that it's not more of a thing.

Replies from: gilch, ChristianKl
comment by gilch · 2019-10-24T06:11:31.542Z · LW(p) · GW(p)

Aumann's agreement theorem: two agents acting rationally cannot agree to disagree.
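
A compact statement, as a reference point (the notation here is my own gloss on the standard formulation, not taken from the linked posts): two agents share a common prior P, each has their own private information, and q_i denotes agent i's posterior probability for an event E given that information.

```latex
% Aumann (1976), stated compactly. Notation is illustrative:
%   P    -- common prior shared by both agents
%   q_i  -- agent i's posterior P(E | own information), for i = 1, 2
\[
  \big(q_1 \text{ and } q_2 \text{ are common knowledge}\big)
  \;\Longrightarrow\;
  q_1 = q_2 .
\]
```

The heavy lifting is done by the common prior and the common-knowledge condition, which is what the replies below push back on.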

Share Models, Not Beliefs [? · GW]

For bigger groups: Voting Theory Primer for Rationalists [? · GW]

Replies from: clone of saturn
comment by clone of saturn · 2019-10-24T07:08:37.360Z · LW(p) · GW(p)

Aumann's agreement theorem is an extremely bad basis for any kind of couples therapy...

Replies from: gilch
comment by gilch · 2019-10-25T04:39:55.876Z · LW(p) · GW(p)

...if less than two of them are rationalists?

Replies from: clone of saturn
comment by clone of saturn · 2019-10-25T05:05:05.392Z · LW(p) · GW(p)

Literally no one is rational enough to actually reach Aumann agreement on anything but a simple toy problem. See https://www.lesswrong.com/posts/JdK3kr4ug9kJvKzGy/probability-space-and-aumann-agreement [LW · GW]

comment by ChristianKl · 2019-10-28T08:03:54.433Z · LW(p) · GW(p)

Therapy is a specific setting. You have a therapist and you have a client (or two). Most rationality techniques, on the other hand, seem to be designed to be done by a single person.

comment by WannabeChthonic · 2019-10-11T00:01:50.472Z · LW(p) · GW(p)

I found out about LessWrong via this community session at the 35th Chaos Communication Congress. It was by far the best talk I had while at congress. And that says something, because during congress I usually have lots and lots of good talks.

Personally, I feel like there are rather-emotional and rather-rational people. I'm far into the rather-rational territory, and I look forward to meeting new people, learning about new ideas, and generally advancing my decision making.

I study computer science and have read one or another grand philosophical book so far... I'd personally consider myself "GIT/GP/GO", which is Geek Code V3 for "Geek of Information Technology / Geek of Philosophy / Geek of Other".

comment by ryan_b · 2019-10-03T14:31:10.384Z · LW(p) · GW(p)

I've noticed I navigate my entertainment largely by things to avoid. I hate coming of age tales in general and anything involving a school in particular. I despise children-of-destiny stories, which is weird because I've always liked prophecies. I avoid books when people talk about the worldbuilding.

This strikes me as strange considering how much of my reading when I was young consisted of a child of destiny who comes of age amid crappy worldbuilding. Maybe it is an acquired sensitivity or something.

Replies from: polymathwannabe
comment by polymathwannabe · 2019-10-13T16:52:29.296Z · LW(p) · GW(p)

What do you like?

Replies from: ryan_b
comment by ryan_b · 2019-10-15T14:27:15.311Z · LW(p) · GW(p)

Lately short stories, action, and good prose. Short stories are an excellent antidote to the glut of long book series; they don't allow enough space for fluff, so I find they are consistently better reads. Also lower investment, which is nice. And good prose is good prose, like always.

A year or so ago I read some of Ursula K. Le Guin's short stories, and that was when I really noticed that there were levels to the whole business. I don't recall which story, but the scene that struck me was just someone walking down a road in the autumn. I now suspect that depicting banal events well is a mark of craft in the same way as drawing a circle or squaring an edge.

comment by Rafael Harth (sil-ver) · 2019-10-21T20:15:49.413Z · LW(p) · GW(p)

Has anyone written a summary of all organizations that work on AI alignment? If not, what is the best way to keep track of that?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-10-21T23:47:49.057Z · LW(p) · GW(p)

This post [EA · GW] has a discussion of every major alignment organization, and summarizes their mission to some extent.

comment by Richard Meadows (richard-meadows-1) · 2019-10-11T19:41:34.018Z · LW(p) · GW(p)

Question/feature request: does cross-posting automatically add a canonical URL element pointing to the original content? If not, would it be possible to do so? (Google doesn't necessarily penalise duplicate content, but it does affect search rankings etc.)

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-11T21:44:00.766Z · LW(p) · GW(p)

We already implemented this!

When we set up crossposting we can set a flag on whether to have the canonical URL point towards its original source (this doesn't always make sense, for example for things like the AI Alignment Newsletter), but if you want to automatically crosspost while preserving the canonical URL we can set that up for you.

Replies from: richard-meadows-1