Open thread, Oct. 27 - Nov. 2, 2014

post by MrMind · 2014-10-27T08:58:10.828Z · LW · GW · Legacy · 403 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

comment by Artaxerxes · 2014-10-27T15:48:37.959Z · LW(p) · GW(p)

Not all of the MIRI blog posts get cross-posted to LessWrong. Examples include the recent post AGI outcomes and civilisational competence and most of the conversations posts. Since the comment section on the MIRI site doesn't seem to get used much, if at all, perhaps these posts would receive more visibility, and more discussion would occur, if they were linked to or cross-posted on LW?

Replies from: John_Maxwell_IV, Curiouskid
comment by John_Maxwell (John_Maxwell_IV) · 2014-10-27T21:51:27.393Z · LW(p) · GW(p)

Re: "civilizational incompetence". I've noticed "civilizational incomptence" being used as a curiosity stopper. It seems like people who use the phrase typically don't do much to delve in to the specific failure modes civilization is falling prey to in the scenario they're analyzing. Heaven forbid that we try to come up with a precise description of a problem, much less actually attempt to solve it.

(See also: http://celandine13.livejournal.com/33599.html)

Replies from: Artaxerxes
comment by Artaxerxes · 2014-10-28T01:50:05.799Z · LW(p) · GW(p)

I, too, have seen it used too early or in contexts where it probably shouldn't have been used. As long as people use it not as an explanation for something but as a description or judgement, its use as a curiosity stopper is avoidable.

So I suppose there is a difference between saying "bad thing x happens because of civilisational incompetence", and "bad thing x happens, which is evidence that there is civilisational incompetence."

Separate from this concern, it also has a slight LessWrong-exceptionalism, 'peering at the world from above the sanity waterline' vibe to it. But that's no biggie.

comment by Curiouskid · 2014-10-31T06:25:54.255Z · LW(p) · GW(p)

I had the same thought when I read Hayworth's recent interview. It's really good.

comment by Artaxerxes · 2014-10-27T15:36:08.829Z · LW(p) · GW(p)

Is the recommended courses page on MIRI's website up to date with regard to which textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.

I remember lukeprog used to recommend Bermudez's Cognitive Science over many others. But then So8res reviewed it and didn't like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven't really seen anyone say much about.

There are a few other things like this: for example, So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it doesn't seem to appear on the course list anymore, and under the heuristics and biases section Thinking and Deciding (once reviewed by Vaniver) is recommended.

Replies from: So8res, Strangeattractor, None
comment by So8res · 2014-10-28T20:25:50.943Z · LW(p) · GW(p)

No, it's not up to date. (It's on my list of things to fix, but I don't have many spare cycles right now.) I'd start with a short set theory book (such as Naive Set Theory), follow it up with Computation and Logic (by Boolos), and then (or if those are too easy) drop me a PM for more suggestions. (Or read the first four chapters of Jaynes on Probability Theory and the first two chapters of Model Theory by Chang and Keisler.)

Edit: I have now updated the course list (or, rather, turned it into a research guide); it is fairly up to date (if unpolished) as of 6 Nov 14.

comment by Strangeattractor · 2014-10-29T12:04:03.507Z · LW(p) · GW(p)

I have some suggestions for books related to the topics you mentioned. There's a pretty good section on cognitive ergonomics in Wickens' Introduction to Human Factors Engineering that is a clear introduction to the topic, and mentions some examples of design issues that can arise from human beings' cognitive limitations and biases.

Also, Chris Eliasmith's book Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems shows some of the technical approaches people have taken to modelling what happens in the brain.

I'm not sure if either of those is what you're looking for, but I found them interesting.

comment by [deleted] · 2014-10-28T08:17:29.387Z · LW(p) · GW(p)

Is the recommended courses page on MIRI's website up to date with regard to which textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.

I think Understanding Machine Learning (out this year) is better than Bishop's book (which is, frankly, insufferably obscurantist), and that instead of model-checking you ought to be learning a proof assistant (I learned Coq from Benjamin Pierce's Software Foundations).

Replies from: Artaxerxes
comment by Artaxerxes · 2014-10-28T09:00:01.147Z · LW(p) · GW(p)

The book the page recommends is Kevin Murphy's Machine Learning: A Probabilistic Perspective. I don't see any of Chris Bishop's books on the MIRI list right now; was Pattern Recognition and Machine Learning there at some point? Or am I missing something you're saying?

Replies from: None
comment by [deleted] · 2014-10-28T09:52:54.835Z · LW(p) · GW(p)

Oh, well all right then. I was under the mistaken impression Bishop's book was listed. My bad!

comment by Artaxerxes · 2014-10-27T16:11:35.400Z · LW(p) · GW(p)

Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another; a lot has changed in 2+ years. Maybe to coincide with the December fundraising drive?

Replies from: None, Evan_Gaensbauer
comment by [deleted] · 2014-10-28T08:20:26.951Z · LW(p) · GW(p)

If he could avoid repeating the claim that UFAI is so easily compressible it could "spread across the world in seconds" through the internet, that would be quite helpful, actually. Even in the rich world, with broadband, transferring an intelligent agent all across the world would take whole hours, especially given the time necessary for the bugger to crack into and take control of the relevant systems (packaging itself as a trojan horse and uploading itself to 4chan in a "self-extracting zip" of pornography would take even longer).
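
A rough back-of-the-envelope check of the "whole hours" claim; the 100 GB agent size and 10 MB/s link speed below are illustrative assumptions, not figures from the comment:

```latex
% Hypothetical numbers: a 100 GB agent over a 10 MB/s broadband link.
t = \frac{\text{size}}{\text{rate}}
  = \frac{100\ \text{GB}}{10\ \text{MB/s}}
  = 10^{4}\ \text{s} \approx 2.8\ \text{hours}
```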

comment by Evan_Gaensbauer · 2014-10-28T06:53:20.552Z · LW(p) · GW(p)

I just sent a message to Luke. Hopefully he will notice it.

comment by NancyLebovitz · 2014-10-29T16:02:13.975Z · LW(p) · GW(p)

The outside view.... (The whole link is quoted.)

Yesterday, before I got here, my dad was trying to fix an invisible machine. By all accounts, he began working on the phantom device quite intently, but as his repairs began to involve the hospice bed and the tubes attached to his body, he was gently sedated, and he had to leave it, unresolved.

This was out-of-character for my father, who I presumed had never encountered a machine he couldn’t fix. He built model aeroplanes in rural New Zealand, won a scholarship to go to university, and ended up as an aeronautical engineer for Air New Zealand, fixing engines twice his size. More scholarships followed and I first remember him completing his PhD in thermodynamics, or ‘what heat does’, as he used to describe it, to his six-year-old son.

When he was first admitted to the hospice, more than a week ago, he was quite lucid – chatting, talking, bemoaning the slow pace of dying. “Takes too long,” he said, “who designed this?” But now he is mostly unconscious.

Occasionally though, moments of lucidity dodge between the sleep and the confusion. “When did you arrive?” he asked me in the early hours of this morning, having woken up wanting water. Once the water was resolved he was preoccupied about illusory teaspoons lost among the bedclothes, but then chatted in faint short sentences to me and my step-mum before drifting off once more.

Drifting is a recent tendency, but in the lucidity he has remained a proud engineer. It’s more of a vocation, he always told his students, than a career.

Last week, when the doctors asked if he would speak to medical trainees, he was only too happy to have a final opportunity to teach. Even the consultants find his pragmatic approach to death somewhat out of the ordinary and they funnelled eager learners his way where he was happy to answer questions and demonstrate any malfunctioning components.

“When I got here”, he explained to them, “I was thermodynamically unstable but now I think I’m in a state of quasi-stability. It looks like I have achieved thermal equilibrium but actually I’m steadily losing energy.”

“I’m not sure”, I said afterwards, “that explaining your health in terms of thermodynamics is exactly what they’re after.”

“They’ll have to learn,” he said. “You can’t beat entropy.”

comment by Gunnar_Zarncke · 2014-10-28T08:32:52.707Z · LW(p) · GW(p)

Today I had an aha moment when discussing coalition politics (I didn't call it that, but it was) with elementary schoolers, 3rd grade.

As a context: I offer an interdisciplinary course in school (voluntary, one hour per week). It gives a small group of pupils a glimpse of how things really work. Call it rationality training if you want.

Today the topic was pairs and triples. I used analogies from relationships: couples, parents, friendships. What changes in a relationship when a new element appears? Why do relationships form in the first place? This revealed differences in how friendships work among boys and among girls, and that in this class, at this moment, at least the girl friendships were largely coalition politics: "If you do this you are my best friend," or "No, we can't be best friends if she is your best friend." For the boys it appears to be at least quantitatively different. But maybe just the surface differs.

In the end I represented this as graphs (kind of) on the board. The children were delighted to draw their own coalition diagrams, even abbreviating names to single letters. You wouldn't have guessed that these diagrams came from 3rd graders.
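
For anyone who wants to play with the same kind of diagram off the board, here is a minimal sketch of a coalition graph as a Python adjacency map; the names and ties are made up for illustration, not the actual class data:

```python
# A toy coalition diagram like the one drawn on the board.
# Names (single letters) and declared ties are hypothetical.
friendships = {
    "A": {"B"},            # A claims B as best friend
    "B": {"A", "C"},
    "C": {"B"},
    "D": set(),            # an isolated node: no declared ties
}

# A tie counts as mutual only if both sides declare it.
mutual = {(x, y) for x, ys in friendships.items()
          for y in ys if x in friendships.get(y, set()) and x < y}
print(mutual)  # {('A', 'B'), ('B', 'C')} (set order may vary)
```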

Replies from: MrMind, ChristianKl
comment by MrMind · 2014-10-28T14:59:05.727Z · LW(p) · GW(p)

I wonder what would happen if we trained monkeys to reveal this kind of detail to us.

Replies from: Emile, Gunnar_Zarncke
comment by Emile · 2014-10-28T21:46:19.587Z · LW(p) · GW(p)

You may be interested in Chimpanzee Politics by Frans de Waal (something like that), which is about exactly that: observing a group of chimps in a zoo, and how their politics and alliances evolve, with a couple of coups.

Replies from: MrMind
comment by MrMind · 2014-10-29T08:14:17.173Z · LW(p) · GW(p)

Great! Added to my Amazon wishlist ;)

comment by Gunnar_Zarncke · 2014-10-28T17:44:14.698Z · LW(p) · GW(p)

But maybe we could. Considering the tricky setups scientists use to compare the intelligence of mice and rats, I'd think it should be possible to devise an experiment that teaches monkeys to reveal their clan structure. I'm thinking along the lines of first training an association of buttons with clan members (photos), and then allowing them to select groups which should or should not get a treat.

comment by ChristianKl · 2014-10-29T11:26:42.060Z · LW(p) · GW(p)

How did you deal with the prospect of one of the kids being emotionally hurt by the whole process of being explicit about relationships?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-10-29T11:53:05.907Z · LW(p) · GW(p)

Of course I have an eye on the emotional wellbeing of the children. But I'm not really clear what kind of emotional hurt you mean. Being exposed as, e.g., the loner, possibly? I probably wouldn't try it in this relatively direct way if the group weren't this small (4 children), where I can keep the discourse inspirational and playful at all times.

Replies from: ChristianKl
comment by ChristianKl · 2014-10-29T11:57:06.532Z · LW(p) · GW(p)

Being exposed as, e.g., the loner, possibly?

Yes. Getting children to openly state "We can't be best friends because you are best friends with X" seems to ask for trouble, but if you have enough presence in the room to keep the discourse inspirational and playful it might be fine.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-10-29T12:14:52.964Z · LW(p) · GW(p)

Ah yes. "We can't be best friend because you are best friends with X" wasn't literally said with respect to someone in the room. Something like that was quoted by a girl as an example thus it wasn't personal in that moment but I assume that it is a real statement too.

comment by [deleted] · 2014-10-27T12:06:08.889Z · LW(p) · GW(p)

Recently, I started a writing wager with a friend to encourage us both to produce a novel. At the same time, I have been improving my job hunting by narrowing my focus on what I want out of my next job and how I want it. While doing these two activities, I began to think about what I was adding to the world. More specifically, I began to ask myself what good I wanted to make.

I realized that writing a novel came not from a desire to add a good to the world (I don't want to write a world-changing book), but just from something enjoyable. So, I looked at my job. I realized that it was much the same. I'm not driven to libraries specifically by a desire to improve the world's intellectual resources; that's just a side effect. I'm driven to them out of enjoyment of the work.

So, if I'm not producing good from the two major productions of my life, I thought about what else I could produce or if I should at all. But I couldn't think of any concrete examples of good I could add to the world outside of effective altruism. I'm not an inventor nor am I a culture-shifting artist. But I wanted to find something I could add to the world to improve it, if only for my own vanity.

I decided, for the time being, on myself. Since my two biggest enjoyments (work and play) were important to me as personal achievements, not world achievements, I decided that the best thing to start with was to make myself the most efficient version of me that I could. Part of this probably came from my reading of Theodore Roosevelt's doing much the same to transform himself from an idiot into a badass. Sure, I've already been engaging in self-improvement for a while, but this idea of making the best me is more about trying to produce an individual worth having, rather than just maximizing my utility in a few areas for a few limited goals (e.g. writing a book, getting a job).

I'm sure this sounds simplistic since much of the LW literature already discusses such things, but it was a bit of an "aha" moment for me, and it made optimization and self-improvement more interesting. It made them into concrete projects with a real world application. I'm trying to give the world one less ineffective, dangerously deluded person. That's a good goal to strive for, I like to think.

Replies from: wadavis, ChristianKl, None
comment by wadavis · 2014-10-27T19:00:30.151Z · LW(p) · GW(p)

Yes, take the Invisible Hand approach to altruism: by pursuing your own productive wellbeing you will generate wellbeing in the worlds of others. Trickle-down altruism is a feasible moral policy. Come to the Dark Side and bask in Moral Libertarianism.

comment by ChristianKl · 2014-10-27T15:31:55.273Z · LW(p) · GW(p)

I'm sure this sounds simplistic since much of the LW literature already discusses such things

Important insights usually happen to sound simple, but the insight itself can still take years to achieve.

comment by [deleted] · 2014-10-29T05:07:40.870Z · LW(p) · GW(p)

my reading of Theodore Roosevelt's doing much the same to transform himself from an idiot into a badass

Link/source?

comment by NancyLebovitz · 2014-10-30T22:07:48.492Z · LW(p) · GW(p)

How Communities Work, and What Wrecks Them

One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.

Behavior patterns that grind communities down: endless contrarianism, axe-grinding, persistent negativity, ranting, and grudges.

Replies from: Nornagest
comment by Nornagest · 2014-10-30T22:21:22.418Z · LW(p) · GW(p)

I agree about all of that except for contrarianism (and yes, I'm aware of the irony). You want to have some amount of contrarianism in your ecosystem, because people sometimes aren't satisfied with the hivemind and they need a place to go when that happens. Sometimes they need solutions that work where the mainstream answers wouldn't, because they fall into a weird corner case or because they're invisible to the mainstream for some other reason. Sometimes they just want emotional support. And sometimes they want an argument, and there's a place for that too.

What you don't want is for the community's default response to be "find the soft bits of this statement, and then go after them like a pack of starving hyenas tearing into a pinata made entirely of ham". There need to be safe topics and safe stances, or people will just stop engaging -- no one's always in the mood for an argument.

On the other hand, too much agreeableness leads to another kind of failure mode -- and IMO a more sinister one.

Replies from: Adele_L
comment by Adele_L · 2014-10-31T00:31:57.086Z · LW(p) · GW(p)

The article talked about endless contrarianism, where people disagree as a default reaction, instead of because of a pre-existing difference in models. I think that is a problem in the LW community.

Replies from: TrE
comment by TrE · 2014-10-31T15:47:08.548Z · LW(p) · GW(p)

On the contrary, from my experience it isn't.

Sorry, I could not resist the opportunity. But seriously, I don't often see people disagreeing for the sake of disagreeing. More often, they'll point out different aspects, or their own perspective on a topic. To be honest, support and affirmation are perhaps a bit rarer than they should be, but I've rarely perceived disagreement to be hostile, as opposed to misunderstanding, or legitimate and resolvable via further discussion.

More datapoints, anyone?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-01T20:22:51.612Z · LW(p) · GW(p)

If other people disagree with what I write they usually do it for the sake of disagreeing. However if I disagree... ;)

comment by Evan_Gaensbauer · 2014-10-28T07:00:43.522Z · LW(p) · GW(p)

I posted a link to the 2014 survey in the 'Less Wrong' Facebook group, and some people commented that they filled it out. Another friend of mine started a Less Wrong account to comment that she did the survey, and got her first karma. Now I'm curious how many lurkers become survey participants, and are then incentivized to start accounts to get the promised karma by commenting that they completed it. If it's a lot, that's cool, because having one's first comment upvoted right after registering an account on Less Wrong seems like a way of overcoming the psychological barrier of 'oh, I wouldn't fit in as an active participant on Less Wrong...'

If you, or someone you know, got active on Less Wrong for the first time because of the survey, please reply as a data point. If you're a regular user who has a hypothesis about this, please share. Either way, I'm curious to discover how strong an effect this is, or is not.

Replies from: None, Sjcs
comment by [deleted] · 2014-10-29T05:33:49.841Z · LW(p) · GW(p)

My first comment was after I completed the 2014 survey. I've only been lurking for about a month, and this was the first survey I've participated in.

comment by Sjcs · 2014-10-29T04:02:39.577Z · LW(p) · GW(p)

I have been an on-and-off lurker for ~15 months, and only recently created an account (not because of the survey though). I have participated in both 2013 and 2014's surveys.

comment by Artaxerxes · 2014-10-28T10:43:30.774Z · LW(p) · GW(p)

Someone has created a fake Singularity Summit website.

(Link is to MIRI blog post claiming they are not responsible for the site.)

MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.

comment by Omid · 2014-10-27T15:56:13.713Z · LW(p) · GW(p)

What chores do I need to learn how to do in order to keep a clean house?

Replies from: Emily, Manfred, Risto_Saarelma, hyporational, Richard_Kennaway
comment by Emily · 2014-10-27T16:03:06.025Z · LW(p) · GW(p)

Laundry (plus ironing, if you have clothes that require that - I try not to), washing up (I think this is called doing the dishes in America), mopping, hoovering (vacuuming), dusting, cleaning bathroom and kitchen surfaces, cleaning toilets, cleaning windows and mirrors. That might cover the obvious ones? Seems like most of them don't involve much learning but do take a bit of getting round to, if you're anything like me.

Replies from: Richard_Kennaway, Omid
comment by Richard_Kennaway · 2014-10-27T16:13:52.475Z · LW(p) · GW(p)

I'd add, not leaving clutter lying around. It both collects dust, and makes cleaning more of an effort. Keep it packed away in boxes and cupboards. (Getting rid of clutter entirely is a whole separate subject.)

comment by Omid · 2014-10-27T16:11:18.401Z · LW(p) · GW(p)

Thank you. How many hours a week do you spend doing these things?

Replies from: Nornagest, Emily
comment by Nornagest · 2014-10-27T19:07:32.884Z · LW(p) · GW(p)

It's really hard to estimate that accurately, because for me something like 90% of cleanliness is developing habits that couple it with the tasks that necessitate it: always and automatically washing dishes after cooking, putting away used clothes and other sources of clutter, etc. Habits don't take mental effort, but for the same reason it's almost impossible to quantify the time or physical effort that goes into them, at least if you don't have someone standing over you with a stopwatch.

For periodic rather than habitual tasks, though, I spend maybe half an hour a week on laundry (this would take longer if I didn't have a washer and dryer in my house, though, and there are opportunity costs involved), and another half hour to an hour on things like vacuuming, mopping, and cleaning porcelain and such.

comment by Emily · 2014-10-28T09:31:46.302Z · LW(p) · GW(p)

My timelog tells me that over the last ~7 weeks I've spent an average of 22 mins/day doing things with the tag "chores". That time period does include a two week holiday during which I spent a lot less time than usual on that stuff, so it's probably an underestimate. Agree with Nornagest below about the importance of small everyday habits! (Personally I am good at some of these, terrible at others.)

Replies from: Emily
comment by Emily · 2014-10-28T11:33:23.777Z · LW(p) · GW(p)

I should add that I live with another person, who does his share of the chores, so this time would probably increase if I wanted the same level of clean/tidy while living alone. I'm not sure how time per person scales with changes in the number of people though... probably not linearly, but it must depend on all sorts of things like how exactly you share out the chores, what the overhead sort of times are like for doing a task once regardless of how much task there is, and how size of living space changes with respect to number of people living in it. Also, if you add actively non-useful people like babies, I expect all hell breaks loose.

comment by Manfred · 2014-10-27T23:37:30.529Z · LW(p) · GW(p)

Adding on to Emily:

  • Having a particular hamper or even corner of your room where you put dirty laundry, so that it isn't all over your floor. When this hamper/corner is full, do your laundry.
  • Analogous organized or occasionally-organized places for paperwork or whatever else is being clutter-y.
  • If you have ancient carpet and it's dirty and stinky, learn how to rent a Rug Doctor-type steam cleaner from a nearby supermarket.
  • If you have a bunch of broken or dirty/stinky stuff in your house, learn how to get the trash people to haul it away, and learn where to buy cheap used furniture / cheap online kitchen supplies / whatever to replace your old junk.
  • Having tools handy to tidy up nails / tighten loose screws etc. when you notice them.
  • Keeping a brush and plunger near your toilet.
  • If your sink has clogged any time in the past 6 months, also consider having chemical unclogger / a long skinny "snake" (that's what it's actually called) that you shove down the drain and wiggle around to bust clogs.
  • Figure out where all the places that are hard to clean are. These are the places that will have 50 years of accumulated nasty dirt that will make the whole house smell better when you get rid of it.

comment by Risto_Saarelma · 2014-11-01T15:43:59.452Z · LW(p) · GW(p)

Learn to notice things that need cleaning.

Know a good way to get rid of everything you possess when you no longer need it (bookcrossing, electronic waste recycling or just a trash bag). Learn to notice when you have things cluttering up the place that you no longer need.

comment by hyporational · 2014-10-28T10:57:30.885Z · LW(p) · GW(p)

If you've got the money and a simple enough apartment layout, I recommend a vacuum cleaning robot. My crawling saucer collects a ridiculous amount of dust from the floor every day, and this seems to keep other surfaces and the air dustless too. There's no way I could clean up that much dust myself, and I'd do the cleaning so rarely that the dust would get all over the place.

comment by Richard_Kennaway · 2014-10-28T09:56:37.261Z · LW(p) · GW(p)

Avoid these and you'll be off to a good start. :)

comment by James_Miller · 2014-10-27T18:08:05.745Z · LW(p) · GW(p)

Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.

Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”

You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.

But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?

I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is attempting to get at whether it matters if we assume that aliens would spread at the speed of light killing everything in their path.
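
For concreteness, a minimal sketch of the "straightforward calculation" described above; the function and parameter names are mine, and the N draws are assumed independent:

```python
# Posterior that the picker chose Jar S, given n silent draws
# (with replacement) and a 50/50 prior over the two jars.

def posterior_jar_s(n: int, p_red: float = 0.1, prior_s: float = 0.5) -> float:
    p_silence_given_s = 1.0                 # Jar S: all silver, never "red"
    p_silence_given_r = (1.0 - p_red) ** n  # Jar R: each draw silent w.p. 0.9
    num = prior_s * p_silence_given_s
    return num / (num + (1.0 - prior_s) * p_silence_given_r)

for n in (1, 5, 10, 20):
    print(n, round(posterior_jar_s(n), 4))  # climbs toward 1 as n grows
```

On these numbers the likelihood of silence is the same whether or not the red balls are bombs, which is why the disagreement in the replies centers on anthropics rather than on the arithmetic.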

Replies from: private_messaging, jkaufman, polymathwannabe, Lumifer, Manfred
comment by private_messaging · 2014-10-27T18:31:17.350Z · LW(p) · GW(p)

I had a conversation with another person regarding this Leslie's-firing-squad type of stuff. Basically, I came up with a caveman analogy, with the cavemen facing lethal threats. It's pretty clear, from the outside, that the cavemen who do probability correctly and don't do anthropic reasoning with regard to tigers in the field will do better at mapping the lethal dangers in their environment.

Replies from: James_Miller
comment by James_Miller · 2014-10-28T13:57:13.548Z · LW(p) · GW(p)

Thanks for letting me know about "Leslie's firing squad[s]"

Replies from: private_messaging
comment by private_messaging · 2014-10-28T15:17:10.199Z · LW(p) · GW(p)

You're welcome. So what's your actual take on the issue? I've never seen a coherent explanation of why bombs must make a difference. I've seen appeals to "but you wouldn't be thinking anything if it was red," which ought to perfectly cancel out if you apply that to the urn choice as well.

edit: i.e. this anthropics, to me, is sort of like calculating the forces in a mechanical system, making an error somewhere, and getting an apparent perpetuum mobile, as the forces on your wheel with water and magnets fail to cancel out. Likewise, you evaluate the impact of some irrelevant information, you make an error somewhere, and the irrelevant information makes a difference.

Replies from: James_Miller
comment by James_Miller · 2014-10-29T18:26:25.271Z · LW(p) · GW(p)

To a first approximation I don't think it makes a difference, but it does add some logical uncertainty. Also, intuitively I want to be able to use anthropic reasoning to say "there is only a tiny chance that the universe would have condition X, but I'm not surprised by X because without X observers such as us won't exist", but I think doing this implies I have to give a different estimate if red = bomb.

Replies from: private_messaging, jnarx
comment by private_messaging · 2014-12-13T04:50:50.303Z · LW(p) · GW(p)

Also, intuitively I want to be able to use anthropic reasoning to say "there is only a tiny chance that the universe would have condition X, but I'm not surprised by X because without X observers such as us won't exist"

Hmm, that's an interesting angle on the issue, I didn't quite realize that was the motivation here.

I would be surprised by our existence if that were the case, but not further surprised by the observation of X (because I already observed X by way of perceiving my existence).

Let's say I remember that there was a strange, surprising sign painted on the wall, and I go by the wall, and I see that sign. I am surprised that there's that sign on the wall at all, but I am not surprised that I am seeing it (because I can perform an operation in my head that implies the existence of the sign: my memory tells me I saw it before). Same with existence: I am surprised we exist at all, but I am not surprised when I observe something necessary for my existence, because I could have derived it from prior observations.

comment by jnarx · 2014-10-30T20:26:37.115Z · LW(p) · GW(p)

I think this particular example doesn't really exemplify what I think you're trying to demonstrate here.

A simpler example would be:

You draw one ball out of a jar containing 99% red balls and 1% silver balls (randomly mixed).

The ball is silver. Is this surprising? Yes.

What if you instead draw a ball in a dark room so you can't see the color of the ball (same probability distribution). After drawing the ball, you are informed that the red balls contain a high explosive, and if you draw a red ball from the jar it would instantly explode, killing you.

The lights go on. You see that you're holding a silver ball. Does this surprise you?

Replies from: private_messaging
comment by private_messaging · 2014-12-13T04:41:59.409Z · LW(p) · GW(p)

Well, being alive would surprise me, but not the colour of the ball. Essentially what happens is that the internal senses (e.g. perceiving my own internal monologue) end up sensing the ball's colour (by way of the high explosive).

comment by jefftk (jkaufman) · 2014-10-30T16:34:35.904Z · LW(p) · GW(p)

This is related to the Sleeping Beauty Problem, and in general the answer depends on what you're trying to do with "probability". For lots and lots more, Bostrom's PhD thesis is very detailed: Anthropic Bias: Observation Selection Effects in Science and Philosophy.

Bostrom's Observation Selection Effects and Human Extinction Risks paper is less philosophical and sounds more relevant to the paper you're working on.

comment by polymathwannabe · 2014-10-27T18:13:18.099Z · LW(p) · GW(p)

Before I actually do the math, "you hear nothing" appears to affect my estimate exactly in the same way as "you're still alive."

Replies from: Kindly
comment by Kindly · 2014-12-13T16:34:42.172Z · LW(p) · GW(p)

This seems like the obvious answer to me as well. What am I missing?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-12-13T20:49:51.864Z · LW(p) · GW(p)

Now that I see this problem again, my thoughts on it are slightly different.

In the version with no bombs, there's a possible scenario where the picker draws a red ball but lies to you by keeping silent. So, there's a viable way for "you hear nothing" AND "Jar R" to happen.

But in the version with bombs, the scenario "you are alive" AND "a red ball was drawn" can never happen. So, being alive in the with-bomb version is stronger evidence for Jar S than hearing nothing is in the no-bomb version.

Replies from: Kindly
comment by Kindly · 2014-12-14T04:05:39.076Z · LW(p) · GW(p)

Okay, sure. The picker could be lying or speaking quietly; the bomb could be malfunctioning or have a timer that hasn't gone off yet. (Note to self: put down the ball as soon as you find out that it could be a bomb.) These things don't seem like they should be the point of a thought experiment.

comment by Lumifer · 2014-10-27T18:45:09.479Z · LW(p) · GW(p)

A side note: under the cherry bomb scenario the probability of you hearing the word "red" is zero.

comment by Manfred · 2014-10-27T23:06:35.991Z · LW(p) · GW(p)

If the two jar scenarios start with equal anthropic measure (i.e. looking in from the outside), then you really are less likely to have jar R if you're not dead.

comment by ruelian · 2014-10-27T17:13:24.938Z · LW(p) · GW(p)

I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?

To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.

The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.

Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a proof consciously but solving potential problems that come up while writing it more intuitively. Do you also divide different types of thinking into separate processes, or use them together?)

The reason I'm asking is that I'm trying to transition to spending more of my time thinking about math not in a classroom setting and I need to figure out how I should go about it. The fast kind of thinking would be much more convenient, but it appears to have downsides that I haven't been able to study properly due to insufficient data.

Replies from: RowanE, Luke_A_Somers, Strangeattractor, wadavis, Richard_Kennaway, Bundle_Gerbe, Fhyve, Gunnar_Zarncke, TsviBT, lmm
comment by RowanE · 2014-10-27T18:38:09.006Z · LW(p) · GW(p)

I'm only a not-very-studious undergraduate (in physics), and don't spend an awful lot of time thinking about maths outside of that, but I pretty much only think about maths in the nonverbal way: I can understand an idea when it's verbally explained to me, but I have to "translate it" into nonverbal maths to get use out of it.

comment by Luke_A_Somers · 2014-10-27T22:40:48.545Z · LW(p) · GW(p)

I don't tend to do a lot of proofs anymore. When I think of math, I find it most important to be able to flip back and forth between symbol and referent freely: look at an equation and visualize the solutions, or (to take one example of the reverse) see a curve and think of ways of representing it as an equation. Since visualizing numbers will often not be available, I tend to think of properties of a Taylor or Fourier series for that graph. I do a visual derivative and integral.

That way, the visual part tells me where to go with the symbolic part. Things grind to a halt when I have trouble piecing that visualization together.

Replies from: ruelian
comment by ruelian · 2014-10-28T16:55:11.223Z · LW(p) · GW(p)

This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.

comment by Strangeattractor · 2014-10-27T18:52:59.272Z · LW(p) · GW(p)

I usually think about math nonverbally. I am not usually doing such thinking to come up with proofs. My background is in engineering, so my education gave me a different sort of approach to math than the people who were in the math faculty at the university I attended.

Sometimes I do go through a problem step by step, but usually not verbally. I sometimes make notes to help me remember things as I go along. Constraints, assumptions, design goals, etc. Explicitly stating these, which I usually do by writing them on paper, not speaking them aloud, if I'm working by myself on a problem, can help. But sometimes I am not working by myself and would say them out loud to discuss them with other people.

Also, there is often more than one way to visualize or approach a problem, and I will do all of them that come to mind.

I would suggest, to spend more time thinking about math, find something that you find really beautiful about math and start there, and learn more about it. Appreciate it, and be playful with it. Also, find a community where you can bounce ideas around and get other people's thoughts and ideas about the math you are thinking about. Some of this stuff can be tough to learn alone. I'm not sure how well this advice might work, your mileage may vary.

When I am really understanding the math, it seems like it goes directly from equations on the paper right into my brain as images and feelings and relations between concepts. No verbal part of it. I dream about math that way too.

Replies from: ruelian
comment by ruelian · 2014-10-27T19:26:20.932Z · LW(p) · GW(p)

I only got to a nonverbal level of understanding of advanced math fairly recently, and the first time I experienced it I think it might have permanently changed my life. But if you dream about math...well, that means I still have a long way to go and deeper levels of understanding to discover. Yay!

Follow-up question (just because I'm curious): how do you approach math problems differently when working on them from the angle of engineering, as opposed to pure math?

Replies from: Strangeattractor
comment by Strangeattractor · 2014-10-28T07:57:17.027Z · LW(p) · GW(p)

It seemed to me that the people I knew who were studying pure math spent a lot of time on proofs, and that math was taught to them with very little context for how the math might be used in the real world, and without a view as to which parts were more important than others.

In engineering classes we proved things too, but that was usually only a first step to using the concepts to work on some other problem. There was more time spent on some types of math than on others. Some things were considered to be more useful and important than others. Usually some sort of approximations or assumptions would be used, in order to make a problem simpler and able to be solved, and techniques from different branches of math were combined together whenever useful, often making for some overlap in the notation that had to be dealt with.

There was also the idea that any kind of math is only an approximate model of the true situation. Any model is going to fail at some point. Every bridge that has been built has been built using approximations and assumptions, and yet most bridges stay up. Learning when one can trust the approximations and assumptions is vital. People can die if you get it wrong. Learning the habit of writing down explicitly what the assumptions and approximations are, and to have a sense for where they are valid and where they are not, is a skill that I value, and have carried over into other aspects of my life.

Another thing is that math is usually in service of some other goal. There are design constraints and criteria, and whatever math you can bring in to get it done is welcome, and other math is extraneous. The beauty of math can be admired, but a kludgy theory that is accurate to real world conditions gets more respect than a pretty theory that is less accurate. In fact, sometimes engineers end up making kludgy theory that solves engineering problems into some sophisticated mathematics that looks more formal and has some interesting properties, and then it has a beauty of its own, although some of the beauty comes from knowing how it fits into a real world phenomenon.

Also, engineers tend to work in teams, not alone. So communicating with each other, and making sure that all the people on the team have a similar understanding of a situation, is a non-trivial part of the work. You don't want a situation where one person has one type of abstraction in their head, and another person has a different one, and they don't realize it, and when they go off to do their separate work, it doesn't match up. This can lead to all sorts of problems, not limited to cost overruns, design flaws, delays, and even deaths. So, if you hear engineers discussing nitpicky details and going over technical concepts more than once, that is one major reason why. You really need people to be on the same page.

Teamwork is so important to engineering that when taking classes, we were encouraged to talk to each other and work together on problems, before submitting answers. Whereas the people over in math were forbidden to talk to each other about their work before handing it in. That policy might be different at different schools. But I think it shows an important difference in culture.

Math is certainly something that can be enjoyed and practiced solo. But especially on some of the most tricky concepts of math that I have learned, I benefitted a lot from being able to discuss it with people, and get new insights and understanding from their perspectives. Sometimes I didn't even realize that I didn't properly understand a concept until I attempted to use it, and got a completely different answer from someone else who was attempting to use it.

I said it can get kludgy, and that the focus is on real-world problems, but there are times when it does feel clean and pure, especially when people make real-world objects that correspond pretty well to ideal mathematical objects. For example, using 4th-order differential equations to calculate the bending moments for I-beams felt peaceful and pretty once I got the hang of it, and I think it is not unlike something you might find in a pure math course.
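
For reference, the 4th-order equation being described is presumably the Euler–Bernoulli beam relation; the notation below is the standard textbook one, not taken from the comment:

```latex
% Euler--Bernoulli beam: deflection y(x) under distributed load w(x),
% with E the elastic modulus and I the second moment of area.
EI \, \frac{d^{4} y}{dx^{4}} = w(x),
\qquad M(x) = EI \, \frac{d^{2} y}{dx^{2}}
% (sign conventions for M and w vary between texts)
```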

I'm pretty enthusiastic about math, it's one of my favourite things to think about and do.

comment by wadavis · 2014-10-27T18:50:48.675Z · LW(p) · GW(p)

As someone employed doing mid-level math (structural design), I'm much like most of the others you've talked to. The entirely nonverbal, intuitive method is fast, and it tends to be highly correct if not accurate. The verbal method is a lot slower, but it lends itself nicely to being put to paper and is great for getting highly accurate if not correct answers. So everything that matters gets done twice, for accurate, correct results. And because it is fast, the intuitive method is preferred for brainstorming; the verbal method then verifies any promising brainstorms.

Replies from: ruelian
comment by ruelian · 2014-10-27T19:28:04.252Z · LW(p) · GW(p)

Could you please explain what you mean by "correct" and "accurate" in this case? I have a general idea, but I'm not quite sure I get it.

Replies from: wadavis
comment by wadavis · 2014-10-27T20:14:41.824Z · LW(p) · GW(p)

Correct and precise may have been better terms. By correct I mean a result that I have very high confidence in, but that is not precise enough to be usable. By accurate I mean a result that is very precise but in which I have far less confidence that it is correct.

As an example, consider a damped-oscillation word problem from first year. Just by looking at it, you are very confident that as time approaches infinity the displacement will approach some value, but you don't know that value. When you crunch the numbers (the verbal process in the extreme) you get a very specific value that the function approaches, but you have less confidence that that value is correct; you could have made any of a number of mistakes. In this example the classic wrong result is a displacement in the opposite direction to the applied force.

This is a very simple example so it may be hard to separate the non-verbal process from the verbal, but there are many cases where you know what the result should look like but deriving the equations and relations can turn into a black box.
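
A worked version of that sanity check, under the usual linear model (my notation, not the commenter's): for a damped oscillator driven by a constant force,

```latex
% Damped oscillator with constant applied force F_0:
m\ddot{x} + c\dot{x} + kx = F_0
% For c > 0 the transients decay, so the displacement settles at
x(t \to \infty) = \frac{F_0}{k}
% which has the same sign as F_0 -- exactly the nonverbal check that
% catches the "displacement opposite to the applied force" mistake.
```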

Replies from: ruelian
comment by ruelian · 2014-10-27T20:40:03.805Z · LW(p) · GW(p)

Right, that makes much more sense now, thanks.

One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, it's not statistically relevant!")

Does experience in nonverbal reasoning about math actually lend itself to better understanding of said reasoning, or is that just a cached thought of mine?

Replies from: wadavis
comment by wadavis · 2014-10-27T22:02:24.006Z · LW(p) · GW(p)

Doing everything both ways, nonverbal and verbal, has lent itself to better understanding of the reasoning. Which touches on the anecdote problem: if you test every nonverbal result, you get something statistically relevant. If your nonverbal results are right more often than not, testing every result and digging for the mistakes will increase your understanding (disclaimer: this is hard work).

Replies from: ruelian
comment by ruelian · 2014-10-28T17:23:58.066Z · LW(p) · GW(p)

So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.

comment by Richard_Kennaway · 2014-10-28T09:51:03.719Z · LW(p) · GW(p)

Which of those, if any, sounds closer to the way you think about math?

Each serves its own purpose. It is like the technical and artistic sides of musical performance: the technique serves the artistry. In a sense the former is subordinate to the latter, but only in the sense that the foundation of a building is subordinate to its superstructure. To perform well enough that someone else would want to listen, you need both.

This may be useful reading, and the essays here (from which the former is linked).

Replies from: ruelian
comment by ruelian · 2014-10-28T16:53:12.797Z · LW(p) · GW(p)

reads the first essay and bookmarks the page with the rest

Thanks for that, it made for enjoyable and thought-provoking reading.

comment by Bundle_Gerbe · 2014-10-28T07:36:22.863Z · LW(p) · GW(p)

As someone with a Ph.D. in math, I tend to think verbally inasmuch as I have words attached to the concepts I'm thinking about, but I never go so far as to internally vocalize the steps of the logic I'm following until I'm at the point of actually writing something down.

I think there is another much stronger distinction in mathematical thinking, which is formal vs. informal. This isn't the same distinction as verbal vs. nonverbal, for instance, formal thinking can involve manipulation of symbols and equations in addition to definitions and theorems, and I often do informal thinking by coming up with pretty explicitly verbal stories for what a theorem or definition means (though pictures are helpful too).

I personally lean heavily towards informal thinking, and I'd say that trying to come up with a story or picture for what each theorem or definition means as you are reading will help you a lot. This can be very hard sometimes. If you open a book or paper and aren't able to get anywhere when you try to do this with the first chapter, it's a good sign that you are reading something too difficult for your current understanding of that particular field. At a high level of mastery of a particular subject, you can turn informal thinking into proofs and theorems, but the first step is to be able to create stories and pictures out of the theorems, proofs, and definitions you are reading.

comment by Fhyve · 2014-10-28T07:18:47.453Z · LW(p) · GW(p)

I'm a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that's why I'm so successful at math. This might be hitting on the "two cultures of mathematics", where I am definitely on the theory-builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam-style problems.

comment by Gunnar_Zarncke · 2014-10-27T21:52:21.670Z · LW(p) · GW(p)

I don't see a clear verbal vs. non-verbal dichotomy; or at least the non-verbal side has lots of variants. Gaining an intuitive non-verbal understanding can involve:

  • visual aids (from precise to vague): graphs, diagrams, patterns (esp. repetitions), pictures, vivid imagination (esp. for memorizing)

  • acoustic aids: rhythms (works with muscle memory too), patterns in the spoken form, creating sounds for elements

  • abstract thinking (from precise to vague): logical inference, semantic relationships (is-a, exists, always), vague relationships (discovering that the more of this seems to imply the more of that)

Note: Logical inference seems to be the verbal part you mean, but I don't think symbolic thinking is always verbal. Its conscious derivation may be, though.

And I hear that the verbal side, despite lending itself to more symbolic thinking, can nonetheless work its grammar magic on an intuitive level too (though not for me).

Personally, if I really want to solve a mathematical problem I immerse myself in it. I try lots of attack angles from the list above (not systematically, but as seems fit). I'm an abstract thinker and don't rely much on verbal, acoustic or motor cues. Even visual aids don't play a large role, though I do a lot of sketching, listing/enumerating combinations, drawing relations/trees, and tabulating values/items. If I suspect a repeating pattern I may tap to it to sound it out. If there is lengthy logical inference involved that I haven't internalized, I speak the rule repeatedly to use the acoustic loop as a memory aid. I play around with it during the day, visualizing relationships or following steps, sometimes until in the evening everything blurs and I fall asleep.

comment by TsviBT · 2014-10-27T21:46:38.095Z · LW(p) · GW(p)

Personally, the nonverbal thing is the proper content of math: drawing (possibly mental) pictures to represent objects and their interactions. If I get stuck, I try doing simpler examples. If I'm still stuck, then I start writing things down verbally, mainly as a way to track down where I'm confused or where exactly I need to figure something out.

comment by lmm · 2014-10-27T19:36:52.462Z · LW(p) · GW(p)

I don't really draw that distinction. I'd say that my thinking about mathematics is just as verbal as any other thinking. In fact, a good indication that I'm picking up a field is when I start thinking in the language of the field (i.e. I will actually think "homology group" and that will be a term that means something, rather than "the group formed by these actions...")

Replies from: ruelian
comment by ruelian · 2014-10-27T20:13:33.465Z · LW(p) · GW(p)

I'd say that my thinking about mathematics is just as verbal as any other thinking.

Just to clarify, because this will help me categorize information: do you not do the nonverbal kind of thinking at all, or is it all just mixed together?

Replies from: lmm
comment by lmm · 2014-10-27T22:26:07.488Z · LW(p) · GW(p)

I'm not really conscious of the distinction, unless you're talking about outright auditory things like rehearsing a speech in my head. The overwhelming majority of my thinking is in a format where I'm thinking in terms of concepts that I have a word for, but probably not consciously using the word until I start thinking about what I'm thinking about. Do you have a precise definition of "verbal"? But whether you call it verbal or not, it feels like it's all the same thing.

Replies from: ruelian
comment by ruelian · 2014-10-28T16:50:20.725Z · LW(p) · GW(p)

I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?

Replies from: lmm
comment by lmm · 2014-10-28T19:00:46.216Z · LW(p) · GW(p)

It makes sense but it doesn't match my subjective experience.

Replies from: ruelian
comment by ruelian · 2014-10-28T19:33:15.050Z · LW(p) · GW(p)

Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.

Replies from: lmm
comment by lmm · 2014-10-29T23:24:33.647Z · LW(p) · GW(p)

So, I'd say there are three modes of thinking I can identify:

  • Normal thinking, what I'm doing the vast majority of the time. I'm thinking by manipulating concepts, which are just, well, things.
  • Introspective thinking, where I'm doing the first kind of thinking, and thinking about it. Because the map can't be the territory, when I'm thinking about thinking the concepts I'm thinking about are represented by something simpler than themselves - if you're thinking about thinking about sheep then the sheep you're thinking about thinking about can't be as complex as the sheep you're thinking about. In fact they're represented either by words, or by something isomorphic to words - labels for concepts. So when I'm thinking about thinking, the thinking-about-thinking is verbal - but the thinking isn't (although there's a light-in-the-fridge effect that might make one think it was).
  • Auditory thinking, where I'm thinking in words in my head, planning a speech (or more likely a piece of writing - and most of the time I never actually write or say it). This is the only kind of thinking I'm conscious of doing that really feels verbal, but it feels sensory rather than like thinking in words; I'm hearing a voice in my Cartesian theater.
comment by Lumifer · 2014-10-27T16:11:03.814Z · LW(p) · GW(p)

A good semi-rant by Ken White of Popehat on GamerGate. I recommend it as an excellent example of applied rationality and of sorting through the hysterics.

Replies from: Azathoth123
comment by Azathoth123 · 2014-10-30T06:08:18.973Z · LW(p) · GW(p)

Meh, he seems to be trying too hard to pretend to be wise. One of the more egregious examples:

There's no excuse for threats to anyone, whatever "side" they are on. Posting someone's home address or private phone number or financial details will almost never be relevant to a good-faith dispute — it's clearly intended to terrorize, and it risks empowering disturbed people to do real harm. These things are wrong no matter who does them, no matter the motive, and no matter the target.

That's like saying "Violence is wrong no matter who does it, therefore if an armed gang invades your neighborhood you should passively comply".

Replies from: Lumifer
comment by Lumifer · 2014-10-30T14:36:52.024Z · LW(p) · GW(p)

he seems to be trying too hard to pretend to be wise

He does a better job of it than most people I know :-) It's not that I completely agree with him, but he writes well and makes a lot of valid points.

The biggest objection to his position is, essentially, Yvain's post on weak men.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-04T05:53:52.624Z · LW(p) · GW(p)

Sarah Hoyt has a good description of the problems with the post here.

Replies from: Lumifer
comment by Lumifer · 2014-11-04T06:18:15.744Z · LW(p) · GW(p)

Meh. It's a rant and not a particularly well-thought-out one.

comment by NancyLebovitz · 2014-10-30T03:33:58.086Z · LW(p) · GW(p)

Training horses to indicate whether they want to wear a blanket or have a blanket taken off

Replies from: Vulture
comment by Vulture · 2014-10-31T22:48:27.951Z · LW(p) · GW(p)

This could be a big deal for the bestiality debate (although conducting the necessary training without falling afoul of the original ethical concerns would probably be tricky).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-01T15:09:29.278Z · LW(p) · GW(p)

A general training in "do want, don't want" for ordinary things like blankets and types of food could go a long way to solving the problem.

Replies from: fubarobfusco, Vulture
comment by fubarobfusco · 2014-11-01T19:41:03.495Z · LW(p) · GW(p)

Warning: this comment is a ramble without a conclusion. Horses participating in tell culture? Cool. Preferences and consent are complicated.

This line of thinking seems to lead to some interesting places about the idea of consent.

I'm increasingly of the opinion that the whole notion of "consent" is socially constructed (that is, learned) — that it is desirable but cannot be assumed to be natural or inherent. People have to learn, not only to ask others' consent, but to recognize when their consent is being asked: not only to ask "Do you want this?" but to know when someone wants them to have and express a preference.

Indeed, the idea of developing preferences of one's own has to be learned. (Possibly the whole notion of having an identity, too.)

People raised in very controlling households seem to have trouble with this — with formulating and communicating preferences and seeking consent, rather than just ① going ahead and doing things that affect others and then seeing how those others react, or ② expecting others to do the reciprocal. They expect interactions to be, not necessarily forced, but certainly not negotiated. "Better to ask forgiveness than seek permission" is one thing as a maxim for decision-making in a bureaucratic office, but quite another thing in personal relationships!

This leads to communications problems between these folks and people who have been taught to exchange consent. For instance, "Would you like to do thus-and-so with me?" for one person can mean "I expect you to do thus-and-so with me and will be disappointed or angry if you don't" whereas for another it can mean "I actually don't know if thus-and-so would be worth doing for us; what do you think?"

Previously I thought that this difference was that (to put it overly strongly) people from controlling households had had their free will beaten out of them — that they had been abused or neglected in a way that made them alieve that people would not respect their preferences or dissent, and so did not bother to express any. But now I think the opposite: "just do stuff and see how others react" is the state of nature, whereas "formulate and express preferences and negotiate with others" is socially constructed.

And as a society, it seems we are demanding more and more of it. That sounds like a pretty good thing to me, especially for people whose preferences would otherwise be denied or disregarded. But it isn't free or obvious; it's a big structure of socially-constructed-stuff that people have to learn.

Computationally speaking, preferences aren't free. Even if we model people as agents with utility functions (which I'm not sure we should!), having a utility function doesn't mean having explicit knowledge of what your utility function is! In order to express preferences, an agent has to notice facts about itself, notice regularities about those facts, figure out what it might want another agent to do ... and so on. All that requires brain power.

Teaching a horse to express preferences — that it can communicate something that will influence its handler's actions, to get something done that it can't do for itself — seems like a pretty big deal. Affirmatively communicating about a specific action is more "consent-like" than, say, merely expressing an emotional state of dissatisfaction or contentment.

I get the sense that people who live with animals generally do have a notion of what the animals like or dislike. But that isn't the same as communicating preferences or consent.

On zoophilia/bestiality, I at one point thought something like: "A dog or horse can obviously express dissatisfaction with physical acts it doesn't like — by pulling away, kicking, biting, etc. Some animals can clearly 'propose' sexual acts with humans, such as a dog humping a person's leg. And we don't expect people to seek animals' consent to a hell of a lot of things that we do expect them to seek consent from humans — such as medical treatment or being put in a cage. So what's the big deal?"

But a dog humping someone's leg isn't proposing a sexual act or consenting to one; it's initiating one. If a human did the equivalent, to random people they didn't have an existing relationship with, well, we wouldn't want to put up with that sort of thing.

People (and, I suspect, horses) have different degrees of insight into their own preferences. It is perfectly possible to be wrong about your preferences: to believe that you would be happier if you ate a bag of candy, when in fact you would give yourself a stomachache and be less happy.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-01T20:43:02.958Z · LW(p) · GW(p)

Consent is really tricky.

Imagine a woman sitting at the bar. The woman knows what she's doing and knows that when she smiles in a certain way at a man there's a 90% chance that the man will approach her; however, in only 10% of cases does the man have any idea that the woman did something to make him approach.

If the woman initiates an interaction like that, does she have informed consent? Is there some ethical imperative for her to inform the man that she initiated the interaction?

To frame the question another way: if all you are doing is triggering the other person's System 1 to get them to engage in certain actions, but you never ask a question that gives their System 2 the opportunity to reflect, do you have consent?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-01T20:58:48.633Z · LW(p) · GW(p)

Guess cultures are really tricky!

If it is indeed the case that everyone knows for certain what the signals mean, then they can be very specific communications of intent and consent: there is not actually any guessing going on! But if the point of using facial expressions and gestures rather than words is that the former are deniable, then it probably can't be the case that everyone knows for certain: deniability relies on ambiguity.

If two people have slightly different interpretations of what the signals mean, then they can end up with extremely divergent interpretations of what happened in a particular exchange.

For that matter, if everyone in the bar grew up in the same town and went to the same schools, that's a pretty different situation from if the bar is an assemblage of people from wildly different backgrounds who happen to have landed in the same location.

(I may be computing from stereotypes in saying this ... but I expect that guess cultures prize uniformity, and fear diversity as a source of confusion; whereas tell cultures may consider uniformity boring, and prize diversity as a source of novelty.)

Sexually, it seems to me that if all you are doing is triggering the System 1 of the other person and neither person is waiting around for System 2 to engage and reflect, that may be very hot indeed — Erica Jong's "zipless fuck" — but the failure modes are correspondingly huge.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-01T21:33:35.342Z · LW(p) · GW(p)

If it is indeed the case that everyone knows for certain what the signals mean, then they can be very specific communications of intent and consent: there is not actually any guessing going on!

It's possible to send signal A and have the other person not understand what the signal means and do nothing.

But it's also possible that they don't understand the signal, yet the signal causes them to feel a certain emotion, and that emotion leads them to engage in an action without their having any idea of the causal chain.

The more I learn about how humans work, the more I run into these practical ethical dilemmas. Even worse, to really know what I'm doing I have to experiment, and I'm curious ;)

comment by Vulture · 2014-11-01T21:33:10.171Z · LW(p) · GW(p)

That seems like a huge leap in terms of capability, though, to add the free parameter of "condition to be started/stopped" somehow.

comment by MathiasZaman · 2014-10-27T21:28:54.705Z · LW(p) · GW(p)

I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.

(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)

Replies from: wadavis, jkadlubo, dthunt, Manfred, ruelian
comment by wadavis · 2014-10-27T22:18:57.015Z · LW(p) · GW(p)

Admitting you are wrong.

Replies from: Manfred
comment by Manfred · 2014-10-27T22:48:28.114Z · LW(p) · GW(p)

Highly related: When you even might be wrong, get curious about that possibility rather than scared of it.

comment by jkadlubo · 2014-10-28T12:11:16.340Z · LW(p) · GW(p)

Exercises in small rational behaviours. E.g. people generally are very reluctant to apologize about anything, even if the case means little to them and a lot to the other person. Maybe it's "if I apologize, that will mean I was a bad person in the first place" thinking, maybe something else.

It's a nice exercise: if somebody seems to want something from you or apparently is angry with you when you did nothing wrong, stop for a moment and think: how much will it cost me to just say "I'm sorry, I didn't mean to offend you"? After all, those are just words. You don't have to "win" every confrontation and convince the other person that you are right and their requirements are ridiculous. And if you apologize, in fact you both will have a better day - the other person will feel appreciated and you will be proud you did something right.

(A common situation from my experience: somebody pushes me in a queue, I say "excuse me, but please don't stand so close to me/don't look over my arm when I'm writing the PIN code" etc., and then the pusher often starts arguing about how my behaviour is out of line - making both of us and the cashier upset)

Come to think of it, it's a lot like Quirrell's second lesson in HPMoR...

comment by dthunt · 2014-10-30T20:37:42.470Z · LW(p) · GW(p)

Noticing confusion is the first skill I tried to train up last year, and it's definitely a big one: knowing what your models predict and noticing when they fail is a very valuable feedback loop, and you can't learn from it if you can't even notice the failures.

Picturing what sort of evidence would unconvince you of something you actively believe is a good exercise to pair with the exercise of picturing what sort of evidence would convince you of something that seems super unlikely. Noticing unfairness there is a big one.

Realizing when you are trying to "win" at truthfinding, which is... ugh.

comment by Manfred · 2014-10-27T22:49:27.310Z · LW(p) · GW(p)

Taking stock of what information you have, and what might be good sources for information, well in advance of making a decision.

comment by ruelian · 2014-10-28T19:56:44.980Z · LW(p) · GW(p)

Map and territory - why is rationality important in the first place?

comment by Jackercrack · 2014-10-27T19:14:21.188Z · LW(p) · GW(p)

I'd like to ask LessWrong's advice. I want to benefit from CFAR's knowledge on improving one's instrumental rationality, but being a poor graduate I do not have several thousand in disposable income, nor a quick way to acquire it. I've read >90% of the sequences, but despite having read lukeprog's and Alicorn's sequences I am aware that I do not know what I do not know about motivation and akrasia. How can I best improve my instrumental rationality on the cheap?

Edit: I should clarify, I am asking for information sources: blogs, book recommendations, particularly practice exercises and other areas of high quality content. I also have a good deal of interest in the science behind motivation, cognitive rewiring and reinforcement. I've searched myself and I have a number of things on my reading list, but I wanted to ask the advice of people who have already done, read or vetted said techniques so I can find and focus on the good stuff and ignore the pseudoscience.

Replies from: cursed, gjm, RomeoStevens
comment by cursed · 2014-10-28T06:50:06.244Z · LW(p) · GW(p)

I've been to several of CFAR's classes throughout the last 2 years (some test classes and some more 'official' ones) and I feel like it wasn't a good use of my time. Spend your money elsewhere.

Replies from: hyporational
comment by hyporational · 2014-10-28T11:12:04.349Z · LW(p) · GW(p)

What made it poor use of your time?

Replies from: cursed
comment by cursed · 2014-10-28T21:15:02.557Z · LW(p) · GW(p)

I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". Tried it and it didn't work for me. YMMV.

One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like an attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? didn't pay rent? They could have made up scientific-sounding words and it would have had the same effect.)

Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.

ETA: I might go to another class in a year or two to see if they've improved. Not convinced that they're worth donating money towards at this moment.

Replies from: Unnamed, Unnamed, Jackercrack
comment by Unnamed · 2014-10-30T01:24:29.098Z · LW(p) · GW(p)

(This is Dan from CFAR again)

We have a fair amount of data on the experiences of people who have been to CFAR workshops.

First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.

Less systematically but in more fleshed out detail, there are several reviews that people who have attended a CFAR workshop have posted to their blogs (A, B+pt2, C +pt2) or to LW (1, 2, 3). Ben Kuhn's (also linked above under "C") seems particularly relevant here, because he went into the workshop assigning a 50% probability to the hypothesis that "The workshop is a standard derpy self-improvement technique: really good at making people feel like they’re getting better at things, but has no actual effect."

In-person conversations that I've had with alumni (including some interviews that I've done with alumni about the impact that the workshop had on their life) have tended to paint a similar picture to these reviews, from a broader set of people, but it's harder for me to share those data.

We don't have as much data on the experiences of people who have been to test sessions or shorter events. I suspect that most people who come to shorter events have a positive experience, and that there's a modest benefit on average, but that it's less uniformly positive. Partly that's because there's a bunch of stuff that happens with a full workshop that doesn't fit in a briefer event - more time for conversations between participants to digest the material, more time for one-on-one conversations with CFAR staff to sort through things, followups after the workshop to work with someone on implementing things in your daily life, etc. The full workshop is also more practiced and polished (it has been through many more iterations) - much moreso than a test session; one-day events are in between (the ones advertised as alpha tests of a new thing are closer to the test session end of the spectrum).

Replies from: MTGandP, cursed
comment by MTGandP · 2014-10-30T02:41:47.764Z · LW(p) · GW(p)

We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3.

I've seen CFAR talk about this before, and I don't view it as strong evidence that CFAR is valuable.

  • If people pay a lot of money for something that's not worth it, we'd expect them to rate it as valuable by the principle of cognitive dissonance.
  • If people rate something as valuable, is it because it improved their lives, or because it made them feel good?

For these ratings to be meaningful, I'd like to see something like a control workshop where CFAR asks people to pay $3900 and then teaches them a bunch of techniques that are known to be useless but still sound cool, and then ask them to rate their experience. Obviously this is both unethical and impractical, so I don't suggest actually doing this. Perhaps "derpy self-improvement" workshops can serve as a control?

comment by cursed · 2014-11-02T13:35:27.758Z · LW(p) · GW(p)

Hey Dan, thanks for responding. I wanted to ask a few questions:

You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?

"0 to 10, are you glad you came?" This is a biased question, because you frame that the person is glad. A similar negative question may say "0 to 10, are you dissatisfied that you came?" Would it be possible to anonymize and post the survey questions and data?

We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.

It's great that you're following up with people long after the workshops end. Why not survey all alumni? You have their emails.

I've read most of the blog posts about CFAR workshops that you linked to - they were one of my main motivations for attending a workshop. I notice that all the reviews are from people who had already participated in LessWrong and related communities (all refer to some prior CFAR, EA, or rationality-related topics from before they attended camp). Also, it seems like in-person conversations are heavily subject to availability bias, since the people who attended workshops, or know people who work at MIRI/CFAR, or are involved in LW meetups in Berkeley and surrounding areas, would contribute to the positivity of these conversations. The evaporative cooling effect may also play a role, in that people who weren't satisfied with the workshop would leave the group. Are there reviews from people who are not already familiar with LW/CFAR staff?

Also, I agree with MTGandP. It would be nice if CFAR could write a blog post or paper on how effective their teachings are, compared to a control group. Perhaps two one-day events, with subjects randomized across both days, would work well as a starting point.

comment by Unnamed · 2014-10-30T01:17:06.881Z · LW(p) · GW(p)

(Dan from CFAR here)

Hi cursed - glad to hear your feedback, though I'm obviously not glad that you didn't have a good experience at the CFAR events you went to.

I want to share a bit of information from my point of view (as a researcher at CFAR) on 1) the role of the cognitive science literature in CFAR's curriculum and 2) the typical experience of the people who come to a CFAR workshop. This comment is about the science; I'll leave a separate comment about thing 2.

Some of the techniques that CFAR teaches are based pretty directly on things from the academic literature (e.g., implementation intentions come straight from Peter Gollwitzer's research). Some of our techniques are not from the academic literature (e.g., the technique that we call "propagating urges" started out in 2011 as something that CFAR co-founder Andrew Critch did).

The not-from-the-literature techniques have been through a process of iteration, where we theorize about how we think the technique works, then (with the aid of our best current model) we try to teach people to use the technique, and then we get feedback on how it goes for them. Then repeat. The "theorizing" step of this process includes digging into the academic literature to get a better understanding of how the relevant parts of the mind work, and that often plays a role in shaping the class. With "propagating urges," at first none of the people that Critch taught it to were able to get it to work for them, but then Critch made a connection to some neuroscience he'd been reading, we updated our model of how the technique was supposed to work, and then more people were able to make use of the technique. (I'm tempted to go into more specifics here, but that feels like a tangent and this comment is going to be long enough without it.)

Classes based on from-the-academic-literature techniques also go through a similar process of iteration. For example, there are a lot of studies that have shown that people who are instructed to come up with implementation intentions for a particular goal make more progress towards that goal. But I don't know of any academic research on attempts to teach people the skill of being able to create implementation intentions, and the habit of actually using them in day-to-day life. And that's what we're trying to do at CFAR workshops, so that class has gone through a similar process of iteration as we get feedback on whether people are making use of implementation intentions and how it goes for them. (One simple change that helped get more people to use implementation intentions: giving the technique a different name. We now call it "trigger action planning").

So the cognitive science literature plays both of these roles for us: it's a source of evidence about particular techniques that have been tested and found to work (or to not work), and it's a source of models of how the mind works so that we can develop better techniques. We mention both of these types of scientific references in class (and in the further resources), and we try to be careful to distinguish them. Sharing our models in class (e.g., saying a few sentences in the propagating urges class about what we think the orbitofrontal cortex might be doing in this process) seems to be helpful for getting people to use the technique as we understand it (rather than getting confused about the steps, or rounding the technique off to the nearest cached thought). It also seems to help with getting people to take ownership of the technique and treat it as something that they can tinker with, rather than as a rote series of steps for them to follow (cf. learned blankness).

Finally, a brief comment on this:

Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.

Each CFAR class has one staff member who takes the lead in developing the class, and I'm the research specialist who does a lot of digging into the literature and sharing/discussing research with whoever is developing the class. The aim is for the two of us to be conversant in the relevant academic literature. For the rest of the CFAR team, the priority is to be able to use the techniques and help other people use them, not to know all the studies. (Often there will be more than just us two puzzling things over together, but it typically isn't the whole team.) The instructor who teaches a class at a CFAR event isn't always the person who has been developing it, especially at one-day events which are just being run by 2 instructors instead of the full CFAR staff. If I'd been at the event you came to, the instructor who you asked about the articles probably would've referred you to me and we could've had an interesting conversation.

comment by Jackercrack · 2014-10-29T04:44:07.245Z · LW(p) · GW(p)

Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective? Do you think your experience was typical? How useful do you think it would be to an average person? An average rationalist?

Replies from: cursed
comment by cursed · 2014-10-29T05:12:00.421Z · LW(p) · GW(p)

Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective?

I don't believe I had a high level of knowledge on the specific topics they were teaching (behavior change, and the like). I did study some cognitive science in my undergraduate years, and I take issue with the 'science'.

Do you think your experience was typical?

I believe that the majority of people don't get much, if anything, from CFAR's rationality lessons. However, after the lesson, people may be slightly more motivated to accomplish whatever they want to, in the short term just because they've paid money towards a course to increase their motivation.

How useful do you think it would be to an average person?

There was one average person at one of the workshops I attended - i.e., someone who had never read LessWrong or other rationality material. He fell asleep a few hours into the lesson; I don't think he gained much from attending. I'm hesitant to extrapolate, because I'm not exactly sure what an average person entails.

An average rationalist?

I haven't met many rationalists, but I'd guess they wouldn't benefit much, if at all.

Replies from: Jackercrack, Jackercrack
comment by Jackercrack · 2014-10-29T06:05:28.909Z · LW(p) · GW(p)

Well, that's a bit dispiriting, though I suppose looking back my view of CFAR was a bit unrealistic. Downregulating the chance that CFAR is some kind of panacea.

comment by Jackercrack · 2014-10-29T05:48:56.285Z · LW(p) · GW(p)

Well, that's a bit dispiriting but thanks for responding anyway. Was this recently or when they were just starting up?

comment by gjm · 2014-10-29T00:12:48.580Z · LW(p) · GW(p)

(Apologies for the slight thread hijack here.)

It occurs to me that CFAR's model of expensive workshops and generous grants to the impoverished (note: I am guessing about the generosity) is likely to produce rather odd demographics: there's probably a really big gap between (1) the level of wealth/income at which you could afford to go, and (2) the level of wealth/income at which you would feel comfortable going, especially as -- see e.g. cursed's comments in this thread -- it's reasonable to have a lot of doubt about whether they're worth the cost. (The offer of a refund mitigates that a bit.)

Super-handwavy quantification of the above: I would be really surprised if a typical person whose annual income is $30k or more were eligible for CFAR financial aid. I would be really surprised if a typical person whose income is $150k or less were willing to blow $4k on a CFAR workshop. (NB: "typical". It's easy to imagine exceptions.) Accordingly, I would guess that a typical CFAR workshop is attended mostly by people in three categories: impoverished grad students, etc., who are getting big discounts; people on six-figure salaries, many of them quite substantial six-figure salaries; and True Believers who are exceptionally convinced of the value of CFAR-style rationality, and willing to make a hefty sacrifice to attend.

I'm not suggesting that there's anything wrong with that. In fact, it strikes me as a pretty good recipe for getting an interesting mix of people. But it does mean there's something of a demographic "hole".

Replies from: Jackercrack
comment by Jackercrack · 2014-10-29T06:11:52.320Z · LW(p) · GW(p)

I rather think there may be demand for a cheaper, less time-dependent method of attending. It may be several seasons before they end up back in my country, for example. Streaming/recording the whole thing and selling the video package seems like it could still get a lot of the benefits across. Their current strategy only really makes sense to me if they're still in the testing and refining stage.

Replies from: ChristianKl, dthunt
comment by ChristianKl · 2014-10-29T11:54:14.714Z · LW(p) · GW(p)

Their current strategy only really makes sense to me if they're still in the testing and refining stage.

I think they are. If everything goes well, they will have published papers that prove that their stuff works by the time they move out of the testing and refining stage.

Replies from: Jackercrack
comment by Jackercrack · 2014-10-29T23:51:31.932Z · LW(p) · GW(p)

Any idea how long that will be (months, years, decades)?

comment by dthunt · 2014-10-30T20:51:26.270Z · LW(p) · GW(p)

You can always shoot someone an email and ask about the financial aid thing, and plan a trip stateside around a workshop if, with financial aid, it looks doable - and if, after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.

comment by RomeoStevens · 2014-10-27T20:34:19.089Z · LW(p) · GW(p)

CFAR has financial aid.

Also, attending LW meetups and asking about organizing meetups based on instrumental rationality material is cheap and fun.

Replies from: Jackercrack
comment by Jackercrack · 2014-10-27T21:56:20.047Z · LW(p) · GW(p)

Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome.

I'm on the LW meetups already as it happens. I'm currently attempting to have my local one include more instrumental rationality but I lack a decent guide of what methods work, what techniques to try or what games are fun and useful. For that matter I don't know what games there are at all beyond a post or two I stumbled upon.

Replies from: Vaniver
comment by Vaniver · 2014-10-27T22:28:23.619Z · LW(p) · GW(p)

Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome.

You could ask Metus how much they covered for them, or someone at CFAR how much they'd be willing to cover. The costs for asking are small, and you won't get anything you don't ask for.

Replies from: Jackercrack
comment by Jackercrack · 2014-10-27T22:57:55.473Z · LW(p) · GW(p)

Fair point, done. On a related note, I wonder how I can practice convincing my brain that failure does not mean death like it did in the old ancestral environment.

Replies from: Metus, ChristianKl, NancyLebovitz
comment by Metus · 2014-10-27T23:16:54.873Z · LW(p) · GW(p)

Exposure therapy: fail on small things, then larger ones, where it is obvious that failure doesn't mean death. First remember past experiences where you failed and did not die, then go into new situations.

comment by ChristianKl · 2014-10-29T10:52:18.078Z · LW(p) · GW(p)

CFAR suggests doing exercises to extend your comfort zone for that purpose.

comment by NancyLebovitz · 2014-10-28T02:27:29.644Z · LW(p) · GW(p)

Even in the ancestral environment, not all failures (I suspect a fairly small proportion of them) meant death.

comment by DataPacRat · 2014-10-29T16:07:21.104Z · LW(p) · GW(p)

Seeking LWist Caricatures

I've written into existence a cult-like "Bayesian Conspiracy" of mostly rebellious post-apocalypse teens - and now I'm looking for individuals to populate it with. What I /want/ to do is come up with as many ways that someone who's part of the LW/HPMOR/Sequences/Yudkowsky-ite/etc memeplex could go wrong, that tend not to happen to members of the regular skeptical community. Someone who's focused on a Basilisk, someone on Pascal's Mugging, someone focused on dividing up an infinity of timelines into unequal groups...

Put another way, I've been trying to think of the various ways that people outside the memeplex see those inside it as weirdos.

(My narrative goal: For my protagonist to experience trying to be a teacher. I'd be ecstatic if I could have at least one of the cultists be able to teach her a thing or two in return, but since I've based her knowledge of the memeplex on mine, that's kind of tricky to arrange.)

I can't guarantee that I'll end up spending more than a couple of sentences on any of this - but I figure that the more ideas I have to try building with, the more likely I will.

(Also asked on Reddit at https://www.reddit.com/r/rational/comments/2kopgx/qbst_seeking_lwist_caricatures/ .)

Replies from: philh, fubarobfusco, ChristianKl, None, IlyaShpitser, Sjcs
comment by philh · 2014-10-29T17:45:09.802Z · LW(p) · GW(p)

The person who uses ev psych to justify their romantic preferences to potential and current partners. (There's a generalisation of this that I'm not sure how to describe, but I've fallen into it when talking with friends about the game-theoretical value of friendship.)

Replies from: fubarobfusco, skeptical_lurker
comment by fubarobfusco · 2014-10-30T07:40:05.606Z · LW(p) · GW(p)

One possible generalization: Being insecure about personal preferences, and so seeking to show that one's personal likes are rooted directly in something universal — something outside one's own personal history, culture, subculture, upbringing, etc.

comment by skeptical_lurker · 2014-10-31T13:24:47.445Z · LW(p) · GW(p)

If the problem is that you shouldn't have to justify your romantic preferences then I can see where you are coming from, but if you do want a justification, what is wrong with evo psych?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-01T15:28:05.720Z · LW(p) · GW(p)

Evo psych tends to be too general and too unproven.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-11-04T12:11:53.111Z · LW(p) · GW(p)

I dunno if that's true, but regardless it's a general argument against evo psych, rather than an argument against using evo psych to justify romantic preferences.

comment by fubarobfusco · 2014-10-30T07:49:10.487Z · LW(p) · GW(p)

The person who airs fringe supremacist (or even eliminationist) views ... then is surprised and offended when members of the targeted groups shun him or her instead of arguing the points as if they were a matter of abstract intellectual interest.

No, wait, that's probably not LW-specific enough.

Replies from: None
comment by [deleted] · 2014-11-03T07:44:00.900Z · LW(p) · GW(p)

I dunno, it seems to be happening here a disturbingly large amount lately.

comment by ChristianKl · 2014-10-29T16:48:50.704Z · LW(p) · GW(p)

Calculating Bayes rule for everything can be quite weird for a lot of people. I remember a case where someone found it weird that another person asked on LW how to do a Bayesian calculation for the likelihood that a specific girl likes him.

Calculating probabilities for many everyday issues is hugely weird for many people. You might even have to take care to make it sound believable if you're describing a real-world character.

I remember an anecdote of a person doing a utility calculation that suggested having sex without a condom and being exposed to the chance of getting AIDS is quite okay.

Another of those things that CFAR preaches that can be seen as pretty weird is purposeful comfort zone extension. It's the kind of topic where you also have to worry about believability if you just tell real world stories.

Replies from: Lumifer
comment by Lumifer · 2014-10-30T15:45:16.119Z · LW(p) · GW(p)

Calculating probabilities for many everyday issues is hugely weird for many people.

And rightly so. The great majority of people are badly calibrated, can't estimate priors properly, etc. If they tried to calculate probabilities for "many everyday issues" I would bet most of them would land straight in the valley of bad rationality.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-02T05:48:21.061Z · LW(p) · GW(p)

Heck, many people here can't do it right. I'm thinking in particular of the recent thread about computing the probability of UFOs or aliens.

comment by [deleted] · 2014-10-30T15:30:22.976Z · LW(p) · GW(p)

Someone who applies useful effective behaviors towards the achievement of a ridiculous or reprehensible end goal.

Replies from: DataPacRat
comment by DataPacRat · 2014-10-30T16:16:16.028Z · LW(p) · GW(p)

I think I have this one covered; my character entry is simply "I wanna be a pony!".

(And, now that I think about it, my protagonist has said that if they don't have any other end goals they can think of, they're going to act as if their end goal is to "read comics".)

comment by IlyaShpitser · 2014-10-30T11:03:57.143Z · LW(p) · GW(p)

people outside the memeplex see those inside it as weirdos.

When judging how weird a community is, people often approximate a kind of "weirdness Pagerank" by looking at people the community holds in high esteem. I think Yudkowsky can come across as weird and off-putting to some folks (not in person, but online; this is a bit of a tangent, but I think it is very interesting to think about the systematic ways our online and offline personas differ and why they do so). If people perceive that, their alarms immediately go off and they conclude that folks are brainwashed, since the insiders don't seem to see the weirdness themselves.

Replies from: DataPacRat
comment by DataPacRat · 2014-10-30T16:18:47.771Z · LW(p) · GW(p)

This can add some useful background detail. My protagonist is acting as a pseudo-Yudkowsky to the group, and has already been called the "Mad Queen" at least once.

comment by Sjcs · 2014-10-30T00:41:14.045Z · LW(p) · GW(p)

Put another way, I've been trying to think of the various ways that people outside the memeplex see those inside it as weirdos.

The lurker, who may not be gaining as much utility as they would if they participated. However, they still receive the same connotations (or some degree of them) from those outside the memeplex, due to their association with the group. These outside perceptions may be either good or bad.

comment by dthunt · 2014-10-29T06:02:05.479Z · LW(p) · GW(p)

Hey, does anyone else struggle with feelings of loneliness?

What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?

Replies from: ChristianKl, cousin_it, Manfred, IlyaShpitser, MrMind
comment by ChristianKl · 2014-10-29T11:06:41.210Z · LW(p) · GW(p)

Do you feel lonely because you spend your time alone, or because you feel you don't connect with the people with whom you spend your time?

Two separate problems.

Replies from: dthunt
comment by dthunt · 2014-10-30T19:44:39.724Z · LW(p) · GW(p)

Not feeling connected with people, or rather, increasingly feeling less connection with people.

I actively socialize, and this helps, but the downward trend maybe suggests to me I'm doing something wrong.

(Edit: to clarify, my empathy thingy works as well as (maybe better than) it ever has, I just feel like the things I crave from social interactions are getting harder to acquire. Like, people "getting" you, or having enough things in common that you can effectively talk about the stuff that interests you. So, like, obviously, one of the solutions there is to hang out with more bright-and-happy CFAR-ish/LW-ish/EA-ish people.)

Replies from: Ben_LandauTaylor, ChristianKl
comment by Ben_LandauTaylor · 2014-11-01T17:23:22.131Z · LW(p) · GW(p)

I found the Nonviolent Communication method (http://www.amazon.com/Nonviolent-Communication-Language-Marshall-Rosenberg/dp/1892005034/) extremely helpful for feeling more connected to the people in my life.

comment by ChristianKl · 2014-10-31T13:08:51.854Z · LW(p) · GW(p)

www.meetup.com can be a good place to find groups of likeminded people.

comment by cousin_it · 2014-10-29T13:16:12.574Z · LW(p) · GW(p)

In my experience, "dealing with the negative feelings" is useless, because if you deal with them today and you're still lonely tomorrow, the feelings will just come back. It's better to find people who are interested in the same things as you, and hang out with them.

comment by Manfred · 2014-10-29T18:40:29.171Z · LW(p) · GW(p)

Joining clubs is good - especially if you're willing to put in enough work for it to be implicitly joining a social scene (unfortunately, this bit has plenty of caveats, but trial and error sometimes works fine). Do you make music? There are scenes for that. Dance, ditto. Playing card games, ditto.

LW is almost big enough to work for this, actually - certainly if one lives in a big city.

comment by IlyaShpitser · 2014-10-30T11:08:19.713Z · LW(p) · GW(p)

Sometimes negative emotions are just bad weather -- you have to get stuff done anyways. I also agree with and second sensible advice below on dealing with causes.

comment by MrMind · 2014-11-03T10:31:37.886Z · LW(p) · GW(p)

What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?

On one side, a feeling of loneliness is a signal that in my life I should socialize and connect more.
Other times though, decisions and actions taken under that emotion turned out to be pretty bad: it would have been better to just be and feel alone.
I have thus filled up my week but have left slots of time to be alone, and I know that any feeling of loneliness I get then is just a withdrawal symptom.
I've filled my social life with dancing classes, founding a local go club, joining a teaching class, and time to go out with my generic friends. On the other side, when I still feel alone I just take some minutes to sit quietly and imagine being in a pleasant social or sexual situation, trying to focus on every detail. This is usually more than enough to clear me of any negative state of mind.

comment by Curiouskid · 2014-10-31T06:26:24.077Z · LW(p) · GW(p)

Bayesianism and Causality, or, Why I am only a Half-Bayesian (Judea Pearl)

“The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships.”

comment by Dias · 2014-10-30T02:54:47.384Z · LW(p) · GW(p)

Suppose I was an unusually moral, unusually insightful used-car saleswoman. I have studied the dishonest sales techniques my colleagues use and, because I am unusually wise, worked out the general principles behind them. I think it is plausible that this analysis is new, though I guess it could already exist in an obscure journal.

Is it moral of me to publish this research, or should I practice the virtue of silence?

  • It might help people resist such techniques.
  • It might help salesmen employ these immoral techniques better.
  • Salesmen are more likely to already understand much of the content - vulnerable outsiders would have more to learn
  • Salesmen are more incentivized to learn from my analysis.
  • It is quite interesting to read as a purely abstract matter.
  • I like producing and sharing interesting research.

Obviously the dishonest car salesman is just an example, so don't get too hung up on the efficiency of the second-hand car market.

Replies from: gjm, ChristianKl
comment by gjm · 2014-10-30T13:32:37.761Z · LW(p) · GW(p)

Robert Cialdini did something a bit like this in researching his book "Influence", and so far as I can tell pretty much everyone agrees it's a good thing he wrote it.

I suspect attitudes to your doing this would depend on what your publication looked like. You could write

  • a book called "Secrets of Successful Second-hand Sales", aimed at used car salespeople, advising them on how to manipulate their customers;
  • a book called "Secrets of the Sinister Second-hand Sellers", aimed at used car buyers, advising them on what sort of things they should expect to be done to them and how to see through the bullshit and resist the manipulation;
  • a book called "A Scientific Study of Second-hand Sales Strategies", aimed at psychologists and other interested parties, presenting the information neutrally for whatever use anyone wants to make.

(As an unusually moral person you probably wouldn't actually want to write the first of those books. But some others in a similar situation might.)

My gut reaction to the first would be "ewww", to the second would be "oh, someone trying to drum up sales by attention-grabbing hype", and to the third would be "hey, that's interesting". Other people's guts may well differ from mine. Cialdini's book is mostly the third, with a little touch of the second.

Replies from: ChristianKl
comment by ChristianKl · 2014-10-30T13:37:57.811Z · LW(p) · GW(p)

Cialdini's book is mostly the third, with a little touch of the second.

And read by people who want to read the first ;)

Replies from: gjm
comment by gjm · 2014-10-30T19:02:37.683Z · LW(p) · GW(p)

And also who want to read the second or the third. But yes, of course, writing for one audience won't stop others taking advantage.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-10-31T00:51:14.890Z · LW(p) · GW(p)

I estimate that 95% of readers of Cialdini read it for business.

comment by ChristianKl · 2014-10-30T11:03:18.903Z · LW(p) · GW(p)

I think it depends very much on the case.

There are things in the social skill space that I discovered via experimentation that I don't openly share.

Salesmen aren't the only people who care about getting people to make decisions. In medicine, compliance is pretty important, and choice engineering as a field isn't completely evil.

Understanding our decision making can also give us insight into issues like akrasia.

comment by mare-of-night · 2014-10-29T17:56:37.330Z · LW(p) · GW(p)

There have been discussions here in the past about whether "extreme", lesswrong-style rationality is actually useful, and why we don't have many extremely successful people as members of the community.

I've noticed that Ramit Sethi often uses concepts we talk about here, but under different names. I'm not sure if he's as high a level as we're looking for as evidence, but he appears to be extremely successful as a businessman. I think he started out in life/career coaching, and then switched to selling online courses when he got popular. His stuff is generally around the theme of "how to win at life", but focused on his own definition of that, which is mainly having a profitable and interesting career. (He has a lot of free content which is only inconvenience-walled by being part of a mailing list - this video is one of those things.)

I'm curious if anyone else here knows of him, and what you think of him.

Replies from: wadavis, hyporational, wadavis
comment by wadavis · 2014-10-29T22:35:01.941Z · LW(p) · GW(p)

Side point: I've found material like his, "concepts we talk about here, but under different names", extremely useful when I want to explain the idea of rationality to someone without having to work around the LessWrong lingo, trying to have a conversation while tabooing all the LessWrong phrases and cached thoughts.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-10-30T10:54:59.870Z · LW(p) · GW(p)

Yes! In my opinion, it's a great habit to be on the lookout for things under a different name. This is the "academic coordination problem:" things are often rediscovered again and again, because people have incentives to write but not to read.

comment by hyporational · 2014-10-30T06:01:47.173Z · LW(p) · GW(p)

why we don't have many extremely successful people as members of the community.

I'm not sure if the community has been around long enough for this to be a useful kind of a measurement. Success doesn't happen in an instant and there's a lot of turnover. People who are already successful don't have much pressure to join in.

Replies from: RowanE
comment by RowanE · 2014-10-30T13:46:56.732Z · LW(p) · GW(p)

Additionally, "extreme success" is usually defined in zero sum terms that make it definitionally extremely rare, in addition to the strong influence of chance in whether one achieves success in most fields. So a community as small as ours with "not many extremely successful people" may still be completely worthwhile and have a high rate of extreme success per capita compared to most groups.

comment by wadavis · 2014-10-29T22:30:51.532Z · LW(p) · GW(p)

Fully agree that he uses concepts used on Less Wrong, under different names. And I've seen him referenced frequently on Less Wrong as somewhere to look for rational financial/career advice.

I follow his free material; it has provided me with inspiration/direction/confidence to aggressively pursue increased compensation, successfully. I've been tempted to purchase his material before, but am always discouraged at the last second by the smell of snake oil.

Replies from: mare-of-night
comment by mare-of-night · 2014-10-30T03:05:37.765Z · LW(p) · GW(p)

I've been doing the same thing for a while. I also get turned off a bit by the snake oil, and I've been following some of the mailing lists long enough that the content starts to feel repetitive. I might still buy if he ever puts out anything inexpensive (doesn't seem likely, but Jeff Walker did a while ago even though his business has a similar strategy, so it might happen...).

I wonder if everyone gets that slight snake oil feeling from him? And in particular, whether the kinds of marketing he's using still work when the reader recognizes what tactic is being used.

Replies from: wadavis
comment by wadavis · 2014-10-30T14:39:44.385Z · LW(p) · GW(p)

The question kept coming up: if I can smell snake oil, am I the target audience?

Even if it is legit and honest (I think it is), it kept reminding me of Nigerian phishers using poor language to discourage all but the most gullible, so as not to waste the phishers' time.

comment by cursed · 2014-10-28T06:51:27.988Z · LW(p) · GW(p)

Those who are currently using Anki on a mostly daily or weekly basis: what are you studying/ankifying?

To start: I'm working on memorizing programming languages and frameworks because I have trouble remembering parameters and method names.

Replies from: Emile, philh
comment by Emile · 2014-10-28T11:02:01.083Z · LW(p) · GW(p)

These days, most of my time on Anki is on Japanese (which I'm learning for fun) and Chinese (which I already know, but I'm brushing up on tones and characters).

Looking through my decks, I also have decks on:

  • Algorithms and data structures (from a couple books I read on that)
  • Communication (misc. tips on storytelling, giving talks, etc.)
  • Game Design (insights and concepts that seemed valuable)
  • German
  • Git and Unix Command Line commands
  • Haskell
  • Insight (misc. stuff that seemed interesting/important)
  • Mnemonics
  • Productivity (notes from Lukeprog's posts and various other sources)
  • Psychology and neuroscience
  • Rationality Habits (one of the few decks I have that came ready-made, from Anna Salamon I think, though I also added some cards and deleted others)
  • Statistics
  • Web Technologies (some stuff on Angular JS and CSS that I got tired of looking up all the time)

(also a few minor decks with very few cards)

I review those pretty much every day (I sometimes leave a few unfinished, depending on how much idle time I have in queues, transport, etc.)

Replies from: cursed
comment by cursed · 2014-10-28T21:23:39.796Z · LW(p) · GW(p)

That's fantastic. How many cards total do you have, and how many minutes a day do you study?

Replies from: Emile
comment by Emile · 2014-10-28T22:01:47.190Z · LW(p) · GW(p)

Apparently I have 6887 cards (though that includes those I suspended because they're boring, useless, too difficult, duplicated, or possibly wrong; I tend to often suspend cards instead of deleting them); of those around 3000 are Chinese pinyin cards I automatically created with a Python script (I set them up to get between 1 and 5 new ones per day, depending on how busy I tend to be), 1000 are Japanese (the biggest deck of manually-entered cards), and the remaining decks rarely go over 300 cards.
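
(The script itself can be tiny. Anki imports tab-separated text files, so a minimal sketch of that kind of generator, with a made-up word list and file name rather than my actual code, could look like this:

    import csv

    # (hanzi, pinyin) pairs; a real script would read these from a
    # dictionary or frequency list instead of hardcoding them.
    words = [
        ("你好", "ni3 hao3"),
        ("谢谢", "xie4 xie"),
        ("学习", "xue2 xi2"),
    ]

    # Write one "front<TAB>back" row per card; Anki's File -> Import
    # can then turn this file into a deck.
    with open("pinyin_cards.txt", "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for hanzi, pinyin in words:
            writer.writerow([hanzi, pinyin])

)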

I study probably between 20 and 40 minutes per day, usually in public transit or during "downtime" (waiting in line, carrying the baby around the house hoping for him to sleep, in the restroom, the elevator...). The time depends on how many new cards I entered recently.

comment by philh · 2014-10-28T19:09:47.164Z · LW(p) · GW(p)

Geography: "what direction [relative to central london] is this tube stop in?", English counties (locations), U.S. states (locations, capitals), Canadian territories and provinces (locations and capitals), countries (locations, capitals, and at some point I'll add flags). (Most of these came from ankiweb originally, but I had to add reverse cards.)

Bayes: conversions between odds, probabilities and decibels (specific numbers and more recently the general formulas)
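
(For reference, the general formulas are simple: odds = p / (1 - p), and evidence in decibels, following Jaynes's convention, is 10 * log10(odds). A quick sketch with illustrative numbers, not my actual cards:

    import math

    def prob_to_odds(p):
        return p / (1 - p)

    def odds_to_decibels(odds):
        # Jaynes-style evidence in decibels: 10 * log10(odds)
        return 10 * math.log10(odds)

    print(prob_to_odds(0.75))                             # 3.0, i.e. 3:1 odds
    print(odds_to_decibels(1))                            # 0.0 dB, even odds
    print(round(odds_to_decibels(prob_to_odds(0.9)), 1))  # 9.5 dB

)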

Miscellaneous: the NATO phonetic alphabet, logs (base 2 of 1.25, 1.5, 1.75, and base 10 of 2 through 9), some words I can never remember how to spell (this turns out not to help), some computer stuff (the order of the arguments in Python's datetime.strptime, and the difference between a left join and a right join), some definitions in machine learning, some historical dates (e.g. wars, first moon landing, introduction of the Model T), some historical inflation rates, some astronomical facts.

Also a deck based on the twelve virtues of rationality essay. (This one and most of the Bayes one I found through LW.)

I'm not sure most of this is useful, but most of it hasn't cost me significant effort either.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-10-30T01:17:08.756Z · LW(p) · GW(p)

If you memorize logs, I recommend memorizing natural logs of primes. This is all you need to quickly calculate the natural log, log_2, and log_10 of any integer.

You get ln of any number by adding together the natural logs of the prime factors, and you get log_m of n by the formula

log_m(n)=ln(n)/ln(m)

(maybe memorize ln(10) too to make the calculation a little easier)
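
A minimal sketch of the method in code, with the memorized values rounded to three places (illustrative, not prescriptive):

    # Memorized natural logs of small primes (rounded to three places).
    LN_PRIME = {2: 0.693, 3: 1.099, 5: 1.609, 7: 1.946}

    def ln(n):
        # Sum the memorized logs of n's prime factors.
        total = 0.0
        for p, ln_p in LN_PRIME.items():
            while n % p == 0:
                total += ln_p
                n //= p
        assert n == 1, "n has a prime factor we haven't memorized"
        return total

    def log(n, base):
        # log_m(n) = ln(n) / ln(m)
        return ln(n) / ln(base)

    print(ln(12))       # 2.485 (true value 2.4849...)
    print(log(12, 2))   # ~3.586 (true value 3.5849...)
    print(log(12, 10))  # ~1.079 (true value 1.0791...)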

Replies from: philh
comment by philh · 2014-10-30T10:31:38.718Z · LW(p) · GW(p)

I can't do real division in my head, but if I wanted to maximise my logarithm-ability while minimizing my number of cards, I would go for logs base (probably 10) of primes, and 1/log(e) and 1/log(2).

But I'm not too fussed about minimizing cards, or about natural logs. Learning more primes might be helpful, but I can get them approximately. E.g. I don't have log_10(11) memorized, but I know it's between log_10(10) and log_10(2*6) which are 1 and 1.08, and it would be closer to the latter (my calculator says 1.041, which is slightly lower than I would have guessed, but if I put it in Anki I'd only go to 1.04 anyway).
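
(Checking that bound numerically, just as a sanity check:

    import math

    # log10(11) is bracketed by log10(10) and log10(12) = log10(2) + log10(6).
    lower = 1.0                            # log10(10)
    upper = math.log10(2) + math.log10(6)  # log10(12) ≈ 1.079
    print(lower, upper, math.log10(11))    # 1.0 1.0791... 1.0413...

)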

comment by fubarobfusco · 2014-11-01T17:53:58.735Z · LW(p) · GW(p)

I've seen a few discussions recently where people seem to argue past one another because they're using different senses of the terms "subjective" and "objective".

Some things are called "subjective" because they are parametrized by subject. For instance, everyone who can see has a field of vision, but no two people have the same field of vision (because two people can't stand in the same spot at the same time). However, we can reason and calculate accurately about someone else's field of vision.

Other things are called "subjective" because they are internal to a subject. For instance, since humans are not telepathic we don't have access to the thoughts or mood of another person. The only way we can discover them is by being told about them — or, theoretically, brain-scans — and even this doesn't convey how it feels to be that person.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-02T18:38:41.155Z · LW(p) · GW(p)

For instance, since humans are not telepathic we don't have access to the thoughts or mood of another person. The only way we can discover them is by being told about them — or, theoretically, brain-scans

I think various people are better at detecting mood by reading body language than brain scans are. Both brain scans and reading body language are cases where you have partial information and use it for pattern matching. I have had multiple experiences of meeting people who could perceive my own mood better than I could myself.

There are many times when I get a better idea of someone's mood by hugging that person than by asking them verbally and having them tell me how they feel.

comment by polymathwannabe · 2014-10-29T00:38:09.364Z · LW(p) · GW(p)

NY Times on the wrongness of political party-related discrimination.

Replies from: fubarobfusco, Lumifer
comment by fubarobfusco · 2014-10-30T08:05:16.677Z · LW(p) · GW(p)

I doubt this generalizes very well.

There have clearly been cases in the history of the world where one party made it clear that they really did intend to hurt or kill their perceived opponents. And then, after acceding to power, went on to do just that.

I've seen remarks here on LW from at least one person in a central European country that he or she felt increasingly personally unsafe due to particular political factions in that country producing increasingly violent rhetoric. I would not tell that person that he or she would be wrong to shun people who advocated political violence against him or her.

Here in the U.S., it sure seems that political eliminationist rhetoric (of the "All the Other Party should be killed as traitors" sort) is produced largely as a form of commercial entertainment, not serious political advocacy. But I say that from a position of relative security and privilege ....

comment by Lumifer · 2014-10-30T15:39:53.048Z · LW(p) · GW(p)

David Brooks is more or less correct about the US where the two mainstream parties are not very distinguishable. He is entirely wrong about many other places of the world. There are enough countries where someone's political views are "a marker for basic decency".

P.S. I am amused by a piece of incidental research he cites:

For example, political scientists Shanto Iyengar and Sean Westwood gave 1,000 people student résumés and asked them which students should get scholarships. The résumés had some racial cues (membership in African-American Students Association) and some political cues (member of Young Republicans). Race influenced decisions. Blacks favored black students 73 percent to 27 percent, and whites favored black students slightly.

That is called blatant racism, and with s/black/white/ it would be cause for much hand-wringing, soul-searching, and probably obligatory "diversity training" for everyone.

comment by Thomas · 2014-10-27T09:59:57.183Z · LW(p) · GW(p)

Where are you right, while most others are wrong? Including people on LW!

Replies from: bramflakes, Viliam_Bur, RowanE, lmm, James_Miller, Ixiel, gattsuru, pianoforte611, summerstay, satt, Daniel_Burfoot, ChristianKl
comment by bramflakes · 2014-10-27T19:53:59.956Z · LW(p) · GW(p)

My thoughts on the following are rather disorganized and I've been meaning to collate them into a post for quite some time but here goes:

Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness, etc.) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than Harm are low-status.

Replies from: Azathoth123, Larks
comment by Azathoth123 · 2014-10-28T04:00:06.104Z · LW(p) · GW(p)

Ah, yes. The standard problem with measurement-based incentives: you start optimizing for what's easy to measure.

comment by Larks · 2014-10-28T02:04:36.070Z · LW(p) · GW(p)

Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.

comment by Viliam_Bur · 2014-10-28T00:56:24.543Z · LW(p) · GW(p)

It is extremely important to find out how to have a successful community without sociopaths.

(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole". I believe that avoiding these, and maybe many other, failure modes is critical if we ever want to have a Friendly society.)

Replies from: Vaniver, ChristianKl, lmm, Risto_Saarelma, drethelin, pianoforte611, drethelin
comment by Vaniver · 2014-10-30T14:49:51.872Z · LW(p) · GW(p)

It is extremely important to find out how to have a successful community without sociopaths.

It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?

(The analogy to Friendly AI is worth considering, though.)

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-02T04:16:08.263Z · LW(p) · GW(p)

(The analogy to Friendly AI is worth considering, though.)

Ok, so start by examining the suspected sociopath's source code. Wait, we have a problem.

comment by ChristianKl · 2014-10-29T20:19:58.169Z · LW(p) · GW(p)

It is extremely important to find out how to have a successful community without sociopaths.

What do you mean with the phrase "sociopath"?

A person who's very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community, even when the same person fits the profile of what gets clinically diagnosed as sociopathy.

I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T08:57:06.511Z · LW(p) · GW(p)

I'd rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gave them a very precise definition of what exactly "fire" means, and how it differs at the quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.

Specifically, I am concerned about the type of people who are very low on empathy and whose utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: Professor Quirrell, in real life. Such people do exist.

(I once had a boss like this for a short time, and... well, it's like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)

I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.

I'd like to agree with the spirit of this. But there is the problem that the sociopath would optimize their "indecent" behavior to make it difficult to prove.

Replies from: ChristianKl, Vaniver, NancyLebovitz
comment by ChristianKl · 2014-10-30T10:04:57.334Z · LW(p) · GW(p)

Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.

I'm not saying that the topic is meaningless. I'm saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about.

The base rate for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don't have a properly working empathy module, probably more than average in society.

When Eliezer says that, based on typical-mind issues, he feels that everyone who says "I feel your pain" has to be lying, that suggests a lack of a working empathy module. If you read back the April 1st article, you find wording about "finding willing victims for BDSM". The desire to cause other people pain is there. Eliezer also ticks other boxes that are typical for clinical psychopathy, such as a high belief in his own importance for the fate of the world. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.

I'm not saying that Eliezer clearly falls under the label of clinical psychopathy; I have never interacted with him face to face and I'm no psychologist. But part of being rational is that you don't ignore patterns that are there. I don't think that this community would overall benefit from kicking out people who tick multiple boxes on that checklist.

Yvain is smart enough not to gather data on the number of LW members diagnosed with psychopathy when he asks about mental illnesses. I think it's good that way.

If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T14:02:27.384Z · LW(p) · GW(p)

I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.

This feels to me like worrying about a vegetarian who eats "soy meat" because it exposes their unconscious meat-eating desire, while there are real carnivores out there.

specify which kind of people you want to remove from the community

I am not even sure if "removing a kind of people" is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern. This also has a possible problem with false reporting, which maybe could also be solved by noticing patterns.

Speaking about society in general, experience shows that sociopaths are likely to gain power in many kinds of organizations. It would be naive to expect that rationalist communities would somehow be immune to this, especially if we start "winning" in the real world. Sociopaths have an additional natural advantage: they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths.

I think someone should at least try to solve this problem, instead of pretending it doesn't exist or couldn't happen to us. Because it's just a question of time.

Replies from: ChristianKl, Lumifer, Azathoth123
comment by ChristianKl · 2014-10-30T17:38:53.629Z · LW(p) · GW(p)

I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.

Human beings frequently like to think of people they don't like or understand as evil. There are various very bad mental habits associated with that.

Academic psychology is a thing. It actually describes how certain people act, including how psychopaths act. They aren't just evil; their emotional processing is screwed up in systematic ways.

My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.

Translated into everyday language, that's: "Rationalists should gossip more about each other." Whether we should follow that maxim is a quite complex topic on its own; if you think it's important, write an article about it and actually address the reasons why people don't like to gossip.

I think someone should at least try to solve this problem, instead of pretending it doesn't exist or couldn't happen to us.

You are not really addressing what I said. It's very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account of a person who trusted a self-declared egoist from a LW meetup too much and ended up with a bad interaction, because they didn't take at face value the openness of someone who said they only care about themselves.

Given your moderator position, do you think that you want to do something to garden but lack power at the moment? Especially dealing with the obvious case? If so, that's a real concern. Probably worth addressing more directly.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T19:36:19.123Z · LW(p) · GW(p)

Unfortunately, I don't feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don't think I have a solution. I just noticed a danger, and general unwillingness to debate it.

Probably the best thing I can do right now is to recommend good books on this topic. That would be:

  • The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
  • People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read

I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.

As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes bigger and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I had invited there with visions of rationality and "winning". (And "something bad" offline can be much worse than mere systematic downvoting.) Especially if we achieve some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let fate decide. I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof without me even noticing. Having a community of highly intelligent people carries the risk that the possible sociopaths, if they come, will likely also be highly intelligent. So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad hoc would probably seem like personal animosity or joining one faction in an internal conflict.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2014-10-30T19:49:21.794Z · LW(p) · GW(p)

Can you express what you want to protect against while tabooing words like "bad", "evil", and "abuse"?

comment by ChristianKl · 2014-10-30T22:31:04.579Z · LW(p) · GW(p)

In an ideal world we could fully trust all people in our tribe to do nothing bad; simply because we have known a person for years, we could trust that person to do good.

That's not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person.

There are a bunch of people I meet in the topic of personal development whom I trust very easily because I know the heuristics that those people use.

If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

But if you use that as a criterion for kicking people out, people won't be open about their own beliefs anymore.

In general, trusting people a lot who tick half of the criteria that constitute clinical psychopathy isn't a good idea.

On the other hand LW is per default inclusive and not structured in a way where it's a good idea to kick out people on such a basis.

Replies from: Nornagest
comment by Nornagest · 2014-10-30T22:40:22.995Z · LW(p) · GW(p)

If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.

Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they've generally been confused teenagers, Internet Tough Guys, and a few people who're just really bad at recognizing their own emotions -- who also aren't the best people to trust, granted, but for different reasons.

I'd be more worried about people who habitually underestimate the empathy of others and don't have obviously poor self-image or other issues to explain it. Most of the sociopaths I've met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.

Replies from: ChristianKl
comment by ChristianKl · 2014-10-30T23:24:20.021Z · LW(p) · GW(p)

Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of.

They usually won't say it in a way that they would predict will put other people on guard. On the other hand, that doesn't mean that they don't say it at all.

I can't find the link at the moment, but a while ago someone posted on LW that he shouldn't have trusted another person from a LW meetup who openly said those things and then acted like that.

Categorising Internet Tough Guys is hard. Base rates for psychopathy aren't that low, but you are right that not everyone who says those things is a psychopath. Even so, it's a signal for not giving full trust to that person.

comment by Lumifer · 2014-10-30T15:15:57.620Z · LW(p) · GW(p)

I think someone should at least try to solve this problem

(a) What exactly is the problem? I don't really see a sociopath getting enough power in the community to take over LW as a realistic scenario.

(b) What kind of possible solutions do you think exist?

comment by Azathoth123 · 2014-11-02T04:29:55.920Z · LW(p) · GW(p)

My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.

What do you mean by "harm". I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of "harm". For example, if you've shattered someone's worldview have you "harmed" him?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-02T11:15:10.099Z · LW(p) · GW(p)

if you've shattered someone's worldview, have you "harmed" him?

Not per se, although there could be some harm in the execution. For example, if I decide to follow someone home from work every day screaming "Jesus is not real" at them, the problem is with me following them every day, not with the message. Or, if they are at the funeral of their mother and the priest is saying "let's hope we will meet our beloved Jane in heaven with Jesus", that would not be a proper moment to jump in and scream "Jesus is not real".

comment by Vaniver · 2014-10-30T15:11:11.025Z · LW(p) · GW(p)

I once had a boss like this for a short time, and... well, it's like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point.

Steve Sailer's description of Michael Milken:

I had a five-minute conversation with him once at a Milken Global Conference. It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.

Is that the sort of description you have in mind?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T16:55:08.234Z · LW(p) · GW(p)

I really doubt it's possible to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people... and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.

If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn't met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what the 10 looks like, I would say "yeah, yeah, I know exactly what you mean" while having a model of 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)

Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say "2 + 2 = 5" so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn't make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.

If you try to confront them in front of another person and say: "You said yesterday that 2 + 2 = 5", they will either look the other person in the eyes and say "but really, 2 + 2 = 5" and make them believe so, or will look at you and say: "You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things"; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)

This person can play the same game with dozens of people at the same time and not get tired, because for them it's as easy as breathing, there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be "this is an awesome person". If you get closer to them, you will start noticing the pattern, and it will scare you like hell.

And unless you have met such person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim would try to get your help. The true description of what really happened just seems fucking unlikely. On the other hand their story would be exactly what you want to hear.

It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.

No, that is completely unlike. That sounds like some super-nerd.

Your first impression from the person I am trying to describe would be "this is the best person ever". You would have no doubt that anyone who said anything negative about such person must be a horrible liar, probably insane. (But you probably wouldn't hear many negative things, because their victims would easily predict your reaction, and just give up.)

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-02T04:23:55.052Z · LW(p) · GW(p)

Not a person, but I've had similar experiences dealing with Cthulhu and certain political factions.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-02T11:12:01.328Z · LW(p) · GW(p)

Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-04T04:03:39.028Z · LW(p) · GW(p)

On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.

comment by NancyLebovitz · 2014-11-03T00:50:14.365Z · LW(p) · GW(p)

Narcissist Characteristics

I was wondering if this sounds like your abusive boss-- it's mostly a bunch of social habits which could be identified rather quickly.

comment by lmm · 2014-10-28T19:09:43.020Z · LW(p) · GW(p)

I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.

Replies from: Jiro
comment by Jiro · 2014-10-28T19:39:03.092Z · LW(p) · GW(p)

The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don't need a watertight justification, because those are used to avoid corruption and bias, and you know you don't have any of that anyway.

Replies from: Lumifer, lmm
comment by Lumifer · 2014-10-28T19:54:22.249Z · LW(p) · GW(p)

The best ruler for a community is an incorruptible, bias-free dictator.

There is also that kinda-important bit about shared values...

comment by lmm · 2014-10-29T23:11:17.857Z · LW(p) · GW(p)

I'm not being utopian, I'm giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.

Replies from: Viliam_Bur, Azathoth123
comment by Viliam_Bur · 2014-10-30T08:14:01.048Z · LW(p) · GW(p)

I think I know what you mean. The situations like: "there is 90% probability that something bad happened, but 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?"

comment by Azathoth123 · 2014-11-02T04:39:21.790Z · LW(p) · GW(p)

The thing is, from what I've heard, the problem isn't so much sociopaths as ideological entryists.

comment by Risto_Saarelma · 2014-11-01T17:30:50.911Z · LW(p) · GW(p)

But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole".

How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.

And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?

The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church's writings, but there's also like a million pages of that stuff so I'm not going to try and pick examples.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-01T20:58:06.882Z · LW(p) · GW(p)

All cheap detection methods could be fooled easily. It's like with that old meme "if someone is lying to you, they will subconsciously avoid looking into your eyes", which everyone has already heard, so of course today every liar would look into your eyes.

I see two possible angles of attack:

a) Make a correct model of sociopathy. Don't imagine sociopaths to be "like everyone else, only much smarter". They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.

b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)

comment by drethelin · 2014-11-02T19:45:54.533Z · LW(p) · GW(p)

I think it's extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.

comment by pianoforte611 · 2014-10-29T19:43:46.216Z · LW(p) · GW(p)

Are you directing this at LW? I.e., is there a sociopath that you think is bad for our community?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T09:03:12.930Z · LW(p) · GW(p)

Well, I suspect Eugine Nier may have been one, to show the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)

Now, that was online behavior. Imagine the same kind of person in real life. I believe it's just a question of time. Using my limited experience to make predictions: such a person would be rather popular, at least at the beginning, because they would keep using the right words that are tested to evoke a positive response from many lesswrongers.

Replies from: IlyaShpitser, Lumifer
comment by IlyaShpitser · 2014-10-30T09:59:43.398Z · LW(p) · GW(p)

A "sociopath" is not an alternative label for [someone I don't like.] I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as "player characters", for whatever reason. A monster, basically. I think it's a bad habit to go around calling people monsters.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T13:46:19.762Z · LW(p) · GW(p)

I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.

The thing I am trying to say is that if a real monster would come to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don't have convincingly sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of "unpleasant" imagine "hurting people, but only as much as is safe in given situation".) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all other people are going to invent alternative explanations why that probably didn't happen, because they don't want to risk being wrong in a way that would feel morally wrong to them.

I don't have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a "league against monsters", it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)

So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn't exist. I wish we had a better solution.

Replies from: NancyLebovitz, arromdee
comment by NancyLebovitz · 2014-10-30T20:37:13.078Z · LW(p) · GW(p)

I can believe that 1% - 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don't have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T22:36:49.205Z · LW(p) · GW(p)

That's right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.

(However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor -- which I don't know whether it is or not -- it could again be 1-4% among the wannabe rationalists.)

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2014-10-31T01:05:53.669Z · LW(p) · GW(p)

The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else.

I would describe that person as a charismatic manipulator. I don't think it requires being a sociopath, though being one helps.

comment by NancyLebovitz · 2014-10-30T22:59:15.379Z · LW(p) · GW(p)

The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we're relatively safe. :-)

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field, but I'm really hoping we don't get tested on that one.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-31T09:39:38.762Z · LW(p) · GW(p)

I think we're relatively safe

Returning to the original question ("Where are you right, while most others are wrong? Including people on LW!"), this is exactly the point where my opinion differs from the LW consensus.

I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field

For a sufficiently high value of "eventually", I agree. I am worried about what would happen until then.

I'm really hoping we don't get tested on that one.

I'm hoping that this is not the best answer we have. :-(

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-31T14:01:33.249Z · LW(p) · GW(p)

To what extent is that sort of sociopath dependent on in-person contact?

Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma?

Would noticing when one is confused and writing the details down help?

Replies from: Viliam_Bur, Viliam_Bur, Lumifer, Azathoth123, IlyaShpitser
comment by Viliam_Bur · 2014-10-31T21:24:16.527Z · LW(p) · GW(p)

In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you.

I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.

If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment.

As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn't expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn't trust him more, or that Dumbledore is okay with the idea of death, but he wouldn't expect either of them trying to hurt him. And the list of nice people includes also Quirrell, which is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-01T15:26:42.484Z · LW(p) · GW(p)

Harry also just doesn't want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-02T15:43:35.587Z · LW(p) · GW(p)

Does he consider the possibility that Quirrell is inimical, but not Voldemort?

This was unclear-- I meant that Quirrell could be inimical without being Voldemort.

The idea of Voldemort not being a bad guy (without being dead)-- he's reformed or maybe he's developed other hobbies-- would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I'd expect from HPMOR, but I don't know of any evidence for it in the text.

comment by Viliam_Bur · 2014-10-31T15:51:17.487Z · LW(p) · GW(p)

Perhaps we should taboo "resistance to charisma" first. What specifically are we trying to resist?

Looking at an awesome person and thinking "this is an awesome person" is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. Yeah, it would be nice to measure someone's awesomeness properly, but that's not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.

So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don't have any evidence for X other than one person saying it is so. And that awesomeness alone is not expertise.

But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)

Would noticing when one is confused and writing the details down help?

Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. And a very important part would be, for every information you have, write down who said that to you. Don't report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don't write "X", write "Joe said: X", even if you perfectly believe him at the moment. If Joe says "A" and Jane says "B", write "Joe said A. Jane said B" regardless of which one of them makes sense and which one doesn't. If Joe says that Jane said X, write "Joe said that Jane said X", not "Jane said X".

Also, don't edit the past. If you wrote "X" yesterday, but today Joe corrected you that he actually said "Y" yesterday but you have misunderstood it, don't erase the "X", but simply write today "Joe said he actually said Y yesterday". Even if you are certain that you really made a mistake yesterday. When Joe gives you a promise, write it down. When there is a perfectly acceptable explanation later why the promise couldn't be fulfilled, accept the explanation, but still record that for perfectly acceptable reasons the promise was not fulfilled. Too much misinformation is a red flag, even if there is always a perfect explanation for each case. (Either you are living in a very unlikely Everett branch, or your model is wrong.) Even if you accept an excuse, make a note of the fact that something had to be excused.

Generally, don't let the words blind you from facts. Words are also a kind of facts (facts about human speech), but don't mistake "X" for X.

I think gossip is generally a good thing, but only if you can follow these rules. When you learn about X, don't write "X", but write "my gossiping friend told me X". It would be even better to gossip with friends who follow similar rules; who can make a distinction between "I have personally seen X" and "a completely trustworthy person said X and I was totally convinced". But even when your friends don't use this rule, you can still use it when speaking with them.

The problem is that this kind of journaling has a cost. It takes time; you have to protect the journal (the information it contains could harm not only you but also other people mentioned there); and you have to keep things in memory until you get to the journal. Maybe you could have some small device with you all day long where you would enter new data; and at home you would transfer the daily data to your computer and erase the device.
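
To make the bookkeeping concrete, here is a minimal sketch of such an append-only journal in code; the file format and names are made up:

    from datetime import date

    def record(journal_path, source, statement):
        # Append one attributed claim; never rewrite old entries.
        with open(journal_path, "a", encoding="utf-8") as f:
            f.write("%s\t%s\t%s\n" % (date.today(), source, statement))

    # Record who said what, not your conclusion:
    record("journal.tsv", "Joe", "said: X")
    record("journal.tsv", "Joe", "said that Jane said X")
    # Corrections are new entries; the old line stays as it was:
    record("journal.tsv", "Joe", "says he actually said Y yesterday")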

But maybe I'm overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question "what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?" and check whether the alternative model explains the observed data better. Especially with the people you love, admire, or depend on. This is probably useful not only against literal sociopaths, but other kinds of manipulators, too.

Replies from: ChristianKl, NancyLebovitz
comment by ChristianKl · 2014-11-01T21:59:53.263Z · LW(p) · GW(p)

But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)

I don't think "no nonverbal cues" is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand if they say something that should go along with a emotion if a normal person says it, you can detect that something doesn't fit.

In the LW community however, there are a bunch of people with autism that show strange nonverbals and don't show emotions when you would expect a neurotypical person to show emotions.

But maybe I'm overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question "what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?"

I think that's a strawman. Not having long-term goals is a feature of psychopaths. They don't have a single purpose according to which they organize things. They are impulsive.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-02T10:34:49.574Z · LW(p) · GW(p)

Not having long-term goals is a feature of psychopaths. They don't have a single purpose according to which they organize things. They are impulsive.

That seems correct according to what I know (but I am not an expert). They are not like "I have to maximize the number of paperclips in the universe in the long term" but rather "I must produce some paperclips, soon". Given a sufficiently long time interval, they would probably fail the marshmallow test.

Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what society allows. If the impulse is "must get drunk and fight with people", such a person will sooner or later end up in prison. If the impulse is "must lie to people and steal from them", with some luck and skill such a person could become rich, if they can recognize situations where it is safe to lie and steal. But I'm speculating here.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-02T14:33:39.745Z · LW(p) · GW(p)

Human behavior is more complex than that.

Rather than thinking "I must steal", the impulse is more likely to be "I want to have X" combined with a lack of inhibition against stealing. Psychopaths usually don't optimize for being evil.

comment by NancyLebovitz · 2014-11-01T15:23:56.484Z · LW(p) · GW(p)

Are you suggesting journaling about all your interactions where someone gives you information? That does sound exhausting and unnecessary. It might make sense to do for short periods for memory training.

Another possibility would be to record all your interactions-- this isn't legal in all jurisdictions unless you get permission from the other people being recorded, but I don't think you're likely to be caught if you're just using the information for yourself.

Journaling when you have reason to suspicious of someone is another matter, and becoming miserable and confusing for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.)

It does seem to me that this isn't exactly an individual problem if what you need is group resistance to extremely skilled manipulators.

http://www.ribbonfarm.com/the-gervais-principle/ -- some detailed analysis of sociopathy in offices.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-01T21:06:51.036Z · LW(p) · GW(p)

http://www.ribbonfarm.com/the-gervais-principle/ -- some detailed analysis of sociopathy in offices.

Ironically, now I will be the one complaining that this definition of a "sociopath" seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don't make it into corporate positions of power.)

I agree that making detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals; have a training session where people will tell you some information, and at the end you have an exam where you have to write an answer to the question and the name of the person who told you that.

If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easier, and then you will have the records you can examine later.

comment by Lumifer · 2014-10-31T16:16:23.128Z · LW(p) · GW(p)

the challenge is having enough people in the group who are resistant to charisma.

Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the 'net, especially in almost-purely-text forums.

comment by Azathoth123 · 2014-11-02T04:12:50.254Z · LW(p) · GW(p)

Does CFAR or anyone else teach resistance to charisma?

Well, consider who started CFAR (and LW for that matter) and how he managed to accomplish most of what he has.

comment by IlyaShpitser · 2014-11-02T15:47:05.186Z · LW(p) · GW(p)

resistance to charisma?

Ex-cult members seem to have fairly general antibodies vs "charisma." Perhaps studying cults without being directly involved might help a little as well; it would be a shame if the only way to acquire them were the "school of hard knocks" of actual cult membership.

Incidentally, cults are a bit of a hobby of mine :).

comment by arromdee · 2014-10-30T18:47:49.567Z · LW(p) · GW(p)

Whenever there is some kind of a "league against monsters", it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)

https://allthetropes.orain.org/wiki/Hired_to_Hunt_Yourself

comment by Lumifer · 2014-10-30T15:31:59.398Z · LW(p) · GW(p)

Well, I suspect Eugine Nier may have been one, to show the most obvious example.

Why do you suspect so? Gaming ill-defined social rules of an internet forum doesn't look like a symptom of sociopathy to me.

You seem to be stretching the definition too far.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-30T17:07:03.636Z · LW(p) · GW(p)

Abusing rules to hurt people is at least weak evidence. Doing it persistently for years, even more so.

comment by drethelin · 2014-11-02T19:30:29.809Z · LW(p) · GW(p)

Why is this important?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-02T21:46:34.743Z · LW(p) · GW(p)

My goal is to create a rationalist community. A place to meet other people with similar values and "win" together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.

Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness.

The typical nerd strategy for solving conflicts is to run away and hide, and create a community of social outcasts where everything is tolerated, and the whole group is safe more or less because it has such low status that typical bullies rather avoid it. But at the moment we start "winning", this protective shield is gone, and we do not have any other coping strategy. Just like being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit people and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it's just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any "barbarians" would consider you a serious danger.

(I don't want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It's not just the "affective death spirals", although they also play a large role. But there are people in important positions who don't think about "how to make the world a better place for humans", but rather "how could I most benefit from this conflict". And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn't immediately fall apart, or doesn't become merely a debate club, this seems to me like the next obvious risk.)

Replies from: ChristianKl
comment by ChristianKl · 2014-11-02T22:22:37.018Z · LW(p) · GW(p)

I don't want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders.

How do you come to that conclusion? Simply because you don't agree with their actions? Or are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-03T07:58:49.658Z · LW(p) · GW(p)

How do you come to that conclusion? Simply because you don't agree with their actions?

Uhm, no. Allow me to quote from my other comment:

If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0.

I hope it illustrates that my mental model has separate buckets for "people I suspect to be sociopaths" and "people I disagree with".

Replies from: ChristianKl
comment by ChristianKl · 2014-11-03T15:58:54.735Z · LW(p) · GW(p)

Diagnosing mental illness based on the kind of second-hand information you have about politicians isn't a trivial effort, especially if you lack a background in psychology.

comment by RowanE · 2014-10-27T11:41:18.860Z · LW(p) · GW(p)

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.

Replies from: ChristianKl, Thomas
comment by ChristianKl · 2014-10-27T16:41:09.477Z · LW(p) · GW(p)

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this.

I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self-esteem and you give him an intellectual argument that's sound and that he wants to believe, that's frequently not enough to change the fundamental belief behind low self-esteem.

Scott Alexander wrote a blog post about how asking a schizophrenic whether he holds any weird beliefs gets the schizophrenic to tell the doctor about his faulty beliefs.

If you ask a question differently you get people reacting differently. If you want to get a broad spectrum of answers, then it makes sense to ask the question in a bunch of different ways.

I'm intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off even if those beliefs feel very real to me.

If you ask me: "Do you think X is really true and everyone who disagrees is wrong?", you trigger slightly different heuristics in me than if you ask "Do you believe X?".

It's probably pretty straightforward to demonstrate this and some cognitive psychologist might even already have done the work.

comment by Thomas · 2014-10-27T11:59:31.581Z · LW(p) · GW(p)

Very well. But do you have such a belief -- one that others will see as wrong?

(Last time this was asked, the majority of contrarian views were presented by me.)

Replies from: RowanE, ZankerH
comment by RowanE · 2014-10-27T14:55:02.110Z · LW(p) · GW(p)

The most contra-LW belief I have, if you can call it that, is my not being convinced on the pattern theory of identity - EY's arguments about there being no "same" or "different" atoms not affecting me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven't got much support besides my intuitions for believing that that would end my experience and going to sleep tonight won't, and by now I've become almost agnostic on the issue.

comment by ZankerH · 2014-10-27T13:39:03.203Z · LW(p) · GW(p)
  • Technological progress and social/political progress are loosely correlated at best

  • Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

  • There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms

Replies from: Metus, Nate_Gabriel, fubarobfusco, Richard_Kennaway
comment by Metus · 2014-10-27T14:24:31.479Z · LW(p) · GW(p)

I think I found the neoreactionary.

Replies from: gjm
comment by gjm · 2014-10-27T15:50:13.063Z · LW(p) · GW(p)

The neoreactionary? There are quite a number of neoreactionaries on LW; ZankerH isn't by any means the only one.

Replies from: Metus
comment by Metus · 2014-10-27T16:05:24.544Z · LW(p) · GW(p)

Apparently LW is a bad place to make jokes.

Replies from: gjm, Lumifer
comment by gjm · 2014-10-27T17:09:53.475Z · LW(p) · GW(p)

The LW crowd is really tough: jokes actually have to be funny here.

comment by Lumifer · 2014-10-27T16:12:47.347Z · LW(p) · GW(p)

That's not LW, that's the internet. The implied context in your head is not the implied context in other heads.

comment by Nate_Gabriel · 2014-10-27T13:52:51.275Z · LW(p) · GW(p)

Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband's right to punish his wife however he wanted.

Replies from: RowanE
comment by RowanE · 2014-10-27T14:32:18.472Z · LW(p) · GW(p)

I think that progress is specifically what he's on about in his third point. It's standard neoreactionary stuff, there's a reason they're commonly regarded as horribly misogynist.

Replies from: Capla
comment by Capla · 2014-10-27T18:06:02.668Z · LW(p) · GW(p)

I want to discuss it, and be shown wrong if I'm being unfair, but saying "It's standard [blank] stuff" seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says "Oh, that's just standard Less Wrong stuff." It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.

Replies from: RowanE, Lumifer
comment by RowanE · 2014-10-27T19:20:19.596Z · LW(p) · GW(p)

I was trying to say "you should not expect someone who thinks no social, political or moral progress has been made since the 18th century to consider women's rights a big step forward" in a way that wasn't insulting to Nate_Gabriel - being casually dismissive of an idea makes "you seem to be ignorant about [idea]" less harsh.

comment by Lumifer · 2014-10-27T18:15:19.262Z · LW(p) · GW(p)

but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright.

This comment could be (but not necessarily is) valid with the meaning of "Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument."

comment by fubarobfusco · 2014-10-28T04:06:02.125Z · LW(p) · GW(p)

How do you square your beliefs with (for instance) the decline in murder in the Western world — see, e.g. Eisner, Long-Term Historical Trends in Violent Crime?

comment by Richard_Kennaway · 2014-10-27T15:00:28.046Z · LW(p) · GW(p)

What do you mean by social progress, given that you distinguish it from technological progress ("loosely correlated at best") and moral progress ("no such thing")?

Replies from: ZankerH
comment by ZankerH · 2014-10-27T15:15:28.393Z · LW(p) · GW(p)

Re: social progress: see http://www.moreright.net/social-technology-and-anarcho-tyranny/

We use the term “technology” when we discover a process that lets you get more output for less investment, whether you’re trying to produce gallons of oil or terabytes of storage. We need a term for this kind of institutional metis – a way to get more social good for every social sacrifice you have to make – and “social technology” fits the bill. Along with the more conventional sort of technology, it has led to most of the good things that we enjoy today.

The flip side, of course, is that when you lose social technology, both sides of the bargain get worse. You keep raising taxes yet the lot of the poor still deteriorates. You spend tons of money on prisons and have a militarized police force, yet they seem unable to stop muggings and murder. And this is the double bind that "anarcho-tyranny" addresses. Once you start losing social technology, you're forced into really unpleasant tradeoffs, where you have to sacrifice along two axes of things you really value.

As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, "how is this still a thing in 2014?"). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.

Replies from: Toggle
comment by Toggle · 2014-10-27T17:09:04.200Z · LW(p) · GW(p)

More Right

That use of 'technology' seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; 'techne' implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a "regression" in social progress implies that there was a kind of knowledge that we no longer have -- but it is the social institutions themselves that are breaking down, not our ability to craft them.

Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of 'progress' is deeply suspect.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T17:28:40.971Z · LW(p) · GW(p)

Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

The fact that I can't make a new computer from scratch doesn't mean I'm using one as "a magical artifact". What contemporary pieces of technology can you make?

It does not seem to be the case that we have ever known how to make new societies that do the things we want.

You might be more familiar with this set of knowledge if we call it by its usual name -- "politics".

Replies from: Toggle
comment by Toggle · 2014-10-27T17:43:44.909Z · LW(p) · GW(p)

I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

You can do this for governments, of course- but notably, we haven't lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The 'regression' that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T17:57:33.911Z · LW(p) · GW(p)

I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

That's not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don't see why you couldn't produce a particular kind of a society.

Of course you can't build any kind of society you wish just like you can't build any kind of a computer you wish -- you're limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc.

Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn't have the precision and replicability of mass-producing machine screws, but I don't see why you can't describe it as a "technology".

Replies from: Toggle
comment by Toggle · 2014-10-27T19:21:19.210Z · LW(p) · GW(p)

Human cultures are material objects that operate within physical law like anything else- so I agree that there's no obvious reason to think that the domain is intractable. Given a long enough lever and a place to stand, you could run the necessary experiments and make some real progress. But a problem that can be solved in principle is not the same thing as a problem that has already been mastered- let alone mastered and then lost again.

One of the consequences of the more traditional sorts of technology is that it is a force towards consensus. There is no reasonable person who disagrees about the function of transistors or the narrow domains of physics on which transistor designs depend; once you use a few billion of the things reliably, it's hard to dispute their basic functionality. But to my knowledge, there was never any historical period in which consensus about the mechanisms of culture appeared, from which we might have fallen ignominiously. Hobbes and Machiavelli still haven't convinced everybody; Plato and Aristotle have been polarizing people about the nature of human society for millennia. Proponents of one culture or another never really had an elaborate set of assumptions that they could share with their rivals.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T19:34:04.003Z · LW(p) · GW(p)

Let me point out that you continue to argue against ZankerH's position that the social technology has regressed. That is not my position. My objection was to your claim that the whole concept of social technology is nonsense and that the word "technology" in this context is misleading. I said that social technology certainly exists and is usually called politics -- but I never said anything about regression or past golden ages.

comment by lmm · 2014-10-27T19:49:41.156Z · LW(p) · GW(p)
  • Arguing on the internet is much like a drug, and bad for you
  • Progress is real
  • Some people are worth more than others
    • You can correlate this with membership in most groups you care to name
  • Solipsism is true
Replies from: NancyLebovitz, Evan_Gaensbauer
comment by NancyLebovitz · 2014-10-27T20:35:51.685Z · LW(p) · GW(p)

Some people are worth more than others
Solipsism is true

Are these consistent with each other? Should it at least be "Some "people" are worth more than others"?

Replies from: lmm
comment by lmm · 2014-10-27T22:37:03.425Z · LW(p) · GW(p)

Words are just labels for empirical clusters. I'm not going to scare-quote "people" when I'm using it with its usual referent from normal conversation.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2014-10-30T06:59:54.528Z · LW(p) · GW(p)

What do you mean by solipsism?

Replies from: lmm
comment by lmm · 2014-10-30T13:04:30.879Z · LW(p) · GW(p)

My own existence is more real than this universe. Humans and our objective reality are map, not territory.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2014-10-31T07:00:02.906Z · LW(p) · GW(p)

What does it mean for one thing to be more real than another thing?

Also, when you say something is "map not territory", what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.

Replies from: lmm
comment by lmm · 2014-10-31T19:36:25.013Z · LW(p) · GW(p)

Maybe "more fundamental" is clearer. In the same way that friction is less real than electromagnetism.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2014-11-01T01:31:04.193Z · LW(p) · GW(p)

More fundamental, in what sense? e.g. do you consider yourself to be the cause of other people?

Replies from: lmm
comment by lmm · 2014-11-01T16:56:07.067Z · LW(p) · GW(p)

e.g. do you consider yourself to be the cause of other people?

To the extent that there is a cause, yes. Other people are a surface phenomenon.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2014-11-02T06:04:22.019Z · LW(p) · GW(p)

What do you mean by surface? Do you mean people exist as your perceptions but not otherwise? And is there anything 'beneath' this 'surface', whatever it is?

comment by Evan_Gaensbauer · 2014-10-28T07:17:56.274Z · LW(p) · GW(p)

Progress is real

What do you mean by 'progress'? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc.

What's interesting is that there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn't been much political or social progress, and that moral progress doesn't exist. So, if that's the sort of progress you mean, and you also believe that you're right about this when most others aren't, then this thread contains some claims that contradict each other.

Alas, I agree with you that arguing on the Internet is bad, so I'm not encouraging you to debate ZankerH. I'm just noting something I find interesting.

comment by James_Miller · 2014-10-27T21:56:53.869Z · LW(p) · GW(p)

I've signed up for cryonics, invest in stocks through index funds, and recognize that the Fermi paradox means mankind is probably doomed.

comment by Ixiel · 2014-10-27T23:22:25.117Z · LW(p) · GW(p)

Inequality is a good thing, to a point.

I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world, with the rest of us owning nothing, would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don't know exactly where the optimal level is, but it is closer to the first situation than the second, even if assigned by lottery.

I'm treating this as basically another contrarian views thread without the voting rules. And, full disclosure, I'm too biased for anybody to take my word for it, but I'd enjoy reading counterarguments.

Replies from: Viliam_Bur, Nate_Gabriel, lmm
comment by Viliam_Bur · 2014-10-28T00:48:11.244Z · LW(p) · GW(p)

My intuition would be that inequality per se is not a problem; it only becomes a problem when it allows abuse. But that's not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).

comment by Nate_Gabriel · 2014-10-27T23:37:29.821Z · LW(p) · GW(p)

Do you think we currently need more inequality, or less?

Replies from: Ixiel
comment by Ixiel · 2014-10-28T00:33:02.951Z · LW(p) · GW(p)

In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group (benefiting the bottom without costing the top would decrease inequality but would still be good); I just think there should be a smaller middle class.

I don't know enough about global issues to comment on them.

comment by lmm · 2014-10-28T19:13:14.219Z · LW(p) · GW(p)

If we're stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It's only when dealing with groups larger than our Dunbar number that we start to get confused.

Replies from: Ixiel
comment by Ixiel · 2014-10-29T11:35:14.383Z · LW(p) · GW(p)

Assuming that cupcakes are tradable, that seems intuitively false to me. Is it just your intuition, or is there also a reason? Not denying intuitions' value; they are just not as easy to explain to one who does not share them.

Replies from: lmm
comment by lmm · 2014-10-29T23:04:37.961Z · LW(p) · GW(p)

If cupcakes are tradeable for brownies then I'd distribute both evenly to start and allow people to trade at prices that seemed fair to them, but I assume that's not what you're talking about. And yeah, it's primarily an intuition, and one that I'm genuinely quite surprised to find isn't universal, but I'd probably try to justify it in terms of diminishing returns, that two people with 3 cupcakes each have a higher overall happiness than one person with 2 and one with 4.
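
To put a toy number on the diminishing-returns intuition: here's a sketch assuming (arbitrarily) that utility is logarithmic in cupcakes -- the log form is my assumption, not anything inherent to cupcakes.

    from math import log

    def utility(cupcakes):
        # A concave utility: each extra cupcake is worth less than the last
        return log(1 + cupcakes)

    print(utility(3) + utility(3))  # equal split:   ~2.77
    print(utility(2) + utility(4))  # unequal split: ~2.71

By Jensen's inequality the equal split wins under any strictly concave utility function, not just this one; only the exact numbers depend on the assumed log form.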

comment by gattsuru · 2014-10-27T17:56:20.158Z · LW(p) · GW(p)

General:

  • There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or cannot be true.

  • /Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we're desperately trying to Section Eight.

  • Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Political:

  • Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.

  • Privacy policies focused on preventing collection of identifiable data are ultimately doomed.

LessWrong-specific:

  • "Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing on a top level post that breaks into new levels of philosophy, or a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data-analysis it's disappointing.

  • The risks and costs of "Raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective... several hundred pages of reading later.

  • "Rationality" is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you're competing with RationalWiki, the universe is trying to give you a Hint.

  • The type of Atheism that is certain it will win, won't. There's a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness ... and then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, that religion remains on a pedestal for Ethics, no matter how much it's poked and prodded by the blasphemy of actual practice. Lest you find the answer.

  • ((I'm /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))

MIRI-specific:

  • MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous... and it's several orders of complexity more difficult to describe that danger, whereas even a minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.

  • MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

Replies from: Nornagest, polymathwannabe, Evan_Gaensbauer
comment by Nornagest · 2014-10-27T18:55:21.182Z · LW(p) · GW(p)

Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Isn't this basically Goodhart's law?

Replies from: gattsuru
comment by gattsuru · 2014-10-28T00:11:33.507Z · LW(p) · GW(p)

It's related. Goodhart's Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn't predict how that decoupling will occur. The common story of Goodhart's law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.

Sometimes this is a good thing: it's why, for one example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.

That said, while I'm convinced that's the pattern, it's not the only one or even the most obvious one; most people seem to have different formalizations, and I can't find the evidence to demonstrate mine.
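
To make that decoupling concrete, here's a minimal toy simulation of the pattern (my own sketch under deliberately simple assumptions, not an established formalization): each candidate's measured proxy is just its true value plus independent noise, and "optimization pressure" means picking the best-looking candidate from a bigger pool.

    import random

    random.seed(0)

    def candidate():
        # A candidate action: its true value, plus a proxy that merely correlates with it
        true_value = random.gauss(0, 1)
        proxy = true_value + random.gauss(0, 1)  # measurement noise decouples proxy from goal
        return true_value, proxy

    pool = [candidate() for _ in range(100_000)]

    for pressure in (10, 1_000, 100_000):
        # More pressure = selecting on the proxy from a larger pool of candidates
        true_value, proxy = max(pool[:pressure], key=lambda c: c[1])
        print(f"pressure {pressure:>7}: proxy score {proxy:5.2f}, true value {true_value:5.2f}")

Under these assumptions the selected candidate's expected true value is only about half its proxy score, so the harder the proxy is optimized, the wider the absolute gap between looking good and being good -- the heavier-machinery story in miniature.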

comment by polymathwannabe · 2014-10-27T18:35:10.342Z · LW(p) · GW(p)

There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.

Desirability issues aside, "believing X" and "knowing X is not true" cannot happen in the same head.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T18:39:11.239Z · LW(p) · GW(p)

"believing X" and "knowing X is not true" cannot happen in the same head

This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that "The test of a first rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function" -- a bon mot I find insightful.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-10-27T20:35:11.921Z · LW(p) · GW(p)

Example of that being useful?

Replies from: gattsuru, Lumifer
comment by gattsuru · 2014-10-27T22:10:44.145Z · LW(p) · GW(p)

(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)

Having an internal locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There's some evidence that this link is causative for at least some characteristics. It's not a completely unblemished good characteristic -- it correlates with lower compliance with medical orders, and probably isn't good for some anxiety disorders in extreme cases -- but it seems more helpful than not.

It's also almost certainly a lie. Indeed, it's obvious that such a thing can't exist under any useful model of reality. There are mountains of evidence for both the nature and the nurture sides of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there's a whole lot more universe that isn't you than there is you to start with. On the upside, if your locus of control is external, at least it's not worth worrying about. You couldn't do much to change it, after all.

Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that's perhaps too easy an example. It's not the only one, though: useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner's dilemma.

It's possible (even plausible) that this represents a valley of rationality -- like the earlier example of Pascal's Wagers that hold decent Utilitarian tradeoffs underneath -- but I'm not sure it's falsifiable, and it's certainly not obvious right now.

Replies from: Evan_Gaensbauer, Vulture
comment by Evan_Gaensbauer · 2014-10-28T07:26:39.740Z · LW(p) · GW(p)

Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.

As an afflicted individual, I appreciate the content warning. I'm responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.

comment by Vulture · 2014-10-31T23:02:58.361Z · LW(p) · GW(p)

I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say "content warning"; "Basilisk" sounds culty, might point confused people towards dangerous or distressing ideas, and is a word we should probably not be using more than necessary around here, for the simple PR reason of not looking like idiots.

Replies from: gattsuru
comment by gattsuru · 2014-11-01T01:33:58.329Z · LW(p) · GW(p)

Yeah, other terminology is probably a better idea. I'd avoided "trigger" because it isn't likely to actually trigger anything, but there's no reason to use new terms when perfectly good existing ones are available. "Content warning" isn't quite right, but it's close enough, and enough people are unaware of the original meaning, that it's probably preferable to use.

comment by Lumifer · 2014-10-27T20:47:01.962Z · LW(p) · GW(p)

Mostly in the analysis of complex phenomena with multiple incompatible (or barely compatible) frameworks for looking at them.

A photon is a wave.
A photon is a particle.

Love is temporary insanity.
Love is the most beautiful feeling you can have.

Etc., etc.

Replies from: RowanE
comment by RowanE · 2014-10-27T22:18:06.040Z · LW(p) · GW(p)

It's possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true - a photon is actually neither.

Truth is not beauty, so there's no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.

comment by Evan_Gaensbauer · 2014-10-28T07:24:37.732Z · LW(p) · GW(p)

MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

I agree, and it's something I could, maybe should, help with instead of just complaining about. What's stopping you from doing this? If you knew someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn't work, then what would be stopping us?

Replies from: gattsuru, Viliam_Bur
comment by gattsuru · 2014-10-29T17:15:31.799Z · LW(p) · GW(p)

What's stopping you from doing this? If you knew someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn't work, then what would be stopping us?

In organized form, I've joined the Youtopia page, and the current efforts appear to be either busywork or best completed by a native speaker of a different language; there's no obvious organization regarding generalized goals, and no news updates at all. I'm not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn't promote the Youtopia group that seriously, because MIRI doesn't have any current long-term projects that can be easily presented to volunteers, or for some other reason.

For individual-oriented work, I'm not sure what to do, and I'm not confident I'm the best person to do it. There are also three separate issues, with no obvious interrelation between them. Improving the Sequences and the accessibility of the Sequences is the most immediate and obvious thing, and I can think of a couple different ways to go about this:

  • The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn't much more comprehensible than just linking to the Sequences page on the Wiki, in some cases may be less useful, and most of the other projects seem better-designed than anything I could offer.

  • Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don't think it's a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren't really well-formatted for a long-term version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be "shake up the local textbook!") I have started working on a dependency web, but this effort doesn't seem to produce marginal benefits until large sections are completed.

  • The Sequences themselves are written as short bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, components that were good-enough to start with now have clearer explanations... that have circular redundancies. Writing bridge pieces to cover these attributes, or writing alternative descriptions for the more insider-centric Sequences, works within existing structures, and providing benefit at fairly small intervals. This requires fairly deep understanding of the Sequences, and does not appear to be a low-hanging fruit. (And again, not necessarily Wise for my first Discussion or Main post to be "shake up the local textbook!")

But this is separate from MIRI's ability to work with insiders and only marginally associated with its ability to work with outsiders. There are folk with very significant comparative advantages on these matters (i.e., anyone inside MIRI, anyone in California, most people who accept their axioms), and while outsiders have managed to have major impact despite that, the notable case was LukeProg picking the low-hanging fruit of basic nonprofit organization, which is a pretty high bar to match.

There are some possibilities -- translating prominent posts to remove excessive jargon or wordiness (or even Upgoer Fiving them), working on some reputation problems -- but none of these seem to have obvious solutions, and wrong efforts could even have negative impact. See, for example, a lot of coverage in more mainstream web media. I've also got a significant anti-academic streak, so it's a little hard for me to understand the specific concern that Scott Alexander/su3su2u1 were raising, which may complicate matters further.

comment by Viliam_Bur · 2014-10-28T08:29:07.873Z · LW(p) · GW(p)

over six-to-nine months to get the Sequences eBook proofread

This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

Is it because people don't volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?

Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet and read it, marking any errors they find on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Replies from: kalium, lmm, gattsuru, Evan_Gaensbauer
comment by kalium · 2014-10-30T05:42:50.464Z · LW(p) · GW(p)

I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers' comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself.

There's also the fact that many errors are only such because they're inconsistent with the overall style. It's presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.

comment by lmm · 2014-10-28T19:15:06.521Z · LW(p) · GW(p)

I'm just reading LW for fun and unwilling to do any real work to help, FWIW.

comment by gattsuru · 2014-10-28T14:56:40.242Z · LW(p) · GW(p)

How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

It's the 'norm-palatable' part more than the proofreading aspect, unfortunately, and I'm not sure that can be readily made volunteer work.

As far as I can tell, the proofreading part began in late 2013, and involved over two thousand pages of content proofread through Youtopia. Currently the only Sequence-related volunteer work on the Youtopia site involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess, probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I've been able to tell, they're looking for a release at the end of the year, which strongly suggests that they've finished the proofreading aspect.

That said, I may have outdated information: the Sequences eBook has been renamed several times in progress for a variety of good reasons, I'm not sure Youtopia is the current place where most of this is going on, and AlexVermeer may or may not be lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don't seem much stronger from a reading-order perspective.

In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format -- where links are less powerful tools, where a large portion of viewer devices support only grayscale, and where certain media presentation formats aren't possible. At least from what I've seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.

Less charitably, while trying to find this information I've found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that's the same project or if it's a different one that failed, or if it's a different one that succeeded and I just can't find the actual eBook result.

comment by Evan_Gaensbauer · 2014-10-28T09:42:57.938Z · LW(p) · GW(p)

Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet and read it, marking any errors they find on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Thanks for the suggestion. I'll plan some meetups around this. Not the whole thing, mind you. I'll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post and provide feedback on it or whatever.

comment by pianoforte611 · 2014-10-28T00:36:53.373Z · LW(p) · GW(p)

Diet and exercise generally do not cause substantial long term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won't lose another 7% after another 5 years.

It might be instrumentally useful though for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.

Replies from: Lumifer, ChristianKl, RomeoStevens
comment by Lumifer · 2014-10-29T19:52:51.901Z · LW(p) · GW(p)

Diet and exercise generally do not cause substantial long term weight loss

There is a pretty serious selection bias in that study.

I know some people who lost a noticeable amount of weight and kept it off. These people did NOT go to any structured programs. They just did it themselves.

I suspect that those who are capable of losing weight (and keeping it off) by themselves just do it and do not show up in the statistics of the programs analyzed in the meta-study linked to. These structured programs select for people who have difficulty in maintaining their weight, and so are not representative of the general population.

comment by ChristianKl · 2014-10-29T19:21:37.577Z · LW(p) · GW(p)

Diet and exercise generally do not cause substantial long term weight loss.

"Healthy diet" and dieting are often two different things.

Healthy diet might mean increasing the amount of vegetables in your diet. That's simply good.

Reducing your calorie consumption for a few months and then increasing it again -- the pattern commonly called the yo-yo effect -- on the other hand, is not healthy.

comment by RomeoStevens · 2014-10-29T19:06:48.210Z · LW(p) · GW(p)

Why is this surprising? You give someone a major context switch, put them in a structured environment where experts are telling them what to do and doing the hard parts for them (calculating caloric needs, setting up diet and exercise plans), they lose weight. You send them back to their normal lives and they regain the weight. These claims are always based upon acute weight loss programs. Actual habit changes are rare and harder to study. I would expect CBT to be an actually effective acute intervention rather than acute diet and exercise.

Replies from: pianoforte611
comment by pianoforte611 · 2014-10-29T19:39:18.964Z · LW(p) · GW(p)

I hadn't thought of CBT. It does work in a very loose sense of the term, although I wouldn't call weight loss of 4 kg that plateaus after a few months much of a success. I maintain that no non-surgical intervention (that I know of) results in significant long term weight loss. I would be very excited to hear about one that does.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-10-29T21:02:29.634Z · LW(p) · GW(p)

I would bet that there are no one-time interventions that don't have a regression to pre-treatment levels (except surgery).

comment by summerstay · 2014-10-27T13:52:51.786Z · LW(p) · GW(p)

It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

Replies from: hyporational, polymathwannabe, bbleeker
comment by hyporational · 2014-10-27T14:57:54.072Z · LW(p) · GW(p)

It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

I haven't gotten that impression. The p-zombie problem those other guys talk about is a bit different since human beings aren't made with a purpose in mind and you'd have to explain why evolution would lead to brains that only mimic conscious behavior. However if human beings make robots for some purpose it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about.

I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case looking for human-like traits in machines becomes a moot point.

Replies from: Capla
comment by Capla · 2014-10-27T18:18:57.144Z · LW(p) · GW(p)

I often wonder if my subconscious is actually conscious -- just a different consciousness than me.

Replies from: hyporational
comment by hyporational · 2014-10-28T09:40:49.145Z · LW(p) · GW(p)

I actually arrived at this supposedly old idea on my own when I was reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But then, thinking about it further, it didn't seem very consistent that only certain bigger neural networks confined by arbitrary anatomical boundaries would be conscious, so I proceeded a bit further from there.

comment by polymathwannabe · 2014-10-27T14:18:00.713Z · LW(p) · GW(p)

EY has declared that P-zombies are nonsense, but I've had trouble understanding his explanation. Is there any consensus on this?

Replies from: RowanE
comment by RowanE · 2014-10-27T14:43:51.568Z · LW(p) · GW(p)

Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn't doing so for reasons causally connected to the fact that they are conscious. To effectively say "I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn't conscious" is absurd.

comment by Sabiola (bbleeker) · 2014-10-27T18:42:29.516Z · LW(p) · GW(p)

How would you tell the difference? I act like I'm conscious too, how do you know I am?

comment by satt · 2014-10-29T02:50:38.828Z · LW(p) · GW(p)

Where are you right, while most others are wrong? Including people on LW!

A friend I was chatting to dropped a potential example in my lap yesterday. Intuitively, they don't find the idea of humanity being eliminated and replaced by AI necessarily horrifying or even bad. As far as they're concerned, it'd be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating?

(I don't agree with that position normatively but it seems impregnable intellectually.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-10-29T17:09:01.493Z · LW(p) · GW(p)

it'd be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating

Just to make sure, could this be because you assume that "intelligent life" will automatically be similar to humans in some other aspects?

Imagine a galaxy full of intelligent spiders, who only use their intelligence for travelling the space and destroying potentially competing species, but nothing else. A galaxy full of smart torturers who mostly spend their days keeping their prey alive while the acid dissolves the prey's body, so they can enjoy the delicious juice. Only some specialists among them also spend some time doing science and building space rockets. Only this, multiplied by infinity, forever (or as long as the laws of physics permit).

Replies from: satt
comment by satt · 2014-10-29T23:44:05.817Z · LW(p) · GW(p)

Just to make sure, could this be because you [sic] assume that "intelligent life" will automatically be similar to humans in some other aspects?

It could be because they assume that. More likely, I'd guess, they think that some forms of human-displacing intelligence (like your spacefaring smart torturers) would indeed be ghastly and/or utterly unrecognizable to humans — but others need not be.

comment by Daniel_Burfoot · 2014-10-27T13:59:11.526Z · LW(p) · GW(p)

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.

Replies from: Lumifer, DanielLC, ChristianKl, Capla, Daniel_Burfoot
comment by Lumifer · 2014-10-27T15:06:15.936Z · LW(p) · GW(p)

Interesting. This is in contrast to which societies? To where should altruists emigrate?

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2014-10-28T07:44:33.651Z · LW(p) · GW(p)

If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they're doing it mostly virtually, what they need the most is Internet access. If a lot of the people they'd be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn't seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries.

Note that if you perceive this as a bad idea, please share your thoughts, as I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized, so it's an idea worthy of detractors if criticism is indeed to be had.

Replies from: drethelin
comment by drethelin · 2014-11-02T19:39:21.645Z · LW(p) · GW(p)

Fundamentally, the biggest reason to have a hub, and the biggest barrier to creating a new one, is coordination. Existing hubs are valuable because a lot of the coordination work is done FOR you. People who are effective, smart, and wealthy are already sorted into living in places like NYC and SF for lots of other reasons. You don't have to directly convince or incentivize these people to live there for EA. This is very similar to why MIRI theoretically benefits from being in the Bay Area: they don't have to pay the insanely high cost of attracting people to their area at all, only the cost of attracting them to hang out with and work with MIRI as opposed to Google or whoever. I think it's highly unlikely that, even for the kind of people who are into EA, a new place could be made sufficiently attractive to potential EAs to climb over the mountains of non-coordinated reasons people have to live in existing hubs.

comment by DanielLC · 2014-10-27T19:19:57.822Z · LW(p) · GW(p)

If I scale back my career ambitions, I won't make as much money, which means that I can't donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?

comment by ChristianKl · 2014-10-27T15:09:11.949Z · LW(p) · GW(p)

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view.

Do you follow some kind of utilitarian framework where you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2014-10-27T16:29:03.837Z · LW(p) · GW(p)

Thanks for asking; here's an attempt at an answer. I'm going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that the extra 22% of GDP that the US taxes its citizens is simply squandered.

Let I be income, D be charitable donations, R be tax rate (0.4 vs 0.18), U be money usage in support of lifestyle, and T be taxes paid. Roughly U = I - T - D, and T = R(I - D). A bit of algebra produces the equation D = I - U/(1-R).

Consider a good programmer-altruist making I=150K. In the first model, the programmer decides she needs U=70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D=33.3K and pay T=46.7K in taxes. In SG, she will donate D=64.6K and pay T=15.4K in taxes to achieve the same U.

In the second model, the altruist targets a donation level of D=60K, and adjusts U so she can meet the target. In the US, she pays T=36K in taxes and has a lifestyle of U=54K. In SG, she pays T=16K of taxes and lives on U=74K.

So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG.
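For anyone who wants to play with the numbers, here's a minimal sketch of the arithmetic above in Python. The function names are mine, and the model keeps the comment's simplifications: a flat tax rate, fully deductible donations, and no real-world complications like deduction caps or progressive brackets.

    # Sketch of the two donation models above. Assumes a flat tax rate and fully
    # deductible donations, i.e. T = R * (I - D) -- a simplification, not tax advice.

    def donation_given_lifestyle(income, lifestyle, tax_rate):
        """Model 1: fix lifestyle spending U; donate whatever is left."""
        donation = income - lifestyle / (1 - tax_rate)  # D = I - U / (1 - R)
        taxes = tax_rate * (income - donation)
        return donation, taxes

    def lifestyle_given_donation(income, donation, tax_rate):
        """Model 2: fix donation D; live on what remains after taxes."""
        taxes = tax_rate * (income - donation)
        lifestyle = income - taxes - donation  # U = I - T - D
        return lifestyle, taxes

    for country, rate in [("US", 0.40), ("SG", 0.18)]:
        d, t1 = donation_given_lifestyle(150_000, 70_000, rate)
        u, t2 = lifestyle_given_donation(150_000, 60_000, rate)
        print(f"{country}: model 1 -> D = {d:,.0f}, T = {t1:,.0f}; "
              f"model 2 -> U = {u:,.0f}, T = {t2:,.0f}")

Running it reproduces the figures above (US: D = 33,333 and T = 46,667 in model 1; SG: D = 64,634 and T = 15,366; and so on), so the $20K/year lifestyle gap falls directly out of the two tax rates.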

Most other developed countries have tax rates comparable to or higher than the US's, but it's more plausible that in those countries the money goes to things that actually help people.

Replies from: bramflakes, None, ChristianKl
comment by bramflakes · 2014-10-27T19:39:49.156Z · LW(p) · GW(p)

I'm going to compare the US to Singapore

this is the point where alarm bells should start ringing

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2014-10-27T21:55:09.057Z · LW(p) · GW(p)

The comparison is valid for the argument I'm trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.

comment by [deleted] · 2014-10-27T21:04:03.571Z · LW(p) · GW(p)

Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.

This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances. Just to name one, Singapore is tiny. Things are a lot cheaper when you're small. Small countries are sustainable because international trade means you don't have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic.

Now, I'm not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I'm just saying that your analysis is far too simple to be at all useful except perhaps by accident.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-10-28T01:25:52.288Z · LW(p) · GW(p)

Things are a lot cheaper when you're small.

Things are a lot cheaper when you're large. It's called "economy of scale".

Replies from: None
comment by [deleted] · 2014-10-28T13:03:47.575Z · LW(p) · GW(p)

Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn't do that.

comment by ChristianKl · 2014-10-27T17:17:18.529Z · LW(p) · GW(p)

Consider a good programmer-altruist making I=150K

I think, given the same skill level, the programmer-altruist making 150K while living in Silicon Valley might very well make 20K less living in Germany, Japan, or Singapore.

Replies from: Nornagest
comment by Nornagest · 2014-10-27T21:32:04.548Z · LW(p) · GW(p)

I don't know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you're a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.

comment by Capla · 2014-10-27T18:11:37.062Z · LW(p) · GW(p)

I'm not sure what you mean. Can you elaborate, with the other available options perhaps? What should I do instead?

To be more specific, what's morally problematic about wanting to be a more successful writer or researcher or therapist?

Replies from: Lumifer
comment by Lumifer · 2014-10-27T18:23:05.740Z · LW(p) · GW(p)

what's morally problematic about wanting to be a more successful writer or researcher or therapist?

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

"The simple step of a courageous individual is not to take part in the lie." -- Alexander Solzhenitsyn

Replies from: faul_sname, Daniel_Burfoot
comment by faul_sname · 2014-10-27T19:28:57.523Z · LW(p) · GW(p)

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

...yes? I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, "the lie" of Nazi Germany was not the mere existence of the society, it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by reasonable definitions.

ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn't optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.

Replies from: Lumifer, Nornagest
comment by Lumifer · 2014-10-27T19:43:08.548Z · LW(p) · GW(p)

I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else

The key word is "successful".

To become a successful romance writer in Nazi Germany would probably require you pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan?

You can't become successful in a dirty society while staying spotlessly clean.

Replies from: faul_sname, NancyLebovitz
comment by faul_sname · 2014-10-27T19:48:47.028Z · LW(p) · GW(p)

So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T20:13:59.260Z · LW(p) · GW(p)

Who said my goal was to stay spotlessly clean?

The question was whether "being a writer in Nazi Germany would be any worse than being a writer anywhere else".

If you would be happy to wallow in mud, be my guest.

The question of how much morality one can maintain while being successful in an oppressive society is an old and very complex one. Ask the Russian intelligentsia for details :-/

comment by NancyLebovitz · 2014-10-27T20:32:20.198Z · LW(p) · GW(p)

Lack of representation isn't the worst thing in the world.

If you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans, with no nasty stereotypes of non-Aryans, I don't think it's especially awful.

Replies from: Douglas_Knight, Azathoth123
comment by Douglas_Knight · 2014-10-28T22:32:20.823Z · LW(p) · GW(p)

did [Nazi Germany] have romance novels?

What a great question! I went to Wikipedia, which paraphrased a great quote from the NYT:

Germans love erotic romance...The company publishes German writers under American pseudonyms "because you can't sell romance here with an author with a German name"

which suggests that they are a recent development. Maybe there was a huge market for Georgette Heyer, but little production in Germany.

One thing that is great about Wikipedia is the links to corresponding articles in other languages. "Romance Novel" in English links to an article entitled "Love- and Family-Novels," which suggests that the genres were different, at least at some point in time. That article mentions Hedwig Courths-Mahler as a prolific author who was a supporter of the SS and, I think, registered for censorship. But she rejected the specific censorship, so she published nothing after 1935 and her old books gradually fell out of print. I'm not sure she really was a romance author, though, because of the discrepancy between the genres.

comment by Azathoth123 · 2014-10-30T04:58:20.962Z · LW(p) · GW(p)

What do your lovers find attractive about each other? It better be their Aryan traits.

comment by Nornagest · 2014-10-27T20:44:22.560Z · LW(p) · GW(p)

I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else.

Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we're calling Nazi Germany.

Replies from: RowanE
comment by RowanE · 2014-10-27T22:21:00.616Z · LW(p) · GW(p)

Considering the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking Allied bombing raids into account amounts to fighting the hypothetical.

Replies from: Nornagest
comment by Nornagest · 2014-10-27T22:26:49.296Z · LW(p) · GW(p)

Is it? I was mainly joking -- but there's an underlying point, and that's that economic and political instability tends to correlate with ethical failures. This isn't always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you're dealing with issues serious enough to justify being a little unethical, or that someone's getting correspondingly hacked off at you for them, or both. Either way there are consequences.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-10-28T19:16:58.784Z · LW(p) · GW(p)

It's a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you're hurting people who can fight back.

comment by Daniel_Burfoot · 2014-10-27T19:00:52.495Z · LW(p) · GW(p)

I'm not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T19:14:03.201Z · LW(p) · GW(p)

I'm not sure I want to make blanket moral condemnations.

It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies obviously do not extend to each individual, and the difference between "condemning the system" and "condemning the society" doesn't look all that big.

comment by Daniel_Burfoot · 2014-10-27T15:20:55.997Z · LW(p) · GW(p)

I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score.

(This is supposed to be a response to Lumifer's question below).

Replies from: Lumifer, DanielFilan
comment by Lumifer · 2014-10-27T15:34:32.290Z · LW(p) · GW(p)

would suggest ANZAC, Germany, Japan, or Singapore. ... Scandinavia is also good.

That's a very curious list, notable for its absences as well as its inclusions. I am a bit stumped, for I cannot figure out by which criteria it was constructed. Would you care to elaborate on why these countries look to you like the most ethical on the planet?

Replies from: Daniel_Burfoot, Metus
comment by Daniel_Burfoot · 2014-10-27T22:05:33.161Z · LW(p) · GW(p)

I don't claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they're ethically better than the US.

Replies from: Lumifer
comment by Lumifer · 2014-10-28T15:02:36.624Z · LW(p) · GW(p)

Hmm... Is any Western European country ethically worse than the USA from your point of view? Would Canada make the list? Does any poor country qualify?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2014-10-28T15:15:29.588Z · LW(p) · GW(p)

In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay, I wouldn't want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list, I thought ANZAC stood for Australia, New Zealand And Canada.

comment by Metus · 2014-10-27T21:46:02.011Z · LW(p) · GW(p)

Modern countries with developed economies that lack a military force involved in and/or capable of military intervention outside their own territory. Maybe his grievance is with the US military, so I just went with that.

Replies from: Azathoth123
comment by Azathoth123 · 2014-10-28T03:54:16.079Z · LW(p) · GW(p)

Which is to say they engage in a lot of free riding on the US military.

comment by DanielFilan · 2014-10-27T22:50:13.588Z · LW(p) · GW(p)

For reference, ANZAC stands for the "Australia and New Zealand Army Corps" that fought in WWI. If you mean "Australia and New Zealand", then I don't think there's a shorter way of saying that than just listing the two countries.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-10-28T22:03:55.863Z · LW(p) · GW(p)

"the Antipodes"

comment by ChristianKl · 2014-10-27T15:33:20.551Z · LW(p) · GW(p)

The importance of somatics is currently likely the most significant.

Replies from: RowanE
comment by RowanE · 2014-10-27T22:31:02.813Z · LW(p) · GW(p)

I don't know what this sentence means. At least one other person is similarly confused, since you've been downvoted - can you clarify?

comment by FiftyTwo · 2014-11-02T19:07:29.247Z · LW(p) · GW(p)

Can anyone recommend any good books/resources on dyspraxia?

Ideally suitable for adults with a reasonable background understanding of psychology. Most of the stuff I've been able to find has been aimed for teachers/parents.

comment by falenas108 · 2014-10-28T15:03:09.459Z · LW(p) · GW(p)

I keep finding the statistic that "one pint of donated blood can save up to 3 lives!" But I can't find the average number of lives saved per blood donation. Does anyone know, or is anyone able to find it?

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2014-10-28T16:53:11.912Z · LW(p) · GW(p)

What do you mean by "lives saved by donating blood" in the first place? Something like

    (number of people who would die without any blood donations) / (liters of blood donated)?

That's not a very useful number if you want to make personal decisions based on it. If our Western system needed more blood, raising the incentives for donations wouldn't be that hard.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-10-28T17:04:30.338Z · LW(p) · GW(p)

WHO prefers all blood donations to be unpaid:

"Regular, unpaid voluntary donors are the mainstay of a safe and sustainable blood supply because they are less likely to lie about their health status. Evidence indicates that they are also more likely to keep themselves healthy."

Replies from: ChristianKl
comment by Lumifer · 2014-10-28T15:11:08.718Z · LW(p) · GW(p)

I keep finding the statistic that "one pint of donated blood can save up to 3 lives!"

The expression "can save up to" should immediately trigger your bullshit detector. It's a reliable signal that the following number is meaningless.

comment by NancyLebovitz · 2014-10-28T13:45:31.899Z · LW(p) · GW(p)

I did a little research to find out whether there are free survey sites that offer "check all answers that apply" questions.

Super Simple Survey probably does, but goddamned if I'll deal with their website to make sure.

On the almost free side, Live Journal enables fairly flexible polls (including checkboxes) for paid accounts, and you can get a paid account for a month for $3. Live Journal is a social media site.

Replies from: Manfred
comment by Salemicus · 2014-10-28T10:47:30.786Z · LW(p) · GW(p)

It has been experimentally shown that certain primings and situations increase utilitarian reasoning; for instance, people are more willing to give the "utilitarian" answer to the trolley problem when dealing with strangers, rather than friends. Utilitarians like to claim that this is because people are able to put their biases aside and think more clearly in those situations. But my explanation has always been that it's because these setups are designed to maximise the psychological distance between the subject and the harm they're going to inflict - the more people are confronted with the potential consequences of their actions, the less likely they are to make the utilitarian mistake. And now, a new paper suggests that I was right all along! Abstract:

The hypothetical moral dilemma known as the trolley problem has become a methodological cornerstone in the psychological study of moral reasoning and yet, there remains considerable debate as to the meaning of utilitarian responding in these scenarios. It is unclear whether utilitarian responding results primarily from increased deliberative reasoning capacity or from decreased aversion to harming others. In order to clarify this question, we conducted two field studies to examine the effects of alcohol intoxication on utilitarian responding. Alcohol holds promise in clarifying the above debate because it impairs both social cognition (i.e., empathy) and higher-order executive functioning. Hence, the direction of the association between alcohol and utilitarian vs. non-utilitarian responding should inform the relative importance of both deliberative and social processing systems in influencing utilitarian preference. In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France, participants were presented with a moral dilemma assessing their willingness to sacrifice one life to save five others. Participants’ blood alcohol concentrations were found to positively correlate with utilitarian preferences (r = .31, p < .001) suggesting a stronger role for impaired social cognition than intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma. Implications for Greene’s dual-process model of moral reasoning are discussed.

However, given my low opinion of such experiments, perhaps I should be very careful about uncritically accepting evidence that supports my priors.

Replies from: lmm, NancyLebovitz, Lumifer
comment by lmm · 2014-10-28T19:24:39.296Z · LW(p) · GW(p)

I highly doubt the subjects were drunk enough to have trouble figuring out that 5 > 1. So one could equally offer an interpretation that e.g. drunk people answered honestly, while sober people wanted to signal that they were too caring to kill someone under any circumstances.

It's a fascinating result, but I don't think the interpretation is a slam dunk.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-10-30T01:11:12.378Z · LW(p) · GW(p)

I doubt this. I conjecture that more people lie and say they would be utilitarian than lie and say they would not be utilitarian. I hope that I would do the utilitarian thing, but I am not sure that I actually would be able to get myself to do it. (Maybe I would be more likely to actually do it if I were drunk)

Replies from: lmm
comment by lmm · 2014-10-30T13:07:24.819Z · LW(p) · GW(p)

On LW, sure, being utilitarian is the thing you want to signal here. Ordinary people in a bar? I highly doubt it. Being unwilling to kill is far, far more socially acceptable than the utilitarian answer.

comment by NancyLebovitz · 2014-10-28T13:31:40.552Z · LW(p) · GW(p)

I've been wondering whether utilitarianism undervalues people's loyalty to their own relationships and social networks.

comment by Lumifer · 2014-10-28T15:27:36.898Z · LW(p) · GW(p)

In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France

Field studies are hard work :-D

Replies from: ChristianKl
comment by ChristianKl · 2014-10-28T16:44:35.622Z · LW(p) · GW(p)

They needed the native habitat for the alcohol consumption.

comment by Evan_Gaensbauer · 2014-10-28T00:37:28.898Z · LW(p) · GW(p)

The following model is my new hypothesis for generating better OKCupid profiles for myself while remaining honest.

  • I brainstorm what I want to include in my profile in a positive way without lying. This may include goal-factoring on what honest signals I'm trying to send. Then, I see how what I brainstormed fits into the different prompts on OKCupid profiles.

  • I generate multiple clause-like chunks for each item/object/quality of myself I'm trying to express in my profile. I then A/B test the options for each item across a cross-section of individuals similar to the ones I would want to attract on OKCupid (see the sketch below for how I might score one such comparison). This may include randomly assigning participants to conditions to some extent. I would still need to think of metrics or ratings for this to best suit my goals.

  • Construct complete paragraphs for the various sections of my profile using whichever options were the most successful.

Caveats: I would want enough experimental control to ensure the test participants were people I could trust to respond honestly, and without trolling me. However, this would decrease random selection. How much should I care about random selection, and thus external validity, in this case?
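Here's a minimal sketch of how I imagine scoring one A/B comparison, assuming I collect 1-5 ratings from test readers. The variant ratings below are made up for illustration, and a permutation test is just one way to do it:

    import random
    import statistics

    # Hypothetical 1-5 ratings that test readers gave two candidate phrasings
    # of the same profile item; both lists are made-up illustrative data.
    variant_a = [4, 3, 5, 4, 4, 2, 5, 3]
    variant_b = [3, 2, 4, 3, 3, 2, 4, 2]

    observed = statistics.mean(variant_a) - statistics.mean(variant_b)

    # Permutation test: how often does randomly relabeling the pooled ratings
    # produce a difference in means at least as large as the observed one?
    pooled = variant_a + variant_b
    n_extreme, n_trials = 0, 10_000
    for _ in range(n_trials):
        random.shuffle(pooled)
        diff = (statistics.mean(pooled[:len(variant_a)])
                - statistics.mean(pooled[len(variant_a):]))
        if diff >= observed:
            n_extreme += 1

    print(f"observed difference: {observed:.2f}; "
          f"one-sided p ~= {n_extreme / n_trials:.3f}")

With only a handful of trusted raters any such result would be weak evidence at best, which is part of why I'm asking how much to care about random selection.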

Otherwise, what do you think of the model? What's wrong with it? If it's not completely awful, I'll play-test it with an OKCupid profile just for the value of information, and see if we can't learn something.

Replies from: MrMind
comment by MrMind · 2014-10-28T15:02:03.513Z · LW(p) · GW(p)

Just test it and report back the result :) That will teach you and us many things we can't see right now.

comment by CAE_Jones · 2014-11-03T04:57:35.126Z · LW(p) · GW(p)

Two years ago, I wrote this cringe-worthy thing.

I can't tell if things have gotten worse, or if they've stayed the same. I lean toward worse.

4 years ago, I asked a psychiatrist about my soul-crushing Akrasia issues. He prescribed Focalin, at 5mg/day for the first week, then 10mg/day for the second. The first week saw improvements--I didn't feel like I had much choice over what I wound up focusing on, but I actually finished things--the second week did not work at all, and a pile of unpleasant things all hit at once on one of those nights. So we switched to Prozac, nothing came of it, and here we are.

For reasons, skills that are basic and trainable for most people are down-right mutant to me. (It's almost as though any problems that aren't sheltered-nerd problems amplified by blindness are blindness problems amplified by sheltered nerdiness. There exist blind nerds without this suite of doom-problems, but on investigation nothing seems to generalize.)

I didn't mention psychiatric problems in the 2012 post because all the psych professionals I've spoken to don't seem to believe I have them. But I'm pretty sure I have symptoms of ADHD-PI. And Schizoid Personality. And Avoidant Personality. And Social Anxiety. And Atypical Depression but I'm pretty sure that's a response to everything else. Whether any of these is actually the case is unknown to me, and all of the above mean that finding a professional to ask (then actually telling them everything) is stupidly difficult. Then they need to be competent.

I really have no idea what the best starting place would be. I'm trying to find another psychiatrist (though I dunno if I can actually communicate the problems), I've exhausted the less dramatic training facility and the return-to-college option, and am considering one of the National Federation of the Blind training centers (as a general rule, everyone who is not a member finds the NFB offputting, but it's pretty much their programs or nothing, if the internet is to be believed).

TL;DR: My life sucks and if I can't fix it soon, I will start complaining that decent wireheading is not yet available.

Replies from: ChristianKl, Strangeattractor
comment by ChristianKl · 2014-11-05T13:03:23.903Z · LW(p) · GW(p)

Instead of a psychiatrist, maybe a psychologist would be the better option?

comment by Strangeattractor · 2014-11-05T15:31:05.396Z · LW(p) · GW(p)

Have you considered the idea of learning echolocation? Here is the beginning of a series of blog posts from blind programmer Austin Seraphim about how he learned to use echolocation to navigate the environment and get a spatial sense of things without touching them. He learned it from a teacher from World Access for the Blind.

It came to mind because you mentioned a National Federation of the Blind training center, and I'm not sure what you would learn there, but I'm pretty sure they don't offer echolocation training.

comment by khafra · 2014-10-30T18:29:33.003Z · LW(p) · GW(p)

Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.

comment by [deleted] · 2014-10-30T19:13:47.228Z · LW(p) · GW(p)

This article discusses a paper that seems interesting from the perspective of effective altruism, and of how people's behavior changes based on where they think their money might be going:

http://www.vox.com/2014/10/30/7131345/overhead-free-donations-charity-fundraising-seed-matching-gneezy

If you want a link directly to the paper, that link is both in the article and reposted here:

http://www.sciencemag.org/content/346/6209/632

Short summary: when considering donations, people in the study donated more when they knew their donation was not going to overhead.

comment by Capla · 2014-10-28T00:02:10.846Z · LW(p) · GW(p)

It had never occurred to me that the term "applause light" could be taken so literally.

Replies from: RomeoStevens, Evan_Gaensbauer
comment by RomeoStevens · 2014-10-29T19:10:44.774Z · LW(p) · GW(p)

Politician, noun: a person who cheers in-group values professionally.

comment by Evan_Gaensbauer · 2014-10-28T07:58:30.893Z · LW(p) · GW(p)

My friend recently attended an event at which Ray Kurzweil and an urban planner named Richard Florida were speaking. He didn't like Richard Florida as a speaker, citing how Richard Florida 'sounded just like a politician', and was speaking 'only in applause lights'. I noted it was funny to use 'applause light' in that context, as an auditorium where the speaker looks over a crowd while bathed in light, saying things specifically to garner applause, is just about the most literal interpretation of 'applause light' I could think of.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-10-28T23:29:43.160Z · LW(p) · GW(p)

"Applause lights" is a metaphor based on a concrete thing that really exists

comment by Strangeattractor · 2014-10-27T18:39:38.243Z · LW(p) · GW(p)

After reading through the Quantum Physics sequence, I would like to know more about the assumptions and theories behind the idea that an amplitude distribution factorizes, or approximately factorizes. Where would be a good place to learn more about this? I would appreciate some recommendations for journal articles to read, or specific sections of specific books, or if there's another better way to learn this stuff, please let me know.

In the blog posts in the sequence, an analogy comes up a few times, saying that it doesn't make sense to distinguish between the two factors of 3 when multiplying 3 x 3 x 2 to get 18, and that similarly, the amplitude blobs in configuration space that can sometimes appear to be like particles are factors of...I'm not sure what. The wavefunction? A probability density function (but we're calling it amplitude instead of probabilities)? Something else? I didn't entirely follow that section, so I'm not sure how to look it up.

When I searched on Google Scholar for "quantum factorization" I got journal articles about how to use quantum computers to factor large numbers into primes. When I looked up "particle indistinguishability" I got papers about very small numbers of particles in a state of quantum entanglement. When I searched for "amplitude distribution factorizes" I got articles about tomography and mesons and keys for quantum cryptography.

I'm also confused about: what precisely is an amplitude distribution? Amplitude of what? Distributed over what? I can make some guesses, but how do I look it up?

I would also like to know: what needs to be true in order for this concept to be true? Does it depend on the many worlds interpretation of quantum mechanics, or would it hold true in the other interpretations? Does it require the wavefunction? Just how good is the analogy about the factors of 18, and where would the analogy break down? What do the equations look like that lead to these conclusions, and what are they called, so I can look them up? What assumptions are used to formulate the equations? What is the difference between factorizing exactly and approximately? Why does the idea of roughly factorizing come up at all, why isn't it all exact? How accurate would it be to describe a person as a factor of the wave function, and what does that mean? Is there a technical term for "blob of amplitude"?
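To make my confusion concrete, here is my best guess at what "factorizes" might mean in the simplest two-particle case. This is my own reconstruction of the standard textbook formalism, not something stated in the sequence, and part of my question is whether it is what the sequence intends. A joint amplitude distribution over two configuration variables would factorize exactly if it splits into independent one-particle factors:

    \[ \Psi(x_1, x_2) = \psi_A(x_1)\,\psi_B(x_2) \]

whereas an entangled state such as

    \[ \Psi(x_1, x_2) = \tfrac{1}{\sqrt{2}}\bigl[ \psi_A(x_1)\,\psi_B(x_2) + \psi_C(x_1)\,\psi_D(x_2) \bigr] \]

admits no such exact splitting, and "approximately factorizes" would then mean the other terms are negligible relative to the blob you're in. Is that the right reading?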

The Quantum Physics sequence is the best introduction to quantum mechanics that I've read, but it rarely states the assumptions it is using explicitly, or gives references for learning more about each topic.

Help?

Replies from: Manfred, DanielLC
comment by Manfred · 2014-10-27T22:52:40.899Z · LW(p) · GW(p)

Relevant wikipedia link. The keyword is something like "many-body wavefunction."

But seriously, if you're curious, try to find a textbook online, or a series of video lectures for an introductory course (you might either watch the whole course, or skip to what you want to learn and then try and figure out what the prerequisites are, then do the same thing for the prerequisites).

comment by DanielLC · 2014-10-27T20:47:12.819Z · LW(p) · GW(p)

I think the factorization is a reference to https://en.wikipedia.org/wiki/Creation_and_annihilation_operators from quantum field theory. I haven't learned quantum field theory though, so I can't comment much. From what I can gather, multiplying something by the creation operator gets you the same state but with an extra particle.
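In symbols, my understanding of the standard single-mode convention (the number basis, so this is textbook quantum mechanics rather than anything specific to the sequence):

    \[ a^\dagger \lvert n \rangle = \sqrt{n+1}\,\lvert n+1 \rangle, \qquad a \lvert n \rangle = \sqrt{n}\,\lvert n-1 \rangle \]

so applying the creation operator to an n-particle state yields the corresponding (n+1)-particle state.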

I can tell you that at the very minimum, assuming Copenhagen and the minimal amount of physics to allow entanglement to happen at all, whenever two of the same kind of particle are entangled, they have a 50% chance of swapping. If you use MWI, it's that I can find a universe with the same probability density in which those particles are swapped.

comment by Wes_W · 2014-10-29T16:47:22.338Z · LW(p) · GW(p)

I stumbled across an article about Amelia, a program that can supposedly perform low-level human jobs like call center operator. A brief search hasn't turned up anything particularly illuminating. Has this been discussed on LW before?

On the one hand, everything I read about her sounds sufficiently vague that I suspect it's hype (and possibly native advertising). Still, I'm curious about the underlying tech - is it some kind of substantial improvement over past attempts, or is she just Siri++ in the way that Eugene Goostman was a slightly better chatbot?

Replies from: Douglas_Knight, polymathwannabe
comment by Douglas_Knight · 2014-10-30T05:11:49.129Z · LW(p) · GW(p)

Probably Siri-- in the way that Eugene Goostman was a slightly worse chatbot.

comment by polymathwannabe · 2014-10-29T17:41:55.738Z · LW(p) · GW(p)

The manufacturer's website is merely illustrative.