Open thread, Nov. 17 - Nov. 23, 2014

post by MrMind · 2014-11-17T08:25:58.177Z · LW · GW · Legacy · 329 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

comment by sixes_and_sevens · 2014-11-17T13:35:27.103Z · LW(p) · GW(p)

Recently I have been thinking about imaginary expertise. It seems remarkably easy for human brains to conflate "I know more about this subject than most people" with "I know a lot about this subject". LessWrongers read widely over many areas, and as a result I think we are more vulnerable to doing this.

It's easy for a legitimate expert to spot imaginary expertise in action, but do principles exist to identify it, both in ourselves and others, if we ourselves aren't experts? Here are a few candidates for spotting imaginary expertise. I invite you to suggest your own.

Rules and Tips vs Principles
At some point, a complex idea from [topic] was distilled down into a simple piece of advice for neonates. One of those neonates took it as gospel, and told all their friends how this advice formed the fundamental basis of [topic]. Examples include "if someone touches their nose, they're lying" and "never end a sentence with a preposition".

If someone offers a rule like this, but can't articulate a principled basis for why it exists, I tend to assume they're an imaginary expert on the subject. If I can't offer a principled basis for any such rule I provide myself, I should probably go away and do some research.

Grandstanding over esoteric terminology
I've noticed that, when addressing a lay audience, experts in fields I'm familiar with rarely invoke esoteric terminology unless they have to. Imaginary experts, on the other hand, seem to throw around the most obscure terminology they know, often outside of a context where it makes sense.

I suspect being on the receiving end of this feels like Getting Eulered, and dishing it out feels like "I'm going to say something that makes you feel stupid".

Heterodoxy
I have observed that imaginary experts often buy into the crackpot narrative to some extent, whereby established experts in the field are all wrong, or misguided, or slaves to an intellectually-bankrupt paradigm. This conveniently insulates the imaginary expert from criticism over not having read important orthodox material on the subject: why should they waste their time reading such worthless material?

In others, this probably rings crackpot-bells. In oneself, this is presumably much more difficult to notice, and falls into the wider problem of figuring out which fields of inquiry have value. If we have strong views on an established field of study we've never directly engaged in, we should probably subject those views to scrutiny.

Replies from: Viliam_Bur, Gunnar_Zarncke, Gunnar_Zarncke, ChristianKl, arundelo, Artaxerxes, Gunnar_Zarncke, NancyLebovitz, ChristianKl, passive_fist
comment by Viliam_Bur · 2014-11-19T12:37:22.637Z · LW(p) · GW(p)

I agree with what you wrote. Having said this, let's go meta and see what happens when people use the "rules and tips" you have provided here.

  • A crackpot may explain their theory without using any scientific terminology, even where a scientist would be forced to use some. I have seen many people "disprove" the theory of relativity without using a single equation.

  • If there is a frequent myth in your field that most of the half-educated people believe, trying to disprove this myth will sound very similar to a crackpot narrative. Or if there was an important change in your field 20 years ago, and most people haven't heard about it yet, but many of them have read the older books written by experts, explaining the change will also sound like contradicting all experts.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-19T13:31:41.625Z · LW(p) · GW(p)

In response to your second point, I've found "field myths" to be quite processable by everyday folk when put in the right context. The term "medical myth" seems to be in common parlance, and I've occasionally likened such facts to people believing women have more ribs than men (i.e. something that lots of people have been told, and believe, but which is demonstrably false).

It does seem a bit hazardous to have "myths" as a readily-available category to throw ideas in, though. Such upstanding journalistic tropes as Ten Myths About [Controversial Political Subject] seem to teach people that any position for which they hold a remotely plausible counterargument is a "myth".

comment by Gunnar_Zarncke · 2014-11-17T23:22:15.654Z · LW(p) · GW(p)

Status of your Domain

There was a post some time ago, which I can't find right now, that talked about the danger of one field looking down upon other fields. The prototypical example is the hard sciences looking down on the soft sciences. The soft sciences are seen as less rigorous or less mathematically well founded, and thus if you belong to the higher domain you think yourself better qualified to judge the presumably less rigorous fields.

Replies from: Sarunas
comment by Sarunas · 2014-11-18T11:48:09.774Z · LW(p) · GW(p)

It was this post by Robin Hanson. However, I am not sure that "status" (which suggests having something like a one-dimensional order) is the right explanation here. For example, philosophers tend to say things about a lot of other fields and people from other fields tend to say a lot of things about philosophy. Therefore, unless one considers all those fields to be of equal status, one-dimensional order doesn't seem to be applicable as an explanation.

It seems to me that:

  • if field A has a lot of widely applicable methods that are abstract enough and/or have enough modelling capabilities so as to be able to model a wide variety of objects, including most/all objects that are the object of research of field B, then the experts of field A will often express their opinions about the object of research of field B, theories of field B, and the state of field B itself. In other words, if field A has a widely applicable powerful method, then experts of field A will try to apply this method to other fields as well. For example, physics has a lot of powerful methods that are applicable well beyond physics. Some ideas developed in philosophy are often abstract enough so as to be (or seem to be) applicable to a wide variety of things. Note, however, that in some cases these methods may fail to capture important features of the object of field B. On the other hand, experts of field B have incentives to claim that methods of field A are not applicable to their field, because otherwise field B is redundant. In response, they may start to focus on the aspects of their field that are harder to model using field A's tools, whether those aspects are important or not. Yet another interesting case is when there are two fields A and A_1 whose methods are capable of modelling certain aspects of the object of research of field B. Then experts of A may denounce the application of A_1's methods to the field B (and vice versa) as failing to capture important things and contradicting findings of field A.

  • if the object of research of field A is closely related to human experiences that are (very) common and/or concepts that are related to common sense concepts (e.g. cover the same subject), then people of other fields will often express their opinions about the object of field A, theories of field A and field A itself. For example, many concepts of philosophy, psychology (and maybe most/all social sciences) are directly related to human experiences and common sense concepts; therefore a layperson is more likely to speculate about the topics of those fields.

Well, of course there is the idea that the findings of two research fields A and B should not contradict each other. And there is this informal "purity" rank, i.e. if A is to the right of B, then experts of A will regard findings of B as wrong, whereas experts of B will regard theories of A as failing to capture important features of the object of research of their field. However, it doesn't seem to me that all fields can be neatly ordered this way. One reason is that there are at least two possible partial orders. One is the "reductionist" way, i.e. if the object of field B is made out of the stuff that is the object of field A, then field A is to the right of field B. Another way is to arrange fields according to their methods. If field B borrows a lot of methods from field A, but not vice versa (in other words, methods of A can be used to model the object of field B), then field B can be said to be to the left of A. In some cases, these two orders do not coincide. For example, the object of economics is made out of the object of psychology; however, it is my impression that most subfields of economics borrow their methods directly from mathematics (or physics), i.e. in the second partial ordering psychology is not between economics and mathematics. By the way, conflating these two orders might give rise to intuitions that are not necessarily grounded in reality. For example, mathematics (despite the fact that not all physicists' results are formalized yet) is probably to the right of physics when we order fields according to their methods. That gives us the intuition that mathematics is to the right of physics in the "reductionist" order as well, i.e. that the object of physics is made out of mathematical structures. Well, this might turn out to be true, but you should not proclaim this as casually as some people do.

comment by Gunnar_Zarncke · 2014-11-17T23:18:18.874Z · LW(p) · GW(p)

Some indications can be taken from Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields:

Low Hanging Fruit Heuristic

if a research area has reached a dead end and further progress is impossible except perhaps if some extraordinary path-breaking genius shows the way, or in an area that has never even had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this situation and decide it’s time for a career change.

Thus: you are likely to have imaginary expertise if all the low-hanging fruit in your field has been taken.

Ideological/venal Interest Heuristic

You should be doubtful about your expertise when

things under discussion are ideologically charged or a matter in which powerful interest groups have a stake.

Note that these concern your expertise even when you actually are an expert.

Replies from: Sarunas
comment by Sarunas · 2014-11-18T11:55:49.949Z · LW(p) · GW(p)

You should note that these cut both ways - given a field of research, they reduce (ceteris paribus) both the trustworthiness of a layperson's independent attempts to form an opinion about this field and the trustworthiness of the established experts in that field.

comment by ChristianKl · 2014-11-17T17:36:57.729Z · LW(p) · GW(p)

Making predictions about the subject matter and seeing whether those come true is one of the great ways to measure expertise.
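
A minimal sketch of how such a measurement could work, scoring a list of probability predictions with a Brier score (the example predictions are made up; lower scores are better, and 0.25 is what always saying "50%" earns):

```python
# Hypothetical track record: (probability assigned, whether it came true).
predictions = [(0.9, True), (0.7, False), (0.8, True), (0.6, True)]

# Brier score: mean squared distance between stated probability and outcome.
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.175 for the numbers above
```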

comment by arundelo · 2014-11-17T15:16:19.241Z · LW(p) · GW(p)

You probably mean neophyte not neonate.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-17T16:00:01.987Z · LW(p) · GW(p)

I blame World of Darkness circa 1998.

comment by Artaxerxes · 2014-11-17T14:35:54.885Z · LW(p) · GW(p)

So kind of like an intellectual chuunibyou.

If we have strong views on an established field of study we've never directly engaged in, we should probably subject those views to scrutiny.

Sounds like a pretty good idea to me. You've listed some symptoms; I guess if you catch yourself engaging in such behaviour, there are potential solutions?

Avoid heterodoxy by reading opposing viewpoints while steelmanning as much as possible.

To avoid grandstanding via vocab, keep things simple. Make sure you can play taboo with words, or explain things up-goer-five style.

For rules and tips, make sure you're clear with yourself on when your rules don't apply as well as when they're useful.

But really, it is a hard one to notice in yourself, and a hard one to fix without actually becoming an expert.

Replies from: sixes_and_sevens, Username
comment by sixes_and_sevens · 2014-11-17T16:05:07.750Z · LW(p) · GW(p)

I have an ongoing project to read an introductory textbook for every subject I claim to be interested in. This won't make me an expert in those subjects, but it should hopefully stop me saying things that cause actual experts to facepalm.

comment by Username · 2014-11-18T04:08:28.072Z · LW(p) · GW(p)

chuunibyou, heterodoxy, steelmanning, grandstanding via vocab

But really, it is a hard one to notice in yourself, and a hard one to fix without actually becoming an expert.

I present exhibit A.

(Meant lightly, but this really is a good example of phrasing that could be unnecessarily confusing to an average reader.)

Replies from: Artaxerxes
comment by Artaxerxes · 2014-11-18T04:21:00.230Z · LW(p) · GW(p)

I can see what you mean, but rephrasing what someone else has said is the opposite of using unnecessarily confusing phrasing, no? It's just good practice, and a big part of what the guide to words stuff was all about.

But you're right, I should probably put in a link for steelmanning, just in case.

comment by Gunnar_Zarncke · 2014-11-17T16:19:06.314Z · LW(p) · GW(p)

Great insight. I added it to my Anki deck. I think this could have been a Discussion post of its own (even at this length).

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-17T16:29:21.749Z · LW(p) · GW(p)

If I collect any more indicators of imaginary expertise, I may assemble them into a discussion post.

comment by NancyLebovitz · 2014-11-17T13:39:35.792Z · LW(p) · GW(p)

You've got a worthwhile project there, but I gather that in sufficiently complex fields (like engineering) there are a lot of rules and tips because there isn't a complete set of principles.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-17T13:52:42.550Z · LW(p) · GW(p)

Can you provide any examples? I have introductory exposure to engineering, but can't readily pinpoint the sort of rules or tips you're referring to.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-17T14:25:32.768Z · LW(p) · GW(p)

Page down to Assumptions, Predictions, and Simplifications.

I didn't actually have any examples-- the idea that engineering has methods which aren't firmly based in math or physics was just something I'd heard enough times to have some belief that it was true. The link has some examples.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-17T16:27:37.247Z · LW(p) · GW(p)

I'm not sure these are quite the same "rules and tips". There's still a rationale behind model assumptions that an expert could explain, even if that rationale is "this is essentially arbitrary, but in practice we find it works quite well".

comment by ChristianKl · 2014-11-17T21:59:04.920Z · LW(p) · GW(p)

At some point, a complex idea from [topic] was distilled down into a simple piece of advice for neonates. One of those neonates took it as gospel, and told all their friends how this advice formed the fundamental basis of [topic]. Examples include "if someone touches their nose, they're lying"

The reasoning for "if someone touches their nose, they're lying" is: If someone lies they feel anxiety. It's well known that anxiety can produce effects like someone getting red because of changes in blood circulation due to tension changes. Some tension changes can make a person want to touch the corresponding area.

Something along those lines is the motivation for that tip. The problem is that there are many reasons why someone touches their nose, and it's no good as a tell unless you've calibrated it for a particular person.

Of course you could say that the fact that I know the explanation suggests that I know something about the topic. I'm probably better than average at lie detection. On the other hand, I have not practiced that skill well enough to trust myself to draw strong conclusions.

On the other hand, in biology there are many cases where I have true beliefs where the justification isn't deeper than "the textbook says so". There is no underlying logic that can tell you what a neurotransmitter does in every case. Evolution is quite random. In many cases I could tell you which textbook or university lecture is responsible for me believing a certain fact about biology.

Facts backed by empirical research are better than facts backed by theoretical reasoning like the one in the nose touching example.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-17T22:34:25.577Z · LW(p) · GW(p)

I've heard a few justifications for the nose-touching example, and the one you provided is new to me.

The nose-touching example could alternatively have an empirical basis rather than a theoretical one. It came to mind because areas like body language, "soft skills" and the like are, in my experience, rife with people who've learned a list of such "facts".

comment by passive_fist · 2014-11-17T21:04:14.186Z · LW(p) · GW(p)

That's a good start, now take those ideas about imaginary expertise and use them to do some introspection... :)

Anyway, it's hard to clearly define what an 'imagined expert' is. All it means is that the person overestimates their knowledge of a subject, which affects ALL human beings, even bona-fide experts. Expertise in anything other than a very small slice of human knowledge is obviously impossible.

It's easy to spot lack of knowledge in another person but not so easy to spot it in yourself.

About 'crackpottery', I don't like labelling people as crackpots. Instead it's much better to talk about crackpot ideas. There is no distinction between 'crackpot' and 'wrong'. All 'crackpot' means is that the idea has very low likelihood of being true given our knowledge about the world; this is just the definition of 'wrong'. It is in human nature to be immediately suspicious of consensus; that's not the problem (in fact it's part of the reason why science exists in the first place). The problem is when people try to push alternative agendas by pushing wrong information as correct information. That's not a problem of imagined expertise; it's a problem of willful misguiding of others.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-19T07:22:14.424Z · LW(p) · GW(p)

I don't see sixes_and_sevens labeling people as crackpots:

I have observed that imaginary experts often buy into the crackpot narrative to some extent, whereby established experts in the field are all wrong, or misguided, or slaves to an intellectually-bankrupt paradigm.

(Emphasis added.)

In other words, it's not that someone has or lacks the crackpot label. Rather, there is a "crackpot narrative", a sort of failure mode of reasoning, which people can subscribe to (or repeat to themselves or others) to a greater or lesser extent.

The difference is significant. It's like the difference between saying "Joe is a biased person" and saying "Joe sure does seem to exhibit fundamental attribution error an awful lot of the time, doesn't he?"

comment by selylindi · 2014-11-17T22:56:56.566Z · LW(p) · GW(p)

On the "all arguments are soldiers" metaphorical battlefield, I often find myself in a repetition of a particular fight. One person whom I like, generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept the arguments. And in the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject the arguments. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.

And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconscious assignment. So the moment I notice being stuck in Ally-mode or Enemy-mode, it's possible to switch to the other. I don't seem to have a neutral mode. YMMV! I'd be interested in hearing whether it works the same way for other people or not.

For best understanding of a topic, I suspect it might help to read an argument twice, once in Ally-mode to find its strengths and once in Enemy-mode to find its weaknesses.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-19T12:41:16.532Z · LW(p) · GW(p)

Just wondering if it would make sense to consider everyone a Stupid Ally. That is, a good person who is just really really bad at understanding arguments. So the arguments they forward to you are worth examining, but must be examined carefully.

Replies from: pjeby
comment by pjeby · 2014-11-22T20:09:12.169Z · LW(p) · GW(p)

Just wondering if it would make sense to consider everyone a Stupid Ally. That is, a good person who is just really really bad at understanding arguments. So the arguments they forward to you are worth examining, but must be examined carefully.

This is generally how I read self-help books, especially the ones I have to hold my nose for while reading. (Because their logic stinks.) ;-)

Basically, I try to imagine Ignaz Semmelweis telling me his bullshit theory about handwashing, when I know there is no way that there could be such a powerful poison in corpses that a tiny invisible amount would be deadly. So I know that he is totally full of bullshit, at the same time that I must take at face value the observation that something different is happening in his clinic. So I try to screen out the theory, and instead look for:

  • What are they observing (as opposed to opining about the observations)
  • What concrete actions are they recommending, and
  • What effect of these actions are they predicting

This information can be quite useful once the insane theories are scrubbed off the top.

While it's easy to laugh at doctors for ignoring Semmelweis now, there was no scientific theory that could account for his observations until well after his death. A similar phenomenon exists in self-help, where scientists are only now producing research that validates self-help ideas introduced in the '70s and '80s... usually under different names, and with different theories. Practice has a way of preceding theory, because useful practices can be found by accident, but good theories take hard work.

So, "Stupid Ally" makes sense in the same way: even an idiot can be right by accident; they're just unlikely to be right about why they're right!

comment by [deleted] · 2014-11-18T01:39:13.730Z · LW(p) · GW(p)

I figured this would be broadly of interest to this site:

http://www.pnas.org/content/111/45/16106.abstract

"Chlorovirus ATCV-1 is part of the human oropharyngeal virome and is associated with changes in cognitive functions in humans and mice "

A metagenomics study of the throat virome of a bunch of people in Baltimore revealed that a full 40% of them were persistently infected with a clade of chloroviruses, very large ~300 kilobase DNA viruses (possibly very distantly related to poxviruses but it's difficult to tell) which have previously only been known to infect freshwater algae. Upon looking at correlations, they found an extremely significant correlation between infection and a mild degradation in memory and attention tasks. Infecting mice with the virus both caused a decline in memory function as measured by maze-running tasks and, since unlike a human you can crack open a mouse and extract RNA from pieces of its brain, very clear changes in the gene expression of the hippocampus. Not a clue about the mechanism.

This virus had already been noted to be odd a few years ago - a paper from 2011 (http://www.sciencedirect.com/science/article/pii/S1360138511002275) noted that they contained carbon catabolism genes and carbohydrate-processing enzymes that seem to come from animals despite the fact that they were only known to infect algae.

Interestingly, human-and-algae-infecting viruses appear to have been empirically identified by scientists in Ukraine several years ago, in the vaginal secretions of people whose secretions killed algae, and this has largely gone unnoticed (possibly justifiably) since it was published (www.bioorganica.org.ua/UBAdenovo/pubs_9_2_11/Stepanova.pdf) in a small journal with poor English translation.

Replies from: gwern, Lumifer
comment by gwern · 2014-11-18T03:57:05.699Z · LW(p) · GW(p)

Fulltext: https://pdf.yt/d/dr3uP9XOtT1BPimU / https://dl.dropboxusercontent.com/u/5317066/2014-yolken.pdf / http://libgen.org/scimag7/10.1073/pnas.1418895111.pdf

I didn't much like it. This thing reeks of data dredging at every step; I don't see why they controlled for birth place where you'd think that current residence would be much more relevant (Baltimore has rich and poor areas like most big cities; and if nothing else, it'd give you an idea of infection vectors if carriers cluster around the harbor or specific places); I find it odd that their WAIS subtest shows zero (0.0) decrease in the infected group while their weirdo IQ test I've never heard of shows a fall; and I'm not sure how convincing I find their mice models* - to what extent does it really mimic human infections with no apparent symptoms? It wouldn't surprise me if, every time you gave mice a big injection of infectious organisms, their scores fell simply because you made them sick with something, so I'm not sure whether the mice experiment part is testing the right causal hypothesis (it might be testing 'raging infections decrease cognitive performance', not 'this algae, and not other infectious agents, decreases cognitive performance').

I would not be surprised if this never replicates.

* kudos to them for trying to experimentally test it, though

Replies from: None
comment by [deleted] · 2014-11-18T04:05:28.766Z · LW(p) · GW(p)

Good points all - I was hoping you'd show up. It's odd enough though that I would be quite interested in any attempts at replication. Course, that might be coming from my interest in the evolutionary history and ecological role of a virus that can apparently infect organisms as different as blue-green algae and mammals.

comment by Lumifer · 2014-11-18T02:35:28.638Z · LW(p) · GW(p)

I keep on wondering to what degree our discovery and exploration of all the microflora and microfauna that live inside us will overturn "standard" medical theories of how humans work. It's already generally accepted that the diet's effect on nutrition is noticeably mediated by the gut biota, though the details are still very very fuzzy, and that's only a crude start...

comment by Viliam_Bur · 2014-11-21T13:30:41.624Z · LW(p) · GW(p)

This stupid bot has almost 20 000 comment karma on Reddit.

I have seen it in action, and sometimes it may take some time for humans to recognize it is a bot, not a passive-aggressive human. Because, well, there are many kinds of humans on the internet.

But this made me think -- maybe we could use "average reddit karma per comment" or something like this as a kind of Turing test measure. And perhaps we could make a bot-writing competition, where the participant bots would be released on Reddit, and the winner is the one which collects the most karma in 3 months.

Of course the rules would have to be a bit more complex. Some bots are useful by being obvious bots, e.g. the wikipediabot who replies with summaries of wikipedia articles to comments containing links to wikipedia. Competition in making useful bots would also be nice, but I would like to focus on bots that seem like (stupid) humans. Not sure how to evaluate this.

Maybe the competition could have an additional rule, that the authors of the bots are trying to find other bots on Reddit, and if they find them, they can destroy them by writing a phrase that each bot must obey and self-destruct, such as "BOT, DESTROY YOURSELF!". (That would later become a beautiful meme, I hope.) The total score of the bot is the karma it has accumulated until that moment. Authors would be allowed to launch several different instances of their bot code, e.g. in different subreddits, or initialized using different data, or just with different random values.
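
For concreteness, a rough sketch of what a competition entry might look like, using the PRAW library (the credentials, subreddit name, and reply logic below are placeholders, and the score is just the average-karma-per-comment measure suggested above):

```python
import praw

# Placeholder credentials and subreddit; real values would come from a Reddit app registration.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="competition_bot",
    password="PASSWORD",
    user_agent="karma-turing-bot/0.1",
)

KILL_PHRASE = "BOT, DESTROY YOURSELF!"

def generate_reply(text):
    # The interesting part: logic that makes the bot read like a (stupid) human.
    # Returning None means "don't reply to this comment".
    return None

def average_karma_per_comment(username):
    # The proposed score: average karma over all of the bot's comments.
    scores = [c.score for c in reddit.redditor(username).comments.new(limit=None)]
    return sum(scores) / len(scores) if scores else 0.0

for comment in reddit.subreddit("SomeSubreddit").stream.comments(skip_existing=True):
    if KILL_PHRASE in comment.body.upper():
        break  # self-destruct rule: stop posting; the score is frozen here
    reply = generate_reply(comment.body)
    if reply:
        comment.reply(reply)

print(average_karma_per_comment("competition_bot"))
```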

Has anyone tried something like this before? What is the reddit policy towards bots?

Replies from: sixes_and_sevens, Lumifer, ZankerH, Azathoth123
comment by sixes_and_sevens · 2014-11-21T13:41:43.496Z · LW(p) · GW(p)

Related: Stealth Mountain, a Twitter bot (now defunct) which would correct tweets containing the expression "sneak peak".

Both this and the bot you link to rely less on getting machines to cleverly reproduce human behaviour, and more on identifying robotic human behaviour that can be carried out by stupid machines. Since this is probably a winning strategy, I'd recommend making that the focus of such a competition.
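
A sketch of how little machinery the "stupid machine" side needs, using the "sneak peak" example (the correction text is illustrative, not Stealth Mountain's actual wording):

```python
import re

# One regex captures the entire "robotic human behaviour" being targeted.
SNEAK_PEAK = re.compile(r"\bsneak peak\b", re.IGNORECASE)

def correction(text):
    # Return a canned reply if the error is present, otherwise None.
    if SNEAK_PEAK.search(text):
        return 'I think you mean "sneak peek".'
    return None

print(correction("Here's a sneak peak at tomorrow's post!"))
```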

comment by Lumifer · 2014-11-21T15:42:26.065Z · LW(p) · GW(p)

Core War in social media? This could get a wee bit out of hand... X-)

comment by ZankerH · 2014-11-22T14:33:56.757Z · LW(p) · GW(p)

Has anyone tried something like this before? What is the reddit policy towards bots?

The only site-wide rules I'm aware of are ones against abusing the API (ie, your bot shouldn't be functionally equivalent to a DDoS attack). Other than that, most large subreddits seem to allow bots, but it's up to the individual moderators.

comment by Azathoth123 · 2014-11-27T03:38:06.273Z · LW(p) · GW(p)

Are you familiar with the various online automatic rant generators?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-27T09:07:55.097Z · LW(p) · GW(p)

I have seen various random text generators on their own web pages, but never actively participating in a forum.

comment by NancyLebovitz · 2014-11-17T15:51:49.601Z · LW(p) · GW(p)

Would supporting open immigration count as part of effective altruism?

Replies from: Izeinwinter, Lumifer, satt, ChristianKl, skeptical_lurker, badger, Azathoth123
comment by Izeinwinter · 2014-11-17T17:11:23.094Z · LW(p) · GW(p)

Not unless you have an amazing insight / highly effective novel argument to inject into the political debate on the issue. It is an altruistic goal, but pushing for political changes that have near-zero chance of actually coming to pass or of shifting the Overton window in the desired direction fails hard at the "Effective" part. Don't get me wrong, political action can certainly be a very effective way to do good in the world, but this particular issue is so far out of reach of the Overton window it kind of isn't part of the building anymore.

The best argument I've ever been able to come up with is that we should underbid the people-smuggling industry - grant anyone who can post a 3000 dollar bond and a return ticket a temporary right of residence and employment as long as their money lasts - and I've never been able to sell it to anyone who wasn't already opposed to controls on immigration.

I have also seen people much, much better at politics than me try to persuade people of the immorality of immigration restrictions. I have never seen it work. Anecdote, not data, I know, but if this were a good attack angle, I should be able to think of examples of it not just abysmally failing.

If I could figure out a way to make seasteading actually profitable (iron-fertilization fishing industry?), doing that under an appropriate flag of convenience and recruiting labor out of the various hellholes people need to leave seems workable, because it doesn't require persuasion. However, it also requires what would effectively be company towns to be run to first-world standards of living...

Replies from: Lumifer
comment by Lumifer · 2014-11-17T17:27:27.159Z · LW(p) · GW(p)

pushing for political changes that have near zero chance of actually coming to pass

It's not an either-or thing. There are a lot of intermediate states, and pushing in the direction of open immigration can be effective even if you don't expect to get to the abolishment of national borders in the near future. The EU exists.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-11-17T18:12:29.035Z · LW(p) · GW(p)

And if we could get our politicians to stop fucking up for a couple of years, expanding the union further would be the most viable angle of attack I can think of. But that requires the European economy to be doing much better than it currently is, so step one would be to deprogram our political class of their love affair with austerity...

Replies from: Salemicus, Lumifer
comment by Salemicus · 2014-11-17T18:46:22.117Z · LW(p) · GW(p)

EU expansion doesn't rely on a successful European economy. Croatia's accession to the EU took place entirely post-GFC.

The real reason the EU won't expand significantly is because there's nowhere for it to expand to. Norway, Switzerland and Iceland don't want to join. Turkey and the Ukraine aren't wanted. Some of the remaining Balkan countries will join eventually, but they total only 18 million people, and none are great accession candidates; it's now recognised that Bulgaria was probably a mistake, and there's no wish to repeat that. I'd say there's more chance that the UK leaves the EU than it undertakes a significant expansion in the next, say, 20 years.

comment by Lumifer · 2014-11-17T18:21:02.771Z · LW(p) · GW(p)

And if we could get our politicians to stop fucking up for a couple of years

Ah, well, I wouldn't advise you to hold your breath... X-)

But that requires the european economy to be doing much better

The most pressing problem at the moment for that expansion east is that Mr. Putin doesn't seem to like it. And if you are thinking about expanding south, the voters don't seem to like that.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-11-17T18:52:38.037Z · LW(p) · GW(p)

The voters are reasoning beings - if we could get the economy created by the last expansion round to work well, they would have more of an appetite for further rounds. Nothing much is viable as long as we are collectively shooting ourselves in one foot, reloading and switching to the other, however. It hurts my head that our political class doesn't seem to be updating on evidence at all. Austerity makes things worse. Try something else. ANYTHING ELSE. Wage hikes in Germany. A monster project to build a low-pressure intercity subway between every city in Europe with more than 100,000 inhabitants. I'm not picky, as long as it's less idiotic than "If we remove money from the economy it's sure to grow!"

Replies from: Lumifer
comment by Lumifer · 2014-11-17T19:03:17.308Z · LW(p) · GW(p)

The voters are reasoning beings

Are they now? They persistently elect those politicians who make you despair. And I'm not sure at all that the anti-immigrant moods can be assuaged by throwing money at the voters.

Austerity makes things worse.

There is no austerity in Western Europe (with the possible exception of Greece). Take a look at the government budgets -- did they significantly contract? Take a look at the ECB balance sheet and the current interest rates as well.

ANYTHING ELSE

Sure. How about you deregulate the economy and let capitalism do what it does best?

Replies from: lmm
comment by lmm · 2014-11-20T21:14:31.298Z · LW(p) · GW(p)

From a Keynesian point of view the impression is more important than the reality. There may not be austerity, but people think there is, and they're saving rather than consuming as a result, and that slows the economy.

Replies from: Capla
comment by Capla · 2014-11-20T22:26:08.334Z · LW(p) · GW(p)

Why? If savings come out the other end as investment.

Replies from: lmm
comment by lmm · 2014-11-20T23:45:57.512Z · LW(p) · GW(p)

They don't always. Even if they do, investment doesn't help if it doesn't have anywhere to go. Right now the economy is not investment-limited, it's demand-limited (as you can tell by e.g. low returns on investment).

Replies from: Capla
comment by Capla · 2014-11-21T01:56:21.343Z · LW(p) · GW(p)

Why doesn't the market for capital equalize the discrepancy between supply and demand by adjusting the price (the interest rate)?

Replies from: lmm, Lumifer
comment by lmm · 2014-11-21T08:36:18.970Z · LW(p) · GW(p)

a) there's a persistent market distortion because investment profits are undertaxed

b) savers are often not terribly price-sensitive; a lot of people will "save as much as they can" at any interest rate. Also interest can't go below nominal zero (many bank accounts are very close to this).

Replies from: Azathoth123, Lumifer
comment by Azathoth123 · 2014-11-27T03:46:26.718Z · LW(p) · GW(p)

there's a persistent market distortion because investment profits are undertaxed

What do you mean? Corporate tax rates, at least in the US, are higher than personal tax rates.

comment by Lumifer · 2014-11-21T16:17:46.361Z · LW(p) · GW(p)

Also interest can't go below nominal zero

Sure it can.

comment by Lumifer · 2014-11-21T02:02:54.593Z · LW(p) · GW(p)

It does for long-term interest rates. Short-term interest rates are effectively determined by the government.

Replies from: Capla
comment by Capla · 2014-11-21T02:04:20.621Z · LW(p) · GW(p)

"And in the long run we're all dead"? : )

Replies from: Lumifer
comment by Lumifer · 2014-11-21T02:15:22.187Z · LW(p) · GW(p)

That too, but conventionally long-term interest rates are those with maturities of 10 years and up. In the US the term structure usually goes out to 30 years, in other countries sometimes more.

comment by Lumifer · 2014-11-17T16:17:01.699Z · LW(p) · GW(p)

Heh. That's an interesting question. Some people certainly believe so.

comment by satt · 2014-11-18T03:14:06.103Z · LW(p) · GW(p)

It's something that GiveWell's looked into over the last year or two, and so has Carl Shulman on Reflective Disequilibrium as part of his effective altruism research notes.

comment by ChristianKl · 2014-11-17T17:32:59.079Z · LW(p) · GW(p)

"Effective" assumes that you believe you make an impact. Depending on how you plan to support it, effectiveness might vary greatly.

comment by skeptical_lurker · 2014-11-17T21:07:11.878Z · LW(p) · GW(p)

I went to a talk which presented the horrifying fact that immigrants are several times more likely to develop psychosis (varying from 2x to 10x depending on the group, IIRC), and they theorised that some people have trouble empathising with people of other races, and this literally drives them insane. (Their other theory was that the cause is stimulant psychosis, but there wasn't evidence to support this as the primary cause.)

I'd rather this wasn't the case. I quite like some level of controlled immigration, especially if they bring curry. But I worry that driving people insane is a fairly severe negative externality.

Replies from: NancyLebovitz, maxikov
comment by NancyLebovitz · 2014-11-17T22:06:46.139Z · LW(p) · GW(p)

Here's a study.

Schizophrenia occurs in national populations with an annual prevalence of 1.4 to 4.6 per 1000 and incidence rates of 16–42 per 100 000 (Jablensky, 2000). Although the incidence rates vary between countries by a factor of less than 3, wider ranges of variation are found among population subgroups within single countries. In the UK, for example, incidence rate ratios of 4 or above have been estimated both for the lowest social class in the indigenous White population and for Black immigrant groups. These data provide the most compelling evidence yet to hand for the role of socio-economic factors in aetiology.

I think the amount of additional schizophrenia is low enough that this isn't a major issue for immigration.

Replies from: satt
comment by satt · 2014-11-18T03:02:24.089Z · LW(p) · GW(p)

Guessing that link's meant to go somewhere else...

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-18T04:24:47.204Z · LW(p) · GW(p)

Thanks. I've found a better link than the one I intended, so I've updated the comment text.

comment by maxikov · 2014-11-20T01:23:21.681Z · LW(p) · GW(p)

  1. Did this study consider the difference between white and non-white immigrants to mostly white Western countries?

  2. Did this study consider the difference between white and non-white immigrants to non-white countries?

  3. Did this study consider the difference between immigrants who (try to) assimilate to local communities, and those who prefer to stay within national communities?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-11-20T20:28:39.567Z · LW(p) · GW(p)

Did this study consider the difference between white and non-white immigrants to mostly white Western countries?

Yes, the effect was very pronounced in Africans, possibly due to the use of the traditional African stimulant khat. But white people do drugs too, so this seems very dubious.

Did this study consider the difference between white and non-white immigrants to non-white countries?

&

Did this study consider the difference between immigrants who (try to) assimilate to local communities, and those who prefer to stay within national communities?

No, although maybe other studies did.

comment by badger · 2014-11-17T19:19:41.548Z · LW(p) · GW(p)

For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. Based on Table 2 of Clemens (2011).
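
As a rough back-of-envelope check on the parenthetical (not a reproduction of Clemens's table; the world-GDP figure is an assumed round number for that period):

```python
world_gdp = 70e12  # dollars, assumed rough world GDP for the early 2010s

gain_one_percent = 0.01 * world_gdp  # "about 1%" of world output
gain_open = 1.00 * world_gdp         # "could double world output"

print(f"1% gain:    ~${gain_one_percent / 1e12:.1f} trillion")  # roughly a trillion dollars
print(f"fully open: ~${gain_open / 1e12:.0f} trillion")         # on the order of world GDP itself
```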

Replies from: Lumifer, skeptical_lurker
comment by Lumifer · 2014-11-17T19:42:32.234Z · LW(p) · GW(p)

an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars)

I am having strong doubts about this number. The paper cited is long on handwaving and seems to be entirely too fond of expressions like "should make economists’ jaws hit their desks" and "there appear to be trillion-dollar bills on the sidewalk". In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.

Replies from: skeptical_lurker, badger, army1987
comment by skeptical_lurker · 2014-11-17T20:33:57.079Z · LW(p) · GW(p)

In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.

This is ridiculous. If this were possible, wouldn't the industries just relocate to the third world? (obviously, yes, some industries are inherently local, but not all industries that can move have moved)

Replies from: Lumifer, DanielLC
comment by Lumifer · 2014-11-17T20:39:00.878Z · LW(p) · GW(p)

wouldn't the industries just relocate to the third world?

Um....

:-D

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-11-18T07:21:07.604Z · LW(p) · GW(p)

Yes, I know about China, but (a) not all industry that could move has, and (b) China isn't really a third world country.

Replies from: lmm
comment by lmm · 2014-11-20T21:18:03.052Z · LW(p) · GW(p)

Not any more it isn't, now that all the industry moved there.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-11-20T21:43:29.319Z · LW(p) · GW(p)

All the industry moved there? And so why hasn't it now moved to the remaining third-world countries?

Replies from: lmm
comment by lmm · 2014-11-20T23:48:04.557Z · LW(p) · GW(p)

Not all of it, but a lot of it. My point is that its economy grew as a result.

Industries are moving out of China to cheaper places like Africa and the Philippines. But the real-world economy is not completely frictionless; it takes time for these effects to occur.

Replies from: skeptical_lurker, NancyLebovitz
comment by skeptical_lurker · 2014-11-21T09:01:12.708Z · LW(p) · GW(p)

But the real-world economy is not completely frictionless; it takes time for these effects to occur.

Indeed, it takes time, and for a third world person to reach the productivity of a first world person could take maybe 15 years of first-world education at a bare minimum? So people are not perfectly fungible.

Replies from: lmm
comment by lmm · 2014-11-21T12:21:21.096Z · LW(p) · GW(p)

I very much doubt your number. Third world people who go to first world universities do fine, so that's 3 years, not 15. And for many jobs - even a very intellectual one like mine - I'm skeptical how much difference a degree really makes.

If immigrants didn't have the skills that were needed to be productive, companies wouldn't want to hire them. Instead we see companies chafing at the H-1B limits and crying out for more immigration.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-11-21T12:43:42.286Z · LW(p) · GW(p)

I very much doubt your number. Third world people who go to first world universities do fine, so that's 3 years, not 15. And for many jobs - even a very intellectual one like mine - I'm skeptical how much difference a degree really makes.

But these people are not the average third world people, they are presumably the ones fortunate enough to get a decent education in a country where many people don't.

If immigrants didn't have the skills that were needed to be productive, companies wouldn't want to hire them. Instead we see companies chafing at the H-1B limits and crying out for more immigration.

I'm certainly not arguing some crazy position like 'all immigrants are stupid.' I'm also guessing that these immigrants that companies desperately want to hire are mostly not from third world countries.

comment by NancyLebovitz · 2014-11-21T02:56:50.507Z · LW(p) · GW(p)

Any thoughts about what will happen when there are no longer really cheap places to move to?

Replies from: lmm, Lumifer
comment by lmm · 2014-11-21T08:38:53.967Z · LW(p) · GW(p)

Productivity will equalize and then growth will slow. When I'm pessimistic I fear every national economy will look like Japan's as soon as we catch up.

comment by Lumifer · 2014-11-21T03:21:12.351Z · LW(p) · GW(p)

The robots will take over.

That's not a joke -- industrial automation is proceeding at a rapid clip and the range of jobs for which you want dumb but cheap human labour continues to contract.

comment by DanielLC · 2014-11-21T23:22:43.699Z · LW(p) · GW(p)

Also, the industries that can move and would benefit already did, so the rest are in particular need of people from developed nations.

comment by badger · 2014-11-17T21:29:29.284Z · LW(p) · GW(p)

The paper cited is handwavy and conversational because it isn't making original claims. It's providing a survey for non-specialists. The table I mentioned is a summary of six other papers.

Some of the studies assume workers in poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10-15K in value.

Replies from: Lumifer, skeptical_lurker
comment by Lumifer · 2014-11-17T21:53:43.027Z · LW(p) · GW(p)

It's providing a survey for non-specialists.

It looks to me as providing evidence for a particular point of view it wishes to promote. I am not sure of its... evenhandedness.

I think that social and economic effects of immigration are a complex subject and going about trillions lying on the sidewalk isn't particularly helpful.

comment by skeptical_lurker · 2014-11-18T07:52:35.047Z · LW(p) · GW(p)

Some of the studies assume workers in poorer countries are permanently 1/3rd or 1/5th as productive as native workers

Are they advocating for abolition of the minimum wage? Can one survive on 1/5th the average salary? Will the combination of inequality and race cause civil unrest?

comment by A1987dM (army1987) · 2014-11-22T11:53:43.583Z · LW(p) · GW(p)

Sure, a randomly chosen person in a $5,000 GDP/capita economy is unlikely to make $50,000 no matter where you move them, but no-one is proposing to move people at random.

comment by skeptical_lurker · 2014-11-17T20:54:59.095Z · LW(p) · GW(p)

Do you happen to know of a scatter graph for immigration rate vs GDP? It might shed a little light on the matter, though fertility would be a confounder.

comment by Azathoth123 · 2014-11-18T03:11:42.551Z · LW(p) · GW(p)

I fail to see the connection between more immigration and improved utility. Notably, most of the arguments in this regard tend to assume that people are completely fungible and that third-worlders will magically acquire the characteristics of the average first-worlder the moment they cross the border.

Replies from: NancyLebovitz, army1987
comment by NancyLebovitz · 2014-11-18T04:27:08.341Z · LW(p) · GW(p)

I don't think that's a necessary implication for wanting to open up immigration. All that's needed is that new immigrants should do significantly better than they did in their home country, and do some good in the country they've moved to.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-18T04:35:10.164Z · LW(p) · GW(p)

All that's needed is that new immigrants should do significantly better than they did in their home country, do some good in the country they've moved to.

Notice that your two assumptions nearly contradict each other. After all if the average citizen of the old country was as capable of doing good as the average citizen of the destination, the old country wouldn't be the kind of place one needs to leave to do significantly better.

Replies from: NancyLebovitz, satt
comment by NancyLebovitz · 2014-11-18T14:42:03.725Z · LW(p) · GW(p)

Consider the common case of a Latin American man who leaves to become a construction worker in the US.

He's a basically sensible person who's willing to work hard, but due to corruption, lack of infrastructure, and probably prejudice in Latin America, he's extremely poor.

He risks his life to go to America, where he's still subject to corruption (employers can get away with cheating him of his wages, and there may be payments for his work into Social Security that he will almost certainly never be able to collect) and prejudice, but he's still better off because he's hooked into much better infrastructure, probably less corruption, and possibly less prejudice.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T06:26:09.493Z · LW(p) · GW(p)

corruption, lack of infrastructure, and probably prejudice in Latin America

Why are these problems so much worse in Latin America? Probably a lot of it has to do with the character of the people there. Thus when he's in the country he's likely to do things that incrementally increase the problems he left Latin America to get away from.

Replies from: polymathwannabe, AlexSchell, army1987, NancyLebovitz
comment by polymathwannabe · 2014-11-21T15:29:15.605Z · LW(p) · GW(p)

If you get irritated by malicious comments like "U.S. people are self-centered, greedy, manipulative, meddlesome, trigger-happy devourers of the world's resources, entitled policemen of the world's affairs, and deluded by their overinflated self-importance", then that should give you a hint of how your odious generalization about Latin Americans is likely to be received.

comment by AlexSchell · 2014-11-23T02:04:24.259Z · LW(p) · GW(p)

Many Western societies have seen pretty dramatic productivity-enhancing institutional changes in the last few hundred years that aren't explicable in terms of changes in genetic makeup. In light of this, your view seems to rely on believing that most currently remaining institutional variation is genetic, whereas this wasn't the case ~300 years ago. Do you think this is the case?

Hong Kong, Singapore, and South Korea seem to make a pretty strong case for a huge independent effect of institutions.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-27T04:46:47.840Z · LW(p) · GW(p)

Many Western societies have seen pretty dramatic productivity-enhancing institutional changes in the last few hundred years that aren't explicable in terms of changes in genetic makeup.

Who said anything about genetics?

Hong Kong, Singapore, and South Korea seem to make a pretty strong case for a huge independent effect of institutions.

Korea is. China (I assume this is what you mean by Hong Kong and Singapore) is evidence against.

Replies from: AlexSchell
comment by AlexSchell · 2014-11-27T05:56:24.419Z · LW(p) · GW(p)

Oops, shouldn't have assumed you're talking about genetics :)

Still, if you're talking about character in a causally neutral sense, it seems that you need to posit character traits that hardly change within a person's lifetime. Here I admit that the evidence for rapid institutional effects is weaker than the evidence for institutional effects in general.

(Re: Hong Kong, Singapore, no, I do mean those cities specifically. Their economic outcomes differ strikingly from culturally and genetically similar neighbors because of their unique histories.)

comment by A1987dM (army1987) · 2014-11-22T11:12:44.085Z · LW(p) · GW(p)

You seem to assume that everybody in Latin America has the same character, in which case how come certain people emigrate and others don't?

comment by NancyLebovitz · 2014-11-21T12:51:55.302Z · LW(p) · GW(p)

You're leaving out that he left Latin America to get away from those problems, and also that a lot of immigrants want to become real Americans (or whichever country they're moving to).

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-27T04:49:46.129Z · LW(p) · GW(p)

You're leaving out that he left Latin America to get away from those problems

But do they understand what caused them?

also that a lot of immigrants want to become real Americans (or whichever country they're moving to).

I'd be more comfortable with an immigration policy that explicitly screened for something like this.

comment by satt · 2014-11-18T06:02:18.685Z · LW(p) · GW(p)

All that's needed is that new immigrants should do significantly better than they did in their home country, do some good in the country they've moved to.

Notice that your two assumptions nearly contradict each other. After all if the average citizen of the old country was as capable of doing good as the average citizen of the destination, the old country wouldn't be the kind of place one needs to leave to do significantly better.

That argument seems to me non-responsive, fallacious, or at least inadequately fleshed out, in three different ways.

  1. Immigrants needn't be representative of their country of origin, in which case arguments about the average citizen in that country of origin aren't automatically relevant.

  2. The "average citizen of the old country" needn't be "as capable of doing good as the average citizen of the destination" for NancyLebovitz's point to go through; they just need to generate a net gain (however one operationalizes "a net gain") at the margin.

  3. Something like a fallacy of composition: a priori, "the average citizen of the old country [being] as capable of doing good as the average citizen of the destination" doesn't automatically imply that "the old country [isn't] the kind of place one needs to leave to do significantly better". Given, say, increasing returns to scale & specialization, it's quite possible that denizens of a tiny country would be outperformed by those of a bigger country, but would nonetheless be quite capable of matching their new neighbours if they moved to that bigger country.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-18T08:54:06.620Z · LW(p) · GW(p)

That argument seems to me non-responsive, fallacious, or at least inadequately fleshed out, in three different ways.

Yes, it was a Bayesian argument, not a mathematical one.

Immigrants needn't be representative of their country of origin, in which case arguments about the average citizen in that country of origin aren't automatically relevant.

They are unless you have reason to believe the immigrants are above average.

Given, say, increasing returns to scale

Comparing per-capita GDP with populations suggests we have decreasing returns to scale.

One way to see the problem with Nancy's argument is to consider the following question: If most people from country X want to move to country Y then wouldn't it be easier for country Y to simply annex country X? You save on relocation costs and the people are now in country Y.

Replies from: gjm, army1987, skeptical_lurker, gjm, NancyLebovitz, army1987, satt
comment by gjm · 2014-11-18T17:21:28.863Z · LW(p) · GW(p)

unless you have reason to believe the immigrants are above average.

You do. (For a particular sense of "above average" that's appropriate here.) The people who choose to leave country A to seek their fortune in country B are going to be (on average) atypical in a bunch of ways.

  • They will tend to be more optimistic about their prospects in country B and maybe less optimistic about their prospects in country A. It's not immediately clear what we should expect this to say about them overall, but let's "change basis" as follows: they will tend to have a higher opinion of how much better they'd do in B than in A, and this seems like it should correlate with actual prospects in B if it's a healthier country than A.

  • They will tend to be more proactive, more go-getting. This seems like it should also correlate with productive work.

  • They will tend to be actually able to get themselves from country A to country B without starving, getting arrested by overzealous police in country A or B or in between, failing to get past border controls, etc. This all seems like it would correlate with effectiveness in getting stuff done. (Both directly, and because their ability to do this will be affected by the resources they have in country A, which for multiple reasons will correlate with their ability to get things done.)

  • There will probably be differences in their relationships with other people in country A, but I'm not sure which direction the overall effect goes. (Maybe they have looser ties, and that correlates with being less good with people, and that correlates with doing badly; maybe they have good friends and family making them feel well supported and confident, and that correlates with doing well.)

Having got from A to B, they are then going to be strongly motivated to make the trouble and expense worthwhile, which probably means that whatever their underlying competence they will work harder and more resourcefully in country B than they would have in country A.

So there are lots of reasons to expect people who have emigrated from dysfunctional country A to more-functional country B to be more effective workers in country B than the population average in country A.

(Note: I don't think this is by any means the dominant reason to agree with Nancy and disagree with Azathoth on this point.)

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T06:32:03.312Z · LW(p) · GW(p)

Something like that was formerly more true. Now I suspect part of the problem is that improvements in transportation have made it "too easy" to immigrate.

Replies from: Salemicus
comment by Salemicus · 2014-11-27T11:06:47.825Z · LW(p) · GW(p)

I suspect you mean "formerly."

Replies from: Azathoth123
comment by Azathoth123 · 2014-12-01T05:43:19.465Z · LW(p) · GW(p)

Thanks, fixed.

comment by A1987dM (army1987) · 2014-11-19T02:02:53.981Z · LW(p) · GW(p)

They are unless you have reason to believe the immigrants are above average.

Ahem

comment by skeptical_lurker · 2014-11-18T09:44:02.360Z · LW(p) · GW(p)

Yes, it was a Bayesian, not a mathematical, argument.

I don't quite understand - Bayes is maths.

They are unless you have reason to believe the immigrants are above average.

One could require immigrants to have demonstrable skills. However, if the immigrants are above average then this causes a brain drain for the home country.

If most people from country X want to move to country Y then wouldn't it be easier for country Y to simply annex country X? You save on relocation costs and the people are now in country Y.

This would also stop smaller first-world countries like Japan or the UK from becoming overcrowded. OTOH, it's also completely infeasible in the foreseeable future because people wouldn't agree to it.

comment by gjm · 2014-11-19T12:17:34.507Z · LW(p) · GW(p)

decreasing returns to scale

The scatterplot shown here appears to show a strong positive correlation between population and GDP per capita.

[EDITED to add: no, I'm an idiot and misread the plot, which shows a clear correlation between population and total GDP and suggests rather little between population and per capita GDP. Sorry about that. The Gapminder link posted by satt also suggests very little correlation between population and per capita GDP. So the context for my (unchanged) argument below is not "Increasing returns to scale are just one factor; here are a bunch more" but "Increased returns to scale are probably negligible; here are a bunch of things that aren't".]
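For anyone who would rather compute this than eyeball a scatterplot, here is a minimal sketch of the check, assuming a hypothetical CSV of country data with `population` and `gdp_per_capita` columns (the file name and column names are placeholders, not a real dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names; substitute whatever country dataset you trust.
df = pd.read_csv("countries.csv")

# Work on log scales, since both variables span several orders of magnitude.
log_pop = np.log(df["population"])
log_gdppc = np.log(df["gdp_per_capita"])

# Pearson correlation between log population and log GDP per capita.
# A value near zero suggests country size tells you little about per-capita output;
# a clearly positive or negative value would very loosely hint at increasing or
# decreasing returns to scale (no controls for anything else here).
corr = np.corrcoef(log_pop, log_gdppc)[0, 1]
print(f"correlation(log population, log GDP per capita) = {corr:.2f}")
```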

In any case, "increasing returns to scale" were just one example (and I think not the best) of how someone might be more productive on moving from (smaller, poorer, more corrupt, less developed) country A to (larger, richer, less corrupt, more developed) country B. Here, let me list some other specific things that might make someone more productive if they move from (say) Somalia to (say) France.

  • Better food and healthcare. Our migrant will likely be healthier in country B, and people do more and better work when healthier.
  • Easier learning. Our migrant may arrive in country B with few really valuable skills, but will find more opportunities than in country A to learn new things.
  • Better infrastructure. Perhaps our migrant is working on making things; country B has better roads, railways, airports, etc., for shipping the products around for sale. Perhaps s/he is (after taking advantage of those educational opportunities) working on computer software; country B has reliable electricity, internet that doesn't suck, places to buy computer hardware, etc.
  • Richer customers. Perhaps our migrant is making food or cleaning houses. People in country B will pay a lot more for this, because they are richer and their time is worth more to them. So, at least as measured in GDP, the same work is more productive-in-dollars in country B than in country A. (Is this a real gain rather than an artefact of imperfect quantification? Maybe. If people in country B are richer and their time is worth more because they are actually doing more valuable things then any given saving in their time is helping the world more.)
  • Less corruption. Many poor dysfunctional countries have a lot of corruption. This imposes a sort of friction on otherwise-productive activities -- one has to spend time and/or money bribing and sweet-talking corrupt officials, and it could have been used for something else. In country B this happens much less.
Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T06:37:01.079Z · LW(p) · GW(p)

And what caused these differences between these two countries? (Hint: it's not a magical corruption ray located in Mogadishu.) And how will these traits change as more people move from Somalia to France?

Replies from: gjm
comment by gjm · 2014-11-21T11:34:44.902Z · LW(p) · GW(p)

It could be any number of things. Including the one I take it you're looking for, namely some genetic inferiority on the part of the people in country A. But even if that were the entire cause it could still easily be the case that when someone moves from A to B their productivity (especially if expressed in monetary terms) increases dramatically.

I'm actually not quite sure what point you're arguing now. A few comments back, though, your claim was that Nancy was (nearly) contradicting herself by expecting immigrants to (1) be productive in their new country even though (2) their old country is the kind of place where it's really hard to be productive, on the grounds that for #2 to be true the people in the old country must be unproductive people.

It seems to me that for this argument to work you'd need counters to the following points (which have been made and which you haven't, as it seems to me, given any good counterargument to so far):

  • There are lots of other ways in which the old country could make productivity harder than the new -- e.g., the ones I mention above.

    • Let me reiterate that these apply even if the old country's low productivity is entirely a matter of permanent, unfixable genetic deficiencies in its people. Suppose the people of country A are substantially stupider and lazier than those of country B; this will lead to all kinds of structural problems in country A; but in country B it may well be that even someone substantially stupider and lazier than the average can still be productive. (Indeed I'm pretty sure many such people are.)
    • If the differences between A and B do indeed all arise in this way (which, incidentally, I think there are good reasons to think is far from the truth) then yes, if the scale of migration from A to B is large enough then it could make things worse rather than better overall. Given that the empirical evidence I'm aware of strongly suggests that migration to successful countries tends to make them better off, I think the onus is on you if you want to make the case that this actually happens at any credible level of migration.
  • The people who move from country A to country B may be atypical of the people of country A, in ways that make them more likely overall to be productive in country B.

    • Your only response to this has been a handwavy dismissal, to the effect that that might have been true once but now immigration is too easy so it isn't any more. How about some evidence?
Replies from: Azathoth123
comment by Azathoth123 · 2014-11-27T04:42:30.767Z · LW(p) · GW(p)

It could be any number of things. Including the one I take it you're looking for, namely some genetic inferiority on the part of the people in country A.

Not necessarily; my argument goes through even if it's memetic.

The people who move from country A to country B may be atypical of the people of country A, in ways that make them more likely overall to be productive in country B.

Your only response to this has been a handwavy dismissal, to the effect that that might have been true once but now immigration is too easy so it isn't any more. How about some evidence?

How about some yourself? Note that simply saying that something may happen is not a reason to ignore the prior that it won't. I responded to your only argument about the prior. Also, look at the way the immigrants are in fact behaving; I believe it involves lots of riots and creating neighborhoods that the police are afraid to go into.

comment by NancyLebovitz · 2014-11-18T14:35:16.939Z · LW(p) · GW(p)

The cost of annexing can be very high. War is hard on people, and we don't have methods of smoothly and sensibly rearranging national borders.

Also, at that point, country Y has inherited all of country X's problems.

As far as I can tell, you don't get everyone (or a very large proportion of people) wanting to leave a country unless there's a lot of violence or a natural disaster. If the problem is poverty, people would rather have some members of their families emigrate to work, and send money back.

I think "annexing country X" might show some problems with utilitarian thinking-- a tendency to abstract away important amounts of detail and the costs of getting from point A to point B. This doesn't mean utilitarian thinking is always wrong, but these are problems to watch out for.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T06:33:47.634Z · LW(p) · GW(p)

Also, at that point, country Y has inherited all of country X's problems.

That's my point. Having many people from country X emigrate to country Y causes country Y to acquire country X's problems.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-21T12:47:43.497Z · LW(p) · GW(p)

Immigration doesn't lead to importing country X's government.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-27T04:41:33.113Z · LW(p) · GW(p)

True, but it may very well lead to importing the reason country X has the government it does.

comment by A1987dM (army1987) · 2014-11-19T17:43:24.444Z · LW(p) · GW(p)

If most people from country X want to move to country Y then wouldn't it be easier for country Y to simply annex country X?

That's exactly what the FRG did with the GDR.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T06:29:25.826Z · LW(p) · GW(p)

That was a special case since there the difference was (originally) entirely due to the Soviet occupation. And even then it wasn't particularly effective, i.e., today the former GDR is doing much worse than the former FRG and (I think) comparably to the neighboring former Soviet satellites that weren't annexed.

comment by satt · 2014-11-19T08:57:25.098Z · LW(p) · GW(p)

Yes, it was a Bayesian, not a mathematical, argument.

That is a bit better, but even as a Bayesian argument it quietly rests on empirical priors which I find odd (as we're about to see).

They are unless you have reason to believe the immigrants are above average.

In fact I do. gjm has listed a priori reasons to expect this. More empirically, I already know that epidemiologists talk about a "healthy immigrant effect", which suggests that immigrants are selected (or indeed self-select) to be healthier & wealthier than the average for their home country. I've also seen people bringing up the selectedness of immigrants in arguments about race & IQ, to rebut observations that Third World immigrants tend to do better in their new homes than the estimated mean IQs of their home countries would suggest.

Comparing per-capita GDP with populations suggests we have decreasing returns to scale.

It does, but it is a very weak suggestion. The correlation is not that great to start with, and doesn't account for other factors, like international differences in the working-age proportion of the population. Economists have tried to take a more systematic approach based on more involved regressions and/or fitting production functions, but the results here seem mixed, varying across industries and the level of aggregation.

(My own prior that increasing returns to scale occur in manufacturing — the class of industries, AFAIK, most important to a country's development — seems broadly consistent with the available evidence. I may as well add that my point 3 invokes "increasing returns to scale & specialization" simply as an example, and the basic conclusion that there's "[s]omething like a fallacy of composition" going on stands even if returns to scale are everywhere decreasing.)
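For concreteness, here is the textbook way returns to scale are usually formalized (an illustrative sketch, not a claim about any of the specific studies mentioned above). With a Cobb-Douglas production function in capital $K$ and labour $L$:

$$
Y = A\,K^{\alpha}L^{\beta},
\qquad
A\,(tK)^{\alpha}(tL)^{\beta} = t^{\alpha+\beta}\,A\,K^{\alpha}L^{\beta} = t^{\alpha+\beta}\,Y,
$$

so scaling all inputs by $t$ scales output by $t^{\alpha+\beta}$, and returns to scale are increasing, constant, or decreasing according to whether $\alpha+\beta$ is greater than, equal to, or less than one. Cross-country comparisons of population and per-capita GDP are at best a noisy proxy for estimating something like $\alpha+\beta$.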

One way to see the problem with Nancy's argument is to consider the following question: If most people from country X want to move to country Y then wouldn't it be easier for country Y to simply annex country X? You save on relocation costs and the people are now in country Y.

I have little to add to what NancyLebovitz & skeptical_lurker have already said.

comment by A1987dM (army1987) · 2014-11-19T01:53:45.774Z · LW(p) · GW(p)

I fail to see the connection between more immigration and improved utility.

If I didn't think I'd have higher utility in a different country than in mine, I wouldn't want to move to that country in the first place. Granted, my moving there could affect other people's utility, but it's not obvious a priori what the sign of the net effect would be. EDIT: Most of the obvious externalities are pecuniary, so their effects on total utility cancel out when considering both sides of each transaction (assuming they have equal marginal utility for money).
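A minimal worked example of that cancellation (the rent scenario and symbols are purely illustrative): suppose my arrival bids up rents by $\Delta p$ on $q$ existing rental units. Under the equal-marginal-utility-of-money assumption,

$$
\Delta U_{\text{tenants}} = -\,q\,\Delta p,
\qquad
\Delta U_{\text{landlords}} = +\,q\,\Delta p,
\qquad
\Delta U_{\text{tenants}} + \Delta U_{\text{landlords}} = 0,
$$

so the pecuniary part is a pure transfer, and only the non-pecuniary effects (congestion, new gains from trade, and so on) move total utility.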

comment by Sarunas · 2014-11-21T18:58:25.545Z · LW(p) · GW(p)

I was reading the thread about Neoreaction and remembered this old LW post from five years ago:

Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)

So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.

When I saw the question posted in the discussion, I thought it had the potential to be a good discussion topic. After all, I reasoned, there are many thoughtful people on LessWrong who are interested in politics, history, and political philosophy. There are a lot of insights to be gained from discussing interesting and difficult questions about society. And there are quite a few insightful neoreactionaries here, e.g. Konkvistador and others (some of whom sadly no longer actively participate on LW). And some neoreactionary ideas are interesting and worth a look.

Despite all this, it seems that many subthreads of that thread basically turned into an unproductive flamewar. Why? Well, politics is the mind-killer, of course. What should I have expected? Nevertheless, I think it could have been avoided. I am not totally against political discussion in some threads. In fact, many comments in that thread are good. Taken individually, many comments are quite reasonable: for example, some of them explain a certain position, and while you may agree or disagree with the stated position, you can't say anything bad about the comment itself. However, when aggregated they do not form a productive atmosphere in the thread. While many comments are reasonable, it is suspiciously easy to sort most of them into two groups - pro- and anti-neoreaction - with little middle ground that could act as (sort of) judges to help evaluate the claims of both sides. There is suspiciously little belief updating (even on small issues) going on (maybe it is different among lurkers), which is probably a very important measure of whether the discussion was actually productive (I do not claim that all LessWrong discussions are productive; a lot of them aren't). Many people aren't arguing in good faith. Some of them even post links to this discussion in other forums as if it were representative of LessWrong as a whole.

I am not calling for censorship or deletion of certain comments. Nor do I want discussion of controversial issues to be prohibited. I am calling for a moment of reflection about mind-killing. For a moment of consideration about whether you yourself are mind-killed, whether you yourself are no longer updating your beliefs honestly, whether you are no longer arguing in good faith (even if the other side lowered its standards first). I don't know what ritual could be devised to reinforce this point. Maybe it is true that there are only three stable equilibria (Vladimir_M, alas, no longer comments on LessWrong). Is it possible to devise something - a ritual, a social norm, or something else - that would help keep things at the unstable point of having both a broad scope of questions and high-quality discussion? Or is any such attempt doomed to crumble due to the influence of the discussion standards of the outside world?

In addition to that, I think that the question itself was poorly worded. It was way too broad. Questions that are likely to be polarizing would benefit from being much narrower and much more specific. Maybe this way everyone would be talking about the same thing, as it would be much harder to steer the discussion toward the things you like to talk about. Maybe this way everyone would have a clearer and more concrete idea of what everyone else is talking about, making it easier to reason about the whole situation and easier to weigh the evidence in favour of one or the other position.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-11-22T11:23:57.832Z · LW(p) · GW(p)

I agree it would be good to find a better method to debate politics. Maybe we should have a meta-rule that anyone who starts a political debate must specify rules for how the topic should be debated. (So now the burden is on the people who want to debate politics here.)

It seems to me that in political topics most of the updating happens between conversations. It's not like you say something and the other person says "oh, you totally convinced me, I am changing my mind now". Instead, you say something, the other person looks at you very suspiciously and walks away. Later they keep thinking about it, maybe google some data, maybe talk with other people, and the next time you meet them, their position is different from the last time.

For example, I have updated, from mildly pro-NR to anti-NR. I admit they have a few good points. But this is generally my experience with political movements: they are often very good at pointing out the obvious flaws of their competitors; the problem is that their own case is usually not much better, only different. I appreciate the few insights, they made me update, and I still keep thinking about some stuff. I just didn't come to the same conclusion; I separated the stuff that makes sense to me from the stuff that doesn't. Just like I try to draw good ideas e.g. from religion, without becoming religious. Instead of buying the whole package, I take a few bricks and add them to my model of the world. There are a few bricks in my model now that an outside observer could call "neoreactionary", although that would probably depend on the exact words I would use to describe them (because they are not unique for NR). The other bricks I have judged separately, and I was unimpressed. That's where I am now.

There is also this irritating fact that NRs keep associating themselves with LW. I consider that a huge dishonesty and in a way an attack on this community. If people are impressed by LW, this can make them more open towards NR. If people are disgusted by NR, this can make them dislike LW by association. They gain, we lose. It never goes the other way round; no one is going to debate overcoming their cognitive biases just because they fell in love with NR. To put it bluntly, we are used as a recruitment tool for some guy's cult, and all his shit falls on our heads. Why should we tolerate that? (This, especially #1, should be required reading for every nerd.) That alone makes me completely unwilling to debate with them, because such debates are then used as further evidence that "LW supports NR". (As an analogy, imagine how much you would want to have a polite debate with a politician you dislike, if you knew that the reason he debates with you is that he can take a photo of you two having a conversation, put it on his personal webpage, and claim that you are one of his supporters, to impress people who know you.) I refuse to ignore this context, because I am strongly convinced that NRs are fully aware of what they are doing here.

So even if we try having rational debates about politics, I would prefer to try them on some other political topics.

Replies from: Sarunas, Azathoth123
comment by Sarunas · 2014-12-02T16:03:47.684Z · LW(p) · GW(p)

Maybe we should have a meta-rule that anyone who starts a political debate must specify rules how the topic should be debated. (So now the burden is on the people who want to debate politics here.)

I think this is a great suggestion, since it allows different standards for different types of political discussion, as well as giving us a chance to actually observe which set of rules leads to the most productive discussion.

It seems to me that in political topics most of the updating happens between conversations.

Well, I think this is probably true in my experience. On the other hand, since this is an internet forum, no one is forcing them to post their answers immediately. Maybe for most people it takes months to change their position on a significant political belief even if they have a lot of evidence that contradicts that belief, so we should not expect a given person to change their beliefs after a single conversation. However, thinking at the margin, there must have been people who were on the fence. There must have been people who quickly jump from one set of beliefs to another whenever someone posts an interesting essay. Maybe for them a week would have been enough to update? And since this was not a real-time conversation, they could have posted about their update after a week; perhaps it was their pride that prevented them from doing so? However, fewer people seemed to be on the fence than I expected; "the distribution of opinions about neoreaction" seemed bimodal. Then again, now that I write this, I realize that such people would have been less motivated to write down their beliefs in the first place, so they were underrepresented in the total volume of posts in that thread. Thus, it is possible that the impression of bimodality is partially an artefact of that.

It is good to hear that you have found something in that thread that you thought was worth updating on. I also agree that neoreaction is better at finding the flaws of other movements (for example, I think that some trends they describe as dangerous are actually dangerous) and at providing some intellectual tools for thinking about the world that can be added to one's toolbox (I am not a neoreactionary; whether those tools accurately describe the world is a different question, but to me it seems that they are at least worth thinking about - can they shed some light on things that other intellectual tools neglect?) than at providing an alternative course of action, an alternative model of what kind of society is good, or an alternative movement that would be worth following (it seems to me that neoreaction is more of a reaction to progressivism (in the neoreactionary sense of the word) than a coherent set of goals in itself; the groups that compose neoreaction seem as different from each other as either of them is from progressivism). So, basically, I think my position towards neoreaction is somewhat similar to yours.

There is also this irritating fact that NRs keep associating themselves with LW. I consider that a huge dishonesty and in a way an attack on this community. If people are impressed by LW, this can make them more open towards NR. If people are disgusted by NR, this can make them dislike LW by association. They gain, we lose. It never goes the other way round; no one is going to debate overcoming their cognitive biases just because they fell in love with NR. To put it bluntly, we are used as a recruitment tool

This is where my intuition differs from yours. Maybe this is because I have never been to a LW meetup, nor have I ever met another person who reads LW in real life. In addition, I have never met a single neoreactionary in real life. Or maybe I simply don't know about them; I don't think I have ever met a single SJW in real life either. I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to my mind is that of a place, an abstract entity, and not a community of people. Although I obviously understand that without all these people this place would not exist, the mental image of LessWrong as "a place (maybe cloudlike, maybe vaguely defined) where LW-style discussion about LW topics happens (style and topics are the most important part of what defines LW to me)" feels more real to me than the mental image of a community of people. I do not know much about LW posters beyond what they post here or on other blogs. For example, when I first started reading LessWrong, for quite a long time I thought that Yvain and Gwern were women. Why did I think this? I don't remember. What I'm trying to say is that I guess the difference between our intuitions may come from the difference in how we think about these two layers (place, style of discussion, topics vs community of real people). It may be a bias on my part (i.e. I don't know what kind of thinking leads to an optimal outcome, and I am not sure how exactly such an optimal outcome would look) that I neglect the community-building aspect of LessWrong; I am not sure. I haven't disentangled my thoughts about these things in my mind yet; they are very messy. This post is partially an attempt to write down my intuitions about this (as you can see, it is not a very coherent argument); maybe it will help me clarify some things.

In addition to that, while an individual identity is relatively well defined ("I am me"), the identity of someone who belongs (or does not belong) to a certain group is much less clearly defined, and whether someone actively feels they belong to a certain group seems to depend on the situation.

What I am trying to say is that when I see neoreactionaries commenting on LessWrong, I do not perceive them as "them" if they talk, in a manner close enough to the LessWrong style, about topics that are LW topics. In this situation, I do not perceive LWers and LW neoreactionaries as distinct groups in a way that would make a statement about an attack on the community make sense. In fact, in this situation, only a small part of my attention is dedicated to identity-related thoughts. The situation is different when, e.g., I read someone's comments (usually outside of LessWrong) attacking LessWrong. In this case the part of my attention that is dedicated to identity-related things is much larger. In such situations, I do think of myself as someone who regularly reads LessWrong and finds it a great place with a lot of interesting people who write about their insights; when someone attacks it, my emotions create an urge to defend LessWrong. In such situations a much larger part of my attention is dedicated to this, and I do start thinking in terms of who belongs to what group. But unless it is neoreactionaries who are attacking LessWrong, I usually still do not feel (I am just describing what I feel in such situations) that LW neoreactionaries (not neoreactionaries in general) are a distinct group. Thus, in my case, it seems that it is conflicts and disagreements that create a sense of identity (even more than vice versa), since, as I have said, I have never participated in an offline LW community. (to be continued in the next comment)

Replies from: Viliam_Bur, Sarunas
comment by Viliam_Bur · 2014-12-03T16:04:06.162Z · LW(p) · GW(p)

less people seemed be on the fence than I expected, "the distribution of opinions about neoreaction" seemed bimodal

I suspect this is the polarizing effect of politics, not something specific to LW nor to neoreaction. We are talking about labels, not ideas. I may agree with half of the ideas of some movement and disagree with the other half, but I usually have a clear opinion about whether I want to identify with a label or not.

I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to my mind is that of a place, abstract entity and not a community of people.

My mental image of the LW community is more or less "people who have read the Sequences, and in general agree with them". Yes, I am aware that in recent years many people ignore this stuff, to the degree where mentioning the Sequences is a minor faux pas. (And for a while it was a major faux pas, and some people loudly insisted that telling someone to read the Sequences is lesswrongese for "fuck you". Not sure how much of that attitude actually came from the "Rational"Wiki.) That, in my opinion, is a bad thing, and it sometimes leads to reinventing the wheel in debates. To put it shortly, it seems to me we have lost the ability to build new things, and became an online debate club. Still a high-quality online debate club. Just not what I hoped for at the beginning.

What I am trying to say is that when I see neoreactionaries commenting on LessWrong, I do not perceive them as "them" if they talk in a manner that is close enough to LessWrong style about the topics that are LW topics.

LessWrong was built upon some ideas, and one of them was that "politics is the mindkiller" and that we strive to become more rational, instead of being merely clever arguers. At this moment, neoreactionaries are the group most visibly violating this rule. They strongly contribute to the destruction of the walled garden. Debating them over and over again is privileging a hypothesis; why not choose any other fringe political belief instead, or try creating a new one from scratch, or whatever?

And I guess that if we are to overcome biases we will have to deal with politics.

Politics is an advanced topic for a rationalist. Before going there, one should make sure they are able to handle the easier situations first. Also, there should be some kind of feedback, some way of warning people "you have strayed from the path". Otherwise we will only have clever arguers competing using their verbal skills. When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity. They should realize how much they have failed the art of rationality by not realizing most of that on their own. They should update about their own ability to form epistemically correct political opinions. Instead of inventing clever rationalizations for the already written bottom line.

In my opinion, Yvain is the most qualified person for the task of debating politics rationally, and the only obvious improvement would be to somehow find a dozen different Yvains coming from different cultural backgrounds, and let them debate with each other. But one doesn't get there by writing their bottom line first.

Replies from: Azathoth123, Salemicus, Lumifer
comment by Azathoth123 · 2014-12-08T01:28:34.012Z · LW(p) · GW(p)

To put it shortly, it seems to me we have lost the ability to build new things, and became an online debate club.

Did LW as a group ever have this ability? Going by the archives it seems that there were a small number (less than 10) of posters on LW who could do this. Now that they're no longer posting regularly, new things are no longer produced here.

try creating a new one from scratch, or whatever?

A reasonable case could be made that this is how NRx came to be.

Replies from: Vulture
comment by Vulture · 2014-12-11T06:24:35.015Z · LW(p) · GW(p)

A reasonable case could be made that this is how NRx came to be.

If this is where NRx came from, then I am strongly reminded of the story of the dog that evolved into a bacterium. An alternative LW-like community that evolved into an aggressive political movement? Either everyone involved was an advanced hyper-genius or something went terribly wrong somewhere along the way. That's not to say that something valuable did not result, but "mission drift" would be a very mild phrase.

Replies from: Lumifer
comment by Lumifer · 2014-12-11T07:31:31.228Z · LW(p) · GW(p)

evolved into an aggressive political movement?

As far as I can see it evolved into mostly smart people writing dense texts about political philosophy. That's a bit different :-)

Replies from: bogus
comment by bogus · 2014-12-11T09:19:55.440Z · LW(p) · GW(p)

As far as I can see it evolved into mostly smart people writing dense texts about political philosophy.

That would describe quite a few political movements, actually - it's hardly exclusive to NRx.

Replies from: Lumifer
comment by Lumifer · 2014-12-11T15:31:17.646Z · LW(p) · GW(p)

Nope, political movements and political philosophy belong to different categories.

Some political movements evolve out of political philosophy texts, but not all political philosophy texts evolve into political movements.

Replies from: Vulture
comment by Vulture · 2014-12-11T16:41:21.763Z · LW(p) · GW(p)

I think that at this point it would be fair to say that a movement has developed out of NRx political philosophy.

Replies from: Lumifer
comment by Lumifer · 2014-12-11T16:47:07.683Z · LW(p) · GW(p)

Show me that movement in actual politics. Is any NRx-er running for office? Do they have an influential PAC? A think tank in Washington, some lobbyists, maybe?

Replies from: None, Vulture
comment by [deleted] · 2014-12-11T17:15:02.100Z · LW(p) · GW(p)

Nah, man. Once you get to that level of politics, you're already pozzed.

comment by Vulture · 2014-12-11T17:14:29.562Z · LW(p) · GW(p)

Oh, I think we're using the phrase "political movement" in different senses. I meant something more like "group of people who define themselves as a group in terms of a relatively stable platform of shared political beliefs, which are sufficiently different from the political beliefs of any other group or movement". Other examples might be libertarianism, anarcho-primitivism, internet social justice, etc.

I guess this is a non-standard usage, so I'm open to recommendations for a better term.

Replies from: Lumifer
comment by Lumifer · 2014-12-11T17:23:19.936Z · LW(p) · GW(p)

Yep, looks like we are using different terminology. The distinction between political philosophy and political movement that I drew is precisely the difference between staying in the ideas/information/talking/discussing realm and moving out into the realm of real-world power and power relationships. What matches your definition I'd probably call a line of political thought.

Mencius Moldbug is a political philosopher. Tea Party is a political movement.

comment by Salemicus · 2014-12-03T16:33:04.505Z · LW(p) · GW(p)

When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity

Sentiments like this are, in my opinion, a large part of why "politics is the mind-killer." I am no neoreactionary, but I thought the SSC neoreaction anti-faq was extremely weak. You obviously thought it was extremely strong. We have parsed the same arguments and the same data, yet come out with diametrically opposed conclusions. That's not how it's supposed to work. And this is far from a unique occurrence. I frequently find the same article or post being held up as brilliant by people on one side of the political spectrum, and dishonest or idiotic by people on the other side.

It is not merely that people don't agree on what's correct, we don't even agree on what a successful argument looks like.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-12-03T17:29:52.976Z · LW(p) · GW(p)

I thought the SSC neoreaction anti-faq was extremely weak. You obviously thought it was extremely strong. We have parsed the same arguments and the same data, yet come out with diametrically opposed conclusions. That's not how it's supposed to work.

Well, sometimes that's exactly how it's supposed to work.

For example, if you have high confidence in additional information which contradicts the premises of the document in whole or in part, and VB is not confident in that information, then we'd expect you to judge the document less compelling than VB. And if you wished to make a compelling argument that you were justified in that judgment, you could lay out the relevant information.

Or if you've performed a more insightful analysis of the document than VB has, such that you've identified rhetorical sleight-of-hand in the document that tricks VB into accepting certain lines of reasoning as sound when they actually aren't, or as supporting certain conclusions when they actually don't, or something of that nature, here again we'd expect you to judge the document less compelling than VB does, and you could lay out the fallacious reasoning step-by-step if you wished to make a compelling argument that you were justified in that judgment.

Do you believe either of those are the case?

Replies from: Salemicus
comment by Salemicus · 2014-12-03T18:00:30.882Z · LW(p) · GW(p)

I don't want to focus on the anti-neoreactionary FAQ, because I don't want this to get dragged into a debate about neoreaction. In particular, I simply don't know how Viliam_Bur parsed the document, or what additional information one of us is privy to that the other is not. My point is that this is a general issue in politics, where one group of people finds a piece compelling, and another group finds it terrible.

And note too that this isn't experienced as something emotional or personal, but rather as a general argument for the truth. In this case, VB thinks neo-reactionaries should be "deeply shocked and start questioning their own sanity." In other words, he thinks this is basically a settled argument, and implies that people who persist in their neoreaction are basically irrational, crazy or something along those lines. Again, this is a general issue in politics. People generally believe (or at least, talk like they believe) that people who disagree with them politically are clinging to refuted beliefs in the face of overwhelming evidence. I don't just think this is due to epistemic closure, although that is part of it. I think it's partly an emotional and cultural thing, where we are moved for pre-rational reasons but our minds represent this to us as truth.

I am certainly not saying I am immune from this, but I don't have a third-party view of myself. I am not saying I am right and Viliam_Bur is wrong on the case in point. But I do wonder how many neoreactionaries have been deconverted by that FAQ. I suspect the number is very low...

Replies from: TheOtherDave, Lumifer, Viliam_Bur
comment by TheOtherDave · 2014-12-03T19:13:57.445Z · LW(p) · GW(p)

To the extent that you're making a general point -- which, if I've understood you correctly, is that human intuitions of truth are significantly influenced by emotional and cultural factors, including political (and more broadly tribal) affiliations -- I agree with your general point.

And if I've understood you correctly, despite the fact that most of your specific claims in this thread are about a specific ideology and a specific document, you don't actually want to discuss those things. So I won't.

Replies from: Salemicus
comment by Salemicus · 2014-12-03T20:25:31.628Z · LW(p) · GW(p)

I'm happy to discuss specifics, just not about the neoreactionary FAQ. I agree with VB that LW has an unhealthy tendency for every discussion to become about neoreaction, and I don't like it.

Instead, how about this article? Jim Edwards is a bright guy, and he clearly intended to persuade with that post. And indeed he has plenty of commenters who think he was making a valuable point. Yet I am at a loss to say what it is. Here he is, claiming to have a graph showing that government spending affects economic growth, yet all that graph shows is changes in government spending. It doesn't show a correlation, it doesn't suggest causation, it doesn't do anything of the sort. Yet some people find this persuasive.

When someone says they like dance music (for example), I feel like I'm missing out; they get joy out of something I hate, which in some ways makes them better than me, but fundamentally de gustibus non est disputandum. The older I get, the more I feel like that's how all persuasion works.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-12-03T23:45:23.076Z · LW(p) · GW(p)

Yup, those charts puzzle me, too (based on about five seconds of analysis, admittedly, but I have a strong preexisting belief that there are many examples of such silliness on the Internet, so I'm strongly inclined to agree that this particular chart is yet another example... which is of course yet another example of the kind of judgment-based-on-non-analytical factors we're discussing).

How confident are you that this is how all persuasion works?

Replies from: Salemicus
comment by Salemicus · 2014-12-04T11:00:20.804Z · LW(p) · GW(p)

I don't know how general this is, but I do think it's an important factor that I don't see discussed.

Another point is peer effects. I remember at school my physics teacher used to use proof by intimidation, where he would attempt to browbeat and ridicule students into agreeing with him on some subtly incorrect argument. And he wouldn't just get agreement because he scared people; the force of his personality and the desire not to look foolish would genuinely convince them. And then he'd get cross for real, saying no, you need to stand up for yourself, think through the maths. But if you can't fully think through the soundness of the arguments, if you are groping around between the correct and the incorrect answer, then you will be swayed by these social effects. I think a lot of persuasion works like that, but on a more subtle and long-term level.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-12-04T15:54:12.556Z · LW(p) · GW(p)

I think a lot of persuasion works like that, but on a more subtle and long-term level.

Yes, I agree.

comment by Lumifer · 2014-12-03T18:06:13.238Z · LW(p) · GW(p)

My point is that this is a general issue in politics

That's kinda a general issue in humans and usually goes by the name of Confirmation Bias.

For example, debates about religion or, say, global warming work in exactly the same way.

Replies from: Salemicus
comment by Salemicus · 2014-12-03T20:31:20.592Z · LW(p) · GW(p)

But I don't think it's just confirmation bias. People do get won over by arguments. People do change their minds, convert, etc. And often after changing their mind they become just as passionate for their new cause as they ever were for the old. But what is persuasive and what is logical sometimes seem disjoint to different people.

You are right that these things afflict some areas more than others. Politics and religion are notoriously bad. And I do think a large part of it is that people simply have very different standards for what a successful argument looks like, and that this is almost an aesthetic.

Replies from: Lumifer
comment by Lumifer · 2014-12-03T21:52:05.061Z · LW(p) · GW(p)

People do get won over by arguments.

Sure, confirmation bias is a force but it's not an insurmountable force. It only makes changing one's beliefs difficult, but not impossible.

But what is persuasive and what is logical sometimes seem disjoint to different people.

I agree and I don't find this surprising. People are different and that's fine.

Take the classic "Won't somebody please think of the children!" argument. I, for example, find it deeply suspect, to the extent that it works as an anti-argument for me. But not an inconsiderable number of people can be convinced by this (and, in general, by emotional-appeal strategies).

I guess the question of what kinds of people are convinced by what kinds of arguments would be an interesting area to research.

comment by Viliam_Bur · 2014-12-04T12:26:55.063Z · LW(p) · GW(p)

But I do wonder how many neoreactionaries have been deconverted by that FAQ. I suspect the number is very low...

This is an interesting question that seems empirically testable -- we could ask those people and make a poll. Although there is a difference between "believing that NRs are probably right about most things" and "self-identifying as NR". I would guess there were many people impressed (but not yet completely convinced) by NR without accepting the label (yet?), who were less impressed after reading the FAQ. So the losses among potential NRs were probably much higher than among already fully convinced NRs.

comment by Lumifer · 2014-12-03T16:17:14.871Z · LW(p) · GW(p)

"politics is the mindkiller"

That's a warning sign, not a barbed-wire fence patrolled by guards with orders to shoot to kill.

why not choose any other fringe political belief instead, or try creating a new one from scratch, or whatever?

Neoreaction is an interesting line of thought offering unusual -- and so valuable -- insights. If you don't want to talk about NRx, well, don't. If you want to talk about different political beliefs, well, do.

some way of warning people "you have strayed from the path"

What is "the path"? LW is a diverse community and that's one of its strengths.

When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity. They should realize how much they have failed the art of rationality by not realizing most of that on their own.

You did mention mindkill, didn't you? I recommend a look in the mirror. In particular, you seem to be confusing rationality with a particular set of political values.

epistemically correct political opinions

Political opinions are expressions of values. Values are not epistemically correct or wrong -- that's a category error.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-12-04T12:18:55.184Z · LW(p) · GW(p)

Talking about "neoreaction" (or any other political group) already is a package-deal fallacy. NRs have a set of beliefs. Each of those beliefs individually can be true or false (or disconnected from evidence). These beliefs should be debated individually. It is quite possible that within the set, some beliefs will be true, some will be false, and some will be undefined. Then we can accept the true beliefs, and reject the false beliefs. There is no need to use the word "neoreaction" anywhere in that process.

So, instead of having threads about neoreaction, we (assuming we are going to debate politics) should have threads about each individual belief (only one such thread at a time). Then we should provide evidence for the belief or against the belief. Then we should judge the evidence, and come to a conclusion, unconstrained by identity labels.

The fact that we are not already doing it this way, is for me an evidence on the meta level that we are not ready for having political debates.

Debating beliefs separately, understanding the conjunction fallacy, providing evidence, avoiding labels, tabooing words... this is all rationality 101 stuff. This is "the path" we have already strayed from. If we collectively fail at rationality 101, I don't trust our ability to debate more complex things.

Political opinions are expressions of values. Values are not epistemically correct or wrong -- that's a category error.

Value is "I don't want children to starve". Political opinion is "we should increase the minimal wage (so the children will not starve)". There is more than the value; there is also the model of the world saying that "increasing minimal wage will reduce the number of starving children (without significant conflict with other values)". Other person may share the value, but reject the model. They may instead have a model that "increasing minimal wages increases unemployment, and thus increases the number of starving children", and therefore have a political opinion "we should remove minimal wage (so the children will not starve)". Same value, different models, different political opinions.

It seems to me that people usually differ more in their models than in their values. There are probably few people who really want to optimize the world to increase the number of starving children, but there are many people with political opinions contradicting each other. (Believing too quickly that our political opponents have different values is also covered in the Sequences.)

Replies from: Lumifer, gjm
comment by Lumifer · 2014-12-04T16:16:38.838Z · LW(p) · GW(p)

Each of those beliefs individually can be true or false (or disconnected from evidence). These beliefs should be debated individually.

I don't think it's quite that simple.

You are arguing for atomicity of beliefs as well as their independence -- you are saying they can (and should) stand and fall on their own. I think the situation is more complicated -- the beliefs form a network and accepting or rejecting a particular node sends ripples through the whole network.

Beliefs can support and reinforce each other, they can depend on one another. Some foundational beliefs are so important to the whole network that rejecting them collapses the whole thing. Consider e.g. Christianity -- a particular network of beliefs. Some can stand or fall on their own -- the proliferation of varieties of Christianity attests to that -- but some beliefs support large sub-networks and if you tear them down, the rest falls, too. At the root, if you reject the belief in God, debating, for example, the existence of purgatory is silly.

The package-deal fallacy exists and is real, but excessive reductionism is a fallacy, too, and just as real.

If we collectively fail at rationality 101, I don't trust our ability to debate more complex things.

Oh, I don't trust our ability to debate complex things. But debate them we must, because the alternative is much worse. That ability is not a binary flag, by the way.

There is more than the value; there is also the model of the world

True, and these should be separated to the extent possible.

It seems to me that people usually differ more in their models than in their values.

I don't know about that -- I'd like to see more evidence. One of the problems is that people may seem to have the same values at the level of costless declarations (everyone is for motherhood and apple pie), but once the same people are forced to make costly trade-offs between things important to them, the real values come out and I am not sure that they would be as similar as they looked before.

Replies from: Salemicus
comment by Salemicus · 2014-12-04T16:25:21.948Z · LW(p) · GW(p)

[P]eople may seem to have the same values at the level of costless declarations (everyone is for motherhood and apple pie), but once the same people are forced to make costly trade-offs between things important to them, the real values come out and I am not sure that they would be as similar as they looked before.

I wish I could give this more than one upvote.

comment by gjm · 2014-12-04T15:02:44.697Z · LW(p) · GW(p)

These beliefs should be debated individually.

Maybe. It seems to me that there could be two systems of political ideas -- call them A and B -- both of which are pretty credible when taken as wholes, but for which if you take any single proposition from one and examine it in the context of the other, it's obviously wrong.

(The same thing happens with scientific theories. Key words: "Quine-Duhem thesis".)

On the other hand, it does also happen that basically-unrelated ideas get bundled together as part of a package deal, and in that case we probably do generally want to try to separate them. So I'm not sure what the best way is to make the tradeoff between splitting and lumping.

comment by Sarunas · 2014-12-02T16:04:21.476Z · LW(p) · GW(p)

(cont.) I guess it is likely true that more people find out about neoreaction on LessWrong than vice versa. However, it is not obvious to me that hardly anyone would join LessWrong to discuss LW topics after being exposed to neoreaction first. I mean, MoreRight recommends that its readers read LessWrong and SlateStarCodex as well. Xenosystems has LW, SSC and OvercomingBias on its blogroll. Radish Magazine's list of the people they admire includes Eliezer Yudkowsky. Obviously some of those people were LWers themselves, and some might post links to LessWrong because they try to make their place more comfortable for LWers who might wander there. But still, I hope that at least some neoreactionaries would come here with an interest in LW topics (cognitive biases, the future of humanity, and artificial intelligence). I guess that it is probably true that neoreaction gains more members from LW than vice versa. This is the community layer. But there is also the intellectual toolbox layer. And I think that if LessWrong started discussing politics, having a few neoreactionaries here (not just any neoreactionaries, but those who think in LW terms and are able to notice cognitive biases) would probably be beneficial. And I guess that if we are to overcome biases we will have to deal with politics. You see, I fear that by paying attention only to cognitive biases that are easy to test in the lab we are like the proverbial drunk man searching for his keys under a lamp-post. For example, the field of cognitive biases researches what happens inside a person's head, but certain things from political science and economics, such as the median voter theorem, Duverger's law, and the Motte-and-Bailey effect (when instead of happening inside a person's head, it happens to a movement - when different people from the same movement occupy motte and bailey (I think that individual and group motte-and-bailey's are quite distinct)), seem analogous enough to be thought of as yet another kind of bias that prevents optimal decision making at a group level. And if we were to start discussions about things like these, it would be hard to avoid using political examples altogether. By the way, the idea that the list of biases LessWrong usually talks about is not exhaustive enough has already been discussed here.

So even if we try having rational debates about politics, I would prefer to try them on some other political topics.

Yes, definitely. I think that the political topics (especially at the beginning) would have to be much more specific and less related to the questions of identity.

Replies from: Vulture
comment by Vulture · 2014-12-11T06:18:57.438Z · LW(p) · GW(p)

Motte-and-Bailey effect (when instead of happening inside a person's head, it happens to a movement - when different people from the same movement occupy motte and bailey (I think that individual and group motte-and-bailey's are quite distinct))

This could just as easily be described, with the opposite connotation, as the movement containing some weakmans*, which makes me think that we need a better way of talking about this phenomenon. 'Palatability spread' or 'presentability spread'? But that isn't quite right. A hybrid term like 'mottemans' and 'baileymans' would be the worst thing ever. Perhaps we need a new metaphor, such as the movement being a large object where some parts are closer to you and some parts are further away, and they all have some unifying qualities, and it is usually more productive to argue against the part that is closer to you rather than the part that is far away, even though focusing on the part that is far away makes it easy to other the whole edifice (weakmanning); and motte-and-baileying is subscribing to the further-away part of your own movement while pretending that you are part of the closer part.

*in the technical sense; their positions may be plenty strong but they are less palatable

Edit: Whoops no one will see this because it's in an old open thread. Oh well.

Replies from: Sarunas
comment by Sarunas · 2014-12-11T13:10:18.577Z · LW(p) · GW(p)

What I had in mind was a situation where "a person from outside" talks to a person who "occupies a bailey of the movement" (for the sake of simplicity let's call it "a movement", although it doesn't have to be a movement in the traditional sense). If the former notices that the latter's position is weakly supported, the latter appeals not to the motte position itself but to the existence of high-status people who occupy the motte position, e.g. "our movement has a lot of academic researchers on our side" or something along those lines, even though their own position doesn't necessarily resemble that of the "motte people" beyond a few aspects; therefore "a person from outside" should not criticize their movement. In other words, criticism of a particular position is interpreted as criticism of the whole movement and of the "motte people", so they invoke "a strongman" to deflect the criticism from themselves.

I think you made a very good point. From the inside, if an outsider criticizes a certain position of the movement, it looks as if they attacked a weakman of the movement, and since it feels like they attacked the movement itself, an insider feels they should present a stronger case for the movement, because allowing an outsider to debate weakmen without having to debate stronger positions could give that outsider and other observers the impression that those weakmen were what the movement was all about. However, from the outsider's perspective it looks like they criticized a particular position of a movement, and then (due to solidarity or something similar) the movement's strongmen were fielded against them, so from the outsider's perspective it does look like the movement pulled a move very similar to a motte-and-bailey.

Whoops no one will see this because it's in an old open thread. Oh well.

I think that replying to old comments should be encouraged, because otherwise, if everyone feels they should reply as quickly as possible (or not reply at all), they will not think their positions through and will post them in a hurry.

comment by Azathoth123 · 2014-12-08T01:18:12.175Z · LW(p) · GW(p)

Maybe we should have a meta-rule that anyone who starts a political debate must specify rules how the topic should be debated.

Um, this is a horrible idea. The problem is people will make rules that amount to "you're only allowed to debate this topic if you agree with me".

comment by ChristianKl · 2014-11-22T12:52:35.545Z · LW(p) · GW(p)

One aspect of neoreactionary thought is that it relies on historical narratives instead of focusing on specific claims that could be true or false in a way that can be determined by evidence.

To quote Moldbug:

Classifying traditions by their cladistic ancestry is a fine example. The statement that Universalism exists, that it is a descendant of Christianity, and that it is not a descendant of Confucianism, can only be interpreted intuitively. It is not a logical proposition in any sense. It has no objective truth-value. It is a pattern that strikes me as, given certain facts, self-evident. In order to convince you of this proposition, I repeat these facts and arrange them in the pattern I see in my head. Either you see the same pattern, or another pattern, or no pattern at all.

Given such an idea of how reasoning works, it's not clear that there is an easy solution that allows for agreeing on a social norm for discussing politics.

Replies from: fubarobfusco, Azathoth123
comment by fubarobfusco · 2014-11-22T18:04:54.028Z · LW(p) · GW(p)

It isn't clear to me that this sort of thought should be called "reasoning" at all; that term is commonly used for dealing with propositions that do have truth-values.

It seems to me to be more in the vein of "poetry" or "poetry appreciation".

Replies from: ChristianKl
comment by ChristianKl · 2014-11-22T19:16:10.534Z · LW(p) · GW(p)

It seems to me to be more in the vein of "poetry" or "poetry appreciation".

I don't think that's entirely fair to Moldbug. Illustrating patterns and using the human ability for pattern matching does have its place in knowledge generation. It's more than just poetry appreciation.

Replies from: Sarunas
comment by Sarunas · 2014-12-02T19:28:24.059Z · LW(p) · GW(p)

After reading the quote I thought that he was trying to make an analogy between finding a historical narrative from historical facts and drawing a curve that has the best fit to a given series of data points. Indeed, saying that such a curve is "true" or "false" does not make a lot of sense, since just because a point lies outside the graph of a function does not mean that this function cannot be a curve of best fit - one cannot decide that from a small number of data points; one needs to measure the (in)accuracy of the model over the whole domain. Such an analogy would lead to interesting follow-up questions, e.g. how exactly does one measure the inaccuracy of a historical narrative?
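
To make that curve-fitting picture concrete, here is a minimal sketch with made-up numbers: a line fitted to noisy data is scored over the whole domain, so a single outlying point does not by itself disqualify it:

```python
# Toy illustration of "best fit over the whole domain" (all numbers made up).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.shape)  # noisy "facts"
y[25] += 8.0                                          # one point far off the eventual curve

coeffs = np.polyfit(x, y, deg=1)   # the "narrative": a straight line
y_hat = np.polyval(coeffs, x)

mse = np.mean((y - y_hat) ** 2)          # inaccuracy measured over the whole domain
worst = np.max(np.abs(y - y_hat))        # a single bad point, taken alone, proves little
print(coeffs, mse, worst)
```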

However, after reading Moldbug's post I see that he does not try to make such an analogy; instead he tries to appeal to intuitive thinking. I think this is not a good argument, since intuition is the ability to acquire knowledge without inference or the use of reason, therefore saying that you used your intuition to arrive at a certain conclusion is basically saying that you used "something else" (similarly to how you cannot build stuff out of nonwood) - this category does not seem specific enough to be a good explanation. Humans are able to find a lot of patterns, some of which are not meaningful. It is an interesting problem how to recognize which patterns are meaningful and which aren't. But this applies to the whole field of history, not just Moldbug's ideas.

comment by Azathoth123 · 2014-12-08T01:16:03.816Z · LW(p) · GW(p)

One aspect of neoreactionary thought is that it relies on historical narratives instead of focusing on specific claims that could be true or false in a way that can be determined by evidence.

I don't see how it does this any more than any other political philosophy.

Replies from: ChristianKl
comment by ChristianKl · 2014-12-08T14:03:45.570Z · LW(p) · GW(p)

It's not true for someone who does get his beliefs by thinking about issues individually. Whether or not you call such a person having a political philosophy is another matter.

comment by Capla · 2014-11-20T22:21:02.662Z · LW(p) · GW(p)

If you could have perfect control of your own mind, what would you do with it?

(I realize the question is a bit vague. Please try and answer anyway.)

Replies from: CAE_Jones, polymathwannabe, MrMind, wadavis, Lumifer, ike, None
comment by CAE_Jones · 2014-11-21T02:47:31.257Z · LW(p) · GW(p)

The same stuff I normally do, except with less Akrasia and procrastinating, and more rapid research and self-correction.

So, basically, "Be a functional human being", rather than "sit around trying to do cool things, but cycle through Facebook and Lesswrong and a couple other sites all day instead".

comment by polymathwannabe · 2014-11-20T23:19:36.827Z · LW(p) · GW(p)

Learn every human language (and invent one that is more pleasant, more practical and more efficient than them all), free up bytes by deleting my entire MP3 collection and playing all my music inside my head (I can already do this to some degree), go to bookstores and consume dozens of books at a sitting by flipping the pages and remembering every word, catch up with my pending movies by watching them all at the same time, periodically look at HD photographs of the night sky to scan for differences that might point to supernovas or incoming asteroids, and donate my spare processing power to run both LHC@home and SETI@home in my head.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-21T09:41:53.814Z · LW(p) · GW(p)

Learn every human language (and invent one that is more pleasant, more practical and more efficient than them all)

Have you already invested effort into language invention?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-21T14:26:49.040Z · LW(p) · GW(p)

Yes, but so far the results have been too Eurocentric.

comment by MrMind · 2014-11-24T10:13:38.110Z · LW(p) · GW(p)

Pain and "ugh field" asymbolia. That would be pretty awesome...

Ugh field asymbolia would be the state in which you perceive the sensations attached to "ugh fields" but do the things anyway.

comment by wadavis · 2014-11-21T20:40:40.122Z · LW(p) · GW(p)

Set up the subconscious to cache its processes in memory palaces, so its work can be reviewed much like a debug file.

Replies from: Capla
comment by Capla · 2014-11-21T21:52:14.745Z · LW(p) · GW(p)

Hmm. Can you elaborate on how that would look? It sounds like you want to view your own source code, but I don't see why memory palaces are the best format (unless I don't understand how memory palaces can be used).

Replies from: wadavis
comment by wadavis · 2014-11-21T23:09:29.645Z · LW(p) · GW(p)

This is primarily inspired by Blink by Malcolm Gladwell combined with the cliche math teacher insisting that you show your work instead of jumping to the solution.

The goal would be to inspect your own snap judgements to understand the conclusions reached while screening for biases. So not exactly the source code but a print out of every variable the code writes, if the metaphor holds up.

A common example: you step out your door and know it is going to rain today. You know it is going to rain because a lifetime of experience in the region, combined with the perceived cloud cover, humidity, and air pressure, gives you a strong hunch it will rain. However, if asked to defend the hunch consciously, you would have a hard time even noticing the changes in humidity and air pressure.

Replies from: Capla
comment by Capla · 2014-11-21T23:32:33.989Z · LW(p) · GW(p)

Why in the form of memory palaces instead of mentally visualized printouts?

Replies from: wadavis
comment by wadavis · 2014-11-21T23:52:09.321Z · LW(p) · GW(p)

The memory palace was just the first idea for storing large amounts of unconnected information.

I have no experience with memory palaces.

comment by Lumifer · 2014-11-20T22:24:44.563Z · LW(p) · GW(p)

Achieve enlightenment.

Replies from: Capla
comment by Capla · 2014-11-21T02:06:53.415Z · LW(p) · GW(p)

I know my question was vague, but would you elaborate on what that means?

Replies from: Lumifer
comment by Lumifer · 2014-11-21T02:35:05.922Z · LW(p) · GW(p)

Well, here I meant the entirely traditional concepts of moksha or nirvana. The relevant abilities include escape from the bonds of the material world and ability to see things as they truly are :-)

comment by ike · 2014-11-21T03:30:11.620Z · LW(p) · GW(p)

Definitely take over the world. Study enough psychology to be able to guess other people's passwords, then hack into the NSA, then download their secret blackmail stash. I now have near-complete control over almost anyone who's been online for any significant amount of time, including politicians.

Replies from: RowanE, Lumifer
comment by RowanE · 2014-11-21T09:14:12.256Z · LW(p) · GW(p)

My university network has password requirements strict enough that when I had to make a new one, I generated a random 8-digit string and it disallowed it because it looked too much like a real word or phrase (I have no idea what), and I had to pick a string it generated for me that was "random enough". The NSA probably doesn't have much looser standards for its employees' passwords than my university has for its students', so I expect your advanced psychology skills approaching the level of mindreading would just tell you who's annoyed they weren't allowed to use "password", and who's got an even longer string of random digits than usual.
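
For a rough sense of why a genuinely random string defeats guessing, here's a back-of-the-envelope comparison (assuming, purely for illustration, an 8-character mixed alphanumeric string versus a password drawn from a list of roughly 10,000 common words):

```python
# Illustrative entropy comparison; the pool sizes are assumptions, not data.
import math

random_string_bits = 8 * math.log2(26 + 26 + 10)  # ~47.6 bits for 8 random alphanumerics
common_word_bits = math.log2(10_000)               # ~13.3 bits for a common-word password

print(random_string_bits, common_word_bits)
print(2 ** (random_string_bits - common_word_bits))  # ~2e10 times more guesses needed
```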

comment by Lumifer · 2014-11-21T03:38:29.264Z · LW(p) · GW(p)

Study enough psychology to be able to guess other people's passwords

You don't need to study psychology for that. The lists of common passwords are freely available along with tools to apply them.

then hack into the NSA

You don't imagine they rely on text passwords, do you..?

Replies from: ike
comment by ike · 2014-11-21T03:47:31.815Z · LW(p) · GW(p)

I'm reasonably sure that, at 100% control of my own brain, there's somewhere along the chain where I'd be able to figure out how to get in. Maybe not specifically passwords, but if I hacked some top employees' emails, I should be able to build on that. Or add myself to the top-secret clearance list and do it that way.

If the NSA used common passwords, they'd already have been hacked many times over. I'm talking about getting their secure passwords, which should be a lot easier then than it is now. Social engineering when you can think many times faster than the person you're talking to would be a game-changer. The specific method doesn't matter, especially in a hypothetical.

Are you taking the position that with a perfect brain, it would still be impractical to get into NSA databases? What about multiple brains, then?

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2014-11-21T03:51:33.924Z · LW(p) · GW(p)

I'm talking about getting their secure passwords

Proper security (nowadays) usually depends on a mix of three things -- what you know (passwords), what you have (physical tokens), and what you are (biometrics).

with a perfect brain

Not sure which capabilities you think a "perfect" brain will have. You still won't become an X-man.

Replies from: dxu
comment by dxu · 2014-11-21T05:27:07.133Z · LW(p) · GW(p)

Proper security (nowadays) usually depends on a mix of three things -- what you know (passwords), what you have (physical tokens), and what you are (biometrics).

Do you think it's possible to get into the NSA at some point along the security chain with only "what you know (passwords)"? Just curious.

Not sure which capabilities do you think a "perfect" brain will have. You still won't become an X-man.

I would assume that that means no more cognitive biases, perfect memory, no logical errors, etc. The increase in processing speed that would come from no longer having to deal with the corresponding mental garbage would no doubt be quite impressive. Furthermore, if you have "perfect control" as specified by the OP, you ought to be able to simulate people just by forcing your mind to act like theirs. It doesn't seem like too much of a stretch that you might be able to guess some NSA workers' passwords while simulating them with near-perfect fidelity.

Replies from: Lumifer
comment by Lumifer · 2014-11-21T07:24:42.640Z · LW(p) · GW(p)

Do you think it's possible to get into the NSA at some point along the security chain with only "what you know (passwords)"? Just curious.

Depends on what you mean by "get into the NSA", of course, but I'd rate your chances as very low, especially post-Snowden.

To start with, I assume their sensitive information is air-gapped.

The increase in processing speed that would come from no longer having to deal with the corresponding mental garbage would no doubt be quite impressive.

Not sure about that. You still have the same underlying biology, you don't get to optimize your neuron topology, and neurons are slow.

you ought to be able to simulate people just by forcing your mind to act like theirs.

No, that wouldn't work because you don't have enough information about their mind in order to simulate it.

you might be able to guess some NSA workers' passwords

Guessing passwords is not the hard part :-)

comment by ChristianKl · 2014-11-21T09:52:02.348Z · LW(p) · GW(p)

Are you taking the position that with a perfect brain, it would still be impractical to get into NSA databases?

Attacking the NSA is very high risk and probably still a dumb move.

comment by [deleted] · 2014-11-23T22:01:14.149Z · LW(p) · GW(p)

I'd crank up the pleasure center of my brain until the handle broke off.

Why hasn't anyone mentioned this yet? I'm sad to see that others would sacrifice greatly just to be more capable or powerful.

Hopefully, self-deception is to blame; others want to want to improve themselves instead, but if they were able to sample bliss, they wouldn't trade it for anything.

comment by advancedatheist · 2014-11-22T15:25:09.148Z · LW(p) · GW(p)

Speaking of Social Justice Warriors (SJW's) versus a man familiar to many of us:

http://www.bloombergview.com/articles/2014-11-21/economics-is-a-dismal-science-for-women

http://www.overcomingbias.com/2014/11/hanson-loves-moose-caca.html

http://www.unz.com/isteve/noah-smith-tries-to-sic-shirtstorm-mob-on-poor-robin-hanson/

Replies from: army1987, fubarobfusco, Azathoth123
comment by A1987dM (army1987) · 2014-11-22T22:18:40.246Z · LW(p) · GW(p)

Am I the only one to whom ‘what's wrong with raping someone if they don't get injured, traumatized, pregnant, nor get STDs’ sounds a lot like ‘what's wrong with driving at 100 km/h while drunk, sleep-deprived and talking on the phone if you don't have any accidents’?

Replies from: knb, ChristianKl, fubarobfusco
comment by knb · 2014-11-23T02:42:29.683Z · LW(p) · GW(p)

You obviously missed the point completely. Hanson's thought experiment wasn't claiming there would be nothing wrong with committing that type of rape; his point was that it would be traumatic in the same way being cuckolded is traumatic. And yet committing that type of rape is illegal, while cuckolding is not even a misdemeanor. His point wasn't to denigrate the psychological harm of rape, it was to investigate the roots of the difference in the way these harms are treated.

comment by ChristianKl · 2014-11-23T12:22:47.013Z · LW(p) · GW(p)

It's not the same thing; rape is still a violation of bodily integrity even if there's no health damage or memories.

If a sterile man without STDs drugs a women to be unconscious and then rapes her, that's still wrong and still should be punished strongly by law. The sacred value of a woman's control over her own body is still violated. The fact that Hanson doesn't address that sacred value is what makes his post a bit creepy.

Replies from: fubarobfusco, Azathoth123
comment by fubarobfusco · 2014-11-23T17:02:22.444Z · LW(p) · GW(p)

It isn't just a matter of "sacred values". It's a matter of the consequences of making the statement.

"What's wrong with doing if ?" will predictably have the effect, on the margin, of encouraging people to do even when don't actually hold. We can predict this for reasons closely analogous to why knowing about biases can hurt people: Arming people with more rationalizations for bad things that they already were tempted to do will generally make them worse, not better.

Conducting motivated search for conditions under which something normally very harmful can be justified as barely non-harmful is the sort of thing someone would do, in conversation, if they wanted to negotiate down the badness of a specific act.

"What I did isn't real reckless driving. In real reckless driving — the maximally bad sort — the driver has to be driving too fast, while drunk, sleep-deprived, and talking on the phone. Me, I was only sleep-deprived. So stop treating me like my drugged-out ass ran over a dozen schoolkids or something."

(See actual political discussions of "real rape".)

Replies from: ChristianKl, Azathoth123
comment by ChristianKl · 2014-11-23T17:58:04.131Z · LW(p) · GW(p)

It isn't just a matter of "sacred values".

Sacred values is a term out of modern decision theory. Putting quotes around it is like putting quotes around cognitive bias.

"What's wrong with doing if ?" will predictably have the effect, on the margin, of encouraging people to do even when don't actually hold.

I don't think that's a strong argument. It's quite useful to play out scenarios of "is X still a bad idea if we change Y" to understand why we think X is a bad idea. It's how you do reductionist analysis. You reduce something into separate parts to see which of those parts is the real issue.

If I say: "Stealing is bad but there are cases where a person has to steal to avoid starvation.", that's a permissible statement. We don't ban that kind of analysis just because stealing is generally bad.

(See actual political discussions of "real rape".)

I think it's quite foolish to believe that a societal debate about what rape actually is would be bad when your goal is to reduce rape. Tabooing that discussion prevents people from speaking openly in polite company about issues of consent, and as a result a lot of people don't think deeply about those issues and make bad decisions.

It's silly to try to raise rape awareness while at the same time wanting to prevent the topic from getting discussed.

comment by Azathoth123 · 2014-11-27T01:24:30.756Z · LW(p) · GW(p)

(See actual political discussions of "real rape".)

Which is extremely idiotic and mostly seems to consist of feminists attempting to get away with further and further expanding the definition of "rape" while keeping the word's connotations the same.

comment by Azathoth123 · 2014-11-27T01:19:50.706Z · LW(p) · GW(p)

The sacred value of a woman's control over her own body is still violated.

Would you accept the same argument for cuckoldry violating the sacredness value of the marriage? If so then wasn't Hanson comparing two sacredness violations? If not how do you decide which sacredness values to accept?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-27T07:02:10.306Z · LW(p) · GW(p)

Would you accept the same argument for cuckoldry violating the sacredness value of the marriage? If so then wasn't Hanson comparing two sacredness violations?

No, Hanson didn't speak about sacred values at all. If one wants to make the argument that marriage is sacred and violations should be punished, then the logical conclusion is laws that criminalize adultery. Hanson is not in favor of those.

comment by fubarobfusco · 2014-11-23T00:04:04.661Z · LW(p) · GW(p)

Yes, a little bit.

However, one difference is that in the rape example, some people in the audience are more likely to see themselves in the role of the rapist ("Is he saying that I should be allowed to get away with that, if it really caused no harm?") whereas others are more likely to see themselves in the role of the rape victim ("Is he saying that someone should be allowed to do that to me, if they can explain away my objections?").

comment by fubarobfusco · 2014-11-22T17:57:07.341Z · LW(p) · GW(p)

Summary:

1. Some economists (including Robin Hanson) said some things using rape as an example to illustrate abstract theoretical points, which pissed a lot of people off to whom rape is not an abstract matter. One of them (Steve Landsburg) also participated in sexual harassment of an activist. Some other economists (including Lawrence Summers) claimed that women are (relatively) bad at math and science.

Still other economists actually performed a study and found that economics has "a persistent sex gap in promotion that cannot be readily explained by productivity differences" and that "the average female economist -- unlike the average female physicist or mathematician -- is likely to have a better publication record than her male peers", and thus that economics as an institution is biased against women practitioners.

2. Robin Hanson vigorously agrees that economics as an institution is biased against women practitioners, but doesn't think his use of rape as a theoretical example has anything to do with that, and certainly that he didn't intend to minimize rape. Just the opposite: "Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape." He cites professional racist Steve Sailer in his defense, because that's really going to change anyone's minds.

3. Steve Sailer calls a bunch of people vile names. Yawn.


My conclusion: Robin Hanson is probably not personally responsible for the observed phenomenon of economics as an institution being biased against women practitioners. But the level of instrumental rationality and/or good rhetoric demonstrated here is not super great. If you want to convince people that you are not a bigot, why cite a bigot's defense of you as his ally?

Second, it isn't clear to me that the "badness" of using rape as a theoretical example has anything to do with minimizing it. Rather, it has to do with choosing to offhandedly mention something really awful which is linked to the victimization of a fraction of your students or readers. Being constantly squicked by gratuitous scary examples that selectively target you is not great for the concentration.

In a book about management that my partner S was reading recently, there was an analogy where managers are compared to ship captains. A page later, bad managers are described as "whipping" their underlings — that is, using punishment as an incentive. Now, S is from the West Indies, and the proximity of "ship captain" and "whipping" immediately reminded her of the slave trade, and gave her a pretty heavy-duty squick reaction. Does this mean the authors of the book intended to turn people off of being managers through stereotype threat or something? No. Does it mean they even intended to remind anyone of the slave trade? No. Does it mean that these analogies cumulatively had the effect of distracting S from the point the authors were trying to make? Yes.

If the authors were challenged on this, would they resort to citing professional racists in their defense? Probably not; they'd probably say something like "oops, wow, that was totally unintentional and really embarrassing."

Replies from: Salemicus
comment by Salemicus · 2014-11-22T18:52:30.978Z · LW(p) · GW(p)

One of them (Steve Landsburg) also participated in sexual harassment of an activist.

This is so obviously and egregiously false that it makes me question your bona fides.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-23T00:08:41.908Z · LW(p) · GW(p)

Okay, how about "Steve Landsburg justified and participated in calling someone a lot of sexual slurs in response to that person's testimony before Congress"?

(I'll admit that "sexual harassment" is a pretty vague term and thus ill-chosen. Landsburg's behavior was still detestable and vile by either progressive or conservative standards of acceptable public conduct: he managed to be both a frothing sexist and an ungentlemanly boor.)

Replies from: Salemicus, ChristianKl, Daniel_Burfoot
comment by Salemicus · 2014-11-23T10:22:27.037Z · LW(p) · GW(p)

Steve Landsburg justified and participated in calling someone a lot of sexual slurs

Still false. Easily verifiably, objectively, false.

At best, you have a reckless disregard for the truth. I'm out.

EDIT: Just to be clear for others reading this, the post has never been deleted, so anyone can check it right here. Landsburg does not call Fluke either a slut or a prostitute. He calls her an “extortionist with an overweening sense of entitlement” which is no compliment, but not a sexual slur. You will note that even Noah Smith, in his vile hit piece, does not accuse Landsburg of sexual slurs. Instead he says that Landsburg 'seemed to call pro-contraception activist Sandra Fluke a “prostitute"' (italics mine). Why this "seemed"? Because if you read the post, you'll note that Landsburg explicitly stated that calling Fluke a prostitute was wrong! Yet people who I can only assume engaged in deliberate misreading used it to accuse him of calling her a prostitute, and Smith now piggybacks on those accusations to say "seemed" about a statement that he knows is a tawdry lie - but by using that word, he can claim he never said anything false himself.

It's disgusting, and Smith, and you, should be ashamed of yourselves.

comment by ChristianKl · 2014-11-23T00:52:15.072Z · LW(p) · GW(p)

Stretching terms like "sexual harassment" to apply them to more people doesn't do any good. It weakens them when they are used in the proper context. There's no reason to call everything that's not acceptable public conduct "sexual harassment".

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-23T00:56:59.864Z · LW(p) · GW(p)

As I said, vague and ill-chosen.

Still, the problem remains. If you lie down with dogs, you get up with fleas. Hanson's choice to associate himself — and his audience — with people who do nasty and rude things in public and with people who espouse and practice harming others for their views is unfortunate. It is especially unfortunate in this context; it thoroughly undermines his apparent attempt to dissociate himself from some of the same.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-23T12:04:51.548Z · LW(p) · GW(p)

I think "vague" is a cop-out. The term does have a clear meaning; it's not there for you to throw around as a slur. The fact that you incorrectly slur other people while complaining about them engaging in slurring has its irony.

comment by Daniel_Burfoot · 2014-11-23T17:09:07.738Z · LW(p) · GW(p)

Landsburg's behavior was still detestable and vile

I think falsely accusing someone of sexual harassment is detestable and vile.

comment by Azathoth123 · 2014-11-27T01:29:51.531Z · LW(p) · GW(p)

I wonder if Eliezer will now attempt to disassociate himself from Hanson like he has the NRx's.

comment by sixes_and_sevens · 2014-11-21T13:03:11.149Z · LW(p) · GW(p)

I was writing a Markov text generator yesterday, and happened to have a bunch of corpora made up of Less Wrong comments lying around from a previous toy project. This quickly resulted in the Automated Gwern Comment Generator, and then the Automated sixes_and_sevens Comment Generator.
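
For reference, a minimal word-level version of this kind of generator (a hypothetical sketch rather than the exact script I used; the corpus filename is a placeholder) can be as short as:

```python
# Minimal word-level Markov text generator sketch.
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=60):
    prefix = random.choice(list(chain.keys()))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("lw_comments.txt").read()  # placeholder corpus file
print(generate(build_chain(corpus, order=2)))
```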

Anyone who's ever messed around with text generated from simple Markov processes (or taken the time to read the content of some of their spam messages) will be familiar with the hilarious, and sometimes strangely lucid, garbage they come out with. Here is AutoGwern:

Why is psychopathy not strongly confident. In a slippery slope down to affirm or something. Several common misunderstandings are superior, but I copied from StackExchange running in a new trilemma on the police, there is a very large effect sizes. The DNA studies at the clandestine level.

Plausible yet pseudonymously provided a measure of the role of SATs is pretty darn awful advice, and how they plan to use data from markets that shut down in self-experimentation, then that. The repeating logic. It wouldn't surprise when two women sent me unsolicited PM's asking if we drop the <1m bitcoins Satoshi Nakamoto or La Griffe du Lion. That just means that it.

Thanks.

(I should point out that I'm picking on Gwern because his contribution to LW means I have an especially massive text file with his name on it.)

Here's some AutoMe:

I'll be reading this. Then the restless spirit of Paul Graham sat on my body in some obscure location, Nigel Hawthorne was the hypothetical was that sword-privilege is a response, though in the idea in my life so cognitively exhausting relative to your calibrated sitters. Which should they pick? This seems credible, but I'd be a TV and film from a welfare system.

It's your lucky day. I can start hearing a bicycle. There's even in a position on all users' self-esteem, adjusting it for various other OKCupid users to encourage them, There are a small box of which routinely deals with his Dangerously Ambitious Startup Ideas essay, about how well-trained in deconstructivism.

I've actually a lecture by default?

This is perhaps an unfair question, which you'll only a sixth of the QS-types take in a group to help setting off to the music, OK? A little less fat. It isn't obvious, but I'm actually kind of these selections of the above, Yvain could also happen when I feel like my brain compels me to be the money pump.

I have become weirdly fascinated by these. Although they ditch any comprehensible meaning, they preserve distinctive tics in writing style and vocabulary, and in doing so, they preserve elements of tone and sentiment. Without any meaningful content to try and parse, it's a lot easier to observe what the written style feels like.

On a less introspective note, it's also interesting to note how dumping out ~300 words maintains characteristics of Less Wrong posts, like edits at the end, or spontaneous patches of Markov-generated rot13.

Also Yvain could happen when I feel like my brain compels me to be the money pump.

Replies from: Vaniver, Viliam_Bur, gwern
comment by Vaniver · 2014-11-21T14:16:47.735Z · LW(p) · GW(p)

Then the restless spirit of Paul Graham sat on my body in some obscure location

Well that's a fanfiction I haven't read before.

comment by Viliam_Bur · 2014-11-21T13:18:57.765Z · LW(p) · GW(p)

This should be generated for every user on their "overview" page. :D

comment by gwern · 2014-11-21T16:53:30.924Z · LW(p) · GW(p)

I did this a while back using cobe: dumped in gwern.net, all my IRC logs, LW comments etc. It seems that the key is to set the depth fairly high to get more meaningful language out, but then it works eerily well: with some curation, it really did sound like me! (As long as you were reading quickly and weren't trying to extract real meaning out of it. I wonder if a recurrent neural network could do even better?) You can see some at http://pastebin.com/tPGL300J - I particularly like

gwern> haha oh wow, for a moment i thought it wasn't working. and then I actually read the entire line slowly going 'wait a minute...'
gwern> = 15:35 <@gwern> (for all that I read fiction the right way to think of asking for a different purpose... the company did last year; but in fact was designed to protect MO5(G) from the "Alternate Reality" scene to Shinji saying "I don't think reader is central. I did avoid the Internet, in the Japanese media industries. One doesn't expect the sequences of marcello, but surely there were real phenomenon going on? Well, either you have a thesis? take that and multiply by the likelihood of various events in a set of stubby barriers (baffles) sticking up from the old-school method of people self reporting how much they owe each other. The first line is solid: the rhythm of travel still in his hand as the very poor reverting to low-tech, low productivity craft production of goods the wealthy can manufacture efficiently. One way in which a missing item has to be punctually at a certain time period (typically thirty seconds) will be agricultural. Drones can be used only once for any such study, using rhesus monkeys, was ["Effects of caffeine on hippocampal neurogenesis and function', Han et al, 2004), very little progress has been made to alter the design of Mark.06 is a bit secluded, but is at least 1 was a complete waste of time'
A> that is wonderfully coherent
B> That's more like it


gwern> '<@gwern> kurzweil is a flake...' <-- it speaks the truth!


gwern> '= 15:32 < gwern> turns out I'm not a loaf of white bread'


gwern> '= 06:05 < gwern> crime pays'


gwern> 'The atheist church fits in a single session on the Millennium Falcon.'
C> what.
gwern> well, it's just Han. everyone else believes in the Force or whatever Wookiees believe in


gwern> '17:08 < gwern> 'A lot of people are working through SICP all the time (FLOSS programmers may lose interest, another 17 years will see a sequence of output to get the impression you know it’s up to something,” “It just seems horribly inappropriate and wrong, but... 'Medication' here is 'kusuri.'"] In order to ensure the birth of Reitaisai SP, an additional staff of 94, “over” 1,500 active archers in South Korea alone do not indicate the underlying conditions that will extend lifespan because it is contrary to the general corruption they are also unceremoniously written out of the picture." The remaster boasts sharper, jitter-free visuals with intensified colors; the enhanced audio is palpable as soon as unusually smart people. The best covered seems to be dead.''
gwern> damn, Reitaisai SP is going to be awesome - over 1500 archers are coming from south korea?
B> From there alone!
gwern> 'their arrows will blot out the sun!'
gwern> 'good. then we will cosplay in the shade.'
A> SMARTAAAA
gwern> LOLIS! HOLLLDDDDD!!!!!!


gwern> 'the video is killing the ref, I'm getting 1993/1994 addresses for that one episode and then went through her budget with them, if I set 'div#content { line-height: 100%; }', I still see the white snakes' <-- CSS would be so much easier to program if we could just get rid of the damn white snakes in the browsers!


gwern> 'gwern> puritan: supposedly my .htaccess was put in charge of directing an episode for the first big step: a switch between mother and friend is an easy to read it a lot as they run a huge trade surplus, and arises from unmeasured benefits, such as transfer effects that scaled with training task gains. Again, such an experiment is just a rose falling apart. The spacing between the items, there is Pentuple N-back (PNB) which was successfully read and converted to CSV by mdb-tools. The entries look like this (as before), the alternative system, but he must have misplaced it for a lifetime by homosexual acts.' <-- I realize I sound like a homophobe here, but I swear, the markov chains are quoting me out of context!
C> pentuple n-back converted to CSV. how does that work exactly?
gwern> C: oh, you just treat the black spaces as the delimiter and parse as usual
D> "such an experiment is just a rose falling apart" :)


gwern> '= 00:35 <@gwern> second episode of EVANGELION: he keeps undermining himself! there must not be very practical without them. but even if, say, NERV base. (Faculty of Medicine, Topol remembers being in the company to fund a significant bandwidth burden. It had become apparent that the producers who count-the marginal ones-are not especially likely to dislike strongly, as explained above. Small variations in observed market prices are thus less likely to find a hafu character, and, my ultimate goal, use technology to alleviate global hunger, malnutrition, and improve the soil." "Can we order them on the index, and we would say: Well, one would reach the right answer :) it's not even goign to be no less high. Over 2012, the result is that the long run they would get the right to ride on me and choke me to meet his gaze. “Can’t do the study to try and reach out with their thumbs. That's very rare. Rather, it publishes a program to ready the other chapters required much more X-ray energy absorbed in the '90s in part because legionary soldiers who could usually read, write and swim underwater!' <-- that's why Rome conquered the known world: fucking underwater legions

comment by maxikov · 2014-11-19T12:16:23.236Z · LW(p) · GW(p)

Couple of random thoughts about cryonics:

  • It would actually be better to have cryonics legally recognized as a burial ritual than as a cadaver experimentation. In that way it can be performed on someone who hasn't formally signed a will, granting their body as an anatomical gift to the cryonic service provider. Sure, ideally it should be considered a medical procedure on a living person in a critical condition, but passing such legislation is next to impossible in the foreseeable future, whereas the former sounds quite feasible.

  • The stabilization procedure should be recognized as an acceptable form of active euthanasia. This is probably the shortest way to get to work with not-yet-brain-dead humans, and it would allow people to trade a couple of months or years of rather painful life for better chances at living again.

  • Insulin injections should probably be a part of the stabilization protocol (especially in the previous case). According to "Glucose affects the severity of hypoxic-ischemic brain injury in newborn pigs" by LeBlanc MH et al, hypoglycemic brains withstand hypoxia much better than normal. That totally makes sense: oxygen is mainly consumed in oxidizing glucose, so if there's nothing to oxidize, oxygen consumption will decrease.

  • Some of the major problems of cryonics can probably be solved by preventing water from expanding upon freezing. According to 1 and 2, ice is denser than water at about 30 kbar. That is a bit technically complicated, but I would speculate that with this trick we could have reversible freezing in wood frogs right now.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-19T14:53:22.653Z · LW(p) · GW(p)

Sure, ideally it should be considered a medical procedure on a living person in a critical condition

That would destroy cryonics companies who make money via insurance that depends on people legally dying.

Some of the major problems of cryonics can probably be solved by preventing water from expanding upon freezing.

What do you mean exactly? If I understood it right then vitrification is done to prevent ice crystals from forming. Do you mean something different?

Replies from: ZankerH, DanielLC, maxikov
comment by ZankerH · 2014-11-19T17:48:39.861Z · LW(p) · GW(p)

Vitrification also amounts to antifreeze poisoning. Quite a big deal if you're counting on full-body revival (which, granted, I don't consider feasible in the first place - cryonics is, in the best case, an information backup as far as I'm concerned).

Replies from: maxikov
comment by maxikov · 2014-11-19T21:58:47.876Z · LW(p) · GW(p)

That's the whole point: if we can prevent water from expanding upon freezing by keeping the sample under high pressure, thus (probably) making crystal formation harmless, we can use less cryoprotectant. I don't know if it's possible to get rid of it completely, so I mentioned wood frogs, which already have all the mechanisms necessary to survive slightly below the freezing temperature. It's just that their cryoprotectant isn't good enough to go any colder, but it's not so poisonous either. Also, they're small, so it's easier to find high-pressure units to fit them in - they're perfect model organisms for cryonics research.

As of now, cryonics is at best an information backup indeed, but I see no reason why we should be content with that. Yes, we will probably eventually invent advanced nanomachinery, as well as whole brain simulation and scanning, but that's too many unknowns in the equation. We could do much better than that.

comment by DanielLC · 2014-11-22T00:14:33.427Z · LW(p) · GW(p)

That would destroy cryonics companies who make money via insurance that depends on people legally dying.

The insurance companies would have to alter their contracts. Ideally, whatever legislation classifies cryonics as a medical procedure would include a clause that for the purposes of any contract written before then, it counts as death.

comment by maxikov · 2014-11-19T21:46:44.272Z · LW(p) · GW(p)

That would destroy cryonics companies who make money via insurance that depends on people legally dying.

Wouldn't it just shift to health insurance in this case? But generally, yes, recognizing cryonic patients as alive has a lot of legal ramifications. On the other hand, it provides much better protection against unfreezing: just like with patients in a persistent vegetative state, someone authorized has to actively make a decision to kill them, as opposed to no legal protection at all. I'm not sure which of these is the net positive. Besides, that would challenge the current definition of death, which basically boils down to "we positively can do nothing to bring the patient back from this state". Including potential technologies in the definition is a rather big perspective change, one that can have consequences for vegetative patients as well.

If I understood it right then vitrification is done to prevent ice crystals from forming. Do you mean something different?

As ZankerH mentioned below, vitrification leads to cryoprotectant poisoning, which is a sufficiently big problem to prevent us from experimenting with unfreezing even in small organisms. If the function of the cryoprotectant can be fully or partially replaced by keeping the sample under high pressure, that problem is mostly solved. That doesn't prevent crystals from forming, but unlike normal ice, these crystals take up less volume than the water they were made of, so they shouldn't damage the cells. In addition, amorphous solids aren't guaranteed to be stable, and can undergo slow crystallization. I'm not sure how big of a problem that is for cryonics, but in the case of going directly to ice-IX, that's definitely not a problem anymore.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-19T21:51:25.985Z · LW(p) · GW(p)

Wouldn't it just shift to health insurance in this case?

Health insurance doesn't automatically pay for everything.

On the other hand, it provides a much better protection against unfreezing: just like with the patients in a persistent vegetative state, someone authorized has to actively make a decision to kill them, as opposed to no legal protection at all.

A bit, but not that much. A company that can't afford to keep freezing would still shut down. Nobody is forced to pay to keep the machines on in every case.

comment by [deleted] · 2014-11-18T06:38:58.972Z · LW(p) · GW(p)

This year's edition of Stoic Week begins next monday. It's a seven day long introduction to the basic ideas of Stoicism combined with an attempt to measure its effects.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-11-22T20:27:09.829Z · LW(p) · GW(p)

I have signed up for this, and also registered for the meeting in London on the 29th. Has anyone else?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-11-29T12:39:03.254Z · LW(p) · GW(p)

I am at the event in London right now. Anyone else?

comment by Brillyant · 2014-11-17T21:05:13.191Z · LW(p) · GW(p)

Anyone know of, or have links to, research into the question of whether the near-unlimited access to global media via the Internet, etc. has a net negative effect on people's self-esteem?

Couple examples...

Facebook

My brother quit Facebook, saying, "I know these people's lives are not nearly as interesting as their feeds make them out to be. Every time I hang out with these people... and most of them sit around trying to decide what to watch on Netflix six nights out of seven. It's annoying."

Excepting my cynical brother, FB seems to dupe a lot of people—I've heard it anecdotally and it seems like a reasonable hypothesis—into believing the lives of their friends are much better and more interesting than their own. I hear people say, "People post these attractive pictures at fun events while I'm at home eating reheated Noodles & Co in my sweatpants..."

The Internet and Perceptions of Beauty

I had a discussion about this with a female friend recently, as to whether she thought female perceptions of beauty had become way out of whack since the Internet.

Thinking about it... In 1930, you might be the most beautiful woman anyone had ever seen in your lifetime. And because people were disconnected, there would be thousands, or even millions, of "Most-Beautiful-Evers".

In 1980, there would be far fewer MBEs due to TV and print media.

In 2014, there is a short list of MBEs due to...well...Buzzfeed alone, and everyone else is feeling comparatively less attractive as a result.

It seems to be the sort of "big fish in little pond" scenario everyone goes through as they progress in life. Except accelerated by technology.

Is there more info on this anyone is aware of?

(Note: Anticipating criticism of using females in the second example... Sorry. Feel free to insert male interchangeably if it suits you.)

Replies from: Lumifer, Azathoth123, fubarobfusco
comment by Lumifer · 2014-11-17T21:47:15.668Z · LW(p) · GW(p)

There is TONS of mostly feminist literature on how movies/internet/Barbie/etc. teach women their bodies and themselves are inadequate and so lower their self-esteem. It is a very popular feminist trope.

Replies from: Brillyant
comment by Brillyant · 2014-11-17T22:32:25.176Z · LW(p) · GW(p)

Sure. I should have been clearer. I guess I'm more curious about the degree to which this exists because of the internet. Assuming it's true that self-esteem takes a negative hit (at least initially) due to realizing you're a small fish in a huge pond, does the availability of information compound that effect to a level where it is exponentially consequential?

Ultimately, I think it's a very natural and spiritual process to go through—to realize you are, in fact, not the center of the universe. But it's been a more gradual process in our history. Nowadays, it's like saturation in the idea that "I suck. I'm boring. I'm ugly" 24/7 at the age of 10 or before.

Replies from: Lumifer, JQuinton, MathiasZaman
comment by Lumifer · 2014-11-18T02:25:23.247Z · LW(p) · GW(p)

does the availability of information compound that effect to a level where it is exponentially consequential?

Could you, please, debuzzwordify your question and add some meaning to it?

Replies from: Brillyant
comment by Brillyant · 2014-11-18T17:57:36.534Z · LW(p) · GW(p)

I'll try to be more clear. No promises.

Let's assume there is a social comparison element to self-esteem. Whether it be strength, intelligence, tennis ability, driving skills, or beauty, people reference where they stand in any given dimension among the total population to gauge how they feel about themselves.

If you are in the 99th percentile of a good trait based on the portion of the population you're aware of, you generally feel good about identifying with that. Given awareness of a large enough population, you might fall from the 99th to the 50th percentile. Or to the 10th. Your trait was unchanged, but your awareness of its rank among the whole population changed.

Is it possible this exponential leap in the sheer number of people we can (or are almost forced to) compare ourselves to because of the internet corresponds directly to the hit our self-esteem takes? If so, how severe are the consequences?
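
One way the numbers could work out (a toy sketch with made-up figures, assuming the wider pool the internet exposes you to is not representative of your local one):

```python
# The same trait value, ranked against a small local pool vs. a large,
# self-selected online pool. All distributions here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
my_score = 130.0                                    # unchanged in both comparisons

local_pool = rng.normal(100, 15, size=150)          # people you actually know
online_pool = rng.normal(130, 15, size=1_000_000)   # the enthusiasts the internet surfaces

def percentile_rank(score, pool):
    return 100.0 * np.mean(pool < score)

print(percentile_rank(my_score, local_pool))   # roughly 97-99: big fish
print(percentile_rank(my_score, online_pool))  # roughly 50: same fish, much bigger pond
```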

I've read some about 'Facebook Depression', where one of the components is the envy and disillusionment that comes from seeing how everyone else is doing cool things while you're sitting at home. I was just wondering about people's thoughts about the Internet writ large.

Replies from: Lumifer
comment by Lumifer · 2014-11-18T19:01:54.190Z · LW(p) · GW(p)

Is it possible this exponential leap in the sheer number of people we can (or are almost forced to) compare ourselves because of the internet corresponds directly to the hit our self-esteem takes?

First, people differentiate between the online world and the meatspace world. If I am the prettiest girl at school, I may be aware that Kim Kardashian's butt almost broke the internet and I (hopefully) realize that my butt does not have the same capabilities, but I am still the prettiest girl at my school. "Population" is different and here the relevant metric is closeness to you. Comparing yourself to people you meet every day is rather different from comparing yourself to pop star pictures on a screen.

Second, this whole self-esteem thing is driven by System 1 and System 1 is pretty bad at large numbers. In fact, I suspect that System 1 counts like this: one, two, three, Miller's number, ~12, ~30, Dunbar's number, many (see e.g. this). In this progression the jump from 1 million to 10 million doesn't happen -- both are "many".

Replies from: Brillyant
comment by Brillyant · 2014-11-18T22:23:49.640Z · LW(p) · GW(p)

First, people differentiate between the online world and the meatspace world.

Of course. But I'm not sure they are able to fully separate the two. Internet has some effect, right?

If I am the prettiest girl at school, I may be aware that Kim Kardashian's butt almost broke the internet and I (hopefully) realize that my butt does not have the same capabilities, but I am still the prettiest girl at my school.

True. And people will always enjoy benefits from being the biggest fish in their particular pond, whatever population defines that.

"Population" is different and here the relevant metric is closeness to you.

A person may believe they are in the 99th percentile by assuming the population they are in is representative of the global population. Even if this isn't true, there will be zero loss of self-esteem without sufficient spread of information, since said person may never become aware of the objective reality that they are in fact in, say, the bottom 20%.

Comparing yourself to people you meet every day is rather different from comparing yourself to pop star pictures on a screen.

Why?

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:11:52.984Z · LW(p) · GW(p)

Internet has some effect, right?

It certainly has, we are only discussing its magnitude.

Why?

For example, you interact with the former but you do not with the latter.

Replies from: Brillyant
comment by Brillyant · 2014-11-19T16:20:44.092Z · LW(p) · GW(p)

It certainly has, we are only discussing its magnitude.

Your language left it unclear to me whether you thought the differentiation was total.

For example, you interact with the former but you do not with the latter.

Can you explain how that affects anything? (In fact, I've seen cases to the contrary of what you seem to be suggesting all the time. People often idolize pop stars they've never met and think of them as flawless... due, in some part, to the fact they don't get to see them up close in their unedited regular-ness.)

comment by JQuinton · 2014-11-18T22:25:41.339Z · LW(p) · GW(p)

I think I understand what you're talking about.

I didn't get internet access until I was almost in my 20s. So I grew up with certain talents where friends/family would consistently tell me that I was the best at what I did. Nowadays, you can go to online discussion boards where people who are the best of the best in field X congregate and see just how "average" you are in that bigger pond.

Though I was good enough to get into specialized high schools/colleges for that, I chose not to go that route. I'm guessing the same realization of how average I was in that larger pond, where everyone is at the top of their game, would have happened anyway had I gone to those specialized schools.

comment by MathiasZaman · 2014-11-18T07:27:51.867Z · LW(p) · GW(p)

Nowadays, it's like saturation in the idea that "I suck. I'm boring. I'm ugly" 24/7 at the age of 10 or before.

This is why people should move to tumblr, where the people know they're boring and ugly and celebrate that fact :-)

(I'm joking, but in my experience the LW-tumblr space is very accepting of and open about defects like that. Despite the insistence of calling everyone cute.)

Replies from: maxikov
comment by maxikov · 2014-11-20T01:13:41.841Z · LW(p) · GW(p)

I'm not sure if it works with physical attractiveness, but in the case of intellectual adequacy, I just don't let any internal doubts about my competence interfere with external confidence. Even if I suspect that I'm not as smart as the people around me, I still act exactly as if I am.

comment by Azathoth123 · 2014-11-21T06:12:06.705Z · LW(p) · GW(p)

I had a discussion about this with a female friend recently as to whether she thought female perceptions of beauty had become way out of whack since the Internet.

I think the main damage may actually come from female perceptions of male sexiness (and conversely), e.g., a girl being disappointed in her boyfriend because he isn't Brad Pitt.

Edit: fixed.

Replies from: NancyLebovitz, FiftyTwo
comment by NancyLebovitz · 2014-11-21T12:54:53.705Z · LW(p) · GW(p)

Female perceptions of beauty were way out of whack before the internet. The internet may be making things worse, but I think there's a natural drift towards more extreme standards. If the ideal is to be strikingly thin, then the demand will be to be ever thinner.

Male standards have gotten more extreme as a bodybuilder aesthetic has partially taken hold.

Replies from: sixes_and_sevens, ChristianKl
comment by sixes_and_sevens · 2014-11-21T13:17:01.500Z · LW(p) · GW(p)

In the early 90s there was a TV show called Time Trax, about an Übermensch law enforcement officer from 200 years in the future, sent back to apprehend fugitives from his own time, who were carrying out futuristic crimes in the 20th century.

I re-watched a couple of episodes recently. Something that stood out, beyond the Windows Movie Maker level special effects, was how non-ripped the main character was.

comment by ChristianKl · 2014-11-22T21:10:26.255Z · LW(p) · GW(p)

Thinness is a very superficial standard. It's something you can use to judge even people who are fake and who hide their emotions: you don't need more than a picture to see whether someone is thin.

It's harder to tell visually whether someone is depressed. The average person won't identify the physical signs for it, especially not from an image on the internet.

Yesterday I danced Salsa for a while with a girl who's a beginner but has been dancing for a few months. She's thin, but she doesn't really relax. Even after a few songs her hands were still cold because they aren't well vascularized. I find a girl who weighs a bit more but who's in touch with her body more beautiful.

Height is also a very interesting topic. Plenty of girls think they would be more beautiful if they were 1.80m tall instead of 1.60m, because models are tall. On the catwalk being tall matters. On the other hand, I think a majority of guys prefer a 1.60m girl over a 1.80m girl. Shorter girls get more OkCupid messages than taller girls.

comment by FiftyTwo · 2014-11-21T08:33:29.539Z · LW(p) · GW(p)

Why is that the "main" damage? I'd agree male appearance standards have also changed, but in general Western society values women on their appearance more than men, so you'd expect the psychological impact to be larger.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T08:45:38.162Z · LW(p) · GW(p)

Sorry, I was revising the comment, mistake corrected.

comment by fubarobfusco · 2014-11-19T07:16:51.203Z · LW(p) · GW(p)

On the other hand, no matter how weird your kink is, the Internet will let you find other people who share it.

comment by Dahlen · 2014-11-17T18:10:11.033Z · LW(p) · GW(p)

Advice/help needed: how do I study math by doing lots of exercises when there's nobody there to clue me in when I get stuck?

It's a stupid problem, but because of it I've been stuck on embarrassingly simple math since forever, when (considering all the textbooks and resources I have and the length of time I've had it as a goal) I should have been years ahead of my peers. Instead, I'm many years behind. (Truth be told, when performance is tested I'm about the same as my peers. But that's because my peers and I have only struggled for a passing grade. That's not what my standard of knowledge is. I want to learn everything as thoroughly as possible, to exhaust the textbook as a source of info; I usually do this by writing down the entire textbook, or at least every piece of non-filler info.)

There is a great disparity between the level of math I've been acquainted with during my education, and the level of math at which I can actually do all the exercises effortlessly. In theory by now I'm well into undergraduate calculus and linear algebra. In practice I need to finish a precalculus exercise book (tried and couldn't). While I'm learning math, I constantly oscillate between boredom ("I'm too old for this shit" ; "I've seen this proof tens of times before") and the feeling of getting stuck on a simple matter because of a momentary lack of algebraic insight ("I could solve this in an instant if only I could get rid of that radical"). I've searched for places online where I could get my "homework" questions answered, but they all have rather stringent rules that I must follow to get help, and they'd probably ban me if I abused the forums in question.

This problem has snowballed too much by now. I kept postponing learning calculus (for which I've had the intuitions since before 11th grade when they began teaching it to us) and therefore all of physics (which I'd terribly love to learn in-depth), as well as other fields of math or other disciplines entirely (because my priority list was already topped by something else).

I've considered tutoring, but it's fairly expensive, and my (or my tutor's) schedule wouldn't allow me to get as much tutoring as I would need to - given that I sometimes only have time to study during the night.

Do any LessWrongers have resources for me to get my questions answered? Especially considering that, at least at the beginning until I get the hang of it, I will be posting loads of these. Tens to hundreds in my estimation.

Replies from: othercriteria, Viliam_Bur, tut, NancyLebovitz, Sarunas, NancyLebovitz, ChristianKl, Gimpness, Douglas_Knight
comment by othercriteria · 2014-11-17T18:56:24.954Z · LW(p) · GW(p)

stupid problem

embarrassingly simple math since forever

I should have been years ahead of my peers

momentary lack of algebraic insight ("I could solve this in an instant if only I could get rid of that radical")

for which I've had the intuitions since before 11th grade when they began teaching it to us

Sorry to jump from object-level to meta-level here, but it seems pretty clear that the problem here is not just about math. Your subjective assessment of how difficult these topics are is inconsistent with how well you report you are doing at them. And you're attaching emotions of shame and panic ("problem has snowballed") to observations that should just be objective descriptions of where you are now. Get these issues figured out first (unless you're in some educational setting with its own deadlines). Math isn't going anywhere; it will still be there when you're in a place where doing it won't cause you distress.

Replies from: Dahlen
comment by Dahlen · 2014-11-17T19:32:36.682Z · LW(p) · GW(p)

I can see how it would sound to an outside observer now that you point it out, but in my situation at least I have trouble buying into the idea that math isn't going anywhere. The problem really is urgent; there are loads of fields I want to study that build upon math (and then upon each other), and it just isn't feasible to postpone deep, lasting learning of basic math concepts any further, until after I'm in the "right mindset" for it. There just isn't time, and my neuroplasticity won't get any better with age. It'll take me at least a decade to reach the level I desire in all these fields. Not to mention that I've long since been having trouble with motivation, or else I could have been done with this specific math in about 2011-2012. I'm not doing well at these topics (despite evaluating them as easy) because I spend less than a few hours per month on them.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-20T12:36:46.115Z · LW(p) · GW(p)

There just isn't time, and my neuroplasticity won't get any better with age.

How old are you? I think the peak is around 30 years of age.

Replies from: Dahlen
comment by Dahlen · 2014-11-20T21:10:54.888Z · LW(p) · GW(p)

I'm 21. I thought it began to decline after early/mid 20s. It'll definitely take me longer than that to learn just a few prerequisites thoroughly.

comment by Viliam_Bur · 2014-11-19T12:59:50.876Z · LW(p) · GW(p)

I've considered tutoring, but it's fairly expensive, and my (or my tutor's) schedule wouldn't allow me to get as much tutoring as I would need to - given that I sometimes only have time to study during the night.

In my opinion, what you probably need is some mix of tutoring and therapy -- I don't know if it exists, or if there is a word for it -- someone who would guide you through the specific problem, but also through your thought processes, to discover what it is you are doing wrong. Not just what you are doing wrong mathematically, but also what you are doing wrong psychologically. This assumes you would speak your thoughts aloud while solving the problem.

The psychological level is probably more important than the mathematical level, because once the problems with thinking are fixed, you can continue to study maths on your own. But this probably cannot be done without doing the specific mathematical problems, because it's your thought patterns about those math problems that need to be examined and fixed.

I happen to have a background in math and psychology and teaching, so if you would like to try a free Skype lesson or two, send me an e-mail to "viliam AT bur.sk". Don't worry about wasting my time, since it was my idea. (Worst case: we will waste two hours of time and see that it doesn't work this way. Best case: your problem with maths is fixed, I get an interesting professional experience.)

Replies from: Dahlen
comment by Dahlen · 2014-11-20T21:28:02.826Z · LW(p) · GW(p)

Thanks for the offer! Yes, this sounds interesting. One of the things I've tried to get out of my math tutoring experience was to see how the teacher looked at the problem before beginning to actually solve it and see if it works out. But they never actually thought out loud for me to understand their thought process; also, I often can't tell what mental resources someone else uses when solving a difficult/tricky math problem. (Experience? Sudden insight? Additional notions I don't yet have?)

comment by tut · 2014-11-18T12:55:03.318Z · LW(p) · GW(p)

Find somebody else who is in the same situation and study together.

comment by NancyLebovitz · 2014-11-18T20:05:53.427Z · LW(p) · GW(p)

I'm wondering whether you're expecting some math to be effortless when you're actually still at the stage of trying one thing and another, and need to be less focused on how you want the process to feel.

Replies from: Dahlen
comment by Dahlen · 2014-11-18T21:00:00.787Z · LW(p) · GW(p)

Effortless, no; however, some of it is sufficiently familiar by now that I don't think there is additional value in rehearsing the same material. (Example: I first encountered derivatives on Khan Academy, before they officially taught them to us; then in 11th grade as an intro to calculus; then on several tests and exams; then in freshman year of college. I don't think I need to be given the intuition on derivatives one more time, and if I took a test in derivatives I'd ace it without much effort.) I'm not expecting new notions to be very easy to learn -- moderately challenging and definitely not impossible, but not easy.

comment by Sarunas · 2014-11-17T20:56:08.466Z · LW(p) · GW(p)

Have you looked into Physics Forums? They have Homework & Coursework Questions subforum.

Replies from: Dahlen
comment by Dahlen · 2014-11-17T21:36:46.388Z · LW(p) · GW(p)

Yes. Recently made an account there, started off by asking a homework question. Wouldn't want to spam it, though.

comment by NancyLebovitz · 2014-11-17T20:07:01.914Z · LW(p) · GW(p)

Maybe you should post here about one or two of the problems you're stuck on.

I'm wondering whether you're demanding too much facility of yourself before you go on, but this is only a guess.

Have you looked into Khan Academy?

Replies from: Dahlen
comment by Dahlen · 2014-11-17T20:42:00.466Z · LW(p) · GW(p)

Maybe you should post here about one or two of the problems you're stuck on.

Only if the LW Study Hall organized threads like that; I've been considering joining it, but never got around to it. Otherwise, I'd feel it would be a waste of people's time. There are lots of exercises I might get stuck on, and it's usually not as if any particular one of them represents a key insight to me; the answer/solution/hint to any given exercise is low-value to me, but the answer to all or most of them is high-value.

Been on Khan Academy since 2012, I think. Earned a Sun Badge and everything. (That type of performance didn't constitute my real challenge, though; I've been struggling for these two years to earn the modest Good Habits badge, to no avail to this day.)

comment by ChristianKl · 2014-11-20T09:53:07.472Z · LW(p) · GW(p)

In general having a goal to exhaust textbooks by copying them is stupid. That's not what they are for. Get rid of that strategy.

Most forums have rules that prevent you from asking questions without deeply thinking about those questions yourself.

On math.stackexchange you are allowed to ask any maths question, provided you first search for similar questions and put in the effort to write a decent question.

Asking decent questions is a skill. Learn it and people will answer your questions.

Replies from: Dahlen
comment by Dahlen · 2014-11-20T21:20:24.367Z · LW(p) · GW(p)

In general having a goal to exhaust textbooks by copying them is stupid. That's not what they are for. Get rid of that strategy.

Suspected as much... but I am not sure what strategy to replace it with. I definitely plan on doing all the exercises, and sometimes if I also write down some of the theory/proofs it helps with recalling them later. I'm guessing good rules of thumb are: stick to the essentials; use common sense; review at appropriate intervals.

I'm fine at asking a question that doubtlessly won't get removed. A question. I'm not sure that asking 20 (good, rule-abiding, thought-out) questions per week in a given forum would last me very long. That's about how often I might get stuck if I do some exercises every day.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-21T09:14:55.061Z · LW(p) · GW(p)

On StackExchange nobody has a problem with someone asking a lot of questions, provided they are good, rule-abiding and thought-out.

You can also answer other people's questions. It might even be better than textbook problems, because someone will tell you via comments when you are wrong.

comment by Gimpness · 2014-11-19T00:56:21.977Z · LW(p) · GW(p)

Have you tried using WolframAlpha? If you get the pro subscription (~$66 per year, ~$45 a year for students) you gain access to their step-by-step solutions and unlimited problem generator. I am currently studying for a Master's in Biostatistics (which has a heavy calculus component) and found this service invaluable.

Examples http://i59.tinypic.com/2yuniv9.png http://i60.tinypic.com/c5i7n.png

comment by Douglas_Knight · 2014-11-19T21:11:04.520Z · LW(p) · GW(p)

You should go on to more advanced topics without being fluent in less advanced topics. You can always go back later to the exercises that stumped you. The typical student taking calculus without being fluent in adding fractions is a mistake, but don't over-correct that mistake.

comment by Ritalin · 2014-11-18T14:38:19.330Z · LW(p) · GW(p)

A riddle for Lesswrong: what exactly is the virtue of Bissonomy?

When I read the article, I got the feeling that there were enough clues to extrapolate a solution in the same way that EY extrapolated the Dementors' 'true natures'. That this was a solvable riddle. I've got my suspicions, but I'd like to hear what you guys can come up with.

Replies from: Lumifer, Artaxerxes, 27chaos, 27chaos, 27chaos
comment by Lumifer · 2014-11-18T21:09:57.516Z · LW(p) · GW(p)

Bissonomy is the virtue of stability.

Specifically, bissonomy is the virtue of knowing (-nomy) how to attach yourself ("byssus" is the name for the filaments by which certain bivalves attach themselves to rocks and other substrate) to some stable object.

Arguments:

  • The name. It basically tells you outright (and "bisso" is Portuguese for "byssus", just in case).
  • She was turned into oysters -- bivalves -- which is a big hint.
  • She was punished for throwing a mole. What do moles do? They dig! They undermine and clearly, the virtue of stability couldn't be seen undermining anything.
  • Two children is another hint at stability as families with two children neither increase nor decrease the population -- they keep it stable.
  • She was forgotten -- for the only thing constant in the world is change.
comment by Artaxerxes · 2014-11-19T01:03:56.833Z · LW(p) · GW(p)

Sounds like a noodle virtue to me, or at least it uses the same basic idea for humour.

But if you want to keep in the spirit of EY's dementors, the article writer does some wacky reasoning to end up roughly at the virtue of dietary restriction.

Replies from: DanielLC, Ritalin
comment by DanielLC · 2014-11-22T01:22:33.885Z · LW(p) · GW(p)

Terry Pratchett's noodle implements generally make sense if you work them out. I doubt this is any different.

comment by Ritalin · 2014-11-19T03:52:09.234Z · LW(p) · GW(p)

A-ha! That makes sense! Also, it's actually an important virtue! People judge you on it!

comment by 27chaos · 2014-11-18T22:05:17.760Z · LW(p) · GW(p)

You say that this feels like a riddle to you, but I would prefer to call it a koan. I think the best hint that we have to the nature of Bissonomy lies in the vagueness of its descriptions, as this is the sort of irony that Pratchett is famed for, and this sort of self-answering riddle has a Buddhist feel to it in turn. This also seems like a useful starting point for another reason: there are only so many potential virtues that Bissonomy can plausibly be, and the standard Western ones are already accounted for.

I suspect Bissonomy is the emotional acceptance of both that which is known to be true and that which is uncertain or unknowable. Ignorance, as we all know, is purported to be bliss, and that sounds like biss. However, Bissonomy is not the embrace of ignorance, nor does it bring bliss. Rather, it is the acceptance of that which is known to be unknowable, and it brings inner-peace.

Bissonomy and Tubso seem to be connected. Both virtues were forgotten due to their rarity. Explaining what one was should hopefully tell us something about the other. But it's even harder to say things about Tubso than about Bissonomy; the only thing that we know about Tubso is that its name is absurd. In a world where nominative determinism exists, however, this might be the only hint we need. I submit that Tubso is the virtue of absurdism, which surely has a place in Discworld. This fits with the koan framework, and establishes the desired connection between the two lost virtues; a koan's answer is absurd and hints at strange knowledge, but truly understanding a koan requires the recognition that one can never understand it fully, if at all.

Of necessity, this theory is largely speculative. We may never know the true answer to this question, and that's okay. And that, in turn, is Bissonomy.

Replies from: Ritalin
comment by Ritalin · 2014-11-19T00:02:39.777Z · LW(p) · GW(p)

This sounds somewhat like a specialized form of the very Christian value of Resignation, specifically resignation towards the Ineffability of God and his Mysterious Ways, and the seemingly chaotic creation.

Then again God often plays a Dao-like "empty center that lets the wheel turn" in these sorts of doctrines.

comment by 27chaos · 2014-11-18T20:40:50.701Z · LW(p) · GW(p)

This is rather disorganized, unfortunately. Don't read ahead unless you're willing to be frustrated by near stream-of-consciousness writing.

http://english.stackexchange.com/questions/116456/meaning-of-onomy-ology-and-ography

I strongly suspect that the virtue of Bissonomy is the virtue of acknowledging that which you do not or can not understand, also known as epistemic humility. (Yes, faith is its sister virtue - don't ask how they're both able to be virtues at once despite their symmetrical character, as that would mortally offend both of them.) The vague and unspecified nature of Bissonomy is the best clue we have as to its nature, there is a Buddhist feel to it. This has enough of a satiric bite to it that I can imagine the author actually writing it.

Biss sounds like bliss, which as we all know is the result of ignorance. But Bissonomy is not the intellectual embrace of ignorance, although it might sound like it (haha). Rather, Bissonomy is the emotional acceptance/knowledge that our map will never fully match the territory, and that some parts of the territory are not mappable at all (eg uncertainties of quantum measurements).

The main reason to be skeptical of this idea, other than a severe lack of evidence for it (perhaps this allows it to be reconciled with faith, though :p), is that it fails to explain the virtue of Tubso, and it seems to me like understanding one of the lost virtues might require also understanding the other.

However, I have a suspicion that the virtue of Tubso might be something absurd like "not throwing shellfish at the shadows of deities", or perhaps Tubso is absurdism itself. The word certainly sounds silly enough for it. If we adopt the perspective that Bissonomy and Tubso are the "lost Buddhist" virtues, in my opinion this makes quite a lot of sense. A hint of absurdism paired with peaceful acceptance, just like a koan paired with its answer.

I think adopting this perspective seems like a good idea. There are only so many potential virtues, after all. And although I've not read enough of Pratchett's work, I would bet he's used non-Western culture as a source of inspiration before.

Another reason to be skeptical of my interpretation is that I ironically exhibit a lack of Bissonomy myself in so hastily jumping to the conclusion that it is possible to deduce what Bissonomy means, despite the fact that Pratchett provides little evidence about it. But perhaps this is simply another aspect of Pratchett's joke - forcing the reader to be hypocritical in order to understand the virtue of Bissonomy would be like a half-serious chastisement/reward for spoiling the fun of leaving the virtue unknown, similar to how trying very hard to find a true answer to a koan is a bad but necessary step in allowing the koan to change the way your mind thinks.

Ultimately, we may never know. And that's okay. And that's Bissonomy.

comment by 27chaos · 2014-11-18T20:29:31.135Z · LW(p) · GW(p)

What do you think? Tell us in a week at most, please?

comment by advancedatheist · 2014-11-17T15:32:47.920Z · LW(p) · GW(p)

Does the ten year old child provide an actuarial model for superlongevity?

According to the actuarial tables:

http://www.ssa.gov/oact/STATS/table4c6.html

A ten year old boy has a probability of survival in that year of 0.999918. After that, his probability of survival in a given year decreases with every additional year.

If you could lock in the ten year old's probability of survival per year after the age of 10, mathematically a population of such individuals would have a "half life" of ~ 8000 years. In other words, if you had a population of 1,000 such individuals with their annual probability of survival fixed at .999918, about half of them could survive for 8,000 years or so.

Of course this sort of calculation doesn't mean anything without an empirical demonstration; the data would have to come in over 8,000 years. That shows the problem with claims of life-extension breakthroughs within current human life expectancies: you can't measure the results any faster than the rate at which humans already happen to live, and unmodified humans regularly live longer than other mammals anyway.
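
For reference, a minimal sketch of that arithmetic, assuming the annual survival probability really were held constant at 0.999918 (real hazard rates are not constant, so this is only illustrative):

    import math
    # Half-life of a cohort whose members each survive any given year with
    # constant probability p (a simplifying assumption for illustration).
    p = 0.999918
    half_life = math.log(0.5) / math.log(p)
    print(half_life)  # about 8450 years, so "~8,000" is the right order of magnitude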

Replies from: Artaxerxes
comment by Artaxerxes · 2014-11-17T16:37:37.156Z · LW(p) · GW(p)

A half-life of ~8,000 years given current-day levels of accidental and other death. If we get our act together enough to get rid of the problem of aging, I would assume that we would continue to get rid of other sources of death as well, which would make the actuarial model less useful.

Replies from: advancedatheist
comment by advancedatheist · 2014-11-17T18:27:32.595Z · LW(p) · GW(p)

In the real world, the probability of a ten year old's death also reflects the fact that in developed countries, children live in sheltered conditions.

Replies from: Adele_L
comment by Adele_L · 2014-11-17T22:54:59.575Z · LW(p) · GW(p)

According to the CDC, the leading causes for death for children aged 5-9 (in 2012 in the United States) are:

  1. Unintentional injury
  2. Malignant neoplasm (aka cancer)
  3. Congenital disorders
  4. Homicide
  5. Heart disease

If we solved aging, it seems likely we could eliminate or significantly reduce deaths from cancer, congenital disorders and heart disease.

Once we look at the 10-14 age bracket or above, suicide makes it into the top five causes of death until age ~50 and above.

We can also look at the leading causes of unintentional injury. For the 5-9 age bracket, we have

  1. Motor vehicle accidents
  2. Drowning
  3. Fire/Burns
  4. Unintentional suffocation
  5. Other land transport injuries

Traffic accidents seem likely to be solvable to a large degree with self-driving car technology. Not as sure about the others. It's worth noting that the primary cause of unintentional injury deaths for adults is unintentional poisoning. This was surprising to me; I would guess it's mostly due to drug use.

Replies from: Vaniver
comment by Vaniver · 2014-11-21T14:50:58.521Z · LW(p) · GW(p)

It's worth noting that the primary cause of unintentional injury deaths for adults is unintentional poisoning. This was surprising to me; I would guess it's mostly due to drug use.

Food poisoning is more common and serious than most people I've talked about it with have expected.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-11-30T02:15:24.318Z · LW(p) · GW(p)

"Unintentional poisoning" is 36k deaths, according to Adele's link. Food poisoning is only 3k deaths (5k according to an older article). I think most of the accidental poisoning deaths are drug overdoses. Half (broken link) of those are legal drugs. 2/3 of the legal ones are opiates, 1/3 benzos.

I am not convinced that food poisoning is even included in accidental poisoning deaths. CDC WONDER has deaths classified by ICD codes. About half of accidental poisonings are listed as X44 other/unknown. My guess is that this means an overdose on street drugs.

comment by Sarunas · 2014-11-17T08:58:21.574Z · LW(p) · GW(p)

Less Wrong survey asked what is your favourite Less Wrong post. Slate Star Codex survey asks what is your favourite SSC post. Naturally, the next question is:

What are your favourite Overcomingbias posts? What posts did you find especially insightful or informative? What posts changed the way you think about something? What posts did you find thought-provoking even if you disagree with the ideas expressed in them (perhaps similarly to how Bryan Caplan thinks about Robin Hanson's ideas)? What good posts do you think should be better known? What posts would you recommend to others?

Replies from: John_Maxwell_IV
comment by Kawoomba · 2014-11-20T18:53:45.444Z · LW(p) · GW(p)

No general procedure for bug checks will do.
Now, I won't just assert that, I'll prove it to you.
I will prove that although you might work till you drop,
you cannot tell if computation will stop.

A poetic proof of the Halting Problem's undecidability by one Geoffrey Pullum, continued here. Enjoy!
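
A minimal Python sketch of the diagonal argument the poem puts into verse; `would_halt` is the hypothetical general halting checker the proof assumes and then refutes (no such function can actually be written, and the names here are illustrative only):

    # Suppose, for contradiction, someone hands us a general halting checker
    # would_halt(program, input) -> bool. Build a program that defies it:
    def troublemaker(would_halt, program):
        """Do the opposite of whatever would_halt predicts for program(program)."""
        if would_halt(program, program):
            while True:
                pass  # the checker said "halts", so loop forever
        return  # the checker said "never halts", so halt immediately

    # Feeding a version of troublemaker (with the candidate checker baked in)
    # to itself forces the checker to be wrong on at least one input, so no
    # general procedure for bug checks will do.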

comment by [deleted] · 2014-11-18T11:09:54.654Z · LW(p) · GW(p)

Ummm... are things like this OB post really supposed to be considered even remotely about Rationality? It's basically just Robin Hanson using a few anecdotes to weave himself a short narrative designed to justify his misanthropy and make himself feel better about holding political positions others consider immoral.

Replies from: ChristianKl, Richard_Kennaway
comment by ChristianKl · 2014-11-18T17:59:16.725Z · LW(p) · GW(p)

are things like this OB post really supposed to be considered even remotely about Rationality?

Why are you asking that question?

Replies from: None
comment by [deleted] · 2014-11-19T07:44:58.888Z · LW(p) · GW(p)

Because it was linked under "Recent on Rationality Blogs", I figured I'd give it a whirl, and was pretty surprised to find such low-grade reasoning on such a list.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-11-19T13:13:14.757Z · LW(p) · GW(p)

Even a generally rational person may write an irrational article once in a while. (I am speaking generally here; I haven't read the specific article yet.) To create a list containing only rational articles, someone would have to check each one of them individually, and vote.

Adding such functionality to LW software would be too much work. But maybe it could be done indirectly. We could do the voting on some other website (for example Reddit), and import only the positively-voted links to LW. But this would need a group of people to add new articles, read them, and vote.

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-22T23:21:08.960Z · LW(p) · GW(p)

“Once in a while” is quite an understatement of the frequency of completely preposterous posts on OB.

comment by ChristianKl · 2014-11-19T14:46:33.581Z · LW(p) · GW(p)

That's a stupid argument.

There are multiple valid criticisms of the way things are labeled, but if you want to make that criticism it would be worthwhile to think a bit more deeply about the structures and why they are arranged the way they are.

comment by Richard_Kennaway · 2014-11-18T14:24:16.082Z · LW(p) · GW(p)

Tenure.

BTW, did anyone else get an audio track on that page? I don't see any visible embedded media, but some sort of music track started a few seconds after loading the page, and stopped the moment I closed the window. Same thing when I reopened the page. I downloaded the HTML, but I didn't see a cause of it there. A few minutes later it no longer happened, although the HTML of the page was identical to the previous download.

ETA: This may be the explanation. Some sort of advertising thing. The song described there is what I heard.

comment by Capla · 2014-11-18T00:00:57.581Z · LW(p) · GW(p)

If my karma takes a hit, is there an easy way that I can find out what is being downvoted? I can't self correct if I don't know what is disliked.

Replies from: Lumifer, None
comment by Lumifer · 2014-11-18T02:40:16.940Z · LW(p) · GW(p)

I can't self correct if I don't know what is disliked.

Self-correction on the basis of twitches of the hive mind is not necessarily a great idea.

Replies from: Capla
comment by Capla · 2014-11-18T03:06:22.547Z · LW(p) · GW(p)

Haha.

Yeah. I've been posting a lot recently and my overall karma is increasing, but my percentage of positive karma is steadily decreasing. I suppose that if I'm not upsetting some people, I'm not doing a good job, even among a community that I by and large respect.

Replies from: Emile, Lumifer
comment by Emile · 2014-11-18T17:13:01.920Z · LW(p) · GW(p)

if I'm not upsetting some people, I'm not doing a good job

Why?? I occasionally hear that repeated, but it sounds like a cheap excuse to act like a dick, or to retroactively brush it off when people point out in public that you said something wrong. It calls to mind the image of a lazy teen spouting out every random stupid idea that goes through his mind and considering that the essence of being a Brave Independent Thinker.

(this is not targeted at you, Capla, I haven't been paying special attention to your posts)

Replies from: Lumifer
comment by Lumifer · 2014-11-18T19:16:56.816Z · LW(p) · GW(p)

Why??

Because if you're not upsetting some people, you are not impacting the status quo in any meaningful way.

Replies from: Emile, Adele_L
comment by Emile · 2014-11-18T23:09:34.748Z · LW(p) · GW(p)

What's so great about impacting the status quo? That doesn't seem like something worth aiming for. I mean, yeah, sure, most ways of making the world a better place impact the status quo; but most ways of making the world a better place also involve making noise at one point or another, and that doesn't mean that making noise is some great thing we should aim for.

Things that make the world (or lesswrong, or your family, etc.) a worse place are more likely to make people upset than things that make the world a better place. There are also more ways to make things worse than to make things better.

Replies from: Lumifer
comment by Lumifer · 2014-11-19T02:25:58.577Z · LW(p) · GW(p)

What's so great about impacting the status quo?

Depends on your value system, of course. But impacting the status quo is a synonym for "making a difference" and if you don't ever make a difference, well...

Replies from: faul_sname
comment by faul_sname · 2014-11-25T19:27:20.172Z · LW(p) · GW(p)

well...

Please continue. If you don't ever make a difference, then what?

Replies from: Lumifer
comment by Lumifer · 2014-11-25T19:52:34.782Z · LW(p) · GW(p)

Then I'm not sure what your terminal values are beyond surviving.

Replies from: faul_sname
comment by faul_sname · 2014-11-25T20:37:14.520Z · LW(p) · GW(p)

I think there may be a communication failure here. While most desirable changes are themselves changes to the status quo, the phrase "changing the status quo" generally has the connotation of moving away from an undesirable state, instead of moving toward a desirable state.

For a concrete example, if I wanted to eradicate malaria, I would say "I want to eradicate malaria," not "I want to impact the status quo" or "I want to make a difference," even though both types of statements are true. The goal is to make a specific difference, not to make a difference.

Replies from: Lumifer
comment by Lumifer · 2014-11-25T20:42:07.483Z · LW(p) · GW(p)

the phrase "changing the status quo" generally has the connotation of moving away from an undesirable state, instead of moving toward a desirable state.

Yes, looks like a communication problem. The phrase "changing the status quo" has no such connotation for me, at all. I would be happy to call eradicating malaria changing the status quo.

comment by Adele_L · 2014-11-18T19:36:49.064Z · LW(p) · GW(p)

Maybe that is true in many cases, but even so, it still is a bad thing to optimize for. The outside view says that most of the time, having your percentage of positive karma steadily decreasing means the quality of your comments is getting worse. If you want to be controversial and still be taken seriously, you need to signal your competence in less controversial areas.

Replies from: Lumifer
comment by Lumifer · 2014-11-18T19:45:37.284Z · LW(p) · GW(p)

it still is a bad thing to optimize for

Impacting the status quo is a fine thing to optimize for. Negative karma is a stupid thing to optimize for.

However I believe that here we are not talking about optimizing, but rather about warning signs.

Replies from: Azathoth123
comment by Azathoth123 · 2014-11-21T04:57:37.196Z · LW(p) · GW(p)

Impacting the status quo is a fine thing to optimize for.

No, it's not. It is much easier to "impact the status quo" by making things worse than by making things better.

Replies from: Lumifer
comment by Lumifer · 2014-11-21T05:28:28.825Z · LW(p) · GW(p)

Optimization implies that you picked a direction. Optimizing impact means optimizing not just magnitude, but magnitude of a specific sign.

comment by Lumifer · 2014-11-18T05:28:01.858Z · LW(p) · GW(p)

I can hazard a guess at your downvotes. You're flooding the forum with lots of short posts, a large part of which could be answered by a few minutes with Google, and some fraction of the rest would disappear if only you were to think about the matter for a few minutes...

comment by [deleted] · 2014-11-18T00:09:42.531Z · LW(p) · GW(p)

You can look at your recent posts/comments and see what looks negative.

Replies from: Capla
comment by Capla · 2014-11-18T00:11:37.766Z · LW(p) · GW(p)

Thanks.

Found it.

comment by [deleted] · 2014-11-21T16:21:10.196Z · LW(p) · GW(p)

Is there a way to see what comments (not articles) I have downvoted? Or get a summary of how many downvotes and upvotes I've made?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-21T18:27:16.749Z · LW(p) · GW(p)

On your profile page, click on "DISLIKED."

comment by Curiouskid · 2014-11-19T00:50:38.624Z · LW(p) · GW(p)

[Cross-posted from So8res's, recent guide to MIRI's research]

Just thought add links to these other "guides":

"Atoms of Neural computation": List of promising research directions for neuro-inspired AI (IOW, tries to answer the question "Deep Learning is just regression, so what could we possibly do next?")

"Physical Principles for Scalable Neural Recording": List of promising research directions for developing tools to do live recording of the brain (a separate issue from connectomics).

comment by XiXiDu · 2014-11-18T09:40:12.241Z · LW(p) · GW(p)

Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are 4 projects that have concurrently developed very similar-looking models:

(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks

(3) Google: A Neural Image Caption Generator

(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions

[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.

Replies from: othercriteria
comment by othercriteria · 2014-11-18T14:31:56.940Z · LW(p) · GW(p)

How meaningful is the "independent" criterion, given the heavy overlap in works cited and what I imagine must be a fairly recent academic MRCA (most recent common ancestor) among all the researchers involved?

comment by NancyLebovitz · 2014-11-20T07:25:58.171Z · LW(p) · GW(p)

http://selenite.livejournal.com/282731.html

A discussion of constraining an AI by building in detailed contracts and obedience to laws against theft and criminal behavior.

I don't think this is obviously correct (if nothing else, parts of a complex set of rules can interact unpredictably), but these are the tools that humans have developed for dealing with semi-malign natural intelligences, so we should at least take a look at them.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-20T09:16:23.173Z · LW(p) · GW(p)

Laws are useful for preventing behavior when you know what kind of behavior you want to forbid. They are often possible to game and need judges to interpret what happens in edge cases.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-20T15:17:07.731Z · LW(p) · GW(p)

Copying laws and contracts might not be the complete solution, but there's a lot of stored experience in the legal system which could be worth studying.

We know some of the behavior we want to forbid-- we definitely don't want the AI to take our atoms. Well, almost definitely.

Would it be a good outcome if the F?AI made really good simulations of us in a well-done simulated environment so that it was possible to have a lot more people?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-20T18:48:07.361Z · LW(p) · GW(p)

We know some of the behavior we want to forbid-- we definitely don't want the AI to take our atoms. Well, almost definitely.

Some medical interventions probably include taking atoms. Even something as simple as disposing of bodily secretions could be problematic.

comment by Curiouskid · 2014-11-19T01:58:56.180Z · LW(p) · GW(p)

[Meta]

I noticed that this recent LW post showed up in the "Recent Comments" sidebar, but that it doesn't show up in the list of "Discussion" posts. Is this just me? Do other people have this show up in "Discussion"? (Also, this is not the first time that I've noticed that there are posts in the sidebar that I can't find in Discussion.)

Replies from: hyporational
comment by hyporational · 2014-11-19T02:10:34.187Z · LW(p) · GW(p)

It's in Main where the best written and most rationality-relevant articles go. Discussion is for lower quality and less relevant posts. I wonder if missing Main is common and whether something should be done about that.

Replies from: iarwain1, Curiouskid
comment by iarwain1 · 2014-11-19T02:55:12.365Z · LW(p) · GW(p)

It took me several months before I discovered that not everything in Main shows up when you click on "Main" - you need to first click on "Main" and then click on "New". It took me even longer before I figured out that there's an RSS feed for Main / New so I could find out when something gets added there.

For me, at least, anything in Main that wasn't also promoted had almost no visibility for several months. Which is a pity because a lot of the best stuff goes there.

comment by Curiouskid · 2014-11-19T04:02:38.321Z · LW(p) · GW(p)

If it's in Main, why does the article's Karma Bubble look like it's in Discussion? (i.e. it's not a filled in with green).

Replies from: Nornagest
comment by Nornagest · 2014-11-19T07:53:55.579Z · LW(p) · GW(p)

Not everything in Main is promoted; in fact, most of it isn't. Only promoted articles get the green bubbles.

comment by Mark-Mills · 2017-12-12T10:11:08.690Z · LW(p) · GW(p)

Interesting

comment by passive_fist · 2014-11-17T20:45:31.908Z · LW(p) · GW(p)

Is there any remotely feasible way for us to contain a superintelligence aside from us also becoming superintelligences?

Replies from: Artaxerxes, drethelin, DanielLC, Capla
comment by Artaxerxes · 2014-11-18T04:30:57.662Z · LW(p) · GW(p)

That's what MIRI are trying to work out, right?

comment by drethelin · 2014-11-17T20:58:22.059Z · LW(p) · GW(p)

Prevent it from becoming a super-intelligence in the first place. You can't guarantee boxing a fully self-improving AI but if you run an AI on a Macintosh 2 you could probably keep it contained.

Replies from: passive_fist
comment by passive_fist · 2014-11-18T01:12:52.122Z · LW(p) · GW(p)

That doesn't answer the question.

Replies from: RowanE
comment by RowanE · 2014-11-18T21:17:16.659Z · LW(p) · GW(p)

It does, the answer given is "no".

comment by DanielLC · 2014-11-22T01:29:18.960Z · LW(p) · GW(p)

You could put it in a box with no gatekeeper or other way to interact with the outside world. It would be completely pointless and probably unethical, but you could do it.

comment by Capla · 2014-11-18T00:15:38.722Z · LW(p) · GW(p)

Isn't this the uncomputable utilon question?

comment by Bayeslisk · 2014-11-18T02:33:05.189Z · LW(p) · GW(p)

Just here to remind you to notice when you are confused. If you don't, this ( https://www.youtube.com/watch?v=7BOWOMPUbvE ) WILL happen to you and everyone WILL laugh. And you wouldn't want that.

comment by advancedatheist · 2014-11-18T04:35:46.784Z · LW(p) · GW(p)

I keep wondering when the cryonics community will attract the attention of female social justice warriors (SJW's) because of the perception that wealthy white guys dominate cryonics, even though plenty of middle class men have signed up for cryopreservation as well by using life insurance as the funding mechanism. The stereotype should push the SJW's buttons about inequality, patriarchy and the lack of diversity.

So far these sorts of women have ignored cryonics rather than trying to meddle with it to transform it according to their standards of "social justice." If anything, cryonics acts like "female Kryptonite."

I've also noticed the absence of another sort of woman, namely adventuresses. If people believe that lonely white guys with money sign up for cryonics, you would expect to see more young, reasonably attractive women showing up at public gatherings of cryonicists to try to find men they can exploit for a share of the wealth.

So what kind of tipping point in the public's view of cryonics would have to happen to make SJW's and adventuresses notice cryonics as a field for social activism or financial exploitation?

Replies from: ChristianKl, bramflakes, Viliam_Bur, polymathwannabe, army1987, army1987, Azathoth123
comment by ChristianKl · 2014-11-18T14:49:02.235Z · LW(p) · GW(p)

In general if your beliefs lead you to expect something that doesn't happen, it's time to question your beliefs. Reality isn't wrong. Beliefs must pay rent.

comment by bramflakes · 2014-11-18T12:49:27.091Z · LW(p) · GW(p)

I think you'd get replies if you didn't pepper it so much with needless political tribal signaling. We get it; you read Steve Sailer.

comment by Viliam_Bur · 2014-11-19T13:20:44.000Z · LW(p) · GW(p)

The way you wrote the question is horrible, but here are some things to consider:

  • most people haven't heard about cryonics;
  • most of those who heard about it believe it doesn't work;
  • there are many things way more expensive than cryonics.
comment by polymathwannabe · 2014-11-18T14:32:07.732Z · LW(p) · GW(p)

you would expect to see more young, reasonably attractive women showing up at public gatherings of cryonicists to try to find men they can exploit for a share of the wealth.

Wow. And a million times, wow.

One must really, really, very deeply hate women for that to be the expectation that comes to mind.

comment by A1987dM (army1987) · 2014-11-22T11:27:50.939Z · LW(p) · GW(p)

Search your favourite search engine for "hostile wife phenomenon".

comment by A1987dM (army1987) · 2014-11-22T11:44:16.215Z · LW(p) · GW(p)

If people believe that lonely [...] guys [...] sign up for cryonics, you would expect to see more [...] showing up at public gatherings of cryonicists to try to find men [...]

You don't see many men in yoga classes either.

comment by Azathoth123 · 2014-11-21T08:11:31.497Z · LW(p) · GW(p)

I keep wondering when the cryonics community will attract the attention of female social justice warriors (SJW's) because of the perception that wealthy white guys dominate cryonics

When it reaches the point that most of them have heard of it. I know we've had at least one SJW come by and rant about how talking about cryonics reflected white privilege.