Open thread, Sep. 14 - Sep. 20, 2015

post by MrMind · 2015-09-14T07:10:19.515Z · LW · GW · Legacy · 192 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

192 comments

Comments sorted by top scores.

comment by benjaminzand · 2015-09-15T11:30:44.692Z · LW(p) · GW(p)

Could we live forever? - Hey guys. I made a film about transhumanism for BBC News. It features some people in this community and some respected figures. Let me know what you think and if I missed anything, etc.

https://www.youtube.com/watch?v=STsTUEOqP-g&feature=youtu.be

Replies from: username2
comment by username2 · 2015-09-15T16:47:43.890Z · LW(p) · GW(p)

Awesome! You should repost this as a top level post.

comment by Panorama · 2015-09-14T21:14:52.379Z · LW(p) · GW(p)

26 Things I Learned in the Deep Learning Summer School

In the beginning of August I got the chance to attend the Deep Learning Summer School in Montreal. It consisted of 10 days of talks from some of the most well-known neural network researchers. During this time I learned a lot, way more than I could ever fit into a blog post. Instead of trying to pass on 60 hours worth of neural network knowledge, I have made a list of small interesting nuggets of information that I was able to summarise in a paragraph.

At the moment of writing, the summer school website is still online, along with all the presentation slides. All of the information and most of the illustrations come from these slides and are the work of their original authors. The talks in the summer school were filmed as well, hopefully they will also find their way to the web.

comment by advancedatheist · 2015-09-14T13:02:20.214Z · LW(p) · GW(p)

Probably the biggest cryonics story of the year. In the print edition of The New York Times, it appeared on the front page, above the fold.

A Dying Young Woman's Hope in Cryonics and a Future, by Amy Harmon

http://www.nytimes.com/2015/09/13/us/cancer-immortality-cryogenics.html

You can also watch a short documentary about Miss Suozzi here:

http://www.nytimes.com/video/science/100000003897597/kim-suozzis-last-wishes.html

Replies from: advancedatheist, James_Miller, Fluttershy
comment by advancedatheist · 2015-09-15T17:21:01.436Z · LW(p) · GW(p)

Yet som there be that by due steps aspire

To lay their just hands on that Golden Key

That ope's the Palace of Eternity.

(John Milton, Comus, lines 12-14)

May Kim find that Golden Key some day.

comment by James_Miller · 2015-09-14T15:45:16.468Z · LW(p) · GW(p)

I wonder if the article will increase Alcor's membership. Since "Why have so few people signed up for cryonics?" is a big mystery for cryonics supporters such as myself, we should use the opportunity of the article to make predictions about its impact. I predict that the article will boost Alcor's membership over the next year by 10% above trend, which basically means membership will be 10% higher a year from now than it is currently.

EDIT: I predict Alcor's membership will be 11% higher a year from now than it is today. Sorry for the poorly written comment above.

Replies from: gjm, btrettel, entirelyuseless
comment by gjm · 2015-09-14T16:14:32.115Z · LW(p) · GW(p)

Are those two 10% figures equal only by coincidence?

To me, "boost membership by 10% above trend" means either "increase this year's signups by 10% of what they would otherwise have been" or else "increase this year's signups enough to make membership a year from now 10% higher than it otherwise would have been".

The second of these is equivalent to "membership will be 10% higher a year from now" iff membership would otherwise have been exactly unaltered over the year, which would mean that signups are a negligibly small fraction of current membership.

The first is equivalent to "membership will be 10% higher a year from now" iff m + 1.1s = 1.1m (where m and s are current membership and baseline signups for the next year), i.e. iff 1.1s = 0.1m, i.e. iff m = 11s.

Those are both rather specific conditions, and the first seems pretty unlikely. Did you actually mean either of them, or have I misunderstood?
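A quick numeric check of that algebra, with invented numbers rather than Alcor's actual figures: take current membership m = 1100 and baseline signups s = 100, so that m = 11s.

m, s = 1100, 100            # hypothetical: current membership and baseline yearly signups, m = 11s
boosted_signups = 1.1 * s   # first reading: signups come in 10% above what they would have been
print(m + boosted_signups)  # 1210.0
print(1.1 * m)              # 1210.0 -- membership ends up exactly 10% higher than it is today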

Replies from: Lumifer
comment by Lumifer · 2015-09-14T16:34:38.891Z · LW(p) · GW(p)

I am reading the grandparent literally as "increase membership" which does imply that the current trend is flat and the membership numbers are not increasing.

Replies from: gjm
comment by gjm · 2015-09-14T16:49:24.760Z · LW(p) · GW(p)

Could be. But is Alcor really doing so badly? (Or: does James_Miller think they are?)

The graphs on this Alcor page seem to indicate that membership is in fact increasing by at least a few percent year on year, even if people are no longer counted as members after cryosuspension.

Replies from: Lumifer
comment by Lumifer · 2015-09-14T16:59:58.813Z · LW(p) · GW(p)

Hm. Yes, Alcor's membership is going up nicely. I don't know what James_Miller had in mind, then.

comment by btrettel · 2015-09-15T20:11:18.247Z · LW(p) · GW(p)

I made this into a prediction on PredictionBook.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T09:58:01.174Z · LW(p) · GW(p)

Is the relevant data publicly accessible?

Replies from: btrettel
comment by entirelyuseless · 2015-09-14T16:58:47.431Z · LW(p) · GW(p)

My understanding is that the number of people signed up is in the thousands, which if it is correct means probably a bit less than one in a million persons.

You might have meant it rhetorically, but if it is true that it is a "big mystery" to you why most people have not signed up, then your best guess should be that signing up for cryonics is foolish and useless. By analogy, if a patient in a psychiatric ward finds himself thinking, "I wonder why so few people say they are Napoleon?", his best guess should be that the people he knows, including himself, are not in fact Napoleon.

As another example, if you are at the airport and you see two lines while you are checking in, a very long one and a very short one, and you say, "It's a big mystery to me why so many people are going in that long line instead of the short one," then you'd better get in that long line, because if you get in the short one, you are going to find yourself kicked out of it. On the other hand if you do know the reasons, you may be able to get in the short line.

In the cryonics case, this is pretty much true no matter how convincing you find your reasons, until you can understand why people do not sign up.

Replies from: James_Miller, Lumifer, None
comment by James_Miller · 2015-09-14T17:21:58.075Z · LW(p) · GW(p)

But the intellectual quality of some of the people who have signed up for cryonics is exceptionally high (Hanson, Thiel, Kurzweil, Eliezer). Among the set of people who thought they were Napoleon (excluding the original), I doubt you would find many who had racked up impressive achievements.

if you are at the airport and you see two lines while you are checking in, a very long one and a very short one, and you say, "It's a big mystery to me why so many people are going in that long line instead of the short one," then you'd better get in that long line, because if you get in the short one, you are going to find yourself kicked out of it.

What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?

Replies from: None, Lumifer, entirelyuseless
comment by [deleted] · 2015-09-15T01:21:49.346Z · LW(p) · GW(p)

"What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?"

As I pointed out last time you brought this up, these people aren't just famous for being smart, they're also famous for being contrarians and futurists. Cryonics is precisely an area in which you'd expect them to make a bad bet, because it's seen as weird and it's futuristic.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-16T04:10:20.009Z · LW(p) · GW(p)

This depends on whether you model contrarianism and futurism as a bias ('Hanson is especially untrustworthy about futurist topics, since he works in the area') v. modeling contrarianism and futurism as skills one can train or bodies of knowledge one can learn ('Hanson is especially trustworthy about futurist topics, since he works in the area').

Replies from: None
comment by [deleted] · 2015-09-16T20:20:43.484Z · LW(p) · GW(p)

My typical heuristic for reliable experts (taken from Thinking Fast and Slow I think) is that if experts have tight, reliable feedback loops, they tend to be more trustworthy. Futurism obviously fails this test. Contrarianism isn't really a "field" in itself, and I tend to think of it more as a bias... although EY would obviously disagree.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-16T23:09:59.949Z · LW(p) · GW(p)

My typical heuristic for reliable experts (taken from Thinking Fast and Slow I think) is that if experts have tight, reliable feedback loops, they tend to be more trustworthy. Futurism obviously fails this test.

Then it might be that futurism is irrelevant, rather than being expertise-like or bias-like. (Unless we think 'studying X while lacking tight, reliable feedback loops' in this context is worse than 'neither studying X nor having tight, reliable feedback loops.')

Contrarianism isn't really a "field" in itself, and I tend to think of it more as a bias...

Thiel, Yudkowsky, Hanson, etc. use "contrarian" to mean someone who disagrees with mainstream views. Most contrarians are wrong, though correct contrarians are more impressive than correct conformists (because it's harder to be right about topics where the mainstream is wrong).

Replies from: None
comment by [deleted] · 2015-09-17T00:40:25.010Z · LW(p) · GW(p)

Then it might be that futurism is irrelevant, rather than being expertise-like or bias-like. (Unless we think 'studying X while lacking tight, reliable feedback loops' in this context is worse than 'neither studying X nor having tight, reliable feedback loops.')

In this case futurism is two things in these people:

  1. A belief in expertise about the future.
  2. A tendency towards optimism about the future.

Combined, these mean that these people both think cryonics will work in the future, and are more confident in this assertion than warranted.

Thiel, Yudkowsky, Hanson, etc. use "contrarian" to mean someone who disagrees with mainstream views.

I don't think so... it's more someone who has the tendency (in the sense of an aesthetic preference) to disagree with mainstream views. In this case, they would tend to be drawn towards cryonics because it's out of the mainstream, which should give us less confidence that they're drawn towards cryonics because it's correct.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-17T01:02:33.427Z · LW(p) · GW(p)

One of the most common ways they use the word "contrarian" is to refer to beliefs that are rejected by the mainstream, for whatever reason; by extension, contrarian people are people who hold contrarian beliefs. (E.g., Galileo is a standard example of a "correct contrarian" whether his primary motivation was rebelling against the establishment or discovering truth.) "Aesthetic preference" contrarianism is a separate idea; I don't think it matters which definition we use for "contrarianism".

Replies from: None
comment by [deleted] · 2015-09-17T04:09:26.761Z · LW(p) · GW(p)

I think it matters in this context. If these people are contrarian simply because they happen to have lots of different views, then it's irrelevant that they're contrarian. If they're contrarian because they're DRAWN towards contrarian views, it means they're biased towards cryonics.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-17T09:31:45.397Z · LW(p) · GW(p)

I agree it matters in this case, but it doesn't matter whether we use the word "contrarianism" vs. tabooing it.

Also, your summary assumes one of the points under dispute: whether it's possible to be good at arriving at true non-mainstream beliefs ('correct contrarianism'), or whether people who repeatedly outperform the mainstream are just lucky. 'Incorrect contrarianism' and 'correct-by-coincidence contrarianism' aren't the only two possibilities.

Replies from: None
comment by [deleted] · 2015-09-18T01:35:16.606Z · LW(p) · GW(p)

Ok, so to summarize:

  1. These people are futurists.

1a. If you believe futurists have more expertise on the future, then they are more likely to be correct about cryonics.

1b. If you believe expertise needs tight feedback loops, they are less likely to be correct about cryonics.

1c. If you believe futurists are drawn towards optimistic views about they future, they are less likely to be correct about cryonics.

  2. These people are contrarians.

2a. If you believe they have a "correct contrarian cluster" of views, they are more likely to be correct about cryonics.

2b. If you believe that they arrived at contrarian views by chance, they are no more or less likely to be correct about cryonics.

2c. If you believe that they arrived at contrarian views because they are drawn to contrarian views, they are less likely to be correct about cryonics.

I believe 1b, 1c, and 2c. You believe 1a and 2a. Is that correct?

comment by Lumifer · 2015-09-14T17:26:57.511Z · LW(p) · GW(p)

But the intellectual quality of some of the people who have signed up for cryonics is exceptionally high

The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high as well.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-16T04:06:32.874Z · LW(p) · GW(p)

But the average is lower, and not signing up for cryonics is a "default" action: you don't have to expend thought or effort in order to not be signed up for cryonics. A more relevant comparison might be to people who have written refutations or rejections of cryonics.

Replies from: Lumifer
comment by Lumifer · 2015-09-16T04:18:36.589Z · LW(p) · GW(p)

I don't think the average matters; it's the right tail of the distribution that's important.

Take, say, people with 130+ IQ -- that's about 2.5% of your standard white population, and the overwhelming majority of them are not signed up. In fact, in any IQ quantile only a minuscule fraction has signed up.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-09-16T19:37:32.663Z · LW(p) · GW(p)

entirelyuseless made the point that low cryonics use rates in the general population are evidence against the effectiveness of cryonics. James Miller responded by citing evidence supporting cryonics: that cryonicists are disproportionately intelligent/capable/well-informed. If your response to James is just that very few people have signed up for cryonics, then that's restating entirelyuseless' point. "The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high" would be true even in a world where every cryonicist were more intelligent than every non-cryonicist, just given how few cryonicists there are.

Replies from: Lumifer
comment by Lumifer · 2015-09-16T20:45:08.687Z · LW(p) · GW(p)

entirelyuseless made the point that low cryonics use rates in the general population are evidence against the effectiveness of cryonics

No, I don't think he did. The claim that low uptake rate is evidence against the effectiveness of cryonics is nonsense on stilts. entirelyuseless' point was that if you are in a tiny minority and you don't understand why the great majority doesn't join you, your understanding of the situation is... limited.

James Miller countered by implying that this problem can be solved if one assumes that it's the elite (IQ giants, possessors of secret gnostic knowledge, etc.) which signs up for cryonics and the vast majority of the population is just too stupid to take a great deal when it sees it.

My counter-counter was that you can pick any measure by which to choose your elite (e.g. IQ) and still find that only a minuscule fraction of that elite chose cryonics -- which means that the "just ignore the stupid and look at the smart ones" argument does not work.

comment by entirelyuseless · 2015-09-14T21:29:41.473Z · LW(p) · GW(p)

Someone who mistakenly believes that he is Napoleon presumably thinks that he himself is impressive intellectually, and in the artificial example I was discussing, he would think that others who believe the same thing are also impressive. However, it's also true that outside observers would not admit that, and in the cryonics case many people would, so in this respect the cryonics case is much more favorable than the Napoleon example. However, as Lumifer pointed out, this is not a terribly strong positive argument, given that you will be able to find equally intelligent people who have not signed up for cryonics.

In the Hanson etc airport situation, I would at least ask them why everyone else is in the long line, and if they had no idea then I would be pretty suspicious. In the cryonics case, in reality, I would expect that they would at least have some explanation, but whether it would be right or not is another matter. Ettinger at least thought that his proposal would become widely accepted rather quickly, and seems to have been pretty disappointed that it was not.

In any case, I wasn't necessarily saying that signing up for cryonics is a bad thing, just that it seems like a situation where you should understand why other people don't, before you do it yourself.

comment by Lumifer · 2015-09-14T17:04:00.882Z · LW(p) · GW(p)

My understanding is that the number of people signed up is in the thousands

gjm posted a link to the data: Alcor says it has about 1,000 members at the moment.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-09-14T17:09:06.375Z · LW(p) · GW(p)

Yes, I meant including other groups. It might be around 2,000 or so total but I didn't want to assert that it is that low because I don't know that for a fact.

comment by [deleted] · 2015-09-15T04:41:32.063Z · LW(p) · GW(p)

But the logic that makes signing up for cryonics make sense is the same logic that humans are REALLY BAD AT doing. Following the crowd is generally a good heuristic, but you have to recognize its limitations.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-09-15T13:43:12.957Z · LW(p) · GW(p)

In principle this is saying that you know why most people don't sign up, so if you're right about that, then my argument doesn't apply to your case.

comment by Fluttershy · 2015-09-14T13:52:01.878Z · LW(p) · GW(p)

I'm impressed at how positively the author portrayed cryonicists. The parts which described the mishaps which occurred during/before the freezing process were especially moving.

Replies from: advancedatheist
comment by advancedatheist · 2015-09-14T14:15:46.715Z · LW(p) · GW(p)

The article discusses the Brain Preservation Foundation. The BPF has responded here:

A COURAGEOUS STORY OF BRAIN PRESERVATION, “DYING YOUNG” BY AMY HARMON, THE NEW YORK TIMES.

http://www.brainpreservation.org/a-courageous-story-of-brain-preservation-dying-young-by-amy-harmon-the-new-york-times/

comment by NancyLebovitz · 2015-09-16T17:52:33.309Z · LW(p) · GW(p)

How Grains Domesticated Us by James C. Scott. This may be of general interest as a history of how people took up farming (a more complex process than you might think), but the thing that I noticed was that there are only a handful (seven, I think) of grain species that people domesticated, and it all happened in the Neolithic Era. (I'm not sure about quinoa.) Civilized people either couldn't or wouldn't find another grain species to domesticate, and civilization presumably wouldn't have happened without the concentrated food and feasibility of social control that grain made possible.

Could domesticable grain be a rather subtle filter for technological civilization? On the one hand, we do have seven species, not just one or two. On the other, I don't know how likely a biome that makes domesticable grain possible is.

Replies from: g_pepper, Lumifer, Vaniver
comment by g_pepper · 2015-09-16T18:39:13.718Z · LW(p) · GW(p)

I suspect that developing a highly nutritious crop that is easy to grow in large quantities is a prerequisite for technological civilization. However, I wonder if something other than grains might have sufficed (e.g. potatoes).

Replies from: NancyLebovitz, polymathwannabe
comment by NancyLebovitz · 2015-09-16T20:09:33.102Z · LW(p) · GW(p)

One of the points made in the video is that it's much easier to conquer and rule people who grow grains than people who grow root crops. Grains have to be harvested in a timely fashion-- the granaries can be looted, the fields can be burnt. If your soldiers have to dig up the potatoes, it just isn't worth it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-09-18T05:14:24.610Z · LW(p) · GW(p)

Yes, it's easier to loot people who grow grains than roots, but I don't think that's so relevant to taxation by a stationary bandit.

comment by polymathwannabe · 2015-09-16T22:22:36.089Z · LW(p) · GW(p)

Hmm, abundant and easily accessible food is also a requisite for the evolution of eusocial animal colonies. I guess that's what cities ultimately are.

comment by Lumifer · 2015-09-16T18:51:05.338Z · LW(p) · GW(p)

Grain is just food that happened to possess two essential features:

  • Making it was sufficiently productive, that is, a group of humans could grow more grain than they themselves would need;

  • It could be stored for a long time with only minor spoilage. Having reserves of stored food to survive things like winters, droughts, and plagues of locusts is rather essential for a burgeoning civilization. Besides, without non-perishable food it's hard to have cities.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-09-20T20:29:18.622Z · LW(p) · GW(p)

You left out an important property:

  • Making it requires that the makers stay in the same place for a large fraction of the year. Furthermore, if they are forced to leave for any reason, all the effort they have expended so far is wasted and they probably can't try again until next year.
Replies from: Lumifer
comment by Lumifer · 2015-09-21T15:11:05.948Z · LW(p) · GW(p)

That's a relevant feature for figuring out the consequences of depending on grain production. I'm not sure it's a relevant feature for the purposes of deciding why growing grains became so popular.

comment by Vaniver · 2015-09-16T18:36:02.476Z · LW(p) · GW(p)

Could domesticable grain be a rather subtle filter for technological civilization?

This seems somewhat unlikely to me, and we might be able to answer it by exploring "grain." It seems to me that there are a handful of non-grain staple crops around the world, which suggests that for this filter to apply, a planet would need to have no controllable vegetation sufficient for humans to sustain themselves on (either directly, or indirectly through feed animals). Even ants got agriculture to work.

Replies from: None
comment by [deleted] · 2015-09-17T12:58:50.798Z · LW(p) · GW(p)

Potatoes, sweet potatoes, turnips, taro, tapioca, those weird South American tubers related to Malabar spinach, and the tubers of runner beans immediately come to mind as long term storable calorie crops.

Of note, the consumption of flour has recently been pushed back to at the very least 32,000 years ago, probably much longer, even if field agriculture has not:

http://www.npr.org/sections/thesalt/2015/09/14/440292003/paleo-people-were-making-flour-32-000-years-ago

Replies from: Lumifer
comment by Lumifer · 2015-09-17T15:36:27.819Z · LW(p) · GW(p)

long term storable calorie crops

Doesn't that depend on the climate? I don't know how long you can store potatoes and such in tropical climates -- my guess is not for long. If you are in, say, Northern Europe, the situation changes considerably.

Plus, the tubers you name are predominantly starch and people relying on them as a staple would have issues with at least insufficient protein.

Replies from: None
comment by [deleted] · 2015-09-17T16:52:45.835Z · LW(p) · GW(p)

Climate does make a difference, for sure. But there are two things to consider. One, climates that are warmer let things rot more easily but tend to have longer or continuous growing seasons. Two, climate control is a thing that people do (dig deep enough and you get cooler temperatures almost anywhere on Earth), as is processing for storage via drying or chemical treatment.

Forgot to mention nuts too.

You are certainly right about protein. Something else must be added, be it meat or veggies of some sort or legumes.

Replies from: Lumifer
comment by Lumifer · 2015-09-17T17:18:59.053Z · LW(p) · GW(p)

nuts

Hm, interesting. I don't know of any culture which heavily relied on nuts as a food source. I wonder why that is so. Nuts are excellent food -- fairly complete nutritionally, high caloric density, don't spoil easily, etc. Moreover, they grow on trees, so once you have a mature orchard, you don't need to do much other than collect them. One possibility is that trees are too "inflexible" for agriculture -- if your fields got destroyed (say, an army rolled through), you'll get a new crop next year (conditional, of course, on having seed grain, labour to work the fields, etc.). But if your orchard got chopped down, well, the wait till the new crop is much longer. A counter to this line of thought is complex irrigation systems which are certainly "inflexible" and yet were very popular. I wonder how land-efficient (calories/hectare) nut trees are.

Ah, I just figured out that coconuts are nuts and there are Polynesian cultures which heavily depend on them. But still, there is nothing in temperate regions and there are a lot of nut trees and bushes growing there.

Replies from: None
comment by [deleted] · 2015-09-17T17:39:09.642Z · LW(p) · GW(p)

I'm aware of pre-European Californian societies whose main calorie crop was acorns, rendered edible by crushing and then soaking to remove irritating tannins, then cooked, and sometimes preserved by soaking in various other substances.

Replies from: Lumifer
comment by Lumifer · 2015-09-17T17:51:29.498Z · LW(p) · GW(p)

Yes, a good point. But weren't these American Indians mostly hunter-gatherers? I don't know if you can say that they engaged in agriculture. Some other tribes did, but those didn't rely on nuts or acorns.

Replies from: None
comment by [deleted] · 2015-09-17T18:42:14.994Z · LW(p) · GW(p)

Eh, to my mind the boundary between agriculture and gathering is fuzzy when your plants live a long time and grow pretty thickly and you encourage the growth of those you like.

Like, there are 11.5k-year-old seedless fig trees found in the Middle East, a thousand years before there's any evidence of grain field agriculture. Those simply don't grow unless planted by humans.

Replies from: Lumifer
comment by Lumifer · 2015-09-17T19:00:28.242Z · LW(p) · GW(p)

All true. Still, grain very decisively won over nuts. I wonder if there's a good reason for that or it was just a historical accident. Maybe you can just make many more yummy things out of flour than out of nuts. Or maybe nuts don't actually store all that well because of fats going rancid...

comment by skeptical_lurker · 2015-09-14T13:50:36.512Z · LW(p) · GW(p)

AI risk going mainstream

This week on the BBC you may get the impression that the robots have taken over. Every day, under the banner Intelligent Machines, we will bring you stories online, on TV and radio about advances in artificial intelligence and robotics and what they could mean for us all.

Why now? Well at the end of last year Prof Stephen Hawking told the BBC that full artificial intelligence could spell the end for mankind. [...] That gloomy view started a public debate. Roboticists and computer scientists who specialise in the AI field rushed to reassure us that the "singularity", the moment when machines surpass humans, is so far off that it is still the stuff of science fiction.

Looks like Stephen Hawking is finally someone of high enough status that he can say this sort of thing and people will take him seriously.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T09:56:13.080Z · LW(p) · GW(p)

Why now? Well at the end of last year Prof Stephen Hawking told the BBC that full artificial intelligence could spell the end for mankind. [...] That gloomy view started a public debate.

That's a pretty self-serving explanation from the BBC. I think Bostrom's book played a major role in the change we have seen in the last year. It can be read by intelligent people, and then they understand the problem. Beforehand there was no straightforward way to get a deep understanding of the issue.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-20T15:11:36.440Z · LW(p) · GW(p)

I came across Bostrom a decade ago. I'm sure his book is great, but 'Bostrom writes a book' isn't that different from 'Bostrom has a website'. Also, Kurzweil had some best-selling books out a long time ago.

Elon Musk also made similar claims lately, and Bill Gates. Bostrom is pretty smart, but he's not a pre-existing household name like these guys.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-20T18:21:41.538Z · LW(p) · GW(p)

Also, Kurzweil had some best-selling books out a long time ago.

Yes, but with a quite different message.

I came across Bostrom a decade ago. I'm sure his book is great but 'Bostrom writes a book' isn't that different from 'Bostrom has a website'.

No, it's quite different.

I don't think Bill Gates would have made those claims if it weren't for Bostrom's book. Bill Gates also promotes the book to other people. Bill Gates likely wouldn't tell important people: "Go read up on Bostrom's website how we should think about AGI risk", the way he does with the book.

Elon Musk is a busy guy with 80-hour workweeks. Bostrom and FHI made the case to him that UFAI risk is important. Personal conversations were likely important, but reading Bostrom's book helped raise the importance of the issue in Elon's mind.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-20T21:33:09.904Z · LW(p) · GW(p)

Oh, so Bostrom was behind these three people? Then his book is more important than I thought.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-20T23:14:06.814Z · LW(p) · GW(p)

I'm not saying that Bostrom was behind Stephen Hawking's remarks, but I think he's partly responsible for Musk's and Gates's positions.

When it comes to Musk, I think there was a Facebook post a while ago about FHI's efforts in drafting Musk for the cause.

With Gates there's https://www.youtube.com/watch?v=6DBNKRYVY8g where Gates and Musk sit at a conference for the Chinese elite and get interviewed by Baidu's CEO. As part of that, Gates gets asked for his take on AI risk, and he says that he's concerned and that people who want to delve deeper into the issue should read Bostrom's book. As far as the timeline goes, I think it's probable that Gates's public comments on the issue came after he read the book.

I don't think that a smart person suddenly starts to fear AI risk because they read in a newspaper that Stephen Hawking is afraid of it. On the other hand, a smart person who reads Bostrom's book can be convinced by the book's case that the issue is really important.

That's something a book can do but that newspapers usually don't. Books that express ideas in a way that convinces the smart people who read them are powerful.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-21T07:38:39.720Z · LW(p) · GW(p)

I don't think that a smart person suddenly starts to fear AI risk because they read in a newspaper that Stephen Hawking is afraid of it.

Well, Stephen Hawking is far smarter than most people, so on most subjects with which he is familiar it would be a good idea to update in the same direction as him, unless you are an expert on it too.

Also, it raises AI risk as a possible concern, at which point people might then try to find more information, such as Bostrom's book, or website.

So yes, people get more information from reading a book than reading a newspaper article, but the article might be what led them to read the book in the first place.

comment by Richard Korzekwa (Grothor) · 2015-09-15T22:59:35.881Z · LW(p) · GW(p)

A while back, I was having a discussion with a friend (or maybe more of a friendly acquaintance) about linguistic profiling. It was totally civil, but we disagreed. Thinking about it over lunch, I noticed that my argument felt forced, while his argument seemed very reasonable, and I decided that he was right, or at least that his position seemed better than mine. So, I changed my mind. Later that day I told him I'd changed my mind and I thought he was right. He didn't seem to know how to respond to that. I'm not sure he even thought I was being serious at first.

Have other people had similar experiences with this? Is there a way to tell someone you've changed your mind that lessens this response of incredulity?

Replies from: Strangeattractor, Dahlen
comment by Strangeattractor · 2015-09-16T04:13:25.095Z · LW(p) · GW(p)

Sometimes saying why you changed your mind can help. In more detail than "his position seemed better than mine". But sometimes it takes doing some action that is in line with the new idea in order for other people to think you may be serious.

Another thing that may help is to wait some time before telling the person. "Later that day" makes it seem like a quick turnaround. Waiting until the next day to say something like "I've had some time to think about it, and I think you were right about X" might make more sense to the other person and lessen the incredulity.

Also, it depends on what your past history has been with this person, and what they have observed in your behaviour.

comment by Dahlen · 2015-09-17T18:43:24.328Z · LW(p) · GW(p)

It happened to me only with people who were extremely, unreasonably cynical about people's rationality in the first place (including their own). People who couldn't update on the belief of people being unable to update on their beliefs. There's an eerie kind of consistency about these people's beliefs, at least for that much one can give them credit...

You have to engage in some extra signaling of having changed your own mind; just stating it wouldn't be as convincing.

comment by Panorama · 2015-09-14T21:12:38.798Z · LW(p) · GW(p)

The Fallacy of Placing Confidence in Confidence Intervals

Welcome to the web site for the upcoming paper "The Fallacy of Placing Confidence in Confidence Intervals." Here you will find a number of resources connected to the paper, including the paper itself, the supplement, teaching resources and, in the future, links to discussion of the content.

The paper is accepted for publication in Psychonomic Bulletin & Review.

pdf

Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and can lead to unjustified or arbitrary inferences. For this reason, we caution against relying upon confidence interval theory to justify interval estimates, and suggest that other theories of interval estimation should be used instead.
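To make the frequentist coverage property concrete, here is a minimal simulation sketch (plain NumPy, my own illustration rather than anything from the paper): it repeatedly samples from a normal distribution with a known mean and checks how often the textbook 95% CI for the mean contains the true value.

import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 10.0, 2.0, 25, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, size=n)
    se = sigma / np.sqrt(n)   # known-sigma case keeps the example simple
    # 1.96 is the 97.5th percentile of the standard normal
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lo <= true_mu <= hi)

print(covered / reps)  # close to 0.95 -- the long-run coverage guarantee

The guarantee is a statement about the procedure over repeated samples; as the abstract stresses, it does not by itself say anything about which parameter values are plausible given the one interval you actually computed.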

Replies from: None
comment by [deleted] · 2015-09-15T01:32:15.068Z · LW(p) · GW(p)

The Fallacy of Placing Confidence in Confidence Intervals

I just read through this, and it sounds like they're trying to squish a frequentist interpretation on a Bayesian tool. They keep saying how the confidence intervals don't correspond with reality, but confidence intervals are supposed to be measuring degrees of belief. Am I missing something here?

Replies from: VincentYu
comment by VincentYu · 2015-09-15T05:06:49.338Z · LW(p) · GW(p)

I briefly skimmed the paper and don't see how you are getting this impression. Confidence intervals are—if we force the dichotomy—considered a frequentist rather than Bayesian tool. They point out that others are trying to squish a Bayesian interpretation on a frequentist tool by treating confidence intervals as though they are credible intervals, and they state this quite explicitly (p.17–18, emphasis mine):

Finally, we believe that in science, the meaning of our inferences are important. Bayesian credible intervals support an interpretation of probability in terms of plausibility, thanks to the explicit use of a prior. Confidence intervals, on the other hand, are based on a philosophy that does not allow inferences about plausibility, and does not utilize prior information. Using confidence intervals as if they were credible intervals is an attempt to smuggle Bayesian meaning into frequentist statistics, without proper consideration of a prior. As they say, there is no such thing as a free lunch; one must choose. We suspect that researchers, given the choice, would rather specify priors and get the benefits that come from Bayesian theory. We should not pretend, however, that the choice need not be made. Confidence interval theory and Bayesian theory are not interchangeable, and should not be treated as so.

Replies from: None
comment by [deleted] · 2015-09-15T05:09:22.383Z · LW(p) · GW(p)

Hmmm, yes, I suppose I was making the same mistake they were... I thought confidence intervals were actually what credible intervals are.

Replies from: VincentYu
comment by VincentYu · 2015-09-15T05:31:37.875Z · LW(p) · GW(p)

I see. Looking into this, it seems that the (mis)use of the phrase "confidence interval" to mean "credible interval" is endemic on LW. A Google search for "confidence interval" on LW yields more than 200 results, of which many—perhaps most—should say "credible interval" instead. The corresponding search for "credible interval" yields less than 20 results.

comment by Adam Zerner (adamzerner) · 2015-09-16T02:00:04.047Z · LW(p) · GW(p)

How many hours of legitimate work do you get done per day?

Legitimate = uninterrupted, focused work. Regarding the time you spend working but not fully focused, use your judgement in scaling it. I.e., maybe an hour of semi-productive work = 0.75 hours of legitimate work.

Edit: work doesn't only include work for your employer/school. It could be self-education, side projects, etc. It doesn't include chores or things like casual pleasure reading, though. Per day = per day that you intend to put in a full day's work.

[pollid:1029]

Replies from: Vladimir_Golovin, Gunnar_Zarncke, adamzerner
comment by Vladimir_Golovin · 2015-09-18T07:06:20.001Z · LW(p) · GW(p)

I do about 3 hours of legit work when I'm in my usual situation (family, work), but I do way more when I'm alone, both on- and off-the-grid: 12 hours or even more (of course assuming that the problem I'm working on is workable and I don't hit any serious brick walls). My last superfocus period lasted about two weeks; it happened when my family went on vacation and I took a mini-vacation from work myself (though the task I was working on was pretty trivial). My longest superfocus period was about 45 days, on a long off-the-grid vacation.

comment by Gunnar_Zarncke · 2015-09-16T22:46:37.006Z · LW(p) · GW(p)

In the absence of any indication whether this includes weekends, I assumed that it doesn't. On weekends my productivity is way lower.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-09-16T23:02:34.698Z · LW(p) · GW(p)

Good point. I intended for it to mean "on days where you intend to put in a full day's work". I'm a little crazy, so for me that's every day :) But I definitely should have clarified.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-09-17T06:49:41.805Z · LW(p) · GW(p)

I also don't strictly distinguish between work days and other days, but you clarified that the time shouldn't include chores (which are work too, just not usually associated with work for money or education), so I had to make some cut. If you had included any kind of productive work, the number would have read differently. Lots of pleasure reading (e.g. LW) can count as such; the line (or factor) could be how much it contributes to your own future development.

comment by Adam Zerner (adamzerner) · 2015-09-16T22:07:55.836Z · LW(p) · GW(p)

This is way lower than I expected. Thoughts?

Replies from: Gunnar_Zarncke, lmm
comment by Gunnar_Zarncke · 2015-09-16T22:47:39.006Z · LW(p) · GW(p)

Maybe you should have added another poll that asked for formally expected or billed hours.

comment by lmm · 2015-09-17T19:47:51.594Z · LW(p) · GW(p)

It's about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focussed work is exhausting.

If you're looking for bias, this is a community where people who are less productive probably prefer to think of themselves as intelligent but akratic. Also, for any students here, you've asked at the end of a long holiday.

comment by qmotus · 2015-09-17T07:45:16.566Z · LW(p) · GW(p)

Should we actually expect 'big world immortality' to be true? I know the standard LW response is that what we should care about is measure, but what I'm interested in is whether, in every situation in which we can find ourselves, we should expect a never-ending continuity of consciousness.

Max Tegmark has put forth a couple of objections: the original one (apart from simple binary situations, a consciousness often undergoes diminishment before dying and there's no way to draw continuity from it to a world in which it survives) and a newer one in Our Mathematical Universe (he doesn't think there are "actual" infinities in nature and, therefore, a relevant world doesn't always exist). Any others?

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T09:13:22.471Z · LW(p) · GW(p)

Could you define exactly what you mean with 'big world immortality'?

Replies from: qmotus
comment by qmotus · 2015-09-19T10:29:07.474Z · LW(p) · GW(p)

Quantum immortality is an example, but something similar would arguably also apply to, for example, a multiverse or a universe of infinite size or age. Basically the idea that an observer should perceive subjective immortality, since in a big world there is always a strand in which they continue to exist.

Edit: Essentially, I'm talking about cryonics without freezers.

comment by philh · 2015-09-14T17:26:58.096Z · LW(p) · GW(p)

I have a variant on linear regression. Can anyone tell me what it's called / point me to more info about it / tell me that it's (trivially reducible to / nothing like) standard linear regression?

Standard linear regression has a known matrix X = x(i,j) and a known target vector Y = y(j), and seeks to find weights W = w(i) to best approximate X * W = Y.

In my version, instead of knowing the values of the input variables (X), I know how much each contributes to the output. So I don't know x(i,j) but I kind of know x(i,j) * w(i), except that W isn't really a thing. And I know some structure on X: every value is either 0, or equal to every other value in its row. (I can tell those apart because the 0s contribute zero and the others contribute nonzero.) I want to find the best W to approximate X * W = Y, but that question will depend on what I want to do with the uncertainty in X, and I'm not sure about that.

I should probably avoid giving my specific scenario, so think widget sales. You can either sell a widget in a city or not. Sales of a widget will be well-correlated between cities: if a widget sells well in New York, it will probably sell well in Detroit and in Austin and so on, with the caveat that selling well in New York means a lot more sales than selling well in Austin. I have a list of previous widgets, and how much they sold in each city. Received wisdom is that a widget will sell about twice as much in New York as in Detroit, and a third more than in Austin, but I want to improve on the received wisdom.

So I'm told that a widget will sell 10 million, and that it will be sold in (list of cities). I want to come up with the best estimate for its sales in New York, its sales in Austin, etc.

Hopefully this is clear?

Replies from: bogus, Lumifer
comment by bogus · 2015-09-15T10:25:57.490Z · LW(p) · GW(p)

Sounds like your problem is fitting a sparse matrix, i.e. where you want many entries to be 0. This is usually called compressed sensing, and it's non-trivial.
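For anyone unfamiliar with the term, here is a minimal sketch of the sparsity idea (my own illustration, using scikit-learn's Lasso rather than a true compressed-sensing solver, and not specific to philh's setup): an L1 penalty drives most fitted coefficients to exactly zero, so the fit itself selects which entries are nonzero.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features = 50, 200           # more unknowns than observations
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]  # only 5 coefficients are actually nonzero
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

model = Lasso(alpha=0.1).fit(X, y)        # L1 penalty pushes coefficients to zero
print(int((np.abs(model.coef_) > 1e-6).sum()))  # roughly 5 nonzero weights recovered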

comment by Lumifer · 2015-09-14T17:39:20.433Z · LW(p) · GW(p)

Well, it's going to depend on some specifics and on how much data do you have (with the implications for the complexity of the model that you can afford), but the most basic approach that comes to my mind doesn't involve any regression at all.

Given your historical data ("I have a list of previous widgets, and how much they sold in each city") you can convert the sales per widget per city into percentages (e.g. widget A sold 27% in New York, 15% in Austin, etc.) and then look at the empirical distribution of these percentages by city.

The next step would be introducing some conditionality -- e.g. checking whether the sales percentage per city depends, for example, on the number of cities where the widget was sold.

Generally speaking, you want to find some structure in your percentages by city, but what kind of structure is there really depends on your particular data.
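A minimal sketch of that first step (pandas, with a made-up data layout since the real one isn't given): convert each widget's sales into per-city shares, then look at how those shares are distributed by city.

import pandas as pd

# hypothetical historical data: one row per (widget, city) pair with total sales
df = pd.DataFrame({
    "widget": ["A", "A", "B", "B", "C", "C"],
    "city":   ["New York", "Austin", "New York", "Detroit", "Austin", "Detroit"],
    "sales":  [270, 150, 500, 250, 90, 45],
})

# share of each widget's sales contributed by each city
df["share"] = df["sales"] / df.groupby("widget")["sales"].transform("sum")

# empirical distribution of city shares across widgets
print(df.groupby("city")["share"].describe())

From there, the conditioning Lumifer mentions amounts to grouping by extra columns (e.g. the number of cities a widget was sold in) before summarizing.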

Replies from: philh
comment by philh · 2015-09-14T22:56:43.649Z · LW(p) · GW(p)

The problem - at least the one I'm currently focusing on, which might not be the one I need to focus on - is converting percentages-by-city on a collection of subsets, into percentages-by-city in general. I'm currently assuming that there's no structure beyond what I specified, partly because I'm not currently able to take advantage of it if there is.

A toy example, with no randomness, would be - widget A sold 2/3 in city X and 1/3 in city Y. Widget B sold 6/7 in city X and 1/7 in city Z. Widget C sold 3/4 in city Y and 1/4 in city Z. Widget D is to be sold in cities X, Y and Z. What fraction of its sales should I expect to come from each city?

The answer here is 0.6 from X, 0.3 from Y and 0.1 from Z, but I'm looking for some way to generate these in the face of randomness. (My first thought was to take averages - e.g. city X got an average of (2/3 + 6/7)/2 = 16/21 of the sales - and then normalize those averages. But none of the AM, GM and HM gave the correct results on the toy version, so I don't expect them to do well with high randomness. It might be that with more data they come closer to being correct, so that's something I'll look into if no one can point me to existing literature.)

Replies from: skeptical_lurker, Lumifer
comment by skeptical_lurker · 2015-09-15T13:03:20.019Z · LW(p) · GW(p)

So, there's some sort of function mapping from (cities,widgets)->sales, plus randomness. In general, I would say use some standard machine learning technique, but if you know the function is linear you can do it directly.

So:

sales = constant x cityvalue x widgetvalue + noise

d sales/d cityvalue = constant x widgetvalue

d sales/d widgetvalue = constant x cityvalue

(all vectors)

So then you pick random starting values of cityvalue and widgetvalue, calculate the error, and do gradient descent.

Or just plug

Error = sum((constant x cityvalue x widgetvalue - sales)^2)

Into an optimisation function, which will be slower but quicker to code.
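Here is a minimal sketch of that "plug it into an optimisation function" route (SciPy, with the constant folded into the city values and a log parametrisation to keep everything positive; these are my choices, not part of skeptical_lurker's comment). It uses philh's toy numbers from above and recovers city shares near 0.6 / 0.3 / 0.1.

import numpy as np
from scipy.optimize import minimize

# philh's toy data: per-widget sales shares for cities X, Y, Z (nan = not sold there)
shares = np.array([
    [2/3, 1/3, np.nan],    # widget A: sold in X and Y
    [6/7, np.nan, 1/7],    # widget B: sold in X and Z
    [np.nan, 3/4, 1/4],    # widget C: sold in Y and Z
])
mask = ~np.isnan(shares)

def error(params):
    # params hold log city values (3) followed by log widget values (3)
    city = np.exp(params[:3])
    widget = np.exp(params[3:])
    pred = np.outer(widget, city)          # model: sales = widgetvalue x cityvalue
    return np.sum((pred - shares)[mask] ** 2)

res = minimize(error, x0=np.zeros(6))
city = np.exp(res.x[:3])
print(city / city.sum())                   # approximately [0.6, 0.3, 0.1]

Gradient descent by hand, as suggested first, would just replace the call to minimize with an explicit update loop on the same error function.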

Replies from: philh
comment by philh · 2015-09-15T13:57:34.773Z · LW(p) · GW(p)

Thank you! This seems like the conceptual shift I needed.

comment by Lumifer · 2015-09-14T23:46:56.440Z · LW(p) · GW(p)

You need to specify what kind of randomness you are expecting. For example, the standard ordinary least-squares regression expects no noise at all in the X values and the noise in Y to be additive, iid, and zero-mean Gaussian. If you relax some of these assumptions (e.g. your noise is autocorrelated) some properties of your regression estimates hold and some do not any more.

In the frequentist paradigm I expect you to need something in the maximum-likelihood framework. In the Bayesian paradigm you'll need to establish a prior and then update on your data in a fairly straightforward way.

In any case you need to be able to write down a model for the process that generates your data. Once you do, you will know the parameters you need to estimate and the form of the model will dictate how the estimation will proceed.

Replies from: philh
comment by philh · 2015-09-15T09:30:01.464Z · LW(p) · GW(p)

Sure, I'm aware that this is the sort of thing I need to think about. It's just that right now, even if I do specify exactly how I think the generating process works, I still need to work out how to do the estimation. I somewhat suspect that's outside of my weight class (I wouldn't trust myself to be able to invent linear regression, for example). Even if it's not, if someone else has already done the work, I'd prefer not to duplicate it.

Replies from: gwern, satt
comment by gwern · 2015-09-16T16:26:40.631Z · LW(p) · GW(p)

It's just that right now, even if I do specify exactly how I think the generating process works, I still need to work out how to do the estimation.

If you can implement a good simulation of the generating process, then you are already done - estimating is as simple as ABC. (Aside from the hilariously high computing demands of the naive/exact ABC, I've been pleased & impressed just how dang easy it is to use ABC. Complicated interval-censored data? No problem. Even more complicated mixture distribution / multilevel problem where data flips from garbage to highly accurate? Ne pas!)

comment by satt · 2015-09-16T01:17:42.468Z · LW(p) · GW(p)

Even if you know only the generating process and not an estimation procedure, you might be able to get away with just feeding a parametrization of the generating process into an MCMC sampler, and seeing whether the sampler converges on sensible posterior distributions for the parameters.

I like Stan for this; you write a file telling Stan the data's structure, the parameters of the generating process, and how the generating process produced the data, and Stan turns it into an MCMC sampling program you can run.

If the model isn't fully identified you can get problems like the sampler bouncing around the parameter space indefinitely without ever converging on a decent posterior. This could be a problem here; to illustrate, suppose I write out my version of skeptical_lurker's formulation of the model in the obvious naive way —

sales(city, widget) = α × β(city) × γ(widget) + noise(city, widget)

— where brackets capture city & widget-type indices, I have a β for every city and a γ for every widget type, and I assume there's no odd correlations between the different parameters.

This version of the model won't have a single optimal solution! If the model finds a promising set of parameter values, it can always produce another equally good set of parameter values by halving all of the β values and doubling all of the γ values; or by halving α and the γ values while quadrupling the β values; or by...you get the idea. A sampler might end up pulling a Flying Dutchman, swooping back and forth along a hyper-hyperbola in parameter space.

I think this sort of under-identification isn't necessarily a killer in Stan if your parameter priors are unimodal and not too diffuse, because the priors end up as a lodestar for the sampler, but I'm not an expert. To be safe, I could avoid the issue by picking a specific city and a specific widget as a reference widget type, with the other cities' β and other widgets' γ effectively defined as proportional to those:

if city == 1 and widget == 1: sales(city, widget) = α + noise(city, widget)

else, if city == 1: sales(city, widget) = α × γ(widget) + noise(city, widget)

else, if widget == 1: sales(city, widget) = α × β(city) + noise(city, widget)

else: sales(city, widget) = α × β(city) × γ(widget) + noise(city, widget)

Then run the sampler and back out estimates of the overall city-level sales fractions from the parameter estimates (1 / (1+sum(β)), β(2) / (1+sum(β)), β(3) / (1+sum(β)), etc.).

And I'd probably make the noise term multiplicative and non-negative, instead of additive, to prevent the sampler from landing on a negative sales figure, which is presumably nonsensical in this context.

Apologies if I'm rambling at you about something you already know about, or if I've focused so much on one specific version of the toy example that this is basically useless. Hopefully this is of some interest...

Replies from: gwern, philh
comment by gwern · 2015-09-17T18:25:16.730Z · LW(p) · GW(p)

And I'd probably make the noise term multiplicative and non-negative, instead of additive, to prevent the sampler from landing on a negative sales figure, which is presumably nonsensical in this context.

I know JAGS lets you put interval limits onto terms which lets you specify that some variable must be non-negative (looks something like dist(x,y)[0,∞]), so maybe STAN has something similar.

Replies from: satt
comment by satt · 2015-09-19T12:33:20.096Z · LW(p) · GW(p)

It does. However...

I see now I could've described the model better. In Stan I don't think you can literally write the observed data as the sum of the signal and the noise; I think the data always has to be incorporated into the model as something sampled from a probability distribution, so you'd actually translate the simplest additive model into Stan-speak as something like

data {
    int<lower=1> N;
    int<lower=1> Ncities;
    int<lower=1> Nwidgets;
    int<lower=1> city[N];
    int<lower=1> widget[N];
    real<lower=0> sales[N];
}

parameters {
    real<lower=0> alpha;
    real beta[Ncities];
    real gamma[Nwidgets];
    real<lower=0> sigma;
}

model {
    // put code here to define explicit prior distributions for parameters
    for (n in 1:N) {
        // the tilde means the left side's sampled from the right side
        sales[n] ~ normal(alpha + beta[city[n]] + gamma[widget[n]], sigma);
    }
}

which could give you a headache because a normal distribution puts nonzero probability density on negative sales values, so the sampler might occasionally try to give sales[n] a negative value. When this happens, Stan notices that's inconsistent with sales[n]'s zero lower bound, and generates a warning message. (The quality of the sampling probably gets hurt too, I'd guess.)

And I don't know a way to tell Stan, "ah, the normal error has to be non-negative", since the error isn't explicitly broken out into a separate term on which one can set bounds; the error's folded into the procedure of sampling from a normal distribution.

The way to avoid this that clicks most with me is to bake the non-negativity into the model's heart by sampling sales[n] from a distribution with non-negative support:

for (n in 1:N) {
    sales[n] ~ lognormal(log(alpha * beta[city[n]] * gamma[widget[n]]), sigma);
}

Of course, bearing in mind the last time I indulged my lognormal fetish, this is likely to have trouble too, for the different reason that a lognormal excludes the possibility of exactly zero sales, and you'd want to either zero-inflate the model or add a fixed nonzero offset to sales before putting the data into Stan. But a lognormal does eliminate the problem of sampling negative values for sales[n], and aligns nicely with multiplicative city & widget effects.

comment by philh · 2015-09-17T17:47:07.418Z · LW(p) · GW(p)

Thanks to both you and gwern. It doesn't look like this is the direction I'm going in for this problem, but it's something I'm glad to know about.

comment by Vaniver · 2015-09-20T18:40:35.124Z · LW(p) · GW(p)

I made a rationalist Tumblr, primarily to participate in rationalist conversations there. Solid posts will still be posted to LW, when I finish them.

comment by [deleted] · 2015-09-19T02:43:34.521Z · LW(p) · GW(p)

Singer vs. Van der Vossen:

Singer asks, if it’s obligatory to save the drowning child you happen to encounter at the expense of your shoes, why isn’t it obligatory not to buy the shoes in the first place, but instead to save a child in equally dire straits?

As a profession, we are in an odd but unfortunate situation. Our best philosophers and theorists develop accounts of global justice that are disconnected from the best empirical insights about poverty and prosperity.

Reading these theories, one might think that our best prospects for alleviating poverty around the world lie in policies of redistribution, foreign aid, reforms to the international system, new global institutions, and so on. And one might think that markets, property rights, and economic freedom are at best incidental, and more likely inimical, to the eradication of global poverty.

Such ignorance, if not denial, of the empirical findings about development and growth is irresponsible.

Bas van der Vossen

The article he is quoted in goes on to explain:

Mainstream development economics, in a nutshell, holds that poverty is an institutional problem. More precisely, poverty is human beings' natural state. Poverty is normal and does not need to be explained, but wealth does.

The main reason some nations are rich and others poor is not because some nations have better geography, better natural resources, or better genes. Rather, rich countries are rich because they have better institutions. Rich countries have institutions that incentivize growth and development.

These institutions include strong private property rights, inclusive and honest governments, stable political regimes, a dependable and inclusive legal system characterized by the rule of law, open and competitive markets, and free international trade.

Poor countries have institutions that fail to incentivize growth and development, and often instead have institutions that encourage predation. These countries have weak recognition or active disregard of property rights, exclusive and dishonest governments, unstable political regimes, undependable legal systems characterized by the capricious rule of men rather than the rule of law, and closed, rent-seeking, crony-capitalist markets, or few markets at all, and little international trade.

Overall, we are not against charity. We accept that charity is good on the margins. We both give money to various charities.

But to focus on charity as a means of fighting poverty is misguided. Mainstream development economics holds that international aid and charity tend to do little good overall, and tend to do as much harm as good.† ...

We share van der Vossen’s concerns.

To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it must also permit the entry of new businesses and allow people to choose their careers.†

Now, not every development economist shares Acemoglu and Robinson's exact views. But their general position — that private property, markets, and economic freedom are needed for sustained growth — is the mainstream view.

To be clear, we are not saying that mainstream development economics calls for libertarian politics at a domestic level. Economists have varying positions about the degree to which governments can and should correct market failures. They also have varying positions about the extent to which countries should provide social insurance to their citizens.

...

Our view — consistent with development economics — is that first-worlders' willingness to buy DVD players and iPhones, not their desire to donate their income, is the thing that actually makes the bigger difference in fighting poverty.

Taiwan and South Korea grew rich and became First World countries, not because of handouts, but because they produced and sold luxury goods (on the broad definition of “luxury good”) to the First World.

But, as far as we can tell, they talk this way because they are ignorant and/or misinformed about the relevant empirical work. Philosophers disagree with economists not because they have read and discovered serious flaws in the economists’ work, but because philosophers have for the most part just ignored development economics.

Philosophers are of course free to disagree with mainstream development economics, but they bear the burden of proof of refuting it. We do not bear the burden of defending it here.

...

But in the long term, we'll shut down the very economic system that made the First World rich. The Third World doesn't need to eat our success — they need to emulate it.

...

Second, the history of food donations is fraught with peril. Donating food to the Third World sometimes alleviates a famine — it is sometimes the thing to do in an emergency.

But, more frequently, first-world food donations just put Third World farmers out of business, and make them dependent on donations in the future.†

Again, the consensus in development economics is not that the Third World needs us to give them grain, but, on the contrary, that the Third World needs our governments to stop subsidizing grain production in the first world, so that we First Worlders instead buy our grain from the Third World.

Replies from: bogus, ChristianKl
comment by bogus · 2015-09-19T10:31:01.043Z · LW(p) · GW(p)

Mainstream development economics, in a nutshell, holds that poverty is an institutional problem.

This is quite right - the best case for development aid in poor countries is through its positive feedback on institutions (most plausibly, civil society). Then again, most proponents of effective giving favor interventions that would plausibly have such feedbacks - for instance, it turns out that a lot of the money GiveDirectly hands out to poor folks is spent on entrepreneurship and capital acquisition, not direct consumption.

comment by ChristianKl · 2015-09-19T14:15:11.043Z · LW(p) · GW(p)

South Korea and Taiwan didn't have the problem of malaria killing children in whom society had invested resources.

I don't understand why Van der Vossen thinks that there is clear evidence that the difference between what happened in a country like South Korea and what happened in sub-Saharan Africa has nothing to do with genes. Of course that's the politically correct belief. But standing there and saying that development economics has proved it beyond all doubt seems strange to me.

The rule of law does happen to be an important ingredient in producing wealth, but I don't think you get the rule of law directly through buying iPhones.

To the extent that you believe that the rule of law is very useful in helping third world countries, the next question would be whether there are cost-effective interventions to increase it. That's a standard EA question.

Again, the consensus in development economics is not that the Third World needs us to give them grain, but, on the contrary, that the Third World needs our governments to stop subsidizing grain production in the first world, so that we First Worlders instead buy our grain from the Third World.

That seems like something nice to say, but politically it's very hard to imagine a First World government giving up its country's ability to feed its own population without being dependent on outside forces.

Politically it's easier to ship excess grain from Europe to Africa than to burn it, but the excess grain doesn't get produced with the goal of feeding Africans at all; it gets produced so that European farmers provide Europe with a food supply that can also hold up in times of crisis.

Replies from: bogus
comment by bogus · 2015-09-19T15:40:21.220Z · LW(p) · GW(p)

thinks that there is clear evidence that the difference between what happened in a country like South Korea and what happened in subsaharan Africa has nothing to do with genes.

Well, let's see. It's quite convenient for us that there's a country right next door to South Korea, called North Korea. North Korea has the same genes as South Korea, and yet its economy is much more similar to the economy of sub-Saharan Africa than to South Korea's. Sure, that's just N=1, anecdotes are not data and all that, but I'd call that pretty good evidence.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T17:18:21.772Z · LW(p) · GW(p)

The fact that the bad policies of North Korea lead to bad economic outcomes is no evidence that all bad economic outcomes are due to bad policies. It simply isn't.

Nobody in the EA camp denies that countries' policies matter or that property rights and the rule of law are important. Nor have I seen Peter Singer argue that engaging in trade with other countries, rather than putting embargoes on them to exert economic pressure, is bad.

Most African countries, on the other hand, don't suffer under strong embargoes. They are in the sphere of the IMF, which has been preaching property rights for decades and trying to get those countries to respect them.

comment by advancedatheist · 2015-09-21T02:46:14.933Z · LW(p) · GW(p)

I don't understand how the karma system here works. One of my posts below, about the usefulness of prostitutes for learning how to get into sexual relationships through dating regular women, dropped off for a while at -4 karma. Then I just checked, and it has +4 karma. Where did the 8 karma points come from?

This has happened to some of my posts before. Do I have some fans I don't know about who just happen to show up in a short interval to upvote my controversial posts?

Replies from: philh, Lumifer, MrMind
comment by philh · 2015-09-21T09:26:34.048Z · LW(p) · GW(p)

I think someone is using a bunch of alts to occasionally mega-upvote posts they like.

comment by Lumifer · 2015-09-21T15:26:02.012Z · LW(p) · GW(p)

I don't understand how the karma system here works.

I think you do -- what you do NOT have is a good model for predicting future karma scoring of your posts :-/

comment by MrMind · 2015-09-21T07:24:25.113Z · LW(p) · GW(p)

Welcome to the world of everybody on this forum.

Replies from: Elo
comment by Elo · 2015-09-22T07:07:36.451Z · LW(p) · GW(p)

I can make sense of karma, and I've generally been using it to tune my efforts towards posts that are more helpful and useful to people (or at least I think that's what I'm doing, rather than pandering).

comment by [deleted] · 2015-09-18T15:24:04.464Z · LW(p) · GW(p)

Many LWers, myself particularly, write awkwardly. Did you know Word can check your writing style, not just your spelling, with a simple option change? I'm already learning how to write with better style.

Replies from: Dahlen
comment by Dahlen · 2015-09-19T12:19:15.358Z · LW(p) · GW(p)

This is a good occasion for relying on natural rather than artificial intelligence. Here's a list of style suggestions that can be made by Word. It checks for a lot of things that can be considered bad style in some contexts but not in others, and to my knowledge it's not smart enough to differentiate between different genres. (For example, it can advise you both against passive voice – useful for writing fiction, sometimes – and against use of first-person personal pronouns, which is a no-no in professional documents. If it needs mentioning, sometimes you cannot follow both rules at once.) There's plenty of reason to doubt that a human who can't write very well can have an algorithm for a teacher in matters of writing style; we're not there yet, I think.

comment by [deleted] · 2015-09-20T05:33:47.073Z · LW(p) · GW(p)

The importance, tractability, and neglectedness approach is the go-to heuristic for EAs.

The Open Philanthropy Project approaches it like this:

“What is the problem?” = importance

“What are possible interventions?” = tractability

“Who else is working on it?” = neglectedness
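As a toy illustration (not part of the original comment), the three factors are often combined multiplicatively, so a cause only scores well if none of them is close to zero. The causes and the 0-to-1 scores below are made-up placeholders.

def itn_score(importance, tractability, neglectedness):
    """Combine the three factors multiplicatively: a near-zero value on any
    one of them drags the overall score towards zero."""
    return importance * tractability * neglectedness

# hypothetical causes with made-up scores on a 0-1 scale
causes = {
    "cause A": (0.9, 0.2, 0.8),
    "cause B": (0.5, 0.6, 0.4),
    "cause C": (0.3, 0.9, 0.9),
}

# rank the causes by their combined score, highest first
for name, scores in sorted(causes.items(), key=lambda kv: -itn_score(*kv[1])):
    print(name, round(itn_score(*scores), 3))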

I reckon it's a simplification of the rational planning model:

Intelligence gathering — A comprehensive organization of data, potential problems and opportunities are identified, collected and analyzed.

Identifying problems — Accounting for relevant factors.

Assessing the consequences of all options — Listing possible consequences and alternatives that could resolve the problem and ranking the probability that each potential factor could materialize, in order to give a correct priority in the analysis.

Relating consequences to values — With all policies there will be a set of relevant dimensional values (for example, economic feasibility and environmental protection) and a set of criteria for appropriateness, against which performance (or consequences) of each option being responsive can be judged.

Choosing the preferred option — The policy is brought through from fully understanding the problems, opportunities, all the consequences and the criteria of the tentative options, and by selecting an optimal alternative with the consensus of the involved actors.

What do you reckon?

comment by [deleted] · 2015-09-19T05:03:51.968Z · LW(p) · GW(p)

If a graduate student approached you to do a section of the data analysis of your research in return for credit/authorship and to meet degree requirements, what would you give her? Note, she's specified just a ''section'' and is not interested in any data collection, research administration or the like; she just wants to fulfill her mini-research project requirements.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T09:08:03.314Z · LW(p) · GW(p)

That depends obviously on the skills of the individual.

I think giving someone the Mnemosyne database to analyse for better ways to predict Spaced Repetition System learning would be useful if that person has enough skills to do genuine work. Gwern works to bring that data into a nicely downloadable format: https://archive.org/details/20140127MnemosynelogsAll.db
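For anyone who wants a starting point, here is a hedged sketch (mine, not part of the original comment) of a first look at such a download, assuming the file is an SQLite database; no table names are assumed, it just lists whatever is in there.

import sqlite3

conn = sqlite3.connect("mnemosyne_logs.db")  # hypothetical local filename for the downloaded database
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # inspect the schema before deciding what to model
conn.close()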

comment by advancedatheist · 2015-09-18T23:31:28.003Z · LW(p) · GW(p)

From the Foreword to Brave New World:

Nor does the sexual promiscuity of Brave New World seem so very distant. There are already certain American cities in which the number of divorces is equal to the number of marriages. In a few years, no doubt, marriage licenses will be sold like dog licenses, good for a period of twelve months, with no law against changing dogs or keeping more than one animal at a time. As political and economic freedom diminishes, sexual freedom tends compensatingly to increase. And the dictator (unless he needs cannon fodder and families with which to colonize empty or conquered territories) will do well to encourage that freedom. In conjunction with the freedom to daydream under the influence of dope and movies and the radio, it will help to reconcile his subjects to the servitude which is their fate.

comment by [deleted] · 2015-09-17T13:49:56.466Z · LW(p) · GW(p)

We can remember things we don't believe and believe things we don't remember. Which source of knowledge is a better authority for our expectations and priors?

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T09:12:34.894Z · LW(p) · GW(p)

Before asking that question, it's useful to ask why one wants to know priors and what one means by the term. A person with arachnophobia has, on a System 1 level, a prior that spiders are dangerous, but often doesn't have that prior on a System 2 level for small spiders.

comment by MrMind · 2015-09-17T07:41:09.565Z · LW(p) · GW(p)

I'm trying to wrap my mind around Stuart Armstrong's post on the Doomsday argument, and to do so I've undertaken the task of tabooing 'randomness' in the definitions of SIA and SSA.
My first attempt clearly doesn't work: "observers should reason giving the exact same degree of belief to any proposition of the form: 'I'm the first observer', 'I'm the second observer', etc." As it has been noted before many times, by me and by others, anthropic information changes the probability distribution, and any observer has at least a modicum of that. I suspect this conflict is what's thwarting my attempts at making sense of the topic.

Replies from: Viliam
comment by Viliam · 2015-09-17T08:48:45.341Z · LW(p) · GW(p)

the exact same degree of belief to any proposition of the form: 'I'm the first observer', 'I'm the second observer', etc.

Trying to assign the same degree of belief to infinitely many mutually exclusive options doesn't work. The probability of being observer #1 is greater than the probability of being observer #10^10, simply because some possible universes contain more than 1 but fewer than 10^10 observers.

I'm not sure how exactly the distribution should look; just saying in general that larger numbers have smaller probabilities. The exact distribution would depend on your beliefs about the universe, or actually about the whole Tegmark multiverse, and I don't have many strong beliefs in that area.

For example, if you believe that universe has a limited amount of particles and a limited amount of time, that would put an (insanely generous) upper bound on the number of observers in this universe.

Replies from: MrMind
comment by MrMind · 2015-09-18T06:57:28.970Z · LW(p) · GW(p)

Trying to assign the same degree of belief to infinitely many mutually exclusive options doesn't work.

Yeah, but the class of observers in the Doomsday argument is not infinite; usually one takes a small set and a huge set, both finite. So in theory you could assign a uniform distribution.

For example, if you believe that universe has a limited amount of particles and a limited amount of time, that would put an (insanely generous) upper bound on the number of observers in this universe.

Exactly, and that's an assumption I'm always willing to make, to circumvent the problem of an infinite class of reference.

The problem, though, is not the cardinality of the set but rather the uniformity of the distribution, which I think is what is implied by the word 'randomness' in S(S|I)A; intuitively I feel it shouldn't be uniform, due to the very definition of an observer.
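To make the uniform-sampling assumption concrete, here is a small sketch (mine, not MrMind's) of the usual finite two-hypothesis Doomsday calculation; the population totals and the 50/50 prior are made-up round numbers.

def ssa_posterior(rank, totals, priors):
    """Posterior over total-number-of-observers hypotheses given my birth rank,
    assuming I am drawn uniformly from each hypothesis's finite observer class."""
    joint = []
    for total, prior in zip(totals, priors):
        likelihood = 1.0 / total if rank <= total else 0.0  # uniform over ranks 1..total
        joint.append(likelihood * prior)
    norm = sum(joint)
    return [j / norm for j in joint]

# "doom soon" = 2e11 observers ever; "doom late" = 2e14 observers ever; my rank is about 1e11
print(ssa_posterior(1e11, totals=[2e11, 2e14], priors=[0.5, 0.5]))
# roughly [0.999, 0.001]: the uniformity assumption alone does almost all the work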

comment by [deleted] · 2015-09-19T02:58:50.636Z · LW(p) · GW(p)

The other day I heard someone reply 'so am I' to ''I'm sorry''. I'd never heard that before. Polite, but not as awkward as replying 'I'm sorry' right after the counterparty does.

comment by advancedatheist · 2015-09-18T15:30:46.789Z · LW(p) · GW(p)

Brave New World, Chapter 3:

"And after all," Fanny's tone was coaxing, "it's not as though there were anything painful or disagreeable about having one or two men besides Henry. And seeing that you ought to be a little more promiscuous …"

Lenina shook her head. "Somehow," she mused, "I hadn't been feeling very keen on promiscuity lately. There are times when one doesn't. Haven't you found that too, Fanny?"

Fanny nodded her sympathy and understanding. "But one's got to make the effort," she said, sententiously, "one's got to play the game. After all, every one belongs to every one else."

"Yes, every one belongs to every one else," Lenina repeated slowly and, sighing, was silent for a moment; then, taking Fanny's hand, gave it a little squeeze. "You're quite right, Fanny. As usual. I'll make the effort."

comment by Richard_Kennaway · 2015-09-15T19:53:33.460Z · LW(p) · GW(p)

A child of four is just as intelligent as the adult they will eventually become. They just have less knowledge to work with.

Replies from: Vaniver, knb, Grothor, username2
comment by Vaniver · 2015-09-15T21:06:19.678Z · LW(p) · GW(p)

Four seems very young. It looks like brain development continues in a non-trivial way for a long time, and late adolescence (i.e. 15ish) is when IQs stabilize, if I remember the literature correctly.

comment by knb · 2015-09-17T01:04:02.535Z · LW(p) · GW(p)

If this was true, wouldn't 4-year-olds perform very nearly as well as adults on IQ tests like Raven's Progressive Matrices? I haven't actually looked it up, but I would be very surprised if pre-adolescent children score the same as adults on these tests.

comment by Richard Korzekwa (Grothor) · 2015-09-15T23:19:48.728Z · LW(p) · GW(p)

This sounds a lot like the theory of crystallized vs fluid intelligence: https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence

As far as I know, by most any commonly used metric, both of these will increase well beyond four years of age. Vaniver mentions 15 years old, and I recall 19-20 years old being the number given for maximum fluid intelligence in the psychology textbook I had in undergrad.

comment by username2 · 2015-09-15T19:58:42.882Z · LW(p) · GW(p)

Is it true?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-09-15T20:36:10.452Z · LW(p) · GW(p)

It's a thought that occurred to me. Opinions are welcome.

Some explanatory footnotes:

"Four-year old" is just a focal example. Consider anyone, young, your own age, or old (these are the three ages of man) with whom you think there is a large conceptual distance to cross. Blues will be thinking of Greens and vice versa, but that's nothing on the scale of what I'm trying to point to. But very young children are a real example, not merely a parable.

When you are face to face with a stranger, you are looking at an alien AI. Their fundamental mechanism of discerning truth from falsity may be just as acute as yours, but is operating on completely different data. And it is the same for them, face to face with you.

There is more context for that thought, but that will do for now.

Replies from: Lumifer
comment by Lumifer · 2015-09-15T21:31:15.136Z · LW(p) · GW(p)

Their fundamental mechanism of discerning truth from falsity may be just as acute as yours, but is operating on completely different data.

We share the same physical reality, don't we?

comment by [deleted] · 2015-09-16T15:23:45.349Z · LW(p) · GW(p)

So that I may get a better overview of this account I have undertaken one task and plan to undertake another.

This is the task I plan to complete in the near future: reviewing what I have replied to and deciding whether to bookmark each link and its parent for further consideration or to disregard them.

This is the task already completed: Looking back over my post history, these are the only ones for which I currently expect that looking back upon them in the future will be a worthwhile exercise

etc

Posts published after this post are not included

Discussion board posts

Synthetic biology for good

Deworming a movement

Rationality (psychology, computer sci and behavioural economics)

daily checklist

internalised discounting

meta cognitive checklist

Change, help and understanding

Reducing dimensions of complexity

high performance psychology

What does correlation mean?

triaging problems with plausible psychological and physical solution spaces

Perhaps there should be trigger warning warnings

Uncertainty is okay

Critical thinking emergency department

Hunt down false memories

What does it mean when mental health conditions shift to one another?

time and expertise

towards evidence based meta-cognition and conscious thought

Extinction addiction

Public health*

Lack of community interest in extremely high payoff risky projects, AI excepted

GiveWell's irreverence for the controversial topics in development economics

Is there any evidence GiveWell's recommended charities make anyone feel better any more effectively than alternatives?

Could preoccupation with neglectedness be an easy way to keep the charity evaluation space free of its own analytical competitors, and who's keeping track of open problems in EA?

Precision medicine

Human genomics has been a let down so far

Some people may be more depressed because they aren't traumatised enough

health startups go with a whimper not a bang, if they take off at all

Regulators hold back precision medicine

Political economy

young people can become politicians too

Are market economies conjured evolutions that can't bridge the Lucas critique?

recent Thiel quotes

individuals might be able to bet beliefs and vote values but is there enough homogeneity in what beliefs and values mean to set up a multiagent system around it?

radio opinions

Oppenheimer and his documentary on paramilitaries, similar things and the people behind them

Motivation

competition and ordinariness

yo elliot

Affective truths in motivation

Procrastination and fear of success

Unexpected behaviour

What's MIRI hiding?

Structured sexism

Not only disinterested in playing paranoid debating online, LW's dislike the idea

Nobody seems interested in making a planning tool for lay people based to construct game theoretic scenarios

An effective altruism related post in an easy-points thread with minimal texts getting downvotes

LW'ers don't seem to like poetry

LW'ers don't like tangents or counterforensics

Replies from: Lumifer, polymathwannabe
comment by Lumifer · 2015-09-16T15:32:55.902Z · LW(p) · GW(p)

Dude, you ain't Eliezer or Yvain, it's a bit too early for you to start constructing Greatest Hits lists...

Replies from: Aiyen
comment by Aiyen · 2015-09-16T17:44:35.596Z · LW(p) · GW(p)

To be fair, if it helps someone find useful information, so much the better. If not, who does it harm?

Replies from: Lumifer
comment by Lumifer · 2015-09-16T18:15:23.963Z · LW(p) · GW(p)

If not, who does it harm?

It's noise and so harms everyone who is actually looking for information or whose time has value.

Imagine if I started to post random extracts from Wikipedia onto LW. Your argument would apply to them as well, would it not?

Replies from: None
comment by [deleted] · 2015-09-16T23:14:10.215Z · LW(p) · GW(p)

Sorry, I had the impression that my posts were more helpful than unhelpful because my karma balance is above zero.

I'm not confident I'm interpreting karma right, however, since I rarely see upvoted posts but see many downvoted posts.

There is also evidence suggesting I am misinterpreting karma and that it is around zero:

don't know exactly who you're supposed to persuade, but your track record so far on LessWrong shows that you barely manage to break even with your karma, and that you lack the level of self-awareness of a socially well-adapted person. Whoever you successfully persuade would have to be even more oblivious than you, which is saying something.

Edit: also, funny hearing this from you, Lumifer. You're a very prolific poster, and most of your content probably isn't the kind of information others are looking for. By extension of your logic, why don't you boo people who speak different languages, or people selling goods you don't want to buy yourself?

Replies from: Lumifer, Viliam
comment by Lumifer · 2015-09-17T15:45:32.774Z · LW(p) · GW(p)

why don't you boo people who speak different languages

I would boo people who came to LW and started having long public conversations in, say, German.

Besides, I think you're confusing "I have a right to do X" and "doing X is a good idea".

comment by Viliam · 2015-09-17T08:55:09.374Z · LW(p) · GW(p)

I had the impression that my posts were more helpful than unhelpful because my karma balance is above zero.

By removing the posts which individually have zero or negative karma you could have made the list half as long (and therefore more useful, per unit of time). I'd even say that post karma less than 3 is mere noise.

Replies from: None
comment by [deleted] · 2015-09-17T12:13:30.939Z · LW(p) · GW(p)

I reckon if that's your opinion as a blanket policy just update your account preferences to not show posts with less than 3 karma. It defaults to not showing less than 2 in any case.

comment by polymathwannabe · 2015-09-16T17:14:41.862Z · LW(p) · GW(p)

I can see the usefulness for you to have that list.

I don't see the usefulness for anyone else to have it.

Replies from: None
comment by [deleted] · 2015-09-16T22:33:34.279Z · LW(p) · GW(p)

I'm leaving LessWrong in a few days and want to save myself time if I later wonder which of my posts, if any, are worth revisiting once I've forgotten them.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-09-16T22:55:19.858Z · LW(p) · GW(p)

In that case you might have put these on your user page.

Replies from: None
comment by [deleted] · 2015-09-16T23:11:19.405Z · LW(p) · GW(p)

I tried to but it (edit: the Wiki) says I don't have one and don't have permission to create one.

comment by Fluttershy · 2015-09-15T09:03:38.374Z · LW(p) · GW(p)

Alcor Cryonics should rename itself to Alicorn Cryonics, because associating your brand with ponies makes marketing easier.

I'll leave it to you to determine if I meant for this comment to be taken seriously or sarcastically.

Replies from: advancedatheist
comment by advancedatheist · 2015-09-16T03:17:35.769Z · LW(p) · GW(p)

I don't know of any Bronies with cryonics arrangements.

Replies from: Fluttershy
comment by Fluttershy · 2015-09-16T03:56:00.385Z · LW(p) · GW(p)

The LW group on Fimfiction is pretty big, and I recognize a couple people with cryonics arrangements from that list. I'm leaning towards signing up for neuro with Alcor myself.

comment by [deleted] · 2015-09-19T02:54:00.446Z · LW(p) · GW(p)

namechk

Replies from: gjm
comment by gjm · 2015-09-19T09:54:20.295Z · LW(p) · GW(p)

What?

(No, I am not going to follow some random link just because someone posted it to LW.)

comment by advancedatheist · 2015-09-17T04:42:26.919Z · LW(p) · GW(p)

So how does getting sexual experience with prostitutes translate over to getting into sexual relationships with regular women through dating, anyway?

I met a 20-something woman at the Venturist cryonics convention in Laughlin, Nevada, last year who talked to me more than she needed to as a social acknowledgment, which made me wonder if she felt attracted to me. I don't know how to interpret these situations in the handful of times they have happened in my life, so I don't know what to do, and they make me anxious.

If I had sexual learning experiences only from prostitutes, and I had nothing else to go on, should I have asked this woman how much money she wanted to come with me to my room in Laughlin's hotel for sex?

Replies from: ChristianKl, polymathwannabe, Good_Burning_Plastic, MrMind
comment by ChristianKl · 2015-09-19T09:24:05.714Z · LW(p) · GW(p)

If I had sexual learning experiences only from prostitutes, and I had nothing else to go on, should I have asked this woman how much money she wanted to come with me to my room in Laughlin's hotel for sex

That would likely be perceived as highly inappropriate and carries with it the chance of you getting banned from that convention in the future.

comment by polymathwannabe · 2015-09-17T12:58:24.406Z · LW(p) · GW(p)

should I have asked this woman how much money she wanted

That generally doesn't work on women who don't already sell sex for a living.

Maybe a sex surrogate could be useful for you. She would provide you with more emotional and social guidance than a regular hooker, and the learning process would advance at your own pace and on your own terms.

comment by Good_Burning_Plastic · 2015-09-18T08:29:57.807Z · LW(p) · GW(p)

Epistemic status: speculation

So how does getting sexual experience with prostitutes translate over to getting into sexual relationships with regular women through dating, anyway?

It doesn't (unless you're subconsciously self-sabotaging because you're scared that you will make a bad impression with your first sexual performance or something). OTOH, it doesn't hurt either (except via opportunity costs, but then so does anything else). So how does eating at restaurants translate over to learning how to cook? It doesn't, but that's not what people eat at restaurants for.

comment by MrMind · 2015-09-17T06:55:45.970Z · LW(p) · GW(p)

So how does getting sexual experience with prostitutes translate over to getting into sexual relationships with regular women through dating, anyway?

It doesn't, in any way. The top positive effect you could get from sex workers is the relief of pressure and anxiety, but if you're not getting even that then I guess you could stop wasting your money.

should I have asked this woman how much money she wanted

99.9% it would have had a bad outcome. Why didn't you simply invite her to discuss things further over a drink in a more intimate space?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-17T09:06:34.164Z · LW(p) · GW(p)

Why didn't you simply invite her to discuss things further over a drink in a more intimate space?

I'd rather people actually said "Do you want to come back to my room for sex?" rather than "Do you want to come back to my room for coffee?" where coffee is a euphemism for sex, because some people will take coffee at face value, which can lead to either uncomfortable situations, including fear of assault, or lead to people missing opportunities because they are bad at reading between the lines.

Or if you do want to invite someone for a drink, go somewhere public.

Edit: I'm not saying that people should go round propositioning people for sex without getting to know them first. I'm saying that drinks in public are good, and that I, personally, prefer to think that adults should be able to say what they mean without euphemisms. I'm not saying that I get to ignore society's rules. And I realise that people find what I have been saying creepy, but personally, I think if I were a girl I would find it very creepy that there could be situations where I'm in a private room with no witnesses and I want to drink coffee and the guy expects sex.

Replies from: Lumifer, Dahlen, lmm, MrMind, ChristianKl, Vaniver
comment by Lumifer · 2015-09-17T19:50:21.080Z · LW(p) · GW(p)

where coffee is a euphemism for sex

Actually, it's often not -- it's a declaration of interest and a euphemism for "let's move this thing along for the time being and see where we'll end up".

Imagine the answer to your "Do you want to come back to my room for sex?" being "I don't know yet, why don't we have coffee while I evaluate you a bit more thoroughly?"

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-17T20:07:53.783Z · LW(p) · GW(p)

Actually, it's often not -- it's a declaration of interest and a euphemism for "let's move this thing along for the time being and see where we'll end up".

So, someone who wanted to take things slowly would turn them down, whereas they might have accepted an invitation for coffee in Starbucks. If invitation to bar = drink and bedroom = sex, then everyone knows where they stand.

Replies from: Lumifer
comment by Lumifer · 2015-09-17T20:34:15.432Z · LW(p) · GW(p)

So, someone who wanted to take things slowly would turn them down

Maybe? It's a negotiation. For example, that someone could have counter-suggested a coffee at Starbucks; that's a "you're going too fast" signal. Or said "Sure, but I'll have to run in 10 minutes, I have an appointment to catch"; that's a "yes to coffee, no to sex" signal. There is a VERY large variety of ways to signal interest, intentions, etc.

comment by Dahlen · 2015-09-17T14:01:40.345Z · LW(p) · GW(p)

Plausible deniability, dude. It's much easier to dispel the awkwardness of rejection if you can reasonably fall back on the claim that, hey, maybe coffee was all you wanted anyway. Successful courtship depends on making the other person feel comfortable around you; it's a human relationship, not resource extraction, and it has to be framed in appropriate terms. (Edit: oh, sorry, I thought I was replying to advancedatheist; removed a sentence that assumed this.)

In table format. The second strategy is much more likely to lead to (2,1) than to (2,2).

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-17T19:49:02.531Z · LW(p) · GW(p)

I get that it's not resource extraction, but it's not espionage either, and I personally don't see the need for 'I can neither confirm nor deny that I want sex'.

I also get that it's about making people feel comfortable. I'm more comfortable if people are fairly upfront about what they want, but I get that it's just me who feels this way. I'm really bad at picking up on subtext; I have conversations like this:

Other person: "We're spending a lot of time together, its almost like we're being a couple."

Me: "Yeah, we have been hanging out a lot."

several months later...

Oh. I get it now. Why couldn't he just say he wanted a relationship?

And things can get even worse if one person thinks coffee means sex and one thinks it means coffee. I know a girl who has been accidentally raped because of drunken misunderstandings.

BTW I'm impressed that you went to the fuss of making a table :)

Replies from: Lumifer, ChristianKl, bogus
comment by Lumifer · 2015-09-17T19:56:49.833Z · LW(p) · GW(p)

but its not espionage either

The usual term is flirting.

if people are fairly upfront about what they want

A lot of the time people are not sure about what they want (or whether the cost-benefit is favorable). Socially acceptable delaying tactics are important.

comment by ChristianKl · 2015-09-19T09:42:53.782Z · LW(p) · GW(p)

I know a girl who has been accidentally raped because of drunken misunderstandings.

A girl saying yes to coffee isn't an excuse to not look for consent when having sex. Saying yes to coffee just means consent to move to a different location.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-19T12:32:27.499Z · LW(p) · GW(p)

This is true, but it's not that simple. When you're in private, it's a far more dangerous situation, and, for instance, some girls will be scared to say no because of the possibility of violence.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T12:56:20.435Z · LW(p) · GW(p)

She will be even more afraid to say "no" while in private if she beforehand explicitly said "yes" to sex instead of having said "yes" to coffee.

If you ask: "Do you want to come to my room with me to have sex?" and she says "Yes", that can be interpreted as a promise to have sex if the girl comes to the room. Asking her to "come to the room to drink coffee" doesn't do that to the same extent.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-19T13:05:52.869Z · LW(p) · GW(p)

But that presumes that the girl changes her mind about the sex when she reaches his room, which seems strange.

I suppose the room could be a sex dungeon, but in that case he should have asked "Wanna come home with me for kinky sex?"

(Obviously, people have the right to withdraw consent at any time for any reason, it just seems unlikely that it would be necessary)

Replies from: ChristianKl
comment by ChristianKl · 2015-09-19T14:16:02.935Z · LW(p) · GW(p)

But that presumes that the girl changes her mind about the sex when she reaches his room, which seems strange.

In the example the girl usually doesn't just want sex; she wants sex while she's turned on, sex that brings her pleasure. Even in the case of asking directly for sex, a girl would assume that the guy will engage in foreplay that then puts her in an emotional state where she will have pleasurable sex.

When a guy asks: "Do you want to come to my room for coffee" a girl might think "That's exciting and hopefully the night will end with great sex" but depending on how the interaction in the room goes it might or might not end up in sex.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-20T21:43:48.449Z · LW(p) · GW(p)

I am assuming that the people involved have probably been out drinking and having fun and getting into an exciting emotional state beforehand.

comment by bogus · 2015-09-19T10:16:43.780Z · LW(p) · GW(p)

I know a girl who has been accidentally raped because of drunken misunderstandings.

That's such BS. Rapists know what they're doing, even when they pretend otherwise; rape is predatory behavior. The only way you could accidentally rape someone is in the "whoops, found the wrong hole!" sense.

Replies from: VoiceOfRa, ChristianKl
comment by VoiceOfRa · 2015-09-20T19:34:56.472Z · LW(p) · GW(p)

Rapists know what they're doing,

That depends on how one defines the word "rape". The fact that there is currently an attempt by certain groups to massively expand the definition of that word (while keeping the connotations of the original meaning) isn't helping.

comment by ChristianKl · 2015-09-19T10:53:26.501Z · LW(p) · GW(p)

Rapists know what they're doing, even when they pretend otherwise

The issue in this case seems to be that the man thought that the fact that the woman said "yes" to having coffee means that she expressed consent while the woman thought it didn't.

Why do you think that in every case both people have the same idea whether there's consent? Or do you think that rape means something different than having sex without consent?

Replies from: bogus
comment by bogus · 2015-09-19T12:14:18.672Z · LW(p) · GW(p)

The data show otherwise. As it turns out, an overwhelming portion of rapes is due to a minority of repeat offenders who never get caught, due in no small part to prevailing social attitudes which all-too-readily construe rapes as nothing more than one-off "misunderstandings" which can be "forgiven". But again, that's just wrong. Rape is not something that just happens once - they do it again and again.

Replies from: ChristianKl, philh, skeptical_lurker, VoiceOfRa
comment by ChristianKl · 2015-09-19T14:11:33.161Z · LW(p) · GW(p)

Someone who thinks that a woman saying "Yes" to coffee means that she expresses consent to sex is likely going to repeat the error multiple times.

Beliefs such as 'Her mouth said "no" but her eyes said "yes"' can also lead to repeated offending without the rapist thinking he's a rapist.

Understanding how to determine consent is vital and not all problems are due to bad intent.

comment by philh · 2015-09-19T12:44:23.860Z · LW(p) · GW(p)

I note that people who misunderstand something once seem above-averagely likely to misunderstand similar things in future, especially (but not exclusively) if they don't receive correction.

comment by skeptical_lurker · 2015-09-19T12:43:26.228Z · LW(p) · GW(p)

Maybe you're right about the vast majority of cases. In the specific anecdote I mentioned, the victim told me that it was a misunderstanding - they were friends, she thought she was going home with him to sleep, he thought they were going to have sex, they were both very, very, drunk and he didn't understand that she wasn't consenting. She has forgiven it and they are still friends, although perhaps less close.

I'm not endorsing anyone's actions here. Perhaps this guy is a threat, and she should not have forgiven him. But I think my original point stands, which is that it is safer for people to get to know each other over drinks in public and only go home if they are both sure whether or not they want sex.

comment by VoiceOfRa · 2015-09-20T19:36:55.591Z · LW(p) · GW(p)

The data show otherwise.

Would this be the same "data" that claims that 1 in 4 college women are "raped"?

comment by lmm · 2015-09-17T19:40:21.300Z · LW(p) · GW(p)

I'd rather people actually said "Do you want to come back to my room for sex?" rather than "Do you want to come back to my room for coffee?" where coffee is a euphemism for sex, because some people will take coffee at face value, which can lead to either uncomfortable situations, including fear of assault, or lead to people missing opportunities because they are bad at reading between the lines.

I'd rather that too, and I've had it go wrong in both directions. But the whole point of much of this site is that outcomes are more important than principles. Saying "do you want to come back to my room for sex?" is not going to change society, it's just going to make you personally come off as a creep.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-17T20:15:11.516Z · LW(p) · GW(p)

Saying "do you want to come back to my room for sex?" is not going to change society, it's just going to make you personally come off as a creep.

I'm not sure it's always creepy, not if you've already kissed them. Depends on circumstances. Inviting someone in for coffee and then trying to fuck them can be pretty creepy too.

But I agree that I can't change society, and so I might as well conform to the rules.

Replies from: Nornagest
comment by Nornagest · 2015-09-17T20:52:39.938Z · LW(p) · GW(p)

It's almost always creepy in the context of an early relationship: whether you've kissed or not, it's a strong signal of contempt for or unfamiliarity with sexual norms. About the only exceptions I can think of would occur in very sex-positive cultures with very strong norms around explicit verbal negotiation. There aren't many of those cultures, and even within them you'd usually want some strong indications of interest beforehand.

On the other hand, if you've invited someone up for coffee (or just said "do you want to come back to my place?", which is pretty much the same offer), that's not license for them to tear your clothes off as soon as the door closes either. Doing that would be creepy, unless you've practically been molesting each other on the way over, but normally the script goes more like this: you walk in, there's maybe some awkward chitchat, you sit down on the bed or couch, they sit down next to you, you start kissing, and things progress naturally from there. If at any point they break script or the progression stalls out... well, then you make coffee.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-17T21:24:26.259Z · LW(p) · GW(p)

About the only exceptions I can think of would occur in very sex-positive cultures with very strong norms around explicit verbal negotiation.

I can think of a few examples where I've seen directly propositioning someone work, but these examples were among rather promiscuous people, so I think your point stands.

On the other hand, if you've invited someone up for coffee (or just said "do you want to come back to my place?", which is pretty much the same offer)

Actually, I'd interpret this very differently - inviting someone back for coffee is, on the face of it, saying that the reason you are inviting them is for coffee, not sex. It's a false pretext. But "do you want to come back to my place?" gives no pretext and it's obviously for sex (assuming you've kissed already).

Obviously, I do know that inviting someone for coffee means sex might happen (or at least it does in some contexts). But there are also people who invite people over to "watch a movie" or "smoke weed", and this is more of a grey area because they might actually want to watch a movie.

Replies from: Nornagest
comment by Nornagest · 2015-09-17T21:40:41.817Z · LW(p) · GW(p)

Actually, I'd interpret this very differently - inviting someone back for coffee is, on the face of it, saying that the reason you are inviting them is for coffee, not sex. It's a false pretext.

It's a pretext, sure. That's the point. The standard getting-to-know-you script does not allow for directly asking someone for sex (unless you're already screwing them on the regular; "wanna get some ice cream and fuck?" is acceptable, if a little crass, on the tenth date) so we've developed the line as a semi-standardized cover story for getting a couple hours of privacy with someone. You shouldn't read it as "I want coffee", but rather as "I want to be alone with you, so here's a transparent excuse". There are more creative ways to ask the same thing, but because they're more creative (and therefore further outside the standard cultural script), they're more prone to misinterpretation.

Compare the Seventies-era cliche of "wanna come look at my etchings?"

Replies from: Lumifer
comment by Lumifer · 2015-09-18T01:57:03.039Z · LW(p) · GW(p)

It's a pretext, sure. That's the point.

I think there's a deeper point: human interactions are multilayered and the surface layer does not necessarily carry the most important meaning. The meaning can be -- and often is -- masked by something else which should not be interpreted literally.

"It's a false pretext" is not even wrong -- it's just not a correct way to think about the situation. A "pretext" is a way to express in a socially acceptable fashion a deliberately ambiguous meaning which, if said explicitly aloud, would change the dynamics of the situation completely.

Human interaction, especially of a sexual nature, just is not reducible to the straightforward exchange of "wanna fuck?" information bits.

comment by MrMind · 2015-09-18T07:11:09.564Z · LW(p) · GW(p)

Or if you do want to invite someone for a drink, go somewhere public.

I agree with you, and that's indeed what is implied by my "a more intimate space". I meant a bar where you can create a two people bubble, with more overlapping of intimate space, rather than "come back to my room".

The error I see socially inexperienced people making over and over is presupposing that others have the same need and way of communicating that they have. It's not so, especially when dealing with a person of the opposite sex.

A good rule of thumb in these matters is to test for more intimacy incrementally, step by step.

comment by ChristianKl · 2015-09-19T09:20:33.054Z · LW(p) · GW(p)

The problem with "Do you want to come back to my room for sex?" can be that it requires the woman to commit in that moment. A woman might very well think: "I would enjoy making out in a more private space but at the moment I don't know whether I actually want to have sex, and I want to make that decision based on how I feel in the moment"

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-09-20T21:56:53.431Z · LW(p) · GW(p)

I find this strange, because if I'm attracted to someone, this attraction doesn't change on a second-by-second basis, although perhaps its just me that feels like that. I think if this hypothetical woman doesn't know whether she wants sex, maybe it would be best for her to wait until the next date, where she might have a better idea of what she wants.

I heard some advice saying that if you're not enthusiastic about something it's not worth doing, and while I'm not sure this applies in general, I would apply it to sex. No point in half-hearted sex.

Replies from: ChristianKl
comment by ChristianKl · 2015-09-20T23:34:46.958Z · LW(p) · GW(p)

I find this strange, because if I'm attracted to someone, this attraction doesn't change on a second-by-second basis, although perhaps its just me that feels like that.

Being attracted to someone and wanting to have sex with them next minute aren't the same thing. You usually want to also be horny to have sex. Women also want to feel comfort and trust.

A woman might feel: "I'm attracted to this guy but I'm menstruating and I don't like it to have sex while I'm menstruating."

where she might have a better idea of what she wants.

That assumes that a mental idea of what she wants drives her behavior. I think in most cases a woman will instead listen to her emotions that tell her what she likes in that particular moment instead of relying too much on mental concepts.

That desire might simply be: "I want to be more intimate with this guy than I am at the moment, but I don't want to be in public when we get more intimate."

comment by Vaniver · 2015-09-18T15:09:09.679Z · LW(p) · GW(p)

I'd rather people actually said "Do you want to come back to my room for sex?"

There is the section of Surely You're Joking, Mr. Feynman with the phrase "you just ask them!", which does endorse explicitly asking people if they're interested in sex. I don't think this is a replacement for understanding and displaying social cues, though.

comment by advancedatheist · 2015-09-16T03:58:19.754Z · LW(p) · GW(p)

Does America's health care system have a bias against incels?

Today I went to get my first physical in years now that I have Obamacare, and during the interview with the nurse practitioner, when she got to the questions about my marital status and whether I have any children, I just went straight to the point about my adult virginity, along with providing some context about how I wasted my time “dating” earlier in life because I could never close the deal with a woman. Otherwise she might assume that I had gone to prison for 30 years or something ridiculous like that to explain what kept me away from women for so long. (A woman actually asked me one time if I had spent decades in prison to account for my lack of sexual experience.)

And this nurse then started arguing with me about not giving up on finding sexual relationships – at my age (55). She sounded like the dating advice scolds that incel bloggers like The Black Pill have written about. This pissed me off, and I may have to find a different health care provider.

People with sexual experience really, really don’t understand the situation of guys like me, even ones with medical training.

Replies from: drethelin, Alicorn, polymathwannabe, NancyLebovitz
comment by drethelin · 2015-09-16T05:42:17.626Z · LW(p) · GW(p)

Don't rant to strangers about how incel you are. If you do, don't be surprised if some of those strangers try to offer you comfort.

Replies from: advancedatheist, advancedatheist
comment by advancedatheist · 2015-09-17T03:26:35.484Z · LW(p) · GW(p)

So how should I answer questions about my sexual history in a medical context?

I find it odd that gays and promiscuous women have become socially acceptable now, while incels with normal desires have become the freaks, weirdos and expendables. This has turned completely around from what people considered normal sexuality 50 years ago.

Replies from: Viliam, philh, Tem42
comment by Viliam · 2015-09-17T08:43:32.116Z · LW(p) · GW(p)

Men without families have always been considered expendable. The whole concept of army is built around that. I'm not saying it's right; I'm just saying it's old as history.

The new thing is that "having sex" has been completely divorced from "having a family", so now some stigma (less) is associated with not having a family, and some stigma (more) is associated with not having sex. It makes sense this way, because being unable to attract someone implies being unable to start a family. Again, I'm describing here, not making a moral judgement; I don't have a problem with people not reproducing.

It sucks to have low status. But it is stupid to needlessly tell strangers "hey, I have low status".

comment by philh · 2015-09-18T16:21:09.837Z · LW(p) · GW(p)

So how should I answer questions about my sexual history in a medical context?

"No."

Or if there are looking to be a lot of questions, you can head them off with "no, I'm a virgin".

comment by Tem42 · 2015-09-20T16:00:01.393Z · LW(p) · GW(p)

I don't believe that I have seen any statement that incels are freaks stronger than your own statement that "otherwise she might assume that I had gone to prison for 30 years". I'm sure that there are some people who might assume that -- or worse -- but I would not expect that most people would.

Likewise, when someone overshares about their problems (and if you define yourself as 'involuntarily celibate', you are framing it as a problem), the default social response is "don't give up, you can handle it!" whether you're talking about dandruff or cancer. Her response may not be what you hoped for, but it wasn't a clear indicator of prejudice.

comment by advancedatheist · 2015-09-18T14:44:23.613Z · LW(p) · GW(p)

The increasing visibility of incels in developed countries, especially in Japan, where the number of adult male virgins has gotten ridiculous, makes the correspondingly shrinking share of sexually experienced men uneasy for some reason. I have to wonder if the unease resembles the effects of mortality salience in terror management theory. We provide empirical evidence that women's sexual freedom hasn't resulted in a sexual utopia, despite all the propaganda to that effect going back to the Enlightenment.

Replies from: polymathwannabe
comment by polymathwannabe · 2015-09-19T04:20:47.714Z · LW(p) · GW(p)

I'm tempted to create a drinking game for every time the Enlightenment gets blamed for whatever somebody thinks is wrong with the world.

comment by Alicorn · 2015-09-16T04:28:54.877Z · LW(p) · GW(p)

I doubt very much that your context was medically relevant. She behaved inappropriately and of course you should change providers if you can and prefer to, but there was no reason to do anything but answer "no" to her questions in the first place, especially if the alternative involved phrases like "close the deal".

comment by polymathwannabe · 2015-09-16T04:52:42.666Z · LW(p) · GW(p)

I'm curious. If you had been examined by a male nurse, would you have felt the same need to give an extended explanation?

Replies from: advancedatheist
comment by advancedatheist · 2015-09-16T05:16:03.030Z · LW(p) · GW(p)

Even more so, because the male nurse might assume I'm gay otherwise.

I've noticed some little-studied cognitive biases here, because sexually experienced people tend to force ready-made "explanations" on male incels that make them comfortable, instead of trying to study and understand incel as its own phenomenon. The canned explanations lead to bad conclusions and useless advice for men like me. How would seeing a prostitute teach me how to get into sexual relationships? Men who get their sexual experience exclusively from prostitutes can remain as inept at dating as incels. You usually can't just pick up a girl at the coffee shop with your "day game" and expect her to do the whore tricks you have become accustomed to with escorts.

That also shows why I consider sexbots a really stupid and dangerous notion. Sexbots could just increase the proportions of socially retarded men who have no clue how to deal with real women.

Replies from: polymathwannabe
comment by polymathwannabe · 2015-09-16T17:12:38.677Z · LW(p) · GW(p)

Otherwise she might assume that I had gone to prison for 30 years or something ridiculous like that

the male nurse might assume I'm gay otherwise

What you need from the nurse is her set of skills. Her personal opinion of you is irrelevant to doing her job. I understand that we may see health professionals as higher-status than us, but they're actually doing us a service. You don't need to feel intimidated by an unspoken imagined condemnation.

comment by NancyLebovitz · 2015-09-16T17:31:47.804Z · LW(p) · GW(p)

It's reasonable to assume that any bias which is common in the culture will also show up in how patients are treated.