Open thread, Mar. 14 - Mar. 20, 2016

post by MrMind · 2016-03-14T08:02:53.240Z · LW · GW · Legacy · 213 comments

Contents

213 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

213 comments

Comments sorted by top scores.

comment by Lumifer · 2016-03-15T16:49:41.461Z · LW(p) · GW(p)

This post by Eric Raymond should be interesting to LW :-) Extended quoting:

There’s a link between autism and genius says a popular-press summary of recent research. If you follow this sort of thing (and I do) most of what follows doesn’t come as much of a surprise. We get the usual thumbnail case studies about autistic savants. There’s an interesting thread about how child prodigies who are not autists rely on autism-like facilities for pattern recognition and hyperconcentration. There’s a sketch of research suggesting that non-autistic child-prodigies, like autists, tend to have exceptionally large working memories. Often, they have autistic relatives. Money quote: “Recent study led by a University of Edinburgh researcher found that in non-autistic adults, having more autism-linked genetic variants was associated with better cognitive function.”

But then I got to this: “In a way, this link to autism only deepens the prodigy mystery.” And my instant reaction was: “Mystery? There’s a mystery here? What?” Rereading, it seems that the authors (and other researchers) are mystified by the question of exactly how autism-like traits promote genius-level capabilities.

At which point I blinked and thought: “Eh? It’s right in front of you! How obvious does it have to get before you’ll see it?”

... Yes, there is an enabling superpower that autists have through damage and accident, but non-autists like me have to cultivate: not giving a shit about monkey social rituals.

Neurotypicals spend most of their cognitive bandwidth on mutual grooming and status-maintainance activity. They have great difficulty sustaining interest in anything that won’t yield a near-immediate social reward. By an autist’s standards (or mine) they’re almost always running in a hamster wheel as fast as they can, not getting anywhere.

The neurotypical human mind is designed to compete at this monkey status grind and has zero or only a vanishingly small amount of bandwidth to spare for anything else. Autists escape this trap by lacking the circuitry required to fully solve the other-minds problem; thus, even if their total processing capacity is average or subnormal, they have a lot more of it to spend on what neurotypicals interpret as weird savant talents.

Non-autists have it tougher. To do the genius thing, they have to be either so bright that they can do the monkey status grind with a tiny fraction of their cognitive capability, or train themselves into indifference so they basically don’t care if they lose the neurotypical social game.

Once you realize this it’s easy to understand why the incidence of socially-inept nerdiness doesn’t peak at the extreme high end of the IQ bell curve, but rather in the gifted-to-low-end-genius region closer to the median. I had my nose memorably rubbed in this one time when I was a guest speaker at the Institute for Advanced Study. Afternoon tea was not a nerdfest; it was a roomful of people who are good at the social game because they are good at just about anything they choose to pay attention to and the monkey status grind just isn’t very difficult. Not compared to, say, solving tensor equations.

Replies from: username2, Yvain, James_Miller
comment by username2 · 2016-03-16T11:45:57.560Z · LW(p) · GW(p)

... Yes, there is an enabling superpower that autists have through damage and accident, but non-autists like me have to cultivate: not giving a shit about monkey social rituals.

There is much more to autism than that. It's just one thing that's easy for neurotypicals to notice.

Replies from: Lumifer
comment by Lumifer · 2016-03-16T14:48:33.913Z · LW(p) · GW(p)

There is much more to autism than that.

Of course, but Eric Raymond is not giving a comprehensive overview of autism, he is just making a single point.

comment by Scott Alexander (Yvain) · 2016-03-18T20:51:25.081Z · LW(p) · GW(p)

This idea of having more "bandwidth" is tempting, but not really scientifically supported as far as I can tell, unless he just means autists have more free time/energy than neurotypicals.

Replies from: Lumifer
comment by Lumifer · 2016-03-18T21:03:29.025Z · LW(p) · GW(p)

I think he means hyper-focus, basically.

comment by James_Miller · 2016-03-16T04:38:14.372Z · LW(p) · GW(p)

This might turn out to have socially damaging implications once we figure out how to do genetic engineering, if parents select against their future children having "autistic" genes.

Replies from: Lumifer
comment by Lumifer · 2016-03-16T14:38:41.796Z · LW(p) · GW(p)

What is "this"?

If genetic engineering of future-kids becomes widespread, I expect to see a significant lessening of diversity. Most everyone will be Brandy and Clint. On the other hand, weird people will become REALLY weird :-/

comment by Daniel_Burfoot · 2016-03-14T10:43:07.580Z · LW(p) · GW(p)

Simple hypothesis relating to Why Don't Rationalists Win:

Everyone has some collection of skills and abilities, including things like charisma, luck, rationality, determination, networking ability, etc. Each person's success is limited by constraints related to these abilities, in the same way that an application's performance is limited by the CPU speed, RAM, disk speed, networking speed, etc of the machine(s) it runs on. But just as for many applications the performance bottleneck isn't CPU speed, for most people the success bottleneck isn't rationality.
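
A toy illustration of this bottleneck model (the skill names and numbers below are made up, and the "min" rule is just one way to formalize the analogy):

```python
# Toy bottleneck model: each ability acts as a constraint, and overall success
# is capped by the weakest one, the way an application can be capped by its
# slowest resource. All names and numbers are purely illustrative.

skills = {
    "rationality": 0.9,
    "charisma": 0.4,
    "networking": 0.3,
    "determination": 0.7,
}

bottleneck = min(skills, key=skills.get)
print(f"Bottleneck: {bottleneck} (success ceiling {skills[bottleneck]})")
# Pushing rationality from 0.9 to 1.0 changes nothing in this model;
# pushing networking from 0.3 to 0.4 raises the ceiling.
```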

Replies from: cousin_it, moridinamael, ChristianKl, turchin, Vaniver, Coacher, Coacher, Lumifer
comment by cousin_it · 2016-03-14T15:37:03.039Z · LW(p) · GW(p)

It could be worse. Rationality essays could be attracting a self-selected group of people whose bottleneck isn't rationality. Actually I think that's true. Here's a three-step program that might help a "stereotypical LWer" more than reading LW:

1) Gym every day

2) Drink more alcohol

3) Watch more football

Only slightly tongue in cheek ;-)

Replies from: Stingray, SanguineEmpiricist
comment by Stingray · 2016-03-15T10:25:54.876Z · LW(p) · GW(p)

Strongly disagree with 2) and 3). I think you mean them as a proxy for 'become more social, make more connections, find ways to fit in a local culture', but quality of connections usually matters more than quantity. In many circles that are likely to matter for a typical LWer, 3) is likely to be useless, and the likely benefits of 2) are achievable without drinking, or with very modest drinking.

Replies from: cousin_it
comment by cousin_it · 2016-03-15T13:54:25.729Z · LW(p) · GW(p)

My advice was more like "get in touch with your stupid animal side". The social part comes later :-)

Replies from: Stingray, Lumifer
comment by Stingray · 2016-03-15T14:43:47.446Z · LW(p) · GW(p)

Then living in the wilderness and cutting down trees would be much better. Or some kind of manual work where you can see the fruits of your labor, e.g. gardening. I believe activities like these would be better suited for connecting the mental and physical parts of a person.

comment by Lumifer · 2016-03-15T14:20:44.430Z · LW(p) · GW(p)

I don't know about yours, but my stupid animal side is uninterested in alcohol and football. It wants to eat, sleep, fuck, and harass betas :-D

comment by SanguineEmpiricist · 2016-03-15T20:17:12.379Z · LW(p) · GW(p)

Drinking alcohol is very necessary for connecting with people. People who are against alcohol don't know how much they miss out on at times.

Replies from: Lumifer, ChristianKl, Brillyant
comment by Lumifer · 2016-03-15T20:39:48.036Z · LW(p) · GW(p)

Drinking alcohol is very necessary for connecting with people.

"I drink to make other people more interesting" -- Ernest Hemingway

comment by ChristianKl · 2016-03-15T20:41:29.420Z · LW(p) · GW(p)

I think that depends very much on the kind of people with whom you hang out. There are people who need alcohol to open up. On the other hand there are people who have no problem opening up without alcohol.

comment by Brillyant · 2016-03-17T15:25:36.855Z · LW(p) · GW(p)

Drinking alcohol is very necessary for connecting with people.

This is so obviously wrong.

Alcohol may aid in connecting with some people some of the time.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-03-18T06:48:52.210Z · LW(p) · GW(p)

This is just what nerdy types tell themselves, and they come up with all these rationalizations for it; most people's skillsets don't lend themselves to that type of socialization. These people only realize they were wrong years later, when it's much too late.

Replies from: Viliam, Brillyant
comment by Viliam · 2016-03-18T09:28:41.630Z · LW(p) · GW(p)

I recommend trying "placebo alcohol". That means getting drunk the first time, to get an experience of what it feels like, but then having a non-alcoholic drink the next time and merely role-playing being drunk.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-03-18T18:21:49.817Z · LW(p) · GW(p)

This is exactly the sort of community that would delude itself in this department and would never stop arguing (not saying you do this), but if someone asked me "Can you have fun/meet people without drinking?", I would say "sort of, but you're better off just participating anyway".

When you drink with friends you learn why you were wrong; there's always going to be that "one guy" who thinks he knows better, though.

comment by Brillyant · 2016-03-18T13:00:26.786Z · LW(p) · GW(p)

most people's skillsets don't lend themselves to that type of socialization.

What do you mean?

comment by moridinamael · 2016-03-14T14:40:30.951Z · LW(p) · GW(p)

I think at some point a few years ago there seemed to be an implicit assumption on LessWrong that of course you can hack your determination, rewire your networking ability, and bootstrap your performance at anything! And I don't think people so much stopped believing that this was true in principle; rather, people started realizing how incredibly difficult and time-consuming it is to change your base skillset.

Replies from: Vaniver, username2, None
comment by Vaniver · 2016-03-15T12:11:44.227Z · LW(p) · GW(p)

Well, there's also the possibility that people who did successfully hack their determination, networking ability, and performance are now mostly not spending time on LW.

Replies from: username2
comment by username2 · 2016-03-16T11:49:28.021Z · LW(p) · GW(p)

If that's true, then many rationalists actually do win.

Replies from: Viliam
comment by Viliam · 2016-03-16T21:10:17.256Z · LW(p) · GW(p)

Depending on what their goal is. There are many possible levels of "winning".

You can win on the individual level by quitting LW and focusing on your career and improving your skills. People who achieve that can easily disappear from our radar.

You can win on the group level by creating a community of "successful rationalists"; so you are not only successful as a lonely individual, but you have a tribe that shares your values and can cooperate in effective ways. We would probably notice such a group, for example because they would advertise themselves on LW for recruitment purposes.

And then you can win on the civilizational level, by raising the planetary level of sanity and building a Friendly AI. We would almost surely notice that.

Okay, the third one is outside of everyday life's scope, so let's ignore it for now.

I don't know how much I am generalizing here from my own example, but winning on an individual level would now feel insufficient for me, having met rationalists on the LW website and in real life. If I could increase my skills and resources significantly, I would probably spend some time trying to get others from the rationalist community to my level, because with allies I could achieve even more. So I would post far fewer comments on LW, but once in a while I would post an article trying to inspire people to "become stronger".

Replies from: username2
comment by username2 · 2016-09-30T06:00:03.245Z · LW(p) · GW(p)

On the other hand, perhaps you are being too insular in the communities you engage in. There are many, many groups of smart people out there in the world. Perhaps someone who got what they wanted from LW and 'quit' went on to gather allies who were already successful in their fields?

comment by username2 · 2016-03-14T20:51:59.551Z · LW(p) · GW(p)

Thousands of small steps are required; one big epiphany is not enough. But many people expect the latter, because the very reason they seek advice is to avoid doing the former.

Replies from: Lumifer
comment by Lumifer · 2016-03-14T20:54:47.325Z · LW(p) · GW(p)

because the very reason they seek advice is to avoid doing the former.

"Isn't there a pill I can just take?"

X-)

Replies from: Error
comment by Error · 2016-03-15T15:11:29.723Z · LW(p) · GW(p)

The world needs more "pills I can just take."

Replies from: Lumifer
comment by Lumifer · 2016-03-15T15:50:14.292Z · LW(p) · GW(p)

I don't know about that. So far the world's experience with "Just take this pill and everything will be fine" is... mixed.

Replies from: Error
comment by Error · 2016-03-19T14:24:09.561Z · LW(p) · GW(p)

Well, admittedly I was assuming pills that worked and had the intended effect.

comment by [deleted] · 2016-03-15T05:57:04.414Z · LW(p) · GW(p)

Maybe some started to appreciate the struggle and the suffering, to find joy and strength in it. Then, their terminal goals pivoted.

comment by ChristianKl · 2016-03-14T18:03:30.944Z · LW(p) · GW(p)

Focusing your efforts on the right task is itself a rationality skill.

Recently one rationalist wrote on Facebook how he used physical rationality to make his shoulder heal faster after an operation and produce less pain. Having accurate models of reality is very useful in many cases.

Replies from: moridinamael, 4hodmt, NancyLebovitz
comment by moridinamael · 2016-03-16T16:17:45.174Z · LW(p) · GW(p)

What is "physical rationality"?

Replies from: ChristianKl
comment by ChristianKl · 2016-03-16T20:49:44.103Z · LW(p) · GW(p)

It's a new coinage, so the term isn't well-defined. On the other hand, there are reasons to use the term.

One key aspect of "physical rationality" is a strong alignment between your own physical body and your own map of it. An absence of conflicts between System 1 and System 2 when it comes to physicality.

Replies from: moridinamael
comment by moridinamael · 2016-03-16T21:55:21.572Z · LW(p) · GW(p)

So I suppose things like the Alexander Technique, possibly Yoga, certain martial arts and sports might be implicated?

Replies from: ChristianKl
comment by ChristianKl · 2016-03-16T22:44:57.137Z · LW(p) · GW(p)

I don't know all the influences in this particular case, but it's certainly in that direction. There was a reference to the book "A Guide to Better Movement" by Todd Hargrove.

comment by 4hodmt · 2016-03-16T10:57:53.729Z · LW(p) · GW(p)

Assuming he only had one shoulder operated on, where was the control shoulder?

Replies from: ChristianKl
comment by ChristianKl · 2016-03-16T11:18:01.729Z · LW(p) · GW(p)

His doctor was dumbfounded by the result, and the doctor has seen control shoulders.

Replies from: dhoe
comment by dhoe · 2016-03-18T13:02:22.936Z · LW(p) · GW(p)

Doctors being dumbfounded is a hallmark of irrationalist stories. Not saying this one is - I don't even know the story here - but as someone who grew up around a lot of people who basically believed in magic, I can conjure so many anecdotes of people thinking their doctors were blown away by sudden recoveries and miraculous healings. I mostly figure doctors go "oh cool it's going pretty well" and add a bit of color for the patient's benefit.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-18T13:50:52.529Z · LW(p) · GW(p)

A lot of doctors will be surprised if someone walks over hot coals and afterwards has no blisters or burn marks. Yet, at Anthony Robbins seminars thousands walk over hot coals and most of them don't develop blisters.

The human body is complex; there are a lot of real phenomena that can dumbfound doctors. If you think doctors are infallible you might want to read http://lesswrong.com/r/discussion/lw/nes/link_evidencebased_medicine_has_been_hijacked/

Whether you take that as evidence that magic exists is a different matter.

comment by NancyLebovitz · 2016-03-14T23:28:08.741Z · LW(p) · GW(p)

If you don't mind, what's the name of the person who used physical rationality?

Replies from: ChristianKl
comment by ChristianKl · 2016-03-15T07:18:53.138Z · LW(p) · GW(p)

Given semi-private Facebook sources, I'd rather write you a direct message than answer publicly.

comment by turchin · 2016-03-14T15:14:00.591Z · LW(p) · GW(p)

I had an idea to write a post about this problem under the name "general effectiveness". GE is a measure of you by an outside peer, typically an employer.

If I were an employer I would look at general effectiveness (and I really did, as I used to hire people for small tasks in my art business). It consists of many things besides rationality, including appearance, age, gender, interest in the work, ability to arrive on time, and results on a test task.

Most of these characteristics are unchangeable personality traits, so even if a given person invested a lot in studying rationality, he would not be able to change them much.

But he could change his place of work and find one more suitable to him.

There are also other ways to raise personal effectiveness. For example, if I hire a helper, I raise my effectiveness.

comment by Vaniver · 2016-03-14T14:50:31.814Z · LW(p) · GW(p)

But just as for many applications the performance bottleneck isn't CPU speed, for most people the success bottleneck isn't rationality.

Instrumental rationality, among other things, points people to whichever of their skills or abilities is currently the performance bottleneck and encourages them to work on that, not the thing that's most fun to work on. So we would still expect instrumental rationalists to win in this model.

(Yes, epistemic rationality might not lead to winning as directly.)

Replies from: username2
comment by username2 · 2016-03-14T16:00:42.720Z · LW(p) · GW(p)

Yes, epistemic rationality might not lead to winning as directly

Why would that be? Is it that many people work in areas where it doesn't really matter if they are mistaken? Or do people already know enough about the area they work in and further improvements have diminishing returns? Epistemic rationality provides a direction where people should put their efforts if they want to become less wrong about stuff. Are people simply unwilling to put in that effort?

Replies from: NancyLebovitz, Vaniver
comment by NancyLebovitz · 2016-03-14T23:27:01.695Z · LW(p) · GW(p)

People may underestimate the amount and kind of information they need to turn epistemic rationality into instrumental rationality.

Replies from: None
comment by [deleted] · 2016-03-15T08:51:43.408Z · LW(p) · GW(p)

People may underestimate the value of clearly stated and expressed and communicated preferences.

comment by Vaniver · 2016-03-14T20:06:49.244Z · LW(p) · GW(p)

Is it that many people work in areas where it doesn't really matter if they are mistaken? Or do people already know enough about the area they work in and further improvements have diminishing returns?

More the latter. Most of the things that a person could learn about are things that won't help them directly. Agreed that if one has poor epistemic rationality, it's hard to do the instrumental rationality part correctly ("I know, I'll fix this problem by wishing!").

comment by Coacher · 2016-03-14T12:04:58.593Z · LW(p) · GW(p)

Another hypothesis - the smarter you sound the less friends you tend to have.

Replies from: username2, OrphanWilde, Lumifer
comment by username2 · 2016-03-14T13:58:16.742Z · LW(p) · GW(p)

the less friends you tend to have

Fewer!

comment by OrphanWilde · 2016-03-15T20:06:27.807Z · LW(p) · GW(p)

Most people like having at least one smart friend.

The trick is not to make other people feel stupid, which many (most?) smart people are very bad at.

comment by Lumifer · 2016-03-14T15:00:44.864Z · LW(p) · GW(p)

the smarter you sound the less friends you tend to have

I suspect it's more of a golden middle kind of thing -- people out in both tails of the distribution tend to have social problems.

comment by Coacher · 2016-03-14T12:03:16.364Z · LW(p) · GW(p)

Could it also be that being rational takes up a portion of the brain's CPU/RAM that would otherwise be used for something better?

comment by Lumifer · 2016-03-14T15:04:12.991Z · LW(p) · GW(p)

for most people the success bottleneck isn't rationality.

Instrumental rationality is more or less defined as "doing whatever you need to in order to succeed". If success requires e.g. networking, instrumental rationality would tell you to improve your networking ability.

For epistemic rationality I agree, it's not a common bottleneck.

Whether luck is a skill is an interesting question :-)

comment by turchin · 2016-03-15T21:39:07.096Z · LW(p) · GW(p)

Probably everybody has seen it, but EY wrote a long post on FB about AlphaGo which got 400 reposts. The post overestimates the power of AlphaGo, and in general it seems to me that EY drew too many conclusions from very little available information (3:0 wins at the moment of the post - 10 pages of conclusions). The post's comment section includes a contribution from Robin Hanson on the usual foom speed-and-type topic. EY later updated his predictions based on Sedol's win in game 4 and stated that even a superhuman AI could make dumb mistakes, which may result in a new type of AI failure.

https://www.facebook.com/yudkowsky/posts/10154018209759228?pnref=story

Replies from: None
comment by [deleted] · 2016-03-15T22:31:48.832Z · LW(p) · GW(p)

So, what's the difference between 'superhuman with dumb mistakes', 'dumb with some superhuman skills', and 'better at some things and worse at others'?

Replies from: turchin
comment by turchin · 2016-03-15T23:09:12.561Z · LW(p) · GW(p)

I think the difference here is distribution.

"Superhuman with dumb mistakes" - 4 brilliant games, one stupid loss.

"Dumb with some superhuman skills" - dumb in one game, unbeatable in another.

"Better at some things and worse at others" - different performance in different domains.

I think that if a superhuman AI with bugs starts to self-improve, the bugs will accumulate. This will ruin either the AI's power or the AI's goal system. The first is good and the second is bad. I would also suggest that the first AI that tries to self-improve will still have some bugs. The open question is whether the AI will be able to debug itself. Some bugs may prevent the AI from seeing them as bugs, so they recur. The closest human analogue is the bias of overconfidence: an overconfident human can't understand that there is something wrong with him.

comment by NancyLebovitz · 2016-03-16T04:44:11.847Z · LW(p) · GW(p)

History of "That which can be destroyed by the truth, should be"

First said by Hodgell, Yudkowsky wrote a variant, Sagan didn't say it.

comment by turchin · 2016-03-14T23:23:20.925Z · LW(p) · GW(p)

Ok, now Lenat has rolled out his AI after 30 years of development: https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/

The Russian Compreno system, which models language manually, has also launched its first service, Findo (after 20 years and 80 million USD): https://abbyy.technology/en:features:linguistic:semanitc-intro

Replies from: MrMind
comment by MrMind · 2016-03-15T08:18:16.863Z · LW(p) · GW(p)

Ok, now Lenat has rolled out his AI after 30 years of development

Open portion of said AI: http://www.cyc.com/platform/opencyc/

comment by Anders_H · 2016-03-20T05:32:34.388Z · LW(p) · GW(p)

Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.

I am also very frustrated with the peer review system. The reviewers found some minor errors, and some of their other comments were helpful in the sense that they reveal which parts of the paper are most likely to be misunderstood. However, on the whole, the comments do not change my belief in the soundness of the idea, and in my view they mostly show that the reviewers simply didn’t understand what I was saying.

One comment does stand out, and I’ve spent a lot of energy today thinking about its implications: Reviewer 3 points out that my language is “too casual”. I would have had no problem accepting criticism that my language is ambiguous, imprecise, overly complicated, grammatically wrong or idiomatically weird. But too casual? What does that even mean? I have trouble interpreting the sentence to mean anything other than an allegation that I fail at a signaling game where the objective is to demonstrate impressiveness by using an artificially dense and obfuscating academic language.

From my point of view, “understanding” something means that you are able to explain it in a casual language. When I write a paper, my only objective is to allow the reader to understand what my conclusions are and how I reached them. My choice of language is optimized only for those objectives, and I fail to understand how it is even possible for it to be “too casual”.

Today, I feel very pessimistic about the state of academia and the institution of peer review. I feel stronger allegiance to the rationality movement than ever, as my ideological allies in what seems like a struggle about what it means to do science. I believe it was Tyler Cowen or Alex Tabarrok who pointed out that the true inheritors of intellectuals like Adam Smith are not people publishing in academic journals, but bloggers who write in a casual language. I can't find the quote but today it rings more true than ever.

I understand that I am interpreting the reviewers' choice of words in a way that is strongly influenced both by my disappointment in being rejected, and by my pre-existing frustration with the state of academia and peer review. I would very much appreciate it if anybody could steelman the sentence "the writing is too casual", or otherwise help me reach a less biased understanding of what just happened.

The paper is available at https://rebootingepidemiology.files.wordpress.com/2016/03/effect-measure-paper-0317162.pdf . I am willing to send a link to the reviewers’ comments by private message to anybody who is interested in seeing it.

Replies from: Lumifer, ChristianKl, Viliam, Douglas_Knight, ChristianKl
comment by Lumifer · 2016-03-21T15:17:01.949Z · LW(p) · GW(p)

But too casual? What does that even mean?

Having glanced at your paper I think "too casual" means "your labels are too flippant" -- e.g. "Doomed". You're showing that you're human and that's a big no-no for a particular kind of people...

By the way, you're entirely too fond of using quoted words ("flip", "transported", "monotonicity", "equal effects", etc.). If the word is not exactly right so that you have to quote it, find a better word (or make a footnote, or something). Frequent word quoting is often perceived as "I was too lazy to find the proper word, here is a hint, you guess what I meant".

Replies from: Anders_H
comment by Anders_H · 2016-03-21T17:28:37.253Z · LW(p) · GW(p)

Thanks. Good points. Note that many of those words are already established in the literature with the same meaning. For the particular example of "doomed", this is the standard term for this concept, and was introduced by Greenland and Robins (1986). I guess I could instead use "response type 1" but the word doomed will be much more effective at pointing to the correct concept, particularly for people who are familiar with the previous literature.

The only new term I introduce is "flip". I also provide a new definition of effect equality, and it therefore seems correct to use quotation marks in the new definition. Perhaps I should remove the quotation marks for everything else since I am using terms that have previously been introduced.

comment by ChristianKl · 2016-03-20T11:52:03.789Z · LW(p) · GW(p)

If my paper was rejected because it doesn't contain enough technical terms,
I desire to believe that my paper was rejected because it doesn't contain enough technical terms;
If my paper was not rejected because it doesn't contain enough technical terms,
I desire to believe that my paper was not rejected because it doesn't contain enough technical terms;
Let me not become attached to beliefs I may not want.

comment by Viliam · 2016-03-20T10:51:35.944Z · LW(p) · GW(p)

Didn't read the paper, but I think a charitable explanation of "too casual" could mean (a) ambiguous, or (b) technically correct but not using the expressions standard in the field, so the reader needs a moment to understand "oh, what this paper calls X that's probably what most of us already call Y".

But of course, I wouldn't dismiss the hypothesis of academically low-status language. Once at university I got a feedback about my essay that it's "technically correct, but this is not how university-educated people are supposed to talk".

(Okay, I skimmed through your paper, and the language seemed fine. You sound like a human, as opposed to many other papers I have seen.)

comment by Douglas_Knight · 2016-03-20T16:18:43.243Z · LW(p) · GW(p)

Without reading your paper, and without rejecting your hypothesis, let me propose other consequences of casual language. Experts use tools casually, but there may be pitfalls for beginners. Experts are allowed more casual language, and the referee may not trust that you, personally, are an expert. That is a signaling explanation, but somewhat different. A very different explanation is that while your ultimate goal is to teach the reader your casual process, that does not mean that recording it is the best method. Your casual language may hide the pitfalls from beginners, contributing both to their incorrect usage and to their not understanding how to choose between tools.

If your paper is aimed purely at experts, then casual language is the best means of communication. But should it be? Remember when you were a beginner. How did you learn the tools you are using? Did you learn them from papers aimed at beginners or experts; aimed at teaching tools or using them? Casual language papers can be useful for beginners as an advertisement: "Once you learn these tools, you can reason quickly and naturally, like me."

Professors often say that they are surprised by which of their papers is most popular. In particular, they are often surprised that a paper that they thought was a routine application of a popular tool becomes popular as an exposition of that tool; often under the claim that it is a new tool. This is probably a sign that the system doesn't generate enough exposition, but taking the system as given, it means that an important purpose of research papers is exposition, that they really are aimed at beginners as well as experts.

This is not to say that I endorse formal language. I don't think that formal language often helps the reader over the pitfalls; that work must be reconstructed by the reader regardless of whether the author spelled it out. But I do think that it is important to point out the dangers.

comment by ChristianKl · 2016-03-20T12:26:34.273Z · LW(p) · GW(p)

This definition is based on the probability that a person who would otherwise not have been a case “flips” to being a case in response to treatment, and the probably that a non-case flips to being a case.

To me that sentence seems cryptic.

Do you mean probability instead of probably?

Maybe the reviewer considered "flips" too casual. I think the paper might be easier to read if you either wrote flips directly without quotes or chose another word.

What's the difference between "otherwise would not have been a case" and "non-case"?

in my view they mostly show that the reviewers simply didn’t understand what I was saying [...] From my point of view, “understanding” something means that you are able to explain it in a casual language.

If the reviewers don't succeed in understanding what you are saying, you might have explained yourself in casual language but still failed.

Replies from: Anders_H
comment by Anders_H · 2016-03-20T17:00:36.339Z · LW(p) · GW(p)

Do you mean probability instead of probably?

Yes. Thanks for noticing. I changed that sentence after I got the rejection letter (in order to correct a minor error that the reviewers correctly pointed out), and the error was introduced at that time. So that is not what they were referring to.

If the reviewers don't succeed in understanding what you are saying you might have explained yourself in casual language but still failed.

I agree, but I am puzzled by why they would have misunderstood. I spent a lot of effort over several months trying to be as clear as possible. Moreover, the ideas are very simple: The definitions are the only real innovation: Once you have the definitions, the proofs are trivial and could have been written by a high school student. If the reviewers don't understand the basic idea, I will have to substantially update my beliefs about the quality of my writing. This is upsetting because being a bad writer will make it a lot harder to succeed in academia. The primary alternative hypotheses for why they misunderstood are either (1) that they are missing some key fundamental assumption that I take for granted or (2) that they just don't want to understand.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-21T01:02:52.579Z · LW(p) · GW(p)

What kind of audience would you expect to understand your article?

comment by skeptical_lurker · 2016-03-14T12:36:31.658Z · LW(p) · GW(p)

A while ago I was, for some reason, answering a few hundred yes-or-no questions. I thought I would record my confidence in the answers in 5% intervals, to check my calibration. What I found was that for 60%+ confidence I am fairly well calibrated, but when I was 55% confident I was only right 45% of the time (100)!

I think what happened is that sometimes I would think of a reason why the proposition X is true, and then think of some reasons why X is false, only I would now be anchored onto my original assessment that X is true. So instead of changing my mind to 'X is false' I would only decrease my confidence.

I.e. my thought processes looked like this

reason why X is true -> X is true, 60% confidence -> reasons why X is false -> X is true, 55% confidence

When it should be:

reason why X is true -> X is true, 60% confidence -> reasons why X is false -> CHANGE OPINION -> X is false, 55% confidence
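
For what it's worth, here is a minimal sketch of how such a calibration check can be tallied; the answer data below is made up for illustration:

```python
from collections import defaultdict

# Each entry: (stated confidence in the chosen answer, whether it was right).
# These records are invented purely to illustrate the bookkeeping.
answers = [(0.55, False), (0.55, True), (0.55, False), (0.60, True),
           (0.60, True), (0.65, False), (0.70, True), (0.75, True)]

buckets = defaultdict(list)
for confidence, correct in answers:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    results = buckets[confidence]
    hit_rate = sum(results) / len(results)
    print(f"stated {confidence:.0%}: right {hit_rate:.0%} of {len(results)} answers")

# Well-calibrated answers show hit rates close to the stated confidence;
# the pattern described above would appear as a 55% bucket sitting near 45%.
```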

Replies from: Douglas_Knight, Dagon
comment by Douglas_Knight · 2016-03-14T17:19:07.559Z · LW(p) · GW(p)

Did you write the questions or were they presented to you? If they were presented to you, then you have no choice in which of the two answers is "yes" and which is "no." So it is meaningful for you distinguish between the questions for which you answered 55% and the questions for which you answered 45%. Did you find a symmetrical effect?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2016-03-14T17:33:48.767Z · LW(p) · GW(p)

It was symmetric. I never answered 45% - to clarify, when I answered 55% I was right 45% of the time. And I only recorded whether I was right or wrong, not whether I was right about X being false.

comment by Dagon · 2016-03-14T15:55:10.317Z · LW(p) · GW(p)

The vast majority of yes/no questions you're likely to face won't support 5% intervals. You're just not going to get enough data to have any idea whether the "true" calibration is what actually happens for that small selection of questions.

That said, I agree there's an analytic flaw if you can change true to false on no additional data (kind of: you noticed salience of something you'd previously ignored, which may count as evidence depending on how you arrived at your prior) and only reduce confidence a tiny amount.

One suggestion that may help: don't separate your answer from your confidence, just calculate a probability. Not "true, 60% confidence" (implying 40% unknown, I think, not 40% false), but "80% likely to be true". It really makes updates easier to calculate and understand.

Replies from: ChristianKl, Luke_A_Somers, skeptical_lurker
comment by ChristianKl · 2016-03-14T19:22:38.154Z · LW(p) · GW(p)

The vast majority of yes/no questions you're likely to face won't support 5% intervals. You're just not going to get enough data to have any idea whether the "true" calibration is what actually happens for that small selection of questions.

Tetlock found in the Good Judgment Project, as described in his book Superforecasting, that people who are excellent at forecasting make very fine-grained predictions.

comment by Luke_A_Somers · 2016-03-14T18:59:20.374Z · LW(p) · GW(p)

I disagree that you can't get 5% intervals on random yes/no questions - if you stick with 10%, you really only have 5 possible values - 50-59%, 60-69%, 70-79%, 80-89%, and 90+%. That's very coarse-grained.

comment by skeptical_lurker · 2016-03-14T17:39:05.066Z · LW(p) · GW(p)

The vast majority of yes/no questions you're likely to face won't support 5% intervals.

I agree [edit: actually, it depends on where these yes/no questions are coming from], but I think the questions I was looking at were in the small minority that do support 5% intervals.

Not "true, 60% confidence" (implying 40% unknown, I think, not 40% false)

Perhaps I should have provided more details to explain exactly what I did, because I actually did mean 60% true 40% false.

So, I already was thinking in the manner you advocate, but thanks for the advice anyway!

comment by John_Maxwell (John_Maxwell_IV) · 2016-03-17T02:01:11.528Z · LW(p) · GW(p)

In The genie knows, but it doesn't care, RobbBB argues that even if an AI is intelligent enough to understand its creator's wishes in perfect detail, that doesn't mean that its creator's wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stymie evolution's "wishes".

I thought of a potential way to get around this issue:

  1. Create a tool AI.

  2. Use the tool AI as a tool to improve itself, similar to the way I might use my new text editor to edit my new text editor's code.

  3. Use the tool AI to build an incredibly rich world-model, which includes, among other things, an incredibly rich model of what it means to be Friendly.

  4. Use the tool AI to build tools for browsing this incredibly rich world-model and getting explanations about what various items in the ontology correspond to.

  5. Browse this incredibly rich world-model. Find the item in the ontology that corresponds to universal flourishing and tell the tool AI "convert yourself into an agent and work on this".

There's a lot hanging on the "tool AI/agent AI" distinction in this narrative. So before actually working on this plan, one would want to think hard about the meaning of this distinction. What if the tool AI inadvertently self-modifies & becomes "enough of an agent" to deceive its operator?

The tool vs agent distinction probably has something to do with (a) the degree to which the thing acts autonomously and (b) the degree to which its human operator stays in the loop. A vacuum is a tool: I'm not going to vacuum over my prized rug and rip it up. A Roomba is more of an agent: if I let it run while I am out of the house, it's possible that it will rip up my prized rug as it autonomously moves about the house. But if I stay home and glance over at my Roomba every so often, it's possible that I'll notice that my rug is about to get shredded and turn off my Roomba first. I could also be kept in the loop if the thing gives me warnings about undesirable outcomes I might not want: for example, my Roomba could scan the house before it ran, giving me an inventory of all the items it might come in contact with.

An interesting proposition I'm tempted to argue for is the "autonomy orthogonality thesis". The original "orthogonality thesis" says that how intelligent an agent is and what values it has are, in principle, orthogonal. The autonomy orthogonality thesis says that how intelligent an agent is and the degree to which it has autonomy and can be described as an "agent" are also, in principle, orthogonal. My pocket calculator is vastly more intelligent than I am at doing arithmetic, but it's still vastly less autonomous than me. Google Search can instantly answer questions it would take me a lifetime to answer working independently, but Google Search is in no danger of "waking up" and displaying autonomy. So the question here is whether you could create something like Google Search that has the capacity for general intelligence while lacking autonomy.

I feel like the "autonomy orthogonality thesis" might be a good steelman of a lot of mainstream AI researchers who blow raspberries in the general direction of people concerned with AI safety. The thought is that if AI researchers have programmed something in detail to do one particular thing, it's not about to "wake up" and start acting autonomous.

Another thought: One might argue that if a Tool AI starts modifying itself in to a superintelligence, the result will be too complicated for humans to ever verify. But there's an interesting contradiction here. A key disagreement in the Hanson/Yudkowsky AI-foom debate was the existence of important, undiscovered chunky insights about intelligence. Either these insights exist or they don't. If they do, then the amount of code one needs to write in order to create a superintelligence is relatively little, and it should be possible for humans to independently verify the superintelligence's code. If they don't, then we are more likely going to have a soft takeoff anyway because intelligence is about building lots of heterogeneous structures and getting lots of little things right, and that takes time.

Another thought: maybe it's valuable to try to advance natural language processing, differentially speaking, so AIs can better understand human concepts by reading about them?

Replies from: Viliam, turchin, ChristianKl
comment by Viliam · 2016-03-18T09:33:38.502Z · LW(p) · GW(p)

An interesting idea, but I can still imagine it failing in a few ways:

  • the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

  • the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2016-03-18T21:54:47.760Z · LW(p) · GW(p)

the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

OK, I think this is a helpful objection because it helps me further define the "tool"/"agent" distinction. In my mind, an "agent" works towards goals in a freeform way, whereas a "tool" executes some kind of defined process. Google Search is in no danger of killing me in the process of answering my search query (because using my atoms would help it get me better search results). Google Search is not an autonomous agent working towards the goal of getting me good search results. Instead, it's executing a defined process to retrieve search results.

A tool is a safer tool if I understand the defined process by which it works, the defined process works in a fairly predictable way, and I'm able to anticipate the consequences of following that defined process. Tools are bad tools when they behave unpredictably and create unexpected consequences: for example, a gun is a bad tool if it shoots me in the foot without me having pulled the trigger. A piece of software is a bad tool if it has bugs or doesn't ask for confirmation before taking an action I might not want it to take.

Based on this logic, the best prospects for "tool AIs" may be "speed superintelligences"/"collective superintelligences"--AIs that execute some kind of well-understood process, but much faster than a human could ever execute, or with a large degree of parallelism. My pocket calculator is a speed superintelligence in this sense. Google Search is more of a collective superintelligence insofar as its work is parallelized.

You can imagine using the tool AI to improve itself to the point where it is just complicated enough for humans to still understand, then doing the world-modeling step at that stage.

Also if humans can inspect and understand all the modifications that the tool AI makes to itself, so it continues to execute a well-understood defined process, that seems good. If necessary you could periodically put the code on some kind of external storage media, transfer it to a new air-gapped computer, and continue development on that computer to ensure that there wasn't any funny shit going on.

the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Sure, and there's also the "superintelligent, but with bugs" failure mode where the model is pretty good (enough for the AI to do a lot of damage) but not so good that the AI has an accurate representation of my values.

I imagine this has been suggested somewhere, but an obvious idea is to train many separate models of my values using many different approaches (ex - in addition to what I initially described, also use natural language processing to create a model of human values, and use supervised learning of some sort to learn from many manually entered training examples what human values look like, etc.) Then a superintelligence could test a prospective action against all of these models, and if even one of these models flagged the action as an unethical action, it could flag the action for review before proceeding.

And in order to make these redundant user preference models better, they could be tested against one another: the AI could generate prospective actions at random and test them against all the models; if the models disagreed about the appropriateness of a particular action, this could be flagged as a discrepancy that deserves examination.
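
A minimal sketch of that "any model can veto" idea, with stand-in rules instead of real learned value models (all names and rules here are invented for illustration):

```python
# Sketch of an ensemble veto: an action proceeds only if every value model
# approves it; any objection or disagreement is flagged for human review.
# The "models" below are toy stand-ins, not real learned value models.

def world_model_ok(action: str) -> bool:
    return action != "repurpose_operator_atoms"

def nlp_model_ok(action: str) -> bool:
    return "harm" not in action

def supervised_model_ok(action: str) -> bool:
    return action in {"answer_query", "index_pages"}

MODELS = [world_model_ok, nlp_model_ok, supervised_model_ok]

def evaluate(action: str) -> str:
    votes = {m.__name__: m(action) for m in MODELS}
    if all(votes.values()):
        return "proceed"
    return f"flag for human review (votes: {votes})"

for action in ["answer_query", "index_pages", "repurpose_operator_atoms"]:
    print(action, "->", evaluate(action))
```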

My general sense is that with enough safeguards and checks, this "tool AI bootstrapping process" could probably be made arbitrarily safe. Example: the tool AI suggests an improvement to its own code, you review the improvement, you ask the AI why it did things in a particular way, the AI justifies itself, the justification is hard to understand, you make improvements to the justifications module... For each improvement the tool AI generates, it also generates a proof that the improvement does what it says it will do (checked by a separate theorem-proving module) and test coverage for the new improvement... Etc.

Replies from: Viliam
comment by Viliam · 2016-03-20T10:39:16.346Z · LW(p) · GW(p)

I am trying to imagine the weakest dangerous Google Search successor.

Probably this: Imagine that the search engine is able to model you. Adding such an ability would make sense commercially, if the producers want to make sure that the customers are satisfied with their product. Let's assume that computing power is too cheap and they added too much of this ability. Now the search engine could e.g. find the result with the highest rank, but then predict that seeing this result would make you disappointed, so it chooses another result instead, with a somewhat lower rank but with high predicted satisfaction. For the producers this may seem like a desired ability (tailored, personally relevant search results).

As an undesired side-effect, the search engine would de facto gain the ability to lie to you convincingly. For example, let's say that the function for measuring customer satisfaction only includes emotional reaction, and doesn't include things like "a desire to know the truth, even if it's unpleasant". That could happen for various reasons, such as the producers not giving a fuck about our abstract desires, or concluding that abstract desires are mostly hypocrisy but emotions are honest. Now, as a side-effect, instead of an unpleasant truth the search engine would return a comfortable lie, if available. (Because the answer which makes the customer most happy is selected.)

Perhaps people would become aware of this, and would always double-check the answers. But suppose that the search engine is insanely good at modelling you, so it can also predict how specifically you are going to verify the answers, and whether you will succeed or fail to find the truth. Now we get the scarier version which lies to you if and only if you are unable to find out that it lied. Thus to you, the search engine will seem completely trustworthy. All answers you have ever received, if you verified them, you learned that they were correct. You are only surprised to see that the search engine sometimes delivers wrong answers to other people; but in such situations you are always unable to convince the other people that those answers were wrong, because the answers are perfectly aligned with their existing beliefs. You could be smart enough to use an outside view to suspect that maybe something similar is happening to you, too. Or you may conclude that the other people are simply idiots.

Let's imagine an even more powerful search engine, and more clever designers, who instead of individual satisfaction with search results try to optimize for general satisfaction with their product in the population as a whole. As a side effect of this, now the search engine would only lie in ways that make society as a whole more happy with the results, and where the society as a whole is unable to find out what is happening. So for example, you could notice that the search engine is spreading false information, but you would not be able to convince a majority of other people about it (because if the search engine predicted that you could, it would not have displayed the information in the first place).

Why could this be dangerous? A few "noble lies" here and there, what's the worst thing that could happen? Imagine that the working definition of "satisfaction" is somewhat simplistic and does not include all human values. And imagine an insanely powerful search engine that could predict the results of its manipulation centuries ahead. Such an engine could gently push the whole of humanity towards some undesired attractor, such as a future where all people are wireheaded (from the point of view of the search engine: customers are maximally satisfied with the outcome), or just brainwashed into a cultish society which supports the search engine because the search engine never contradicts the cult teaching. That pushing would be achieved by giving higher visibility to pages supporting the idea (especially if the idea would seem appealing to the reader), lower visibility to pages explaining the dangers of the idea; and also on more meta levels, e.g. giving higher visibility to pages reporting personal scandals involving the people prominently explaining the dangers of the idea, etc.

Okay, this is stretching credibility in a few places, but I tried to find a hypothetical scenario where a too-powerful but still completely transparently designed Google Search successor would doom humanity.

comment by turchin · 2016-03-17T23:40:46.440Z · LW(p) · GW(p)

I will clip your idea and add it to my map of AI control ideas.

comment by ChristianKl · 2016-03-18T14:06:53.422Z · LW(p) · GW(p)

Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stymie evolution's "wishes".

Evolution doesn't have "wishes". It's not a teleological entity.

comment by PipFoweraker · 2016-03-17T20:40:17.375Z · LW(p) · GW(p)

The recently posted Intelligence Squared video titled Don't Trust the Promise of Artificial Intelligence may be of interest to LW readers, if only because of IQ2's decently sized cultural reach and audience.

comment by Arshuni · 2016-03-14T18:18:41.283Z · LW(p) · GW(p)

Replication crisis: does anyone know of a list of solid, replicated findings in the social sciences? (all I know is that there were 36 in the report by Open Science Collaboration, and those are the ones I can easily find)

Replies from: Brillyant
comment by Brillyant · 2016-03-17T15:27:54.058Z · LW(p) · GW(p)

What are the 36 solid, replicated findings?

Replies from: Arshuni
comment by Arshuni · 2016-03-17T17:35:35.773Z · LW(p) · GW(p)

https://osf.io/hy58n/

There is the data. I am not sure what the final criterion for the report was, but sorting by P.value.R seems to give 33 findings with p under 0.05. (Maybe I misremembered the number?... Also, I am unsure what a p-value of 0 is supposed to mean.) I didn't go too deep into what all the different columns represent, but there seems to be one with descriptions of the findings.
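For anyone who wants to reproduce the count, a minimal sketch along these lines should work, assuming the file is downloaded as a CSV and really does contain the P.value.R column (the filename here is a placeholder):

```python
import pandas as pd

# Filename is a placeholder; the P.value.R column name comes from the data described above.
df = pd.read_csv("rpp_data.csv")

replicated = df[df["P.value.R"] < 0.05].sort_values("P.value.R")
print(len(replicated), "replications with p under 0.05")
print(replicated.head())
```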

comment by [deleted] · 2016-03-16T08:32:17.364Z · LW(p) · GW(p)

Telling truth to any face -

Not a lie, with mortar hoary -

Go apace to any place,

To attend to any story.

Happy belated Pi Day, everyone!

Replies from: username2
comment by username2 · 2016-03-16T11:51:07.158Z · LW(p) · GW(p)

Happy sqrt(10) day!

comment by Arshuni · 2016-03-14T18:29:20.512Z · LW(p) · GW(p)

I want to make a desktop map application of my city, kinda like Paradox Interactive's games. My city is 280 km^2, and I would like it at street-level detail. I want to be able to just overlay multiple layers of different maps. What I have in mind is displaying predicted tram locations, purchasing power maps, and pretty much any information I can find on one map, combining these at will, with reasonable speed (and I would much prefer it to be seamless, like in a game, not showing white spots at the edges while it is loading).

Does anyone know of some toolset for such?

Replies from: MrMind, ChristianKl, Lumifer
comment by MrMind · 2016-03-15T08:00:30.639Z · LW(p) · GW(p)

Autocad Map 3D is also something you want to look into, as it's used exactly for this purpose (I almost do this as a job). For speed though, you need quite a capable machine.

comment by ChristianKl · 2016-03-14T18:46:16.535Z · LW(p) · GW(p)

OpenStreetMap provides data that can be used more widely than the Google data.

comment by Lumifer · 2016-03-14T18:34:27.670Z · LW(p) · GW(p)

Google Maps (which I think Google Earth was folded into, but in case it wasn't, you actually want Google Earth).

Alternatively, if you want your own app, look into Open Street Map and their tools.
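For a rough sense of the layer-overlay idea, here is a minimal sketch using the folium Python library on OpenStreetMap tiles (a web map rather than a true desktop app; the coordinates and layer contents are placeholders):

```python
import folium

# Base map on OpenStreetMap tiles; the center coordinates are placeholders.
m = folium.Map(location=[52.52, 13.40], zoom_start=13, tiles="OpenStreetMap")

# Each dataset becomes its own toggleable overlay layer.
trams = folium.FeatureGroup(name="Predicted tram locations")
folium.CircleMarker([52.521, 13.405], radius=6, popup="Tram 12").add_to(trams)
trams.add_to(m)

folium.LayerControl().add_to(m)  # switch layers on and off at will
m.save("city_overlays.html")     # open the result in a browser
```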

comment by NancyLebovitz · 2016-03-19T17:51:45.514Z · LW(p) · GW(p)

Do you have a background in formal debate?

[pollid:1129]

If you do, do you think it was worth the time?

[pollid:1130]

If you don't, do you regret not having it?

[pollid:1131]

Replies from: Elo
comment by Elo · 2016-03-20T22:54:53.160Z · LW(p) · GW(p)

Not many yeses. Makes it hard to find out what you wanted to find out.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2016-03-20T23:04:23.266Z · LW(p) · GW(p)

I was just driven by vague curiosity from a discussion elsewhere about what traits might correlate with rationality.

The lack of debate background suggests (weakly because of small sample size) that debating doesn't correlate with rationality.

Maybe I'll figure out a good way to ask about the desire to argue, which I think does correlate with at least LW rationality.

Replies from: username2
comment by username2 · 2016-03-21T11:55:52.552Z · LW(p) · GW(p)

Maybe most LWers went to schools where no debate programs were available?

comment by moridinamael · 2016-03-18T17:01:58.317Z · LW(p) · GW(p)

I've always enjoyed Kurzweil's story about how the human genome project was "almost done" when they had decoded the first 1% of the genome, because the doubling rate of genomic science was so high at the time. (And he was right).

It makes me wonder if we're "almost done" with FAI.

I don't really know where we are with FAI. I don't know if our progress is even knowable, since we don't really know where we're going. There's certainly not a percentage associated with FAI Completion. However, there are a number of technologies that might suddenly become very helpful.

Douglas Lenat's Cyc, of which I was reminded by another comment in this very thread, seems to have become much more powerful than I would have expected the first time I heard of it. I'm actually blown away and a little alarmed by the things it can apparently do now. IBM's Watson is another machine that can interpret and answer complex queries and demonstrates real semantic awareness. These two systems alone indicate that the state of the art in what you might call "superhuman-knowledge-plus-superhuman-logical-deduction" is ripe (or almost ripe) for exploitation by human FAI researchers. (You could also call these systems "Weak Oracles" or something.)

Nobody expects Cyc or Watson to FOOM in the next few years, but other near-future Weak Oracles might still greatly accelerate our progress in exploring, developing and formalizing the technology needed to solve the Control Problem. It intuitively feels like Weak Oracle tech might actually enable the sort of rapid doubling in progress that we've observed in other domains.

The AlphaGo victory has made me realize that the quality of the future really hinges on which of several competing exponential trends happens to have the sharpest coefficient. Specifically, will we get really-strong-but-not-generally-intelligent Weak Oracles before we get GAI? Where is the crossover of those two curves?

Replies from: ChristianKl
comment by ChristianKl · 2016-03-18T22:25:38.082Z · LW(p) · GW(p)

Douglas Lenat's Cyc, of which I was reminded by another comment in this very thread, seems to have become much more powerful than I would have expected the first time I heard of it.

Can you provide a link to the powerful demonstrations of Cyc?

Replies from: moridinamael, Viliam
comment by moridinamael · 2016-03-21T14:06:56.626Z · LW(p) · GW(p)

Lenat's Google Talk has a lot of examples.

Among them would be giving Cyc a large amount of text and/or images to assimilate and then asking it questions like:

  • Query: "Government buildings damaged in terrorist events in Beirut between 1990 and 2001." A moment's thought will reveal how complex this query actually is, and how many ways there are to answer it incorrectly, but Cyc gives the right answer.

  • Query: "Pictures of strong and adventurous people." Returns a picture of a man climbing a rock face, since it knows that rock climbing requires strength and an adventurous disposition.

  • Query: "What major US cities are particularly vulnerable to an anthrax attack?" This is my favorite example, because it needs to assess not only what "major US cities" are but also what the ideal conditions for the spread of anthrax are and then apply that as a filter over those cities with nuanced contextual awareness.

In general Cyc impresses me because it doesn't use any kind of neural network architecture; it's just knowledge linked in explicit ontologies with a reasoning engine.

comment by Viliam · 2016-03-20T10:55:58.329Z · LW(p) · GW(p)

It's good at marketing.

comment by Algernoq · 2016-03-17T06:17:24.582Z · LW(p) · GW(p)

Modest proposal for Friendly AI research:

Create a moral framework that incentivizes assholes to cooperate.

Specifically, create a set of laws for a "community", with the laws applying only to members, that would attract finance guys, successful "unicorn" startup owners, politicians, drug dealers at the "regional manager" level, and other assholes.

Win condition: a "trust app" that everyone uses, that tells users how trustworthy every single person they meet is.

Lose condition: startup fund assholes end up with majority ownership of the first smarter-than-human-level general AI, and no one's given smart people an incentive not to hurt dumb people.

If you can't incentivize smart selfish people to "cooperate" instead of "defect", then why do you think you can incentivize an AI to be friendly? What's to stop a troll from deleting the "Friendly" part the second the AI source code hits the Internet? Keep in mind that the 4chan community has a similar ethos to LW: namely "anything that can be destroyed by a basement dweller should be".

Replies from: polymathwannabe, Lumifer, Viliam, TheAltar
comment by polymathwannabe · 2016-03-17T13:08:42.881Z · LW(p) · GW(p)

Create a moral framework that incentivizes assholes to cooperate.

So, capitalism?

comment by Lumifer · 2016-03-17T15:12:36.059Z · LW(p) · GW(p)

a "trust app" that everyone uses, that tells users how trustworthy every single person they meet is.

That seems like a horrible idea.

If you can't incentivize smart selfish people to "cooperate" instead of "defect"

We can, of course, just not unconditionally and not all the time. Creatures which always cooperate are social insects.

comment by Viliam · 2016-03-18T09:39:52.013Z · LW(p) · GW(p)

Unrelated to AI:

Making the "trust app" would be a great thing. I spent some time thinking about it, but my sad conclusion is that as soon as the app would become popular, it would fail somehow. For example, if it is not anonymous, people could use real-world pressures to force people to give them positive ratings. The psychopaths would threaten to sue people who label them as psychopaths, or even use violence directly against them. On the other hand, if the ratings are anonymous, a charming psychopath could sic their followers to give many negative ratings to their enemy. At the end, the ratings of a psychopath who hurt many people could look pretty similar to ratings of a decent person who pissed off a vengeful psychopath.

Not sure what to do here. Maybe the usage itself of the "trust app" should be information you only tell your trusted friends; and maybe you create different personas for each group of friends. But then the whole network becomes sparse, so you will not be able to get information on most people you care about. Also, there is still a risk that if the app becomes popular, there will be social pressure to create an official persona, which will be further pressured to give socially acceptable ratings. (Your friends will still know your secret persona, but because of the sparse network, it will be mostly useless to them anyway.)

comment by TheAltar · 2016-03-18T13:25:46.911Z · LW(p) · GW(p)

A trust app is going to end up with all the same issues credit ratings have.

comment by ruelian · 2016-03-16T08:21:29.982Z · LW(p) · GW(p)

Looking for advice with something it seems LW can help with.

I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regards to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.

As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?

Naturally, I'm of the opinion that rationalist thought has a lot to offer on all of those questions. (I also have ulterior motives here, because I think it would be really cool to get some of these people on board with rationalism in general.) I'm having a hard time narrowing down this idea to a lesson plan I can submit to the organizers, so I thought I'd ask for suggestions.

The possible formats I have open for an activity are a lecture, a workshop/discussion in small groups, and some sort of guided introspection/reading activity (for example just giving people a sheet with questions to ponder on it, or a text to reflect on).

I've also come up with several possible topics: How to Actually Change Your Mind (ideas on how to go about condensing it are welcome), practical mind-hacking techniques and/or techniques for self-transparency, or just information on heuristics and biases because I think that's useful in general.

You can also assume the intended audience already know each other pretty well, and are capable of rather more analysis and actual math than is average.

Ideas for topics or activities are welcome, particularly ones that include a strong affective experience, because those are generally better at getting people to think about this sort of thing for the first time.

Replies from: Lyyce, Viliam, ChristianKl
comment by Lyyce · 2016-03-16T09:33:08.078Z · LW(p) · GW(p)

Idea that might or might not be relevant depending on how smart / advanced your group is.

You could introduce some advanced statistical methods and use them to derive results from everyday life, a la Bayes and mammography.

If you can show some interesting or counterintuitive results (that you can't obtain with intuition alone), it would give the affective experience you want, and if they want to do scientific research, the more they know about statistics the better.

Statistics are also a good entry point into rationalist thinking.
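For the mammography example specifically, the arithmetic is short enough to show directly; the prevalence, sensitivity and false-positive numbers below are the usual textbook-style illustration, not real clinical figures:

```python
# Bayes' theorem on the classic mammography example (numbers are illustrative only).
prevalence = 0.01        # P(cancer)
sensitivity = 0.80       # P(positive test | cancer)
false_positive = 0.096   # P(positive test | no cancer)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(round(p_cancer_given_positive, 3))  # about 0.078, far lower than most people guess
```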

comment by Viliam · 2016-03-16T22:10:45.357Z · LW(p) · GW(p)

A few random thoughts:

Who am I?

A system composed of atoms. (As opposed to a magical immaterial being who merely happens to be trapped in a material body, but can easily overcome all its limitations by sufficient belief / mysterious willpower / positive thinking.)

That means I should pay some attention to myself as a causal system, and try to see myself as an outside observer would. For example, instead of telling myself that I should be e.g. "productive", I should rather look into my past and see what kinds of circumstances have historically made me more "productive"; and then try to replicate those more reliably. To pay attention to the trivial inconveniences, superstimuli, peer pressure -- simply to be humble enough to admit that in the short term I may be less of the source of my actions than I would like to believe, and that the proper way to fix it is to be strategic in the long term, which is not going to happen automatically.

What are my goals?

Most people value happiness. But the human value is complex; we also want our beliefs to correspond to reality instead of merely believing pretty lies or getting good feelings from drugs.

Often people are bad at predicting what would make them happy. There is often a difference between how something feels when we plan it, when we are living it, and when we remember it afterwards. For example, people planning a vacation can overestimate how good the vacation will be, and they may underestimate the little joys of everyday life. Or a difficult experience may improve relationships between people who suffered together, and make a good story afterwards, thus creating a lot of value in the long term despite being shitty at the moment.

Sometimes we have goals, or we tell ourselves that something will be awesome, under the influence of other people. We should make sure those people are in our "reference group", and that they are speaking from their experience instead of merely repeating popular beliefs (in the best case, those people should be older versions of our better selves).

Success often does not feel magical at the moment it happens; and it never makes you "happy ever after". For example, you may believe that if you achieve X, you will be super happy, but actually when the day comes, you will probably feel tired, or maybe even a bit disappointed. You may have already raised your expectations, so on the day you reach X you already believe that only 2X can make you truly happy. Or maybe X comes so gradually that you never actually notice it when it comes, because that day doesn't feel much different from the previous one. -- This can be solved by reviewing the past and finding the values of X that you have already achieved, and that you remember having wanted once.

If your strategy is "to do X because you want to achieve Y", you should look for evidence whether X actually brings Y, and whether there are alternative ways to achieve Y. Otherwise you risk spending a lot of time and energy to achieve X without actually achieving Y.

How do I get there?

Specific goals need specific answers. But in general, you probably need a good model of how other people achieve similar goals (the problem is, many people will lie to you for various reasons). Then you need vision and habits. And some system of feedback, to measure whether you are really progressing in the long term.

For example, if your goal is to write a novel, you should look for advice from your favorite authors, you should imagine what kind of novel you want to write and for which audience, and then you need to spend some time every week actually writing. You could measure your long-term progress e.g. by publishing your writing on the web and measuring how many people read it.

comment by ChristianKl · 2016-03-16T11:12:58.631Z · LW(p) · GW(p)

"How to Actually Change Your Mind" is a great topic. I good way to start such a workshop is by having everybody write down instances where they changed their mind in the last year and then discuss those examples.

comment by SanguineEmpiricist · 2016-03-15T20:16:10.754Z · LW(p) · GW(p)

Do you guys know how you can prevent sleep paralysis?

Replies from: ChristianKl, None, James_Miller, turchin
comment by ChristianKl · 2016-03-15T21:02:36.251Z · LW(p) · GW(p)

What makes it a problem for you? What's the problem with having a bit more conscious time while your body is at rest?

Have you tried the normal sleep hacks of going to bed at the same time every day and sleeping 8 hours, having no red light an hour before bed, sleeping in a pitch-black room and taking a bit of melatonin?

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-03-16T00:43:46.751Z · LW(p) · GW(p)

It's an incredibly good indicator of poor sleep quality for me. I have to take phenibut to get good sleep quality nowadays though.

Yes I have. I notice it has to do with body position or when my head is on a tilt.

Replies from: calef
comment by calef · 2016-03-16T02:33:38.829Z · LW(p) · GW(p)

I've found that I only ever get something sort of like sleep paralysis when I sleep flat on my back, so +1 for sleeping orientation mattering for some reason.

comment by [deleted] · 2016-03-16T10:28:03.076Z · LW(p) · GW(p)

For me, recurrent sleep paralysis turned out to be associated with sleep apnea. Both were reduced but not eliminated by adjusting sleep position (side rather than back, as others have already mentioned) and wearing a mandibular adjustment device (which holds the jaw in a slightly different position to avoid airway obstruction). Similarly, some changes in consumption habits reduced occurrence: reducing alcohol intake and large/rich meals shortly before sleeping.

In my case these symptoms were the result of some abnormalities in my throat cartilage which eventually required surgery, but the above behaviour changes reduced occurrence substantially (from approx. 5 instances per week of sleep paralysis or choking to 1.2, based on a 3-month diary). I made all the above adjustments together, so I cannot give any further indication of which of them might have helped, or indeed fully rule out a placebo effect!

I didn't recognise the association between sleep paralysis and apnea but it was one of the first things the head & neck specialist asked.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-03-16T19:07:11.113Z · LW(p) · GW(p)

I did not have sleep apnea; I tested negative for it and for narcolepsy.

comment by James_Miller · 2016-03-16T04:52:44.394Z · LW(p) · GW(p)

Putting a bar of soap between bedsheets supposedly prevents leg cramps. You might want to try it for sleep paralysis, keeping in mind that the placebo effect is a real thing you want to take advantage of.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-16T11:14:10.526Z · LW(p) · GW(p)

The example of soap suggests that our completely flat beds aren't optimal sleeping surfaces. It would be interesting to see what a smart bed that automatically adjusts its surface could do.

comment by turchin · 2016-03-15T21:33:16.090Z · LW(p) · GW(p)

Start using it for experiments with OBEs (out-of-body experiences) or visualization.

comment by Arshuni · 2016-03-14T18:15:27.275Z · LW(p) · GW(p)

Does it make a difference if an organism reproduces in multiple smaller populations versus one larger one, if the number of offspring at generation one is held constant? (Score is determined by the number of offspring and their relatedness, so the standard game.)

Smaller populations are more prone to genetic drift, but in both directions, right?

Does this change somehow if the populations are connected, with different rates of flow depending on the direction?

For example, in humans, migration to the capitals (and in general, urbanization) happens way more often than the converse. I also believe that people are unlikely to migrate between like-sized cities, cause what's the point, but that's just an assumption. In this case, for genes to spread from one small population to another, they have to go through the capital first. OTOH, the people leaving the small source population could be more related to the original population.

So, uh, in general, how would one find the optimal strategy here? ...is there a difference?
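One way to get a feel for the drift part of the question is to simulate it; here is a minimal Wright-Fisher-style sketch (the population sizes, starting frequency, generation count, and run count are arbitrary assumptions):

```python
import random

def drift(pop_size, p=0.5, generations=200):
    """Track one allele's frequency under pure drift in a population of pop_size diploids."""
    for _ in range(generations):
        copies = sum(random.random() < p for _ in range(2 * pop_size))  # binomial sampling
        p = copies / (2 * pop_size)
    return p

runs = 200
large = [drift(1000) for _ in range(runs)]
small = [drift(100) for _ in range(runs)]

def spread(xs):
    return sum((x - 0.5) ** 2 for x in xs) / len(xs)  # mean squared deviation from the start

print("one large population: ", round(spread(large), 4))
print("small populations:    ", round(spread(small), 4))  # expect a noticeably larger spread
```

The small populations should show a larger spread of final frequencies in both directions, which is the drift point; modelling the capital/small-town structure would mean extending this with explicit subpopulations and asymmetric migration rates.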

comment by Sithlord_Bayesian · 2016-03-14T13:13:20.198Z · LW(p) · GW(p)

I have a rationalist/rationalist-adjacent friend who would love a book recommendation on how to be good at dating and relationships. Their specific scenario is that they already have a stable relationship, but they're relatively new to having relationships in general, and are looking for lots of general advice.

Since the sanity waterline here is pretty high, I thought I'd ask if anyone had any recommendations. If not, I'll just point them to this LW post, though having a bit more material to read through might suit them well.

Thanks!

Replies from: MrMind, Manfred, ChristianKl, Strangeattractor
comment by MrMind · 2016-03-15T08:05:32.265Z · LW(p) · GW(p)

It's written from a Christian perspective, but "Things I wish I'd known before getting married" by Gary Chapman is extremely good: 90% level-headed good sense and 10% Christian moralizing. I recommend it for any new couple.

comment by Manfred · 2016-03-14T17:35:16.873Z · LW(p) · GW(p)

Rowland Miller, Intimate Relationships, 7th Ed.

comment by ChristianKl · 2016-03-14T16:17:25.728Z · LW(p) · GW(p)

A good book for a general overview is Mate: Become the Man Women Want by Tucker Max and Geoffrey Miller. Geoffrey Miller is an evolutionary psychology professor and Tucker Max is famous for writing books about his politically incorrect sex stories. At the same time, the focus of their book is on mutually beneficial interactions. They also have a podcast over at http://thematinggrounds.com/

From the position of being new to relationships it's also worthwhile to read about sex. Two good books are the Sex God Method by Daniel Rose and Slow Sex by Nicole Daedone. The books provide very different perspectives. Daniel Rose comes from a PUA background. Nicole Daedone has a degree in Gender Communications and a more New Age background.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2016-03-15T15:05:21.863Z · LW(p) · GW(p)

Tucker Max is famous for

(My brain automatically added "in-" before "famous" when I skimmed that sentence.)

comment by Strangeattractor · 2016-03-14T20:59:26.279Z · LW(p) · GW(p)

I like John Gottman's books. He has written several, any would be good. My favourite is "And Baby Makes Three." He is a therapist who studies married couples in a lab, and can see what works and what doesn't.

comment by Brillyant · 2016-03-15T21:36:37.340Z · LW(p) · GW(p)

Isn't some sort of deism at least plausible and reasonable at this juncture? Is there a materialistic theory of what happened before the big bang that is worth putting any stock in? Or are we in an agnostic wait-and-see mode regarding pre-big bang events?

Replies from: MrMind, username2
comment by MrMind · 2016-03-16T16:25:24.295Z · LW(p) · GW(p)

Isn't some sort of deism at least plausible and reasonable at this juncture?

That would majorly depend on what "deism" means, as a concrete model, other than "here my other models break down". After all, if you can postulate an intelligent and moral being, with exactly our own kind of intelligence and morality, with the power of creating a universe, then surely you can posit, with much more confidence, an unintelligent and amoral system with the power of creating a universe.

Is there a materialistic theory of what happened before the big bang that is worth putting any stock in?

There are many, but none of them are in the realm of testability due to their dependence on a flavor of quantum gravity. Let's not forget that the Big Bang is a singularity, meaning a point where the model breaks down and cries. If you want to go 'before' the Big Bang, you need a wider model (that is, a theory of quantum gravity).

Or are we in an agnostic wait-and-see mode regarding pre-big bang events?

That is surely the most sensible approach at our point in time.

comment by username2 · 2016-03-15T21:44:51.677Z · LW(p) · GW(p)

Time is a phenomenon inside the physical world, not something outside of it. It doesn't make sense to talk about time before the existence of the physical world.

Replies from: Brillyant
comment by Brillyant · 2016-03-16T00:48:28.639Z · LW(p) · GW(p)

Yeah. Okay. Is there any consensus about what caused the big bang? Like, how it happened?

It seems to me abiogenesis is super tricky but conceivable. The "beginning" of everything is a bit more conceptually problematic.

Positing a hyper-powerful creative entity seems not that epistemologically reckless when the more "scientific" alternative is "something happened".

Replies from: Viliam, RolfAndreassen, calef
comment by Viliam · 2016-03-16T07:11:01.781Z · LW(p) · GW(p)

Positing a hyper-powerful creative entity seems not that epistemologically reckless when the more "scientific" alternative is "something happened".

Jumping from "something happened" to "a hyper-powerful creative entity happened" is not reckless? Especially when we have evidence that more complex things can arise from less complex things without a supernatural manager guiding the process.

What makes you look at the vast set of "somethings" that might have been responsible for the origin of the universe, and choose exactly the same thing that our ancestors considered a good explanation for the origins of thunder (and now we know they were wrong)?

Replies from: Brillyant
comment by Brillyant · 2016-03-16T13:04:38.001Z · LW(p) · GW(p)

Especially when we have evidence that more complex things can arise from less complex things without a supernatural manager guiding the process.

This isn't being questioned. I'm asking about origins.

What makes you look at the vast set of "somethings" that might have been responsible for the origin of the universe, and choose exactly the same thing that our ancestors considered a good explanation for the origins of thunder (and now we know they were wrong)?

I don't consider it a good explanation. But others have. And I don't see why it's necessarily bad. So far, I've seen no reason on this thread to update and make deism an awful explanation.

comment by RolfAndreassen · 2016-03-16T05:40:25.792Z · LW(p) · GW(p)

Positing a hyper-powerful creative entity seems not that epistemologically reckless

How about epistemologically useless? What caused your hyper-powerful creative entity? You haven't accomplished anything, you've just added another black box to your collection.

Replies from: Viliam, Brillyant
comment by Viliam · 2016-03-16T07:12:55.629Z · LW(p) · GW(p)

It is progress from "here is a black box and I don't know what is inside" to "here is a black box and I believe there is a magical fairy inside".

Replies from: Brillyant
comment by Brillyant · 2016-03-16T14:02:39.088Z · LW(p) · GW(p)

I suppose. Though I think saying "magical fairy" is just an attempt to use silly-sounding words to dismiss an idea.

I may be wrong (IF SO, PLEASE CORRECT ME WITH DETAILS), but from what I understand, the origin of the universe ("pre-big bang", to the extent that phrase makes any sense) is an area where we currently have almost no knowledge. There are lots of very strange theories and concepts being discussed that have no real evidence supporting them. We're often dealing with pure conjecture, speculating about the way things might be in the absence of the universal laws with which we are familiar.

Do you have a particular theory about how the universe came to be? If so, what makes you believe this?

Replies from: Viliam
comment by Viliam · 2016-03-16T21:36:06.151Z · LW(p) · GW(p)

I agree that the non-religious theories about origins of the universe are speculative. I could name a few, and perhaps say which ones I prefer, but I wouldn't expect to convince anyone, probably not even myself on a different day.

(I suspect the correct answer is somewhere along: "everything exists in a timeless Tegmark multiverse, but intelligent observers only happen in situations where causality exists, and causality defines some kind of measure, so despite everything existing, some things seem more likely to the observers than other things". And specifically for the origin of our universe, I suspect the correct answer would be that if you get too close to the big bang, local arrows of time start pointing in non-parallel directions and/or the past stops being unique. But that's just a bunch of words masking my lack of deep understanding.)

However, religions also don't have convincing answers for what happened before god(s) created the world, or how god(s) happened to exist. So by adding religion you are actually not getting any closer to the answer. You have one more step in the chain, but the end of the new chain looks the same as (or worse than) the end of the old one.

Instead of "universe has simply existed since ever" you have "god has simply existed since ever"; instead of "time only exists within universe, so it is meaningless to ask what was before that" you have "god has created time together with universe, so it is meaningless to ask what was before that"; instead of "the universes exist in an infinite loop of big bang and big crunch" you have "god keeps creating and destroying universes in an infinite loop"; et cetera.

comment by Brillyant · 2016-03-16T13:36:50.899Z · LW(p) · GW(p)

Can you explain how a simulated universe, for instance, is more useful than deism? Doesn't it also simply move the question of ultimate origins back a step?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2016-03-19T21:21:53.849Z · LW(p) · GW(p)

Right, which is why I don't postulate a simulated universe as the explanation for existence.

comment by calef · 2016-03-16T02:31:26.653Z · LW(p) · GW(p)

This is essentially what username2 was getting at, but I'll try a different direction.

It's entirely possible that "what caused the big bang" is a nonsensical question. 'Causes' and 'Effects' only exist insofar as there are things which exist to cause causes and effect effects. The "cause and effect" apparatus could be entirely contained within the universe, in the same way that it's not really sensible to talk about "before" the universe.

Alternatively, it could be that there's no "before" because the universe has always existed. Or that our universe nucleated from another universe, and that one could follow the causal chain of universes nucleating within universe backwards forever. Or that time is circular.

I suspect that the reason I'm not religious is that I'm not at all bothered by the question "Why is there a universe, rather than not a universe?" not having a meaningful answer. Or rather, it feels overwhelmingly anthropocentric to expect that the answer to that question, if there even was one, would be comprehensible to me. Worse, if the answer really was "God did it," I think I would just be disappointed.

Replies from: Brillyant
comment by Brillyant · 2016-03-16T03:25:52.433Z · LW(p) · GW(p)

It makes a lot of sense that the nature of questions regarding the "beginning" of the universe is nonsensical and anthropocentric, but it still feels like a cheap response that misses the crux of the issue. It feels like "science will fill in that gap eventually" and we ought to trust that will be so.

Matter exists. And there are physical laws in the universe that exist. I accept, despite my lack of imagination and fancy scientific book learning, that this is basically enough to deterministically allow intelligent living beings like you and me to be corresponding via our internet-ed magical picture boxes. Given enough time, just gravity and matter get us here, to all the apparent complexity of the universe. I buy that.

But whether the universe is eternal, or time is circular, or we came from another universe, or we are in a simulation, or whatever other strange non-intuitive thing may be true in regard to the ultimate origins of everything, there is still this pesky fact that we are here. And everything else is here. There is existence where it certainly seems there just as easily could be non-existence.

Again, I really do recognize the silly anthropocentric nature of questions about matters like these. I think you are ultimately right that the questions are non-sensical.

But, to my original question, it seems a simple agnostic-ish deism is a fairly reasonable position given the infantile state of our current understanding of ultimate origins. I mean, if you're correct, we don't even know that we are asking questions that make sense about how things exist... then how can we rule out something like a powerful, intelligent creative entity (that has nothing to do with any revealed religion)?

I'm not asking rhetorically. How do you rule it out?

Replies from: Dagon, polymathwannabe, bbleeker, James_Miller
comment by Dagon · 2016-03-16T15:50:13.662Z · LW(p) · GW(p)

My disagreement isn't that it's implausible for such an entity to exist, but that it's extremely implausible for it to matter in any decision or experience I anticipate. The chain of unsupported leaps from "I perceive all this stuff and I don't know why" to "some powerful entity created it all, and I understand their desires and want to behave in ways that please or manipulate them" is more than I can follow.

Replies from: Lumifer, Brillyant
comment by Lumifer · 2016-03-16T16:06:43.503Z · LW(p) · GW(p)

and I understand their desires and want to behave in ways that please or manipulate them

The OP is talking about deism.

Replies from: Dagon
comment by Dagon · 2016-03-17T05:33:09.008Z · LW(p) · GW(p)

Right. And regardless of what's written about the rest of the cluster of religious belief regarding souls, creator-pleasing morality, etc., I have yet to actually meet anyone who assigns a high probability to a conscious thinking Creator without also bringing the rest of it in.

Replies from: Lumifer
comment by Lumifer · 2016-03-17T14:57:51.831Z · LW(p) · GW(p)

I have yet to actually meet anyone...

Have you met anyone who self-identifies as a deist?

comment by Brillyant · 2016-03-16T15:54:43.440Z · LW(p) · GW(p)

The chain of unsupported leaps from "I perceive all this stuff and I don't know why" to "some powerful entity created it all, and I understand their desires and want to behave in ways that please or manipulate them" is more than I can follow.

Why is the bold necessary, or necessarily relevant? Are you referencing revealed religion?

comment by polymathwannabe · 2016-03-16T13:42:12.863Z · LW(p) · GW(p)

There's the burden of proof thing (it's the affirmer, not the denier, who has to present evidence) and the null hypothesis thing (in absence of evidence, the no-effect or no-relationship hypothesis stands).

Replies from: Brillyant
comment by Brillyant · 2016-03-16T14:12:49.011Z · LW(p) · GW(p)

I'm not trying to prove anything. I'm asking specifically about the process that people smarter than I am use to rule a proposition out.

comment by Sabiola (bbleeker) · 2016-03-16T11:24:28.568Z · LW(p) · GW(p)

It makes a lot of sense that the nature of questions regarding the "beginning" of the universe is nonsensical and anthropocentric, but it still feels like a cheap response that misses the crux of the issue. It feels like "science will fill in that gap eventually" and we ought to trust that will be so.

I think that's one question that science probably won't be able to answer. But that's no reason to just make something up! Maybe we can't rule out a 'powerful, intelligent creative entity' – but why would you even think of that? And of course it just shifts the question to the next level, because where would that entity come from?

Replies from: Brillyant
comment by Brillyant · 2016-03-16T13:11:12.442Z · LW(p) · GW(p)

Maybe we can't rule out a 'powerful, intelligent creative entity' – but why would you even think of that?

Others have thought of it. I'm asking why I ought to dismiss it. I think we have good reasons to dismiss, for instance, Christianity, because of the positive claims it makes. I don't see the same contradiction with something like deism.

And of course it just shifts the question to the next level, because where would that entity come from?

This isn't a compelling argument to me. Can we rule out an intelligent prime mover with what we know about the universe? If so, what do we call the events that first caused everything to be?

comment by James_Miller · 2016-03-16T04:47:15.337Z · LW(p) · GW(p)

There is existence where it certainly seems there just as easily could be non-existence.

Could the prime numbers not exist? Some things, such as our universe, might have to exist.

Replies from: Brillyant
comment by Brillyant · 2016-03-16T13:05:10.932Z · LW(p) · GW(p)

Please elaborate. The universe is necessary?

Replies from: James_Miller
comment by James_Miller · 2016-03-16T14:50:30.098Z · LW(p) · GW(p)

I have thought a lot about why there is something rather than nothing. It seems (to my brain at least) that the prime numbers have to exist, that they are necessary. I have speculated that perhaps after we understand all of physics we will come to realize that like the prime numbers, the universe must exist. I admit that I'm giving a mysterious answer to a mysterious question, sorry.

Replies from: Brillyant
comment by Brillyant · 2016-03-16T15:38:52.164Z · LW(p) · GW(p)

Interesting. I'm ignorant of math, but aren't numbers just abstractions? And prime numbers exist within those abstractions?

Can you help me understand the parallel to the physical reality, and ultimate origins, of the universe?

...

I admit that I'm giving a mysterious answer to a mysterious question, sorry.

I have thought a lot about why there is something rather than nothing. It seems (to my brain at least)...

I appreciate your reply, as it pretty well sums up where I'm at. Can you take a stab at articulating why you (presumably) reject something like deism as an explanation for why there is something instead of nothing?

I also believe a perfect knowledge of physics will ultimately allow us to see clearly "why" and "how" the universe is the way it is, solving questions of origin in the process. But, in the meantime, I'm having a hard time dismissing the idea of a powerful intelligent creative entity a la deism, as it seems just as plausible as the other ideas I'm aware of.

On another note: it seems deism gets saddled with connotations of religion in discussions like this, and I don't think this is fair or helpful to the discussion. If you could be intentional about avoiding this in your response, I would appreciate it.

Replies from: moridinamael
comment by moridinamael · 2016-03-16T16:09:46.893Z · LW(p) · GW(p)

Look into the ideas of Tegmark, the Mathematical Universe Hypothesis. The central idea is that all possible mathematical structures exist. What we view as "the Universe" is just one set of equations with a particular set of boundary conditions, out of an infinite space of valid mathematical structures. The Universe exists because its existence is logically valid. That's it.

Replies from: James_Miller
comment by James_Miller · 2016-03-16T17:33:14.400Z · LW(p) · GW(p)

Yes, this is my best guess as well. I reject deism because of Occam's razor: the computational complexity of a conscious creator is rather high. I do think this might all be a computer simulation, but even then the basement reality doesn't have a conscious creator.

comment by [deleted] · 2016-03-14T10:14:48.205Z · LW(p) · GW(p)
  • Private insurance approaches to universal healthcare seem like the only universal healthcare policy formulation that doesn't subsidise, and therefore incentivise, poor health decisions. Therefore, the primacy of my justice ethics would support that, or non-universal health care policy formulations. However, I don't know whether the evidence supports or opposes that perverse incentive actually playing out in human behaviour, and whether other complex factors (e.g. increased productivity of the subsidised risk takers?) sufficiently compensate individuals who are making legitimate, egoistic decisions or better (prosocial) ones. Anyone know what the evidence says, preferably with an indication of the strength of evidence, so it can be synthesised appropriately with other evidence?
  • How do I work out whether an ethical duck farm is a profitable venture?
  • Say this with me 'I will cognitively reframe and restructure the knowledge of antecedents and determinants of negative, inadvertable consequences because cognitive behavioural therapy actually works.'
  • I'm interested in the reactions people might expect or seek after disclosing information or asking a question. What kind of reaction are you expecting in response to whatever you comment in reply to this?
  • Do you believe the affective fallacy is a legit fallacy? I don't, but I think attitudes to the fallacy would be a good correlate of attitudes to my writing.
  • Strategically, do you think more like a naval admiral, or a pirate captain?
  • By taxing tobacco above the Ramsey rate up to Pigovian rates, the government sacrifices tax revenue (by cannibalising the elasticity of demand) for public health gains that could be achieved in other ways (e.g. by tobacco licenses), but wouldn't those reduce demand too, due to less consumption? That's because even though tobacco tax revenue is high, it doesn't match the costs externalised onto the health care system.

Counterintuitive relevant fact: Adam Smith supported Pigovian taxes

'Sugar, rum and tobacco are commodities which are no where necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.'

  • Adam Smith. An Inquiry into the Nature and Causes of the Wealth of Nations, 1776

  • Got the travel bug? Want a cure? Check this out

  • I recently almost asked someone if they had a strategy... for what amounted to the formulation of their startup's strategy. Meta-addiction diagnosed!
  • Maybe it's because I'm compulsive. Maybe it's because I'm clingy to motivational videos, maybe it's because I'm a gambling addict. So, I've got off the hedonic treadmill. How? A mindblowing attitude adjustment on desire. This culls my impulsivity and reactiveness. Thank you Julien Blanc. A supplement I'm usually too lazy to watch myself here. I wouldn't be surprised if people look back at PUAs, who are reviled or ignored in their living prime, with the admiration that historic hindsight affords to social movements.
  • Want to see the world's most competed, battle-tested diplomat in action? Here's a video of him in an interview here: Interview: Dr Mohammad Javad Zarif, Iran's Foreign Minister with ABC's Chief Foreign Correspondent Philip Williams. He's the consequence of being a career diplomat AND an academic THEN a politician.
  • The standard of evidence for cocoa butter's efficacy on Wikipedia is citing Livestrong articles. It's natural, so an attractive lip balm option, but does it work?
  • Here's an idea that's before its time: a nightclub that plays music at a level that won't cause permanent hearing damage... could go hand in hand with sober nightclubs... maybe even silent discos that are talking-friendly, with individual tailoring and no noise pollution
Replies from: username2
comment by username2 · 2016-03-14T13:53:44.918Z · LW(p) · GW(p)

Say this with me 'I will cognitively reframe and restructure the knowledge of antecedents and determinants of negative, inadvertable consequences because cognitive behavioural therapy actually works.'

No

It's a mouthful

Replies from: None
comment by [deleted] · 2016-03-14T14:21:10.314Z · LW(p) · GW(p)

If it wasn't, it would suffer from one of Empson's 7 Types of Ambiguity. Now that I have a typology of ambiguity, I no longer feel uncomfortable about it.

comment by Lyyce · 2016-03-14T11:35:30.355Z · LW(p) · GW(p)

One major difference between left and right is the stance on personal responsibility.

Leftist intellectuals tend to think societal influence trumps individual capabilities, so people are not responsible for their misfortunes and deserve to be helped, whereas rightists have the opposite view (related).

This seems trivial, especially in hindsight. But I hardly ever see it mentioned, and in most discussions the right side treats the left as foolish and irrational, while the left thinks people on the right are self-interested and evil, rather than simply having a different philosophical opinion.

I guess this is part of the bigger picture of political discourse: it is always easier to dehumanise an opponent than to admit his point is as valid as ours.

Replies from: Stingray, ChristianKl, Val
comment by Stingray · 2016-03-14T13:27:02.281Z · LW(p) · GW(p)

Would this description pass an ideological Turing test?

Replies from: gjm, Lyyce, Lyyce
comment by gjm · 2016-03-14T14:18:43.573Z · LW(p) · GW(p)

It seems to me (leftish) that it's pointing at something correct but oversimplifying.

In so far as Lyyce's analysis is correct, I should be looking at people in difficulty and saying "there's nothing wrong with their abilities, but society has screwed them over, and for that reason they should be helped". I might say that sometimes -- e.g., when looking at a case of alleged sexual discrimination -- but in that case my disagreement with those who take the other position isn't philosophical, it's a matter of empirical fact. (Unless either side takes that position without regard to the evidence in any given case, which I don't think I do and wouldn't expect the more reasonable sort of rightist to do either.)

But it's not what I'd say about, say, someone who has had no job for a year and is surviving on government benefits. Because that would suggest that if in fact they had no job because they simply had no marketable skills, then I should be saying "OK, then let them starve". Which I wouldn't. I would say: no, we don't let them starve, because part of being civilized is not letting people starve even if for one reason or another they're not useful.

We might then have an argument -- my hypothetical rightist and I -- about whether a policy of letting some people starve results in more people working for fear of starvation, hence more prosperity, hence fewer people actually starving in the end. I hope I'd be persuadable by evidence and argument, but most likely I'd be looking for reasons to broaden the safety net and Hypothetical Rightist would be looking for reasons to narrow it. That may be because of differences in opinion about "personal responsibility" (as Lycce suggests) or in compassion (as I might suggest if feeling uncharitable) or in realism (as H.R. might suggest if feeling uncharitable) but I don't think it has much to do with societal influence trumping individual capabilities.

I think Lyyce's analysis works better to explain left/right differences in attitudes to the conspicuously successful. H.R. might say: "look, this person has been smart and worked hard and done something people value, and deserves to be richly rewarded". I might be more inclined to say "yes indeed, but (1) here are some other people who are as smart and hardworking and doing valuable things but much poorer and (2) this person's success is also the result of others' contributions". And if you round that off to "societal influence versus individual capabilities" you're not so far off.

In uncharitable mood, my mental model of people on the right isn't quite "self-interested and evil" but "working for the interests of the successful". (When in slightly less uncharitable mood, I will defend that a little -- success is somewhat correlated with doing useful things, thinking clearly, not harming other people too overtly, etc., and there's something to be said for promoting the interests of those people.)

I would guess (not very confidently) that people on the right will be more inclined to agree with Lyyce's analysis, and (one notch less confidently still) that Lyyce identifies more with the right than with the left.

Replies from: Lyyce
comment by Lyyce · 2016-03-14T14:54:46.255Z · LW(p) · GW(p)

Apparently I have not made my point clear enough. I am indeed simplifying: "everything is due to society" and "everything is due to individuals" are the two ends, but you can be anywhere on the spectrum. This is also only one point among others, probably not the main one, defining identity politics (as you put it), and surely not every leftist/rightist will hold the view I give him, or even be concerned with the concept.

If I take your example of the person on government benefits with no skills, a common argument is that the fact that he had poor parents, grew up in a bad neighbourhood or was discriminated against is one of, if not the main, reasons he has trouble acquiring skills or finding a job; then he should not be held responsible and left alone.

I consider myself leftist (by European standards). I do think success mostly depends on things beyond the individual and that we ought to help everyone anyway, even if someone is the only one to blame for his misery (I also buy this civilized thing).

Replies from: Dagon, Lumifer
comment by Dagon · 2016-03-14T16:08:59.599Z · LW(p) · GW(p)

The reason to think in terms of an ideological Turing test is that "opposite" is almost never correct. Almost nothing can be usefully simplified to a single one-dimensional aspect where both ends are reasonable and common.

In the multidimensional space of different personal influences (genetics, upbringing, current social environment, governmental and non-governmental support and constraint networks), there are likely multiple points of belief in the balance of choice vs non-choice. It's just not useful to characterize one cluster as the "opposite" of the other.

Personally, I find the three-axis model fairly compelling - it's not that different political leanings come from different points on one dimension, it's that they are focusing on completely different dimensions. Progressives tend to think in terms of oppressor/oppressed, Conservatives of Barbarism/Civilisation, and Libertarians of Coercion/Freedom.

This does get accepted (to some extent - it's still massively oversimplified) by both liberal and conservative friends of mine, so it passes at least one level of the test.

comment by Lumifer · 2016-03-14T15:14:57.766Z · LW(p) · GW(p)

a common argument is

It might well be a common argument, but the correct question is whether it's a valid argument.

we anyway ought to help everyone

Using a less sympathetic expression this is also known as the forced redistribution of wealth. There is an issue, though, well summed up by the quote usually attributed to Margaret Thatcher: "The problem with socialism is that eventually you run out of other people's money".

Replies from: Lyyce
comment by Lyyce · 2016-03-14T15:40:48.567Z · LW(p) · GW(p)

a common argument is

It might well be a common argument, but the correct question is whether it's a valid argument.

I do think it is a valid argument (I might be wrong of course); many studies have highlighted the effects of education, parents, genes, environment, etc. So I find it unfair to blame someone for his problems, since there are too many elements to consider to give an accurate judgement.

Using a less sympathetic expression this is also known as the forced redistribution of wealth.

I don't like the idea of forced redistribution of wealth (namely, taxes), but in my opinion having a part of the population living in horrible conditions, if not outright starving, is worse, whether they deserve it or not.

I'd wager there is enough money in the first world to give everyone a "decent" life (admittedly this depends on your definition of decent; let's say shelter, food, education, health care and some leftovers for whatever you want to do). It is already implemented in various countries, and the States are not so far off in their own way, so it is doable. However, it is probably not the optimal path in the long run for economic growth; I think it is worth it anyway (low confidence though).

Replies from: Lumifer
comment by Lumifer · 2016-03-14T16:11:37.444Z · LW(p) · GW(p)

many studies have highlighted the effect of education, parents, genes, environment, etc.

Yes, but let me emphasize the important part of that argument: "then he should not be held responsible and left alone". That's a normative, not a descriptive claim. It is also entirely generic: every single human being should not be held responsible -- right?

I'd wager there is enough money in the first world to give everyone a "decent" life

For how long?

You're assuming there is a magical neverending pot of money from which you can simply grab and give out. What happens in a few years when you run out?

Replies from: Lyyce
comment by Lyyce · 2016-03-14T16:40:37.363Z · LW(p) · GW(p)

That's a normative, not a descriptive claim.

Fair enough, this is only my own biased opinion. It is indeed generic; I am still unsure whether my position should be "mostly not responsible" or "not responsible at all", depending on which model of free will is correct.

For how long?

Wealth is produced, and the money does not disappear (does it actually? my understanding of economics is pretty basic) when you give it out, since the recipients spend it as consumers the same way the people you take it from would.

I don't see anything "running out" in the few socialist countries out there.

Replies from: Viliam, Lumifer
comment by Viliam · 2016-03-15T08:02:56.669Z · LW(p) · GW(p)

Wealth is produced, and the money does not disappear (does it actually? my understanding of economics is pretty basic) when you give it out, since the recipients spend it as consumers the same way the people you take it from would.

The money usually does not literally disappear, but what happens if you have too much money in circulation and not enough things to buy is that the money loses value, i.e. things become more expensive. (Attempts to fix this problem by regulating prices typically result in literally empty shops after the few cheap things are sold.) It is related to inflation, but the whole story is complicated.

I don't see anything "running out" in the few socialist countries out there.

There are many countries in eastern Europe that once had "socialist" in their names and now don't. And they happen to be among the poorest ones in Europe. The "running out of money" meant that over the decades their standards of living fell far behind western Europe.

You probably mean Sweden (people who talk about "socialist" countries not running out of money usually mean Sweden, because it's quite difficult to find another example). I don't know enough about Sweden to explain what happened there, but I suspect they have much less "socialism" than the former Soviet bloc had.

(For the purposes of a rational debate it would probably be better to stop using words like "socialism" and instead talk about more specific things, such as: high taxes, planned economy, mandatory employment, censorship of media, dictatorship of one political party, universal health care, basic income, etc. These are things typically described as "socialist" but they don't have to appear together.)

Replies from: gjm, ChristianKl
comment by gjm · 2016-03-15T09:57:07.925Z · LW(p) · GW(p)

countries in Eastern Europe

I think being in Eastern Europe, as much as having once had "socialist" in their names, may be their problem. They got screwed over by the Nazis in WW2 and then screwed over again by the USSR. I think they'd be poor now whatever their politics had been.

Sweden [...] the former Soviet bloc

Again, the former Soviet bloc is distinguished by features other than socialism -- notably, by having been part of the Soviet bloc. And the USSR is distinguished by features other than socialism -- e.g., by totalitarianism, by having been the enemy of the US (which was always the richer superpower), etc.

On the other side, it's not just Sweden -- but also, as you say, not exactly hardcore socialism either.

Replies from: Lumifer
comment by Lumifer · 2016-03-15T14:29:07.753Z · LW(p) · GW(p)

They got screwed over by the Nazis in WW2

That's true of the whole of (continental) Europe, not just the East.

and then screwed over again by the USSR

By having specific politics imposed on them. So the "whatever their politics had been" is a non sequitur.

And the USSR is distinguished by features other than socialism -- e.g., by totalitarianism

If by "socialism" you mean "Western social democracy", the USSR was never socialist. And if by "socialism" you mean "communism" (which is how the Russians, etc. used the word), totalitarianism is an essential part of the package.

Replies from: gjm
comment by gjm · 2016-03-15T17:49:02.515Z · LW(p) · GW(p)

By having specific politics imposed on them.

I do not think that was the only variety of screwage inflicted on the Soviet bloc countries by the USSR.

(And I bet imposing a particular political system on a country tends to make it less prosperous than it would have been had it adopted that political system of its own accord -- because the people who have to make it work will resent it, be less motivated to make it work well, etc. So even if that were all the USSR did, I'd still expect economic damage independent of the (de)merits of the particular system they imposed.)

If by "socialism" you mean [...]

Actually I mean something more like "that which Western social democracies have more of than Western free-market capitalist countries, and avowed communist countries have more of again". Or like the big bag of ideologies you'll find on Wikipedia.

Replies from: Lumifer
comment by Lumifer · 2016-03-15T18:13:54.170Z · LW(p) · GW(p)

And I bet imposing a particular political system on a country tends to make it less prosperous than it would have been had it adopted that political system of its own accord

Counter-example: post-WW2 Japan (and, arguably, West Germany as well).

Generally speaking, I'd say that "people who have to make it work will resent it" is too crude of an approach. Some people will, but some people will see it as an excellent opportunity to advance. In the case of the Soviet Union itself it's unclear whether you can say that the political system was "imposed" -- it's not like the population had a free choice...

Replies from: gjm
comment by gjm · 2016-03-15T20:45:40.301Z · LW(p) · GW(p)

post-WW2 Japan

Yup, I'll agree that Japan did very well after WW2 despite having democracy imposed on it. Did it do better or worse than it would have had it embraced democracy autonomously, though?

(I doubt that's answerable with any confidence. Unfortunately we can't figure out how much evidence the economic difficulties of Eastern Europe are against socialist economic policies without taking some view on how damaging, if at all, it is to have a political system forced on you.)

too crude

Oh yes, but what else can you expect when we're trying to deal with big knotty political questions in short forum comments?

Replies from: Lumifer
comment by Lumifer · 2016-03-15T21:20:05.838Z · LW(p) · GW(p)

Unfortunately we can't figure out how much evidence the economic difficulties of Eastern Europe are against socialist economic policies without taking some view on how damaging, if at all, it is to have a political system forced on you.

Given the rather clean comparison of East and West Germanies (no one asked any Germans what kind of political system they would like), I don't understand why you are having problems figuring this out.

Replies from: gjm, Viliam
comment by gjm · 2016-03-15T22:43:56.196Z · LW(p) · GW(p)

The DDR was, AIUI, imposed on much more drastically than the BRD. The BRD was an ally of countries that were more prosperous and powerful to begin with (most importantly the US, as Viliam's comment about the Marshall Plan points out), whereas the DDR was their enemy.

For the avoidance of doubt, I do agree that there is very good evidence that Soviet-style communism is a less effective economic system than Western-style democratic lightly-regulated market capitalism. (And yes, the two halves of Germany make a nice comparison.) But from there to "all possible forms of socialism are bad for you" is not, so far as I can see, a step warranted by the evidence.

(The actual issue in this thread seems to have been whether the "First World" has the resources to provide everyone with 'a "decent" life' without running out. Lyyce didn't propose any very specific way of trying to do this, but I don't have the impression he was wanting Soviet-style communism.)

comment by Viliam · 2016-03-15T21:41:20.401Z · LW(p) · GW(p)

Another huge difference was the Marshall Plan.

comment by ChristianKl · 2016-03-15T11:57:15.058Z · LW(p) · GW(p)

Basic income is not historically a socialist idea; it's a liberal one. Milton Friedman came up with a version of it under the name of the negative income tax.

Billionaire Götz Werner did a lot to promote the concept. In Germany the CDU (right-wing) politician Dieter Althaus spoke up for it. YCombinator, which invests in research on it, is not a socialist institution either.

Socialism is about workers' rights. People who don't work but just receive a basic income aren't workers, and the unemployed aren't union members. Unions generally want employers to take care of their employees; they believe that employers should pay a living wage and that it's not the role of the government to pay low-income people a basic income.

comment by Lumifer · 2016-03-14T17:30:33.043Z · LW(p) · GW(p)

I am still unsure if my position should be "mostly not responsible" or "not responsible at all"

If "not at all" won't you have issues with e.g. the criminal justice system?

Wealth is produced, and money does not disappear (does it actually? my understanding of economics is pretty basic) when you give it out, since the recipients spend it as consumers the same way the people you take it from would.

Money is just a set of convenient tokens; you can't consume money. What you want is value in the form of valuable (that is, desirable) goods and services. Most goods and services disappear when you consume them: if you eat a carrot, that carrot is gone.

When you give out (free) money you generate demand for goods and services. In the context of a capitalist society there is a common assumption that "the market" will automagically generate the supply (that is, actual goods and services) to satisfy the demand. However if you are not in the context of a capitalist society any more, you can't assume that the supply will be there to meet the demand -- see the example of the Soviet Union, etc.

When you redistribute money, people use that money to buy stuff. Someone has to produce the actual stuff and moving money around will not, by itself, lead to actual stuff being produced. If no one is growing carrots, there will be none to be had, free money or no free money.

Replies from: Lyyce
comment by Lyyce · 2016-03-14T18:09:03.615Z · LW(p) · GW(p)

In the context of a capitalist society there is a common assumption that "the market" will automagically generate the supply

In the current system people produce goods for their subsistence. Maybe if you gave subsistence to everyone (basic income for example) and let people produce in exchange for "more", the system would still be viable.

The advantages are: nobody left out, more flexibility in your work, people doing what they like (more artists and stuff), not having to work to survive (that counts for some). It would increase the happiness of the persons concerned. The disadvantages are a net loss of production compared to the current systems and the producers of goods being worse off. Maybe the trade-off is not worth it; I'd like to have it tried just to check.

If "not at all" won't you have issues with e.g. the criminal justice system?

I am undecided. Even if they are not responsible, criminals are harmful to the rest of the population, so imprisonment can be necessary. However, the justice system should focus on rehabilitation rather than punishment.

Your question made me think: from that, one could well argue that since people who do nothing are harmful to the rest of society (technically they are taking money from the productive part), they should be forced to be productive.

Bearing that in mind, I would be fine with giving unproductive people incentives to become productive. But then you have the question of how much incentive is ethically justified.

Replies from: Viliam, Lumifer
comment by Viliam · 2016-03-15T08:55:34.846Z · LW(p) · GW(p)

The disadvantages are a net loss of production compared to the current systems

The words "loss of production" are too abstract, so it feels like it is no big deal. But it depends on what specifically it means. Maybe it's slower internet connection, fewer computer games, and more expensive Coca Cola. Or maybe it's higher mortality in hospitals, higher retirement age, and more poverty.

I'm saying this because I think people usually only imagine the former, but in real life it's more likely to be both.

I would be fine with giving unproductive people incentives to become productive.

If you give incentives to unproductive people to become productive, but you don't give incentives to productive people to remain productive, the winning strategy for people is to have swings of productivity.

Generally, whenever you have a cool idea that would work well for the current situation, you should think about how the situation will change when people start adapting to the new rules and optimizing for them. Because sooner or later someone will.

Replies from: Lyyce
comment by Lyyce · 2016-03-15T11:35:06.636Z · LW(p) · GW(p)

I am aware that very negative consequences are possible, even likely, especially if you go the whole way (i.e. save everyone at any cost). My stance is that the current situation is not optimal, and that it is worth trying incremental / small-scale changes to see whether they make the situation better (or worse). Admittedly, the ways it could go wrong are multiple.

If you give incentives to unproductive people to become productive, but you don't give incentives to productive people to remain productive, the winning strategy for people is to have swings of productivity.

If working people can afford more luxury than non-working ones, this gives people an incentive to become productive and stay so. Another incentive that would probably exist (at least in the first generations) is peer pressure, with not working being low-status.

Generally, whenever you have a cool idea that would work well for the current situation, you should think about how the situation will change when people start adapting to the new rules and optimizing for them. Because sooner or later someone will.

Yeah, the impossibility of predicting long-term evolution is the biggest flaw of universal basic income and the like. But this is true of any significant change. That's why we should be very careful about policy changes, but immobilism is not the thing to do (in my opinion).

Again, I am not highly confident that my opinion is the right one.

(answer to your other message)

The difference between Sweden (Denmark and France also fit the bill) and eastern European countries is that the former have an extensive welfare system but apart from that have a capitalist economy, while this is not the case for the latter.

For example, in France (the country I know the most about), if you are single and have never worked, there is a "living wage" of approximately 500 euros per month (only if you are over 25, for some reason), plus housing assistance of 90 to ~150 euros per month, free healthcare and free public transport. If you have kids you get more help and free education, but it is harder to live without working.

On the other hand, France is a market economy with free trade and very few state monopolies, and wealth is held as private capital.

comment by Lumifer · 2016-03-14T18:31:25.567Z · LW(p) · GW(p)

In the current system people produce goods for their subsistence.

Nope, that would be true in a subsistence economy. You don't want to live in one :-/

In the current system people produce goods to be exchanged for money which money will be used to buy other goods.

Maybe if you gave subsistence to everyone (basic income for example) and let people produce in exchange for "more", the system would still be viable.

And do you have reasons to believe that would be so -- besides "maybe"?

It would increase the happiness of the persons concerned

Well, until their toilet clogged and stayed clogged because most plumbers became painters and the rest just went fishing. And until they got sick and found out that the line to see one of the few doctors left is a couple of months. And until the buses stopped running because being a bus mechanic is not such a great job and there are not enough guys who are willing to do it just for fun...

one could well argue that since people who do nothing are harmful to the rest of society (technically they are taking money from the productive part), they should be forced to be productive.

Of course. See e.g. the Soviet Union or Mao's China: being unemployed was a crime. If you can't find a job, the state has a nice labour camp all ready for you.

I would be fine with giving unproductive people incentives to become productive

In money or bullets?

Replies from: Lyyce
comment by Lyyce · 2016-03-14T18:53:40.565Z · LW(p) · GW(p)

Maybe if you gave subsistence to everyone (basic income for example) and let people produce in exchange for "more", the system would still be viable.

And do you have reasons to believe that would be so -- besides "maybe"?

No, that's why I'd like to see it tried. Nordic countries seem to be headed in that direction; we'll see how it goes.

Well, until their toilet clogged and stayed clogged because most plumbers became painters and the rest just went fishing. And until they got sick and found out that the line to see one of the few doctors left is a couple of months. And until the buses stopped running because being a bus mechanic is not such a great job and there are not enough guys who are willing to do it just for fun...

One possibility is to find a new equilibrium where the less attractive a job is, the better the advantages for doing it (since people would be willing to pay more to have it done for them).

I would be fine with giving unproductive people incentives to become productive

In money or bullets?

You forgot the second part:

But then you have the question of how much incentive is ethically justified.

This is already how it works. And in a perfect capitalistic society, you have a choice between working or starving (except if someone is willing to help you); this is not much better than bullets.

I would go for fewer incentives than in our current society, personally.

Replies from: Lumifer
comment by Lumifer · 2016-03-14T19:08:06.280Z · LW(p) · GW(p)

No, that's why I'd like to see it tried.

Do you think that trying could have considerable costs? Russia tried communism, that... didn't turn out well.

One possibility is to find a new equilibrium where the less attractive a job is, the better the advantages for doing it

Why new? That's precisely how the current equilibrium works (where advantages == money).

You forgot the second part

You didn't answer the question.

And in a perfect capitalistic society, you have a choice between working or starving

Why capitalistic? In your black-and-white picture that would be true for all human societies except for socialist ones. Under capitalism you could at least live off your capital if/when you have some.

I would go for fewer incentives than in our current society, personally.

So why would anyone come to unclog your toilet?

Replies from: Lyyce
comment by Lyyce · 2016-03-14T19:53:45.167Z · LW(p) · GW(p)

Do you think that trying could have considerable costs? Russia tried communism, that... didn't turn out well.

It could; incremental changes, or doing it on a smaller scale, would mitigate the costs. A "partial" basic income already exists in several European countries, where even when not contributing to society you are given enough to subsist. The results are not too bad so far.

Why new? That's precisely how the current equilibrium works (where advantages == money).

You are right; it would just be different jobs having the most value.

Why capitalistic? In your black-and-white picture that would be true for all human societies except for socialist ones. Under capitalism you could at least live off your capital if/when you have some.

Is any system where people are automatically given subsistence socialist? Because that is the only thing I have talked about.

You didn't answer the question.

Money, but with a smaller cost for not being a producer than today (i.e. no comfort rather than no subsistence).

So why would anyone come to unclog your toilet?

For money, same as today

Replies from: Lumifer
comment by Lumifer · 2016-03-14T20:34:48.152Z · LW(p) · GW(p)

Is any system where people are automatically given subsistence socialist?

What non-socialist societies that unconditionally provided subsistence to all their members, sufficient to live on, do you know of, other than a few oil-rich sheikhdoms?

comment by Lyyce · 2016-03-14T15:05:37.558Z · LW(p) · GW(p)

(for the ideological turing test)

I have tried to make my argument as neutral as possible, giving both sides of the argument and avoiding deprecating either.

Let's try from both directions then (personally, I am a leftist).

Left side: I think so. I definitely think societal influence (among other things outside the individual's power, such as genetics) trumps individual choices. I have also seen this opinion among friends and intellectuals, so I am not alone in this; not everybody on the left thinks like this, though.

Right side: my model of the right is not as good as I'd like, but I have seen it expressed in various places. Again, it does not apply to all rightists, nor is it the main point for everyone.

comment by Lyyce · 2016-03-14T14:05:09.679Z · LW(p) · GW(p)

Sorry, but I'm not sure I understand what you are talking about; could you expand on your point?

Replies from: Vaniver
comment by Vaniver · 2016-03-14T14:48:43.894Z · LW(p) · GW(p)

One way of thinking about this is "would my enemies, if reading this, think it is a description of their beliefs written by an ally?"

I'm not sure of the relevance in this instance.

comment by ChristianKl · 2016-03-14T16:34:53.331Z · LW(p) · GW(p)

I downvoted the post for it being a political post on LW that tries to explain complex politics with a simple model.

Replies from: Lyyce
comment by Lyyce · 2016-03-14T16:52:13.258Z · LW(p) · GW(p)

Thank you for the feedback. Unfortunately it looks like I have not been able to express myself clearly.

It was not supposed to explain anything, but rather to give one point I find not stressed enough. I am aware that it does not sum up politics or give a full distinction between political sides.

Replies from: ChristianKl
comment by ChristianKl · 2016-03-14T17:43:07.756Z · LW(p) · GW(p)

I don't think that the general class of posts "Political idea XY with whom I just came up isn't mentioned enough in the venues I read" makes a good LW post.

Replies from: username2
comment by username2 · 2016-03-14T20:22:16.467Z · LW(p) · GW(p)

with whom I just came up

“This is the type of arrant pedantry up with which I will not put!”

comment by Val · 2016-03-15T04:35:27.863Z · LW(p) · GW(p)

Still, it would be very wrong to describe rightists as thinking that everyone who can't support themselves should starve. Many people on the political right also practice and/or believe in charity.

Replies from: WalterL, Viliam
comment by WalterL · 2016-03-15T13:41:11.322Z · LW(p) · GW(p)

As a rightist myself I'd like to point out that there is a massive difference in our belief system between being forced to support folks who don't work (you are a slave, changing this intolerable state is the primary goal of your life) and choosing to do so (a righteous act, golf claps).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-03-15T15:37:28.865Z · LW(p) · GW(p)

And I'd like to point out that there is a massive difference between maybe getting charitable support that keeps you alive and having a right to welfare. From behind a veil, you don't know that you are going to be in the position of the giver.

Replies from: Dagon
comment by Dagon · 2016-03-15T15:50:27.434Z · LW(p) · GW(p)

I think this subthread is a good summary of why we should just leave politics out of LW, and why trying to summarize a single dimension of difference is hopeless.

So I'll continue :) Here goes the anti-turing definition (each side will agree it applies to the other, but not to themselves):

Progressives/leftists believe it's OK to define rights over things that don't exist yet (say, food that isn't yet planted or care from a future doctor who might prefer to golf that day instead of exposing himself to your disease). The conservatives/rightists think it's OK to define rights that make it easy to ignore others' suffering.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2016-03-18T07:59:10.040Z · LW(p) · GW(p)

Progressives/leftists believe it's OK to define rights over things that don't exist yet (say, food that isn't yet planted ..

No, leftists think you have rights to things, not over things. Insisting that a right can only be over something pretty well begs the question in favour of property rights.

comment by Lumifer · 2016-03-15T16:04:36.901Z · LW(p) · GW(p)

Progressives/leftists believe it's OK to define rights over things that don't exist yet (say, food that isn't yet planted or care from a future doctor who might prefer to golf that day instead of exposing himself to your disease). The conservatives/rightists think it's OK to define rights that make it easy to ignore others' suffering.

I don't understand this -- it doesn't make sense to me.

Replies from: Dagon
comment by Dagon · 2016-03-15T18:16:32.579Z · LW(p) · GW(p)

It was my attempt to rephrase the "massive difference" posts by WalterL and TheAncientGeek, above.

WalterL takes the rightist side, asserting a right to freedom from coercion and that being forced to support others is a form of slavery. TheAncientGeek takes the leftist side in asserting that a right to welfare is far preferable to a charitable state of support.

These rights are in direct conflict. Person A's right to welfare requires that person B is mandated to provide it. Person B's right to choose her own activities implies that person A might not get fed or housed.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2016-03-18T08:03:07.884Z · LW(p) · GW(p)

It was my attempt to rephrase the "massive difference" posts by WalterL and TheAncientGeek, above

Then it was completely wrong. I was drawing a distinction between the kind of outlook you might have if you know you are in a winning position, and the kind you might take if you don't know what position you are going to be in.

comment by Lumifer · 2016-03-15T19:44:10.929Z · LW(p) · GW(p)

TheAncientGeek takes the leftist side in asserting that a right to welfare

Um, to quote TheAncientGeek, "there is a massive difference between maybe getting charitable support that keeps you alive and having a right to welfare" -- I think you misunderstand him.

But still, how is the right to welfare a right "over things that don't exist yet" and how is the right to be not taxed (more or less) a right that "make[s] it easy to ignore others' suffering"?

The first is the right to support and the matching duty falls onto the government. It could be (see Saudi Arabia) that it can provide this support without taking money out of any individuals' pockets. The second is basically a property right and has nothing to do with the ease of ignoring suffering.

Replies from: Dagon
comment by Dagon · 2016-03-15T22:41:31.885Z · LW(p) · GW(p)

Perhaps I do misunderstand him. I took his "massive difference" comparison to mean that he doesn't believe charity is sufficient, and he would prefer welfare to be considered a right.

In the long term, the government is just a conduit - it matches and enforces transfers, it doesn't generate anything itself. The case of states that can sell resources is perhaps an exception for some time periods, but doesn't generalize in the way most people think of rights independent of local or temporal situations.

In any case, a right to support directly requires SOMEONE to provide that support, doesn't it? If everyone is allowed to choose not to provide that support, the suffering must be accepted.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2016-03-18T08:07:50.260Z · LW(p) · GW(p)

Perhaps I do misunderstand him. I took his "massive difference" comparison to mean that he doesn't believe charity is sufficient, and he would prefer welfare to be considered a right.

That is what I meant, but it has nothing to do with things that don't yet exist.

comment by Lumifer · 2016-03-16T14:32:05.595Z · LW(p) · GW(p)

In the long term, the government is just a conduit - it matches and enforces transfers, it doesn't generate anything itself.

So, can we just get rid of it, then? :-/ I don't think we should take a detour into this area, but, let's say, a claim that government does not create any economic value would be... controversial.

a right to support directly requires SOMEONE to provide that support, doesn't it?

Yes, correct. All rights come as pairs of right and duty. Whatever is someone's right is someone else's duty.

I'm still confused about "rights over things that don't exist yet" and "rights that make it easy to ignore".

Replies from: Dagon
comment by Dagon · 2016-03-16T15:18:16.184Z · LW(p) · GW(p)

Asserting a right to eat is not just a statement about current food supply ownership or access. It's saying that, if food is later created, the right applies to that too. Conversely, if I have the right not to grow food or not to give it to someone else, I am allowed to ignore their pain.

Replies from: Lumifer
comment by Lumifer · 2016-03-16T15:28:45.358Z · LW(p) · GW(p)

Asserting a right to eat is not just a statement about current food supply ownership or access. It's saying that, if food is later created, the right applies to that too.

Don't most rights work this way? I think it's just the default.

I am allowed to ignore their pain.

I don't quite understand the "allowed to ignore" part. What is the alternative, Clockwork Orange-style therapy?

Replies from: Jiro
comment by Jiro · 2016-03-16T22:15:30.300Z · LW(p) · GW(p)

"I am allowed to X" in this context means "X is not worthy of moral condemnation, and forcibly stopping X is worthy of moral condemnation".

Replies from: Dagon
comment by Dagon · 2016-03-17T01:54:24.994Z · LW(p) · GW(p)

Moral condemnation or application of force are the common responses.

comment by Viliam · 2016-03-15T09:06:30.619Z · LW(p) · GW(p)

I would guess that people on the political right are more likely to donate to charity than people on the political left.

At least when I look at people around me, those on the left are more likely to say "why should I care about this problem; isn't this one of those things that government should do?". And those on the extreme left will even say something about how "worse is better" because it will make the capitalist system collapse sooner, while donating to alleviate problems delays the revolution.

Replies from: gjm, Dagon
comment by gjm · 2016-03-15T09:49:40.230Z · LW(p) · GW(p)

This analysis suggests that any relationship between political affiliation and charitable donation isn't very strong. For what it's worth, the sign of the coefficient in the regression suggests that lefties give more than righties. (The paper also looks at volunteering, and finds that lefties volunteer quite a lot more than righties.)

I wouldn't make any large bets on the basis of that paper, though. There are lots of interrelated things here -- politics, wealth, religion, etc., etc., etc. -- and even if those regression coefficients indicate something real rather than just noise it may be much more complicated than "group X is more generous with their time/money than group Y". And it looks like it's the work of a single inexperienced researcher, and doesn't seem to be a peer-reviewed publication.

This paper -- not available for free, but there's an informal writeup by someone else here -- says that other research has indicated that righties give more than lefties (contrary to what the paper above says), and purports to explain this by saying that righties are more religious and the religious give more. More precisely, it looks as if religion leads to giving in two ways. There's giving to religious charities, which obviously religious people do a lot more of than irreligious ones; and there's other giving, which church attenders do and so (to a comparable extent) do people involved in other sorts of socially-conscious meeting up. ("Local civic or educational meetings" is the thing they actually looked at.)

If you control for religion, then allegedly the left/right differences largely go away.

Make of all that what you will. (What I make of it is: it's complicated.)

comment by Dagon · 2016-03-15T18:30:20.733Z · LW(p) · GW(p)

"charity" is a political term that makes measuring this very difficult. If you count donations to private-charity art museums and to activism/signaling groups rather than only looking at poverty impact, you'll get results that don't really tell you much about useful donations.