Open Thread May 23 - May 29, 2016

post by Gunnar_Zarncke · 2016-05-22T21:11:56.868Z · LW · GW · Legacy · 120 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

120 comments

Comments sorted by top scores.

comment by gwern · 2016-05-23T03:04:34.496Z · LW(p) · GW(p)

Misunderstandings and ignorance of GCTA seem to be quite pervasive, so I've tried to write a Wikipedia article on it: https://en.wikipedia.org/wiki/GCTA

Replies from: Viliam, Lumifer, Val, Algon, Jurily
comment by Viliam · 2016-05-23T10:30:38.864Z · LW(p) · GW(p)

Thanks for doing the frustrating work.

(The first and only comment so far is, more or less, "delete this article, because I don't care". Ugh.)

Replies from: gwern
comment by gwern · 2016-05-23T14:50:35.570Z · LW(p) · GW(p)

Yeah, that was weird. Almost as soon as I posted it, too. And the IP has only made 1 edit before, so it's not some auto-troll.

comment by Lumifer · 2016-05-23T15:30:08.861Z · LW(p) · GW(p)

Thank you.

One thing I wished for and didn't find, though, is a description of the underlying mechanics. You described what and why, but not how. Do you think that can be usefully expressed in a couple of paragraphs, or is it too complicated for that? The article already assumes a fair amount of background knowledge.

Replies from: gwern
comment by gwern · 2016-05-23T16:51:30.156Z · LW(p) · GW(p)

I'm not sure it can. I've read many different descriptions and looked at the math, but it's a very different approach from the twin-based variance-components estimation procedures I've managed to beat some understanding of into my head, and while I've worked with multilevel models & random effects in other contexts, the verbal descriptions of using multilevel models for estimating heritability just don't make sense to me. (Judging from Visscher's commentary paper, I may not be the only one having this problem.) I think my understanding of linear models and matrices may be too weak for it to click.

comment by Val · 2016-05-25T19:24:26.594Z · LW(p) · GW(p)

One problem I can see at first glance is that the article doesn't read like a Wikipedia article, but like a textbook or part of a publication. The goal of a Wikipedia article should be for a wide audience to understand the basics of something, not a treatise only experts can comprehend.

What you wrote seems to be an impressive work, but it should be simplified (or at least its introduction should be), so that even non-experts have a chance to at least learn what it is about.

Replies from: Lumifer
comment by Lumifer · 2016-05-25T20:39:15.214Z · LW(p) · GW(p)

The goal of a Wikipedia article should be for a wide audience to understand the basics of something

I don't think this is true. Wikipedia is a collection of knowledge, not a set of introductory articles.

See e.g. the Wikipedia pages on intermediate-to-high statistical concepts and techniques, e.g. copulas.

comment by Algon · 2016-05-23T15:16:35.754Z · LW(p) · GW(p)

Good god, how long did that take to write?

Replies from: gwern
comment by gwern · 2016-05-23T17:00:15.409Z · LW(p) · GW(p)

One full day. And I guess a few hours today checking edits other people made, tweaking parts of the article, responding to comments, etc. Plus, of course, all the background work that went into being able to write it in the first place... ('How long, Mr Whistler?') For example, I spent easily a week researching intelligence GCTAs and measurement error for my embryo selection cost-benefit analysis, which I could mostly copy-paste into that article. (I wanted an accurate GCTA estimate to put an upper bound on how much variance SNPs could ever explain, and thus how much gain was possible per embryo. This required meta-analyzing GCTA estimates to get a stable point estimate and then correcting for measurement error, because a lot of the estimates use imperfect measurements of intelligence.)

EDIT: and of course, after saying that, I then spent what must have been several other days working on digging up even more citations, improving related articles, and debating heritability and other stuff on Reddit...

comment by Jurily · 2016-05-25T10:26:28.821Z · LW(p) · GW(p)

How does this reject the genetic factors causing circumcision in Jews?

Replies from: gwern
comment by gwern · 2016-05-25T16:20:23.956Z · LW(p) · GW(p)

What?

Replies from: Jurily
comment by Jurily · 2016-05-25T17:22:43.848Z · LW(p) · GW(p)

It is my understanding that, due to ethical concerns, the scientific field of psychology does not have a data collection methodology capable of distinguishing between effects caused by the parents' genes and effects caused by the parents' actions. As such, no possible statistical approach will give a correct answer on the heritability of traits caused by the latter, like schizophrenia a.k.a. religion, or intelligence. In order to clear up my "misunderstandings and ignorance", you will need to demonstrate an approach that can, at the very least, successfully disprove a genetic contribution to circumcision.

Replies from: gwern, Lumifer
comment by gwern · 2016-05-25T20:27:03.340Z · LW(p) · GW(p)

I think you need to read up a little more on behavioral genetics. To point out the obvious, besides adoption studies (you might benefit from learning to use Google Scholar) and more recent variants like using sperm donors (a design I just learned about yesterday), your classic twin study design and most any 'within-family' design does control for parental actions, because the siblings have the same parents. E.g. if a trait were solely due to parental actions, then monozygotic twins should have exactly the same concordance as dizygotic twins despite their very different genetic overlaps, because they're born at the same time to the same parents and raised the same.
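
To make the MZ/DZ comparison concrete: under the textbook ACE variance decomposition (a standard identity, not something specific to this thread), Falconer's formulas recover the components from the two twin correlations:

    h^2 = 2(r_{MZ} - r_{DZ}), \quad c^2 = 2 r_{DZ} - r_{MZ}, \quad e^2 = 1 - r_{MZ}

If a trait were driven solely by parental actions, r_MZ = r_DZ, and the heritability estimate h^2 comes out as zero.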

More importantly, the point of GCTA is that by using unrelated strangers, they are also affected by unrelated parents and unrelated environments. So I'm not sure what objection you seem to have in mind.

Replies from: Houshalter
comment by Houshalter · 2016-05-26T13:29:47.232Z · LW(p) · GW(p)

Sorry if I'm misunderstanding the method, but doesn't it work something like finding strangers who have common genetics by chance?

If so, then two Jews are more likely to have common genetics than chance, and also more likely to be circumcised. So it would appear that circumcision is genetic, when in fact it's cultural.

Replies from: gwern
comment by gwern · 2016-05-26T19:22:30.211Z · LW(p) · GW(p)

It works by finding common genetics up to a limit of relatedness, like the fourth-cousin level. I think some Jewish groups may have been sufficiently inbred/endogamous for long enough that it might not be possible to run GCTA with the usual cutoff, since they'll all be too related to each other. Population structure beyond that is dealt with by the usual approach of extracting the top 10 or 20 principal components and including them as covariates to control for it. This is a bit ad hoc but works well in GWASes and gets rid of that problem, as indicated by the fact that the hits replicate within-family, where population structure is equalized by design, and also have a good track record cross-racially/cross-country: https://www.reddit.com/r/science/comments/4kf881/largestever_genetics_study_shows_that_genetic/d3el0p2
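
A rough sketch of those two steps in Python, assuming an individuals-by-SNPs matrix of 0/1/2 allele counts (the 0.025 cutoff and this numpy estimator are illustrative conventions, not the GCTA software itself):

    import numpy as np

    def grm(genotypes):
        """GCTA-style genetic relatedness matrix from an
        (individuals x SNPs) array of 0/1/2 allele counts."""
        p = genotypes.mean(axis=0) / 2.0                    # allele frequencies
        z = (genotypes - 2 * p) / np.sqrt(2 * p * (1 - p))  # standardize each SNP
        return z @ z.T / genotypes.shape[1]

    def unrelated(a, cutoff=0.025):
        """Greedily drop one member of every pair more related than the
        cutoff (roughly fourth cousins), keeping only 'strangers'."""
        keep = np.ones(a.shape[0], dtype=bool)
        for i in range(a.shape[0]):
            for j in range(i + 1, a.shape[0]):
                if keep[i] and keep[j] and a[i, j] > cutoff:
                    keep[j] = False
        return keep

    def top_pcs(a, k=20):
        """Leading principal components of the GRM, included as
        covariates against residual population structure."""
        vals, vecs = np.linalg.eigh(a)
        return vecs[:, np.argsort(vals)[::-1][:k]]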

comment by Lumifer · 2016-05-25T17:36:50.758Z · LW(p) · GW(p)

It is my understanding that, due to ethical concerns, the scientific field of psychology does not have a data collection methodology capable of distinguishing between effects caused by the parents' genes and effects caused by the parents' actions

Your understanding looks silly. It is rather obvious that not all children are brought up by their parents and that has been used in a number of studies. In fact, many classic identical-twins studies rely on being able to find genetically identical people who were brought up in different circumstances (including different parents).

Replies from: Jurily
comment by Jurily · 2016-05-25T18:23:59.469Z · LW(p) · GW(p)

Yes, it's obvious. That's why it was surprising when I couldn't find a single study on schizophrenia where all children were separated from the parents immediately after birth. Feel free to enlighten me.

Replies from: Lumifer
comment by Lumifer · 2016-05-25T18:49:27.206Z · LW(p) · GW(p)

Bzzzzz, I am sorry, you must have confused me with your research assistant. Please try again.

comment by Andy_McKenzie · 2016-05-23T13:31:00.451Z · LW(p) · GW(p)

I used ingres's excellent LW 2016 survey data set to do some analyses on the extended LW community's interest in cryonics. Fair warning, the stats are pretty basic and descriptive. Here it is: http://www.brainpreservation.org/interest-in-cryonics-from-the-less-wrong-2016-survey/

Replies from: Elo, Houshalter, None
comment by Elo · 2016-05-24T23:06:15.249Z · LW(p) · GW(p)

I am a little bothered by the scale you used: a scale from 0-5 where

0: no, and don't want to sign up
1: no, still considering it
2: no, would like to but can't afford it
etc., toward more interest in cryonics.

If we take an ordinary human who has barely even heard that cryonics is a real thing, their entry point to the scale is somewhere between 0 and 1 on the 6-point scale. So while we have detailed data on the states above 1, we don't have detailed data on the states below 1. That means we potentially only recorded half the story, and with that, we have unrepresentative data that skews positively toward cryonics.

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2016-05-25T00:48:07.464Z · LW(p) · GW(p)

Upvoted because this is a good critique. My rationale for using this scale is that I was less interested in absolute interest in cryonics and more in relative interest in cryonics between groups. The data and my code are publicly available, so if you are bothered by it, then you should do your own analysis.

Replies from: Elo
comment by Elo · 2016-05-25T07:15:37.491Z · LW(p) · GW(p)

I am bothered by it to the extent that it was confusing: it is not automatically representative of the "absolute interest in cryonics", as you called it. But with what I pointed out in mind, it is still possible to take the data as good information. (So not bothered enough to do my own analysis.)

comment by Houshalter · 2016-05-23T13:50:12.497Z · LW(p) · GW(p)

Interesting that LessWrongers are 50,000 times more likely to sign up for cryonics than the general population. I had previously heard criticism of LessWrong: that if we really believe in cryonics, it's irrational that so few are signed up.

Also surprising that vegetarianism correlates with cryonics interest.

Replies from: Lumifer, qmotus, ChristianKl
comment by Lumifer · 2016-05-31T16:16:57.570Z · LW(p) · GW(p)

I had previously heard criticism of LessWrong: that if we really believe in cryonics, it's irrational that so few are signed up.

It's a standard no-win situation: if too few have signed up, LW people are irrational; and if many have signed up, LW is a cult.

Replies from: gjm
comment by gjm · 2016-05-31T16:35:17.790Z · LW(p) · GW(p)

That's no-win given that ideas generally held on LW imply that we should sign up for cryonics.

There's nothing necessarily unfair about that. Suppose some group's professed beliefs imply that the sun goes around the earth; then you may say that members of the group are inconsistent if they aren't geocentrists, and crazy if they are. No win, indeed, but the problem is that their group's professed beliefs imply something crazy.

In this case, I don't think it's clear there is such a thing as LW's professed beliefs; it's not clear that, if there are, they imply we should sign up for cryonics; and I don't think signing up for cryonics is particularly crazy. So I'm not exactly endorsing the no-win side of this. But it looks like you're making a complaint about the logical structure of the criticism that would invalidate some perfectly reasonable criticisms of (other?) groups and their members.

Replies from: Lumifer, Houshalter
comment by Lumifer · 2016-05-31T17:49:47.745Z · LW(p) · GW(p)

it looks like you're making a complaint about the logical structure of the criticism

Nope. I'm making a guess that this particular argument looked like a good soldier and so was sent into battle; a mirror-image argument would also look like a good soldier and would also be sent into the same battle. Logical structure is an irrelevant detail X-/

comment by Houshalter · 2016-05-31T17:47:09.854Z · LW(p) · GW(p)

Right, but what about the people who say they strongly believe in cryonics, have income high enough to afford it (and the insurance isn't that expensive, actually), yet haven't signed up? I.e. "cryocrastinators". There are a lot of those in the survey results every year.

I believe this was the argument used: that LessWrongers aren't very instrumentally rational, or good at actually getting things done. Again, I can't find the post in question; it's possible it was deleted.

comment by qmotus · 2016-05-28T15:18:58.780Z · LW(p) · GW(p)

I bet many LessWrongers are just not interested in signing up. That's not irrational, or rational, it's just a matter of preferences.

comment by ChristianKl · 2016-05-24T11:51:53.611Z · LW(p) · GW(p)

that if we really believe in cryonics

What does "we really believe" mean? That seems like something we categorically don't do. (1) We don't hold group belief but individuals have different beliefs.
(2) We think in terms of probability that are different for different people
It seems criticism like that comes from people who don't understand that we aren't a religion that specicies what everybody has to believe.

If the people who believe cryonics will work with probability >0.3 are signed up for cryonics where available, while the people who think it will only work with probability ~0.1 are not signed up, I don't see any sign of irrationality.

Replies from: Good_Burning_Plastic, Houshalter
comment by Good_Burning_Plastic · 2016-06-19T11:42:47.629Z · LW(p) · GW(p)

If the people who believe cryonics will work with probability >0.3 are signed up for cryonics where available, while the people who think it will only work with probability ~0.1 are not signed up, I don't see any sign of irrationality.

Has anybody looked at the data set to check if that's indeed the case?

Replies from: ChristianKl
comment by ChristianKl · 2016-06-19T15:07:59.598Z · LW(p) · GW(p)

The linked post contains graphs.

comment by Houshalter · 2016-05-24T12:36:29.851Z · LW(p) · GW(p)

I was just summarizing something I remember reading. I searched for every keyword I could think of, but I can't find it.

But I swear there was a post highly critical of LessWrong, and one of its arguments was this: if such a high percentage of LessWrongers believe in cryonics, why are so few signed up? It was an argument that LessWrong is ineffective.

It was just interesting to me to see the most recent statistics: a lot of people are signed up, at a rate certainly much higher than in the general population.

Replies from: Viliam
comment by Viliam · 2016-05-25T10:13:43.471Z · LW(p) · GW(p)

It would be an argument that lesswrongers are not perfect. Also "lesswrongers" includes people who merely read the website once in a while.

I am completely unsurprised by the fact that merely reading LW articles doesn't make people perfect.

I would be more bothered by finding out that "lesswrongers" are less rational than the average population, or than some large enough control group that I could easily join instead of LW. But the numbers about cryonics do not show that.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-25T13:05:16.264Z · LW(p) · GW(p)

I think that if LWers are 50,000 times more likely to do something than the general population, that proves neither rationality nor irrationality. It just shows that LWers are chosen by an extremely selective process.

comment by [deleted] · 2016-06-02T19:17:32.601Z · LW(p) · GW(p)

It is hilarious and yet quite predictable that one of the only groups nearly as unenthused about cryonics as 'committed theists' was 'biologists'.

comment by Lumifer · 2016-05-24T17:06:06.004Z · LW(p) · GW(p)

A good post in a generally good blog. Samples:

How big is your filter bubble? What’s in it? What’s outside it? Okay, next question: how can you tell?

...

Culture, in the part of the world in which I’ve been, and, for all I know, in other parts as well to which I cannot speak, has two rough parts: the Mainland and the Isles.

The Mainland is what calls itself the “mainstream” or “normal” culture.

You know… Mundania.

The Isles are everything else. Everything that’s not “mainstream” is an island.

Nobody knows how many Isles there are. They are wholly and utterly unmapped. Each one is its own subculture.

Some Isles are closer to the Mainland, and some further.

Some Isles are closer to others. Some are big. Some are small.

We — meaning I and a very large percentage of my readership — live in a collection of close Isles which form up an Archipelago. The SCA. Fandom. NERO. Etc.

This is the Archipelago of Weird.

...

I find it useful to apply Miller’s law:

In order to understand what another person is saying, you must assume that it is true and try to find out what it could be true of.

Replies from: Viliam
comment by Viliam · 2016-05-25T11:51:50.052Z · LW(p) · GW(p)

allistic person, let’s talk about this one place you feel locked out of and how we can make it even better for the majority, who already run so many other industries to the exclusion of people like me, first. Let’s make sure the already-privileged majority is comfortable in all places, at all times, before appreciating small pockets of minority safety and accommodation, and asking what they used to do right before they, too, were colonized by the tyranny of the narrowly-defined “default” human being in need of additional comfort while I try to survive. THAT FEELS FUCKING INCLUSIVE TO ME, HELL YEAH.

(...) Expecting autistic people to get better at small talk in order to make allistics feel more welcome is like expecting people in wheelchairs to get better at walking in order to make physically abled people feel more welcome.

(...) Mainstream feminism has some serious catching-up to do when it comes to learning about the lives of people who aren’t nice normal middle- to upper-class ladies, not to mention a lot of earned distrust. When you tell people that a skill to which they are inherently maladapted is a new requirement for participating in some culture, you are telling those people that they are no longer welcome in that culture. Bluntly, that is not your decision to make, and people are right not to trust the motivations of anyone who behaves as if they think it is. Too many of us have been burned too many times by people who told us “we want to make this a great place for everyone!”, only to find out that in practice, “everyone” actually means “all the allistics.”

reminds me of an SSC post on safe spaces:

One important feature of safe spaces is that they can’t always be safe for two groups at the same time. Jews are a discriminated-against minority who need a safe space. Muslims are a discriminated-against minority who need a safe space. But the safe space for Jews should be very far way from the safe space for Muslims, or else neither space is safe for anybody.

The rationalist community is a safe space for people who obsessively focus on reason and argument even when it is socially unacceptable to do so.

I don’t think it’s unfair to say that these people need a safe space. I can’t even count the number of times I’ve been called “a nerd” or “a dork” or “autistic” for saying something rational. Just recently commenters on Marginal Revolution – not exactly known for being a haunt for intellect-hating jocks – found an old post of mine and called me among many other things “aspie”, “a pansy”, “retarded”, and an “omega” (a PUA term for a man who’s so socially inept he will never date anyone).

I also enjoy this ending (EDIT: of the article linked by Lumifer, not the SSC one):

Nor is it lost on me that I am sitting here patiently spergsplaining theory of mind to people who supposedly have it when I supposedly don’t. Allistics can get away with developing a theory of one mind — their own — because they can expect most of the people they interact with to have knowledge, perspectives, and a sensorium not all that different from theirs. Autists don’t get that option. Reaching adulthood, for us, means first learning how to function through a distorted sensorium, then learning to develop a theory of minds, plural, starting with ones different from our own.

It reminds me of my pet theory on some similarities between high-IQ people and autists; specifically, having to develop a "theory of mind unlike my own" during childhood. (But the two of us probably had a long disagreement about this in the past, if I remember correctly, so I don't want to repeat it here.)

Replies from: gjm
comment by gjm · 2016-05-25T12:33:17.916Z · LW(p) · GW(p)

Just to forestall confusion, that ending is not the ending of the SSC post, but the (near-)ending of the post Lumifer linked to. (In particular, Scott is not calling himself autistic.)

Replies from: Viliam
comment by Viliam · 2016-05-26T08:42:31.539Z · LW(p) · GW(p)

Thanks; edited the comment to make it clear.

comment by skeptical_lurker · 2016-05-24T11:21:34.074Z · LW(p) · GW(p)

Any advice on what is the best way to buy index funds and/or individual stocks? Particularly for people in the UK?

I know this has probably been asked before on a 'basic knowledge' thread, but I can't find the answer.

Replies from: philh, Lumifer
comment by philh · 2016-05-24T13:37:40.363Z · LW(p) · GW(p)

There's this document written by /u/sixes_and_sevens. I used it to set up mine (I'm the one who anti-recommended M&G). I might be able to answer any further questions, but it was a while ago so maybe not.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2016-05-31T16:56:46.645Z · LW(p) · GW(p)

Thanks! (and thanks to sixes_and_sevens)

comment by Lumifer · 2016-05-24T14:58:07.979Z · LW(p) · GW(p)

Open an account at a discount broker? Comparing fees is quite straightforward and other than that you only really care about the convenience of their user interface / experience.

comment by MrMind · 2016-05-24T13:11:38.860Z · LW(p) · GW(p)

Following the usual monthly linkfest on SSC, I stumbled upon an interesting paper by Scott Aaronson.
Basically, he and Adam Yedidia constructed a Turing machine that ZFC can neither prove to halt nor prove to run forever (it can be shown to run forever assuming a theory stronger than ZFC).
It is already known, from Chaitin's incompleteness theorem, that every formal system has a complexity limit beyond which it cannot prove or disprove certain assertions. The interesting, perhaps surprising, part of the result is that said Turing machine has 'only' 7918 states, that is, a state register less than two bytes long.
This small complexity is already sufficient to evade the grasp of ZFC.
You can easily slogan-ize this result by saying that BB(7918) (the 7918th Busy Beaver number) is uncomputable (whispering immediately after "... by ZFC").
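
Spelled out (paraphrasing the paper's statement, with M the 7918-state machine and assuming ZFC is consistent):

    \mathrm{ZFC} \nvdash \mathrm{Halts}(M) \quad\text{and}\quad \mathrm{ZFC} \nvdash \neg\mathrm{Halts}(M)

Since the value of BB(7918), plus a finite (if absurdly long) simulation, would settle Halts(M), ZFC proves no true statement of the form BB(7918) = n.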

Replies from: Houshalter, Gurkenglas, Viliam
comment by Houshalter · 2016-05-27T01:49:19.299Z · LW(p) · GW(p)

This is an upper bound. There could be many smaller indeterminate machines. Many suspect that even very simple TMs can be indeterminate, e.g. machines encoding the Collatz conjecture.

comment by Gurkenglas · 2016-05-27T01:37:34.836Z · LW(p) · GW(p)

Huh. I expected the smallest number of states of a TM of indeterminate halting to be, like, about 30. Consider how quickly BB diverges, after all.

comment by Viliam · 2016-05-25T10:19:47.365Z · LW(p) · GW(p)

I agree with most of what you said, but

'only' 7918 states, that is, a state register less than two bytes long

this remark sounds weird. What is the meaning of the bit-size of the list of states? Are you suggesting running the TM on a 16-bit computer? Then good luck addressing the memory (the tape), because I guess the length of the tape used is probably also "uncomputable", or at least so large that even pointers into the tape would not fit into any realistic computer's memory.

Replies from: MrMind
comment by MrMind · 2016-05-26T07:25:42.400Z · LW(p) · GW(p)

It was merely a remark to note that 7918 states fit in a state register less than two bytes long. And since said TM has only two symbols, it also needs no more than 15836 instructions.
Notice how compact the machine is: 13 bits for the state register, 29 bits for each instruction, and 15836 such instructions, for 459257 bits in total: about 56 kilobytes. You could emulate that on basically anything that has a chip nowadays.
Alas, the tape is infinite, as with every TM... But! Turing machines do not need memory pointers: they observe only the symbol under the reading head.

Replies from: Viliam
comment by Viliam · 2016-05-26T08:40:52.727Z · LW(p) · GW(p)

Turing machines do not need memory pointers: they observe only the symbol under the reading head.

Sure, but any system that emulates the TM and the tape would need it. (In other words, it feels like cheating to say that memory is a part of the usual computer, but tape is not a part of the TM.)

Replies from: MrMind
comment by MrMind · 2016-05-26T09:20:25.401Z · LW(p) · GW(p)

I still don't see where the difficulty is. You need a memory pointer only if you need random access to said memory, but the TM does not need it.
Sure, if you want to emulate a TM on a system that already uses random-access memory, as most modern systems do, then of course you need a sufficiently long pointer for a sufficiently wide memory. But that is an accident of how systems work today, not an inherent complexity: you could easily emulate the TM on an old mainframe with a magnetic tape without ever seeing a memory pointer.
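
A minimal sketch of such an emulator in Python (a sparse dict-keyed tape; its integer keys are arguably exactly the "pointer" under dispute, which is Viliam's point):

    from collections import defaultdict

    def run(delta, start='A', halt='H', max_steps=10**6):
        """Simulate a 2-symbol Turing machine. `delta` maps
        (state, symbol) -> (write, move, next_state)."""
        tape = defaultdict(int)  # unwritten cells read as 0
        head, state = 0, start
        for steps in range(max_steps):
            if state == halt:
                return steps, dict(tape)
            write, move, state = delta[(state, tape[head])]
            tape[head] = write
            head += 1 if move == 'R' else -1
        return None  # gave up: did not halt within the step budget

    # A toy transition table: write a 1, move right, halt.
    print(run({('A', 0): (1, 'R', 'H')}))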

comment by PipFoweraker · 2016-05-23T00:58:21.443Z · LW(p) · GW(p)

Reminiscing over one of my favourite passages from Anathem, I've been enjoying looking through visual, wordless proofs of late. The low-hanging fruit is mostly classical geometry, but a few examples of logical proofs have popped up as well.

This got me wondering if it's possible to communicate the fundamental idea of Bayes' Theorem in an entirely visual format, without written language or symbols needing translation. I'd welcome thoughts from anyone else on this.

Replies from: SquirrelInHell, Elo, Vaniver, Daniel_Burfoot
comment by SquirrelInHell · 2016-05-24T02:26:32.387Z · LW(p) · GW(p)

Challenge accepted.

https://i.imgsafe.org/914f428.png

Replies from: Elo, jollybard, PipFoweraker
comment by Elo · 2016-05-24T23:16:54.257Z · LW(p) · GW(p)

If I am reading this correctly:

I saw some footprints; I know that there are 1/3 humans around and 2/3 cats around. There is a 3/4 likelihood that humans made the human-shaped footprint; there is a 1/4 chance that cats in boots made the human-shaped footprints. Therefore my belief is that humans are more likely to have made the footprints than cats.

(I think it needs a little work, but it's an excellent diagram so far)

A suggestion: modify the number of creatures on the left to equal a count of the frequency of the priors? And the number on the right to account for frequency of belief.

Replies from: SquirrelInHell
comment by SquirrelInHell · 2016-05-25T01:13:02.952Z · LW(p) · GW(p)

Yup.

A suggestion: modify the number of creatures on the left to equal a count of the frequency of the priors? And the number on the right to account for frequency of belief.

I don't buy "frequency of belief". Maybe instead, I'd put those in thought bubbles, and change the scaling of the bubbles.

Replies from: Elo
comment by Elo · 2016-05-25T07:24:08.513Z · LW(p) · GW(p)

C̶a̶n̶ ̶y̶o̶u̶ ̶a̶l̶s̶o̶ ̶a̶d̶d̶ ̶a̶ ̶w̶a̶t̶e̶r̶m̶a̶r̶k̶ ̶s̶o̶ ̶t̶h̶a̶t̶ ̶y̶o̶u̶ ̶g̶e̶t̶ ̶c̶r̶e̶d̶i̶t̶s̶ ̶i̶f̶ ̶I̶ ̶r̶e̶p̶o̶s̶t̶ ̶t̶h̶e̶ ̶i̶m̶a̶g̶e̶?̶ Edit: woops there is a watermark, I just didn't see it.

I was thinking more specifically: "I live with 1 human and 2 cats, therefore my priors for who could have made these footprints are represented by one human and two cats". Not exactly frequency of belief but a "belief of frequency"?

Edit: also can it be a square not a rectangle? Is there a reason it was a rectangle to begin with? Something about strength of evidence maybe?

One last edit: Can you make the "cat in boots" less likely? How many cats in boots do other people have in normal priors??

Replies from: SquirrelInHell
comment by SquirrelInHell · 2016-05-26T01:47:10.881Z · LW(p) · GW(p)

Can you make the "cat in boots" less likely?

It's not supposed to be realistic - real frequency of cats in boots is way too low for that. But I adjusted it a little for you: https://i.imgsafe.org/5876a8e.png

Edit: and about the shape, it matters not, as long as you think in odds ratios.

Replies from: Elo
comment by Elo · 2016-05-26T03:17:04.325Z · LW(p) · GW(p)

I like this version much better. Yes the shape does not matter; it does help me think about it though. I think this is generally an excellent visual representation. Well done!

comment by jollybard · 2016-05-25T03:07:52.173Z · LW(p) · GW(p)

This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?
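
One way to make it formal: the diagram is the odds form of Bayes' theorem. Dividing Bayes' rule for one hypothesis by the same rule for the other, the P(E) terms cancel:

    \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(H_1)}{P(H_2)} \cdot \frac{P(E \mid H_1)}{P(E \mid H_2)}

In the picture, the creatures on the left are the prior odds (humans : cats), the rescaling in the middle is the likelihood ratio for the footprint, and the right-hand side is the posterior odds. With the numbers from Elo's reading above: (1/3 × 3/4) : (2/3 × 1/4) = 3 : 2 in favor of a human.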

Replies from: Elo
comment by PipFoweraker · 2016-05-24T04:19:19.205Z · LW(p) · GW(p)

Whoah. That gets many points. What an excellent layout! We need to know what boots are for it to translate, but that's a lot closer to an ideal solution than I've worked through.

Edit - I thought the diagram looked familiar!

comment by Elo · 2016-05-23T01:31:34.221Z · LW(p) · GW(p)

Was considering something like a t-shirt of p(smoke|fire) and p(fire|smoke). Never came to fruition; feel free to take the idea if you like.

comment by Vaniver · 2016-05-23T01:18:20.697Z · LW(p) · GW(p)

Bayes is mostly about conditioning, and so I think you can draw a Venn Diagram that makes it fairly clear.

Replies from: PipFoweraker
comment by PipFoweraker · 2016-05-23T01:48:51.220Z · LW(p) · GW(p)

Thanks! I've been playing around with it for a week or so but can't find a way to do it that meets my arbitrary standards of elegance and cool design :-)

It becomes easier when using non-circular shapes for the Venn diagrams, but my efforts look a little hacky.

Replies from: Houshalter
comment by Houshalter · 2016-05-23T04:19:41.544Z · LW(p) · GW(p)

I prefer a diagram like this with just overlapping circles. And you can kind of see how the portion of the hypothesis circle that lies inside the evidence circle represents its probability.
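
In symbols, the Venn picture is just the definition of conditioning: restrict attention to the evidence circle and ask what fraction of it the hypothesis occupies:

    P(H \mid E) = \frac{P(H \cap E)}{P(E)}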

Arbital also has some nice visualizations: https://arbital.com/p/bayes_rule_waterfall/?l=1x1 https://arbital.com/p/bayes_rule_proportional/ https://arbital.com/p/bayes_log_odds/ and https://arbital.com/p/bayes_rule_proof/?l=1yd

Fivethirtyeight also made a neat graphic: https://espnfivethirtyeight.files.wordpress.com/2016/05/hobson-theranos-1-rk.png?w=1024&h=767

comment by Daniel_Burfoot · 2016-05-23T14:26:03.125Z · LW(p) · GW(p)

The issue with Bayes theorem isn't the derivation or proof. Nobody seriously debates the validity of the theorem as a mathematical statement. The debate, or conceptual roadblock, or whatever you want to call it, is whether researchers should apply the theorem as the fundamental approach to statistical inference.

comment by Douglas_Knight · 2016-05-22T22:38:31.979Z · LW(p) · GW(p)

What was the result of the IARPA prediction contest (2010-2015)?

Below I present what seem to me very basic questions about the results. I have read vague statements about the results that sound like people are willing to answer these questions, but the details seem oddly elusive. Is there some write-up I am missing?

How many teams were there? 5 academic teams? What were their names, schools, or PIs? What was the “control group”? Were there two, an official control group and another group consisting of intelligence analysts with access to classified information?
Added: perhaps a third control group "a prediction market operating within the IARPA tournament."

What were the Brier scores of the various teams in various years?

When Tetlock says that A did 10% better than B, does he mean that the Brier score of A was 90% of the Brier score of B?
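
For reference, the binary-outcome Brier score is just the mean squared error between forecast probabilities and 0/1 outcomes (IARPA scored multi-option questions, but the binary case is enough to show the "relative improvement" reading of "X% better"). A minimal sketch:

    import numpy as np

    def brier(forecasts, outcomes):
        """Mean squared error of probabilistic forecasts against 0/1
        outcomes: 0 is perfect; a constant 50% forecast scores 0.25."""
        f, o = np.asarray(forecasts, float), np.asarray(outcomes, float)
        return np.mean((f - o) ** 2)

    outcomes = [1, 0, 1, 1]
    a = brier([0.9, 0.2, 0.8, 0.7], outcomes)
    b = brier([0.8, 0.3, 0.7, 0.6], outcomes)
    # On this reading, "A did 10% better than B" means a == 0.9 * b:
    print(1 - a / b)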


I can identify 4 schools involved, comprising 3-4 teams:
GJP (Berkeley: Tetlock, Mellers, Moore)
DAGGRE/SciCast (GMU: Twardy, Hanson)
Michigan, MIT - 2 teams or a joint team?

Replies from: ChristianKl
comment by ChristianKl · 2016-05-23T09:36:58.379Z · LW(p) · GW(p)

Were there two, an official control group and another group consisting of intelligence analysts with access to classified information?

In Superforecasting, Tetlock writes that the main documents comparing the GJP forecasters against the intelligence analysts with access to classified information are classified. Tetlock doesn't directly say anything about that comparison, but reports in his book that a newspaper article said the GJP forecasters were 30% better (if I remember right).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2016-05-24T02:08:42.332Z · LW(p) · GW(p)

Here is the leak. It says that the superforecasters averaged 30% better than the classified analysts. Presumably that's the 2012-2013 season only and we won't hear about other years.

What is weird is that other sources talk about "the control group" and for years I thought that this was the control group. But Tetlock implies that he doesn't have access to the comparison with the classified group, but that he does have access to the comparison with the control group. In particular, he mentions that IARPA set a 4th year target of beating the control group by 50% and I think he says that he achieved that the first or second year. So that isn't the classified comparison. I guess it is possible to reconcile the two comparisons by positing that the superforecasters were 30% better, but that GJP, after extremizing, was more than 50% better. But I think that there were two groups.

Replies from: ChristianKl
comment by ChristianKl · 2016-05-24T11:57:02.961Z · LW(p) · GW(p)

I'm not sure that X% better has a unit that's always the same.

But Tetlock implies that he doesn't have access to the comparison with the classified group

I don't think that's the case. It's rather that it's information he can't reveal directly because it's classified.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2016-05-24T15:22:49.183Z · LW(p) · GW(p)

That's what I thought when I saw the passage quoted from the book (p95), but then I got the book and looked at the endnote (p301) and Tetlock says:

I am willing to make a big reputational bet that the superforecasters beat the intelligence analysts in each year in which such comparisons were possible.

which must be illegal if he has seen the comparisons.

Replies from: ChristianKl
comment by ChristianKl · 2016-05-24T15:50:33.693Z · LW(p) · GW(p)

He likely worked with a censor on how and what he could write. I think that line can very well be explained as the result of a compromise with the censor.

comment by morganism · 2016-05-28T21:14:09.450Z · LW(p) · GW(p)

The GRIM test — a method for evaluating published research

Testing the mean...

https://medium.com/@jamesheathers/the-grim-test-a-method-for-evaluating-published-research-9a4e5f05e870#.r9izfnrxp
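
The idea fits in a few lines: the mean of n integer responses, reported to k decimals, can only take values t/n for integer t, so many reported means can be checked directly. A minimal reconstruction of the test (my own sketch, not the author's code):

    def grim_consistent(mean, n, decimals=2):
        """Can n integer-valued responses produce this reported
        (rounded) mean? Check the nearest achievable totals."""
        nearest = round(mean * n)
        return any(
            total >= 0 and round(total / n, decimals) == round(mean, decimals)
            for total in (nearest - 1, nearest, nearest + 1)
        )

    # A reported mean of 5.19 from n = 28 integer responses is
    # impossible: multiples of 1/28 round to 5.18 or 5.21, never 5.19.
    print(grim_consistent(5.19, 28))  # False
    print(grim_consistent(5.18, 28))  # True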

comment by Gunnar_Zarncke · 2016-05-23T21:28:56.335Z · LW(p) · GW(p)

(epistemic status: Ruminations on cognitive processes by non-expert.)

I have a question tangential to AI safety, about goal formation: how do goals form in systems that do not explicitly have goals to begin with?

I tried to google this and found answers neither for AI systems nor for neuropsychology. One source (Rehabilitation Goal Setting: Theory, Practice and Evidence) summarised:

neuroscience has traditionally not been concerned with goal pursuit per se but rather with the cognitive component or sub-components that contribute to it. [...] whereas social psychology has tended to study more abstract life goals.

Apparently many AI safety problems revolve around the wrong goals or the extreme satisfaction of goals. The usually implied or explicit definition of a goal seems to be the minimization of the difference to a target state (which might be infinite for some valuation functions). Many AI models include some notion of the goal in some coded or explicitly given form. In general that coding isn't the 'real' goal. By real goal I mean that which the AI system in total appears to optimize for as a whole. And that may differ from the specification due to the structure of the available input and output channels and the strength of the optimization process. Nonetheless there is some goal, and there is a conceptual relation between the coded and the real goal.
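
As a toy version of that distinction (the one-dimensional state and the names here are purely illustrative), a "coded" goal might be greedy minimization of distance to a target, while the "real" goal is whatever the loop as a whole ends up optimizing given the actions and transitions actually available:

    def act(state, actions, target, step=lambda s, a: s + a):
        """Greedy goal-directed choice: the coded goal is 'minimize
        distance to target'; the behavior that results also depends
        on which actions and transitions are available."""
        return min(actions, key=lambda a: abs(step(state, a) - target))

    print(act(0, [-1, +1], target=5))  # -> 1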

But maybe real things can be a bit more complicated. Consider human goal formation. Apparently we do have goals. And we kind of optimize for them. But the question arises: Where do they come from cognitively and neurologically?

Goals are very high level concepts. I think there is no high level specification of the goals somewhere inside us that we read off and optimize for. I think our goals are our own understanding - on that high level of abstraction - of those patterns behind our behavior.

If that is right and goals are just our own understanding of some patterns of behavior, then how come there are specific brain modules (prefrontal cortex) devoted to planning for them? Or rather, how come these brain parts are actually connected to the abstract concept of a goal? Or aren't they? And the planning doesn't act on our understanding of the goals but on their constituent parts. What are these?

In my children I see clearly goal-directed behavior long before they can articulate the concept. And there are clear intermediate steps where they desperately try to optimize for very isolated goals: for example, winning a race to the door, trying to climb a fence, being the first one to get a treat, winning a game. Losing apparently causes real suffering. But why? Where is the loss? How are any of these things even matched against a loss? How does the brain match whatever representation of reality to these emotions? How do the encodings of concepts for me and you and our race get connected to our feelings about this situation? And I kind of assume here that the emotions themselves somehow produce the valuation that controls our motivation.

Replies from: Elo, Pimgd
comment by Elo · 2016-05-24T22:42:13.090Z · LW(p) · GW(p)

I took issue with not knowing how humans form goals, so I made this list of common human goals and suggested that humans who do not know theirs should look at the list and pick the ones that are relevant to themselves.

comment by Pimgd · 2016-05-24T07:54:36.359Z · LW(p) · GW(p)

You seem to be confusing goals and value systems - even without a goal, the UFAI risk is not gone.

Maybe it is not right to anthropomorphize, but take a human who is (acting) absolutely clueless and given choices. They'll pick something and stick to it. Questioned about it, they'll say something like "I dunno, I think I like that option". This is how I'd imagine something without a goal would act: maybe it is consistent, maybe it will pick things it likes, but it doesn't plan ahead and doesn't try to steer actions toward a goal.

For an AI, that would be a totally indifferent AI. I think it would just sit idle or do random actions. If you then give it a bad value system, and ask it to help you, you'll get "no" back. Helping people takes effort. Who'd want to spend processor cycles on that?

...

On the other hand, perhaps goals and value systems are actually the same; having a value system means you'll have goals ("envisioned preferred world states" vs "preferred world states"), so you cannot not have goals whilst having a value system. In that case, you'd have an AI without values. This I think is likely to result in one of two options... on contact with a human that provides an order to follow, it could either not care and do nothing (it stays idle... forever, not even acting in self-preservation because, again, it has no values), or it accepts the order and just goes along. That'd be dangerous, because this has basically no brakes - if it does whatever you ask of it, without regard for human values... I hope you didn't ask for anything complex. "World peace" would resolve very nastily, as would "get me some money" (it is stolen from your neighbors... or maybe it brings you your wallet), and things like "get me a glass of water" can be interpreted in so many ways that being handed a piece of ice in the shape of a drinking glass is on the positive side of results.

That's the crux of it, I think. Without a value system, there are no brakes. There might also not be any way to get the AI to do anything. But with a value system that is flawed, there might be no brakes in a scenario where we'd want the AI to stop. Or the AI wouldn't entertain requests that we'd want it to do. So a lot of research goes into this area to make sure we can make the AI do what we want it to do in a way that we're okay with.

comment by root · 2016-05-29T15:32:52.539Z · LW(p) · GW(p)

How do you solve interpersonal problems when neither side can see itself as the one at fault?

I've had a fight with my sister about my birthday present. She bought me - boosted by a contribution from my mom and dad - a bunch of clothes. I naturally got mad because:

  1. it's a large investment for an unsafe return (my disappointment)
  2. I always hated getting clothes for my birthday and that hasn't changed. I always just asked for money instead.

It has caused a little bit of bitterness. I understand her point of view, which was to make me happy on my birthday, but I still can't excuse the invalidity of the function she was using, especially considering that I had previously mentioned that I hate getting clothes for my birthday.

What should I do in order to ease the situation? Also, do you think that my reaction was inappropriate?

I talked about this with other people, and what they said was "it's the intention that matters", which sounds like an excuse (and at this point I'm curious whether I actually am looking for criticism or just subconsciously hoping I'll get a bunch of chocolate frogs), so give me the best criticism you can.

Replies from: gjm, philh, Jiro, Coacher
comment by gjm · 2016-06-01T15:29:58.658Z · LW(p) · GW(p)

Advance warning: there are very few chocolate frogs in what follows. Disclaimer: I will be saying a lot about how I think almost everyone feels about present-giving; I am describing, not endorsing.

I think your idea of what birthday present giving is for differs from that of the rest of society (including your sister). I think

  • you think that when A gives B a present, the point is to benefit B, and A will (subject to limitations of budget, time available, etc.) want to give a present that benefits B as much as possible;
  • practically everyone else would say something like that if asked, but actually behave as if the point is to enable A to demonstrate that they care about B and understand B's tastes well enough to buy something B likes. (So A can feel good about being caring and insightful, and B can feel good about being cared for and understood.)

From the first of those viewpoints, giving money makes a lot of sense. But from the second, it makes no sense at all. Therefore, giving money for a birthday present is unthinkable for most people -- and if you ask to be given money, you will almost-literally not be heard; all that will come across is a general impression of complaininess and unreasonableness.

I think you also differ from the rest of society (including your sister) about what's an appropriate reaction when you get something you don't like:

  • you think you should say "oh, no, I didn't want that; please don't do that again" and make a suggestion of something better for next time;
  • practically everyone else thinks you should pretend you really like it and act just as grateful as if you'd been given something perfect.

This is mostly a consequence of the other difference. If the point of giving a present is to demonstrate your own caring and understanding, then having it rejected ruins everything; if it's to give something genuinely beneficial, the failure is in the poor choice of present and the rejection is just giving useful information.

And now remember that "please give me money" is unthinkable and therefore can't be heard; so "I don't like X; please give me money in future" will be heard as "I don't like X, and I'm not going to suggest a better alternative for next time", and since you haven't (from the giver's perspective) actually made an actionable suggestion, it's quite possible that they won't remember that you specifically didn't like X; just that they gave you something and you were unhelpfully complainy in response.

So now here's how I think your sister probably sees it. (I'm going to assume you're male; let me know if that's wrong and I'll fix my language.)

"My brother refuses to say what he wants for his birthday. So, with no information to go on, I got him some clothes. After all, everyone wears clothes. And then, when he gets them, instead of being grateful or at least pretending to be grateful, he flies off the handle and complains about how he hates getting clothes!"

Whereas, of course, from your perspective it's

"My sister keeps getting me clothes for my birthday. I've said more than once before that I want money, not clothes, but she just doesn't listen. And then she gets upset when I tell her I don't want what she's given me!"

OK, so how to move forward? In an ideal world, part of the answer would be for your sister to accept your preference for being given money. But let's assume that's not going to happen. If you can cope with accepting blame for things that aren't altogether your fault, I think the most positive thing would be to find some things you would be glad to be bought, make an Amazon wishlist or something out of them, and say something like this to your sister:

I'm sorry I got cross with you about my birthday present. I shouldn't have, and I do appreciate that you were trying to give me something nice. [If you actually did like any of the clothes, this is where you can be more specifically grateful.] But, really, being given clothes is difficult for me because it usually turns out that at least half of what I'm given doesn't fit or doesn't go with other things in my wardrobe or just isn't what I want to wear. I never really know what to ask for, which is why I've just asked for money before, but I've tried hard to come up with a list and you can find it by searching for my name on Amazon. I'm sorry to be so difficult to buy for.

comment by philh · 2016-05-31T09:44:59.096Z · LW(p) · GW(p)

Did you offer any suggestions of things she could buy you? Cash doesn't count because mumblereasons. It sounds to me like your sister acted poorly, especially in getting your parents to contribute. But did you make it easy for her to act well?

I too would prefer simply receiving cash, but I've accepted that that's not happening, so I have an Amazon wishlist. It mostly has books and graphic novels. Graphic novels in particular make a good gift for me, because they're often a little more expensive than I'd like to spend on them myself.

(I feel like some people dislike even buying presents from a list, but you can at least suggest categories of things.)

comment by Jiro · 2016-05-29T23:09:04.476Z · LW(p) · GW(p)

Logically analyzing the actions of human beings in terms of preferences, functions, and returns is hard. It's not actually impossible, but pretty much everyone who tries misses important things that are hard to put into words. I'd first wonder why you think that birthday presents are supposed to be maximizing return in the first place.

Buying someone a present, for normal humans, requires both that the present not be too cheap and that some effort was taken to match the present specifically to the recipient. Maximizing return is not important. There are always edge cases, but in general, unless you are talking about an occasion where social customs require cash, cash is a bad gift because cash is not specifically matched to the recipient. It is very difficult to overturn this custom by just saying "I can use cash more than I can use clothes".

Furthermore, parents are a special case because parents can make decisions that favor your welfare instead of your preferences, that would be arrogant if made by anyone else. If your mom and dad think that you need clothes, they're going to buy you clothes even if you think you need something else more. There's still a line beyond which even parents would be rude, but just deciding that you need clothes probably isn't over that line.

It also depends on your age, whether you live with your parents (and thus they can see what clothes you own), etc. Also, did you even try to tell your parents that there was something you needed more than clothes, aside from cash?

comment by Coacher · 2016-05-30T12:18:51.712Z · LW(p) · GW(p)

How do you solve interpersonal problems when neither side can see itself as the one at fault?

Is there any other kind?

comment by Viliam · 2016-05-25T12:01:59.305Z · LW(p) · GW(p)

Spammer. Kill it with fire!

comment by Lumifer · 2016-05-23T16:54:06.966Z · LW(p) · GW(p)

We are getting closer to the future in which you WILL be able to stab people in the face over the internet.

Replies from: root
comment by root · 2016-05-23T17:43:09.228Z · LW(p) · GW(p)

The silliest thing I've seen in a while.

Is it supported by CBT, though? It might be that it only looks silly to me.

Replies from: Lumifer
comment by Lumifer · 2016-05-23T18:12:37.257Z · LW(p) · GW(p)

CBT? It's no accident the company that makes it is named Pavlok. The front page of its website says:

Pavlok --- Break Bad Habits With Classical Pavlovian Conditioning

Bad dog! You will salivate!! X-D

Replies from: root
comment by root · 2016-05-24T09:42:11.048Z · LW(p) · GW(p)

I'm curious how effective it is. Getting beaten up makes a stark comparison: it's painful and lasts more than a moment. That zap thing:

  1. is easily circumvented;
  2. takes the Pavlov banner while not actually being 100% loyal to it;
  3. isn't actually much of a consequence.
Replies from: Lumifer
comment by Lumifer · 2016-05-24T14:54:22.843Z · LW(p) · GW(p)

Without any data I'll take a wild-ass guess that it is effective for some people, probably a fairly small number. Most wouldn't buy such a thing (for obvious reasons) and a great deal of those who do would discard it after the first few unpleasant experiences.

comment by Elo · 2016-05-23T00:22:52.883Z · LW(p) · GW(p)

I am starting to look at the health insurance market.

This is a human-level search: where do I find the basic considerations to evaluate everything else with? Do you know of a good resource?

Replies from: PipFoweraker
comment by PipFoweraker · 2016-05-23T00:50:09.567Z · LW(p) · GW(p)

In any particular geographical or topical area?

Replies from: Elo
comment by Elo · 2016-05-23T01:29:35.061Z · LW(p) · GW(p)

Australia, NSW. I am a young and healthy person with no existing conditions, also good vision and no wisdom teeth. (looking to get health insurance)

It seems to vary from hospital-only cover through to full extras cover, including money back for gym membership, massage and more. I am hesitant because it seems I would pay more for that than I would otherwise pay for the services I would actually use (as a healthy young person right now).

Replies from: PipFoweraker
comment by PipFoweraker · 2016-05-23T01:53:46.978Z · LW(p) · GW(p)

In your situation, in Australia, it's mostly about forward planning. Do you have any foreknowledge of likely changes in your health or family situation?

The insurance market in Australia has historically been pretty poor in terms of transparency and easy comparisons. I'm sure you've found the various compare-policy tools online. I'm assuming you don't want to piggyback on a family policy.

Are you looking for more data, or a list of considerations for insurance planning? If it's the latter, try browsing insurance industry planner websites for their policy documents. I can probably get some friends in the industry to email me more comprehensive things if you want to work off their approaches.

Replies from: Elo
comment by Elo · 2016-05-23T11:34:45.464Z · LW(p) · GW(p)

will continue via PM.

comment by Elo · 2016-05-22T21:32:10.185Z · LW(p) · GW(p)

https://www.facebook.com/groups/144017955332/

The Facebook group has changed names. If you are looking for it, it goes by "Brain Debugging Discussion". The link is the same.

comment by Vaniver · 2016-05-24T00:06:14.143Z · LW(p) · GW(p)

WinSplit Revolution, which lets you set locations and sizes in pixels for various window options, worked beautifully for splitting my wide monitor into thirds, but did not survive the transition to Windows 10. I can find countless window managers that let me snap to the left half or the right half, or if they're particularly fancy into quarters. But I have yet to find a tool with keyboard hotkeys that will divide the monitor space into thirds, or let me set custom locations so I can do it myself.

What am I missing?
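
In case it helps anyone searching later, the move itself is a one-liner against the raw Win32 API. A minimal sketch assuming the pywin32 package (hotkey wiring, multi-monitor support, and DPI scaling all left out):

    import win32api
    import win32gui

    def snap_to_third(column):
        """Move the foreground window into column 0, 1, or 2 of the
        primary monitor."""
        screen_w = win32api.GetSystemMetrics(0)  # SM_CXSCREEN
        screen_h = win32api.GetSystemMetrics(1)  # SM_CYSCREEN
        third = screen_w // 3
        hwnd = win32gui.GetForegroundWindow()
        win32gui.MoveWindow(hwnd, column * third, 0, third, screen_h, True)

    snap_to_third(0)  # left third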

Replies from: moreati
comment by moreati · 2016-05-28T19:51:41.514Z · LW(p) · GW(p)

Have you asked this question in a Windows specific forum? e.g. https://www.reddit.com/r/windows

Replies from: Vaniver
comment by Vaniver · 2016-05-30T15:40:10.190Z · LW(p) · GW(p)

I have now.

comment by Ixiel · 2016-05-26T22:05:30.102Z · LW(p) · GW(p)

Ok, I have to hold my breath as I ask this, and I'm really not trying to poke any bears, but I trust this community's ability to answer objectively more than other places I can ask, including more than my weak weak Google fu, given all the noise:

Is Sanders actually more than let's say 25% likely to get the nod?

I had written him off early, but I don't get to vote in that primary so I only just started paying attention. I'm probably voting Libertarian anyway, but Trump scares me almost as much as Clinton, so I'd sleep a little better in the meantime if it turns out I was wrong.

Thanks in advance. If this violates the Politics Commandment I accept the thumbs, but I'd love to also hear an answer I can trust.

Replies from: gjm, Lumifer, Douglas_Knight, knb
comment by gjm · 2016-05-27T00:09:36.582Z · LW(p) · GW(p)

He's millions of votes and many many delegates down compared to HRC. I think the only realistic way he gets the Democratic nomination is if HRC abruptly becomes obviously unelectable (e.g., if the business with her email server starts looking like getting her into actual legal trouble, or someone discovers clear evidence of outright bribery from her Wall Street friends), in which case the "superdelegates" might all switch to Sanders. I don't see any such scenario that actually looks more than a few percent likely.

(I make no claim to be an expert; I offer this only as a fairly typical LWer's take on the matter.)

Replies from: Ixiel
comment by Ixiel · 2016-05-27T12:49:39.709Z · LW(p) · GW(p)

Thanks G, I feel more confident I understand. Can't wait to see the debates; I'm open to the possibility my judgement on the matter might be wrong about one or both.

comment by Lumifer · 2016-05-27T00:48:51.712Z · LW(p) · GW(p)

Is Sanders actually more than let's say 25% likely to get the nod?

No.

To get the nomination he needs something extraordinary to happen. Something like Hillary developing a major health problem or the FBI indicting her over her private email server.

Trump scares me almost as much as Clinton

Someone pointed out a silver lining: the notion of President Trump might make progressives slightly less enthusiastic about the imperial presidency. I'm not holding my breath, though.

Replies from: gjm, Ixiel
comment by gjm · 2016-05-27T10:19:39.876Z · LW(p) · GW(p)

Are progressives particularly enthusiastic about imperial presidency?

I haven't noticed any such enthusiasm. I have noticed people being annoyed when "their guy" was in the White House but couldn't do the things they wanted because Congress was on the other side, but that's not at all the same thing.

Is it a thing progressives do more than conservatives? I dunno. It may be a thing progressives have done more of in the last decade or so because they've spent more of that time with the president on their side and Congress against, but that doesn't tell us much about actual differences in disposition.

[EDITED for slightly less clumsy wording.]

Replies from: Lumifer
comment by Lumifer · 2016-05-27T14:36:18.855Z · LW(p) · GW(p)

Are progressives particularly enthusiastic about imperial presidency?

I think so, yes. Here is an example; they are not hard to find. Of course, the left elides the word "imperial" :-/

I have noticed people being annoyed

More than annoyed. These people want to expand the presidential powers and use the executive branch to achieve their goals, separation of powers be damned.

Is it a thing progressives do more than conservatives?

Yes, because progressives are much more comfortable with the idea of a Big State (not to mention the idea of upending traditional arrangements).

Replies from: gjm
comment by gjm · 2016-05-29T01:23:10.484Z · LW(p) · GW(p)

Here is an example

... whose authors say

> the consolidation of executive authority has led to a number of dangerous policies [see David Shipler, in this issue], and we strongly oppose the extreme manifestations of this power, such as the “kill lists” that have already defined Obama’s presidency

which doesn't seem exactly like a ringing endorsement of "imperial presidency".

So far as I can tell, the article isn't proposing that the POTUS should have any powers he doesn't already have; only that he should use some of his already-existing powers in particular ways. If that's "imperial presidency" then the US already has imperial presidency and the only thing restraining it is the limited ambition of presidents.

These people want to expand the presidential powers and use the executive branch to achieve their goals, separation of powers be damned.

Which people, exactly? Again, the article you pointed to as an example of advocacy for "imperial presidency" claims quite explicitly that the president already has the power to do all the things it says he should do. (Of course that might be wrong. But saying the president should do something that you wrongly believe he's already entitled to do is not advocating for expanding presidential power.)

Yes, because [...]

Do you have evidence that they actually do, as opposed to a bulveristic explanation of why you would expect them to?

I'm not sure how one would quantify that, but a related question is which presidents have actually exercised more "imperial" power. A crude proxy that happens to be readily available is the number of executive orders issued. So here are the last 50 years' presidents ordered by number of EOs, most to least: Reagan (R), Clinton (D), Nixon (R), Johnson (D), Carter (D), Bush Jr (R), Obama (D), Ford (R), Bush Sr (R). Seems fairly evenly matched to me.
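
One refinement on that crude proxy, sketched below: normalize by years in office, since the list mixes one- and two-term presidents. The counts here are approximate totals in the spirit of the American Presidency Project tallies (Obama's figure is a rough mid-2016 value), so treat every number as an assumption to verify rather than a citation.

```python
# Executive orders per president, normalized by years in office.
# Both columns are approximate and should be double-checked.
eos = {
    "Johnson (D)": (325, 5.2),
    "Nixon (R)":   (346, 5.5),
    "Ford (R)":    (169, 2.4),
    "Carter (D)":  (320, 4.0),
    "Reagan (R)":  (381, 8.0),
    "Bush Sr (R)": (166, 4.0),
    "Clinton (D)": (364, 8.0),
    "Bush Jr (R)": (291, 8.0),
    "Obama (D)":   (235, 7.3),  # rough count through May 2016
}
# Sort by EOs per year, descending, and print a small table.
for name, (orders, years) in sorted(eos.items(),
                                    key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:12s} {orders:4d} orders, {orders / years:5.1f} per year")
```

On these (assumed) numbers the per-year ranking also alternates parties near the top, which doesn't change the "fairly evenly matched" picture.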

I don't think I understand your bulveristic explanation, anyway. Issuing more executive orders (or exercising more presidential power in other ways) is about the balance between branches of government, not about the size of the government.

Here's an interesting article from 2006 about pretty much exactly this issue; it deplores the (alleged) expansion of presidential power and says both conservatives and progressives are to blame, and if you look at its source you will see that it's not likely to be making that claim in the service of a pro-progressive bias.

comment by Ixiel · 2016-05-27T12:46:15.529Z · LW(p) · GW(p)

That's what I had thought originally. Thank you for the speedy reply!

comment by Douglas_Knight · 2016-05-27T00:40:51.485Z · LW(p) · GW(p)

Betfair says 5%. I'm not saying you shouldn't second-guess prediction markets, but you should look at them. If you think the right number is 25%, maybe you should put money on it. Actually, I do say that you should second-guess them: low numbers are usually over-estimates because of the structure of the market.
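
(For anyone unfamiliar with the conversion: a decimal back price implies a probability of 1/odds. A toy sketch, with 20.0 as an illustrative price and the exchange's commission ignored:)

```python
# Implied probability from a Betfair-style decimal price (commission ignored).
def implied_prob(decimal_odds: float) -> float:
    return 1.0 / decimal_odds

print(implied_prob(20.0))  # a back price of 20.0 implies 0.05, i.e. 5%
```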

Replies from: Ixiel
comment by Ixiel · 2016-05-27T12:45:36.869Z · LW(p) · GW(p)

I don't know the right number; I just used it as a set point rather than saying "Can he win?" and getting "Well TECHNICALLY..." Thanks for the reply; I'll keep current sleep patterns ;)

comment by knb · 2016-05-28T00:52:12.591Z · LW(p) · GW(p)

I'd estimate Sanders' chances as less than 10%, maybe a bit more than 5%. He would need a mass defection of superdelegates at this point, and it's possible they would be directed to jump en masse to someone else (like Biden) even if the DNC decides to dump Clinton.

Replies from: Ixiel
comment by Ixiel · 2016-05-28T13:25:26.786Z · LW(p) · GW(p)

Thanks K; good to have more supporting evidence. I won't bother checking out his issues at this time; I'll wait until I know who I get to choose.

comment by [deleted] · 2016-05-25T23:57:55.907Z · LW(p) · GW(p)
  1. Cues may not actually trigger drug seeking as much as we assume:

'...robust increases in craving and exhibit modest changes in autonomic responses, such as increases in heart rate and skin conductance and decreases in skin temperature, when exposed to drug-related versus neutral stimuli....However, when drug-use measures are used in cue reactivity studies the typical finding is a modest increase in drug-seeking or drug-use behavior'

-WP: Cue reactivity

  2. People become scientists because they're lured by the charm of discovery and excitement, but clinical research is not where you get that. Doctors get it instead, because they see many individual cases over time.

'If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs'

-WP: RCTs

  3. [Interesting discussion on responses to statements like: '"wow it is so inspirational to see how you got through med school with three kids, I can't imagine"']
  4. Is there a free app for this kind of automatic language-translator device?
  5. 'you're not getting rejected, you're just testing the chemistry'

Unexpected reframe from 'rsdtyler'

comment by Tripitaka · 2016-05-25T08:00:04.699Z · LW(p) · GW(p)

I am not sure if I read it here or on SSC, but someone tried to estimate what a "mary's room" equivalent for the human brain would look like: a moon-sized library on which robotic crawlers run around at decent fractions of c...

Anybody have info on that?

Replies from: gjm
comment by gjm · 2016-05-25T12:40:48.661Z · LW(p) · GW(p)

When you say "mary's room", do you actually mean Chinese Room rather than Mary's Room?

Replies from: Viliam, Tripitaka
comment by Viliam · 2016-05-26T08:50:29.315Z · LW(p) · GW(p)

What if Mary is Chinese? :P

I mean, what if there is a person who doesn't understand Chinese in a room, manipulating Chinese symbols according to formal rules that make no sense to them. The system already "knows" (on the symbol level, not at the level of the person who operates it) everything about the "red" color, but it has never perceived the red color in its input. And then, one day, it receives the red color in the input. If there is an unusual response by the system, what exactly caused it? (For an extra layer of complication, let's suppose that the inputs to the "Chinese room" are bit streams containing JPEG images, so even the person operating the room has never seen the red color.)

To add more context, what if the perceived red object is a trolley running down the railway track...

Replies from: gjm, Lumifer, Houshalter
comment by gjm · 2016-05-26T11:13:55.739Z · LW(p) · GW(p)

See also: Can Bad Men Make Good Brains Do Bad Things?. That's a JSTOR article which won't be accessible for most readers, but some kind person has copied out its content here.

[EDITED to use a slightly better choice of some-kind-person.]

comment by Lumifer · 2016-05-26T15:06:17.168Z · LW(p) · GW(p)

To add more context, what if the perceived red object is a trolley running down the railway track...

What if the Chinese room is operated by trolleys running on tracks and the signaling system works by putting some (smaller) number of fat people and some (greater) number of slim people onto appropriate tracks? X-0

...and then one day you find a giraffe on tracks.

comment by Houshalter · 2016-05-26T13:39:07.541Z · LW(p) · GW(p)

Reminds me of this.

comment by Tripitaka · 2016-05-25T15:15:49.867Z · LW(p) · GW(p)

Huh. Indeed, of course I mean the Chinese Room. That might be enough help, thanks!

Replies from: username2
comment by username2 · 2016-05-25T15:16:46.975Z · LW(p) · GW(p)

I think I have seen it in Scott Aaronson's lecture notes.

Replies from: Tripitaka
comment by Tripitaka · 2016-05-25T15:20:27.738Z · LW(p) · GW(p)

Found it already. Searching for "chinese" instead of "marys room" yielded http://slatestarcodex.com/2014/09/01/book-review-and-highlights-quantum-computing-since-democritus/

> If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine this enormous Chinese-speaking entity that we’ve brought into being might have something we’d be prepared to call understanding or insight.
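
To get a feel for the scale, here's a rough latency sketch (assumptions mine, not Aaronson's): treat one rule lookup as one crossing of an Earth-diameter library, with crawlers at 0.9c and about one second budgeted per conversational reply.

```python
# How fast would the crawlers have to be? A back-of-the-envelope sketch.
C = 3.0e8                 # speed of light, m/s
EARTH_DIAMETER = 1.27e7   # m
crawler_speed = 0.9 * C   # assumed crawler speed

crossing_time = EARTH_DIAMETER / crawler_speed   # ~0.047 s per crossing
crossings_per_reply = 1.0 / crossing_time        # ~21 crossings per second
print(f"{crossing_time * 1e3:.0f} ms per crossing; "
      f"~{crossings_per_reply:.0f} crossings fit in a 1 s reply")
```

Even at near-light speed a single crawler only manages a couple dozen crossings per sentence, which is presumably why the thought experiment needs a swarm rather than one robot.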

comment by morganism · 2016-05-27T06:40:43.090Z · LW(p) · GW(p)

#FoundThem - 21st Century Pre-Search and Post-Detection SETI Protocols for Social and Digital Media

https://arxiv.org/abs/1605.02947

https://theconversation.com/how-to-tell-the-world-youve-discovered-an-alien-civilisation-60014

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2016-05-28T16:15:17.871Z · LW(p) · GW(p)

Please escape the hash with a backslash (\#) or it formats the rest of the line as a title.