Open thread, December 7-13, 2015

post by polymathwannabe · 2015-12-07T14:47:07.561Z · LW · GW · Legacy · 224 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

224 comments

Comments sorted by top scores.

comment by James_Miller · 2015-12-07T17:21:47.722Z · LW(p) · GW(p)

I asked Steve Hsu (an expert) "How long do you think it will probably take for someone to create babies who will grow up to be significantly smarter than any non-genetically engineered human has ever been? Is the answer closer to 10 or 30 years?"

He said it might be technologically possible in 10 years but "who will have the guts to try it? There could easily be a decade or two lag between when it first becomes possible and when it is actually attempted."

In, say, five years someone should start a transhumanist dating service that matches people who want to genetically enhance the intelligence of their future children. Although this is certainly risky, my view is that the Fermi paradox implies we are in great danger and so should take the chance to increase the odds that we figure out a way through the great filter.

Replies from: gjm, NancyLebovitz, skeptical_lurker, passive_fist, ChristianKl, Lumifer, skeptical_lurker
comment by gjm · 2015-12-07T17:52:12.329Z · LW(p) · GW(p)

In so far as the Fermi paradox implies we're in great danger, it also suggests that exciting newly-possible things we might try could be more dangerous than they look. Perhaps some strange feedback loop involving intelligence enhancement is part of the danger. (The usual intelligence-enhancement feedback loop people worry about around here involves AI, of course, but perhaps that's not the only one that's scary.)

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-07T19:07:59.398Z · LW(p) · GW(p)

Hostile intelligences would presumably still create Dyson spheres/colonise the galaxy/emit radio waves/do something to alert other civilisations to their presence. The explanation for the Fermi paradox has to be something like superweapons, not superintelligence.

comment by NancyLebovitz · 2015-12-08T15:04:10.073Z · LW(p) · GW(p)

How good do you think you'd be at raising a child who is a great deal smarter than any previous human?

Let's assume you're sane enough to not resent the child's superintelligence. Still, what does the child need?

Tentative suggestion: people who are interested in the project should aim for at least a dozen superintelligent children in the first generation so that at least they have some company.

Replies from: James_Miller, Lumifer
comment by James_Miller · 2015-12-08T16:29:33.011Z · LW(p) · GW(p)

I'm currently raising a child who is, age adjusted, considerably smarter than myself. It's challenging but fun. The danger for me isn't my resenting his intelligence, it's taking too much pride in it.

Replies from: Fluttershy, NancyLebovitz
comment by Fluttershy · 2015-12-09T01:19:35.202Z · LW(p) · GW(p)

Just from his occasional posts on LW, and your occasional mentions of him, Alex reminds me of a real-life version of Harry from HPMoR. :)

Edit: to avoid the possibility of future confusion, I'd like to emphasize that I meant this in an entirely positive way.

Replies from: James_Miller
comment by NancyLebovitz · 2015-12-08T21:36:08.417Z · LW(p) · GW(p)

Smarter than you are is one thing, smarter than any previous person is another.

comment by Lumifer · 2015-12-08T18:10:27.332Z · LW(p) · GW(p)

That starts to remind me of the Ender's Game series, in particular Shadow of the Hegemon.

comment by skeptical_lurker · 2015-12-07T20:42:07.884Z · LW(p) · GW(p)

He said it might be technologically possible in 10 years

He's talking about using CRISPR to edit DNA. I would ask what the timeline for germline selection is, but he says:

then the main bottleneck will be the sample size of good (cognitive, genotype) data sets necessary to extract the genetic architecture. IF we can get to ~ millions (very plausible in 5-10 years), ...

And I assume that getting the datasets is also the bottleneck for germline selection.

Incidentally, is this the sort of problem which can be significantly sped up by money/publicity? And how much money? Is this the sort of thing which would be a good target for philanthropy?

transhumanist dating service

Simpler idea: join okcupid, use #IWGEC (I want genetically enhanced children) as a hashtag to identify each other.

Of course, a dedicated niche dating site has advantages, in that the site can be tailored to the specific criteria, but it's a lot harder to set up.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-08T12:48:31.602Z · LW(p) · GW(p)

And I assume that getting the datasets is also the bottleneck for germline selection.

Incidentally, is this the sort of problem which can be significantly sped up by money/publicity? And how much money? Is this the sort of thing which would be a good target for philanthropy?

You would have that data if a country like Singapore decided to do DNA sequencing for its entire population.

If you want to go in that direction in the US, you would need to lobby for SAT scores to be included in the digital health system created by Obamacare.

Apart from that, the cost of genome sequencing is an important variable. Developing cheaper sequencing technology will increase the number of people who have their DNA sequenced.

comment by passive_fist · 2015-12-07T20:39:07.469Z · LW(p) · GW(p)

I don't think we are at the point where we can adequately assess the risks involved. It's known that higher IQ is correlated with major depression, bipolar disorder, and schizophrenia. What use is having a super-intelligent child if they have to spend most of their teenage and early adult years away from society, in a medicated stupor?

There may also be other genetic side effects to increased intelligence, such as increased risk of alcohol dependence and substance abuse.

I think I remember a study saying that over an IQ of 130, there is no correlation between increased intelligence and success/happiness.

It would probably be far more worthwhile to focus on having children of moderate-to-high IQ score (120-130 range), and put more emphasis on better upbringing, instilling values such as the importance of socializing and putting effort into one's goals. The focus that some transhumanists seem to have on raw intelligence seems a bit childish and naive.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-12-07T20:54:00.204Z · LW(p) · GW(p)

It would probably be far more worthwhile

What are you optimizing for?

Replies from: passive_fist
comment by passive_fist · 2015-12-07T20:57:38.158Z · LW(p) · GW(p)

The optimal mix of intelligence and ability to make use of intelligence.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T20:59:24.012Z · LW(p) · GW(p)

You just shifted all the meaning to the word "optimal".

Optimal when maximizing for what?

Replies from: passive_fist
comment by passive_fist · 2015-12-07T21:13:43.256Z · LW(p) · GW(p)

No I did not.

If James_Miller meant 'genetic basis of intelligence' (and I think he did) then I am pointing out that that may not be predictive of actual intelligence when measured in the real world after development. You could just as well say I'm 'optimizing for intelligence'. I am simply making it clear that I'm not optimizing for at-birth intelligence.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T21:15:32.663Z · LW(p) · GW(p)

I still don't understand you.

Is there any measurable value that you are optimizing for? What is it?

comment by ChristianKl · 2015-12-07T23:38:02.964Z · LW(p) · GW(p)

put more emphasis on better upbringing

What do you mean, specifically, by that sentence?

Replies from: passive_fist
comment by passive_fist · 2015-12-08T00:46:32.516Z · LW(p) · GW(p)

Nutrition, intellectually stimulating environments, presence of both parents, and existence of other children to play with have all been shown to be positively correlated with doing better at school, for one. I'm sure there are many other factors.

Another point, not directly related to your question but related to OP's question, is that an IQ of, say, 130 may not be that high (and definitely not that high compared to the LW average), but it is 2 standard deviations above the mean... if everyone reached that level of intelligence it would be a vast improvement in average intelligence over what it is now.

Replies from: James_Miller
comment by James_Miller · 2015-12-08T02:04:23.233Z · LW(p) · GW(p)

if everyone reached that level of intelligence it would be a vast improvement in average intelligence over what it is now.

I agree, but this isn't actionable information for transhumanists. In contrast, a few transhumanist couples could, perhaps, in a decade create a biological super-intelligence. I would love to get an 18-year-old reader of LW to start thinking about doing this.

Replies from: passive_fist
comment by passive_fist · 2015-12-08T04:46:27.384Z · LW(p) · GW(p)

It's certainly possible to use simple selective breeding techniques to increase intelligence beyond what would ever likely occur naturally. Modern experience in selective breeding of, for example, cattle for milk production has resulted in herds of cows that produce far more milk than even the most extreme natural outlier ever produced. And furthermore there are statistical tools that can take as input various traits (various intelligence scores and also factors relating to general health and well-being) and produce, as outputs, pairings that would result in optimal intelligence increase. Going further, modern genomics techniques (like sperm sorting and prediction of traits from embryonic gene sequences) could make the process even more rapid.

But it could never be done in a decade. Modern techniques require a minimum of around 5 generations to properly maximize traits beyond what would be found in the natural population (this varies hugely depending on the trait, of course, but 5 generations is a commonly-used ballpark estimate). Assuming impregnation starts as soon as reproductive viability is achieved (roughly 15 years per generation), that gives a figure of 75 years.

The only thing that could shorten this would be designer-baby technology. A simple method would be using embryonic stem cells to go directly to gametes without having to go through birth, development, and maturation. The downside to this is that prediction of intelligence based on embryonic DNA alone is flimsy; many more generations would probably be required, and a few 'interim' individuals would probably have to at least reach school age for model calibration. Assuming, say, three interim stages, that gives 24 years. Even this would require a huge amount of resources - not to mention the sacrifice and the enormous ethical issues involved.

I can't see even modern genetics technology achieving biological superintelligence any sooner than that, unless you are willing to throw trillions of dollars at it.

Replies from: James_Miller
comment by James_Miller · 2015-12-08T05:20:27.909Z · LW(p) · GW(p)

We identify a bunch of genes that either increase or decrease intelligence and then use CRISPR to edit the genomes of embryos to create super-geniuses. Just eliminating mutational load from an embryo might do a lot.

Replies from: passive_fist
comment by passive_fist · 2015-12-08T05:39:43.815Z · LW(p) · GW(p)

The reason this approach won't work is that genes aren't linear factors that can be added up together in that way. Even in something as simple as milk production, you need to do selection over multiple generations and evaluate each generation separately, building up small genetic changes over time.

If you could construct an actual model relating various genes to intelligence, in a way that took into account genetic interactions, then you could do what you propose in a single generation, but we are very very far from being able to construct such a model at present.

As it stands today, if you just carried out that naive approach you would end up with a non-viable embryo or, in the best-case scenario, a slightly-higher-than-average intelligence person. Not a super-genius.

Replies from: James_Miller, Douglas_Knight
comment by James_Miller · 2015-12-08T22:26:04.815Z · LW(p) · GW(p)

When researching my book I was told by experts that the intelligence genes which vary throughout the human population are probably linear. Consider President Obama who has a very high IQ but whose parents are genetically very different from each other. If intelligence genes worked in a non-additive, complex way, people with such genetically diverse parents would almost always be very unintelligent. We don't observe this.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-16T02:14:53.092Z · LW(p) · GW(p)

Consider President Obama who has a very high IQ

Evidence?

Replies from: James_Miller
comment by James_Miller · 2015-12-16T02:16:44.506Z · LW(p) · GW(p)

Harvard Law Review

Replies from: Lumifer
comment by Lumifer · 2015-12-16T05:53:59.862Z · LW(p) · GW(p)

Counter-evidence: affirmative action.

In any case, it's interesting that Obama's SAT (or ACT) scores are sealed as are his college grades, AFAIK.

Replies from: solipsist
comment by solipsist · 2016-01-08T18:49:20.442Z · LW(p) · GW(p)

HLS students of any skin color have high IQs as measured by standardized tests. The school's 25th percentile LSAT score is 170, which is 97.5th percentile for the subset of college graduates who take the LSAT. 44% of HLS students are people of color.

Replies from: Lumifer
comment by Lumifer · 2016-01-08T20:31:22.695Z · LW(p) · GW(p)

44% of HLS students are people of color.

When I see funny terms like "people of color" (or, say, "gun deaths"), I get suspicious. A little bit of digging, and...

Black students constitute 10-12% of HLS students. Most of the "people of color" are Asians.

comment by Douglas_Knight · 2015-12-09T19:51:56.967Z · LW(p) · GW(p)

No, actually, genetic studies of both milk production and IQ show them to be mainly linear.

That selective breeding has to be done slowly has nothing to do with genetic structure.

Replies from: ChristianKl, passive_fist
comment by ChristianKl · 2015-12-09T20:02:28.448Z · LW(p) · GW(p)

No, actually, genetic studies of both milk production and IQ show them to be mainly linear.

What kind of study do you think shows IQ to be mainly linear?

I would guess that you are confusing assumptions that the researchers behind a study make to reduce the number of factors with the findings of the study.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-12-09T23:20:28.763Z · LW(p) · GW(p)

There are decades of studies of the heritability of IQ. Some of them measure H², which is full heritability, and some of them measure h², "narrow-sense heritability"; some measure both. Narrow-sense heritability is the linear part, a lower bound for the full broad-sense heritability. A typical estimate of the nonlinear contribution is H² - h² = 10%. In neither case do they make any assumptions about the genetic structure. Often they make assumptions about the relation between genes and environment, but they never assume linear genetics. Measuring h² is not assuming linearity, but measuring linearity.

This paper finds a lower bound for h² of 0.4 and 0.5 for crystallized and fluid intelligence, respectively, in childhood. I say lower bound because it only uses SNP data, not full genomes. It mentions earlier work giving a narrow sense heritability of 0.6 at that age. That earlier work probably has more problems disentangling genes from environment, but is unbiased given its assumptions.
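
For readers new to the notation, here is the standard textbook variance decomposition behind these terms (a generic sketch assuming no gene-environment covariance, not anything specific to the papers above):

    \mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E), \qquad G = A + D + I
    % A: additive (linear) effects, D: dominance, I: epistatic interactions
    H^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}, \qquad
    h^2 = \frac{\mathrm{Var}(A)}{\mathrm{Var}(P)}, \qquad h^2 \le H^2

The gap H² - h² is exactly the nonlinear (dominance plus epistasis) share referred to above.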

Replies from: ChristianKl
comment by ChristianKl · 2015-12-09T23:49:39.712Z · LW(p) · GW(p)

The linked paper says:

We fitted a linear mixed model y = µ + g + e, where y is the phenotype, µ is the mean term, g is the aggregate additive genetic effect of all the SNPs and e is the residual effect.

If you have 3,511 individuals and 549,692 SNPs you won't find any nonlinear effects. 3,511 observations against 549,692 SNP effects is already overfitted; 3,511 observations against 549,692 × 549,691 gene interactions is even more overfitted, and I wouldn't expect the four principal components they calculate to find an existing needle in that haystack.

Apart from that it's worth noting that IQ is g fitted to a bell curve. You wouldn't expect a variable that you fit to a bell curve to behave fully linearly.
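
As a back-of-the-envelope illustration of the parameter-counting point (my own sketch using the study's numbers, not anything from the paper itself):

    # Parameters vs. observations in the study discussed above.
    n_individuals = 3511
    n_snps = 549692

    main_effects = n_snps                       # one additive coefficient per SNP
    interactions = n_snps * (n_snps - 1) // 2   # all distinct pairwise terms

    print(main_effects / float(n_individuals))  # ~157 SNP terms per person
    print(interactions / float(n_individuals))  # ~4.3e7 interaction terms per person

Whatever one makes of the aggregate variance-component estimate, fitting individual pairwise interaction terms at this sample size is clearly underdetermined.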

Replies from: Douglas_Knight, Lumifer
comment by Douglas_Knight · 2015-12-10T00:36:45.956Z · LW(p) · GW(p)

No, they didn't try to measure non-linear effects. Nor did they try to measure environment. That is all irrelevant to measuring linear effects, which was the main thing I wanted to convey. If you want to understand this, the key phrase is "narrow sense heritability." Try a textbook. Hell, try wikipedia.

That it did well on held-back data should convince you that you don't understand overfitting.

Actually, I would expect a bell curve transformation to be the most linear.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-10T01:27:49.538Z · LW(p) · GW(p)

That it did well on held-back data should convince you that you don't understand overfitting.

They didn't do well on the gene level: "Analyses of individual SNPs and genes did not result in any replicable genome-wide significant association."

No, they didn't try to measure non-linear effects. Nor did they try to measure environment. That is all irrelevant to measuring linear effects, which was the main thing I wanted to convey.

No, the fact that you can fit a linear model whose h² accounts for 0.4 or 0.5 of the variance doesn't mean that the underlying reality is structured in a way that genes have linear effects.

To make the causal statement that genes work in a linear way, the summary statistic h² is not enough.

comment by Lumifer · 2015-12-10T01:56:55.493Z · LW(p) · GW(p)

You wouldn't expect a variable that you fit to a bell curve to behave fully linearly.

I would not recommend making confident pronouncements which make it evident you have no clue what you are talking about.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-10T02:41:48.686Z · LW(p) · GW(p)

I would not recommend making confident pronouncements which make it evident you have no clue what you are talking about.

While I haven't worked with the underlying subjects in the last few years, I did take bioinformatics courses from people who had a clue what they were talking about, and the confident pronouncements I make are what I learned there.

Replies from: Lumifer
comment by Lumifer · 2015-12-10T03:33:00.473Z · LW(p) · GW(p)

OK, let's try a simpler piece of advice: first, stop digging.

comment by passive_fist · 2015-12-09T20:26:48.906Z · LW(p) · GW(p)

No, it was assumed that the genes controlling milk production were linear because it was much easier to study them that way, and unfortunately over time many people came to simply accept that as true, when it has never been proven (in fact, it's been conclusively proven otherwise).

comment by ChristianKl · 2015-12-07T20:32:20.250Z · LW(p) · GW(p)

In, say, five years someone should start a transhumanist dating service that matches people who want to genetically enhance the intelligence of their future children.

Simply put it on OkCupid as an additional question that's important to you.

comment by Lumifer · 2015-12-07T20:19:59.307Z · LW(p) · GW(p)

Ahem. A transhumanist woman wanting to have a genetically engineered baby would do well to start with a sperm bank where she can screen many donors for a good genetic baseline.

Sorry, males :-/

Replies from: polymathwannabe, James_Miller, passive_fist
comment by polymathwannabe · 2015-12-07T20:25:44.337Z · LW(p) · GW(p)

In your scenario, a transhumanist man would do the same with egg banks, and then rent a healthy womb.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T20:35:23.780Z · LW(p) · GW(p)

Also possible.

Actually, since we're genetically engineering anyway, we should be able to combine genetic material from two males or two females (or just clone, of course). And once an artificial womb gets developed you won't need to rent anything, um, living.

In any case, not too many prospects for dating :-/

"You and me baby ain't nothin' but mammals, so let's do it like they do on the Discovery Channel" suddenly acquires a whole new meaning X-D

Replies from: polymathwannabe, VoiceOfRa
comment by polymathwannabe · 2015-12-07T20:57:04.979Z · LW(p) · GW(p)

not too many prospects for dating

Why? People want intimacy for a thousand reasons other than breeding.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T20:58:21.163Z · LW(p) · GW(p)

Which is precisely why "let's genetically engineer our possible children" isn't a great start.

Replies from: James_Miller, Calien, None
comment by James_Miller · 2015-12-08T02:07:10.772Z · LW(p) · GW(p)

Let's start thinking about the appropriate lines now so that in ten years' time we (or those of you young enough to still have children a decade hence) will have the skills to win over appropriate mates.

Replies from: Lumifer
comment by Lumifer · 2015-12-08T02:47:21.840Z · LW(p) · GW(p)

This should be a separate thread: Best Pickup Lines in a Transhumanist Bar :-)

Replies from: James_Miller, Calien, James_Miller
comment by James_Miller · 2015-12-08T04:34:04.017Z · LW(p) · GW(p)

Are you whole body or just head?

Replies from: None
comment by [deleted] · 2015-12-08T05:33:12.133Z · LW(p) · GW(p)

In a universe where you have people of both classifications, that could become mildly rude.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-08T12:59:33.211Z · LW(p) · GW(p)

We do have both classifications. People who have whole body cryonics insurance and people who have head cryonics insurance.

Replies from: None
comment by [deleted] · 2015-12-09T01:19:31.769Z · LW(p) · GW(p)

I was picturing a universe in which the people were already unfrozen and healthy; it might be rude to ask things like "Is this your original set of limbs?"

But you are correct, and that didn't occur to me.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-09T12:42:03.348Z · LW(p) · GW(p)

My reading is heavily culture-dependent. Presently many women object to their partners signing up for cryonics. For a transhumanist who signs up for cryonics, it's valuable to screen for women who are okay with cryonics.

In a transhumanist bar that wouldn't be necessary. Asking "Are you whole body or just head?" with the goal of finding out someone's cryonics status presupposes that rejection of cryonics isn't a concern. That's what makes "Are you whole body or just head?" funny.

comment by Calien · 2015-12-15T07:45:04.141Z · LW(p) · GW(p)

I want to grow old and not die with you.

Replies from: Lumifer
comment by Lumifer · 2015-12-15T15:39:31.903Z · LW(p) · GW(p)

/blinks

I don't want to grow old.

Replies from: gjm
comment by gjm · 2015-12-15T16:26:29.153Z · LW(p) · GW(p)

Perhaps Calien takes "grow old" to mean "accumulate years and experience and memories" rather than "accumulate wear and tear and damage".

Replies from: Lumifer
comment by Lumifer · 2015-12-15T16:58:34.640Z · LW(p) · GW(p)

That's called "growing wise" :-P

Replies from: Calien
comment by Calien · 2015-12-16T11:18:38.762Z · LW(p) · GW(p)

gjm's interpretation is what I was going for. Chronological age only! (Warning: link to TVTropes) I wasn't sure how to keep the same form and still have it flow nicely.

comment by James_Miller · 2015-12-08T17:01:24.466Z · LW(p) · GW(p)

Yes, you should do this.

Replies from: Lumifer
comment by Lumifer · 2015-12-09T18:26:02.953Z · LW(p) · GW(p)

First, you should establish a Transhumanist Bar :-)

comment by Calien · 2015-12-14T12:03:22.784Z · LW(p) · GW(p)

Want children in maybe ten years, might work on me.

comment by [deleted] · 2015-12-08T05:22:51.533Z · LW(p) · GW(p)

That line might actually work on some people. It might work on me if I were more inclined to parent.

comment by VoiceOfRa · 2015-12-16T02:18:39.980Z · LW(p) · GW(p)

Actually, since we're genetically engineering anyway, we should be able to combine genetic material from two males or two females (or just clone, of course). And once an artificial womb gets developed you won't need to rent anything, um, living.

If we're assuming artificial wombs are widely used, humanity effectively becomes a eusocial species.

Replies from: Lumifer
comment by Lumifer · 2015-12-16T05:38:13.000Z · LW(p) · GW(p)

I don't know about that. I suspect that at this point things get really interesting and probably really unstable for a while :-/

comment by James_Miller · 2015-12-07T20:22:02.458Z · LW(p) · GW(p)

Let's assume that she has the typical desire to be married to the child's father.

Replies from: skeptical_lurker, Lumifer
comment by skeptical_lurker · 2015-12-07T20:26:49.500Z · LW(p) · GW(p)

And that her partner (if she is in a heterosexual relationship) wants children, or at least does not want to be cuckolded.

comment by Lumifer · 2015-12-07T20:30:59.518Z · LW(p) · GW(p)

Really see no reason to assume that an avant-garde transhumanist woman would stick to such traditional trappings of the old patriarchy :-P

Replies from: James_Miller
comment by James_Miller · 2015-12-07T20:40:44.105Z · LW(p) · GW(p)

Since the ratio of women/men willing to do this will be low, willing women will have lots of dating market power. It would be silly for them to not use this power to get a high quality mate/provider.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T20:45:18.791Z · LW(p) · GW(p)

Since the ratio of women/men willing to do this will be low

Why do you think so?

It would be silly for them to not use this power to get a high quality mate/provider

Provided she needs or wants one. And provided she wants a male one. I know lesbian families with lots of children.

Replies from: James_Miller
comment by James_Miller · 2015-12-07T21:48:09.282Z · LW(p) · GW(p)

Men are greater risk takers and are far more likely to be transhumanists.

comment by passive_fist · 2015-12-07T20:44:43.204Z · LW(p) · GW(p)

Sperm banks simply do not cater to transhumanists. They first and foremost screen donors for sperm count (to ensure that they can make the most money out of every stored sample). Sperm count isn't strongly correlated with intelligence.

After sperm count, important factors for sperm banks are: Physical health, height, and weight.

Plus, sperm donors are mostly a self-selected bunch, and I'd guess that men who are in no immediate need of money would not wake up in the morning thinking of donating sperm.

Finally, upbringing is probably a far more important factor than mere genetics; a wise mother would want to ensure availability of the father for childrearing.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2015-12-08T12:53:49.258Z · LW(p) · GW(p)

Plus, sperm donors are mostly a self-selected bunch, and I'd guess that men who are in no immediate need of money would not wake up in the morning thinking of donating sperm.

There are men who want to "spread their DNA" and therefore donate sperm for reasons besides money.

comment by Lumifer · 2015-12-07T20:49:10.505Z · LW(p) · GW(p)

important factors for sperm banks

Not that I've used them, but as far as I understand, sperm banks provide a fair amount of data on sperm donors, including education. If you stick with Ph.D.s, the baseline IQ level should be decent. Besides, sperm banks are a customer-oriented business. They will look for factors which women demand.

Replies from: passive_fist
comment by passive_fist · 2015-12-07T20:50:51.116Z · LW(p) · GW(p)

Exactly. Most women demand attributes that they themselves find attractive in mates e.g. height and other appearance-related factors. Transhumanists don't make up most of the female population.

Replies from: Lumifer
comment by Lumifer · 2015-12-07T20:57:06.372Z · LW(p) · GW(p)

Sure, but the great advantage of sperm banks is that you can easily filter a large number of possibilities.

At the lets-genetically-engineer-super-IQ level you'd probably want to start by paying the sperm bank for whole genome scans of several likely candidates.

comment by skeptical_lurker · 2015-12-07T19:14:52.231Z · LW(p) · GW(p)

I think civilisation is in danger even disregarding the Fermi paradox.

There's no need to wait five years to start a transhumanist dating service. Suppose you want to have genetically enhanced kids in ten years' time; presumably you would still want to date now. And if you are looking for a long-term relationship now, then you would want it to be with someone you could have kids with one day.

The biggest problem is that transhumanists are mostly male. I wonder if this will change, given that transhumanism is becoming increasingly mainstream?

comment by ESRogs · 2015-12-09T10:15:10.753Z · LW(p) · GW(p)

Gwern has written an article for Wired, allegedly revealing the true identity of Satoshi Nakamoto:

http://www.wired.com/2015/12/bitcoins-creator-satoshi-nakamoto-is-probably-this-unknown-australian-genius/

Replies from: MrMind, Mitchell_Porter, VincentYu, ChristianKl, ESRogs
comment by MrMind · 2015-12-10T08:21:07.142Z · LW(p) · GW(p)

Just a tangential question: am I the only one perfectly happy not to know who Satoshi really is?

Replies from: Lumifer, VoiceOfRa
comment by Lumifer · 2015-12-10T16:07:59.990Z · LW(p) · GW(p)

I am perfectly happy not to know who Satoshi is, but I also have a well-developed curiosity :-)

comment by VoiceOfRa · 2015-12-16T02:51:00.069Z · LW(p) · GW(p)

I wouldn't mind knowing myself. However, I don't think having Satoshi's identity publicly known would be good for Bitcoin.

comment by Mitchell_Porter · 2015-12-09T11:00:57.725Z · LW(p) · GW(p)

An anonymous source supplies Gwern with juicy facts about this man Wright; they see print after a few weeks, and then within hours his home is raided by Australian federal police. I am reminded that the source of the Watergate leaks was in fact the deputy director of the FBI...

Replies from: None
comment by [deleted] · 2015-12-09T22:38:50.179Z · LW(p) · GW(p)

A friend of mine who knows much more about cryptography and computers than I do points me to evidence that documents using the same public keys as Satoshi were back-dated and faked:

https://www.reddit.com/r/Bitcoin/comments/3w027x/dr_craig_steven_wright_alleged_satoshi_by_wired/cxslii7

I hardly know what to make of this myself, but she seems convinced.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-09T22:46:59.567Z · LW(p) · GW(p)

While we are on the topic of someone trying to fake Satoshi's identity, the LW account http://lesswrong.com/user/Satoshi_Nakamoto/overview/ is worth noting. It stopped posting after I linked it to an attempt to establish a fake identity for Satoshi. It might be useful to compare the stylometry of those posts with Wright's.
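
For anyone tempted to try, here is a crude, self-contained sketch of one simple stylometric signal: function-word frequencies compared by cosine similarity. This is my own illustration, not the method used in the Wired investigation, and the two corpus variables at the bottom are placeholders.

    # Crude stylometry: compare two texts by cosine similarity over
    # relative frequencies of common function words (a Burrows-style signal).
    import math
    from collections import Counter

    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                      "it", "for", "not", "with", "as", "but", "this"]

    def profile(text):
        words = text.lower().split()
        counts = Counter(words)
        total = max(len(words), 1)
        return [counts[w] / float(total) for w in FUNCTION_WORDS]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # lw_account_posts and wright_blog_posts are placeholder strings for the
    # two text corpora being compared.
    # similarity = cosine(profile(lw_account_posts), profile(wright_blog_posts))

Real attribution work needs far more text and far better features than this; the sketch only shows the shape of the computation.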

comment by VincentYu · 2015-12-11T11:29:42.893Z · LW(p) · GW(p)

From the linked Wired article:

The PGP key associated with Nakamoto’s email address and references to an upcoming “cryptocurrency paper” and “triple entry accounting” were added sometime after 2013.

Gwern's comment in the Reddit thread:

[...] this is why we put our effort into nailing down the creation and modification dates of the blog post in third-party archives like the IA and Google Reader.

These comments seem to partly refer to the 2013 mass archive of Google Reader just before it was discontinued. For others who want to examine the data: the relevant WARC records for gse-compliance.blogspot.com are in line 110789824 to line 110796183 of greader_20130604001315.megawarc.warc, which is about three-quarters of the way into the file. I haven't checked the directory and stats grabs and don't plan to, as I don't want to spend any more time on this.

NB: As for any other large compressed archive, if you plan on saving the data, then I suggest decompressing the stream as you download it and recompressing it into a seekable structure. Btrfs with compression works well, but blocked compression implementations like bgzip should also work in a pinch. If you leave the archive as a single compressed stream, then you'll pull all your hair out when you try to look through the data.
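
A minimal sketch of that download-decompress-recompress pipeline, assuming Python 3 and the htslib bgzip tool on PATH; the URL is a placeholder, not the archive's real location:

    # Decompress the single-stream gzip on the fly while downloading, then
    # re-compress into bgzip's blocked format so later reads can seek.
    import gzip
    import shutil
    import subprocess
    import urllib.request

    url = "https://example.org/greader_20130604001315.megawarc.warc.gz"  # placeholder

    response = urllib.request.urlopen(url)
    stream = gzip.GzipFile(fileobj=response)  # decompresses as bytes arrive
    with open("greader.warc.bgz", "wb") as out:
        bgzip = subprocess.Popen(["bgzip", "-c"], stdin=subprocess.PIPE, stdout=out)
        shutil.copyfileobj(stream, bgzip.stdin)
        bgzip.stdin.close()
        bgzip.wait()

Any blocked-compression tool would do here; bgzip is just the one the comment names.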

comment by ChristianKl · 2015-12-09T11:45:28.681Z · LW(p) · GW(p)

Gwern, what's your credence that Wright is Satoshi?

comment by ESRogs · 2015-12-11T08:26:42.043Z · LW(p) · GW(p)

Follow-up -- after we've all had some time to think about it, I think this is the best explanation for who this would-be SN is:

https://www.reddit.com/r/Bitcoin/comments/3w9xec/just_think_we_deserve_an_explanation_of_how_craig/cxuo6ac

comment by gjm · 2015-12-07T17:02:21.332Z · LW(p) · GW(p)

Bug report: the antikibitzer's toggle button (which appears at the top right of the browser window's content area) doesn't work correctly for me (on recent Firefox on Windows) because the loop that attempts to identify the antikibitzer stylesheet fails. It fails because an earlier stylesheet in the list (actually, the very first) has a null href.

A simple fix is to change the obvious line in antikibitzer.js to this:

// guard against stylesheets with a null href before calling indexOf
if (document.styleSheets[i].href && document.styleSheets[i].href.indexOf("antikibitzer") > 0)

but I make no guarantee that this is the fix the author of the code would prefer.

comment by [deleted] · 2015-12-10T11:49:55.902Z · LW(p) · GW(p)

Some uncomfortable questions I've asked myself lately:

  • Could you without intentionally listening to music for 30 days?

  • I recall being taught to argue towards a predetermined point of view in schools and extra-curricular activities like debating. Is that counterproductive or suboptimal?

  • Listening back to a recording I made of a therapy session when I was quite mentally ill, I feel amazed at just how much I have improved. I am appalled by the mode of thought of that young person. What impression do the people around me have that they won't discuss openly?

  • Aren't storm water drain explorers potentially mapping out critical infrastructure which could then be targeted more easily by terrorists? One way I see these things going is commercial drain tours. That way there would be a legitimised presence there and perhaps enhanced security.

  • something to be asked of academia

  • Imagine a person was abused for a large part of their childhood and is subsequently traumatised and mentally ill, and then, upon regaining greater functioning as an adult, decides to extort their abusive parents for money with the threat of exposing them while still counting on an inheritance, instead of simply going to the authorities and approaching a legal settlement (expecting that would cut off any pleasant relations). Are their actions unconscionable? What would you do in their situation?

  • If I went straight to a family member without preparing them in advance, would they consent to support my cryonics application?

  • Do most people really think like this?

  • The rate at which I come up with ideas that I feel are worthwhile business ventures is unmanageable. So, I'll take a leaf out of the EA Ventures method webpage by asking: what are three existing organizations that are doing similar things, and why aren't you joining them?

Replies from: OrphanWilde, ChristianKl, Jiro, passive_fist, Artaxerxes, Artaxerxes, VoiceOfRa
comment by OrphanWilde · 2015-12-10T15:35:31.574Z · LW(p) · GW(p)

Upvoting for applied learning: Previously these would each be their own comment; you asked what you were doing wrong, somebody mentioned the number of comments, and you appear to have updated your behavior.

comment by ChristianKl · 2015-12-10T12:34:38.032Z · LW(p) · GW(p)

upon regaining greater functioning as an adult, decides to extort their abusive parents for money with the threat of exposing them while still counting on an inheritance

Extortion is by definition illegal; on the other hand, making an informal settlement is quite okay. It depends a lot on the details.

comment by Jiro · 2015-12-10T15:25:12.323Z · LW(p) · GW(p)

Imagine a person was abused for a large part of their childhood and is subsequently traumatised and mentally ill, and then...

Withholding an inheritance from someone because you abused him and he dared to take you to court is unethical. So this "extortion" is being used as the only way to get compensation while avoiding being the victim of further unethical behavior.

So no, it's not unconscionable, although whether it's legally extortion would require asking a lawyer.

comment by passive_fist · 2015-12-13T00:06:44.263Z · LW(p) · GW(p)

Listening back to a recording I made of a therapy session when I was quite mentally ill, I feel amazed at just how much I have improved. I am appalled by the mode of thought of that young person. What impression do the people around me have that they won't discuss openly?

If you find yourself alternating between different psychological moods every few months, appalled by how you used to think, you may be suffering from bipolar disorder. Since you go to a therapist I assume that if you have it, it's been diagnosed by now, so I'm mostly saying this for the benefit of people reading this.

comment by Artaxerxes · 2015-12-12T03:46:03.446Z · LW(p) · GW(p)

I recall being taught to argue towards a predetermined point of view in schools and extra-curricular activities like debating. Is that counterproductive or suboptimal?

This has been talked about before. One suggestion is to not make it a habit.

comment by Artaxerxes · 2015-12-12T03:37:46.189Z · LW(p) · GW(p)

Could you without intentionally listening to music for 30 days?

Can you rephrase this?

comment by VoiceOfRa · 2015-12-16T04:05:25.895Z · LW(p) · GW(p)

Imagine a person was abused for a large part of their childhood and is subsequently traumatised and mentally ill, and then, upon regaining greater functioning as an adult, decides to extort their abusive parents for money with the threat of exposing them while still counting on an inheritance, instead of simply going to the authorities and approaching a legal settlement (expecting that would cut off any pleasant relations). Are their actions unconscionable? What would you do in their situation?

Depends on what you mean by "abuse". A lot of what's been called "child abuse", e.g., spanking, isn't. On the other hand, legitimate abuse happens as well.

Replies from: None
comment by [deleted] · 2015-12-17T19:15:31.753Z · LW(p) · GW(p)

What is an example of 'legitimate abuse'?

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-17T19:43:27.157Z · LW(p) · GW(p)

This for example.

comment by cleonid · 2015-12-07T14:58:04.320Z · LW(p) · GW(p)

From Omnilibrium:

Replies from: Viliam, username2, passive_fist
comment by Viliam · 2015-12-07T17:03:35.003Z · LW(p) · GW(p)

There are intelligent people speaking, without attacking each other. When they add facts, I am going to suppose those facts are likely true. That's already better than 99.99% of the internet.

Yet there seems to be no conclusion, and even the analysis seems rather shallow.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-08T13:04:27.414Z · LW(p) · GW(p)

Yet there seems to be no conclusion

What do you mean by "conclusion"?

Replies from: Viliam
comment by Viliam · 2015-12-08T22:44:04.551Z · LW(p) · GW(p)

Well, currently it seems to me like this:

A new topic.
Person X says something smart.
Person Y says something smart.
Person Z says something smart.
Everyone moves to the next topic.

That's okay if your goal is signalling smartness. It's okay-ish if your goal is to have more information about the topic. I haven't read much, but the debating style still feels adversarial -- people are intelligent and polite, but they still give arguments for one side or for the other, so at the end of the day you still get Team Pro and Team Con.

The missing part is someone saying "these are arguments for, these are arguments against; after weighing them carefully, this seems like an optimal solution." (And in Aumann's ideal world, all participants of the debate would agree.)

Why? Because sometimes we ask questions when we need answers. If you ask "Is it better to do X, or to do Y?", and you receive three smart answers supporting X, and three smart answers supporting Y, at the end of the day you still don't know whether you should do X or Y. (Though if you make your decision, using whatever means, now you have three great arguments to support it. There is even a button on the website that will filter them for you.)

Replies from: ChristianKl
comment by ChristianKl · 2015-12-08T23:04:44.170Z · LW(p) · GW(p)

If you ask "Is it better to do X, or to do Y?", and you receive three smart answers supporting X, and three smart answers supporting Y, at the end of the day you still don't know whether you should do X or Y.

The goal of reading a site exploring a political question shouldn't be that the reader comes away with: "I don't need to think myself, the community decided that X is right, so I support X because I want to support what my tribe has chosen to support."

Ideally the person leaves with a mind that's more open than when they came.

Replies from: Viliam
comment by Viliam · 2015-12-10T08:26:02.418Z · LW(p) · GW(p)

I don't need to think myself, the community decided that X is right

This is how I accept 99% of information about the world. I have never seen an atom, never been to Paris, yet I still believe they exist. A community I consider trustworthy about the topic has decided that they exist, and I don't have time to personally verify everything.

Getting more inputs for your independent research is great if you do have time and other resources necessary to do the research. Making the inputs public is also good, because some of the participants may have the time. But inputs without conclusion is still an incomplete work.

Ideally the person leaves with a mind that's more open than when they came.

Is adjusting probabilities towards 50% a good thing?

Replies from: ChristianKl
comment by ChristianKl · 2015-12-10T10:25:41.578Z · LW(p) · GW(p)

Is adjusting probabilities towards 50% a good thing?

I don't think that openness is mainly about probabilities, but most people are heavily overconfident about most of their political positions, so moving the probabilities closer to 50% is a good thing.

The world would be a much better place if more people would respond to "Is policy A better than policy B?" with "I don't know" instead of "Policy A is better because my tribe says it's better."

I have never seen an atom, never been to Paris, yet I still believe they exist. A community I consider trustworthy about the topic has decided that they exist, and I don't have time to personally verify everything.

Instead of "are there atoms?", let's ask "is helium a molecule?". Thomas Kuhn wrote about the issue:

An investigator who hoped to learn something about what scientists took the atomic theory to be asked a distinguished physicist and an eminent chemist whether a single atom of helium was or was not a molecule. Both answered without hesitation, but their answers were not the same. For the chemist the atom of helium was a molecule because it behaved like one with respect to the kinetic theory of gases. For the physicist, on the other hand, the helium atom was not a molecule because it displayed no molecular spectrum. Presumably both men were talking of the same particle, but they were viewing it through their own research training and practice.

You get a different answer to "is helium a molecule?" depending on whom you ask. Does that mean you should adjust your probability towards 50% on the question of whether helium is a molecule? No; it wouldn't make any sense to average the physicist's 100% certainty that helium is not a molecule with the chemist's 100% certainty that it is.

I would want participants who read a political discussion to come away thinking that there are multiple ways of looking at the question under debate.

But inputs without conclusion is still an incomplete work.

That's basically rejecting skepticism. Skepticism is about being okay with the fact that you don't have a conclusion to every question. Keeping questions open for years is important for understanding them better.

Replies from: gjm
comment by gjm · 2015-12-10T11:22:31.623Z · LW(p) · GW(p)

is helium a molecule?

That's a very special kind of question: one that's almost entirely about definitions of words. It shouldn't be a surprise to anyone here that different people or groups use words in different ways, and therefore that questions about definitions often don't have a definite answer.

Many many questions have some element of this (e.g., if some etymology enthusiast insists that an "atom" must be indivisible then the things most people call atoms aren't "atoms" for him, and for all we know there may actually be no "atoms") and that's important to know. But this doesn't look to me like a good model for political disagreement; word definitions aren't usually a big part of political disagreements.

(What is usually a big part of those disagreements is divergence between different people's or groups' values, which can also lead to situations where there's no such thing as The Right Answer.)

That's basically rejecting skepticism.

Unless you allow the "conclusion" to be something like "We don't yet have enough information to know whether A or B is the better course of action", or "A is almost certainly better if what you mostly care about is X, and B is almost certainly better if what you mostly care about is Y", or "The dispute between A and B is mostly terminological". All of which I'm guessing Viliam would be fine with; it looks to me like what he's unsatisfied with is debates that basically consist of some arguments for A and some arguments for B, with no attempt to figure out what conclusion -- which might well be a conclusion with a lot of uncertainty to it -- should follow from looking at all those arguments together.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-12-10T15:56:22.584Z · LW(p) · GW(p)

That's a very special kind of question: one that's almost entirely about definitions of words.

On one level, yes, this is just a definition issue.

On a deeper level no, because particular answers to such questions place the phenomena into a specific framework. Notice that two answers to "is helium a molecule" arose not because two people consulted two different dictionaries. They arose because these two people are used to thinking about molecules in very different ways -- both valid in their respective domains.

In that sense this "special kind of question" could be about defining terms, but it could also be about the context within which one examines the issue.

Replies from: gjm
comment by gjm · 2015-12-10T17:20:32.407Z · LW(p) · GW(p)

I agree that the disagreement about whether a helium atom should be considered a molecule is related to what mental framework one slots the question into. I don't think this in any way stops it being a disagreement about definitions of words. (In particular, for the avoidance of doubt, I am not taking "X is a disagreement about definitions" to imply "X is trivial" or anything of the kind.)

The physicist and chemist in Kuhn's story could -- I don't know whether they would if actually asked -- both have said something like this: "It turns out that there are a few different notions close enough together that we use the word "molecule" for all of them, and they don't all agree about what to call a helium atom". Again, this is far from what happens in most political disagreements.

For the avoidance of doubt again, I am not denying that some political disagreements are like this. For instance, there are cases where two sides would both claim to be maximizing equality, but one side means "treat everyone exactly the same" and another means "treat everyone the same but compensate for inequalities X, Y, and Z elsewhere". I suggest that this is actually best considered a disagreement about values rather than about definitions, though. (Each group prefers to define "equality" in a particular way because they think what-they-call-equality is more important than what-the-other-guys-call-equality.)

Replies from: Lumifer
comment by Lumifer · 2015-12-10T17:37:10.704Z · LW(p) · GW(p)

this is actually best considered a disagreement about values rather than about definitions

Well, yes, because in the political context "framework" very often means "value framework". However both definitions and frameworks matter -- it is still the case that the argument will get nowhere until people agree on the meaning of the words they are using.

Replies from: gjm
comment by gjm · 2015-12-10T18:20:45.681Z · LW(p) · GW(p)

It feels as if you may be trying to correct a mistake I'm not making. I agree that definitions matter. As I said two comments upthread:

for the avoidance of doubt, I am not taking "X is a disagreement about definitions" to imply "X is trivial" or anything of the kind.

Replies from: Lumifer
comment by Lumifer · 2015-12-10T18:56:40.735Z · LW(p) · GW(p)

It feels as if you may be trying to correct a mistake I'm not making

Nope, you just have all your defensive shields up and at full power :-) I am agreeing with you here.

Replies from: gjm
comment by gjm · 2015-12-10T21:13:22.277Z · LW(p) · GW(p)

Full power is more dramatic than that :-).

My experience is that if someone begins "Well, yes" rather than, say, just "Yes", their intention is generally something less positive than simply agreeing with you. ("Well, yes. What kind of idiot would need that to be said explicitly?" "Well, yes, but you're forgetting about X, Y, and Z." "Well, yes, I suppose so, but I don't think that's actually quite the right question.")

Replies from: Lumifer
comment by Lumifer · 2015-12-10T22:30:35.844Z · LW(p) · GW(p)

I'll work on augmenting my expressions of enthusiasm :-)

comment by ChristianKl · 2015-12-10T12:30:18.958Z · LW(p) · GW(p)

That's a very special kind of question: one that's almost entirely about definitions of words.

For Thomas Kuhn it's an issue of different paradigms.

When we look at the question of atoms, saying "Atoms exist." likely means "Thinking of matter as being made up of atoms is a valuable paradigm."

Lavoisier came up with describing oxygen as a new element. In doing so he rejected the paradigm that chemistry should analyse principles like phlogiston, and instead thought of matter as being made up of atoms.

Calling oxygen dephlogisticated air is more than just an issue of calling it a different name. It's an issue at the heart of the conflict between two scientific paradigms.

Both the phlogiston theory and the oxygen theory successfully predict that if you put a glass over a candle, the candle will go out. The oxygen theory says that it's because there is no oxygen left in the air. The phlogiston theory says that it's because the air is so full of phlogiston that it can't take any more.

Phlogiston chemistry was a huge improvement over the chemistry of the four elements, which neither explained nor predicted that the candle would go out.

Understanding the different paradigms for looking at a political issue is often an important part of having a political debate. It moves the issue beyond tribe A vs. tribe B. Of course you can have a tribe A vs. tribe B political discussion, but often that's not the kind of political debate that I like to have.

In reality, the kind of conclusions that parliaments draw from political debate are laws that fill hundreds of pages and specify all sorts of little details that happen to be important. For a body like the GBS, policy documents that specify details and come to a conclusion make sense, but I don't think that's a good goal for a discussion on a forum like Omnilibrium.

comment by username2 · 2015-12-08T04:12:36.368Z · LW(p) · GW(p)

A question about Omnilibrium. The FAQ states

As it turns out, our current membership does not fit very well into the “left-wing” and “right-wing” boxes, so we adopted different labels for the observed political clusters.

So what beliefs generally cluster the optimates and populares? I've been wondering this, and it seems fairly opaque as an outside observer, but I'm sure that people who regularly use the site have picked up on it.

Replies from: cleonid
comment by cleonid · 2015-12-08T14:44:54.660Z · LW(p) · GW(p)

There are two noticeable differences between the optimate/populare and the traditional left-wing/right-wing politics:

1) Traditional politics is much better approximated by a binary. A person's views on one significant issue, such as feminism, pretty accurately predict their positions on foreign policy, economics and environmental issues. By comparison, optimate/populare labels have much less predictive power. While there is a significant correlation between populare (optimate) and left (right)-wing views on economics and foreign policy, both optimates and populares are much more likely to cross ideological lines on individual issues.

2) On average, both populares and optimates are more libertarian and less religious than the traditional left and right.

comment by passive_fist · 2015-12-13T00:10:16.614Z · LW(p) · GW(p)

I'm not sure what VoA should do beyond what it already does. It already provides a wide range of free programming to the world in a bunch of different languages. The programming - so far as I've seen it - is terrible and completely unconvincing for foreigners. On the other hand, home-grown youtube networks like The Young Turks seem to already have a large following from non-American viewers, despite being targeted towards Americans, and seem to do a much more effective job in exporting Western values to people who don't already believe in them.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-13T00:59:57.889Z · LW(p) · GW(p)

Investigative journalism costs money. Even in the US it's hard to fund it in a for-profit way as shown by outlets like the New York Times employing fewer investigative journalists. VoA should fund investigative journalists in other countries.

The Young Turks aren't doing genuine news. They comment on what various other people report and do little research into the subjects they cover.

To the extent that what VoA is producing is terrible, they should produce better content. Focus on material that gets shared in the target nations via social media. I see stories from RussiaToday from time to time in my Facebook feed. There's no good reason why VoA shouldn't be able to do the same thing in the countries it targets.

Pay local bloggers with regime critical views to write stories. If needed allow them to publish stories under a pseudonym if the story would get them thrown in prison otherwise.

Replies from: passive_fist
comment by passive_fist · 2015-12-13T03:03:02.020Z · LW(p) · GW(p)

Investigative journalism costs money.

We aren't talking about journalism here. We are explicitly talking about propaganda. Or counter-propaganda, if you prefer.

The Young Turks aren't doing genuine news. They comment on what various other people report and do little research into the subjects they cover.

Much like the rest of the media. And, again, genuine news is off-topic. Although they do tend to bring into focus some subjects that the rest of the media is hesitant to cover.

Pay local bloggers with regime critical views to write stories

No use if they get blocked, thrown in prison, etc. And even if not, it would most likely turn out to be very counter-productive if it emerged that anti-government bloggers were paid off.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-13T10:54:09.968Z · LW(p) · GW(p)

We aren't talking about journalism here. We are explicitly talking about propaganda. Or counter-propaganda, if you prefer.

Not really. Alex Jones speaks critically about the US system, but the factual background of what he says is poor. While he does have a relatively large audience, he doesn't strongly affect the political system.

To do effective propaganda you need to actually engage with the reality on the ground. Michael Hastings couldn't have written an article that forced Stanley McChrystal to resign without doing investigative reporting.

Much like the rest of the media. And, again, genuine news is off-topic. Although they do tend to bring into focus some subjects that the rest of the media is hesitant to cover.

Quite a lot of mainstream reporters do pick up the phone to call people to research a story. The Young Turks just seem to pick up a news story and then have a few people sit together to speak about what they think about that story.

Replies from: Viliam
comment by Viliam · 2015-12-14T09:56:21.821Z · LW(p) · GW(p)

Specifically, Russian propaganda in my country (I don't know if it works the same everywhere) usually markets itself as "the news for people who are not satisfied with propaganda and censorship in mainstream media". Obviously, any information inconvenient for Putin's regime is called "propaganda", and the fact that Putin's propaganda is not published in our mainstream media is called "censorship". The target group seems to be people believing in conspiracy theories, and young people.

Essentially, they are trying to role-play Assange, while publishing the same stuff you would find on the official Russian TV. Plus the conspiracy theories, because everything that puts the West in a bad light is a bonus. (Yes, that includes even theories like "vaccination causes autism", because vaccination = big pharma = capitalism = the West.)

One tool in the toolset is providing a lot of links to "suppressed information" and encouraging readers to do their "research" for themselves. Basically, instead of one propaganda website, you have a dozen websites linking to each other, plus links to some conspiracy theories written by third-party bloggers. And it works, because people who follow the links do have the feeling that they did research, that they are better informed than the rest of the population, and that there is a lot of important information that is censored from the official media. If you ever had an applause light of "the internet will bring freedom of speech and make the old media obsolete", it feels like you are in the middle of it when you read that stuff.

So, having a network of websites debunking Russian propaganda -- using the engaging language of blogs, instead of the usual boring language of newspapers -- would provide some balance. (Of course it would only take 10 seconds for all the "independent" websites to declare that all these websites are paid by evil Americans, but they already keep saying that about everything that opposes them.)

comment by Panorama · 2015-12-09T21:03:39.166Z · LW(p) · GW(p)

Paradox at the heart of mathematics makes physics problem unanswerable

Gödel’s incompleteness theorems are connected to unsolvable calculations in quantum physics.

Undecidability of the Spectral Gap (full version) by Toby Cubitt, David Perez-Garcia, Michael M. Wolf

We show that the spectral gap problem is undecidable. Specifically, we construct families of translationally-invariant, nearest-neighbour Hamiltonians on a 2D square lattice of d-level quantum systems (d constant), for which determining whether the system is gapped or gapless is an undecidable problem. This is true even with the promise that each Hamiltonian is either gapped or gapless in the strongest sense: it is promised to either have continuous spectrum above the ground state in the thermodynamic limit, or its spectral gap is lower-bounded by a constant in the thermodynamic limit. Moreover, this constant can be taken equal to the local interaction strength of the Hamiltonian. This implies that it is logically impossible to say in general whether a quantum many-body model is gapped or gapless. Our results imply that for any consistent, recursive axiomatisation of mathematics, there exist specific Hamiltonians for which the presence or absence of a spectral gap is independent of the axioms. These results have a number of important implications for condensed matter and many-body quantum theory.
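
For readers outside physics, here is a minimal statement of the quantity at issue (standard textbook definitions, not taken from the paper itself):

    % Spectral gap of a finite-size Hamiltonian H_L: the energy difference
    % between the ground state and the first excited state.
    \Delta(H_L) = \lambda_1(H_L) - \lambda_0(H_L)
    % "Gapped":  \liminf_{L \to \infty} \Delta(H_L) \ge c > 0  (thermodynamic limit)
    % "Gapless": continuous spectrum above the ground state as L \to \infty
    % The paper shows no algorithm can decide, for all such H, which case holds.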

Replies from: Panorama
comment by NancyLebovitz · 2015-12-12T14:25:14.208Z · LW(p) · GW(p)

Extreme Self-Tracking

Man has himself MRIed twice a week for a year and a half, plus tracking a lot about his life. The data mining is still going on, but at least it's been shown that (probably) people's connectomes change pretty rapidly.

I'm also posting this to the media thread because I'm not sure where it's more likely to be seen.

Replies from: None, None
comment by [deleted] · 2015-12-13T08:51:59.801Z · LW(p) · GW(p)

Calling what was mapped here a 'connectome' is REALLY stretching it. When they make those graphs of parcels connected to each other, what they're doing is just measuring the correlation between activity in different parcels of brain, as revealed by fMRI (which is itself one step removed from activity, measuring the short-term fluctuations in blood flow caused by energy requirements), and drawing a 'connection' when the coefficient is high enough. Correlation is not the same thing as connection.

I do note that there was diffusion tensor imaging (which shows you the average orientation of fibers in any given voxel [and showed an unusual crossing mixed fiber feature in a spot of his corpus callosum and will probably show similar oddities throughout the brain in any given human] ) and I will try to get at that information once I am past a paywall later on, but the repeated MRIs appear to be fMRIs.

Replies from: None
comment by [deleted] · 2015-12-18T16:15:37.491Z · LW(p) · GW(p)

I don't think the paper is paywalled: link

MRIs: a lot of different scans including 15 T1 and T2 weighted structural scans; 19 diffusion-weighted scans. fMRI was mostly resting state (100), but also included various tasks such as n-back (15x), motion discrimination/stop signal (8x), object localiser (8x), verbal working memory localiser (5x), spatial WM (4x), breath holding (18x).

Replies from: None
comment by [deleted] · 2015-12-19T11:44:06.786Z · LW(p) · GW(p)

Oops! In any case, I'm back from my situation-with-effectively-only-mobile-internet-access and have the paper now.

The repeated scans were indeed fMRIs measuring correlation of metabolic activity (a good proxy for neural activity) under various conditions. They made one diffusion tensor map from all their diffusion data (multiple scans). They saw correlations between a fiber-tract map generated from the diffusion data (you plop down seed points in the cortex and other places and let fibers follow the main directions of diffusion) and their various activity-correlation maps. Correlation was strongest for areas very close to each other on the brain and weak for longer fibers, especially inter-hemisphere fibers, quite possibly because the tractography has a harder time tracing those. The diffusion data also tended to show denser connections where functional correlations were stronger, though the instantaneous state can change the activity correlation quite a bit even though the white-matter fiber tracts are not going to change much on fast timescales. The fact that correlations differ across activities illustrates that day-to-day changes don't need to lie entirely in the gross physical structure that shows up on scans of this type.

The actual layout of fibers at this coarse level of detail is one of several things that contribute to activity correlations, along with chemistry, the actual engagement of those tract fibers for a particular activity in a particular state, and all the fine molecular twiddling and potentiation at synapse scales.

comment by [deleted] · 2015-12-18T16:07:24.929Z · LW(p) · GW(p)

Not only is data mining still going on by the group who published the paper, but Russ Poldrack (first author and subject of the study) is a very vocal proponent of open science: data associated with this publication have been made freely available for anyone else as well: openfmri.org

Also see this blog post where he discusses the creation of an open analysis platform (and the challenges in setting up analysis pipelines that are reproducible by others).

comment by passive_fist · 2015-12-07T20:24:33.736Z · LW(p) · GW(p)

Interesting article on vox (not a new one, but it's the first time I've seen it and I thought I'd share; apologies if it's been featured here before) on 'how politics makes us stupid': http://www.vox.com/2014/4/6/5556462/brain-dead-how-politics-makes-us-stupid

tl;dr: The smarter you are, the less likely you are to change your mind on certain issues when presented with new information, even when the new information is very clearly, simply, and unambiguously against your point of view.
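
For context, the test in the study presents a 2x2 outcome table that has to be read as rates rather than raw counts. A sketch in Python (the counts below are illustrative, roughly the kind used in the experiment):

    # A 2x2 table of the kind used in the study (counts are illustrative).
    # The raw counts mislead because the two groups have different sizes;
    # only the improvement *rates* answer the question.
    improved = {"treated": 223, "untreated": 107}
    worsened = {"treated": 75, "untreated": 21}

    for group in ("treated", "untreated"):
        total = improved[group] + worsened[group]
        rate = 100.0 * improved[group] / total
        print("%s: %d of %d improved (%.0f%%)" % (group, improved[group], total, rate))
    # treated: 223 of 298 improved (75%); untreated: 107 of 128 improved (84%)
    # More treated subjects improved in absolute terms, yet at a lower rate.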

Replies from: Lumifer, VoiceOfRa, None, Dias
comment by Lumifer · 2015-12-08T15:30:16.494Z · LW(p) · GW(p)

The smarter you are, the less likely you are to change your mind on certain issues when presented with new information

In an adversarial setting -- e.g. in the middle of culture warfare -- this is an entirely valid response.

If you just blindly update on everything and I control what evidence you see, I can make you believe anything with arbitrarily high credence. Note that this does not necessarily involve any lying, just proper filtering.
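
A toy sketch of that filtering effect (invented numbers; the coin below really is fair, and the reporter never lies, only drops tails):

    # A naive updater treats every reported flip as a random sample.
    # The reporter flips a fair coin but only forwards the heads -- no lies,
    # just filtering -- and the updater becomes sure the coin is heads-biased.
    import random

    posterior = 0.5  # prior P(coin is heads-biased with p=0.8) vs fair (p=0.5)
    reported = 0
    while reported < 50:
        heads = random.random() < 0.5  # the coin really is fair
        if not heads:
            continue                   # tails are silently dropped
        reported += 1
        num = posterior * 0.8          # Bayes update, treating the report
        posterior = num / (num + (1 - posterior) * 0.5)  # as unfiltered evidence

    print("P(biased) after 50 reported heads: %.10f" % posterior)  # ~0.9999999999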

Replies from: passive_fist
comment by passive_fist · 2015-12-08T18:41:14.067Z · LW(p) · GW(p)

That's just rationalization. Again, even in the context of a simple hypothetical example with very clear and unfiltered evidence, participants were not willing to change their minds. I suggest you look at the actual study.

Replies from: Lumifer
comment by Lumifer · 2015-12-08T18:47:27.854Z · LW(p) · GW(p)

What is just rationalization? It seems pretty obvious to me that if your stream of evidence is filtered in some way, you should be very wary about updating on it.

Replies from: passive_fist
comment by passive_fist · 2015-12-08T23:50:14.703Z · LW(p) · GW(p)

Yes, but that does not apply to this study; the participants weren't even willing to acknowledge the statistical results when those disagreed with their point of view, let alone change their minds.

As for your point about having evidence selectively presented: it's easy to discard all information that disagrees with your worldview. What's hard is actually changing your mind. If you believe that there is a 'culture war' going on, with filtering of evidence and manipulation from all sides, the rational response would be to look at all evidence skeptically, not just the evidence that disagrees with you. Yet in that study, participants had no problem accepting statistical results that agreed with them, or that they didn't have strong political opinions about. And, importantly, this behaviour got worse for smarter people.

Replies from: Lumifer
comment by Lumifer · 2015-12-09T02:50:08.064Z · LW(p) · GW(p)

I'm not much interested in that particular study. I'm discussing your tl;dr which is

The smarter you are, the less likely you are to change your mind on certain issues when presented with new information

You, clearly, think this is bad. I, on the contrary, think that in certain situations -- to wit, when your stream of evidence is filtered -- NOT updating on new information is a good idea.

I feel this is a more interesting issue than going into the details of that study.

comment by VoiceOfRa · 2015-12-16T02:25:55.609Z · LW(p) · GW(p)

The smarter you are, the less likely you are to change your mind on certain issues when presented with new information, even when the new information is very clearly, simply, and unambiguously against your point of view.

Also, as George Orwell said "There are some ideas so absurd that only an intellectual could believe them".

comment by [deleted] · 2015-12-08T07:03:24.485Z · LW(p) · GW(p)

While that is the way Ezra Klein is interpreting it, I don't think that's exactly right. It's not that smart people are less likely to change their minds; it's that smart people who are also partisan are less likely to change their minds. The combination of intelligence and closed-mindedness is dangerous; I would agree. But I believe intelligence is correlated with open-mindedness, so this is either a very narrow effect (which is what Ezra Klein seems to be suggesting) or an artifact of the study design.

Replies from: None, ChristianKl
comment by [deleted] · 2015-12-08T20:00:50.106Z · LW(p) · GW(p)

Actually, never mind for part of this. I had assumed they were using the median to divide between conservative and liberal, in which case people who identified as moderate would be thrown out, but they're using the mean, which is most likely a number in between the possible options, so everybody gets included. Moderates are included with either liberals or conservatives; I'm not sure which.
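
To see the difference with made-up numbers (a hypothetical 7-point ideology item):

    import statistics

    scores = [1, 2, 3, 4, 4, 4, 5, 6, 7, 7]  # invented 7-point ideology ratings

    median = statistics.median(scores)  # 4.0 -- three respondents sit exactly on it
    mean = statistics.mean(scores)      # 4.3 -- falls between response options

    tied = sum(s == median for s in scores)
    below = sum(s < mean for s in scores)
    above = sum(s > mean for s in scores)
    print("median split: %d respondents tied at %.1f must be dropped or assigned" % (tied, median))
    print("mean split: %d below vs %d above %.1f -- everybody gets included" % (below, above, mean))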

comment by ChristianKl · 2015-12-08T12:13:51.288Z · LW(p) · GW(p)

I don't think open-mindedness is the same as the ability to get the math right for emotionally charged topics. The ability to get the math right in contexts like that is part of what Keith Stanovich wants to measure with the rationality index.

comment by Dias · 2015-12-08T00:22:18.596Z · LW(p) · GW(p)

Unfortunately in writing the article Vox themselves seem to have fallen prey to some of the same stupidity; if you're familiar with Vox's general left-wing sympathies you'll be unsurprised that the examples of stupidity used in the article are overwhelmingly from right-wing sources. If you really want to improve people's thinking, you need to focus on your own tribe at least as much as the enemy tribe.

I previously wrote about this here.

Replies from: passive_fist
comment by passive_fist · 2015-12-08T00:37:15.081Z · LW(p) · GW(p)

The example they give is actually anti-gun-control (it is a contrived example, of course), and they repeatedly mention that the biases in question affect individuals who identify as left-wing as well as individuals who identify as right-wing.

If you really want to improve people's thinking, you need to focus on your own tribe at least as much as the enemy tribe.

Why? I looked at your linked article and the two articles it links to and I can't find any proof that doing what you say would result in fewer disagreements than not doing that.

comment by Furcas · 2015-12-11T21:11:47.873Z · LW(p) · GW(p)

World's first anti-ageing drug could see humans live to 120

Anyone know anything about this?

The drug is metformin, currently used for Type 2 diabetes.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2015-12-11T22:50:42.545Z · LW(p) · GW(p)

It seems like the drug trial is funded by The American Federation for Aging Research (a nonprofit). The likelihood of success isn't high, but one of the core reasons for running the trial seems to be to make it the first anti-aging drug trial and to have the FDA develop a framework for that purpose.

3,000 patients who are between 70 and 80 years old at the start of the study. Metformin is cheap to produce, so the trial isn't too expensive for the nonprofit that funds it.

comment by Lumifer · 2015-12-11T21:54:10.789Z · LW(p) · GW(p)

There is discussion on Hacker News. tl;dr: Don't hold your breath.

comment by Panorama · 2015-12-12T20:27:58.415Z · LW(p) · GW(p)

Please, not another bias! An evolutionary take on behavioural economics by Jason Collins

So, I want to take you to a Wikipedia page that I first saw when someone tweeted that they had found “the best page on the internet”. The “List of cognitive biases” was up to 165 entries on the day I took this snapshot, and it contains most of your behavioural science favourites … the availability heuristic, confirmation bias, the decoy effect – a favourite of marketers, the endowment effect and so on ….

But this page, to me, points to what I see as a fundamental problem with behavioural economics.

Let me draw an analogy with the history of astronomy. In 1500, the dominant model of the universe involved the sun, planets and stars orbiting around the earth.

Since that wasn’t what was actually happening, there was a huge list of deviations from this model. We have the Venus effect, where Venus appears in the evening and morning and never crosses the night sky. We have the Jupiter bias, where it moves across the night sky, but then suddenly starts going the other way.

Putting all the biases in the orbits of the planets and sun together, we end up with a picture of the orbits that looks something like this picture – epicycles on epicycles.

But instead of this model of biases, deviations and epicycles, what about an alternative model?

The earth and the planets orbit the sun.

Of course, it’s not quite as simple as this picture – the orbits of the planets around the sun are elliptical, not circular. But, essentially, by adopting this new model of how the solar system worked, a large collection of “biases” was able to become a coherent theory.

Behavioural economics has some similarities to the state of astronomy in 1500 – it is still at the collection of deviation stage. There aren’t 165 human biases. There are 165 deviations from the wrong model.

So what is this unifying theory? I suggest the first place to look is evolutionary biology. Human minds are the product of evolution, shaped by millions of years of natural selection.

Replies from: Good_Burning_Plastic, None, ChristianKl
comment by Good_Burning_Plastic · 2015-12-14T17:15:42.031Z · LW(p) · GW(p)

Wake me up when evolutionary biology can predict all those 165 things from first principles and very little input, the way modern astronomy can predict the motion of the planets.

comment by [deleted] · 2015-12-14T07:15:12.348Z · LW(p) · GW(p)

I would agree that a collection of biases points to a need for a theory, but I don't think such a theory is likely to be central to the economics model simply because those deviations are irrelevant in a large number of cases. Simple rational expectations can be quite predictive of human behavior in many cases even though it is clearly completely absurd. Think of the relationship between quantum mechanics and relativity. Relativity doesn't seem to fit at all in any reasonable way into quantum mechanics, and yet relativity is quite useful and accurate for problems at the atomic level and above.

Jason Collins' reasoning can be used for almost any scientific theory imaginable. If you examine any scientific discipline closely enough, you will find deviations which don't fit the standard model. But the existence of deviations does not necessarily prove the need for a new model; particularly if those deviations do not appear to be central to the model's primary predictions. I would say a rigorously tested theory of how cognitive biases develop and are maintained may provide some useful insights into economics, but that it's unlikely that they will disprove the basic model of supply and demand.

There are other issues in that essay. Present bias isn't normally considered a bias. It's referred to in economics as temporal preferences. Hyperbolic discounting has from its conception been considered an issue of preference, and only later as one of rationality. He then discusses conspicuous consumption in the context of mating signals except that isn't a new idea either. Economics already has a theory of signalling that roughly matches with what he is referring to, and they've already considered social status as a type of signal, and that conspicuous consumption is used to signal social status. That also isn't an issue of rationality, but one of preference.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-12-14T17:34:45.703Z · LW(p) · GW(p)

Relativity doesn't seem to fit at all in any reasonable way into quantum mechanics, and yet relativity is quite useful and accurate for problems at the atomic level and above.

General relativity doesn't seem to fit at all in any reasonable way into quantum mechanics, and it is way overkill for problems at the atomic level (and for a sizeable fraction of problems at the planetary level, too), where special relativity (which does fit with quantum mechanics, save for a few theoretical loose ends that few non-mathematicians have reasons to care about) suffices.
comment by ChristianKl · 2015-12-12T21:41:13.017Z · LW(p) · GW(p)

I think that the fact that Wikipedia has a list of 165 cognitive biases says more about Wikipedia than it says about behavioural economics.

The core idea from Kahneman is that humans use heuristics to make decisions.

Evolution certainly affected human cognition, but most designs for intelligent agents don't produce intelligent agents. The space of heuristic sets that produce intelligent agents is small. It's not clear that you can make an intelligent agent out of something like a neural net without it engaging in something like the availability heuristic. Confirmation bias isn't substantially different from the availability heuristic in action.

When Google's dreaming neural nets reproduce quirks of the human brain, it's hard to argue that those quirks exist because they provide advantages in sexual competition.

So what does an evolutionary approach tell us about the human mind? For a start, it tells us something about our objectives.

That's basically saying Darwin was wrong and the critics who objected to organisms evolving without objectives were right. Darwin wasn't controversial because he invented evolution; Lamarck already did that decades before Darwin. Darwin was controversial because he proposed to get rid of teleology.

Economics makes errors because it assumes that humans have objectives. You don't fix that by explaining how humans have different objectives. You fix it by looking at the heuristics of human beings and also studying heuristics of effective decision making in general.

Replies from: Viliam
comment by Viliam · 2015-12-14T10:21:34.239Z · LW(p) · GW(p)

Seems like some people replace the teleological model of "it evolved this way because the Spirit of Nature wanted it to evolve this way" by a simplistic pseudo-evolutionary model of "it evolved because it helps you to survive and get more sex".

Nope. Some things evolve as side effects of the things that help us "survive and get more sex"; because they are cheaper solutions, or because the random algorithm found them first. There are historical coincidences and path-dependency.

For example, the fact that we have five fingers on each hand doesn't prove that having five fingers is inherently more sexy or more useful for survival than six or four. Instead, historically, the fish that were our ancestors had five bones in their fins (I hope I remember this correctly), and there was a series of mutations that transformed them into fingers. So, "having fingers" was an advantage over "having no fingers", but the number five got there by coincidence. Trying to prove that five is the perfect number of fingers would be trying to prove too much.

Analogously, having an imperfect brain was an advantage over having no brain. But many traits of the brain are similar historical artefacts, or design trade-offs, or even historical artefacts of the design trade-offs of our ancestors. A different history could lead to brains with different quirks. Using "neural nets" (as opposed to something else) already is a design decision that brings some artifacts. Having the brain divided into multiple components is another design decision; etc. Each path only proves that going down this path was better than not going there; it doesn't prove that this path is better than all possible alternatives. Some paths could later turn out to be dead ends.

I agree that treating humans as "rational beings with objectives" can be a nice first approximation, but later it's just adding more epicycles on a fundamentally wrong assumption.

comment by DollarTransplant · 2015-12-08T02:21:43.373Z · LW(p) · GW(p)

Hey everyone,

This is my first post!

This is what I've been wondering lately:

Who is the best sales person in the world? Who knows?

‘Sales competitions’ generally refers to ‘in-house’ competitions established by managers to motivate their sales people to compete against one another.

Recently I began thinking about the prospects for a ‘world sales tournament’ of sorts:

Successful sales people have lots of money. But sales is derided, whether it be in real estate, ‘charity mugger’ fundraisers, or even the people doing tenders for defence contracts.

What if we could take their money, convert it to prestige, and take a smooth commission on the whole thing?

Sales tournaments! The World Series of Sales! The Sales Olympics. Major League Sales. Who says sales people ought to pay an entry fee anyway (except, perhaps, to get a manageable number of entrants)? If there are companies out there with products or services to sell, getting the best, most competitive sales people in the game to sell them is a highly desirable service itself. In return for product to sell, said product and service companies could sponsor the competition.

Sales people have difficulty switching industries, despite their highly tuned sales skills. Product and industry knowledge is easy to pick up, but soft skills are tougher to gain. Nevertheless, recruiters are reluctant to pick up sales people from other industries - the numbers don't always make sense. Someone working for a luxury car dealership may have huge sales numbers, but a luxury handbag dealer might not be able to translate the numbers over. Having a high sales ranking, in a similar way that programmers are ranked on coding competition websites, could make for a highly desirable piece of career capital.

An online 'quick and dirty' version could be coded for email marketers and telemarketers, and could be conducted in a distributed fashion in the likeness of coding competitions. But a large-scale, potentially TV-friendly version could be much more profitable.

There's an EA aspect to this too. Rationalisation of sales human resources may make Effective Altruism fundraising more quantified. And, with the birth of this idea here, the earliest competitions, or the non-profit ones, could be 'selling' those GiveWell-recommended charities as options for prospective donors. Major League Fundraising/Philanthropy! The same could be said about politics - if this becomes ubiquitous, voters could try to adjust their interpretations of the policy offerings of politicians by their sales ability.

I reckon there could even be 'team sales'. People might barrack for particular sales teams they're affiliated with - say, the Farmers Marketing Cooperative of California (made-up name) may consist of 10 members, but be supported by hundreds of farmers. Then, when it comes to a competition, say to raise money for Development Media International, one of GiveWell's charities, people would support them like a sports team. Imagine that: people caring as much about charity as about sports teams or their online gaming leagues! In a sense, this is the gamification of sales.

If you have read this till here, you are the kind of person I want to help me build this. Please do get in touch, preferably both publicly as a comment here and privately (yes, twice! Once for people to know who's involved, and twice for contact details, which you may prefer not to publicly disclose) with a contact email address, so I can keep everyone involved in the loop and we can decide upon a work cycle.

Equity split for the founding team, including myself, will be by negotiation among all of us. I foresee an equal split of total equity - an equal partnership - for those involved.

Replies from: ChristianKl, None, 9eB1, NancyLebovitz
comment by ChristianKl · 2015-12-08T12:08:30.215Z · LW(p) · GW(p)

I think fundraising competitions for GiveWell recommended charities would be a valid activity.

comment by [deleted] · 2015-12-08T22:51:41.728Z · LW(p) · GW(p)

I see two ideas here:

1) Create a mechanism for price discovery of standardized sales ability

Cool. I think the world would be a better place if a robust market existed for every good and service imaginable. Markets = better information = better decisions

2) Gamification of sales

Maybe I lack imagination, but I don't see how this would be entertaining. Then again, there are a lot of successful reality shows based on pawn shops and real estate agents and other boring stuff, so...?

comment by 9eB1 · 2015-12-08T03:15:30.122Z · LW(p) · GW(p)

At the highest echelons of sales, relationships are more important than soft skills. Obviously, soft skills are highly related to a salesperson's relationships, but in the same way capital is related to income: soft skills determine how fast your relationship asset increases. While soft skills are transferable between industries, relationships, in general, are not. Relationships are also path-dependent in a way soft skills are not.

The best salespeople in the world are, depending on how much of a role you think luck plays into the equation, either heads of sales or CEOs at Fortune 500 companies, or simply highly-talented salespeople spread throughout high-level sales careers.

comment by NancyLebovitz · 2015-12-08T15:09:08.256Z · LW(p) · GW(p)

My guess is that a sales tournament would be a sufficiently simulated environment that it would train skills similar to, but not the same as, those used in actual sales. It would also be optimized for dramatic contests, which isn't quite the same thing as real world sales.

comment by iarwain1 · 2015-12-09T16:56:58.304Z · LW(p) · GW(p)

I'm from Baltimore, MD. We have a Baltimore meetup coming up Jan 3 and a Washington DC meetup this Sun Dec 13. So why do the two meetups listed in my "Nearest Meetups" sidebar include only a meetup in San Antonio for Dec 13 and a meetup in Durham NC for Sep 17 2026 (!)?

Replies from: Bryan-san, ChristianKl, Manfred
comment by Bryan-san · 2015-12-09T20:21:06.822Z · LW(p) · GW(p)

Whoever is running the meetup needs to make Meetup Posts for each meeting before they show up on the sidebar. IIRC regular meetups are often not posted there if the creator forgets about it. You can ask the person who runs the meetups to post them on LW more often or ask them if you can post them in their stead.

I run the San Antonio meetup and you are very welcome to attend here if it's the nearest one to you!

Replies from: iarwain1
comment by iarwain1 · 2015-12-09T21:26:43.554Z · LW(p) · GW(p)

Not sure what you mean by this. I actually posted the meeting for the Baltimore area myself.

The Baltimore and Washington DC meetups do show up if I click on "Nearest Meetups", just that they appear in the 5th and 8th spots. That list appears to be sorted first by date and then alphabetically. The San Antonio meetup appears at the #4 slot, and the Durham meetup does not appear at all.

Basically the "nearest" part of nearest meetups seems to be completely broken.
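
For comparison, here is a sketch of roughly what a working nearest-first sort would have to do (illustrative coordinates; great-circle distance via the haversine formula):

    # Sort meetups by great-circle distance from the user, not by date or name.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # distance between two (latitude, longitude) points, in kilometres
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    user = (39.29, -76.61)  # Baltimore, approximately
    meetups = [             # (name, lat, lon) -- illustrative coordinates
        ("Washington DC", 38.91, -77.04),
        ("San Antonio", 29.42, -98.49),
        ("Durham NC", 35.99, -78.90),
    ]
    meetups.sort(key=lambda m: haversine_km(user[0], user[1], m[1], m[2]))
    print([name for name, lat, lon in meetups])  # DC first, San Antonio last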

comment by ChristianKl · 2015-12-09T19:32:01.959Z · LW(p) · GW(p)

For it at the moment it shows:

Bi-weekly Frankfurt Meetup: 01 January 2019 07:30PM
Brussels - We meet every month: 12 January 2019 02:00PM

It doesn't show the Berlin Meetup which is the city where I live and which I have put into my LW profile.

comment by Manfred · 2015-12-09T18:42:47.521Z · LW(p) · GW(p)

Man, that Durham date sure disconfirms the idea that your meetup isn't soon enough :)

And hmm, just having one far-future meetup post is a clever way to just keep your meetup in the list permanently, like how meetup.com groups have permanent pages, with the actual meetup schedule being a part of that group page.

comment by NancyLebovitz · 2015-12-11T14:16:16.073Z · LW(p) · GW(p)

I'm thinking about people's capacity for emotional healing-- I believe this is possible because people have a base state to aim at, even if it's a slow and somewhat indirect process. My question is whether it would be possible to build something like this into an AI, since I assume an AI (even if not in a society of AIs) could either have mistakes built into its structure or make mistakes when changing itself.

Replies from: Tem42
comment by Tem42 · 2015-12-12T01:02:23.256Z · LW(p) · GW(p)

capacity for emotional healing

an AI (even if not in a society of AIs) could either have mistakes built into its structure or make mistakes when changing itself.

I am not very well versed in AI at all. But reading this, my automatic response is to question how an emotional response is different from any other response, for an AI.

I understand that emotional responses are different in complexity than trivial responses. But I think of emotional responses (for an AI) as fitting somewhere in the fairly straightforward continuum between "what color should my desktop be" and "how do I judge the validity of a moral structure to apply to humans when they cannot agree on any meaningful criteria themselves". I would assume that even AI emotional ecology is closer to the complex side of the spectrum, but it seems like a problem that should be fully open to internal inspection and modification by the AI -- or if it is limited, at least no more difficult to adjust than any equally important calculation.

Building an AI with hidden subconscious seems like an unfortunate combination of stupid and malicious. The most likely reason for such a thing to exist that I can think of is as a hidden backdoor to allow humans to manipulate the AI without it knowing what is going on, but inducing schizophrenia-like symptoms is probably not the sane way to control our constructs.

But I may be under-applying important concepts -- particularly, I may be underestimating the importance of emergent properties, especially in a hard takeoff scenario.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-12T14:23:53.258Z · LW(p) · GW(p)

I brought up emotional healing because I'd recently read a strong example of it, but you raise a bunch of interesting points, and because, as I said, people seem to have a base state of emotional health-- a system 1 which is compatible with living well. It seems as though people have some capacity for improving their system 1 reactions, though it tends to be slow and difficult.

Let's see if I can generalize healing for AIs. I'm inclined to think that AIs will have something like a system 1 / system 2 distinction-- subsystems for faster/lower cost reactions and ways of combining subsystems for slower/higher cost reactions. This will presumably be more complex than the human system, but I'm not sure the difference matters for this discussion.

I think an AI wouldn't need to have emotions, but if it's going to be useful, it needs to have drives-- to take care of itself, to take care of people (if FAI), to explore not-obviously useful information.

It wouldn't exactly have a subconscious in the human sense, but I don't think it can completely keep track of itself-- that would take its whole capacity and then some.

What is a good balance between the drives? To analogize a human problem, suppose that an FAI starts out having to fend off capable UFAIs. It's going to have to do extensive surveillance, which may be too much under other circumstances-- a waste of resources. How does it decide how much is too much?

This one isn't so much about emotional healing, though emotions are part of how people tell how their lives are going. Suppose it makes a large increase in its capacity. How can it tell whether or not it's made an improvement? Or a mistake? How does it choose what to go back to, when it may have changed its standards for what's an improvement?

Replies from: Tem42
comment by Tem42 · 2015-12-12T16:18:50.694Z · LW(p) · GW(p)

I don't think it can completely keep track of itself-- that would take its whole capacity and then some.

I have a different view of AI (I do not know if it is better or more likely). I would see the AI as a system almost entirely devoted to keeping track of itself. The theory behind a hard takeoff is that we already have pretty much all the resources to do the tasks required for a functional AI; all that is missing is the AI itself. The AI is the entity that organizes and develops the existing resources into a more useful structure. This is not a trivial task, but it is founded on drives and goals. Assuming that we aren't talking about a paperclip maximizer, the AI must have an active and self-modifying sense of purpose.

Humans got here the messy way -- we started out as wiggly blobs wanting various critical things (light/food/sex), and it made sense to be barely better than paperclip maximizers. In the last million years we started developing systems in which maximizing the satisfaction of drives stopped being an effective strategy. We have a lot of problems with mental ecology that probably derive from that.

It's not obvious what the fundamental drives of an AI would be -- it is arguable that 'fundamental' just doesn't mean the same thing to an AI as it does to a biological being... except in the unlucky case that AIs are essentially an advanced form of computer virus, gobbling up all the processor time they can. But it seems that any useful AI -- those AI in which we care about mental/emotional healing -- would have to be first and foremost a drive/goal tuning agent, and only after that a resource management agent.

This almost has to be the case, because the set of AIs that are driven first by output and second by goal-tuning are going to be either paperclip maximizers (mental economy may be complex, but conflict will almost always be solved by the simple question "what makes more paperclips?"), insane (the state of having multiple conflicting primary drives each more compelling than the drive to correct the conflict seems to fall entirely within the set that we would consider insane, even for particularly strict definitions of insane), or below the threshold for general AI (although I admit this depends on how pessimistic your view of humans is).

Suppose it makes a large increase in its capacity. How can it tell whether or not it's made an improvement? Or a mistake?

These are complex decisions, but not particularly damaging ones. I can't think of any problem in this area that an AI should find inherently unhealthy. Some matters may be hard, or indeterminate, or undetermined, but it is simply a fact about living in the universe that an effective agent will have to have the mental framework for making educated guesses (and sometimes uneducated guesses), and processing the consequences without a mental breakdown.

The simple case would be having an AI predict the outcome of a coin flip without going insane -- too little information, a high failure rate, and no improvement over time could drive a person insane, if they did not have the mental capacity to understand that this is simply a situation that is not under their control. Any functional AI has to have the ability to judge when a guess is necessary and to deal with that. Likewise, it has to be able to know its capability to process outcomes, and not break down when faced with an outcome that is not what it wanted, or that requires a change in thought processes, or simply cannot be interpreted with the current information.

There are certainly examples of hard problems (most of Asimovs' stories about robots involve questions that are hard to resolve under a simple rule system), and his robots do have nervous breakdowns.... but you and I would have no trouble giving rules that would prevent a nervous breakdown. In fact, usually the rule is something simple like "if you can't make a decision that is clearly best, rank the tied options as equal, and choose randomly". We just don't want to recommend that rule to beings that have the power to randomly ruin our lives -- but that only becomes a problem if we are the ones setting the rules. If the AI has power over its own rule set, the problem disappears.
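
That rule is simple enough to write out directly. A minimal sketch, with a hypothetical scoring function and a tolerance standing in for "clearly best":

    import random

    def decide(options, score, tol=1e-9):
        # Pick a best-scoring option; break (near-)ties uniformly at random
        # instead of deliberating forever between equally good choices.
        scored = [(score(o), o) for o in options]
        best = max(s for s, _ in scored)
        tied = [o for s, o in scored if best - s <= tol]
        return random.choice(tied)

    # Two equally acceptable routes: the agent commits to one and moves on.
    print(decide(["route A", "route B"], score=lambda option: 1.0))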

To analogize a human problem, suppose that an FAI starts out having to fend off capable UFAIs. It's going to have to do extensive surveillance, which may be too much under other circumstances-- a waste of resources. How does it decide how much is too much?

This is a complex question, but it is also the sort of question that breaks down nicely.

  1. How big a threat is this? (Best guess may be not-so-good, but if AI cannot handle not-so-good guesses, AI will have a massive nervous breakdown early on, and will no longer concern us).

  2. How much resources should I devote to a problem that big?

  3. What is the most effective way(s) to apply those resources to that problem?

  4. Do that thing.

  5. Move on to the next problem.

As I write this out, I see that a large part of my argument is that AIs that do not have good mental ecology with a foundation of self-monitoring and goal/drive analysis will simply die out or go insane (or go paperclip) rather than become a healthy, interesting, and useful agent. So really, I agree that mental health is critically important, I just think that it is either in place from the start, or we have an unfriendly AI on our hands.

I realize that I may be shifting the goal posts by focusing on general AI. Please shift them back as appropriate.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-13T14:48:48.392Z · LW(p) · GW(p)

I don't think it can completely keep track of itself-- that would take its whole capacity and then some.

I have a different view of AI (I do not know if it is better or more likely). I would see the AI as a system almost entirely devoted to keeping track of itself.

You're probably right about the trend. I've heard that lizards do a lot less processing of their sensory information than we do. It's amusing that people put in so much work through meditation to be more like lizards. This is not to deny that meditation is frequently good for people.

However, an AI using a high proportion of its resources to keep track of itself is not the same thing as it being able to keep complete track of itself.

In re the possibly over-vigilant/over-reactive AI: my notion is that its ability to decide how big a threat is will be affected by its early history.

As I write this out, I see that a large part of my argument is that AIs that do not have good mental ecology with a foundation of self-monitoring and goal/drive analysis will simply die out or go insane (or go paperclip) rather than become a healthy, interesting, and useful agent. So really, I agree that mental health is critically important, I just think that it is either in place from the start, or we have an unfriendly AI on our hands.

That's where I started. We have an evolved capacity to heal. Designing the capacity to heal for a very different sort of system is going to be hard if it's possible at all.

comment by polymathwannabe · 2015-12-10T22:27:42.422Z · LW(p) · GW(p)

Facebook has open-sourced its AI hardware.

comment by Viliam · 2015-12-08T22:50:11.681Z · LW(p) · GW(p)

When I go to CFAR web page, my browser complains about the certificate. Anyone else having this problem?

Replies from: michaelkeenan
comment by michaelkeenan · 2015-12-08T23:16:49.259Z · LW(p) · GW(p)

It looks like you're going to https://rationality.org rather than http://rationality.org. CFAR doesn't have an SSL certificate (but maybe should get one through Let's Encrypt).

Replies from: Viliam
comment by Viliam · 2015-12-10T08:32:00.726Z · LW(p) · GW(p)

You're right, but now I wonder how it happened (going to HTTPS). I would guess that I googled the address somehow or followed someone's link, but I don't remember anymore.

Replies from: Lumifer
comment by Lumifer · 2015-12-10T16:06:58.173Z · LW(p) · GW(p)

but now I wonder how it happened (going to HTTPS)

I think that by now both Chrome and Firefox will, by default, attempt an HTTPS connection if HTTP is not specified explicitly.

That is, going to "rationality.org" will by default do "https://rationality.org". You can manually override that by specifying "http://rationality.org", but, of course, few people bother.

comment by [deleted] · 2015-12-08T05:26:10.264Z · LW(p) · GW(p)

Curious: Are there any (currently active) readers who are in Idaho, Eastern Washington, or Eastern Oregon?

comment by skeptical_lurker · 2015-12-07T20:06:59.455Z · LW(p) · GW(p)

This article discusses FAI, mentioning Bostrom, EY etc. It's interesting to see how the problem is approached as it goes more mainstream, and in this particular case a novel approach to FAI is articulated: whole brain emulation (or biologically inspired neural nets) ... on acid!

The idea is that the WBE will be too at-one-with-the-universe to want to harm anyone.

It's easy to laugh at this. But I think there's also a real worry that someone might actually try to build an AI with hopelessly inadequate guarantees of safety.

Having said that, perhaps the idea is not quite as crazy as it sounds. If WBE comes first, then some form of pseudo-drug-based behavioural conditioning is better than nothing for AI control, although I would have thought that modifying oxytocin to increase empathy would be the obvious strategy: digital MDMA, not LSD.

Tangentially, some people seem to believe that taking LSD causes prosocial values (or, at least, what they believe to be prosocial values), but there is a real danger of confusing the direction of causality here - hippies do acid, and hippies hold certain values, but the causal direction is surely: hippy values -> become hippy -> take acid, not take acid -> become hippy. Of course, acid might make the hippy values stronger, but that could be because the experience is interpreted within the structure of your pre-existing values. I have heard of some (atypical?) neoreactionaries planning an acid trip for spiritual reasons, and their values certainly appear different from hippy values. Of course, both the neoreactionaries and the hippies believe that they hold prosocial values; they just differ on what prosocial values are. Perhaps their terminal values are not so different, but they have very different models of the world?

To briefly go back to the original point, I think the author is conflating two things - just because 'can we program an AI to hallucinate?' is an interesting question (at least to some people) does not mean that it is an actually sensible proposition for FAI control. Conversely, just because this idea can trigger the absurdity heuristic does not mean that 'AI behavioural modification with drugs' is an entirely useless idea.

comment by Michael_Keshick · 2015-12-11T15:31:21.886Z · LW(p) · GW(p)

So I'm attempting to adopt practices that will bring me closer to generally strategic behavior. I am also interested specifically in strategic/efficient studying. To that end, I would like as much of an info dump as possible on the topic of failure.

This can include avoiding failure, preparing for failure even when avoiding it, how to notice when you are failing, and perhaps how to fail as gracefully as possible. I realize there is overlap/confusion here; I was simply rattling off primers for you to consider.

Please err on the side of inclusivity. I am not starting from a state of complete ignorance (the sequences can be turned towards my concerns handily), but best be safe.

Thank you for your help! :)

Edited for clarification, I hope.

Also for starts: What sorts of questions could someone ask to learn the most about failure?

Replies from: Gram_Stone, ChristianKl
comment by Gram_Stone · 2015-12-11T18:02:00.479Z · LW(p) · GW(p)

As part of my foundational work on how I function

I spent a long time coming up with theories about how I work and why, and it was a great waste of time. I now find it a lot more reliable to base my actions on generalizations about most or all humans, rather than coming up with idiosyncratic theories about myself. Idiosyncratic theories are likely to be based in introspection, which is notoriously unreliable and which humans are known for systematically overvaluing. (See introspection illusion.) I've found that a good rule of thumb is: Don't use an idiosyncratic theory unless you also would've generated that theory about someone else by observing their current behavior and having knowledge of their past behavior. And even when idiosyncratic theories seem to work, they more likely work because they're also explainable using the aforementioned generalizations.

Replies from: Michael_Keshick
comment by Michael_Keshick · 2015-12-11T19:53:00.457Z · LW(p) · GW(p)

This was a very useful topic to bring to the conversation, but I think I may have framed what I had in mind poorly. Did the edit clarify?

comment by ChristianKl · 2015-12-11T16:26:58.255Z · LW(p) · GW(p)

I decided to ask Lesswrong about failure

Your post contains no question at the moment. Specifying questions is useful for having discussions.

Replies from: Michael_Keshick
comment by Michael_Keshick · 2015-12-11T19:51:43.915Z · LW(p) · GW(p)

Thank you for pointing out my error. Did my editing clear up said issue?

Replies from: ChristianKl
comment by ChristianKl · 2015-12-11T22:26:10.307Z · LW(p) · GW(p)

There's still no question in the original post. Questions are quite useful for exploring a new topic of interest. You might get some answers by seeking an info dump, but a concrete question would likely produce better discussion. It would also help you focus yourself.

Replies from: Michael_Keshick
comment by Michael_Keshick · 2015-12-11T23:03:37.039Z · LW(p) · GW(p)

I like this prompt, and it so happens I have a proper response that fits.

I've seen people talk of noticing failure but, it thankfully having been a gentle one, managing to make something of it. Sometimes people speak or write as if there may be some underlying method to be mined away from luck.

While planning actions, is it a good heuristic to act so that a fall would not break your legs, so to speak?

Well, look at that: you've helped me dissolve a question into a form that has an obvious answer. This is both nice (less clutter) and partly the reason I was asking for a dump. I'm trying to stumble across gaps in my understanding, not necessarily tangles (although again, thank you).

I suppose I expect to de-tangle my knowledge of this subject as I review anything possibly relevant. I just thought to ask here in conjunction with said review.

I'm trying to be as comprehensive as possible, which means I should ask the obvious first. Is the question now posed in the main post a respectable start?

Replies from: Tem42
comment by Tem42 · 2015-12-12T00:36:00.316Z · LW(p) · GW(p)

Is the question now posed in the main post a respectable start?

The post as it now stands needs some serious proofreading.

What sorts of questions could someone ask to learn the most about failure?

LessWrong is likely to focus on cognitive biases, and this is a good place to start. I assume that you have already read some on the subject, but if not, we have a lot on site, and there are some good books -- for example, The Invisible Gorilla and Mistakes Were Made. Everyone will have a different list of recommended reading, but I don't know if that is the sort of info dump you are looking for.

I think that your question may be too general. Being more specific will almost surely give you more useful responses.

I suspect that your best bet would be to notice specific sub-optimal outcomes in your life, and then ask knowledgeable people (which may include us) for thoughts and information. If you have access to a trustworthy person who will give honest and detailed feedback, you might ask them to observe you in completing some process (or better, multiple processes) and take notes on any thoughts they have regarding your actions -- things you do differently, things you do wrong, things that you do slower than most people, etc. They will probably notice some things that you do not. They may not know how to help you change, but that doesn't make their information any less valuable.

Replies from: Michael_Keshick
comment by Michael_Keshick · 2015-12-12T07:07:20.667Z · LW(p) · GW(p)

Thank you for the feedback. This was a surprisingly useful line of interaction.

The first thing it did was make me remember that inferential gaps take caution, at the very least, to cross. Another way I failed was in not carrying my empathetic models of people far enough; I knew people would realize what I was after was large and vague, but then I trailed off into assuming people would actually want to rattle off in some randomly chosen direction available to them. Taken one iota further, I can feel how annoying such a prompt is.

And then I recalled something about A.I. safety; something along the lines of not being able to specify all the ways we don't want an AI (genie?) to act; the nature of value or goal specification is too exclusive to approach from that direction efficiently. Reflection to see if I can be coherent about this will have to happen later.

As of this moment (2 am) it is unattractive to see if I am on to something or not. Thank you once more for the feedback. It feels like I’ve gained valuable responses.

comment by MrMind · 2015-12-11T08:17:09.427Z · LW(p) · GW(p)

Is anyone aware of a healthy diet plan that comes with pre-packaged meals?

Replies from: ChristianKl
comment by ChristianKl · 2015-12-11T13:40:38.053Z · LW(p) · GW(p)

http://www.mealsquares.com/

Replies from: MrMind
comment by MrMind · 2015-12-14T08:11:40.670Z · LW(p) · GW(p)

Thanks, I found a Soylent equivalent available in Europe.

Replies from: Viliam
comment by Viliam · 2015-12-14T10:22:38.339Z · LW(p) · GW(p)

I heard there are more options, but I only tried Joylent.

Replies from: MrMind
comment by MrMind · 2015-12-15T08:20:53.486Z · LW(p) · GW(p)

How was your experience?

Replies from: Viliam
comment by Viliam · 2015-12-15T21:19:31.986Z · LW(p) · GW(p)

I have tried it for a week (eating only Joylent, nothing else), and I was completely satisfied.

The taste was okay. Not great -- but that is meta-great, because I was not tempted to overeat (which is usually my problem). What I was supposed to eat during one day was exactly what I ate during the day, and I didn't feel hungry at any time. It was like: I am eating as much as I want to, whenever I want to; it's okay but nothing special, merely fuel for my body. Perfect.

And it saved a lot of time. All the thinking about what I should cook, the buying, the cooking, the cleaning of dishes -- easily an hour a day -- didn't exist anymore. Convenient.

Then I stopped it mostly because my girlfriend didn't want to join me, and cooking for one person is almost as much work as cooking for two people. (During that one week she was away, so I had a chance to try what it is like when no one at home is eating normal food.) So now I mostly use Joylent as a backup option, for example when I wake up late in the morning and I have to hurry to my job, so I don't have to skip my breakfast completely.

We had a debate with my girlfriend about whether such food can be healthy. There are a few objections I consider reasonable, even assuming the food contains exactly what is advertised:

  • Just because it contains "100% of recommended daily intake of everything", it doesn't obviously follow that your body needs everything on the same day. Hypothetically, what if your body processing some X prevents it from processing some other Y at the same time? You could have a balanced diet by eating 2X on one day and 2Y on the other day, but if you eat 1X+1Y every day, you may get Y-deficient.

  • What if there are molecules your body needs that medicine still does not know about? They can occur in some meals you would randomly eat once in a while, but they may be absent from the artificial food.

  • Unprocessed food or dairy products contain some friendly microorganisms, which will be missing from the artificial food.

But in my opinion, if you eat normal food once in a while, that should be safe enough. My opinion is that normal food should be something to enjoy, not a boring duty. If you don't enjoy every single meal, you might as well replace the ones you don't enjoy with quick artificial food.

comment by [deleted] · 2015-12-11T01:40:08.059Z · LW(p) · GW(p)

It is not always possible to measure and calibrate expertise. Even when the opportunity exists, few professional disciplines make use of it. Very few experts have had any formal training or calibration for the provision of judgements. That should change. Perhaps a rationality startup could emerge to provide professional development across a range of professions. Until then, it's important to know who to trust, how to trust them, and when to trust them.

To the lay person, graphs are intimidating. Atmospheric science is notoriously complex. Expert judgement is a 'next best' option, then perhaps what is socially normative and marketable. To the lay person, how can expert judgement be interpreted? Who even gets counted as an expert? We frequently hear about a 'scientific consensus' but also hear from seemingly erudite 'skeptics', who use graphs that are compelling but uninterpretable in the broader context of all the information around. It looks like the naive algorithm for evaluating evidence, while a naive Bayesian conclusion, is not particularly efficient in some important cases:

...the claim of expert status is evaluated based on the professional characteristics and track record of the person. Their qualifications, experience, publications and professional standing are relevant. In the latter case, expert status may be evaluated by testing an expert's judgements against independent evidence, matching predictions with outcomes observed subsequently. - ACERA

This has real-world consequences:

In Australian Federal courts, opinion evidence is admissible if it assists '…the trier of fact in understanding the testimony, or determining a fact in issue' (ALRC 1985; p. 739-740). Scientific validity is established through falsifiability, peer review, acknowledged error rates, general acceptance of ideas and valid methods (Preston 2003). Science is presumed to act as a check on bias or prejudice. For instance, Cooke (1991) argued that if individuals follow the scientific method scrupulously, then they may arrive at results that have a claim on rationality and impartiality. The same thinking permeates current views of expert judgments in risk analysis. In this review, we will explore the view that scientists are advocates of a particular scientific view and spend their time trying to convince others of their position. In adversarial legal systems, potential expert witnesses are selected overwhelmingly for their credentials and for the strength of their support for the lawyer's viewpoint (Shuman et al. 1993; in Freckelton 1995). Lawyers search for appropriate attributes in an expert and develop strategies to maximize their chance of winning a case. Success often depends on the plausibility or self-confidence of the expert, rather than the expert's professional competence (ALRC 2000). - ACERA

I feel climate change is a good example of a thing which is allegedly highly important but extremely complex, where deferral to experts is probably prudent. However, knowing how to relate to expert evidence is then important, particularly if you, like me, are unsure about what to do with all the expertise floating around while action is sluggish, prompting me to wonder: what's going on here?

So why is a structured approach to expert interpretation useful in general? Let my friends from ACERA tell it:

When expert opinion is required, ideally, there would be a pool of people available, appropriately trained, with extensive experience and sound normative skills. However, this pool is sometimes small or non-existent. Often, it is composed of people with conflicting opinions, values and motivations. The positive qualities of experts noted above have limitations that are often overlooked. Expertise is limited to technical domains that are narrower than most practical problems. Expert performance does not transfer to other disciplines, even those that seem quite similar. As we will see below, most experts are overconfident. It can be difficult to have a clear picture of what an expert knows, because experts themselves have difficulty knowing the limits of their own expertise (Freudenburg 1999, Ayyab 1999). Furthermore, the self-serving nature of judgments may bias advice (Krinitzsky 1993). This section outlines definitions of experts in several fields and evaluates the importance of defining expertise. - ACERA

After doing some research, I've come up with a few notes. They are not in my own words, because they are written about adequately by others:

Who is an expert?

According to Hart (1986), attributes that characterize an expert include effectiveness (they use knowledge to solve problems with an acceptable rate of success), efficiency (they solve problems quickly) and awareness of limitations (they are willing to say when they cannot solve a problem). Other critical questions may include (Hart 1986, Walton 1997): Is the expert credible? Are they personally reliable? Do they have a special or conflicting interest? Is their assertion consistent with other experts and with their prior advice? What evidence is their opinion based on? How much experience do they have in the domain of the question at hand?

Studies of experts and expertise provide a few generalizations (see Feltovitch et al. 2006). Experts organise knowledge effectively, have superior recall of information and have improved abilities to abstract knowledge to new situations, compared to lay people. They perform the basic operations of their discipline efficiently, and are able to think critically about data and methods in their domain. Usually, attaining expertise requires both study and practical experience. - ACERA

Should we trust experts?

Among experts in ecology: 'No consistent relationships were observed between performance and years of experience, publication record or self-assessment of expertise.'

Expert responses were found to be overconfident, specifying 80% confidence intervals that captured the truth only 49-65% of the time. In contrast, student 80% intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students. The best students outperformed the worst experts.

Expert status is a poor guide to good performance. In the absence of training and information on past performance, simple averages of expert responses provide a robust counter to individual variation in performance. - ACERA

comment by [deleted] · 2015-12-11T01:46:25.793Z · LW(p) · GW(p)

Part 2: 'Which experts to trust', 'Limitations' and 'Practice'

**Part 1: IS EXPERT OPINION A WASTE OF TIME?** is available here

Which experts to trust

Now for an application:

Tetlock (2005) showed, based on 20 years of longitudinal research on several hundred political experts, that the single most powerful predictor of forecasting skill had little to do with what experts think, and more to do with how they think: he called it ‘cognitive style’. As well as being a predictor of forecasting accuracy, ‘cognitive style’ is also a strong correlate of overconfidence. Cognitive style is determined in large part by personality traits, which he characterized as hedgehogs and foxes.

Hedgehogs know “one big thing, and under the banner of parsimony [work] to expand the explanatory power of that big thing to ‘cover’ new cases”; the foxes know “many little things and [are] content to improvise ad hoc solutions” (p. 20-21). Cognitive style correlates with personality measures such as ‘openness’ and ‘need for closure’. Tetlock’s work showed consistently greater overconfidence in ‘hedgehog’ experts than in ‘fox’ experts.

An obvious example of a category of hedgehog that springs to mind is ideologues – everyone from Anarchocapitalists to Bayesians to ‘materialists’ to ‘ordinary people’ to ‘Agnostics’ (just trying my best to insult the largest number of people here, because not enough people recognise they’re as guilty of the things they might be looking for in others). The motivated-reasoning bias springs to mind (after prompting by the ACERA paper).

Commiserate:

Demographic variables had little bearing on accuracy. Years of experience also failed to predict accuracy. Left vs Right (the Ideology factor), Institutional vs Realist (the Realpolitik factor) and Doomster vs Boomster (the Optimism factor) also failed to predict accuracy. The consistent result across the spectrum of content domains, was that hedgehogs were: a) less well calibrated, b) had poorer discrimination, c) were more overconfident and d) were less likely to update their beliefs (were poorer Bayesians). Foxes outperformed all groups on each of the four criteria, and the middling groups fell in between in predictable ways.

Important limitations to this research line

However, there have been few, if any, studies of their efficacy in realistic biosecurity settings. In particular, relatively little attention has been paid to the reliability or accuracy of conceptual models. Few of the formal techniques for elicitation, calibration or verification have been tested in conditions typical of biosecurity risk analysis. There is an opportunity to evaluate the most promising methods, with a view to implementing some of the most effective procedures in routine risk analyses.

Practice

For any smart cookies out there, you must wonder – well, what can you do to get the most out of experts? It’s too much information to cover here, but I recommend ACERA’s reports.

Academic delphi style groups outperform baseline groups by around 50%. Could professional delphi groups be formed to profit from stock and prediction markets?
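
For concreteness, here’s a toy sketch of what Delphi-style aggregation looks like. Everything here is invented for illustration – real protocols feed back written rationales through a facilitator, not a mechanical pull toward the median:

```python
import random
import statistics

def delphi_round(estimates, pull=0.5):
    # One anonymised feedback round: each panellist revises their
    # estimate partway toward the group median. (A toy model of the
    # convergence step in a real Delphi process.)
    med = statistics.median(estimates)
    return [e + pull * (med - e) for e in estimates]

def delphi_estimate(estimates, rounds=3):
    for _ in range(rounds):
        estimates = delphi_round(estimates)
    return statistics.mean(estimates)

random.seed(0)
truth = 100.0
# Hypothetical panel: nine unbiased-but-noisy experts plus one
# overconfident outlier.
panel = [random.gauss(truth, 15) for _ in range(9)] + [160.0]

print("one-shot mean: ", round(statistics.mean(panel), 1))
print("after 3 rounds:", round(delphi_estimate(panel), 1))
```

The iterated rounds dampen the outlier’s influence, which is roughly why structured groups beat one-shot polls.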


Sincerely, Carlos Larity

Self-appointed claimant as the resident expert expert

*Penned in order to get my karma back to around 11. I aimed to have net zero karma – balancing more controversial stuff with popular content, just enough to not discredit myself. Though, I just found out that I actually can’t upvote when my karma is zero, and I need ’11 more’ to do so.

And, just a fun additional link on reddit for you to check out. Remember not to trust experts, particularly not experts in experts...or hedgehogs!

Replies from: None
comment by [deleted] · 2015-12-11T08:56:18.839Z · LW(p) · GW(p)

Confused about agnostics and ordinary people here.

comment by [deleted] · 2015-12-11T01:44:26.171Z · LW(p) · GW(p)

**Part 1: IS EXPERT OPINION A WASTE OF TIME?**

Part 2 on 'Which experts to trust', 'Limitations' and 'Practice' is available here

It is not always possible to measure and calibrate expertise. Even when the opportunity exists, few professional disciplines make use of it. Very few experts have had any formal training or calibration for the provision of judgements. Perhaps a rationality startup could emerge to provide professional development across a range of professions. Until then, it’s important to know who to trust, how to trust them and when to trust them.

To the lay person, graphs are intimidating. Atmospheric science is notoriously complex. Expert judgement is a ‘next best’ option, then perhaps what is socially normative and marketable.

To the lay person, how can expert judgement be interpreted? Who even gets counted as an expert? We frequently hear about a ‘scientific consensus’ but also hear from seemingly erudite ‘skeptics’, who use graphs that are compelling but uninterpretable in the broader context of all the information around.

It looks like the naive algorithm for evaluating evidence, while a natural Bayesian starting point, is not particularly efficient in some important cases:

...the claim of expert status is evaluated based on the professional characteristics and track record of the person. Their qualifications, experience, publications and professional standing are relevant. In the latter case, expert status may be evaluated by testing an expert’s judgements against independent evidence, matching predictions with outcomes observed subsequently – ACERA

This has real world consequences:

In Australian Federal courts, opinion evidence is admissible if it assists ‘…the trier of fact in understanding the testimony, or determining a fact in issue’ (ALRC 1985; p. 739-740). Scientific validity is established through falsifiability, peer review, acknowledged error rates, general acceptance of ideas and valid methods (Preston 2003). Science is presumed to act as a check on bias or prejudice. For instance, Cooke (1991) argued that if individuals follow the scientific method scrupulously, then they may arrive at results that have a claim on rationality and impartiality. The same thinking permeates current views of expert judgments in risk analysis. In this review, we will explore the view that scientists are advocates of a particular scientific view and spend their time trying to convince others of their position. In adversarial legal systems, potential expert witnesses are selected overwhelmingly for their credentials and for the strength of their support for the lawyer's viewpoint (Shuman et al. 1993; in Freckelton 1995). Lawyers search for appropriate attributes in an expert and develop strategies to maximize their chance of winning a case. Success often depends on the plausibility or self-confidence of the expert, rather than the expert's professional competence (ALRC 2000) ACERA

I feel climate change is a good example of a thing which is allegedly highly important but extremely complex, where deferral to experts is probably prudent. However, knowing how to relate to expert evidence is then important, particularly if you, like me, are unsure about what to do with all the expertise floating around while action is sluggish, prompting me to wonder – what’s going on here?

So why is a structured approach to interpreting expert judgement useful in general? Let my friends from ACERA tell it:

When expert opinion is required, ideally, there would be a pool of people available, appropriately trained, with extensive experience and sound normative skills. However, this pool is sometimes small or non-existent. Often, it is composed of people with conflicting opinions, values and motivations. The positive qualities of experts noted above have limitations that are often overlooked. Expertise is limited to technical domains that are narrower than most practical problems. Expert performance does not transfer to other disciplines, even those that seem quite similar. As we will see below, most experts are overconfident. It can be difficult to have a clear picture of what an expert knows, because experts themselves have difficulty knowing the limits of their own expertise (Freudenburg 1999, Ayyab 1999). Furthermore, the self-serving nature of judgments may bias advice (Krinitzsky 1993). This section outlines definitions of experts in several fields and evaluates the importance of defining expertise.

After doing some research, I’ve come up with a few notes. They are not in my own words, because they are written about adequately by others:

Who is an expert?

According to Hart (1986), attributes that characterize an expert include effectiveness (they use knowledge to solve problems with an acceptable rate of success), efficiency (they solve problems quickly) and awareness of limitations (they are willing to say when they cannot solve a problem). Other critical questions may include (Hart 1986, Walton 1997): Is the expert credible? Are they personally reliable? Do they have a special or conflicting interest? Is their assertion consistent with other experts and with their prior advice? What evidence is their opinion based on? How much experience do they have in the domain of the question at hand?

Studies of experts and expertise provide a few generalizations (see Feltovitch et al. 2006). Experts organise knowledge effectively, have superior recall of information and have improved abilities to abstract knowledge to new situations, compared to lay people. They perform the basic operations of their discipline efficiently, and are able to think critically about data and methods in their domain. Usually, attaining expertise requires both study and practical experience.

-ACERA

Should we trust experts?

Among experts in ecology: 'No consistent relationships were observed between performance and years of experience, publication record or self-assessment of expertise.'

Expert responses were found to be overconfident, specifying 80% confidence intervals that captured the truth only 49–65% of the time. In contrast, student 80% intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students. The best students outperformed the worst experts.

Expert status is a poor guide to good performance. In the absence of training and information on past performance, simple averages of expert responses provide a robust counter to individual variation in performance.

-ACERA
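
To see why the simple average is so robust, here’s a minimal simulation – panel size and noise level are invented for illustration:

```python
import random
random.seed(1)

truth, trials, n = 50.0, 10000, 10
avg_err = one_err = 0.0
for _ in range(trials):
    # Ten hypothetical experts, each an unbiased but noisy estimator.
    estimates = [random.gauss(truth, 10) for _ in range(n)]
    avg_err += abs(sum(estimates) / n - truth)  # error of the group average
    one_err += abs(estimates[0] - truth)        # error of a single expert

print("mean error of group average:", round(avg_err / trials, 2))  # ~2.5
print("mean error of one expert:   ", round(one_err / trials, 2))  # ~8.0
```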

My impression is that we shouldn’t naively take the judgements of experts to be simply superior to those of amateurs/lay people. As counterintuitive as it is: experts’ best guesses are more accurate than amateurs’, but amateurs are better calibrated about their own uncertainty.
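
For anyone wondering how those interval results are scored, here’s a minimal sketch of the calibration check, with made-up elicitation data:

```python
def coverage(intervals, truths):
    # Fraction of (low, high) intervals containing the realised value.
    # Well-calibrated 80% intervals should land near 0.80.
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Made-up elicitation data: the 'expert' gives narrow (overconfident)
# 80% intervals, the 'student' gives wider ones.
truths     = [12, 7, 30, 18, 25]
expert_80  = [(10, 11), (6, 8), (26, 29), (17, 20), (21, 24)]
student_80 = [(8, 16), (4, 10), (24, 35), (14, 22), (20, 30)]

print("expert coverage: ", coverage(expert_80, truths))   # 0.4
print("student coverage:", coverage(student_80, truths))  # 1.0
```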

Expert overconfidence is one thing. Expert underperformance relative to simple equations is another:

One of the earliest and most confronting studies was by Meehl (1954). He explored clinical versus statistical prediction and demonstrated that many diagnostic judgments of experienced clinical psychologists were worse than the diagnostic judgments produced by a simple formula based on few variables. The implication was that any reasonably competent individual could out-do the clinical judgments of an expert psychologist by asking a standard series of questions and applying a standardised algorithm to the responses. Dawes et al. (1989) reviewed hundreds of studies that followed this tradition, and with the exception of a few medical diagnoses, the results were replicated in a vast array of domains including financial investment, sports forecasting, recidivism or violence in criminals and clinical pathology.
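
A minimal sketch of the kind of ‘simple formula’ Meehl and Dawes had in mind – a unit-weighted (‘improper’) linear model; the cues and case data below are invented for illustration:

```python
from statistics import mean, stdev

def zscores(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def unit_weight_scores(cases):
    # Dawes-style 'improper linear model': standardise each cue and
    # add them up with equal weights. Mechanical consistency, not
    # clever weighting, is what does the work.
    columns = [zscores(col) for col in zip(*cases)]
    return [sum(vals) for vals in zip(*columns)]

# Five hypothetical cases scored on three cues, all pre-oriented so
# that higher values mean higher risk (cues and data are invented).
cases = [(0.9, 1, 3), (0.4, 4, 1), (0.7, 2, 2), (0.2, 5, 1), (0.8, 1, 4)]
print([round(s, 2) for s in unit_weight_scores(cases)])
```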

Replies from: ChristianKl
comment by ChristianKl · 2015-12-11T12:51:32.216Z · LW(p) · GW(p)

Given the length, it would make more sense to move this to its own post in Discussion instead of having it in the open thread.

comment by [deleted] · 2015-12-08T11:50:13.068Z · LW(p) · GW(p)

My impression after interviewing dozens of academics from various health-related fields is that career advancement among these researchers pertains more to signalling that the work is being done than to actually doing the work. Thiel criticises this arrangement in Zero to One as dysfunctional.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-09T04:22:30.056Z · LW(p) · GW(p)

Which academics?

Replies from: None
comment by [deleted] · 2015-12-10T06:15:22.708Z · LW(p) · GW(p)

They're all from the same institution. It's one of the Group of Eight universities in Australia. I interviewed just 10 academics for around 30 minutes each. Small sample size, and just one uni, so not necessarily generalisable.

comment by WhyAsk · 2015-12-11T16:44:04.235Z · LW(p) · GW(p)

Why can't seasoned politicians handle a windbag millionaire?

Is it because he is a caricature of them?

Replies from: VoiceOfRa, ChristianKl, Lumifer, NancyLebovitz, WhyAsk
comment by VoiceOfRa · 2015-12-16T03:15:08.362Z · LW(p) · GW(p)

A lot of problems that the establishment has been ignoring for decades, e.g., illegal immigration, out-of-control PC policing, pensions for government employees crowding out other spending, are starting to become critical, and the seasoned politicians don't know how to address them. In fact, they probably can't be addressed without upsetting the established interests to whom the seasoned politicians are beholden.

Replies from: WhyAsk
comment by WhyAsk · 2015-12-18T16:45:21.730Z · LW(p) · GW(p)

So we are locked into a stable, nowhere-near-optimum equilibrium. :(

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-19T02:23:57.592Z · LW(p) · GW(p)

It's not stable. The problems I mentioned are getting worse.

Replies from: WhyAsk
comment by WhyAsk · 2015-12-20T17:00:53.479Z · LW(p) · GW(p)

Good point.

comment by ChristianKl · 2015-12-11T18:42:22.446Z · LW(p) · GW(p)

I don't think it makes sense to call Trump a 'windbag millionaire'. Trump is a billionaire because he's good at dealmaking, which is a relevant skill for political campaigning.

Offensively attacking Fox News and then having Fox News fold is an example of a political move that's likely calculated, and that no other Republican candidate could have pulled off the same way.

Trump and other politicians this season show that the model of political electioneering that is about polling and then saying whatever polls best isn't the only one that works.

By calling for the US to bomb Daesh's oil industry, Trump has already achieved concrete changes in US policy. Trump isn't stupid or uncalculated, even when his public persona gives the impression that he is.

Replies from: gjm
comment by gjm · 2015-12-11T20:22:13.460Z · LW(p) · GW(p)

Trump is a billionaire because he's good at dealmaking

It's not clear that that's true. E.g., earlier this year someone asked the question: If Trump had just taken all his money in 1987 and put it in index funds rather than trying to grow it himself, what would have happened? The answer, apparently, is that he'd be about 3x richer than he is now.

The reason for picking 1987 is less than fully convincing, though, so it's possible that there's some misleading cherry-picking going on. This article, as well as quoting the other one, says that if he'd put his money in index funds in 1978 instead of 1987 he'd now be about twice as rich as he actually is. Trump apparently disputes both the "before" and "after" figures, but if instead we use his own figures for 1976 (why 1976? because that's when we have figures from) he still ends up having underperformed the market.

So, I dunno, maybe he's good at dealmaking, but it seems like the main reason he's rich is that he inherited a fortune from his family. Everything he's done since to grow his wealth could have been done at least about as well (and maybe much better) by just putting the money into the stock market.
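
To make the arithmetic concrete, the comparison is just compound growth. The sketch below uses placeholder assumptions (not the figures from those articles):

```python
def compound(principal, annual_return, years):
    # Terminal wealth from buy-and-hold at a constant annualised return.
    return principal * (1 + annual_return) ** years

# Placeholder assumptions, not the articles' figures: $500M starting
# wealth in 1987, ~9% annualised total return, 28 years to 2015.
print(f"${compound(500e6, 0.09, 28) / 1e9:.1f}B")  # roughly $5.6B
```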

Replies from: ChristianKl, Jiro
comment by ChristianKl · 2015-12-11T22:01:38.292Z · LW(p) · GW(p)

If Trump had just taken all his money in 1987 and put it in index funds rather than trying to grow it himself, what would have happened? The answer, apparently, is that he'd be about 3x richer than he is now.

That's not a good comparison. There are many more people who inherited as much money and who haven't increased their wealth.

But apart from the pure numbers, when I observe Trump's style of interaction with journalists who interview him, it looks to me a lot like Carl Icahn's style. In both cases there's an extreme amount of frame control that comes across as pretty rude.

That's the kind of stuff that gets Scott Adams to write gushing articles about Trump. I think Scott Adams's predictions are wildly overconfident, but I see what he means when he speaks about Trump's language usage. Then again, I have had hypnosis training just as Scott Adams has, so it may be that the patterns are otherwise hard to spot.

Replies from: gjm
comment by gjm · 2015-12-12T00:07:28.296Z · LW(p) · GW(p)

That's not a good comparison. There are many more people who inherited as much money and who haven't increased their wealth.

It suggests that Trump has been less successful, in terms of turning money into more money, than the average business in one of those index funds. The fact that individual investors often do even worse than that by investing badly is neither here nor there.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-12T00:31:41.021Z · LW(p) · GW(p)

than the average business in one of those index funds.

Most businesses do worse than the average business in one of those index funds. Individuals don't maximize for wealth. Trump has bought a big yacht. He doesn't live the frugal life. Comparing apples to oranges makes no sense. You would have to compare Trump to similar people.

Replies from: Tem42
comment by Tem42 · 2015-12-12T01:42:48.844Z · LW(p) · GW(p)

Trump has bought a big yacht. He doesn't live the frugal life.

This is true, and a good point. But your original point was:

Trump is a billionaire because he's good at dealmaking which is a relevant skill for political campaigning.

Based on this discussion, it looks like Trump is not a billionaire because he is good at dealmaking. He had multiple ways to become a billionaire (investing in index funds, investing in gold, dealmaking, inventing Facebook).

And as you now point out, he completely failed in utilizing the money; Trump is a billionaire because he was inefficient in finding good ways of exchanging his money for utility. Another area in which I would have outperformed him :-)

Replies from: ChristianKl
comment by ChristianKl · 2015-12-12T10:44:34.574Z · LW(p) · GW(p)

Based on this discussion, it looks like Trump is not a billionaire because he is a good at dealmaking. He had multiple ways to become a billionaire (investing in index funds, investing in gold, dealmaking, inventing Facebook).

Given the path he has chosen, he wouldn't be a billionaire if he were bad at dealmaking. It's a relevant skill for winning political fights.

If you think that Trump just blurts out whatever is on his mind, you are massively misreading the situation.

comment by Jiro · 2015-12-12T17:32:34.132Z · LW(p) · GW(p)

It doesn't follow from the fact that he did worse than the optimal strategy that his strategy wasn't equally good. It could be that the strategy he followed is as optimal as the other one, but is subject to chance and he got unlucky.

You can't say "strategy A produced a better result than strategy B, therefore strategy A is a better strategy" based on a single example of someone using strategy A.

Replies from: gjm, Tem42
comment by gjm · 2015-12-12T22:21:36.152Z · LW(p) · GW(p)

The real point, for me, is not so much "Trump could have done better by investing in index funds", it's "Trump's business underperformed the market".

And, yes, underperforming the market over 30 years or so isn't proof of anything much; he could just have been unlucky. But for the exact same reasons, the fact that Trump's a billionaire isn't proof of anything much; he could just have been lucky. (He was: he inherited a lot.)

The only point I'm making is this: the fact that Trump is rich is not very good evidence that he's a great deal-maker. He's rich mostly because he inherited a fortune; someone who had inherited the same fortune and just put it into the stock market would now be richer than he is; what (admittedly limited) evidence we have of his business skill is that he's done worse than the market over the last few decades.

He might still be a great deal-maker. Or he might be a pretty terrible one. All I'm saying is that I don't see evidence that he's particularly good.

comment by Tem42 · 2015-12-12T17:50:09.140Z · LW(p) · GW(p)

You can't say "strategy A produced a better result than strategy B, therefore strategy A is a better strategy" based on a single example of someone using strategy A.

You have your example backwards.

We have a case of many many people using strategy A (index funds), and a single example of strategy B (Trump). And you can say that the strategy that worked lots of times is a better bet than the one that failed once. Strategy A is better in the limited sense that given our current information, it looks safer.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-12T19:46:48.248Z · LW(p) · GW(p)

We have a case of many many people using strategy A (index funds)

Is that true? Are there many billionaires who became billionaires through having most of their money in index funds?

Replies from: Tem42, gjm
comment by Tem42 · 2015-12-13T00:07:06.547Z · LW(p) · GW(p)

I am assuming that investment in index funds is scalable and was therefore including in my sample all long term investors in index funds. If this strategy is not scalable, I withdraw my analysis.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-13T00:41:43.363Z · LW(p) · GW(p)

The issue isn't whether investment in index funds is scalable in principle, but whether anyone with that much money actually invests that way.

I can't imagine someone who has a billion dollars simply putting it into an index fund, doing nothing with it, and living as if he didn't have any money.

If you look at lottery winners, most of them are broke relatively soon. If a lottery winner has the same amount of money ten years later, that shows good financial skill relative to other lottery winners.

comment by gjm · 2015-12-12T22:36:26.917Z · LW(p) · GW(p)

There aren't many billionaires, full stop. (A little under 2000.) And if someone (say) inherited $500M or earned it by building and selling a very successful business, and then put the money into index funds and waited for it to grow to $1B ... I don't think we'd say they became billionaires through index funds, we'd say they became billionaires by inheritance or by growing a business.

Trump became a billionaire by inheriting a fortune. The fortune he inherited was less than $1B, and it happens that the path he took from <$1B to >$1B involved running a business, but he could have invested his money conventionally and done just as well.

I would expect that few billionaires have their wealth invested conventionally, though. To get really rich you generally either need to do something exceptionally lucrative or inherit from someone who did. In the first case, (1) the chances are that you have a drive to keep doing exceptionally lucrative things and (2) you're likely to think -- with some justification -- that having done so well for yourself you can do better by carrying on than by just investing and relying on other people's success.

In the second case, the chances are that a lot of that inheritance is in the business that made your family rich. Again, if it's been so successful you're likely to think it better to keep most of your wealth in that.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-12T22:42:30.617Z · LW(p) · GW(p)

I don't think we'd say they became billionaires through index funds, we'd say they became billionaires by inheritance or by growing a business.

For how many billionaires does that happen to be true?

If that's true for nobody, then comparing Trump to nobody doesn't make much sense.

Replies from: gjm
comment by gjm · 2015-12-13T03:07:29.585Z · LW(p) · GW(p)

For how many billionaires does what happen to be true?

Replies from: ChristianKl
comment by ChristianKl · 2015-12-13T10:36:36.197Z · LW(p) · GW(p)

Became billionaires through having most of their assets in index funds.

Replies from: gjm
comment by gjm · 2015-12-13T14:43:50.167Z · LW(p) · GW(p)

That's the same question you asked 5 comments upthread from this one. Apparently you found my answer unsatisfactory since you just asked (what I now understand to be) the exact same question again, but unless you care to indicate what was unsatisfactory about it I'm not sure what to say that I haven't already.

comment by Lumifer · 2015-12-11T16:54:29.704Z · LW(p) · GW(p)

Because a lot of people are tired of and disillusioned with seasoned politicians who they see as windbags on their way to becoming millionaires.

The usual explanation for Trump is that it's a Rage Against the Machine thing.

comment by NancyLebovitz · 2015-12-12T14:04:56.049Z · LW(p) · GW(p)

Trump is a new problem. It can take time to figure out how to solve a new problem.

How were you expecting them to handle him?

You're actually asking why they couldn't handle him quickly. He may end up handled in the sense of not getting the nomination. It also seems as though it took Trump's proposal to not let Muslims enter the US to really motivate the Republican heavyweights.

comment by WhyAsk · 2015-12-12T19:14:40.641Z · LW(p) · GW(p)

I don't know how to reply to this thread as a whole, so I defaulted to this.

Like the Veiled Statue at Sais, I'm thinking this drama is revealing some truth about the US society and the US government. Some people recoil and want the veil restored, some want to see more and some don't know what to do, but no one is neutral.

What does Game Theory suggest in this situation? Is a tie the best that can be done? I don't think the "have you no decency?" retort will work here.

Also see DSM-IV, Narcissistic Personality Disorder, the first choice for any world leader according to Jerrold Post.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-12-13T02:05:58.441Z · LW(p) · GW(p)

I'm thinking this drama is revealing some truth about the US society and the US government.

It's a rerun.

comment by ChristianKl · 2015-12-13T01:44:57.496Z · LW(p) · GW(p)

Game Theory assumes a defined set of players. In politics there are many different players, each with different incentives and agendas.

I'm thinking this drama is revealing some truth about the US society and the US government.

It reveals that the US political establishment is weak – both the Republican side and the Democratic side, with Bernie Sanders.

We had three presidents from Yale in a row. Then in 2004 two people from the same Yale secret society ran against each other. In 2008 Obama, from the University of Chicago, was elected president. With Hillary the presidency might go back to Yale, but otherwise there are many people with really different backgrounds.

People seem to be fed up with politics as usual.

Replies from: Viliam
comment by Viliam · 2015-12-14T11:24:04.784Z · LW(p) · GW(p)

Yeah, I imagine having to choose between two former classmates, who are probably long-term buddies, can be demotivating. Even worse than the usual knowledge that most American presidents actually come from only a few "royal" families.

Voting for Trump or Sanders is another way to express "I want someone who does not belong to the old aristocracy". The way the system is designed, if you don't vote, it doesn't matter; if you vote for a third party, unless you succeed in coordinating half of the population (rather difficult, if the media push in the opposite direction), it still doesn't matter... so the only way you are realistically allowed to rebel is to vote for the most eccentric candidate in the primaries.

Replies from: knb
comment by knb · 2015-12-24T11:03:06.293Z · LW(p) · GW(p)

Of recent presidents, only the Bushes were an established political family. Before the Bushes the most recent time a scion of a political family was in the White House was Kennedy.