Open Thread, January 15-31, 2012

post by OpenThreadGuy · 2012-01-16T00:56:04.835Z · LW · GW · Legacy · 248 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

(I plan to make these threads from now on. Downvote if you disapprove. If I miss one, feel free to do it yourself.)


comment by David_Gerard · 2012-01-17T11:56:39.834Z · LW(p) · GW(p)

An outside view of LessWrong:

I've had a passing interest in LW, but about 95% of all discussions seem to revolve around a few pet issues (AI, fine-tuning ephemeral utilitarian approaches, etc.) rather than any serious application to real life in policy positions or practical morality. So I was happy to see a few threads about animal rights and the like. I am still surprised, though, that there isn't a greater attempt to bring the LW approach to bear on problems that are relevant in a more quotidian fashion than the looming technological singularity.

As far as I can tell, the reason for this is that in practical matters, "politics is the mind-killer" is the mind-killer.

Replies from: steven0461, J_Taylor, multifoliaterose
comment by steven0461 · 2012-01-21T22:59:31.670Z · LW(p) · GW(p)

Is there an argument behind "quotidian" besides "I have a short mental time horizon and don't like to think weird thoughts"?

Why would LessWrong be able to come to a consensus on political subjects? Who would care about such a consensus if it came about?

Replies from: David_Gerard
comment by David_Gerard · 2012-01-22T09:45:31.873Z · LW(p) · GW(p)

There's already enough of a geek-libertarian atmosphere that those of us who aren't geek-libertarians really notice it. But yeah - as I said, I'm not actually sure it would be a good idea. But the shying away from practical application to that particular part of things people are actually interested in fixing in their daily lives is a noteworthy absence.

Your implied claim that quotidian thoughts are unworthy of attention is ... look, if you want to convince people all of this is actually a good idea, then when someone asks "so, OK. What are the practical applications of reading a million words of philosophy and learning probability maths?", answering "How dare you be so short-termist" strikes me as unlikely to work. I mean, I could be wrong ...

comment by J_Taylor · 2012-01-19T02:01:23.980Z · LW(p) · GW(p)

That's because in practice, "politics is the mind-killer" is the mind-killer.

If it is not too much trouble, could you explain further what you mean by that?

Replies from: David_Gerard
comment by David_Gerard · 2012-01-19T09:06:18.613Z · LW(p) · GW(p)

It seems to be treated as a thought-stopper. "Do not go beyond this point." There are good reasons for it, but the behaviour looks just like shying away from a bad thought.

Replies from: steven0461, J_Taylor
comment by steven0461 · 2012-01-21T23:11:58.114Z · LW(p) · GW(p)

The thoughts are there, they're just not expressed on this particular site.

comment by J_Taylor · 2012-01-19T22:53:53.464Z · LW(p) · GW(p)

I always assumed it was more a discussion-stopper, meant to keep people polite and quiet. However, your interpretation is probably better.

Replies from: David_Gerard
comment by David_Gerard · 2012-01-19T23:08:44.223Z · LW(p) · GW(p)

I assume that was the intention. I'm not actually convinced that it would improve the site for us to dive headfirst into politics ... but it's odd that the stuff discussed here can't be applied anywhere else, even in the Discussion section, without a flurry of downvotes. There's a strong social norm that even the slightest hint of political discussion is inherently bad and must be avoided.

Replies from: J_Taylor
comment by J_Taylor · 2012-01-19T23:13:49.168Z · LW(p) · GW(p)

It should be noted that RationalWiki is not a website known to be, let us say, lacking in killed minds.

Replies from: David_Gerard
comment by David_Gerard · 2012-01-19T23:51:13.932Z · LW(p) · GW(p)

It is a very silly place.

comment by multifoliaterose · 2012-01-18T13:33:27.463Z · LW(p) · GW(p)

I agree.

comment by rocurley · 2012-01-16T07:12:15.564Z · LW(p) · GW(p)

I sometimes run into a situation where I see a comment I'm ambivalent about, one that I would normally not vote on. However, this comment also has an extreme vote total, either very high or very low. I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have. What do you do in this situation?

Replies from: wedrifid, MixedNuts, Alex_Altair, Wrongnesslessness
comment by wedrifid · 2012-01-16T07:54:46.310Z · LW(p) · GW(p)

I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have.

You get to modify the karma rating by one in either direction. Do so in whatever manner seems most desirable to you.

You have too much voting power if you create a sock puppet and vote twice.

Replies from: rocurley
comment by rocurley · 2012-01-16T21:13:24.828Z · LW(p) · GW(p)

Do so in whatever manner seems most desirable to you.

This is my attempt to figure out what is most desirable to me. At the moment, I want to do whatever would be the best overall policy if everyone followed it, with "best" here being defined as "resulting in the best LessWrong possible" (with a very complicated definition of best that I don't think I can specify well).

Given that that's what I want, how best to achieve it? The karma system is valuable because it makes highly upvoted posts more visible, so it's valuable to the extent that the highest-upvoted comments are the best.

It should be noted that only relative karma matters (for sorting within an article), and the karma of other posts will tend to be rising (most posts wind up with positive karma). There is some number between 0 and 1 (call it x) that represents the expected vote of someone who votes.

Because karma is relative, if you've decided you care enough to vote, you should subtract x from your vote to determine whether it counts as evidence that the post is good or bad. Do you want to vote 1-x, -x, or -1-x? Note that 1-x > 0, while the other two (not voting and downvoting) are less than 0, downvoting by quite a bit. Which of these best corresponds to the sentiment "I liked this but think it's overrated"?
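
A minimal sketch of this arithmetic; the value of x here is an assumed parameter, not a measured site statistic:

```python
# A sketch of the vote arithmetic above. x, the mean vote cast by people
# who bother to vote, is an assumed parameter, not a measured statistic.

def effective_vote(raw_vote, x):
    """Evidential weight of a vote relative to the expected vote x.

    raw_vote is +1 (upvote), 0 (abstain), or -1 (downvote). Since only
    relative karma matters, a vote is evidence about a comment's quality
    only insofar as it differs from the average vote.
    """
    return raw_vote - x

x = 0.6  # assumed: most votes are upvotes, so x lies closer to 1 than to 0
for raw in (+1, 0, -1):
    print(raw, "->", round(effective_vote(raw, x), 2))
# +1 -> 0.4, 0 -> -0.6, -1 -> -1.6: both abstaining and downvoting push a
# comment down relative to the rising average, downvoting by much more.
```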

Replies from: shminux
comment by shminux · 2012-01-16T23:53:15.043Z · LW(p) · GW(p)

I roughly follow these (prioritized) rules, sketched in code after the list:

  1. Up-vote if I want to see more posts like this/down-vote if I don't want to see more posts like this, regardless of the current total.

  2. If I do not feel very strongly about a comment, I may up- or down-vote it based on the total karma I think a comment of its kind deserves.

  3. Very occasionally, I might like or dislike the author for unrelated reasons, and decide to up-/down-vote based on that.
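
Read as a decision procedure, the rules above might look like the following sketch; every argument stands in for a subjective judgment, not a computable value:

```python
# The prioritized rules above, sketched as a decision procedure. Every
# argument stands in for a subjective judgment, not a computable value.

def vote(feel_strongly, want_more_like_this, deserved_total, current_total,
         author_bias=0):
    # Rule 1: strong feelings dominate, regardless of the current total.
    if feel_strongly:
        return +1 if want_more_like_this else -1
    # Rule 2: otherwise, nudge the total toward what the comment deserves.
    if current_total < deserved_total:
        return +1
    if current_total > deserved_total:
        return -1
    # Rule 3: very occasionally, unrelated feelings about the author.
    return author_bias  # usually 0

print(vote(feel_strongly=True, want_more_like_this=True,
           deserved_total=5, current_total=40))   # +1 despite the high total
print(vote(feel_strongly=False, want_more_like_this=True,
           deserved_total=5, current_total=40))   # -1: overrated
```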

comment by MixedNuts · 2012-01-16T13:52:28.425Z · LW(p) · GW(p)

You should vote without knowledge of total karma; otherwise, it biases comments' karma scores towards 0 (except at extremes, where it creates bandwagon effects). Power doesn't enter into it, though.
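
A toy simulation can illustrate the claimed mechanism; all numbers here (voter counts, quality scale, target spread) are invented for illustration, not measured:

```python
# A toy Monte Carlo sketch of the claim above, not a measurement of real
# voting. All parameters (voter counts, quality scale, target spread)
# are invented for illustration.
import random

def blind_total(quality, n_voters):
    # Voters judge the comment on its merits, ignoring the running total.
    total = 0
    for _ in range(n_voters):
        if random.random() < quality:
            total += 1
        elif random.random() < (1 - quality) / 2:
            total -= 1
    return total

def anchored_total(quality, n_voters):
    # Voters see the running total and vote to push it toward the score
    # they personally think the comment deserves.
    total = 0
    for _ in range(n_voters):
        target = random.gauss(10 * quality, 2)  # assumed target spread
        if total < target:
            total += 1
        elif total > target:
            total -= 1
    return total

random.seed(0)
for q in (0.2, 0.5, 0.8):
    blind = sum(blind_total(q, 100) for _ in range(200)) / 200
    anchored = sum(anchored_total(q, 100) for _ in range(200)) / 200
    print(f"quality {q}: blind {blind:+.1f}, anchored {anchored:+.1f}")
# Anchored totals stay pinned near the voters' shared target instead of
# growing with the number of voters, compressing scores toward 0 relative
# to blind voting.
```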

Replies from: Manfred, Solvent
comment by Manfred · 2012-01-16T15:18:41.977Z · LW(p) · GW(p)

You're assuming that biasing karma scores towards zero (relative to what they would be before) is bad. Sure, it could be, but I don't see any particular reason why.

comment by Solvent · 2012-01-17T06:06:44.469Z · LW(p) · GW(p)

otherwise it biases comments' karma scores towards 0 (except at extremes, where it creates bandwagon effects)

[citation needed]

comment by Alex_Altair · 2012-01-16T18:32:47.602Z · LW(p) · GW(p)

I have previously thought that maybe karma should be hidden until after you vote.

But then there's the problem that part of the point of karma is to tell you whether something is worth reading. If karma were hidden until after voting, users would still have their total karma to motivate them, and we could still hide sufficiently negative comments.

Maybe we should hide comment karma before voting, but not article karma?

comment by Wrongnesslessness · 2012-01-16T07:48:43.570Z · LW(p) · GW(p)

I would prefer this comment to be more like 0

Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and extremes irritate you?

In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.

Replies from: rocurley
comment by rocurley · 2012-01-16T20:21:44.145Z · LW(p) · GW(p)

I meant that the intrinsic value of the comment does not justify its vote count.

comment by NancyLebovitz · 2012-01-17T06:36:14.875Z · LW(p) · GW(p)

Some thinking is easier in privacy.

In a fascinating study known as the Coding War Games, consultants Tom DeMarco and Timothy Lister compared the work of more than 600 computer programmers at 92 companies. They found that people from the same companies performed at roughly the same level — but that there was an enormous performance gap between organizations. What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed. Sixty-two percent of the best performers said their workspace was sufficiently private compared with only 19 percent of the worst performers. Seventy-six percent of the worst programmers but only 38 percent of the best said that they were often interrupted needlessly.

These are interesting results, but the research was from 1985--"Programmer Performance and the Effects of the Workplace," in Proceedings of the 8th International Conference on Software Engineering, August 1985. It seems unlikely that things have changed, but I don't know whether the results have been replicated.

Replies from: saturn, gwern
comment by saturn · 2012-01-17T09:16:59.350Z · LW(p) · GW(p)

I don't know of any studies, but there are many anecdotal reports about this.

comment by gwern · 2012-01-28T18:09:53.250Z · LW(p) · GW(p)

Worth noting: this is correlational, not causal.

comment by [deleted] · 2012-01-16T19:16:43.752Z · LW(p) · GW(p)

Straw fascist ... has a point?

Replies from: Multiheaded
comment by Multiheaded · 2012-01-25T14:57:14.747Z · LW(p) · GW(p)

Yes he does, and it's a Superhappy kind of point... if all the words in this video are taken at face value, "you'll never have to think again" near the end spells "wireheading".

It all comes down to the grand debate between inconvenient, uncertain "freedom" and better-founded, more stable "happiness"; during our recent conversations, I've been leaning towards the former in some things and you've been cautioning people about how they might prefer to trade that for the latter - but in the end it's all just skirting our terminal values, so there's certainly no "correct" or "incorrect" conclusion to arrive at.

comment by billswift · 2012-01-16T14:43:44.414Z · LW(p) · GW(p)

The biggest risk of "existential risk mitigation" is that it will be used by the "precautionary principle" zealots to shut down scientific research. There is some evidence that this has been attempted already; see the fear-mongering associated with the startup of the new collider at CERN.

A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.

Replies from: amcknight, faul_sname
comment by amcknight · 2012-01-17T00:55:01.713Z · LW(p) · GW(p)

see the fear-mongering associated with the startup of the new collider at CERN.

Was there really deceptive fear-mongering? That's news to me. Fear was overblown, but I don't think anyone was using it for anything other than what they thought was safety.

A slowdown in new science is the one thing I am certain will increase future risks

I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?

Replies from: TimS, vi21maobk9vp
comment by TimS · 2012-01-17T01:55:07.876Z · LW(p) · GW(p)

Fear was overblown, but I don't think anyone was using it for anything other than what they thought was safety.

I'm not well read on the criticisms, but it wouldn't surprise me if someone vaguely influential invoked the CERN hysteria to argue for reducing the funding of basic research. But I don't have a cite for you.

I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?

It's not clear to me that asteroid impacts, major plagues, or becoming caught in a Malthusian trap are not x-risks on the same order of magnitude as man-made x-risks. (Yes, a Malthusian trap is man-made, but it can't necessarily be prevented by stopping scientific research). And for man-made x-risks, what is the mechanism for "seeing the disaster coming" that isn't essentially doing more research?

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-01-17T06:09:46.737Z · LW(p) · GW(p)

A major plague is not, strictly speaking, an existential risk, although it would cause a lot of suffering. It would delay a Malthusian trap, though...

comment by vi21maobk9vp · 2012-01-17T06:19:09.519Z · LW(p) · GW(p)

Making science slow down means making the best and brightest not do their best work in research. So this drives them to optimizing algorithmic trading.

Also, you would want to slow down the research of new things and increase the research of implications; but how do you draw the line? Is the fact that a nuclear reactor can go critical and level a nearby city useful cautionary knowledge about building a power plant, or a "stop giving them ideas" thing?

ETA: I do not mean that any of the currently running reactors is that bad — I mean the question of how to research nuclear fission in the years 1900-1925 so as to have a safe nuclear power plant before a nuclear bomb.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-19T01:42:18.124Z · LW(p) · GW(p)

If you claim that a modern nuclear reactor can level a nearby city, you are telling a falsehood.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-01-19T08:15:37.187Z · LW(p) · GW(p)

I was slightly unclear. Your statement is true.

I do not say that a modern nuclear reactor can level a city. I don't even claim or disclaim that the worst currently running nuclear reactor could level a city under reasonably imaginable conditions (I tend to agree that the fallout would be a problem, and a full-scale nuclear explosion is very unlikely, but I don't have enough evidence and knowledge to be sure either way).

I describe a situation of researching nuclear fission. Imagine that someone knows that a bigger pile of uranium emits more radiation and wants to build a power plant based on this in 10–20 years. Some research is done to be able to predict the behaviour of such a system — of course, there are no power plant designs from Earth-2010-our-timeline.

How should one do the research to prevent Chernobyl-type disasters, minimize the risk of Fukushima-type disasters, and not find something that makes the military build a nuclear bomb before the first nuclear power plant is built?

Note that one needs to do enrichment both for a power plant and for a bomb.

It is true that simply piling even warhead-grade enriched uranium will not lead to a weapon-scale explosion, but the results of building a reactor without careful research into implications are not likely to be good.

comment by faul_sname · 2012-01-16T20:46:48.882Z · LW(p) · GW(p)

Will a halt in new science undercut our ability to deal with those disasters to a greater extent than it makes those disasters more likely? What if the halt were only in certain domains, like genetic engineering of deadly viruses?

Replies from: TimS
comment by TimS · 2012-01-17T02:01:38.587Z · LW(p) · GW(p)

There's no reason to believe that we've reached the optimum point for ending scientific research in any particular field. If we'd stopped medical research in 1900, the 1918 flu pandemic would have been worse. And basic research doesn't have a label telling us how it's going to be useful, yet the evidence is pretty strong that basic research is worth the money.

Regarding your specific example, isn't it worth knowing that the mutations needed to make that virus (1) already exist in nature, and (2) aren't really that far from being naturally incorporated into a single virus? If it took 500 passages instead of 10, we'd be relieved to learn that, right? In short, it seems like this kind of research is likely to be of practical use in treating serious flu viruses in the relatively near future.

Replies from: faul_sname
comment by faul_sname · 2012-01-17T02:08:53.351Z · LW(p) · GW(p)

The question is not "Is it useful?" but "Is it useful enough to justify the risk?" In that case, the answer might well be yes, but there will probably be cases in the future where the knowledge is not worth the risk.

Replies from: TimS
comment by TimS · 2012-01-17T02:59:07.874Z · LW(p) · GW(p)

I agree that you have identified the right question. I disagree with you on when the balance shifts. In particular, I think you've picked a bad example of "dangerous" research, because I don't think the virus research you identified is a close question.

(That said, not my downvotes)

Replies from: faul_sname
comment by faul_sname · 2012-01-17T05:49:07.027Z · LW(p) · GW(p)

Upon further research, you're right. The research appears not to be as dangerous as it seemed at first glance.

comment by gwern · 2012-01-28T18:08:03.637Z · LW(p) · GW(p)

As part of my work for Luke, I looked into price projections for whole genome sequencing, as in not SNP genotyping, which I expect to pass the $100 mark by 2014. The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical <$100 by 2020.


Starting point: $4k in bulk right now, from Illumina http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle_print&ID=1561106 (I ran into a ref saying knomeBASE did <$5k sequencing - http://hmg.oxfordjournals.org/content/20/R2/R132.full#xref-ref-106-1 - but after thoroughly looking through their site, I'm fairly sure what they are actually offering is interpretation of a sequence, possibly done by Illumina.)

Projections: "The advent of personal genome sequencing" Drmanac http://wch.org.au/emplibrary/ccch/CPH_D5_L4_Genome_Sequencing.pdf Genetics in Medicine (http://journals.lww.com/geneticsinmedicine/Abstract/2011/03000/The_advent_of_personal_genome_sequencing.4.aspx)

Experts predict that the consumer price to sequence a complete human genome will drop to $1000 in 2014.[9] In our opinion, this will be achieved with existing DNA nanoarray technologies. We further believe that the existing DNA nanoarray technologies, with expected engineering advances, are capable of driving the cost per genome to significantly below $1000 in the following years. By 2020, with improved technology and reduced cost, we may expect tens of millions of personal genomes to be sequenced worldwide....We expect that advances in electronics will allow permanent lifelong storage of personal genetic variants (1 GB/person) for less than $10. [see also my previous discussion of kryder's law]

cite 9 = Metzker ML. Sequencing technologies—the next generation. Nature Rev. Genet. 2010;11:31–46 http://eebweb.arizona.edu/nachman/Further%20Interest/Metzker_2009.pdf Confusingly, on pg44:

Closing the gap between $10,000 and $1,000 will be the greatest challenge for current technology developers, and the $1,000 genome might result from as-yet-undeveloped innovations. A timetable for the $1,000 draft genome is difficult to predict, and even more uncertain is the delivery of a high-quality, finished-grade personal genome.

Where does 2014 come from? I suggest attributing it to Drmanac and not Metzker. (I've emailed him to ask where his 2014 came from.) Drmanac is commercially involved and seems very optimistic; compare his answers in http://www.clinchem.org/content/55/12/2088.full to the other experts. But there is general agreement it is possible (see also paragraph 3 in https://www.sciencemag.org/content/311/5767/1544.full ).

Here's a citation for 2013: http://content.usatoday.com/communities/sciencefair/post/2011/07/race-to-1000-human-genome-machine-intensifies/1 discussing the new sequencing device in http://www.nature.com/nature/journal/v475/n7356/full/nature10242.html (more media coverage: http://www.nature.com/news/2011/110720/full/475278a.html )

In e-mailed comments to USA TODAY, [Jonathan] Rothberg confirms his team has sequenced Moore's genes:

...Much like computing, sequencing directly on a ion chip enables the rapid and continual increase in speed and reduction in cost. At the rate of Ion's current technology improvements we will reach the $1,000 human genome in 2013 and continue to drop the cost from there.

A guy from GenomeQuest (http://www.crunchbase.com/company/genomequest) agrees with Rothberg, saying $100 (not $1000) will be hit within a decade, and $1000 by July 2013: http://blogs.discovermagazine.com/gnxp/2010/07/genomic-liftoff/#comment-27818

As well: Snyder M, Du J, Gerstein M. Personal genome sequencing: current approaches and challenges http://stanford.edu/class/gene210/files/readings/Snyder_GenesDev_2010.pdf - pg 3 has a nice graph of the super-exponential price decrease (left, blue) vs total number of sequenced genomes (right, red). Probably don't need that though for a footnote.

A promising lead would be journalist Kevin Davies's The $1,000 Genome: The Revolution in DNA Sequencing and the New Era of Personalized Medicine. I read a few reviews including one in Nature, but unfortunately no one specifically quotes a due date for price-points and the book is not on library.nu for me to search.

Hopefully that is enough for sequencing! Phew. (Something of an echo chamber.)
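
As a rough sanity check on these projections (not part of the sources above), one can fit a constant halving time to two of the quoted price points and extrapolate; the smooth exponential is an assumption that the later follow-ups contradict:

```python
# A rough sanity check, not part of the original sources: fit a constant
# halving time to two quoted price points and extrapolate. The smooth
# exponential decline is an assumption.
import math

def halving_time(p0, t0, p1, t1):
    """Years for the price to halve under exponential decline."""
    return (t1 - t0) * math.log(2) / math.log(p0 / p1)

def price_at(year, p0, t0, h):
    return p0 * 2 ** (-(year - t0) / h)

# Quoted points: ~$4,000 in early 2012 (Illumina bulk price) and the
# widely predicted $1,000 in 2014.
h = halving_time(4000, 2012, 1000, 2014)
print(f"implied halving time: {h:.1f} years")
for year in (2014, 2016, 2018, 2020):
    print(year, f"${price_at(year, 4000, 2012, h):,.0f}")
# On this curve the $100 mark falls around 2017; skepticism about <$100
# by 2020 therefore amounts to expecting the curve to flatten, which is
# what the 2013 follow-ups below report.
```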

Replies from: gwern
comment by gwern · 2012-06-15T19:27:55.321Z · LW(p) · GW(p)

"It beats Moore’s Law with a stick,” says [Raymond] McCauley, who believes that the $100 genome is only three years away.

--"Secrets of my DNA", Wired March 2011 (so 2014?)

Replies from: gwern
comment by gwern · 2013-03-17T14:57:39.532Z · LW(p) · GW(p)

BGI quotes prices as low as $3,000 to sequence a person’s DNA. ...Zhang Yong, 33, a BGI senior researcher, predicts that within the next decade the cost of sequencing a human genome will fall to just $200 or $300

Inside China’s Genome Factory, Technology Review

Replies from: gwern
comment by gwern · 2013-06-26T18:59:51.822Z · LW(p) · GW(p)

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3663089/

A few months ago, the National Human Genome Research Institute (NHGRI) updated their analysis of the cost of sequencing and, for the first time since records began, it got more expensive (Figure 1). You know the graph, the one which looks like the profile of an aqua-park waterslide, a gradual incline followed by a precipitous drop as next generation sequencing kicks in. Well, now the waterslide ends with a treacherous upward flick! We have become so comfortable in the knowledge that DNA sequencing reduces in cost at a rate that makes each run cheaper than the last, that some of the scientific community are in denial. I have even seen people present this graph at meetings and explain how sequencing is getting cheaper every day despite the fact they are standing in front of a 10 foot PowerPoint slide showing clearly that this is not true. In fact, the cost of sequencing a human genome increased by $717 (an increase of 12%) between April 2012 and October 2012. This month the new figures showed that the price fell again, but the point remains - you can forget Moore's law!

Some of you will think this merely means you need to replace the opening slide in your PowerPoint deck and tone down some of the rhetoric around $10 human genomes and the advent of free sequencing. I, however, think that the long-term ramifications may be more profound...'But...', I hear you scream, '...this is a temporary blip. Soon we will be saved by new cool technology that will plug into my laptop and sequence a genome for $10 in an hour'. In reality, is this just something we simply want to believe? There really is no reason to think that sequencing methodology is about to undergo a revolution in the near future.

I am always amazed at the self-inflicted hype that follows any hint of a story where some company has come across a new way of sequencing that is going to turn all our Illumina kits into oversized doorstops. Often this comes not from the companies themselves but the scientists who are so desperate to buy them. The hype is usually followed by hyper-critical twitter and blog commentaries when the machine in question does not appear to do what we want it to (see this revealing interview with Oxford Nanopore's Clive Brown), in a cycle that has repeated itself at least three times in the last 5 years. I begin to wonder why we don't learn from history.

http://biomickwatson.wordpress.com/2013/05/15/a-pedantic-look-at-the-cost-of-sequencing/

This graph may or may not tell a different story. The story is that yes, sequencing costs are coming down; but since late 2007/early 2008 the rate of change of that reduction has been following an upwards trend, i.e. over time, the reduction in cost from one period to the next has been increasing.

http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/

I’m going to try and lay this out in a completely technology neutral way, though I will have to mention different sequencing technologies at some point. However, I am pretty convinced of this one fact: there is not a single sequencing technology out today that can deliver 30X of a human genome for anywhere near $1000....None of the current sequencing companies can deliver 30x of a human genome for less than $1000 reagent costs (using list prices) Yes, that’s right – even ignoring points 2-5, even just buying reagents, the cost is greater than $1000 for a 30x human genome. Now, it’s possible Broad, BGI, Sanger etc can get below $1000 for the reagents due to sheer economies of scale and special deals they have with sequencing companies – but then remember they have to add in those extra charges (2-5) above. Obviously, Illumina don’t charge themselves list price for reagents, and nor do LifeTech, so it’s possible that they themselves can sequence 30x human genomes and just pay whatever it costs to make the reagents and build the machines; but this is not reality and it’s not really how sequencing is done today.

http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/#comment-2031

In my recent talk at the NIH symposium to mark the 10th anniversary of the HGP: http://bit.ly/KDHGP10 … I quoted a personal communication from Illumina CSO David Bentley, who says that in batch mode, the HiSeq can currently sequence five human genomes (presumably to 30x or higher) for a reagents list price of $25,000 — or $5,000/genome. With negotiated discounts (or if you want to estimate the wholesale cost), take 1/3 or 1/2 of that figure. So for what it’s worth, we might be edging close to the $2,500 genome, but that’s as good as it gets for now.

http://www.utsandiego.com/news/2013/Jun/19/1000-genome-mirage/2/?#article-copy

"He's right," Topol said. "If you get a bunch of genomes done at Illumina, you can get 'em for $2,500 each -- today." But in 2004, Topol said, sequencing a human genome cost $28.7 million. "Now we already have a 99.8 percent plus price reduction," Topol said. "We don't have that much further to go to get from $2,500 to $1,000. Most everyone would forecast that in a couple of years we will get to that number, with deep coverage of 40-fold, so it's accurate. I think it's clearly within reach now." "And I want to take it a step further," Topol said. "It's going to go well below $1,000 a genome in the future."...Topol agreed with Watson's take on the PeerJ statement [$100]. "That's a little far-fetched," he said. But incremental progress in getting the price lower once that $1,000 mark has been reached will continue.

Replies from: gwern
comment by gwern · 2018-11-03T20:16:43.503Z · LW(p) · GW(p)

Consumer WGSes hit ~$1000 with Veritas in 2016. In 2018, Dante Labs began offering WGS at ~$600, with a sale of $350. And we now have a rumor that Illumina will announce a $100 genome in a few months (presumably in early 2019): https://twitter.com/coregenomics/status/1058790189752049664

$100 might be a little questionable here (apparently Illumina has a history of making the most favorable possible assumptions about volume/amortization) but revisiting my original prediction from 7 years ago:

As part of my work for Luke, I looked into price projections for whole genome sequencing, as in not SNP genotyping, which I expect to pass the $100 mark by 2014. The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical <$100 by 2020.

I was too pessimistic about SNP genotyping (it was actually more like $50 in 2014, I was completely unaware of UK Biobank at the time or its scale or savings), definitely right about '<$1000 by 2020', and I think I will turn out to be somewhat wrong about WGS being <$100 by 2020: even if Illumina is fudging some numbers for early 2019 at $100, it'll have almost a whole year to drop the cost a little more, and honestly, even if it's actually $110 does it make a difference considering how many things you can use whole genomes for & general medical overhead? You can hardly get some prescription aspirin these days for $100...

Overall self-assessment: I was more right than I had any right to be in that set of predictions given I was using some simple extrapolating and adding some pessimism/mean-reversion. Not bad, past-self!

Replies from: gwern
comment by gwern · 2018-11-19T13:45:26.232Z · LW(p) · GW(p)

Just got a Veritas-related email:

Veritas Genetics will be offering their MyGenome product (30x whole genome sequencing) normally $999, for $199 to the first 1000 customers, starting tomorrow, Monday, 9 AM ET.

Even allowing for promotional discounts, I'm still impressed. EDIT: Dante Labs too!

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-11-19T17:02:55.909Z · LW(p) · GW(p)

Thanks! I've been conflicted about which SNP service to use, and now I don't have to decide. :) Do you know if there are any potential downsides to consenting to let Veritas use the data for research? Would you tick that box?

Replies from: gwern
comment by gwern · 2018-11-20T03:09:35.287Z · LW(p) · GW(p)

Yes. In fact, I am already a PGP participant.

I am not sure you necessarily want to use Veritas/Dante Labs (Veritas might be sold out already based on their Twitter), as WGS reports are usually pretty raw and you won't get all of the interpretive services somebody like 23andMe would provide. I don't believe 23andMe or the other major services let you just upload sequencing data either, only download. Offhand, I'm not sure how easy it would be to even use Promethease (not that Promethease is very worthwhile, as most of their report is candidate-gene junk). Personally, I am holding off on getting a WGS done. I don't know what I would do with mine, and the price should keep getting lower.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-11-20T05:47:42.952Z · LW(p) · GW(p)

Oh, I misunderstood the purpose of your comment and thought you were recommending people to take advantage of the sale. I knew it was going to sell out quickly so I made the order prior to posting my question. (I gave consent for research since it said that I could withdraw that consent at any time.)

It looks like Veritas offers VCF file download so it's compatible with Promethease but the format it uses only gives 16,000 genotypes. Also apparently Veritas used to provide the full BAM raw data, but no longer does, which is disappointing, so I'll probably cancel my order and take advantage of the Dante $199 sale instead which does offer BAM. Looks like sequencing.com lets you upload a BAM file and offers a bunch of apps to do different analyses on it.

Replies from: gwern
comment by gwern · 2018-11-20T17:53:18.702Z · LW(p) · GW(p)

No, I was mentioning the sales because they offer a measurement of what WGS costs end to end now - presumably Veritas/Dante or Nebula are offering at close to their marginal cost (as they aren't big or wealthy enough to afford to give it away and WGSes aren't exactly a repeat-customer business). As far as Dante goes, I have seen some complaints about very slow or inconsistent service; on IRC, one of us did a previous sale and his original spit sample didn't work, so they sent him another tube and forgot the postage. Not sure if he's gotten his WGS yet either.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-11-20T18:54:43.350Z · LW(p) · GW(p)

I see. Given that I haven't done a genotype yet, would you suggest that I go through with Dante anyway, or wait until the price comes down further? (Presumably it would definitely be worth doing at $100?)

Replies from: gwern
comment by gwern · 2018-11-22T03:43:02.563Z · LW(p) · GW(p)

Well, do you have anything in mind specifically to do with it? If you do, it may not be worthwhile to wait. But if you don't have something which needs to be done with a WGS right now, you probably aren't going to be struck with inspiration once you get your download either.

comment by [deleted] · 2012-01-20T13:12:27.356Z · LW(p) · GW(p)

I'm reading Moldbug's Patchwork and considering it as a replacement for democracy. I expected it to be a dystopia, but it actually sounds like a neat place to live; it is, however, a scary Eutopia.

Has anyone else read this recently?

Replies from: TimS, asr, Multiheaded, Multiheaded
comment by TimS · 2012-01-20T14:58:31.868Z · LW(p) · GW(p)

I've read through the pieces, and I'm struggling to come up with something to say that a reactionary absolutist like Moldbug would find interesting. For example, in the first piece linked, Moldbug says (Let's ignore that the last sentence is questionable as a matter of historical fact):

if you want stable government, accept the status quo as the verdict of history. There is no reason at all to inquire as to why the Bourbons are the Kings of France. The rule is arbitrary. Nonetheless, it is to the benefit of all that this arbitrary rule exists, because obedience to the rightful king is a Schelling point of nonviolent agreement. And better yet, there is no way for a political force to steer the outcome of succession - at least, nothing comparable to the role of the educational authorities in a democracy.

I don't disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there's a reason why mature absolute monarchies (like Louis XIV) invented "divine right of kings." I assert that you can't throw that away (as Moldbug does) and assume that nothing changes about the setup.

My next point would be that there is no reason to expect a government to make a profit. But Moldbug's commitment to accepting the verdict of history means that he wouldn't find this very persuasive. If one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there's every reason to expect that failing to continue winning will lead in short order to your replacement.

Replies from: None
comment by [deleted] · 2012-01-20T16:43:21.299Z · LW(p) · GW(p)

My next point would be that there is no reason to expect a government to make a profit.

The idea is that it is possible to make the cake bigger by having efficient government. This is why he invokes Laffer curves as relevant concepts.

I find myself sympathetic to this. Say you give stock matching current GDP spending to foundations that provide free healthcare to those who can't afford it, preserve natural habitat, etc., but come up with a government that is more efficient at providing funds for all these endeavours; you then get more spent on healthcare or environmentalism in an absolute sense than otherwise.

If you want to do efficient charity, you don't work in a soup kitchen; you work hard where you have a comparative advantage to earn as much money as possible, and then donate it to an efficient charity. Moldbug may not approve, but I actually think his design, with the right ownership structure and some properly designed foundations, might be a much better "goodness-generating machine" than a democratic US or EU might ever be.

I also like the idea of being able to live in a society with laws that you can agree with; if you don't like them, you just leave and go somewhere where you do agree with them.

The profit motive is transparent, and it is easier to track than "doing good", which as the general goal of government is far less transparent. As a shareholder or employee in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals.

It also has the neat property of seemingly guaranteeing human survival in a Malthusian em future (check out Robin Hanson's writing on this). As long as humans own stocks, it wouldn't matter if they were made obsolete by technology; they could still collect a vast amount of rent, which would continue growing at a rapid rate for millennia or even millions of years. The real problem is how these humans avoid getting hacked into being consumption machines by various transhuman service providers, and instead optimize for Eudaimonia.

I don't disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there's a reason why mature absolute monarchies (like Louis XIV) invented "divine right of kings." I assert that you can't throw that away (as Moldbug does) and assume that nothing changes about the setup.

He says robot armies and cryptographically locked weaponry eliminate the need to care about what your population thinks. The technology simply wasn't there in the time of Louis XIV. The governing structure has no need to mess with people's minds in various ways to convince them it is a just system.

And the thing is, while such technology as ubiquitous surveillance or automated soldiers in the hands of government sounds scary, there seems to be no relevant reason at all to think other government types won't have this technology anyway. Worse, the technology to modify your mind in various ways will also be rapidly available (as if current brainwashing and propaganda technology weren't scary enough).

In other words, people living in such a Patchwork instead of the futuristic US or the PRC would trade political freedoms for freedom of thought and association. The latter two are not really guaranteed in any sense, but he gives several strong reasons why a sovereign corporation might have an interest in preserving them - reasons that most other states, as self-stabilizing systems, don't seem to have.

But Moldbug's commitment to accepting the verdict of history means that he wouldn't find this very persuasive. if one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there's every reason to expect that failing to continue winning will lead in short order to your replacement.

He basically says that, whether we like it or not, might does make right. The USA defeated Nazi Germany not because it was nobler but because it was stronger. This is why Germany is a democracy today. The US defeated the Soviet Union not because it was nobler but because its economy could support more military spending, and the Soviet Communist party couldn't or wouldn't use military means as efficiently as, say, the Chinese to stomp out dissenting citizens. This is why Russia is a democracy today. Democracies won because they were better at convincing people that they were legitimate, their economies were better, and as a result of these two they were better at waging war than other forms of government.

He also seems very confident that if his proposed form of government was enacted somewhere it would drastically out-compete all existing ones.

Replies from: TimS
comment by TimS · 2012-01-20T18:00:39.079Z · LW(p) · GW(p)

The profit motive is transparent, and it is easier to track than "doing good", which as the general goal of government is far less transparent. As a citizen in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals.

Many government programs provide services to people who can't afford the value of the service provided. Police and public education provided to inner cities cannot be paid for from the wealth of the beneficiaries. Moldbug complains about the inefficiency of the post office, but that problem is entirely caused by non-efficiency-based commitments like delivering mail to middle-of-nowhere small towns. Without those constraints, USPS looks more like FedEx. That's not a Moldbuggian insight - everyone who's spent a reasonable amount of time thinking about the issue knows this trade-off.

He says robot armies and cryptographically locked weaponry eliminate the need to care about what your population thinks. The technology simply wasn't there in the time of Louis XIV. The governing structure has no need to mess with people's minds in various ways to convince them it is a just system.

And I simply don't believe this is a likely outcome. There will be times when a realm does not want to use its full arsenal of unobtanium weapons (e.g. to deal with jaywalking and speeding). Anyway, isn't it easier (and more efficient) to use social engineering to suppress populist sedition?

The US defeated the Soviet Union . . .

I mostly agree with your analysis, in that I think we've been lucky in some sense that the good guys won. But doesn't Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon?

He also seems very confident that if his proposed form of government was enacted somewhere it would drastically out-compete all existing ones.

I think it likely that any system of government backed by unobtanium weapons would defeat any existing government system. It's not clear to me that a consent-of-the-governed system backed by the super weapons wouldn't beat Moldbug's absolutist system. And even if that isn't true, why should we want a return to absolutism? It's painfully obvious to me that my rejection of absolutism is the basis of most of my disagreement with Moldbug. I think government should provide "unprofitable" services, and he doesn't.

Replies from: None
comment by [deleted] · 2012-01-20T18:20:15.861Z · LW(p) · GW(p)

I mostly agree with your analysis, in that I think we've been lucky in some sense that the good guys won.

The good guys did win, because I'm not a National Socialist or a Communist or a Muslim or a Roman. But I don't think we were lucky. "The Gift We Give Tomorrow" should illustrate why I don't think you can say we were "lucky". By definition, anyone who won would have made sure we viewed them as more or less the good guys.

But doesn't Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon?

That wasn't Moldbug's argument about the USSR; it was mine :)

Yes, if I recall right, his model goes something like this: the State Department wanted to make the Soviet Union its client, much like, say, Britain or West Germany or Japan were; it viewed US society and Soviet society as on a converging path, with the Soviet Union's ruling class having its heart in the right place but sometimes going too far - something they could never do with any truly right-wing regime. This is why they often basically sabotaged the Pentagon's efforts and attempts at client-making. The Cold War and the Third World in general would never have been as bloody if the State Department vs. Pentagon civil war by proxy hadn't been going on.

Anyway, isn't it easier (and more efficient) to use social engineering to suppress populist sedition?

Sure, but I don't want to live in a society that takes this logic to its general conclusion. I want to be able to dislike the government I'm living under even if I can't do anything about it. Many other people might not want to live in such a society either, and we may be willing to tolerate living in a different, less wealthy part of Patchland, or paying higher taxes, for it.

consent-of-the-governed

What is that? Can we unpack this concept?

I think government should provide "unprofitable" services, and he doesn't.

I'm trying to figure out what you mean by this. Can't we have a "Deliver mail to far-off corners" foundation and give it 0.5% of the stock of Neo-Washington Corp. when the thing takes off? Do you in principle object to government being for profit, or do you just think that nonprofits funded by shares of the government, at the same fractions of GDP as they have right now, couldn't provide services of equal quality? What is the government's mission then? Which unprofitable services should it provide? All possible ones? Those that have the most eloquent rent-seekers? Those that are "good"? Can you then define the mission of government in words that are a bit more specific than universal benevolence? And if democratic government is so good at that, why don't we have a seed AI report to Congress for approval of each self-modification? Don't worry, the AI also gets one vote.

Replies from: TimS
comment by TimS · 2012-01-20T18:45:12.438Z · LW(p) · GW(p)

So, Moldbug's Cold War explanation is total nonsense? I think the Cold War follows after WWII even if the USA was ruled by King Truman I and the USSR was ruled by King Stalin I. More formally, I think political realism is the empirically best description of international relations.


Anyway, you asked about patches and realms, and I said that governments do the unprofitable. If it were profitable, government wouldn't need to do it. Moldbug seems to say that we ought not to want government to do the unprofitable. That explains his move to a corporate form of government, but it doesn't justify the abandonment of the role that every government in history has decided it wanted to do.

Replies from: None, None
comment by [deleted] · 2012-01-20T18:50:53.854Z · LW(p) · GW(p)

You completely missed my point. Who gets to decide what is unprofitable? Who decides which unprofitable things are worth doing? The set of all possible unprofitable activities is vastly larger than the set of profitable ones.

If it were profitable, government wouldn't need to do it.

You do realize we were talking about the USSR just a few seconds ago, right? I guess Russia was a bad place to make cars, so the government had to step in and do that.

Replies from: TimS
comment by TimS · 2012-01-20T18:58:56.770Z · LW(p) · GW(p)

Communism (and socialism in general) has inefficient (i.e. not wealth-maximizing) preferences for wealth distribution. So no, it doesn't surprise me that massive government planning was required to try to implement the communist preference. If equal wealth distribution were wealth-maximizing, then the government wouldn't have needed to intervene to make it happen.

This isn't a groundbreaking point. It falls out straightforwardly from the economic definition of efficiency.

Replies from: None
comment by [deleted] · 2012-01-20T19:04:40.515Z · LW(p) · GW(p)

I repeat myself:

You completely missed my point. ... Who decides which unprofitable things are worth doing?

Unless you are arguing that Communist preferences for wealth redistribution, and the opportunity cost they entail, were automatically representative of those of "the Russian people" because, duh, they had the October Revolution and a civil war in which the Communists won. In which case I will ask why the same would not hold of North Korea, and would also ask you: if all regimes deciding things are representative of "the people", why do we even need this democracy thing? Obviously Ancient Egyptian peasants wanted to be involved in the unprofitable business of building pyramids for Pharaoh.

If we are not sure the ancient Egyptian monarchies captured people's preferences for which unprofitable activities should be done according to the values of those indirectly funding them, if the same cannot be said of Rome, if the same cannot be said of Communism ... why do you think it can be said of, say, the US government? Why do you think this is more efficient than having government be a money-making machine that gives its citizens free money because they own stock, and lets them spend it on whatever charity (which also by definition does unprofitable things) or indulgence (which is often also unprofitable - whenever I stop to smell the flowers or go watch a movie, I don't do it to maximize my profit in currency, but to hopefully maximize my utility) they want? Or, if that interferes with the operation of the state, why not have the stockholders spend it in some other part of Patchland that specializes in being a great place to spend your money for good causes or fun?

And if you don't think people's preferences even matter when deciding what unprofitable stuff to spend resources on ... well, whose preferences should matter then?

I want unprofitable stuff that I like done too - like helping people not have to die if they don't want to. All else being equal, though, I don't much care who does it. BTW, I'm not too sure about Moldbug's government type either; I wouldn't volunteer to live there just based on his arguments, but I do think he does a good job of dealing with the standard arguments in favour of democracy. I do think a city or a patch of desert somewhere to test the form of government might be a good idea.

Replies from: TimS
comment by TimS · 2012-01-20T19:34:46.200Z · LW(p) · GW(p)

Who decides which unprofitable things are worth doing?

For Moldbug, the answer is . . . not you. Unless the CEO of the realm puts your charity on the cleared list. But I suspect that most of the things I would want to do with my dividends would be prohibited as security risks. Political control without thought control has never happened, and I don't think that super weapons could make it happen.

Replies from: None
comment by [deleted] · 2012-01-20T19:43:30.313Z · LW(p) · GW(p)

For Moldbug, the answer is . . . not you.

I'm interested in your answer.

Political control without thought control has never happened, and I don't think that super weapons could make it happen.

That is a good argument. Overall I think Moldbug does a better job of giving decent explanatory power for the modern world than providing workable solutions (if there are any) for its ills. :)

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T17:07:22.862Z · LW(p) · GW(p)

decent explanatory power for the modern world

Please elaborate on how, completely disregarding political realism in favor of an overarching conspiracy theory (as already mentioned above) and just ignoring the whole iceberg of neuroscience, evolutionary psychology, etc., one can arrive at a decent explanation for it all. "The leftist social sciences professor down the street is a witch, she did it" is not up to my standards of "decent".

Replies from: None
comment by [deleted] · 2012-01-21T17:57:48.438Z · LW(p) · GW(p)

"The leftist social sciences professor down the street is a witch, she did it"

That is not Moldbug's model. How much have you read?

He has decent models, in my mind, for many things, including the genesis of the leftward social movement over the past few decades or centuries, the genesis of modern morality, US foreign policy, the sociological aspects of the development of ideology, etc.

I don't think I'm that much of an outlier in my estimation here; I've heard many people I know from LessWrong express interest in his thought (for example gwern, or Vladimir_M). He even had a live recorded debate with Robin Hanson back in 2010 on Futarchy (though he lost; everyone loses debates to RH ;) ). Top posters like Yvain and Eliezer also seem to have read some of Moldbug, since they refer to his writing occasionally, etc. People sometimes agree and other times disagree with him, but I think they generally don't view him as a "crank".

I really don't have the time right now to discuss all of this, but there are a few older discussions in the comment sections of various LessWrong articles (just search for "Moldbug" on the site) that may interest you if you'd like to learn more about his stuff and why people find it interesting.

My recent thread on one of his posts also had some discussion.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T18:11:37.847Z · LW(p) · GW(p)

He has decent models in my mind for many things including the genesis of the leftwards social movement for the past few decades or centuries, the genesis of modern morality, US foreign policy, the sociological aspect development of ideology ect.

I have read all of that, at first glance expecting a fun and intriguing contrarian ride. It came across as considerably more insane (in the LW/OB sense) and less grounded in reality than the milder forms of good ol' fascism to me.

Replies from: None
comment by [deleted] · 2012-01-21T18:27:17.076Z · LW(p) · GW(p)

I generally don't see what's so insane about WASP Blue State Protestantism being the sociological, philosophical and cultural predecessor of WASP Blue State progressivism.

Or about saying that modern ethics aren't the product of pure reason and moral progress but a clear descendant of older Western morality.

Or that US foreign policy is often crazy and mixed up because the US isn't a monolithic entity, and that, more specifically, the interests of the State Department and the Pentagon diverge.

Or that in a modern parliamentary democracy power is wielded by opinion makers (academia and journalists) who create the intellectual fashions that the rich and well-positioned subscribe to, and, with a twenty-or-so-year lag, the general population (who adopt them not just to copy the elites but because legislation and education are updated to push the new beliefs on them), which then votes for representatives who are supposed to keep the unelected elites in check and working for their interests. Culturally, any ethical ideas or value sets adopted by elite academia are assured long-term victory.

I think that covers my examples.

I have read all of that, at first glance expecting a fun and intriguing contrarian ride. It came across as considerably more insane (in the LW/OB sense) and less grounded in reality than the milder forms of good ol' fascism to me.

Meh, fascists are often too mystical for my tastes (try reading Julius Evola). Religious paleocons are a bit better, but their axioms are all messed up, believing in God and all that. The few irreligious ones are often lots of fun.

comment by [deleted] · 2012-01-20T23:29:09.183Z · LW(p) · GW(p)

source

So we can separate California's expenses into two classes: those essential or profitable for California as a business; and those that are unnecessary and wasteful, such as feeding the poor, etc, etc. Let them starve! Who likes poor people, anyway? And as for the blind, bumping into lampposts will help them build character. Everyone needs character.

I am not Steve Jobs (I would be very ill-suited to the management of California), and I have not done the math. But my suspicion is that eliminating these pointless expenses alone - without any other management improvements - would turn California, now drowning in the red, into a hellacious, gold-spewing cash machine. We're talking dividends up the wazoo. Stevifornia will make Gazprom look like a pump-n-dump penny stock.

And suddenly, a solution suggests itself.

What we've done, with our separation of expenses, is to divide California's spending into two classes: essential and discretionary. There is another name for a discretionary payment: a dividend. By spending money to heal the lame, California is in effect paying its profits to the lame. It is just doing it in a very fiscally funky manner.

Thus, we can think of California's spending on good works as profits which are disbursed to an entity responsible for good works. Call it Calgood. If, instead of spending $30 billion per year on good works, California shifts all its good works and good-workers to Calgood, issues Calgood shares that pay dividends of $30 billion per year, and says goodbye, we have the best of both worlds. California is now a lean, mean, cash-printing machine, and the blind can see, the lame can walk, etc, etc.

Furthermore, Calgood's shares are, like any shares, negotiable. They are just financial instruments. If Calgood's investment managers decide it makes financial sense to sell California and buy Google or Gazprom or GE, they can go right ahead.

So without harming the poor, the lame, or the blind at all, we have completely separated California from its charitable activities. The whole idea of government as a doer of good works is thoroughly phony. Charity is good and government is necessary, but there is no essential connection between them.

Of course, in real life, the idea of Calgood is slightly creepy. You'd probably want a few hundred special-purpose charities, which would be much more nimble than big, lumbering Calgood. Of course they would be much, much more nimble than California. Which is kind of the point.

We could go even farther than this. We could issue these charitable shares not to organizations that produce services, but to the actual individuals who consume these services. Why buy canes for the blind? Give the blind money. They can buy their own freakin' canes. If there is anyone who would rather have $100 worth of free services than $100, he's a retard.

Some people are, of course, retards. Excuse me. They suffer from mental disabilities. And one of the many, many things that California, State of Love, does, is to hover over them with its soft, downy wings. Needless to say, Stevifornia will not have soft, downy wings. It will be hard and shiny, with a lot of brushed aluminum. So what will it do with its retards?

My suspicion is that Stevifornia will do something like this. It will classify all humans on its land surface into three categories: guests, residents, and dependents. Guests are just visiting, and will be sent home if they cause any trouble. Residents are ordinary, grownup people who live in California, pay taxes, are responsible for their own behavior, etc. And dependents are persons large or small, young or old, who are not responsible but need to be cared for anyway.

The basic principle of dependency is that a dependent is a ward. He or she surrenders his or her personal independence to some guardian authority. The guardian holds imperium over the dependent, ie, controls the dependent's behavior. In turn the guardian is responsible for the care and feeding of the dependent, and is liable for any torts the dependent commits. As you can see, this design is not my invention.

At present, a large number of Californians are wards of the state itself. Some of them are incompetent, some are dangerous, some are both. Under the same principle as Calgood, these dependents can be spun off into external organizations, along with revenue streams that cover their costs.

Criminals are a special case of dependent. Most criminals are mentally competent, but no more an asset to California than Jew-eating crocodiles. A sensible way to house criminals is to attach them as wards to their revenue streams, but let the criminal himself choose a guardian and switch if he is dissatisfied. I suspect that most criminals would prefer a very different kind of facility than those in which they are housed at present. I also suspect that there are much more efficient ways to make criminal labor pay its own keep.

And I suspect that in Stevifornia, there would be very little crime. In fact, if I were Steve - which of course I'm not - I might well shoot for the goal of providing free crime insurance to my residents. Imagine if you could live in a city where crime was so rare that the government could guarantee restitution for all victims. Imagine what real estate would cost in this city. Imagine how much money its owners would make. Then imagine that Calgood has a third of the shares. It won't just heal the lame, it will give them bionic wings.

This is why choosing the state as the actor that must bear unprofitable activities, regardless of on whose behalf, seems to me less an aesthetic choice, or one that should be based on historical preference, than an economic question that deserves some investigation. The losses of utility from such a trivial preference seem potentially large.

Replies from: Bugmaster, TimS
comment by Bugmaster · 2012-01-21T00:57:41.569Z · LW(p) · GW(p)

Charity is good and government is necessary, but there is no essential connection between them.

I suppose it depends on what you see as "charity". For example, free childhood vaccinations can be seen as charity -- after all, why shouldn't people just buy their own vaccines on the free market? -- but having a vaccinated population with herd immunity is, nonetheless, a massive public good. The same can be said of public education, or, yes, canes for blind people.

comment by TimS · 2012-01-21T00:39:47.272Z · LW(p) · GW(p)

Let's do some [Edit: more abstract] analysis for a moment. [Edit: I suggest that] government is the entity that has been allocated the exclusive right to legitimate violence. And the biggest use of this threat of violence is compulsory taxation. Why do people put up with this threat of violence? As Thomas Hobbes says, to get out of the state of nature and into civil society. (As Moldbug says, land governed by the rule of law is more valuable than ungoverned land).

What does the government do with the money it receives? At its core, it provides services to people who don't want them. The quote mentioned letting prisoners choose their jailors. It probably would increase prisoner utility to offer the choice. It might even save money (for example, some prison systems mandate completing a GED if the prisoner lacks a high school degree). But that's not what society wants to do to criminals. If the government uses compulsory power to fund prisons, I assert a requirement that the spending vaguely correspond to taxpayer desires for the use of the funds. (Moldbug seems to disagree.)

Consider another example, the DMV. At root, the government threatens violence if you drive on the road without the required government license, on the belief that the quality of driving improves when skill requirements are imposed, and that the requirements will not (or cannot) be imposed without the threat of violence. It is common knowledge that going to the DMV to get the license is a miserable experience, because the lines are long and the workers are not responsive to customer concerns. By contrast, the McDonald's next door is filled with helpful people who quickly provide you with the service desired as efficiently as possible. Why the difference? In part, it is the compulsory nature of the license; in part, it is that the benefits of improved service at the DMV do not accrue to anyone working for or supervising the DMV. See James Wilson's insightful discussion (pages 113-115 & 134-136). (There's also an interesting discussion of the post office on pp. 122-25.) I assert that much "inefficiency" in government is simply the deadweight loss inherent in compulsory taxation, which is one part of government Moldbug doesn't want to abolish.

And there's less justification for calling an entity with compulsory tax powers a profit-making entity. In what way has Moldbug's Calgood acted in a competitive marketplace? Voting with your feet is just as possible in the United States or Western Europe today as it would be in the patch & realm system.

Replies from: Prismattic
comment by Prismattic · 2012-01-21T02:01:57.270Z · LW(p) · GW(p)

For the libertarian, government is the entity that has been allocated the exclusive right to legitimate violence.

Max Weber was a libertarian?

Replies from: TimS
comment by TimS · 2012-01-21T03:42:57.229Z · LW(p) · GW(p)

Hmm. It's embarrassing to admit I'm not as well read as I'd like. I'd only ever heard the concept in libertarian discussions. Thanks.

comment by asr · 2012-01-21T01:39:51.156Z · LW(p) · GW(p)

Every time I read Moldbug's stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem.

The reason we have government isn't that we sat down once upon a time in the state of nature to design a political system. We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it.

Uncontrolled violence turns out to be destructive to both the subject of the violence and the wielder -- it turns out that it's potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord's entourage.

Politically, we don't do welfare spending and criminal justice purely for the fuzzies, or solely because they're ends in themselves. Every so often, we have organized and vigorous protests against the status quo. When this happens, those in power can either appease the protesters, use force to crush the protesters, or try to make them go away quietly without violence. If the protesters are determined enough, this last approach doesn't work. And the government can either use clubs, or buy off the protesters.

It turns out that power structures that become habitually brutal don't do too well. People who get in the habit of using force aren't good neighbors, aren't good police, and aren't trusty subordinates. Bystanders don't want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power -- or else overthrown.

Moldbug talking about cryptographically controlled weapons is missing the point: we don't want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.

Replies from: Jayson_Virissimo, None, None, gwern, None
comment by Jayson_Virissimo · 2012-01-21T10:53:04.435Z · LW(p) · GW(p)

The reason we have government isn't that we sat down once upon a time in the state of nature to design a political system.

I believe the main thrust of Moldbug's writings is that we should be (but aren't) solving an engineering problem rather than moralizing when we engage in politics (although he seems to fall into this trap himself, what with all his blaming of "leftists" for everything under the sun).

Replies from: taelor, asr
comment by taelor · 2012-01-24T05:28:54.207Z · LW(p) · GW(p)

So much of Moldbug's belief system, and even his constructed identity as an "enlightened reactionary", rides on his complete rejection of whiggish historical narratives; however, he takes this to such an extent that he ends up falling into the very trap that the Whig Interpretation's original critic, Herbert Butterfield, warned of in his seminal work on the subject:

Further, it cannot be said that all faults of bias may be balanced by work that is deliberately written with the opposite bias; for we do not gain true history by merely adding the speech of the prosecution to the speech for the defence; and though there have been Tory – as there have been many Catholic – partisan histories, it is still true that there is no corresponding tendency for the subject itself to lean in this direction; the dice cannot be secretly loaded by virtue of the same kind of original unconscious fallacy.

comment by asr · 2012-01-22T02:42:10.204Z · LW(p) · GW(p)

I believe the main thrust of Moldbug's writings is that we should be (but aren't) solving an engineering problem rather than moralizing when we engage in politics (although he seems to fall into this trap himself, what with all his blaming of "leftists" for everything under the sun).

Except none of his prescriptions are sensible engineering. Crypto-controlled weapons as a foundation for social order are more science fiction than sensible design for controlling violence in society. It's much too easy for people to build or buy weapons, or else to circumvent the protections. Pinning your whole society on perfect security seems pretty crazy from a design point of view.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-01-22T04:40:34.206Z · LW(p) · GW(p)

Right, I don't think he succeeds either. I was merely trying to summarize his project as I think he sees it.

comment by [deleted] · 2012-01-21T10:12:45.467Z · LW(p) · GW(p)

Bystanders don't want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power -- or else overthrown.

Just because governments often employ violence just before they lose power does not mean that employing violence was the cause of their downfall. Many sick people take medication just before they die. Sure, violence may do them no good, just as aspirin does no good for a brain tumour, but it is hard to therefore argue that the aspirin was the cause of death. The assertion is particularly dubious since, historically speaking, governments have used a whole lot of violence and it actually seems to have often saved them. Even in modern times we have plenty of examples of this.

This Robin Hanson post seems somewhat relevant:

Once upon a time, poor masses suffered under rich elites. Then one day the poor realized they could revolt, and since then, the rich help the poor, fearing the poor will revolt if they ever feel they suffer too much.

Revolution experts mostly reject this myth; famous revolutions happened after things had gotten better, not worse, for the poor.

comment by [deleted] · 2012-01-21T08:58:34.898Z · LW(p) · GW(p)

We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it.

The state can be thought of as a stationary bandit who, instead of pillaging and burning a village of farmers, extorts them, and eventually starts making sure no one else pillages or burns them, since that would interfere with the farmers paying him. The roving bandit has no incentive to ensure the sustainability of a particular farming settlement he parasitizes; a stationary bandit in a sense farms the settlement.

Government can expediently be defined, beneath all the fluff, as a territorial monopolist of violence. There is a trade-off between the government violence used to prevent anyone else from exercising violence and the violence of other organized groups. How do we know we are at the optimal balance in a utilitarian sense?

Also, Moldbug doesn't want to do away with government; he wants to propose a different kind of government. And we have in the past had systems of government that were the result of people sitting down and trying to design a political system. To take modern examples (though I could easily pull out several Greek city-states): perhaps the Soviet Union was a bad design, but the United States of America literally took over the world. In any case, this demonstrates that new forms of government (not necessarily very good ones) can be designed and implemented.

Uncontrolled violence turns out to be destructive to both the subject of the violence and the wielder --

Government violence is ideally more predictable than the violence it prevents (that's the whole reason we in the West think the rule of law is a good idea). Sure, the government has tools other than violence of its own for preventing violence, but ultimately all law is violence, in the sense of the WHO definition:

...as the intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment or deprivation.

You can easily make the violence painless by, say, sedating a would-be rapist with the stun setting on your laser gun, and you can also eliminate the suffering of imprisoning him by modifying his brain with advanced tools. But a person whose mind is changed without their consent, or who is given a choice between six years' imprisonment and having their brain modified, has surely still experienced violence according to the above definition.

it turns out that it's potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord's entourage

The point of the cryptographically controlled weapons is that you then need only a very small group of people, who think being paid handsomely by Blackwater beats being a citizen-soldier, to work them.

comment by gwern · 2012-01-28T18:13:25.097Z · LW(p) · GW(p)

Every time I read Moldbug's stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem.

Abba Lerner, "The Economics and Politics of Consumer Sovereignty" (1972):

"An economic transaction is a solved political problem... Economics has gained the title Queen of the Social Sciences by choosing solved political problems as its domain."

comment by [deleted] · 2012-01-21T10:37:50.859Z · LW(p) · GW(p)

Moldbug talking about cryptographically controlled weapons is missing the point: we don't want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.

In raw utility, the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes, where the leader being violent towards one person means the leader being violent towards a non-trivial fraction of the population.

Also, are you really that sure that people wouldn't want to live in a Neocameralist system? When you say efficiency, I don't think you realize how emotionally appealing clean streets, good schools, low corruption and perfect safety from violent crime or theft are. What would be the price of real estate there? It is not a coincidence that he gives Singapore as an example: a society that uses more violence against its citizens than most Western democracies.

Capital punishment is a legal form of punishment in Singapore. The city-state had the highest per-capita execution rate in the world between 1994 and 1999, estimated by the United Nations to be 1.357 executions per hundred thousand of population during that period.[1] The next highest was Turkmenistan with 0.143 (which is now an abolitionist country). Each execution is carried out by hanging at Changi Prison at dawn on a Friday.

Singapore has had capital punishment since it was a British colony and became independent before the United Kingdom abolished capital punishment. The Singaporean procedure of hanging condemned individuals is heavily influenced by the methods formerly used in Great Britain.

Furthermore, consider this:

Under the Penal Code,[12] the commission of the following offences may result in the death penalty:

  • Waging or attempting to wage war or abetting the waging of war against the Government*
  • Offences against the President’s person (in other words, treason)
  • Mutiny
  • Piracy that endangers life
  • Perjury that results in the execution of an innocent person
  • Murder
  • Abetting the suicide of a person under the age of 18 or an "insane" person
  • Attempted murder by a prisoner serving a life sentence
  • Kidnapping or abducting in order to murder
  • Robbery committed by five or more people that results in the death of a person
  • Drug trafficking
  • Unlawful discharge of firearms, even if nobody gets injured

Internal Security Act

The preamble of the Internal Security Act states that it is an Act to "provide for the internal security of Singapore, preventive detention, the prevention of subversion, the suppression of organised violence against persons and property in specified areas of Singapore, and for matters incidental thereto."[15] The President of Singapore has the power to designate certain security areas. Any person caught in the possession or with someone in possession of firearms, ammunition or explosives in a security area can be punished by death.

Arms Offences Act

The Arms Offences Act regulates firearms offences.[16] Any person who uses or attempts to use arms (Section 4) can face execution, as well as any person who uses or attempts to use arms to commit scheduled offences (Section 4A). These scheduled offences are being a member of an unlawful assembly; rioting; certain offences against the person; abduction or kidnapping; extortion; burglary; robbery; preventing or resisting arrest; vandalism; mischief. Any person who is an accomplice (Section 5) to a person convicted of arms use during a scheduled offence can likewise be executed.

Trafficking in arms (Section 6) is a capital offence in Singapore. Under the Arms Offences Act, trafficking is defined as being in unlawful possession of more than two firearms.

That sounds pretty draconian. But we also know Singapore has a pretty efficiently run government by most metrics. Is Singapore an unpleasant place to live? If so, why do so many people want to live there? If you answer economic opportunities, or standard of living, or job opportunities, well, then maybe Moldbug does have a point with his very economic approach to it.

Replies from: asr, Prismattic
comment by asr · 2012-01-22T02:54:27.188Z · LW(p) · GW(p)

In raw utility, the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes, where the leader being violent towards one person means the leader being violent towards a non-trivial fraction of the population.

I had assumed we were talking about government for [biased, irrational] humans, not for perfect utilitarians or some other mythical animal. I was saying that routine application of too much violence will upset humans, not that it should upset them.

Also, are you really that sure that people wouldn't want to live in a Neocameralist system? When you say efficiency, I don't think you realize how emotionally appealing clean streets, good schools, low corruption and perfect safety from violent crime or theft are. What would be the price of real estate there? It is not a coincidence that he gives Singapore as an example: a society that uses more violence against its citizens than most Western democracies.

I'm sure many people would live quite happily in Singapore. Clearly, it works for the Singaporeans. But I don't think that model can be replicated elsewhere automatically, nor do I think Moldbug has a completely clear notion of why it works.

Moldbug talks about splitting up the revenue generation (taxation) from the social-welfare spending. This seems like a recipe for absentee-landlord government. And historically that has worked terribly. The government of Singapore does have to live there, and that's a powerful restraint or feedback mechanism.

In the US (and I believe the rest of the world), the population would like to pay lower taxes, and pointing to the social welfare benefits is the thing that convinces them to pay and tolerate higher rates. I think once the separation between spending and taxation becomes too diffuse, you'll get tax revolts. Remember, we are designing a government for humans here -- short-sighted, biased, irrational, and greedy. So the benefits of unpleasant things have to be made as obvious as possible.

comment by Prismattic · 2012-01-21T18:38:41.290Z · LW(p) · GW(p)

Is Singapore an unpleasant place to live? If so, why do so many people want to live there?

I'm open to being corrected on this, since I don't have a good source for Singaporean immigration statistics, but my prior is that people who choose to live in Singapore are coming there from other places that are much more corrupt while also still being rather draconian (China, Malaysia). I'm pretty sure well-educated Westerners could get a well-paying job in Singapore, and the reason few move there is not, in fact, about economics.

comment by Multiheaded · 2012-01-21T15:01:21.075Z · LW(p) · GW(p)

I'll be blunt. He's signaling self-righteous revulsion. It doesn't ever pattern match to anything nice. Case closed.

(Not for the idea of libertarian absolutism, of course - just for this Moldbug fellow himself! It's just that I don't want to tear through the buzzing of some conceited guy to get to the truth of the matter; there's always an alternative available in the market of ideas.)

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T16:56:39.844Z · LW(p) · GW(p)

Please indicate which particular part of the above you find objectionable or not up to LW standards. Do you have any evidence that a paranoid attitude like his was ever not indicative of a crank?

Replies from: None
comment by [deleted] · 2012-01-21T17:45:28.451Z · LW(p) · GW(p)

I didn't downvote you, but I generally disagree with your pattern match. Moldbug signals self-righteous revulsion ironically, since he is mirroring the self-righteous revulsion of the SWPLs towards anything that challenges their ideological outlook.

Moldbug has been praised by many people I know from LessWrong as having a good writing style. For example, here:

The best way to improve the natural flow of ideas, and your writing in general, is to read really good writers so much that you unconsciously pick up their turns of phrase and don't even realize when you're using them. The best time to do that is when you're eight years old; the second best time is now.

Your role models here should be those vampires who hunt down the talented, suck out their souls, and absorb their powers. Which writers' souls you feast upon depends on your own natural style and your goals. I've gained most from reading Eliezer, Mencius Moldbug, Aleister Crowley, and G.K. Chesterton (links go to writing samples from each I consider particularly good); I'm currently making my way through Chesterton's collected works pretty much with the sole aim of imprinting his writing style into my brain.

Yes, apparently some like it as much as they like Eliezer's writing.

Maybe that's what got you downvoted? He's a bit too verbose for my taste, but as he has said himself several times, inhumanly long posts are one of the ways he keeps out the wrong kind of crowd: the kind attracted to a far-right view for all the wrong reasons.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T21:29:57.306Z · LW(p) · GW(p)

he keeps out the wrong kind of crowd

And why doesn't he hire a few moderators from among his allies, then open the doors to the right type of crowd - that is, someone like the SIAI staff - and let them see the light? He is, after all, saying that it'd be nice for his goals to hit more people where it counts with some quality propaganda. I bet that he's simply addicted to his ultra-contrarian throne beyond all reason.

Replies from: None
comment by [deleted] · 2012-01-21T23:47:37.339Z · LW(p) · GW(p)

He's just a random guy writing a blog. He can't even post that often since he's raising a baby girl. How many such people choose minions to moderate their blogs?

Why would he want the SIAI type around? The average SIAI employee might spend some time thinking about his arguments, but I suspect they have differing values.

I bet that he's simply addicted to his ultra-contrarian throne beyond all reason.

I actually agree.

comment by Multiheaded · 2012-01-21T21:06:34.524Z · LW(p) · GW(p)

Want to get mind-killed HARD? I mean, frothing at the monitor hard? Here's his opinion on the Norway massacre.

...If it was militarily possible to free Norway from Eurocommunism by killing a hundred communists, or a thousand communists, or ten thousand communists, we might have an interesting moral debate over whether this butcher’s bill was worth paying....

Yeah, yeah, yeah, I know: if you could ask Hitler and a random leftist from academia in the 1930s whether 1) the sky is blue and 2) Stalin is an evil butcher, they'd perform equally well on 1) and you'd have to support Hitler over the leftist on 2). But still. This is not "deliberate hardcore contrarianism". This is not even trolling - go ask /b/, they're the internationally acknowledged authority. Moldbug is spitting on the graves of innocent teenagers slaughtered by an evident psychotic.

Replies from: None, steven0461, GLaDOS
comment by [deleted] · 2012-01-21T23:44:21.029Z · LW(p) · GW(p)

He says that a regime change is worth a certain number of lives. This is not a controversial position in the real world. No one in polite society, for example, would say that regime change in Nazi Germany in 1945 was not worth the lives of 40, or 4,000, or 40,000 young men or women. He also calls the man a butcher, and is clearly implying that change cannot be achieved by terrorism, nor even by conventional military action.

Considering the discussions that are calmly had on LessWrong (infanticide, for starters), I can produce a dozen such "incriminating" out-of-context quotes. Come to think of it, you can get quotes of me saying that infanticide is not as bad a crime as killing an adult, that incest among consenting adults should be legal, etc. Will you use those when itching to win an argument with me?

Also, Robin Hanson has said that 9/11 isn't a big deal. Aha, that's something I can use! How dare he spit on the graves of ... oh wait, no, actually that would be an inappropriate thing for me to say.

Even on a pure level of instrumental rationality, calmly pointing out the above quote, linking to it, and asking whether, say, I agree or not would have made him look much worse, while being equally effective at making whatever point you are trying to make. Also, I would respond by stating the truth (as I shall now): that I haven't read it so far, and that I haven't yet done the calculations about how many lives changing Norway's government to Marxism-Leninism/Anarcho-Syndicalism/Futarchy/Fascism would be worth, but I suspect that, considering there are few better alternatives if any, and that Norway has such a small population, it wouldn't amount to more than ~5. Unless there is a really awesome form of government I don't know about yet.

I get it you really really hate Moldbug, but I'm basically currently of the opinion that the only one getting mindkilled here is you. Why couldn't you discuss the points and maintain the usual level of discourse? I fail to see how this was the right course of action, unless you are acting up being mindkilled for the purpose to shut down a certain topic for good.

Replies from: steven0461, Multiheaded
comment by steven0461 · 2012-01-22T00:14:39.051Z · LW(p) · GW(p)

considering there are few better alternatives if any, and that Norway has such a small population that it wouldn't amount to more than 5 or 6

Now I'm really curious what calculation would lead you to that number.

Replies from: None
comment by [deleted] · 2012-01-22T00:22:34.375Z · LW(p) · GW(p)

I know very little of the political situation in Norway, but it is plausible that a better supreme court or batch of ministers could do a better job, to the point of saving more lives than would be lost by such an action (one has to factor in not just their lives but the cost of the greater security measures such acts would produce if they were detected).

Yay I made the CIA watch list!

Replies from: steven0461
comment by steven0461 · 2012-01-22T00:46:35.138Z · LW(p) · GW(p)

Oh I see, I thought you had some sort of general formula for weighing (naive) consequentialism and deontology. If I had known there were going to be creepy specifics, I wouldn't have asked.

Replies from: None
comment by [deleted] · 2012-01-22T07:57:03.697Z · LW(p) · GW(p)

I assumed "5 or 6" seemed a creepily specific number and that you thus wanted specifics. Note that, as I said, this was pure uninformed speculation. Norway is a particularly well-governed country, and specific people tend to matter much less in well-governed Western states than in, say, Saddam Hussein's Iraq. However, even a marginally better job at governing or crafting laws averts a lot of dust specks.

I usually apply pure consequentialism, measured in happy healthy years of life, as a first estimate. When actually contemplating real actions to support, I go by virtue ethics, since some costs are hard to capture in utilitarian thinking. Following virtue ethics, I wouldn't ever support an attempt to assassinate someone unless they were directly responsible for a massive amount of death, as in signing death warrants, conducting killings, or hiring thugs. Perhaps controversially, I wouldn't count torturing people as an acceptable reason to assassinate, since that seems really hard to establish because of propaganda, misinformation, a more convoluted paper trail, etc.

comment by Multiheaded · 2012-01-22T00:03:18.902Z · LW(p) · GW(p)

Even on a pure level of instrumental rationality, calmly pointing out the above quote, linking to it, and asking whether, say, I agree or not would have made the thing look much worse

I didn't, strictly speaking, make that comment to persuade anyone; I'm just making small talk. Who'd take some guy's esoteric politics as a matter of actual life and death when we can have an AI?

I get it you really really hate Moldbug

You are WRONG! I sneer at him a bit, as many tend to when they find that people much smarter than themselves are kind of fucked in the head. I don't consider him a force for evil. I'm just clarifying that in some cases I value politeness more than free speech, if and only if said free speech is in a natural, all-too-human language. Nowhere have I suggested that weighing evil acts is inappropriate - say, when you clearly see how to apply math - otherwise I would've promptly freaked out back at the dust specks question.

haven't yet done the calculations about how many lives changing Norway's government would be worth, but I suspect that, considering there are few better alternatives if any

I believe that's exactly what he's admitting to himself - Norway is a kickass place to live! - and that although he might be serious about "Eurocommunism" as a potential engine of collapse, he's not serious serious. In other words, his prime motivation in formulating that sentence was a good opportunity for what he sees as trolling.

Considering the discussions that are calmly had on LessWrong (infanticide, for starters), I can produce a dozen such "incriminating" out-of-context quotes. Come to think of it, you can get quotes of me saying that infanticide is not as bad a crime as killing an adult, that incest among consenting adults should be legal, etc.

Sorry? The discussion over infanticide we were holding a couple of weeks ago was - maybe - not perfectly relaxed, but neither side found the other's statements of position to be unacceptably crude, like flinging excrement in public. I do find exactly that about Moldbug's statement.

Replies from: None
comment by [deleted] · 2012-01-22T00:12:16.172Z · LW(p) · GW(p)

believe that's exactly what he's admitting to himself - Norway is a kickass place to live! - and that although he might be serious about "Eurocommunism" as an engine of collapse, he's not serious serious. In other words, his prime motivation in formulating that sentence was a good opportunity for what he sees as trolling.

And you are trolling... why? Oh sorry, quoting a troll trolling on another forum/blog isn't actually trolling.

Right.

comment by steven0461 · 2012-01-21T21:48:41.073Z · LW(p) · GW(p)

Want to get mind-killed HARD? I mean, frothing at the monitor hard?

No thank you. LessWrong is a no-froth zone. Please take it elsewhere.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T22:16:55.028Z · LW(p) · GW(p)

Nope, sorry. I have seen that the Less Wrong community lacks neither the experience nor the intellectual courage to deal with highly provocative words head on - sometimes keeping its cool and sometimes not, but hardly resorting to a policy of selective blindness.

I'm not saying that Moldbug is a terrorist/would-be Hitler/whatever, or that people aren't allowed to like him on rational grounds, or anything of the sort. I am merely refusing to quickly avert my eyes from a thoroughly appalling detail.

Replies from: steven0461
comment by steven0461 · 2012-01-21T22:41:42.448Z · LW(p) · GW(p)

Even if you feel that judging the quality of someone's thought requires reporting any particularly offensive statements in their other writings, you could at least do so without encouraging mindkill and frothing. Arguably we shouldn't be discussing Moldbug in the first place.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T23:31:20.724Z · LW(p) · GW(p)

Arguably we shouldn't be discussing Moldbug in the first place.

I more or less agree (for a different reason, as you can see); however, I'm against pretty much any restrictions on LW topics, provided even a modicum of intelligence is shown.

So if I'm an opponent (and yes, don't waste time pointing out my self-identification as such and my pre-written bottom line), I don't like staying silent, and I don't think that "mind-kill" is at all a net negative to LW discussion... heck, I'm gonna have oh-so-irresponsible fun.

Replies from: CaveJohnson
comment by CaveJohnson · 2012-01-21T23:58:06.103Z · LW(p) · GW(p)

I more or less agree (for a different reason, as you can see); however, I'm against pretty much any restrictions on LW topics, provided even a modicum of intelligence is shown.

In the current context I find this statement disingenuous. I'll be perfectly honest: I think you are acting in bad faith.

comment by GLaDOS · 2012-01-22T00:00:50.408Z · LW(p) · GW(p)

Yes, yes, we all know the Blue tribe clearly acts as an apologist for greater atrocities than the Green.

[insert emotional screed against Green here]

comment by Viliam_Bur · 2012-01-16T10:30:35.610Z · LW(p) · GW(p)

At LW, religion is often used as a textbook example of irrationality. To some extent, this is correct. Belief in the untestable supernatural is a textbook example of belief in belief and of privileging the hypothesis.

However, religion is not only about belief in the supernatural. A mainstream church that survives centuries must have a lot of instrumental rationality. It must provide solutions for everyday life. There are centuries of knowledge accumulated in these solutions. Mixed with a lot of irrationality, sure. Many religious people were pretty smart (Reverend Thomas Bayes, for example), and in my own life I know religious people whose rationality is far above average.

I am afraid that because of the halo effect we may miss a great source of rationality here. For example, I am pretty sure that there are many successful anti-akrasia tactics written up by religious authors. Another example: the list of capital sins, if you replace the religious terminology with something more LessWrongian, is simply a list of mental biases. (Pride = refusing to use an outside view. Gluttony = using a scarcity mindset in an abundance environment.) So I guess we could sometimes reuse the wheel instead of reinventing it.

Replies from: Nisan, TheOtherDave, dbaupp, curiousepic, NancyLebovitz
comment by Nisan · 2012-01-16T16:47:11.445Z · LW(p) · GW(p)

Have you seen this sequence? It reveals how the LDS church gets things done: by providing a real community for its members, and by making them feel like they belong by giving them responsibilities. I'm sure an aspiring-rationalist version of that would be even better.

This is the super-secret rationality technique of churches. It's the reason religious people are happier than nonreligious people in the US. It's the domain where religious people are correct when they say that nonreligious people are missing out on something good. Now we just have to implement it. It's not something that we can do individually.

comment by TheOtherDave · 2012-01-16T16:15:27.450Z · LW(p) · GW(p)

I agree that religious organizations have developed many effective techniques for getting certain kinds of things done, and I endorse adopting those techniques where they achieve goals I endorse.

I'm not sure I agree that this isn't already happening, though.

Can you provide some examples of such techniques that aren't also in use outside of the religious organizations that developed them?

Incidentally, the word "rationality" seems to contribute nothing to this topic beyond in-group signalling effects.

comment by dbaupp · 2012-01-16T11:30:36.978Z · LW(p) · GW(p)

A mainstream church that survives centuries must have a lot of instrumental rationality

This isn't obviously true. Once a belief system is established it is easily continued via indoctrination, especially when the indoctrination includes the idea that indoctrinating others is a Good thing.

comment by NancyLebovitz · 2012-01-17T06:16:53.324Z · LW(p) · GW(p)

Acedia, an overview of Catholic (and other, if I remember correctly) writing about sloth, plus a personal memoir. As I recall, quite an interesting book, but not personally useful -- and this is backed up by the top three Amazon reviews.

The fact that such a seriously researched book doesn't turn up much that's easily useful (a more careful or motivated reader might have found something) suggests that there may not be much practical advice in the tradition.

This is reminding me of Theodore Sturgeon's complaint that Christianity told people to be more loving, but didn't say anything about how. (From memory, I don't have a cite.)

comment by [deleted] · 2012-01-22T18:14:32.462Z · LW(p) · GW(p)

When it comes to accepting evolution, gut feelings trump fact

“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts...

In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For the subjects of this study, belonging to a religion had almost no additional impact on beliefs about evolution, beyond subjects’ feelings of certainty....

For teaching evolution, the researchers suggest using exercises that allow students to become aware of their brains’ dual processing. Knowing that sometimes what their “gut” says is in conflict with what their “head” knows may help students judge ideas on their merits.

Seems to be classic System 1 vs. System 2. Also, religion's small impact didn't surprise me.

comment by mstevens · 2012-01-16T10:39:53.688Z · LW(p) · GW(p)

A current thought experiment I'm pondering:

Scientists discover evidence that a popularly discriminated-against group really does have all the claimed negative traits. The evidence is so convincing that everyone who hears it instantly agrees this is the case.

If you want to picture a group, I suggest the discovery that Less Wrong readers are evil megalomaniacs who want to turn you into paperclips.

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

I've heard that Peter Singer says useful and interesting things about this, but it hasn't yet reached the top of my book queue.

Replies from: TheOtherDave, None, faul_sname, erratio, mstevens
comment by TheOtherDave · 2012-01-16T16:08:44.783Z · LW(p) · GW(p)

I'm puzzled that you describe this as a hypothetical.

For example, the culture I live in is pretty confident that five-year-olds are so much less capable than adults of acting in their own best interests that the expected value to the five-year-olds of having their adult guardians make important decisions on their behalf (and impose those decisions against their will) is extremely positive.

Consequently we are willing to justify subjecting five-year-olds to profound inequalities.

This affects my ideas of equality quite a bit, and always has. It is indeed OK to discriminate "against" them, and to treat them differently legally, and to not invite them to dinner, and always has been.

comment by [deleted] · 2012-01-16T14:50:27.232Z · LW(p) · GW(p)

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

As a society, we are actually OK with discriminating against the vast majority of possible social groups. If this were not the case, life as we know it would simply become impossible, because we would have to treat everyone equally. That would be a completely crazy civilization to live in, especially if it considered the personal to be political.

You couldn't like Alice because she is smart, since that would be cognitivist. You couldn't hang out with Alice because she has a positive outlook on life, because that would discriminate against the mentally ill (those currently experiencing depression, for starters). You couldn't invite Alice out for lunch because you think she's cute, because that would be lookist. Etc., etc.

Without the ability to discriminate, without a bad conscience, between the people who have traits we find desirable or useful and those who don't, most people would be pretty miserable and perpetually repressed. Indeed, considering humans are social creatures, I'd say the repression and psychological damage would dwarf anything ever caused by even the most puritanical sexual norms.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T14:41:38.589Z · LW(p) · GW(p)

See faul_sname's comment below; in this discussion, "discrimination" should really be tabooed and replaced with "prejudice based on weak prior evidence without any personal contact".

comment by faul_sname · 2012-01-16T10:49:02.132Z · LW(p) · GW(p)

"Discrimination" usually just means "applying statistical knowledge about the group to individuals in the group" and is a no-no in our society. If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

Replies from: None, vi21maobk9vp, mstevens, Multiheaded
comment by [deleted] · 2012-01-16T16:53:13.782Z · LW(p) · GW(p)

If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

The problem is that one of the only ways to prove someone is indeed using statistical knowledge, in the handful of cases where we have forbidden it, is to analyse their patterns of behaviour: basically, to look at the recorded statistics of their interactions. Both the records and the results of such an analysis can be easily faked and misinterpreted.

Which means that if the forbidden statistical knowledge is indeed useful and reliable enough to be economical to use, and someone else is very, very serious about preventing it from being used, the knowledge will be employed in a clandestine way, and most of the economic gains from it will be eaten up by the cost of avoiding detection. This leads to a net loss of wealth.

Say a for-profit company spends 90% of the gains from the forbidden knowledge on avoiding detection, and the government spends half or a third of that amount to monitor the company. The company would be indirectly paying for the government monitoring regardless of whether it used the knowledge or not. It is therefore irrational for the company not to use the particular forbidden set of statistical knowledge in such a situation.
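A minimal back-of-the-envelope sketch of that payoff structure, in Python; all the figures (the gain, the 90% avoidance rate, the monitoring budget) are invented for illustration:

```python
# Back-of-the-envelope sketch; every number here is invented.
gains = 1_000_000           # annual gain from using the forbidden statistical knowledge
avoidance = 0.90 * gains    # spent on avoiding detection (the 90% from the text)
monitoring = avoidance / 2  # government monitoring budget, indirectly paid by the company

# The monitoring cost is borne whether or not the company uses the knowledge.
net_if_used = gains - avoidance - monitoring  # 1,000,000 - 900,000 - 450,000 = -350,000
net_if_not_used = -monitoring                 # -450,000

print(net_if_used, net_if_not_used)
# Using the knowledge is privately better by the retained 10% of the gains,
# even though the avoidance and monitoring spending is a dead loss to society.
```

The company's private optimum (use the knowledge) and the social optimum (nobody pays for avoidance or monitoring at all) diverge; that gap is the net loss of wealth.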

Replies from: None, mstevens
comment by [deleted] · 2012-01-16T17:06:16.152Z · LW(p) · GW(p)

BTW, to get the full suckiness hidden in the bland phrase "net loss of wealth", most people need some aid to fix their intuitions. Converting "wealth" into happy productive years, or into dead-child currency, sometimes works.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-16T17:50:36.258Z · LW(p) · GW(p)

(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge."

Once we convert everything to Expected Number of Happy Productive Years (for example), it's easier to ask whether we'd prefer system A, in which Sum(ENoHPY) = N1 and Standard Deviation(ENoHPY) = N2, or system B, where Sum(ENoHPY) = (N1 - X) and Standard Deviation(ENoHPY) = (N2 - Y).
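A toy version of that comparison; the totals, the spreads, and the 0.5 inequality-aversion weight below are all invented, and the scoring rule is just one arbitrary way to trade total welfare against its spread:

```python
# Toy scoring of two systems by total and spread of Expected Number of
# Happy Productive Years (ENoHPY). All numbers and the 0.5 weight are invented.

def score(total, spread, inequality_aversion=0.5):
    """Higher total is better; a higher spread is penalized."""
    return total - inequality_aversion * spread

n1, n2 = 1_000_000, 40  # system A: Sum(ENoHPY) = N1, SD(ENoHPY) = N2
x, y = 50_000, 25       # system B gives up X total years to cut the spread by Y

system_a = score(n1, n2)
system_b = score(n1 - x, n2 - y)
print("prefer A" if system_a > system_b else "prefer B")
```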

Replies from: None
comment by [deleted] · 2012-01-16T18:56:05.153Z · LW(p) · GW(p)

(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge."

That is kind of the point of being a utilitarian. And remembering to consider opportunity cost, let alone estimate it, is often the hard part when it comes to policy.

comment by mstevens · 2012-01-16T17:37:16.892Z · LW(p) · GW(p)

I read an interesting article on the legal side of this in the USA; annoyingly, despite being sure I'd saved it, I can't find anything.

comment by vi21maobk9vp · 2012-01-17T06:52:32.848Z · LW(p) · GW(p)

There are two problems: statistical knowledge being easily faked or misinterpreted and life being a multiple-repetition game.

It is hard to apply the knowledge "many X are Y, and that is bad" when X is easier to check than Y, in such a way as to not diminish the return on investment of the X who work hard not to be Y. The same goes for the positive case: if people think that MBA programs teach something useful, and so think "many MBAs have learnt useful things from their MBA program", then getting into the program and not learning starts making sense. And we have that effect!

http://www.freakonomics.com/2011/10/12/why-do-only-top-mba-programs-practice-grade-non-disclosure/

comment by mstevens · 2012-01-16T13:10:33.052Z · LW(p) · GW(p)

But don't people talking about discrimination often claim that the statistical trends aren't there?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-19T01:54:59.748Z · LW(p) · GW(p)

Yes. For instance, the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal-drug arrests, convictions, and prison sentences. The arrest rates indicate that the law-enforcement system "believes" that black Americans use illegal drugs more — a statistical trend which isn't there.

Another way of thinking about these issues, rather than talking about "discrimination against [one group]", is "privilege held by [another group]". This can describe the same thing, but in terms which can cast a different (and sometimes useful) light on it.

For instance, one could say "[black] people are harassed by police when they hang out in public parks." However, this could be taken as raising the question of what those people are doing in those parks to attract police attention — which would be privileging the hypothesis (no pun intended). Another way of describing the same situation, without privileging the hypothesis, is "[white] people get to hang out in public parks without the police taking an interest."

Replies from: Alicorn, billswift
comment by Alicorn · 2012-01-19T03:15:57.947Z · LW(p) · GW(p)

the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal-drug arrests, convictions, and prison sentences.

Where does the data about the actual proportion come from, since it can't be the legal system's data?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-19T03:51:34.047Z · LW(p) · GW(p)

Having re-checked the above from, e.g., the National Survey on Drug Use and Health, done by the Department of Health & Human Services, I retract the claim that black Americans use drugs less than white Americans.

Rather, it appears that white Americans are well overrepresented in lifetime illegal-drug use, but black Americans are slightly overrepresented in current illegal-drug use, which is what would feed into arrests — after all, you don't get arrested for snorting coke two decades ago. The white:black ratio in the population as a whole is 5.7, according to the Census. In lifetime illegal-drug use it is 6.6; among last-month illegal-drug users, 5.1.

However, from the Census data on arrests, the white:black ratio in illegal-drug arrests is 1.9. Now, this doesn't break down by severity of alleged offenses (e.g., possession vs. dealing), or by quantities, or by aggravating factors such as school zones.
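To make those figures concrete, here is a small sketch using only the ratios quoted above; the division is mine, the numbers are from the comment:

```python
# White:black ratios quoted above.
population_ratio = 5.7  # in the population as a whole (Census; not needed below)
users_ratio = 5.1       # among last-month illegal-drug users
arrests_ratio = 1.9     # among illegal-drug arrests

# Per-user arrest risk, black relative to white: the per-capita user
# counts cancel when one ratio is divided by the other.
relative_risk = users_ratio / arrests_ratio
print(f"{relative_risk:.1f}")  # ~2.7: on these figures, a black current user
                               # is about 2.7x as likely to be arrested as a
                               # white current user
```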

Replies from: Multiheaded
comment by Multiheaded · 2012-01-21T14:48:36.032Z · LW(p) · GW(p)

Rather, it appears that white Americans are well overrepresented in lifetime illegal-drug use, but black Americans are slightly overrepresented in current illegal-drug use, which is what would feed into arrests — after all, you don't get arrested for snorting coke two decades ago.

Sorry, I don't understand that. Does it simply mean that white people in general, as seen here, used to do more drugs some years/decades ago, but that their proportion has now dropped below that of blacks?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-21T15:27:49.966Z · LW(p) · GW(p)

Maybe, but not necessarily. It would be consistent with, for instance, there being proportionally more white people who tried illegal drugs once and didn't continue using.

Illegal drugs are an interesting place to try some Bayescraft.
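For instance (a toy model, with all numbers invented; the point is only that arrest counts confound base rates of use with policing intensity):

```python
# Arrest rates confound the base rate of drug use with policing intensity,
# so arrest counts alone can't tell you which group uses more.
# All numbers below are invented.

def arrests_per_capita(p_user, p_arrest_given_user):
    """Expected drug arrests per person (ignoring arrests of non-users)."""
    return p_user * p_arrest_given_user

group_a = arrests_per_capita(p_user=0.10, p_arrest_given_user=0.02)  # more users, lightly policed
group_b = arrests_per_capita(p_user=0.08, p_arrest_given_user=0.03)  # fewer users, heavily policed

print(group_a, group_b)  # 0.002 vs 0.0024: B shows more arrests despite fewer users
```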

comment by billswift · 2012-01-22T02:35:02.894Z · LW(p) · GW(p)

The arrest rates indicate that the law-enforcement system "believes" that black Americans use illegal drugs more — a statistical trend which isn't there.

In fact, your interpretation is wrong. It is not that the law-enforcement system "believes" that blacks use more. It is that blacks are more often dealers, and it is easier to get a conviction or plea bargain as a user than as a dealer, since the latter requires intent as well as possession and will be fought harder because of the higher penalties.

Replies from: TimS
comment by TimS · 2012-01-24T02:19:22.370Z · LW(p) · GW(p)

I suspect that blacks are not over-represented as drug dealers. Rather, blacks disproportionately live in urban areas, which can be policed at lower cost than rural areas for population-density reasons.

comment by Multiheaded · 2012-01-21T14:53:27.884Z · LW(p) · GW(p)

it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

Hell, that seems to be an understatement to me. There's a particular reason that racial discrimination is by far the most taboo and reviled form of it, beyond the memory of Nazism; real current political groups - that are very nasty - are always hoping for the chance to pounce on the issue once they're allowed to get close to it.

comment by erratio · 2012-01-16T19:19:57.569Z · LW(p) · GW(p)

The practice in the US of alerting people in the neighbourhood to the presence of convicted child molesters (or was it rapists? I don't remember) seems to indicate that at least some people think it's a great idea. I think that as we get better at testing people for sociopathy, we're likely to move towards certain types of legal discrimination against them too.

None of this affects my personal ideas of equality though. I would prefer not to be friends with an evil megalomaniac in the same way that I would prefer not to be friends with a drug addict, but if I met an interesting person and then discovered that they were an evil megalomaniacal drug addict I wouldn't necessarily cut them out of my life, either.

comment by mstevens · 2012-01-16T17:58:54.971Z · LW(p) · GW(p)

As vague context: the whole area of equality and discrimination is something that nags at me as not making enough sense. I hope, with enough pondering, to come up with a clear view on things, but so far I'm failing.

comment by MileyCyrus · 2012-01-16T05:07:31.846Z · LW(p) · GW(p)

What are some efficient ways to signal intelligence? Earning an advanced degree from a selective university seems rather cost intensive.

Replies from: Grognor, dbaupp, None, asr, sixes_and_sevens, Manfred, D_Alex, multifoliaterose, Prismattic
comment by Grognor · 2012-01-16T07:13:52.533Z · LW(p) · GW(p)

I figured someone would have said this by now, and it seems obvious to me, but I'm going to keep in mind the general principle that what seems obvious to me may not be obvious to others.

You said efficient ways to signal intelligence. Any signaling worth its salt is going to have costs, and the magnitude of these costs may matter less than their direction. So one way to signal intelligence is to act awkwardly, make obscure references, etc.; in other words, look nerdy. You optimize for seeming smart at the cost of signaling poor social skills.

Some less costly ones that vary intensely by region, situation, personality of those around you, and lots and lots of things, with intended signal in parentheses:

  • Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)
  • Talk quickly.
  • Quote famous people all the time. (He quotes people; therefore he is well-read; therefore he is intelligent.)
  • In general, do things quickly. Eating, walking, reacting to fire alarms. (Smart people have less time for sitting around.)
  • During conversations, make fun of beliefs that you mutually do not hold. Being clever about it is better, but I don't know how to learn cleverness. If you already have it, good. (He is part of my tribe and one of my allies. Therefore, because of the affect heuristic, he must be smart as well.)
  • Learn a little bit of linguistics.
  • Tutor people in things. (You have to be smart to teach other people things.)

It was not intentional that all of these related to conversation. Maybe that's not a coincidence and I've been unconsciously optimizing for seeming smart my entire life.

Replies from: faul_sname, dbaupp, amcknight, None
comment by faul_sname · 2012-01-16T09:42:50.065Z · LW(p) · GW(p)

Tutor people in things. (You have to be smart to teach other people things.)

Definitely this. Tutoring is a very strong signal of intelligence, but is really a matter of learned technique. I was able to tutor effectively in Statistics before I had taken any classes or fully understood the material by using tutoring techniques I had learned by teaching other subjects (notably Physics). The most common question I found myself asking was "what rule do we apply in situations like this," a question you do not actually need to know the subject material to ask.

comment by dbaupp · 2012-01-16T09:33:13.985Z · LW(p) · GW(p)

Learn a little bit of linguistics.

I'd be interested if you were to expand on this.

Replies from: Emily, Grognor
comment by Emily · 2012-01-18T21:45:30.655Z · LW(p) · GW(p)

I'm not the OP of that comment, but as a linguistics student I can corroborate. I think there are a couple of reasons that occasionally throwing a relevant piece of linguistic information into a conversation can produce the smartness impression. Firstly, conversations never fail to involve language, so opportunities to comment on language are practically constant if you're attuned to noticing interesting bits and pieces. This means that even occasional relevant comments mean you're saying something interesting and relevant quite frequently. This is an advantage that linguistics has over, say, marine biology. Secondly, I have the impression that most people are vaguely interested in language and under the equally vague impression that they know just how it works -- after all, they use it all the time, right? So even imparting a mundane little piece of extremely basic linguistics can create the impression that you're delivering serious cutting-edge expert-level stuff: after all, your listener didn't know that, and yet they obviously know a pretty decent amount about language!

comment by Grognor · 2012-01-16T18:50:04.133Z · LW(p) · GW(p)

It has worked for me. People are impressed when I point out their own sentence structure, things like how many phonemes are in the word "she", etc. I don't know if this also helps signal intelligence, but I also rarely get confused by things people say. Instead of saying, "What?" I say "Oh, I get it. You're trying to say X even though you actually said Y."

Also, I guess it seems like a subject only smart people are interested in. And not even most of them. Guess I got lucky in that regard.

comment by amcknight · 2012-01-17T00:42:15.291Z · LW(p) · GW(p)

It depends, of course, on who you're signalling to. These sound to me like ways of signalling that you are intelligent to the unintelligent. (If that. They're good possibilities, but I'm skeptical of about half of them.)

comment by [deleted] · 2012-01-16T07:40:28.675Z · LW(p) · GW(p)

Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)

I perhaps should work on this one. It might improve my signal/noise ratio.

Your list is quite wisely written.

comment by dbaupp · 2012-01-16T05:31:15.040Z · LW(p) · GW(p)

In a Dark-Arts-y way, glasses?

(A brief search indicates there are several studies that suggest wearing glasses increases perceived intelligence (e.g. this and this (paywall)), but there are also some that suggest that it has no effect (e.g. this (abstract only)))

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-01-17T09:57:44.468Z · LW(p) · GW(p)

There definitely exists a stereotype that people who wear glasses are more intelligent. The cause of this common stereotype is probably that people who wear glasses are more intelligent.

Replies from: multifoliaterose
comment by multifoliaterose · 2012-01-18T22:34:23.970Z · LW(p) · GW(p)

But what's the purported effect size?

comment by [deleted] · 2012-01-16T05:33:11.412Z · LW(p) · GW(p)

Here's a few suggestions, some sillier than others, in no particular order:

  • Join organizations like Mensa
  • Look good
  • Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.
  • If your particular field has certifications you can get instead of a degree, these may be more cost-effective
  • Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)
  • Learn other languages--doing so not only makes you more employable, it can be a big status boost
Replies from: Prismattic
comment by Prismattic · 2012-01-16T05:55:48.961Z · LW(p) · GW(p)

Much depends on the audience one is signalling to.

Join organizations like Mensa

To stupid or average people, this is a signal of intelligence. To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.

Again this works as a signal to people who are at a remove from these activities, because the average player is smarter than the average human. People who themselves actually play, however, will have encountered many people who happen to be good at certain specific things that lend themselves to abstract strategy games, but are otherwise rather dim.

Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)

Agree with this one. It's especially useful because it has the opposite sorting effect of the previous two. Other intelligent people will pick up on it as a sign of intelligence. Conspicuously unintelligent people will fail to get it.

Learn other languages--doing so not only makes you more employable, it can be a big status boost

This one seems like it might vary by geography. It's a lot less of a distinction for a European than an American. In the US, the status signal from "speaks English and Spanish" is different from the status signal from "speaks English and some language other than Spanish".

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-01-16T09:53:27.699Z · LW(p) · GW(p)

To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

My experience seems to support this. The desire to signal intelligence is often so strong that it eliminates much of the benefit gained from high intelligence. It is almost impossible to have a serious discussion about anything, because people habitually disagree just to signal higher intelligence, and immediately jump to topics that are better for signalling. Rationality and mathematics are boring, conspiracy theories are welcome. And of course, Einstein was wrong; an extraordinarily intelligent person can see obvious flaws in the theory of relativity, even without knowing anything about physics.

Mensa membership will not impress people who want to become stronger and have some experience with Mensa. Many interesting people take the Mensa entry test, come to the first Mensa meeting... and then run away.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-01-16T20:19:31.752Z · LW(p) · GW(p)

My experience with Mensa was similar to yours. I joined, read a couple of issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine, they weren't worth the time. There was far less original thought in Mensa than I had expected.

Replies from: khafra, Viliam_Bur
comment by khafra · 2012-01-17T15:27:26.030Z · LW(p) · GW(p)

I joined, read a couple of issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine, they weren't worth the time. There was far less original thought in Mensa than I had expected.

Saying this about Mensa is a much better way to signal intelligence to other intelligent people than actually being a Mensa member.

Replies from: TheOtherDave, Normal_Anomaly
comment by TheOtherDave · 2012-01-17T15:50:57.263Z · LW(p) · GW(p)

Well, it's worth being a little careful here. Saying dismissive things about an outgroup is an effective way to present myself as a higher-status member of the ingroup; that works as well for "us intelligent people" and "those Mensa dweebs" as any other ingroup/outgroup pairing. Which makes it hard to tell whether I'm really signalling intelligence at all.

comment by Normal_Anomaly · 2012-01-17T22:16:03.695Z · LW(p) · GW(p)

Yes, and I knew that when I said it. But it's also true.

comment by Viliam_Bur · 2012-01-17T10:55:33.807Z · LW(p) · GW(p)

Right now my question is: Is abandoning Mensa the most useful thing, or can it be used to increase rationality somehow?

Seems to me that the selection process in Mensa has two steps. First, one must decide to take a Mensa entry test. Second, one must decide to be a Mensa member, despite seeing that Mensa is only good for signalling -- this is sometimes not so obvious to a non-member. For example, when I was 15, I imagined that Mensa would be something like... I guess like I now imagine the LW meetups. I expected to find people there who are trying to win, not only to signal intelligence to other members.

So I conclude that people who pass the first filter are better material than people who pass both filters. A good strategy could be this: Start a local rationalist group. Become a member of Mensa, so you know when Mensa runs its tests. Prepare a flyer describing your rationalist group and give it to everyone who completes the Mensa test -- they will probably come to the first following Mensa meeting, but many of them will not appear again.

This is what I want to do, when I overcome my laziness. Also, I will give a talk in Mensa about rationality and LW, though (judging by reactions on our Facebook group) most members will not be really interested.

comment by asr · 2012-01-16T06:48:05.938Z · LW(p) · GW(p)

The best ways to signal intelligence are to write, say, or do something impressive. The details depend on the target audience. If you're trying to impress employers, do something hard and worthwhile, or write something good and get it published. If you're a techie and trying to impress techies, writing neat software (or adding useful features to existing software) is a way to go.

If you are asking about signalling intelligence in social situations, I suggest reading interesting books and thinking about them. Often, people use "does this person read serious books and think about them?" as a filter for smarts.

comment by sixes_and_sevens · 2012-01-16T14:38:55.530Z · LW(p) · GW(p)

Do something prohibitively difficult that not a lot of people are competent enough to do.

Replies from: amcknight, sixes_and_sevens
comment by amcknight · 2012-01-17T00:49:16.630Z · LW(p) · GW(p)

Of course, make sure it's something people "know" is hard, like rocket science.

comment by sixes_and_sevens · 2012-01-17T00:01:04.875Z · LW(p) · GW(p)

I have to admit, I'm mystified as to why this one got downvoted.

Replies from: Multiheaded, vi21maobk9vp, duckduckMOO
comment by Multiheaded · 2012-01-21T17:18:38.450Z · LW(p) · GW(p)

Likely because it could be read as a sarcastic remark resolving to "become intelligent for real, and you wouldn't need to fake anything, you lazy cheating bastard". I wouldn't have downvoted for that, but such a reading had indeed occurred to me at first, before I remembered that I'm at a website of a better sort.

comment by vi21maobk9vp · 2012-01-17T07:22:20.168Z · LW(p) · GW(p)

I guess there are two different questions: signalling intelligence to top-intelligence people, and signalling intelligence to people of above-average intelligence more generally.

In the first case, it is a good plan. In the second case, you would fail.

comment by duckduckMOO · 2012-01-17T00:36:46.479Z · LW(p) · GW(p)

Upvoted. I am also confused.

comment by Manfred · 2012-01-16T16:29:34.124Z · LW(p) · GW(p)

Be interested in lots of things that other people might not find interesting. I think it's the way that I personally signal intelligence the most. For example, if someone has a herpolhode on their desk, I try to ask intelligent questions about it. Or if the rain on the window is dripping in nice straight lines because of the screen occasionally pressing against the glass, notice that.

comment by D_Alex · 2012-01-16T08:07:39.746Z · LW(p) · GW(p)

I have a different perspective on this compared with other commenters... Intelligence is very hard to fake.

What's the best way to signal guitar playing skills? Play the guitar, and play it well!

The efficient way to signal intelligence is: to do worthwhile things, intelligently!

Replies from: faul_sname
comment by faul_sname · 2012-01-16T08:43:29.958Z · LW(p) · GW(p)

How can you tell if someone is doing things intelligently?

Replies from: D_Alex
comment by D_Alex · 2012-01-16T09:02:12.335Z · LW(p) · GW(p)

Fair question, but difficult to answer in brief; I might try to do this later. For now, let me answer with a couple of questions:

How can you tell if someone is playing a guitar well?

In general, can YOU tell the difference between someone doing things intelligently, and doing things unintelligently?

Replies from: Viliam_Bur, faul_sname
comment by Viliam_Bur · 2012-01-16T10:00:18.414Z · LW(p) · GW(p)

How can you tell if someone is playing a guitar well?

a) Listen to them playing.

b) Do they have concerts, CDs, fans, other symbols of "being a successful guitar player"? Do they write blogs or books about guitar playing? Do people write guitar-playing-related blogs and books about them?

The second option is less reliable and easier to fake, but it is an option that even a deaf person can use.

Replies from: Solvent
comment by Solvent · 2012-01-17T06:10:59.678Z · LW(p) · GW(p)

a) Listen to them playing.

Speaking as a guitar and piano player: I can do things on guitar and piano that are fairly easy, but look very impressive to someone who doesn't play the instrument. You actually need to play an instrument before you can accurately judge how good someone is at it.

(It's pretty obvious if someone is distinctly bad. But distinguishing different levels of "good" is hard.)

comment by faul_sname · 2012-01-16T09:32:00.099Z · LW(p) · GW(p)

First question: A good guitar player keeps a steady rhythm and hits the appropriate notes with appropriate volume and tone. At a higher level, they improvise in a way that sounds good. Sounding good seems to involve sticking to a standard scale with only a few deviations, and varying the rhythms. At the level above that, I really don't know.

Second question: I really don't know, at least not in that much generality. I think I may use proxies such as the ability to find novel (good) solutions to problems and to draw on multiple domains, then aggregate them into one linear value that I call "intelligence". I am probably also influenced by the person's attractiveness and by how close their solution is to the one I would have proposed. I would definitely like your take on this as well.

comment by multifoliaterose · 2012-01-18T13:32:58.703Z · LW(p) · GW(p)

Why are you asking?

comment by Prismattic · 2012-01-16T05:58:49.457Z · LW(p) · GW(p)

Earning an advanced degree from a selective university seems rather cost intensive.

Depending on the selective university, an advanced degree might not cost much at all. Harvard, for example, only recently started paying the way of its undergraduates, but it has paid the way of its graduate students for a long time.

Replies from: grouchymusicologist
comment by grouchymusicologist · 2012-01-16T06:44:05.729Z · LW(p) · GW(p)

True, but free tuition or not, it's plenty costly in terms of opportunity cost.

(This is true to an almost hilarious extent if you're a humanities scholar like me: I'm not getting those ten (!!!!!!!) years of my life back.)

Replies from: Prismattic
comment by Prismattic · 2012-01-16T06:51:36.364Z · LW(p) · GW(p)

Is that the reason for "grouchy"musicologist?

Replies from: grouchymusicologist
comment by grouchymusicologist · 2012-01-16T06:57:41.228Z · LW(p) · GW(p)

Haha, no. I'm only grouchy because people occasionally say ill-informed things about musicology. Other than that, I really like my job and my chosen field. I rarely think I'd be much happier if I had chosen to pursue some lucrative but non-musicological career.

Replies from: Solvent
comment by Solvent · 2012-01-17T06:13:18.506Z · LW(p) · GW(p)

What's it like being a musicologist? What do you spend your days doing?

How many instruments do you play?

What's better out of Mozart's Jupiter Symphony and Holst's Jupiter movement?

Replies from: grouchymusicologist
comment by grouchymusicologist · 2012-01-17T07:47:30.089Z · LW(p) · GW(p)

Well, I wrote a bit about what musicologists do here. In terms of research areas, I myself am the score-analyzing type of musicologist, so I spend my days analyzing music and writing about my findings. I'm an academic, so teaching is ordinarily a large part of what I do, although this year I have a fellowship that lets me do research full-time. Pseudonymity prevents me from saying more in public about what I research, although I could go into it by PM if you are really interested.

I am (well, was -- I don't play much any more) what I once described as a "low professional-level [classical] pianist." That is, I play classical piano really well by most standards, but would never have gotten famous. At a much lower level, I can also play jazz piano and Baroque harpsichord. I never learned to play organ, and never learned any non-keyboard instruments. Among professional musicologists, I'm pretty much average for both number of instruments I can play and level of skill.

As to pieces about Jupiter, I can only offer you my personal opinion -- being a musicologist doesn't make my musical preferences more valid than yours. Both pieces are great, and I had a special fondness for the Holst when I was a kid (I heard it in a concert hall when I was about 11, and spent the whole 40 minutes grinning hard enough I should have burst a blood vessel). But I'll take the Jupiter Symphony without the slightest hesitation. Here you have one of the greatest works of one of the tiny handful of greatest composers ever, versus an excellent piece by a one-hit wonder among classical composers.

Really, though, I don't much like picking favorites among pieces of music, and always want to preface my answers with "Thank goodness I don't really have to choose!"

Replies from: None
comment by [deleted] · 2012-01-31T19:55:03.170Z · LW(p) · GW(p)

.

comment by endoself · 2012-01-17T00:26:20.742Z · LW(p) · GW(p)

In Marcus Hutter's list of open problems relating to AIXI at hutter1.net/ai/aixiopen.pdf (this is not a link because markdown is behaving strangely), problems 4g and 5i ask what Solomonoff induction and AIXI would do when their environment contains random noise and whether they could still make correct predictions/decisions.

What is this asking that isn't already known? Why doesn't the theorem on the bottom of page 24 of this AIXI paper constitute a solution?

comment by moridinamael · 2012-01-16T08:24:06.805Z · LW(p) · GW(p)

I've been incubating some thoughts for a while and can't seem to straighten them out enough to make a solid discussion post, much less a front page article. I'll try to put them down here as succinctly as possible. I suspect that I have some biases and blindspots, and I invite constructive criticism. In other cases, I think my priors are simply different than the LW average, because of my life experiences.

Probably because of how I was raised, I've always held the opinion that the path to world-saving consists of the following general steps: 1) Obtain a huge amount of personal wealth. 2) Create and/or fund the types of organizations that you believe are likely to save the world.

Other pathways feel (to me) like attempts to be too clever. I admit a likely personal bias here, but it looks like it should be easier to become wealthy by any available means than it is to singlehandedly solve all the world's important problems. If you do not agree with this assessment, I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind. I think that generally speaking very few people are actually trying to become wealthy; most people just try to match their parents' socioeconomic tier and then stop.

Replies from: faul_sname, None, Anatoly_Vorobey, Nick_Roy, dbaupp
comment by faul_sname · 2012-01-16T08:48:34.787Z · LW(p) · GW(p)

Might it not be even more effective to convince others to become ultra-rich and fund the organizations you want to fund? (Actually, this doesn't seem too far off the mark from what SIAI is doing).

Replies from: moridinamael
comment by moridinamael · 2012-01-16T19:03:37.865Z · LW(p) · GW(p)

I agree completely. I stopped myself short of saying this in my first post because I wanted to keep it succinct. I would go a bit further to suggest that SIAI could be doing more than merely convincing people to take this path. For example, providing trustworthy young rationalists with a financial safety net in order to permit them to take more risks. (One tentative observation I've made is that nobody becomes wealthy without taking risk. The "self-made" wealthy tend to be risk-loving.)

Replies from: faul_sname
comment by faul_sname · 2012-01-16T20:25:15.031Z · LW(p) · GW(p)

This is likely worth doing, but I am fairly sure that LWers are for the most part not wealthy enough to create this financial safety net. This seems like a concept that is worth a discussion post: what would LWers do if they had a financial safety net?

comment by [deleted] · 2012-01-16T15:51:58.662Z · LW(p) · GW(p)

I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind.

Any arguments that legitimately push you towards that conclusion should be easily convertible into actual advice about how to become ultra-rich. I think you're underestimating the difficulty of turning vague good-sounding ideas into effective action.

Replies from: moridinamael
comment by moridinamael · 2012-01-16T18:55:42.707Z · LW(p) · GW(p)

I think there's plenty of available advice on how to become ultra-rich. Just look at the Business section of any bookstore. The problem is that this advice typically takes you from a 0.001% chance of becoming ultra-rich, through sheer lucky accident or lottery, to a 0.1% chance, through strategy and calculated risks.

I'm not arguing that it's not really hard and really improbable. However, folks tend to assess P(becoming wealthy by any means) ~ P(winning the lottery).

comment by Anatoly_Vorobey · 2012-01-16T08:41:34.119Z · LW(p) · GW(p)

I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind. I think that generally speaking very few people are actually trying to become wealthy; most people just try to match their parents' socioeconomic tier and then stop.

What's ultra-rich? This claim isn't saying much unless you quantify it.

Intuitively, I find both your claims - that most people only try to match their parents' tier, and that it's easy to become ultra-rich if you focus on it - to be wrong, but it'd be interesting to see more arguments or evidence in their favor.

Replies from: moridinamael
comment by moridinamael · 2012-01-16T19:13:13.804Z · LW(p) · GW(p)

What's ultra-rich? This claim isn't saying much unless you quantify it.

I don't know, a billion dollars?

Intuitively, I find both your claims - that most people only try to match their parents' tier, and that it's easy to become ultra-rich if you focus on it - to be wrong, but it'd be interesting to see more arguments or evidence in their favor.

A quick Googling turns up a few papers which suggest that parental expectations largely define a child's level of educational and financial achievement. On a more intuitive level, I can only point out that the clear majority of Americans either don't go to college because their financial ambitions are satisfied by blue collar work, or they go to college in pursuit of a degree with a clear Middle Class career path attached to it. Do you know anybody whose stated goal is to be wealthy, rather than to be a doctor or an engineer or some specific career? I don't.

comment by Nick_Roy · 2012-01-16T11:42:07.638Z · LW(p) · GW(p)

Personally, I figure I'm not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!

comment by dbaupp · 2012-01-16T09:40:44.476Z · LW(p) · GW(p)

it looks like it should be easier to become wealthy by any available means than it is to singlehandedly solve all the world's important problems

Doesn't becoming very wealthy for the purpose of saving the world (and then actually saving the world) count as singlehandedly solving all the problems?

Replies from: moridinamael, faul_sname
comment by moridinamael · 2012-01-16T18:57:33.902Z · LW(p) · GW(p)

What I was getting at is that the cognitive effort required to actually solve a Millennium problem may be greater than the cognitive effort of making a billion dollars and hiring a thousand mathematicians to work on Millennium problems.

comment by faul_sname · 2012-01-16T10:35:19.533Z · LW(p) · GW(p)

Who's counting?

Replies from: dbaupp
comment by dbaupp · 2012-01-16T11:33:28.792Z · LW(p) · GW(p)

Is this a joke? (Serious question, I can't tell. FWIW, I was using "count" as "fit the definition of".)

Replies from: faul_sname
comment by faul_sname · 2012-01-16T19:03:22.505Z · LW(p) · GW(p)

Partly, but not entirely. I noticed that I was asking myself seriously if that counted, then wondered why it mattered if it fit the definition.

comment by ahartell · 2012-01-16T19:05:38.599Z · LW(p) · GW(p)

Wow, 66 comments in 1 day. It looks like the idea of having a mid-month open thread was a good one.

Replies from: shminux
comment by shminux · 2012-01-16T19:51:05.459Z · LW(p) · GW(p)

Seems like an indication that a third tier of posts, possibly karma-free, might be a good idea. Something like Stupid Questions, or Beginner's Corner, or Sandbox, or...

Replies from: Armok_GoB
comment by Armok_GoB · 2012-01-23T21:30:02.495Z · LW(p) · GW(p)

I've been sporadically trying to get something like this done for AGES. There was even a forum made, but without official endorsement it got like 5 members and died within days.

Replies from: shminux
comment by shminux · 2012-01-23T21:45:29.398Z · LW(p) · GW(p)

If you were to offer a tested contrib to the LW code base, Trike might agree to add it on a trial basis, provided EY&Co approve. Not sure what their policies are.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-01-23T22:05:06.488Z · LW(p) · GW(p)

No idea how to do that, and won't have for the foreseeable future... I just don't have the attention span for coding or hacking any more, for medical reasons.

comment by David_Gerard · 2012-01-17T13:06:22.992Z · LW(p) · GW(p)

Stephen Law on his new book, Believing Bullshit:

Intellectual black holes are belief systems that draw people in and hold them captive so they become willing slaves of claptrap. Belief in homeopathy, psychic powers, alien abductions - these are examples of intellectual black holes. As you approach them, you need to be on your guard because if you get sucked in, it can be extremely difficult to think your way clear again.

comment by faul_sname · 2012-01-16T20:17:59.239Z · LW(p) · GW(p)

Something has been bothering me about Newcomb's problem, and I recently figured out what it is.

It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you pick the million dollar box.

In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.

Replies from: khafra, amcknight, shminux
comment by khafra · 2012-01-17T15:37:00.854Z · LW(p) · GW(p)

Short answer: Yup. Because Omega is a perfect or near-perfect predictor, your decision is logically antecedent, but not chronologically antecedent, to Omega's decision. People like Michael Vassar, Vladimir Nesov, and Will Newsome think and talk about this sort of thing more often than the average lesswronger.

comment by amcknight · 2012-01-17T01:07:31.633Z · LW(p) · GW(p)

You probably know this, but just in case:
In Newcomb's problem, Omega predicts before you choose. Omega is just really good at this. The chooser doesn't repeatedly observe backwards causality, even if they might be justified in thinking they did.

Replies from: faul_sname
comment by faul_sname · 2012-01-17T01:46:29.122Z · LW(p) · GW(p)

How is that observably different from backwards causality existing? Perhaps we need to taboo the word "cause".

Replies from: TimS
comment by TimS · 2012-01-17T01:52:20.189Z · LW(p) · GW(p)

It seems very intuitive to me that being very good at predicting someone's decision (probably by something like simulating the decision-process) is conceptually different from time travel. Plus, I don't think Newcomb's problem is an interesting decision-theory question if Omega is simply traveling (or sending information) backward in time.

Replies from: faul_sname
comment by faul_sname · 2012-01-17T02:06:06.387Z · LW(p) · GW(p)

This is intuitive to me as well, but I suspect that it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe and sending information back in the 'same' universe if the simulation is identical to the 'real' universe?

Replies from: TimS, Alejandro1
comment by TimS · 2012-01-17T02:56:15.309Z · LW(p) · GW(p)

Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible?

But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of "pre-commitment." Pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation, but not admitting you are changing the topic, is perceived as rude.

comment by Alejandro1 · 2012-01-18T05:35:43.672Z · LW(p) · GW(p)

Newcomb's problem doesn't lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.

comment by shminux · 2012-01-16T23:42:50.905Z · LW(p) · GW(p)

In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.

In the standard formulation (a perfect predictor) one-boxers always end up winning and two-boxers always end up losing, so there is no issue with causality, except in the mind of a confused philosopher.

comment by Grognor · 2012-01-16T01:22:54.761Z · LW(p) · GW(p)

How did Less Wrong get its name?

I have two separate guesses; they are not mutually exclusive, but neither depends on the other:

  1. It was Michael Vassar's idea. He is my best guess for who came up with the name.
  2. It was inspired by this essay. This is my best guess for what inspired the name.

I don't know if either of these is true, or both, or whatever. I want to know the real answer.

Searching this site and Google has been useless so far.

Replies from: XFrequentist, Solvent
comment by XFrequentist · 2012-01-16T03:14:48.270Z · LW(p) · GW(p)

EY polled Overcoming Bias readers on their favorite from a list of several options, and "Less Wrong" was the overwhelming winner. Not sure how the options were generated.

Replies from: Grognor
comment by Grognor · 2012-01-16T03:59:50.923Z · LW(p) · GW(p)

Source?

Replies from: XFrequentist
comment by XFrequentist · 2012-01-16T14:56:57.263Z · LW(p) · GW(p)

Memory.

comment by Solvent · 2012-01-16T01:41:01.839Z · LW(p) · GW(p)

I remember Eliezer's post announcing LW. He didn't give any explanation of why it was called that; he just said "tentatively titled Less Wrong."

I'd be interested in hearing the answer to this. I suspect it was just a cool name that Eliezer came up with.

comment by tgb · 2012-01-16T13:31:21.784Z · LW(p) · GW(p)

An unusual answer to Newcomb's problem:

I asked a friend recently what he would do if he encountered Newcomb's problem. Instead of giving either of the standard answers, he immediately attempted to create a paradoxical outcome and, as far as I can tell, succeeded. He claims that he would look inside the possibly-a-million-dollars box and do the following: if the box contains a million dollars, take both boxes; if the box contains nothing, take only that box (the empty one).

What would Omega do if he predicted this behavior or is this somehow not allowed in the problem setup?

Replies from: None, Manfred
comment by [deleted] · 2012-01-16T15:31:25.261Z · LW(p) · GW(p)

Not allowed. You get to look into the second box only after you have chosen. And even if both boxes were transparent, the paradox is easily fixed. Omega shouldn't predict what you will do (because that assumes you will ignore the content of the second box, and Omega isn't stupid like that) but what you will do if box B contains a million dollars. Then it would correctly predict that your friend would two-box in that situation, so it wouldn't put the million dollars into the second box, and your friend would take only the empty box, according to his strategy. So yeah.

Replies from: tgb
comment by tgb · 2012-01-16T16:50:23.275Z · LW(p) · GW(p)

That's a nice simple way to reword it. Thanks.

comment by Manfred · 2012-01-16T16:15:41.263Z · LW(p) · GW(p)

There actually is a variant where you're allowed to look into the boxes - Newcomb's problem with transparent boxes.

And yes, it is undefined if you apply the same rules. However, there are two ways to re-define it.

1: Reduce the scope of the inputs. For example, Omega could operate on the following program: "If the contestant would take only one box when the million dollars is there, put the million dollars there." Before, Omega was looking at both situations, and now it's only looking at one.

2: Increase the scope of the program. There are two possible responses in two possible situations for a total of four inputs, so you just need to define Omega's response for all four. It's interesting that Omega now treats you differently depending on your thoughts, not just depending on which box you take, so this changes the genre of the problem.
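A toy Python sketch of Manfred's first re-definition may help; the function names and payoff amounts are illustrative, not part of the original problem statement. Under that rule, the paradoxical strategy described above by tgb is well-defined and simply nets its user nothing:

```python
# Toy sketch of Manfred's rule 1: Omega fills box B iff the contestant's
# policy would one-box *when the million is visibly there*.
# A policy maps the observed content of box B to a choice.

def friends_policy(box_b):
    # tgb's friend: two-box if the million is there, one-box (B only) if empty.
    return "two-box" if box_b == 1_000_000 else "one-box"

def omega_fills(policy):
    return policy(1_000_000) == "one-box"   # only this one case is consulted

def payoff(policy):
    box_b = 1_000_000 if omega_fills(policy) else 0
    choice = policy(box_b)
    return box_b + (1_000 if choice == "two-box" else 0)

print(payoff(friends_policy))        # 0: the friend takes only the empty box
print(payoff(lambda b: "one-box"))   # 1000000: a consistent one-boxer
```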

comment by ahartell · 2012-01-25T03:54:08.810Z · LW(p) · GW(p)

So I was reading a book in the Ender's Game series, and at one point it talks about the idea of sacrificing a human colony for the sake of another species. It got me thinking about the following question: is it rational to protect 20 "piggies" (which are morally equivalent to humans) and sacrifice 100 humans, if the 20 piggies constitute 100% of their species' population and the humans represent a very, very small fraction of the human race? At first, it seemed obvious that it's right to save the "piggies," but now I'm not so sure. Having tried to think of why saving them is right (for a few minutes), all I came up with was that diversifying investments in intelligent life makes intelligent life safer from extinction. But is diversity of life inherently valuable? What makes a future with "piggies" and humans better than one with just one or the other?

While writing this, I noticed one other reason: the valuable information that the "piggies" have. If this is eliminated, is it still worth saving them? And how many human lives can the "good of diversity" and the "loss of information" overcome? These are basically rhetorical questions (i.e. I'm not looking for answers like "53,243 humans per 'piggy'"), so I'm really just looking for your thoughts on this issue.

Replies from: shminux
comment by shminux · 2012-01-25T05:24:00.399Z · LW(p) · GW(p)

Is it rational to protect 20 "piggies"...and sacrifice 100 humans

Depends on your goal... If it is the survival of the human colony, then no. If it is the survival of the human race and the piggies hold a key to it, then yes (they do not, in this story). If it is the survival of the pequenino race, then yes. It does not make sense to ask which of the goals is rational, unless you can measure them against something else.

Replies from: ahartell
comment by ahartell · 2012-01-25T05:39:25.150Z · LW(p) · GW(p)

Right. Let's say that you just value "intelligent life," though, rather than the humans or pequeninos in particular. Say you're the hive queen. A piggy is equal to a human, and the pequenino race as a whole is equal to the human race as a whole.

(I worry that I'm still missing the point, and that the question is moot without first resolving whether you value "diversity" in its own right or not, and that such valuing is a preference independent of rational decision making. Still, I feel as if some preferences can be irrational.)

comment by ahartell · 2012-01-16T23:05:25.033Z · LW(p) · GW(p)

Does anyone know how one would go about suggesting a new feature for predictionbook.com? I think it would be better if you could tag predictions, so that you could then see separate ratings for predictions in different domains. Like, "Oh look, my predictions of 100% certainty about HPMOR are correct 90% of the time, but my predictions of 100% certainty about politics are right 70% of the time." Also, you could look at recent predictions for only a specific topic, or see how well calibrated another user is in a specific area.

Replies from: gwern, Anubhav
comment by gwern · 2012-01-28T18:38:47.939Z · LW(p) · GW(p)

Does anyone know how one would go about suggesting a new feature for predictionbook.com?

http://github.com/tricycle/predictionbook/issues

As Anubhav pointed out, PB is not important to Trike since it's orders of magnitude less popular than LW (as useful as I may find it). If you really want tagging for per-domain calibration, you either need to get your hands dirty or put up a bounty.

comment by Anubhav · 2012-01-21T11:58:06.523Z · LW(p) · GW(p)

PB has a severe manpower shortage. New features not coming any time soon, AFAICT.
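For anyone inclined to take gwern's "get your hands dirty" route, the per-tag calibration report ahartell describes is a small computation. A minimal Python sketch, assuming a simple (tag, stated confidence, outcome) record format — all names here are hypothetical, not PredictionBook's actual schema:

```python
from collections import defaultdict

# Hypothetical records: (tag, stated confidence, whether it came true).
predictions = [
    ("hpmor",    1.0, True), ("hpmor",    1.0, True), ("hpmor", 1.0, False),
    ("politics", 1.0, True), ("politics", 1.0, False),
]

def calibration_by_tag(records):
    # Group outcomes by (tag, confidence) and report the observed hit rate.
    buckets = defaultdict(list)
    for tag, conf, outcome in records:
        buckets[(tag, conf)].append(outcome)
    return {key: sum(v) / len(v) for key, v in buckets.items()}

for (tag, conf), rate in calibration_by_tag(predictions).items():
    print(f"{tag}: stated {conf:.0%}, observed {rate:.0%}")
# hpmor: stated 100%, observed 67%
# politics: stated 100%, observed 50%
```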

comment by billswift · 2012-01-16T14:46:54.021Z · LW(p) · GW(p)

Moore's Law Won't Fade for Business Reasons

Some writers have claimed that excess computing power will reduce the effort put into designing new and more powerful chips. But even when most users can't make use of the additional power, fear of losing out to the competition will keep designers pushing. Eventually, it will become too expensive to keep developing the new technology, but we are still a long way from those limits.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-19T01:41:26.336Z · LW(p) · GW(p)

This sounds like Marx's "overproduction" thesis: competition drives producers to make more and more regardless of demand. Generally, that sort of thing hasn't happened.

Specifically in the computer processing market: really, only gamers and datacenters buy the fastest available general-purpose processors. Other folks buy computers with an eye to convenience, portability, appearance, battery life, etc. rather than raw processing power.

Replies from: saturn
comment by saturn · 2012-01-20T22:28:17.478Z · LW(p) · GW(p)

Both home and datacenter markets seem to be shifting away from raw power and towards energy efficiency (i.e. maximizing computing power per watt) which increases battery life and decreases datacenter costs. This might actually end up propping up Moore's law anyway, as the more efficient transistors get, the more of them can be put on the same chip without overheating.

This will bottom out too, eventually, when a battery charge lasts longer than the device itself, or datacenter power and cooling costs become negligible.

comment by TimS · 2012-01-29T01:45:28.152Z · LW(p) · GW(p)

Depressing article opposing life extension research is depressing. Brief summary: In the least convenient possible world, human research trials would be unethically exploitative. And this is presented as an argument against attempting to end aging.

comment by David_Gerard · 2012-01-28T08:45:32.754Z · LW(p) · GW(p)

ZOMG, vaccines are part of the transhumanist agenda!! They are therefore unnatural and evil.

Spotted on Respectful Insolence.

comment by Normal_Anomaly · 2012-01-27T15:58:08.826Z · LW(p) · GW(p)

I've found a video that would be really cool if it were true, but I don't know how to judge its truth, and it sounds ridiculous. This talk by Rob Bryanton deals with higher spatial dimensions, and suggests that different Everett branches are separated in the 5th dimension, universes with different physical laws are separated in the 6th dimension, etc. I can't find much info about the creator online, but one site accuses him of being a crank. Can somebody who knows something about physics tell me if there is any grain of truth to this possibility?

Replies from: gwern
comment by gwern · 2012-01-28T18:39:54.991Z · LW(p) · GW(p)

That reminds me of Tegmark's multi-level classification of multiverses, but that classification doesn't make sense as a spatial set of dimensions, IIRC.

comment by ahartell · 2012-01-26T22:01:40.837Z · LW(p) · GW(p)

In what ways do Frequentists and Bayesians disagree?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-01-28T22:57:34.316Z · LW(p) · GW(p)

For a Bayesian, a random quantity is just an unknown one. For example, a coin not yet flipped is random (because I don't know which way it will land), and so is the population of Colorado (because I don't know what it is). Frequentists treat randomness as an inherent property of things, so that the coin flip would still be random (because it's not predetermined) but the population of Colorado isn't (because it's already fixed).

So given the problem of estimating the population of Colorado, a Bayesian would just hand you back a probability distribution (i.e. tell you how probable each population was). This option wouldn't be available to the Frequentist, who would refuse to put a probability distribution on a variable that wasn't random. Instead the Frequentist would give you an estimate and then tell you that the algorithm that generated the estimate had desirable properties, like being "unbiased".
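A minimal Python sketch of the contrast, using a made-up example of k successes in n trials (the uniform prior is an assumption chosen for illustration): the Bayesian hands back a distribution over the unknown quantity, while the frequentist hands back a point estimate plus a property of the estimating procedure.

```python
# Minimal sketch: estimating an unknown proportion p from k successes in n trials.
k, n = 7, 10

# Frequentist: a point estimate, plus a property of the *procedure*.
p_hat = k / n                 # unbiased: E[p_hat] = p under repeated sampling

# Bayesian: a full posterior distribution over the unknown quantity.
# With a uniform Beta(1, 1) prior, the posterior is Beta(k + 1, n - k + 1).
a, b = k + 1, n - k + 1
posterior_mean = a / (a + b)  # 8/12 ~= 0.667, pulled slightly toward the prior

print(p_hat, posterior_mean)
```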

comment by naturelover7 · 2012-01-25T17:25:45.514Z · LW(p) · GW(p)

I am interested in guidance on coping with a loved one's irrationality.

comment by Alicorn · 2012-01-20T06:52:01.603Z · LW(p) · GW(p)

I wish it to be known that the next person to sign on as a beta for my fiction is entitled to the designation "pi".

Replies from: Solvent, Dorikka, daenerys, None
comment by Solvent · 2012-01-24T05:10:53.170Z · LW(p) · GW(p)

I'd be happy to do so. I'm halfway through Summons at the moment, but will probably finish that today or tomorrow.

comment by Dorikka · 2012-01-24T04:43:20.711Z · LW(p) · GW(p)

Though I've also never beta'd before, I'm up to date on Elcenia and would be happy to try.

If you want me to do so, just shoot me a PM. It'd also probably be a good idea to let me know what kind of feedback you're looking for.

comment by daenerys · 2012-01-20T20:53:00.132Z · LW(p) · GW(p)

I recently discovered, and devoured, Luminosity. Thank you for contributing to the "rationalist fiction" genre!

I haven't started Elcenia yet, but if/when I get caught up, I'll let you know. I've never beta-d before, but I'd be happy to try!

comment by [deleted] · 2012-01-20T07:15:39.654Z · LW(p) · GW(p)

What value is there in being this "pi"? Also, what's this fiction?

(PS. Tau is the one true circle constant)

Replies from: Alicorn
comment by Alicorn · 2012-01-20T07:19:04.518Z · LW(p) · GW(p)

Pi is a popular Greek letter. In the past this was the fiction, which makes me consider it potentially relevant here (fan density) but lately it is this instead. I'll designate a taubeta after acquiring a pibeta, rhobeta, and sigmabeta.

comment by lessdazed · 2012-01-18T22:05:18.843Z · LW(p) · GW(p)

"My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,"

"I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,"

"Wot evah! I [believe] what I want!"

comment by tgb · 2012-01-17T17:13:12.545Z · LW(p) · GW(p)

Question regarding the quantum physics sequence:

This article tells me that the amplitude for a photon leaving a half mirror in each of the two directions is 1 and i (for straight and taking a turn, respectively), given an amplitude of 1 for the photon reaching the half-mirror. This must be a simplification; otherwise, two half mirrors in a line would give an amplitude of i for the photon turning at the first mirror, an amplitude of i for it turning at the second mirror, and an amplitude of 1 for it passing through both. This means that the squared-modulus ratio is 1:1:1 and all three events are equally likely, and hence the existence of the second (possibly very distant) half mirror reduces the amount of light leaving the first half-mirror from 1/2 to 1/3 of the intensity. I would be shocked to find that such a result is reality, since it would, among other things, allow transmission of information faster than the speed of light.

Okay, so the obvious fix is to say that Eliezer simplified things, and the real rule is that there is a factor of 1/sqrt(2) on each amplitude. Then the squared-modulus ratio of the above example is 1/2:1/4:1/4, as expected.

But then I run into my second problem: suppose that there is a photon headed at a half-mirror. Turning at the half mirror leads to a detector. Going straight leads to a set of four mirrors which brings the photon back to the starting point. This introduces a loop into the system. What is the amplitude of the light reaching the detector? Intuitively, I would expect this to be 1, or possibly less than 1. Assuming that my above factor of 1/sqrt(2) is correct, we get an infinite sum of amplitudes 1/sqrt(2) + 1/sqrt(4) + ..., which converges to 1 + sqrt(2). This seems very wrong - we would need a factor of 1/2 to converge to 1, but then the previous situation gives a squared-modulus ratio of 1/4:1/16:1/16, or 4:1:1, which is again unexpected.

So is there a factor on each term of the half-mirror and if so what is it? Since no factor would agree with both of these setups, what have I done wrong?

Replies from: Oscar_Cunningham, dbaupp
comment by Oscar_Cunningham · 2012-01-18T11:05:39.479Z · LW(p) · GW(p)

What dbaupp said. But in particular, you square first and then add, because arriving at a different time makes the possibilities distinguishable, and so there is no interference (you don't add the complex amplitudes).

Replies from: tgb
comment by tgb · 2012-01-18T13:32:30.054Z · LW(p) · GW(p)

Ah good. This is a good explanation and I had been wondering how the different timing would affect it. Thanks to you and dbaupp.

comment by dbaupp · 2012-01-18T03:37:17.728Z · LW(p) · GW(p)

Assuming that my above factor of 1/sqrt(2) is correct, then we get an infinite sum 1/sqrt(2) + 1/sqrt(4) + ...

To get the ratio, one needs to add the squared moduli, so 1/2+1/4+..., and that gives 1.
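A quick numeric check of both setups, assuming tgb's proposed 1/sqrt(2) factor per half-mirror and Oscar_Cunningham's rule that distinguishable (different-time) arrivals add probabilities rather than amplitudes:

```python
import math

s = 1 / math.sqrt(2)           # assumed amplitude factor per half-mirror

# Setup 1: two half-mirrors in a line. Same-time alternatives keep amplitudes.
turn_first  = 1j * s           # turn at the first mirror
turn_second = s * (1j * s)     # pass the first, turn at the second
pass_both   = s * s
print([round(abs(a) ** 2, 10) for a in (turn_first, turn_second, pass_both)])
# [0.5, 0.25, 0.25] -- the expected 1/2 : 1/4 : 1/4 ratio

# Setup 2: half-mirror with a mirror loop feeding back to the start.
# Arrivals at the detector on different circuits are distinguishable, so we
# add squared moduli (probabilities), not amplitudes:
print(sum(0.5 ** (k + 1) for k in range(60)))   # -> 1.0
```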

comment by Craig_Heldreth · 2012-01-17T15:29:52.869Z · LW(p) · GW(p)

There are 2600 people signed up for the Reddit Gödel, Escher, Bach reading group.

Replies from: multifoliaterose, Grognor
comment by multifoliaterose · 2012-01-18T13:35:13.846Z · LW(p) · GW(p)

Why do you bring this up?

For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in it is almost purely signaling.

Replies from: Craig_Heldreth
comment by Craig_Heldreth · 2012-01-18T14:33:49.806Z · LW(p) · GW(p)

It would be easier to discuss the merits (or lack thereof) of the book if you specified something about it that you believe lacks merit. The opinion that the book is overly hyped is a common criticism, but is too vague to be refuted.

It was a bestseller. Of course many of those people who bought it are silly.

Replies from: multifoliaterose
comment by multifoliaterose · 2012-01-18T16:24:21.839Z · LW(p) · GW(p)

I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.

Replies from: Craig_Heldreth
comment by Craig_Heldreth · 2012-01-18T18:34:18.960Z · LW(p) · GW(p)

Fair question, but not an easy one to answer.

I signed up for the reading group along with the 2600 Redditors. It was previously posted about here. The book is an entry point to issues of Artificial Intelligence, consciousness, cognitive biases, and other subjects which interest me. I enjoy the book every time I read from it, but I believe I am missing something which could be provided by a group reading or a group study. As I stated in the previous thread, I am challenged by the musical references. The last time I read music notation routinely was when I sang in a choir in middle school; many of the Bach references, and references to musical terms such as fugue, canon, fifths & thirds, &c., are difficult for me to grasp.

If one of those 2600 redditors felt moved to build some youtube tutorials with a bouncing ball along and atop the Bach scores illustrating Hofstadter's arguments, then I presume many others besides myself would enjoy seeing them.

Have you seen that Feynman video where he says he usually dislikes answering "why" questions? If not that, perhaps that Louis C. K. standup routine where he talks about his daughter asking "why?" It is a discussion prompt, but it often does not point anywhere. I have the feeling now that I am rambling.

Replies from: multifoliaterose
comment by multifoliaterose · 2012-01-18T22:31:50.050Z · LW(p) · GW(p)

I know Bach's music quite well from a listener's perspective, though not from a theoretician's perspective. I'd be happy to share some recordings of pieces that I've enjoyed / have found accessible.

Your last paragraph is obscure to me and I share your impression that you started to ramble :-).

comment by billswift · 2012-01-16T14:43:08.128Z · LW(p) · GW(p)

Utility functions do a terrible job of modelling our conscious wants and desires. Our conscious minds are too non-continuous to be modeled effectively. But our total minds are far more continuous; radical changes are rare, which is why "character" and "personality" are recognizable over time, often despite our conscious desires, even quite strong conscious desires.

comment by TimS · 2012-01-18T19:00:23.723Z · LW(p) · GW(p)

What is the rational case for having children?

One can tell a story about how evolution made us not simply to enjoy the act that causes children but to want to have children. But that's not a reason, that's a description of the desire.

One could tell a story about having children as a source of future support or cost-controlled labor (i.e. farmhands). But I think the evidence is pretty strong that children are not wealth-maximizing in the modern era.

And if there is no case for having children, shouldn't that bother us on "Our morality should add up to normal, ceteris paribus" grounds?

Replies from: jimrandomh
comment by jimrandomh · 2012-01-18T19:50:04.480Z · LW(p) · GW(p)

Rationality helps you map out the relations between actions and goals, and between goals and subgoals; and it can help us better understand the structure of the goals we already have. We can say that doing something is good because it helps achieve goals, or bad because it hinders them; and we can say that certain things are also goals (subgoals), if achieving them helps with our original goals. However, this has to bottom out somewhere; and we call the places where it bottoms out - goals that're valued in and of themselves, not just because they help with some other goal - terminal values.

Rationality has nothing whatsoever to say about what terminal values you should have. (In fact, those terminal values are implicit when you use the word "should".) For people who want children, that is usually a terminal value. You cannot argue that it's good because it achieves something else, because that is not why people think it's good.
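A toy Python sketch of the subgoal/terminal-value structure jimrandomh describes, with entirely hypothetical goals: instrumental value is inherited from the goals a subgoal serves, and the recursion bottoms out at terminal values, which carry value by fiat.

```python
# Toy sketch (hypothetical goals): instrumental value flows up from the
# goals a subgoal serves; the recursion bottoms out at terminal values.
TERMINAL = {"raise_children": 1.0}               # valued in and of itself
SERVES = {"earn_income": ["raise_children"],     # subgoal -> goals it serves
          "learn_trade": ["earn_income"]}

def value(goal, seen=frozenset()):
    if goal in TERMINAL:
        return TERMINAL[goal]                    # no further justification
    return sum(value(g, seen | {goal})
               for g in SERVES.get(goal, []) if g not in seen)

print(value("learn_trade"))   # 1.0 -- traced back to a terminal value
```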

Replies from: TimS, torekp
comment by TimS · 2012-01-18T20:15:57.271Z · LW(p) · GW(p)

You are right. And that's at least the second time I've made that mistake, so hopefully I'll learn from it.

Let me ask the sociological question I should have asked: It appears that many of the folks invested enough in "rationality" to be active participants in LW not only don't have children, but think that having children is not a good goal. That constellation of beliefs suggests that there is some selection pressure that links those two beliefs. Should the existence of that selection pressure worry us on "Add up to normal" grounds?

comment by torekp · 2012-01-22T21:17:08.918Z · LW(p) · GW(p)

However, this has to bottom out somewhere; and we call the places where it bottoms out - goals that're valued in and of themselves, not just because they help with some other goal - terminal values.

This seems to be a near-consensus here at LessWrong. But I'm not convinced that "it bottoms out in goals that're valued in and of themselves" follows from "this has to bottom out somewhere". I grant the premise but doubt the conclusion. I doubt that where-it-bottoms-out needs to be, specifically, goals -- it could be some combination of beliefs, habits, experiences, and/or emotions, instead.

But you say, we call the places where it bottoms out goals ... (emphasis added). Of course, you can do that, and it's even true that people will pretty well understand what you mean. You can call these things goals, and do so without doing terrible violence to the language, but I'm not convinced that this is the most felicitous way of speaking about motivation and ethical learning. Whether these bottom-level items are best described as goals, or habits, or beliefs, or something quite different, depends on psychological facts which may not yet be in (sufficient) evidence.