[POLL] LessWrong census, mindkilling edition [closed, now with results] 2011-12-11T09:23:01.146Z
[LINK] Scientists create mammalian H5N1 2011-11-26T20:00:49.011Z


Comment by Oligopsony on Could a digital intelligence be bad at math? · 2016-01-20T03:55:41.921Z · LW · GW

If it's digitally embedded, even if the "base" module was bad at math in the same way we are, it would be trivial to cybernetically link it to a calculator program, just as us physical humans are cyborgs when we use physical calculators (albeit with a greater delay than a digital being would have to deal with.)

Comment by Oligopsony on What are "the really good ideas" that Peter Thiel says are too dangerous to mention? · 2015-04-13T04:17:54.066Z · LW · GW

Thiel enjoys the spotlight; he's his own boss and could spend all day rolling around in giant piles of money if he wanted to; he's said plenty of things publicly that are way more NRx-y than the monopoly thing, and he's obviously fine.

Comment by Oligopsony on How has lesswrong changed your life? · 2015-04-01T12:32:20.251Z · LW · GW

I give more to charity and use spaced repetition systems heavily.

Comment by Oligopsony on Precisely Bound Demons and their Behavior · 2015-03-07T03:12:57.495Z · LW · GW

If the demons understand harm and are very clever in figuring out what will lead to it, what happens when we ask them to minimize harm, or maximize utility, or do the opposite of what they would want to do otherwise, or {rigidly specified version of something like this}?

Can we force demons to tell us (for instance) how they'd rank various policy packages in government, what personal choices they'd prefer I make, &c., so we can back-engineer what not to do? They're not infinitely clever, but how clever are they?

Comment by Oligopsony on Why humans suck: Ratings of personality conditioned on looks, profile, and reported match · 2014-08-12T12:49:38.554Z · LW · GW

The issue isn't whether looks are objective (clearly they aren't,) but whether judgments of looks are more correlated among the userbase than those of personality.

(Actually, the degree to which personality is correlated is probably the more interesting question here (granting that interestingness isn't particularly objective either.) Robin Hanson has pointed to some studies that suggest that "compatibility" isn't really a thing and some people are just easier to get along with than others - the study in question IIRC didn't take selection effects into account, but it remains an interesting hypothesis.)

Comment by Oligopsony on Failed Utopia #4-2 · 2014-07-25T02:21:20.967Z · LW · GW

It was a garbled version of Angkorism, sorry.

Comment by Oligopsony on The Benefits of Closed-Mindedness · 2014-06-04T01:16:03.171Z · LW · GW

If your point is that Openness is probably not a thing-in-the-world, I would be inclined to agree, actually.

Comment by Oligopsony on The Benefits of Closed-Mindedness · 2014-06-03T22:45:58.316Z · LW · GW

Big Five Openness correlates with political liberalism, so cet par it would be weak Bayesian evidence for open-mindedness, even if it is not an example of it.

Comment by Oligopsony on The Cold War divided Science · 2014-04-06T01:18:26.842Z · LW · GW

I am completely uninformed on the technical particulars here, so this is idle speculation. But it isn't totally implausible that ideological factors were at play here. By this I don't mean that there were arguments being deployed as soldiers - nothing political, as far as I'm aware, rides upon the two theories - but that worldviews may have primed scientists (acting in entirely good faith) to think of, and see as more reasonable, certain hypotheses. Dialectical materialism, for instance, tends to emphasize (or, by default, think in terms of) qualitative transformations that arise from historically specific tensions between different forces that eventually get resolved (in said qualitative transformations.) If I understand you correctly that the difference between the two theories was that the American one isolated a process (1) explicable by the properties of a single substance and (2) acting at all times in Earth's history, while the Soviet one isolated a process (1) explicable in terms of the interaction of forces and (2) only active until the conditions for it (stores of primordial methane) were exhausted, then it's easy to construct a just-so story about how a scientist thinking in the categories privileged by diamat might find the second more intuitive than the first. Likewise, if, as a stereotypical reductive mechanist, you tend to think of individual objects rather than relationships, and eternal laws rather than historically specific ones, the former might be more intuitive than the latter. Further, it seems at least facially plausible that if you had a scientific community with Aristotelian or German idealist frameworks, you'd have different dominant theories still - even with researchers acting in good faith, with lots of data, and material incentives to produce a theory that derived correct predictions. (Such frameworks bear some similarities to, but are more vague and general than, Kuhnian paradigms.)

Of course, I could totally misunderstand the nature of the two theories at play, and I don't know anything about the geological communities of the two superpowers specifically, so the just-so stories here are probably complete bullshit. But your concerns are more general than the specific examples as well, so consider their purpose to be illustrative rather than explanatory.

Comment by Oligopsony on Useful Personality Tests · 2014-02-12T00:56:29.142Z · LW · GW

And that willingness to invest such time might correlate with certain factors.

Comment by Oligopsony on White Lies · 2014-02-09T22:44:10.909Z · LW · GW

For present purposes, I suppose it includes any domain including the defense of lying itself.

Comment by Oligopsony on White Lies · 2014-02-08T14:36:21.385Z · LW · GW

All this needs the disclaimer that some domains should be lie-free zones. I value the truth and despise those who would corrupt intellectual discourse with lies.

Can anyone point me to a defense of corrupting intellectual discourse with lies (that doesn't resolve into a two-tier model of elites or insiders for whom truth is required and masses/outsiders for whom it is not?) Obviously there is at least one really good reason why espousing such a viewpoint would be rare, but I assume that, by the law of large numbers, there's probably an extant example somewhere.

Comment by Oligopsony on White Lies · 2014-02-08T14:18:07.764Z · LW · GW

At LessWrong there've been discussions of several different views all described as "radical honesty." No one I know of, though, has advocated Radical Honesty as defined by psychotherapist Brad Blanton, which (among other things) demands that people share every negative thought they have about other people. (If you haven't, I recommend reading A. J. Jacobs on Blanton's movement.) While I'm glad no one here thinks Blanton's version of radical honesty is a good idea, a strict no-lies policy can sometimes have effects that are just as disastrous.

To point out the obvious, speaking from personal experience, this is indeed a terrible idea.

A couple of months ago I told a lie to someone I cared about. This wasn't a justified lie; it was a pretty lousy lie (both in its justifiability and the skill with which I executed it) and I was immediately exposed by facial cues. I felt pretty awful, because a lot of my self-concept up to that point had been based around being a very honest person. From that point on, I decided to treat my "you shouldn't tell her ___" intuitions as direct orders from my conscience to reveal exactly that thing, and to pay close attention to whether the meaning of what I've said deviates from the truth in a direction favorable to me; as a consequence, I now feel rising anxiety whenever I have some embarrassing thought, followed by the need to confess it. I also resolved to search my conscience for any bad deeds I may have forgotten, which actually led to compulsive, fantastical searching for terrible things I might have done and repressed, no matter how absurd (I've gotten mostly-successful help with this part.) She's long since forgiven me for the original lie and what I lied about, but continues to find this compulsive confessional behavior extremely annoying, and I doubt I could really function if I experienced it around people in general rather than her specifically.

Comment by Oligopsony on Rationality Quotes January 2014 · 2014-01-24T02:34:16.648Z · LW · GW

This, but in a more general sense for the first: Pascal thought there were a bunch of sophisticated philosophical reasons that you should be a Catholic; the Wager was just the one he's famous for.

Comment by Oligopsony on Rationality Quotes January 2014 · 2014-01-22T20:12:53.860Z · LW · GW

I suspect this was written and is being upvoted in very different senses.

Comment by Oligopsony on The Onrushing Wave · 2014-01-19T21:10:25.060Z · LW · GW

See also Hanson's less than enthusiastic review.

Comment by Oligopsony on Dark Arts of Rationality · 2014-01-16T23:49:01.432Z · LW · GW

Amusingly enough, the example of TrollBot that came to mind was the God expounded on in many parts of the New Testament, who will punish you iff you do not unconditionally cooperate with others, including your oppressors.

Comment by Oligopsony on On Voting for Third Parties · 2014-01-13T16:28:08.444Z · LW · GW

To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least.

Is it? Again, I haven't done the math, but look at the behavior of minor parties in parliamentary systems. They typically demand a price for their support. If the Republican will get your vote regardless why should they care about you?

Comment by Oligopsony on Dangers of steelmanning / principle of charity · 2014-01-13T04:42:51.591Z · LW · GW

Taking arguments more seriously than you possibly should. I feel like I see all the time on rationalist communities people say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before.

Rather than - or at least in addition to - being a bug, this strikes me as one of charity's features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can substitute them for more original and/or coherent claims, then so much the better, I say.

Comment by Oligopsony on On Voting for Third Parties · 2014-01-13T04:10:11.872Z · LW · GW

Another consideration is the effects of your decision criteria on the lesser evil itself. All else being equal, and assuming your politics aren't so unbelievably unimaginative that you see yourself somewhere between the two mainstream alternatives, you should prefer the lesser evil to be more beholden to its base. The logic of this should be most evident in parliamentary systems, where third party voters can explicitly coordinate and sometimes back and sometimes withdraw support from their nearest mainstream parties, depending on policy concessions.

Comment by Oligopsony on Some thoughts on having children · 2014-01-09T08:04:47.421Z · LW · GW

Sure. Or more glibly, does malaria not inhibit economic development?

Comment by Oligopsony on Some thoughts on having children · 2014-01-08T18:30:14.648Z · LW · GW

Educated women have fewer children, reduced childhood mortality means less hedging to reach a desired number of children, above-noted changes away from agriculture and mandatory public schooling reduce the economic value of child labor, some other stuff.

Comment by Oligopsony on Some thoughts on having children · 2014-01-08T18:18:23.965Z · LW · GW

Also, deontic concerns about forcing existence on people.

As Apprentice points out the heritability of prosocial behaviors such as cooperativeness, empathy and altruism is 0.5, and I think most people here are aware that IQ has a heritability around that number as well and is a pretty good predictor of life outcomes. If you want to increase the number of people in the world that are like yourself, then having children is a great way of doing so.

I would submit that most people are not very good about judging whether they are prosocial geniuses. (This goes double for people who are likely to be reading this.)

Also: inasmuch as the problem with sperm (and egg) donation is a lack at the demand end rather than the supply end, surely one should seek to enter in on the demand side. Perhaps you really are a prosocial genius, but surely you are not the prosocialest geniusest. You probably suck in other ways too.

Also: heritability is not contribution, but that's veering towards a debate we've had and mostly exhausted already.

Moreover, the people you would save by donating to charity would also have children and those children would have children all of whom might require yet more aid in the future. Thus the short term gains in QALYs that giving to GiveWell recommended charities provides lead to a long term drain of resources and human capital.

That "might" is doing a lot of work here. The overall effect of economic development is to greatly reduce fertility.

Comment by Oligopsony on The selfish reason to write something for Ada Lovelace Day · 2013-10-11T09:12:54.042Z · LW · GW

Technically speaking, this seems like an altruistic reason to write something for Ada Lovelace Day, not a selfish one. Unless you're using the term in the trivial sense where "selfish reason to" is pleonastic.

Comment by Oligopsony on Meetup : Washington DC Book Swap meetup · 2013-09-22T19:49:54.339Z · LW · GW

I won't be able to make it today, but I do promise to show up sometime soon so I can return the books I borrowed at the last swap.

Comment by Oligopsony on The Robots, AI, and Unemployment Anti-FAQ · 2013-08-02T13:30:08.414Z · LW · GW

For their part, Stalinists have tended to be fond of technical elites as well. However, I suspect that grisly examples may arise simply from the depth of the sample size; the innumerable cruelties of the premodern world, after all, were chiefly overseen by humanistic elites. It may be that today humanistic values are substantially weaker and more "feminine" (from the perspective of their predecessors,) but this may also be part of why existing power structures are less fond of employing them.

(All this, of course, assumes this is a useful dichotomy; the primary avenues for elite recruitment under modern liberalism are business and the legal profession, which straddle the line in some ways.)

Comment by Oligopsony on Open thread, July 23-29, 2013 · 2013-07-24T21:37:35.487Z · LW · GW

Can Blindsight-style Scramblers employ anthropic reasoning?

Comment by Oligopsony on [Link] Cosmological Infancy · 2013-07-22T01:23:18.226Z · LW · GW

Doesn't the anthropic principle provide some difficulty for the latter solution as well - why should we find ourselves at the very beginning of such preposterously long lifespans?

Comment by Oligopsony on On manipulating others · 2013-07-12T01:32:41.006Z · LW · GW

Having spoken with you in person (unaware that this was a consciously chosen practice) my experience was mostly that it was cognitively burdensome and that I was mostly worried for you. I suspect this isn't what you're shooting for! (I also classified it alongside my "Will is a troubled genius" model, which may or may not be what you're going for.)

My personal experience is that I tend towards terrible self-destructiveness when I don't get enough human warmth, so this strategy would not be a good debiaser for me. But if you can make it work... actually, this seems like a good thing to get external feedback on whether you make it work. Have you?

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-25T12:58:45.683Z · LW · GW

"Rather" my butt; there's an incredibly obvious rude reply I could have made, and would have, had I the minimal intelligence to realize it.

Comment by Oligopsony on Post ridiculous munchkin ideas! · 2013-05-24T05:59:52.113Z · LW · GW

If you are much better than the market at predicting how cards will trend, you should probably be working for Star City or some other secondary market giant.

Probably the continuous uptrend in the P9 et al. can be understood as rational if the continued growth of the game is uncertain. There's always the black swan possibility that Wizards will catastrophically fuck up in some way and hence send prices tumbling. In addition, the growth of eternal formats is itself limited by the availability of staples. I would suspect there's an upper limit to how expensive the Moxen and friends can get on this basis alone - logarithmic growth of the game entails linear growth of Vintage and Legacy. This is, after all, why they created Modern, for which Modern Masters is possible.

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-21T16:39:59.008Z · LW · GW

Oh, I'm sure if I keep on my current kick I can dip below a kilokarma.

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-21T12:50:57.792Z · LW · GW

Maybe you can call in Gwern to measure my skull shape and really narrow it down.

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-21T12:49:11.600Z · LW · GW

The biggest barrier that has anything to do with cleverness? Sure.

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-20T00:02:34.816Z · LW · GW

or anybody smart enough to be on LW

Yeah, that captcha is a stumper.

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-19T19:26:14.613Z · LW · GW

Stop this. Seriously.

Stop what? I haven't the faintest idea what my IQ is, and you proposed low IQ as a reason for incomprehension in this instance. Why throw out a perfectly reasonable hypothesis?

Comment by Oligopsony on Open thread, May 17-31 2013 · 2013-05-19T15:12:54.547Z · LW · GW

Being angry is a signal that you're willing to back up your disagreement with consequences of some sort, whether it's violence or a lost friendship. It's also a signal, commensurate with the degree to which it is embarrassing, that this is highly important to you. Why, precisely, is it irrational to respond to this? Did evolution prime us to respond to it because it thought it would be funny? It is, indeed, not obvious to me (though perhaps I have low IQ) that it is astonishingly stupid to be more convinced (behaviorally) by pathos than logos; behavioral reinforcement is but one concern among many, and whose value fluctuates in accordance with how many interactions you expect to have with this person, whether they are physically larger than you, &c. And the persuasiveness of logos, obviously, can rather depend on the quality of the logos. Maybe your logos isn't as good as you think it is? You apparently weren't able to discern why they were upset with you in the first place, which certainly would have placed a damper on your ability to articulate convincing reasons why they should not.

Comment by Oligopsony on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-16T03:17:53.320Z · LW · GW

With respect to this being a "danger," don't Boltzmann brains have a decision-theoretic weight of zero?

Comment by Oligopsony on Post ridiculous munchkin ideas! · 2013-05-13T02:08:46.258Z · LW · GW

This says to me that early childhood nutrition is the common factor here.

Comment by Oligopsony on LW Women Entries- Creepiness · 2013-04-29T20:19:28.872Z · LW · GW

Translating any serious insights into LW-speak by myself is a bit of a daunting task

I like to think my entire tenure here has been something of an attempt at this, although of course I can't say how successful it's been.

(I'd also characterize it as in black rather than clown suits, at least from the inside. Will Newsome and muflax are the clown suit guys here, God bless them.)

Comment by Oligopsony on A thought-process testing opportunity · 2013-04-23T05:08:45.642Z · LW · GW

Some possibilities: it jets out in one direction, little droplets radiate outwards from all over, there are a bunch of miniature streams going in all directions, there are sorts of sheets of water radiating outward that split into droplets, it does any of these things at a rapid or very slow rate, the water doesn't leave at all

Reasoning: when I wring out a towel it usually all leaves in one big thing, then drips from all over. Does it all leave through one "faucet" because that's where the pressure is or because it's the lowest point? I've never paid too close attention to it, but I think it typically leaves from the bottom, which would imply that gravity's doing the work of choosing that one exit point. With the dripping off it's unclear whether the drops would just cling to the towel in the absence of gravity. It may be that the whole thing depends on the extra oomph of gravity; if when I wring out my towel it all leaves from the lowest spot (although again, I'm not sure of that,) then presumably the pressure prior to its getting there isn't enough to make it exit. All of this leans towards less water exiting than otherwise, and maybe not from all the same place, and (of course) slower than before. So my guess would be something like little droplets slowly radiating out from all over but not as much water leaving as before, unless it gets wrung really tight.

Meta reasoning: you were much more likely to post this if it was cool and/or surprising. The most intuitively cool and/or surprising things (that are still plausible according to the above thinking) would be if it all left as a sort of sheet or bubble, or if none (or at least very little) left, or maybe if the sheet or bubble arrived but only after a lot of wringing. On second thought, towels are pretty irregular surfaces, so I wouldn't expect a smooth sheet to radiate out from it. So my official guess is that very little will leave from it until it has been wrung very tightly, and then a bunch will surprisingly burst out.


Hey, that was cool! I guess I was sort of right - water mostly didn't exit the towel - but I didn't predict (correctly, in my head, or explicitly, in the post above) what it would look like as it was not exiting the towel, and I was more dramatically incorrect about it all bursting out. The features I hadn't thought about were the clumpiness of the water and the way water looks when it's about to fall from a towel but is still clinging - visibly on the surface, not hiding within the folds. I also think that if I had correctly predicted what it would look like I wouldn't have predicted the burst from the meta-reasoning, because the towel being shrouded within the water was already cool-looking. When I predicted that something cool would happen, I suspect I should have thought through my reasoning earlier and seen at what points there was an opportunity for something cool to occur - which is a point that might be more broadly applicable. To remember the surface tension stuff I should have tried to remember the mechanical features of water in general, like why droplets form in the first place and so on, rather than jumping directly into imagining what wringing out a towel looks like and reasoning from there. Broader lesson there, obviously not a new one, but perhaps one I should be better about keeping in mind: appeals to first principles are relatively more important when dealing with novel situations.

Comment by Oligopsony on What truths are actually taboo? · 2013-04-19T21:33:26.584Z · LW · GW

The central premise of Time on the Cross - that slavery was economically profitable and unlikely to "wither away", and that this had some positive effect on the treatment of the slaves - seems quite plausible to me. (That said, I believe this is only true after the invention of the cotton gin).

The first half of the thesis is most assuredly true. It could be that if not for the invention of the cotton gin, slavery would not have been profitable in the cotton-growing regions of the US South, but slavery was extremely profitable and economically dynamic elsewhere, so I wouldn't be inclined to lay too much emphasis on the gin (except as a matter, possibly, of where slavery came to be located, as it did die out "naturally" in the areas where it was unprofitable.) However, it is also true that northern and/or metropolitan political leaders generally believed (however incorrectly) that free labor would generally be more efficient than slave, which to be fair it was in the industrial production processes that the abolishing regions had a comparative advantage in.

I am extremely skeptical of the second part of the thesis, because most everything I've seen indicates that slaves were worse off than black sharecroppers were worse off than southern whites were worse off than northern whites. But I haven't actually read Time on the Cross too closely.

Comment by Oligopsony on What truths are actually taboo? · 2013-04-19T20:04:23.002Z · LW · GW

For serious (though hardly undisputed) evidence that slavery wasn't, in certain respects, "not all that bad" see Fogel and Engerman's Time on the Cross. Note also that Fogel and Engerman were allowed to say this and that they both remain highly respected academics, despite Engerman existing in just the sort of field that the Sheeple Can't Handle My Thoughtcrime crowd would predict to be most witchhunty.

Comment by Oligopsony on What truths are actually taboo? · 2013-04-19T19:54:52.607Z · LW · GW

Had it never been officially discouraged in the first place, I would still expect it to be less popular in 2013 than 1913. Wouldn't you?

Comment by Oligopsony on Litany of a Bright Dilettante · 2013-04-19T16:59:05.089Z · LW · GW

From my experience, I think that your estimate of the odds of encountering a comment "which blows apart their argument" as about 1% is overly optimistic. Maybe in some other fields it's different. At best you can expect a minor correction or a qualification.

That's probably a more accurate way of phrasing things, yeah.

Comment by Oligopsony on Litany of a Bright Dilettante · 2013-04-19T05:41:03.586Z · LW · GW

For any given assertion by an expert on a subject you are not an expert on, the probability that your criticism is correct is small. However,

1) this does not mean that the expected value of the criticism is negative, even to the expert. If the expert receives 100 comments, 99 of which are confused and one of which blows apart their argument, then they are probably collectively valuable.

2) if the expert is unusually patient, your comment can present her with an opportunity to correct your confusion.

I would say that the important thing is more humility of presentation than humility of willingness to speak at all.

Comment by Oligopsony on Explicit and tacit rationality · 2013-04-10T01:56:06.777Z · LW · GW

If Rationality is Winning, or perhaps more explicitly Making The Decisions That Best Accomplish Whatever Your Goals Happen to Be, then Rationality is so large that it swallows everything. Like anything else, spergy LW-style rationality is a small part of this, but it seems to me that anything which one can meaningfully discuss is going to be one such small portion. One could of course discuss Winning In General at a sufficiently high level of abstraction, but then you'd be discussing spergy LW stuff by definition - decision theory, utility, and so on.

If businessfolk are rather rational at running businesses, but no more rational than anyone else about religion, or if people who have become experts on spergy LW stuff are no more winningful about their relationships, &c. &c., this (to my mind) brings into question the degree to which a General Rationality-as-Winningness skill exists. You acknowledge the distinction between explicit and tacit rationality, but do you expect successful entrepreneurs to be relatively more successful in their marital life? When you say you want to teach tacit rationality, do you mean something distinct from Teaching People How To Do Things Good?

Comment by Oligopsony on Don't Get Offended · 2013-03-31T14:04:31.463Z · LW · GW

Okay, so at this point we're basically disagreeing over what someone intended by what they say. Unless Julian wants to clarify I'm going to tap out.

Comment by Oligopsony on Don't Get Offended · 2013-03-30T23:14:11.608Z · LW · GW

Which claim? The one that anthropologists are endorsing is not the one that's politically convenient to them.

Comment by Oligopsony on Don't Get Offended · 2013-03-28T06:28:01.607Z · LW · GW

or even outright lies

You're misunderstanding Julian's claim, albeit I think for reasons of inferential distance rather than deliberate misreading. The claim was not that anthropology/Sinister Cathedral Orthodoxy endorses inborn gender identity, despite its being wrong, for its political utility to trans rights. Such Orthodoxy is precisely the basis on which he thinks it is wrong. The claim was that activists endorse this false belief for its political utility, and that he and other Sinister Cathedral Agents don't feel particularly obliged to go out of their way to correct it (although doing so was precisely what he did in that post.) If there was a widespread belief that washing your hands protected you from demons, I would not fault epidemiologists for failing to prioritize disabusing the public of this. Nor does it strike me as an affront to science that epidemiologists, as a general rule, have normative commitments that extend beyond scientific inquiry and on to the belief that health is better than sickness.