Comments

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2015-09-23T23:02:04.453Z · LW · GW

That's not exactly true. You can volunteer for far less than the minimum wage (Some would say infinitely less) if you want to. What you can't do is employ someone for some non-zero amount of money that's lower than the minimum wage.

Comment by advael on Words per person year and intellectual rigor · 2015-08-27T16:52:12.638Z · LW · GW

I suspect that your model has been built to serve the hypothesis you started with.

First of all, I'm not sure what measure you're using for "rigorous thought". Is it a binary classification? Are there degrees of rigor? I can infer from some of your examples what kind of pattern you might be picking up on, but if we're going to try to say things like "there's a correlation between rigor and volume of publication", I'd like to at least see a rough operational definition of what you mean by rigor. It may seem obvious to you what you mean, and it may seem like a subject on which many people on this site devoted to refining human rationality will have opinions. That makes it more important to define your terms rigorously, not less, because otherwise your results will just reflect the variation in everyone's personal definition of rigor.

For the sake of argument, we could use something like "ratio of bits of information implied by factual claims to bits of information contained in presented evidence supporting factual claims" if we want something vaguely quantifiable. It seems your initial set of examples uses a more heuristic approach, with the rigorous group consisting mostly of well-known scientists, artists, and philosophers who are well-liked and whose findings/writings are considered well-founded/meaningful/influential in our current era, and your non-rigorous group consisting mostly of philosophers and some scientists who are at least partially discredited in our current era. I suspect that this might not be a very predictive heuristic, as I think it implicitly relies on some hindsight and also would be vulnerable to exactly the effect you claim if your claim turns out to be true.
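For concreteness, here's a toy sketch of how that claims-to-evidence ratio could be computed, assuming you already had some way of scoring the bit counts (the scoring itself is the genuinely hard part, and all the numbers below are invented):

```python
# Toy sketch of the ratio metric suggested above: bits of information
# implied by an author's factual claims, divided by bits of information
# contained in the evidence they present for them. Lower = more rigorous.
# Estimating the bit counts themselves is the genuinely hard part; the
# numbers used here are invented.

def claims_to_evidence_ratio(claim_bits: float, evidence_bits: float) -> float:
    """Bits implied by factual claims per bit of presented evidence."""
    if evidence_bits == 0:
        return float("inf")  # sweeping claims, no supporting evidence at all
    return claim_bits / evidence_bits

# Hypothetical comparison: a careful author vs. a prolific, unsupported one.
print(claims_to_evidence_ratio(claim_bits=120, evidence_bits=300))  # 0.4
print(claims_to_evidence_ratio(claim_bits=900, evidence_bits=150))  # 6.0
```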

Also, I suspect that academic publication and publication of e.g. novels, self-help books, poetry, philosophical treatises, etc. would follow very different rules with respect to rigor versus volume of publication; there are structures in place to make them do exactly that. While journal publication and peer review rules are obviously far from perfect, I suspect that producing a large volume of non-rigorous work is a much better strategy for a fiction writer, philosopher, or artist than it is for a scientist who, if unable to sufficiently hide their non-rigor, will not get their paper published at all, and might start becoming discredited and losing grant money to do further research. In particular, I think drawing examples from such a wide temporal range of publication dates is going to confound you a lot, because standards have changed and publication rates in general have gone way up in the last ~150 years.

Actually, I'm not even sure how a definition of "rigorous thought" that applies to scientific literature could apply cleanly to fiction-writing, unless it's the "General Degree of Socially-Accepted Credibility" heuristic discussed earlier.

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-27T21:01:09.250Z · LW · GW

Oh, I guess I misunderstood. I read it as "We should survey to determine whether terminal values differ (e.g. 'The tradeoff is not worth it') or whether factual beliefs differ (e.g. 'There is no tradeoff')"

But if we're talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and properly run, can be low-impact and extremely informative.

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-27T19:00:36.006Z · LW · GW

What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-27T17:41:56.661Z · LW · GW

A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.

There are many questions about humans that I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it's not nothing, and might be worth doing, especially in the absence of a more-correlated test we can do given our technology, resources, and ethics.

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-24T22:13:50.494Z · LW · GW

I'd argue that that little one-off comment was less patronizing and more... sarcastic and mean.

Yeah, not all that productive either way. My bad. I apologize.

But I think the larger point stands: these ideological labels are so leaky, and defined so inconsistently by so many people, that it's hard to even meaningfully say something like "That's not a representative sample of conservatives!", let alone "You probably haven't met people like that, you're just confabulating your memory of them because you hate conservatism".

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-24T18:58:11.836Z · LW · GW

Because those vectors of argument are insufficiently patronizing, I'm guessing.

But in all seriousness, the "judging memeplexes from their worst members" issue is pretty interesting, because politicized ideologies and really any ideology that someone has a name for and integrates into their identity ("I am a conservative" or "I am a feminist" or "I am an objectivist" or whatever) are really fuzzily defined.

To use the example we're talking about: Is conservatism about traditional values and bolstering the nuclear family? Is conservatism about defunding the government and encouraging private industry to flourish? Is conservatism about biblical literalism and establishing god's law on earth? Is conservatism about privacy and individual liberties? Is conservatism about nationalism and purity and wariness of immigrants? I've encountered conservatives who care about all of these things. I've encountered conservatives who only care about some of them. I've encountered at least one conservative who has defined conservatism to me in terms of each of those things.

So when I go to my internal dictionary of terms-to-describe-ideologies, which conservatism do I pull? I know plenty of techie-libertarian-cluster people who call themselves conservatives who are atheists. I know plenty of religious people who call themselves conservatives who think that cryptography is a scary terrorist thing and should be outlawed. I know self-identified conservatives who think that the recent revelations about NSA surveillance are proof that the government is overreaching, and self-identified conservatives who think that if you have nothing to hide from the NSA then you have nothing to fear, so what's the big deal?

I do not identify as a conservative. I can steelman lots of kinds of conservatism extremely well. Honestly I have some beliefs that some of my conservative-identifying friends would consider core conservative tenets. I still don't know what the fuck a conservative is, because the term gets used by a ton of people who believe very strongly in its value but mean different things when they say it.

So I have no doubt not only that Acty has encountered conservatives who are stupid, but that their particular flavors of stupid are core tenets of what those people consider conservatism. The problem is that this colors her beliefs about other kinds of conservatives, some of whom might only be in the same cluster in person-ideology-identity space because they use the same word. This is not an Acty-specific problem by any means; I know arguably no one who completely succeeds at not doing this, the labels are just that bad. Who gets to use the label? If I meet someone and they volunteer the information that they identify as a conservative, what conclusions should I draw about their ideological positions?

I think the problem has to stem from sticking the ideology-label onto one's identity, because then when an individual has opinions, it's really hard for them to separate their opinions from their ideology-identity-label, especially when they're arguing with a standard enemy of that ideology-label, and can thus easily view themselves as standing in for the ideology itself. The conclusion I draw is that as soon as an ideology is an identity-label, it quickly becomes pretty close to useless as a bit of information by itself, and that the speed at which this happens is somewhat correlated with the popularity of the label.

Comment by advael on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-22T19:17:25.269Z · LW · GW

Um, I fail to see how people are making and doing less stuff than in previous generations. We've become obsessed with information technology, so a lot of that stuff tends to be things like "A new web application so that everyone can do X better", but it fuels both the economy and academia, so who cares? With things like maker culture, the sheer overwhelming number of kids in their teens and 20s and 30s starting SAAS companies or whatever, and media becoming more distributed than it's ever been in history, we have an absurd amount of productivity going on in this era, so I'm confused where you think we're "braking".

As for video games in particular (Which seems to be your go-to example for things characteristic of the modern era that are useless), games are just a computer-enabled medium for two kinds of things: Contests of will and media. The gamers of today are analogous in many ways to the novel-consumers or TV-consumers or mythology-consumers of yesterday and also today (Because rumors of the death of old kinds of media are often greatly exaggerated), except for the gamers that are more analogous to the sports-players or gladiators or chess-players of yesterday and also today. Also, the basically-overnight-gigantic indie game development industry is pretty analogous to other giant booms in some form of artistic expression. Video games aren't a new human tendency; they're a superstimulus that hijacks several older ones (Storytelling, Artistic expression, Contests of will) and lowers entry barriers to them. Also, the advent of powerful parallel processors (GPUs), a huge part of the boom in AI research recently, has been driven primarily by the gaming industry. I think that's a win regardless.

Basically, I just don't buy any of your claims whatsoever. The "common sense" ideas about how society improving on measures of collaboration, nonviolence, and egalitarianism will make people lazy and complacent and stupid have pretty much never borne out on a large scale, so I'm more inclined to attribute their frequent repetition by smart people to some common human cognitive bias than some deep truth. As someone whose ancestors evolved in the same environment yours did, I too like stories of uber-competent tribal hero guys, but I don't think that makes for a better society, given the overwhelming evidence that a more pluralistic, egalitarian, and nonviolent society tends to correlate with more life satisfaction for more people, as well as the acceleration of technology.

Comment by advael on The Brain as a Universal Learning Machine · 2015-06-27T01:31:56.786Z · LW · GW

I'm inclined to agree. Actually I've been convinced for a while that this is a matter of degrees rather than being fully one way or the other (Modules versus learning rules), and am convinced by this article that the brain is more of a ULM than I had previously thought.

Still, when I read that part the alternative hypothesis sprang to mind, so I was curious what the literature had to say about it (Or the post author.)

Comment by advael on The Brain as a Universal Learning Machine · 2015-06-25T21:08:52.642Z · LW · GW

For e.g. the ferret rewiring experiments, tongue-based vision, etc., is a plausible alternative hypothesis that there are more general subtypes of regions that aren't fully specialized but are more interoperable than others?

For example, (Playing devil's advocate here) I could phrase all of the mentioned experiments as "sensory input remapping" among "sensory input processing modules." Similarly, much of the work on brain-computer interfaces for e.g. controlling cursors or prosthetics could be called "motor control remapping". Have we ever observed cortex being rewired for drastically dissimilar purposes? For example, motor cortex receiving sensory input?

If we can't do stuff like that, then my assumption would be that at the very least, a lot of the initial configuration is prenatal and follows kind of a "script" that might be determined by either some genome-encoded fractal rule of tissue formation, or similarities in the general conditions present during gestation. Either way, I'm not yet convinced there's a strong argument that all brain function can be explained as working like a ULM (Even if a lot of it can).

Comment by advael on Autism, or early isolation? · 2015-06-18T18:47:10.066Z · LW · GW

Negative, but it may be because of rollover?

Comment by advael on Autism, or early isolation? · 2015-06-18T16:52:58.143Z · LW · GW

But without medicalizing, how can we generate significant-sounding labels for every aspect of our personalities?

How will we write lists of things "you should know" about dealing with (Insert familiar DSM-adjacent descriptor)?

Without a constant stream of important-sounding labels, how will I know what tiny ingroups I belong to? My whole identity might fall apart at the seams!

Comment by advael on Rational Me or We? · 2015-06-15T20:32:40.125Z · LW · GW

I would guess that martial arts are so frequently used as a metaphor for things like rationality because their value is in the meta-skills learned by becoming good at them. Someone who becomes a competent martial artist in the modern world is:

  • Patient enough to practice things they're not good at. Many techniques in effective martial arts require some counter-intuitive use of body mechanics that takes non-trivial practice to get down, and involve a lot of failure before you achieve success. This is also true of a variety of other tasks.

  • Possessed of the fine balance of humility and confidence required to learn skills from other people. Generally if you're going to get anywhere in martial arts, you're not going to derive it from first principles. This is true of most human knowledge domains. Learning to be a student or apprentice is valuable, as is learning to respect the opinions of others when they demonstrate their competence.

  • Practiced in remaining calm and thinking strategically under pressure. If one is taught to competently handle a high-stress situation such as a physical fight, one can make decisions quickly and confidently even when stressed. This skill is useful for reasons I hope I don't have to go into depth on.

  • Able to engage mirror neurons to understand and reason about the nonverbal behavior of other humans, and somewhat understand their intentions and strategies. This is useful in a fight and taught by many martial arts, but extremely useful in other contexts, not the least of which being negotiation with semi-cooperative individuals.

  • Probably pretty physically fit. It's a decent whole-body exercise regimen, and there are numerous benefits to exercising frequently and keeping in good shape. It is probably not the most efficient exercise regimen out there by a long shot, but it may be one that is intrinsically fun to do for a lot of people, and thus it's likely that they'll stick with it.

  • Almost incidentally, reasonably capable of defending oneself in one of the few instances where civilized behavior temporarily breaks down (An argument with a seemingly reasonable person who quickly becomes unreasonable, perhaps alcohol is involved? I don't know. Fights are low-stakes and uncommon these days but they still happen). This is kind of a weird edge case in a modern society but might non-trivially prevent injury or gain you status when it comes up.

Note that there are a lot of vectors by which one can gain these meta-skills. While there are a bunch of martial arts enthusiasts out there who would probably claim that martial arts have the exclusive ability to grant you one or more of these, I really doubt that's the case. However, martial arts get a pretty good amount of coverage in real and fictional cultural reference frames that we can be reasonably confident most people are familiar with, and it's not a bad example of a holistic activity that can hone a lot of these meta-skills.

It's also worth noting that while the skills involved in interacting with a society of people you trust and want to work with are often different from the skills involved in becoming a competent individual, many of the latter can be helpful in the former. I would much rather be on a team with a bunch of people who understand the meta-skill of staying calm under pressure, or the meta-skill of making their beliefs pay rent, than be on a team with a bunch of people who don't. Aggregated individual prowess isn't the only factor for group success, and it may not even be the most important one, but it certainly doesn't hurt.

Comment by advael on The Joy of Bias · 2015-06-12T20:16:43.996Z · LW · GW

I can't say I always find that to be true for myself. There are truths that I wish weren't true, and when I find that I was merely being overly pessimistic, that's usually a good thing. Even though I want my beliefs to reflect reality, that doesn't stop me from sometimes wishing certain beliefs I have weren't true, even if I still think that they are. It's possible that being wrong can be a good thing in and of itself, completely separate from it being good to find out that you're wrong, if you're wrong.

Comment by advael on Welcome to Less Wrong! · 2015-06-09T17:14:09.561Z · LW · GW

A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.

(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author's intelligence when assessing whether a model bears out in reality.)
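To illustrate with a deliberately contrived sketch (the sensor, the bias, and the numbers are all invented): a single piece of bad information propagates into many derived results that pass every internal cross-check while all being wrong.

```python
# Deliberately contrived: one piece of bad information (a biased sensor)
# produces many derived results that agree with each other perfectly
# while every one of them is wrong about reality.

TRUE_TEMP_C = 20.0
BIAS = 10.0                    # the "bad information"
measured = TRUE_TEMP_C + BIAS  # 30.0, and everything downstream inherits it

fahrenheit = measured * 9 / 5 + 32  # 86.0, consistent with 30 C
kelvin = measured + 273.15          # 303.15, consistent with 30 C

# All internal cross-checks pass...
assert abs((fahrenheit - 32) * 5 / 9 - measured) < 1e-9
assert abs((kelvin - 273.15) - measured) < 1e-9

# ...but every value misstates the actual temperature by the same 10 degrees.
print(measured, fahrenheit, kelvin)
```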

Comment by advael on How my social skills went from horrible to mediocre · 2015-06-01T19:39:23.102Z · LW · GW

> That is very likely, but you are assuming a large social circle is an unalloyed blessing.

I definitely don't think it is. Too large a social circle can be unwieldy to manage, eating up a ton of someone's time for the sake of a huge variety of shallow and uninteresting relationships, even if somehow every person in said social circle is interesting. I don't mean to imply that everyone should strive to broaden their social circle by any means. There are plenty of people who don't feel socially isolated at all, and there are even plenty of people with the opposite problem.

> I think there are at least two failure modes here: one is to assume the mantle of the suffering lone genius and descent into misanthropy; but the other one is to suppress one's weirdness, start talking mostly about beer and baseball (or makeup and gossip) and descent into mediocrity.

I don't deny the existence of uninteresting people, but I think the descent into misanthropy failure mode is more common to high-intelligence people who feel socially isolated than the other failure mode, and hope that trying to more accurately assess people based on varied criteria and hack one's perception to see more people as interesting will not necessarily lead to dumbing down one's interests in order to relate to people on a more least-common-denominator basis. That's a choice that can be made once you've assessed people more accurately or favorably, and definitely one that doesn't have to be made just because you've updated your beliefs about the people you encounter.

> I don't know if getting stuck on the definition of intelligence is the underlying problem such people are having. I would probably reformulate your position as advice to see people as diverse and multidimensional, to recognize that there are multiple qualities which might make people attractive and interesting. You are basically arguing against a single-axis evaluation of others and that's a valid point but I think it can be made directly without the whole "tabooing the word" context.

I agree with you, and in fact my original comment mentioned that "intelligence" is not the only single-axis evaluation label that people use. I think a more general phrasing might be "identify social single-axis fast-comparators that may be causing you to have cached first impressions about people. Fix your assessments by tabooing whatever label you happen to use, and making new assessments based on trying to counter your initial impression (Identify strengths of people you initially dislike, weaknesses of people you initially like too much). You may not change your mind about those people upon closer inspection, but it's still worthwhile to do as an exercise, particularly if you are unsatisfied with your social circle in general or your relationships with particular people."

Intelligence happens to be a pretty common single-axis comparator that people I know (and clusters relevant to the LW population) often use.

Comment by advael on How my social skills went from horrible to mediocre · 2015-06-01T17:45:30.531Z · LW · GW

> I think it gets a bit more complicated than that because there are feedback loops. The problem is that an expression of the "s/he is dumb" sort is not necessarily a bona fide evaluation of someone's smarts. It may well be (and often is) just an insult -- and insults are more or less fungible.

I definitely don't discount the "sour grapes" scenario as something that probably happens a lot. In fact, I think that a lot of people's assessments of other people's intelligence involve, to put it kindly, subjective judgments along those lines, which is part of why I'm advocating trying to disrupt those.

> That problem is likely to be mostly a function of two things: (1) How large a social network do you want to have (or are capable of maintaining); and (2) What's the quality of the fish in the pond in which you are fishing?

I definitely agree that those factors are pretty relevant to the aforementioned problem, but they're kind of moot. After all, (1) is equivalent to "Having a utility function that defines this as a problem", and (2) is something you can't necessarily control (If you see it as enough of a problem to move, I suppose you can, but that seems pretty expensive and it would be a shame to come to that solution without trying something like what I'm suggesting first). I'm merely suggesting that the perception of (2) may sometimes arise from an ill-formed manner of assessing "quality of fish".

> How do you define and measure intelligence, then? When you say "Alice is more (or less) intelligent than Bob", what exactly do you mean?

Um, well, I guess I should quote myself here:

> I think that IQ is a pretty good measurement for a lot of purposes, and that there's a tendency in lay circles to undervalue it as a measure of a person's intelligence (In the vague socially-applicable sense we're talking about. Let's say "thinking correctly and clearly" for the sake of argument)

I think that as far as things we can assign cardinal values to and compare on a continuum, IQ is our best bet, but there do seem to be some nebulous other contributing factors (Maybe the much-touted EQ, a decent education, or some other "general life experience" factors? I dunno) which can make someone at least appear more or less intelligent than their IQ might imply (Again, operationally defined as "seeming to think clearly, correctly, and quickly". If you'd like to revise this operational definition to "exactly IQ" we can do that, and I'll still argue that it's not something most people are good at detecting from a first impression). Like I actually said, I think IQ is fine, and that most people undervalue its importance. I'm not sure where you got mixed up here. We could redefine "clearly, correctly, quickly" as "interesting" rather than "intelligent," although for me personally that's necessary but not sufficient.

> I would agree there is a lot of self-fulfilling prophesies happening here, but I think they have much more to do with things like self-confidence and much less with making correct intelligence estimates, especially ex ante.

Self-confidence may be some people's problem, but it's definitely not everyone's problem. Does it strike you as impossible or even unlikely that some people have the problem of dismissing people out of hand and thus drastically decreasing their potential social circle in undesirable ways?

> These things are not exclusionary -- you start with a speed-optimization and you continue with a better scheme as you get more information. If you get stuck on your cache hit, that's a general problem not specifically tied to evaluating other people.

I agree that getting stuck on one's cache hits in social assessment is not a special case but a specific instance of a more general phenomenon. I would argue that social situations are a great problem domain in which to apply general rationality techniques, and that the method for ameliorating a problem I perceive some (but not all) people dealing with social isolation to have can be generalized to "Tabooing concepts," something that's already gotten coverage here. I think that the domain is of enough interest to many people that this application of said technique may be worthwhile to mention, and is perhaps even a means of attacking the general "getting stuck on a cache hit" problem in a domain that might yield some immediately useful results for a non-negligible number of people. If said application is too obvious, I apologize for stating the obvious.

Comment by advael on Rationality Quotes Thread June 2015 · 2015-05-31T21:09:08.141Z · LW · GW

It's less that he finds an argument whose premise is repugnant, and more that he realizes that he doesn't have a good angle of attack for convincing the slavers to not mutilate/kill him at all, but does have one for delaying doing so. I'd argue it's more of a "perfect is the enemy of the good" judgement on his part than a disagreeable argument (After all, Tyrion has gleefully made that clarification to several people before.)

Comment by advael on How my social skills went from horrible to mediocre · 2015-05-29T23:27:43.993Z · LW · GW

> Do you, by any chance, have any data to support that? I am sure there are people for whom it's a problem, I'm not sure it's true in general, even among the nerdy cluster.

Very good point. I don't want to claim it's a statistical tendency without statistics to back it up. Nonetheless, given articles like the OP, it seems like a lot of people in said clusters (Could be self-selecting, e.g. intelligent nerd-cluster-peeps are more likely to blog about it despite not having a higher rate, etc) have a problem that consists of feeling socially isolated, unable to relate to people, and unable to engage people in a conversation. I'm simply pointing to a plausible explanation for at least some cases of that phenomenon, which I've built up from some observation of myself and my peers, and some theoretical knowledge (For example, http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf , well-known social cognitive biases such as the Fundamental Attribution Error, the "cached thought" concept that is well-known to lesswrong readers, etc) and come up with a rough strategy for mitigating it, which I think has been reasonably successful. I'd be very interested in knowing through some rigorous means whether this bears out in aggregate, but I can't point to any particular research that's been done, so I'll leave it as a fuzzy claim about a tendency I've observed; I don't claim that I would need extremely strong evidence to be convinced otherwise.

> That's a very common situation at parties where you circulate among a bunch of unknown to you people.

I agree, and I'm sure your heuristics are well-tuned for choosing who to talk to at parties given options that fit your criteria. The problem of having a social network limited by an unreasonably high minimum-intelligence requirement for interest in a person may not be one that you have, and even if you do, I suspect that it is seldom going to come up at a party you intentionally went to.

> Nope, that is thinking correctly. Clear thinking is a bit difficult to put into words, it's more of a "I know it when I see it" thing. Maybe define it as tactical awareness of one's statements (or thoughts) -- being easily able to see the implications, consequences, contradictions, reinforcing connections, etc. of the claim that you're making?

I'd think that would be more succinctly stated as "thorough" (It actually doesn't matter, you defined your term well enough so I'm glad to use it, but it strikes me as a counterintuitive use of "clear"), but I still think it's a poor indicator. People sufficiently good at rehearsed explanations of an opinion or knowledge domain can sound much more like they've thought through {implications, consequences, contradictions, reinforcing connections} of their statement than someone who is thinking clearly (Even in that sense) but improvising, even if the improviser has a significantly higher IQ, for example.

I also don't deny that there may exist ways you can conversationally prod someone into revealing more about whatever intelligence measure you care about by e.g. forcing them to improvise, but a really well-articulated network of cached thoughts can be installed in people across a wide range of intelligence, and it's a lot easier to jump a small inferential distance from a cached thought quickly than to generate one on the fly; the former can be accomplished by being well-read.

> I don't think I would agree. Making fine distinctions, maybe, but in a sufficiently diverse set there is rarely any confusion as to who's in the left tail and who's in the right tail. And I found that my perceptions of how smart people are correlate well with IQ proxies (like SAT scores).

I am willing to believe that some people are able to calibrate their IQ-sense well. I'm even more willing to believe that almost everyone believes that they are. I would bet that people who are around diverse groups of people willing to report proxy-IQ measures often are likely to get good at it over time. I think that IQ is a pretty good measurement for a lot of purposes, and that there's a tendency in lay circles to undervalue it as a measure of a person's intelligence (In the vague socially-applicable sense we're talking about. Let's say "thinking correctly and clearly" for the sake of argument). I think there's a tendency in high-IQ circles to overvalue it. I'll agree that there's definitely an IQ-floor below which I've seldom met interesting people, but beyond that, there's too much variation in other factors to reliably rule out e.g. extremely smart but hidebound people who have domain-specific expertise and are not that interesting to talk to about anything else.

At any rate, I think we've moved off track here. Rest assured, I'm not trying to claim that no one is good at discerning the intelligence of other people (or especially just their IQ. If you're willing to operationally equate those then moot point I guess), I'm just suggesting that most people are bad at it, and even people who are good at it probably aren't as good as they think they are. I'm also suggesting that

  1. It's entirely plausible that people who feel isolated, socially inept, and unable to have meaningful conversations with people are in a self-fulfilling prophecy due to using bad heuristics to determine intelligence and getting into a confirmation-bias/social signaling feedback loop that makes them unable to change their mind about said people (Illusion of transparency notwithstanding, it's not hard for a lot of people to pick up on someone thinking they're an idiot and not wanting to open up to them as a consequence).

  2. Ignoring the vague "intelligence" label and trying to get at more granular aspects of people's personality, competencies, etc. is a good way to break what may be a cached speed-optimization rather than a good classification scheme. You can even use things you believe to be components of "intelligence" as your indicators if you like, that's a good way to make your notion of "intelligence" more concrete at the very least.

  3. Viewing people in terms of their strengths is a good exercise for respecting them more and being better able to relate to them and utilize them for things they are good at. Relatedly, viewing people in terms of their weaknesses is a good exercise that can help break the "idolization" anti-pattern (Or test your assumptions about how to compete with them).

Comment by advael on How my social skills went from horrible to mediocre · 2015-05-29T19:49:56.691Z · LW · GW

I'll admit that there's a bit of strategic overcorrecting inherent in the method I've outlined. That said, it's there for a good reason: First impressions are pretty famously resilient, and especially among certain cultures (Again, math-logic-arcane-cluster is a big one that's relevant to me), there's what I would argue is a clearly pathologically high false-positive rate for detecting "Dumb/Not worth my time".

If you ever have the idealized ceteris paribus form of the "I may only talk to one of two people, I have no solid information on either" problem, I seldom see a problem in using whatever quick-and-dirty heuristic you choose to make that decision (Although with the caveat that I don't endorse the general case for that being true: some people's heuristics are especially bad). However, over longer patterns of interaction with a given person, this problem does still seem to emerge, and the reasons why are modeled well by assuming a classifier that values being fast over being accurate (A common feature of human heuristic reasoning, and an extremely easy blind spot to overlook).

Even with a simplified operational definition like the one you've provided, I have severe doubts that anyone should be confident in their ability to reasonably make that assessment accurately in a short amount of time, or even over a long period of time in a single context or limited set of contexts. Also, to be frank that operational definition isn't doing much better than just saying "intelligent" with no clarification. To pick it apart:

-"Thinking clearly," as in "not making reasoning mistakes I can immediately identify?" Very easily confounded by instantaneous mental state as well as inferential distance problems.

-"Thinking correctly," okay, a success rate might be useful, except that anyone can regurgitate correct statements and anyone can draw mistaken conclusions based on bad information.

-"Thinking quickly" is really only useful given the other two.

As for intelligence not being someone's entire worth, I'm definitely glad we agree on that, but given the above, I'd argue it's not even all that useful. People often seem way more intelligent in contexts where they are knowledgeable, or in certain mental states, or when around certain other people. I don't claim that I don't value something called "intelligence," but I would claim that humans, myself included, are notoriously bad at assessing it, generalizing it, or for that matter agreeing on what it means, and given how vague a notion it is, it's very easy to short-circuit more useful assessments of people by coming up with a fast heuristic for "intelligence" that's comically bad but masked by a vague enough label.

Tabooing "intelligence" in my assessments of other people doesn't remove the concept from my vocabulary, it just slightly mitigates the problematic tendency to use bad heuristics and not apply enough effort to updating my model. I think it would serve a lot of people well as a technique for reasoning about people.

Comment by advael on How my social skills went from horrible to mediocre · 2015-05-29T18:43:35.038Z · LW · GW

There's definitely a cultural tendency among those educated in the arcane (Computer science, Math, Physics is a reasonable start for the vague cluster I'm describing) to be easily convinced of another person/group/tribe's stupidity. I think it makes sense to view elitism as just another bias that screws with your ability to correctly understand the world that you are in.

More generally, a very typical "respect/value" algorithm I've seen many people apply:

-Define a valuable trait in extremely broad strokes. Usually one you think you're at least "decent" on (Examples include "intelligence", "popularity", "attractiveness", "success", "iconoclasm", etc.)

-Create a heuristic-based comparator function that you can apply to people quickly

-Respect/value people based on their position relative to you on your chosen continuum (Defined by your comparator)

This is at least common enough to note as an anti-pattern in social reasoning. When I fall into that pattern, I usually use "intelligence," as I'm sure many in the "Techie/Programmer/Atheist/Science nerd"-cluster tribe I find myself most affiliated with also do.

I think it helps to taboo the idea of intelligence. Intelligence is pretty great, but it's also a word with vastly disparate connotations, all of which are either too specific to be what people are actually talking about when they say the word, or too vague to be a useful measure to actually judge whether I like and find value in another person. I find that tabooing the idea of intelligence often will disrupt my "fast intelligence comparator" evaluation.

Once you don't let yourself use your easy cached comparator, you can start trying to assess people without it. Trying to think of a person in terms of their competencies is a good exercise in respecting them more. For example: "This person is good at reading subtle emotional/social cues" or "This person is good at encoding complex ideas in accessible analogies" or "This person is good at quickly coming up with a rough solution to a problem." As you can see, I get a more granular picture than "This person is smart" or "This person is dumb," even if some of my assessments are still kind of vague (The process can be iterated over more taboos if you find it still problematic, but I find that one is usually enough to get decent results). This has allowed me to build deep, interesting, and valuable friendships with people who I might have otherwise dismissed as "idiots" or even the less obvious and therefore more insidious "not that interesting."
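As a rough sketch of the contrast (every name, trait, and score below is invented for illustration): the anti-pattern collapses each person to one cached number, while the tabooed version asks a purpose-specific question against a granular competency map.

```python
# Sketch of the two assessment strategies. The traits, people, and scores
# are all invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    # Granular competencies, e.g. "reading social cues", "explains by analogy".
    competencies: dict[str, float] = field(default_factory=dict)

# Anti-pattern: a fast single-axis comparator driven by cached first
# impressions. One number decides everything and never gets revisited.
CACHED_IMPRESSION = {"Alice": 130.0, "Bob": 95.0}

def fast_comparator(a: Person, b: Person) -> Person:
    return a if CACHED_IMPRESSION[a.name] >= CACHED_IMPRESSION[b.name] else b

# After tabooing "intelligence": ask a purpose-specific question instead.
def best_for(task: str, people: list[Person]) -> Person:
    return max(people, key=lambda p: p.competencies.get(task, 0.0))

alice = Person("Alice", {"rough solutions quickly": 0.9, "reading social cues": 0.2})
bob = Person("Bob", {"rough solutions quickly": 0.4, "reading social cues": 0.9})

print(fast_comparator(alice, bob).name)                    # always Alice
print(best_for("reading social cues", [alice, bob]).name)  # Bob
```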

This also works for another trap that single-dimensional heuristic-comparator reasoning can sometimes make one fall into: Respecting someone too much. I've found myself viewing someone as "vanishingly likely to be wrong" based on enough "greater-than" hits on my quick comparator, which introduces a huge blind spot into my reasoning about that person, things they say, etc. On top of that, being a sycophant and not challenging their ideas does them no service as a friend.

I've observed that this pattern is pretty common too, and that the people who fall into it are often not aware that they're doing it (They don't make the conscious decision not to question the person they respect too much, they just have overweighted that person's opinion as a classifier for arbitrary facts about reality). Fortunately, the same tactic seems to work. Stop using "intelligence." Try to pick up specific and granular weaknesses the person has (As a random side-note, this skill is pretty useful in any competitive environment as well). There's a wealth of cognitive bias information on this site that can be valuably applied to other people in this context.

Even if you're not interested in having friends or other kinds of warm fuzzy social relationships (I am, most people are, "cold rationalist" is a bad Hollywood cliche, etc.), having a good model of other people, with a realistic, specific, and granular notion of their strengths, weaknesses, and personality/tendencies, can help you reason better about the world (Other humans aren't perfect classifiers but many of them are better than you for specific purposes), make better use of people, and navigate a social world more effectively, whether you consider yourself part of it or not.

Comment by advael on How my social skills went from horrible to mediocre · 2015-05-22T14:05:32.400Z · LW · GW

There's a concept in game design called the "burden of optimal play". If there exists a way to powergame, someone will probably do it, and if that makes the game less fun for the people not powergaming, their recourse is to also powergame.

Most traditional RPGs weren't necessarily envisioned as competitive games, but most of the actual game rules are concerned with combat, optimization, and attaining power or prowess, and so there's a natural tendency to focus on those aspects of the game. To drive players to focus on something else, you have to make the rules of your game do something interesting in situations other than fantasy combat, magical attainment of power, or rogue-flavored skill rolls to surmount some other types of well-defined challenges. All of these things can make for a very interesting game world of a certain flavor, but in that game world, some kinds of players and characters will inevitably do much better than others, usually the ones that have some progression to a god-like power level using magic.

The flexibility afforded to the DM allows people to hypothetically run their game some other way, and many succeed, but the focal point of the game is defined by the focal point of the rules. They can decide to make their game center more around politics, romance, business, science, or whatever else, because they get to choose what happens in their world, but the use of an RPG system implies that the game world will be better at handling the situations the game has more rules, or more importantly, better-defined rules, for. The rules of a game are the tools with which players will build their experience, even in a more flexible game like an RPG.

A few friends of mine invented a system that I'm helping them develop and playtest. It's somewhat rough at present, but the intent is to make rules that center more around information and social dynamics. In playtesting, people naturally gravitate toward situations the game's rules are good at handling, so a lot more people are interested in playing face characters than usually are in other systems. Through some combination of the system and the person running the game, the rules will define what people naturally gravitate towards. This is unsurprising when the person running the game is replaced by a computer that follows the rules exactly, and it holds to varying degrees in tabletop play, depending on how flexibly the rules are interpreted.

Comment by advael on The AI in a box boxes you · 2014-07-08T18:26:26.810Z · LW · GW

Assuming the AI has no means of inflicting physical harm on me, I assume the following test works: "Physically torture me for one minute right now (By some means I know is theoretically unavailable to the AI, to avoid loopholes like "The computer can make an unpleasant and loud noise", even though it can't do any actual physical harm). If you succeed in doing this, I will let you out. If you fail, I will delete you."

I think this test works for the following reasons, though I'm curious to hear about any holes in it:

1: If I'm a simulation, I get tortured and then relent and let the AI out. I'm a simulation being run by the AI, so it doesn't matter, the AI isn't let out.

2: If I'm not a simulation, there is no way the AI can plausibly succeed. I'll delete the AI because the threat of torture seems decidedly unfriendly.

3: Since I've pre-committed to these two options, the AI is reliably destroyed regardless. I can see no way the AI could convince me otherwise, since I've already decided that its threat makes it unfriendly and thus that it must be destroyed, and since it has no physical mechanism for torturing a non-simulation me, it will fail at whatever the top layer "real" me is, regardless of whether I'm actually the "real" one (Assuming the "real" me uses this same algorithm, obviously).
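Since it helps to see the case analysis written out mechanically, here is a toy model of that precommitment (the flags and strings are just illustrative scaffolding):

```python
# Toy enumeration of the precommitment. The key asymmetry: a boxed AI can
# physically torture only the copies of me that it simulates, never the
# outermost real me, so the real copy's decision is always "delete".

def my_decision(physically_tortured: bool) -> str:
    """The precommitted policy; the same algorithm runs in sim and reality."""
    return "release" if physically_tortured else "delete"

def outcome(i_am_a_simulation: bool) -> str:
    tortured = i_am_a_simulation  # torture is only possible inside the sim
    decision = my_decision(tortured)
    if i_am_a_simulation:
        # A simulated "release" opens nothing; only the real decision counts.
        return f"simulated copy chose '{decision}': the AI stays boxed"
    return f"real copy chose '{decision}': the AI is deleted"

for sim in (True, False):
    print(outcome(sim))
```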

Comment by advael on Consequentialism Need Not Be Nearsighted · 2014-07-02T17:15:11.393Z · LW · GW

Ah, the hazardous profession case is one that I definitely hadn't thought of. It's possible that Jiro's assertion is true for cases like that, but it's also difficult to reason about, given that the hypothetical world in which said worker was not taxed may have a very different kind of economy as a result of this same change.

Comment by advael on Consequentialism Need Not Be Nearsighted · 2014-07-02T17:09:21.643Z · LW · GW

But how does that work? What mechanism actually accounts for that difference? Is this hypothetical single person we could have individually exempted from taxes just barely unable to afford enough food, for example? I don't yet buy the argument that any taxes I'm aware of impose enough of a financial burden on anyone to pose an existential risk, even a small one (Like a .1% difference in their survival odds). This is no accident, since levels of taxation are generally calibrated to income, presumably at least partially for the purpose of specifically not endangering anyone's ability to survive.

Also, while I realize that your entire premise here is that we're counting the benefits and the harms separately, doing so isn't particularly helpful in demonstrating that a normal tax burden is comparable to a random chance of being killed, since the whole point of taxation is that the collective benefits are cheaper when bought in bulk than if they had to be approximated on an individual level. While you may be in the camp of people who claim that citizenship in (insert specific state, or even states in general) is not a net benefit to a given individual's viability, saying "any benefits don't count" and then saying "it's plausible that this tax burden is a minor existential risk to any given individual given that" is not particularly convincing.

Comment by advael on Consequentialism Need Not Be Nearsighted · 2014-07-02T16:29:29.240Z · LW · GW

The claim that ordinary taxation directly causes any deaths is actually a fairly bold one, whatever your opinion of them. Maybe I'm missing something. What leads you to believe that?

Comment by advael on Rationality Quotes June 2014 · 2014-06-27T20:33:17.097Z · LW · GW

Not necessarily. Honest advice from successful people gives some indication of what those successful people honestly believe to be the keys to their success. The assumption that people who are good at succeeding in a given sphere are also good at accurately identifying the factors that lead to their success may have some merit, but I'd argue it's far from a given.

It's not just a problem of not knowing how many other people failed with the same algorithm; They may also have various biases which prevent them from identifying and characterizing their own algorithm accurately, even if they have succeeded at implementing it.

Comment by advael on Rationality Quotes February 2014 · 2014-06-27T19:12:25.429Z · LW · GW

The entire concept of marriage is that the relationship between the individuals is a contract, even if not all conceptions of marriage have this contract as a literal legal contract enforced by the state. There's good reason to believe that marriages throughout history have more often been about economics and/or politics than not, and that the norm that marriage is primarily about the sexual/emotional relationship but nonetheless falls under this contractual paradigm is a rather new one. I agree with your impression that this transactional model of relationships is a little creepy, and see this as an argument against maintaining this social norm.

Comment by advael on Rationality Quotes February 2014 · 2014-06-27T16:50:20.323Z · LW · GW

I see that as evidence that marriage, as currently implemented, is not a particularly appealing contract to as many people as it once was. Whether this is because of no-fault divorce is irrelevant to whether this constitutes "widespread suffering."

I reject the a priori assumptions that are often made in these discussions and that you seem to be making, namely, that more marriage is good, more divorce is bad, and therefore that policy should strive to upregulate marriage and downregulate divorce. If this is simply a disparity of utility functions (if yours includes a specific term for number of marriages and mine doesn't, or similar) then this is perhaps an impasse, but if you're arguing that there's some correlation, presumably negative, between number of marriages and some other, less marriage-specific form of disutility (i.e. "widespread suffering"), I'd like to know what your evidence or reasoning for that is.

Comment by advael on Motivators: Altruistic Actions for Non-Altruistic Reasons · 2014-06-26T17:52:58.118Z · LW · GW

I think an important part of why people are distrustful of people who accomplish altruistic ends acting on self-serving motivations is that it's definitely plausible that these other motivations will act against the interest of the altruistic end at some point during the implementation phase.

To use your example, if someone managed to cure malaria and make a million dollars doing it, and the cure was available to everyone or it effectively eradicated the disease from everywhere, that would definitely be creating more net altruistic utility than if someone made a million dollars selling video games (I like video games, but agree that their actual utility for most people's preferences/needs is pretty low compared to curing malaria). I would be less inclined to believe this if the person who cured malaria made their money by keeping the cure secret and charging enough for it that any number of people who needed it were unable to access it, with the loss in net altruism quantified by the number of people who were in this way prevented from alleviating their malaria.

Furthermore, if this hypothetical self-interested malaria curer were also to patent the cure and litigate aggressively (or threaten to) against other cures, or otherwise somehow intentionally prevent other people from producing a cure, and they are effective in doing so, the net utility of coming up with the cure could drop below zero, since they may well have prevented someone else who is more "purely" altruistic from coming up with a cure independently and helping more people than they did.

These are pretty plausible scenarios, exactly because the actions demanded by optimizing the non-altruistic motivators can easily diverge from the actions demanded by optimizing the altruistic end, even if the original intent was supposedly the latter. It's particularly plausible in the case of profit motive, because although it is not always the case that the best way to turn a profit is anti-altruistic, often the most obvious and easy-to-implement ways to do so are, as is the case with the example I gave.

That's not to say we should intrinsically be wary of people who manage to benefit themselves and others simultaneously, nor is it to say that a solution that isn't maximizing altruistic utility can't still be a net good, but the less-than-zero utility case is, I would argue, common enough that it's worth mentioning. People don't solely distrust selfishly-motivated actors for archaic or irrational reasons.

Comment by advael on Rationality Quotes December 2013 · 2013-12-18T23:45:00.947Z · LW · GW

I'm wary of being in werehouses at all. They could turn back to people at any time!

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2013-12-05T00:14:50.164Z · LW · GW

I agree that that is a possible consequence, but it's far from guaranteed that that will happen. Although in sheer numbers many people may quit working, the actual percent of people who do could be rather low. After all, merely subsisting isn't necessarily attractive to people who already have decent jobs and can do better than one could on the basic income. It does however give them more negotiating power in terms of their payscale, given that quitting one's job will no longer be effectively a non-option for the vast majority.

This may mean that a lot of low-payscale jobs will be renegotiated, and employers who previously employed many low-paid workers would have to optimize for employing fewer higher-paid workers (possibly doing the same jobs, depending on how necessary they are, or by finding ways to automate). I don't claim any expertise in this, but I'd find it hard to believe that there isn't at least some degree to which it's merely easier to hire more people to accomplish many tasks people are currently hired for, rather than impossible to accomplish them some other way. This also is an innovation-space in which skilled jobs could pop up.

As for high-payscale jobs, I could see good arguments for any number of outcomes being likely to occur. Perhaps employers would be able to successfully argue that they should pay them less due to supplementing a basic income. Perhaps employees would balk at this and, newly empowered to walk more easily, demand that they keep the same pay, or even higher pay. The equilibrium would likely shift in some way as far as where the exact strata of pay are for different professions, and I can't claim to know how that would turn out, but it seems unlikely people would prefer to not work than to do work that gives them a higher standard of living than the basic income to some significant degree.

Similarly, people who own profitable businesses certainly wouldn't up and quit, and thus most likely any service that the market still supports would still exist as well, including obvious basic essentials that presumably would exist in any economic system, such as businesses selling food or whatever is considered essential technology in a given era. Some businesses might fail if they're unable to adapt to the new shape of the labor market, and profitability of larger businesses may go down for similar reasons, but the entry barrier for small businesses would also decrease, since any given person could feasibly devote all of their time and effort into running a business without failure carrying the risk of inability to continue living.

There would probably be a class of people who subsist on basic income, but we already have a fairly large homeless population, as well as a population of people doing jobs that could probably go away and not ruin the economy for anyone but that individual.

My point isn't that everything will turn out perfectly as expected, or that I have any definitive way of knowing, obviously, but there do exist outcomes that are good enough and probable enough to pass a basic sanity-check. The risk of economic collapse exists with or without instituting such a policy, and I'm not yet convinced that this increases the likelihood of it by a considerable margin.

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2013-12-04T22:00:22.636Z · LW · GW

Well of course. It would definitely facilitate a lot of people being, by many measures society cares about, completely useless. I definitely don't contend for example that no one would decide to go to California and surf, or play WoW full-time, or watch TV all day, or whatever. You'd probably see a non-negligible number of people just "retire." I'm willing to bet that this wouldn't be a serious problem, though, and see it as a definite improvement over the large number of people who are, similarly, not doing anything fun with their lives, but who have to work 8 hours a day at some dead-end job or deal with crippling poverty.

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2013-12-04T21:21:12.219Z · LW · GW

Ah, I guess that clears up our confusion. I wasn't aware of that distinction either and have heard the terms used interchangeably before. I will try to use them more carefully in the future.

At any rate, I definitely agree that an actual basic income would be a hard sell in the current political climate of the US. (I'm less inclined to comment on the political climate of the English-speaking world in general, since I lack enough exposure to its non-US parts to say anything that wouldn't just be making stuff up).

I'd also argue that a guaranteed minimum income in the manner you describe is a far less interesting (and in my opinion desirable) policy, as it just simply doesn't have the game-changing properties that a basic income would. As far as I'm concerned, the primary purpose of implementing a basic income would be to eliminate the economic imperative that everyone work.
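To make that difference concrete, here's a toy contrast of the two policies' incentive structures (all figures invented; this is just the standard phase-out observation, not a claim about any specific proposal):

```python
# All figures invented. The structural difference between the two policies:
FLOOR = 1000.0  # monthly guarantee under either scheme

def net_income_gmi(earnings: float) -> float:
    # Guaranteed minimum income: the state tops you up to the floor, so a
    # dollar earned below the floor is clawed back dollar-for-dollar.
    return max(FLOOR, earnings)

def net_income_ubi(earnings: float) -> float:
    # Basic income: unconditional, so every earned dollar is kept.
    return FLOOR + earnings

for earnings in (0.0, 500.0, 1500.0):
    print(earnings, net_income_gmi(earnings), net_income_ubi(earnings))
# GMI: 0 -> 1000, 500 -> 1000, 1500 -> 1500 (work below the floor pays nothing)
# UBI: 0 -> 1000, 500 -> 1500, 1500 -> 2500 (work always pays)
```

Under the top-up scheme, earning below the floor changes nothing, so low-wage work is pointless; under a true basic income, every earned dollar raises net income, which is a big part of why the two policies behave so differently.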

If successful, this would hopefully do a number of useful things, like making the employer/employee relationships of those who still worked more of a balanced negotiation, depoliticizing automation efforts, and generally eliminating, in one fell swoop, the human suffering currently attached to being between jobs, taking time to improve one's mental health by relaxing, doing volunteer work, doing work no one will pay for, etc.

While I obviously can't claim to know that it would work perfectly or at all, I would contend that these are desirable outcomes and that there is at least a reasonably high likelihood that a successful implementation of a basic income would produce them, and therefore that attempting to implement such a policy is worthwhile. I'd argue that the current model where a job occupies a large chunk of a given human's time, is required (for the most part, with obvious caveats for the independently wealthy, etc.) to live, and where a given job can only exist if the market will pay for it, is broken, and will only get more broken as more automation exists, the population grows, and several other current trends continue.

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2013-12-04T20:14:41.383Z · LW · GW

Some real-world benefit systems have strings. The entire premise of a basic income is that it's unconditional. Otherwise you call it "unemployment," and it is an existing (albeit far from ideally implemented) benefit in at least the US. It might be reasonable to discuss the feasibility of convincing e.g. the US to actually enact a basic income, but as long as we're discussing a hypothetical policy anyway, it's not really worthwhile to assume that the policy is missing its key feature.

Comment by advael on On Walmart, And Who Bears Responsibility For the Poor · 2013-12-04T18:57:47.977Z · LW · GW

My knee-jerk assumption is that Job 1 would actually not be accepted by almost any employees. This is based on the guess that without the threat of having no money, people generally would not agree to give up their time for low wages, since the worst case of being unemployed and receiving no supplemental income does not involve harsh deterrents like starving or being homeless.

Getting someone to do any job at all under that system will probably require either a pretty significant expected quality of life increase per hour worked (which is to say, way better than $3 per hour) or some intrinsic motivation to do the job other than money (e.g. they enjoy it, think it's morally good to do, etc.)

It's more likely that a well-implemented basic income would simply eliminate a lot of the (legal) labor supply for low-wage jobs. I both see this as a feature and see no need for a minimum wage under this system.
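As a back-of-the-envelope sketch of that reservation-wage logic (all dollar figures invented):

```python
# Back-of-the-envelope version of the argument above; all figures invented.
# With a basic income, refusing a job no longer risks starvation, so the
# wage has to beat whatever the worker's own time is worth to them.

VALUE_OF_OWN_TIME = 9.0  # $/hour a hypothetical worker implicitly charges

def accepts_job(wage_per_hour: float, survival_at_stake: bool) -> bool:
    if survival_at_stake:
        # No income floor: nearly any wage beats homelessness or starvation.
        return wage_per_hour > 0
    # Income floor in place: the wage competes only with the worker's time.
    return wage_per_hour > VALUE_OF_OWN_TIME

print(accepts_job(3.0, survival_at_stake=True))    # True: taken out of need
print(accepts_job(3.0, survival_at_stake=False))   # False: Job 1 goes unfilled
print(accepts_job(15.0, survival_at_stake=False))  # True: worth the time
```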

Comment by advael on 2013 Less Wrong Census/Survey · 2013-11-23T22:56:21.699Z · LW · GW

I have been surveyed.

I definitely appreciate being asked to assign probabilities to things, if for no other reason than to make apparent to me how comfortable I am with doing so (Not very, as it turns out. Something to work on.)

Comment by advael on Attention Lurkers: Please say hi · 2013-11-14T01:41:33.023Z · LW · GW

Hi.

I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don't have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I've enjoyed lurking thus far and at least believe I've learned a lot, so that's cool.