Comment by jj10dman on Schelling fences on slippery slopes · 2012-03-31T00:09:36.009Z · score: 2 (6 votes) · LW · GW

"One evening, I start playing Sid Meier's Civilization (IV, if you're wondering - V is terrible)" THANK YOU. ;D

Comment by jj10dman on Thomas C. Schelling's "Strategy of Conflict" · 2012-03-31T00:07:20.627Z · score: 1 (1 votes) · LW · GW

The example was just to make an illustration, and I wouldn't read into it too much. It has a lot of assumptions like, "I would rather sit around doing absolutely nothing than take stroll in the wilderness," and, "I have no possible landing position I can claim in order to make my preferred meeting point seem like a fair compromise, and therefore I must break my radio."

Comment by jj10dman on Blue or Green on Regulation? · 2011-04-26T18:57:28.812Z · score: 30 (30 votes) · LW · GW

Amen. Blue vs. Green thinking is the norm, and I have been accused (negatively) of being a liberal and a conservative in the same day.

Your opinion doesn't sound like mine, so it's probably the other side's opinion.

Comment by jj10dman on The Scales of Justice, the Notebook of Rationality · 2011-04-26T18:08:30.219Z · score: 0 (0 votes) · LW · GW

The paper "The Affect Heuristic in Judgments of Risks and Benefits" doesn't mention explicitly separating benefit from risk in the critical second experiment (and probably not in the first either, which I didn't read). If I were brought in and given the question, 'In general, how beneficial do you consider the use of X in the U.S. as a whole?', then I would weigh all positive and negative aspects together to reach a final judgment on whether or not it's worth using. "Benefit" CAN be a concept distinct from risk, but language is messy, and the question can be interpreted (and I would interpret it) as asking about "sufficiency to employ". As a result, depending on the reader's interpretation of "benefit", it's possible that any lowering of perceived risk will NECESSARILY increase perceived benefit, no logical error required.

Rather sloppy science, if you ask me.

Comment by jj10dman on Being a teacher · 2011-04-21T21:29:19.712Z · score: 0 (0 votes) · LW · GW

People call me an excellent teacher, and I've probably spent more time figuring out why people think I'm an excellent teacher than I have getting better at teaching. Some techniques I find universally applicable:

  1. Teach yourself. Imagine yourself knowing everything you now know minus the thing that needs to be taught and everything that requires that knowledge as a prerequisite. Now picture trying to teach yourself. Humans are terrible at remembering when they learned something, how long it took them, what it felt like and where they had problems, etc. By starting with the idea of how you would teach yourself, you're focusing on what you would absolutely NEED to tell ANYONE regardless of their prior knowledge or understanding; these are good things to focus on. Just as importantly, you also prime your brain to think about the subject on a less automatic level.

  2. Metaphors are absolutely critical. Everyday human experience seems to mean far more within people's minds than it does on the outside. Give concrete examples from the physical world as metaphors: the internet is not a big truck, it's a series of tubes! It might be funny, but it made at least as much sense as anything else in the speech.

  3. Talk a LOT. Throw out a lot of information and your student will tend to latch onto the one thing they were missing and ask about it, leading to a breakthrough. When you make an important point, repeat it a second or third time in different ways, then explicitly point it out later during examples. When the student is actually attempting something that requires a great deal of concentration, be absolutely silent unless they need a nudge; otherwise, avoid ever letting the room be silent for more than five seconds. This doesn't mean rushing through your explanations; it means belaboring the point and adding more metaphors where possible. It also means building a habit in your student of talking through their thought process, so you can gauge what to say next and when it's appropriate to move on.

  4. Prime the mind to recognize. Try to employ a fixed set of terms, even if you have to make them up on the spot, so that you can point one out later and the student will immediately know what you're referring to. If you sum up a conceptual explanation by calling it "ordered complexity", you can then point it out later: "see, that's what I meant by ordered complexity!" This will pull the entire explanation into their thought process the moment it's needed.

  5. If all else fails, give up and go back to basics. If it seems like you're not making progress for a while - if they simply don't "get it" - you've usually incorrectly assumed that the student has a prerequisite level of skill or knowledge. Stop immediately and trace back to prerequisites that are most likely to not be met, do some trouble-shooting to find the biggest culprit, and start a new lesson in the trouble spot. Resist the urge to do a quick-fix bare-bones lesson to get them up to speed so you can return to the original lesson; you must endeavor to genuinely teach them the more basic knowledge/skill, or you will just waste your time later.

Comment by jj10dman on Fun and Games with Cognitive Biases · 2011-02-26T22:55:46.782Z · score: 6 (6 votes) · LW · GW

I think the most straightforward "edutainment" design would be a "rube or blegg" model of presenting conflicting evidence and then revealing the Word of God objective truth at the end of the game - different biases can be targeted with different forms of evidence, different models of interpretation (e.g. whether or not players can assign confidence levels to their guesses), and different scoring methods (e.g. whether the game is iterative, whether it's many one-shots with success rate over many games as the goal, etc.).
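As a sketch of one of the scoring methods mentioned above - players assigning confidence levels to their "rube or blegg" guesses - calibration could be scored with something like a Brier score (lower is better). All names and numbers here are invented for illustration:

```python
# Hypothetical scoring sketch: each guess comes with a stated probability
# that the object is a blegg; after the Word of God reveal, the Brier
# score penalizes the squared gap between confidence and truth.

def brier_score(confidences, outcomes):
    """Mean squared error between stated confidence and revealed truth.

    confidences: probabilities the player assigned to "it's a blegg"
    outcomes:    1 if the object really was a blegg, else 0
    """
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(outcomes)

# A well-calibrated player beats an always-certain one on the same guesses.
calibrated = brier_score([0.7, 0.6, 0.8], [1, 0, 1])
overconfident = brier_score([1.0, 1.0, 1.0], [1, 0, 1])
print(calibrated < overconfident)  # True
```

A scoring rule like this directly punishes the overconfidence biases the game would be targeting, rather than just right/wrong answers.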

A more compelling example that won't turn off as many people (ew, edutainment? bo-ring) would probably be a multiplayer game in which the players are randomly led to believe incompatible conclusions and then interact. Availability of public information and the importance of having been right all along or committing strongly to a position early could be calibrated to target specific biases and fallacies.

As someone with aspirations to game design, this is a particularly interesting concept. One great aspect of video game culture is that most multiplayer games are one-offs from a social perspective: there's no social penalty for denigrating an ally's ability, since you will never see them again, and there's no gameplay penalty for being wrong. This means that in any facet of a game where trusting an ally is not strictly necessary, one can greatly underestimate the ally's skill FOREVER without ever being proven critically wrong. This makes online gaming perhaps the most fertile incubator of socially negative confirmation bias anywhere, ever. If an ally is judged poorly, there's no penalty for declaring them poor prematurely, and in fact people seem to apply profound confirmation bias to all evidence for the remainder of the game.

Could a game effectively be designed to target this confirmation bias and give the online gaming community a more constructive and realistic picture? I'll definitely be mulling this over. Great post.

Comment by jj10dman on Ability to react · 2011-02-23T20:23:16.639Z · score: 0 (0 votes) · LW · GW

Complete agreement; I'm in exactly the same boat.

One thing I've noticed is that high-speed action-reaction iterations seem beyond my grasp to truly master. One example is tracking objects with mouse movements in video games: I am exceptional compared to most humans, but among other highly-trained gamers I seem to be a poor performer, even though my tested reflex speed is normal. This makes me a great general and a poor soldier. Any other good-analysis-bad-reaction minds care to weigh in on this? I'm curious if there's a connection.

Comment by jj10dman on Welcome to Less Wrong! · 2011-02-17T19:40:59.540Z · score: 3 (3 votes) · LW · GW

Rationalist blogs cite a lot of biases and curious sociological behaviors which have plagued me because I tend to optimistically accept what people say at face value. By explaining them in rationalist terms, LW and similar blogs essentially explain them to my mode of thinking specifically. I'm now much better at picking up on unwritten rules, at avoiding punishment or ostracism for performing too well, at identifying when someone is lying politely but absolutely expects me to recognize it as a complete lie, etc., thanks to my reading into these psychological phenomena.

Additionally, explanations of how people confuse "the map" to be "the territory" have been very helpful in determining when correcting someone is going to be a waste of time. If they were sloppy and mis-read their map, I should step in; if their conclusion is the result of deliberately interpreting a map feature (flatness, folding) as a territory feature, unless I know the person to be deeply rational, I should probably avoid starting a 15-minute argument that won't convince them of anything.

Comment by jj10dman on Procedural Knowledge Gaps · 2011-02-17T19:14:20.762Z · score: 0 (0 votes) · LW · GW

I am enlightened.

Comment by jj10dman on Procedural Knowledge Gaps · 2011-02-10T20:16:19.272Z · score: 0 (0 votes) · LW · GW

Like SRStarin said, you can actually just hook negative to any old metal around the opening, because THE WHOLE CAR EXTERIOR is negatively charged. How cool is that?? Many cars have a point in the hood opening near the battery mount that is shaped to be easily clipped to.

Comment by jj10dman on Optimal Employment · 2011-02-05T12:35:33.920Z · score: 3 (3 votes) · LW · GW

The recommendation is sound if:

  • You value good drivers. I've been to almost a dozen countries, and nobody comes CLOSE to the conscientiousness of Australian drivers; they honk as a THANK YOU for right of way in Victoria. They also have the most dramatically sensical road system ever, which is mostly like the U.S., with one difference I particularly like: yield signs instead of the stop signs that American drivers only yield at anyway, usually for good reason. I'm a big fan of either enforcing rules or changing them, with as few exceptions as possible.

The recommendation is suspect if:

  • You care about what foods you eat. (Going from one country to another is going to dramatically change prices based on local crop/livestock viability and tradition.)
  • The way business is done provides uncomfortable culture shock. (I value free refills on soda disproportionately because I adore soda and consume a disproportionate amount per purchase. Refill policies in Aus are inconsistent and necessarily more restrictive than a system that is consistently without restriction.)
  • Your hobby is taxed to death (video games in Aus are about 140-200% the price of video games in the U.S. - ouch).
  • Your utility function dramatically supports things other than accumulation of wealth which are represented by your current culture more than Australia's. (I support high freedom in media consumption and gun rights, things that Australia is good about, but not as much as the U.S... and if money is the unit of caring, then I care about those rights with my tax dollars.)

Comment by jj10dman on Simpson's Paradox · 2011-02-05T10:46:28.039Z · score: 1 (1 votes) · LW · GW

Thanks for the links, cousin_it! Great reads.

Re: prase's reply: The Prisoners' dilemma is a legitimate dilemma. No matter how many times I read the page on Sen's paradox I can't interpret it as anything remotely sensical.

I kept editing this post again and again as I boiled down the problem (it's harder to explain something wrong than something correct), and I think I've got it down to one sentence:

If you just look at sorted lists of preferences without any comparative weights of given preferences, you're going to get paradoxes. Nash equilibrium exists because of weights. Sen's paradox does not exist because it precludes weights. If Bob just wants Alice to see a movie and doesn't much care about his own viewing of the film either way, and Alice just wants to spend time with Bob and doesn't much care if it's by watching a movie or not, then there's no paradox until a social scientist walks in the room.
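To make that one sentence concrete, here is a minimal sketch of the Bob-and-Alice example with explicit weights (the utility numbers are invented for illustration); once intensities are on the table, one outcome simply dominates and no paradox appears:

```python
# Hypothetical utilities for the Bob/Alice example above. Bob cares a lot
# about Alice seeing the movie and barely about his own attendance; Alice
# cares a lot about being with Bob and barely about the movie itself.
# Outcomes are (bob_goes, alice_goes).
outcomes = [(True, True), (True, False), (False, True), (False, False)]

def bob_utility(bob_goes, alice_goes):
    # Weight 10 on Alice's viewing, weight 1 on his own.
    return 10 * alice_goes + 1 * bob_goes

def alice_utility(bob_goes, alice_goes):
    # Weight 10 on being together, weight 1 on seeing the movie.
    return 10 * (bob_goes == alice_goes) + 1 * alice_goes

best = max(outcomes, key=lambda o: bob_utility(*o) + alice_utility(*o))
print(best)  # (True, True): both go, and every heavily-weighted preference is met
```

With only sorted preference lists, the two "weight 10" terms and the two "weight 1" terms would be indistinguishable, which is exactly where the apparent paradox comes from.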

Comment by jj10dman on "Nahh, that wouldn't work" · 2011-01-07T06:06:43.492Z · score: 1 (1 votes) · LW · GW

This may connect to the effect of self-fulfilling prophecies: We want a world with few threats, so we think that threats don't work, so we don't threaten people, so the world has fewer threats.

Comment by jj10dman on The Sheer Folly of Callow Youth · 2010-11-14T08:04:56.262Z · score: 0 (4 votes) · LW · GW

If anyone can give me the Cliff's Notes version of this, I'd be appreciative. I am a big LW fan, but aside from the obsession with the Singularity, I more or less stand at Eliezer1997's mode of thinking. Furthermore, making clever plans to work around the holes in your thinking seems like the wholly rational thing to do - in fact, this entire post seems like a direct counterargument to The Proper Use of Doubt: http://lesswrong.com/lw/ib/the_proper_use_of_doubt/

Comment by jj10dman on Swords and Armor: A Game Theory Thought Experiment · 2010-11-14T07:27:41.268Z · score: 1 (1 votes) · LW · GW

This has more to do with human psychology than strict mathematical game theory:

As an obsessive gamer and game designer: when fighting a random opponent, unless there is a ladder system and you end up in the top 2% or so of the population, the optimal strategy is to counter whatever is the optimal strategy against a null, average, or un-equipped person. That is to say, the vast majority of players who do not make a nearly-random selection will calculate the ideal strategy against a perceived "average" or "typical" set of values for damage/speed/armor/dodge, and then stop exactly there. So to win, you need to go exactly one step beyond that and then stop exactly there.
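The "one step beyond" heuristic above is essentially level-k reasoning. A minimal sketch, using rock-paper-scissors as a stand-in for the swords-and-armor choices (all specifics here are invented for illustration):

```python
# Level-k sketch of the heuristic above: level 0 is the perceived "average"
# pick, level 1 is the counter most deliberate players stop at, and level 2
# is the comment's advice of going exactly one step further.

beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def counter(move):
    """Return the move that beats `move`."""
    return next(m for m, loses_to in beats.items() if loses_to == move)

level0 = "rock"           # the perceived "typical" or default pick
level1 = counter(level0)  # beats the default: where most players stop
level2 = counter(level1)  # beats the players who stopped at level 1
print(level1, level2)     # paper scissors
```

Going further than level 2 would lose to the very population you are trying to beat, which is why the comment says to stop exactly there.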

Comment by jj10dman on Raising the Sanity Waterline · 2010-10-15T13:48:42.585Z · score: 3 (3 votes) · LW · GW

What I meant was, the moment anyone comes up with such a concept, it would appear so completely and undeniably sensible that it would instantly take hold as accepted truth, and only become dislodged by the combined philosophical efforts of humanity's greatest minds over thousands of years.

It's not technically "default", but that's like saying a magnet is not attracted to a nearby piece of iron "by default" because there's no nearby piece of iron implied by the existence of the magnet. It's technically true, but it kind of misses the important description of a property of magnets.

Comment by jj10dman on Simultaneously Right and Wrong · 2010-10-15T13:31:58.399Z · score: 0 (0 votes) · LW · GW

Yes it does.

...

Is there some implication I'm not getting here?

Comment by jj10dman on Welcome to Less Wrong! · 2010-10-15T13:25:48.547Z · score: 7 (7 votes) · LW · GW

I originally wrote this for the origin story thread until I realized it's more appropriate here. So, sorry if it straddles both a bit.

I am, as nearly as I believe can be seen in the present world, an intrinsic rationalist. For example: as a young child I would mock irrationality in my parents, and on the rare occasions I was struck, I would laugh, genuinely, even through tears if they came, because the irrationality of the Appeal to Force made the joke immensely funnier. Most people start out as well-adapted non-rationalists; I evidently started as a maladaptive rationalist.

As an intrinsic (maladaptive) rationalist, I have had an extremely bumpy ride in understanding my fellow man. If I had been born 10 years later, I might have been diagnosed with Asperger's Syndrome. As it was, I was a little different, and never really got on with anyone, despite being well-mannered. A nerd, in other words. Regarding bias, empathic favoritism, willful ignorance, asking questions whose answers will not affect subsequent actions or belief confidences, and other peculiarities for which I seem to be an outlier: any knowledge about how to identify and then deal with these peculiarities has been extremely hard-won, from years upon years of messy interactions in uncontrolled environments with few hypotheses from others to go on (after all, they "just get it", so they never needed to sort it out explicitly).

I've recently started reading rationalist blogs like this one, and they have been hugely informative to me because they put things I have observed about people but failed to understand intuitively into a very abstract context (i.e. one that bypasses intuition). Less Wrong, among others, has led to a concrete improvement in my interactions with humanity in general, the same way a blog about dogs would improve one's interactions with dogs in general. This is after just a couple of months! Thanks LW.

Comment by jj10dman on Reason as memetic immune disorder · 2010-10-15T12:58:59.811Z · score: 1 (1 votes) · LW · GW

I have never so thoroughly enjoyed and had my mood brightened by something that I then logged a "downvote" upon. Am I being irrational in this vote? The proper criteria for making these votes seem largely implied but never actually explained.

Comment by jj10dman on Science as Attire · 2010-08-29T07:31:24.198Z · score: 4 (4 votes) · LW · GW

This. I regularly refer to cultural trends, business models, and technology as undergoing evolution, without the slightest inkling of doubt or shame. Real Life is about compromises to that most obstinate debater Nature, and if one must deal with the pragmatic issue of conveying ideas in a conversation in a short period of time, "evolution" is perfectly acceptable shorthand for "process by which a system becomes incrementally more efficient via an ongoing process of simultaneous diversification and selection, similar to biological evolution if someone were to replace the concept of random genetic variation with human ideas and natural selection with artificial selection." That simply takes way too long to say.

Comment by jj10dman on Policy Debates Should Not Appear One-Sided · 2010-08-10T21:56:17.111Z · score: -11 (11 votes) · LW · GW

fixed:

Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, but it ought to have happened to her because a world without consequence - without cause and effect - is meaningless to either idealize or pursue... and as far as we can peer into a hypothetical, objective, pragmatic view of what ought to be, she totally deserved it."

Comment by jj10dman on Policy Debates Should Not Appear One-Sided · 2010-08-10T16:01:44.005Z · score: -4 (16 votes) · LW · GW

I agree strongly with everything in the above paragraph, especially the end. And so should you. Greens 4 life!

Comment by jj10dman on Cached Thoughts · 2010-08-10T12:23:36.310Z · score: 1 (1 votes) · LW · GW

I don't think most of us would agree that everyone out there is playing human rational capacity to the hilt and needs to slow down on attacking its biases and prejudices. After all, the modern critical examination of human biases, while touched upon throughout history, is essentially a century old or less.

Comment by jj10dman on The Halo Effect · 2010-08-10T12:05:08.486Z · score: 1 (1 votes) · LW · GW

The opposite is also true: a "negative halo effect" can be easily observed, wherein "bad" traits are also similarly grouped and feed on each other.

An interesting part of halo effects is that people seem to understand them on an instinctual level - not enough to get rid of them, but enough to exploit them...

I've observed an extremely strong correlation in a particular online game between having a marijuana reference in one's handle and being bad at the game. Being bad is not strongly linked with marijuana references, but only because such handles are an extreme minority of the population; if you're sporting a "420", you're almost certainly underperforming. I've never bothered forming a hypothesis as to why this is, but it is.

So one day I decided to try a little experiment - for funsies, nothing rigorously scientific, just a "see what happens" thing - and predicted aloud that an ally with a name referencing marijuana would perform poorly compared to the other players in the game. I turned out to be right (he was even worse than expected), but the interesting part was his response:

He implied my prediction was wrong, evidenced by the fact that he was writing his college thesis on the effects of THC on the body.

The only way this statement makes sense is if we trace it through an expectation on his part that he can rebut the argument using the halo effect. "I am extremely accomplished academically," I could almost read on the screen, "Therefore, I am not a poor performer in this online video game."

Comment by jj10dman on Raising the Sanity Waterline · 2010-08-10T11:25:54.148Z · score: 12 (12 votes) · LW · GW

On the contrary, I would argue that our default belief state is one full of scary monsters trying to kill us and whirling lights flying around overhead and oh no what is this loud noise and why am I wet

...I can't imagine a human ancestor in that kind of situation not coming up with some kind of desperate Pascal's wager of, "I'll do this ritualistic dance to the harvest goddess because it's not really that much trouble to do in the grand scheme of things, and man if there's any chance of improving the odds of a good harvest, I'm shakin' my rain-maker." Soon you can add, "and everyone else says it works" to the list, and bam, religion.

Comment by jj10dman on The Least Convenient Possible World · 2010-08-10T11:13:06.742Z · score: 2 (2 votes) · LW · GW

Yes! I can't believe I don't see this repeated in one form or another more often. Fallacies are a bit like prions in that they tend to force a cascade of fallacies to derive from them, and one of my favorite debate tactics is the thought experiment, "Let's assume your entire premise is true. How might this contradict your position?"

Usually the list is longer than my own arguments.

Comment by jj10dman on Simultaneously Right and Wrong · 2010-08-10T11:00:56.649Z · score: 1 (1 votes) · LW · GW

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they will actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

I think there's a less perplexing answer: that at some level we realize our performance is not 100% reliable, and we should shift our estimate down by an intuitive standard deviation of sorts. That way, we can under-perform in this specific case and won't have to deal with the group dynamics of someone else's horrible disappointment because they were counting on us doing our part as well as we said we could.
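As bare arithmetic, the adjustment described above might look like this (the numbers are invented purely for illustration):

```python
# Invented numbers illustrating the "shift down by an intuitive standard
# deviation" idea: commit to a level you can meet even on a below-average day.
mean_performance = 10.0  # tasks completed on an average day
std_dev = 2.0            # typical day-to-day variability
promised = mean_performance - std_dev
print(promised)  # 8.0: a one-sigma bad day still delivers what was promised
```

The point is that the "underconfidence" the study observed may just be a sensible safety margin against variance, not a correction of overconfidence per se.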