Posts

The Good Bayesian 2009-03-25T21:39:18.934Z

Comments

Comment by Sideways on You'll die if you do that · 2011-05-12T21:12:50.397Z · LW · GW

This is why you don't eat silica gel.

I'm always mildly bemused by the use of quotation marks on these packets. I've always seen:

SILICA GEL

" DO NOT EAT "

Why would the quotation actually be printed on the package? Who are they quoting?

Comment by Sideways on Building rationalist communities: a series overview · 2011-05-09T21:35:15.299Z · LW · GW

What are you all most interested in?

Your solution to the "Four People Who Do Everything" organization problem. This will be immediately relevant to my responsibilities within the next couple months.

Comment by Sideways on Your Evolved Intuitions · 2011-05-05T19:00:26.058Z · LW · GW

I'm actually not making an accusation of overconfidence; just pointing out that using qualified language doesn't protect against it. I would prefer language that gives (or at least suggests) probability estimates or degrees of confidence, rather than phrases like "looks like" or "many suggest".

ID theorists are more likely than evolutionary biologists to use phrases like "looks like" or "many suggest" to defend their ideas, because those phrases hide the actual likelihood of ID. When I find myself thinking, "it could be that X," instead of "because of A and B, X is likely," I suspect myself of being overconfident, and I apply the same heuristic to statements from other people.

Comment by Sideways on Your Evolved Intuitions · 2011-05-05T18:04:50.202Z · LW · GW

An exercise in parody:

  • The bacterial flagellum looks like a good candidate for an intelligently designed structure.

  • Many [non-biologist] researchers think Intelligent Design has explanatory value.

  • Many [non-biologist] researchers suggest Intelligent Design is scientifically useful.

  • Our brains may have been intelligently designed to...

  • but we may not have been designed to...

Evolutionary psychology isn't as catastrophically implausible as ID; hence the bit about parody. The point is that merely using qualified language is no guarantee against overconfidence.

Comment by Sideways on Offense versus harm minimization · 2011-04-16T04:37:00.608Z · LW · GW

I'm not convinced that "offense" is a variety of "pain" in the first place. They feel to me like two different things.

When I imagine a scenario that hurts me without offending me (e.g. accidentally touching a hot stovetop), I anticipate feelings like pain response and distraction in the short term, fear in the medium term, and aversion in the long term.

When I imagine a scenario that offends me without hurting me (e.g. overhearing a slur against a group of which I'm not a member) I anticipate feelings like anger and urge-to-punish in the short term, wariness and distrust in the medium term, and invoking heavy status penalties or even fully disassociating myself from the offensive party in the long term.

Of course, an action can be both offensive and painful, like the anti-Semitic slurs you mention. But an offensive action need not be painful. My intuition suggests that this distinction is a principled reason (as opposed to a practical one) behind the general norm in pluralistic societies that offensiveness alone is not enough to constrain free speech.

I'm not sure which category the British Fish thought experiment falls into; the description doesn't completely clarify whether the Britons are feeling pained or offended or both.

Comment by Sideways on We are not living in a simulation · 2011-04-12T18:54:24.171Z · LW · GW

They're a physical effect caused by the operation of a brain

You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences.

"Mental phenomena are a physical effect caused by the operation of a brain."

"The image on my computer monitor is a physical effect caused by the operation of the computer."

I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "don't exist," while qualia do.

As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it's just me, but qualia don't seem especially difficult to explain or understand. I don't think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.

Comment by Sideways on We are not living in a simulation · 2011-04-12T18:05:26.420Z · LW · GW

I didn't intend to start a reductionist "race to the bottom," only to point out that minds and computations clearly do exist. "Reducible" and "non-existent" aren't synonyms!

Since you prefer the question in your edit, I'll answer it directly:

if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

Computation is "privileged" only in the sense that computationally identical substitutions leave my mind, preferences, qualia, etc. intact; because those things are themselves computations. If you replaced my brain with a computationally equivalent computer weighing two tons, I would certainly notice a difference and consider myself harmed. But the harm wouldn't have been done to my mind.

I feel like there must be something we've missed, because I'm still not sure where exactly we disagree. I'm pretty sure you don't think that qualia are reified in the brain-- that a surgeon could go in with tongs and pull out a little lump of qualia-- and I think you might even agree with the analogy that brains:hardware::minds:software. So if there's still a disagreement to be had, what is it? If qualia and other mental phenomena are not computational, then what are they?

Comment by Sideways on We are not living in a simulation · 2011-04-12T04:59:36.819Z · LW · GW

If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.

As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I'm trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.

See you in the morning! :)

Comment by Sideways on We are not living in a simulation · 2011-04-12T04:11:58.567Z · LW · GW

"Computation exists within physics" is not equivalent to " "2" exists within physics."

If computation doesn't exist within physics, then we're communicating supernaturally.

If qualia aren't computations embodied in the physical substrate of a mind, then I don't know what they are.

Comment by Sideways on We are not living in a simulation · 2011-04-12T03:51:06.548Z · LW · GW

I'm asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.

Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.

I know what it means to compute "2+2" on an abacus. I know what it means to compute "2+2" on a computer. I know what it means to simulate "2+2 on an abacus" on a computer. I even know what it means to simulate "2+2 on a computer" on an abacus (although I certainly wouldn't want to have to actually do so!). I do not know what it means to simulate "2+2" on a computer.

Comment by Sideways on We are not living in a simulation · 2011-04-12T03:16:58.604Z · LW · GW

the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.

It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.

Brains are physical, and local physics seems Turing-computable. Therefore, every phenomenon that a physical human brain can produce can be produced by any Turing-complete computer, including human reasoning and qualia.

So to "physically incorporate a human brain" in the sense relative to this article, the simulator does NOT need to include an actual 3-pound blob of neurons exchanging electrochemical signals. It only needs to implement the same computation that a human brain implements.

Comment by Sideways on Rationality Quotes: March 2011 · 2011-03-06T17:28:08.604Z · LW · GW

http://en.wikipedia.org/wiki/Intentional_base_on_balls

Baseball pitchers have the option to 'walk' a batter, giving the other team a slight advantage but denying them the chance to gain a large advantage. Barry Bonds, a batter who holds the Major League Baseball record for home runs (a home run is a coup for the batter's team), also holds the record for intentional walks. By walking Barry Bonds, the pitcher denies him a shot at a home run. In other words, Paige is advising pitchers to walk a batter when doing so minimizes expected risk.

Since this denies the batter the opportunity to even try to get a hit, some consider it to be unsportsmanlike, and when overused it makes a baseball game less interesting. A culture of good sportsmanship and interesting games are communal goods in baseball--the former keeps a spirit of goodwill, and the latter increases profitability--so at a stretch, you might say Paige advises defecting in Prisoner's Dilemma-type problems.

Comment by Sideways on Value Deathism · 2010-10-30T19:22:17.576Z · LW · GW

Other concepts that happen to also be termed "values", such as your ancestors' values, don't say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.

I'm having difficulty understanding the relevance of this sentence. It sounds like you think I'm treating "my ancestors' values" as a term in my own set of values, instead of a separate set of values that overlaps with mine in some respects.

My ancestors tried to steer their future away from economic systems that included money loaned at interest. They were unsuccessful, and that turned out to be fortunate; loaning money turned out to be economically valuable. If they had known in advance that loaning money would work out in everyone's best interest, they would have updated their values (future-configuration preferences).

Of course, you could argue that neither of us really cared about loaning at interest; what we really cared about was a higher-level goal like a healthy economy. It would be convenient if we could restate our values in a well-organized hierarchy, with a node at the top that was invariant to available information. But even if that could be done, which I doubt, it would still leave a role for available information in deciding something as concrete as a preferred future-configuration.

Comment by Sideways on Value Deathism · 2010-10-30T18:47:11.894Z · LW · GW

The problem with this logic is that my values are better than those of my ancestors. Of course I would say that, but it's not just a matter of subjective judgment; I have better information on which to base my values. For example, my ancestors disapproved of lending money at interest, but if they could see how well loans work in the modern economy, I believe they'd change their minds.

It's easy to see how concepts like MWI or cognitive computationalism affect one's values once accepted. It is likely, bordering on certain, that transhumans will have more insights of similar significance, so I hope that human values continue to change.

I suspect that both quoted authors are closer to that position than to endorsing or accepting random value drift.

Comment by Sideways on Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality · 2010-09-14T20:51:23.950Z · LW · GW

Reading LessWrong is primarily a willpower restorer for me. I use the "hit" of insight I get from reading a high quality post or comment to motivate me to start Working (and it's much easier to continue Working than to start). I save posts that I expect to be high quality (like Yvain's latest) for just before I'm about to start Working. Occasionally the insight itself is useful, of course.

Commenting on LessWrong has raised my standards for the quality of my own ideas, for understanding them clearly, and for expressing them concisely.

I don't know if either of those are Work, but they're both definitely Win.

Comment by Sideways on Problems in evolutionary psychology · 2010-08-14T05:04:37.521Z · LW · GW

New ideas are held to much higher standard than old ones... Behaviorists, Freudians, and Social Psychologists all had created their own theories of "ultimate causation" for human behavior. None of those theories would have stood up to the strenuous demands for experimental validation that Ev. psych endured.

I'm not sure what you mean. Are you saying that standards of evidence for new ideas are higher now than they have been in the past, or that people are generally biased in favor of older ideas over newer ones? Either claim interests me and I'd like a bit more explanation of whichever you intended.

In general, I think scientific hypotheses should invite "strenuous demands for experimental validation", not endure them.

Comment by Sideways on Five-minute rationality techniques · 2010-08-10T20:42:10.811Z · LW · GW

I agree (see, e.g., The Second Law of Thermodynamics, and Engines of Cognition for why this is the case). Unfortunately, I see this as a key inferential gap between people who are and aren't trained in rationality.

The problem is that many people--dare I say most--feel no obligation to gather evidence for their intuitive feelings, or to let empirical evidence inform their feelings. They don't think of intuitive feelings as predictions to be updated by Bayesian evidence; they treat their intuitive feelings as evidence.

It's common (at least in the United States) to see debaters use unsubstantiated intuitive feelings as linchpins of their arguments. It's even common in internet debates to see whole chains of reasoning in which every link is supported by gut feeling alone. This style of argument is not only unpersuasive to anyone who doesn't already share those intuitions--it also prevents the debater from updating, as long as his intuitions don't change.

Comment by Sideways on Five-minute rationality techniques · 2010-08-10T04:22:44.151Z · LW · GW

'Instinct,' 'intuition,' 'gut feeling,' etc. are all close synonyms for 'best guess.' That's why they tend to be the weakest links in an argument-- they're just guesses, and guesses are often wrong. Guessing is useful for brainstorming, but if you really believe something, you should have more concrete evidence than a guess. And the more you base a belief on guesses, the more likely that belief is to be wrong.

Substantiate your guesses with empirical evidence. Start with a guess, but end with a test.

Comment by Sideways on Bayes' Theorem Illustrated (My Way) · 2010-06-04T18:24:24.073Z · LW · GW

Sure, but then the question becomes whether the other programmer got the program right...

My point is that if you don't understand a situation, you can't reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber's implementation, logical might well conclude that there was just some glitch in the code that he didn't notice--which happens to programmers regrettably often.

I think implementing the game with a friend is the better option here, for ease of implementation and strength of evidence. That's all :)

Comment by Sideways on Bayes' Theorem Illustrated (My Way) · 2010-06-04T17:51:05.541Z · LW · GW

If--and I do mean if, I wouldn't want to spoil the empirical test--logical doesn't understand the situation well enough to predict the correct outcome, there's a good chance he won't be able to program it into a computer correctly, regardless of his programming skill. He'll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.

On the other hand, if he's right about the Monty Hall problem and he programs it correctly... it will still return the result he expects.
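To make this concrete, here's a minimal sketch in Python of what an unambiguous implementation of the standard game might look like (the function name and trial count are my own illustrative choices):

    import random

    def monty_hall_switch_rate(trials=100000):
        # Estimate the win rate for a player who always switches doors.
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)   # door hiding the prize
            pick = random.randrange(3)  # player's initial choice
            # The host opens a door that is neither the pick nor the car.
            opened = random.choice([d for d in range(3) if d not in (pick, car)])
            # Switching means taking the one remaining unopened door.
            switched = next(d for d in range(3) if d not in (pick, opened))
            wins += (switched == car)
        return wins / trials

    print(monty_hall_switch_rate())  # converges to ~0.667, not 0.5

The trap lives in the commented lines: encode a misinterpretation there--say, a host who opens a door completely at random--and the program will faithfully confirm the misinterpretation.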

Comment by Sideways on Bayes' Theorem Illustrated (My Way) · 2010-06-04T17:19:15.717Z · LW · GW

I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.

"If Monty 'replaced' a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket."

"Monty wants to keep the diamond for himself, so if he's offering to trade with me, he probably thinks I have it and wants to get it back."

It might seem paradoxical, but using 'transmute at random' instead of 'replace', or 'Omega' instead of 'Monty Hall', actually simplifies the problem for me by establishing that all facts relevant to the problem have already been included. That never seems to happen in the real world, so the world of the analogy is usefully unreal.

Comment by Sideways on Bayes' Theorem Illustrated (My Way) · 2010-06-04T17:02:04.168Z · LW · GW

Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.

If you've really thought about XiXiDu's analogies and they haven't helped, here's another; this is the one that made it obvious to me.

Omega transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bucket, and offers to let you trade buckets.

Each bucket analogizes to a door that you may choose; the sand analogizes to probability mass. Seen this way, it's clear that what you want is to get as much sand (probability mass) as possible, and Omega's bucket has more sand in it. Monty's unopened door doesn't inherit anything tangible from the opened door, but it does inherit the opened door's probability mass.

Comment by Sideways on Pain and gain motivation · 2010-04-08T17:17:58.520Z · LW · GW

As a tentative rephrasing, something that's "emotionally implausible" is something that "I would never do" or that "could never happen to me." Like you, I can visualize myself falling with a high degree of accuracy; but I can't imagine throwing myself off the bridge in the first place. Suicide? I would never do that.

It occurs to me that "can't imagine" implies a binary division when the ability to imagine is more of a continuum: the quality of imagination drops steadily between trying to imagine brushing my teeth (every day), calling 911 (very rare, but I've done it before), punching through a wall (never done it, but maybe if I was mad enough), and jumping off a bridge (I would never do that).

For all four, I can imagine the physical events as bare facts; but for the first two I can easily place myself in the simulation, complete with cognitive and emotional states. That's much harder in the third case; in the fourth, I'm about as confident in my imagination as I am in trying to imagine a world where 1+1=3.

Comment by Sideways on Pain and gain motivation · 2010-04-07T21:46:37.538Z · LW · GW

If you've exercised before, you can probably remember the feeling in your body when you're finished--the 'afterglow' of muscle fatigue, endorphins, and heightened metabolism--and you can visualize that. If you haven't, or can't remember, you can imagine feelings in your mind like confidence and self-satisfaction that you'll have at the end of the exercise.

As for studying, the goal isn't to study, per se; it's to do well on the test. Visualizing the emotional rewards of success on the test itself can motivate you to study, as well as get enough sleep the night before, eat appropriately the day of, take performance enhancing drugs, etc.

Imagination is a funny thing. You can imagine things that could physically never happen--but if you try to imagine something that's emotionally implausible to you, you'll likely fail. Just now I imagined moving objects with my mind, with no trouble at all; then I tried to imagine smacking my mother in the face and failed utterly. If you actually try to imagine having something--not just think about trying--and fail, it's probably because deep down you don't believe you could ever have it.

Comment by Sideways on Dennett's "Consciousness Explained": Prelude · 2010-02-16T22:29:59.423Z · LW · GW

The human experience of colour is not really about recognizing a specific wavelength of light.

True, but irrelevant to the subject at hand.

the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.

No, the qualia of color have nothing to do with the observed object. This is the pons asinorum of qualia. The experience of color is a product of the invariant surface properties of objects; the qualia of color are a product of the relationship between that experience and other similar experiences.

A human looking at an optical illusion might say, "That looks red, but it's really white," acknowledging that spectral color is objective, but psychophysical color is more malleable. But compare that sentence to "that sounds good, but it's really bad." Statements about color aren't entirely subjective--to some extent they're about fact, not opinion.

Statements about qualia are about the subjective aspect of an experience: e.g., red is the color of rage; of love; the color that means 'stop.'

Comment by Sideways on Dennett's "Consciousness Explained": Prelude · 2010-02-16T19:35:50.648Z · LW · GW

Your eyes do detect the frequency of light, your nose does detect the chemical composition of smells, and your tongue does detect the chemical composition of food. That's exactly what the senses of sight, smell, and taste do.

Our brains then interpret the data from our eyes, noses, and tongues as color, scent, and flavor. It's possible to 'decode', e.g., color into a number (the frequency of light), and vice versa; you can find charts on the internet that match frequency/wavelength numbers to color. Decoding taste and scent data into the molecules that produce them is more difficult, but people find ways to do it--that's how artificial flavorings are made.

There are lots of different ways to encode data, and some of them are more useful in some situations, but none of them are strictly privileged. A non-human brain could experience the 'color' of light as a number that just happens to correspond to its frequency in oscillations/second, but that wouldn't prevent it from having qualia, any more than encoding numbers into hexadecimal prevents you from doing addition.

So it's not the 'redness' of light that's a quale; 'red' is just a code word for 'wavelength 635-700 nanometers.' The qualia of redness are the associations, connections, emotional responses that your brain attaches to the plain sensory experience.

Comment by Sideways on My Fundamental Question About Omega · 2010-02-10T21:28:50.729Z · LW · GW

When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.

Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. But in this scenario, Omega doesn't have to be a general perfect predictor; it only has to be a perfect predictor of you.

From Omega's perspective, after running the simulation, your actions are determined. But you don't have access to Omega's simulation, nor could you understand it even if you did. There's no way for you to know what the results of the computations in your brain will be, without actually running them.

If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer's concept of free will.

(1) ETA: On second thought this need not be the case. For example, f(x) = ((x * 10) / 10) + 1 is accurately modeled by f(x) = x + 1. Presumably Omega is a "well-formed" mind without any such rent-shirking spandrels.
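A toy sketch in Python of the footnote's point (the function names are mine, purely for illustration): a model can be strictly cheaper than the computation it predicts, yet agree with it on every input:

    def f_wasteful(x):
        # Does needless work before arriving at the answer.
        return ((x * 10) // 10) + 1

    def f_cheap(x):
        # A perfect but cheaper model of f_wasteful.
        return x + 1

    # The two agree everywhere, so a predictor can use the cheap model.
    assert all(f_wasteful(x) == f_cheap(x) for x in range(10000))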

Comment by Sideways on Outlawing Anthropics: An Updateless Dilemma · 2009-09-08T21:36:24.815Z · LW · GW

The more I think about this, the more I suspect that the problem lies in the distinction between quantum and logical coin-flips.

Suppose this experiment is carried out with a quantum coin-flip. Then, under many-worlds, both outcomes are realized in different branches. There are 40 future selves--2 red and 18 green in one world, 18 red and 2 green in the other world--and your duty is clear:

0.5 * (18 * (+$1) + 2 * (-$3)) + 0.5 * (18 * (-$3) + 2 * (+$1)) = 0.5 * ($12) + 0.5 * (-$52) = -$20.

Don't take the bet.

So why does Eliezer insist on using a logical coin-flip? Because, I suspect, it prevents many-worlds from being relevant. Logical coin-flips don't create possible worlds the way quantum coin-flips do.

But what is a logical coin-flip, anyway?

Using the example given at the top of this post, an agent that was not only rational but clever would sit down and calculate the 256th binary digit of pi before answering. Picking a more difficult logical coin-flip just makes the calculation more difficult; a more intelligent agent could solve it, even if you can't.
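(As an aside, with an arbitrary-precision library the calculation takes only a few lines--a sketch in Python using mpmath, assuming "256th binary digit" means the 256th bit after the binary point:)

    from mpmath import mp, floor

    mp.prec = 400  # bits of working precision, comfortably more than 256

    frac = mp.pi - 3                     # fractional part of pi
    bit = int(floor(frac * 2**256)) % 2  # the 256th bit after the binary point
    print(bit)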

So there are two different kinds of logical coin-flips: the sort that are indistinguishable from quantum coin-flips even in principle, in which case they ought to cause the same sort of branching events under many-worlds--and the sort that are solvable, but only by someone smarter than you.

If you're not smart enough to solve the logical coin-flip, you may as well treat it as a quantum coin-flip, because it's already been established that you can't possibly do better. That doesn't mean your decision algorithm is flawed; just that if you were more powerful, it would be more powerful too.

Comment by Sideways on Forcing Anthropics: Boltzmann Brains · 2009-09-07T23:47:05.372Z · LW · GW

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hashed out on OB/LW before. But the Boltzmann-brain hypothesis doesn't require more than one of you--just a lot of other people, something the 50%-ers have no philosophical problem with. It's a challenge for a solipsist, not a 50%-er.

Comment by Sideways on Forcing Anthropics: Boltzmann Brains · 2009-09-07T23:33:32.642Z · LW · GW

[Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard

Comment by Sideways on How does an infovore manage information overload? · 2009-08-27T20:23:45.604Z · LW · GW

Newcomb's problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.

Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega's actions instead of focusing on the problem of decision-making under prediction.

Comment by Sideways on A note on hypotheticals · 2009-08-08T00:58:29.228Z · LW · GW

IAWY and this also applies to hypotheticals testing non-mathematical models. For instance, there isn't much isomorphism between Newcomblike problems involving perfectly honest game players who can predict your every move, and any gamelike interaction you're ever likely to have.

Comment by Sideways on Of Exclusionary Speech and Gender Politics · 2009-07-21T18:30:01.767Z · LW · GW

Thanks for the heads-up. Fixed.

Comment by Sideways on Of Exclusionary Speech and Gender Politics · 2009-07-21T17:50:02.857Z · LW · GW

I may be in the minority in this respect, but I like it when Less Wrong is in crisis. The LW community is sophisticated enough to (mostly) avoid affective spirals, which means it produces more and better thought in response to a crisis. I believe that, e.g., the practice of going to the profile of a user you don't like and downvoting every comment, regardless of content, undermines Less Wrong more than any crisis has or will.

Furthermore, I think the crisis paradigm is what a community of developing rationalists ought to look like. The conceit of students passively absorbing wisdom at the feet of an enlightened teacher is far from the mark. How many people can you think of who mastered any subject by learning that way?

That said... both "sides" of the gender crisis are repeating themselves, which strongly suggests they have nothing new to say. So I say Eliezer is right. If you can't understand the other side's perspective by now--if you still have no basis for agreement after all this discussion--you need to acknowledge that you have a blind spot here and either re-read with the intent to understand rather than refute, or just avoid talking about it.

Comment by Sideways on Causation as Bias (sort of) · 2009-07-14T09:00:24.447Z · LW · GW

Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible.

Just because what you believe happens to be true, doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.

Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hume-world, any more than we can know the "true nature" of our world. Their Hume-world theory, like yours, cannot be based on reading reality's source code; the only way to justify Hume-world theory is by demonstrating that it makes accurate predictions.

Arguably, it does make at least one prediction: that any causal model of reality will eventually break down. This prediction, to put it mildly, does not hold up well to our investigation of our universe.

Alternatively, you could assert that if all possibilities are randomly realized, we might (with infinitesimal probability) be living in a world that just happened to exactly resemble a causal world. But without evidence to support such a belief, you would not be right to believe it, even if it turns out to be true. Not to mention that, as others have mentioned in this thread, unfalsifiable theories are a waste of valuable mental real estate.

Comment by Sideways on The enemy within · 2009-07-05T18:47:25.279Z · LW · GW

I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"execut[ing] certain adaptions without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved in parallel.

I'm not sure I agree that humans are closer to pure consequentialism than animals; if anything, the imperfect match between prediction and decision faculties makes us less consequentialist. Eating or not eating one strip of bacon won't have an appreciable impact on your social status! Rather, I would say that future-prediction allows us to have more complicated and (to us) interesting goals, and to form more complicated action paths.

Comment by Sideways on The enemy within · 2009-07-05T15:47:32.197Z · LW · GW

All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...

Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the brain in the same way as immediate sense input.

For example, consider the act of eating bacon. A human can extrapolate from the bacon to a pattern of bacon-eating to a future of obesity, health risks, and reduced social status (including greater difficulty finding a mate). This explains why humans can dither over whether to eat bacon, while a dog just scarfs it down--dogs can't predict the future that way. (The frontal lobes also distinguish between bad/good/better/best actions--hence the vegetarian's decision to abstain from bacon on moral grounds.)

Eliezer's body of writing on evolutionary psychology and P.J. Eby's writing on PCT and personal effectiveness seem to be regarded as incompatible by some commenters here (and I don't want to hijack this thread into yet another PCT debate), but they both support the proposition that akrasia and other "sub-optimal" mental states result from a brain processing future-predictions with systems that evolved to handle data from proximate environmental inputs and memory.

Comment by Sideways on Rationality Quotes - July 2009 · 2009-07-04T20:13:18.187Z · LW · GW

I think the point of the quote is not that young folks are more able to unlearn falsehoods; it's that they haven't learned as many falsehoods as old people, just by virtue of not having been around as long. If you can unlearn falsehoods, you can keep a "young" (falsehood-free) mind.

Comment by Sideways on Atheism = Untheism + Antitheism · 2009-07-02T22:21:33.709Z · LW · GW

You wrote:

My belief in science (trustworthy observation, logic, epistemology, etc.) is equivalent with my belief in God, which is why I find belief in God to be necessary.

Suppose, indeed, I were a rationalist of an Untheist society... Would it be very long before I asked if there was some kind of meta-organization?

The meta-organization is a property of the natural world.

It sounds like you're saying that your "God" is not supernatural. This isn't just a problem of proper usage. A theist who believes in a deity (which, given proper usage, is redundant) is at least being internally consistent when using ineffable language like "God," "belief," and "faith," because she's imagining something ineffable. Using ineffable language to describe natural phenomena just generates mysterious answers to mysterious questions.

The God you are talking about in ~A -- the one causing the miraculous violations -- sounds like some kind of creature.

The argument, "your puny God is a creature and mine isn't" sounds like one more retreat to mystery. A God that causes miracles is only required to be a creature insofar as a God that causes patterns to be "consistent and dovetail with one another" (in other words, prevents miracles) is also required to be a creature.

Comment by Sideways on Atheism = Untheism + Antitheism · 2009-07-02T21:04:17.298Z · LW · GW

Is there anything supernatural about meta-organization?

Take your hypothetical a step further: suppose that not only were you born into an Untheist society, but also into a universe where physical reality, evolution, and mathematics did not "work." In universe-prime, the laws of physics do not permit stars to form, yet the Earth orbits the Sun; evolution cannot produce life, but humans exist; physicists and mathematicians prove that math can't describe reality, yet people know where the outfielder should stand to catch the fly ball.

byrnema-prime would have an open-and-shut case that some supernatural agency was tampering with the forces of nature. The "miraculous" violations of its meta-organization would be powerful evidence for the existence of God.

"Imagine," byrnema-prime might argue to an untheist, "a universe very different from ours, where every known phenomenon arose predictably from other known phenomena. In such a universe, your rejection of the supernatural would be proper; supernatural causes would not be required to produce what people observe. But in our universe, where miracles occur, atheism just can't be justified."

Which byrnema has the stronger argument? Which is evidence for God's existence, A or ~A?

Comment by Sideways on Atheism = Untheism + Antitheism · 2009-07-01T17:07:28.003Z · LW · GW

How about both?

If I understand your terms correctly, it may be possible for realities that are not base-level to be optimization-like without being physics-like, e.g. the reality generated by a game of Nomic, in which players change the rules of the game as they play. But this is only possible because of interference by optimization processes from a lower-level reality, whose goals ("win", "have fun") refer to states of physics-like processes. I suspect that base-level reality must be physics-like. To paraphrase John Donne, no optimization process is an island--otherwise how could one tell the difference between an optimization process and purely random modification?

On the other hand, the "evolution" optimization process arose in our universe without a requirement for lower-level interference. Not that I assume our universe is base-level reality, but it seems like evolution or analogous optimizations could arise at any level. So perhaps physics-like realities are also intrinsically optimization-like.

Comment by Sideways on Atheism = Untheism + Antitheism · 2009-07-01T04:49:13.772Z · LW · GW

If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.

There's no need to speculate--this has actually happened. From what I know of the current state of Native American culture (which is admittedly limited), modern science is fully accepted for practical purposes, and traditional beliefs guide when to party, how to mourn, how to celebrate rites of passage, etc.

The only people who seem to think science conflicts with Native American belief systems, are New Age converts coming from a Western religious background. From the linked article:

A Minnesota couple who refused chemotherapy for their 13-year-old son was ordered Friday to have the boy re-evaluated... Brown County District Judge John Rodenberg found Daniel Hauser has been "medically neglected" by his parents, Colleen and Anthony Hauser, who belong to a religious group that believes in using only natural healing methods practiced by some American Indians.

Comment by Sideways on Controlling your inner control circuits · 2009-06-29T21:27:56.753Z · LW · GW

'Correctness' in theories is a scalar rather than a binary quality. Phlogiston theory is less correct (and less useful) than chemistry, but it's more correct--and more useful!--than the theory of elements. The fact that the modern scientific theories you list are better than their precursors does not mean their precursors were useless.

You have a false dichotomy going here. If you know of someone who "knows how human cognition works on all scales", or even just a theory of cognition as powerful as Newton's theory of mechanics is in its domain, then please, link! But if such a theory existed, we wouldn't need to be having this discussion. A strong theory of cognition will descend from a series of lesser theories of cognition, of which control theory is one step.

Unless you have a better theory, or a convincing reason to claim that "no-theory" is better than control theory, you're in the position of an elementalist arguing that phlogiston theory should be ignored because it can't explain heat generated by friction--while ignoring the fact that, imperfect as it is, phlogiston theory is strictly superior to elemental theory or "no-theory".

Comment by Sideways on Ask LessWrong: Human cognitive enhancement now? · 2009-06-17T09:02:56.263Z · LW · GW

Likewise, every other actual practice that you think would be a good thing for you to do. If you think that, and you are not doing it, why?

If you want to understand akrasia, I encourage you to take your own advice. Take a moment and write down two or three things that would have a major positive impact in your life, that you're not doing.

Now ask yourself: why am I not doing these things? Don't settle for excuses or elaborate System Two explanations why you don't really need to do them after all. You've already stipulated that they would have a major positive impact on your life! You're not looking for a list of all possible reasons; you're looking for the particular reason that you don't do those things.

If you've chosen the right sort of inactions to reflect on, you'll realize that you don't know why you don't do them. It's not just that you want to do these things, but don't; it's that you don't know why you don't. There is a reason for your inaction, but you aren't consciously aware of what it is. Congratulations: you've discovered akrasia.

Comment by Sideways on Honesty: Beyond Internal Truth · 2009-06-08T01:31:42.826Z · LW · GW

Truth-telling is necessary but not sufficient for honesty. Something more is required: an admission of epistemic weakness. You needn't always make the admission openly to your audience (social conventions apply), but the possibility that you might be wrong should not leave your thoughts. A genuinely honest person should not only listen to objections to his or her favorite assumptions and theories, but should actively seek to discover such objections.

What's more, people tend to forget that their long-held assumptions are assumptions and treat them as facts. Forgotten assumptions are a major impediment to rationality--hence the importance of overcoming bias (the action, not the blog) to a rationalist.

Comment by Sideways on Mate selection for the men here · 2009-06-05T21:39:04.769Z · LW · GW

Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while.

The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being.

Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one specific person, you're setting yourself up for failure. How will you ever revise your probability estimate of one person's knowledge or the general state of knowledge in a field, if you never allow yourself to encounter any evidence?

The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.

have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman's phone number or getting the woman to follow the client out to his car)?

Is that your true rejection? If P.J. Eby said "why, yes I have," would you change your views based on one anecdote? Since a randomized, double-blind trial is impossible (or at least financially impractical and incompatible with the self-help coach's business model), what do you consider a reasonable standard of evidence?

I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.

In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.

Given the vigorous dissent from you and others, I don't think "discouraging contributions" is a likely problem! However, I personally would like to see discussion of specific claims of fact and (as much as possible) empirical evidence. A simple assertion of a probability estimate doesn't help me understand your points of disagreement.

Comment by Sideways on Dissenting Views · 2009-05-27T16:36:56.052Z · LW · GW

Vladimir, the problem has nothing to do with strength--some of these students did very well in other classes. Nor is it about effort--some students had already given up and weren't bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn't solve the problem.

The problem was simply that they believed "math" was impossible for them. The best way to get rid of that belief--maybe the only effective way--was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn't suffice.

If your definition of "the dark arts" is so general that it includes giving an easy homework assignment, especially when it's the best solution to a problem, I think you've diluted the term beyond usefulness.

Comment by Sideways on Dissenting Views · 2009-05-27T16:17:18.317Z · LW · GW

Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous. So there are two skills required:

  • The discipline to study and practice a technique until you understand it and can apply it easily.

  • The ability to close the inferential gap between one technique and the next.

The second is the source of trouble. I can sit in on a single day's instruction in a language class and learn something about that language (and I have). But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they're "terrible at math" after an experience like that!

Comment by Sideways on Dissenting Views · 2009-05-27T09:44:27.700Z · LW · GW

For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:

"I'm terrible at math."

"I hate math class."

"I'm just dumb."

That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".

Now, the "I'm terrible at math" attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A's on their homework papers--when they started to believe that maybe they were good at math, after all--the difference in their confidence and effort was night and day. It was the false belief that enabled them to "take the first steps."

Comment by Sideways on Dissenting Views · 2009-05-27T04:39:48.493Z · LW · GW

Agreed--most of the arguments in good faith that I've seen or participated in were caused by misunderstandings or confusion over definitions.

I would add that once you know the jargon that describes something precisely, it's difficult to go back to using less precise but more understandable language. This is why scientists who can communicate their ideas in non-technical terms are so rare and valuable.