Posts

Comments

Comment by tom_breton on Prolegomena to a Theory of Fun · 2008-12-20T22:48:58.000Z · score: 0 (0 votes) · LW · GW

In other words, "what is well-being?", in such terms that we can apply it to a completely alien situation. This is an important issue.

One red herring, I think, is this:

One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness.

That could be read two ways. One way is the way that you and these psychologists are reading it. Another interpretation is that the subjects estimated the impact on their future well-being correctly, but after the events, they reported their happiness with respect to their new baseline, which became adjusted to their new situation. The second thing is effectively the derivative of the first. In this interpretation the subjects' mistake is confusing the two.

Comment by tom_breton on Chaotic Inversion · 2008-12-01T02:29:05.000Z · score: 0 (0 votes) · LW · GW

@billswift: You were right about Pavlina. I discovered that as I read more of his stuff.

Comment by tom_breton on Chaotic Inversion · 2008-11-29T19:14:51.000Z · score: 0 (0 votes) · LW · GW

@RT Wolf: Thanks for the Pavlina link. It looks fascinating so far.

Comment by tom_breton on AIs and Gatekeepers Unite! · 2008-10-10T03:45:26.000Z · score: 0 (0 votes) · LW · GW

Apparently the people who played gatekeeper previously held the idea that it was impossible for an AI to talk its way out. Not just for Eliezer, but for a transhuman AI; and not just for them, but for all sorts of gatekeepers. That's what is implied by saying "We will just keep it in a box".

In other words, and not meaning to cast any aspersions, they all had a blind spot. Failure of imagination, perhaps.

This blind spot may have been a factor in their loss. Having no access to the mysterious transcripts, I won't venture a guess as to how.

Comment by tom_breton on GAZP vs. GLUT · 2008-04-10T02:41:00.000Z · score: 0 (0 votes) · LW · GW
a "logically possible" but fantastic being — a descendent of Ned Block's Giant Lookup Table fantasy...

First, I haven't seen how this figures into an argument, and I see that Eliezer has already taken this in another direction, but...

What immediately occurs to me is that there's a big risk of a faulty intuition pump here. He's describing, I assume, a lookup table large enough to describe your response to every distinguishable sensory input you could conceivably experience during your life. The number of entries is unimaginable. But I suspect he's picturing and inviting us to picture a much more mundane, manageable LUT.

I can almost hear the Chinese Room Fallacy already. "You can't say that a LUT is conscious; it's just a matrix." Like "...just some cards and some rules" or "...just transistors". That intuition works in a common-sense way when the thing is tiny, but we just said this one isn't.

And let's not slight other factors that make the thing either very big and hairy or very, very, very big.

To work as advertised, it needs some sense of history. Perhaps every instant in our maybe-zombie's history has its own corresponding dimension in the table, or perhaps some field(s) of the table's output at each instant is an additional input at the next instant, representing one's entire mental state. Either way, it's gotta be huge enough to represent every distinguishable history.
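The second option can be wired up in a tiny sketch (the table below is invented and absurdly small; the point is only the feedback wiring, not the scale):

```python
# Sketch of a lookup table where part of each output (the "mental
# state") is fed back as input at the next instant. A real table of
# this kind would need an entry for every distinguishable
# (state, percept) pair -- an unimaginable number.

table = {  # (state, percept) -> (action, new_state)
    ("calm", "loud noise"): ("startle", "alert"),
    ("alert", "loud noise"): ("flee", "alert"),
    ("calm", "silence"): ("rest", "calm"),
    ("alert", "silence"): ("relax", "calm"),
}

state = "calm"
actions = []
for percept in ["loud noise", "loud noise", "silence"]:
    action, state = table[(state, percept)]  # output state becomes next input
    actions.append(action)

print(actions)  # ['startle', 'flee', 'relax']
```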

The input and output formats also correspond to enormous objects capable of fully describing all the sensory input we can perceive in a short time, all the actions we can take in a short time (including habitual, autonomic, everything), and every aspect of our mental state.

This ain't your daddy's 16 x 32 array of unsigned ints.

Comment by tom_breton on Zombies! Zombies? · 2008-04-07T03:17:00.000Z · score: 1 (1 votes) · LW · GW

To put it much more briefly, under the Wesley Salmon definition of "explanation" the epiphenomenal picture is simply not an explanation of consciousness.

Comment by tom_breton on Initiation Ceremony · 2008-03-30T01:24:06.000Z · score: 0 (0 votes) · LW · GW
Any committed autodidacts want to share how their autodidactism makes them feel compared to traditional schooled learners? I'm beginning to suspect that maybe it takes a certain element of belief in the superiority of one's methods to make autodidactism work.

As Komponisto points out, traditional schooling is so bad at educating that belief in the superiority of one's [own] methods is easily acquired. I first noticed traditional schooling's ineptitude in kindergarten, and this perception was reinforced almost continuously through the rest of my schooling.

PS: I liked the initiation ceremony fiction, Eliezer.

Comment by tom_breton on The Quotation is not the Referent · 2008-03-14T02:47:47.000Z · score: 1 (1 votes) · LW · GW
In classical logic, the operational definition of identity is that whenever 'A=B' is a theorem, you can substitute 'A' for 'B' [but it doesn't follow that] I believe 2 + 2 = 4 => I believe TRUE => I believe Fermat's Last Theorem.

The problem is that identity has been treated as if it were absolute, as if when two things are identical in one system, they are identical for all purposes.

The way I see it, identity is relative to a given system. I'd define it thus: A=B in system S just if for every equivalence relation R that can be constructed in S, R(A,B) is true. "Equivalence relation" is defined in the usual way: reflexive, symmetrical, transitive.

My formulation quantifies over equivalence relations, so it's not properly a relation in the system itself. It "lives" in any meta-logic about S that supports the definition's modest components: Ability to distinguish equivalence relations from other types, quantification over equivalence relations in S, ability to apply a variable that's known to be an equivalence relation, and ability to conjoin an arbitrary number of conjuncts. The fact that it's not in the system also avoids the potentially paradoxical situation of including '=' among its own conjuncts.

Given my formulation, it's easily seen that identity needs to be relative to some system. If we were to quantify over all equivalence relations everywhere, we would have to include relations like "Begins with the same letter", "Has the same ASCII representation", or "Is printed at the same location on the page". These relations would fail on A=B and on other equivalences that we certainly should allow at least sometimes. In fact, the `=` test would fail on every two arguments, since the relation "is passed to the NNNNth call to `=` as the same argument index" must fail for those arguments. It could only succeed in a purely Platonic sense. So identity needs to be relative to some system.

How can systems differ in what equivalence relations they allow, in ways that are relevant here? For instance, suppose you write a theorem prover in Lisp. In the Lisp code, you definitely want to distinguish symbols that have different names. Their names might even have decomposable meaning, eg in a field accessor like `my-struct-my-field`. So implicitly there is an equivalence relation `has-same-name` about the Lisp. In the theorem prover itself, there is no such relation as `has-same-Lisp-name` or even `has-same-symbol-in-theorem-prover`. (You can of course feed the prover axioms which model this situation. That's different, and doesn't give you real access to these distinctions.)

Your text editor in which you write the Lisp code has yet another different catalog of equivalence relations. It includes many distinctions that are sensitive to spelling or location. They don't trip us up here, they are just the sort of things that a text editor should distinguish and a theorem prover shouldn't.

The code in which your text editor is written makes yet other distinctions.
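As a hypothetical sketch of this relativity, a "system" can be modeled simply as the set of equivalence relations it can express; all the names and relations below are invented for illustration:

```python
# Identity relative to a system, where a "system" is modeled as the
# set of equivalence relations it can express.

def identical_in(system, a, b):
    """A = B relative to `system`: every equivalence relation agrees."""
    return all(rel(a, b) for rel in system)

# Two names for the same number.
A = ("A", 4)      # (spelling, denotation)
B = ("B", 2 + 2)

# A theorem-prover-like system sees only denotations.
prover = [lambda x, y: x[1] == y[1]]

# A text-editor-like system also distinguishes spellings.
editor = prover + [lambda x, y: x[0] == y[0]]

print(identical_in(prover, A, B))  # True: same value
print(identical_in(editor, A, B))  # False: "has-same-name" fails
```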

So what about the cases at hand? They are both about logic of belief (doxastic logic). Doxastic logic can contain equivalence relations that fail even on de re equivalent objects. For instance, doxastic logic should be able to say "Alice believes A but not B" even when A and B are both true. Given that sort of expressive capability, one can construct the relation "Alice believes either both A and B or neither", which is reflexive, symmetrical, transitive; it's an equivalence relation and it treats A and B differently.

So A and B are not identical here even though de re they are the same.
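That construction is easy to check mechanically. A small sketch (the belief set and propositions here are invented) verifies that "Alice believes either both A and B or neither" is reflexive, symmetric, and transitive, yet treats A and B differently:

```python
from itertools import product

# Hypothetical belief state: Alice believes A but not B, though
# (de re) both are true.
beliefs = {"A"}
props = ["A", "B", "C"]

def same_belief_status(x, y):
    """x R y iff Alice believes both or neither."""
    return (x in beliefs) == (y in beliefs)

# Verify it is an equivalence relation on this domain.
reflexive = all(same_belief_status(x, x) for x in props)
symmetric = all(same_belief_status(x, y) == same_belief_status(y, x)
                for x, y in product(props, repeat=2))
transitive = all(not (same_belief_status(x, y) and same_belief_status(y, z))
                 or same_belief_status(x, z)
                 for x, y, z in product(props, repeat=3))

print(reflexive, symmetric, transitive)  # all True
print(same_belief_status("A", "B"))      # False: the relation separates A and B
```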

Comment by tom_breton on Dissolving the Question · 2008-03-10T01:57:48.000Z · score: -1 (1 votes) · LW · GW

Great post, Rolf Nelson.

Comment by tom_breton on Righting a Wrong Question · 2008-03-10T01:42:25.000Z · score: 0 (0 votes) · LW · GW

This seems to me a special case of asking "What actually is the phenomenon to be explained?" In the case of free will, or should I say in the case of the free will question, the phenomenon is the perception or the impression of having it. (Other phenomena may be relevant too, like observations of other people making choices between alternatives).

In the case of the socks, the phenomenon to be explained can be safely taken to be the sock-wearing state itself. Though as Eliezer correctly points out, you can start farther back, that is, you can start with the phenomenon that you think you're wearing socks and ask about it and work your way towards the other.

Comment by tom_breton on Variable Question Fallacies · 2008-03-05T23:29:25.000Z · score: 0 (0 votes) · LW · GW
"Have you stopped beating your wife?" has well-defined true-or-false answers. It's just that people are generally too stupid to understand what the no-answer actually indicates.

It's usually given as "Have you stopped beating your wife yet?" (Emph mine). The problem is the presupposition that you have been beating your wife. Either answer accepts (or appears to accept) that presupposition.

It's a different sort of bad question than the underconstrained questions. The Liar Paradox, OTOH, is a case of an underconstrained question because it contains non-well-founded recursion.

Comment by tom_breton on Where to Draw the Boundary? · 2008-02-22T01:29:16.000Z · score: -2 (2 votes) · LW · GW

Wrt defining art, I offer my definition:

"An artifact whose purpose is to be perceived and thereby produce in its perceiver a positive experience of no direct practical value to the perceiver."

"Artifact" here is meant in the sense of being appropriate for Daniel Dennett's design stance. It is not neccessarily tangible or durable.

This is what's called a genus-differentia definition, or type-and-distinction definition. "Artifact" is the type; the rest is the distinction.

This lets me build on existing understandings about artifacts. They have a purpose, but they remain artifacts even when they are not accomplishing that purpose. They are constructed by human beings, but this is a pragmatic fact about human ability and not a part of their definition.

I avoided terms that make no definitional progress such as "beauty" and "aesthetic". Using them would just be passing the buck.

This definition seems to include birdsong. Make of that what you will. One could reasonably say that birdsong is a fitness signal of direct practical value to the intended perceiver, though.

Under this definition, throw-the-paint art is not so much excluded as it is a marginal, failed, or not-serious example, much the way that a hammer (which is another type of artifact) constructed of two twigs scotch-taped together at right angles is a failure as a hammer.

Comment by tom_breton on Where to Draw the Boundary? · 2008-02-21T22:52:34.000Z · score: 4 (4 votes) · LW · GW
Just because there's a word "art" doesn't mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.

True, but it strongly suggests that people who use the term believe there is a referent for it. Sometimes there is none (eg "phlogiston" or "unicorn"). Sometimes the referent is so muddled or misunderstood that the term has little use except to name the mistake (eg "free will", which seems to function as a means of grouping quite distinct concepts of subjective freedom together as if they were the same thing, or "qualia", whose referent is a subjective illusion).

But almost always it's worth asking what they think they mean by it.

@tcpkac, we sometimes call slightly-out-of-focus photos "blurries". Hope that helps with your important secret project. }:)

Comment by tom_breton on The Argument from Common Usage · 2008-02-14T19:28:15.000Z · score: 4 (4 votes) · LW · GW
I never could understand why people made such a fuss about whether the tree made a sound or not.

Because the sense in which this question is being used as an example here is not the real question that Bishop Berkeley had in mind.

It's really a question about epistemology. It's related to the "grue" paradox, which is a bit easier to explain. The grue paradox first notes that ordinarily we have good reason to believe that certain things (grass, green paint, copper flames) are green and will continue to be green after (say) 1 January 2009. It then notes that every piece of evidence we have supporting that belief also supports the belief that these things are "grue", which is defined as being green before 2009 and being blue after that date. On the face of it, we should be equally confident that green paint etc will be blue after 2009.

Much has been written, but the important point is that nobody has ever experienced 2009 (except you lurkers who read posts from previous years. Just change 2009 to a date that's still in your future, or have they forgotten how to do that in the future?)

A similar condition applies with Berkeley's paradox. Tautologically, nobody has ever heard a tree fall that nobody heard. (Planting a tape recorder or radio transmitter and listening to that counts as hearing it) So when we guess that the falling tree makes a sound, we are extrapolating. There is no way to test that extrapolation, so how can it be justified?

I recommend the "four strands" discussion in David Deutsch's The Fabric of Reality for some intelligent and not too wordy comments on how, among the other interesting topics it covers.

Comment by tom_breton on Newcomb's Problem and Regret of Rationality · 2008-02-01T23:45:00.000Z · score: 3 (3 votes) · LW · GW

IMO there's less to Newcomb's paradox than meets the eye. It's basically "A future-predicting being who controls the set of choices could make rational choices look silly by making sure they had bad outcomes". OK, yes, he could. Surprised?

What I think makes it seem paradoxical is that the paradox both assures us that Omega controls the outcome perfectly, and cues us that this isn't so ("He's already left" etc). Once you settle what it's really saying either way, the rest follows.

Comment by tom_breton on Trust in Bayes · 2008-01-30T23:21:54.000Z · score: 3 (3 votes) · LW · GW
No matter how many of McGee's bets you take, you can always take one more bet and expect an even higher payoff. It's like asking for the largest integer. There isn't one, and there isn't an optimal plan in McGee's dilemma.

Yes, the inability to name a largest number seems to underlie the infinity utility paradoxes. Which is to say, they aren't really paradoxes of utility unless one believes that "name a number and I'll give you that many dollars" is also a paradox of utility. (Or "...and I'll give you that many units of utility")

It's true that the genie can always correct the wisher by pointing out that the wisher could have accepted one more offer, but in the straightforward "X dollars" example the genie can also always correct the wisher along the same lines by naming a larger number of dollars that he could have asked for.

It doesn't prove that the wisher doesn't want to maximize utility, it proves that the wisher cannot name a largest number, which isn't about his preferences.
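The structure can be sketched numerically (the probability and payoff below are invented for illustration, not McGee's exact numbers): each additional bet raises the expected value, so no finite plan is optimal, even while the chance of ruin grows:

```python
# A hypothetical McGee-style bet sequence: each round you stake your
# whole balance; with probability 0.9 it triples, otherwise you lose
# everything. Expected value after n rounds is (0.9 * 3)**n, strictly
# increasing in n -- so "take the optimal number of bets" is as
# ill-posed as "name the largest integer" -- yet the probability of
# ending with nothing also grows with n.

def expected_value(n_rounds, start=1.0, p=0.9, mult=3.0):
    return start * (p * mult) ** n_rounds

def prob_ruin(n_rounds, p=0.9):
    return 1 - p ** n_rounds

for n in range(1, 5):
    assert expected_value(n + 1) > expected_value(n)  # one more bet always "better"

print(expected_value(10))  # 2.7**10: enormous
print(prob_ruin(10))       # ~0.65: yet ruin is the likeliest outcome
```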

Comment by tom_breton on Allais Malaise · 2008-01-27T01:23:21.000Z · score: 2 (2 votes) · LW · GW

Perhaps a lot of confusion could have been avoided if the point had been stated thus:

One's decision should be no different even if the odds of the situation arising that requires the decision are different.

Footnote against nitpicking: this ignores the cost of making the decision itself. We may choose to gather less information and not think as hard for decisions about situations that are unlikely to arise. That factor isn't relevant in the example at hand.

Comment by tom_breton on But There's Still A Chance, Right? · 2008-01-07T17:25:48.000Z · score: 2 (2 votes) · LW · GW

@Alan Crowe:

FWIW, having tried that tack a few times, I've always been disappointed. The answer is always along the lines of "I'm not meeting any psychological need, I'm searching sincerely for the truth."

Comment by tom_breton on But There's Still A Chance, Right? · 2008-01-06T21:10:47.000Z · score: 1 (1 votes) · LW · GW
But what I found even more fascinating was the qualitative distinction between "certain" and "uncertain" arguments, where if an argument is not certain, you're allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you're allowed to keep it.

I think that's exactly what's going on. These people you speak of who do this are mentally dealing with social permission, not with probability algebra. The non-zero probability gives them social permission to describe it as "it might happen", and the detail that the probability is 1 / googolplex stands a good chance of getting ignored, lost, or simply not appreciated. (Similarly, the tiny uncertainty)

And I don't just mean that it works in conversation. The person who makes this mistake has probably internalized it too.

It struck me that way when I read your opening anecdote. Your interlocutor talked like a lawyer who was planning on bringing up that point in closing arguments - "Mr Yudkowsky himself admitted there's a chance apes and humans are not related" - and not bringing up the minuscule magnitude of the chance, of course.

Comment by tom_breton on A Failed Just-So Story · 2008-01-05T17:50:56.000Z · score: 0 (0 votes) · LW · GW

The selection pressure for faith is almost surely memetic, not genetic. You can focus on the genetic adaptations that it hijacked, but in doing so you will miss the big picture.

Secondly, for understanding religion, I strongly recommend Pascal Boyer's Religion Explained.

Comment by tom_breton on The American System and Misleading Labels · 2008-01-03T19:58:14.000Z · score: 0 (0 votes) · LW · GW

That's true, Benquo.

Comment by tom_breton on The American System and Misleading Labels · 2008-01-03T04:25:16.000Z · score: 8 (8 votes) · LW · GW
"How many legs does a dog have, if you call a tail a leg?
Four. Calling a tail a leg doesn't make it a leg." -- Abraham Lincoln

This is the sort of quip that gives the speaker a cheap thrill of superiority, but underneath it is just a cheap trick.

In this case, the trick is that Lincoln (or whoever its real author is) has confused de dicto and de re. That is, he confuses assertions that are to be understood inside vs outside a quote-like context; in this case, in the context of the provision that we shall call a dog's tail a leg. He uses that to commit the fallacy of ambiguity. There is an undistributed middle lurking in there: a modal operator that appears twice and needs to have the same semantics both times, but doesn't.

So I don't think this particular quote is a good illustration of "the map is not the territory". There's nothing about general semantics that forbids agreeing on or using some labelling scheme, even a variant labelling. The idea of GS is "the map is not the territory", not "use no maps" or "use no non-standard maps".

Comment by tom_breton on Fake Utility Functions · 2007-12-17T03:56:42.000Z · score: 0 (2 votes) · LW · GW
...there really is some good stuff in there. My advice would be to read Reasons and Persons (by Derek Parfit) and The Methods of Ethics (by Henry Sidgwick).

Looked up both. Two bum steers. Sidgwick is mostly interested in naming and taxonomizing ethical positions, and Parfit is just wrong.

Comment by tom_breton on Adaptation-Executers, not Fitness-Maximizers · 2007-11-11T20:12:26.000Z · score: 0 (0 votes) · LW · GW
The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

This is the distinction Daniel Dennett makes between the intentional stance and the design stance. I consider it a useful one. He also distinguishes the physical stance, which you touch on.

Comment by tom_breton on Torture vs. Dust Specks · 2007-11-01T00:05:00.000Z · score: 0 (0 votes) · LW · GW
Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture = 1) is begging the question in favour of SPECKS then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

It should be obvious why. The constraint in the first one is neither argued for nor agreed on and by itself entails the conclusion being argued for. There's no such element in the second.

Comment by tom_breton on Torture vs. Dust Specks · 2007-10-31T21:39:00.000Z · score: 0 (0 votes) · LW · GW

@Neel.

Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." Ie, that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:

D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

...doesn't meet this test.

Contrary to what you think, it doesn't require unbounded utility. Limiting the lower bound of the range to (say) 2 * disutility(torture) will suffice. The rest of your message assumes it does.

For completeness, I note that introducing numbers comparable to 3^^^3 in an attempt to undo the 3^^^3 scaling would cause a formulation to fail the "slight" condition, modest though it is.
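The quoted formulation fails the test for a checkable reason: its Specks term is bounded above by 1, while a single torture already contributes 5, so SPECKS can never overtake TORTURE no matter how large Specks gets. A quick sketch (exact rational arithmetic is used to keep the huge-Specks comparison honest):

```python
from fractions import Fraction

# The disutility function quoted in the comment:
#   D(Torture, Specks) = 10 * (Torture/(Torture + 1)) + Specks/(Specks + 1)
def D(torture, specks):
    t, s = Fraction(torture), Fraction(specks)
    return 10 * t / (t + 1) + s / (s + 1)

# One torture, zero specks: contributes 10 * (1/2) = 5.
assert D(1, 0) == 5

# The specks term is bounded above by 1, for ANY number of specks,
# so the conclusion is built into the function's shape.
assert D(0, 10**100) < 1
```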

Comment by tom_breton on Torture vs. Dust Specks · 2007-10-31T20:00:00.000Z · score: 5 (5 votes) · LW · GW

It's truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".

One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:

...depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

"Unproblematically" vastly overstates what is required here. The question doesn't require unproblematic aggregation; any slight tendency of aggregation will do just fine. We could stipulate that pain aggregates as the hundredth root of N and the question would still have the same answer. That is an insanely modest assumption, ie that it takes 2^100 people having a dust mote before we can be sure there is twice as much suffering as for one person having a dust mote.

"b" is actually inapplicable to the stated question and it's "a" again anyways - just add "type" or "mode" to the second conjunction in "a".

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

I see only three possibilities for challenging this, none of which affects the question at hand.

  • Favor a desideratum that roughly aligns with "pleasure" but not quite, such as "health". Not a problem.
  • Focus on some special situation where paining others is arguably desirable, such as deterrence, "negative reinforcement", or retributive justice. ISTM that's already been idealized away in the question formulation.
  • Just don't care about others' utility, eg Rand-style selfishness.
Comment by tom_breton on A Priori · 2007-10-19T03:16:00.000Z · score: 0 (0 votes) · LW · GW

In a comment on "How to Convince Me That 2 + 2 = 3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation that blurs that important distinction: a priori. Eliezer's quotative paragraph illustrates the problem:

The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

All of these definitions seem to assume there is no distinction between the existence of necessary truths and knowing necessary truths (more correctly, justifiably assigning extremely high probability to them). But there are necessary truths that are not knowable by any means we have or expect to have. Eg, the digits of Gregory Chaitin's Omega constant, beyond the first few. Omega is the probability that a random Turing machine will halt. Whatever value it has, it necessarily has.

(One might say more charitably that these definitions are only categorizing knowledge and say nothing about non-knowledge. If so, they mislead, and also make a subtler mistake. Necessary truths are not a special type of knowledge; they are a topic of knowledge.)

One can understand why the mistake is made. Epistemology, the branch of philosophy about how we know what we know, is always looking for a way to assign untouchable status to what seems its most certain knowledge.

Comment by tom_breton on What Evidence Filtered Evidence? · 2007-10-01T22:15:29.000Z · score: 0 (0 votes) · LW · GW

G, you're raising points that I already answered.

Comment by tom_breton on What Evidence Filtered Evidence? · 2007-10-01T03:13:32.000Z · score: 0 (0 votes) · LW · GW
I don't believe this is exactly correct. After all, when you're just about to start listening to the clever arguer, do you really believe that box B is almost certain not to contain the diamond?

Where do you get that A is "almost certain" from? I just said the prior probability of B was "low". I don't think that's a reasonable restatement of what I said.

Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points), and then suddenly drops precipitously as soon as he says "Therefore..." (because that excludes the possibility he has more points).

It doesn't seem to me that excluding the possibility that he has more points should have that effect.

Consider the case where CA is artificially restricted to raising a given number of points. By common sense, for a generous allotment this is nearly equivalent to the original situation, yet you never learn anything new about how many points he has remaining.

You can argue that CA might still stop early when his argument is feeble, and thus you learn something. However, since you've stipulated that every point raises your probability estimate, he won't stop early. To make an argument without that assumption, we can ask about a situation where he is required to raise exactly N points and assume he can easily raise "filler" points.

ISTM at every juncture in the unrestricted and the generously restricted arguments, your probability estimate should be nearly the same, except that you need to compensate slightly less in the restricted case.

Now, in a certain sense these are two ways of saying the same thing: raising the probability per (presumably cogent) point, but lowering it as a whole in compensation.

But once you begin hearing CA's argument, you know tautologically that you are hearing his argument, barring unusual circumstances that might still cause it not to be fully presented. I see no reason to delay accounting that information.

Comment by tom_breton on What Evidence Filtered Evidence? · 2007-09-30T21:44:40.000Z · score: 3 (3 votes) · LW · GW
Each statement that he makes is valid evidence - how could you not update your probabilities? ... But then the clever arguer can make you believe anything he chooses, if there is a sufficient variety of signs to selectively report. That doesn't sound right.

What's being overlooked is that your priors before hearing the clever arguer are not the same as your priors if there were no clever arguer.

Consider the case if the clever arguer presents his case and it is obviously inadequate. Perhaps he refers to none of the usual signs of containing a diamond and the signs he does present seem unusual and inconclusive. (Assume all the usual idealizations, ie no question that he knows the facts and presents them in the best light, his motives are known and absolute, he's not attempting reverse psychology, etc) Wouldn't it seem to you that here is evidence that box B does not contain the diamond as he says? But if no clever arguer were involved, it would be a 50/50 chance.

So the prior that you're updating for each point the clever arguer makes starts out low. It crosses 0.5 at the point where his argument is about as strong as you would expect given a 50/50 chance of A or B.

What lowers it when CA begins speaking? You are predictively compensating for the biased updating you expect to do when you hear a biased but correct argument. (Idealizations are assumed here too. If we let CA begin speaking and then immediately stop him, this shouldn't persuade anybody that the diamond is in box A on the grounds that they're left with the low prior they start with.)
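A toy numerical sketch of this pre-compensation picture (all the numbers are invented): learning that a clever arguer will speak for box B lowers your probability for B in advance, and an argument of exactly the expected strength then brings you back to 0.5:

```python
import math

def logodds(p):
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

p = 0.5            # with no clever arguer: 50/50
z = logodds(p)     # 0.0 in log-odds

expected_points = 6      # argument strength expected if B were 50/50
point_weight = 0.4       # log-odds boost per cogent point (invented)

# Pre-compensate the moment you learn CA will argue for B: p drops.
z -= expected_points * point_weight

# He then delivers an argument of exactly the expected strength...
for point in range(expected_points):
    z += point_weight    # each point raises your estimate

print(round(sigmoid(z), 3))  # ...and you end back near 0.5
```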

The answer is less clear when CA is not assumed to be clever. When he presents a feeble argument, is it because he can have no good argument, or because he couldn't find it? Ref "What evidence bad arguments".

Comment by tom_breton on How to Convince Me That 2 + 2 = 3 · 2007-09-28T21:41:47.000Z · score: 1 (1 votes) · LW · GW

There are really two questions in there:

  • Whether the Peano arithmetic axioms correctly describe the physical world.
  • Whether, given those axioms and appropriate definitions of 2 and 4 (perhaps as Church numerals), 2 + 2 = 4.

One is a question about the world, the other about a necessary truth.

The first is about what aspect of the world we are looking at, under what definitions. 2 rabbits plus 2 rabbits may not result in 4 rabbits. So I have to assume Eliezer refers to the second question.

Can we even meaningfully ask the second question? Kind of. As David Deutsch warns, we shouldn't mistake the study of absolute truths for the possession of absolute truths. We can ask ourselves how we computed whether 2+2=4, conscious that our means of computing it may be flawed. We could in principle try many means of computing whether 2+2=4 that seem to obey the Peano axioms: fingers, abacus, other physical counters, etc. Then we could call into question our means of aggregating the computations into a single very confident answer and then our means of retaining the answer in memory.
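One such means, sketched in code: the Church-numeral definitions the comment mentions, under which 2 + 2 reduces to the same numeral as 4:

```python
# Church numerals: a number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

two  = succ(succ(zero))   # f applied twice
four = succ(succ(two))    # f applied four times

def to_int(n):
    """Read a Church numeral off by counting applications."""
    return n(lambda k: k + 1)(0)

print(to_int(add(two)(two)))  # 4
print(to_int(four))           # 4
```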

Seems a pointless exercise to me, though. Evolution either has endowed us with mental tools that correspond to some basic necessary truths or it hasn't. If it hadn't, we would have no good means of exploring the question.

Comment by tom_breton on The Bottom Line · 2007-09-28T19:57:00.000Z · score: 2 (2 votes) · LW · GW
If you happened to be a literate English speaker, you might become confused, and think that this shaped ink somehow meant that box B contained the diamond.

A sign S "means" something T when S is a reliable indicator of T. In this case, the clever arguer has sabotaged that reliability.

ISTM the parable presupposes (and needs to) that what the clever arguer produces is ordinarily a reliable indicator that box B contains the diamond, i.e., ordinarily means that. It would be pointless otherwise.

Therein lies a question: is he necessarily able to sabotage it? Posed the other way: are there formats which he can't effectively sabotage but which suffice to express the interesting arguments?

There are formats he can't sabotage, such as rigorous machine-verifiable proof, but using them is a great deal of work even for their natural subject matter. So: yes, with difficulty, for math-like topics.

For science-like topics in general, I think the answer is probably that it's theoretically possible. It needs more than verifiable logic, though. Onlookers need to be able to verify experiments, and interpretive frameworks need to be managed, which is very hard.

For squishier topics, I make no answer.

Comment by tom_breton on Doublethink (Choosing to be Biased) · 2007-09-14T20:51:22.000Z · score: 10 (10 votes) · LW · GW

What if self-deception helps us be happy? What if just running out and overcoming bias will make us - gasp! - unhappy?

You are aware, I'm sure, of studies that connect depression with freedom from bias, notably overconfidence in one's ability to control outcomes.

You've already given one answer: to deliberately choose to believe what our best judgement tells us isn't so would be lunacy. Many people are psychologically able to fool themselves subtly, but fewer are able to deliberately, knowingly fool themselves.

Another answer is that even though depression leads to freedom from some biases and illusions, the converse doesn't seem to apply. Overcoming bias doesn't seem to lead to depression. I don't get the impression that a disproportionate number of people on this list are depressed. In my own experience, losing illusions doesn't make me feel depressed. Even if the illusion promised something desirable, I think what I have usually felt was more like intellectual relief, "So that's why (whatever was promised) never seemed to work."

Comment by tom_breton on Radical Honesty · 2007-09-11T05:06:38.000Z · score: 3 (3 votes) · LW · GW

This was surprisingly hard to explain to people; many people would read the careful explanation and hear, "Crocker's Rules mean you can say offensive things to other people."

Perhaps because it resembles the "2" part of a common verbal bully's 1-2 punch: the one that first insults you and then, when you react, slurs you for allegedly not being able to handle the truth. I'm specifically thinking of the part of Crocker's Rules that goes "If you're offended, it's your fault".

Yes, I see that one is "me" and the other is "you". But the translation to "you" is so natural that even that writeup of Crocker's Rules slips into it.

I also think Crocker's Rules is an ivory-tower sort of position that starts from assumptions that just don't reflect the real world. Perhaps in Lee Crocker's experience all debating opponents are, at worst, mere curmudgeons who wrap truths in unpleasant rhetoric, but I doubt that's true even for him. It's certainly not my experience.

In my experience, the majority of people to whom this rule seems applicable use petty and truthless rhetoric to defend minor points of lifestyle or ideology. Usually the arguing parties have already understood each other as much as they care to and are shouting their talking points and postures past each other. It's true that usually one or both sides could stand to listen and learn, but for the people that applies to, that's invariably just what they don't want.

I won't belabor the point, but Crocker's apparent assumption about the nature of contentious rhetoric is grossly wrong in the real world.

Comment by tom_breton on The Crackpot Offer · 2007-09-08T19:30:05.000Z · score: 11 (10 votes) · LW · GW

It seems to be a common childhood experience on this list to have tried to disprove famous mathematical theorems.

Me, I tried to disprove the four-color map conjecture when I was 10 or 11. At that point it was a conjecture, not a theorem. I came up with a nice moderate-sized map that, after an apparently free initial labelling and a sequence of apparently forced moves, required a fifth color.

Fortunately the first thing that occurred to me was to double-check my result, and of course I found a valid 4-coloring.