Comments

Comment by Brian_Jaress2 on Worse Than Random · 2008-11-12T23:18:23.000Z · LW · GW

GreedyAlgorithm,

"If your source of inputs is narrower than 'whatever people anywhere using my ubergeneral sort utility will input' then you may be able to do better."

That's actually the scenario I had in mind, and I think it's the most common. Usually, when someone does a sort, they do it with a general-purpose library function or utility.

I think most of those are actually implemented as a merge sort, which is usually faster than quicksort, but I'm not clear on how that ties in to the use of information gained during the running of the program.

What I'm getting at is that the motivation for selecting randomly, and any speedup from switching to merge sort, don't seem to directly match any of the examples already given.

In his explanation and examples, Eliezer pointed to information gained while the algorithm is running. Choosing the best type of selection in a quicksort is based on foreknowledge of the data, with random selection seeming best when you have the least foreknowledge.

Likewise, the difference between quicksort and other sorts that may be faster doesn't have an obvious connection to the type of information that would help you choose between different selections in a quicksort.

I'm not looking for a defense of nonrandom methods. I'm looking for an analysis of random selection in quicksort in terms of the principles that Eliezer is using to support his conclusion.
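
(To make the discussion concrete, here's a rough sketch in Python of the two pivot choices being compared -- not how any particular library actually implements its sort, just the practice in question.)

```python
import random

def middle_pivot(items):
    # fixed rule: always take the middle element; uses no knowledge of the input
    return items[len(items) // 2]

def random_pivot(items):
    # randomized rule: no single input pattern can reliably trigger the worst case
    return random.choice(items)

def quicksort(items, choose_pivot):
    if len(items) <= 1:
        return list(items)
    pivot = choose_pivot(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less, choose_pivot) + equal + quicksort(greater, choose_pivot)

print(quicksort([5, 3, 8, 1, 9, 2, 7], middle_pivot))
print(quicksort([5, 3, 8, 1, 9, 2, 7], random_pivot))
```

The only line in dispute is the choice of pivot; everything else is the same either way.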

Comment by Brian_Jaress2 on Worse Than Random · 2008-11-12T19:19:34.000Z · LW · GW

Daniel I. Lewis, as I said, lists can have structure even when that structure is not chosen by a person.

"Let's say, for the sake of argument, that you get sorted lists (forwards or backwards) more often than chance, and the rest of the time you get a random permutation."

Let's not say that, because it creates an artificial situation. No one would select randomly if we could assume that, yet random selection is done. In reality, lists that are bad for selecting from the middle are more common than they would be by chance, so random beats middle.

If you put the right kind of constraints on the input, it's easy to find a nonrandom algorithm that beats random. But those same constraints can change the answer. In your case, part of the answer was the constraint that you added.

I was hoping for an answer to the real-world situation.

Comment by Brian_Jaress2 on Worse Than Random · 2008-11-12T02:21:01.000Z · LW · GW

GreedyAlgorithm, yes that's mostly why it's done. I'd add that it applies even when the source of the ordering is not a person. Measurement data can also follow the type of patterns you'd get by following a simple, fixed rule.

But I'd like to see it analyzed Eliezer's way.

How does the randomness tie in to acquired knowledge, and what is the superior non-random method making better use of that knowledge?

Using the median isn't it, because that generally takes longer to produce the same result.
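
(To spell out "takes longer": the most direct way to pivot on the exact median is to run a selection step on every slice, sketched below; it could be dropped into the earlier quicksort sketch as choose_pivot. A real implementation would use a linear-time selection rather than sorting, but even that costs more per partition than one random pick.)

```python
def median_pivot(items):
    # naive exact-median choice: sorting every slice adds a full extra pass
    # per partition, where picking at random costs a single call
    return sorted(items)[len(items) // 2]

print(median_pivot([5, 3, 8, 1, 9, 2, 7]))  # 5
```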

Comment by Brian_Jaress2 on Worse Than Random · 2008-11-11T22:57:38.000Z · LW · GW

How would you categorize the practice of randomly selecting the pivot element in a quicksort?

Comment by Brian_Jaress2 on Building Something Smarter · 2008-11-02T21:21:20.000Z · LW · GW

"Brian, if this definition is more useful, then why isn't that license to take over the term?"

Carey, I didn't say it was a more useful definition. I said that Eliezer may feel that the thing being referred to is more useful. I feel that money is more useful than mud, but I don't call my money "mud."

"More specifically, how can there be any argument on the basis of some canonical definition, while the consensus seems that we really don't know the answer yet?"

I'm not arguing based on a canonical definition. I agree that we don't have a precise definition of intelligence, but we do have a very rough consensus on particular examples. That consensus rejects rocks, trees, and apple pies as not intelligent. It also seems to be rejecting paperclip maximizers and happy-face tilers.

"It seems akin to arguing that aerodynamics isn't an appropriate basis for the definition of 'flight', just because a preconceived notion of flight includes the motion of the planets as well as that of the birds, even though the mechanisms turn out to be very different."

I've never heard anyone say a planet was flying, except maybe poetically. Replace "planets" with "balloons" and it'll get much closer to what I'm thinking.

Comment by Brian_Jaress2 on Building Something Smarter · 2008-11-02T18:04:09.000Z · LW · GW

"To describe the universe well, you will have to distinguish these signatures from each other, and have separate names for 'human intelligence', 'evolution', 'proteins', and 'protons', because even if these things are related they are not at all the same."

Speaking of separate names, I think you shouldn't call this "steering the future" stuff "intelligence." It sounds very useful, but almost no one except you is referring to it when they say "intelligence." There's some overlap, and you may feel that what you are referring to is more useful than what they are referring to, but that doesn't give you license to take over the word.

I know you've written a bunch of articles justifying your definition. I read them. I also read the articles moaning that no one understands what you mean when you say "intelligence." I think it's because they use that word to mean something else. So maybe you should just use a different word.

In fact, I think you should look at generalized optimization as a mode of analysis, rather than a (non-fundamental) property. Say, "Let's analyze this in terms of optimization (rather than conserved quantities, economic cost/benefit, etc.)" not, "Let's measure its intelligence."

In one of your earlier posts, people were saying that your definitions are too broad and therefore useless. I agree with them about the broadness, but I think this is still a useful concept if it is taken to be one of many ways of looking at almost anything.

Comment by Brian_Jaress2 on My Bayesian Enlightenment · 2008-10-05T21:39:59.000Z · LW · GW

Cat Dancer, I think by "no alternative," he means the case of two girls.

Of course the mathematician could say something like "none are boys," but the point is whether or not the two-girls case gets special treatment. If you ask "is at least one a boy?" then "no" means two girls and "yes" means anything else.

If the mathematician is just volunteering information, it's not divided up that way. When she says "at least one is a boy," she might be turning down a chance to say "at least one is a girl," and that changes things.

At least, I think that's what he's saying. Most of probability seems as awkward to me as frequentism seems to Eliezer.
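
(A quick simulation of the difference, for what it's worth -- a sketch only, and the "volunteering" case assumes she mentions the sex of a randomly picked child, which is just one way to model it:)

```python
import random

def two_kids():
    return [random.choice("BG") for _ in range(2)]

TRIALS = 100_000

# Case 1: you ask "is at least one a boy?" and she answers yes.
yes = both = 0
for _ in range(TRIALS):
    kids = two_kids()
    if "B" in kids:
        yes += 1
        both += kids == ["B", "B"]
print("asked, answered yes:", both / yes)          # about 1/3 are two boys

# Case 2: she volunteers a statement about a randomly picked child.
said_boy = both = 0
for _ in range(TRIALS):
    kids = two_kids()
    if random.choice(kids) == "B":                 # she happens to mention a boy
        said_boy += 1
        both += kids == ["B", "B"]
print("volunteered 'at least one is a boy':", both / said_boy)  # about 1/2
```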

Comment by Brian_Jaress2 on How Many LHC Failures Is Too Many? · 2008-09-21T07:54:25.000Z · LW · GW
"How many times does a coin have to come up heads before you believe the coin is fixed?"

I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.

Comment by Brian_Jaress2 on A Prodigy of Refutation · 2008-09-18T08:14:17.000Z · LW · GW

I've never been on a transhumanist mailing list, but I would have said, "Being able to figure out what's right isn't the same as actually doing it. You can't just increase the one and assume it takes care of the other. Many people do things they know (or could figure out) are wrong."

It's the type of objection you'd have seen in the op-ed pages if you announced your project on CNN. I guess that makes me another stupid man saying the sun is shining. At first, I was surprised that it wasn't on the list of objections you encountered. But I guess it makes sense that transhumanists wouldn't hold up humans as a bad example.

Comment by Brian_Jaress2 on Raised in Technophilia · 2008-09-17T09:06:20.000Z · LW · GW

When I read these stories you tell about your past thoughts, I'm struck by how different your experiences with ideas were from mine. Things you found obvious seem subtle to me. Things you discovered with a feeling of revelation seem pedestrian. Things you dismissed wholesale and now borrow a smidgen of seem like they've always been a significant part of my life.

Take, for example, the subject of this post: technological risks. I never really thought of "technology" as a single thing, to be judged good or bad as a whole, until after I had heard a great deal about particular cases, some good and some bad.

When I did encounter that question, it seemed clear that it was good because the sum total of our technology had greatly improved the life of the average person. It also seemed clear that this did not make every specific technology good.

I don't know about total extinction, but there was a period ending around the time I was born (I think we're about the same age) when people thought that they, their families, and their friends could very well be killed in a nuclear war. I remember someone telling me that he started saving for retirement when the Berlin Wall fell.

With that in mind, I wonder about the influence of our experiences with ideas. If two people agree that technology is good overall but specific technologies can be bad, will they tend to apply that idea differently if one was taught it as a child and the other discovered it in a flash of insight as an adult? That might be one reason I tend to agree with the principles you lay out but not the conclusions you reach.

Comment by Brian_Jaress2 on Mirrors and Paintings · 2008-08-24T03:34:21.000Z · LW · GW

Eliezer, I'm starting to think you're obsessed with Caledonian.

It's pretty astonishing that you would censor him and then accuse him of misrepresenting you. Where are all these false claims by Caledonian about your past statements? I haven't seen them.

For what it's worth, the censored version of Caledonian's comment didn't persuade me.

Comment by Brian_Jaress2 on The Cartoon Guide to Löb's Theorem · 2008-08-19T06:57:34.000Z · LW · GW

Larry D'Anna: Thanks, I think I understand the Deduction Theorem now.

Comment by Brian_Jaress2 on The Cartoon Guide to Löb's Theorem · 2008-08-19T01:41:02.000Z · LW · GW

Okay, I still don't see why we had to pinpoint the flaw in your proof by pointing to a step in someone else's valid proof.

Larry D'Anna identified the step you were looking for, but he did it by trying to transform the proof of Löb's Theorem into a different proof that said what you were pretending it said.

I think, properly speaking, the flaw is pinpointed by saying that you were misusing the theorems, not that the mean old theorem had a step that wouldn't convert into what you wanted it to be.

I've been looking more at the textual proof you linked (the cartoon was helpful, but it spread the proof out over more space) which is a little different and has an all-ascii notation that I'm going to borrow from.

I think if you used Löb's Theorem correctly, you'd have something like:

if PA |- ([]C -> C) then PA |- C [Löb]

PA |- ([]C -> C) -> C [Deduction]

PA |- (not []C) -> C [Definition of implication]

(PA |- ((not []C) -> C)) and (PA |- not []C) [New assumption]

PA |- ((not []C) -> C) and (not []C) [If you can derive two things, you can derive the conjunction]

PA |- C [Definition of implication]

And conclude that all provably unprovable statements are also provable, not that all unprovable statements are true.

(Disclaimer: I am definitely not a mathematician, and I skipped over your meta-ethics because I only like philosophy in small doses.)

Comment by Brian_Jaress2 on The Cartoon Guide to Löb's Theorem · 2008-08-18T00:28:18.000Z · LW · GW

I think the error is that you didn't prove it was unprovable -- all provably unprovable statements are also provable, but unprovable statements aren't necessarily true.

In other words, I think what you get from the Deduction Theorem (which I've never seen before, so I may have it wrong) is Provable((Provable(C) -> C) -> C). I think if you want to "reach inside" that outer Provable and negate the provability of C, you have to introduce Provable(not Provable(C)).

Comment by Brian_Jaress2 on Setting Up Metaethics · 2008-07-29T00:10:25.000Z · LW · GW

Eliezer, please don't ban Caledonian.

He's not disrupting anything, and doesn't seem to be trying to.

He may describe your ideas in ways that you think are incorrect, but so what? You spend a lot of time describing ideas that you disagree with, and I'll bet the people who actually hold them often disagree with your description.

Caledonian almost always disagrees with you, but treats you no differently than other commenters treat each other. He certainly treats you better than you treat some of your targets. For example, I've never seen him write a little dialog in which a character named "Goofus" espouses exaggerated versions of your ideas.

In this case, Caledonian seems to think that your four criteria are aimed at reconciling the two clashing intuitions and that it's a mistake to set such a goal. Well, so what? If that's not what you're trying to do, you fooled me as well.

To me, Caledonian just seems to have a very different take on the world than you do, and to express it bluntly.

Comment by Brian_Jaress2 on What Would You Do Without Morality? · 2008-06-29T07:31:46.000Z · LW · GW

Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.

But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.

For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you'd proved to me. Even for more abstract things, like fraud, the thought that it would be wrong if there were a basis for right and wrong might be enough to make me feel I didn't want to do it.

Comment by Brian_Jaress2 on Against Devil's Advocacy · 2008-06-09T05:59:31.000Z · LW · GW

"On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one. I have trouble believing that the human brain, which just barely suffices to run a technological civilization that can build a computer, is also the theoretical upper limit of effective intelligence."

I don't think visualization is a very good test of whether a proposition is true. I can visualize an asteroidal chocolate cake much more easily than an entire cake-free asteroid belt.

But what about other ways for your Singularity to be impossible?

Comment by Brian_Jaress2 on Timeless Control · 2008-06-07T08:20:03.000Z · LW · GW

That was interesting, but I think you misunderstand time as badly as you expect us to misunderstand non-time.

In regular time, the past no longer exists -- so there's no issue of whether it is changing or not -- and when we talk about the future changing, we're really referring to what is likely to happen in a future that doesn't exist yet.

A person living in a block universe could mistakenly think they have time by only perceiving the present. On the other hand, a person living in a timed universe could mistakenly think they live in a block by writing down their memories and expectations in a little diagram.

Comment by Brian_Jaress2 on The Rhythm of Disagreement · 2008-06-02T01:15:06.000Z · LW · GW

Eliezer, why doesn't the difficulty of creating this AGI count as a reason to think it won't happen soon?

You've said it's extremely, incredibly difficult. Don't the chances of it happening soon go down the harder it is?

Comment by Brian_Jaress2 on My Childhood Role Model · 2008-05-23T09:12:35.000Z · LW · GW

Did Hofstadter explain the remark?

Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever.

Or, maybe he thought that the right end of the scale, where the line suddenly becomes dotted, should be the location of the rightmost point that represents something real. It's very conventional to switch from a solid to a dotted line to represent a switch from confirmed data to projections.

But I don't buy the idea of intelligence as a scalar value.

Comment by Brian_Jaress2 on That Alien Message · 2008-05-22T07:29:01.000Z · LW · GW

On average, if you eliminate twice as many hypotheses as I do from the same data, how much more data do I need than you to achieve the same results? Does it depend on how close we are to the theoretical maximum?
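
(One idealized way to set up the arithmetic, under my own assumptions -- equally weighted hypotheses and independent observations -- not something taken from the post:)

```python
from math import log2

def bits(fraction_eliminated):
    # information gained when that fraction of the hypothesis space is ruled out
    return -log2(1 - fraction_eliminated)

# hypothetical numbers: I rule out 25% of hypotheses per observation,
# you rule out twice as many (50%)
print(bits(0.50) / bits(0.25))   # ~2.4, so I'd need ~2.4x the observations
print(bits(0.80) / bits(0.40))   # ~3.2, so the ratio depends on the fractions
```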

Comment by Brian_Jaress2 on No Safe Defense, Not Even Science · 2008-05-18T17:59:40.000Z · LW · GW

@billswift: Emotion might drive every human action (or not). That's beside the point. If an emotion drives you into a dead end, there's something wrong with that emotion.

My point was that if someone tells you the truth and you don't believe them, it's not fair to say they've led you astray. Eliezer said he didn't "emotionally believe" a truth he was told, even though he knew it was true. I'm not sure what that means, but it sounds like a problem with Eliezer, involving his emotions, not a problem with what he was told.

Comment by Brian_Jaress2 on No Safe Defense, Not Even Science · 2008-05-18T09:07:00.000Z · LW · GW

When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.

I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.

I don't know what you were taught, but I expect it was something similar.

All this "emotional understanding" stuff sounds like your personal problem. I don't mean that it isn't important or that I don't have sympathy for any pain you suffered. I just think it's an emotion issue, not a science issue.

Comment by Brian_Jaress2 on Do Scientists Already Know This Stuff? · 2008-05-17T22:42:30.000Z · LW · GW

Maybe I'm doing it wrong, but when I score your many-worlds interpretation it fails your own four-part test.

  1. Anticipation vs curiosity: We already had the equations, so there's no new anticipation. At first it doesn't seem like a "curiosity stopper" because it leaves everyone curious about the Born probability thing, but that's because it doesn't say anything about that. On the parts where it does say something, it seems like a curiosity stopper.

After your posts on using complex numbers and mirrors, I was wondering, "Why complex numbers? Why do you add them when you add them and multiply them when you multiply them?" That's the question your interpretation answers, and the answer is, "There's stuff called amplitude that flows around in exactly that way."

  2. Blankly solid substance: That sounds like your amplitude. The equations are a specific, complex mechanism, but they're not part of your explanation. They're what you want to explain. Your explanation is just that a substance exists that exactly matches the form of the equations.

  3. Cherishing ignorance: (This one is about how supporters behave, and I've really only heard from you. My score here might be totally invalid if other supporters of the same thing support it differently.) You definitely don't do what I would call cherishing ignorance, but I think you do both of the things which you list as examples of it.

This recent series of posts is all about how your interpretation defeats ordinary science.

The "mundane phenomena" one is a little ambiguous. If the point of the rule is whether the theory is claimed as a special exception, then you haven't made that claim. In other words, you haven't said, "Things usually happen that way, but in this case they happen this way." But I think at least part of that rule has to do with pride in how shocking and different the explanation is -- a case of, "I've had a revolutionary insight that violates everything you think you know." You've certainly shown that attitude.

  4. Still a mystery: Well, there's the Born probabilities that it doesn't say anything about. Then there's the way that the values are assigned and combined to get the final amplitude, in other words the way the amplitude "flows around." Amplitude has its own peculiar way of flowing that was already in the equations and isn't explained by calling it amplitude.

So the score is:

  1. Check

  2. Check

  3. Maybe, with a frowny face even if it's technically OK.

  4. Check

Maybe I missed something in your past posts. (I skimmed over a lot of attacks on other interpretations that I don't know much about.) Or maybe I misunderstood the four tests. Three of them seemed like pretty much the same thing.

I'm not sure I even agree with the test, but it captured part of what I don't like about your interpretation. It actually kind of reminds me of that "phlogiston" thing you always bring up as a bad example, in the sense that you started with a black box behavioral description and explained it with a substance defined in terms of the known behavior.

Comment by Brian_Jaress2 on Science Doesn't Trust Your Rationality · 2008-05-15T05:35:05.000Z · LW · GW

Science and Eliezer both agree that evidence is important, so let's collect some evidence on which one is more accurate.

Comment by Brian_Jaress2 on Decoherent Essences · 2008-04-30T23:50:02.000Z · LW · GW

I don't really follow a lot of what you've written on this, so maybe this isn't fair, but I'll put it out there anyway:

I have a hard time seeing much difference between you (Eliezer Yudkowsky) and the people you keep describing as wrong. They don't look beyond the surface, you look beyond it and see something that looks just like the surface (or the surface that's easiest to look at). They layer mysterious things on top of the theory to explain it, you layer mysterious things on top of physics to explain it. Their explanations all have fatal flaws, yours has just one serious problem. Their explanations don't actually explain anything, yours renames things (e.g. probability becomes "subjective expectation") without clearing up the cause of their relationships -- at least, not yet.

Comment by Brian_Jaress2 on Zombies: The Movie · 2008-04-20T10:31:07.000Z · LW · GW

Hopefully Anonymous, if you think a point should be addressed, make that point.

I say Eliezer has finally dealt with the zombie issue as it deserves.

It's a silly idea that invites convoluted discussion, which makes it look sophisticated and hard to refute.

Comment by Brian_Jaress2 on Extensions and Intensions · 2008-02-06T19:21:33.000Z · LW · GW

I once saw a person from Korea discover, much to her surprise, that pennies are not red. She had been able to speak English for a while and could correctly identify a stop sign or blood as red, and she had seen plenty of pennies before discovering this.

In Korea they put the color of pennies and the color of blood in the same category and give that category a Korean name.

Comment by Brian_Jaress2 on Newcomb's Problem and Regret of Rationality · 2008-02-01T23:20:00.000Z · LW · GW

In arguing for the single box, Yudkowsky has made an assumption that I disagree with: at the very end, he changes the stakes and declares that your choice should still be the same.

My way of looking at it is similar to what Hendrik Boom has said. You have a choice between betting on Omega being right and betting on Omega being wrong.

A = Contents of box A

B = What may be in box B (if it isn't empty)

A is yours, in the sense that you can take it and do whatever you want with it. One thing you can do with A is pay it for a chance to win B if Omega is right. Your other option is to pay nothing for a chance to win B if Omega is wrong.

Then just make your bet based on what you know about Omega. As stated, we only know his track record over 100 attempts, so use that. Don't worry about the nature of causality or whether he might be scanning your brain. We don't know those things.

If you do it that way, you'll probably find that your answer depends on A and B as well as Omega's track record.

I'd probably put Omega at around 99%, as Hendrik did. Keeping A at a thousand dollars, I'd one-box if B were a million dollars or if B were something I needed to save my life. But I'd two-box if B were a thousand dollars and one cent.
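
(With my hypothetical numbers, the bet works out like this -- a sketch of my own framing above, not of anything in the post:)

```python
def one_box_ev(a, b, p):
    # give up A, win B whenever Omega predicted correctly
    return p * b

def two_box_ev(a, b, p):
    # keep A, and B is only there when Omega predicted incorrectly
    return a + (1 - p) * b

a, p = 1000, 0.99               # box A and my guess at Omega's accuracy
for b in (1_000_000, 1000.01):  # the two versions of box B
    print(b, one_box_ev(a, b, p), two_box_ev(a, b, p))
# B = 1,000,000: one-boxing wins (990,000 vs 11,000)
# B = 1,000.01:  two-boxing wins (about 990 vs about 1,010)
```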

So I think changing A and B and declaring that your strategy must stay the same is invalid.

Comment by Brian_Jaress2 on Rational vs. Scientific Ev-Psych · 2008-01-05T07:48:47.000Z · LW · GW

"There are some people who will, if you just tell them the Refrigerator Hypothesis, snort and say 'That's an untestable just-so story' and dismiss it out of hand; but if you start by telling them about the gaze-tracking experiment and then explain the evolutionary motivation, they will say, 'Huh, that might be right.'"

But do they actually think it's more likely to be true?

They didn't say it was impossible, they said it wasn't testable. Explain how to test it, and they don't say that. What's the problem?