Comments

Comment by Silas on Markets are Anti-Inductive · 2009-02-26T02:00:29.000Z · LW · GW

Well, now you f*in' tell me.

Comment by Silas on She has joined the Conspiracy · 2009-01-14T03:10:05.000Z · LW · GW

When I saw the picture, I assumed she was the woman you described in one of your Bayesian conspiracy stories that you post here. But then, she was in a pink jumpsuit, and had, I think, blond hair.

Comment by Silas on Nonperson Predicates · 2008-12-28T18:31:21.000Z · LW · GW

@Daniel_Franke: I was just describing a sufficient, not a necessary condition. I'm sure you can ethically get away with less. My point was just that, once you can make models that detailed, you needn't be prevented from using them altogether, because you wouldn't necessarily have to kill them (i.e. give them information-theoretic death) at any point.

Comment by Silas on Nonperson Predicates · 2008-12-28T05:05:45.000Z · LW · GW

@Tim_Tyler:

The main problem with death is that valuable things get lost. Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value. In summary, I don't see why this issue would be much of a problem.

I was going to say something similar, myself. All you have to do is constrain the FAI so that it's free to create any person-level models it wants, as long as it also reserves enough computational resources to preserve a copy so that the model citizen can later be re-instantiated in their virtual world, without any subjective feeling of discontinuity.

However, that still doesn't obviate the question. Since the FAI has limited resources, it still has to decide which models it must reserve space to preserve, in order to judge whether the model's greater utility justifies the additional resources it requires. Then again, it could just accelerate the model so that the person lives out a full, normal life in their simulated universe, in which case they end up irreversibly dead in their own world anyway.

Comment by Silas on True Sources of Disagreement · 2008-12-09T15:08:42.000Z · LW · GW

Khyre: Setting or clearing a bit register regardless of what was there before is a one-bit irreversible operation (the other two one-bit input, one-bit output functions are constant 1 and constant 0).

*face-palm* I can't believe I missed that. Thanks for the correction :-)

Anyway, with that in mind, Landauer's principle has the strange implication that resetting anything to a known state, in such a way that the previous state can't be retrieved, necessarily releases heat, and the more information that previous state conveyed to the observer, the more heat is released. Okay, end threadjack...
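
To spell out the enumeration Khyre is pointing at (a trivial sketch; the naming is mine):

```python
from itertools import product

# Every function from one bit to one bit, written as the truth table (f(0), f(1)).
names = {(0, 0): "constant 0", (1, 1): "constant 1",
         (0, 1): "identity", (1, 0): "NOT"}
for f0, f1 in product((0, 1), repeat=2):
    name = names[(f0, f1)]
    reversible = f0 != f1            # invertible exactly when the two outputs differ
    print(f"{name:10s} reversible={reversible}")
```

Only the two constant functions (i.e., "set" and "clear") are irreversible, which is exactly the correction above.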

Comment by Silas on True Sources of Disagreement · 2008-12-08T18:05:18.000Z · LW · GW

I'm going to nitpick (mainly because of how much reading I've been doing about thermodynamics and information theory since your engines of cognition post):

Human neurons ... dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. ... it ought to be possible to run a brain at a million times the speed without ... invoking reversible computing or quantum computing.

I think you mean neurons dissipate a million times the thermodynamic minimum for an irreversible one-bit operation at room temperature, though perhaps it was clear you were talking about irreversible operations from the next sentence. A reversible operation can be made arbitrarily close to dissipating zero heat.

Even then, a million might be a low estimate. By Landauer's principle, a one-bit irreversible operation requires only kT ln 2 = 2.9e-21 J at 25 degrees C. Does the brain use more than 2.9e-15 J per synaptic operation?
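
A quick sanity check on the arithmetic (the brain-side figures below are rough, commonly cited ballpark numbers I'm assuming for illustration, not anything from the post):

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 298.15                          # 25 degrees C in kelvin

landauer = k_B * T * math.log(2)    # minimum heat per irreversible bit operation
print(f"Landauer limit at 25 C: {landauer:.2e} J")            # ~2.9e-21 J, as above

# Assumed ballpark brain numbers: ~20 W total power, ~1e14 synapses,
# ~1 synaptic event per synapse per second.
brain_power_W = 20.0
synaptic_events_per_s = 1e14
energy_per_event = brain_power_W / synaptic_events_per_s
print(f"Energy per synaptic event: {energy_per_event:.1e} J")               # ~2e-13 J
print(f"Multiple of the Landauer limit: {energy_per_event / landauer:.1e}") # ~7e7
```

If those assumptions are even roughly right, the factor is tens of millions, so a million does look like a low estimate.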

Also, how can a truly one-bit digital operation be irreversible? The only such operations that both input and output one bit are the identity and inversion gates, both of which are reversible.

I know, I know, tangential to your point...

Comment by Silas on Failure By Analogy · 2008-11-18T18:08:57.000Z · LW · GW

Nick_Tarleton: I think you're going a bit too far there. Stability control theory had by that time been rigorously and scientifically studied, dating back to Watt's flyball governor in the 18th century (which regulated shaft speed by letting weighted balls swing outward as the shaft sped up, closing the throttle valve through a linkage) and probably even before that with the thermostatic incubator (which used heat to move a valve that let in just the right amount of cooling air). Then all throughout the 19th century engineers attacked the problem of "hunting" on trains, where they would unsettlingly lurch faster and slower. Bicycles, a fairly recent invention then, had to tackle a stability problem somewhat similar (as many bicycle design constraints are) to what aircraft deal with.

Certainly, many inventors grasped at straws in an attempt to replicate functionality, but the idea that they considered the stability implications of the beak isn't too outlandish.

Comment by Silas on The Weighted Majority Algorithm · 2008-11-14T15:46:14.000Z · LW · GW

@Scott_Aaronson: Previously, you had said the problem is solved with certainty after O(1) queries (which you had to, to satisfy the objection). Now, you're saying that after O(1) queries, it's merely a "high probability". Did you not change which claim you were defending?

Second, how can the required number of queries not depend on the problem size?

Finally, isn't your example a special case of exactly the situation Eliezer_Yudkowsky describes in this post? In it, he pointed out that the "worst case" corresponds to an adversary who knows your algorithm. But if you specifically exclude that possibility, then a deterministic algorithm is just as good as the random one, because it would have the same correlation with a randomly chosen string. (It's just like the case in the lockpicking problem: guessing all the sequences in order has no advantage over randomly picking and crossing off your list.) The apparent success of randomness is again due to, "acting so crazy that a superintelligent opponent can't predict you".

Which is why I summarize Eliezer_Yudkowsky's position as: "Randomness is like poison. Yes, it can benefit you, but only if you use it on others."

Comment by Silas on The Weighted Majority Algorithm · 2008-11-14T14:37:29.000Z · LW · GW

Could Scott_Aaronson or anyone who knows what he's talking about, please tell me the name of the n/4 left/right bits problem he's referring to, or otherwise give me a reference for it? His explanation doesn't seem to make sense: the deterministic algorithm needs to examine 1+n/4 bits only in the worst case, so you can't compare that to the average-case performance of the randomized algorithm. (The average case for the deterministic one would, it seems, be n/8 + 1.) Furthermore, I don't understand how the random method could average out to a size-independent constant.

Is the randomized algorithm one that uses a quantum computer or something?
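
For what it's worth, here is a minimal simulation of what I take the problem to be, assuming the standard formulation (an n-bit string with exactly n/4 ones, promised to lie entirely in one half, and the task is to say which half); the exact setup is my guess:

```python
import random

def make_instance(n, rng):
    """n-bit string with exactly n//4 ones, all in one randomly chosen half."""
    half = rng.choice(["left", "right"])
    lo = 0 if half == "left" else n // 2
    ones = set(rng.sample(range(lo, lo + n // 2), n // 4))
    return [1 if i in ones else 0 for i in range(n)], half

def randomized_guess(bits, budget, rng):
    """Query random positions in the left half; any 1 proves 'left', else guess 'right'."""
    n = len(bits)
    for _ in range(budget):
        if bits[rng.randrange(n // 2)] == 1:
            return "left"
    return "right"

rng = random.Random(0)
for n in (40, 400, 4000):
    trials, errors = 5000, 0
    for _ in range(trials):
        bits, half = make_instance(n, rng)
        errors += (randomized_guess(bits, budget=10, rng=rng) != half)
    # Error rate is about (1/2) * 2**-budget regardless of n; a deterministic
    # algorithm facing an adversary needs n/4 + 1 queries in the worst case.
    print(n, errors / trials)
```

The randomized strategy's query budget (and error rate) is independent of n, which I take to be the claim; what I still don't see is how that is a fair comparison against the deterministic algorithm's worst case.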

Comment by Silas on The Weighted Majority Algorithm · 2008-11-13T06:00:13.000Z · LW · GW

Someone please tell me if I understand this post correctly. Here is my attempt to summarize it:

"The two textbook results are results specifically about the worst case. But you only encounter the worst case when the environment can extract the maximum amount of knowledge it can about your 'experts', and exploits this knowledge to worsen your results. For this case (and nearby similar ones) only, randomizing your algorithm helps, but only because it destroys the ability of this 'adversary' to learn about your experts. If you instead average over all cases, the non-random algorithm works better."

Is that the argument?
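
For reference, here is a textbook-style sketch of the algorithm being discussed (the standard Littlestone-Warmuth formulation; the parameter names and the beta = 1/2 choice are the usual ones, not necessarily exactly what the post uses):

```python
import random

def weighted_majority(expert_preds, outcomes, beta=0.5, randomized=False, seed=0):
    """expert_preds[t][i] is expert i's 0/1 prediction in round t; outcomes[t] is the truth.

    Deterministic version: predict by weighted vote.
    Randomized version: follow one expert sampled in proportion to its weight.
    Returns the total number of mistakes made.
    """
    rng = random.Random(seed)
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        if randomized:
            i = rng.choices(range(n), weights=w)[0]
            guess = preds[i]
        else:
            vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
            guess = 1 if vote_for_1 >= sum(w) / 2 else 0
        mistakes += (guess != y)
        # Shrink the weight of every expert that was wrong this round.
        w = [wi * (beta if p != y else 1.0) for wi, p in zip(w, preds)]
    return mistakes
```

The "two textbook results" are, I take it, the worst-case mistake bounds for these two variants (roughly 2.41(m + log2 n) for the deterministic vote versus about 1.39m + 2 ln n expected for the randomized one, where m is the best expert's mistake count), which is why the worst-case framing is central to the comparison.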

Comment by Silas on Worse Than Random · 2008-11-12T16:54:47.000Z · LW · GW

@Caledonian and Tiiba: If we knew where the image was, we wouldn't need the dots.

Okay, let's take a step back: the scenario, as Caledonian originally stated it, was that the museum people could make a patron better see the image by putting random dots on it. (Pronouns avoided for clarity.) So the problem is framed as whether you can make someone else see an image that you already know is there, by somehow exploiting randomness. My response is that, if you already know the image is there, you can improve beyond randomness by just putting the dots there in a way that highlights the hidden image's lines. In any case, from that position, Eliezer_Yudkowsky is correct in that you can improve the patron's detection ability for that image beyond randomness only by exploiting your non-random knowledge about the image.

Now, if you want to reframe that scenario, you have to adjust the baselines appropriately. (Apples to apples and all.) Let's look at a different version:

I don't know if there are subtle, barely-visible images that will come up in my daily life, but if there are, I want to see them. Can I make myself better off by adding random gray dots to my vision? By scattering physical dots wherever I go?

I can't see how it would help, but feel free to prove me wrong.
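
To make the museum scenario concrete, here is a toy dithering model (all numbers are made up for illustration): random dots can push a sub-threshold image over the viewer's detection threshold, but dots placed using knowledge of that threshold do strictly better, which is the point about exploiting non-random knowledge.

```python
import random

random.seed(0)

# A faint "image": pattern pixels at intensity 0.3 on a 0.0 background,
# viewed by someone who only registers pixels brighter than 0.5.
n = 10_000
image = [1 if i % 7 == 0 else 0 for i in range(n)]        # arbitrary hidden pattern
faint = [0.3 * p for p in image]

def detected(values, threshold=0.5):
    return [1 if v > threshold else 0 for v in values]

def fraction_seen(seen):
    on_pixels = [s for s, p in zip(seen, image) if p == 1]
    return sum(on_pixels) / len(on_pixels)

# Neither scheme below pushes the 0.0 background over the threshold, so no false positives.
print("no dots:      ", fraction_seen(detected(faint)))                 # 0.0 -- invisible
noisy = [v + random.uniform(0, 0.5) for v in faint]
print("random dots:  ", round(fraction_seen(detected(noisy)), 2))       # ~0.6 -- partly visible
offset = [v + 0.25 for v in faint]      # a uniform offset chosen knowing the threshold
print("informed dots:", fraction_seen(detected(offset)))                # 1.0 -- fully visible
```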

Comment by Silas on Worse Than Random · 2008-11-11T21:47:02.000Z · LW · GW

@Joshua_Simmons: I got to thinking about that idea as I read today's post, but I think Eliezer_Yudkowsky answered it therein: Yes, it's important to experiment, but why must your selection of what to try out be random? You should be able to do better by exploiting all of your knowledge about the structure of the space, so as to pick better ways to experiment. To the extent that your non-random choices of what to test do worse than random, it is because your understanding of the problem is so poor as to be worse than random.

(And of course, the only time when searching the small space around known-useful points is a good idea, is when you already have knowledge of the structure of the space...)

@Caledonian: That's an interesting point. But are you sure the effect you describe (at science museums) isn't merely due to the brain now seeing a new color gradient in the image, rather than randomness as such? Don't you get the same effect from adding an orderly grid of dots? What about from aligning the dots along the lines of the image?

Remember, Eliezer_Yudkowsky's point was not that randomness can never be an improvement, but that it's always possible to improve beyond what randomness would yield.

Comment by Silas on Lawful Uncertainty · 2008-11-11T15:04:15.000Z · LW · GW

So, in short: "Randomness is like poison: Yes, it can benefit you, but only if you feed it to people you don't like."

Comment by Silas on Building Something Smarter · 2008-11-05T21:42:33.000Z · LW · GW

Will_Pearson: Is it literally? Are you saying I couldn't send a message to someone that enabled them to print out a list of the first hundred integers without referencing a human's cognitive structure.

Yes, that's what I'm saying. It's counterintuitive because you so effortlessly reference others' cognitive structures. In communicating, you assume a certain amount of common understanding, which allows you to know whether your message will be understood. In sending such a message, you rely on that information. You would have to think, "will they understand what this sentence means", "can they read this font", etc.

Tim_Tyler: The whole idea looks like it needs major surgery to me - at least I can't see much of interest in it as it stands. Think you can reformulate it so it makes sense? Be my guest.

Certainly. All you have to do is read it so you can tell me what about it doesn't make sense.

Anyway, such a criticism cuts against the original claim as well - since that contained "know" as well as "don't know".

Which contests the point how?

Comment by Silas on Building Something Smarter · 2008-11-05T19:12:23.000Z · LW · GW

Okay, fair challenge.

I agree about your metal example, but it differs significantly from my discussion of the list-output program for the non-trivial reason I gave: specifically, the output is defined by its impact on people's cognitive structure.

Look at it this way: Tim_Tyler claims that I know everything there is to know about the output of a program that spits out the integers from 1 to 100. But, when I get the output, what makes me agree that I am in fact looking at those integers? Let's say that when printing it out (my argument can be converted to one about monitor output), I see blank pages. Well, then I know something messed up: the printer ran out of ink, was disabled, etc.

Now, here's where it gets tricky: what if instead it only sorta messes up: the ink is low and so it's applied unevenly so that only parts of the numbers are missing? Well, depending on how badly it messes up, I may or may not still recognize the numbers as being the integers 1-100. It depends on whether it retains enough of the critical characteristics of those numbers for me to so recognize them.

To tie it back to my original point, what this all means is that the output is only defined with respect to a certain cognitive system: that determines whether the numbers are in fact recognizable as 9's, etc. If it's not yet clear what the difference is between this and metal's melting point, keep in mind that we can write a program to find a metal's melting point, but we can't write a program that will look at a printout and know if it retains enough of its form that a human recognizes it as any specific letter -- not yet, anyway.

Comment by Silas on Building Something Smarter · 2008-11-05T17:36:23.000Z · LW · GW

Further analysis, you say, Tim_Tyler? Could you please redirect effort away from putdowns and into finding what was wrong with the reasoning in my previous comment?

Comment by Silas on Building Something Smarter · 2008-11-04T23:55:53.000Z · LW · GW

Very worthwhile points, Tim_Tyler.

First of all, the reason for my spirited defense of MH's statement is that it looked like a good theory because of how concise it was, and how consistent it was with my knowledge of programs. So, I upped my prior on it and tended to see apparent failures of it as a sign I'm not applying it correctly, and that further analysis could yield a useful insight.

And I think that belief is turning out to be true:

It seems to specify that the output is what is unknown - not the sensations that output generates in any particular observer.

But the sensations are a property of the output. In a trivial sense: it is a fact about the output, that a human will perceive it in a certain way.

And in a deeper sense, the numeral "9" means "that which someone will perceive as a symbol representing the number nine in the standard number system". I'm reminded of Douglas Hofstadter's claim that the definition of individual letters is an AI-complete problem, because you must know a wealth of information about the cognitive system to be able to identify the full set of symbols someone will recognize as, e.g., an "A".

This yields the counterintuitive result that, for certain programs, you must reference the human cognitive system (or some concept isomorphic thereto) in listing all the facts about the output. That result must hold for any program whose output will eventually establish mutual information with your brain.

Am I way off the deep end here? :-/

Comment by Silas on Building Something Smarter · 2008-11-04T22:47:21.000Z · LW · GW

@Eliezer_Yudkowsky: It wouldn't be an exact sequence repeating, since the program would have to handle contingencies, like cows being uncooperative because of insufficiently stimulating conversation.

Comment by Silas on Building Something Smarter · 2008-11-04T20:45:48.000Z · LW · GW

Nick_Tarleton: Actually, Tim_Tyler's claim would still be true there, because you may want to print out that list, even if you knew some exact arrangement of atoms with that property.

However, I think Marcello's Rule is still valid there and survives Tim_Tyler's objection: in that case, what you don't know is "the sensation arising from looking at the numbers 1 through 100 prettily printed". Even if you had seen such a list before, you probably would want to print it out unless your memory were perfect.

My claim generalizes nicely. For example, even if you ran a program for the purpose of automating a farm, and knew exactly how the farm would work, then what you don't know in that case is "the sensation of subsisting for x more days". Although Marcello's Rule starts to sound vacuous at that point.

Hey, make a squirrely objection, get a counterobjection twice as squirrely ;-)

Comment by Silas on Building Something Smarter · 2008-11-03T14:09:28.000Z · LW · GW

Quick question: How would you build something smarter, in a general sense, than yourself? I'm not doubting that it's possible, I'm just interested in knowing the specific process one would use.

Keep it brief, please. ;-)

Comment by Silas on Efficient Cross-Domain Optimization · 2008-10-29T02:42:42.000Z · LW · GW

With all this talk about poisoned meat and CDSes, I was inspired to draw this comic.

Comment by Silas on Efficient Cross-Domain Optimization · 2008-10-28T22:04:44.000Z · LW · GW

Adam_Ierymenko: Evolution has evolved many strategies for evolution-- this is called the evolution of evolvability in the literature. These represent strategies for more efficiently finding local maxima in the fitness landscape under which these evolutionary processes operate. Examples include transposons, sexual reproduction,

Yes, Eliezer_Yudkowsky has discussed this before and calls that optimization at the meta-level. Here is a representative post where he makes those distinctions.

Looking over the history of optimization on Earth up until now, the first step is to conceptually separate the meta level from the object level - separate the structure of optimization from that which is optimized. If you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats. On the meta level we have things like natural selection of asexual populations, and sexual recombination.

Comment by Silas on Traditional Capitalist Values · 2008-10-17T19:05:19.000Z · LW · GW

I've raised this before when Eliezer made the point, and I'll raise it again:

There most certainly is truth to "Suicide bombers are cowardly". Someone who chooses to believe a sweet, comforting lie, and go to the grave rather than face the possibility of having to deal with being wrong, is cowardly, or the term has no meaning. They might have been less cowardly with respect to their lives, but they were absolutely more cowardly with respect to their intellectual integrity. And FWIW, American soldiers who meet all of that are just as cowardly.

Most people making the "Suicide bombers are cowardly" argument are doing so for the wrong reason. Doesn't make them wrong.

For that matter, many in Islamic countries do hate the West for its freedoms: they often do not accept the freedom to criticize Islam, and often go to great lengths to punish such blasphemers. As freedom of speech, including anti-religious speech, is considered an important freedom, that criticism is at least partially right.

The truth of the matter is something more like, "Many (not all) in Islamic countries hate Western freedoms ... but not enough to commit acts of terrorism unless and until the West forcefully intervenes in their countries."

Comment by Silas on Ends Don't Justify Means (Among Humans) · 2008-10-14T23:01:03.000Z · LW · GW

in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder ... I refuse to extend this reply to myself, because the epistemological state you ask me to imagine, can only exist among other kinds of people than human beings.

Interesting reply. But the AIs are programmed by corrupted humans. Do you really expect to be able to check the full source code? That you can outsmart the people who win obfuscated code contests?

How is the epistemological state of human-verified, human-built, non-corrupt AIs, any more possible?

Comment by Silas on AIs and Gatekeepers Unite! · 2008-10-09T21:50:57.000Z · LW · GW

@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?

That was basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, saying it's a cheap trick, while some non-participants said it meets the spirit and letter of the rules.

Comment by Silas on Shut up and do the impossible! · 2008-10-09T17:06:23.000Z · LW · GW

One more thing: my concerns about "secret rules" apply just the same to Russell_Wallace's defense that there were no "cheap tricks". What does Russell_Wallace consider a non-"cheap trick" in convincing someone to voluntarily, knowingly give up money and admit they got fooled? Again, secret rules all around.

Comment by Silas on Shut up and do the impossible! · 2008-10-09T17:00:30.000Z · LW · GW

@Russell_Wallace & Ron_Garret: Then I must confess the protocol is ill-defined to the point that it's just a matter of guessing what secret rules Eliezer_Yudkowsky has in mind (and which the gatekeeper casually assumed), which is exactly why seeing the transcript is so desirable. (Ironically, unearthing the "secret rules" people adhere to in outputting judgments is itself the problem of Friendliness!)

From my reading, the rules literally make the problem equivalent to whether you can convince people to give money to you: they must know that letting the AI out of the box means ceding cash, and that keeping that cash is simply a matter of being unwilling to let the AI out.

So that leaves only the possibility that the gatekeeper feels obligated to take on the frame of some other mind. That reduces the AI's problem to the problem of whether a) you can convince the gatekeeper that that frame of mind would let the AI out, and b) that, for purposes of that amount of money, they are ethically obligated to let the experiment end the way that frame of mind would.

...which isn't what I see as the protocol specifying: it seems to me to instead specify the participant's own mind, not some mind he imagines. Which is why I conclude the test is too ill-defined.

Comment by Silas on Shut up and do the impossible! · 2008-10-09T16:10:10.000Z · LW · GW

When first reading the AI-Box experiment a year ago, I reasoned that if you follow the rules and spirit of the experiment, the gatekeeper must be convinced to knowingly give you $X and knowingly show gullibility. From that perspective, it's impossible. And even if you could do it, that would mean you've solved a "human-psychology-complete" problem and then [insert point about SIAI funding and possibly about why you don't have 12 supermodel girlfriends].

Now, I think I see the answer. Basically, Eliezer_Yudkowsky doesn't really have to convince the gatekeeper to stupidly give away $X. All he has to do is convince them that "It would be a good thing if people saw that the result of this AI-Box experiment was that the human got tricked, because that would stimulate interest in {Friendliness, AGI, the Singularity}, and that interest would be a good thing."

That, it seems, is the one thing that would make people give up $X in such a circumstance. AFAICT, it adheres to the spirit of the set-up since the gatekeeper's decision would be completely voluntary.

I can send my salary requirements.

Comment by Silas on Intrade and the Dow Drop · 2008-10-01T15:47:37.000Z · LW · GW

1) Eliezer_Yudkowsky: You should be comparing the percentage (1) change in the S&P 500 (2) to the change (3) in probability of any bailout happening (4) over the days in which the changes occurred (5) and have used more than one day (6). There, that's six errors in your calculation I count, of varying severity.

2) Tim_Tyler: Yeah, I'm surprised that hasn't been posted on Slashdot yet. I want to be the first to propose the theory that United Airlines was behind that, since Google was the cause of a recent fake plunge in United's stock price, when they highly ranked an old story about United's bankruptcy, fooling some into thinking it was happening again and that they needed to sell. Okay, maybe not "cause", but they started the chain reaction, and United blames them.

3) Peter_McCluskey: Whoa whoa whoa, are you now admitting that measuring the correlation between InTrade contracts and financial variables over a succession of days rather than a single day is important?

Comment by Silas on Ban the Bear · 2008-09-20T02:04:59.000Z · LW · GW

V, Ori, and everyone else: In my recent post, I explain how you can synthesize short and long positions. You have to ban a lot more than short-selling to ban short-selling, and a lot more than margin-buying to ban leveraged longs.
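
(For anyone wondering what "synthesize" means here: one standard construction, via put-call parity, replicates a short stock position with options. I don't know whether that's the exact construction in that post, so treat this as an illustrative sketch.)

```python
def synthetic_short_payoff(S, K):
    """Payoff at expiry of a long put plus a short call, both struck at K."""
    long_put = max(K - S, 0.0)
    short_call = -max(S - K, 0.0)
    return long_put + short_call

K = 100.0
for S in (50.0, 100.0, 150.0):
    # The combination pays exactly K - S, the same exposure as being short the stock.
    print(S, synthetic_short_payoff(S, K), K - S)
```

Which is one reason banning short sales alone doesn't remove short exposure from the market.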

Comment by Silas on Optimization · 2008-09-14T05:46:40.000Z · LW · GW

@Lara_Foster: You see, it seems quite likely to me that humans evaluate utility in such a circular way under many circumstances, and therefore aren't performing any optimizations.

Eliezer touches on that issue in "Optimization and the Singularity":

Natural selection prefers more efficient replicators. Human intelligences have more complex preferences. Neither evolution nor humans have consistent utility functions, so viewing them as "optimization processes" is understood to be an approximation.

By the way, Ask middle school girls to rank boyfriend preference and you find Billy beats Joey who beats Micky who beats Billy...

Would you mind peeking into your mind and explaining why that arises? :-) Is it just a special case of the phenomenon you described in the rest of your post?

Comment by Silas on The True Prisoner's Dilemma · 2008-09-04T18:42:56.000Z · LW · GW

By the way:

Human: "What do you care about 3 paperclips? Haven't you made trillions already? That's like a rounding error!" Paperclip Maximizer: "How can you talk about paperclips like that?"


PM: "What do you care about a billion human algorithm continuities? You've got virtually the same one in billions of others! And you'll even be able to embed the algorithm in machines one day!" H: "How can you talk about human lives that way?"

Comment by Silas on The Truly Iterated Prisoner's Dilemma · 2008-09-04T18:21:13.000Z · LW · GW

Wait wait wait: Isn't this the same kind of argument as in the dilemma about "We will execute you within the next week on a day that you won't expect"? (Sorry, don't know the name for that puzzle.) In that one, the argument goes that if it's the last day of the week, the prisoner knows that's the last chance they have to execute him, so he'll expect it, so it can't be that day. But then, if it's the next-to-last day, he knows they can't execute him on the last day, so they have to execute him on that next-to-last day. But then he expects it! And so on.

So, after concluding they can't execute him, they execute him on Wednesday. "Wait! But I concluded you can't do this!" "Good, then you didn't expect it. Problem solved."

Just as in that problem, you can't stably have an "(un)expected execution day", you can't have an "expected future irrelevance" in this one.

Do I get a prize? No? Okay then.

Comment by Silas on Dreams of AI Design · 2008-08-27T22:12:19.000Z · LW · GW

Vassar handles personal networking? Dang, then I probably shouldn't have mouthed off at Robin right after he praised my work.

Comment by Silas on Dreams of AI Design · 2008-08-27T20:01:20.000Z · LW · GW

Eliezer, if the US government announced a new Manhattan Project-grade attempt to be the first to build AGI, and put you in charge, would you be able to confidently say how such money should be spent in order to make genuine progress on such a goal?

Comment by Silas on "Arbitrary" · 2008-08-12T22:06:23.000Z · LW · GW

It would really rock if you could show the context in which someone used the word "arbitrary" but in a way that just passed the recursive buck.

Here's where I would use it:

[After I ask someone a series of questions about whether certain actions would be immoral]

Me: Now you're just being arbitrary!
Eliezer Yudkowsky: Taboo "arbitrary"!
Me: Okay, he's deciding what's immoral based on whim.
Eliezer Yudkowsky: Taboo "whim"!
Me: Okay, his procedures for deciding what's immoral can't be articulated in finitely many words to a stranger such that he and the stranger, using his articulation of his morality, yield the same answers to all morality questions.

I'll send my salary requirements if you want. ;-)

Comment by Silas on Morality as Fixed Computation · 2008-08-08T04:24:11.000Z · LW · GW

Wait a sec: I'm not sure people do outright avoid modifying their own desires so as to make the desires easier to satisfy, as you are claiming here:

We, ourselves, do not imagine the future and judge, that any future in which our brains want something, and that thing exists, is a good future. If we did think this way, we would say: "Yay! Go ahead and modify us to strongly want something cheap!"

Isn't that exactly what people do when they study ascetic philosophies and otherwise try to see what living simply is like? And would people turn down a pill that made vegetable juice taste like a milkshake and vice versa?

Comment by Silas on Hiroshima Day · 2008-08-07T03:32:35.000Z · LW · GW

Who cares, I want to hear about building AIs.

Comment by Silas on The Meaning of Right · 2008-07-29T18:03:00.000Z · LW · GW

Matt Simpson: Many an experiment has been thought for the sole purpose of showing how utilitarianism is in direct conflict with our moral intuitions.

I disagree, or you're referring to something I haven't heard of. If I know what you mean here, those are a species of strawman ("act") utilitarianism that doesn't account for the long-term impact and adjustment of behavior that results.

(I'm going to stop giving the caveats; just remember that I accept the possibility you're referring to something else.)

For example, if you're thinking about cases where people would be against a doctor deciding to carve up a healthy patient against his will to save ~40 others, that's not rejection of utilitarianism. It can be recognition that once a doctor does that, people will avoid them in droves, driving up risks all around.

Or, if you're referring to the case of how people would e.g. refuse to divert a train's path so it hits one person instead of five, that's not necessarily an anti-utilitarian intuition; there are many factors at play in such a scenario. For example, the one person may be standing in a normally safe spot and so consented to a lower level of risk, so by diverting the train you screw up people's ability to see what's really safe, etc.

Comment by Silas on Can Counterfactuals Be True? · 2008-07-24T16:39:24.000Z · LW · GW

"If the federal government hadn't bought so much stuff from GM, GM would be a lot smaller today." "If the federal government hadn't bought so much stuff from GM, GM would have instead been tooling up to produce stuff other buyers did want and thus could very well have become successful that way."

???

Comment by Silas on Whither Moral Progress? · 2008-07-16T15:08:22.000Z · LW · GW

I think a lot of people are confusing a) improved ability to act morally, and b) improved moral wisdom.

Remember, things like "having fewer deaths and conflicts" do not by themselves mean moral progress. It's only moral progress if people in general change their evaluation of the merit of, e.g., fewer deaths and conflicts.

So it really is a difficult question Eliezer is asking: can you imagine how you would have/achieve greater moral wisdom in the future, as evaluated with your present mental faculties?

My best answer is yes, in that I can imagine being better able to discern inherent conflict between certain moral principles. Haphazard example: today, I might believe that a) assaulting people out of the blue is bad, and b) credibly demonstrating the ability to fend off assaulters is good. In the future, I might notice that these come into conflict: if people value both of these, some people will inevitably have a utility function that encourages them to do a). So then I find out more precisely how much of one comes at how much cost of the other, and that pursuing certain combinations of them is impossible.

I call that moral progress. Am I right, assuming the premises?

Comment by Silas on Rebelling Within Nature · 2008-07-14T13:56:31.000Z · LW · GW

I picked up The Moral Animal on Eliezer's recommendation, after becoming so immersed I read 50 pages in the bookstore. Was not disappointed. This is the most eye-opening book I've read in quite a while, nearly couldn't put it down. And this is from someone who used to stay a mile away from anything related to biology on the grounds that it's "boring".

Will probably blog it. Will also continue to drop subjects from sentences.

Comment by Silas on The Genetic Fallacy · 2008-07-11T14:09:53.000Z · LW · GW

Eliezer, I think the point you've made here generalizes to several things in the standard fallacy lists, which usually take the form:

X Fallacy: Believing Y because of Z, when Z doesn't ABSOLUTELY GUARANTEE Y.

...even though, it turns out, Z should raise the probability you assign to Y.

For example:

Appeal to authority: An expert in the field believing something within that field doesn't guarantee its truth, but is strong evidence.

Argument from ignorance: The fact that you haven't heard of any good arguments for X doesn't mean X is necessarily false, but if most of humanity has conducted a motivated search for such arguments and come up lacking, and you've checked all such justifications, that's strong evidence against X.
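
In Bayesian terms (my restatement, not something from the post), "Z should raise the probability you assign to Y" just means that Z is more likely given Y than given not-Y, assuming 0 < P(Y) < 1:

$$
P(Y \mid Z) \;=\; \frac{P(Z \mid Y)\,P(Y)}{P(Z \mid Y)\,P(Y) + P(Z \mid \neg Y)\,P(\neg Y)} \;>\; P(Y)
\quad\Longleftrightarrow\quad
P(Z \mid Y) \;>\; P(Z \mid \neg Y).
$$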

Comment by Silas on The Fear of Common Knowledge · 2008-07-09T16:56:53.000Z · LW · GW

How does this apply to religion? Is there e.g. a situation in which everyone knows that the tenets of the religion are false, and knows that everyone else knows they're false, but if this becomes common knowledge, the bond between the people breaks apart?

Comment by Silas on Moral Complexities · 2008-07-05T19:22:23.000Z · LW · GW

These are difficult questions, but I think I can tackle some of them:

"Why would anyone want to change what they want?"

If a person wants to change from valuing A to valuing B, they are simply saying that they value B, but that pursuing B requires short-term sacrifices, and in the short term valuing A may feel psychologically easier, even though it sacrifices B. They thus want to value B so that the tradeoff becomes psychologically easier to make.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

That's recognizing that they are violating an implicit understanding with others, doing something they would not want others to do to them and would perhaps hope others don't find out about. They are also feeling a measure of psychological pain from doing so, as a result of their empathy with others.

Comment by Silas on No Universally Compelling Arguments · 2008-06-26T13:36:05.000Z · LW · GW

Unknown: it's okay, maybe you meant a different mind, like a grade-schooler taking a multiplication quiz ;-)

Anyway, if I were going to taboo "correctness", I would choose the more well-defined "lack of Dutch-bookability".
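
A made-up two-bet illustration of what Dutch-bookability means here (prices and stakes are arbitrary): someone whose probabilities for A and not-A sum to more than 1 will accept a pair of bets that loses money no matter what.

```python
# Hypothetical bettor who prices "$1 if A" at $0.60 and "$1 if not-A" at $0.60.
p_A, p_not_A = 0.60, 0.60          # incoherent: the prices sum to 1.20
stake = 1.00

cost = (p_A + p_not_A) * stake     # the bettor buys both bets at their own prices
payout = stake                     # exactly one of A / not-A occurs, so exactly one bet pays $1
print(f"Guaranteed loss: ${cost - payout:.2f}")   # $0.20 regardless of whether A happens
```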

Comment by Silas on The Design Space of Minds-In-General · 2008-06-25T15:42:44.000Z · LW · GW

Shane_Legg: factoring in approximations, it's still about zero. I googled a lot hoping to find someone actually using some version of it, but only found the SIAI's blog's python implementation of Solomonoff induction, which doesn't even compile on Windows.

Comment by Silas on The Design Space of Minds-In-General · 2008-06-25T13:19:01.000Z · LW · GW

Does anyone know the ratio of discussion of implementations of AIXI/-tl, to discussion of its theoretical properties? I've calculated it at about zero.

Comment by Silas on Heading Toward Morality · 2008-06-20T22:13:58.000Z · LW · GW

Alright, since we're at this "summary/resting point", I want to re-ask a clarifying question that never got answered. On the very important "What an Algorithm Feels like from inside" post, I asked what the heck each graph (Network 1 and 2) was supposed to represent, and never got a clear answer.

Now, before you lecture me about how I should have figured it out by now, let's be realistic. Even the very best answer I got requires me to download a huge pdf file and read a few chapters, most of it irrelevant to understanding what Eliezer meant to represent with each network. And yet, with a short sentence, you can explain the algorithm that each graph represents, saving me and every other person who comes to read the post lots and lots of time, and it would be nearly effortless for someone fluent in the topic.

Could somebody PLEASE, PLEASE explain how I should read those networks, such that the rest of the post makes sense?

Comment by Silas on LA-602 vs. RHIC Review · 2008-06-19T13:41:15.000Z · LW · GW

You can't admit a single particle of uncertain danger if you want your science's funding to survive. ... So no one can do serious analysis of existential risks anymore, because just by asking the question, you're threatening the funding of your whole field.

And by writing this blog post, you're doing what to the LHC...?