Comments

Comment by Caledonian2 on OB Status Update · 2009-01-29T18:43:45.000Z · LW · GW

[bored now. comment deleted.]

Comment by Caledonian2 on OB Status Update · 2009-01-28T16:52:04.000Z · LW · GW

"Caledonian, I look forward to being able to downvote your comments instead of deleting them."

What, the software forces you to delete my comments? Someone's holding a gun to your head?

I look forward to your forming a completely closed memetic sphere around yourself, instead of this partially-closed system you've already established.

Comment by Caledonian2 on OB Status Update · 2009-01-27T15:59:15.000Z · LW · GW

"you will get a semi-obsessed sub-culture of users with a few shared biases who effectively take over"

Of course! That's the point of the exercise.

The hope is that the shared biases will be ones that the site owner considers valuable and useful, and that the prospective audience for the site wants to read. A completely unbiased user culture would view anything that was posted (or not posted) as equally valuable. What use is that?

Besides, the site as it stands is already dominated by two people's biases... and as Eliezer seems to do most of the moderation, it's effectively one person. If that were a problem, why are you here?

Comment by Caledonian2 on 31 Laws of Fun · 2009-01-26T15:48:54.000Z · LW · GW

Specifying an entire world by listing every single thing you want to be included in it would take a very long time. Most worlds complex enough to be interesting are far too complicated to talk about in that manner.

Perhaps it would be more efficient to list the specific things you want to be excluded. Presumably the set of things you object to is far smaller than those you prefer or are neutral towards.

Comment by Caledonian2 on Failed Utopia #4-2 · 2009-01-22T19:42:00.000Z · LW · GW

Because I'm curious:

How much evidence, and what kind, would be necessary before suspicions of contrarianism are rejected in favor of the conclusion that the belief was wrong?

Surely this is a relevant question for a Bayesian.

Comment by Caledonian2 on Failed Utopia #4-2 · 2009-01-21T16:47:25.000Z · LW · GW

I would personally be more concerned about an AI trying to make me deliriously happy no matter what methods it used.

Happiness is part of our cybernetic feedback mechanism. It's designed to end once we're on a particular course of action, just as pain ends when we act to prevent damage to ourselves. It's not capable of being a permanent state, unless we drive our nervous system to such an extreme that we break its ability to adjust, and that would probably be lethal.

Any method of producing constant happiness ultimately turns out to be pretty much equivalent to heroin -- you compensate so that even extreme levels of the stimulus have no effect, forming the new functional baseline, and the old equilibrium becomes excruciating agony for as long as the compensations remain. Addiction -- and desensitization -- is inevitable.
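
To make that compensation dynamic concrete, here is a minimal toy sketch, assuming a simple exponential-adaptation rule; the update rule, constants, and function name are all invented for illustration and come from no physiological source:

```python
# A minimal sketch of habituation as baseline compensation, assuming a
# simple exponential-adaptation model; the rule and constants are invented
# for illustration, not taken from any physiological source.

def perceived_over_time(stimuli, k=0.1):
    """Perceived intensity when an internal baseline adapts toward the stimulus."""
    baseline = 0.0
    out = []
    for s in stimuli:
        out.append(s - baseline)          # what the system "feels"
        baseline += k * (s - baseline)    # slow compensation toward s
    return out

# Constant strong stimulus: the felt effect decays toward zero (tolerance).
print([round(p, 2) for p in perceived_over_time([10.0] * 50)][::10])
# Remove the stimulus once tolerance has formed: perception swings sharply
# negative -- the old equilibrium now registers as aversive (withdrawal).
print([round(p, 2) for p in perceived_over_time([10.0] * 50 + [0.0] * 5)][-5:])
```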

Comment by Caledonian2 on In Praise of Boredom · 2009-01-19T19:07:50.000Z · LW · GW

Few people become bored with jumping in SMB because 1) becoming skilled at it is quite hard, 2) it's used to accomplish specific tasks and is quite useful in that context, 3) it's easier to become bored with the game as a whole than with that particular part of it.

Comment by Caledonian2 on Justified Expectation of Pleasant Surprises · 2009-01-16T20:17:32.000Z · LW · GW

Having to take action to avoid unpleasant surprises is usually pleasant, as long as your personal resources aren't stretched too much in the process.

If you eliminate the potential for unpleasant surprises, the game isn't much fun. (Imagine playing chess against an opponent that was so predictable as to never threaten to beat you. Why bother?)

Comment by Caledonian2 on Justified Expectation of Pleasant Surprises · 2009-01-15T18:23:11.000Z · LW · GW

Lots of people find planning their character design decisions, and exploring in detail the mechanical consequences of their designs, to be 'fun'.

Which is why there are so many sites that (for example) post in their entirety the skills for Diablo II and how each additional skillpoint affects the result - information that cannot be easily acquired from the game itself.

Although there are some basic principles behind 'fun', the specific things that make something 'fun' vary wildly from one person to another. If what the designers created wasn't to your taste, perhaps it's not that they failed, but that you're not a member of their target audience.

Comment by Caledonian2 on Serious Stories · 2009-01-12T17:18:23.000Z · LW · GW

Gwern, why do you think we have those emotional responses to pain in the first place?

Yes, I'm aware of forms of brain damage that make people not care about negative stimuli. They're extraordinarily crippling.

Comment by Caledonian2 on Rationality Quotes 23 · 2009-01-12T17:08:58.000Z · LW · GW

Nancy Lebovitz, those are great. I may have to appropriate some of those.

Comment by Caledonian2 on Serious Stories · 2009-01-09T18:16:42.000Z · LW · GW

"I'd say the primary bad thing about pain is not that it hurts, but that it's pushy and won't tune out. You could learn to sleep in a ship's engine room, but a mere stubbed toe grabs and holds your attention. That, I think we could delete with impunity."

If we could learn to simply get along with any level of pain... how would it constitute an obstacle?

Real accomplishment requires real obstacles to avoid, remove, or transcend. Real obstacles require real consequences. And real consequences require pain.

Comment by Caledonian2 on Serious Stories · 2009-01-09T18:07:46.000Z · LW · GW

I would suggest that this book, and the two books immediately preceding it, are an examination of the difference between what people believe they want the world to be and what they actually want and need it to be. When people gain enough power to create their vision of the perfect world, they do - and then find they've constructed an elaborate prison at best and a slow and terrible death at worst.

An actual "perfect world" can't be safe, controlled, or certain -- and the inevitable consequence of that is pain. But so is delight.

Comment by Caledonian2 on Rationality Quotes 22 · 2009-01-08T18:52:41.000Z · LW · GW

The opposite of a Great Truth is unpretentiousness.

Comment by Caledonian2 on The Uses of Fun (Theory) · 2009-01-03T20:19:38.000Z · LW · GW

Mr. Tyler:

I admire your persistence; however, you should be reminded that preaching to the deaf is not a particularly worthwhile activity.

Comment by Caledonian2 on The Uses of Fun (Theory) · 2009-01-03T20:18:37.000Z · LW · GW

My own complaints regarding Brave New World consist mainly of noting that Huxley's dystopia specialized in making people fit the needs of society. And if that meant whittling down a square peg so it would fit into a round hole, so be it.

Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.

This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.

Comment by Caledonian2 on Free to Optimize · 2009-01-02T19:37:59.000Z · LW · GW

It's really quite simple: the people who designed and maintain the legal system faced a choice. Is it better for the system to be consistent but endlessly repeat its mistakes, or inconsistent but error-correcting?

They preferred it to be predictable.

And that is why it is absurd to call it a "justice system". It's not concerned with justice.

Comment by Caledonian2 on High Challenge · 2008-12-19T17:11:23.000Z · LW · GW

Or, to put it another way:

"Fixing" the future, in a way that renders human beings completely redundant and unnecessary even to themselves, isn't fixing anything. It's creating a problem of unlimited scope.

If that's the ultimate outcome of, say, producing superhuman minds - whether they're somehow enslaved to human preferences or not - then we're trying very hard to create a world in which the only rational treatment of humanity is extinction. Whether imposed from without or from within, voluntarily, is irrelevant.

Comment by Caledonian2 on High Challenge · 2008-12-19T17:07:57.000Z · LW · GW

Based on the comments here, it would seem that it's the people who reject ultimately-meaningless forms of play - that is, 'play' that doesn't develop skills useful to perpetuation - and concentrate on the "real world" who will end up existing.

And the Luddites will inherit the Earth...

Comment by Caledonian2 on Observing Optimization · 2008-11-22T19:13:48.000Z · LW · GW

"The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes that it is a real possibility."

Only if there are no other consequences of his actions that he desires. People working to forward an ideology don't necessarily believe the ideology they're selling - they only need to value some of the consequences of spreading it.

Comment by Caledonian2 on Observing Optimization · 2008-11-21T21:34:42.000Z · LW · GW

"I hope you are both willing at least to say that the other's contrary stance tells you that there is a good likelihood that you are wrong."

If Robin knows that Eliezer believes there is a good likelihood that Eliezer's position is wrong, why would Robin then conclude that his own position is likely to be wrong? And vice versa?

The fact that Eliezer and Robin disagree indicates one of two things: either one possesses crucial information that the other does not, or at least one of the two has made a fatal error.

The disagreement stems from the fact that each believes the other to have made the fatal error, and that their own position is fundamentally sound.

Comment by Caledonian2 on Observing Optimization · 2008-11-21T20:06:14.000Z · LW · GW

"Eric, it's more amusing that both often cite a theorem that agreeing to disagree is impossible."

It's only impossible for rational Bayesians, which neither Hanson nor Yudkowsky is. Or any other human being, for that matter.

Comment by Caledonian2 on The Weighted Majority Algorithm · 2008-11-13T19:55:51.000Z · LW · GW

"Has anyone proved a theorem on the uselessness of randomness?"

Clearly you don't recognize the significance of Eliezer's work. He cannot be bound by such trivialities as 'proof' or 'demonstration'. They're not part of the New Rationality.

Comment by Caledonian2 on Worse Than Random · 2008-11-12T14:41:25.000Z · LW · GW

"Don't you get the same effect from adding an orderly grid of dots?"

In that particular example, yes. Because the image is static, as is the static.

If the static could change over time, you could get a better sense of where the image lies. It's cheaper and easier - and thus 'better' - to let natural randomness produce this static, especially since significant resources would have to be expended to eliminate the random noise.

"What about from aligning the dots along the lines of the image?"

If we knew where the image was, we wouldn't need the dots.

"To be precise, in every case where the environment only cares about your actions and not what algorithm you use to produce them, any algorithm that can be improved by randomization can always be improved further by derandomization."

It's clear this is what you're saying.

It is not clear this can be shown to be true. 'Improvement' depends on what is valued and what the context permits. In the real world, the value of an algorithm depends not only on its abstract mathematical properties but also on the costs of implementing it in an environment for which we have only imperfect knowledge.

Comment by Caledonian2 on Worse Than Random · 2008-11-11T20:31:21.000Z · LW · GW

"Caledonian: Yes, I did. So: can't you always do better in principle by increasing sensitivity?"

That's a little bit like saying that you could in principle go faster than light if you ignore relativistic effects, or that you could in principle produce a demonstration within a logical system that it is consistent if you ignore Gödel's Fork.

There are lots of things we can do in principle if we ignore the fact that reality limits the principles that are valid.

As the saying goes: the difference between 'in principle' and 'in practice' is that in principle there is no difference between them, and in practice, there is.

If you remove the limitations on the amount and kind of knowledge you can acquire, randomness is inferior to the unrandom. But you can't remove those limitations.

Comment by Caledonian2 on Worse Than Random · 2008-11-11T20:06:04.000Z · LW · GW

"Caledonian: couldn't you always do better in such a case, in principle (ignoring resource limits), by increasing resolution?"

I double-checked the concept of 'optical resolution' on Wikipedia. Resolution is (roughly speaking) the ability to distinguish two close-together dots as different: the closer the dots can be while still being distinguished, the higher the resolution, and the greater the detail that can be perceived.

I think perhaps you mean 'sensitivity'. It's the ability to detect weak signals close to perceptual threshold that noise improves, not the detail.

Comment by Caledonian2 on Worse Than Random · 2008-11-11T19:33:35.000Z · LW · GW

"But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information - by deliberately throwing away your sensory acuity. This can only degrade the mutual information between yourself and the environment. It can only diminish what in principle can be extracted from the data."

It is certainly counterintuitive to think that, by adding noise, you can get more out of data. But it is nevertheless true.

Every detection system has a perceptual threshold, a level of stimulation needed for it to register a signal. If the system is mostly noise-free, this threshold is a 'sharp' transition. If the system has a lot of noise, the threshold is 'fuzzy'. The noise present at one moment might destructively interact with the signal, reducing its strength, or constructively interact, making it stronger. The result is that the threshold becomes an average; it is no longer possible to know whether the system will respond merely by considering the strength of the signal.

When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it - some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively.

You can see this effect demonstrated at science museums. If an image is printed very, very faintly on white paper, just at the human threshold for visual detection, you can stare right at the paper and not see what’s there. But if the same image is printed onto paper on which a random pattern of grey dots has also been printed, we can suddenly perceive some of it - and extrapolate the whole from the random parts we can see. We are very good at extracting data from noisy systems, but only if we can perceive the data in the first place. The noise makes it possible to detect the data carried by weak signals.

When trying to make out faint signals, static can be beneficial. Which is why biological organisms introduce noise into their detection physiologies - a fact which surprised biologists when they first learned of it.
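
To make the threshold argument concrete, here is a minimal simulation sketch, assuming a hard-threshold detector and Gaussian noise; all names and parameter values are invented for illustration:

```python
# Stochastic resonance in miniature: a hard-threshold detector never fires
# on a constant sub-threshold signal, but added noise lets it fire at a rate
# that tracks the signal. All parameters here are invented for illustration.
import random

THRESHOLD = 1.0   # detector fires only when input >= this
SIGNAL = 0.8      # constant faint signal, below the threshold
TRIALS = 100_000

def firing_rate(noise_sd):
    """Fraction of trials on which the detector fires."""
    hits = sum(SIGNAL + random.gauss(0.0, noise_sd) >= THRESHOLD
               for _ in range(TRIALS))
    return hits / TRIALS

print(firing_rate(0.0))   # 0.0 -- the noiseless system never sees the signal
print(firing_rate(0.3))   # ~0.25 -- with noise, the signal leaks through
```

Because the firing rate rises with signal strength, averaging many noisy responses recovers structure that the noiseless detector discards entirely.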

Comment by Caledonian2 on Lawful Uncertainty · 2008-11-11T18:55:08.000Z · LW · GW

Foraging animals make the same 'mistake': given two territories in which to forage, one of which has a much more plentiful resource and is far more likely to reward an investment of effort and time with a payoff, the obvious strategy is to forage only in the richer territory. Instead, animals split their time between the two territories in proportion to the relative probability of a successful return.

In other words, if one territory is twice as likely to produce food through foraging as the other, animals spend twice as much time there: 2/3rds of their time in the richer territory, 1/3rd of their time in the poorer. Similar patterns hold when there are more than two foraging territories involved.

Although this results in a short-term reduction in food acquisition, it's been shown that this strategy minimizes the chances of exploiting the resource to local extinction, and ensures that the sudden loss of one territory for some reason (blight of the resource, natural disaster, predation threats, etc.) doesn't result in a total inability to find food.
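
The short-term cost is easy to quantify with a minimal sketch; the 2:1 ratio comes from the example above, while the concrete payoff probabilities are invented for illustration:

```python
# Probability matching vs. maximizing, assuming the 2:1 territories from
# the example above; the specific payoff probabilities are invented.

P_RICH, P_POOR = 0.6, 0.3   # chance a unit of foraging time pays off

def expected_food(frac_rich):
    """Mean payoff per unit time, given the fraction of time spent rich-side."""
    return frac_rich * P_RICH + (1.0 - frac_rich) * P_POOR

print(expected_food(1.0))    # 0.6 -- pure maximizing
print(expected_food(2 / 3))  # 0.5 -- probability matching: lower mean yield
# The mean-yield gap is the short-term cost; the claimed benefit (avoiding
# local extinction, keeping a fallback territory) shows up in robustness,
# which this expected-value comparison alone does not capture.
```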

The strategy is highly adaptive in its original context. The problem with humans is that we retain our evolved, adaptive behaviors long after the context changes to make them non-adaptive or even maladaptive.

Comment by Caledonian2 on Ask OB: Leaving the Fold · 2008-11-10T20:47:00.000Z · LW · GW

I would suggest taking a hard look at the elements of your social support network, and trying to determine which would sever their links with you if they knew you were not a Christian.

I do not agree that you are compelled not to lie to people. Truth is a valuable thing, and shouldn't be wasted on those unworthy of it.

Consider that Carl Sagan's protagonist in "Contact", Ellie Arroway, claimed to be a Christian, despite being an atheist. Look carefully at the arguments she offered regarding that claim, and see if they can be adapted to your life.

I would recommend that you refuse to claim beliefs that you do not hold, or participate in actions that suggest you believe those things. Reciting the Creed if you do not accept it is out. Taking Communion if you reject the beliefs that form the basis of fellowship in your church is out. So on and so forth. Don't go to confession if you don't believe you need to confess. Etc. etc.

Comment by Caledonian2 on Recognizing Intelligence · 2008-11-08T04:12:46.000Z · LW · GW

It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last particularly long when they are used. Does that make them poorly-made?

If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed because they stick together is speculating as to their intended function.

I do not see what is gained by labeling blind entropy-increasing processes as 'intelligence', nor do I see any way in which we can magically infer quality design without having criteria by which to judge configurations.

Comment by Caledonian2 on Recognizing Intelligence · 2008-11-08T02:49:25.000Z · LW · GW

There is no way to tell that something is made by 'intelligence' merely by looking at it - it takes an extensive collection of knowledge about its environment to determine whether something is likely to have arisen through simple processes.

A pile of garbage seems obviously unnatural to us only because we know a lot about Earth nature. Even so, it's not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.

Comment by Caledonian2 on Recognizing Intelligence · 2008-11-08T00:24:25.000Z · LW · GW

I see that the sentence noting how this line of argument comes dangerously close to the Watchmaker Argument for God has been edited out.

Why? If it's a bad point, it merely makes me look bad. If it's a good point, what's gained by removing it?

Comment by Caledonian2 on Back Up and Ask Whether, Not Why · 2008-11-08T00:16:14.000Z · LW · GW

Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.

It may be useful to consider why we have value systems in the first place. When we know why we do a thing, we can evaluate how well we do it, but not until then.

Comment by Caledonian2 on Recognizing Intelligence · 2008-11-08T00:11:02.000Z · LW · GW

"I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence."

Are beaches the product of an alien intelligence? Some of them are - the ones artificially constructed and maintained by humans. What about the 'naturally-occurring' ones, constructed and maintained by entropy? Are they evidence for intelligence? Those grains of sand don't wear down, and they're often close to spherical. Would a visiting UFO pause in awe to recognize beaches as machines with unknown purposes?

Comment by Caledonian2 on Back Up and Ask Whether, Not Why · 2008-11-07T23:59:55.000Z · LW · GW

Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.

I suggest that it may be useful for you to consider what the purpose of a value system is. When trying to decide between two value systems, a rational agent must evaluate them in some way. Is there an impersonal and objective set of criteria for evaluation?

Comment by Caledonian2 on Recognizing Intelligence · 2008-11-07T23:56:50.000Z · LW · GW

"Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands. Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish? I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence."

Carefully, Eliezer. You are very, very close to simply restating the Watchmaker Argument in favor of the existence of a Divine Being.

You have NOT recognized the machine as the product of an alien intelligence. You most certainly have not been able to identify the machine as 'well-designed'.

Comment by Caledonian2 on Hanging Out My Speaker's Shingle · 2008-11-06T19:28:52.000Z · LW · GW

You can't escape the temptation to lie to people just by having them not pay you in money. There are other forms of payment, of remuneration, besides money.

In fact, if you care about anything involving people or capable of being affected by them in some way, there can always arise situations in which you could maximize some of your goals or preferences by deceiving them.

There are only a few goals or preferences that change this -- chief among them, the desire to get what you want without deception. If you possess those goals or preferences in a dominant form, there's no temptation. If you don't, there's also no temptation, because you have no objection.

'Temptation' only arises when the preference for doing things one way is not stably dominant over not doing things that way.

Comment by Caledonian2 on Hanging Out My Speaker's Shingle · 2008-11-05T20:25:21.000Z · LW · GW

"Personally, I'm doing it mainly because everyone else is (stop laughing, it's an important heuristic that should only be overridden when you have a definite reason)."

Most smart people I know think that "because everyone else does it" IS a definite reason.

"Information and education should be free."

Why? People don't value what they get for free. Education was once valued very highly by the common folk in America. That changed once education began to be provided as a right, and children were obliged to go to school instead of its being a sacrifice on the family's part.

"That's very selfish."

You say that like it's a bad thing. I am neither a Randian nor a libertarian, but comments like yours push me closer to that line every day.

Comment by Caledonian2 on Today's Inspirational Tale · 2008-11-04T19:13:19.000Z · LW · GW
"But a vote for a losing candidate is not 'thrown away'; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote."

Such actions send a lot of messages. I have no confidence in the ability of politicians to determine what I would be trying to convey or the effectiveness of my attempting to do so.

Besides, the point is trivial. A vote for a losing candidate isn't thrown away because the vote almost certainly couldn't have been used productively in the first place - you lose little by casting it for the candidate you prefer, just as you'd lose little by casting it for any of the ones you didn't.

Not voting also sends messages to politicians and your fellow citizens. It is not obvious that they are worse than the ones you'd send by voting.

Comment by Caledonian2 on BHTV: Jaron Lanier and Yudkowsky · 2008-11-03T16:16:39.000Z · LW · GW

"He quickly appears to conclude that he cannot really discuss any issues with EY because they don't even share the same premises."

So they should establish what premises they DO share, and from that base, determine why they hold the different beliefs that they do.

I find it unlikely that they don't share any premises at all. Their ability to communicate anything, albeit strictly limited, indicates that there's common ground of a sort.

Comment by Caledonian2 on Building Something Smarter · 2008-11-03T00:57:31.000Z · LW · GW

"Was Carl Sagan hawking a religion?"

Yes. He was trying to convince people that rationality could substitute itself for mysticism, which they highly value.

Pretty much everyone trying to "sell" an idea dips into religious memes, sooner or later -- as my now-deleted comment makes clear.

Comment by Caledonian2 on Building Something Smarter · 2008-11-02T21:36:53.000Z · LW · GW

Vladimir, you are slightly incorrect. Eliezer doesn't preach rationality as a Way, he preaches a Way that he claims is rationality.

And like any other preacher, he doesn't take well to people questioning and challenging him. You're likely to see your comments censored.

Comment by Caledonian2 on Building Something Smarter · 2008-11-02T19:19:22.000Z · LW · GW

"There are those who will see it as almost a religious principle that no one can possibly know that a design will work, no matter how good the argument, until it is actually tested."

It's not necessarily a religious principle, although like anything else, it can be made one. It's a simple truth.

There is a non-trivial distinction between believing that a design will work, and believing that a design is highly likely to work. Maintaining the awareness that our maps do not constrain the territory is hard work, but necessary for a rationalist.

Comment by Caledonian2 on BHTV: Jaron Lanier and Yudkowsky · 2008-11-02T18:56:53.000Z · LW · GW

"You buy the Boltzmann brain argument? How did you calculate the probabilities? Nobody knows the probability of what seems to be our universe forming..."

That's not the right probability to be concerned about.

"...and certainly nobody knows the probability of a Boltzmann brain forming in a universe of unknown size and age." (Emphasis added by me.)

Again, you're asking about the wrong thing. Limiting the analysis to a single universe misses the point and -- as you rightfully object -- reduces the hypothesis to what is nearly a necessarily false statement. But there's no reason to limit analysis to a single universe.

Can you even state, explicitly, what it is that you mean by 'universe'?

Comment by Caledonian2 on Efficient Cross-Domain Optimization · 2008-10-29T17:33:47.000Z · LW · GW

"I have not yet read of any means of actually measuring g; has anyone here got any references?"

There's no way to "actually measure g", because g has no operational definition beyond statistical analyses of IQ.

There have been some attempts to link calculated g with neural transmission speeds and how easily brains can cope with given problems, but there's been little success.

Comment by Caledonian2 on Measuring Optimization Power · 2008-10-27T23:45:06.000Z · LW · GW

Comment by Caledonian2 on Ends Don't Justify Means (Among Humans) · 2008-10-27T23:26:00.000Z · LW · GW

I received an email from Eliezer stating:

"You're welcome to repost if you criticize Coherent Extrapolated Volition specifically, rather than talking as if the document doesn't exist. And leave off the snark at the end, of course."

There is no 'snark'; what there IS, is a criticism. A very pointed one that Eliezer cannot counter.

There is no content to 'Coherent Extrapolated Volition'. It contains nothing but handwaving, smoke and mirrors. From the point of view of rational argument, it doesn't exist.

Comment by Caledonian2 on Belief in Intelligence · 2008-10-25T19:19:43.000Z · LW · GW

"Gravity may not be a genius, but it's still an optimization problem, since the ball 'wants' to minimize its potential energy."

Using the terms as Eliezer has, can you offer an example of a phenomenon that is NOT an optimization?

Comment by Caledonian2 on Expected Creative Surprises · 2008-10-24T23:29:10.000Z · LW · GW

"If we walk without rhythm, we won't attract the worm."

Set up a pattern-recognition system directed at your own actions, and when you fall into a predictable rut, do something differently.

Comment by Caledonian2 on Inner Goodness · 2008-10-24T23:27:07.000Z · LW · GW

Rand is consumed by a need to provide a 'rationalized' explanation for the irrational behavior of her villains. In essence, she declares them to have a sort of Freudian death wish that causes them to sabotage and destroy everyone capable of living happily, ultimately ending with themselves dying last.

Peculiar, given how utterly incompatible her thinking is with the pseudo-scientific bent of Freud's... although the sort of cult that formed around them both is appropriately similar.

I'm pretty sure her thinking was wrong in that regard. Most people don't have secret, rational reasons they hide even from themselves for the irrational things they do. They're simply irrational. Rand, I think, could not accept that.