Comments

Comment by Vladimir_Nesov2 on Relative Configuration Space · 2008-05-26T21:07:16.000Z · LW · GW

Your 'epiphenomena' are good old invariants. When you talk about exorcising epiphenomena, you are really talking about establishing invariants as laws, which lets you work with fewer degrees of freedom. One can even say that consciousness depends only on the physical makeup of the universe, and is hence an invariant across universes with the same physical makeup. What is the point of reformulating it your way, exactly?

Comment by Vladimir_Nesov2 on No Safe Defense, Not Even Science · 2008-05-18T20:43:56.000Z · LW · GW

Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including your estimate of how uncertain you are about something, your estimate of how uncertain you are about that estimate, and so on.

Comment by Vladimir_Nesov2 on No Safe Defense, Not Even Science · 2008-05-18T20:13:28.000Z · LW · GW

Roland,

Probabilities allow graded beliefs. Just as Achilles's pursuit of the tortoise can be regarded as consisting of an infinite number of steps, once you notice that the steps get infinitely short you can sum them to a finite quantity. Likewise, you can join infinitely many, ever more unlikely events into a compound event of finite probability. This is a way to avoid the regress Caledonian was talking about: evidence can shift probabilities on all meta-levels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).
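To make that concrete, here is a toy numerical sketch; the geometric halving of each meta-level's weight is purely an assumption for illustration.

```python
# Toy sketch (halving weights are an assumption): each further meta-level of
# doubt contributes half as much probability shift as the previous one, so the
# infinite regress still sums to a finite total adjustment.
def total_shift(first_shift=0.1, ratio=0.5, levels=1000):
    return sum(first_shift * ratio**k for k in range(levels))

print(total_shift())  # -> ~0.2, i.e. first_shift / (1 - ratio), a finite number
```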

Comment by Vladimir_Nesov2 on No Safe Defense, Not Even Science · 2008-05-18T17:47:51.000Z · LW · GW

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html

Comment by Vladimir_Nesov2 on No Safe Defense, Not Even Science · 2008-05-18T12:34:23.000Z · LW · GW

I started to think seriously about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways of characterizing probabilistic relations between "representation" and "represented", it all started to come together, and later carried over to human reasoning and beyond. So it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject were outright confused or in a state of paralyzed curiosity, not deluded in a particular technical way. But so is "Science". The problem is settling for the status quo, walking the trodden track where it's possible to do better.

Thus, I see this post as a demonstration by example of how important it is to break your trust in all of your cherished curiosity stoppers.

Comment by Vladimir_Nesov2 on When Science Can't Help · 2008-05-16T08:54:28.000Z · LW · GW

HA: "Trying cryonics requires a leap of faith straight into the unknown for a benefit with an unestimable likelihood."

That's what probability is for, isn't it? If you don't know and don't have good prior hints, you just choose a prior at random, merely making sure that mutually exclusive outcomes sum to 1, and then adjust with what little evidence you've got. In reality you usually do have some prior predispositions, though. You don't throw up your hands in awe and exclaim that this probability is too shaky to be estimated or even thought about, because you go on making decisions and taking actions which, given your goals, implicitly assume a certain assignment of probability.

In other words, if you decide not to take a bet, you implicitly assign a low probability to the outcome. That conflicts with saying that "there are too many unknowns to make an estimate". You just made an estimate. If you don't back it up, it's as good as any other.

I assign a high probability to the success of cryonics (about 50%), conditional on a benevolent singularity (which is a different issue entirely, and not necessarily a high-probability outcome, so it can shift the resulting absolute probability significantly). In other words, if information-theoretic death doesn't occur during cryopreservation (and I presently have no noticeable reason to believe that it does), singularity-grade AI should provide enough technological muscle to revive patients "for free". Of course, for the decision it's the absolute probability that matters, but I have my own reasons to believe that a benevolent singularity is likely to be technically possible (relative to other outcomes), and I assign about 10% to it.
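Spelled out as arithmetic, with the rough figures above (the numbers are just my stated guesses, nothing more):

```python
# The rough figures above, multiplied out: revival is conditional on a
# benevolent singularity, so the absolute probability is the product.
p_singularity = 0.10                 # ~10% for a benevolent singularity
p_revival_given_singularity = 0.50   # ~50% that cryopreservation preserves enough
p_revival = p_singularity * p_revival_given_singularity
print(p_revival)  # 0.05, i.e. about 5% absolute probability of revival
```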

Comment by Vladimir_Nesov2 on Belief in the Implied Invisible · 2008-04-08T19:58:21.000Z · LW · GW

One problem is that the 'you' that can be affected by things you expect to interact with in the future is in principle no different from those space colonists who are sent out. You can't interact with future-you. All the decisions we make shape a future with which we don't directly interact. Future-you is just the result of one more 'default' manufacturing process, in which the laws of physics ensure there is a physical structure very similar to the one that existed in the past. Hunger is a drive that makes you 'manufacture' a fed future-you, compassion is a drive that makes you 'manufacture' a good-feeling other person, and so on.

I don't see any essential difference between decisions that produce an 'observable' effect and those that produce an 'invisible' one. What makes you value some future states and not others is your makeup, the 'thousand shards of desire' as Eliezer put it, and among those shards there may well be some that assign value to physical states that don't interact with the decision-maker's body.

If I put a person in a black box, program it to torture that person for 50 years and then automatically destroy all evidence, so that no tortured-person state can ever be observed, isn't that as 'invisible' as sending a photon away? I know the person is being tortured, and likewise I know the photon is flying away, but I can't interact with either of them. And yet I assign a distinctly negative value to the invisible-torture box. It's one of the stronger drives built into me.

Comment by Vladimir_Nesov2 on Joy in Discovery · 2008-03-21T09:00:01.000Z · LW · GW

The joy of textbook-mediated personal discovery...

Comment by Vladimir_Nesov2 on Circular Altruism · 2008-01-23T11:46:04.000Z · LW · GW

Eliezer,

What do specks have to do with circularity? Whereas in the last posts you explained that certain groups of decision problems are mathematically equivalent, independently of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.
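For illustration (the saturating form below is only an assumption, not a claim about the right aggregation), a disutility that is bounded in the number of people behaves very differently from a linear one:

```python
import math

# Illustration only (the saturating form is an assumption): if disutility is
# bounded in the number of people harmed, piling up tiny harms need not overtake
# one large harm, whereas a linear aggregation eventually always will.
def linear_disutility(n, per_person=1e-9):
    return n * per_person

def bounded_disutility(n, per_person=1e-9, cap=1.0):
    return cap * (1 - math.exp(-n * per_person / cap))

torture = 10.0   # disutility of the single large harm, on the same made-up scale
n = 10**13       # a very large number of people each getting a dust speck
print(linear_disutility(n) > torture)   # True: linear aggregation overtakes it
print(bounded_disutility(n) > torture)  # False: stays below the cap of 1.0
```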

Comment by Vladimir_Nesov2 on Against Discount Rates · 2008-01-21T14:17:00.000Z · LW · GW

A discount rate captures the effect your effort can have on the future, relative to the effect it will have on the present; it has nothing to do with the 'intrinsic utility' of things in the future. The future doesn't exist in the present; you only have a model of the future when you make decisions in the present. Your current decisions are only as good as your anticipation of their future effects, and the process Robin described in his blog post reply is one way this can proceed: it assumes that you know very little and will be better off just passing resources on to future folk, to take care of whatever they need themselves.
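A minimal sketch of that reading of the discount rate; interpreting the factor as a per-year chance that your present effort still has its intended effect is my assumption here, made only to illustrate the distinction:

```python
# Sketch (interpretation assumed): read the discount factor as the per-year
# chance that present effort still produces its intended effect, rather than
# as the future itself mattering less.
def present_value_of_effort(utility_if_it_works, years_ahead, survival_per_year=0.98):
    return utility_if_it_works * survival_per_year**years_ahead

print(present_value_of_effort(100.0, 50))   # ~36.4: the effect of effort decays
print(present_value_of_effort(100.0, 200))  # ~1.8, even though future utility is unchanged
```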

Comment by Vladimir_Nesov2 on To Lead, You Must Stand Up · 2007-12-29T22:44:34.000Z · LW · GW

Caledonian, I think you are confusing goals with truths. If the truth is that the goal consists of certain things, rationality doesn't oppose it in any way. It is merely a tool that optimizes performance, not an arbitrary moral constraint.

Comment by Vladimir_Nesov2 on To Lead, You Must Stand Up · 2007-12-29T20:24:12.000Z · LW · GW

OC: Eliezer, enough with your nonsense about cryonicism, life-extensionism, trans-humanism, and the singularity. These things have nothing to do with overcoming bias. They are just your arbitrary beliefs.

I guess it's the other way around: the point of most of the questions Eliezer raises is to take a debiased look at controversial issues such as those you list, hopefully building a solid case for sensible versions of them. For example, the existing articles can point out fallacies in your assertions: you assume cryonics etc. to be separate magisteria outside the domain of rationality, and you argue from the apparent absurdity of these issues.

Comment by Vladimir_Nesov2 on To Lead, You Must Stand Up · 2007-12-29T11:58:47.000Z · LW · GW

Eliezer,

Your emphasis on leadership in this context seems strange: it was in no one's interest to leave, so the biased decision was to follow you, not hesitation in choosing to lead others outside.

Comment by Vladimir_Nesov2 on Asch's Conformity Experiment · 2007-12-26T09:48:07.000Z · LW · GW

It seems there was no explicit rule against asking questions. It would be interesting to know what percentage of subjects actually questioned the process.

If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.

I don't see how a moderate number of other wrong-answering subjects should influence the decision of a rational subject, even if it is, strictly speaking, stronger evidence: your uncertainty about your own sanity should be much lower than the probability of alternative explanations for the other subjects' wrong answers.
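A toy Bayesian sketch of that point; every number in it is made up purely for illustration:

```python
# All priors and rates here are made-up illustrative numbers.
# Hypotheses: I misperceive the lines; some common cause makes the group answer
# wrongly in unison (confederates, a trick, shared pressure); or the others err
# independently. Observation: n other subjects unanimously contradict what I see.
def p_i_am_wrong(n, prior_me_wrong=1e-4, prior_common_cause=1e-2,
                 independent_error_rate=0.05):
    prior_rest = 1 - prior_me_wrong - prior_common_cause
    like_me_wrong = 1.0                           # if I misperceive, they agree with each other
    like_common_cause = 1.0                       # a common cause also yields unanimity
    like_independent = independent_error_rate**n  # independent errors rarely line up
    joint = (prior_me_wrong * like_me_wrong,
             prior_common_cause * like_common_cause,
             prior_rest * like_independent)
    return joint[0] / sum(joint)

print(p_i_am_wrong(3))   # ~0.0098
print(p_i_am_wrong(15))  # ~0.0099: twelve extra conformers barely move the posterior
```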

Comment by Vladimir_Nesov2 on Terminal Values and Instrumental Values · 2007-11-18T11:40:10.000Z · LW · GW

Since there are insanely many slightly different outcomes, the terminal value is also too big to be considered directly. So it's useless to pose the question of distinguishing terminal values from instrumental values, since you can't reason about specific terminal values anyway. Everything you can reason about is an instrumental value.

Comment by Vladimir_Nesov2 on Rationalization · 2007-10-01T17:37:04.000Z · LW · GW

Eliezer: An intuitive guess is non-scientific but not non-rational

That doesn't affect my point, but do you argue that intuitive reasoning can be made free of bias?

Comment by Vladimir_Nesov2 on Applause Lights · 2007-09-11T21:28:10.000Z · LW · GW

Such a speech could theoretically perform a "bringing to attention" function. Chunks of "bringing to attention" are equivalent to knowledge of any kind, just in an inefficient form, and the abnormality of that speech lies in its utter inefficiency, not in a lack of content. People can bear such talk because similar inefficiency can be present, in different forms, in other talks. Inefficiency also makes it much simpler to obfuscate the evasion of certain topics.

Comment by Vladimir_Nesov2 on Fake Causality · 2007-08-24T01:04:25.000Z · LW · GW

Phlogiston is not necessarily a bad thing. Concepts are used in reasoning to reduce and structure the search space. A concept can be placed in correspondence with a multitude of contexts, selecting a branch with the required properties, which correlate with the concept's usage. In this case an active 'phlogiston' concept correlates with the presence of fire. Unifying all processes that exhibit fire under this tag can help in developing contexts for induction, and refining the concept includes examining the protocols in which 'phlogiston' appears. It's just not a causal model that can rigorously predict nontrivial results through deduction.

Comment by Vladimir_Nesov2 on One Argument Against An Army · 2007-08-15T21:36:07.000Z · LW · GW

Just a question of bookkeeping: online confidence updating can be no less misleading, even if every fact is processed only once. A million negative arguments can have a negligible total effect if they happen to be dependent in a non-obvious way.
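A toy sketch of the bookkeeping problem; the numbers and the all-or-nothing dependence are assumptions for illustration only:

```python
# Toy sketch (numbers assumed): if a million 'separate' negative arguments all
# trace back to one underlying consideration, together they carry the likelihood
# ratio of a single argument; only naive bookkeeping multiplies it a million times.
def posterior_odds(prior_odds, likelihood_ratio, n_arguments, fully_dependent):
    effective_count = 1 if fully_dependent else n_arguments
    return prior_odds * likelihood_ratio**effective_count

print(posterior_odds(1.0, 0.9, 1_000_000, fully_dependent=True))   # 0.9: negligible total effect
print(posterior_odds(1.0, 0.9, 1_000_000, fully_dependent=False))  # 0.0 (underflows; astronomically small)
```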

Comment by Vladimir_Nesov2 on Making Beliefs Pay Rent (in Anticipated Experiences) · 2007-07-29T10:01:17.000Z · LW · GW

Some ungrounded concepts can produce your own behavior, which in itself can be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something because you experience yourself acting in a way consistent with believing in it. Such a concept can define an intrinsic goal system, a point in mind design space as you call it. So one can't abolish all such concepts, only resist acquiring them.