Comments

Comment by roko3 on Can't Unbirth a Child · 2008-12-28T17:55:26.000Z · LW · GW

I think we're all out of our depth here. For example, do we have an agreed-upon, precise definition of the word "sentient"? I don't think so.

I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.

Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".

Comment by roko3 on Disappointment in the Future · 2008-12-02T21:23:07.000Z · LW · GW

Given that I'm lying in bed with my iPhone commenting on this post, I'd say Ray did OK.

His extrapolations of computer hardware seem to be pretty good.

His extrapolations of computer software are far too optimistic. He clearly made the mistake of vastly underestimating how much work our brains do when we translate natural language or turn speech into text.

Comment by roko3 on Singletons Rule OK · 2008-11-30T22:27:31.000Z · LW · GW

"Capitalist economists seem to like the idea of competition. It is the primary object of their study - if there were no comptition they would have to do some serious retraining."

Ditto.

Comment by roko3 on Crisis of Faith · 2008-10-10T23:25:03.000Z · LW · GW

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them, but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continued existence is becoming. For many people, such a belief is of no practical benefit yet is psychologically detrimental.

This creates a "tradgedy of the commons" type problem in global catastrophic risks: each individual is better off living in a fool's paradise, but we'd all be much better off if everyone faced up to the dangers of future technology.

Comment by roko3 on My Naturalistic Awakening · 2008-09-25T19:26:45.000Z · LW · GW

@ carl: perhaps I should have checked through the literature more carefully. Can you point me to any other references on ethics that use utility functions with domain {world histories}?

Comment by roko3 on My Naturalistic Awakening · 2008-09-25T17:23:07.000Z · LW · GW

@ shane: I was specifically talking about utility functions from the set of states of the universe to the reals, not from spacetime histories. Using the latter notion, trivially every agent is a utility maximizer, because there is a canonical embedding of any set X (in this case the set of action-perception pair sequences) into the set of functions from X to R. I'm attacking the former notion - where the domain of the utility function is the set of states of the universe.
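
To spell the triviality out (a sketch in my own notation, using the indicator-function version of that embedding):

```latex
% Sketch (my notation): X = set of action-perception histories, A = any agent.
\[
  U_A : X \to \mathbb{R}, \qquad
  U_A(x) =
  \begin{cases}
    1 & \text{if $x$ is a history that $A$'s policy can produce,}\\
    0 & \text{otherwise.}
  \end{cases}
\]
% A maximizes U_A by construction, so "A maximizes some utility function over
% histories" tells us nothing about A. No analogous construction is available
% when U has to be defined on states of the universe rather than on histories.
```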

Comment by roko3 on Horrible LHC Inconsistency · 2008-09-23T00:03:46.000Z · LW · GW

@ prase: well, we have to get our information from somewhere... Sure, past predictions of minor disasters due to scientific error are not in exactly the same league as this particular prediction. But where else are we to look?

@anders: interesting. So presumably you think that the evidence from cosmic rays makes the probability of an LHC disaster much less than 1 in 1000? Actually, how likely do you think it is that the LHC will destroy the planet?

Comment by roko3 on Horrible LHC Inconsistency · 2008-09-22T11:40:24.000Z · LW · GW

Your probability estimate of the LHC destroying the world is too small. Given that at least some physicists have come up with vaguely plausible mechanisms for stable micro black hole creation, you should think about outrageous or outspoken claims made in the past by a small minority of scientists. How often has the majority view been overturned? I suspect that something like 1/1000 is a good rough guess for the probability of the LHC destroying us. This seems roughly consistent with the number of LHC failures that I would tolerate before I joined a pressure group to shut the thing down; e.g. 10 failures in a row, each occurring with probability 50%.

I suspect you just don't want to admit to yourself that the experiment is that risky: we'd be talking about an expected death toll of 6 million this spring. Yikes!
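
To spell out the arithmetic behind those two figures (a rough sketch, taking the world's population as about 6 billion):

```latex
% Ten independent failures at probability 1/2 each match the ~1/1000 estimate:
\[
  \left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024} \approx 10^{-3},
\]
% and the quoted expected death toll is just
\[
  10^{-3} \times 6 \times 10^{9} \;\approx\; 6 \times 10^{6} \text{ people}.
\]
```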

Comment by roko3 on Optimization · 2008-09-13T19:22:57.000Z · LW · GW

Eli: I think that your analysis here, and the longer analysis presented in "knowability of FAI", miss a very important point. The singularity is a fundamentally different process from playing chess or building a saloon car. The important distinction is that in building a car, the car-maker's ontology is perfectly capable of representing all of the high-level properties of the desired state, but the instigators of the singularity are, by definition, lacking a sufficiently complex representation system to represent any of the important properties of the desired state: post-singularity Earth. You have had the insight required to see this: you posted about "dreams of XML in a universe of quantum mechanics" a couple of posts back. I posted about this on my blog, in "ontologies, approximations and fundamentalists", too.

It suffices to say that an optimization process which takes place with respect to a fixed background ontology or set of states is fundamentally different from a process which I might call vari-optimization, where optimization and ontology change happen at the same time. The singularity (whether an AI singularity or a non-AI one) will be of the latter type.
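
A toy illustration of the distinction I have in mind (purely illustrative; `states`, `score` and `refine` are hypothetical stand-ins, not anyone's actual algorithm):

```python
# Toy contrast, not a real algorithm: `states`, `score` and `refine` stand in
# for an agent's ontology, its preferences expressed over that ontology, and
# whatever process enriches the ontology.

def fixed_optimize(states, score):
    """Ordinary optimization: search a fixed, pre-given set of states."""
    return max(states, key=score)

def vari_optimize(states, score, refine, rounds=5):
    """'Vari-optimization': optimization and ontology change interleaved.

    Each round, the current best state guides a refinement of the ontology;
    the refinement also has to re-express the preferences, because the old
    score function is only defined over the old, coarser states.
    """
    for _ in range(rounds):
        best = max(states, key=score)
        states, score = refine(states, score, best)
    return max(states, key=score)
```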

Comment by roko3 on Invisible Frameworks · 2008-08-22T23:13:27.000Z · LW · GW

@ marcello, quasi-anonymous, manuel:

I should probably add that I am not in favor of using any brand new philosophical ideas - like the ones that I like to think about - to write the goal system of a seed AI. That would be far too dangerous. For this purpose, I think we should simply concentrate on encoding the values that we already have into an AI - for example using the CEV concept.

I am interested in UIVs because I'm interested in formalizing the philosophy of transhumanism. This may become important because we may enter a slow-takeoff, non-AI singularity.

Comment by roko3 on You Provably Can't Trust Yourself · 2008-08-20T19:48:30.000Z · LW · GW

@ eli: nice series on Löb's theorem, but I still don't think you've added any credibility to claims like "I favor the human one because it is h-right". You can do your best to record exactly what h-right is, and think carefully about convergence (or lack thereof) under self-modification, but I think you'd do a lot better to just state "human values" as a preference, and be an out-of-the-closet relativist.

Comment by roko3 on The Bedrock of Morality: Arbitrary? · 2008-08-14T23:23:53.000Z · LW · GW

It also worries me quite a lot that Eliezer's post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter's notions. This property qualifies as "moral relativism" in my book, though there is no point in arguing about the meanings of words.

My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X. Maybe I find this satisfying because I have always been more at home with category theory than logic; I have defined a set of values by requiring them to satisfy a universal property.
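
In symbols (a sketch in my notation, with Pre(Y) standing for whatever an agent must do in order to achieve goal Y):

```latex
% Universal instrumental values as a universal property (sketch):
\[
  \mathrm{UIV} \;=\; \bigcap_{Y \in \mathrm{Goals}} \mathrm{Pre}(Y),
\]
% i.e. the set of things that must be done whatever goal Y is being pursued.
% Substituting some other goal set for UIV destroys this property, which is
% why the definition is not symmetric under replacement.
```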

Comment by roko3 on The Bedrock of Morality: Arbitrary? · 2008-08-14T23:03:23.000Z · LW · GW

I think that your use of the word "arbitrary" differs from mine. My mind labels statements such as "we should preserve human laughter for ever and ever" with the "roko-arbitrary" label. Not that I don't enjoy laughter, but there are plenty of things that I presently enjoy that, if I had the choice, I would modify myself to enjoy less: making fun of other people, eating sweet foods, etc. It strikes me that the dividing line between "things I like but wish I didn't like" and "things I like and want to keep liking" should be drawn in some non-roko-arbitrary way. One might reconcile my position with Eliezer's by saying that my concept of "rightness" relies heavily on my concept of arbitrariness, and that my concept of arbitrariness is clearly different from Eliezer's.

Comment by roko3 on Moral Error and Moral Disagreement · 2008-08-12T12:25:38.000Z · LW · GW

Virge makes a very good point here. The human mind is probably rather flexible in terms of its ethical views; I suspect that Eli is overplaying our psychological unity.