Comments

Comment by Nick Bostrom (Nick_Bostrom2) on Comment on "Propositions Concerning Digital Minds and Society" · 2022-07-12T23:58:01.601Z · LW · GW

You are looking at the wreckage of an abandoned book project.  We got bogged down & other priorities came up.  Instead of writing the book, we decided to just publish a working outline and call it a day.

The result is not particularly optimized for tech executives or policymakers — it’s not really optimized for anybody, unfortunately.

The propositions all *aspire* to being true, although some of them may not be particularly relevant or applicable in certain scenarios.  Still, there could be value in working out sensible things to say to cover quite a wide range of scenarios, partly because we don’t know which scenario will happen (and there is disagreement over the probabilities), but partly also because this wider structure — including the parts that don’t directly pertain to the scenario that actually plays out — might form a useful intellectual scaffolding, which could slightly constrain and inform people’s thinking about the more modal scenarios.

I think it’s unclear how well reasoning by analogy works in this area.  Or rather: I guess it works poorly, but reasoning deductively from first principles (at SL4, or SL15, or whatever) might be equally or even more error-prone.  So I’ve got some patience for both approaches, hoping the combo has a better chance of avoiding fatal error than either the softheaded or the hardheaded approach has on its own.

Comment by Nick Bostrom (Nick_Bostrom2) on The Rhythm of Disagreement · 2008-06-07T20:19:53.000Z · LW · GW

"In IEEE Spectrum's sad little attempt at Singularity coverage, one bright spot is Paul Wallich's "Who's Who In The Singularity",..."

Brightness here being a relative quality... I am labeled green, meaning "true believer, thinks it will happen within 30 years." Yet I am quoted (correctly) as saying "I would... assign less than a 50% probability to superintelligence being developed by 2033." (I also don't endorse "once the singularity comes near, we will all be kicking ourselves for not having brought it about sooner", even though they attribute this to me as my "central argument".)

Reg Oracle AI, I'm not sure how much of a disagreement there exists between Eliezer and me. My position has not been that it is definitely the case that Oracle AI is the way to go. Rather, my position is something like "this seems to have at least something going for it; I have not yet been convinced by the arguments I've heard against it; it deserves some further consideration". (The basic rationale is this: While I agree that a utility function that is maximized by providing maximally correct and informative answers to our questions is clearly unworkable (since this could lead the SI to transform all Earth into more computational hardware so as to better calculate the answer), it might turn out to be substantially easier to specify the needed constraints to avoid such catastrophic side-effects of an Oracle AI than it is to solve the Friendliness problem in its general form--I'm not at all sure it is easier, but I haven't yet been persuaded it is not.)

Reg disagreement between Robin and Eliezer on singularity: They've discussed this many times, both here and on other mailing lists. But the discussion always seems to end prematurely. I think this would make for a great disagreement case study--the topic is important, both are disagreement savvy, both know and respect one another, both have some subject matter expertise... I would like them to try once to get to the bottom of the issue, and continue the discussion until they either cease disagreeing or at least agree exactly on what they disagree about, and why, and on how each person justifies the persistent disagreement.

Comment by Nick Bostrom (Nick_Bostrom2) on No Evolutions for Corporations or Nanodevices · 2007-11-18T00:50:00.000Z · LW · GW

Eliezer, the criteria you list may be necessary for the evolution of complex structures. But I think it's worth highlighting that practically important evolutionary results could come about without the need for new complex structures. For example, suppose we have a population of controlled self-replicating nanobots, built unwisely in such a way that they keep replicating until a separate breaking circuit kicks in and shuts off replication. Now suppose there is a mutation in the code of one nanobot such that its offspring lack a working breaking circuit. Then this mutant nanobot could start an exponential goo. There need only be a one-step selection, but the results could be dramatic. Similarly with Hanson's colonizers that burn the cosmic commons - they might not gain much in complexity through evolution, but evolutionary selection could ensure that a certain type of colonizer (which was present but very rare at time=0) will eventually dominate at the frontier.
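
A minimal simulation sketch of that one-step selection dynamic (hypothetical parameters and a deliberately crude model, just to show the shape of the effect):

```python
# Sketch: wild-type nanobots stop replicating once their breaking circuit
# kicks in (modelled here as a design cap on population size); a single
# mutant whose offspring lack the circuit never receives the stop signal.

DESIGN_CAP = 10_000     # intended population ceiling enforced by the breaking circuit
GENERATIONS = 40

wild_type = 1_000       # bots with a working breaking circuit
mutants = 1             # one bot whose lineage lacks the circuit

for _ in range(GENERATIONS):
    if wild_type < DESIGN_CAP:
        wild_type = min(2 * wild_type, DESIGN_CAP)   # replication halts at the cap
    mutants *= 2                                     # the mutant lineage just keeps doubling

print(f"wild-type: {wild_type:,}   mutants: {mutants:,}")
# After 40 generations the mutant lineage (~1.1e12) dwarfs the capped
# wild-type population: a dramatic result of a single one-step selection
# event, with no new complex structure evolved.
```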

Comment by Nick Bostrom (Nick_Bostrom2) on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-23T17:06:00.000Z · LW · GW

Wei, no I don't think I considered the possibility of discounting people by their algorithmic complexity.

I can see that in the context of Everett it seems plausible to weigh each observer with a measure proportional to the amplitude squared of the branch of the wave function on which he is living. Moreover, it seems right to use this measure both to calculate the anthropic probability of me finding myself as that observer and the moral importance of that observer's well-being.

Assigning anthropic probabilities over infinite domains is problematic. I don't know of a fully satisfactory explanation of how to do this. One natural approach to explore might be to assign some Turing-machine-based measure to each of the infinite observers. Perhaps we could assign plausible probabilities by using such an approach (although I'd like to see this worked out in detail before accepting that it would work).
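
One hypothetical way to make such a Turing-machine-based measure concrete (a sketch only, with a universal prefix machine $U$ introduced here purely for illustration): give each observer $o_i$ a weight equal to the algorithmic probability of the programs that pick that observer out,

$$\mu(o_i) \;=\; \sum_{p\,:\,U(p)=i} 2^{-|p|}, \qquad \Pr(\text{I am } o_i) \;=\; \frac{\mu(o_i)}{\sum_j \mu(o_j)}.$$

Since $U$ is a prefix machine, the Kraft inequality gives $\sum_i \mu(o_i) \le 1$, so the normalization is well defined even with countably infinitely many observers, which is the property the infinite case needs; whether the resulting probabilities are plausible is exactly what would have to be worked out in detail.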

If I understand your suggestion correctly, you propose that the same anthropic probability measure should also be used as a measure of moral importance. But there seems to me to be a problem. Consider a simple classical universe with two very similar observers. On my reckoning they should each get anthropic probability measure 1/2 (rejecting SIA, the Self-Indication Assumption). Yet it appears that they should each have a moral weight of 1. Does your proposal require that one accepts the SIA? Or am I misinterpreting you? Or are you trying to explicate not total utilitarianism but average utilitarianism?
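
Spelling out the two-observer worry with numbers (my own toy bookkeeping of the case described above):

$$\Pr(\text{I am } o_1) = \Pr(\text{I am } o_2) = \tfrac{1}{2} \ \ (\text{anthropic measure, SIA rejected}), \qquad w(o_1) = w(o_2) = 1 \ \ (\text{moral weight}).$$

The anthropic measures sum to 1 while the moral weights sum to 2, so if a single measure is made to play both roles, then either each observer's moral weight drops to 1/2 (so adding observers never adds total value) or the anthropic measure has to be left unnormalized, which is in effect to accept SIA.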

Comment by Nick Bostrom (Nick_Bostrom2) on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-21T20:38:00.000Z · LW · GW

I have a paper which explores the problem in a somewhat more general way (but see especially section 6.3).

Infinite Ethics: http://www.nickbostrom.com/ethics/infinite.pdf

Comment by Nick Bostrom (Nick_Bostrom2) on 9/26 is Petrov Day · 2007-10-04T00:41:56.000Z · LW · GW

Carl, I like your suggestion to establish a prize for avoiding mega-disasters and existential risks. (Meanwhile, I'm going to send Petrov a small donation.)

One of the bias issues this raises is the possibility of bias in how we allocate our attention. One could think of an attention allocation as if it involved an implicit belief that "this is worth attending to". Then we can think of how this kind of implicit belief might be biased. For example, in the ancestral environment nobody ever merited attention for having prevented millions of deaths by refraining from pressing a button; so maybe we are biased in the direction of allocating too little attention to such acts... Some future post might explore this in more detail.

Eliezer, thanks for your post.

Comment by Nick Bostrom (Nick_Bostrom2) on Mysterious Answers to Mysterious Questions · 2007-08-27T17:21:18.000Z · LW · GW

And to continue the thread of Roy's comment as picked up by Eliezer, it might have been a fairly reasonable conjecture at the time (or at some earlier time). We have to be wary about hindsight bias. Imagine a time before biochemistry and before evolution theory. The only physicalist "explanations" you've ever heard of or thought of for why animals exist and how they function are obvious non-starters...

You think to yourself, "the folks who are tempted by such explanations just don't realize how far away they are from really explaining this stuff; they are deluded." And invoking an elan vital, while clearly not providing a complete explanation, at least creates a placeholder. Perhaps it might be possible to discover different versions of the elan vital; perhaps we could discover how this force interacts with other non-material substances such as ancestor spirits, consciousness, magic, demons, angels etc. Perhaps there could be a whole science of the psychic and the occult, or maybe a new branch of theological inquiry that would illuminate these issues. Maybe those faraway wise men that we've heard about know something about these matters that we don't know. Or maybe the human mind is simply not equipped to understand these inner workings of the world, and we have to pray instead for illumination. In the afterlife, perhaps, it will all be clear. Either way, that guy who thinks he will discover the mysteries of the soul by dissecting the pineal gland seems curiously obtuse in not appreciating the magnitude of the mystery.

Now, in retrospect we know what worked and what didn't. But the mystics, it seems, could have turned out to have been right, and it is not obvious that they were irrational to favor the mystic hypothesis given the evidence available to them at the time.

Perhaps what we should be looking for is not structural problems intrinsic to certain kinds of questions and answers, but rather attitude problems that occur, for example, when we ask questions without really caring about finding the answer, or when we use mysterious answers to lullaby our curiosity prematurely.

Comment by Nick Bostrom (Nick_Bostrom2) on One Life Against the World · 2007-05-19T02:15:59.000Z · LW · GW

I have a paper on this problem of infinities in ethics: http://www.nickbostrom.com/ethics/infinite.pdf

It is a difficult topic.

Comment by Nick Bostrom (Nick_Bostrom2) on Marginally Zero-Sum Efforts · 2007-04-12T03:12:04.000Z · LW · GW

I once suggested to the EU (whose research funding application process is exceptionally laborious) that when evaluating the effectiveness of their grants scheme, they should take into account not only the internal costs of administering the scheme, but also the costs of applying. More specifically, they should estimate the salary cost of the time the applicants put into learning about the program and applying (unsuccessful applicants included) and view this as an administrative cost of the program. I think it would be an eye-opener. Of course, the chances that they would actually start doing this as a result of my suggestion are indistinguishably close to zero.
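
A back-of-the-envelope version of that estimate (all figures below are hypothetical placeholders; the point is only the shape of the calculation):

```python
# Hypothetical figures for a single funding call.
applications = 2_000            # bids received, unsuccessful ones included
hours_per_application = 150     # time spent learning the rules and preparing a bid
hourly_salary_cost = 60         # EUR, loaded salary cost of applicants' time

applicant_side_cost = applications * hours_per_application * hourly_salary_cost
internal_admin_cost = 4_000_000  # EUR, the cost the scheme already reports internally

print(f"Applicant-side cost: EUR {applicant_side_cost:,}")                         # EUR 18,000,000
print(f"Full administrative cost: EUR {applicant_side_cost + internal_admin_cost:,}")
```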

Comment by Nick Bostrom (Nick_Bostrom2) on Archimedes's Chronophone · 2007-03-24T20:01:08.000Z · LW · GW

I'm not sure I understand exactly how the chronophone works. It sounds a bit like the only useful ideas a person can transmit are ideas that she herself has independently worked out or discovered; in which case not the same ideas but some analogous and similarly useful ideas get delivered to Archimedes. In this case, I guess I might try to read out some of my research papers, hoping that they contain some useful original insights. It might also work if I transmit ideas that have originated with others but whose merits I have grasped through my own independent judgement.

It seems that if you subtract all the information advantages that we moderns have, all that remains in this exercise is the organic qualities of our brains and the amount and quality of intellectual labour that our brains have performed.

Comment by Nick Bostrom (Nick_Bostrom2) on Outside the Laboratory · 2007-01-24T04:12:31.000Z · LW · GW

Eliezer wrote: "Godel's Completeness theorem shows that any first-order statement true in all models of a set of first-order axioms is provable from those axioms. Thus, the failure of Peano Arithmetic to prove itself consistent is because there are many "supernatural" models of PA in which PA itself is not consistent; that is, there exist supernatural numbers corresponding to proofs of P&~P."

This is getting far from the topic but... I really don't see how Completeness entails anything about PA's failure to prove itself consistent (much less how it suggests an explanation in terms of "supernatural models", whatever that is supposed to mean). PA is not expressible as a first-order statement, so Completeness has nothing to say about PA or its limitations.
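
For reference, the standard statements of the two results in play, for a first-order theory $T$ and sentence $\varphi$:

$$\text{Completeness:}\ \ T \models \varphi \;\Longrightarrow\; T \vdash \varphi; \qquad \text{Second incompleteness:}\ \ \text{if PA is consistent, then } \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).$$

The first relates semantic consequence to provability; the second is the theorem that directly concerns a theory's inability to prove its own consistency.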