Posts

Short introductory materials for a rationality meetup 2012-11-13T05:10:38.251Z
Brief Question about FAI approaches 2012-09-19T06:05:18.882Z
How to Improve Field Cryonics 2012-09-08T21:14:34.032Z
Thoughts on a possible solution to Pascal's Mugging 2012-08-01T12:32:23.995Z

Comments

Comment by Dolores1984 on Short introductory materials for a rationality meetup · 2012-11-16T20:40:43.509Z · LW · GW

Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).

Comment by Dolores1984 on Short introductory materials for a rationality meetup · 2012-11-16T20:02:42.247Z · LW · GW

Then he could give a guest lecture, and that'd be pretty cool.

Comment by Dolores1984 on Short introductory materials for a rationality meetup · 2012-11-16T20:02:21.554Z · LW · GW

In our club, we've decided to assume atheism (or, at minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don't feel it's worth arguing over that kind of inferential distance. We'd rather it be the 'discuss cool things' club than the 'argue with people who don't believe in evolution' club.

Comment by Dolores1984 on Cryonics as Charity · 2012-11-12T18:45:00.023Z · LW · GW

This perspective looks deeply insane to me.

I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and, I suspect, neither would most other people. This perspective more or less requires anyone in a position of power to oppose the availability of birth control and to mandate breeding.

I would be about as happy with a human population of one billion as a hundred billion, not counting the number of people who'd have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.

Comment by Dolores1984 on Cryonics as Charity · 2012-11-11T19:38:21.444Z · LW · GW

There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.

I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we'll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like 'bringing back dead people' v. 'running more parallel copies of current people.' I'd also caution against treating future society as a monolithic Entity with Values that makes Decisions - it's very probably still going to be capitalist. I expect the deciding factor regarding whether or not cryopatients are revived to be whether or not Alcor can pay for the revival while remaining solvent.

Also, I'm not at all certain about your value calculation there. Creating new people is much less valuable than preserving old ones. It would be wrong to round up and exterminate a billion people in order to ensure that one billion and one babies are born.

Comment by Dolores1984 on [LINK] blog on cryonics by someone who freezes things in a cell bio lab · 2012-10-23T01:19:57.667Z · LW · GW

Right, but (virtually) nobody is actually proposing doing that. It's obviously stupid to try from chemical first principles. Cells might be another story. That's why we're studying neurons and glial cells to improve our computational models of them. We're pretty close to having adequate neuron models, though glia are probably still five to ten years off.

I believe there's at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can't do it, I'll update then.

Comment by Dolores1984 on [LINK] blog on cryonics by someone who freezes things in a cell bio lab · 2012-10-22T22:42:14.144Z · LW · GW

Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then, no, we can't do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expressions and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.

Comment by Dolores1984 on Open Thread, October 16-31, 2012 · 2012-10-20T19:35:25.807Z · LW · GW

I propose that we continue to call them koans, on the grounds that changing the name involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.

Comment by Dolores1984 on Looking for alteration suggestions for the official Sequences ebook · 2012-10-18T02:51:24.176Z · LW · GW

So far, I'm twenty pages in, and getting close to being done with the basic epistemology stuff.

Comment by Dolores1984 on How To Have Things Correctly · 2012-10-17T23:17:14.204Z · LW · GW

Lottery winners have different problems. Mostly that sharp changes in money are socially disruptive, and that lottery players are not the most fiscally responsible people on Earth. It's a recipe for failure.

Comment by Dolores1984 on Looking for alteration suggestions for the official Sequences ebook · 2012-10-17T16:31:02.907Z · LW · GW

My mistake.

Comment by Dolores1984 on Looking for alteration suggestions for the official Sequences ebook · 2012-10-17T16:30:51.989Z · LW · GW

In general, when something can be either tremendously clever, or a bit foolish, the prior tends to the latter. Even with someone who's generally a pretty smart cookie. You could run the experiment, but I'm willing to bet on the outcome now.

It's important to remember that it isn't particularly useful for this book to be The Sequences. The Sequences are The Sequences, and the book can direct people to them. What would be more useful would be a condensed, rapid introduction to the field that tries to maximize insight-per-byte. Not something that's a definitive work on rationality, but something that people can crank through in a day or two, rave about to their friends, and come away with a better idea of what rational thinking looks like. It'd also serve as a less formidable introduction, for those who are very interested, to the broader pool of work on the subject, including the Sequences. Dollar for sanity-waterline dollar, that's a very heavily leveraged position.

Actually, if CFAR isn't going to write that book, I will.

Comment by Dolores1984 on The Fabric of Real Things · 2012-10-17T01:47:46.948Z · LW · GW

You could plug a baby's nervous system into the output of a radium decay random number generator. It'd probably disagree (disregarding how crazy it would be) that its observations were best described by causal graphs.

Comment by Dolores1984 on The Fabric of Real Things · 2012-10-17T01:45:27.081Z · LW · GW

It does not. Epiphenomenal consciousness could be real in the same sense that the spaceship vanishing over the event horizon is still real. It's Occam's Razor that knocks down that one.

Comment by Dolores1984 on The Fabric of Real Things · 2012-10-17T01:43:29.987Z · LW · GW

1: If your cousin can demonstrate that ability using somebody else's deck, under experimental conditions that I specify and he is not aware of ahead of time, I will give him a thousand dollars.

2: In the counter-factual case where he accomplishes this, that does not mean that his ability is outside the realm of science (well, probably it means the experiment was flawed, but we'll assume otherwise). There have been a wide range of inexplicable phenomena which are now understood by science. If your cousin's psychic powers are real, then science can study them, and break down the black box to find out what's inside. There are certainly causal arrows there, in any case. If there weren't, we wouldn't know about it.

3: If your strongest evidence that your partner loves you is psychic intuition, you should definitely get a prenup.

Comment by Dolores1984 on Looking for alteration suggestions for the official Sequences ebook · 2012-10-16T23:02:00.083Z · LW · GW

Oh, and somebody get Yudkowsky an editor. I love the Sequences, but they aren't exactly short and to the point. Frankly, they ramble. Which is fine if you're just trying to get your thoughts out there, but people don't finish the majority of the books they pick up. You need something snappy and interesting that caters to a more typical attention span. Something maybe half the length we're looking at now. The more of it they get through, the more good you're doing.

EDIT: Oh! And the whole thing needs a full jargon palette-swap. There's a lot of LW-specific jargon that isn't helpful. In many cases, there's existing academic jargon that can take the place of the phrases Yudkowsky uses. Aside from lending the whole thing a superficial-but-useful veneer of credibility, it'll make the academics happy, and make them less likely to make snide comments about your book in public fora. If you guys aren't already planning a print-on-demand run, you really should. Ebooks are wonderful, but the bulk of the population is still humping dead trees around. An audiobook or podcast might be useful as well.

Comment by Dolores1984 on Looking for alteration suggestions for the official Sequences ebook · 2012-10-16T22:50:49.721Z · LW · GW

If it were me, I'd split your list after reductionism into a separate ebook. Everything that's controversial or hackles-raising is in the later sequences. A (shorter) book consisting solely of the sequences on cognitive biases, rationalism, and reductionism would be much more the kind of thing somebody without prior rationalist inclinations can pick up and take something valuable away from. The later sequences have their merits, but they are absolutely counterproductive to raising the sanity waterline in this case. They'll label your book as kooky and weird, and they don't, in themselves, improve their readers enough to justify the expense. People interested in the other stuff can get the companion volume.

You could label the pared-down volume something self-helpy like "Thinking Better: The Righter, Smarter You." For goodness' sake, don't have the word 'sequences' in the title. That doesn't mean anything to anyone not already from LW, and it won't help people figure out what it's about.

EDIT: Other title suggestions - really just throwing stuff at the wall here

  • Rationality: Art and Practice

  • The Rational You

  • The Art of Human Rationality

  • Black Belt Bayesian: Building a Better Brain

  • The Science of Winning: Human Rationality and You

  • Science of Winning: The Art and Practice of Human Rationality (I quite like this one)

Comment by Dolores1984 on Thinking soberly about the context and consequences of Friendly AI · 2012-10-16T18:44:21.305Z · LW · GW

  • There will always be multiple centers of power

  • What's at stake is, at most, the future centuries of a solar-system civilization

  • No assumption that individual humans can survive even for hundreds of years, or that they would want to

You give no reason why we should consider these as more likely than the original assumptions.

Comment by Dolores1984 on Good transhumanist fiction? · 2012-10-14T07:15:50.373Z · LW · GW

Sure. I think we just have different definitions of the term. Not much to be gained here.

Comment by Dolores1984 on Good transhumanist fiction? · 2012-10-14T03:28:55.322Z · LW · GW

How about a cyborg whose arm unscrews? Is he not augmented? Most of a cochlear implant can be removed. Nothing about transhumanism says your augmentations have to be permanently attached to your body. You need only want to improve yourself and your abilities, which a robot suit of that caliber definitely accomplishes.

And, yes, obviously transhumanism is defined relative to historical context. If everyone's doing it, you don't need to have a word for it. That we have a word implies that transhumanists are looking ahead, and looking for things that not everyone has yet. So, no, your car doesn't make you a transhumanist, but a robotic exoskeleton might be evidence of that philosophy.

Comment by Dolores1984 on Good transhumanist fiction? · 2012-10-13T21:09:37.038Z · LW · GW

I think the suit definitely counts as human augmentation. Plus, he designs his augmentations himself. Captain America just used the technology of some guy who then promptly proceeded to die, making the process unrepeatable for some reason. Stark is constantly refining his stuff.

Comment by Dolores1984 on Good transhumanist fiction? · 2012-10-13T19:00:19.340Z · LW · GW

The obvious counter-example is Iron Man, especially in the films.

Comment by Dolores1984 on [Link] Scientists to simulate human brain inside a supercomputer · 2012-10-13T18:56:40.366Z · LW · GW

Simply to replicate one of the 10,000 neuron brain cells involved in the rat experiment took the processing capacity usually found in a single laptop.

Good lord, what sort of neuron model are they running? There has got to be a way to optimize that.

Comment by Dolores1984 on When does something stop being a “self-consistent idea” and become scientific fact? · 2012-10-02T21:38:31.519Z · LW · GW

Your four criteria leave an infinite set of explanations for any phenomenon. Including, yes, George the Giant. That's why we have the idea of Occam's razor - or, more formally, Solomonoff Induction. Though I suppose, depending on the data available to the tribe, the idea of giant humans might not be dramatically more complicated than plate tectonics. It isn't like they postulated a god of earthquakes or some nonsense like that. At minimum, however, they are privileging the George the Giant hypothesis over the other equally-complicated plausible explanations. The real truth is that they don't have enough data to come up with the real answers. They need to start recording data and studying the natural world. They can probably figure it out in a few hundred years if they really put their backs into it.
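To make the Occam's razor point concrete (this is the standard formalization, not something the tribe could actually compute): Solomonoff induction assigns each hypothesis h a prior weight that shrinks exponentially with its description length K(h),

    P(h) \propto 2^{-K(h)}

so George the Giant loses exactly when some shorter program reproduces the same earthquake observations.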

Comment by Dolores1984 on The Useful Idea of Truth · 2012-10-02T06:47:35.326Z · LW · GW

When we try to build a model of the underlying universe, what we're really doing is trying to derive the properties of a program which we are observing (and a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).

So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less than the amount of information in the observable universe (or else we couldn't reason about it).

So the question to ask is really "can I imagine a program state that would make this proposition true, given my current beliefs about the organization of the program?"

This is resilient to the atoms / QM thing, at least, as you can always change the underlying program description to better fit the evidence.

Although, in practice, most of what intelligent entities do can more precisely be described as 'grammar fitting' than 'program induction.' We reason probabilistically, essentially by throwing heuristics at a wall to see what offers marginal returns on predicting future sense impressions, since trying to guess the next word in a sentence by reverse-deriving the original state of the universe-program and iterating it forwards is not practical for most people. That massive mess of semi-rational, anticipation-justified rules of thumb is what allows us to reason in the day to day.

So a more pragmatic question is 'how does this change my anticipation of future events?' or 'What sense experiences do I expect to have differently as a result of this belief?'

It is only when we seek to understand more deeply and generally, or when dealing with problems of things not directly observable, that it is practical to try to reason about the actual program underlying the universe.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T21:01:05.985Z · LW · GW

Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5nm x 5nm x 5nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they're just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next couple of years, and since you could get excellent economies of scale for scanning on parallel machines, I do not expect the scanners themselves to be the bottleneck technology.
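For a sense of scale, here's why scan rate rather than resolution is the worry (back-of-the-envelope only; the ~1.4 liters of tissue and ~1 byte per voxel are my assumptions, not anyone's published figures):

    # Rough data volume for a whole-brain scan at 5 nm isotropic voxels.
    brain_volume_m3 = 1.4e-3            # ~1.4 liters of tissue (assumption)
    voxel_side_m = 5e-9                 # 5 nm resolution
    voxels = brain_volume_m3 / voxel_side_m ** 3
    data_bytes = voxels * 1             # ~1 byte per voxel (assumption)
    print(f"{voxels:.1e} voxels, ~{data_bytes / 1e21:.0f} ZB")
    # -> 1.1e+22 voxels, ~11 ZB

At a gigavoxel per second, a single machine would need on the order of 10^5 years to cover that volume, which is why multi-beam scan rates and massively parallel machines are the whole game.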

The other possible bottleneck is the actual neuroscience, since we've got a number of blind spots in the details of how large-scale neural machinery operates. We don't know all the factors we would need to stain for, we don't know all of the details of how synaptic morphology correlates with statistical behavior, and we don't know how much detail we need in our neural models to preserve the integrity of the whole (though we have some solid guesses). We also do not, to the best of my knowledge, have reliable computational models of glial cells at this point. There are also a few factors of questionable importance, like passive neurotransmitter diffusion and electrical induction that need further study to decide how (if at all) to account for them in our models. However, progress in this area is very rapid. The Blue Brain project alone has made extremely strong progress in just a few years. I would be surprised if it took more than fifteen years to solve the remaining open questions.

Large-scale image processing and data analytics, for parsing the scan images, is a sufficiently mature science that it's not my primary point of concern. What could really screw it up is if Moore's law craps out in ten years, as Gordon Moore has predicted, and none of the replacement technologies are advanced enough to pick up the slack.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T18:29:41.924Z · LW · GW

Additionally, reality and virtual reality can get a lot fuzzier than that. If AR glasses become popular, and a protocol exists to swap information between them to allow more seamless AR content integration, you could grab all the feeds coming in from a given location, reconstruct them into a virtual environment, and insert yourself into that environment, which would update with the real world in real time. People wearing glasses could see you as though you were there, and vice versa. If you rented a telepresence robot, it would prevent people from walking through you, and allow you to manipulate objects, shake hands, that sort of thing. The robot would simply be replaced by a rendering of you in the glasses. Furthermore, you could step from that real environment seamlessly into an entirely artificial environment, and back again, and overlay virtual content onto the real world. I suspect that in the next twenty years, the line between reality and virtual reality is going to get really fuzzy, even for non-uploads.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T17:38:16.703Z · LW · GW

If you're talking about people frozen after four-plus hours of room-temperature ischemia, I'd agree with you that the odds are not good. However, somebody with a standby team, perfused before ischemic clotting can set in and vitrified quickly, has a very good chance in my book. We've done SEM imaging of optimally vitrified dead tissue, and the structural preservation is extremely good. You can go in and count the pores on a dendrite. There simply isn't much information lost immediately after death, especially if you get the head in ice water quickly.

I also have quite a high confidence that we'll be seeing WBE technology in the next forty years (I'd wager at better than even odds that we'll see it in the next twenty). The component technologies already exist (and need only iterative improvements), and many of them are falling exponentially in cost. That, combined with what I suspect will be a rather high demand once the potential reaches the public consciousness, is a pretty potent combination of forces.

So, for me, I lose most of my probability mass to the idea that, if you're vitrified now, something will happen to Alcor within 40 years, or, more generally, some civilization-disrupting event will occur in the same time frame. That your brain isn't preserved (under optimal conditions), or that we'll never figure out how to slice up and emulate a brain, are not serious points of concern to me.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T06:28:18.728Z · LW · GW

The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected.

Well, then there's your answer to the question 'what is friendship good for' - whatever other value you place on friendship that makes you neurotypical. I was just trying to point out that that line of reasoning was silly.

Okay, but the reason why rationality has a special ability to help you get more of what you want is because it puts you in touch with reality. Only when you're in touch with reality can you understand it enough to make reality do things you want. In a simulation, you don't need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3. In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?

Well, you have to get to that point, for starters. And, yes, you do need some level of involvement with top-level reality. To pay for your server space, if nothing else. Virtual environments permit a big subset of life (play, communication, learning, etc.) much more efficiently than real life, with a few of the really horrifying sharp edges rounded off, and some additional possibilities added.

There are still challenges to that sort of living, both those imposed by yourself, and those imposed by ideas you encounter and by your interactions with other people. Rationality still has value, for overcoming these sorts of obstacles, even if you're not in imminent danger of dying all the time.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T05:38:11.921Z · LW · GW

Well, there's no reason to think you'd be completely isolated from top-level reality. Internet access is very probable. Likely the ability to rent physical bodies, make phone calls, and so on. You could still get involved in most of the ways you do now: talk to people about it, get a job and donate money to various causes, sign contracts, make legal arrangements to keep yourself safe. That sort of thing.

With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend. If all the problems are happening in a world that neither of you has access to, if you're stuck inside a great big game where nothing can hurt you for real, what basis is there for friendship? What would companionship be good for?

Wait, you only value friendship in so far as it directly aids you? I hate to be the bearer of bad news, but if that's actually true, then you might be a sociopath.

Why are you learning rationality if you don't see value in influencing reality?

Rationality is about maximizing your values. I happen to think that most of my values can be most effectively fulfilled in a virtual environment. If the majority of humanity winds up voluntarily living inside a comfortable, interesting, social, novel Matrix environment, I don't think that's a bad future. It would certainly solve the overcrowding problem, for quite a while at least.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-10-01T04:19:00.598Z · LW · GW

I want meaning, this requires having access to reality. I'll think about it.

Does it? You can have other people in the simulation with you. People find a lot of meaning in companionship, even digitally mediated. People don't think a conversation with your mother is meaningless because it happens over VOIP. You could have lots of places to explore. Works of art. Things to learn. All meaningful things. You could play with the laws of physics. Find out what it feels like to turn gravity off one day and drift out of your apartment window.

If you wake up one morning in your house, go make a cup of coffee, breathe the fresh morning air, and go for a walk in the park, does it really matter if the park doesn't really exist? How much of your actual enjoyment of the process derives from the knowledge that the park is 'real'? It's not something I normally even consider.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T23:52:57.135Z · LW · GW

Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo...

-shrug- so don't leave your brain to science. I figure if somebody is prepared to let their brain decompose on a table while first year medical students poke at it, you might as well try to save their life. Provided, of course, the laws wherever you are permit you to put the results down if they're horrible. Worst case, they're back where they started.

People experience this every day. It's called chemical depression. Even if you don't currently see a way for preservation or revival technology to cause this condition, it exists, it's possible that more than one mechanism may exist to trigger it, and that these technologies may have that as an accidental side-effect.

Chemical depression is not 'absolute misery.' Besides, we know how to treat that now. That we'll be able to bring you back but be unable to tweak your brain activity a little is not very credible. Worst case, once we have the scan, we can always put it back on ice for another decade or two until we can fix the problem.

Uh... no, because I'd be experiencing life, I would just be without what makes me me. That would be horror, not non-existence. So it is not death.

If I took a bunch of Drexler-class nanotech, took your brain, and restructured its material to be a perfect replica of my brain, that would be murder. You would cease to exist. The person living in your head would be me, not you. If brain damage is adequately severe, then you don't exist any more. The 'thing that makes you you' is necessary to 'do the experiencing.'

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T21:43:06.611Z · LW · GW

Depends on your definition of 'you.' Mine is pretty broad. The way I see it, my only causal link to myself of yesterday is that I remember being him. I can't prove we're made of the same matter. Under quantum mechanics, that isn't even a coherent concept. So, if I believe that I didn't die in the night, then I must accept that that's a form of survival.

Uploaded copies of you are still 'you' in the sense that the you of tomorrow is you. I can talk about myself tomorrow, and believe that he's me (and his existence guarantees my survival), even though if he were teleported back in time to now, we would not share a single thread of conscious experience. I can also consider different possibilities tomorrow. I could go to class, or I could go to the store. Both of those hypothetical people are still me, but they are not quite exactly each other.

So, to make a long story short, yes: if an adequately detailed model is made of my brain, then I consider that to be survival. I don't want bad things to happen to future me's.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:39:19.734Z · LW · GW

Remember: you can always take random recently dead guys who donated their bodies to science, vitrify their brains, and experiment on them. And this'll be after years of animal studies and such.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:37:43.081Z · LW · GW

You are overwhelmingly likely not to wake up in a body, depending on the details of your instructions to Alcor. Scanning a frozen brain is exponentially cheaper and technologically easier than trying to repair every cell in your body. You will almost certainly wake up as a computer program running on a server somewhere.

This is not a bad thing. Your computer program can be plugged into software body models in convincing virtual environments, permitting normal human activities (companionship, art, fun, sex, etc.), plus some activities not normally possible for humans. It'll likely be possible to rent mechanical bodies for interacting with the physical world.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:31:12.071Z · LW · GW

There's no reason to experiment on cryo patients. Lots of people donate their brains to science. Grab somebody who isn't expecting to be resurrected, and test your technology on them. Worst case, you wake up somebody who doesn't want to be alive, and they kill themselves.

Number two is very unlikely. We're basically talking brain damage, and I've never heard of a case of brain damage, no matter how severe, doing that.

As for number three, that shambling horror would not be you in a meaningful sense. You'd just be dead, which is the default case. Also, I have my doubts that they'd even bother to try to resurrect you with that much damage if they didn't already have a way of patching the gaps in your neurology.

As for number four, depending on the degree of the disability, suicide or euthanasia is probably possible. Besides, I think it's unlikely they'll be able to drag you back from being a corpsicle without being able to fix problems like that.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:25:45.517Z · LW · GW

Really? I wouldn't put the odds of revival for best-case practices any lower than maybe 10%. How on earth do you have such high confidence that WBE won't be perfected in the next couple of hundred years?

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:16:17.555Z · LW · GW

Living forever isn't quite impossible. If we ever develop acausal computing, or a way to beat the first law of thermodynamics (AND the universe turns out to be spatially infinite), then it's possible that a sufficiently powerful mind could construct a mathematical system containing representations of all our minds that it could formally prove would keep us existent and value-fulfilled forever, and then just... run it.

Not very likely, though. In the meantime, more life is definitely better than less.

Comment by Dolores1984 on Female Test Subject - Convince Me To Get Cryo · 2012-09-30T20:12:14.369Z · LW · GW

If you're revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.

That said, whole brain emulation ought to be experimentally feasible in, what, fifteen years? At a consumer price point in 40? (Assuming the general trend of Moore's law stays constant.) That's little enough time that I think the probability of such a dystopian future is not incredibly large. Especially since Alcor et al. can move around if the laws start to get draconian. So it doesn't just require an evil empire - it requires a global evil empire.
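To spell out the Moore's-law arithmetic behind those dates (a rough sketch, assuming cost per unit of computation halves every two years): the 25-year gap between 'experimentally feasible' and 'consumer price point' buys a cost reduction of about

    2^{25/2} \approx 5.8 \times 10^{3}

i.e., a few-thousandfold drop from lab-scale hardware toward consumer hardware.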

The real risk is that Alcor will fold before that happens, and (for some reason) won't plastinate the brains they have on ice. In which case, you're back in the same boat you started in.

Comment by Dolores1984 on Yale Creates First Self-Aware Robot? · 2012-09-28T21:14:19.119Z · LW · GW

Sure, there's some ambiguity there, but over adequately large sample sizes, trends become evident. Peer reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.

Comment by Dolores1984 on Yale Creates First Self-Aware Robot? · 2012-09-28T18:28:40.093Z · LW · GW

Because it correlates with intelligence and seems indicative of deeper trends in animal neurology. Probably not a signpost that carries over to arbitrary robots, though.

Comment by Dolores1984 on How Likely Is Cryonics To Work? · 2012-09-27T23:24:00.118Z · LW · GW

If cryonics is not performed extremely quickly, ischemic clotting can seriously inhibit cortical circulation, preventing good perfusion with cryoprotectants, and causing partial information-theoretic death. Being cryopreserved within a matter of minutes is probably necessary, barring a way to quickly improve circulation.

Comment by Dolores1984 on From First Principles · 2012-09-27T23:16:33.834Z · LW · GW

I'd like to apologize in advance for marginally lowering the quality of LW discourse. Here we go:

get the tip wet before you stick it in, and don't worry about position.

That's what she said.

EDIT: Yeah, that's fair. Again, sorry. Setup was too perfect.

Comment by Dolores1984 on Rationality Quotes September 2012 · 2012-09-27T19:51:49.635Z · LW · GW

Your idea of provincialism is provincial. The idea of shipping tinned apes around the solar system is the true failure of vision here, never mind the bag check procedures.

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T22:17:08.823Z · LW · GW

Not quite. It actually replaces it with the problem of maximizing people's expected reported life satisfaction. If you wanted to choose to try heroin, this system would be able to look ahead, see that that choice will probably drastically reduce your long-term life satisfaction (more than the annoyance at the intervention), and choose to intervene and stop you.

I'm not convinced 'what's best for people' with no asterisk is a coherent problem description in the first place.
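For clarity, a minimal sketch of the intervention rule described above (hypothetical function and numbers; the hard part, predicting long-term reported satisfaction, is assumed handled elsewhere):

    # Hypothetical sketch: block a choice only when predicted long-term reported
    # life satisfaction is higher under intervention, where the "blocked" estimate
    # already includes the person's annoyance at being overridden.
    def should_intervene(satisfaction_if_allowed: float,
                         satisfaction_if_blocked: float) -> bool:
        return satisfaction_if_blocked > satisfaction_if_allowed

    # Heroin example: a large long-term satisfaction hit outweighs the
    # irritation at being stopped, so the system intervenes.
    print(should_intervene(satisfaction_if_allowed=2.0,
                           satisfaction_if_blocked=6.5))   # -> True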

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T05:58:38.074Z · LW · GW

By bounded, I simply meant that all reported utilities are normalized to a universal range before being summed. Put another way, every person has a finite, equal fraction of the machine's utility to distribute among possible future universes. This is entirely to avoid utility monsters. It's basically a vote, and they can split it up however they like.
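A minimal sketch of that normalization (illustrative numbers; eliciting honest raw scores is assumed away, and reports are assumed nonnegative):

    # Hypothetical sketch: each person's reported utilities over candidate
    # futures are normalized to sum to 1, so every vote has equal weight and
    # a utility monster gains nothing by reporting huge numbers.
    def normalize(raw):
        total = sum(raw.values())
        return {future: u / total for future, u in raw.items()}

    alice = normalize({"A": 1.0, "B": 3.0})        # ordinary preferences
    monster = normalize({"A": 1e9, "B": 1e12})     # wildly inflated reports
    scores = {f: alice[f] + monster[f] for f in ("A", "B")}
    print(max(scores, key=scores.get))             # -> B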

Also, the reflexive consistency criterion should probably be applied even to people who don't exist yet. We don't want plans to rely on creating new people, then turning them into happy monsters, even if it doesn't impact the utility of people who already exist. So, basically, modify the reflexive utility criterion to say that in order for positive utility to be reported from a model, all past versions of that model (to some grain) must agree that they are a valid continuation of themselves.

I'll need to think harder about how to actually implement the approval judgements. It really depends on how detailed the models we're working with are (i.e. capable of realizing that they are a model). I'll give it more thought and get back to you.

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T05:43:50.628Z · LW · GW

I can think of an infinite utility scenario. Say the AI figures out a way to run arbitrarily powerful computations in constant time. Say its utility function is over the survival and happiness of humans. Say it runs an infinite loop (in constant time), consisting of a formal system containing implementations of human minds, which it can prove will have some minimum happiness, forever. Thus, it can make predictions about its utility a thousand years from now just as accurately as ones about a billion years from now, or n, where n is any finite number of years. Summing the future utility of the choice to turn on the computer, from zero to infinity, would give an infinite result. Contrived, I know, but the point stands.
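Written out, the divergence is immediate: if the loop guarantees some per-period happiness floor u_min > 0, then

    \sum_{t=0}^{\infty} u(t) \;\ge\; \sum_{t=0}^{\infty} u_{\min} \;=\; \infty

so the expected utility of switching the machine on is infinite, as claimed.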

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T05:38:46.015Z · LW · GW

If we can extract utility in a purer fashion, I think we should. At the bare minimum, it would be much more run-time efficient. That said, trying to do so opens up a whole can of worms of really hard problems. This proposal, provided you're careful about how you set it up, pretty much dodges all of that, as far as I can tell. Which means we could implement it faster, should that be necessary. I mean, yes, AGI is still a very hard problem, but I think this reduces the F part of FAI to a manageable level, even given the impoverished understanding we have right now. And, assuming a properly modular code base, it would not be too difficult to swap out 'get utility by asking questions' with 'get utility by analyzing the model directly.' Actually, the thing might even do that itself, since it might better maximize its utility function.

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T05:29:16.516Z · LW · GW

Reflexively Consistent Bounded Utility Maximizer?

Hrm. Doesn't exactly roll off the tongue, does it? Let's just call it a Reflexive Utility Maximizer (RUM), and call it a day. People have raised a few troubling points that I'd like to think more about before anyone takes anything too seriously, though. There may be a better way to do this, although I think something like this could be workable as a fallback plan.

Comment by Dolores1984 on Brief Question about FAI approaches · 2012-09-20T00:15:15.595Z · LW · GW

Note the reflexive consistency criterion. That'd only happen if everyone predictably looked at the happy monster and said 'yep, that's me, that agent speaks for me.'