Comments

Comment by Ben_Jones on Fairness vs. Goodness · 2009-02-24T12:46:08.000Z · LW · GW

if both players play (C, C) and then divide up the points evenly at the end, isn't that sort of... well... communism?

Eliezer, you have just replaced Reeves' substance with your own symbol. What's your point here?

Comment by Ben_Jones on Epilogue: Atonement (8/8) · 2009-02-06T15:43:34.000Z · LW · GW

Untranslatable 2 is the thought sharing sex.

Sprite, you are, by definition, wrong.

Comment by Ben_Jones on OB Status Update · 2009-01-28T19:05:20.000Z · LW · GW

Thirding D Franke's idea.

Eliezer, a thought occurs. I'm sure the new setup will be great for everyone who wants to make sure they're using the right priors and calculating the correct odds about whether to bet on Obama or not. Or indeed, trying their utmost to eliminate every source of bias from their life and turn into a giant lookup table or something. But I've much preferred reading your assorted ramblings on things like quantum mechanics, timeless physics, and especially low-level AI theory. Wrong motives? Meh, maybe. I'm sure the answer is 'the elimination of bias and mind projection is the first step along the Way,' and that's fine, and I'm going to get involved. But I guess I just want to know whether or not you'll be writing in the same vein as over the last couple of years, which have opened a huge number of doors for me and shaped how I think.

Caledonian, I don't think you realise just how much you do seem to look forward to that. If Eliezer's so far beyond saving, what's your rationale here?

Comment by Ben_Jones on OB Status Update · 2009-01-27T14:03:15.000Z · LW · GW

Or indeed Marcello Herreshoff?

Exciting stuff. Looking forward to having the OB back catalogue sorted into sequences. That'll make it much easier for me to badger everyone I know to get reading.

Comment by Ben_Jones on Eutopia is Scary · 2009-01-12T11:33:44.000Z · LW · GW

Rob - that's because The Wire is more like real life than real life.

I can see the link to the Chronophone here, Eliezer. What would Benny F have found most shocking about today? How can we extrapolate that forwards?

Surely the most scary changes will be in ethics and the way we think of the human condition and personal identity.

I'm currently most of the way through The Mind's I, and if Hofstadter's (very plausible) musings on identity are anything like accurate, we're going to have to start thinking very differently about who we are, and even whether that question has any real application. My shocking prediction for 100, 500, 1000 years' time? There won't be any individuals, any notion of 'I'.

The repercussions don't really need spelling out or analysing here, and I'm not going to try and predict how things will work. That's all I've got. Human history is a list of examples of our intuitions being exploded by our observations. Individual personal identity over time is an intuitive illusion, and one that'll become increasingly transparent, and less useful, as time goes by. This scares the living hell out of me - I can't think of any way I could possibly feel more out of place.

And I'm certainly not going to write any fiction set in that world.

Comment by Ben_Jones on Continuous Improvement · 2009-01-12T10:57:49.000Z · LW · GW

Eliezer, does The Adaptive Stereo have an analogous application here?

To compress the idea, if you slowly turn down the strength of a signal into perception (in this case sound), you can make large, pleasant, periodic steps up without actually going anywhere. Or at least, you can go slower.

Any logical reason why this wouldn't work for hedons? 'Going digital' might nullify this effect, but in that case we just wouldn't do that, right?
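To make the stereo trick concrete, here's a minimal sketch with entirely made-up numbers: a slow decay plus periodic step-ups gives you big felt jumps with almost no net movement.

```python
# Toy model of the 'adaptive stereo' trick: attenuate the signal too
# slowly to notice, then restore it in one big, very noticeable jump.
decay_per_tick = 0.99   # assumed sub-threshold decay per tick
step_up = 1.5           # assumed well-above-threshold boost
level = 10.0

for tick in range(1, 201):
    level *= decay_per_tick        # gradual, unnoticed turn-down
    if tick % 40 == 0:             # periodic, very noticeable step up
        level *= step_up
        print(f"tick {tick}: big felt jump, level now {level:.2f}")

# 0.99**40 * 1.5 is roughly 1.003, so each cycle delivers a large felt
# gain while the underlying level barely moves.
```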

Finally, I would dispute the notion that a periodic incremental increase in hedons flowing into us is how human pleasure works. The key notion here is surely not pleasure but payoff - continually getting something (even exponentially more of something) for nothing won't feel like an orgasm getting better and better.* Unless you make some serious architectural changes. And, for me at least, that would be filed under 'wirehead'. To continue the orgasm metaphor, if you could press a button (no puns please) and come, you'd quickly get bored. It might even turn you off actual sex.

The future scares me.

I know that we won't necessarily get all these billions and trillions of hedons free - we would probably seek to set up some carrots and sticks of our own etc. But still. It'd be tough not to just plug yourself in given the option. Like you say though, easier to poke holes than to propose solutions. Will ponder on.

Marcello, very well put.

*This is my intuition talking, but surely that's what we're running on here?

Comment by Ben_Jones on Serious Stories · 2009-01-09T11:07:11.000Z · LW · GW

[...]my experience of drugs is as nonexistent as my experience of torture.

There's something imbalanced about that.

Agreed. I'm sure both can be procured somewhere in the Bay Area though. Great material for blogging too!

Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it?

Yes. And that's as horrible an idea as eternal torture. I'm surprised you haven't cited any of the studies about the relative happiness of lottery winners (compared to their expectations), though I seem to remember references in some of the posts about a year back.

Being able to change the rules of the game is dangerous. Being able to change your brain so you perceive the game differently is dangerous. Achieving the capability to do both within a short time window is my favourite candidate for a Great Filter.

Comment by Ben_Jones on Emotional Involvement · 2009-01-08T11:57:51.000Z · LW · GW

Thus fails the Utopia of playing lots of really cool video games forever.

Not convinced. Based on my experience of what people are like, from the moment games are immersive enough and we have the technology to plug in for good, the problem of 'no lasting consequences' will vanish for people who want it to. There are already plenty of people willing to plug into WoW for arbitrary amounts of time if they are able. Small increases in immersiveness and catheter technology will lead to large increases in uptime.

phane touches on something interesting just above. One shouldn't talk about video games or even VR as a special case; one should talk about non-magical-boundary sensory input and our reactions. I'm fully in agreement that you should YANK OUT THE WIRES, but I'm having trouble generalizing exactly why. Something to do with 'the more real your achievements the better'? Doesn't feel right. If this has come up implicitly in the Fun Theory series, apologies for not having spotted it.

Also, seconding Peter dB. Saying 'that reminds me of an episode where...' doesn't deserve a ticking-off, particularly following such posts as 'Use the Try Harder, Luke'. In fact, it can actually be useful to ground things when thinking abstractly, as long as you take care not to follow the fictional logic.

Comment by Ben_Jones on Harmful Options · 2008-12-25T11:26:43.000Z · LW · GW

Hey Rick Astley! Much better than this decision theory crap.

Came across this at work yesterday, which isn't unrelated. For every level of abstraction involved in a decision, or extra option added, I guess we should just accept that 50% of the population will fall by the wayside. Or start teaching decision theory in little school.

Happy Nondenominational Winter Holiday Period, all. Keep it rational.

Comment by Ben_Jones on Disjunctions, Antipredictions, Etc. · 2008-12-23T14:51:58.000Z · LW · GW

I have included in the envelope a means of identifying myself when I claim the money, so that it cannot be claimed by someone impersonating me.

Doesn't that technically make you now Known?

Also, how much time has to pass between an AI 'coming to' and the world ending? What constitutes an AI for this bet?

Eliezer, will you be donating the $10 to the Institute? If so, does this constitute using the wager to shift the odds in your favour, however slightly?

Yes, the last two are jokes. But the first two are serious.

Comment by Ben_Jones on Sensual Experience · 2008-12-21T11:48:26.000Z · LW · GW

Anonymous, that reminds me of an anecdote about Feynman, where he had complex mathematical ideas described to him by young students. He wouldn't fully understand them, but he would imagine a shape, and for each new concept he'd add an extra bit, like a squiggly tail or other appendage. When something didn't fit in right, it would be instantly obvious to him, even if he couldn't explain exactly why.

Improvised sensory modality for maths?

Comment by Ben_Jones on Complex Novelty · 2008-12-21T11:34:44.000Z · LW · GW

And note that Eliezer never answered your question, namely, if you can modify yourself so that you never get bored, do you care about or need to have fun?

Richard, probably you wouldn't care or need to have fun. But why would you do that? Modifying yourself that way would just demonstrate that you value the means of fun more than the ends. Even if you could make that modification, would you?

Comment by Ben_Jones on Prolegomena to a Theory of Fun · 2008-12-19T12:47:37.000Z · LW · GW

How odd, I just finished reading The State of the Art yesterday. And even stranger, I thought 'Theory of Fun' while reading it. Also, nowhere near the first time that something I've been reading has come up here in a short timeframe. Need to spend less time on this blog!

Trying to anticipate the next few posts without reading:

Any Theory of Fun will have to focus on that elusive magical barrier that distinguishes what we do from what Orgasmium does. Why should it be that we place a different value on earning fun than on simply mainlining it? The intuitive answer is that 'fun' is the synthesis of endeavour and payoff. Fun is what our brains do when we are rewarded for effort. The more economical and elegant the effort we put in for higher rewards, the better. It's more fun to play Guitar Hero when you're good at it, right?

But it can't just be about ratio of effort to reward, since orgasmium has an infinite ratio in this sense. So we want to put in a quantity of elegant, efficient effort, and get back a requisite reward. Still lots of taboo-able terms in there, but I'll think further on this.

Comment by Ben_Jones on Visualizing Eutopia · 2008-12-19T12:09:52.000Z · LW · GW

Phil, what Vlad and Nick said. I've no doubt we won't look much like this in 100 years, but it's still humanity and its heritage shaping the future. Go extinct and you ain't shaping nothing. This isn't a magical boundary, it's a pretty well-defined one.

Comment by Ben_Jones on Visualizing Eutopia · 2008-12-17T11:14:32.000Z · LW · GW

'Precise steering' in your sense has never existed historically, yet we exist in a non-null state.

Aron, Robin, we're only just entering the phase during which we can steer things to either a really bad or really good place. Only thinking in the short term, even if you're not confident in your predictions, is pretty irresponsible when you consider what our relative capabilities might be in 25, 50, 100 years.

There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things. They probably 'thought' of themselves as doing just fine, extrapolating a nice stable future of hunting, gathering, procreating etc.

Marcello, have a go at writing a post for this site, I'd be really interested to read some of your extended thoughts on this sort of thing.

Comment by Ben_Jones on For The People Who Are Still Alive · 2008-12-15T12:15:42.000Z · LW · GW

in a Big World, I don't have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients.

Can't say fairer than that.

Eliezer, given the proportion of your selves that get run over every day, have you stopped crossing the road? Leaving the house?

Or do you just make sure that you improve the standard of living for everyone in your Hubble Sphere by a certain number of utilons and call it a good day on average?

Comment by Ben_Jones on Sustained Strong Recursion · 2008-12-07T02:02:26.000Z · LW · GW

design cycles have stayed about the same length while chips have gotten hundreds of times more complex, and also much faster, both of which soak up computing power.

So...if you use chip x to simulate its successor chip y, and chip y to simulate its successor, chip z, the complexity and speed progressions both scale at exactly the right ratio to keep simulation times roughly constant? Interesting stuff.
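If I've read the quoted claim right, the arithmetic is just this (a toy sketch; the growth factor is my assumption):

```python
# If each chip generation is g times more complex and g times faster,
# the time to simulate the next chip on the current one never changes.
g = 100.0                    # assumed per-generation growth in both axes
complexity, speed = 1.0, 1.0

for gen in range(5):
    sim_time = complexity * g / speed   # successor's work / current speed
    print(f"gen {gen}: simulation time = {sim_time:.1f}")
    complexity *= g
    speed *= g
```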

Sounds as though the introduction of black-box 2015 chips would lead to a small bump and level off quite quickly, short of a few huge insights, which Jed seems to suggest are quite rare. Eliezer, is this another veiled suggestion that hardware is not what we need to be working on if we're looking to FOOM?

Changes to software that involve revising pervasive assumptions have always been difficult, of course.

Welcome to Overcoming Bias.

Comment by Ben_Jones on Underconstrained Abstractions · 2008-12-05T11:35:58.000Z · LW · GW

Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor.

Granted, but as long as we can assume that things like numbers of workers, hours worked and level of training won't drop through the floor, then brain emulation or uploading should naturally lead to productivity going through the roof, shouldn't it?

Or is that just a wild abstraction with no corroborating features whatsoever?

Comment by Ben_Jones on Hard Takeoff · 2008-12-03T15:16:49.000Z · LW · GW

because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don't know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.

Now there's a scary thought.

Comment by Ben_Jones on How Many LHC Failures Is Too Many? · 2008-12-02T15:49:00.000Z · LW · GW

Right, that's it, I'm gonna start cooking up some nitroglycerin and book my Eurostar ticket tonight. Who's with me?

I dread to think of the proportion of my selves that have already suffered horrible gravitational death.

Comment by Ben_Jones on Recursive Self-Improvement · 2008-12-02T15:29:16.000Z · LW · GW

Eliezer: part of the AIXI sequence, which I don't think I'll end up writing.

Ahh, that's a shame, though fully understood. Don't suppose you (or anyone) can link to some literature about AIXI? Haven't been able to find anything comprehensive yet comprehensible to an amateur.

Tim Tyler: Brainpower went into making new brains historically - via sexual selection. Feedback from the previous generation of brains into the next generation has taken place historically.

Tim, Dawkins has a nice sequence in The Blind Watchmaker about a species of bird in which the female began selecting for big, lustrous tails. This led to birds with tails so big they could barely fly to escape predators. While selecting for intelligence in a partner is obviously plausible, I'd have to see very compelling evidence that it's leading to continuously smarter people, or even that it ever could. Possibly a loop of some description there, but K definitely < 1.
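For what it's worth, here's that k < 1 hunch as a toy cascade (the value of k is pure assumption): each unit of improvement triggers k more, and anything subcritical fizzles out at 1/(1 - k) rather than taking off.

```python
# Subcritical feedback: each unit of brainpower improvement triggers
# k further units, so the total is the series 1 + k + k^2 + ...
k = 0.5              # assumed; sexual selection presumably gives k << 1
total, pulse = 0.0, 1.0
while pulse > 1e-9:
    total += pulse
    pulse *= k
print(total)         # ~2.0, i.e. 1/(1 - k): a bounded boost, no FOOM
```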

However!

Owing to our tremendous lack of insight into how genes affect brains

So what happens when we start to figure out what those genes do? And then start switching them on and off, and gaining more knowledge and insight into how the brain attacks problems? As we've read recently, natural selection increased brainpower (insight) massively through blind stumbling onto low-hanging fruit in a relatively small amount of time. Why would we suppose it reached any sort of limit - or at least a limit we couldn't surmount? The 18 years to maturity thing is pretty irrelevant here, as long as, say, 5% compound insight can be gained per generation. You're still looking at exponential increases, and you might only need a handful of generations before the FOOM itself switches medium.
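To put toy numbers on that compounding (the 5% figure is of course assumed):

```python
# Assumed 5% compound gain in 'insight' per generation.
gain = 1.05
for generations in (10, 20, 50, 100):
    print(f"{generations} generations: {gain ** generations:.1f}x insight")
# prints 1.6x, 2.7x, 11.5x and 131.5x respectively
```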

Comment by Ben_Jones on Recursive Self-Improvement · 2008-12-01T23:21:57.000Z · LW · GW

Andrew, we're not talking about the equivalent of a human studying neuroscience by groping in the dark. If an AI truly, truly groks every line of its own code, it can pretty much do what it wants with it. No need for trial and error when you have full visibility and understanding of every neuron in your head.

How, you ask? What do such recursive algorithms look like? Mere details; the code monkeys can worry about all that stuff!

Comment by Ben_Jones on Thanksgiving Prayer · 2008-11-28T17:08:51.000Z · LW · GW

Know the feeling. I'm a fully qualified ex-Catholic atheist, but when my girlfriend told me that her family generally just has a pasta dish on Christmas Day, I was shocked. Anything but turkey makes baby Jesus cry!

Those childhood priors sure get burnt in deeply.

Comment by Ben_Jones on The Weighted Majority Algorithm · 2008-11-13T22:57:29.000Z · LW · GW

Silas - yeah, that's about the size of it.

Eliezer, when you come to edit this for popular publication, lose the maths, or at least put it in an endnote. You're good enough at explaining concepts that if someone's going to get it, they're going to get it without the equations. However, a number of those people will switch off there and then. I skipped it and did fine, but algebra is a stop sign for a number of very intelligent people I know.

Comment by Ben_Jones on Worse Than Random · 2008-11-11T20:22:28.000Z · LW · GW

So...noise can be useful in decision theory as long as you don't expect it to do any work. And the mistake gets easier to make the more complex your system. Sounds right enough to me.

[nitpick]

Your 'by definition' link needs a look, Eliezer.

Or imagine that the combination changes every second. In this case, 0-0-0-0, 0-0-0-0 is just as good as the randomized algorithm - no better and no worse.

If it changes every second, trying the same set of four over and over is marginally better than random.

If you've just entered 0-0-0-0 and got it wrong, then on the next try every sequence except 0-0-0-0 has a small chance of having been the correct sequence last second, and hence of being ruled out this try.

Anyone care to work out exactly how much better off 0-0-0-0 is than a random set in this case?
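Here's my own stab, assuming 'changes every second' means the lock re-draws uniformly from the other 9,999 combinations, so it never repeats the previous second's setting:

```python
from fractions import Fraction

N = 10_000  # possible four-digit combinations

# You guessed 0-0-0-0 and failed, so last second's combination was one
# of the other N - 1, and this second's draw excludes whatever it was.
p_repeat = Fraction(1, N - 1)  # 0-0-0-0 is never the excluded one
p_other = Fraction(N - 2, N - 1) * Fraction(1, N - 1)  # g might be excluded

print(float(p_repeat))             # ~0.00010001
print(float(p_other))              # ~0.00010000
print(float(p_repeat / p_other))   # (N - 1)/(N - 2), about 1.0001
```

So repeating 0-0-0-0 beats any other fixed guess by about one part in ten thousand per try. Marginal indeed.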

[/nitpick]

Comment by Ben_Jones on Lawful Creativity · 2008-11-09T15:39:15.000Z · LW · GW

Here is the ultimate work of Modern Art, that truly defies all rules: It isn't mine, it isn't real, and no one knows it exists...

It's...it's beautiful.

Great post for the most part, though I do have to agree with Tim's straw man alert.

Something I learnt while studying postmodern fiction (yeah Eliezer, that's right): Art can be referential, or mimetic, or both, or neither. Most is both, in that it (very roughly) is 'like' reality (i.e. it's mimetic) and 'seeks to tell us something about' reality (i.e. it's referential). However, there's some really interesting stuff that is neither - defying ideas like logic, causation and induction (let alone plot, character etc) and blatantly having no regard for what Eliezer would call terminal values. (Except, in some cases, at a meta-level outside the text. But not in all cases.) Read up on Alain Robbe-Grillet's fiction and Samuel Beckett's 'Trilogy' (and later poetry) for a start. Oh, and John Cage - yes, even 4'33".

Randomness, noise and so on can be astonishingly beautiful, in art or in nature, even to the novice. Or do you think that there are two parts of your brain, one which finds a painting beautiful, and one that finds the night sky beautiful? Yes, there's some high-minded bullshit out there, but as Tim says, please don't draw false boundaries simply to justify your profound bottom line.

And beware of putting a nuts-and-bolts heuristic in place of a sense of aesthetic beauty. You may then find yourself conflicted between finding something beautiful and being unable to understand why. And that truly would be a tragedy.

Comment by Ben_Jones on Back Up and Ask Whether, Not Why · 2008-11-06T20:29:05.000Z · LW · GW

/asks self if he should beat his wife

/realises self is not married

Comment by Ben_Jones on Hanging Out My Speaker's Shingle · 2008-11-06T12:31:45.000Z · LW · GW

I think we should rename it "Robin and Eliezer's Varied Thoughts".

John, Alex: meh. Long as it's interesting, who cares? Nobody promised anyone posts on any particular topic, and nobody's forcing your mouse clicks. If it makes you feel better, rename your bookmark 'Metaphysical Singularity Sci-Fi'.

the strong whiff of sci-fi geekdom that pervades most of Eliezer's posts.

You say it like it's a bad thing.

Comment by Ben_Jones on Today's Inspirational Tale · 2008-11-05T11:00:40.000Z · LW · GW

What would effective cryo policy look like? Or conversely, what in current policy is inhibiting the proper development of cryogenics?

Ruling parties come and go in waves. Work out when you reckon you'll be unfrozen and vote with that year's election in mind.

Question: if you're on your deathbed and about to have your head frozen, should you be allowed to pre-register your votes for the next few elections? "Palin's counting on a low turnout amongst the dead for 2016, as they tend to vote primarily for the Democratic candidate."

Oh, by the way, well done America.

Comment by Ben_Jones on Building Something Smarter · 2008-11-03T13:15:46.000Z · LW · GW

"If your dreams and aspirations interfere with your rationality and impartiality then you're no longer following the Way. The Way doesn't drive one off course; one rather loses sight of the Way and hence goes off course."

[The Book of Eliezer 4:24 18-20]

Comment by Ben_Jones on Mundane Magic · 2008-11-03T11:24:48.000Z · LW · GW

Oh, and don't forget the Mystical Intertubes of Communication, which allow any person with access to the Tubes to 'post' their opinions for others to peruse. Even better, other Intertube users can append inanities to any of these essays with the minimum of thought and effort!

Comment by Ben_Jones on BHTV: Jaron Lanier and Yudkowsky · 2008-11-02T10:28:13.000Z · LW · GW

Wow. At around 20 minutes Jaron wraps his irrationality up in so much florid language it's impossible to follow. There's no arguing with that, but you had a really good stab, Eliezer. I'd have snapped at all the implied barbs. Fascinating all the way through. Three cheers for physical reality!

Comment by Ben_Jones on Mundane Magic · 2008-11-01T11:04:38.000Z · LW · GW

Possession of a single Eye is said to make the bearer equivalent to royalty.

Very good.

How about the miraculous ability to synthesise or isolate chemical compounds from the world that recreate sensations, or even push perception beyond the sensations for which it was designed? I'm always pretty impressed by that one.

Comment by Ben_Jones on Intelligence in Economics · 2008-10-31T16:07:49.000Z · LW · GW

A general theory of intelligence designed for constructing AI's does not need to be universally applicable.

I think the idea is that once that AI is running, it would be nice to have an objective measure of just how powerful it is, over and above how efficiently it can build a car.

Comment by Ben_Jones on Measuring Optimization Power · 2008-10-28T12:41:37.000Z · LW · GW

From The Bedrock of Morality:

For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X.

Does the same apply to optimisation processes? In other words, for every mind that sees you flicking the switch to save the universe, does another mind see only the photon of 'waste' brain heat and think 'photon maximiser accidentally hits switch'? Does this question have implications for impartial measurements of, say, 'impressiveness' or 'efficiency'?

Emile, that's what I thought when I read Tim's comment, but then I immediately asked myself: at what point between water flowing and neurons firing does a process become simple and deterministic? As Eliezer says, to a smart enough mind, we would look pretty basic. I mean, we weren't even designed by a mind - we sprang from simple selection! But yes, it's possible that optimisation isn't involved at all in water, whereas it pretty obviously is with going to the supermarket etc.

peeper, you score 2 on the comment incoherency criterion but an unprecedented 12 for pointlessness, also giving you an average of 7.0. Congrats!

Comment by Ben_Jones on Aiming at the Target · 2008-10-27T12:59:59.000Z · LW · GW

an outcome that ranks high in your preference ordering

Well, if Garry's wins are in the centre of your preference ordering circle, of course you'll lose! Some fighting spirit, please!

Oh, and if something maximising entropy is a valid optimisation process, then surely everything is an optimisation process and the term becomes useless? Optimisation processes lead (locally) away from maximal entropy, not towards it, right?

Comment by Ben_Jones on Crisis of Faith · 2008-10-13T08:36:00.000Z · LW · GW

I would rather not be around people who kept telling me true minutiae about the world and the cosmos, if they have no bearing on the problems I am trying to solve.

Will, not wishing to be told pointless details is not the same as deluding yourself.

I was discussing the placebo effect with a friend last night though, and found myself arguing that this could well be an example of a time when more true knowledge could hurt. Paternalistic issues aside, people appear to get healthier when they believe falsehoods about the effectiveness of, say, homeopathy or sugar pills.

Would I rather live in a world where doctors seek to eliminate the placebo effect by disseminating more true knowledge; or one where they take advantage of it, save more lives, but potentially give out misinformation about what they're prescribing? I honestly don't know.

Comment by Ben_Jones on Shut up and do the impossible! · 2008-10-09T16:42:29.000Z · LW · GW

From a strictly Bayesian point of view that seems to me to be the overwhelmingly more probable explanation.

Now that's below the belt.... ;)

Too much at stake for that sort of thing I reckon. All it takes is a quick copy and paste of those lines and goodbye career. Plus, y'know, all that ethics stuff.

Comment by Ben_Jones on My Bayesian Enlightenment · 2008-10-08T08:41:13.000Z · LW · GW

David,

Throttling an AI to human intelligence is like aiming your brand new superweapon at the world with the safety catch on. Potentially interesting, but really not worth the risk.

Besides, Eliezer would probably say that the F in FAI is the point of the code, not a module bolted into the code. There's no 'building the AI and tweaking the morality'. Either it's spot on when it's switched on, or it's unsafe.

Comment by Ben_Jones on My Bayesian Enlightenment · 2008-10-07T15:40:19.000Z · LW · GW

David, the concept behind the term Singularity refers to our inability to predict what happens on the other side.

However, you don't even have to hold with the theory of a technological Singularity to appreciate the idea that an intelligence even slightly higher than our own (not to mention orders of magnitude faster, and certainly not to mention self-optimizing) would probably be able to do things we can't imagine. Is it worth taking the risk?

Comment by Ben_Jones on My Bayesian Enlightenment · 2008-10-07T15:29:56.000Z · LW · GW

@Phil G:

if you can provide us with some examples - say, ten or twenty - of scientists who had success using this approach.

Phil, the low prevalence of breakthroughs made using this approach is evidence of science's historical link with serendipity. What it is not is evidence that 'Bayesian precision' as Eliezer describes it is not a necessary approach when the nature of the problem calls for it.

Recall the sequence around 'Faster than Einstein'. From a top-down capital-S Science point of view, there's nothing wrong with pootling around waiting for that 'hmmm, that's odd' moment. As you say, science has been ratcheting forward like that for a long while.

However, when you're just one guy with limited resources who wishes to take a mind-boggling step forward in a difficult domain in its infancy, the answer space is small enough that pootling won't get you far at all. (Doubly so when a single misstep kills you dead, as Eliezer's fond of saying.) No-one will start coding a browser and stumble across a sentient piece of code (à la Fleming / Penicillin), let alone a seed FAI. That kind of advance requires a large number of steps, each one technically precise and reliant on its predecessors. Or so I'm told. ;)

People are very fond of saying that General Intelligence may be outside the human sphere of ability - by definition too difficult for us. Well, unless someone tries as hard as it's possible to try, how will we ever know?

Comment by Ben_Jones on Beyond the Reach of God · 2008-10-06T09:07:00.000Z · LW · GW

the probabilities for cryonics look good.

They don't have to look good; they just have to beat the probabilities of your mind surviving the alternatives. Current alternatives: cremation, interment, scattering over your favourite football pitch. Currently I'm wavering between cryonics and Old Trafford.

Eliezer, I'm ridiculously excited about the next fifty years, and only slightly less excited about the fun theory sequence. Hope it chimes with my own.

Comment by Ben_Jones on Use the Try Harder, Luke · 2008-10-06T08:22:44.000Z · LW · GW

What you're saying is that once we've suspended our disbelief about the Force as a cool mysterious property of the universe (three films earlier) there's no call for it to be explained, particularly not in such a crappy way.

That's fair enough.

Comment by Ben_Jones on Use the Try Harder, Luke · 2008-10-05T09:41:21.000Z · LW · GW

In the conceptually impossible possible world where the Force exists in the first place, midichlorians are a foreign invader in the simplest explanation of the Force's structure. You want to move something, therefore it moves.

Fascinating. I'd have thought that a chance to render the Force into a physical instantiation would have been music to your ears.

Magnets can pick up paperclips even when you don't know about electromagnetism. However, to fully understand the magnet, you need a theory of electrons. If you want to use the Force to move something, you don't need to know about midichlorians. However, a good physicalist / reductionist would surely know and feel that the Force should be the result of a physical thing in the universe. As far as I'm concerned 'his midichlorian count is off the charts' and 'the Force is strong with this one' are pretty much synonymous, and I don't have any beef with either statement. I didn't need a physicalist explanation of the Force, but I'm not going to be upset if one is presented. Midichlorians slotted in fine for me. And they're still a good explanation for:

  • Strength with the Force being hereditary
  • Your potential level being, to an extent, predestined (very Star Warsy)
  • People being able to move shit with their minds

Comment by Ben_Jones on Use the Try Harder, Luke · 2008-10-02T18:53:26.000Z · LW · GW

Stupid HTML. The link to IMDB above is still good.

Comment by Ben_Jones on Use the Try Harder, Luke · 2008-10-02T18:49:29.000Z · LW · GW

[nerd] Eliezer, one's mastery of the Force isn't based solely on practice, but on the prevalence of midichlorians in one's blood. Due to his family ties, Luke has plenty - it's just the application and faith that he lacks.

This scene rang very true with me and I don't agree with your gripes. Luke has been training like mad for weeks. He's still at the stage of balancing rocks, while his friends are in great danger and he has no way of reaching them. His frustration reaches breaking point in this scene, hence the sulk. In an ideal world Yoda would use this as a lesson for him. Time constraints and the impending doom of the universe etc mean that he can't, hence the display of mad Force skillz.

Can't believe I wrote all that. I'm reminded of this.

Thom, spot on. Just to elaborate somewhat: in the exposition scene where Luke is hanging from the platform on Bespin, David Prowse actually said 'Obi-Wan killed your father', so even the cast and crew on set didn't know until the film came out. From IMDB:

Security surrounding this movie was so intense that George Lucas had regular reports about "leaks" from actors. George Lucas was so determined that the ending be kept secret that he had David Prowse (Darth Vader) say "Obi-Wan killed your father", and dubbed it later to be "I am your father". In fact, only six people knew about the ending: George Lucas, director Irvin Kershner, writers Leigh Brackett and Lawrence Kasdan, Mark Hamill, and James Earl Jones.

[/nerd]

Comment by Ben_Jones on The Magnitude of His Own Folly · 2008-09-30T16:18:23.000Z · LW · GW

Shane E, meet Caledonian. Caledonian, Shane E.

Nick T - it's worse than that. You'd have to mathematically demonstrate that your novel was both completely American and infallibly Great before you could be sure it wouldn't destroy the world. The failure state of writing a good book is a lot bigger than the failure state of writing a good AI.

Pinprick - bear in mind that if Eliezer considers you more than one level beneath him, your praise will be studiously ignored ;).

Comment by Ben_Jones on The Magnitude of His Own Folly · 2008-09-30T12:22:51.000Z · LW · GW

Yadda yadda yadda, show us the code.

Yes, I'm kidding. Small typo/missing word, end of first paragraph.

Comment by Ben_Jones on That Tiny Note of Discord · 2008-09-25T11:06:02.000Z · LW · GW

Phil, very well articulated and interesting stuff. Have you seen Wall-E? It's the scenario your post warns against, but with physical instead of evolutionary fitness.

I agree that Eliezer seems to have brushed aside your viewpoint without giving it due deliberation, when the topic of the ethics of transcending evolution seems right up his street for blogging on.

However: It considers the "preferences" (which I, being a materialist, interpret as "statistical tendencies") of organisms, or of populations; but not of the dynamic system. Why do you discriminate against the larger system?

Because he can. You're straying close to the naturalistic fallacy here. Just as soon as natural selection gets around to building a Bayesian superintelligence, it can specify whatever function it wants to. We build the AI, we get to give it our preferences. What's unfair about that?

Besides, we departed from selection's straight-and-narrow when we made chocolate, condoms, penicillin and spacecraft. We are what selection made us, with our thousand shards of desire, but I see no reason why we should be constrained by that. Our ethics are long since divorced from their evolutionary origins. It's understandable to worry that this makes them vulnerable - I think we all do. It won't be easy bringing them with us into the future, but that's why we're working hard at it.

@Lara: what 'humans' want as a compiled whole is not what we'll want as individuals

Great description of why people in democracies bitch constantly but never rise up. The collective gets what it wants but the individuals are never happy. If I was a superintelligence I'd just paperclip us all and be done with it.

Comment by Ben_Jones on Fighting a Rearguard Action Against the Truth · 2008-09-24T10:02:03.000Z · LW · GW

Post I'd like to read: Eliezer's Chrono-Conference Call With His Various Previous Selves.

You could even have Eliezer-2018 make an appearance towards the end. Oh, and please write it in the style of the GEB dialogues.