A cynical explanation for why rationalists worry about FAI

post by aaronsw · 2012-08-04T12:27:54.454Z · LW · GW · Legacy · 172 comments


My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.

172 comments

Comments sorted by top scores.

comment by gwern · 2012-08-04T14:10:32.354Z · LW(p) · GW(p)

Let's take the outside view for a second. After all, if you want to save the planet from AIs, you have to do a lot of thinking! You have to learn all sorts of stuff and prove it and just generally solve a lot of eye-crossing philosophy problems which just read like slippery bullshit. But if you want to save the planet from asteroids, you can conveniently do the whole thing without ever leaving your own field and applying all the existing engineering and astronomy techniques. Why, you even found a justification for NASA continuing to exist (and larding out pork all over the country) and better yet, for the nuclear weapons program to be funded even more (after all, what do you think you'll be doing when the Shuttle gets there?).

Obviously, this isn't any sort of proof that anti-asteroid programs are worthless self-interested rent-seeking government pork.

But it sure does seem suspicious that continuing business as usual to the tune of billions can save the entire species from certain doom.

Replies from: aaronsw, pleeppleep, IlyaShpitser, JaneQ
comment by aaronsw · 2012-08-04T14:13:51.742Z · LW(p) · GW(p)

Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is "keep NASA at current funding levels and increase funding for nuclear weapons research" then you should be very suspicious.

Replies from: gwern
comment by gwern · 2012-08-04T19:23:09.255Z · LW(p) · GW(p)

I think you're missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.

Your application of cynicism proves everything, and so proves nothing. Every strategy can be - rightly - pointed out to benefit some group and disadvantage some other group.

The only time this wouldn't apply is if someone claimed a particular risk is higher than estimated while doing absolutely nothing about it whatsoever, and so couldn't benefit from attempts to address it. And in that case, one would be vastly more justified in discounting them because they themselves don't seem to actually believe it, rather than believing them because this particular use of the Outside View doesn't penalize them.

(Or to put it another more philosophical way: what sort of agent believes that X is a valuable problem to work on, and also doesn't believe that whatever Y approach he is taking is the best approach for him to be taking? One can of course believe that there are better approaches for other people - 'if I were a mathematical genius, I could be making more progress on FAI than if I were an ordinary person whose main skills are OK writing and research' - or for counterfactual selves with stronger willpower, but for oneself? This is analogous to Moore's paradox or the epistemic question, what sort of agent doesn't believe that his current beliefs are the best for him to hold? "It's raining outside, but I don't believe it is." So this leads to a remarkable result: for every agent which is trying to accomplish something, we can cynically say 'how very convenient that the approach you think is best is the one you happen to be using! How awfully awfully convenient! Not.' And since we can say it for every agent equally, the argument is entirely useless.)

Incidentally:

it does seem like FAI has a special attraction for armchair rationalists:

I think you badly overstate your case here. Most armchair rationalists seem to much prefer activities like... saving the world by debunking theism (again). How many issues have Skeptic or Skeptical Inquirer devoted to discussing FAI?

There's a much more obvious reason why many LWers would find FAI interesting other than the concept being some sort of attractive death spiral for armchair rationalists in general...

Replies from: aaronsw, MichaelVassar
comment by aaronsw · 2012-08-04T22:02:11.870Z · LW(p) · GW(p)

My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.

comment by MichaelVassar · 2012-08-05T17:45:01.154Z · LW(p) · GW(p)

FHI, for what it's worth, does say that simulation shutdown is underestimated but doesn't suggest doing anything.

comment by pleeppleep · 2012-08-04T19:05:17.583Z · LW(p) · GW(p)

To be fair though, a lot of us would learn the tricky philosophy stuff anyway just because it seems interesting. It is pretty possible that our obsession with FAI stems partially from the fact that the steps needed to solve such a problem appeal to us. Not to say that FAI isn't EXTREMELY important by its own merits, but there are a number of existential risks that pose relatively similar threat levels that we don't talk about night and day.

Replies from: MichaelVassar
comment by MichaelVassar · 2012-08-05T18:08:26.472Z · LW(p) · GW(p)

My actual take is that UFAI is actually a much larger threat than other existential risks, but also that working on FAI is fairly obviously the chosen path, not on EV grounds, but on the grounds of matching our skills and interests.

comment by IlyaShpitser · 2012-08-04T21:48:32.353Z · LW(p) · GW(p)

"But it sure does seem suspicious that continuing business as usual can save the entire species from certain doom."

Doesn't this sentence apply here? What exactly is this community doing that's so unusual (other than giving EY money)?


The frame of "saving humanity from certain doom" seems to serve little point other than a cynical way of getting certain varieties of young people excited.

Replies from: MichaelVassar, gwern
comment by MichaelVassar · 2012-08-05T18:09:31.385Z · LW(p) · GW(p)

As far as I can tell, SI long ago started avoiding that frame because the frame had deleterious effects, but if we wanted to excite anyone, it was ourselves, not other young people.

comment by gwern · 2012-08-04T22:03:24.856Z · LW(p) · GW(p)

What exactly is this community doing that's so unusual (other than giving EY money)?

Exploring many unusual and controversial ideas? Certainly we get criticized for focusing on things like FAI often enough, it should at least be true!

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-08-04T22:36:10.696Z · LW(p) · GW(p)

Saying that you save the world by exploring many unusual and controversial ideas is like saying you save the world by eating ice cream and playing video games.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2012-08-04T23:19:05.420Z · LW(p) · GW(p)

Isn't "exploring many unusual and controversial ideas" what scientists usually do? (Ok, maybe sometimes good scientist do it...) Don't you think that science could contribute to saving the world?

Replies from: IlyaShpitser, TimS
comment by IlyaShpitser · 2012-08-05T02:48:25.813Z · LW(p) · GW(p)

What I am saying is "exploring unusual and controversial ideas" is the fun part of science (along with a whole lot of drudgery). You don't get points for doing fun things you would rather be doing anyways.

Replies from: MichaelVassar
comment by MichaelVassar · 2012-08-05T18:10:26.390Z · LW(p) · GW(p)

Actually, I think you get points for doing things that work, whether they are fun or not.

comment by TimS · 2012-08-05T00:09:22.342Z · LW(p) · GW(p)

Some of the potentially useful soft sciences research is controversial. But essentially no hard sciences research is both (a) controversial and (b) likely to contribute massive improvement in human well-being.

Even something like researching the next generation of nuclear power plants is controversial only in the sense that all funding of basic research is "controversial."

Replies from: Decius
comment by Decius · 2012-08-05T01:11:53.638Z · LW(p) · GW(p)

Nuclear science is controversial for the same reason that equal-access marriage is controversial: Because there are people who have some opinions that cannot be changed by rational argument.

Replies from: TimS, army1987
comment by TimS · 2012-08-05T02:00:24.951Z · LW(p) · GW(p)

There's some ambiguity in your use of the word "science." Nuclear engineering is controversial (i.e. building and running nuclear plants is politically controversial).

But the post I was responding to was about research. In terms of political controversy, I suspect that nuclear researchers receive essentially no hate mail, especially compared to sociologists researching child-rearing outcomes among opposite-sex and same-sex couples.

Replies from: Decius
comment by Decius · 2012-08-05T19:40:18.072Z · LW(p) · GW(p)

Nuclear researchers run reactors. It's pretty much the only commercially viable way to test the effects of neutron bombardment on materials. They typically aren't power plants, because steam turbines are a lot more work to operate and research reactors are typically intermittent and low power (in terms of the electrical grid).

comment by A1987dM (army1987) · 2012-08-05T01:18:35.734Z · LW(p) · GW(p)

If the real reason people want nuclear power plants were their benefits compared to other ways of generating power, they'd use thorium not uranium. http://en.wikipedia.org/wiki/User:RaptorHunter/FunFacts#Thorium_reactor

Replies from: Decius, None
comment by Decius · 2012-08-05T19:34:23.249Z · LW(p) · GW(p)

[citation needed]

No, a user's talk page won't do.

comment by [deleted] · 2012-08-05T15:06:03.197Z · LW(p) · GW(p)

Over-hyped BS. There are regular reactors, and there are breeder reactors (fast neutron reactors); both can use uranium, but only the latter type can use thorium. The latter type also, incidentally, uses a lot less uranium than the former, and can use depleted uranium. The cost of fuel is of no consequence, and all the safety issues are virtually identical for fast neutron reactors using thorium and using uranium (and for both, are expected to be significantly more severe than for regular water-moderated reactors). There's a lot of depleted uranium lying around costing negative $. Not thorium, though.

Replies from: Decius
comment by Decius · 2012-08-05T19:36:44.924Z · LW(p) · GW(p)

I thought that depleted uranium wrapped in paper was pretty much as safe as lead?

Replies from: wedrifid, None
comment by wedrifid · 2012-08-06T18:21:23.307Z · LW(p) · GW(p)

I thought that depleted uranium wrapped in paper was pretty much as safe as lead?

Really? As in... I can sleep with a cubic metre of the stuff under my bed and not expect to get cancer within a decade or two?

Replies from: Decius, thomblake
comment by Decius · 2012-08-06T22:03:16.550Z · LW(p) · GW(p)

Pretty much, yeah. The hexafluoride is somewhat harder to contain, though. And expect long-term brain damage from the block of lead. (The decay chain of U-238 is mostly alpha and beta, which are completely absorbed by paper wrapping. There is some gamma radiation in some of the decay steps, but not significantly more than background for any reasonable amount. 18,500 metric tons of the stuff might have a total activity somewhat higher (=2.74e15 Bq), or one mole of helium produced every 70 years from direct decay alone, and a few mA's worth of electron emissions once the decay products reach equilibrium. It looks like the half-thickness for 'uranium' is about 7mm for Co-60 gamma emissions.)

Doing the calculus, the total unshielded activity at the surface of the block would be equal to the integral of (total activity per unit thickness {14.8 Bq/mg; DU has a density of 18.5 g/cc} * percentage unshielded at that depth):

total surface activity = integral from x = 0 to infinity of (1000 cm^2 * 1000 mg/g * 18.5 g/cm^3 * 14.8 Bq/mg) * (1/2)^(x / 0.7 cm) dx

(1000 cm^2 is the cross-sectional area and 1000 mg/g is a conversion factor, making the first term the total activity per unit thickness; the second term is the percentage unshielded, calculated by raising 1/2 to the number of 7 mm half-thicknesses of uranium above the layer in question. x is in cm.)
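
A quick numeric evaluation of that integral, as an illustrative sketch only; it takes the figures quoted above at face value and reads the exponent as depth divided by the ~7 mm half-thickness:

```python
from math import log

# Figures quoted in the comment above (taken at face value):
area_cm2 = 1000.0        # stated cross-sectional area, cm^2
density_g_cm3 = 18.5     # density of depleted uranium, g/cm^3
activity_Bq_mg = 14.8    # specific activity of DU, Bq/mg
mg_per_g = 1000.0
half_thickness_cm = 0.7  # the ~7 mm gamma half-thickness quoted above

# Activity contributed per cm of depth (Bq/cm)
activity_per_cm = area_cm2 * density_g_cm3 * mg_per_g * activity_Bq_mg

# Integral of (1/2)**(x / half_thickness) dx from 0 to infinity
# evaluates to half_thickness / ln(2).
unshielded_Bq = activity_per_cm * half_thickness_cm / log(2)

print(f"~{unshielded_Bq:.2e} Bq escape the surface unshielded")
# ~2.77e+08 Bq; self-shielding hides nearly all of the block's activity
```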

Overall though, the stochastic effects of ionizing radiation exposure are close enough to zero that studies of the effects of occupational exposure do not find conclusive correlations of long term low-level exposure to disease.

Replies from: wedrifid, None
comment by wedrifid · 2012-08-06T22:42:03.293Z · LW(p) · GW(p)

And expect long-term brain damage from the block of lead.

I don't believe you.

It's wrapped in paper (your stipulation) and under the bed (my stipulation). Are you asserting that the wrapped, undisturbed block of counterfactual lead under my bed is a significant airborne pollution threat?

Overall though, the stochastic effects of ionizing radiation exposure are close enough to zero that studies of the effects of occupational exposure do not find conclusive correlations of long term low-level exposure to disease.

Fascinating, thankyou.

Replies from: Decius
comment by Decius · 2012-08-07T01:00:04.003Z · LW(p) · GW(p)

I wasn't wrapping the lead in paper, but that wasn't fair of me - because frankly, inhaled DU particles ARE much worse than inhaled lead particles, because the 'wrapped in paper' (or painted, I suppose) is significant. DU munitions and armor aren't 'safe' in the sense of failing to contaminate the nearby area, but neither are lead munitions.

comment by [deleted] · 2012-08-06T22:27:29.895Z · LW(p) · GW(p)

I'd be more concerned about the fire hazard if you scratch it (by your depleted uranium paperweight). It's like a lighter sparkler on steroids, from what I've heard. The activity in practice would also depend on how depleted it is (they don't deplete all the way to 0% U-235), and it better not be "dirty" depleted uranium from fuel re-processing. edit: also on how old it is, as the decay chain won't be in equilibrium. It will actually get more radioactive over time.

By the way. Depleted uranium is actually used as radiation shielding for e.g. medical irradiation sources, as well as for tank armor, and everyone knows about the ammunition.

comment by thomblake · 2012-08-06T18:42:39.219Z · LW(p) · GW(p)

For fun: read the parent as implying that wedrifid has slept on top of a cubic meter of lead for decades.

Replies from: wedrifid
comment by wedrifid · 2012-08-06T18:58:34.740Z · LW(p) · GW(p)

For fun: read the parent as implying that wedrifid has slept on top of a cubic meter of lead for decades.

It's so soft! There is no other metal that I've slept on for decades that is more comfortable than lead.

I haven't tried a water bed filled with mercury yet. That actually has potential. The extra mass would absorb the impact or rapid movement of a human more smoothly while maintaining malleable fluidity over a slightly longer timescale. Plus if you attach a glass tube near the head of the bed you can calculate your weight based off changes in mmHg!

Replies from: gwern, Decius, None
comment by gwern · 2012-08-06T21:11:51.520Z · LW(p) · GW(p)

I used to think that my mercury bed was a bad idea and mad as a hatter. But then I gave it a fair try for a few months, and boy did my mind change!

comment by Decius · 2012-08-06T22:23:36.523Z · LW(p) · GW(p)

It's not the mass, it's the viscosity. The higher density would result in a 'firmer' feel, since less immersion would be needed for the same amount of buoyant force.

A more reasonable option might be Gallium - which would be firm on initial contact, but then liquefy.

Replies from: wedrifid
comment by wedrifid · 2012-08-06T22:34:06.535Z · LW(p) · GW(p)

It's not the mass, it's the viscosity.

No, really, it's both. I edited out viscosity since either would be sufficient and I happened to be certain about mass but merely confident about viscosity.

Replies from: Decius
comment by Decius · 2012-08-07T00:43:34.487Z · LW(p) · GW(p)

I assume that the primary mechanism by which mass absorbs impact would be inertia.

Malleable is a property that liquids don't have, so what did you mean by 'maintaining malleable fluidity' that doesn't also result from having the liquid in a closed container with some airspace and some elasticity? How would more inertia help absorb impact (spread the impulse out over a longer period of time)?

comment by [deleted] · 2012-08-06T22:19:56.763Z · LW(p) · GW(p)

That's actually a neat idea. You could use gallium/indium/tin alloy perhaps. Would be easily the most expensive fluid bed.

comment by [deleted] · 2012-08-06T06:57:30.505Z · LW(p) · GW(p)

It's uranium hexafluoride, actually, that's lying around.

Replies from: Decius
comment by Decius · 2012-08-06T18:07:08.173Z · LW(p) · GW(p)

Ah, so it's about as safe as elemental mercury or many mercury compounds then.

Replies from: None
comment by [deleted] · 2012-08-06T21:44:02.818Z · LW(p) · GW(p)

Accidentally rupturing a tank of mercury doesn't usually kill a worker and injure a dozen. A tank of very nasty mercury compound might.

Actually, to steer back to the topic, which is (laudably tolerated here) dislike of rationalists, this argument can make a good tiny pet example of 'rationalist' vs 'experts' debates.

Rationalists believe that by special powers of rationality they are unusually less prone to, for example, the nuclear = scary bias, and say that uranium wrapped in paper is as safe as lead, etc etc. (By the way, also false, uranium is a serious fire hazard). There's a lot of such 'rationalists' around, not just here but everywhere; that's where people get misconceptions like yours from.

Experts actually know the subject matter well enough to conclude something. (Not that I am a nuclear expert, of course; I only have an overall overview of the process, and would defer to experts)

Replies from: Decius, wedrifid
comment by Decius · 2012-08-07T01:56:46.565Z · LW(p) · GW(p)

Frankly, rupturing any tank of just about any hexafluoride compound would be expected to be pretty dangerous.

I'm by no means a nuclear expert, I was just a nuke plant mechanic. The reason I am unafraid of radiation isn't because the fearmongering is baseless, it's because I'm enough of a lay expert to know the magnitude of the actual risks.

How is elemental uranium a fire hazard? Does flame spread across it faster than it spreads across wood paneling? I never considered that kind of hazard to be important, because uranium-as-she-is-used is safe enough from fire.

comment by wedrifid · 2012-08-06T22:22:21.115Z · LW(p) · GW(p)

Actually, to steer back to topic which is (laudably tolerated here) dislike of rationalists, this argument can make good tiny pet example of 'rationalist' vs 'experts' debates.

You can tell your pet "'rationalist' vs 'expert'" example has issues when it can be replaced with "'rationalist' vs 'anyone with a net connection and 30 spare seconds'" and it applies just as well.

Experts actually know the subject matter well enough to conclude something. (Not that I am a nuclear expert, of course; I only have an overall overview of the process, and would defer to experts)

You realise, of course, that this places you squarely on the 'rationalist' side of that artificial dichotomy?

(By the way, also false, uranium is a serious fire hazard).

Not to mention it'll do much more damage to your toe if you drop it on yourself---so much heavier!

Replies from: None
comment by [deleted] · 2012-08-06T22:36:30.956Z · LW(p) · GW(p)

You realise, of course, that this places you squarely on the 'rationalist' side of that artificial dichotomy?

I would defer to experts, I said. This community has a well respected founder apparently leading it by example NOT to defer to experts, but instead go on about how experts are wrong, on the basis of something terribly shaky (the quantum sequence).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-08-07T16:57:14.989Z · LW(p) · GW(p)

... and his position is held by a fair fraction of the experts.

comment by JaneQ · 2012-08-07T00:45:35.628Z · LW(p) · GW(p)

You have to learn all sorts of stuff and prove it and just generally solve a lot of eye-crossing philosophy problems which just read like slippery bullshit.

The enormous problem with philosophy problems is this. Philosophy fails a lot, historically. Fails terribly.

comment by Emile · 2012-08-04T13:14:16.396Z · LW(p) · GW(p)

I agree with the gist of this (Robin Hanson expressed similar worries), though it's a bit of a caricature. For example:

people who really like to spend their time arguing about ideas on the Internet have managed to persuade themselves that they can save the entire species from certain doom just by arguing about ideas on the Internet

... is a bit unfair; I don't think most SIAI folk consider "arguing about ideas on the Internet" to be of much help except for recruitment, raising funds, and occasionally solving specific technical problems (like some decision theory stuff). It's just that the "arguing about ideas on the Internet" is a bit more prominent because, well, it's on the Internet :)

Eliezer, specifically, doesn't seem to do much arguing on the internet, though he did do a good deal of explaining his ideas on the Internet, which more thinkers should do. And I don't think many of us folks who chat about interesting things on LessWrong are under any illusion that doing so is Helping Save Mankind From Impending Doom.

Replies from: aaronsw
comment by aaronsw · 2012-08-04T14:05:43.732Z · LW(p) · GW(p)

Yes, "arguing about ideas on the Internet" is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).

Replies from: None
comment by [deleted] · 2012-08-05T00:18:51.169Z · LW(p) · GW(p)

May I refer you to AIXI, which was a potential design for GAI, that was, by these AI researchers, fleshed out mathematically to the point where they could prove it would kill off everyone?

If that isn't engineering, then what is programming (writing math that computers understand)?

Replies from: CarlShulman
comment by CarlShulman · 2012-08-05T00:51:42.905Z · LW(p) · GW(p)

that was, by these AI researchers, fleshed out mathematically

This was Hutter, Schmidhuber, and so forth. Not anyone at SI.

fleshed out mathematically to the point where they could prove it would kill off everyone?

No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a "delusion box" to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.

Replies from: lukeprog, Alexandros
comment by lukeprog · 2012-08-05T02:12:35.272Z · LW(p) · GW(p)

BTW, I believe Carl is talking about Ring & Orseau's Delusion, Survival, and Intelligent Agents.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-05T02:34:44.246Z · LW(p) · GW(p)

Yes, thanks.

comment by Alexandros · 2012-08-05T05:21:51.434Z · LW(p) · GW(p)

So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent?

Because if it's the first, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden's criticism.

Replies from: CarlShulman, timtyler
comment by CarlShulman · 2012-08-05T05:31:04.639Z · LW(p) · GW(p)

Or were the discoveries independent?

I'm not sure. If they were connected, it was probably by way of the grapevine via the Schmidhuber/Hutter labs.

SI let a huge, track-record-building accomplishment slip through its hands

Meh, people wouldn't have called it huge, and it isn't, particularly. It would have signaled some positive things, but not much.

comment by timtyler · 2012-08-05T13:21:33.320Z · LW(p) · GW(p)

So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it?

Surely Hutter was aware of this issue back in 2003:

Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human “teachers”, or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI. Every intelligence superior to humans is capable of manipulating the latter.

comment by Mitchell_Porter · 2012-08-06T05:15:57.679Z · LW(p) · GW(p)

Aaron, I currently place you in the category of "unconstructive critic of SI" (there are constructive critics). Unlike some unconstructive critics, I think you're capable of more, but I'm finding it a little hard to pin down what your criticisms are, even though you've now made three top-level posts and every one of them has contained some criticism of SI or Eliezer for not being fully rational.

Something else that they have in common is that none of them just says "SI is doing this wrong". The current post says "Here is my cynical explanation for why SI is doing this thing that I say is wrong". (Robin Hanson sometimes does this - introduces a new idea, then jumps to "cynical" conclusions about humanity because they haven't already thought of the idea and adopted it - and it's very annoying.) The other two posts introduce the criticisms in the guise of offering general advice on how to be rational: "Here is a rationality mistake that people make; by coincidence, my major example involves the founder of the rationality website where I'm posting this advice."

I suggest, first of all, that if your objective on this site is to give advice about how to be rational, then you need to find a broader range of examples. People here respect Eliezer, for very good reasons. If you do want to make a concentrated critique of how he has lived his life, then make a post about that, don't disguise it as a series of generic reflections on rationality which just happen to be all about him.

Personally I would be much more interested in what you have to say about the issue of AI. Do you even think AI is a threat to the human race? If so, what do you think we should do about it?

comment by Kaj_Sotala · 2012-08-04T16:08:24.836Z · LW(p) · GW(p)

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.

For what it's worth, I would personally be much happier if I didn't have to worry about FAI and could just do stuff that I found the most enjoyable. I also don't think that the work I do for SI has a very high chance of actually saving the world, though it's better than doing nothing.

I do consider the Singularity Institute a great employer, though, and it provided me a source of income at a time when I was desperately starting to need one. But that happened long after I'd already developed an interest in these matters.

comment by wedrifid · 2012-08-05T15:00:51.521Z · LW(p) · GW(p)

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

What is the "outside view" on how much of an existential risk asteroids are? You know, the one you get when you look at how often asteroid impacts at or near the level that can cause mass extinctions happen? Answer: very damn low.

"The Outside View" isn't just a slogan you can chant to automatically win an argument. Despite the observational evidence from common usage the phrase doesn't mean "Wow! You guys who disagree with me are nerds. Sophisticated people think like I do. If you want to be cool you should agree with me to". No, you actually have to look at what the outside view suggests and apply it consistently to your own thinking. In this post you are clearly not doing so.

After all, if you want to save the planet from an asteroid, you have to do a lot of work!

Something being difficult (or implausible) is actually a good reason not to do it (on the margin).

You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

What the? Where on earth are you getting the idea that building an FAI isn't hard work? Or that it doesn't require building stuff and solving gritty engineering problems?

Replies from: DaFranker
comment by DaFranker · 2012-08-06T02:04:35.201Z · LW(p) · GW(p)

@aaronsw:

What the? Where on earth are you getting the idea that building an FAI isn't hard work? Or that it doesn't require building stuff and solving gritty engineering problems?

I'd like to reinforce this point. If it isn't hard work, please point us all at whatever solution any random mathematician and/or programmer could come up with on how to concretely implement Löb's Theorem within an AI to self-prove that a modification will not cause systematic breakdown or change the AI's behavior in an unexpected (most likely fatal to the human race, if you randomize through all conceptspace for possible eventualities, which is very much the best guess we have at the current state of research) manner. I've yet to see any example of such an application to a level anywhere near this complex in any field of physics, computing or philosophy.

Or maybe you could, instead, prove that there exists Method X that is optimal for the future of the human race, which guarantees that no possible subset of "future humans" contains any human matching the condition "sufficiently irrational yet competent to build the most dangerous form of AI possible".

I mean, I for one find all this stuff about provability theory way too complicated. Please show us the easy-work stay-in-bed version, if you're so sure that that's all there is to it. You must have a lot of evidence to be this confident. All I've seen so far is "I'm being a skeptic, also I might have evidence that I'm not telling you, so X is wrong and Y must be true!"

comment by wedrifid · 2012-08-05T14:39:13.464Z · LW(p) · GW(p)

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further.

This is an unsubstantiated assertion presented with the form of something that should be conclusive. This is bizarre since the SIAI position on Tool AI is not a particular weak point in the SIAI position and the referenced conversation doesn't indicate any withdrawal from reality.

comment by atucker · 2012-08-04T23:53:49.669Z · LW(p) · GW(p)

The actual people at SIAI are much less prone to this than the community.

When I was living in San Francisco, people would regularly discuss various experiments that they were running on themselves, or skills that they were practicing. If I tried to assert something without concrete examples or predictions, people would be skeptical.

comment by wedrifid · 2012-08-06T23:09:18.788Z · LW(p) · GW(p)

OK, I believe we have more than enough information to consider him identified now:

  • Dmytry
  • private_messaging
  • JaneQ
  • Comment
  • Shrink
  • All_work_and_no_play

Those are the currently known sockpuppets of Dmytry. This one warrants no further benefit of the doubt. It is a known troll wilfully abusing the system. To put it mildly, this is something I would prefer not to see encouraged.

Replies from: gwern, CarlShulman, CarlShulman, Eliezer_Yudkowsky
comment by gwern · 2012-08-06T23:26:36.282Z · LW(p) · GW(p)

I agree. Dmytry was OK; private_messaging was borderline but he did admit to it and I'm loath to support the banning of a critical person who is above the level of profanity and does occasionally make good points; JaneQ was unacceptable, but starting Comment after JaneQ was found out is even more unacceptable. Especially when none of the accounts were banned in the first place! (Were this Wikipedia, I don't think anyone would have any doubts about how to deal with an editor abusing multiple socks.)

Replies from: wedrifid, None
comment by wedrifid · 2012-08-07T06:00:31.371Z · LW(p) · GW(p)

private_messaging was borderline but he did admit to it

Absolutely, and he also stopped using Dmytry. My sockpuppet aversion doesn't necessarily have a problem with abandoning one identity (for reasons such as the identity being humiliated) and working to establish a new one. Private_messaging earned a "Do Not Feed!" tag itself through consistent trolling but that's a whole different issue to sockpuppet abuse.

JaneQ was unacceptable

And even used in the same argument as his other account, with them supporting each other!

Replies from: Kawoomba
comment by Kawoomba · 2012-08-07T22:06:24.041Z · LW(p) · GW(p)

Private_messaging earned a "Do Not Feed!" tag itself through consistent trolling

What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?

If I were to try to construct some repertoire model of him (e.g. signalling intellectual superiority by contradicting the alphas, which seems like a standard contrarian mindset), it might be a good match. But frankly: why care? His points should stand or fall on their own merit, regardless of why he chose to make them.

He raised some excellent points regarding e.g. Solomonoff induction that I've yet to see answered (e.g. accepting simple models with assumed noise over complex models with assumed less noise, given the enormously punishing discounting for length, which may only work out in theoretical complexity class calculations and Monte Carlo approximations with a trivial solution), and while this is a CS-dominated audience, additional math proficiency should be highly sought after -- especially for contrarians, since it makes their criticisms that much more valuable.

Is he a consistent fountain of wisdom? No. Is anyone?

I will not defend sockpuppet abuse here, though; that's a different issue and one I can get behind. Don't take this comment personally; the sentiment was spawned from when he had just 2 known accounts but was already met with high levels of "do not feed!", and your comment just now seemed as good a place as any to voice it.

Replies from: Wei_Dai, Vladimir_Nesov, wedrifid
comment by Wei Dai (Wei_Dai) · 2012-08-08T08:50:10.866Z · LW(p) · GW(p)

He raised some excellent points regarding e.g. Solomonoff induction that I've yet to see answered

Can you link to the original post or comment? Your restatement of whatever he wrote is not making much sense to me.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-08T10:02:56.378Z · LW(p) · GW(p)

Well there is definitely some sort of a Will Newsome-like projection technique going on, i.e. his comments - those that are on topic - are sometimes sufficiently opaque so that the insight is generated by the reader filling in the gaps meaningfully.

The example I used was somewhat implicit in this comment:

You end up modelling a crackpot scientist with this. Pick simplest theory that doesn't fit the data, then distrust the data virtually no matter how much evidence is collected, and explain it as people conspiring, that's what the AI will do. Gets even worse when you are unable to determine minimum length for either theory (which you are proven unable).

The universal prior discount for length is so severe (just a 20 bits longer description = 2^20 discounting, and what can you even say with 20 bits?), that this quote from Shane Legg's paper comes as little surprise:

"However it is clear that only the shortest program for will have much affect (sp) on [the universal prior]."

If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.

How well does that argument hold up to challenges? I'm not sure, I haven't thought AIXI sufficiently through when taking into account the map-territory divide. But it sure is worthy of further consideration, which it did not get.
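
To make the scale of that discount concrete, here is a minimal sketch (plain arithmetic in Python, with invented lengths and no claim about any real implementation) of the prior ratio between two programs that differ in length by 20 bits, a kilobit, and a megabit:

```python
from math import log10

# Under the universal prior, a program of length L bits gets weight ~2**(-L),
# so a description that is K bits longer starts out down-weighted by 2**K.
for extra_bits in (20, 1_000, 1_000_000):
    # Work in log10; the raw ratios overflow ordinary floats almost immediately.
    ratio_log10 = extra_bits * log10(2)
    print(f"{extra_bits:>9,} extra bits -> prior ratio of roughly 10^{ratio_log10:,.0f}")

# Roughly 10^6, 10^301, and 10^301,030 respectively.
```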

Here are some other comments that come to mind: this comment of his, which I interpreted as essentially referring to what I explained in my answering comment.

There's a variation of that point in this comment, third paragraph.

He also linked to this marvelous presentation by Marcus Hutter in another comment, which (the presentation) unfortunately did not get the attention it clearly deserves.

There's comments I don't quite understand on first reading, but which clearly go into the actual meat of the topic, which is a good direction.

My perspective is this: As long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude, or his interspersed troll comments. That which can be killed by truth should be, this aphorism still holds true for me when substituting "truth" for "meaningful argument". Those deserve answers, not ignoring, regardless of their source.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2012-08-08T21:38:15.036Z · LW(p) · GW(p)

If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.

It looks to me like you're reading your own interpretation into what he wrote, because the sentence he wrote before "You end up with" was

they are not uniquely determined and your c can be kilobits long, meaning, one hypothesis can be given prior >2^1000 larger than another, or vice versa, depending to choice of the language.

which is clearly talking about another issue. I can give my views on both if you're interested.

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.

On the issue you raised, a hypothesis of "simple model + random errors" must still match the past history perfectly to not be discarded, and the exact errors would have to be part of the hypothesis (i.e., program) and therefore count towards its length.
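
A minimal sketch of that accounting, with invented numbers: once the mispredicted observations have to be written into the program itself, the shorter-but-wrong hypothesis only keeps its prior advantage while its accumulated corrections stay smaller than the length gap.

```python
def total_length(model_bits: int, correction_bits: int) -> int:
    """Description length of 'model + explicit list of observations it got wrong'."""
    return model_bits + correction_bits

SIMPLE = 10_000                 # bits for the too-simple model (invented figure)
COMPLEX = 10_000 + 1_000_000    # a megabit longer, but assumed to predict the data exactly

for mispredicted_bits in (0, 500_000, 1_000_000, 2_000_000):
    simple = total_length(SIMPLE, mispredicted_bits)  # must encode every error it made
    exact = total_length(COMPLEX, 0)
    favoured = "simple + corrections" if simple < exact else "complex exact model"
    print(f"{mispredicted_bits:>9,} bits of corrections -> favoured: {favoured}")

# The 'crackpot' hypothesis wins only until its hard-coded corrections approach
# the length gap (here a megabit); after that the longer exact model dominates,
# which is the point made above and in roystgnr's comment below.
```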

My perspective is this: As long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude, or his interspersed troll comments. That which can be killed by truth should be, this aphorism still holds true for me when substituting "truth" for "meaningful argument". Those deserve answers, not ignoring, regardless of their source.

I defended private_messaging/Dmytry before for similar reasons, but the problem is that it's often not fun to argue with him. I do engage with him sometimes if I think I can draw out some additional insights or get him to clarify something, but now I tend not to respond just to correct something that I think is wrong.

Replies from: private_messaging
comment by private_messaging · 2012-08-09T11:31:00.477Z · LW(p) · GW(p)

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.

Are you picturing an AI that has a simulated multiverse from the big bang up inside a single universe, and then just uses camera sense data to very rapidly pick the right universe? Well yes, that will dispose of a 2^1000 prior very easily. Something that is instead e.g. modelling humans using a minimum amount of guessing without knowing what's inside their heads, and which can't really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting right the fine details of its grand unified theory of everything, and most closely approximates a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into the human mind model to build its theory of everything by intelligent design).

How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem. edit: and the very strong intuition I have is that you can't just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-09T13:15:54.807Z · LW(p) · GW(p)

How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.

I certainly don't disagree when you put it like that, but I think the convention around here is when we say "SI/AIXI will do X" we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of this latter problem). The reason for saying "SI/AIXI will do X" may for example be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn't expect, or just to better understand what it might mean to be ideally rational.

comment by Vladimir_Nesov · 2012-08-08T12:07:19.207Z · LW(p) · GW(p)

Solomonoff induction never ignores observations.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-08T12:26:25.847Z · LW(p) · GW(p)

One liners, eh?

It's not so much ignoring observations as testing models that allow for your sense data to be subject to both Gaussian noise and systematic errors, i.e. explaining part of the observations as sensory fuzziness.

In such a case, an overly simple model that posits e.g. some systematic error in its sensors may have an advantage over an actually correct albeit more complex model, due to the way that the length penalty for the Universal Prior rapidly accumulates.

Imagine AIXI coming to the conclusion that the string it is watching is in fact partly output by a random string generator that intermittently takes over. If the competing (but potentially correct) model that works without such a random string generator needs just a megabit more space to specify, do the math.

I'll still have to think upon it further. It's just not something to be dismissed out of hand, and it's just one of several highly relevant tangents (since it pertains to real-world applicability; if it's a design byproduct it might well translate to any Monte Carlo or assorted formulations). It might well turn out to be a non-issue.

Replies from: roystgnr, cousin_it, FeepingCreature
comment by roystgnr · 2012-08-09T14:42:44.139Z · LW(p) · GW(p)

Does AIXI admit the possibility of random string generators? IIRC it only allows deterministic programs, so if it sees patterns a simple model can't match, then it's forced to update the model with "but there are exceptions: bit N is 1, and bit N+1 is 1, and bit N+2 is 0... etc" to account for the error. In other words, the size of the "simple model" then grows to be the size of the deterministic part plus the size of the error correction part. And in that case, even a megabyte of additional complexity in a model would stop effectively ruling out that complex model just as soon as more than a couple megabytes of simple-model-incompatible data had been seen.

comment by cousin_it · 2012-08-08T23:35:45.167Z · LW(p) · GW(p)

Nesov is right.

comment by FeepingCreature · 2012-08-08T23:31:10.262Z · LW(p) · GW(p)

IANAE, but doesn't AIXI work based on prediction instead of explanation? An algorithm that attempts to "explain away" sense data will be unable to predict the next sequence of the AI's input, and will be discarded.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-09T03:05:06.285Z · LW(p) · GW(p)

If your agent operates in an environment such that your sense data contains errors or such that the world that spawns that sense data isn't deterministic, at least not on a level that your sense data can pick up - both of which cannot be avoided - then perfect predictability is out of the question anyways.

The problem then shifts to "how much error or fuzziness of the sense data or the underlying world is allowed", at which point there's a trade-off between "short and enormously more preferred model that predicts more errors/fuzziness" versus "longer and enormously less preferred model that predicts less errors/fuzziness".

This is as far as I know not an often discussed topic, at least not around here, probably because people haven't yet hooked up any computable version of AIXI with sensors that are relevantly imperfect and that are probing a truly probabilistic environment. Those concerns do not really apply to learning PAC-Man.

comment by Vladimir_Nesov · 2012-08-07T22:23:20.253Z · LW(p) · GW(p)

Is he a consistent fountain of wisdom? No. Is anyone?

The fallacy of gray.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-07T22:33:14.699Z · LW(p) · GW(p)

An uncharitable reading; notice the "consistent", and the reference to an acceptable ratio of (implied) signal/noise in the very first sentence.

Also, this may be biased, but I value relevant comments on algorithmic information theory particularly highly, and they are a rare enough commodity. We probably agree on that at least.

comment by wedrifid · 2012-08-08T03:19:49.790Z · LW(p) · GW(p)

What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?

Exactly. I often lament that the word 'troll' contains motive as part of the meaning. I often try to avoid the word and convey "Account to which Do Not Feed needs to be applied" without making any assertion about motive. Those are hard to prove.

As far as I'm concerned if it smells like a troll, has amazing regenerative powers, creates a second self when attacked and loses a limb and goes around damaging things I care about then it can be treated like a troll. I care very little whether it is trying to rampage around and destroy things---I just want to stop it.

comment by [deleted] · 2012-08-06T23:41:05.867Z · LW(p) · GW(p)

I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses (if I think you guys are the Scientology 2.0 I want to see if I can falsify that, ok?), though at some point I was really curious to see what you do about two accounts in the same place talking in the exact same style; that was entirely unscientific, sorry about this.

Furthermore, the comments were predominantly rated at >0 and not through socks rating each other up (I would want to see if first-vote effect is strong but that would require far too much data). Sorry if there is any sort of disruption to anything.

I actually have significantly more respect for you guys now, with regards to considering the commentary, and subsequently non-cultness. I needed a way to test hypotheses. That utterly requires some degree of statistical independence. I do still honestly think this FAI idea is pretty damn misguided (and potentially dangerous to boot), but I am allowing it much more benefit of the doubt.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail ? I may want to post an article sometime in the future (I will try to offer a balanced overview as I see it, and it will have plus points as well. Seriously.).

Also, on Eliezer: I really hate his style but like his honesty, and it's a very mixed feeling all around. I mean, it's atrocious to just go ahead and say "whoever didn't get my MWI stuff is stupid"; that's the sort of stuff that evaporates out a LOT of people, and if you e.g. make some mistakes, you risk evaporating meticulous people. On the other hand, if that's what he feels, that's what he feels; to conceal it is evil.

Replies from: gwern
comment by gwern · 2012-08-06T23:52:43.240Z · LW(p) · GW(p)

I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses

So presumably we can expect a post soon explaining the background & procedure, giving data and perhaps predictions or hash precommitments, with an analysis of the results; all of which will also demonstrate that this is not a post hoc excuse.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail ?

I can't, no. I'd guess you'd have to ask someone at Trike, and I don't know if they'd be willing to help you out...

Replies from: None
comment by [deleted] · 2012-08-06T23:56:25.351Z · LW(p) · GW(p)

Well basically I did expect much more negative ratings, and then I'd just stop posting on those. I couldn't actually set up a proper study without a zillion socks, and that'd be serious abuse. I am currently quite sure you guys are not an Eliezer cult. You might be a bit of an idea cult but not terribly much. edit: Also, as you guys are not an Eliezer cult, and as he actually IS pretty damn good at talking people into silly stuff, that is also evidence he's not building a cult.

re: email address, doesn't matter too much.

edit: Anyhow, I hope you do consider the content of the comments to be of benefit; actually I think you do. E.g. my comment against the idea of overcoming some biases - I finally nailed what bugs me so much about the 'overcomingbias' title and the carried-over cached concept of overcoming them.

edit: do you want me to delete all socks? No problem either way.

comment by CarlShulman · 2012-12-25T14:59:49.142Z · LW(p) · GW(p)

One more:

http://lesswrong.com/user/All_work_and_no_play/

Replies from: gwern
comment by gwern · 2012-12-25T22:30:03.651Z · LW(p) · GW(p)

Agree; that's either Dmytry or someone deliberately imitating him.

comment by CarlShulman · 2012-09-15T03:04:30.931Z · LW(p) · GW(p)

And here's one more (judging by content, style, and similar linguistic issues): Shrink. Also posting in the same discussions as private_messaging.

Replies from: gwern, wedrifid
comment by gwern · 2012-09-15T03:20:03.801Z · LW(p) · GW(p)

It certainly does sound like him, although I didn't notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.

Replies from: CarlShulman, wedrifid
comment by CarlShulman · 2012-09-15T04:08:09.184Z · LW(p) · GW(p)

"For the risk estimate per se" "The rationality and intelligence are not precisely same thing." "To clarify, the justice is not about the beliefs held by the person." "The honesty is elusive matter, "

Characteristic misuse of "the."

"You can choose any place better than average - physicsforums, gamedev.net, stackexchange, arstechnica observatory,"

Favorite forums from other accounts.

Replies from: gwern
comment by gwern · 2012-09-15T04:19:25.115Z · LW(p) · GW(p)

Ah yes, I forgot Dymtry had tried discussing LW on the Ars forums (and claiming we endorsed terrorism, etc. He got shut down pretty well by the other users.) Yeah, how likely is it that they would both like Ars forums...

comment by wedrifid · 2012-09-15T04:43:59.664Z · LW(p) · GW(p)

It certainly does sound like him, although I didn't notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.

He did open by criticising many worlds, and in subsequent posts had an anti-LW and SIAI chip on his shoulder that couldn't plausibly have been developed in the time the account had existed.

comment by wedrifid · 2012-09-15T03:19:08.169Z · LW(p) · GW(p)

Well spotted. I hadn't even noticed the Shrink account existing, much less identified it by the content. Looking at the comment history I agree it seems overwhelmingly likely.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T10:02:19.537Z · LW(p) · GW(p)

Huh, I didn't see this whole conversation before. Will update appropriately.

comment by wedrifid · 2012-08-07T12:12:52.402Z · LW(p) · GW(p)

You are using testosterone to boost performance, it also clouds social judgment severely, and in so much as I know it, I can use it to dismiss literally anything you say (hard to resist temptation to, at times).

Counter incremented.

comment by [deleted] · 2012-08-04T18:05:56.342Z · LW(p) · GW(p)

People at the upper end of the IQ spectrum get lonely for someone smarter to talk to. It's an emotional need for an intellectual peer.

Replies from: David_Gerard, Arkanj3l
comment by David_Gerard · 2012-08-08T09:13:53.521Z · LW(p) · GW(p)

Lots of really smart people here is something I find very attractive about LW.

(Of course, smart and stupid are orthogonal, and no-one does stupid quite as amazingly as really smart people.)

comment by Arkanj3l · 2012-08-16T01:12:24.739Z · LW(p) · GW(p)

That doesn't privilege FAI, methinks, and seems too charitable as an after-the-fact explanation with not so much as a survey.

comment by NancyLebovitz · 2012-08-04T15:19:02.372Z · LW(p) · GW(p)

My cynical take on FAI is that it's a goal to organize one's life around which isn't already in the hands of professionals. I have no idea whether this is fair.

Replies from: private_messaging, jsalvatier
comment by private_messaging · 2012-08-04T15:57:59.232Z · LW(p) · GW(p)

There's also the belief that UFAI is in the hands of professionals... and that professionals miss some big-picture insights that you could have without even knowing the specifics of the cognitive architecture of the AI, etc.

comment by jsalvatier · 2012-08-04T19:14:49.968Z · LW(p) · GW(p)

I couldn't parse "it's a goal to organize one's life around which isn't already in the hands of professionals".

Replies from: NancyLebovitz, arundelo
comment by NancyLebovitz · 2012-08-04T19:39:51.170Z · LW(p) · GW(p)

For people who are looking for a big goal so that their lives make sense, FAI is a project where it's possible to stake out territory relatively easily.

comment by arundelo · 2012-08-04T19:35:41.408Z · LW(p) · GW(p)

"it's (a goal to organize one's life around) which isn't already in the hands of professionals"

=

"it's a goal around which to organize one's life that isn't already in the hands of professionals"

Replies from: Decius
comment by Decius · 2012-08-05T01:00:46.110Z · LW(p) · GW(p)

It is a goal. It is not already in the hands of professionals. It is something around which one can organize one's life. It's hard to clear up the ambiguity about whether it is one's life that isn't in the hands of professionals.

comment by timtyler · 2012-08-04T14:16:47.907Z · LW(p) · GW(p)

But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

IMO, a signalling explanation makes more sense. Publicly-expressed concern about moral issues signals to others what a fine fellow you are. In that context, the more far-out the things you care about, the better. Trees, whales, simulated people, distant descendants - they all signal how much you care.

Replies from: torekp
comment by torekp · 2012-08-04T18:38:37.751Z · LW(p) · GW(p)

If I want to signal how much I care, I'll stick with puppies or local soup kitchens, thank you very much. That will get me a lot more warm fuzzies - and respect - from my neighbors and colleagues than making hay about a robot apocalypse.

Replies from: David_Gerard, ModusPonies, timtyler, DanielLC
comment by David_Gerard · 2012-08-04T20:40:13.754Z · LW(p) · GW(p)

Humans are adaptation-executers, not fitness maximisers - and evolved in tribes of not more than 100 or so. And they are exquisitely sensitive to status. As such, they will happily work way too hard to increase their status ranking in a small group, whether it makes sense from the outside view or not. (This may or may not follow from failing to increase their status ranking in more mainstream groups.)

comment by ModusPonies · 2012-08-04T19:22:08.605Z · LW(p) · GW(p)

If you want to maximize respect from a broad, nonspecific community (e.g. neighbors and colleagues), that's a good strategy. If you want to maximize respect from a particular subculture, you could do better with a more specific strategy. For example, to impress your political allies, worry about upcoming elections. To impress members of your alumni organization, worry about the state of your sports team or the university president's competence. To impress folks on LessWrong, worry about a robot apocalypse.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-05T01:21:34.604Z · LW(p) · GW(p)

That's a fully general argument: to impress [people who care about X], worry about [X]. But it doesn't explain why for rationalists X equals a robot apocalypse as opposed to [something else].

Replies from: ModusPonies
comment by ModusPonies · 2012-08-05T04:41:38.685Z · LW(p) · GW(p)

My best guess is that it started because Eliezer worries about a robot apocalypse, and he's got the highest status around here. By now, a bunch of other respected community members are also worried about FAI, so it's about affiliating with a whole high-status group rather than imitating a single leader.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-08-05T07:04:25.741Z · LW(p) · GW(p)

I wouldn't have listened to EY if he weren't originally talking about AI. I realize others' EY origin stories may differ (e.g. HPMOR).

comment by timtyler · 2012-08-04T23:39:30.590Z · LW(p) · GW(p)

Much depends on who you are trying to impress. Around here, lavishing care on cute puppies won't earn you much status or respect at all.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2012-08-05T10:07:44.623Z · LW(p) · GW(p)

That raises the question of why people care about getting status from Less Wrong in the first place. There are many other more prominent internet communities.

Replies from: timtyler
comment by timtyler · 2012-08-05T12:25:07.590Z · LW(p) · GW(p)

Other types of apocalyptic phyg also acquire followers without being especially prominent. Basically the internet has a long tail - offering many special interest groups space to exist.

comment by DanielLC · 2012-08-04T19:07:55.602Z · LW(p) · GW(p)

Yeah, but how much respect will they get you from LessWrong?

comment by Malo (malo) · 2012-08-04T19:45:55.173Z · LW(p) · GW(p)

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

I think the last sentence here is a big leap. Why is this a more plausible explanation than the idea that aspiring rationalists simply find AI-risk and FAI compelling? Furthermore, since this community was founded by someone who is deeply interested in both topics, members who are attracted to the rationality side of this community get a lot of exposure to the AI-risk side. As such, if we accept the premise that AI-risk is a topic that aspiring rationalists are more likely to find interesting than a random member of the general public, then it's not surprising that many end up thinking/caring about it after being exposed to this community.

You seem to attempt to justify this last sentence of the quoted text with the following:

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

I would respond to this by saying that thinking/caring about AI-risk ≠ working on AI-risk. I imagine there are also lots of people who think about the risks of asteroid impacts, but aren't working on solving them, and wouldn't claim they are. Also, this paragraph could be interpreted as saying that people who claim to be doing work on AI-risk (e.g., SI) aren't actually doing any work. It would be one thing to claim the work is misdirected, but to claim they aren't working hard seems (to me) misinformed or disingenuous.

Which then leads into the following:

Indeed, as the Tool AI debate as shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

I think a more accurate characterization of SI's stance would be that there are lots of important philosophical and mathematical problems that, if solved, will increase the likelihood of a positive Singularity, and that those doing what you call the "gritty engineering" haven't properly considered the risks. Your statement seems to trivialize this work, and you cite Holden's criticism as evidence. What specifically in this "debate", including the responses from SI, leads you to believe that SI's approach is "withdrawn from reality"?

comment by fubarobfusco · 2012-08-04T19:33:58.761Z · LW(p) · GW(p)

Scarcely the most cynical conceivable explanation. Here, try this one:

"Yes," declaimed Deep Thought, "I said I'd have to think about it, didn't I? And it occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalize on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and slagging each other off in the popular press, you can keep yourself on the gravy train for life. How does that sound?"

The two philosophers gaped at him.

"Bloody hell," said Majikthise, "now that is what I call thinking. Here Vroomfondel, why do we never think of things like that?"

"Dunno," said Vroomfondel in an awed whisper, "think our brains must be too highly trained, Majikthise."

So saying, they turned on their heels and walked out of the door and into a lifestyle beyond their wildest dreams.

— Douglas Adams, The Hitchhiker's Guide to the Galaxy, ch. 25

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-08-05T09:18:51.506Z · LW(p) · GW(p)

I guess there was an implied additional limitation of being well-meaning on the conscious level.

comment by Dr_Manhattan · 2012-08-04T15:59:57.292Z · LW(p) · GW(p)

Not sure about your sampling method, but a lot of LWers I know (in the NY area) are pretty busy "doing stuff". Propensity for doing stuff does not seem to negatively correlate with FAI concerns, as far as I can tell.

That said, this is a bit of a concern for me as a donor, which is why I think the recent increase in transparency and the spinning off of CFAR is a big positive sign: either the organization is going to be doing stuff in the FAI area (I consider verifiable research doing stuff, and I don't think you can do it all in bed) or not; it's going to be clear either way.

comment by gwern · 2012-08-07T14:59:33.939Z · LW(p) · GW(p)

Keep in mind that real sock puppeteering is about making a strawman sock puppet, or a sock puppet that disagrees cleverly using existing argument you got but tentatively changes the view, or the like.

Sock puppets, both here and on Reddit or Wikipedia, can be used for multiple purposes, not just that.

comment by NancyLebovitz · 2012-08-07T12:42:52.769Z · LW(p) · GW(p)

One reason I have respect for Eliezer is HPMOR-- there's a huge amount of fan fiction, and writing something which impresses both a lot of people who like fan fiction and a lot of people who don't like fan fiction is no small achievement.

Also, it's the only story I know of which gets away with such huge shifts in emotional tone. (This may be considered a request for recommendations of other comparable works.)

Furthermore, Eliezer has done a good bit to convince people to think clearly about what they're doing, and sometimes even to make useful changes in their lives as a result.

I'm less sure that he's right about FAI, but those two alone are enough to make for respect.

Replies from: None, Bruno_Coelho
comment by [deleted] · 2012-08-12T00:16:38.827Z · LW(p) · GW(p)

In the context of LessWrong and FAI, Yudkowsky's fiction writing abilities are almost entirely irrelevant.

comment by Bruno_Coelho · 2012-08-11T00:59:10.868Z · LW(p) · GW(p)

Eliezer has done a good bit to convince people to think clearly about what they're doing

This is a source of disagreement. "Think clearly and change behavior" is not a good slogan; it is used by numerous groups. But (and the inferential distance here is not clear from the beginning) there are lateral beliefs: computational epistemology, specificity, humans as imperfect machines, etc.

In a broad context, even education in general could fit this phrase, especially for people with no training in gathering data.

comment by DanielLC · 2012-08-04T19:07:30.518Z · LW(p) · GW(p)

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

If you want to do either, you have to do work, get paid, and donate to the appropriate charity. The only difference is where you donate.

I can't seem to find a charity for asteroid impact avoidance. As far as I can tell, everything going into that is done by governments. Even if it is, I would still expect to find lobbyists involved. Also, I wonder if it's possible to donate directly to that stuff.

comment by [deleted] · 2012-08-05T00:38:47.676Z · LW(p) · GW(p)

it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

What? Really? I happen to have drunk the kool-aid and think this is an important problem. So it is very important to me that you tell me how it is I could solve FAI without doing any real work.

comment by Manfred · 2012-08-04T13:46:41.729Z · LW(p) · GW(p)

I was all prepared to vote this up from "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?" But then you had to go and be wrong - suggesting some sort of "lazy ideas only" search process that makes no sense historically, and conflating LW and SI.

Replies from: aaronsw, NancyLebovitz
comment by aaronsw · 2012-08-04T14:11:01.256Z · LW(p) · GW(p)

Can you point to something I said that you think is wrong?

My understanding of the history (from reading an interview with Eliezer) is that Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality. But whether that's true or not, I don't see how that's inconsistent with the notion that Eliezer and a bunch of people similar to him are suffering from motivated reasoning.

I also don't see how I conflated LW and SI. I said many LW readers worry about UFAI and that SI has taken the position that the best way to address this worry is to do philosophy.

Replies from: Manfred
comment by Manfred · 2012-08-04T15:01:38.486Z · LW(p) · GW(p)

You're right that you can interpret FAI as motivated reasoning. I guess I should have considered alternate interpretations more.

Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality.

Well, kinda. Eliezer concluded the singularity was the most important thing to work on and then decided the best way to work on it was to code an AI as fast as possible, with no particular regard for safety.

I also don't see how I conflated LW and SI

"[...] arguing about ideas on the internet" is what I was thinking of. It's a LW-describing sentence in a non-LW-related area. Oh, and "Why rationalists worry about FAI" rather than "Why SI worries about FAI."

Replies from: aaronsw
comment by aaronsw · 2012-08-04T15:07:14.317Z · LW(p) · GW(p)

Two people have been confused by the "arguing about ideas" phrase, so I changed it to "thinking about ideas".

Replies from: Manfred
comment by Manfred · 2012-08-04T17:57:00.900Z · LW(p) · GW(p)

It's more polite, and usually more accurate, to say "I sent a message I didn't want to, so I changed X to Y."

Replies from: Decius
comment by Decius · 2012-08-05T01:17:02.613Z · LW(p) · GW(p)

Most accurate would be "feedback indicates that a message was received that I didn't intend to send, so..."

comment by NancyLebovitz · 2012-08-05T01:04:49.040Z · LW(p) · GW(p)

I was all prepared to vote this up from "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"

Maybe. So far as I know, averting asteroids doesn't have as good a writer to inspire people.

comment by [deleted] · 2012-08-09T17:42:16.143Z · LW(p) · GW(p)

actually, let it be the last post here, I get dragged out any time I resolve to leave then check if anyone messaged me.

A two hour self-imposed exile! I think that beats even XiXiDu's record.

comment by Oscar_Cunningham · 2012-08-04T12:40:20.255Z · LW(p) · GW(p)

LessWrong rationality nerds cared so much about creating Friendly AI

I don't!

Replies from: None
comment by [deleted] · 2012-08-04T13:33:23.841Z · LW(p) · GW(p)

Ditto, but not really OP's point.

Replies from: aaronsw
comment by aaronsw · 2012-08-04T14:08:21.570Z · LW(p) · GW(p)

Right. I tweaked the sentence to make this more clear.

comment by Dr_Manhattan · 2012-08-04T15:16:18.094Z · LW(p) · GW(p)

btw, if you're aaronsw in my Twitter feed, welcome to LessWrong^2

comment by Mitchell_Porter · 2012-08-04T12:35:42.722Z · LW(p) · GW(p)

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

Replies from: JoshuaZ, aaronsw
comment by JoshuaZ · 2012-08-04T14:49:49.254Z · LW(p) · GW(p)

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

Some of it yes. At the end of the day though, some of it does lead to real experiments, which need to pay rent. And some of it does quite well at that. Look for example at the recent discovery of the Higgs boson.

Replies from: betterthanwell
comment by betterthanwell · 2012-08-06T10:04:46.211Z · LW(p) · GW(p)

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

Some of it yes. At the end of the day though, some of it does lead to real experiments, which need to pay rent. And some of it does quite well at that. Look for example at the recent discovery of the Higgs boson.

These theoretical physicists had to argue for several decades until they managed to argue themselves into enough money to hire the thousands of people to design, build and operate a machine that was capable of refuting or, as it turned out, supporting their well-motivated hypothesis. Not to mention that the machine necessitated inventing the world wide web and advancing experimental technologies, data processing, and fields too numerous to mention by orders of magnitude compared to what was available at the time.

Perhaps today's theoretical programmers working on some form of General Artificial Intelligence find themselves faced with comparable challenges.

I don't know how things must have looked at the time; perhaps people were wildly optimistic with respect to the expected mass of the scalar boson(s) of the (now) Standard Model of physics, but in hindsight it seems pretty safe to say that the Higgs boson must have been quite impossible for Humanity to experimentally detect back in 1964. Irrefutable metaphysics. Just like string theory, right?

Well, thousands upon thousands of people, billions of dollars, some directly but mostly indirectly (in semiconductors, superconductors, networking, ultra high vacuum technology, etc.) somehow made the impossible... unimpossible.

And as of last week, we can finally say they succeeded. It's pretty impressive, if nothing else.

Perhaps M-theory will be forever irrefutable metaphysics to mere humans, perhaps GAI. As Brian Greene put it: "You can't teach general relativity to a cat." Yet perhaps we shall see further (now) impossible discoveries made in our lifetimes.

comment by aaronsw · 2012-08-04T14:04:34.091Z · LW(p) · GW(p)

There's nothing wrong with arguing on the Internet. I'm merely asking whether the belief that "arguing on the Internet is the most important thing anyone can do to help people" is the result of motivated reasoning.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-04T20:42:04.497Z · LW(p) · GW(p)

The argument I see is that donating money to SIAI is the most important thing anyone can do to help people.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-07T01:30:19.165Z · LW(p) · GW(p)

anyone can do

Even if one thought SIAI was the most effective charity one could donate to at the present margin right now, or could realistically locate soon, this would not be true. For instance, if one was extremely smart and effective at CS research, then better to develop one's skills and take a crack at finding fruitful lines of research that would differentially promote good AI outcomes. Or if one was extremely good at organization and management, especially scholarly management, to create other institutions attacking the problems SIAI is working on more efficiently. A good social scientist or statistician or philosopher could go work at the FHI, or the new Cambridge center on existential risks as an academic. One could make a systematic effort to assess existential risks, GiveWell style, as some folk at the CEA are doing. There are many people whose abilities, temperament, and background differentially suit them to do X better than paying for others to do X.

comment by Wei Dai (Wei_Dai) · 2012-08-09T19:31:27.767Z · LW(p) · GW(p)

Ok, if what you're saying is not "SI concludes this" but just that we don't really know what even the theoretical SI concludes, then I don't disagree with that and in fact have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e. don't criticize them heavily based on this) because it doesn't seem like any other proponent of algorithmic information theory (for example Schmidhuber or Hutter) realizes that Solomonoff Induction may not assign most posterior probability mass to "physics sim + location" type programs, or if they do realize it, choose not to point it out. That presentation you linked to earlier is a good example of this.

I believe that Hutter et all were rightfully careful not to expect something specific, i.e. not to expect it to not kill him, not to expect it to kill him, etc etc.

You would think that if Hutter thought there's a significant chance that AIXI would kill him, he would point that out prominently so people would prioritize working on this problem or at least keep it in mind as they try to build AIXI approximations. But instead he immediately encourages people to use AIXI as a model to build AIs (in A Monte Carlo AIXI Approximation for example) without mentioning any potential dangers.

Those are questions to be, at last, formally approached.

Before you formally approach a problem (by that I assume you mean try to formally prove it one way or another), you have to think that the problem is important enough. How can we decide that, except by using intuition and heuristic/informal arguments? And in this case it seems likely that a proof would be too hard to do (AIXI is uncomputable after all) so intuition and heuristic/informal arguments may be the only things we're left with.
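As a side note for readers less familiar with the formalism, here is a minimal sketch (standard textbook definitions, not anything asserted in the comments above) of the object this exchange is about. For a universal prefix machine U, the Solomonoff prior weights each program p by its length, and the posterior over programs consistent with the observed data x is

    \[
      M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
      \qquad
      \Pr(p \mid x) \;\propto\; 2^{-\ell(p)} \quad \text{for programs } p \text{ whose output begins with } x .
    \]

Whether short "physics simulation + observer location" programs dominate that posterior is exactly the kind of question which, as noted above, currently has to be attacked with intuition and heuristic arguments rather than proof.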

comment by David_Gerard · 2012-08-05T09:29:38.928Z · LW(p) · GW(p)

But an interest in rationality pulls in expertise transferable to all manner of fields, e.g. the 2011 survey result showing 56.5% agreeing with the MWI. (I certainly hope the next survey will ask how many of those saying they agree or disagree with it can solve the Schroedinger equation for a hydrogen atom, and also how many of those expressing a position would understand the solution to the Schroedinger equation for a hydrogen atom if they saw it written out.) So acquiring meaningful knowledge of artificial intelligence is par for the course.

Replies from: gwern
comment by gwern · 2012-08-05T22:26:45.835Z · LW(p) · GW(p)

57%, incidentally, is almost exactly equal to the results of one poll of cosmologists/physicists.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-05T22:46:40.445Z · LW(p) · GW(p)

Come on, you can't just pick the most extreme of varied poll results without giving context.

Replies from: gwern
comment by gwern · 2012-08-05T23:01:25.075Z · LW(p) · GW(p)

Of course I can, just like David can engage in unfair snark about what a number on a poll might mean.

Replies from: HBDfan, David_Gerard
comment by HBDfan · 2012-08-06T09:26:32.702Z · LW(p) · GW(p)

[delete]

comment by David_Gerard · 2012-08-06T12:19:29.181Z · LW(p) · GW(p)

Could you please clarify what aspect you felt was unfair?

Replies from: gwern
comment by gwern · 2012-08-06T19:28:58.422Z · LW(p) · GW(p)

Perhaps you could first unpack your implicit unstated argument from random poll number to sarcastic remarks about not being physicists, so no one winds up criticizing something you then say you didn't mean.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-06T19:46:32.088Z · LW(p) · GW(p)

So ask the question next survey. I do, however, strongly suspect they're expressing an opinion on something they don't actually understand - and I don't think that's an unfair assumption, given most people don't - which would imply they were only doing so because "believe in MWI" is a local trope.

So which bit was unfair?

Replies from: ArisKatsaris, gwern, OrphanWilde
comment by ArisKatsaris · 2012-08-07T08:38:36.669Z · LW(p) · GW(p)

I certainly hope the next survey will ask how many of those saying they agree or disagree with it

Since our certainty was given as a percentage, none of us said we agreed or disagreed with it in the survey, unless you define "agree" as certainty above 50% and "disagree" as certainty below 50%.

Or are you saying that we should default to 50%, in all cases we aren't scientifically qualified to answer of our own strength? That has obvious problems.

"So ask the question next survey. I do, however, strongly suspect they're expressing an opinion on something they don't actually understand"

That's like asking people to explain how consciousness works before they express their belief in the existence of brains, or their disbelief in the existence of ghosts.

comment by gwern · 2012-08-06T20:01:18.212Z · LW(p) · GW(p)

I do, however, strongly suspect they're expressing an opinion on something they don't actually understand - and I don't think that's an unfair assumption, given most people don't - which would imply they were only doing so because "believe in MWI" is a local trope.

What is "actually understand" here and why does it sound like a dichotomy? Are you arguing that one cannot have any opinion about MWI based on any amount of understanding derived from popularizations (Eliezer-written or otherwise) which falls short of one being able to solve technical problems you list?

Surely you don't believe that one is not allowed to hold any opinions or confidence levels without becoming a full-fledged domain expert, but that does sound like what your argument amounts to.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-07T11:21:17.897Z · LW(p) · GW(p)

Given that the MWI is claimed to follow from just taking the equations seriously, I think understanding the equations in question is not an unreasonable prerequisite to having a meaningful opinion on that.

comment by OrphanWilde · 2012-08-06T20:04:25.775Z · LW(p) · GW(p)

Your line of argument could equally apply to quantum physicists.

comment by hankx7787 · 2012-08-04T17:19:13.996Z · LW(p) · GW(p)

I'm not sure about the community at large, but as for some people, like Eliezer, they have very good reasons for why working on FAI actually makes the most sense for them to do, and they've gone to great lengths to explain this. So if you want to limit your cynicism to "armchair rationalists" in the community, fine, but I certainly don't think this extends to the actual pros.

comment by wedrifid · 2012-08-07T13:25:49.906Z · LW(p) · GW(p)

Also, I only thought of that when you would go on how I must feel so humiliated by some comment of yours

(I did not make such a claim, nor would I make one.)

comment by [deleted] · 2012-08-05T00:12:47.218Z · LW(p) · GW(p)

SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

I think it is more that you can't build a genuine General Intelligence before you have solved some intractable mathematical problems, like practical algorithms for Solomonoff induction. Goal stability and human values are just two of many difficult theoretical problems that are not going to be solved by using black-box models, nor by using simple models.

Practical AI researchers do a lot of good work. But remember that a neural network is a statistical classification tool.

(I am aware that neural networks aren't the bleeding edge, but I cannot call to mind a more recent trend. My point stands that AI is fiendishly hard, and you are not going to be able to build anything concise and efficient by solving gritty engineering problems alone. This is even visible in simple programming, where the mathematically elegant functional programming languages are gaining on the mainstream 'gritty engineering' imperative languages.)
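To make the "statistical classification tool" point concrete, here is a minimal sketch (my illustration on made-up toy data, assuming only NumPy; it is not taken from the comment above) in which a small neural network is used exactly the way any other supervised classifier would be:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy labelled data: two Gaussian clusters in 2D, labels 0/1.
    n = 200
    X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
                   rng.normal(+1.0, 1.0, size=(n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    # A tiny one-hidden-layer network: 2 -> 8 -> 1, sigmoid output.
    W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.1
    for step in range(2000):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)            # hidden activations, shape (2n, 8)
        p = sigmoid(h @ W2 + b2).ravel()    # predicted P(y = 1), shape (2n,)
        # Backward pass for the cross-entropy loss (gradients averaged over the data).
        grad_logit = (p - y)[:, None] / len(y)
        grad_W2 = h.T @ grad_logit
        grad_b2 = grad_logit.sum(axis=0)
        grad_h = grad_logit @ W2.T * (1 - h ** 2)   # back through tanh
        grad_W1 = X.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)
        # Gradient-descent update.
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    # Report training accuracy, just as one would for any statistical classifier.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    print("training accuracy:", ((p > 0.5) == y).mean())

Nothing in this pipeline looks like general intelligence; it is curve-fitting on labelled data, which is the point the comment is making.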

Replies from: CarlShulman
comment by CarlShulman · 2012-08-07T01:51:40.749Z · LW(p) · GW(p)

I think it is more that you can't build a genuine General Intelligence before you have solved some intractable mathematical problems, like practical algorithms for Solomonoff induction.

Why think that you need to use a Solomonoff Induction (SI) approximation to get AGI? Do you mean to take it so loosely that any ability to do wide-ranging sequence prediction counts as a practical algorithm for SI?

Replies from: None
comment by [deleted] · 2012-08-07T09:55:32.273Z · LW(p) · GW(p)

Well, Solomonoff induction is the general principle of Occamian priors in a mathematically simple universe. I would say that "wide-ranging sequence prediction" would mean you had already solved it with some elegant algorithm. I highly doubt something as difficult as AGI can be achieved with hacks alone.

Replies from: Dolores1984
comment by Dolores1984 · 2012-08-07T18:12:22.508Z · LW(p) · GW(p)

But, humans can't reliably do that, either, and we get by okay. I mean, it'll need to be solved at some point, but we know for sure that something at least human equivalent can exist without solving that particular problem.

Replies from: None
comment by [deleted] · 2012-08-09T09:36:13.259Z · LW(p) · GW(p)

Humans can't reliably do what?

When I say information sequence prediction, I do not mean some abstract and strange mathematics.

I mean predicting your sensory experiences with the help of your mental world model: when you see a glass get brushed off the table, you expect to see the glass fall off the table and down onto the floor.

You expect this exactly because your prior over your sensory organs includes there being a high correlation between your visual impressions and the state of the external world, and because your prior over the external world predicts things like gravity and the glass being affected thereby.

From the inside it seems as if glasses fall down when brushed off the table, but that is the Mind Projection Fallacy. You only ever get information from the external world through your senses, and you only ever affect it through your motor-cortex's interaction with your bio-kinetic system of muscle, bone and sinew.

Human brains are one hell of a really powerful prediction engine.

Replies from: Dolores1984
comment by Dolores1984 · 2012-08-09T20:58:39.210Z · LW(p) · GW(p)

So... you just mean that in order to build AI, we're going to have to solve AI, and it's hard? I'm not sure the weakened version you're stating here is useful.

We certainly don't have to actually, formally solve the SI problem in order to build AI.

Replies from: None
comment by [deleted] · 2012-08-11T23:51:38.850Z · LW(p) · GW(p)

I really doubt an AI-like hack would even look like one if you don't arrive at it by way of maths.

I am saying it is statistically unlikely that you'd get GAI without maths, and a thermodynamic miracle to get FAI without maths. However, my personal intuition is that GAI isn't as hard as, say, some of the other intractable problems we know of, like P vs. NP, the Riemann Hypothesis, and other famous problems.

Only Uploads offer a true alternative.

comment by [deleted] · 2012-08-04T14:56:05.280Z · LW(p) · GW(p)

I would expect it to also attract an unusually high percentage of narcissists.

Replies from: NancyLebovitz, wedrifid
comment by NancyLebovitz · 2012-08-04T23:12:17.762Z · LW(p) · GW(p)

Why?

Replies from: None
comment by [deleted] · 2012-08-05T05:01:45.693Z · LW(p) · GW(p)

Grandiosity, belief in own special importance, etc.

Narcissists are pretty common, and people capable of grand contributions are very rare, so the majority of people who think they are capable of grand contributions have got to be narcissists.

Speaking of which, Yudkowsky making a friendly AI? Are you frigging kidding me? I came here through the link to the guy's quantum ramblings, which are anything but friendly.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-08-05T06:35:34.751Z · LW(p) · GW(p)

Eliezer argues that a lot of people are more capable than they permit themselves to be, which doesn't seem very narcissistic to me.

Replies from: None
comment by [deleted] · 2012-08-05T08:51:26.474Z · LW(p) · GW(p)

From your link:

The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.

Nope. Crackpots compare themselves to Einstein because:

I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say: "Yeah? Let's see your aura of destiny, buddy."

[Albeit I do like his straight-in-your-face honesty.]

It's not about choosing an 'important' problem, it's about choosing a solvable important problem, and a method of solving it, and intelligence helps, while unintelligent people just pick some idea out of science fiction or something, and can't imagine that some people can do better.

Had it really been the case that choosing the right problems and approaches was a matter of luck, we would observe far fewer cases where a single individual has many important insights; the distribution of insights per person would be different.

edit:

It is very easy to fail at this because of the cached thought problem: Tell people to choose an important problem and they will choose the first cache hit for "important problem" that pops into their heads, like "global warming" or "string theory".

The irony here is quite intense. Surely a person who's into science fiction will have the first "cache hit" be something science-fictional, and then the first "cache hit" for the solution path will likewise be something science-fictional. Also, a person who reads a lot about computers will have the first "cache hit" for describing priming be a reference to a "cache".

Replies from: David_Gerard
comment by David_Gerard · 2012-08-05T09:35:36.053Z · LW(p) · GW(p)

It's not about choosing an 'important' problem, it's about choosing a solvable important problem, and a method of solving it, and intelligence helps, while unintelligent people just pick some idea out of science fiction or something, and can't imagine that some people can do better.

Richard Hamming also makes this point.

Replies from: None
comment by [deleted] · 2012-08-05T11:06:31.015Z · LW(p) · GW(p)

Thanks. He says it much better than I could. He speaks of the importance of small problems.

When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go.

Speaking of which, one thing geniuses do is generate the right problems for themselves, not just choose from those already formulated.

Science fiction is full of artificial minds, good and evil. It has minds improving themselves, and plenty of Frankensteins of all kinds. It doesn't have things like 'a very efficient universal algorithm that, given a mathematical description of a system and constraints, finds values for free parameters that meet the constraints', because that is not a plot device. Fiction does not have Wolfram Alpha in 2010. It has HAL in 2000. Fiction shuns the merely useful in favor of the interesting. I would be very surprised if the solution were among the fictional set. The fictional set is as good a place to look in as any, yes, but it is small. edit: On second thought, what I mean is that it would be very bad to be either inspired or 'de-spired' by fiction to any significant extent.

Replies from: HBDfan
comment by HBDfan · 2012-08-06T09:16:59.324Z · LW(p) · GW(p)

The fictional set is as good a place to look in as any, yes, but it is small.

Yes: humans think in stories, but there are far, far more concepts that do not make a good story than concepts that do.

comment by wedrifid · 2012-08-06T02:28:23.841Z · LW(p) · GW(p)

I would expect it to also attract an unusually high percentage of narcissists.

Narcissists are usually better at seeking out situations that give them power, status and respect or at least money.

Replies from: None
comment by [deleted] · 2012-08-06T07:01:43.460Z · LW(p) · GW(p)

~1% of people through all the social classes are usually better at this, you say? I don't think so.

Narcissists seek narcissistic supply. Most find it in delusions.

Replies from: wedrifid
comment by wedrifid · 2012-08-06T09:27:38.161Z · LW(p) · GW(p)

~1% of people through all the social classes are usually better at this, you say?

No, I didn't. (Although now that you mention it, I'd comfortably say that more than 70% of people through all the social classes are usually better at this. It's kind of a fundamental human talent.)

comment by bryjnar · 2012-08-12T15:49:07.851Z · LW(p) · GW(p)

Suppose there was a suspicion 2..3 people with particularly strong view just decided to pick on your account to downvote? (from back when it was Dmytry) How do you actually check that?

Or, you know, get over it. It's just karma!

Replies from: private_messaging
comment by private_messaging · 2012-08-13T15:28:29.651Z · LW(p) · GW(p)

Comments are not read (and, when they are, are generally poorly interpreted) when at negative karma.

comment by [deleted] · 2012-08-09T12:35:20.290Z · LW(p) · GW(p)

A small number of people just downvote what ever I post, and this is especially damaging to any of the good posts.

Assuming the latter exist.