Will AGI surprise the world?

post by lukeprog · 2014-06-21T22:27:31.620Z · LW · GW · Legacy · 129 comments

Cross-posted from my blog.

Yudkowsky writes:

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

...

Example 2: "As AI gets more sophisticated, everyone will realize that real AI is on the way and then they'll start taking Friendly AI development seriously."

Alternative projection: As AI gets more sophisticated, the rest of society can't see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them. The same people who were talking about robot overlords earlier continue to talk about robot overlords. The same people who were talking about human irreproducibility continue to talk about human specialness. Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone's previous ideological commitment to a basic income guarantee, inequality reduction, or whatever. The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before. If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it's on the way, e.g. Hugo de Garis, Ben Goertzel, etc.

My own projection goes more like this:

As AI gets more sophisticated, and as more prestigious AI scientists begin to publicly acknowledge that AI is plausibly only 2-6 decades away, policy-makers and research funders will begin to respond to the AGI safety challenge, just like they began to respond to CFC damages in the late 70s, to global warming in the late 80s, and to synbio developments in the 2010s. As for society at large, I dunno. They'll think all kinds of random stuff for random reasons, and in some cases this will seriously impede effective policy, as it does in the USA for science education and immigration reform. Because AGI lends itself to arms races and is harder to handle adequately than global warming or nuclear security are, policy-makers and industry leaders will generally know AGI is coming but be unable to fund the needed efforts and coordinate effectively enough to ensure good outcomes.

At least one clear difference between my projection and Yudkowsky's is that I expect AI-expert performance on the problem to improve substantially as a greater fraction of elite AI scientists begin to think about the issue in Near mode rather than Far mode.

As a friend of mine suggested recently, current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 80s. Except, I expect elite acknowledgement of the AGI safety challenge to spread more slowly than it did for global warming or nuclear security, because AGI is tougher to forecast in general, and involves trickier philosophical nuances. (Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")

Still, there is a worryingly non-negligible chance that AGI explodes "out of nowhere." Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is sped up by 20 orders of magnitude with a single breakthrough.

129 comments

Comments sorted by top scores.

comment by buybuydandavis · 2014-06-21T23:21:41.780Z · LW(p) · GW(p)

A third possibility is that AGI becomes the next big scare.

There's always a market for the next big scare, and a market for people who'll claim putting them in control will save us from the next big scare.

Having the evil machines take over has always been a scare. When AI gets more embodied, and starts working together autonomously, people will be more likely to freak, IMO.

Getting beat on Jeopardy is one thing, watching a fleet of autonomous quad copters doing their thing is another. It made me a little nervous, and I'm quite pro AI. When people see machines that seem like they're alive, like they think, communicate among themselves, and cooperate in action, many will freak, and others will be there to channel and make use of that fear.

That's where I disagree with EY. He's right that a smarter talking box will likely just be seen as a nonthreatening curiosity. Watson 2.0, big deal. But embodied intelligent things that communicate and take concerted action will press our base primate "threatening tribe" buttons.

"Her" would have had a very different feel if all those AI operating systems had bodies, and got together in their own parallel and much more quickly advancing society. Kurzweil is right in pointing out that with such advanced AI, Samantha could certainly have a body. We'll be seeing embodied AI well before any human level of AI. That will be enough for a lot of people to get their freak out on.

Replies from: Qiaochu_Yuan, chaosmage, None, John_Maxwell_IV
comment by Qiaochu_Yuan · 2014-06-22T18:38:09.640Z · LW(p) · GW(p)

Yeah, this becomes plausible if some analogue of Chernobyl happens. Maybe self-driving cars cause some kind of horrible accident due to algorithms behaving unexpectedly.

comment by chaosmage · 2014-09-01T13:06:19.285Z · LW(p) · GW(p)

I imagine someone puts an autonomous mosquito-zapping laser (to fight malaria) on a drone.

In that scenario, most people who are surprised to meet one in action, and many who hear of it, cannot help but wonder "how long till we're the mosquitoes?"

That'd be a smart move to secure some MIRI funding...

Replies from: buybuydandavis
comment by buybuydandavis · 2014-09-01T20:46:16.213Z · LW(p) · GW(p)

That'd be a smart move to secure some MIRI funding...

Besides just being freaking awesome. Imagine that floating around the backyard barbecue. Pew. Pew. Pew.

comment by [deleted] · 2014-06-24T18:44:02.186Z · LW(p) · GW(p)

There's always a market for the next big scare, and a market for people who'll claim putting them in control will save us from the next big scare.

That's how I've always viewed SIAI/MIRI, at least in terms of a significant subset of those who send them money...

Replies from: buybuydandavis
comment by buybuydandavis · 2014-06-25T10:03:28.376Z · LW(p) · GW(p)

Perhaps, but AFAIK, minus the "putting them in control" part.

Replies from: None
comment by [deleted] · 2014-06-26T03:14:09.011Z · LW(p) · GW(p)

MIRI literally claims to want to build an overlord for the universe and has actively solicited donations with that goal explicitly stated. I'd call that a variant on the theme where they get money via people who think they're putting them in charge.

comment by John_Maxwell (John_Maxwell_IV) · 2014-06-22T03:38:10.347Z · LW(p) · GW(p)

We'll be seeing embodied AI well before any human level of AI.

By "embodied" do you mean "humanoid"? It seems like there's more demand for humanoid robots in some parts of the world (Japan) than other parts (US). Or by "embodied" do you mean detached autonomous robots like the Roomba?

Replies from: Cyan
comment by Cyan · 2014-06-22T16:35:44.758Z · LW(p) · GW(p)

The reference to autonomous fleets of quadcopters would suggest the latter.

Replies from: buybuydandavis
comment by buybuydandavis · 2014-06-23T00:18:56.189Z · LW(p) · GW(p)

Yes, "embodied" was simply physical, capable of motion and physical action.

Reactions will depend on the physical capabilies and culture too. For me, I've never liked bugs that fly, so the quadcopters press my buttons. Watching Terminator may have something to do with it too.

comment by fubarobfusco · 2014-06-22T21:15:57.066Z · LW(p) · GW(p)

Self-driving cars are already inspiring discussion of AI ethics in mainstream media.

Driving is something that most people in the developed world feel familiar with — even if they don't themselves drive a car or truck, they interact with people who do. They are aware of the consequences of collisions, traffic jams, road rage, trucker or cabdriver strikes, and other failures of cooperation on the road. The kinds of moral judgments involved in driving are familiar to most people — in a way that (say) operating a factory or manipulating a stock market are not.

I don't mean to imply that most people make good moral judgments about driving — or that they will reach conclusions about self-driving cars that an AI-aware consequentialist would agree with. But they will feel inclined to have opinions on the issue, rather than writing it off as something that programmers or lawyers should figure out. And some of those people will actually become more aware of the issue than they otherwise (i.e. in the absence of self-driving cars) would have.

So yeah, people will become more and more aware of AI ethics. It's already happening.


Self-driving cars will also inevitably catalyze discussion of the economic morality of AI deployment. Or rather, self-driving trucks will, as they put millions of truck drivers out of work over the course of five to ten years — long-distance truckers first, followed by delivery drivers. As soon as the ability to retrofit an existing truck with self-driving is available, it would be economic idiocy for any given trucking firm to not adopt it as soon as possible. Robots don't sleep or take breaks.

So, who benefits? The owners of the trucking firm and the folks who make the robots. And, of course, everyone whose goods are now being shipped twice as fast because robots don't sleep or take breaks. (The AI does not love you, nor does it hate you, but you have a job that it can do better than you can.)

As this level of AI — not AGI, but application-specific AI — replaces more and more skilled labor, faster and faster, it will become increasingly impractical for the displaced workers to retrain into the fewer and fewer remaining jobs.

This is also a moral problem of AI ...

Replies from: army1987
comment by A1987dM (army1987) · 2014-06-23T14:56:51.102Z · LW(p) · GW(p)

Whether we should do otherwise-obviously-suboptimal things solely because it'd result in more jobs is a question that long predates self-driving cars...

Replies from: fubarobfusco, Lumifer
comment by fubarobfusco · 2014-06-23T18:20:26.628Z · LW(p) · GW(p)

Well, I want to end up in the future where humans don't have to labor to survive, so I'm all for automating more and more jobs away. But in order to end up in that future, the benefits of automation have to also accrue to the displaced workers. Otherwise you end up with a shrinking productive class, a teeny-tiny owner class, and a rapidly growing unemployable class — who literally can't learn a new trade fast enough to work at it before it is automated away by accelerating AI deployment.

As far as I can tell, the only serious proposal that might make the transition from the "most adult humans work at jobs to make a living" present to the "robots do most of the work and humans do what they like" future — without the sort of mass die-off of the lower class that someone out there probably fantasizes about — is something like Friedman's basic income / negative income tax proposal. If you want to end up in a future where humans can screw off all day because the robots have the work covered, you have to let some humans screw off all day. May as well be the displaced workers.

Replies from: army1987
comment by A1987dM (army1987) · 2014-06-24T07:25:41.305Z · LW(p) · GW(p)

I agree. (Yvain wrote about that in more detail here, and a followup here.)

I'd prefer something like Georgism to negative income tax, but the former has fewer chances of actually being implemented any time soon.

comment by Lumifer · 2014-06-23T17:29:06.848Z · LW(p) · GW(p)

Whether we should do otherwise-obviously-suboptimal things solely because it'd result in more jobs is a question that long predates self-driving cars...

It long predates Milton Friedman, too.

comment by paulfchristiano · 2014-06-23T07:48:43.176Z · LW(p) · GW(p)

I don't think the linked PCP thing is a great example. Yes, the first time someone seriously writes an algorithm to do X it typically represents a big speedup on X. The prediction of the "progress is continuous" hypothesis is that the first time someone writes an algorithm to do X, it won't be very economically important---otherwise someone would have done it sooner---and this example conforms to that trend pretty well.

The other issue seems closer to relevant; mathematical problems do go from being "unsolved" to "solved" with comparatively little warning. I think this is largely because they are small enough problems that they are 1 man jobs (which would not be plausible if anyone really cared about the outcome), but that may not be the whole story and at any rate something is going on here.

In the PCP case, the relevantly similar outcome would be the situation where theoretical work on interactive proofs turned out to be useful right out of the box. I'm not aware of any historical cases where this has happened, but I could be missing some, and I don't really understand why it would happen as rarely as it does. It would be nice to understand this possibility better.

As for "people can't tell the difference between watson and being close to broadly human-level AI,” I think this is unlikely. At the very least the broader intellectual community is going to have little trouble distinguishing between watson and economically disruptive AI, so this is only plausible if we get a discontinuous jump. But even assuming a jump, the AI community is not all that impressed by watson and I expect this is an important channel by which significant developments would affect expectations.

Replies from: lukeprog
comment by lukeprog · 2014-06-24T23:24:17.629Z · LW(p) · GW(p)

Just for the record, I wasn't proposing the PCP thing as a counterexample to your model of "economically important progress is continuous."

comment by James_Miller · 2014-06-21T23:36:59.864Z · LW(p) · GW(p)

As you noted on your blog, Elon Musk is concerned about unfriendly AI, and from his comments about how escaping to Mars won't be a solution because "The A.I. will chase us there pretty quickly", he might well share MIRI's fear that the AI will seek to capture all of the free energy of the universe. Peter Thiel, a major financial supporter of yours, probably also has this fear.

If, after event W happens, Elon Musk, Peter Thiel, and a few of their peers see the truth of proposition X and decide that they and everything they care about will perish if policy Z doesn't get enacted, they will with high probability succeed in getting Z enacted.

I don't know if this story is true, but I read somewhere that when Julius Caesar was marching on the Roman Republic several Senators went to Pompey the Great, handed him a sword and said "save Rome." Perhaps when certain Ws happen we should make an analogous request of Musk or Thiel.

comment by V_V · 2014-06-22T18:39:26.871Z · LW(p) · GW(p)

I would believe that as soon as AGI becomes near (if it ever will), predictions by experts will start to converge to some fixed date, rather than the usual "15-20 years in the future".

Replies from: Jan_Rzymkowski
comment by Jan_Rzymkowski · 2014-06-22T19:50:57.977Z · LW(p) · GW(p)

Sounds very reasonable.

comment by Punoxysm · 2014-06-21T23:16:22.895Z · LW(p) · GW(p)

I've posted about this before, but there are many aspects of AI safety that we can research much more effectively once strong AI is nearer to realization. If people today say "AI could be a risk but it would be hard to get a good ROI on research dollars invested in AI safety today", I'm inclined to agree.

Therefore, it won't simply be interest in X-risk, but the feasibility of concrete research plans for reducing it, that helps advance any AI safety agenda.

comment by skeptical_lurker · 2014-06-22T11:41:16.566Z · LW(p) · GW(p)

(Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")

Apologies for asking an off-topic question that has certainly been discussed somewhere before, but if advanced decision theories are logically superior, then they are in some sense universal, in that a large subspace of mindspace will adopt them once the minds become intelligent enough ("Three Worlds Collide" seems to indicate that this is EY's opinion, at least for minds that evolved). In that case, even a paperclip maximiser would assign some nontrivial component of its utility function to match humanity's, iff we would have done the same in the counterfactual case where FAI came first (I think this does also have to assume that at least one party has a sublinear utility curve).

In this sense, it seems that as entities grow in intelligence, they are at least likely to become more cooperative/moral.

Of course, FAI is vastly preferable to an AI that might be partially cooperative, so I am not trying to diminish the importance of FAI. I'd still like to know whether the consensus opinion is that this is plausible.

Actually, I think I know one place this has been discussed before: Clippy promised friendliness and someone else promised him a lot of paperclips. But I don't know of a serious discussion.

Replies from: Squark, cousin_it
comment by Squark · 2014-06-22T13:34:25.632Z · LW(p) · GW(p)

Cooperative play (as opposed to morality) strongly depends on the position from which you're negotiating. For example if the FAI scenario is much less likely (a priori) than a Clippy scenario, then there's no reason for Clippy to make strong concessions.

Replies from: XiXiDu, TheAncientGeek, skeptical_lurker, skeptical_lurker
comment by XiXiDu · 2014-06-22T14:40:40.716Z · LW(p) · GW(p)

For example if the FAI scenario is much less likely (a priori) than a Clippy scenario, then there's no reason for Clippy to make strong concessions.

But if a "paperclips" maximizer, as opposed to "tables", "cars", or "alien sex toys" maximizer, is just one of many unfriendly maximizers, then maximizing "human values" is just one of many unlikely outcomes. In other words, you can't just say that unfriendly AIs are more likely than friendly AIs when it comes to cooperation. Since the opposition between a paperclip maximizer and an "alien sex toy" maximizer is the same as the opposition between the former and an alien or human friendly AI. Since all of them want to maximize their opposing values. And even if there turns out to be a subset of values shared by some AIs, other groups could cooperate to outweigh their leverage.

Replies from: skeptical_lurker, Squark
comment by skeptical_lurker · 2014-06-22T17:42:38.244Z · LW(p) · GW(p)

But since there is an exponentially huge set of random maximisers, the probability of each individual one is infinitesimal. OTOH, human values have a high probability density in mindspace because people are actually working towards them.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-06-22T21:30:37.426Z · LW(p) · GW(p)

human values have a high probability density in mindspace because people are actually working towards them

That depends on how high a probability density humans (and alien life forms so similar to humans that they share our values) have in mindspace. Maybe very low. Maybe a society ruled by intelligent ants according to their values would make us very unhappy... and on a cosmic scale, ants are our cousins; alien life should be much more different.

comment by Squark · 2014-06-22T15:56:06.958Z · LW(p) · GW(p)

I don't understand what point you're trying to make. My point was that cooperative game theory doesn't magically guarantee a UFAI will treat us nicely. It might work but only if there is a sufficiently substantial Everett branch with a FAI. The probability of that branch probably strongly depends on effort invested into FAI research.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-22T17:39:28.864Z · LW(p) · GW(p)

If the probability of FAI (and friendly uploads etc) is near zero, then we're doomed either way. But even though I believe the probability of provably friendly AI coming first is <50%, it's definitely not 10^-11!

Replies from: Squark
comment by Squark · 2014-06-22T17:54:39.921Z · LW(p) · GW(p)

Fair enough, but there's still an enormous incentive to work on FAI.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-22T19:15:36.382Z · LW(p) · GW(p)

Of course, I was not trying to suggest otherwise.

comment by TheAncientGeek · 2014-06-22T15:18:08.685Z · LW(p) · GW(p)

But then we might be able to achieve AI safety in a relatively easy way by creating networks of interacting agents (including agents interacting with us).

Replies from: Gunnar_Zarncke, Squark
comment by Gunnar_Zarncke · 2014-06-22T20:23:19.558Z · LW(p) · GW(p)

I think you're pointing out the conclusion of the assumption that

Cooperative play strongly depends on the position from which you're negotiating.

but if you have multiple AIs, then none of them is much stronger than the others.

comment by Squark · 2014-06-22T15:52:10.455Z · LW(p) · GW(p)

Sorry, didn't follow that. Can you elaborate?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-22T16:14:30.079Z · LW(p) · GW(p)

I mean you don't have to assume a singleton AI becoming very powerful very quickly. You can assume intelligence and friendliness developing in parallel [and incrementally].

Replies from: MugaSofer, skeptical_lurker, Squark
comment by MugaSofer · 2014-06-23T13:46:22.529Z · LW(p) · GW(p)

Hmm.

Are you suggesting (super)intelligence would be a result of direct human programming, like Friendliness presumably would be?

Or that Friendliness would be a result of self-modification, like SIAI is predicted to be 'round these parts?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-23T14:01:17.097Z · LW(p) · GW(p)

I am talking about SIRI. I mean that human engineers are making/will make multiple efforts at simultaneously improving AI and friendliness, and the ecosystem of AIs and AI users is/will be selecting for friendliness that works.

comment by skeptical_lurker · 2014-06-22T17:31:53.842Z · LW(p) · GW(p)

Is the idea that the network develops at roughly the same rate, with no single entity undergoing a hard takeoff?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-23T14:05:58.338Z · LW(p) · GW(p)

Yes.

comment by Squark · 2014-06-22T16:46:14.876Z · LW(p) · GW(p)

In what sense don't I have to assume it? I think singleton AI happens to be a likely scenario, and this has little to do with cooperation.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-22T17:07:51.837Z · LW(p) · GW(p)

The more alternative scenarios there are, the lower the likelihood of the MIRI scenario, and the less need for the MIRI solution.

Replies from: Squark
comment by Squark · 2014-06-22T17:56:21.676Z · LW(p) · GW(p)

I don't understand what it has to do with cooperative game theory.

comment by skeptical_lurker · 2014-06-22T14:36:58.935Z · LW(p) · GW(p)

Also, cooperation seems to be at least a large component of morality, while some believe morality should be derived entirely from game theory.

Replies from: Squark
comment by Squark · 2014-06-22T16:00:10.115Z · LW(p) · GW(p)

I think this is a confusion. Game theory is only meaningful after you have specified the utility functions of the players. If these utility functions don't already include caring about other agents, the result is not what I'd call "morality"; it is just cooperation between selfish entities. Surely the evolutionary reasons for morality have to do with cooperative game theory, but so what? The evolutionary reason for sex is reproduction; it doesn't mean we shouldn't be having sex with condoms. Morality should not be derived from anything except human brains.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-22T18:16:17.094Z · LW(p) · GW(p)

I think this disagreement is purely a matter of semantics: 'morality' is an umbrella term which is often used to cover several distinct concepts, such as empathy, group allegiance and cooperation. In this case, the AI would be moral according to one dimension of morality, but not the others.

comment by skeptical_lurker · 2014-06-22T14:33:35.930Z · LW(p) · GW(p)

True, but since the universe is quite big, even a share of 10^-11 (many orders of magnitude lower than P(FAI)) would be sufficient for humanity to have a galaxy while Clippy clips the rest of the universe. If the laws of physics permit an arbitrarily large amount of whatever comprises utility, then all parties can achieve arbitrarily large amounts of utility, provided the utility functions do not actually involve impeding the other parties' activities.
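
(Rough arithmetic behind that share, using the commonly cited ballpark of roughly 10^11 galaxies in the observable universe, an illustrative estimate rather than anything claimed above: a 10^-11 fraction of the whole then works out to about one galaxy.)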

Replies from: Squark
comment by Squark · 2014-06-22T16:09:53.484Z · LW(p) · GW(p)

It might be that Clippy will let us have our galaxy. However, "arbitrarily large amounts of utility" sounds completely infeasible, since for one thing the utility function has a time discount, and for another our universe is going to hit heat death (maybe it is escapable by destabilizing the vacuum into a state with non-positive cosmological constant, but I'm not at all sure).

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-22T17:26:44.328Z · LW(p) · GW(p)

Utility functions do not have to have a time discount - in fact, while it might be useful when dealing with inflation, I don't see why there should be time discounts in general. As far as circumventing the second law of thermodynamics goes, there are several proposed methods, and given that humanity doesn't have a complete understanding of physics I don't think we can have a high degree of confidence one way or the other.

Replies from: Squark
comment by Squark · 2014-06-22T17:48:45.690Z · LW(p) · GW(p)

Utility functions do not have to have a time discount...

Without time discount you run into issues like the procrastination paradox and Boltzmann brains. UDT also runs into trouble since arbitrarily tight bounds on utility become impossible to prove due to Goedel incompleteness. If your utility function is unbounded it gets worse: your expectation values fail to converge (as exemplified by Pascal mugging).
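
(A toy illustration of the divergence, with numbers invented purely for the example: suppose a mugger promises outcome i with probability p_i = 2^-i and the outcome carries utility u_i = 3^i. Then E[u] = sum_i p_i u_i = sum_i (3/2)^i, which diverges, so the expectation is undefined; a bounded utility function caps every term and the sum converges.)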

As far as circumventing the second law of thermodynamics goes, there are several proposed methods...

Are there?

...given that humanity doesn't have a complete understanding of physics I don't think we can have a high degree of confidence one way or the other.

Well, we can't have complete confidence, but I think our understanding is not so bad. We're missing a theory of heterogeneous nucleation of string theoretic vacua (as far as I know).

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-23T06:45:59.954Z · LW(p) · GW(p)

Without time discount you run into issues like the procrastination paradox and Boltzmann brains. UDT also runs into trouble since arbitrarily tight bounds on utility become impossible to prove due to Goedel incompleteness.

Could you provide links? A google search turned up many different things, but I think you mean this procrastination paradox. Is it possible that one's utility function does not discount, but given uncertainty about the future one should kind of behave as if it does? (e.g. I value life tomorrow exactly as much as I value life today, but maybe we should party hard now because we cannot be absolutely certain that we will survive until tomorrow)

If your utility function is unbounded it gets worse: your expectation values fail to converge (as exemplified by Pascal mugging).

What if I maximise measure, or maximise the probability of attaining an unbounded amount of utility?

WRT circumventing the second law of thermodynamics, there is the idea of creating a basement universe to escape into, some form of hypercomputation that can experience infinite subjective time in a finite amount of real time, and time crystals, which apparently are a real thing and not what powers the TARDIS.

I think our understanding is not so bad. We're missing a theory of heterogeneous nucleation of string theoretic vacua (as far as I know).

AFAIK humanity does not know what the dark matter/dark energy is that 96% of the universe is made of. This alone seems like a pretty big gap in our understanding, although you seem to know more physics than I do.

Replies from: Squark
comment by Squark · 2014-06-24T07:23:01.826Z · LW(p) · GW(p)

Could you provide links?

Boltzmann brains were discussed in many places, not sure what the best link would be. The idea is that when the universe reaches thermodynamic equilibrium, after a humongous amount of time you get Poincare recurrences: that is, any configuration of matter will randomly appear an infinite number of times. This means there's an infinite number of "conscious" brains coalescing from randomly floating junk, living for a brief moment and perishing. In the current context this calls for time discount because we don't want the utility function to be dominated by the well being of those guys. You might argue we can't influence their well being anyway, but you would be wrong. According to UDT, you should behave as if you're deciding for all agents in the same state. Since you have an infinite number of Boltzmann clones, w/o time discount you should be deciding as if you're one of them. That means extreme short-term optimization (since your chances of surviving the next t seconds decline very fast with t). I wouldn't bite this bullet.

UDT is sort-of "cutting edge FAI research", so there are no very good references. Basically, UDT works by counting formal proofs. If your utility function involves an infinite time span it would be typically impossible to prove arbitrarily tight bounds on it since logical sentences that contain unbounded quantifiers can be undecidable.

...I think you mean this procrastination paradox.

Yes.

Is it possible that one's utility function does not discount, but given uncertainty about the future one should kind of behave as if it does?

Well, you can try something like this, but for one thing it doesn't sound consistent with "all parties can achieve arbitrarily large amounts of utility", because the latter requires arbitrarily high confidence about the future, and for another I think you need unbounded utility to make it work, which opens a different can of worms.

What if I maximise measure, or maximise the probability of attaining an unbounded amount of utility?

I don't understand what you mean by maximizing measure. Regarding maximizing the probability of attaining an unbounded (actually infinite) amount of utility, well, that would make you a satisficing agent that only cares about the asymptotically far future (since apparently anything happening in a finite time interval only carries finite utility). I don't think it's a promising approach, but if you want to pursue it, you can recast it in terms of finite utility (by assigning new utility "1" when old utility is "infinity" and new utility "0" in other cases). Of course, this leaves you with the problems mentioned before.
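
(In symbols, the recasting described above is: define u'(x) = 1 if u(x) = infinity and u'(x) = 0 otherwise; maximizing E[u'] is then exactly maximizing P(u = infinity), i.e. acting as a satisficer over the infinite-utility outcomes.)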

...there is the idea of creating a basement universe to escape into...

If I understand you correctly it's the same as destabilizing the vacuum which I mentioned earlier.

...some form of hypercomputation that can experience subjective infinite time in a finite amount of real time...

This is a nice fantasy but unfortunately strongly incompatible with what we know about physics. By "strongly" I mean that it would take a very radical update to make it work.

...and time crystals which apparently is a real thing and not what powers the TARDIS...

To me it looks like the journalist is misrepresenting what has actually been achieved. I think that this is a proposal for computing at extremely low temperatures, not for violating the second law of thermodynamics. Indeed, the latter would require actual new physics, which is not the case here at all.

AFAIK humanity does not know what the dark matter/dark energy is that 96% of the universe is made of. This alone seems like a pretty big gap in our understanding...

You're right, of course. There's a lot we don't know yet, what I meant is that we already know enough to begin discussing whether heat death is escapable because the answer might turn out to be universal or nearly universal across a very wide range of models.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-06-27T18:58:48.067Z · LW(p) · GW(p)

Boltzmann brains were discussed in many places, not sure what the best link would be.

Sorry, I should have been more precise - I've read about Boltzmann brains, I just didn't realise the connection to UDT.

In the current context this calls for time discount because we don't want the utility function to be dominated by the well being of those guys.

This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too, since if I had some knowledge that I am not a Boltzmann brain, that would be a point of difference. In which case, surely I should optimise for the very near future even under old-fashioned causal decision theory. Like you, I wouldn't bite this bullet.

If your utility function involves an infinite time span it would be typically impossible to prove arbitrarily tight bounds on it since logical sentences that contain unbounded quantifiers can be undecidable.

I didn't know that - I've studied formal logic, but not to that depth unfortunately.

I don't understand what you mean by maximizing measure.

I was meaning in the sense of measure theory. I've seen people discussing maximising the measure of a utility function over all future Everett branches, although from my limited understanding of quantum mechanics I'm unsure whether this makes sense.

I don't think it's a promising approach, but if you want to pursue it, you can recast it in terms of finite utility (by assigning new utility "1" when old utility is "infinity" and new utility "0" in other cases).

Yeah, I doubt this would be a good approach either, in that if it does turn out to be impossible to achieve unboundedly large utility, I would still want to make the best of a bad situation and maximise the utility achievable with the finite amount of negentropy available. I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn't intuitively strike me as a real utility function, as it's adding apples and oranges so to speak, but perhaps useful as a tool?

If I understand you correctly it's the same as destabilizing the vacuum which I mentioned earlier.

Well, I'm approaching the limit of my understanding of physics here, but actually I was talking about alpha-point computation which I think may involve the creation of daughter universes inside black holes.

This is a nice fantasy but unfortunately strongly incompatible with what we know about physics. By "strongly" I mean that it would take a very radical update to make it work.

It does seem incompatible with e.g. the Planck time, but I just don't know enough to dismiss it with a very high level of confidence, although I'm updating wrt your reply.

Your reply has been very interesting, but I must admit I'm starting to get seriously out of my depth here, in both physics and formal logic.

Replies from: Squark, FeepingCreature
comment by Squark · 2014-07-01T18:57:39.106Z · LW(p) · GW(p)

This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too...

In UDT you shouldn't consider yourself to be just one of your clones. There is no probability measure on the set of your clones: you are all of them simultaneously. CDT is difficult to apply to situations with clones, unless you supplement it by some anthropic hypothesis like SIA or SSA. If you use an anthropic hypothesis, Boltzmann brains will still get you in trouble. In fact, some cosmologists are trying to find models without Boltzmann brains precisely to avoid the conclusion that you are likely to be a Boltzmann brain (although UDT shows the effort is misguided). The problem with UDT and Goedel incompleteness is a separate issue which has no relation to Boltzmann brains.

I was meaning in the sense of measure theory. I've seen people discussing maximising the measure of a utility function over all future Everett branches...

I'm not sure what you mean here. Sets have measure, not functions.

I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn't intuitively strike me as a real utility function, as it's adding apples and oranges so to speak, but perhaps useful as a tool?

Well, you've still got all of the abovementioned problems except divergence.

...actually I was talking about alpha-point computation which I think may involve the creation of daughter universes inside black holes.

Hmm, baby universes are a possibility to consider. I thought the case for them is rather weak but a quick search revealed this. Regarding performing an infinite number of computations I'm pretty sure it doesn't work.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-07-04T18:18:09.094Z · LW(p) · GW(p)

CDT is difficult to apply to situations with clones, unless you supplement it by some anthropic hypothesis like SIA or SSA.

While I can see why there is intuitive cause to abandon the "I am person #2, therefore there are probably not 100 people" reasoning, abandoning "There are 100 clones, therefore I'm probably not clone #1" seems to be simply abandoning probability theory altogether, and I'm certainly not willing to bite that bullet.

Actually, looking back through the conversation, I'm also confused as to how time discounting helps in the case that one is acting like a Boltzmann brain - someone who knows they are a B-brain would discount quickly anyway due to their short lifespan, so wouldn't extra time discounting make the situation worse? Specifically, if there are X B-brains for each 'real' brain, then if the real brain can survive more than X times as long as a B-brain, and doesn't time discount, the 'real' brain's utility is still dominant.

I'm not sure what you mean here. Sets have measure, not functions.

I wasn't being very precise with my wording - I meant that one would maximise the measure of whatever it is one values.

Hmm, baby universes are a possibility to consider. I thought the case for them is rather weak but a quick search revealed this. Regarding performing an infinite number of computations I'm pretty sure it doesn't work.

Well, I have only a layman's understanding of string theory, but if it were possible to 'escape' into a baby universe by creating a clone inside the universe, then the process can be repeated, leading to an uncountably infinite (!) tree of universes.

Replies from: Squark
comment by Squark · 2014-07-08T19:00:14.059Z · LW(p) · GW(p)

While I can see why there is intuitive cause to abandon the "I am person #2, therefore there are probably not 100 people" reasoning, abandoning "There are 100 clones, therefore I'm probably not clone #1" seems to be simply abandoning probability theory altogether, and I'm certainly not willing to bite that bullet.

I'm not entirely sure what you're saying here. UDT suggests that subjective probabilities are meaningless (thus taking the third horn of the anthropic trilemma although it can be argued that selfish utility functions are still possible). "What is the probability I am clone #n" is not a meaningful question. "What is the (updated/posteriori) probability I am in a universe with property P" is not a meaningful question in general but has approximate meaning in contexts where anthropic considerations are irrelevant. "What is the a priori probability the universe has property P" is a question that might be meaningful but is probably also approximate since there is a freedom of redefining the prior and the utility function simultaneously (see this). The single fully meaningful type of question is "what is the expected utility I should assign to action A?" which is OK since it is the only question you have to answer in practice.

Actually, looking back through the conversation, I'm also confused as to how time discounting helps in the case that one is acting like a Boltzmann brain - someone who knows they are a B-brain would discount quickly anyway due to their short lifespan, so wouldn't extra time discounting make the situation worse?

Boltzmann brains exist very far in the future wrt "normal" brains, therefore their contribution to utility is very small. The discount depends on absolute time.
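
(A toy version of that argument, with the discount chosen only for illustration: if total utility is U = sum_t gamma^t u_t for some 0 < gamma < 1, then a brain appearing at absolute time T contributes at most gamma^T times the per-moment utility bound, which is astronomically small at the post-heat-death epochs where Boltzmann brains occur.)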

I wasn't being very precise with my wording - I meant that one would maximise the measure of whatever it is one values.

If "measure" here equals "probability wrt prior" (e.g. Solomonoff prior) then this is just another way to define a satisficing agent (utility equals either 0 or 1).

Well, I have only a layman's understanding of string theory, but if it were possible to 'escape' into a baby universe by creating a clone inside the universe, then the process can be repeated, leading to an uncountably infinite (!) tree of universes.

Good point. Surely we need to understand these baby universes better.

comment by FeepingCreature · 2014-06-30T00:45:55.806Z · LW(p) · GW(p)

In the current context this calls for time discount because we don't want the utility function to be dominated by the well being of those guys.

This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too, since if I had some knowledge that I am not a Boltzmann brain, that would be a point of difference. In which case, surely I should optimise for the very near future even under old-fashioned causal decision theory. Like you, I wouldn't bite this bullet.

I think Boltzmann brains in the classical formulation of random manifestation in vacuum are a non-issue, as neither can they benefit from our reasoning (being random, while reason assumes a predictable universe) nor from our utility maximization efforts (since maximizing our short-term utility will make it no more or less likely that a Boltzmann brain with the increased utility manifests).

comment by cousin_it · 2014-06-23T16:36:48.605Z · LW(p) · GW(p)

Just a historical note, I think Rolf Nelson was the earliest person to come up with that idea, back in 2007. Though it was phrased in terms of simulation warfare rather than acausal bargaining at first.

comment by Nectanebo · 2014-06-25T07:30:12.375Z · LW(p) · GW(p)

policy-makers and research funders will begin to respond to the AGI safety challenge, just like they began to respond to... synbio developments in the 2010s.

What are we referring to here? As in, what synbio developments and how did they respond to it?

comment by XiXiDu · 2014-06-22T09:20:19.253Z · LW(p) · GW(p)

Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!"

We became better at constructing nuclear power plants, and nuclear bombs became cleaner. What critics are saying is that as AI advances, our control over it advances as well. In other words, the better AI becomes, the better we become at making AI work as expected. Because if AI became increasingly unreliable as its power grew, AI would cease to be a commercially viable product.

Replies from: TheAncientGeek, Squark
comment by TheAncientGeek · 2014-06-22T10:10:35.012Z · LW(p) · GW(p)

That's one of the standard responses to the MIRI argument, but not the same as the Artificial Philosopher response. I call it the SIRI versus MIRI response.

comment by Squark · 2014-06-22T13:50:37.669Z · LW(p) · GW(p)

We became better at constructing nuclear power plants, and nuclear bombs became cleaner.

That would be small comfort if WWIII erupted triggering a nuclear winter.

...AI would cease to be a commercially viable product.

A doomsday device doesn't have to be a commercially viable product. It just has to be used, once.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-06-22T15:24:52.742Z · LW(p) · GW(p)

Unless you can show it is reasonably likely that SIRI will take over the world, that is a Pascal's mugging.

Replies from: Squark
comment by Squark · 2014-06-22T15:51:13.228Z · LW(p) · GW(p)

I have my doubts about SIRI, but I think the plausibility of AI risk has already been shown in MIRI's writing, and I don't see much point in repeating the arguments here. Regarding Pascal's mugging, I believe in bounded utility functions. So, yeah, something with low probability and dire consequences is important up to a point. But AI risk is not even something I'd say has low probability.

comment by brazil84 · 2014-06-22T21:16:30.996Z · LW(p) · GW(p)

A related question:

Are any governments working on AI projects? Surely the idea has occurred to a lot of military planners and spy agencies that AI would be an extremely potent weapon. What would the world be like if AI is first developed secretly in a government facility in Maryland?

Replies from: Lumifer
comment by Lumifer · 2014-06-23T18:21:04.692Z · LW(p) · GW(p)

What would the world be like if AI is first developed secretly in a government facility in Maryland?

Fucked.

Replies from: brazil84
comment by brazil84 · 2014-06-23T18:40:43.094Z · LW(p) · GW(p)

Fucked.

I'm not sure I would agree with that. Would you mind telling me why you think so?

To my mind, whoever is the first to develop AI has a good chance of having an awesome amount of power. I am not comfortable with anyone having that kind of power, but if I had to pick one person or organization, I would probably pick the United States government.

Replies from: Lumifer
comment by Lumifer · 2014-06-23T18:49:13.063Z · LW(p) · GW(p)

Would you mind telling me why you think so?

An AI developed by the military (or the security apparatus) will have the goals of the military (or the security apparatus). And being developed within the bureaucratic labyrinths of a federal organization ensures that there will be many things wrong with it, things about which no one will know (and so cannot suggest fixing) because it all will be very very secret.

Replies from: brazil84
comment by brazil84 · 2014-06-24T01:00:21.854Z · LW(p) · GW(p)

And being developed within the bureaucratic labyrinths of a federal organization ensures that there will be many things wrong with it, things about which no one will know (and so cannot suggest fixing) because it all will be very very secret.

Well if Microsoft develops AI first, won't there be a similar problem?

Replies from: Lumifer
comment by Lumifer · 2014-06-24T17:46:39.404Z · LW(p) · GW(p)

Not to the same extent, I don't think so.

Replies from: brazil84
comment by brazil84 · 2014-06-24T18:59:09.809Z · LW(p) · GW(p)

Not to the same extent, I don't think so.

I would have to disagree. For one thing, if a corporation like Microsoft (or perhaps Google) develops AI, it seems pretty likely that it will be done in secret, if only to prevent competitors from gaining an advantage. Even if the existence of the project is known, the actual details will probably be secret. For another, it seems like it would be difficult for the public to provide meaningful feedback in such a situation.

comment by [deleted] · 2014-06-22T04:46:35.766Z · LW(p) · GW(p)

Does MIRI have a prediction market on this stuff?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2014-06-22T18:35:25.036Z · LW(p) · GW(p)

By the time the market closes everyone will have bigger concerns than whatever was being risked on the market.

comment by TheAncientGeek · 2014-06-22T09:54:51.042Z · LW(p) · GW(p)

And have those tricky philosophical nuances been solved? Can there be reliable predictions of AI unfriendliness without such a solution?

Replies from: None
comment by [deleted] · 2014-06-22T16:39:40.850Z · LW(p) · GW(p)

AI undagety?

comment by [deleted] · 2014-06-22T00:21:28.522Z · LW(p) · GW(p)

AGI is already 1-2 decades away. Or 2-5 years if a well-funded project started now. I don't think that is enough time for a meaningful reaction by society, even just its upper echelons.

I would be very concerned about the "out of nowhere" outcome, especially now that the AI winter has thawed. We have the tools, and we have the technology to do AGI now. Why assume that it is decades away?

Replies from: ShardPhoenix, Punoxysm, Jan_Rzymkowski
comment by ShardPhoenix · 2014-06-22T01:29:43.792Z · LW(p) · GW(p)

Why do you think it's so near? I don't see many others taking that position even among those who are already concerned about AGI (like around here).

Replies from: None
comment by [deleted] · 2014-06-22T02:47:05.069Z · LW(p) · GW(p)

This is my adopted long-term field -- though professionally I work as a bitcoin developer right now -- and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and what is known about generalizations to narrow AI being done by Google and a few smaller startups. These are reasonable extrapolations based on published project plans, the authors' opinions, and my own evaluation of the code in the case of OpenCog. 5 years is what it would take if money were not a concern. 2 years is based on my own, unpublished simplification of the CogPrime architecture meant as a blitz to seed-stage oracle AGI, under the same money-is-no-concern conditions.

The only extrapolations I've seen around here, e.g. by lukeprog, involve statistically sampling AI researchers' opinions. Stuart Armstrong showed a year or two ago just how inaccurate this method is historically, as well as giving concrete reasons why such statistical methods are useless in this case.

Replies from: John_Maxwell_IV, FeepingCreature, None, Gunnar_Zarncke
comment by John_Maxwell (John_Maxwell_IV) · 2014-06-22T03:39:39.934Z · LW(p) · GW(p)

You rate your ability to predict AI above AI researchers? It seems to me that at best, I as an independent observer should give your opinion about as much weight as any AI researcher. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)

Replies from: None
comment by [deleted] · 2014-06-22T03:53:25.342Z · LW(p) · GW(p)

This is required reading for anyone wanting to extrapolate AI researcher predictions:

https://intelligence.org/files/PredictingAI.pdf

In short, asking AI researchers (including myself) their opinions is probably the worst way to get an answer here. What you need to do instead is learn the field, try your hand at it yourself, ask AI researchers what they feel are the remaining unsolved problems, investigate those answers, and most critically form your own opinion. That's what I did, and where my numbers came from.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-06-22T04:03:41.955Z · LW(p) · GW(p)

If several people follow this procedure, I would expect to get a better estimate from averaging their results than trying it out for myself.

Replies from: None
comment by [deleted] · 2014-06-22T06:33:02.846Z · LW(p) · GW(p)

That's a reasonable expectation. But in as much as one can expect AI researchers to have gone through this exercise in the past (this is where the problem is, I think), the data is apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding. Some highlights:

"There is little difference between experts and non-experts" "There is little difference between current predictions, and those known to have been wrong previously" "It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors"

http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ https://intelligence.org/files/PredictingAI.pdf

In other words, asking AI experts is about as useless as it can get when it comes to making predictions about future AI developments. This includes myself, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-06-22T08:37:33.928Z · LW(p) · GW(p)

It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most estimates, but you're also aware that the reference class of which your estimate is a part of is not especially reliable. So instead of pushing your estimate as the one true estimate, you're encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out the specific steps you took to come to the conclusion that AI will come relatively early in detail, and get others to check your work directly that way. It could be especially persuasive if you were to contrast the procedure you think was used to generate other estimates and explain why you think that procedure was flawed.

Replies from: None
comment by [deleted] · 2014-06-22T08:55:20.454Z · LW(p) · GW(p)

"What I discovered" was that all the pieces for a seed AGI exist, are demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really all that is required is rolling up our sleeves and doing some major integrative work in putting the pieces together.

With designs that are public knowledge (albeit not contained in one place), this could be done as a well-funded project on the order of 5 years -- an assessment that concurs with what is said by the leaders of the project I am thinking of as well.

My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.

But here's the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating an UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.

So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?

Replies from: brazil84, None, Crux
comment by brazil84 · 2014-06-23T15:49:16.373Z · LW(p) · GW(p)

But here's the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating an UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.

That's assuming people take you seriously. Even if your plan is solid, probably most people will write you off as another Crackpot Who Thinks He's Solved an Important Problem.

But I do agree it's a bit of a conundrum. If you have what you think is an important idea, it's natural to worry that people will either (1) steal your idea or (2) criticize it not because it's not a great idea but because they want to feel superior.

Replies from: None
comment by [deleted] · 2014-06-23T17:04:33.125Z · LW(p) · GW(p)

But I do agree it's a bit of a conundrum. If you have what you think is an important idea, it's natural to worry that people will either (1) steal your idea or (2) criticize it not because it's not a great idea but because they want to feel superior.

I think you entirely missed the point.

Replies from: brazil84
comment by brazil84 · 2014-06-23T18:09:30.642Z · LW(p) · GW(p)

I think you entirely missed the point.

I would agree with this in the sense that my stated reasons for the "conundrum" are a bit different from yours.

Replies from: None
comment by [deleted] · 2014-06-23T18:26:50.182Z · LW(p) · GW(p)

Well perhaps instead of insinuating motives, you could share your thoughts about the actual stated reason? At what point does one have a moral obligation not to share information about a dangerous idea on a public forum?

Replies from: brazil84
comment by brazil84 · 2014-06-24T00:49:27.662Z · LW(p) · GW(p)

Well perhaps instead of insinuating motives,

I was thinking of my own motives in similar situations; sorry if you took it as a characterization of yours. I do see how it could have been read that way.

you could share your thoughts about the actual stated reason?

I would suggest you e-mail your blueprint to a few of the posters here with the understanding they keep it to themselves. If even one long-term poster says "I've read Friedenbach's arguments and while they are confidential, I now agree that his estimate of the time to AI is actually pretty good," then I think your argument is starting to become persuasive.

Replies from: None
comment by [deleted] · 2014-06-24T00:51:28.048Z · LW(p) · GW(p)

Sorry I didn't mean to come off so abrasively either. I was just being unduly snarky. The internet is not good for conveying emotional state :\

comment by [deleted] · 2014-06-25T09:04:49.642Z · LW(p) · GW(p)

My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand-coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily, the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.

If you've solved stable self-improvement issues, that's FAI work, and you should damn well share that component.

comment by Crux · 2014-06-24T16:07:31.482Z · LW(p) · GW(p)

[retracted]

Replies from: None
comment by [deleted] · 2014-06-24T16:33:19.716Z · LW(p) · GW(p)

Read the OP; I didn't make any boastful claims. I simply said UFAI is 2-5 years away with focused effort, and 10-20 years away otherwise. I therefore believe it important that FAI research be refocused on near-term solutions. I state so publicly in order to counter the entrenched meme that seems to have infected everyone here, namely that AI is X years away, where X is some arbitrary number that by golly seems like a lot, in the hope that some people who encounter the post will consider refocusing on near-term work. What's wrong with that?

Replies from: Crux
comment by Crux · 2014-06-24T16:38:09.000Z · LW(p) · GW(p)

Disregard my reply. I really shouldn't be posting from my phone at 2 AM. Such a venture rarely ends well.

Replies from: None
comment by [deleted] · 2014-06-24T16:58:59.487Z · LW(p) · GW(p)

Yeah, I've been there before. No worries ;)

comment by FeepingCreature · 2014-06-30T00:52:07.928Z · LW(p) · GW(p)

OpenCog

Hey, speaking as an AI layman, how do you rate the odds that a design based on OpenCog could foom? I haven't really dug into that codebase, but from reading the wiki my impression is that it's a bit of a heap left behind by multiple contributors trying to make different parts of it work for their own ends, and that if a coherent whole could be wrought from it, it would be too complex to feasibly understand itself. In that sense: how far out do you think OpenCog is from containing a complete operational causal model of its own codebase and operation? How much of it would have to be modified or rewritten to reach this point?

comment by [deleted] · 2014-06-25T09:03:19.237Z · LW(p) · GW(p)

This is my adopted long-term field -- though professionally I work as a bitcoin developer right now -- and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and on what is known about generalizations of the narrow AI work being done by Google and a few smaller startups. These are reasonable extrapolations based on published project plans, the authors' opinions, and my own evaluation of the code in the case of OpenCog. 5 years is what it would take if money were not a concern. 2 years is based on my own, unpublished simplification of the CogPrime architecture meant as a blitz to seed-stage oracle AGI, under the same money-is-no-concern conditions.

I don't really entirely endorse the algorithms behind OpenCog and such, but I do share the forecasting timeline. Modern work in hierarchical learning, probabilities over sentences (and thus: learning and inference over structured knowledge), planning as inference... basically, I've been reading enough papers to say that we're definitely starting to see the pieces emerge that embody algorithms for actual, human-level cognition. We will soon confront the question, "Yes, we have all these algorithms, but how do we put them together into an agent?"

comment by Gunnar_Zarncke · 2014-06-22T21:15:21.286Z · LW(p) · GW(p)

I also think that most if not all of the parts needed for AGI are already there and 'only' need to be integrated. But that is actually the hard part. Kind of comparable to our understanding of the human brain: we know how most modules work - or at least how to produce comparable results - but not how they are integrated. Just adding a meta level to Cog plus plugins for domain-specific modules wouldn't do.

comment by Punoxysm · 2014-06-22T01:44:59.951Z · LW(p) · GW(p)

20 years is on the very soon end of plausible, but 2-5 years is absolutely impossible. We just don't have the slightest notion how we would do that, regardless of funding.

We do not have the tools or technology right now; it won't come out of the blue.

Replies from: None
comment by [deleted] · 2014-06-22T02:53:19.338Z · LW(p) · GW(p)

We just don't have the slightest notion how we would do that, regardless of funding.

Really? And what's that opinion based on? Are you an expert in the field? I very often see this meme quoted, but never an explanation to back it up.

I'm a computer scientist who has been following the AI / AGI literature for years. I have been doing my own private research (since publishing AGI work is too dangerous) based on OpenCog -- pretty much since it was first open-sourced -- and a few other projects. I've looked at the issues involved in creating a seed AGI while creating my own design for just such a system, and they are all solvable, or more often already solved but not yet integrated.

Replies from: Punoxysm, shminux, James_Miller
comment by Punoxysm · 2014-06-22T07:29:47.998Z · LW(p) · GW(p)

I'm a computer scientist who was, until quite recently, in a machine learning and natural language processing PhD program. I have in-depth knowledge of machine learning, NLP, and text mining.

In particular, I know that the broadest existing knowledge bases in the real world (e.g. Google's Knowledge Graph) are built on a hodge-podge of text parsing and logical inference techniques. These systems can be huge in scale and very useful, and they reveal that a lot of knowledge is quite shallow even if it appears deeper; but they also reveal the difficulty of dealing with knowledge that genuinely is deeper, by which I mean it relies on complex models of the world.

I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.

The pitfall with private research is that nobody sees your work, meaning there's nobody to criticize it or tell you your assessment "the issues are solvable or solved but not yet integrated" is incorrect. Or, if it is correct and I'm dead wrong in my pessimism, nobody can know that either. Why would publishing it be dangerous (yeah, I get the general "AGI can be dangerous" thing, but what would be the actual marginal danger vs. not publishing and being left out of important conversations when they happen, assuming you've got something)?

Replies from: None
comment by [deleted] · 2014-06-22T08:20:55.037Z · LW(p) · GW(p)

In terms of practicalities, AI and AGI share two letters, and that's about it. OpenCog / CogPrime is at core nothing more than an interface-language specification built on hypergraphs, one capable of storing inputs, outputs, and trace data for any kind of narrow AI application. It is most importantly a platform for integrating narrow AI techniques. (If you read any of the official documentation, you'll find most of it covers the specific narrow AI components they've selected, and the specific interconnect networks they are deploying. But those are secondary details to the more important contribution: the universal hypergraph language of the atomspace.)

So when you say:

I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.

It doesn't really make sense. OpenCog solves these issues in the same way: through traditional text parsing and logical inference techniques. What's different is that the inputs, outputs, and the way in which these components are used are fully specified inside of the system, in a data structure that is self-modifying. Think LISP: code is data (albeit using a weird hypergraph language instead of s-expressions), data is code, and the machine has access to its own source code.

That's mostly what AGI is about: the interconnects and reflection layers that allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside its programmed expertise.
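
To make the "code is data" framing concrete, here is a toy Python sketch (an illustration only, not OpenCog's actual API) of an AtomSpace-style hypergraph store in which relations are themselves atoms that other processes can inspect and rewrite:

```python
# Toy AtomSpace-style hypergraph store (illustration only, not OpenCog's API).
import itertools

class Atom:
    _ids = itertools.count()

    def __init__(self, atom_type, name=None, outgoing=()):
        self.id = next(Atom._ids)
        self.type = atom_type            # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # only "node" atoms carry names
        self.outgoing = tuple(outgoing)  # "link" atoms point at other atoms

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, name=None, outgoing=()):
        atom = Atom(atom_type, name, outgoing)
        self.atoms.append(atom)
        return atom

    def of_type(self, atom_type):
        return [a for a in self.atoms if a.type == atom_type]

aspace = AtomSpace()
cat = aspace.add("ConceptNode", "cat")
animal = aspace.add("ConceptNode", "animal")
# The relation is itself an atom, so rules that rewrite relations are just
# more graph manipulation: the system's "code" and its data share one store.
isa = aspace.add("InheritanceLink", outgoing=(cat, animal))

print([a.name for a in isa.outgoing])      # ['cat', 'animal']
print(len(aspace.of_type("ConceptNode")))  # 2
```

In the real AtomSpace the rules and schemata that operate on the graph are also stored as atoms, which is roughly where the self-modification claim comes from.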

Replies from: Punoxysm, V_V, None
comment by Punoxysm · 2014-06-22T19:06:01.610Z · LW(p) · GW(p)

My two cents here are just:

1) Narrow AI is still the bottleneck to Strong AI, and a feedback loop of development, especially in the area of NLP, is what's going to eventually crack the hardest problems.

2) OpenCog's hypergraphs do not seem especially useful. The power of a language cannot overcome the fact that without sufficiently strong self-modification techniques, a system will never be able to self-modify into anything useful. Interconnects and reflection just allow a program to mess itself up, not become more useful, and scale or better NLP modules alone aren't a solution.

comment by V_V · 2014-06-22T15:24:47.422Z · LW(p) · GW(p)

That's mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.

Actually, what AGI is about, by definition, is achieving human-level or higher performance across a broad variety of cognitive tasks.
Whether self-modification is useful or necessary to achieve such a goal is questionable.

Even if self-modification turns out to be a core enabling technology for AGI, we are still quite far from getting it to work.
Just having a language or platform that allows introspection and runtime code generation isn't enough: LISP didn't lead to AGI. Neither did Eurisko. And, while I'm not very familiar with OpenCog, frankly I can't see any fundamental innovation in it.

Representing code as data is trivial. The hard problem is making a machine reason about code.
Automatic program verification is only barely starting to become commercially useful in a few restricted application domains, and automatic programming is still largely undeveloped, with very little progress beyond optimizing compilers.
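
As a minimal illustration of the trivial half (a sketch using only Python's standard library; ast.unparse needs Python 3.9+): a program can parse, edit, and re-emit source code as data without any grasp of what the code means.

```python
# Code as data: parse a program, edit its syntax tree, print it back out.
import ast

source = "def double(x):\n    return x * 2\n"
tree = ast.parse(source)          # the program, now just a data structure

# A purely syntactic edit: rename the function by walking the tree.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        node.name = "twice"

print(ast.unparse(tree))          # prints the rewritten source
```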

Having a machine write code at the level of a human programmer in 2-5 years is completely unrealistic, and 20 years looks like the bare minimum, with the realistic expectation being higher.

Replies from: None
comment by [deleted] · 2014-06-22T16:36:13.157Z · LW(p) · GW(p)

"Having a machine write code at the level of a human programmer" is a strawman. One can already think about machine learning techniques as the computer writing its own classification programs. These machines already "write code" (classifiers) better than any human could under the same circumstances.. it just doesn't look like code a human would write.

A significant piece of my own architecture does basically the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; they are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system. This isn't the way humans write code, and it doesn't produce output which looks like "source code" as we know it. But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.

Regarding what AGI is "about", yes that is true in the strictest, definitional sense. But what I was trying to convey is how AGI is separate from narrow AI in that it is basically a field of meta-AI. An AGI approaches a problem by first thinking about how to solve the problem. It first thinks about thinking, before it thinks.

And yes, there are generally multiple ways it can actually accomplish that: e.g., rather than solving the problem directly or modifying itself to do so, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it's effectively the same thing.

Replies from: V_V, None
comment by V_V · 2014-06-22T17:50:38.274Z · LW(p) · GW(p)

"Having a machine write code at the level of a human programmer" is a strawman. One can already think about machine learning techniques as the computer writing its own classification programs. These machines already "write code" (classifiers) better than any human could under the same circumstances.. it just doesn't look like code a human would write.

Yes, and my pocket calculator can compute cosines faster than Newton could. Therefore my pocket calculator is better at math than Newton.

A significant piece of my own architecture does basically the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; they are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system.

Lots of commonly used classifiers are "nearly Turing-complete".
Specifically, non-linear SVMs, feed-forward neural networks, and the various kinds of decision-tree methods can represent arbitrary Boolean functions, while recurrent neural networks can represent arbitrary finite state automata when implemented with finite-precision arithmetic, and are Turing-complete when implemented with arbitrary-precision arithmetic.
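
For instance, a tiny hand-wired sketch (not a trained model) makes the representational point concrete: two layers of step units suffice for XOR, and the same construction extends to any Boolean function.

```python
# A hand-wired two-layer network of step units computing XOR.
def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)        # fires if at least one input is 1
    h2 = step(a + b - 1.5)        # fires only if both inputs are 1
    return step(h1 - h2 - 0.5)    # "at least one, but not both"

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```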

But we don't exactly observe hordes of unemployed programmers begging in the streets after losing their jobs to some machine learning algorithm, do we?
Useful as they are, current machine learning algorithms are still very far from performing automatic programming.

But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.

Really? Can your system provide a correct implementation of the FizzBuzz program starting from a specification written in English?
Can it play competitively in a programming contest?

Or, even if your system is restricted to machine learning, can it beat random forests on a standard benchmark?
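
For concreteness, the kind of stock baseline meant here might look like the following sketch (assuming scikit-learn; the dataset choice is arbitrary):

```python
# A stock random-forest baseline on a standard dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())   # typically well above 0.9 on this dataset
```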

If it can do no such thing, perhaps you should consider avoiding such claims, in particular when you are unwilling to show your work.

And yes, there are generally multiple ways it can actually accomplish that: e.g., rather than solving the problem directly or modifying itself to do so, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it's effectively the same thing.

Which we are currently very far from accomplishing.

Replies from: Gavin, None
comment by Gavin · 2014-06-23T16:14:23.000Z · LW(p) · GW(p)

I'm not disagreeing with the general thrust of your comment, which I think makes a lot of sense.

But an AGI is not at all required to start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth have the ability to interpret and model human thoughts and languages.

We consider "write fizzbuzz from a description" to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before human-style intelligence can be comprehended using raw general intelligence and massive amounts of data and study.

Replies from: V_V
comment by V_V · 2014-06-23T17:28:03.466Z · LW(p) · GW(p)

I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.
Think of making first contact with an undiscovered human civilization, or better, a civilization of space-faring aliens.

... raw general intelligence ...

Note that it is unclear whether there is any way to achieve "general intelligence" other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.
I mean, Solomonoff induction, AIXI and the like do certainly look interesting on paper, but the extent to which they can be applied to real problems (if it is even possible) without any specialization is not known.

The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included in human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) were a likely outcome or a fortunate occurrence.

Anyway, even if it turns out that "universal induction" techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.

Replies from: None
comment by [deleted] · 2014-06-23T19:33:07.542Z · LW(p) · GW(p)

I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.

I think we're now saying the same thing, but to be clear: I don't think it follows at all that an AGI needs to be good at X, for any interesting X, in order to be considered an AGI. No, it has the meta-level condition instead: it must be able to become good at X, if doing so accomplishes its goals and it is given suitable inputs and processing power to accomplish that learning task.

Indeed, my blitz AGI design involves no natural language processing components at all. The initial goal-loading and debug interfaces would be via a custom language best described as a cross between vocabulary-limited Lojban and a strongly typed functional programming language. Having looked at the best approaches to NLP so far (Watson et al.), and at expert opinions on what would be required to go beyond that and build a truly human-level understanding of language, I found nothing that could not be rediscovered and developed by a less capable seed AI, given sufficient resources and time.

Note that it is unclear whether there is any way to achieve "general intelligence" other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.

Ok, try this experiment: start with a high-level diagram of what you would consider a complete human-level AGI design, i.e. able to do everything a human can do, as well as or better. I think we're on the same page in assuming that at least on one level it would consist of a ton of little specialized programs handling the various specialized aspects of human intelligence. Enumerate all of these, and take a guess at how they are interconnected. I doubt you'll be able to fit it all on one sheet of paper, or even 10. Here's a start based on OpenCog, but there are lots more details you will need to fill in:

http://goertzel.org/MonsterDiagram.jpg

Now consider each component in turn. If you cut that component out of the diagram (perhaps rearranging some of the connections as necessary), could you reliably recreate it with the remaining pieces, if tasked with doing so and given the necessary inputs and processing power? If so, get rid of it. If not, ask: what are the minimum (less-than-human-level) capabilities required that would let you recreate the rest? Replace it with those. Continue until the design can't be simplified further.

This experiment is a form of local search, and you may have to repeat it from different starting points, or employ other global search methods, to be sure that you are arriving at something close to the global minimum seed AGI design; but as an exercise I hope it gets the point across.
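
A toy sketch of that pruning loop (the component names and the can_recreate judgment are made-up stand-ins for illustration, not pieces of any actual design):

```python
# Greedy pruning of a component diagram, restarted from random orders.
# `can_recreate` stands in for the human judgment call described above.
import random

def minimize(components, can_recreate, restarts=10, seed=0):
    rng = random.Random(seed)
    best = set(components)
    for _ in range(restarts):
        remaining = set(components)
        order = sorted(components)
        rng.shuffle(order)
        for part in order:
            rest = remaining - {part}
            if part in remaining and can_recreate(part, rest):
                remaining = rest          # the rest could re-learn this piece
        if len(remaining) < len(best):
            best = remaining
    return best

# Toy usage: pretend anything is re-learnable as long as a learner and a
# memory survive somewhere in the design.
parts = {"learner", "memory", "vision", "language", "planning"}
core = minimize(parts, lambda p, rest: {"learner", "memory"} <= rest)
print(sorted(core))   # ['learner', 'memory']
```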

The basic AGI design I arrived at involved a dozen different "universal induction" techniques with different strengths, a meta-architecture for linking them together, a generic and powerful internal language for representing really anything, and basic scaffolding to stand in for the rest. It's damn slow and inefficient at first, but like a human infant a good portion of its time would be spent "dreaming", where it analyzes its acquired memories and seeks improvements to its own processes... and gains there have multiplying effects. Don't discount the importance of power-law mechanisms.

comment by [deleted] · 2014-06-25T09:13:54.049Z · LW(p) · GW(p)

On the subject of recurrent neural networks, keep in mind that you are such a network, and training you to write code and write it well took years.

comment by [deleted] · 2014-06-25T09:12:09.005Z · LW(p) · GW(p)

A significant piece of my own architecture does basically the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; they are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system.

Hmmm... Do you have a completeness result? I mean, I can see that if you make it a total language, you can just use coinduction to reason about indefinite computing processes, but I'm wondering what sort of internal logic you're using that would allow complete reasoning over programs in the language and decidable typing (since to have the agent rewrite its own code it will also have to type-check its own code).

Current theorem-proving systems like Coq that work in logics this advanced usually have undecidable type inference somewhere, and sometimes require humans to add type annotations.
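
For readers unfamiliar with the jargon, here is a minimal Lean 4 sketch, chosen only as a familiar stand-in for "a total language with a strong type system" rather than the unpublished language under discussion, of what totality buys you:

```lean
-- Structural recursion: Lean accepts this as total with no extra annotations.
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

-- Because everything terminates, concrete facts about the code can be
-- checked by the type checker itself.
example : double 3 = 6 := rfl

-- General (non-structural) recursion instead needs an explicit termination
-- argument before it is accepted, which is where decidability pressure and
-- the occasional human-supplied annotation come in.
```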

comment by [deleted] · 2014-06-25T09:10:11.790Z · LW(p) · GW(p)

Personal opinion: OpenCog is attempting to get as general as it can within the logic-and-discrete-maths framework of Narrow AI. They are going to hit a wall as they try to connect their current video-game-like environment to the real world, and find that they failed to integrate probabilistic approaches reasonably well. Also, without probabilistic approaches, you can't get around Rice's Theorem to build a self-improving agent.

Wellll.... the agent could make "narrow" self-improvements. It could build a formal specification for a few of its component parts and then perform the equivalent of provable compiler optimizations. But it would have a very hard time strengthening its core logic, as Rice's Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
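
For reference, a loose statement of the theorem being invoked: for any non-trivial semantic property $P$ of partial computable functions, the index set
$$\{\, e \mid \varphi_e \text{ has property } P \,\}$$
is undecidable, where "non-trivial" means some programs have the property and some do not. Note that this rules out a single procedure deciding the property for every program; it does not rule out proving it for particular programs.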

Replies from: asr
comment by asr · 2014-06-25T14:04:43.391Z · LW(p) · GW(p)

But it would have a very hard time strengthening its core logic, as Rice's Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.

This seems like the wrong conclusion to draw. Rice's theorem (and other undecidability results) implies that there exist optimizations that are safe but cannot be proven to be safe. It doesn't follow that most optimizations are hard to prove. One imagines that software could do what humans do -- hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won't necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
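
A minimal sketch of that loop (propose_rewrites and try_prove_equiv are hypothetical stand-ins, not real tools):

```python
# Proof-guided rewriting with a fixed budget per candidate.
def improve(program, propose_rewrites, try_prove_equiv, budget_s=5.0):
    accepted = []
    for candidate in propose_rewrites(program):
        # Undecidability only means some safe rewrites will time out here;
        # it does not stop us from harvesting the ones that do prove out.
        if try_prove_equiv(program, candidate, timeout=budget_s):
            program = candidate
            accepted.append(candidate)
    return program, accepted

# Toy usage: "rewrites" are candidate ints, and the "prover" instantly
# certifies even numbers while giving up on odd ones.
prog, log = improve(0, lambda p: [1, 2, 3, 4],
                    lambda a, b, timeout: b % 2 == 0)
print(prog, log)   # 4 [2, 4]
```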

Replies from: None
comment by [deleted] · 2014-06-25T14:26:41.552Z · LW(p) · GW(p)

One imagines that software could do what humans do -- hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won't necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.

To do that it's going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

Replies from: None, asr
comment by [deleted] · 2014-06-25T15:37:09.507Z · LW(p) · GW(p)

Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.

Replies from: None
comment by [deleted] · 2014-06-25T20:39:20.780Z · LW(p) · GW(p)

Aaaaand now I have to go look at OpenCog again.

comment by asr · 2014-06-25T15:09:26.822Z · LW(p) · GW(p)

To do that it's going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

I don't see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the "self-improvement-finding submodule", rather than an explicit part of the overall architecture. I don't claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.

comment by Shmi (shminux) · 2014-06-22T05:02:05.579Z · LW(p) · GW(p)

I have trouble trusting your expert opinion because it is not clear to me that you are an expert in the field, though you claim to be. Google doesn't point to any of your research in the area, and I can find no mention of your work beyond bitcoin by any (other) AI researchers. Feel free to link to anything corroborating your claims.

Replies from: None
comment by [deleted] · 2014-06-22T05:48:10.726Z · LW(p) · GW(p)

I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason. As I mention in the post you replied to, my work is private and unpublished. None of my work is accessible on the internet, as it should be. I consider it unethical to publish AGI research given what is at stake.

Replies from: V_V, shminux
comment by V_V · 2014-06-22T15:01:02.687Z · LW(p) · GW(p)

I have as much credibility as Eliezer Yudkowsky in that regard

That is, not very much.
But at least Eliezer Yudkowsky and pals have made an effort to publish arguments for their position, even if they haven't published in peer-reviewed journals or conferences (except some philosophical "special issue" volumes, IIRC).

Your "Trust me, I'm a computer scientist and I've fiddled with OpenCog in my basement but I can't show you my work because humans not ready for it" gives you even less credibility.

comment by Shmi (shminux) · 2014-06-22T07:09:17.735Z · LW(p) · GW(p)

I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason.

Eliezer has published a lot of relevant work; I have seen none from you.

Replies from: None
comment by [deleted] · 2014-06-22T07:57:44.676Z · LW(p) · GW(p)

Eliezer has publications in the field of artificial intelligence? Where?

Replies from: Caspar42, None
comment by [deleted] · 2014-06-25T09:15:23.764Z · LW(p) · GW(p)

Don't make me figure this stuff out and publish the safe bits just to embarrass you guys.

comment by James_Miller · 2014-06-22T05:17:42.466Z · LW(p) · GW(p)

Do you have any predictions about what types of new narrow AI we are likely to see in the next few years?

Replies from: None
comment by [deleted] · 2014-06-22T05:50:41.305Z · LW(p) · GW(p)

No, I wouldn't feel qualified to make predictions on novel narrow AI developments. I stay up to date with what's being published chiefly because my own design involves integrating a handful of narrow AI techniques, and new developments have ramifications for that. But I have no inside knowledge about what frontiers are being pushed next.

Edit: narrow AI and general AI are two very different fields, in case you didn't know.

comment by Jan_Rzymkowski · 2014-06-22T12:17:38.106Z · LW(p) · GW(p)

This whole debate makes me wonder if we can have any certainty about AI predictions. Almost all of it is based on personal opinions, which are highly susceptible to biases. And even people with huge knowledge of these biases aren't safe. I don't think anyone can trace their prediction back to empirical data; it all comes from our minds' black boxes, to which biases have full access and which we can't examine with our consciousness.

While I find Mark's prediction far from accurate, I know that might be just because I wouldn't like it. I like to think that I would have some impact on AGI research, that some new insights are needed rather than just pumping more and more money into Siri-like products. Development of AI in the next 10-15 years would mean that no qualitative research was needed and that all that remains to be done is honing current technology. It would also mean there was no time for thorough development of friendliness, and we may end up with an AI catastrophe.

While I guess human-level AI will arrive around the 2070s, I know I would LIKE it to happen in the 2070s. And I base this prediction on no solid foundation.

Can anybody point me to any near-empirical data concerning when AGI may be developed? Anything more solid than the hunch of even the most prominent AI researcher? Applying Moore's law seems a bit magical; it no doubt carries some Bayesian weight, but with little certainty.

The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors which make us near-certain that AI will not be created within a few months from now, then maybe, upon closer look, they can provide us with some less shaky predictions for the further future.

Replies from: None
comment by [deleted] · 2014-06-22T15:41:02.844Z · LW(p) · GW(p)

Honestly, the best empirical data I know of is Ray Kurzweil's extrapolations, which place the singularity generically at 2045, although he places human-level AI earlier, around 2029 (obviously he does not lend credence to a FOOM). You have to take some care in using these predictions, as individual technologies eventually hit hard limits and leave the exponential portion of the S-curve, but molecular and reversible computation show that there is plenty of room at the bottom here.

2070 is a crazily late date. If you assume the worst case, that we will be unable to build AGI any faster than by direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today. If you assume that our neural abstractions are all wrong and that we need to do a full simulation including the inner working details of neural cells and transport mechanisms, that's possible in the 2040s. Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.

The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors which make us near-certain that AI will not be created within a few months from now, then maybe, upon closer look, they can provide us with some less shaky predictions for the further future.

I'm not sure what you're saying here. That we can assume AI won't arrive next month because it didn't arrive last month, or the month before last, etc.? That seems like shaky logic.

If you want to find out how long it will take to make a self-improving AGI, then (1) find or create a design for one, and (2) construct a project plan. Flesh that plan out in detail by researching and eliminating as much uncertainty as you are able to, and fully specify dependencies. Then find the critical path.
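
As a toy sketch of that last step (task names, durations, and dependencies are made up purely for illustration), the critical path of a dependency DAG falls out of a few lines of dynamic programming:

```python
# Critical path of a toy task-dependency DAG (all names/durations made up).
import functools

durations = {"spec": 4, "infra": 8, "learner": 26, "integration": 12, "testing": 10}
deps = {"spec": [], "infra": ["spec"], "learner": ["spec"],
        "integration": ["infra", "learner"], "testing": ["integration"]}

@functools.lru_cache(maxsize=None)
def finish(task):
    # earliest finish = own duration + latest finish among prerequisites
    return durations[task] + max((finish(d) for d in deps[task]), default=0)

def critical_path(task):
    path = [task]
    while deps[path[-1]]:
        path.append(max(deps[path[-1]], key=finish))
    return list(reversed(path))

end = max(durations, key=finish)
print(finish(end), critical_path(end))  # 52 ['spec', 'learner', 'integration', 'testing']
```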

Edit: There's a larger issue which I forgot to mention: I find it a little strange to think of AGI arriving in 2070 vs. the near future as comforting. If you assume the AI has evil intentions, then it needs to do a lot of computational legwork before it is able to carry out any of its plans. With today's technology it's not really possible to do that and remain hidden. It could take over a botnet, sure, but the level of HPC required to develop new computational technology (e.g. molecular nanotechnology) requires data centers today. In 2070, though, either that technology already exists or a home network of PCs would be sufficient. By being released earlier, the UFAI has more legwork it needs to do in the event of a breakout scenario, giving higher chances of detection and more of a buffer for humanity.

Replies from: Jan_Rzymkowski
comment by Jan_Rzymkowski · 2014-06-22T19:46:52.062Z · LW(p) · GW(p)

I'm not willing to engage in a discussion where I defend my guesses and attack your prediction. I don't have sufficient knowledge, nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.

I'll use this post to address some of your claims, but don't treat that as an argument for when AI will be created:

How are Ray Kurzweil's extrapolations empirical data? If I'm not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn't possible, because the technology wasn't known. By 2029 we may have the proper resources (computational power), but still lack the knowledge of how to use them (what programs to run on those supercomputers).

I'm not sure what you're saying here. That we can assume AI won't arrive next month because it didn't arrive last month, or the month before last, etc.? That seems like shaky logic.

I'm saying that, I guess, everybody would agree that AI will not arrive in a month. I'm interested in what basis we have for making such a claim. I'm not trying to make an argument about when AI will arrive; I'm genuinely asking.

You're right about the comforting factor of AI coming soon; I hadn't thought of that. But still, development of AI in the near future would probably mean that its creators haven't solved the friendliness problem. Current methods are very black-box. More than that, I'm a bit concerned about current morality and government control. I'm a bit scared of what the people of today might do with such power. You don't like gay marriage? AI can probably "solve" that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it's hard to tell where our preferences would point.

If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030's on technological pathways that can be foreseen today.

Are you taking into account that to this day we don't truly understand the biological mechanisms of memory formation and the development of neural connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and how close we are to understanding brain dynamics? (The creation of new synapses, control of their strength, etc.)

Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.

I'm tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them? Humans can't even understand a nematode's neural network. You expect them to understand the whole 100-billion-neuron human brain?

Sorry for the above; it would need a much longer discussion, and I really don't have the strength for that.

I hope this is helpful in some way.

Replies from: Izeinwinter, FeepingCreature
comment by Izeinwinter · 2014-06-23T13:27:46.080Z · LW(p) · GW(p)

No, but a sufficiently morally depraved research program can certainly do a hard take-off based on direct simulations and "best-guess butchery" alone. Once you have a brain running in code, you can do experimental neurosurgery with a reset button and without the constraints of physicality, biology, or viability stopping you. A thousand simulated man-years of virtual people dying horrifying deaths later... This isn't a very desirable future, but it is a possible one.

comment by FeepingCreature · 2014-06-30T00:59:32.365Z · LW(p) · GW(p)

I'm tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them?

Don't underestimate the rapid progress that can be achieved with very short feedback loops. (In this case, probably rapid progress into a wireheading attractor, but still.)