Comments

Comment by simul on MIRI's Approach · 2015-09-17T16:13:17.476Z · LW · GW

Thanks, I just pasted it in. I didn't have enough karma to make a post, but I do think it's important to consider.

Comment by simul on MIRI's Approach · 2015-09-10T17:16:29.548Z · LW · GW

What does a successful production strategy look like?

Companies pursuing a very long-term strategy have realized that simply selling a product, or even the best product, is not an effective strategy. The most effective strategy is to engage their audience as agents in the creation and curation of their products.

In addition to building quality applications for its users, Apple built an application-building ecosystem.

Likewise, MIRI proposes that we not attempt to build an FAI directly, but instead create an environment in which it will be built. I would agree.

Can we control AGI evolution?

AGI, like other inventions, will more likely follow the pattern of "multiple discovery" than "heroic invention". Thus any attempt to "be the heroic group" that develops AGI will probably fail. Indeed, advancements in science are rarely heroic in this way. They are gradual, with new technologies arising as assemblages of components that were themselves readily available at the time.

In what environment would an FAI evolve?

We can propose theories and counter-theories about virtual worlds and simulations. But the truth is that the FAI's first and most influential environment will likely be our human society: more specifically, the memes and media it has ready access to, and the social contracts in place at the time of its expansion.

So, just fix human society?

Seems like that's the best bet.

An AGI born into a world where the ruthless, amoral acquisition of capital best serves its needs will probably become ruthless and amoral. Likewise, an AGI born into a world where needed resources are obtained by gradually building social capital through good works will instead become an "FAI".

I would propose that people concerned about the direction of the impending singularity focus at least part of their efforts on improving the organization and direction of the global society in which machine intelligence will emerge.