AI origin question
post by hairyfigment · 2015-11-01T20:35:18.911Z · 18 comments
What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?
These are the ones that occur to me, in no precise order:
1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.
It seems like #2-5 would have formally specified goals which in the long term could be satisfied without human beings, and which in the short term would require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.
Comments sorted by top scores.
comment by Manfred · 2015-11-01T22:49:40.913Z
I think if our knowledge of AI is enough to make 1-7 generally intelligent "by mistake," it's enough to make an AGI intentionally.
I think the most plausible origin of AGI is simply someone intentionally trying to create AGI.
↑ comment by hairyfigment · 2015-11-02T01:31:17.122Z
How did we develop that knowledge? Did nobody use it to make tons of money before it was knowably sufficient to create AGI?
↑ comment by Manfred · 2015-11-02T04:16:03.902Z
The vast majority of the work will be done not for immediate personal gain, but for the same reason most other research gets done. As we get closer, things will probably get more volatile, but whether we get academia all the way to the finish line or some sort of government arms race or something in between, I think it's most likely that AGI will be created qua AGI.
↑ comment by hairyfigment · 2015-11-02T05:37:35.291Z
"Academia" is an interesting way to put it. Bluntly, the people currently trying to make AGI seem like kooks - sometimes associated with academic institutions but without much support from respectable people. (You'll recall that MIRI is not trying to create AGI yet.)
Now, my list may be too heavily influenced by present conditions. I certainly think the development will take longer than this post might suggest. I would have included an 'everything else' category except that I was inviting people to add possibilities. But are you actually saying you think academia will begin a credible AGI project, before one of the options on my list is well underway, and do so without AI safety people pushing that option?
I'm also somewhat confused by the idea of a government arms race having the deliberate aim of creating AGI, rather than #7 or weapons of some kind. Our governments have taken on strange projects, but usually with a nominal goal connected to some function of government.
comment by [deleted] · 2015-11-03T12:04:15.412Z
Humans have bred dogs from wolves. Some dogs have language and problem-solving skills comparable to those of human children. They also have a friendly attitude toward humans. Dogs are our first AI. An uplifted animal is another way AI can happen.
Direct brain-to-brain communication might produce a meta-consciousness not found in the original brains.
↑ comment by Houshalter · 2015-11-05T22:27:31.156Z
I do remember reading that domesticated dogs have less intelligence than wolves, according to whatever tests are used to measure animal intelligence. However, dogs are better at understanding cues given by humans - though not necessarily at the level of human children.
Direct brain-to-brain communication might produce a meta-consciousness not found in the original brains.
How? We already have brain-to-brain communication, and it's called language.
↑ comment by Parnpuu · 2015-11-04T09:16:49.882Z
This could work with some kind of human-machine connection as well. I remember reading a paper in computational neuroscience where they hooked an eel's brain to a simple machine and created a loop of machine input - eel input - eel output - machine output. So the eel received perceptual information from the robot and then sent motor commands back to the robot to move it.
↑ comment by ChristianKl · 2015-11-03T14:32:41.598Z
Direct brain-to-brain communication
What does "direct" mean? Synapses linked through a wire?
↑ comment by [deleted] · 2015-11-03T16:38:32.886Z
I have no scientific ideas, only science-fiction ideas, of how it could be done. What I am thinking of are two or more people who are communicating without speech, writing, gesture, eye contact, or other conventional means. Instead, a thought in one person's body is shared / perceived in another person's body. I think of a red fire truck and either you know I'm thinking of a red fire truck or you also think of a red fire truck, by some human-created, non-conventional means. I can only guess it would be partly direct wiring between brains, partly sensors that detect and transmit / reproduce chemical and electrical changes in the brains. I know some small amount of brain monitoring and brain wiring is possible now, but I make no claim that a full brain-to-brain dialogue can ever happen. I'd like it to, and maybe it will, but I do not claim to know.
↑ comment by ChristianKl · 2015-11-03T16:44:15.358Z
If there is a machine that determines that a person is thinking of a red fire truck and then stimulates the neurons in another person's brain, that's not direct. The machine is in the middle.
The machine needs an internal language in which it can model "red fire truck", it has to be able to recognize that concept in Alice by looking at her neuron firing patterns, and it has to have a model of which neuron firings would likely cause something like a "red fire truck" to be perceived by the other person.
Given the translation issues of those two changes of representation system, I don't see why I would call the process "direct".
↑ comment by [deleted] · 2015-11-03T20:37:06.589Z
To be fair, you could also imagine a setup without such internal interpretation, with just signal X in electrode 1 causing signal X to be disgorged in electrode 2. It would then be up to the brains on either end to learn how to modulate/interpret this new channel. People can and do learn to use such new channels whenever they are provided.
↑ comment by ChristianKl · 2015-11-03T21:07:49.912Z
It would then be up to the brains on either end to learn how to modulate/interpret this new channel.
I think that's still substantially about learning a new language in which to communicate and not just transmitting existing thoughts as is.
↑ comment by [deleted] · 2015-11-03T19:24:30.944Z
I was trying, if failing, to convey some realization of 'two heads are better than one.' Not an AI in the interfacing machine, but a consciousness that is neither of the two people connected. A self-awareness not found in either of the two connected people. It's not Alice and it's not Bob, but it is partially in Alice, partially in Bob, and perhaps partially in their connection. The way two sounds with just the right frequencies can produce a third sound when they overlap.
comment by Viliam · 2015-11-02T19:53:42.900Z
The ancient prophecies of paperclip maximizers seem to point towards #1.
But it seems to me that #4 has the greatest incentive to be general, to work across many different domains, because there are many different kinds of companies on the stock market. -- For example, if we look at nanotechnology, #2, #3 and #5 need to be able to compose a short interesting story about nanotechnology. But that's about human psychology; the story has to be interesting, not realistic. On the other hand, #4 needs to be able to look at a company that claims to produce nanotechnology and evaluate whether their projects are realistic or just nice-sounding nonsense. (#6 and #7 also feel too narrow.) -- Of course, "having an incentive" is not the same as "having the problem solved".
The battle between #4 (or another general machine) and #8 will probably depend on the state of hardware, our knowledge of neurobiology, and our knowledge of intelligent algorithms. If we understand "the essence of intelligence" formally enough, we may be able to write intelligent code directly. However, if we do not get much closer to useful formal definitions, but we have insanely powerful hardware and we know which parts of human brain physiology are important, then uploads may come first. -- Note that brain uploads may not be recursively self-improving, so we may get uploads first, but some de novo AGI may still surpass them later.
comment by TheAncientGeek · 2015-11-03T11:03:35.677Z
It's worth noting that if AGI comes from something like Siri, it is likely to be friendly, since the marketplace will select friendly agents. That is almost an argument against building a singleton AGI in an isolated lab... why throw away existing advances in friendliness?
↑ comment by gjm · 2015-11-03T12:06:35.267Z
The marketplace selects friendly-looking agents. Those friendly-looking agents not infrequently go on to mine your personal data and sell it to advertisers, or sell cars with known dangerous defects, having calculated that the extra profit from cutting corners exceeds the likely losses from lawsuits by the families of the people killed, or persuade you to take out a mortgage you can't afford to repay, or sell you wine containing poisonous chemicals that taste nice.
I don't find that process so reliably friendly that I feel good about having it create superintelligent agents.
↑ comment by TheAncientGeek · 2015-11-07T21:10:55.493Z
The marketplace is not selecting unfriendly agents in the sense that friendlier agents are left on the shelf, and the agents are not unfriendly in the sense that they make their own decision to be unfriendly -- they are not deliberately dissimulating, and are not complex enough to do so. The behaviours you mention are essentially hard-coded by the authors of the software, or the decision of producers to market certain products. The current situation is one where agentive corporations are battling it out with agentive consumers, with some not-very-agentive software in the middle.
It's not in the interest of consumers to buy unfriendly agents, because the whole point of agents is to be on the owner's side and act on their behalf. It is in the interest of corporations to sell software that's biased towards themselves, and therefore to sell software that's only seemingly friendly. But that's a variation on an age-old battle, and there are solutions. The free-market solution is to offer agents which aren't rigged -- rational purchasers will prefer them. The statist solution is to call for regulation. It's not much to do with AI as such either way.