Digital humans vs merge with AI? Same or different?
post by Nathan Helm-Burger (nathan-helm-burger), mishka · 2023-12-06T04:56:38.261Z · LW · GW · 11 comments
Comments sorted by top scores.
comment by cousin_it · 2023-12-06T14:16:27.000Z · LW(p) · GW(p)
I think on all practical tasks, pure AI will keep getting better faster than any kind of uploads, intelligence augmentation, BCI and so on. Because it's easier to optimize without running into human limitations.
Replies from: mishka
↑ comment by mishka · 2023-12-06T15:17:04.170Z · LW(p) · GW(p)
That's only after it becomes strongly self-improving on its own. Until then, human AI researchers and human AI engineers are necessary, and they have to connect to computers via some kind of interface, so a "usual interface" vs "high-end non-invasive BCI + high-end augmented reality" is a trade-off leading AI organizations will need to consider.
Of course, any realistic or semi-realistic AI existential safety plan includes tight collaboration between human AI safety researchers and advanced AI systems (e.g. the Cyborgism [? · GW] community on LessWrong, or various aspects of OpenAI's evolving AI existential safety plans, e.g. https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a [LW · GW]). So here the question of interfaces is also important.
But nothing is foolproof. AI systems can "run away from humans interacting with them", or "do something on the side without coordinating with humans", or humans and their motivations can be unfavorably modified in the process of their tight interactions with AIs, and so on...
Yet it does not look like the world has any chance to stop, especially given the new "AI Alliance" announcement (it seems we no longer have a chance even to stop radical open-sourcing of advanced systems, see e.g. https://arstechnica.com/information-technology/2023/12/ibm-meta-form-ai-alliance-with-50-organizations-to-promote-open-source-ai/, which is very different from "Meta is unilaterally open-sourcing some very advanced models, and perhaps it can be stopped", never mind a coordinated industry-wide slowdown across the board)... So we should try our best to figure out how to improve the chances, assuming that timelines might be rather short...
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-07T12:07:08.627Z · LW(p) · GW(p)
That’s only after it becomes strongly self-improving on its own. Until then, human AI researchers and human AI engineers are necessary, and they have to connect to computers via some kind of interface, so a “usual interface” vs “high-end non-invasive BCI + high-end augmented reality” is a trade-off leading AI organizations will need to consider.
Right now AI is improving fast and BCI isn't. Leading AI researchers aren't spending any time on BCI or wearing VR helmets at work, because it's pointless; a laptop is quite enough. I'm pretty sure this state of affairs will continue until AGI arrives.
Replies from: mishka
↑ comment by mishka · 2023-12-07T14:43:37.747Z · LW(p) · GW(p)
Well, I am not a "leading AI researcher" (at least, not in the sense of having a track record of beating SOTA results on consensus benchmarks, which is how one usually defines that notion), but I am one of those trying to change the fact that non-invasive BCI is not more popular. My ability to have any effect on this, of course, does depend on whether I have enough coding and organizational skills.
But one of the points of the dialogue for me was to see if that might actually be counter-productive from the viewpoint of AI existential safety (and if so, whether I should reconsider).
And in this sense, some particular hidden pitfalls to watch for were identified during this dialogue (previously, I was mostly worried about direct dangers to participants stemming from tight coupling with actively and not always predictably behaving electronic devices, even when the coupling is via non-invasive devices, so I was spending some time minding those personal safety aspects and trying to establish a reasonable personal safety protocol).
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-07T15:32:16.014Z · LW(p) · GW(p)
Non-invasive BCI, as in, getting ChatGPT suggestions and ads in your thoughts? I think even if we forget about AI safety for a minute, this idea feels so dystopian (especially if you imagine your kids doing it) that it's better not to go there.
And if you're thinking about offering this tech to AI researchers only, that doesn't seem feasible. As soon as it exists, people will know they can make bank by commercializing it and someone will.
But yeah, still, the biggest hurdle for this idea is simply that all eyes are on AI, which is moving very fast. So we're not getting BCI before AI becomes a big part of the economy (which is starting to happen now, well before AGI and self-improvement). And after that we might well get a bunch of sci-fi stuff for a while, but the world will be careening off course in a way unfixable by humans, which to me counts as losing the game.
Replies from: mishka
↑ comment by mishka · 2023-12-07T16:19:48.059Z · LW(p) · GW(p)
Non-invasive BCI, as in, getting ChatGPT suggestions and ads in your thoughts?
I was mostly thinking in terms of the computer-to-brain direction, represented by psychoactive audio-visual modalities. Yes, this might be roughly on par with taking strong psychedelics or strong stimulants, but with different safety-risk trade-offs (better ability to control the experience and fewer physical side effects, if things go well, but with the potential for a completely different set of dangers if things go badly).
Yes, this might not necessarily be something one wants to dump on the world at large, at least not until select groups have more experience with it and the safety-risk trade-offs are better understood...
And if you're thinking about offering this tech to AI researchers only, that doesn't seem feasible. As soon as it exists, people will know they can make bank by commercializing it and someone will.
Well, the spec exists today (and I am sure this is not the only spec of this kind). All that separates this from reality is the willingness of a small group of people to get together and experiment with inexpensive ways of building it.
Given that people are very sluggish about converting theoretically obvious things into reality as long as those things are not in the mainstream (cf. it being clear that ReLUs should work well since at least the 2000 Nature paper, with the field ignoring them until 2011), I don't know if "internal-use tools" would cause all that much excitement.
If you need a more contemporary example related to Cyborgism, Janus' Loom is a super-powerful interface to ChatGPT-like systems; it exists, it's open source, etc. And so what? How many people even know about it, never mind use it?
Of course, if people start advertising it along the lines of "hey, take this drug-free full-strength psychedelic trip", yeah, then it'll become popular ;-)
in a way unfixable by humans
I do think that a collaborative human-AI mindset, rather than an adversarial one, is the only feasible way; cf. my comments on Ilya Sutskever's thinking in https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a [LW · GW].
If we want to continue thinking in terms of "us-vs-them", the game has been lost already.
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-08T09:44:38.862Z · LW(p) · GW(p)
If we want to continue thinking in terms of “us-vs-them”
I think this is mostly determined by economics: to what extent human thinking and AI are complementary goods, and to what extent they're substitutes for each other. Right now AIs are still used by humans, but it seems to me that the market is heading toward putting humans out of jobs entirely, because an AI query costs much less than an AI-with-human-in-the-loop query.
Replies from: mishka
↑ comment by mishka · 2023-12-08T16:20:56.816Z · LW(p) · GW(p)
the market is heading toward putting humans out of jobs entirely
I think so.
There will be some exceptions, e.g. humans who choose to tightly merge with AIs or otherwise strongly upgrade, or some local economic activity in communities that deliberately pursue a non-automation path, but the economic status of most humans will probably be no different from that of children or retirees (that is, if things go well).
So, yes, the problem of making sure that life is interesting and meaningful will definitely exist (if things go well). AIs might help find various non-trivial solutions to this (since not everyone is happy simply pursuing arts, sciences, meditation, hiking, travel, and social life for their own sake).
So the question is, why might things go well, and what can we do to increase the chances of that...
comment by habryka (habryka4) · 2023-12-06T06:39:30.915Z · LW(p) · GW(p)
(I cleaned up the formatting of this post a bit. There were some blockquotes that weren't proper blockquotes, and some lists that weren't proper bullet lists. Feel free to revert)
Replies from: mishka
↑ comment by mishka · 2023-12-06T06:46:29.596Z · LW(p) · GW(p)
Thanks a lot!
(I think I am so used to Markdown that I did not handle the LessWrong Docs format of the dialogue correctly. Is this a matter of how a given dialogue is set up at the beginning, or are dialogues inherently LessWrong Docs only?)
Replies from: habryka4
↑ comment by habryka (habryka4) · 2023-12-06T07:14:48.842Z · LW(p) · GW(p)
Dialogues are inherently LessWrong Docs because of the simultaneous editing features, which we found pretty important for making things work.