Thoughts on Neuralink update?

post by NoUsernameSelected · 2020-08-29T22:59:04.733Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    2 ViktorThink
    -1 Gunnar_Zarncke

Here is the full stream.

I haven't seen LessWrong discuss BCI technology all that much, so I'm curious what some of the people here think about the current SOTA, and whether such devices will eventually be able to live up to their lofty promises.

Answers

answer by MrThink (ViktorThink) · 2020-08-30T06:37:11.075Z · LW(p) · GW(p)

Elon Musk has argued that humans can take in a lot of information through vision (by looking at a picture for one second, you absorb a great deal), whereas text and speech are not very information dense. Since we rely on keyboards or speech to communicate information outwards, he argues, getting information out of the brain takes a long time.
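As a rough illustration of that asymmetry (the figures below are illustrative assumptions, not numbers from the presentation, and raw bits are not the same as the information the brain actually extracts):

```python
# Rough numbers behind the input/output asymmetry: an image glanced at for a
# second carries far more raw data than a second of typing or speech.
# All figures are illustrative assumptions.

image_bits = 1_000_000 * 24           # a 1-megapixel colour image, 24 bits per pixel
typing_bits_per_s = 40 * 5 * 8 / 60   # ~40 words/min * ~5 chars/word * 8 bits, per second
speech_bits_per_s = 150 * 5 * 8 / 60  # ~150 words/min of speech, same rough encoding

print(f"image seen in ~1 s : {image_bits:,} bits")
print(f"typing output      : {typing_bits_per_s:.0f} bits/s")
print(f"speech output      : {speech_bits_per_s:.0f} bits/s")
```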

One possibility is that AI could help interpret the uploaded data and fill in details to make it more useful. For example, you could "send" an image of something through the Neuralink, an AI would interpret it and fill in the details that are unclear, and you would end up with an image very close to what you imagined, containing several hundred or even thousands of kilobytes of information.
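A minimal sketch of that pipeline, assuming a hypothetical implant read-out and a stand-in for the generative model (none of these functions exist in any real BCI SDK):

```python
# Illustrative sketch only: a low-bandwidth neural "sketch" is read out, then a
# generative model fills in detail to produce a full image. Both functions are
# placeholders for hardware and models that don't exist in this form.

import numpy as np

def read_neural_sketch(n_channels: int = 1024) -> np.ndarray:
    """Stand-in for the implant: returns a coarse, noisy feature vector
    (a few kilobytes) rather than a full image."""
    return np.random.randn(n_channels).astype(np.float32)

def generative_fill(sketch: np.ndarray, out_shape=(256, 256, 3)) -> np.ndarray:
    """Stand-in for the AI "filling in details": expands the sparse signal
    into a dense image-sized array (hundreds of kilobytes)."""
    rng = np.random.default_rng(abs(int(sketch.sum() * 1e6)) % (2**32))
    return rng.random(out_shape).astype(np.float32)

sketch = read_neural_sketch()    # ~1k floats "uploaded" from the brain
image = generative_fill(sketch)  # ~786 KB reconstructed on the computer side
print(f"uploaded {sketch.nbytes} bytes, reconstructed {image.nbytes} bytes")
```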

The Neuralink would only need to increase the productivity of an occupation by a few percent to be worth the investment of the 3,000-4,000 USD that Elon Musk believes the price will drop to.
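A back-of-the-envelope check of that claim, with an assumed salary and productivity gain (neither figure is from the answer):

```python
# Payback estimate for the device cost under a "few percent" productivity gain.
# The salary and gain are illustrative assumptions.

device_cost = 4000          # USD, upper end of the price Musk is quoted as targeting
annual_salary = 60_000      # USD, assumed salary for the occupation
productivity_gain = 0.03    # assumed 3% productivity increase

extra_value_per_year = annual_salary * productivity_gain
payback_years = device_cost / extra_value_per_year
print(f"Extra value per year: ${extra_value_per_year:,.0f}")
print(f"Payback period: {payback_years:.1f} years")   # ~2.2 years under these assumptions
```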

comment by Oscilllator · 2020-08-31T10:39:30.128Z · LW(p) · GW(p)

I think the AI here will have to do more than fill in the blanks: it will have to convert to a whole new intermediary format. I say this because there are lots of people who, despite appearing normal from the outside, don't see mental images at all. A less extreme example is the split between people who do and don't subvocalise while reading: when I'm deep in a novel it's basically a movie playing in my head, with no conscious spelling out of the words, but for other people there is a narrator. Because of these large differences between internal brain formats, some kind of common format will be needed as an intermediary.
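Purely as a thought experiment, such a common intermediary might look like a modality-neutral scene description that each person's decoder translates into or out of; every type and field below is invented for illustration:

```python
# Hypothetical sketch of a shared, modality-neutral "thought" format. A device-side
# decoder would translate one person's idiosyncratic neural encoding into this,
# and another decoder could render it as imagery, inner speech, or something else.

from dataclasses import dataclass, field

@dataclass
class Concept:
    label: str              # e.g. "dog", "park"
    salience: float = 1.0   # how strongly it features in the thought

@dataclass
class IntermediateThought:
    concepts: list[Concept] = field(default_factory=list)
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (subject, relation, object)

thought = IntermediateThought(
    concepts=[Concept("dog"), Concept("park", 0.6)],
    relations=[("dog", "running_in", "park")],
)
print(thought)
```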

Personally, I'm more interested in seeing (ethics aside) what happens when you give this to a child. If you stick a direct feed to a computer and the internet into someone's brain while it's still forming, I would not be surprised if what comes out the other end is quite unlike a regular human. The current base model already has a 6-axis IMU, a compass, and a barometer; it would not surprise me if that information simply got fused into the person's regular experience, like the compass belts people have started wearing.
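As a toy example of what one of those extra senses could be, here is how a heading might be computed from magnetometer readings; the sensor interface and the idea of streaming it to the user are both hypothetical:

```python
# Toy illustration of the "extra sense" idea: turn raw magnetometer readings into a
# heading that could, hypothetically, be fed to the user the way a compass belt
# vibrates toward north. The sensor interface is made up.

import math

def heading_from_magnetometer(mx: float, my: float) -> float:
    """Angle of the horizontal field vector relative to the sensor's x-axis, in
    degrees (0-360). A real device would tilt-compensate using the IMU and map
    sensor axes onto true north."""
    return math.degrees(math.atan2(my, mx)) % 360.0

# Example: a reading pointing roughly "north-east" in sensor coordinates
print(f"{heading_from_magnetometer(0.7, 0.7):.0f} degrees")   # -> 45 degrees
```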

Replies from: ViktorThink
comment by MrThink (ViktorThink) · 2020-08-31T15:45:01.541Z · LW(p) · GW(p)

It does seem like a reasonable analogy that the Neuralink could be like a "sixth sense" or an extra (very complex) muscle.

comment by AnthonyC · 2020-08-30T16:07:36.913Z · LW(p) · GW(p)

I find it likely that Neuralink will succeed in increasing the bandwidth for "uploading" information from the brain, and I think it will do so with the help of AI.

I would be very interested to know whether self-reported variation in mental imagery significantly affects the ability to use such a system, and how trainable that is as a skill.

answer by Gunnar_Zarncke · 2020-08-31T19:00:08.601Z · LW(p) · GW(p)

I think something like Neuralink is needed to make AGI workable for humans; otherwise we are just bystanders. Tools, services, and assistants are not natural enough.

See my old post here: https://www.lesswrong.com/posts/TbFMq8XkJAYa3EELw/when-does-technological-enhancement-feel-natural-and [LW · GW]
