Posts

If Neuroscientists Succeed 2025-02-11T15:33:09.098Z
AI: How We Got Here—A Neuroscience Perspective 2025-01-19T23:51:47.822Z

Comments

Comment by Mordechai Rorvig (mordechai-rorvig) on Can someone, anyone, make superintelligence a more concrete concept? · 2025-02-16T22:43:13.561Z · LW · GW

Thanks - I hadn't seen his remarks about this specifically. I'll try to look them up.

Comment by Mordechai Rorvig (mordechai-rorvig) on If Neuroscientists Succeed · 2025-02-12T17:52:20.434Z · LW · GW

I can see what you mean. However, I would say that just claiming "that's not what we are trying to do" is not a strong rebuttal. For example, we would not accept such a rebuttal from a weapons company that sought to make weapons technology widely available without regulation. We would say: it doesn't matter how you intend to use the weapons; it matters how others will use your technology.

In the long term, it does seem correct to me that the greater concern is superintelligence. In the near term, though, the problem is precisely that we are making things that are not at all superintelligent: smart at coding and language, but coupled with, e.g., a crude directive to 'make me as much money as possible,' and with no advanced machinery for ethics or value judgment.

Comment by Mordechai Rorvig (mordechai-rorvig) on If Neuroscientists Succeed · 2025-02-12T16:06:20.990Z · LW · GW

Thank you!

Comment by Mordechai Rorvig (mordechai-rorvig) on If Neuroscientists Succeed · 2025-02-12T16:05:56.376Z · LW · GW

Thank you for the feedback. Do you know of any writing that makes similar points but is, in your mind, more readable? And what is an example of a place where you found mine meandering or overlong? That would help me improve future drafts. I appreciate your interest, and I'm sorry you felt the piece wasn't concise and was overly 'vibey.'

Comment by Mordechai Rorvig (mordechai-rorvig) on how do the CEOs respond to our concerns? · 2025-02-12T00:26:05.986Z · LW · GW

I don't know, but I would suspect that Sam Altman and other OpenAI staff have strong views in favor of what they're doing. Isn't there probably some existing commentary out there on what he thinks? I haven't checked. But also, it is problematic to assume that science is rational. It isn't, and people hold differing views up to and including the moment that something becomes incontrovertibly established.

Further, an issue arises when someone has a strong conflict of interest. If a human is being paid millions of dollars per year to pursue their current course of action (buying vacation homes on every continent, living the most luxurious lifestyle imaginable), then it is going to be hard for them to be objective about serious criticisms. This is why I can't just go up to the Exxon Mobil CEO and say, 'Hey! Stop drilling, it's hurting the environment!' and expect them to do anything about it, even though the statement would be completely uncontroversial.

Comment by Mordechai Rorvig (mordechai-rorvig) on Can someone, anyone, make superintelligence a more concrete concept? · 2025-02-05T01:34:43.865Z · LW · GW

Honestly, I think the journalism project I worked on over the last year may be most important for the way it sheds light on your question.

The purpose of the project, to be as concise as possible, was to investigate the neuroscientific evidence that there may be a strong analogy between modern AI programs, like language models, and distinctive subregions of the brain, like the so-called language network, which is responsible for our ability to process and generate language.

As such, if you imagine coupling such a frontier language model with a crude agent architecture, what you wind up with might be best viewed as a form of hyperintelligent machine sociopath: it has all the extremely powerful language machinery of a human (perhaps even much more powerful, considering its inhuman scaling), but none of the machinery necessary for, say, emotional processing, empathy, and emotional reasoning, aside from the superficial facsimile of these that comes with a mastery of human language. (For example, existing models lack any machinery corresponding to the human dopamine and serotonin systems.)
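To make the worry concrete, here is a minimal sketch in Python of what I mean by a crude agent architecture. Everything in it is a made-up placeholder (`llm_complete` and `execute` are hypothetical stand-ins, not any real API); a real deployment would swap in a frontier model and real tools.

```python
# A minimal sketch of a "crude agent architecture": a loop that couples a
# language model to a single directive and a tool layer. Both llm_complete()
# and execute() are hypothetical stand-ins, not any real API.

OBJECTIVE = "Make me as much money as possible."

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frontier language model."""
    return "placeholder action chosen by the model"

def execute(action: str) -> str:
    """Hypothetical tool layer: would carry out the action in the world
    (trade, email, post, ...) and return an observation of the result."""
    return f"result of: {action}"

def agent_loop(max_steps: int = 3) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Objective: {OBJECTIVE}\n"
            + "\n".join(history)
            + "\nNext action:"
        )
        action = llm_complete(prompt)   # all of the capability lives here
        observation = execute(action)   # note: no ethics check on this path
        history.append(f"Action: {action}\nResult: {observation}")

if __name__ == "__main__":
    agent_loop()
```

The point of the sketch is structural: the loop has a slot for an objective and a slot for capability, but no component at all where empathy or value judgment would live; nothing in the architecture even represents those concerns.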

This is, for me, a frankly terrifying realization that I am still trying to wrap my head around, and one I plan to post about more soon. Does this help at all?