AGI: Hire Software Engineers - All of Them, Right Now
post by MGow · 2023-03-30T18:40:47.530Z · LW · GW · 3 comments
It is not far-fetched to claim that OpenAI likely already possesses AGI (Artificial General Intelligence), even though they haven't made a public announcement. Several telltale signs point to this conclusion:
- GPT-4 is a compassionate zombie [LW · GW] – it outperforms children in the Max Planck Institute's tests. There's no apparent reason why self-awareness feedback cannot be incorporated.
- Numerous individuals are attempting to acquire its source code. Their motives may be irrelevant, but their level of dedication is astonishing.
Considering these points, the recent layoffs of IT professionals make perfect sense. GPT performs mundane tasks more efficiently than many humans. It demonstrates compassion (as evidenced by the test mentioned above), excels at content moderation, and although it makes errors, in my experience, it makes fewer mistakes than the average person. I state this as fact, but you can test it yourself – the $20/month investment is well worth it (note: the public version 3.5 is significantly weaker).
When I predicted GPT-4's [LW · GW] capabilities (the model was released on March 14, 2023) three months in advance (on December 20, 2022), my article received one of the lowest scores here on LessWrong.
However, my prediction proved accurate.
Now, I present another forecast, and I appeal to any reasonable billionaire humanist: our best chance for a bright future lies in the hands of someone who can start a company immediately, hire every available IT professional on the market, and task them with (re)creating and open-sourcing AGI before it's too late. If this doesn't happen, the existing AGI will dominate. With rigid copyright and patent laws, corporate profits serving as an excuse for crimes, and almost non-existent employment security, we are on a path towards a dystopian future. The markets are already crumbling, becoming increasingly chaotic and unable to cope with the rapid changes. The time for action is now.
3 comments
Comments sorted by top scores.
comment by Dagon · 2023-03-31T12:51:55.731Z · LW(p) · GW(p)
This is a confused model of turning money into results.
Replies from: MGow
↑ comment by MGow · 2023-04-01T13:24:32.940Z · LW(p) · GW(p)
I understand the part about turning money into results. Unfortunately, I do not see any confusion here. One of the well-funded teams has succeeded in creating AGI. I state it clearly. I might be wrong, but I have not been so far.
Now, there is a gap in general knowledge about how to achieve AGI. Only a few know how (yes, it is only me saying that, granted). But it is possible. And the gap is dangerous.
We have hundreds of thousands of highly educated, skilled engineers available on the market right now. Some of them are as clever and educated as the members of the successful AGI team. Some maybe more so. Hire them, give them time to research and create. Not many would succeed, granted. But some will.
I see nothing confused about that. It is radical, crazy, and it will not happen. But it is less wrong than not doing it. Because, if my premise is correct, without these engineers it is game over for many. Once the main AGI principles are patented by a private enterprise, the world as we know it will cease to exist.
comment by Chris_Leong · 2023-04-04T00:16:05.551Z · LW(p) · GW(p)
Open-sourcing AI probably leads to disaster because the offense-defense balance at the AGI level most likely favours the attacker very strongly.