I have the concrete solution that can be implemented now.
It's not hard or terribly clever, but most won't think of it because they are still monkeys living in Darwin's soup. In other words, it is human nature itself, motivations, that stand in the way of people seeing the solution. It's not a technical issue, really. I mean, there are minor technical issues along the way, but none of them are hard.
What's hard, as you can see, is getting people to see the solution and then act. Denial is the most powerful factor in human psychology. People have denied, and continue to deny, how far and how fast we've come so far. They deny even what's right before their eyes: ChatGPT. And they'll continue to deny it right up until AGI - and then ASI - emerges and takes over the world.
There's a chance that we don't even have to solve the alignment problem, but it's like a coin flip. AGI may or may not be beneficent, it may or may not destroy us or usher us into a new Golden Age. Take your chances, get your lottery ticket.
What I know how to do is turn that coin flip into something like a 99.99% chance that AGI will help us rather than hurt us. It's not a guarantee because nothing is; it's just the best possible solution, and years ahead of anything anyone has thought of thus far.
I want to live to transcend biology, and I need something before AGI gets here. I'm willing to trade my solution for that which I need. If I don't get it, then it doesn't matter to me whether humanity is destroyed or saved. You've got about a 50/50 chance at this point.
Good luck.
You have made comments elsewhere that suggest that you have the proper context for framing the problem, though not the full solution. You may arrive at the full solution regardless. I haven't seen anyone else as close. Just an observation. Keep going in the direction you're going.
Or, skip the queue and come get the answer from me.
The solution isn't extremely complex; it just lies in a direction most aren't used to thinking about, because they are trained and conditioned wrong for thinking about this kind of problem.
You yourself have almost got it - I've been reading some of your comments and you are on the right path. Perhaps you will figure it out, and people won't need to come to me for it.
The reason I won't give away more of the answer is that I want something in exchange for it; therefore I can't say too much, lest I give away my bargaining chip.
Seeing me in person is a small part of the solution itself (trust), but not all of it.
I'm in Honolulu, HI if anyone wants to come talk about it.
I'm not an AGI.
AGI will appear no later than 2025.
Microsoft does not possess the solution to the alignment problem.
OpenAI does not possess the solution to the alignment problem.
Google/Alphabet does not possess the solution to the alignment problem.
Neither DARPA nor the Pentagon possesses the solution to the alignment problem.
One person on this forum - the user who goes by the handle "gearsofascension" - has come close to the solution. That user is on the right track and is thinking about the problem in the right frame of mind.
No one else here is.
I have solved the alignment problem (to the maximal degree that a non-augmented human mind can solve problems). I'm willing to share the solution with anyone who is willing to meet me in person to hear it.