At the risk of sounding catty, I've just got to say that I desperately wish there were some kind of futures market robust enough for me to invest in the prospect of EY, or any group in which EY's philosophy has functional primacy, achieving AGI. The chance of this is, of course, zero. The entire approach is batty as hell, not because the idea of AGI is batty, but because of the notion that you can sit around and think really, really hard, solve the problem, and then implement it -
Here's another thing that's ridiculous: "I'm going to write the Great American Novel. So I'm going to pay quiet attention my whole life, think about what novel I would write and how I would write it, and then write it."
Except EY's AGI nonsense is actually far more nonsensical than that. In extremely rare cases, novel writing DOES occur under such circumstances. But the idea that it is only by some great force of self-restraint that EY and co. desist from writing code, that they hold back the snarling and lunging dogs of their wisdom lest they set in motion a force that would destroy creation -
well. You can see what I think of it.
Here's a bit of advice, which perhaps you are rational enough to process: the entire field of AI researchers is not ignoring your ideas because it is, collectively, too dim to have achieved the series of revelations you have enumerated here at such length. Nor because there's nothing in your thinking worth considering. And it's not because academia is somehow fundamentally incompatible with ideas as radical as yours - that last one is a particular load of bollocks. No, it's because your methodology is misguided to the point of silliness and vague to the point of uselessness.
Fortunately for you, the great thing about occupying a position that is never put to the test and never produces anything one can evaluate is that one is not susceptible to public flogging, and dissent is reduced to little voices in dark sleepless hours.
And to "crackpots", of course.