Comments
Thanks for the links. I think this will greatly enhance my understanding of the matter.
I hope we will see multi-modal models. To me, it will be very important for the future of AI/AGI development to see whether we get significant transfer/synergy learning from one modality to another.
If these synergy effects do occur (my prediction is that they will not occur to a significant degree), this would push my AI predictions towards shorter time horizons for useful AI and AGI.
If we don't see such transfer learning, it would significantly lengthen my AGI time horizon, because it would indicate to me that we will have to come up with additional, new techniques to push our AI models towards generality.
I really like the idea of such an argument mapping system. In the early days of Wikipedia I thought a natural extension of the project would be a debate platform attached to it. So it's interesting for me to see that a platform like Kialo exists - I didn't know about it before your post, thanks.
I share your opinion that many arguments are very complex, and that getting a good impression of the state of a debate is a complicated task, because it's hard to find good, exhaustive, and well-structured debates. A platform that systematizes this important task is, in my opinion, very valuable for people and society.
My intuition is that the most promising path to AGI lies with the Gary Marcus/Ben Goertzel neuro-symbolic camp.
As the years passed and so much money went to the pure-learning, neural-networks-only, scaling-hypothesis branch of AI - and, in my opinion, far too little went into improving foundational methods - I grew more sceptical of rapid progress towards AGI.
About four years ago there was a DARPA funding program for the short-term (2 yrs?) discovery of "next generation" AI methods with a high budget (maybe 1.5 billion USD). I don't know of anything with substance that came from it.
Also, things like Hinton's capsule networks seemed interesting to me, but again not much came of them.
In my opinion, the same goes for research at the big players like DeepMind and OpenAI: there was a time - maybe around 2018 - when every other week or so some apparent breakthrough was published. What were the numerous, big, original breakthroughs of the past two years? They did show some remarkable scaling/engineering successes, but that looks rather underwhelming to me - especially considering how much they increased their staff and funding.
All in all, it seems to me that over the past 5-7 years there has been some very interesting basic exploration of AI techniques and applications, but little or no more ambitious effort (Gato might be considered a prominent exception) to bring these techniques together into one architecture for (proto) AGI. So it's still mostly piecemeal individual demos.
What would, to me, be indicative of goal-directed progress towards AGI would be an architecture for a proto-AGI to which multiple parties could contribute and on which they could iteratively improve (like SingularityNET and OpenCog). But the big players have, in my opinion, not come forth with anything like that so far. And the interesting small-scale initiatives like OpenCog operate on a shoestring budget.
To end on a positive note: the timeline pushbacks and outright cancellations of self-driving projects were surely disappointing. The only good that might come from them (it's more wishful thinking than expectation) is that the self-driving industry could develop general-purpose methods that would benefit many AI projects (see Tesla Optimus). And because this industry is well funded, it could progress rapidly.