Comments

Comment by Ben_Goertzel on The Magnitude of His Own Folly · 2008-09-30T17:27:18.000Z · LW · GW

I think there is a well-understood, rather common phrase for the approach of "thinking about AGI issues and trying to understand them, because you don't feel you know enough to build an AGI yet."

This is quite simply "theoretical AI research" and it occupies a nontrivial percentage of the academic AI research community today.

Your (Eliezer's) motivations for pursuing theoretical rather than practical AGI research are a little different from usual -- but, the basic idea of trying to understand the issues theoretically, mathematically and conceptually before messing with code, is not terribly odd....

Personally I think both theoretical and practical AGI research are valuable, and I'm glad both are being pursued.

I'm a bit of a skeptic that big AGI breakthroughs are going to occur via theory alone, but, you never know ... history shows it is very hard to predict where a big discovery is going to come from.

And, hypothetically, let's suppose someone does come up with a big AGI breakthrough from a practical direction (like, say, oh, the OpenCogPrime team... ;-); then it will be very good that there exist individuals (like yourself) who have thought very deeply about the theoretical aspects of AGI, FAI and so forth ... you and other such individuals will be extremely well positioned to help guide thinking on the next practical steps after the breakthrough...

-- Ben G

Comment by Ben_Goertzel on Above-Average AI Scientists · 2008-09-30T01:51:00.000Z · LW · GW

Vassar wrote:

"
I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists, there might be all sorts of doctrines that could be called creationist.
"

I have no clear idea what you mean by "level" in the above...

IQ?

Demonstrated scientific or mathematical accomplishments?

Degree of agreement with your belief system? ;-)

-- Ben G

Comment by Ben_Goertzel on Above-Average AI Scientists · 2008-09-29T05:04:13.000Z · LW · GW

Eliezer said:

"
To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."
"

Strongly agree.

I'm not making any specific judgments about the particular Creationist you have in mind here (and I'm pretty sure I know who you mean)... but I see no reason to believe that Creationism renders an individual unable to solve the science and engineering problems involved in creating AGI. Understanding mind is one thing ... beliefs about cosmogony are another...

I note that there are many different belief systems lumped under the label of "Creationism" ... not all of them are stupid or anti-intellectual.... (Though, I do not accept any of them myself, being a lifelong atheist...)

And, there may be a statistical anticorrelation between Creationism and IQ ... but it's not so strong a relationship as to let you draw useful conclusions about individual cases in the face of more particular information about the people in question...

-- Ben G

Comment by Ben_Goertzel on Above-Average AI Scientists · 2008-09-28T23:25:26.000Z · LW · GW

Eliezer: One comment is that I don't particularly trust your capability to assess the insights or mental capabilities of people who think very differently from yourself. It may be that the people whose intelligence you rate most highly (those whom you describe as residing on "high levels", to quasi-borrow your terminology) are those who are extremely talented at the kind of thinking you personally most value. Yet, there may be many different sorts of intelligent human thinking, some of which you may not excel at, may understand relatively little of, and may not be particularly good at assessing in others. And, it's not yet clear whether the style of intelligence that you favor (or the slightly different one that I tend to intuitively, and by personality-bias, favor) is the one that is most likely to lead to powerful, beneficial AGI ... or whether some other style of intelligence may be more effective in this regard....

I note again that objective definitions of general intelligence don't really exist except in the limit of massive computational processing power (and even there, they're controversial). So, assessing intelligence or capability in practice is a subtle matter ... and I don't particularly trust your analysis of intelligence in terms of a hierarchy of levels. I guess human intelligence is messier, more heterarchical and multifaceted than that. Of course, you can meaningfully construct hierarchies of intelligence in various areas, such as "mathematical theorem proving" or "theorem proving in continuous-variable analysis and related branches of math" ... or, say, "biology experimental design" or "software design", etc. But, when dealing with something like AGI, which is poorly understood and may be amenable to a variety of different approaches, it's hard to say which of these domain-specific intelligences are going to be most critical to the effective solution of the AGI problem.

Maybe one of these scientists whom you dismiss as "mediocre level" according to the particular aspects of intelligence that you value most is actually "high level" according to other aspects of intelligence that you aren't able to recognize and evaluate so accurately ... and maybe some of these other aspects will turn out to be MORE valuable for the creation of AGI.

I'm not saying I have a strong feeling this is the case ... I'm just saying "maybe"....

Compared to you, I think I have a bit more humility about my capability to recognize what another person's capabilities really are. Yes, I can see how well they do on a test, or how clever they are in a conversation ... or what papers they publish. But how do I know what's in their mind that is not revealed to me explicitly, due to the strictures of their personality or culture? How do I know what is in their statements or works that I'm not well-suited to recognize, due to my own particular biases and limitations?

When I have to choose which scientist or engineer to hire or collaborate with, then I just make my best judgments ... and if I miss out on someone great due to my own limitations of vision, so be it ... but I personally tend to be hesitant to treat my own gut-level assessment of another's abilities, performance on narrowly-specified test instruments, or success in social rituals like paper-publishing or university credentials, as fundamentally indicative of someone's general intelligence or intellectual capability...

-- Ben G

Comment by Ben_Goertzel on Competent Elites · 2008-09-28T23:06:00.000Z · LW · GW

First, a comment on a small, specific point you made: I have met a large number of VCs during the last 11 years, and in terms of intelligence and insight I really found them to be all over the map. Some brilliant, wide-ranging thinkers ... some narrow-minded morons. Hard to generalize.

Regarding happiness, if you're not already familiar with it, you might want to look at the work on flow and optimal experience:

http://www.amazon.com/Flow-Psychology-Experience-Mihaly-Csikszentmihalyi/dp/0060920432

which is likely relevant to why many successful CEOs would habitually feel happy...

Also, there have been many psychological studies of the impact of wealth on happiness, and one result I remember is that, once a basic level of wealth that avoids profound physical discomfort is achieved, the main impact of wealth on happiness is to DECREASE UNHAPPINESS but not to INCREASE HAPPINESS. (Yes, I know this wording is imprecise ... but it is precise in the relevant research papers, which I don't have at my fingertips right now...)

That is, having a lot of $$ decreases the amount of petty annoyance in your life. But it doesn't provide higher highs, or significantly increase your overall life-satisfaction. But, having so little $$ that you're hungry, or cold, etc., obviously does decrease your overall life-satisfaction.

-- Ben G

Comment by Ben_Goertzel on The Level Above Mine · 2008-09-28T21:55:00.000Z · LW · GW

Someone else wrote:

"
This is a youthful blog with youthful worries. From the vantage point of age worrying about intelligence seems like a waste of time and unanswerable to boot.
"

and I find this observation insightful, and even a bit understated.

Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.

Obviously, one reason these sorts of questions about comparative general intelligence are unanswerable is that "general intelligence" is not really a rigorously defined concept -- as you well know! And the rigorous definitions that have been proposed (e.g. in Legg and Hutter's writing, or my earlier writings, etc.) are basically nonmeasurable in practice -- they can only be crudely approximated, and the margin of error of these approximations is almost surely large enough to blur whatever distinctions exist between various highly clever humans.
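
To make this concrete, here is the Legg-Hutter universal intelligence measure, paraphrasing their notation from memory (so take the details as approximate, not authoritative):

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here E is the class of computable, reward-summable environments, K(\mu) is the Kolmogorov complexity of the environment \mu, and V_\mu^\pi is the expected cumulative reward that agent \pi earns in \mu. Since K is incomputable, \Upsilon cannot be computed exactly, only crudely approximated -- which is precisely the nonmeasurability issue I mean.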

I have no doubt that you're extremely smart, and especially talented in some particular areas (such as mathematics and writing, to give a nonexhaustive list) ... and that you're capable of accomplishing great things intellectually.

As an aside, the notion that Conway, von Neumann, or any other historical math figure is "more intelligent than Eliezer along all dimensions" seems silly to me ... I'm sure they weren't, under any reasonable definition of "dimensions" in this context.

To take a well-worn example: from my study of the historical record, it seems clear that Einstein and Godel were both less transparently, obviously clever than von Neumann. My guess is that von Neumann would have scored higher on IQ tests than either of those others, because he was incredibly quick-minded and fond of puzzle-type problems. However, obviously there were relevant dimensions along which both Einstein and Godel were "smarter" than von Neumann; and they pursued research paths in which these dimensions were highly relevant.

"General intelligence" has more and more meaning as one deals with more and more powerful computational systems. For humans it's meaningful but not amazingly, dramatically meaningful ... what's predictive of human achievement is almost surely a complex mixture of human general intelligence with human specialized intelligence in achievement-relevant domains.

Pragmatically separating general from specialized intelligence in oneself or other humans is a hard problem, and not really a terribly useful thing to try to do.

Achieving great things seems always to be a mixture of general intelligence, specialized intelligence, wise choice of the right problems to work on, and personality properties like persistence ...

-- Ben G

Comment by Ben_Goertzel on Dreams of AI Design · 2008-09-24T13:01:00.000Z · LW · GW

As my name has come up in this thread I thought I'd briefly chime in. I do believe it's reasonably likely that a human-level AGI could be created in a period of, let's say, 3-10 years, based on the OpenCogPrime design (see http://opencog.org/wiki/OpenCog_Prime). I don't claim any kind of certitude about this, it's just my best judgment at the moment.

So far as I can recall, all projections I have ever made about the potential of my own work to lead to human-level (or greater) AGI have been couched in terms of what could be achieved if an adequate level of funding were provided for the work. A prior project of mine, Webmind, was well-funded for a brief period, but my Novamente project (http://novamente.net) never has been, nor is OpenCogPrime ... yet.

Whether others involved in OpenCogPrime work agree closely with my predictive estimates is really beside the point to me: some agree more closely than others. We are involved in doing technical research and engineering work according to a well-defined plan (aimed explicitly at AGI at the human level and beyond), and the important thing is knowing what needs to be done, not knowing exactly how long it will take. (If I found out my time estimate were off by a factor of 5, I'd still consider the work roughly equally worthwhile. If I found out it were off by a factor of 10, that would give me pause, and I would seriously consider devoting my efforts to developing some sort of brain-scanning technology, or quantum computing hardware, or some totally different sort of AGI design.)

I do not have a mathematical proof that the OpenCogPrime design will work for human-level AGI at all, nor a rigorous calculation to support my time-estimate. I have discussed the relevant issues with many smart, knowledgeable people, but ultimately, as with any cutting-edge research project, there is a lot of uncertainty here.

I really do not think that my subjective estimate about the viability of the OpenCogPrime AGI design is based on any kind of simple cognitive error. It could be a mistake, but it's not a naive or stupid mistake!

In order to effectively verify or refute my hypothesis that the OpenCogPrime design (or the Novamente Cognition Engine design: they're similar but not identical) is adequate for human-level AGI, with a reasonable level of certitude, Manhattan Project-level funding would not be required. US $10M per year for a decade would be ample; and if things were done very carefully, without too much bad luck, we might be able to move the project full-speed-ahead on as little as US $1.5M per year, and achieve amazing results within as little as 3 years.

Hell, we might be able to get to the end goal without ANY funding, based on the volunteer efforts of open-source AI developers, though this seems a particularly difficult path, and I think the best course will be to complement these much-valued volunteer efforts with funded effort.

Anyway, a number of us are working actively on the OpenCogPrime project now (some funded by SIAI, some by Novamente LLC, some as volunteers) even without an overall "adequate" level of funding, and we're making real progress, though not as much as we'd like.

Regarding my role with SIAI: as Eliezer stated in this thread, he and I have not been working closely together so far. I was invited into SIAI to, roughly speaking, develop a separate AGI research programme which complements Eliezer's but is still copacetic with SIAI's overall mission. So far the main thing I have done in this regard is to develop the open-source OpenCog (http://opencog.org) AGI software project, of which OpenCogPrime is a subset.