Ben Goertzel on Charity
post by XiXiDu · 2011-03-09T16:37:41.414Z · LW · GW · Legacy · 75 comments
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI and the effectiveness with which those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
What would you do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?
Ben Goertzel replied:
Unsurprisingly, my answer is that I would donate the $100,000 to the OpenCog project which I co-founded and with which I'm currently heavily involved. This doesn't mean that I think OpenCog should get 100% of everybody's funding; but given my own state of knowledge, I'm very clearly aware that OpenCog could make great use of $100K for research working toward beneficial AGI and a positive Singularity. If I had $100M rather than $100K to give away, I would have to do more research into which other charities were most deserving, rather than giving it all to OpenCog!
What can one learn from this?
- The SIAI is not the only option for working towards a positive Singularity
- The SIAI should try to cooperate more closely with other AGI projects to improve its chances of having a positive impact.
I'm planning to contact various other experts who are aware of risks from AI and ask them the same question.
75 comments
Comments sorted by top scores.
comment by Alexandros · 2011-03-09T16:44:20.044Z · LW(p) · GW(p)
I don't see how either of your Lessons Learned follows from the Goertzel quotes.
↑ comment by XiXiDu · 2011-03-09T18:04:31.983Z · LW(p) · GW(p)
I don't see how either of your Lessons Learned follows from the Goertzel quotes.
- He said that the OpenCog project is a way to work towards a positive Singularity.
- There are other projects that work towards AGI and cooperation allows one to estimate their progress and influence them to be careful.
- There are other people who take an experimental approach towards friendliness and cooperating with them allows one to learn from their success.
I conclude that you and the 4 people who upvoted you didn't even read what Ben Goertzel wrote.
Downvoted.
↑ comment by Alexandros · 2011-03-09T18:22:07.399Z · LW(p) · GW(p)
If you take a project founder's evaluation of their own project as objective truth, I have a project for you to fund.
I disagree with your other points too, but will not respond as they have nothing to do with my assertion that the lessons learned do not follow from the Goertzel quotes.
As for the up/downvotes, downvote all you like, but if I were you I would check my priors regarding the thought the average LessWronger puts into their comments and voting behaviour.
↑ comment by XiXiDu · 2011-03-09T18:43:27.847Z · LW(p) · GW(p)
If you take a project founder's evaluation of their own project as objective truth, I have a project for you to fund.
I don't, but it is evidence that people disagree with the SIAI and think that there are more effective ways towards a positive Singularity. Don't forget that he once worked for the SIAI. If Michael Vassar were to leave the SIAI and start his own project, wouldn't that be evidence about the SIAI?
As for the up/downvotes, downvote all you like, but if I were you I would check my priors regarding the thought the average LessWronger puts into their comments and voting behaviour.
I had people going through my comments downvoting over 40 in a matter of minutes. I have seen enough irrational and motivated voting behavior to not trust it anymore, especially when it comes to anything about the community itself.
↑ comment by Wei Dai (Wei_Dai) · 2011-03-09T20:44:00.565Z · LW(p) · GW(p)
I don't, but it is evidence that people disagree with the SIAI and think that there are more effective ways towards a positive Singularity.
Did you think that many LWers weren't aware of this fact? I would have thought that everyone already knew...
Don't forget that he once worked for the SIAI. If Michael Vassar was to leave the SIAI and start his own project, wouldn't that be evidence about the SIAI?
I'm curious if you've seen this discussion, which occurred while Ben was still Research Director of SIAI.
ETA: I see that you commented in that thread several months after the initial discussion, so you must have read it. I suppose the problem from your perspective is that you can't really distinguish between Eliezer and Ben. They each think their own approach to a positive Singularity is the best one, and think the other one is incompetent/harmless. You don't know enough to judge the arguments on the object level. LW mostly favors Eliezer, but that might just be groupthink. I'm not really sure how to solve this problem, actually... anyone else have ideas?
↑ comment by cousin_it · 2011-03-09T21:29:50.356Z · LW(p) · GW(p)
Here's an argument that may influence XiXiDu: people like Scott Aaronson and John Baez find Eliezer's ideas worth discussing, while Ben doesn't seem to have any ideas to discuss.
↑ comment by XiXiDu · 2011-03-10T09:52:47.671Z · LW(p) · GW(p)
...while Ben doesn't seem to have any ideas to discuss.
That an experimental approach is the way to go. I believe we don't know enough about the nature of AGI to follow a solely theoretical approach right now. That is one of the most obvious shortcomings of the SIAI, in my opinion.
↑ comment by ferrouswheel · 2011-03-10T06:39:25.381Z · LW(p) · GW(p)
Or perhaps it could be that Ben is too busy actually developing and researching AI to spend time discussing them ad nauseam? I stopped following many mailing lists and communities like this because I don't actually have time to argue in circles with people.
(But make an exception when people start making up untruths about OpenCog)
↑ comment by cousin_it · 2011-03-10T08:22:27.228Z · LW(p) · GW(p)
Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI. "Busy developing and researching" doesn't look very promising from the outside, considering how many other groups present themselves the same way.
↑ comment by ferrouswheel · 2011-03-10T08:33:21.909Z · LW(p) · GW(p)
Ben's publishing several books (well, he's already published several, but he's publishing the already-written "Building Better Minds" in early 2012 and a pop-sci version shortly thereafter, both of which are more current regarding OpenCog). I'll be writing a "practical" guide to OpenCog once we reach our 1.0 release at the end of 2012.
Ben actually does quite a lot of writing, theorizing and conference work, whereas I and a number of others are more concerned with the software development side of things.
We also have a wiki: http://wiki.opencog.org
↑ comment by cousin_it · 2011-03-10T08:37:54.876Z · LW(p) · GW(p)
What new insights are there?
↑ comment by ferrouswheel · 2011-03-10T09:57:13.443Z · LW(p) · GW(p)
Well, "new" is relative... so without any familiarity with your knowledge of OpenCog I can't comment.
↑ comment by cousin_it · 2011-03-10T10:04:29.218Z · LW(p) · GW(p)
New insights relative to the current state of academia. Many of us here are up-to-date with the relevant areas (or trying to be). I'm not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.
↑ comment by Vladimir_Nesov · 2011-03-10T12:48:53.122Z · LW(p) · GW(p)
Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI.
Evolution managed to do that without any capacity for having insights. It's not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).
Also, just "success" is not something one would wish to support (success at exterminating humanity, say, is distinct from success in exterminating polio), so the query about which institution is more likely to succeed is seriously incomplete.
↑ comment by saturn · 2011-03-10T09:58:01.638Z · LW(p) · GW(p)
Unfortunately, there are only a few types of situations where it's possible to operate successfully without an object level understanding—situations where you have a trustworthy authority to rely on, where you can apply trial and error, where you can use evolved instincts, or where the environment has already been made safe and predictable by someone who did have an object level understanding.
None of those would apply except relying on a trustworthy authority, but since no one has yet been able to demonstrate their ability to bring about a positive Singularity, XiXiDu is again stuck with difficult object level questions in deciding what basis he can use to decide who to trust.
↑ comment by XiXiDu · 2011-03-10T09:50:01.935Z · LW(p) · GW(p)
Did you think that many LWers weren't aware of this fact? I would have thought that everyone already knew...
Yes.
I see that you commented in that thread several months after the initial discussion.
I commented on a specific new comment there and didn't read the thread. Do you think newbies are able to read thousands of comments?
You don't know enough to judge the arguments on the object level. LW mostly favors Eliezer, but that might just be groupthink.
Indeed, this possibility isn't discussed enough.
↑ comment by TheOtherDave · 2011-03-10T13:08:57.539Z · LW(p) · GW(p)
Indeed, this possibility isn't discussed enough.
I'm sympathetic to your position here, but precisely what kind of discussion about this possibility do you want more of?
Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible, and that instead some kind of theoretical/provable approach is necessary before you even get started.
The specific approach he has adopted is one built on a particular kind of decision theory, on the pervasive use of Bayes' Theorem, and on the presumption that what humans value is so complicated that the best way to express it is by pointing at a bunch of humans and saying "There: that!"
SIAI is primarily populated by people who think Eliezer's approach is a viable one.
Less Wrong is primarily populated by people who either think Eliezer's strategy is compelling, or who don't have building a superhuman intelligence as their primary focus in the first place.
People who have that as their primary focus and think that his strategy is a poor one go put their energy somewhere else that operates on a different strategy.
If he's wrong, then he'll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.
This seems like exactly the way it's supposed to work.
Sure, discussion is an important part of that. But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling, and hundreds of readers have written tens of thousands of words in response, arguing pro and con and presenting alternatives and clarifications and pointing out weaknesses.
I accept that you're (as you say) unable to read thousands of comments, so you can't know that, but in the nine months or so since I've found this site I have read thousands of comments, so I do know it. (Obviously, you don't have to believe me. But you shouldn't expect to convince me that it's false, either, or anyone else who has read the same material.)
I'm not saying it's a solved problem... it's not. It is entirely justifiable to read all of that and simply not be convinced. Many people are in that position.
I'm saying it's unlikely that we will make further progress along those lines by having more of the same kind of discussion. To make further progress in that conversation, you don't need more discussion, you need a different kind of discussion.
In the meantime: maybe this is a groupthink-ridden cult. If so, it has a remarkable willingness to tolerate folks like me, who are mostly indifferent to its primary tenets. And the conversation is good.
There's a lot of us around. Maybe we're the equivalent of agnostics who go to church services because we're bored on Sunday mornings; I dunno. But if so, I'm actually OK with that.
↑ comment by XiXiDu · 2011-03-10T20:44:33.050Z · LW(p) · GW(p)
I feel that people here are way too emotional. If you tell them, they'll link you up to a sequence post on why being emotional can be a good thing. I feel that people here are not skeptical enough. If you tell them, they'll link you up to a sequence post on why being skeptical can be a bad thing. I feel that people here take some possibilities too seriously. If you tell them, they'll link you up... and so on. I could as well be talking to Yudkowsky only. And whenever some expert or otherwise smart guy disagrees, he is either accused of not having read the sequences or dismissed as being below their standards.
Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible...
The whole 'too dangerous' argument is perfect for everything from not having to prove any coding or engineering skills to dismissing openness and any kind of transparency up to things I am not even allowed to talk about here.
If he's wrong, then he'll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.
Here we get to the problem. I have no good arguments against all of what I have hinted at above except that I have a strong gut feeling that something is wrong. So I'm trying to poke holes into it, I try to crumble the facade. Why? Well, they are causing me distress by telling me all those things about how possible galactic civilizations depend on my and your money. They are creating ethical dilemmas that make me feel committed to do something even though I'd really want to do something else. But before I do that I'll first have to see if it holds water.
But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling...
Yup, I haven't read most of the sequences, but I did a lot of spot tests and read what people linked me up to. I have yet to come across something novel. And I feel all that doesn't really matter anyway. The basic argument is that high risks can outweigh low probabilities, correct? That's basically the whole fortification for why I am supposed to bother, everything else just being a side note. And that is also where I feel (yes, gut feeling, no excuses here) something is wrong. I can't judge it yet, maybe in 10 years when I've learnt enough math, especially probability. But currently it just sounds wrong. If I thought that there was a low probability that running the LHC was going to open an invasion door for a fleet of aliens interested in torturing mammals, then according to the aforementioned line of reasoning I could justify murdering a bunch of LHC scientists to prevent them from running the LHC. Everything else would be scope-insensitivity! Besides the obvious problems with that, I have a strong feeling that that line of reasoning is somehow bogus. I also don't know jack shit about high-energy physics. And I feel Yudkowsky doesn't know jack shit about intelligence (not that anyone else does know more about it). In other words, I feel we need to do more experiments first to understand what 'intelligence' is before we ask people for their money to save the universe from paperclip maximizers.
See, I'm just someone who got dragged into something he thinks is bogus and doesn't want to be a part of, but who nonetheless feels that he can't ignore it either. So I'm just hoping it goes away if I try hard enough. How wrong and biased, huh? But I'm neither able to ignore it nor get myself to do something about it.
↑ comment by TheOtherDave · 2011-03-10T21:14:13.171Z · LW(p) · GW(p)
(nods) I see.
So, if you were simply skeptical about Yudkowsky/SIAI, you could dismiss them and walk away. But since you're emotionally involved and feel like you have to make it all go away in order to feel better, that's not an option for you.
The problem is, what you're doing isn't going to work for you either. You're just setting yourself up for a rather pointless and bitter conflict.
Surely this isn't a unique condition? I mean, there are plenty of groups out there who will tell you that there are various bad things that might happen if you don't read their book, donate to their organization, etc., etc., and you don't feel the emotional need to make them go away. You simply ignore them, or at least most of them.
How do you do that? Perhaps you can apply the same techniques here.
↑ comment by XiXiDu · 2011-03-11T09:59:43.041Z · LW(p) · GW(p)
How do you do that? Perhaps you can apply the same techniques here.
I managed to do that with the Jehovah's Witnesses. I grew up being told that I have to tell people about the Jehovah's Witnesses so that they will be saved. It is my responsibility. But this here is on a much more sophisticated level. It includes all the elements of organized religion mixed up with science and math. Incidentally, one of the first posts I read was Why Our Kind Can't Cooperate:
The obvious wrong way to finish this thought is to say, "Let's do what the Raelians do! Let's add some nonsense to this meme!" [...]
When reading that I thought, "Wow, they are openly discussing what they are doing while dismissing it at the same time." That post basically tells the story of how it all started.
So it's probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.
'Probably not' my ass! :-)
The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves. So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented - even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.
Not that you could encounter that here ;-)
↑ comment by TheOtherDave · 2011-03-11T15:23:31.713Z · LW(p) · GW(p)
But this here is on a much more sophisticated level.
It is astonishing how effective it can be to systematize a skill that I learn on a simple problem, and then apply the systematized skill to more sophisticated problems.
So, OK: your goal is to find a way to disconnect emotionally from Less Wrong and from SIAI, and you already have the experience of disconnecting emotionally from the Jehovah's Witnesses. How did you disconnect from them? Was there a particular event that transitioned you, or was it more of a gradual thing? Did it have to do with how they behaved, or with philosophical/intellectual opposition, or discovering a new social context, or something else...?
That sort of thing.
As for Why Our Kind Can't Cooperate, etc. ... (shrug). When I disagree with stuff or have doubts, I say so. Feel free to read through my first few months of comments here, if you want, and you'll see plenty of that. And I see lots of other people doing the same.
I just don't expect anyone to find what I say -- whether in agreement or disagreement -- more than peripherally interesting. It really isn't about me.
↑ comment by Nornagest · 2011-03-10T21:34:18.889Z · LW(p) · GW(p)
Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
And I'd hazard a guess that the SIAI representatives here know that. A lot of people benefit from knowing how to think and act more effectively, unqualified, but a site about improving reasoning skills that's also an appendage to the SIAI party line limits its own effectiveness, and therefore its usefulness as a way of sharpening reasoning about AI (and, more cynically, as a source of smart and rational recruits), by being exclusionary. We're doing a fair-to-middling job in that respect; we could definitely be doing a better one, if the above is a fair description of the intended topic according to the people who actually call the shots around here. That's fine, and it does deserve further discussion.
But the topic of rationality isn't at all well served by flogging criticisms of the SIAI viewpoint that have nothing to do with rationality, especially when they're brought up out of the context of an existing SIAI discussion. Doing so might diminish perceived or actual groupthink re: galactic civilizations and your money, but it still lowers the signal-to-noise ratio, for the simple reason that the appealing qualities of this site are utterly indifferent to the pros and cons of dedicating your money to the Friendly AI cause except insofar as it serves as a case study in rational charity. Granted, there are signaling effects that might counter or overwhelm its usefulness as a case study, but the impression I get from talking to outsiders is that those are far from the most obvious or destructive signaling problems that the community exhibits.
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
↑ comment by Wei Dai (Wei_Dai) · 2011-03-10T23:07:39.621Z · LW(p) · GW(p)
Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
Disagree on the "fewer" part. I'm not sure about SIAI, but I think at least my personal interests would not be better served by having fewer transhumanist posts. It might be a good idea to move such posts into a subforum though. (I think supporting such subforums was discussed in the past, but I don't remember if it hasn't been done due to lack of resources, or if there's some downside to the idea.)
↑ comment by Nornagest · 2011-03-10T23:14:43.717Z · LW(p) · GW(p)
Fair enough. It ultimately comes down to whether or not tickling transhumanists' brains wins us more than we'd gain from appearing however more approachable to non-transhumanist rationalists, and there's enough unquantified values in that equation to leave room for disagreement. In a world where a magazine as poppy and mainstream as TIME likes to publish articles on the Singularity, I could easily be wrong.
I stand by my statements when it comes to SIAI-specific values, though.
↑ comment by wedrifid · 2011-03-11T00:14:17.429Z · LW(p) · GW(p)
I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause
One of these things is not like the others. One of these things is not about the topic which historically could not be named. One of them is just a building block that can be sometimes useful when discussing reasoning that involves decision making.
↑ comment by Nornagest · 2011-03-11T00:28:35.520Z · LW(p) · GW(p)
My objection to that one is slightly different, yes. But I think it does derive from the same considerations of vast utility/disutility that drive the historically forbidden topic, and is subject to some of the same pitfalls (as well as some others less relevant here).
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
↑ comment by wedrifid · 2011-03-11T00:51:50.541Z · LW(p) · GW(p)
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
Hmm...
- Roko's Basilisk
- Boxed AI trying to extort you
- The "People Are Jerks" failure mode of CEV
I can't think of any other possible examples off the top of my head. Were these the ones you were thinking of?
↑ comment by komponisto · 2011-03-10T21:56:44.475Z · LW(p) · GW(p)
Upvoted for complete agreement, particularly:
Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
(...)
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
↑ comment by Alexandros · 2011-03-09T18:53:43.631Z · LW(p) · GW(p)
I don't, but
Your 'lessons learned' implies you do.
it is evidence that people disagree with the SIAI and think that there are more effective ways towards a positive Singularity.
This is not news to anyone.
Don't forget that he once worked for the SIAI.
I don't think he was ever on their payroll, so 'worked for' is inaccurate. Also, he's still on their advisory board: http://singinst.org/aboutus/ourmission/ . Finally, OpenCog started from within SIAI as far as I know, and is still mentioned on the history page.
I had people going through my comments downvoting over 40 in a matter of minutes. I have seen enough irrational and motivated voting behavior to not trust it anymore, especially when it comes to anything about the community itself.
Fair enough.
↑ comment by XiXiDu · 2011-03-09T19:51:20.864Z · LW(p) · GW(p)
To the point about my 'lessons'. It is what I inferred from talking to Goertzel. I still don't see what is wrong with that. If it annoys you too much let me know and I'll just remove that part.
I'm weary of apparently making people angry, so I'll apologize at this point and post any further exchanges and writings on my personal blog. I just don't want to spoil the fun for everyone else. My goal is to collect evidence on how to best contribute my time and money. I heard most of the arguments from within this community and I think it is essential to also listen to what other people have to say.
Finally, I believe there are serious problems with reputation systems in general, and especially with any negative incentive via downvotes. I feel that I can currently spend my time better doing other things than discussing that topic, though. I am open to links to LW posts or studies that show that I am wrong and that reputation systems indeed do not harm honesty and diversity of thought. But since I'm not willing to discuss this, I will shut up about any kind of voting behavior from now on.
↑ comment by Alexandros · 2011-03-09T21:24:18.794Z · LW(p) · GW(p)
Look, this isn't personal. I am sympathetic to your overall effort to gain a better understanding of SIAI's workings and of how to contribute to reducing existential risk in general, and talking to other experts can't be a bad idea. All I had to say is what I said in my first comment. I think that one of the best things we are supposed to do for each other here on LW is to correct each other's thinking when we make an unwarranted step. So don't remove parts to pacify me, as I have no anger to speak of. Only do so if you actually change your mind.
↑ comment by timtyler · 2011-03-10T22:04:12.844Z · LW(p) · GW(p)
Finally, I believe there are serious problems with reputation systems in general and especially any negative incentive via downvotes.
Facebook seems to share the latter attitude. However, here, here and here are a lot of people who disagree. Frankly, I'm with them.
↑ comment by XiXiDu · 2011-03-11T09:42:56.487Z · LW(p) · GW(p)
On Facebook it doesn't matter; it will do a good job there and elsewhere. It matters if you seek truth.
↑ comment by timtyler · 2011-03-11T10:20:09.895Z · LW(p) · GW(p)
I think we are at cross purposes. I was mostly talking about a dislike button. You seem to be back to the whole concept of a reputation system.
Reputation systems seem to be just generally good to me. How else are you supposed to automatically deal with spammers, flamers and other lowlife?
You seem more concerned with group-think and yes-men. Sure, but no big deal.
More reputation-system features would be better. Non-anonymous votes. Separate up and down vote counts. "Meh" button. Reputations for the posters themselves (rather than the sum of their comments). Comment like and dislike lists that can be made public. Vote annotations - so people can say why they voted. Killfiles that can be made public - etc.
Oh, yes, and while I am on the topic, let's not be faceless - bring on the http://en.gravatar.com/
↑ comment by TheOtherDave · 2011-03-11T16:12:13.183Z · LW(p) · GW(p)
Agreed, and I would add to that list a feature I've wanted to see for a long time on really large networks: complex filter criteria that support spreading activation networks of vote weight.
For example, a standard filter for "entries highly upvoted by people who tend to upvote the same kinds of things that i do, and/or highly downvoted by people who tend to downvote the kinds of things that I upvote," allowing a single system to support communities in strong disagreement with one another.
Relatedly, a spreading rule such that a new entry by a user whose entries I tend to rate highly inherits a high default rating, and therefore appears in my filters.
For example, a standard mechanism for "assemble a reading list with 30% things I would upvote for agreement, 65% things I would upvote for interest, and 5% things I would downvote for disagreement."
Etc.
Of course, someplace like LW is too small to need that sort of thing.
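The filter described above ("entries highly upvoted by people who tend to upvote the same kinds of things that I do") is essentially similarity-weighted voting. A minimal sketch, with entirely invented users and votes; this is not an actual LW feature:

```python
# votes[user][entry] = +1 (upvote) or -1 (downvote); data invented for illustration.
votes = {
    "me":    {"a": 1, "b": 1, "c": -1},
    "ally":  {"a": 1, "b": 1, "d": 1},    # tends to agree with "me"
    "rival": {"a": -1, "c": 1, "d": -1},  # tends to disagree with "me"
}

def similarity(me: str, other: str) -> float:
    """Agreement score in [-1, 1]: +1 per matching shared vote, -1 per clash."""
    shared = set(votes[me]) & set(votes[other])
    if not shared:
        return 0.0
    return sum(1 if votes[me][e] == votes[other][e] else -1 for e in shared) / len(shared)

def personalized_score(me: str, entry: str) -> float:
    """Score an entry by other users' votes, each weighted by similarity to me."""
    return sum(
        similarity(me, user) * user_votes[entry]
        for user, user_votes in votes.items()
        if user != me and entry in user_votes
    )

# "d" scores positively for "me": "ally" (similar) upvoted it,
# and "rival" (dissimilar) downvoted it.
print(personalized_score("me", "d"))
```

A real deployment would need the usual collaborative-filtering machinery (sparse storage, damping for users with few shared votes), but the core idea fits in a few lines.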
↑ comment by timtyler · 2011-03-11T21:19:35.391Z · LW(p) · GW(p)
A recommendation system for comments would be fine. People who liked this comment also liked...
Discussion is probably about the number one collective intelligence app. FriendFeed and Google Wave didn't really make it - and commenting is currently a disaster. That needs fixing.
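"People who liked this comment also liked..." is classic item-based collaborative filtering; a bare-bones co-occurrence version might look like the following. All data is invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Each user's set of liked comments (hypothetical data).
likes = [
    {"c1", "c2", "c3"},
    {"c1", "c2"},
    {"c2", "c3"},
    {"c1", "c4"},
]

# Count how often each ordered pair of comments is liked by the same user.
co_occurrence: Counter = Counter()
for liked in likes:
    for a, b in combinations(sorted(liked), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def also_liked(comment: str, top_n: int = 3) -> list:
    """Comments most often co-liked with the given one, best first."""
    ranked = Counter({b: n for (a, b), n in co_occurrence.items() if a == comment})
    return [c for c, _ in ranked.most_common(top_n)]

print(also_liked("c1"))  # "c2" ranks first: co-liked with "c1" by two users
```

Raw co-occurrence over-recommends universally popular items; production recommenders normalize by item popularity (cosine similarity, TF-IDF-style weighting), but the skeleton is the same.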
↑ comment by wedrifid · 2011-03-09T23:57:26.147Z · LW(p) · GW(p)
To the point about my 'lessons'. It is what I inferred from talking to Goertzel. I still don't see what is wrong with that. If it annoys you too much let me know and I'll just remove that part.
It is a matter of simple correctness. Even if Ben were a messiah and those were perfectly True beliefs they just don't follow from what you cited. You would have to learn them from something else.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-10T09:45:00.034Z · LW(p) · GW(p)
It is a matter of simple correctness
What information I learnt from Ben's answer:
- There is an experimental AGI project doing research towards a positive Singularity.
- An AGI researcher believes $100K are better spent with that project.
What I concluded:
- There are other options to work towards a positive Singularity (experimental).
- The SIAI might benefit from cooperating with them rather than competing.
Now you and other tell me those conclusions are incorrect. Can you elaborate?
Replies from: wedrifid
comment by Louie · 2011-03-09T22:15:17.426Z · LW(p) · GW(p)
Has anyone here ever tried to contribute to the OpenCog project?
Because I have.
You know what I learned?
This open source* code is missing huge components that are proprietary parts of Ben's Novamente system. So if you're a coder, you can't actually compile it, run it, or do anything else with it. Ben's holding all the key components hostage and refuses to release them until he's paid money. If you'd like to pay someone a second time to open source the code they already wrote, OpenCog is an excellent charity. Hopefully after he gets enough money to actually show us what he has written, Ben's software will cause an amazing Singularity with ponies for everyone. I guess proprietary software can't create Singularities... or ponies... or funding.
* Open Source = closed source components you can't have + empty stubs of code
↑ comment by Mitchell_Porter · 2011-03-09T23:55:34.180Z · LW(p) · GW(p)
Are you saying that the existing OpenCog source is actually useless unless you have a paid-for copy of Novamente to augment it, or just that there are functionalities which have already been realized in Novamente which will have to be recreated in open source if they are to become part of what OpenCog can do?
Replies from: JaredWigmore↑ comment by JaredWigmore · 2011-03-10T04:20:22.282Z · LW(p) · GW(p)
I'm one of the leaders of OpenCog, and I can tell you that these accusations are spurious and bizarre. Regarding installing dependencies and compiling the code, detailed instructions are provided on our wiki. All the major features have been released (as they were ported/cleaned up during 2008 and 2009).
Some interesting features were previously implemented in Novamente, but during rushed commercial contracts and in a hacky way, which means it's easier to re-implement them now. Sometimes people have difficulties compiling the code, but we help them if they show up on IRC (I don't remember Louie though).
Replies from: Louie↑ comment by Louie · 2011-03-10T11:13:53.614Z · LW(p) · GW(p)
My comment relates to the state of OpenCog when I downloaded it in November 2009. It's entirely possible that things are much improved since then. I think it was reasonable to assume that things hadn't changed much, though, since the code looked mostly empty at that time and I didn't sense that there was any active development by anyone who wasn't on the Novamente/OpenCog team as an employee or close team member. There were comments in the code at the time stating that pieces were missing because they hadn't yet been released from Novamente. Hopefully those are gone now.
Sorry I didn't join you on IRC. I never noticed you had a channel.
I could have sent an email to the list. But again, it looked like I couldn't contribute to OpenCog unless I somehow got hired by OpenCog/Novamente or ingratiated myself with the current team and found a way to become part of the inner circle. I was considering whether that would be a good idea at the time, but figured that emailing the list with "Duuuhhhh... I can't compile it. WTF?" would only frustrate internal developers, get condescending replies from people who had unreleased code that made their versions work, or get requests for funding to help open source the unreleased code.
Hopefully things have improved in the last 1.5 years. I would love to support OpenCog. The vision you guys have looks great.
Replies from: ferrouswheel, nilg↑ comment by ferrouswheel · 2011-03-11T02:10:19.314Z · LW(p) · GW(p)
Well, we get a lot of the "I can't compile it" emails and while we are not especially excited to receive these, we usually reply and guide people through the process with minimal condescension.
There have been progressive additions to OpenCog from closed source projects, but they've never prevented the core framework from compiling and working in and of itself.
Apologies for my tone too. We occasionally get people trolling or trash-talking us without taking any time to understand the project... sometimes they just outright lie, and that's frustrating. Of course, we're not perfect as an OSS project, but we are constantly trying to improve.
↑ comment by nilg · 2011-03-10T14:13:15.042Z · LW(p) · GW(p)
Ah, OK. Thanks for clearing that up, and sorry for my perhaps harsh tone. I didn't imagine your comment would be based on an old/incomplete version of OpenCog; you should have mentioned that in your post, or, even better, updated your knowledge before posting! There's been a lot of work since then.
You can use it to run a virtual pet under Multiverse (although you need either two machines or a virtual machine, one with Linux and the other with Windows, because OpenCog isn't completely ported to Windows and Multiverse runs under Windows). It is also used to control the Nao robot in a lab in China. Soon it will be possible to connect it to the Unity3D game engine, with a much improved tool kit for coding your own bot (currently the API is really tough to understand and use).
Just for playing around with the various components (except MOSES, which is a standalone executable for now) there is a Scheme binding, and there will soon be a Python binding.
It's really a lot of work, and except for the HK team, who got a grant to focus entirely on it for the next two years, and some students in the BLISS lab in China, we only manage to contribute via loosely related contracts that do not always help advance OpenCog itself (though we're trying our best to direct our efforts toward it).
So any help is very welcome!
http://wiki.opencog.org/w/Volunteer http://wiki.opencog.org/w/The_Open_Cognition_Project
↑ comment by ferrouswheel · 2011-03-10T06:42:23.185Z · LW(p) · GW(p)
Yeah, you've tried to contribute huh? Who are you again and why is there no mention of you in my complete archive of the OpenCog mailing lists?
↑ comment by nilg · 2011-03-10T05:02:02.809Z · LW(p) · GW(p)
Louie, where did you get this nonsense? OpenCog doesn't need any proprietary add-ons, and is better and cleaner (and keeps getting better and better) than the Novamente code from which it was seeded.
You are either hiding your identity or making up the fact that you've tried to contribute because I've never heard about you on the email list or IRC.
comment by benelliott · 2011-03-09T16:59:13.910Z · LW(p) · GW(p)
Any possibility to ask a follow-up question about what he would do with $100M? With all due respect to Ben, there's a good chance he'd overestimate the importance of his own project, so I'd be more interested to see how he thinks other projects compare with each other.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T18:07:40.520Z · LW(p) · GW(p)
Upvoted because someone downvoted you without explaining himself.
With all respect to Ben, there's a good chance he'd overestimate the importance of his own project, so I'd be more interested to see how he thinks other projects compare with each other.
I think that is sufficiently outweighed by the fact that the same could be said about the SIAI.
Any possibility to ask a follow-up question about what he would do with $100M?
He said he would have to do more research into it. I really don't want to bother him even more with questions right now. But you can find his e-mail on his homepage.
Replies from: benelliott↑ comment by benelliott · 2011-03-09T18:31:37.972Z · LW(p) · GW(p)
I think that is sufficiently outweighed by the fact that the same could be said about the SIAI.
The fact that SIAI is just as likely to be biased is exactly why I want to hear what those outside of it think of it.
He said he would have to do more research into it. I really don't want to bother him even more with questions right now
Fair enough.
comment by wedrifid · 2011-03-09T23:38:53.965Z · LW(p) · GW(p)
What can one learn from this?
- The SIAI should try to cooperate more closely with other AGI projects to potentially have a positive impact.
This is not something that can be learned from what you have mentioned. Particularly if prior observation of Goertzel left you unimpressed. A self-endorsement does not 'teach' you that cooperation with him on AGI would be beneficial.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-10T10:11:20.562Z · LW(p) · GW(p)
This is not something that can be learned from what you have mentioned.
What information I learnt from Ben's answer:
- There is an experimental AGI project doing research towards a positive Singularity.
- An AGI researcher believes $100K are better spent with that project.
What I concluded:
- There are other options to work towards a positive Singularity (experimental).
- The SIAI might benefit from cooperating with them rather than competing.
Particularly if prior observation of Goertzel left you unimpressed.
I'm more impressed with him than I am with the SIAI right now. At least he is doing something, while most of what the SIAI has achieved is some science-fictional idea called CEV and a handful of papers, of which most are just survey papers and none are peer-reviewed, as far as I know. And the responses to this post seem completely biased, and in some cases simply motivated by a strong commitment to the SIAI.
comment by lukeprog · 2011-03-09T19:39:39.647Z · LW(p) · GW(p)
Too bad we can't judge Friendly AI charity effectiveness as "easily" as we can judge the effectiveness of some other charities, like those which distribute malaria nets and vaccines.
If one assumes that giving toward solving the Friendly AI problem offers the highest marginal return on investment, which project do you give to? Yudkowsky / SIAI? OpenCog / Goertzel? Gert-Jan Lokhorst? Stan Franklin / Wendell Wallach / Colin Allen?
My money is on SIAI, but I can't justify that with anything quick and easy.
Replies from: ferrouswheel, Vladimir_M, None↑ comment by ferrouswheel · 2011-03-10T06:51:21.992Z · LW(p) · GW(p)
As I see it, OpenCog is making practical progress towards an architecture for AGI, whereas SIAI is focused on the theory of Friendly AI.
I specifically added "consultation with SIAI" in the latter part of OpenCog's roadmap to try to ensure the highest odds of OpenCog remaining friendly under self-improvement.
As far as I'm aware there is no software development going on in SIAI, it's all theoretical and philosophical comment on decision theory etc. (this might have changed, but I haven't heard anything about them launching an engineering or experimental effort).
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-10T09:57:28.409Z · LW(p) · GW(p)
As far as I'm aware there is no software development going on in SIAI, it's all theoretical and philosophical comment on decision theory etc. (this might have changed, but I haven't heard anything about them launching an engineering or experimental effort).
Indeed, that is another reason for me to conclude that the SIAI should seek cooperation with projects that follow an experimental approach.
Replies from: timtyler↑ comment by timtyler · 2011-03-10T22:18:53.539Z · LW(p) · GW(p)
At the moment, they seem more interested in lobbing stink bombs in the general direction of the rest of the AI community - perhaps in the hope that it will drive some of the people there in its direction. Claiming that your opponents' products may destroy the world is surely a classic piece of FUD marketing.
The "friendlier-than-thou" marketing battle seems to be starting out with some mud-slinging.
↑ comment by Vladimir_M · 2011-03-10T00:12:31.250Z · LW(p) · GW(p)
I don't know much about AI specifically, but I do know something about software in general. And I'd say that even if someone had a correct general idea how to build an AGI (an assumption that by itself beggars belief given the current state of the relevant science), developing an actual working implementation with today's software tools and methodologies would be sort of like trying to build a working airplane with Neolithic tools. The way software is currently done is simply too brittle and unscalable to allow for a project of such size and complexity, and nobody really knows when and how (if at all) this state of affairs will be improved.
With this in mind, I simply can't take seriously people who propose a few years long roadmap for building an AGI.
Replies from: Vladimir_Nesov, ferrouswheel, lukeprog↑ comment by Vladimir_Nesov · 2011-03-10T11:49:58.955Z · LW(p) · GW(p)
And I'd say that even if someone had a correct general idea how to build an AGI (an assumption that by itself beggars belief given the current state of the relevant science), developing an actual working implementation with today's software tools and methodologies would be sort of like trying to build a working airplane with Neolithic tools.
This is the sort of sentiment that has people predict that AGI will be built in 300 years, because "300 years" is how difficult the problem feels like. There is a lot of uncertainty about what it takes to build an AGI, and it would be wrong to be confident one way or the other just how difficult that's going to be, or what tools are necessary.
We understand both airplanes and Neolithic tools, but we don't understand AGI design. Difficulty in basic understanding doesn't straightforwardly translate into the difficulty of solution.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2011-03-10T22:19:42.755Z · LW(p) · GW(p)
We understand both airplanes and Neolithic tools, but we don't understand AGI design. Difficulty in basic understanding doesn't straightforwardly translate into the difficulty of solution.
That is true, but a project like OpenCog can succeed only if: (1) there exists an AGI program simple enough (in terms of both size and messiness) to be doable with today's software technology, and (2) people running the project have the right idea how to build it. I find both these assumptions improbable, especially the latter, and their conjunction vanishingly unlikely.
Perhaps a better analogy would be if someone embarked on a project to find an elementary proof of P != NP or some such problem. We don't know for sure that it's impossible, but given both the apparent difficulty of the problem and the history of the attempts to solve it, such an announcement would be rightfully met with skepticism.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-03-11T00:06:44.715Z · LW(p) · GW(p)
You appealed to inadequacy of "today's software tools and methodologies". Now you make a different argument. I didn't say it's probable that solution will be found (given the various difficulties), I said that you can't be sure that it's Neolithic tools in particular that are inadequate.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2011-03-11T02:47:27.314Z · LW(p) · GW(p)
It's hard to find a perfect analogy here, but both analogies I mentioned lend support to my original claim in a similar way.
It may be that with the present state of math, one could cite a few established results and use them to construct a simple proof of P != NP, only nobody's figured it out yet. Analogously, it may be that there is a feasible way to take present-day software tools and use them to implement a working AGI. In both cases, we lack the understanding that would be necessary either to achieve the goal or to prove it impossible. However, what insight and practical experience we have strongly suggests that neither thing is doable, leading to conclusion that the present-day software tools likely are inadequate.
In addition to this argument, we can also observe that even if such a solution exists, finding it would be a task of enormous difficulty, possibly beyond anyone's practical abilities.
This reasoning doesn't lead to the same certainty that we have in problems involving well-understood physics, such as building airplanes, but I do think it's sufficient (when spelled out in full detail) to establish a very high level of certainty nevertheless.
↑ comment by ferrouswheel · 2011-03-10T06:35:40.739Z · LW(p) · GW(p)
Well, if you bothered looking at our/OpenCog's roadmap you'll see it doesn't expect AGI in a "few years".
What magical software engineering tools are you after that can't be built with the current tools we have?
If nobody attempts to build these then nothing will ever improve - people will just go "oh, that can't be done right now, let's just wait a while until the tools appear that make AGI like snapping lego together". Which is fine if you want to leave the R&D to other people... like us.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2011-03-10T07:06:34.463Z · LW(p) · GW(p)
ferrouswheel:
Well, if you bothered looking at our/OpenCog's roadmap you'll see it doesn't expect AGI in a "few years".
The roadmap on opencog.org has among its milestones: "2019-2021: Full-On Human Level AGI."
What magical software engineering tools are you after that can't be built with the current tools we have?
Well, if I knew, I'd be cashing in on the idea, not discussing it here. In any case, surely you must agree that claiming the ability to develop an AGI within a decade is a very extraordinary claim.
Replies from: JaredWigmore, ferrouswheel, timtyler↑ comment by JaredWigmore · 2011-03-10T10:26:05.619Z · LW(p) · GW(p)
As in "extraordinary claims demand extraordinary evidence".
A summary of the evidence can be found on Ben's blog
Adding some more info... Basically the evidence can be divided into two parts. 1) Evidence that the OpenCog design (or something reasonably similar) would be a successful AGI system when fully implemented and tested. 2) Evidence that the OpenCog design can be implemented and tested within a decade.
1) The OpenCog design has been described in considerable detail in various publications (formal or otherwise); see http://opencog.org/research/ for an incomplete list. A lot of other information is available in other papers co-authored by Ben Goertzel, talks/papers from the AGI Conferences (http://agi-conf.org/), and the AGI Summer School (http://agi-school.org/) amongst other places.
These resources also include explanations for why various parts of the design would work. They use a mix of different types of arguments (i.e. intuitive arguments, math, empirical results). It doesn't constitute a formal proof that it will work, but it is good evidence.
2) The OpenCog design is realistic to achieve with current software/hardware and doesn't require any major new conceptual breakthroughs. Obviously it may take years longer than intended (or even years less); it depends on funding, project efficiency, how well other people solve parts of the problem, and various other things. It's not realistic to estimate the exact number of years at this point, but it seems unlikely that it needs to take more than, say, 20 years, given adequate funding.
By the way, the two year project mentioned in that blog post is the OpenCog Hong Kong project, which is where ferrouswheel (Joel Pitt) and I are currently working. We have several other people here as well, and various other people working right now (including Nil Geisweiller who posted before as nilg).
↑ comment by ferrouswheel · 2011-03-10T07:42:52.925Z · LW(p) · GW(p)
Not particularly, people have been claiming a decade from human-level intelligence since the dawn of the AI field, why should now be any different? ;p
And usually people would consider a decade being more than a "few years" - which was sort of my point.
↑ comment by timtyler · 2011-03-10T22:46:08.885Z · LW(p) · GW(p)
Eyeballing my own graph I give it about a 12% chance of being true. Ambitious - but not that extraordinary.
People are usually overoptimistic about the timescales of their own projects. It is typically an attempt to signal optimism and confidence.
↑ comment by [deleted] · 2011-03-09T22:30:15.756Z · LW(p) · GW(p)
The OpenCog Roadmap does say that they will collaborate with SIAI at some point:
2019-2021: Full-On Human Level AGI
- Integration of special-purpose intelligent agents from 2017-2018 into a single OpenCog-based mind kit.
- Risk-assessment of goal stability under self-modification, with consultation from the Singularity Institute for Artificial Intelligence.