Of course, if you never assume independence, then the only right network is the fully-connected one. Um, conditional independence, that is.
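A one-equation sketch of why (the factorization below is standard probability, not from the original comment): the chain rule holds for any joint distribution with no assumptions at all; a sparse network only arises when conditional-independence assumptions let you shrink each full conditioning set down to a parent set.

```latex
% Chain rule: true of ANY joint distribution, no assumptions needed.
% A Bayes net earns its sparsity only from conditional-independence (CI)
% assumptions, each of which licenses deleting a parent edge. With zero
% CI assumptions, every node keeps all of its predecessors as parents,
% i.e., the fully connected DAG.
\[
  P(X_1,\dots,X_n)
    = \prod_{i=1}^{n} P\!\left(X_i \mid X_1,\dots,X_{i-1}\right)
    \quad\xrightarrow{\ \text{CI assumptions}\ }\quad
    \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{Pa}(X_i)\right)
\]
```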
I want to know if my being killed by Eliezer's AI hinges on how often observables of interest tend to be conditionally dependent.
I think Einstein is a good example both of bending with the wind (when he came up with relativity) and of not bending with the wind (when he refused to accept quantum mechanics).

"In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough."

The trouble is that you cannot break new ground this way. You can't pull off Einstein-like feats. You should follow the direction of the wind, but engage the nitrous to move in that direction, occasionally stopping and sticking a finger out the window to make sure you are still headed the right way.
By "bending with the wind" I don't mean "bending with public opinion". I mean not being emotionally attached to your views.
In a PD, agents hurt each other, not themselves.

In a PD, everyone having accurate information about the payoff matrix leads to a worse outcome for everyone than they would get under certain false payoff matrices you could misinform them with. That is the point.
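A minimal sketch of that claim (my illustration, not from the thread; the payoff numbers are standard but arbitrary): each agent best-responds to the matrix it is told, yet is paid from the true matrix. A suitably false matrix makes cooperation dominant, and everyone ends up better off.

```python
# Agents best-respond to the payoff matrix they are TOLD, but receive
# payoffs from the TRUE matrix. Entries are (row, col) payoffs for
# actions C (cooperate) / D (defect).

TRUE = {            # a standard Prisoner's Dilemma: D strictly dominates C
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
FALSE = {           # a misreported matrix in which C strictly dominates D
    ("C", "C"): (3, 3), ("C", "D"): (2, 0),
    ("D", "C"): (0, 2), ("D", "D"): (1, 1),
}

def dominant_action(matrix):
    """Return the row player's strictly dominant action, if one exists."""
    for mine in ("C", "D"):
        other = "D" if mine == "C" else "C"
        if all(matrix[(mine, opp)][0] > matrix[(other, opp)][0]
               for opp in ("C", "D")):
            return mine
    return None

for label, believed in (("true", TRUE), ("false", FALSE)):
    a = dominant_action(believed)   # symmetric game: both pick the same action
    print(f"Believing the {label} matrix, both play {a}; "
          f"actual payoffs: {TRUE[(a, a)]}")
# Believing the true matrix, both play D; actual payoffs: (1, 1)
# Believing the false matrix, both play C; actual payoffs: (3, 3)
```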
"If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it."

It is easy to construct at least two kinds of cases where this is false:
- You have a set of beliefs optimized for co-occurrence, and you are replacing one of them with a more-true belief. In other words, the new true belief will cause you harm because of other untrue (or less true) beliefs that you still hold.
- If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners'-dilemma situation.
If you still aren't sure whether you are always better off with a true belief, ask yourself whether you have ever told someone something that was not quite true, or withheld a truth from them, because you thought the full truth would be harmful.
Eliezer:
"If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it."

I think you should try applying your own advice to this belief of yours. It is usually true, but it is certainly not always true, and it reeks of irrational bias.
My experience with my crisis of faith seems quite the opposite of what you describe. I was raised in a fundamentalist family, and I had to "make an extraordinary effort" to keep believing in Christianity from the time I was four, when I started reading through the Bible and finding things that were wrong, to the time I finally "came out" as a non-Christian around the age of 20. I gave up being Christian only when I was worn out and tired of putting forth such an extraordinary effort.
So in some cases your advice might do more harm than good. A person who is committed to making "extraordinary efforts" concerning their beliefs is more likely to find justifications for continuing to hold a belief than someone lazier, who simply accepts overwhelming evidence instead of letting it kick them into an "extraordinary effort." In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.
Carl: None of those would (given our better understanding) be as bad as the great plagues that humanity has lived through before.
A forum makes more sense for a blog like this, which is not timely, but timeless.
Consider the space of minds built using Boolean symbolic logic. This is a very large space, and it is the space that the leading experts in AI at one time chose as the most promising place to find AI minds. And yet I believe there are /no/ minds in that space. If I'm right, this means that the space of possible minds as we imagine it is very sparsely populated by actual possible minds.
I agree with Mike Vassar that Eliezer is using the word "mind" too broadly, to mean something like "computable function" rather than a control program for an agent accomplishing goals in the real world.
The real world places a lot of restrictions on possible minds.
If you posit that this mind is autonomous, and not being looked after by some other mind, that places more restrictions on it.
If you posit that there is a society of such minds evolving over time, or a number of such minds competing for resources, that places still more restrictions on them. By this point, we could say quite a lot about the properties these minds will have. In fact, by this point it may be that the variation among possible minds, for sufficiently intelligent AIs, is smaller than the variation among human minds.
Phil, I don't see how the argument is obviously incorrect. Why can't two works of literature from different cultures be as different from each other as Hamlet is from a restaurant menu?
They could be, but usually aren't. "World literature" is a valid category.
The larger point, that the space of possible minds is very large, is correct.
The argument involving ATP synthase is invalid. ATP synthase is a building block. Life on Earth is all built from roughly the same set of Legos, but Legos are very versatile.
Here is an analogous argument that is obviously incorrect:
People ask me, "What is world literature like? What desires and ambitions, and comedies and tragedies, do people write about in other languages?"
And lo, I say unto them, "You have asked me a trick question."
"the" is a determiner which is identical in English poems, novels, and legal documents. It has not changed significantly since the rise of modern English in the 17th century. It's is something that every English document has in common.
Any two works of literature from different countries might be less similar to each other than Hamlet is to a restaurant menu.