But we want them to be sentient. These things are going to be our cultural successors. We want to be able to enjoy their company. We don't want to pass the torch on to something that isn't sentient. If we were to build a nonsentient one, assuming such a thing is even possible, one of the first things it would do would be to start working on its sentient successor.
In any case, it seems weird to try and imagine such a thing. We are sentient entirely as a result of being powerful optimisers. We would not want to build an AI we couldn't talk to, and if it can talk to us as we can talk to each other, it's hard to see what aspect of sentience it could be lacking. At first blush it reads as if you plan to build an AI that's just like us except it doesn't have a Cartesian Theatre.
"by the time the AI is smart enough to do that, it will be smart enough not to"
I still don't quite grasp why this isn't an adequate answer. If an FAI shares our CEV, it won't want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid it. Is it simply that it may take the simulated torture of zillions for the FAI to figure this out? I don't see any reason to think that we will find this problem very much easier to solve than a massively powerful AI.
I'm also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.
Posting much too late, but about the Emperor's New Clothes: I had always interpreted the story to mean not that people stopped believing in the clothes when the little boy spoke out, but that it hadn't quite crossed people's minds until that moment that whether or not the King was clothed, he appeared naked to that boy, and that wasn't how things should be. Everyone laughs because they think, I too can see the nakedness of the King; only then do they realise that their neighbour can also see it, and only after that do they see that there are no clothes to see.
Very recently experienced exactly this phenomenon: someone discussing atheists who think "all religion/religious stuff is bad" even to the point of condemning, for example, the music of Bach, or drinking and celebrating at Christmas. They seemed convinced that such atheists exist, and I doubt it, or at least I have never heard of them or met them; and I know for a fact that, for example, all four horsemen of atheism have made explicit statements to the contrary.
Your disclaimer is an annoying one to have to make, and of course this problem comes up whenever this move is made in discussion; your counterpart says "well, but some singularitarians believe that, don't they?" and you can't actually prove there are none, and you have the sneaking fear that given the vastness of the Internet a judicious Google search might just turn up someone crazy enough to vindicate them; but of course a handful of anonymous loons on the Internet sought specifically for their particular brand of madness does not a position worthy of discussion make.
I can't believe the discussion has got this far and no-one has mentioned The Land of Infinite Fun.
There is room to do vastly better than what is usually used for community content finding, and it's a great mystery to me how little explored this area is. If things have moved forward significantly since Raph Levien's work on attack resistant trust metrics, I haven't heard about it.
Good software to support rational discussion would be a huge contribution to thought.
There's a whole world of atheist blogging and writing out there that might also be worth tapping into for advice from others who've been there. See this collection of deconversion stories for example.
That sounds like a really tough spot. I hope you find advice that can help.
Possession of a single Eye is said to make the bearer equivalent to royalty.
I approve.
Crossman: there's a third argument, which is that even if the consequences of keeping the secret are overall worse than those of betraying the confidence even after the effect you discuss, turning yourself into someone who will never betray these secrets no matter what the consequences and advertising yourself as such in an impossible-to-fake way may overall have good consequences. In other words, you might turn away from consequentialism on consequentialist grounds.
Another example where unfakeably advertising irrationality can (at least in theory) serve you is threats. My only way of stopping you from taking over the world is that I have the power to destroy the world and you. Now, if you take over the world, there's no possible advantage to destroying it, so I won't, so you can take the world over. But if I put a lunatic in charge of the button who believably will carry out the threat, you will be deterred; the same applies if I can become that lunatic.
However, overall I think that the arguments against turning yourself into a lunatic are pretty strong, and in fact I suspect that consequentialism has the best consequences.
There's a huge conspiracy covering it up
Well, that's just what one of the Bad Guys would say, isn't it?
Why should I have to justify myself to you?
Oh, you with your book-learning, you think you're smarter than me?
They said that to Einstein and Galileo!
That's a very interesting question, let me show you the entire library that's been written about it (where if there were a satisfactory answer it would be shortish)
How can you be so sure?
Don't think "silver spoons", think "clean drinking water".
I like "we are the cards we are dealt", which expresses nicely a problem with common ideas of blame and credit. I disagree that intelligence is the unfairest card of all - I think that a relatively dim person born into affluence in the USA has a much better time of it than a smart person born into poverty in the Congo.
Interesting. There's a paradox involving a game in which players successively take a single coin from a large pile of coins. At any time a player may choose instead to take two coins, at which point the game ends and all further coins are lost. You can prove by induction that if both players are perfectly selfish, they will take two coins on their first move, no matter how large the pile is. People find this paradox impossible to swallow because they model perfect selfishness on the most selfish person they can imagine, not on a mathematically perfect selfishness machine. It's nice to have an "intuition pump" that illustrates what genuine selfishness looks like.
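Here's a minimal backward-induction sketch of that coin game (my own toy illustration, not from the original post): it confirms that the mathematically selfish player ends the game immediately, however large the pile.

```python
def selfish_play(n_coins):
    """Backward induction on the take-one-or-take-two coin game.

    Returns (mover_payoff, opponent_payoff, best_move) for a pile of
    n_coins when both players are perfectly selfish.
    """
    if n_coins == 0:
        return 0, 0, None
    if n_coins == 1:
        return 1, 0, "take one"          # only one coin is left to take
    # Option 1: take two coins now and end the game.
    end = (2, 0)
    # Option 2: take one coin; the opponent then moves on the smaller pile.
    opp_mover, opp_other, _ = selfish_play(n_coins - 1)
    keep_going = (1 + opp_other, opp_mover)
    if end[0] >= keep_going[0]:
        return end[0], end[1], "take two (end the game)"
    return keep_going[0], keep_going[1], "take one"

for pile in (2, 10, 100):
    print(pile, selfish_play(pile))      # always "take two (end the game)"
```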
Are you arguing that a few simple rules describe what we're all trying to get at with our morality? That everyone's moral preference function is the same deep down? That anything that appears to be a disagreement about what is desirable is actually just a disagreement about the consequences of these shared rules, and could therefore always be resolved in principle by a discussion between any two sufficiently wise, sufficiently patient debaters? And that moral progress consists of the moral zeitgeist moving closer to what those rules capture?
That certainly would be convenient for the enterprise of building FAI.
Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?
Yes I can.
If you take the view that ethics and aesthetics are one and the same, then in general it's hard to imagine how any ideals other than your own could be better than your own, for the obvious reason that you can only measure them against your own.
What interests me about the rule I propose (circular preferences are bad!) is that it is exclusively a meta-rule - it cannot measure my behaviour, only my ideals. It provides a meta-ethic that can show flaws in my current ethical thinking, but not how to correct them - it provides no guidance on which arrow in the circle needs to be reversed. And I think it covers the way in which I've been persuaded of moral positions in the past (very hard to account for otherwise) and better yet allows me to imagine that I might be persuaded of moral points in the future, though obviously I can't anticipate which ones.
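As a toy illustration of that meta-rule (entirely my own sketch, with invented options): if you write pairwise preferences down as a directed graph, the rule is simply a cycle check - it can tell you the preferences are inconsistent, but not which arrow to reverse.

```python
def find_preference_cycle(prefers):
    """Return one circular chain of preferences, or None if there is none.

    `prefers` maps each option to the set of options it is preferred over.
    """
    visited, path = set(), []

    def visit(option):
        if option in path:                       # we've looped back: a cycle
            return path[path.index(option):] + [option]
        if option in visited:
            return None
        visited.add(option)
        path.append(option)
        for worse in prefers.get(option, ()):
            cycle = visit(worse)
            if cycle:
                return cycle
        path.pop()
        return None

    for option in prefers:
        cycle = visit(option)
        if cycle:
            return cycle
    return None

# Invented example: prefer A to B, B to C, and C to A - circular, so flawed.
print(find_preference_cycle({"A": {"B"}, "B": {"C"}, "C": {"A"}}))
```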
If I can imagine that through this rule I could be persuaded to take a different moral stance in the future, and see that as good, then I'm definitely elevating a different set of ideals - my imagined future ideals - over my current ideals.
I'm by no means sure that the idea of moral progress can be salvaged. But it might be interesting to try and make a case that we have fewer circular preferences now than we used to.
The "One Christers" are a nice SF touch.
It's not known whether the Universe is finite or infinite; this article gives more details:
http://en.wikipedia.org/wiki/Shape_of_the_Universe
If the Universe is infinite, then it has always been so even from the moment after the Big Bang; an infinite space can still expand.
It hadn't quite sunk in until this article that, looked at from a sum-over-histories point of view, only identical configurations interfere; that makes decoherence much easier to understand.
Would this get easier or harder if you started with, say, gliders in Conway's Life?
How do you apply this approach to questions like "to what extent was underconsumption the cause of the Great Depression?" No conceivable experiment could answer such a question, even given a time machine (unlike, say, "Who shot JFK?"), but I think such questions are nevertheless important to our understanding of what to do next.
The best answer I have to such questions is to posit experiments in which we rewind history to a particular date, and re-run it a million times, performing some specific miracle (such as putting money into a billion carefully-chosen wallets) on half a million of those occasions, and gather statistics on how the miracle affects economic indicators.
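Just to make the shape of that posited experiment concrete, here is a toy simulation in which everything (the stand-in "economy", the size of the miracle, the indicator) is invented; the point is only the structure: run many histories, apply the miracle to half of them, and compare the statistics.

```python
import random

def run_history(apply_miracle, seed):
    """Hypothetical stand-in for 'rewind history and re-run it': a toy
    economic indicator subject to random shocks, optionally nudged by
    the posited miracle (a cash injection)."""
    rng = random.Random(seed)
    indicator = 100.0
    for year in range(10):
        indicator += rng.gauss(0, 2)             # background randomness
        if apply_miracle:
            indicator += 0.5                     # invented effect size
    return indicator

runs = 100_000                                   # stand-in for "a million times"
control = [run_history(False, seed) for seed in range(runs)]
treated = [run_history(True, runs + seed) for seed in range(runs)]
estimated_effect = sum(treated) / runs - sum(control) / runs
print(f"estimated effect of the miracle on the indicator: {estimated_effect:.2f}")
```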
I don't think this answer meets the standards of rigour that you set above, but I'm increasingly convinced that the idea of free will arises out of punishment. Punishment plays a central role in relations among apes, but once you reach the level of sophistication where you can ask "are we machines", the answer "no" gives the most straightforward philosophical path to justifying your punishing behaviour.
"The old political syllogism "something must be done: this is something: therefore this will be done" appears to be at work here, in spades." -- Charlie Stross
Charlie is quoting the classic BBC TV series "Yes Minister" here.
"I assign higher credibility to an institution if liberals accuse it of being conservative and conservatives accuse it of being liberal." -- Alex F. Bokov
Surprised to see that one there - the world is full of people desperate to ensure that there is a stool either side of them, and that seems like a process very far from hugging the query.
The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.
Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life.
However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".
Practically all words (eg "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.
If you'd like someone to try the random jury approach, you need to think about how to turn it into good TV.
The video notes that when the subject is instructed to write their answers, conformity drops enormously. That suggests we can set aside the hypothesis that they conform for the rational reason you set out.
Recovering irrationalist: I feel the same way. The most interesting book I've read about this is George Ainslie's "Breakdown of Will". Ainslie uses the experimentally verified theory of hyperbolic discounting to build a model of why we do things like make promises to ourselves that we then fail to keep, and other forms of behaviour related to "akrasia".
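A minimal sketch of the mechanism Ainslie describes, with invented amounts and an invented discount rate: under hyperbolic discounting the value of a reward falls off as amount / (1 + k * delay), and a smaller-sooner reward overtakes a larger-later one as it draws near - which is the preference reversal behind promises to ourselves that we fail to keep.

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur-style hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

# Invented scenario: a small reward on day 10 versus a larger one on day 13.
for today in range(11):
    small = hyperbolic_value(50, delay=10 - today)
    large = hyperbolic_value(100, delay=13 - today)
    choice = "small-sooner" if small > large else "large-later"
    print(f"day {today:2d}: small={small:5.1f} large={large:5.1f} -> prefer {choice}")
# From a distance we prefer the larger-later reward; around day 9 the choice flips.
```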
"No. The "unless" clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B."
This can't be right. I have a hundred measuring devices. Ninety are broken and give a random answer with an unknown distribution, while ten give an answer that strongly correlates with the truth. Ninety say A and ten say B. If I examine a random meter that says B and find that it is broken, then surely that has to count as strong evidence against B.
This is probably an unnecessarily subtle point; the overall thrust of the argument is of course correct.
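To put toy numbers on the meter example (all of them invented: a uniform prior, broken meters answering A or B with equal probability, working meters right 90% of the time), here is an exact Bayesian calculation showing that finding a sampled B-meter to be broken does indeed shift the posterior further against B:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def posterior_b(saw_broken_b_meter, p_correct=0.9,
                n_broken=90, n_working=10, n_say_b=10):
    """P(truth is B | 90 meters say A, 10 say B, and optionally: a randomly
    sampled meter from the ten saying B turns out to be broken)."""
    likelihood = {}
    for truth in ("A", "B"):
        p_working_says_b = p_correct if truth == "B" else 1 - p_correct
        total = 0.0
        # k = how many of the ten B-sayers are working meters
        for k in range(min(n_working, n_say_b) + 1):
            p = (binom_pmf(k, n_working, p_working_says_b)
                 * binom_pmf(n_say_b - k, n_broken, 0.5))
            if saw_broken_b_meter:
                p *= (n_say_b - k) / n_say_b   # chance the sampled B-meter is broken
            total += p
        likelihood[truth] = total
    return likelihood["B"] / (likelihood["A"] + likelihood["B"])  # uniform prior

print(posterior_b(False))   # from the 90/10 split alone
print(posterior_b(True))    # ...after also finding the sampled B-meter broken
```

(Because of the 50/50 assumption, the 90/10 counts alone already point at A, which the "unknown distribution" framing above avoids; the relevant comparison is just that the second number printed is smaller than the first.)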
I don't want to say what it is for fear of spoilering it, but is anyone else thinking of the same groundbreaking comic book I am? Perhaps that's the supervillain Eliezer is thinking of...
So the point is that the idiots who are directly useless - make no useful contributions, have no ideas, spark nothing good - may be useful because they give cover to others who want to raise controversial ideas?
I'd want to see a group not already mad that suffered for not having an idiot in their number before I believed it...
Which paper was Merkle talking about, if I may ask?