Surely if you provably know what the ideal FAI would do in many situations, a giant step forward has been made in FAI theory?
BBC Radio: Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.
Drat. I just came here to post that. Still, at least this time I only missed by hours.
You need a different definition for 'blackmail' then. Action X might be beneficial to the blackmailer rather than negative in value and still be blackmail.
Why not taboo 'blackmail'? That word already has a bunch of different meanings in law and common usage.
Omega gives you a choice of either $1 or $X, where X is either 2 or 100?
It seems like you must have meant something else, but I can't figure it out.
Isn't that steel-man, rather than strong-man?
Reading that, I thought: "I bet people asking questions like that is why 'Original Sin' got invented".
Of course, the next step is to ask: "Why doesn't the priest drown the baby in the baptismal font, now that its Original Sin is forgiven?"
…
I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in
Are there lists like this about? I think I'd like to read about that sort of stuff.
I remember seeing a few AI debates (and debates on other things, sometimes), mostly on YouTube, where they'd just be getting to the point of clarifying what each party actually believes, and then you get: 'agree to disagree'. The end.
Just when the really interesting part seemed to be approaching! :(
For text-based discussions that fail to go anywhere, that brings to mind the 'talking past each other' you mention, or 'appears to be deliberately misinterpreting the other person'.
Has there been any evolution in either of their positions since 2008, or is that the latest we have?
edit: Credit to XiXiDu for sending me this OB link, which contains in the comments this YouTube video of a Hanson-Yudkowsky AI debate in 2011. Boiling it down to one sentence, I'd say it amounts to Hanson thinking that a singleton Foom is a lot less likely than Yudkowsky thinks.
Is that more or less what it was in 2008?
I find it is the downsides of those things that I generally blame for not doing them, though I do own a Bon Jovi CD.
…powers such as precognition (knowledge of the future), telepathy or psychokinesis…
Sounds like a description of magic to me. They could have written it differently if they'd wanted to evoke the impression of super-advanced technologies.
I hope that happens quick. There are systems in my body that need some re-engineering, lest I die even sooner than the average Englishman.
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for making cheesecake.
Several comments on the original thread seem to be making a comparison between "I found a complicated machine-thing, something must have made it" and the classic anti-evolution "This looks complicated, therefore God".
I can't quite see how they can leap from one to the other.
So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily verifiable truth, and the most wonderful possible thing it can do by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.
I am worried about "a belief/fact in its class": the class chosen could have an extreme effect on the outcome.
An OpenOffice file, I think. edit: An OpenDocument Presentation. You ought to be able to view it with more recent versions of MS Office, it seems.
[pollid:49]
I was under the impression from reading stuff Gwern wrote that Intrade was a bit expensive unless you were using it a lot. Also, even assuming I made money on it, wouldn't I be liable for tax? I intend to give owning shares via a self-select ISA a go.
As a non-USian, my main interest in the election is watching the numbers go up and down on Nate Silver's blog.
Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.
Using Opera Mini, I just delete the cookies (which then requires me to re-login to LW). It was much less annoying when the count-to-nag was 20, rather than 10.
Is this pretty much what gets called 'signalling' on LW? Anything you do in whole or in part to look good to people or because doing otherwise would make people think badly of you?
I'm not sure it counts as an origin story, but after I noticed a lot of discussions/arguments seemed to devolve into arguments about what words meant, or similar, I got the idea this was because we didn't 'agree on our axioms' (I'd studied some maths). Sadly, trying to get agreement on what we each meant by the things we disagreed on didn't seem to work - I think that the other party mostly considered it an underhanded trick and gave up. :(
"One death is a tragedy. One million deaths is a statistic."
If you want to remind people that death is bad, agreed: the death of individuals you know, or feel like you know, hits harder than the deaths of lots of people you never met or even saw.
Eulogies on arbitrary people might help with motivation, and if you're doing that you might as well choose one with a minor advantage, like not needing a long introduction to make the reader empathize, rather than choosing purely at random.
Are you suggesting that putting eulogies of famous people on LessWrong is a good idea? That sort of sounds like justifying something you've already decided.
~150,000 other people died today, too. Okay, Armstrong was hugely more famous than any of them, probably the most famous person to die this year, but what did he do for rationality, or AI, or other LessWrong interests (which I figure do include space travel, admittedly; presumably he wasn't signed up for cryogenic preservation)? The post doesn't say.
Yes, death is bad, and Armstrong is/was famous, possibly uniquely famous, but I don't think eulogies of famous people are on-topic.
Credit to Bakkot for having tried out and reported on magnetic rings, not me.
Holden Karnofsky thinks superintelligences with utility functions are made out of programs that list options by rank without making any sort of value judgement (basically answering a question), and then pick the one with the most utility.
Isn't 'listing by rank' 'making a (value) judgement'?
In my recollection of just about any place I have eaten in the UK, there is no choice. They only ever have one cola or the other. Is this different in other parts of the world?
I thought that sensitivity might be the answer. Not that hearing that fairly sensitive perception of magnetic fields is possible makes me want the ability enough to stick magnets in my fingers. Yet.
I've heard about other superhuman sensory devices, like the compass-sense belt, though, and the more I hear about this stuff, the cooler it sounds. Perhaps sometime the rising interest and falling cost/inconvenience curves will cross for me. :)
I can see X-ray or terahertz scanners missing a tiny lump of metal, but aren't there a fair number of magnetic scanners in use looking for larger lumps of metal, which I'd think the magnet would interact fairly strongly with?
Judging by previous instances, you ought to put in more than just a link and also put [LINK] in the title, or else you are liable to get a bunch of downvotes.
[edit] OK, watched the first video, with people getting little rare-earth magnets put in their fingers so they can feel magnetic fields... Why not just get a magnetic ring? That way you can feel magnetic fields, you don't risk medical complications, and you don't have to stop for several minutes and explain yourself every time you fly or go through one of those scanners I hear are relatively common in the US. [/edit]
Well, they say that now. We have something that works better than what we had before. I commend Asimov's essay The Relativity Of Wrong.
Good to read that again. Thanks.
I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.
I'd think that until even one of those little worms with only a couple hundred neurons is uploaded (or maybe a lobster), all evidence of the effectiveness of uploading is theory or fiction.
If computing continues to get cheaper at Moore's Law rates for another few decades, then maybe...
More generally, what would folks here consider to be good enough evidence that uploading was worth doing?
Good enough evidence that (properly done) uploading would be a good thing, as opposed to the status quo of tens of thousands of people dying every day, you mean?
[edit] If you want to compare working SENS to uploading, then I'd have to think a lot harder.
Wasn't that trick tried with Windows Vista, and people were so annoyed by continually being asked trivial "can I do this?" questions that they turned off the security?
I think that the intention is to make forgetting your password as hard as forgetting how to ride a bicycle. Although I only remember the figure of '2 weeks' from reading about this yesterday.
If you mostly solve the 'Ageing' and 'Unnecessary Unhappiness' problems, the youthful, happy populace will probably give a lot more weight to 'Things That Might Kill Everyone'.
I don't know about putting these things into proper categories, but I'm sure I'd be a lot more worried about the (more distant than a few decades) future if I had a stronger expectation of living to see it and I spent less time being depressed.
Just reading the title of this post, TVTropes came to mind, and there it was when I read it, which made me feel both good that I had made a successful prediction, and worried that it was probably me being biased by not remembering all the fleeting predictions that don't come true.
I can't help you there. Not enough detail has survived the years.
It has been more than a decade since then. All I have left are the less-reliable memories-of-memories of the dream. Having said that, I recall the dream being of text coloured like the MUD I was playing, but I am pretty sure that there was only the text. I don't even recall anything that happened in the dream or if I previously did and have forgotten.
I very rarely recall any dreams, but I do remember one time, during a summer I spent playing a lot of MUD (Internet text-based game, primitive ancestor to World of Warcraft), that I had a dream in text.
Why would anyone build tools for which the failure mode of the tool just wireheading itself was common?
Well, I realize that personal health is a personal choice in most cases.
You might want to rethink your wording on that one. Perhaps 'personal health status is a consequence of previous choices in many cases' or something. As written it sounds a bit overstated.
And yet, several high-status Less Wrongers continue to affirm utilitarianism with equal weight for each person in the social welfare function. I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.
I'm not sure how that answers my question, or follows from it. Can you clarify?
I am not sure what 'accurate moral beliefs' means. By analogy with 'accurate scientific beliefs', it seems as if Mr Danaher is saying there are true morals out there in reality, which I had not thought to be the case, so I am probably confused. Can anyone clarify my understanding with a brief explanation of what he means?
Not very sure. I've heard all sorts of assertions. I'm pretty sure that sugar and other carbs are a bad idea, since I've been diagnosed as diabetic. Also that too much animal fat and salt are bad - but thinking that things are bad doesn't always stop me indulging :(
The UK government recommends five portions (handful-sized) of different fruit and vegetables per day, but I don't even manage to do that, most days.
Sadly, the last time I got an appointment to talk about my diet, the nurse I had an appointment with turned out to be fatter than I am, and absolutely everything she said has slipped my memory, perhaps because I fail to believe the dieting advice of a fat nurse.
I think if I were given a few simple "doctor's orders" about food, I might be able to follow them, but don't think I can possibly hold dozens or hundreds of rules about food in my head - which is what all the stuff I recall reading consists of.