Posts

[LINK] SMBC on Confirmation Bias 2011-11-27T15:57:13.142Z

Comments

Comment by Lapsed_Lurker on Be Wary of Thinking Like a FAI · 2014-07-18T21:52:12.083Z · LW · GW

Surely if you provably know what the ideal FAI would do in many situations, a giant step forward has been made in FAI theory?

Comment by Lapsed_Lurker on Open Thread for February 11 - 17 · 2014-02-12T15:45:55.835Z · LW · GW

BBC Radio : Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.

Comment by Lapsed_Lurker on Open Thread for February 3 - 10 · 2014-02-07T18:58:28.209Z · LW · GW

Drat. I just came here to post that. Still, at least this time I only missed by hours.

Comment by Lapsed_Lurker on Duller blackmail definitions · 2013-07-15T12:40:12.044Z · LW · GW

You need a different definition of 'blackmail', then. Action X might be beneficial to the blackmailer rather than negative in value and still be blackmail.

Comment by Lapsed_Lurker on Duller blackmail definitions · 2013-07-15T12:09:38.850Z · LW · GW

Why not taboo 'blackmail'? That word already has a bunch of different meanings in law and common usage.

Comment by Lapsed_Lurker on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T22:52:25.943Z · LW · GW

Omega gives you a choice of either $1 or $X, where X is either 2 or 100?

It seems like you must have meant something else, but I can't figure it out.

Comment by Lapsed_Lurker on Strongmanning Pascal's Mugging · 2013-02-20T14:08:21.070Z · LW · GW

Isn't that steel-man, rather than strong-man?

Comment by Lapsed_Lurker on Sensual Experience · 2013-01-08T12:30:53.367Z · LW · GW

Reading that, I thought: "I bet people asking questions like that is why 'Original Sin' got invented".

Of course, the next step is to ask: "Why doesn't the priest drown the baby in the baptismal font, now that its Original Sin is forgiven?"

Comment by Lapsed_Lurker on [SEQ RERUN] Life's Story Continues · 2012-11-12T11:11:40.702Z · LW · GW

I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in

Are there lists like this about? I think I'd like to read about that sort of stuff.

Comment by Lapsed_Lurker on [POLL] AI-FOOM Debate in Sequence Reruns? · 2012-11-01T23:48:17.677Z · LW · GW

I remember seeing a few AI debates (and debates on other things, sometimes), mostly on YouTube, where they'd just be getting to the point of clarifying what it is that each one actually believes and you get: 'agree to disagree'. The end.

Just when the really interesting part seemed to be approaching! :(

For text-based discussions that fail to go anywhere, that brings to mind the 'talking past each other' you mention, or 'appears to be deliberately misinterpreting the other person'.

Comment by Lapsed_Lurker on [POLL] AI-FOOM Debate in Sequence Reruns? · 2012-11-01T10:36:16.013Z · LW · GW

Has there been any evolution in either of their positions since 2008, or is that the latest we have?

edit Credit to XiXiDu for sending me this OB link, which contains in the comments this YouTube video of a Hanson-Yudkowsky AI debate in 2011. Boiling it down to one sentence, I'd say it amounts to Hanson thinking that a singleton Foom is a lot less likely than Yudkowsky thinks.

Is that more or less what it was in 2008?

Comment by Lapsed_Lurker on Argument by lexical overloading, or, Don't cut your wants with shoulds · 2012-10-23T07:53:10.544Z · LW · GW

I find it is the downsides of those things that I generally blame for not doing them, though I do own a Bon Jovi CD.

Comment by Lapsed_Lurker on [LINK] AI-boxing Is News, Somehow · 2012-10-19T14:01:09.841Z · LW · GW

…powers such as precognition (knowledge of the future), telepathy or psychokinesis…

Sounds like a description of magic to me. They could have written it differently if they'd wanted to evoke the impression of super-advanced technologies.

Comment by Lapsed_Lurker on [SEQ RERUN] Recognizing Intelligence · 2012-10-17T22:50:45.700Z · LW · GW

I hope that happens quickly. There are systems in my body that need some re-engineering, lest I die even sooner than the average Englishman.

Comment by Lapsed_Lurker on [SEQ RERUN] Recognizing Intelligence · 2012-10-17T22:47:16.250Z · LW · GW

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for making cheesecake.

Comment by Lapsed_Lurker on [SEQ RERUN] Recognizing Intelligence · 2012-10-17T13:08:27.582Z · LW · GW

Several comments on the original thread seem to be making a comparison between "I found a complicated machine-thing, something must have made it" and the classic anti-evolution "This looks complicated, therefore God".

I can't quite see how they can leap from one to the other.

Comment by Lapsed_Lurker on Problem of Optimal False Information · 2012-10-16T14:49:03.608Z · LW · GW

So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.

Comment by Lapsed_Lurker on Problem of Optimal False Information · 2012-10-16T11:34:47.838Z · LW · GW

I am worried about "a belief/fact in its class": the class chosen could have an extreme effect on the outcome.

Comment by Lapsed_Lurker on A presentation about Cox's Theorem made for my English class · 2012-10-16T07:47:00.000Z · LW · GW

An OpenOffice file, I think. edit It's an OpenDocument Presentation. You ought to be able to view it with more recent versions of MS Office, it seems.

Comment by Lapsed_Lurker on Less Wrong Polls in Comments · 2012-09-20T21:50:07.051Z · LW · GW

[pollid:49]

Comment by Lapsed_Lurker on Any existential risk angles to the US presidential election? · 2012-09-20T15:44:35.939Z · LW · GW

I was under the impression from reading stuff Gwern wrote that Intrade was a bit expensive unless you were using it a lot. Also, even assuming I made money on it, wouldn't I be liable for tax? I intend to give owning shares via a self-select ISA a go.

Comment by Lapsed_Lurker on Any existential risk angles to the US presidential election? · 2012-09-20T12:59:04.048Z · LW · GW

As a non-USian, my main interest in the election is watching the numbers go up and down on Nate Silver's blog.

Comment by Lapsed_Lurker on New study on choice blindness in moral positions · 2012-09-20T07:46:12.467Z · LW · GW

Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.

Comment by Lapsed_Lurker on [LINK] Meteorologists are Epistemically Rational · 2012-09-13T18:49:24.571Z · LW · GW

Using Opera Mini, I just delete the cookies (which then requires me to re-login to LW). It was much less annoying when the count-to-nag was 20, rather than 10.

Comment by Lapsed_Lurker on [LINK] Interfluidity on "Rational Astrologies" · 2012-09-11T13:37:39.451Z · LW · GW

Is this pretty much what gets called 'signalling' on LW? Anything you do in whole or in part to look good to people or because doing otherwise would make people think badly of you?

Comment by Lapsed_Lurker on What's your "rationalist arguing" origin story? · 2012-09-03T14:09:21.523Z · LW · GW

I'm not sure it counts as an origin story, but after I noticed a lot of discussions/arguments seemed to devolve into arguments about what words meant, or similar, I got the idea this was because we didn't 'agree on our axioms' (I'd studied some maths). Sadly, trying to get agreement on what we each meant by the things we disagreed on didn't seem to work - I think that the other party mostly considered it an underhanded trick and gave up. :(

Comment by Lapsed_Lurker on Neil Armstrong died before we could defeat death · 2012-08-26T20:36:28.570Z · LW · GW

"One death is a tragedy. One million deaths is a statistic."

If you want to remind people that death is bad, agreed, the death of individuals you know or feel like you know is worse than lots of people you never met or even saw.

Comment by Lapsed_Lurker on Neil Armstrong died before we could defeat death · 2012-08-26T17:35:16.707Z · LW · GW

Eulogies on arbitrary people might help with motivation, and if you're doing that you might as well choose one with a minor advantage, like not needing a long introduction to make the reader empathize, rather than choosing purely at random.

Are you suggesting that putting eulogies of famous people on LessWrong is a good idea? That sort of sounds like justifying something you've already decided.

Comment by Lapsed_Lurker on Neil Armstrong died before we could defeat death · 2012-08-25T23:57:32.397Z · LW · GW

~150,000 other people died today, too. Okay, Armstrong was hugely more famous than any of them, probably the most famous person to die this year, but what did he do for rationality, or AI, or other LessWrong interests (which I figure do include space travel, admittedly; presumably he wasn't signed up for cryogenic preservation)? The post doesn't say.

Yes, death is bad, and Armstrong is/was famous, possibly uniquely famous, but I don't think eulogies of famous people are on-topic.

Comment by Lapsed_Lurker on Biohacking in New York, Cybernetics and first Cyborg Hate Crime: theverge.com · 2012-08-20T21:17:13.054Z · LW · GW

Credit to Bakkot for having tried out and reported on magnetic rings, not me.

Comment by Lapsed_Lurker on Dreams of Friendliness · 2012-08-20T08:58:47.075Z · LW · GW

Holden Karnofsky thinks superintelligences with utility functions are made out of programs that list options by rank without making any sort of value judgement (basically answering a question), and then pick the one with the most utility.

Isn't 'listing by rank' 'making a (value) judgement'?

Comment by Lapsed_Lurker on Harder Choices Matter Less · 2012-08-18T08:36:31.509Z · LW · GW

In my recollection of just about any place I have eaten in the UK, there is no choice. They only ever have one cola or the other. Is this different in other parts of the world?

Comment by Lapsed_Lurker on Biohacking in New York, Cybernetics and first Cyborg Hate Crime: theverge.com · 2012-08-13T17:03:48.249Z · LW · GW

I thought that sensitivity might be the answer. Not that hearing that fairly sensitive perception of magnetic fields is possible makes me want the ability enough to stick magnets in my fingers. Yet.

I've heard about other superhuman sensory devices, like the compass-sense belt, though, and the more I hear about this stuff, the cooler it sounds. Perhaps sometime the rising interest and falling cost/inconvenience curves will cross for me. :)

Comment by Lapsed_Lurker on Biohacking in New York, Cybernetics and first Cyborg Hate Crime: theverge.com · 2012-08-10T11:12:48.787Z · LW · GW

I can see X-ray or terahertz scanners missing a tiny lump of metal, but aren't there a fair number of magnetic scanners in use looking for larger lumps of metal, which I'd think the magnet would interact fairly strongly with?

Comment by Lapsed_Lurker on Biohacking in New York, Cybernetics and first Cyborg Hate Crime: theverge.com · 2012-08-08T19:26:25.136Z · LW · GW

Judging by previous instances, you ought to put in more than just a link and also put [LINK] in the title, or else you are liable to get a bunch of downvotes.

[edit] OK, watched the first video, with people getting little rare-earth magnets put in their fingers so they can feel magnetic fields... Why not just get a magnetic ring? That way you can feel magnetic fields and don't risk medical complications and you don't have to stop for several minutes and explain every time you fly or go through one of those scanners I hear are relatively common in the US. [/edit]

Comment by Lapsed_Lurker on [Link] You Have No Idea How Wrong You Are · 2012-07-23T12:01:53.782Z · LW · GW

Well, they say that now. We have something that works better than what we had before. I commend Asimov's essay The Relativity Of Wrong.

Good to read that again. Thanks.

Comment by Lapsed_Lurker on Uploading: what about the carbon-based version? · 2012-07-23T11:15:28.870Z · LW · GW

I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.

I'd think that until even one of those little worms with only a couple hundred neurons is uploaded (or maybe a lobster), all evidence of the effectiveness of uploading is theory or fiction.

If computing continues to get cheaper at Moore's Law rates for another few decades, then maybe...
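As a rough back-of-the-envelope on that Moore's Law remark, here is a sketch (assuming the conventional reading that cost per unit of compute halves roughly every two years; the function name and doubling period are illustrative assumptions, not anything from the original comment):

```python
def cost_reduction(years, doubling_period=2.0):
    """Factor by which compute cost falls after `years` years,
    assuming cost halves every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# How much cheaper computing gets over one, two, and three decades
for decades in (1, 2, 3):
    factor = cost_reduction(10 * decades)
    print(f"{decades} decade(s): ~{factor:,.0f}x cheaper")
```

On those assumptions, "another few decades" means something like a 1,000x to 30,000x fall in cost, which is why the timescale matters so much to the feasibility question.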

Comment by Lapsed_Lurker on Uploading: what about the carbon-based version? · 2012-07-23T09:23:42.673Z · LW · GW

More generally, what would folks here consider to be good enough evidence that uploading was worth doing?

Good enough evidence that (properly done) uploading would be a good thing, as opposed to the status quo of tens of thousands of people dying every day, you mean?

[edit] If you want to compare working SENS to uploading, then I'd have to think a lot harder.

Comment by Lapsed_Lurker on Contaminated by Optimism · 2012-07-23T06:55:40.746Z · LW · GW

Wasn't that trick tried with Windows Vista, and people were so annoyed by continually being asked trivial "can I do this?" questions that they turned off the security?

Comment by Lapsed_Lurker on [LINK] Using procedural memory to thwart "rubber-hose cryptanalysis" · 2012-07-20T08:40:47.039Z · LW · GW

I think that the intention is to make forgetting your password as hard as forgetting how to ride a bicycle. Although I only remember the figure of '2 weeks' from reading about this yesterday.

Comment by Lapsed_Lurker on Nick Bostrom's TED talk and setting priorities · 2012-07-09T10:21:36.171Z · LW · GW

If you mostly solve the 'Ageing' and 'Unnecessary Unhappiness' problems, the youthful, happy populace will probably give a lot more weight to 'Things That Might Kill Everyone'.

I don't know about putting these things into proper categories, but I'm sure I'd be a lot more worried about the (more distant than a few decades) future if I had a stronger expectation of living to see it and I spent less time being depressed.

Comment by Lapsed_Lurker on The Fiction Genome Project · 2012-06-29T15:38:25.376Z · LW · GW

Just reading the title of this post, TVTropes came to mind, and there it was when I read it. That made me feel both good that I had made a successful prediction, and worried that it was probably me being biased by not remembering all the fleeting predictions that don't come true.

Comment by Lapsed_Lurker on Does rationalism affect your dreams? · 2012-05-25T22:53:08.374Z · LW · GW

I can't help you there. Not enough detail has survived the years.

Comment by Lapsed_Lurker on Does rationalism affect your dreams? · 2012-05-25T22:47:53.688Z · LW · GW

It has been more than a decade since then. All I have left are the less-reliable memories-of-memories of the dream. Having said that, I recall the dream being of text coloured like the MUD I was playing, but I am pretty sure that there was only the text. I don't even recall anything that happened in the dream or if I previously did and have forgotten.

Comment by Lapsed_Lurker on Does rationalism affect your dreams? · 2012-05-25T15:35:00.434Z · LW · GW

I very rarely recall any dreams, but I do remember one time, during a summer I spent playing a lot of MUD (Internet text-based game, primitive ancestor to World of Warcraft), that I had a dream in text.

Comment by Lapsed_Lurker on Tool for maximizing paperclips vs a paperclip maximizer · 2012-05-12T08:48:25.986Z · LW · GW

Why would anyone build tools for which the tool simply wireheading itself was a common failure mode?

Comment by Lapsed_Lurker on [deleted post] 2012-05-10T08:43:23.097Z

Well, I realize that personal health is a personal choice in most cases.

You might want to rethink your wording on that one. Perhaps 'personal health status is a consequence of previous choices in many cases' or something. As written it sounds a bit overstated.

Comment by Lapsed_Lurker on John Danaher on 'The Superintelligent Will' · 2012-04-04T08:39:07.835Z · LW · GW

And yet, several high-status Less Wrongers continue to affirm utilitarianism with equal weight for each person in the social welfare function. I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.

I'm not sure how that answers my question, or follows from it. Can you clarify?

Comment by Lapsed_Lurker on John Danaher on 'The Superintelligent Will' · 2012-04-03T12:00:32.508Z · LW · GW

I am not sure what 'accurate moral beliefs' means. By analogy with 'accurate scientific beliefs', it seems as if Mr Danaher is saying there are true morals out there in reality, which I had not thought to be the case, so I am probably confused. Can anyone clarify my understanding with a brief explanation of what he means?

Comment by Lapsed_Lurker on What is your rationality blind spot? · 2011-12-24T20:47:21.855Z · LW · GW

Not very sure. I've heard all sorts of assertions. I'm pretty sure that sugar and other carbs are a bad idea, since I've been diagnosed as diabetic. Also that too much animal fat and salt are bad - but thinking that things are bad doesn't always stop me indulging :(

The UK government recommends five portions (handful-sized) of different fruit and vegetables per day, but I don't even manage to do that, most days.

Sadly, the last time I got an appointment to talk about my diet, the nurse I had an appointment with turned out to be fatter than I am, and absolutely everything she said has slipped my memory, perhaps because I fail to believe the dieting advice of a fat nurse.

I think if I were given a few simple "doctor's orders" about food, I might be able to follow them, but don't think I can possibly hold dozens or hundreds of rules about food in my head - which is what all the stuff I recall reading consists of.