Comments

Comment by Humbug on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T20:31:04.640Z · LW · GW

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more details here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk-averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.

Comment by Humbug on Examples in Mathematics · 2013-12-15T11:39:12.199Z · LW · GW

I shall discuss many concepts, later in the book, of a similar nature to these. They are puzzling if you try to understand them concretely, but they lose their mystery when you relax, stop worrying about what they are, and use the abstract method.

Timothy Gowers in Mathematics: A Very Short Introduction, p. 34

Comment by Humbug on 2013 Census/Survey: call for changes and additions · 2013-11-05T15:16:29.114Z · LW · GW

How many people have been or are still worried about the basilisk is more important than whether people disagree with how it has been handled. It is possible to be worried and also disagree with how it was handled, if you expect that maintaining silence about its perceived danger would have exposed fewer people to it.

In any case, I expect LessWrong to be smart enough to dismiss the basilisk in a survey, in order not to look foolish for taking it seriously. So any such question would be of little value as long as you do not take measures to make sure that people are not lying, e.g. by asking specific multiple-choice questions that can only be answered correctly by someone who has read the RationalWiki entry about the basilisk, or the LessWrong Wiki entry that amusingly reveals most of the details but which nobody who cares has taken note of so far. Anyone who is seriously worried about it would not take the risk of reading up on the details.

Comment by Humbug on Thoughts on the Singularity Institute (SI) · 2012-05-18T09:50:01.106Z · LW · GW

He or someone else must have explained at some point, or I wouldn't know his reason was that the article was giving a donor nightmares.

This is half the truth. Here is what he wrote:

For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.

Comment by Humbug on SotW: Check Consequentialism · 2012-03-30T08:40:35.205Z · LW · GW

I can't believe you missed the chance to say, "Taboo pirates and ninjas."

"Pirates versus Ninjas is the Mind-Killer"

Comment by Humbug on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-29T19:20:06.646Z · LW · GW

“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”

Sing karaoke...

Now I can't get the image of Eliezer singing 'I am the very model of a singularitarian' out of my head...

Comment by Humbug on The Singularity Institute's Arrogance Problem · 2012-01-28T10:52:20.687Z · LW · GW

The primary issue with the Roko matter wasn't so much what an AI might actually do, but that the relevant memes could cause some degree of stress in neurotic individuals.

The original reasons given:

Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

...and further:

For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

(emphasis mine)

Comment by Humbug on Safe questions to ask an Oracle? · 2012-01-28T10:35:36.373Z · LW · GW

What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?

If the Oracle we are talking about were specifically designed to do that, for the sake of the thought experiment, then yes. But I don't see why it would make sense to build such a device, or that building one is even likely to be possible.

If Apple were going to build an Oracle, it would anticipate that other people would also want to ask it questions. Therefore the Oracle can't just waste all of its resources looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. It would not devote additional resources to questions whose answers are already known to be correct with high probability. I just don't see how it would be economically useful to take over the universe to answer simple questions.

I further do not think it would be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions, an Oracle needs a good amount of general intelligence, and concluding that a request to solve 1+1 implies a search for an inconsistency arising from the Peano axioms does not seem reasonable. Nor does it seem reasonable to assume that humans want the certainty of an answer to approach infinity. Why would anyone build such an Oracle in the first place?

I think a reasonable Oracle would quickly yield good solutions by looking, within a reasonable time, for answers that are with high probability just 2–3% away from the optimal solution. I don't think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
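A minimal sketch of the kind of "good enough within a time budget" answering loop described above, assuming hypothetical improve and estimate_gap routines supplied by the Oracle's solver (both are illustrative placeholders, not anything from an existing system):

```python
import time

def anytime_answer(improve, estimate_gap, time_budget_s=1.0, tolerance=0.03):
    """Refine a candidate answer until its estimated gap from the optimal
    solution drops below `tolerance` (~2-3%) or the time budget runs out."""
    best = None
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        best = improve(best)                 # spend one slice of work refining the candidate
        if estimate_gap(best) <= tolerance:  # close enough to optimal: stop, don't chase certainty
            break
    return best
```

The point of the sketch is only that the stopping rule is part of the design; nothing forces an answering machine to keep refining a result it already trusts.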

Comment by Humbug on Safe questions to ask an Oracle? · 2012-01-27T19:07:16.377Z · LW · GW

I am not sure what exactly you mean by "safe" questions. Safe in what respect? Safe in the sense that humans can't do something stupid with the answer, or in the sense that the Oracle isn't going to consume the whole universe to answer the question? Well... I guess asking it to solve 1+1 could hardly lead to dangerous knowledge, and it would be incredibly stupid to build something that takes over the universe to make sure that its answer is correct.

Comment by Humbug on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-01-26T12:10:07.511Z · LW · GW

We have tried to discuss topics like race and gender many times, and always failed.

The overall level of rationality of a community should be measured by its ability to have a sane and productive debate on those topics, and on politics in general.

Comment by Humbug on 12-year old challenges the Big Bang · 2012-01-17T16:41:39.578Z · LW · GW

So, did anyone actually save Roko's comments before the mass deletion?

Google Reader fetches every post and comment made on LessWrong. Editing or deleting won't remove them. All comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
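For illustration only, a minimal sketch of reading such a comment feed with the third-party feedparser library; the feed URL is a placeholder and may not match the feeds LessWrong actually exposes:

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feed address; substitute whatever comment feed you subscribe to.
FEED_URL = "http://lesswrong.com/comments/.rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # A reader that polls the feed keeps its own copy of each entry,
    # so later edits or deletions on the site do not touch the archive.
    print(entry.get("published"), entry.get("title"), entry.get("link"))
```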

Comment by Humbug on Whole Brain Emulation: Looking At Progress On C. elgans · 2011-10-29T19:25:13.208Z · LW · GW

None of the simulation projects have gotten very far...this looks to me like it is a very long way out, probably hundreds of years.

Couldn't you say the same about AGI projects? It seems to me that one of the reasons some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.

Comment by Humbug on [LINK] Terrorists target AI researchers · 2011-09-15T15:34:37.966Z · LW · GW

In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.

Comment by Humbug on [LINK] Terrorists target AI researchers · 2011-09-15T15:30:06.422Z · LW · GW

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Go batshit crazy.

Comment by Humbug on An Outside View on Less Wrong's Advice · 2011-07-12T15:21:24.876Z · LW · GW

...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

One example would be the policy not to talk about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.

Comment by Humbug on [deleted post] 2011-07-12T14:11:51.646Z

What he's talking about is knowledge that's objectively harmful for someone to have.

Someone should make a list of knowledge that is objectively harmful. It could come in handy if you want to avoid running into such knowledge accidentally. Or we could just ban the medium used to spread it, in this case natural language.

Comment by Humbug on [deleted post] 2011-07-11T20:00:41.013Z

No one is seriously disputing where the boundary between basilisk and non-basilisk lies...

This assumes that everyone knows where the boundary lies. The original post by Manfred either crossed the boundary or it didn't. In the case that it didn't, it only serves as a warning sign of where not to go. In the case that it did, how is your knowledge of the boundary not a case of hindsight bias?

Comment by Humbug on [deleted post] 2011-07-11T19:16:24.966Z

...before exposing the public to something that you know that a lot of people believe to be dangerous.

The pieces of the puzzle that Manfred put together can all be found on LessWrong. What do you suggest, that research into game and decision theory be banned?

Comment by Humbug on [deleted post] 2011-07-11T18:56:43.432Z

I obviously think it's safe.

Be careful about trusting Manfred; he is known to have destroyed the Earth on at least one previous occasion.