Posts

On the Boxing of AIs 2015-03-31T21:58:08.749Z · score: 0 (3 votes)
The Hardcore AI Box Experiment 2015-03-30T18:35:19.385Z · score: 3 (12 votes)
Boxing an AI? 2015-03-27T14:06:19.281Z · score: 2 (13 votes)

Comments

Comment by tailcalled on Dissolving Scotsmen · 2018-05-11T11:45:27.937Z · score: 4 (2 votes) · LW · GW

Maybe this applies in most cases? I don't know; I've worked on making communities for puppy-cuddlers that I thought ended up too psychotic-monkey-Czarist. At the very least, it seems unfair to ignore the fact that a Blue coalition does exist, even if Bob is also overreacting in his accusations.

Comment by tailcalled on Dissolving Scotsmen · 2018-05-11T09:36:07.139Z · score: 8 (3 votes) · LW · GW

We can also use our new framework for understanding the No True Scotsman fallacy. Alice isn't equivocating between two definitions of Blue. She is consistently using "Blue" to mean "supporting cuddling puppies". It is, in fact, Bob who is equivocating between "Blue" meaning "supporting giving psychotic monkeys control over the economy" and "Blue" meaning "supporting cuddling puppies".

This seems to be assuming that coalitions don't exist. Suppose Alice starts a puppy-cuddling club. How likely is this to get filled with Blues? And how likely are those Blues to start strategizing about how to give psychotic monkeys control over the economy? My expectation would be that it's a lot higher than the base rate, but how much higher probably depends on many factors, such as how Blue-coded puppy-cuddling is and how many Blues support psychotic monkey economy Czars.

Comment by tailcalled on 2016 LessWrong Diaspora Survey Results · 2016-05-01T19:17:54.782Z · score: 3 (3 votes) · LW · GW

2.12% MtF, 0.75% FtM, 5.42% other of which roughly (just eyeballing, not counting) 33% seem trans. That'd be ~4.66% trans. Thoughts?
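Just to make the arithmetic explicit, a minimal sketch (the 33% is the same eyeballed figure as above, not a counted one):

```python
# Rough check of the estimate above (eyeballed 33% of "other" being trans).
mtf = 2.12
ftm = 0.75
other = 5.42
trans_share_of_other = 0.33  # eyeballed, not counted

total_trans = mtf + ftm + other * trans_share_of_other
print(round(total_trans, 2))  # ~4.66
```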

Comment by tailcalled on LessWrong 2.0 · 2015-12-04T14:35:00.942Z · score: 0 (2 votes) · LW · GW

I think /r/SlateStarCodex fulfills some of these. As a Focal Point/News Organisation, it is evolving into a general rationalist-ish subreddit, with about 9/25 articles on the front page not written by Scott. It also serves some of the Stack Overflow-like function, with about 4/25 front page posts being questions related to various SSC/rationalist things.

Comment by tailcalled on Unbounded linear utility functions? · 2015-10-13T16:49:31.622Z · score: 0 (0 votes) · LW · GW

I hold the hard-nosed position that moral philosophies (including utilitarianism) are human inventions which serve the purpose of facilitating large-scale coordination.

This is one part of morality, but there also seems to be the question of what to (coordinate to) do/build.

Comment by tailcalled on Simulations Map: what is the most probable type of the simulation in which we live? · 2015-10-13T16:45:01.904Z · score: 1 (1 votes) · LW · GW

Of course, but OTOH, we have simulated a lot of tiny, strange universes, so it's not completely implausible.

Comment by tailcalled on Simulations Map: what is the most probable type of the simulation in which we live? · 2015-10-12T21:49:08.434Z · score: 1 (1 votes) · LW · GW

It could be that the 'external' world is completely different and way, way bigger than our world. Their world might be to our world what our world is to a simple game of life simulation.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-07T19:32:20.051Z · score: 0 (0 votes) · LW · GW

By the way, you asked for a helpless-view deconversion. TomSka just posted one, so...

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-07T18:52:12.193Z · score: 0 (0 votes) · LW · GW

Hold on. I thought the helpless view was for the "dumb masses". They are certainly not able to figure out what the "international mainstream consensus" is. Hell, even I have no idea what it is (or even what it means).

The "dumb masses" here are not defined as being low-IQ, but just low-rationality. Low-IQ people would probably be better served just doing what people around them are doing (or maybe not; I'm not an expert in low-IQ people).

A simple example: Western democracy. What's the "international mainstream consensus"?

Well, one of the first conclusions to draw with helpless view is "politics is too complicated to figure out". I'm not sure I care that much about figuring out if democracy is good according to helpless view. The UN seems to like democracy, and I would count that as helpless-view evidence in favor of it.

I would guess it says that Western-style democracy needs a strong guiding hand lest it devolves into degeneracy and amoral chaos.

I would guess that there is an ambiguously pro-democratic response. 48% of the world lives in democracies, and the places that aren't democratic probably don't agree as much on how to be un-democratic as the democratic places agree on how to be democratic.

For the purpose of promoting/recommending either the independent view or the helpless view.

Whoever does the promoting/recommending seems like a natural candidate, then.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-07T17:30:26.208Z · score: 0 (0 votes) · LW · GW

Sorry, under the helpless approach you cannot conclude anything, much less on the basis of something as complex as a cross-cultural comparative religion analysis. If you are helpless, you do what people around you do and think what people around you think. The end.

It seems like we are thinking of two different views, then. Let's keep the name 'helpless view' for mine and call yours 'straw helpless view'.

The idea behind helpless view is that you're very irrational in many ways. Which ways?

  1. You're biased in favor of your ingroups and your culture. This feels like your ingroups are universally correct from the inside, but you can tell that it is a bias from the fact that your outgroups act similarly confident.

  2. You're biased in favor of elegant convincing-sounding arguments rather than hard-to-understand data.

  3. Your computational power is bounded, so you need to spend a lot of resources to understand things.

  4. Mount Stupid

There are obviously more, but biases similar to those are the ones the helpless view is intended to fight.

The way it fights those biases is by not allowing object-level arguments, arguments that favor your ingroups or your culture over others, and things like that.

Instead, in helpless view, you focus on things like:

  1. International mainstream consensus. (Things like cross-cultural analysis on opinions, what organizations like the UN say, etc.)

  2. Expert opinions. (If the experts, preferably in your outgroup, agree that something is wrong, rule it out. Silence on the issue does not let you rule it out.)

  3. Things that you are an expert on. (Looking things up on the web does not count as expert.)

  4. What the government says.

(The media are intentionally excluded.)

Oh-oh. I'm failing this test hard.

evil grin

Besides, are you quite sure that you want to make an untestable article of faith with zero practical consequences the criterion for rationality? X-/

Nah, it was mostly meant as a semi-joke. I mean, I like the criterion, but my reasons for liking it are not exactly unbiased.

If I were to actually make a rationality test, I would probably look at the ingroups/outgroups of the people I make the test for, include a bunch of questions about facts where there is a lot of ingroup/outgroup bias, and look at the answers to that.

That's an argument against the helpless view, right? It sure looks this way.

Except that we live in the current world, not the counterfactual world, and in the current world the helpless view tells you not to believe conspiracy theories.

Well, yes, I'm going to notice and I generally have little trouble figuring out who's stupid and who is smart. But that's me and my personal opinion. You, on the other hand, are setting this up as a generally applicable rule. The problem is who decides. Let's say I talk a bit to Charlie and decide that Charlie is stupid. Charlie, on the basis of the same conversation, decides that I'm stupid. Who's right? I have my opinion, and Charlie has his opinion, and how do we resolve this without pulling out IQ tests and equivalents?

It's essentially a power and control issue: who gets to separate people into elite and masses?

I dunno.

For what purpose are you separating the people into elite and masses? If it's just a question of whom to share dangerous knowledge with, there's the obvious possibility of just letting whoever wants to share said dangerous knowledge decide.

In your setup there is the Arbiter -- in your case, yourself -- who decides whether someone is smart (and so is allowed to use the independent approach) or stupid (and so must be limited to the helpless approach). This Arbiter has a certain level of IQ. Can the Arbiter judge the smartness/rationality of someone with noticeably higher IQ than the Arbiter himself?

I don't know, because I have a really high IQ, so I don't usually meet people with noticeably higher IQ. Do you have any examples of ultra-high-IQ people who write about controversial stuff?

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-06T22:14:58.301Z · score: 0 (0 votes) · LW · GW

The reason I brought it up is that there is no default "do what the mainstream does" position there. The mainstream is religious and the helpless view would tell you to be religious, too.

Of course, but you can ask about the asymmetry between $yourcountry, the USA, Germany, Italy, Japan and Israel (or whichever group of places you prefer). These places have wildly different attitudes to religion (or, well, at least they follow different religions, somewhat), with no-one being in a better position in terms of figuring out the right religion, so you can conclude that even if some religion is correct, we don't know which one.

I don't have much experience with deconversions, but even looking at personal stories posted on LW, they seem to rotate around doubting particular elements on the objective level, not on the "this belief is too weird" level.

Something something selection bias.

Anyway, I don't know about religious deconversion, but I know I've had a lot of stupid political views that I've removed by using helpless view.

IIRC my brother deconverted via helpless view, but I might misremember. Either way, that would be n=1, so not that useful.

Well, yes, but "rationality" is not terribly well defined and is a whole other can of worms. In particular, I know how to measure IQ and I know how it's distributed in populations and sub-groups. I do NOT know how to measure the degree of rationality and what's happening with it in populations. That makes discussions of rationality as an empirical variable handwavy and... not standing on solid data ground.

I quite like Eliezer's suggestion of using the question of MWI as a test for rationality, but I'm biased; I independently discovered it as a child. :P

First, signaling loyalty could be a perfectly rational thing to do. Second, there is the issue of the difference between signaling and true beliefs -- signaling something other than what you believe is not uncommon.

The problem here is that there isn't necessarily a difference between signaling and true beliefs. Imagine your outgroup saying the most ridiculous thing you can think of. That thing is likely a kind of signaling, but in some ways (not all, though) it acts like a belief.

You have priors, don't you?

... I can sometimes for simplicity's sake be modeled as having priors, but we're all monkeys after all. But yeah, I know what you mean.

Presumably, quite strong priors about certain things?

Sure. But if I lived in a world where most people believed the holocaust is a hoax, or a world where it was controversial whether it was a hoax but the knowledge of the evidence was distributed in the same way as it is today, I'm pretty sure I would be a holocaust denier.

(Of course, in the second case the evidence in favor of the holocaust having happened would rapidly emerge, completely crush the deniers, and send us back to the current world, but we're "preventing" this from happening, counterfactually.)

Anyway, this shows that a large part of my belief in the holocaust comes from the fact that everybody knows holocaust deniers are wrong. Sure, the evidence in favor of the holocaust is there, but I (assume I would have (I haven't actually bothered checking what the deniers are saying)) no way of dealing with the deniers' counterarguments, because I would have to dig through mountains of evidence every time.

(If holocaust deniers are actually trivial to disprove, replace them with some other conspiracy theory that's trickier.)

How do you know who is who? And who gets to decide? If I am talking to someone, do I first have to classify her into enlightened or unenlightened?

Well, most of the time, you're going to notice. Try talking politics with them; the enlightened ones are going to be curious, while the unenlightened ones are going to push specific things. Using the word 'majoritarian' for the helpless view might have made it unclear that in many cases, it's a technique used by relatively few people. Or rather, most people only use it for subjects they aren't interested in.

However, even if you can't tell, most of the time it's not going to matter. I mean, I'm not trying to teach lists of biases or debate techniques to every person you talk to.

That's not a winning line of argument -- its argument from popularity can easily be shut down by pointing out that a lot more smart people are not worried, and the helpless approach tells you not to pick fringe views.

Gates is one of the most famous people within tech, though. That's not exactly fringe.

Actually, I just re-read your scenario. I had understood it as if Alice subscribed to the helpless view. I think that in this case, Bob is making the mistake of treating the helpless view as an absolute truth, rather than a convenient approximation. I wouldn't dismiss entire communities based on weak helpless view knowledge; it would have to be either strong (i.e. many conspiracy theories) or independent view.

In the case described in the OP, we have strong independent view knowledge that the pseudoscience stuff is wrong.

The basic question is, how do you know? In particular, can you consistently judge the rationality of someone of noticeably higher IQ?

I think so. I mean, it even had a Seer who hosted a reasonably popular event at the club, so... yeah. IQ high, rationality at different levels.

Also, 'noticeably higher IQ' is ambiguous. Do you mean 'noticeably higher IQ' than I have? Because it was just an ordinary high-IQ thing, not an extreme IQ thing, so it's not like I was lower than the average of that place. I think its minimum IQ was lower than the LW average, but I might be mixing up some stuff.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-06T19:09:46.433Z · score: 0 (0 votes) · LW · GW

I understand "outside view" a bit more traditionally and treat it as a forecasting technique.

The thing is, you can apply it more widely than just forecasting. Forecasting is just trying to figure out the future, and there's no reason you should limit yourself to the future.

Anyway, the way I see it, in inside view, both when forecasting and when trying to figure out truth, you focus on the specific problem you are working on, try to figure out its internals, etc. In outside view, you look at things outside the problem, like the track record of similar things (which I, in my list, called "looks like cultishness"; arguably I could have named that better), others' expectations of your success (hey bank, I would like to borrow money to start a company! what, you don't believe I will succeed?), etc. Perhaps 'outside view' isn't a good term either (which kinda justifies me calling it majoritarianism to begin with...), but whatever. Let's make up some new terms, how about calling them the helpless and the independent views?

Why, yes, I do. In fact, I think it's the normal process of extracting oneself from "a web of lies" -- you start by realizing you're stuck in one. Of course, no one said it would be easy.

Well, how often does it happen?

An example -- religious deconversion. How do you think it will work in your system?

How much detail do you want it in, and how general do you want it to be? What is the starting point of the person who needs to be deconverted? Actually, to skip all these kinds of questions, could you give an example of how you would describe deconversion working in your system?

Well, this theory implies some consequences. For example, it implies high negative correlation between IQ (or more fuzzy "smartness") and the strength of tribal affiliation. Do we observe it?

IQ != rationality. I don't know if there is a correlation, and if there is one, I don't know in which direction. Eliezer has made a good argument that higher IQ gives a wider possible range of rationality, but I don't have the evidence to support that.

Anyway, I at least notice that when people are wrong, it's often because they're trying to signal loyalty to their tribe (of course, there is often an opposing tribe that is correct on the question where the first one was wrong...). This is anecdotal, though, so YMMV. What do you observe? That people who have made certain answers to certain questions part of their identity are more likely to be correct?

The theory also implies that if the tribal affiliation increases (e.g. because your country got involved in a military conflict), everyone suddenly becomes much dumber. Do we observe that?

...probably? Not so much with military conflicts, because you are not doing as much politics as you are doing fighting, but I generally see that if a discussion becomes political, everybody starts saying stupid stuff.

I don't know about that. You think of winning a debate in high-school debate club terms, or maybe in a TV debate terms -- the one who scores the most points with the judges wins. That's not how real life operates. The way for the conspiracy theorist to win the debate is to convince you. Unless you became a believer at the end, he did NOT win the debate. Most debates end in a draw.

But the only reason I don't get convinced is because of the helpless view (and, of course, things like tribalism, but let's pretend I'm a bounded rationalist for simplicity). In the independent view, I see lots of reasons for believing him, and I have no good counterarguments. I mean, I know that I can find counterarguments, but I'm not going to do that after the debate.

In fact, I don't see how your approach is compatible with being on LW.

Again, I believe in an asymmetry between people who have internalized various lessons on tribalism and other people. I agree that if I did not believe in that asymmetry, I would not have good epistemic reasons for being on LW (though I might have other good reasons, such as entertainment).

What can Alice reply to Bob? She is, in fact, not a Ph.D. and has no particular expertise in AI.

"Smart people like Bill Gates, Stephen Hawking and Elon Musk are worried about AI along with a lot of experts on AI."

This should also be a significant factor in her belief in AI risk; if smart people or experts weren't worried, she should not be either.

I don't think you can extrapolate from very-low-IQ people to general population. By the same token, these people should not manage their own money, for example, or, in general, lead an unsupervised life.

I've been in a high-IQ club and not all of them are rational. Take selection effects into account and we might very well end up with a lot of irrational high-IQ people.

Comment by tailcalled on Rationality Quotes Thread September 2015 · 2015-10-06T17:33:44.496Z · score: 1 (1 votes) · LW · GW

Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn't, I suppose you mean that the starting conditions are absent.

You need to know not just the starting conditions, but also the position where morality evolves. That position can theoretically have huge complexity.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T22:07:47.424Z · score: 0 (0 votes) · LW · GW

Sure, but inside view/contrarianism/knowledge of most biases seem like things that ideally should be reserved for when you know what you're doing, which the person described in the OP probably doesn't.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T20:49:07.248Z · score: 0 (0 votes) · LW · GW

Besides, what do you call "mainstream" -- the current scientific consensus or the dominant view among the population? They diverge on a fairly regular basis.

Perhaps I focused too much on 'mainstream' when I really meant 'outside view'. Obviously, outside view can take both of these into account to different degrees, but essentially, the point is that I think teaching the person to use outside view is better, and outside view is heavily biased (arguably justifiably so) in favor of the mainstream.

I doubt just being a contrarian in some aspect lifts you into "elite" status (e.g. paleo diets, etc.)

But that's my point: a lot of different contrarian groups have what the OP calls "a web of lies that sound quite logical and true". Do you really think you can teach them how to identify such a web of lies while they are stuck in one?

Instead, I think you need to get them unstuck using outside view, and then you can teach them how to identify truth correctly.

I don't understand. Are you saying that the masses are dumb because (causative!) the tribal affiliation is strong with them??

Yes. The masses try to justify their ingroup, they don't try to seek truth.

Another claim I strongly disagree with. Following this forces you to believe everything you're told as long as sufficient numbers of people around you believe the same thing -- even though it's stupid on the object level. I think it's a very bad approach.

The way I see it is this: if I got into a debate with a conspiracy theorist, I'm sure they would have much better object-level arguments than I do; I bet they would be able to consistently win when debating me. The reason for this is that I'm not an expert on their specific conspiracy, while they know every single shred of evidence in favor of their theory. This means that I need to rely on meta-level indicators, like nobody respecting holocaust deniers, in order to determine the truth of their theories, unless I want to spend huge amounts of time researching them.

Sure, there are cases where I think I can do better than most people (computer science, math, physics, philosophy, gender, generally whatever I decide is interesting and start learning a lot about), and in those cases I'm willing to look at the object level, but otherwise I really don't trust my own ability to figure out the truth - and I shouldn't, because it's necessary to know a lot of the facts before you can even start formulating sensible ideas on your own.

If we take this to the extreme where someone doesn't understand truth, logic, what constitutes evidence or anything like that, I really would start out by teaching them how to deal with stuff when you don't understand it in detail, not how to deal with it when you do.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T19:51:29.339Z · score: 0 (0 votes) · LW · GW

So, is this an elites vs dumb masses framework?

For contrarianism (e.g. atheism, cryonics, AI, reductionism) to make epistemological sense, you need an elites vs dumb masses framework, otherwise you can't really be justified in considering your opinion more accurate than the mainstream one.

Once we have the framework, the question is what causes the dumb masses. Personally, I think it's tribal stuff, which means that I honestly believe tribalism should be solved before people can be made more rational. In my experience, tribal stuff seemed to die down when I got more accepting of majoritarianism (because if you respect majoritarianism, you can't really say "the mainstream is silencing my tribe!" without having some important conclusions to draw about your tribe).

Your approach seems to boil down to "First, they need to sit down, shut up, listen to authority, and stop getting ideas into their head. Only after that we can slowly and gradually start to teach them". I don't think it's a good approach -- either desirable or effective. You don't start to reality-adjust weird people by performing a lobotomy. Not any more, at least.

It's probably not a good approach for young children or similarly open minds, but we're not working with a blank slate here. Also, it's not like the policies I propose are Dark Side Epistemology; avoiding the object level is perfectly sensible if you are not an expert.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T19:13:18.518Z · score: 0 (0 votes) · LW · GW

In this case I would like to declare myself a big fan of the Inside View and express great distrust of the Outside View.

Well, that makes sense for people who know what they are talking about, are good at compensating for their biases and avoid tribal politics. Less so for people who have trouble with rationality.

Remember: I'm not against doing stuff in Inside View, but I think it will be hard to 'fix' completely broken belief systems in that context. You're going to have trouble even agreeing what constitutes a valid argument; having a discussion where people don't just end up more polarized is going to be impossible.

Heh. Otherwise? You just said they're engaging in tribal politics anyway and I will add that they are highly likely to continue to do so. If you don't want to teach them anything until they stop, you just will not teach them anything, period.

I want to teach them to not get endlessly more radical before I teach anything else. Then I want to teach them to avoid tribalism and stuff like that. When all of that is done, I would begin working on the object-level stuff. Doing it in a different order seems doomed to failure, because it's very hard to get people to change their minds.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T18:18:09.508Z · score: 0 (0 votes) · LW · GW

Teaching people to notice fallacies explicitly pushes them into the meta (reflective) mode and promotes getting out of the inside view.

By Inside View I meant focusing on object-level arguments, which a lot of bias/fallacy teaching supports. The alternative would be meta-level Outside View, where you do things like:

  • Assume people who claim to be better than the mainstream are wrong.

  • Pay greater attention to authority than arguments.

  • Avoid things that sound cultish.

  • etc.

Oh. It's even worse -- I read you as "keep 'em ignorant so they don't hurt themselves" and here you are actually saying "keep 'em ignorant because they are my tribal enemies and I don't want them to get more capable".

I'm actually saying that everybody, friend or foe, who engages in tribal politics, should be taught to... not engage in tribal politics. And that this should be done before we teach them the most effective arguments, because otherwise they are going to engage in tribal politics.

And why is tribal politics bad? Cuz it doesn't lead to truth/a better world, but instead to constant disagreement.

That's... a common misunderstanding. Rational people can be expected to agree with each other on facts (because science). Rational people can NOT be expected to agree, nor do they, in fact, agree on values and, accordingly, on goals, and policies, and appropriate trade-offs, etc. etc.

Sure. But most of the time, they seem to disagree on facts too.

Recall your original statement: "attempting to go from irrational contrarian to rational contrarian ... without passing through majoritarian seems like something that could really easily backfire". What are the alternatives? Do you want to persuade people that the mainstream is right, and once you've done that do you want to turn around and persuade them that the mainstream is wrong? You think this can't backfire?

I think it will backfire less.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T17:13:13.178Z · score: 2 (4 votes) · LW · GW

That's meaningless hand-waving. Do you have evidence?

I don't think it's fair to say that it is meaningless. Surely it must convey some meaning, arguably a lot of it. For example, it includes the advice of making people trust authorities more, and a critique of certain traditional rationalist ideas.

In terms of evidence... well, I don't have scientific evidence, but obviously I have anecdotes and some theory behind my belief. I can write the anecdotes if you think you're going to find knowing their details relevant, but for now I'll just skip them, since they're just anecdotes.

The theory behind my claim can roughly be summed up in a few sentences:

  • Inside view was what got them into this mess to begin with.

  • This seems to be something to do with tribal politics, which is known for being annoying and hard to deal with. Probably best to not give them ammunition.

  • People who know a lot about biases don't seem to be any better at agreeing with each other (instead, they seem to argue much more), which indicates that they're not that rational.

Essentially, don't try to teach people 'hard mode' until they can at least survive 'easy mode'.

By the way, if it's extremely dangerous, maybe we should shut down LW -- unenlightened people can get ideas here that "could really easily backfire", couldn't they?

'Extremely dangerous' could be considered hyperbole; what I meant is that if you push them down into the hole of having ridiculous ideas and knowing everything about biases, you might never be able to get them out again.

I don't think the Sequences are that dangerous, because they spend a lot of time trying to get people to see problems in their own thinking (that's the entire point of the Sequences, isn't it?). The problem is that actually doing that is tricky. Eliezer has had a lot of community input in writing them, so he has an advantage that the OP doesn't have. Also, he didn't just focus on bias, but also on a lot of other (IMO necessary) epistemological stuff. I think they're hard to use for dismissing any opposing argument.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T15:23:13.302Z · score: 9 (9 votes) · LW · GW

No, but attempting to go from irrational contrarian to rational contrarian (thinking about arguments, for instance by considering fallacies, is contrarian-ish) without passing through majoritarian seems like something that could really easily backfire.

Comment by tailcalled on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T15:07:42.371Z · score: 9 (13 votes) · LW · GW

The one thing I've come up with is to somehow introduce them to classical logical fallacies.

That seems extremely dangerous. Most of the time, this will just make people better at rationalization, and many things that are usually considered fallacies are actually heuristics.

Comment by tailcalled on Magic and the halting problem · 2015-08-23T20:21:23.450Z · score: 4 (4 votes) · LW · GW

I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction, there is an Everett Branch in which magic is real.

Nope. The laws of physics are the same in all branches.

Or at least every time someone chants an incantation, by total coincidence, the desired effect occurs.

Those branches would be extremely rare.

Alan Turing pondered a related problem known as the halting problem, which asks if a general algorithm can distinguish between an algorithm that will finish and one that will run forever.

I don't find it very obvious how this is related.

So how would a person distinguish between pseudo-magic that will inevitably fail, and real magic that is the true laws of physics?

The pseudo-magic will, with large probability, fail the next time you test it.
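As a toy calculation (the base rate below is made up): if the desired effect has some small chance p of happening by coincidence on any given chant, the probability of n consecutive coincidences is p^n, which shrinks very fast.

```python
# Toy calculation (made-up base rate): coincidental "magic" under repeated tests.
p = 0.01  # assumed chance the desired effect happens by coincidence per chant
for n in (1, 2, 5, 10):
    print(n, p ** n)  # probability that all n test chants succeed by coincidence
```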

And finally, what if our entire understanding of reality, including logic, is mere deception by happenstance, and everything we think we know is false?

Then you would find out very soon, unless you postulate something to keep the system stable.

Comment by tailcalled on Rationality Reading Group: Part G: Against Rationalization · 2015-08-13T16:53:06.849Z · score: 4 (4 votes) · LW · GW

That way, you can take it as a kind of evidence/argument, instead of a Bottom Line - like an opinion from a supposed expert which tells you that "X is Y", but doesn't have the time to explain. You can then ask: "is this guy really an expert?" and "do other arguments/evidence outweigh the expert's opinion?"

Note that both for experts and for your intuition, you should consider that you might end up double-counting the evidence if you treat them as independent of the evidence you have found - if everybody is doing everything correctly (which very rarely happens), you, your intuition and the experts should all know the same arguments, and naive thinking might double/triple-count the arguments.
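As a toy illustration of that double-counting failure mode (all numbers made up): if you and the expert are both reacting to the same piece of evidence, multiplying your updates together counts that evidence more than once.

```python
# Toy illustration (made-up numbers) of double-counting shared evidence.
prior_odds = 1.0       # 1:1 prior odds on some claim "X is Y"
lr_evidence = 4.0      # likelihood ratio of the evidence everyone has seen

# Correct: the expert's opinion is based on the same evidence you already
# counted, so it contributes no further independent update.
correct_odds = prior_odds * lr_evidence            # 4:1

# Naive: treating "my read" and "the expert's read" as independent
# multiplies the same likelihood ratio in twice.
double_counted_odds = prior_odds * lr_evidence**2  # 16:1

print(correct_odds, double_counted_odds)
```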

Comment by tailcalled on Rationality Reading Group: Part G: Against Rationalization · 2015-08-13T10:24:53.926Z · score: 5 (5 votes) · LW · GW

Well, that Bottom Line is generated by your intuition, and your intuition is probably pretty good at weighing evidence to find a good Bottom Line (with the caveat that your intuition probably does a lot more signalling than truth-seeking). In principle, though, that means that you don't have to justify that bottom line, at least not as much; instead, it would be more productive to search for flaws. The most likely flaw is that your subconscious figured "My friends like this conclusion" rather than "This conclusion is true".

Comment by tailcalled on Versions of AIXI can be arbitrarily stupid · 2015-08-12T21:35:34.232Z · score: 1 (1 votes) · LW · GW

I think my claim was that your example was kinda bad, since it's not obvious that the AI is doing anything wrong, but on reflection I realized that it doesn't really matter, since I can easily generate a better example.

Comment by tailcalled on Versions of AIXI can be arbitrarily stupid · 2015-08-11T18:00:45.573Z · score: 2 (2 votes) · LW · GW

Do you have a better proposal for how to act if you know that you are in either heaven or hell, but don't know which?

Comment by tailcalled on (Rational) website design and cognitive aesthetics generally- why no uptake? · 2015-07-23T17:20:16.738Z · score: 0 (0 votes) · LW · GW

The rational part is reading the papers; the optimal part is doing what the papers say.

However, reading papers is not limited to design. It is part of general rationality (virtue of scholarship).

Of course, if you search for papers on cognitive biases in web design, then they would tell you about rational design.

Comment by tailcalled on (Rational) website design and cognitive aesthetics generally- why no uptake? · 2015-07-23T07:07:09.901Z · score: 0 (0 votes) · LW · GW

Perhaps optimal would better describe what you want?

Comment by tailcalled on Stupid Questions July 2015 · 2015-07-05T15:38:42.581Z · score: 2 (2 votes) · LW · GW

An obvious hypothesis is that trans women follow culture-specific roles (such as wearing dresses) for the same reason cis women follow culture-specific roles.

This would mean that whatever makes us follow culture-specific roles isn't extremely stupid, so it interprets 'women should wear dresses' correctly, rather than 'you shouldn't wear dresses'.

Comment by tailcalled on Rationality Quotes Thread April 2015 · 2015-04-04T23:32:07.471Z · score: -4 (8 votes) · LW · GW

Jim Raynor: You think he's right? That I'm just gonna run out on you all?

Ajendarro Ybarra: You got us working for the Dominion now, Commander. Taking us back to Char. It's like you're gone already.

Jim Raynor: This ain't about the Dominion. Our war has always been about saving lives. If the Zerg wipe everyone out, it's all been for nothing. So I'm going back to Char. If you're with me, it's your choice. Just like it's always been.

--Jim Raynor, Starcraft

Comment by tailcalled on On the Boxing of AIs · 2015-04-01T13:24:26.127Z · score: 0 (0 votes) · LW · GW

Not really. If you have an AI where you're not sure if it is completely broken or just unfriendly, you might want to test it, but without proper boxing you still risk destroying the world in the unlikely case that the AI works.

Comment by tailcalled on On the Boxing of AIs · 2015-04-01T09:06:48.283Z · score: 0 (0 votes) · LW · GW

There are reasons other than checking whether the AI is friendly. AI, like other software, would have to be tested pretty thoroughly. It would be hard to make an AI if we couldn't test it without destroying the world.

Comment by tailcalled on On the Boxing of AIs · 2015-04-01T09:00:42.047Z · score: 0 (0 votes) · LW · GW

Fixed.

Comment by tailcalled on On the Boxing of AIs · 2015-04-01T08:59:32.958Z · score: 1 (1 votes) · LW · GW

Could you give a link or something?

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-31T09:45:21.122Z · score: 2 (2 votes) · LW · GW

Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T23:34:25.173Z · score: -1 (1 votes) · LW · GW

This boxing method is designed to work under the assumption that humans are so easily hackable that we are worthless as gatekeepers.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T23:31:53.367Z · score: 0 (0 votes) · LW · GW

I'll post a list of methods soon, probably tomorrow.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T22:04:37.250Z · score: 2 (2 votes) · LW · GW

I think the fundamental point I'm trying to make is that Eliezer merely demonstrated that humans are too insecure to box an AI and that this problem can be solved by not giving the AI a chance to hack the humans.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T20:58:07.931Z · score: 2 (2 votes) · LW · GW

They're both questions about program verification. However, one of the programs is godshatter while the other is just a universe. Encoding morality is a highly complicated project dependent on huge amounts of data (in order to capture human values). Designing a universe for the AI barely even needs empiricism, and it can be thoroughly tested without a world-ending disaster.

Comment by tailcalled on Boxing an AI? · 2015-03-30T20:49:49.094Z · score: 0 (0 votes) · LW · GW

This would definitely let you test whether values are stable under self-modification. Just plonk the AI in an environment where it can self-modify and keep track of its values. Since this is not dependent on morality, you can just give it easily-measurable values.
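A minimal sketch of what such a test harness might look like (the agent, the self-modification step, and the 'easily-measurable value' are all hypothetical placeholders, not a real design):

```python
# Hypothetical sketch: check whether an agent's stated objective survives
# its own self-modifications inside a sandboxed environment.

class ToyAgent:
    def __init__(self):
        self.objective = "maximize_score"  # the easily-measurable value

    def self_modify(self):
        # Placeholder for the agent rewriting parts of itself;
        # a real test would let the agent change its own code/policy here.
        pass

def values_stable_under_self_modification(agent, steps=1000):
    original = agent.objective
    for _ in range(steps):
        agent.self_modify()
        if agent.objective != original:
            return False  # the value drifted during self-modification
    return True

print(values_stable_under_self_modification(ToyAgent()))
```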

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T20:46:58.368Z · score: 0 (0 votes) · LW · GW

The new idea is not perfect, but it has some different trade-offs while allowing perfect security.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T20:32:04.906Z · score: 1 (1 votes) · LW · GW

Well, that depends on the complexity of the box, but even for highly complex boxes it seems easier than proving that the morality of an AI has been implemented correctly.

Actually, now that you're mentioning it, I just realized that there is a much, much easier way to properly box an AI. I will probably post it tomorrow or something.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T20:16:17.397Z · score: 1 (1 votes) · LW · GW

Of course a nerfed AI would have a harder time escaping. Or a stupid AI. That seems like the opposite of the point worth making.

Harder for the AI, I meant.

Of how to contain a stupid AI? Why bother?

Not stupid. Properly boxed.

Once it starts exploring, holes in the box will start showing up

Unless you follow the obvious strategy of making a box without holes.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T19:41:03.901Z · score: 6 (8 votes) · LW · GW

Assuming the simulators are good, that would imply that people who experience lives not worth living are not actually people (since otherwise it would be evil to simulate them) but instead shallow 'AIs'. Paradoxically, if that argument is true, there is nothing good about being good.

Or something along those lines.

Comment by tailcalled on The Hardcore AI Box Experiment · 2015-03-30T19:37:54.489Z · score: 5 (7 votes) · LW · GW

Well, there are multiple purposes:

  1. To illustrate why this is a lot harder than Eliezer's original experiment.

  2. To talk about some strategies I found.

  3. To encourage people to be more concrete than 'AI magically realizes that it has been boxed because AIs are overpowered'.

Comment by tailcalled on Boxing an AI? · 2015-03-30T19:33:28.408Z · score: 1 (1 votes) · LW · GW

Good point.

The strength of the evidence depends a lot on your prior for the root-level universe, though.

Comment by tailcalled on Boxing an AI? · 2015-03-30T17:09:25.075Z · score: 2 (2 votes) · LW · GW

Just... don't put it in a world where it should be able to upgrade infinitely? Make processors cost unobtainium and limit the amount of unobtainium so it can't upgrade past your practical processing capacity.

Remember that we are the ones who control how the box looks from inside.

Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.

Minor nitpick: if the AI finds itself in a box, I have to assume it will be let out. It's completely trivial to prevent it from escaping when not given help; the point in Eliezer's experiment is that the AI will be given help.

Comment by tailcalled on Boxing an AI? · 2015-03-27T22:42:04.596Z · score: 0 (0 votes) · LW · GW

I would probably only include it as part of a batch of tests and proofs. It would be pretty foolish to rely on only one method to check if something that will destroy the world if it fails works correctly.

Comment by tailcalled on Boxing an AI? · 2015-03-27T22:00:31.197Z · score: 1 (1 votes) · LW · GW

It would actually tell us a lot of useful things.

First of all, there is the general problem of 'does this AI work?' This includes the general intelligence/rationality-related problems, but possibly also other problems, such as whether it will wirehead itself (whether a box can test that really depends a lot on the implementation).

The morality-stuff is tricky and depends on a lot of stuff, especially on how the AI is implemented. It seems too dangerous to let it play a multiplayer game with humans, even with most of the restrictions I can think of. However, how to test the morality really depends on how its human-detection system has been implemented. If it just uses some 'humans generally do these stupid things' heuristics, you can just plop down a few NPCs. If it uses somewhat smarter heuristics, you might be able to make some animals play the game and let the AI care for them. If it picks something intelligent, you might be able to instantiate other copies of the AI with vastly different utility functions. Basically, there are a lot of approaches to testing morality, but it depends on how the AI is implemented.

Comment by tailcalled on Boxing an AI? · 2015-03-27T21:46:19.604Z · score: 2 (2 votes) · LW · GW

Pick or design a game that contains some aspect of reality that you care about in terms of AI. All games have some element of learning, a lot have an element of planning and some even have varying degrees of programming.

As an example, I will pick Factorio, a game that involves learning, planning and logistics. Wire up the AI to this game, with appropriate reward channels and so on. Now you can test how good the AI is at getting stuff done; producing goods, killing aliens (which isn't morally problematic, as the aliens don't act as personlike, morally relevant things) and generally learning about the universe.
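As a rough sketch of what 'wiring up' could mean here (an assumed gym-style interface only; Factorio exposes no such API, and every class and name below is a stand-in):

```python
# Hypothetical sketch of a boxed game environment with a reward channel.
# None of these classes correspond to a real Factorio API; they are stand-ins.

import random

class BoxedGameEnv:
    """A sealed game world; the only outputs are observations and rewards."""

    def reset(self):
        return {"inventory": {}, "tick": 0}

    def step(self, action):
        observation = {"inventory": {}, "tick": 0}
        reward = random.random()      # e.g. goods produced this tick (placeholder)
        done = random.random() < 0.001
        return observation, reward, done

class DummyAgent:
    """Placeholder for the boxed AI; it only ever sees observations and rewards."""

    def act(self, observation):
        return "do_nothing"

def run_episode(agent, env, max_steps=1000):
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(obs)               # the boxed AI chooses an action
        obs, reward, done = env.step(action)  # the box decides what it sees next
        total_reward += reward                # the only channel back out is the score
        if done:
            break
    return total_reward

print(run_episode(DummyAgent(), BoxedGameEnv()))
```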

The step with morality depends on how the AI is designed. If it's designed to use heuristics to identify a group of entities as humans and help them, you might get away with throwing it in a procedurally generated RPG. If it uses more general, actually morally relevant criteria (such as intelligence, self-awareness, etc.), you might need a very different setup.

However, speculating at exactly what setup is needed for testing morality is probably very unproductive until we decide how we're actually going to implement morality.