Posts

The consequentialist case for social conservatism, or “Against Cultural Superstimuli” 2021-04-14T21:50:25.794Z
Translating bad advice 2015-04-14T09:20:06.253Z
The Ten Commandments of Rationality 2014-03-30T16:36:50.092Z
In favour of terseness 2014-03-08T18:01:06.447Z
Less Wrong’s political bias 2013-10-25T16:38:53.384Z

Comments

Comment by Sophronius on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T19:06:05.353Z · LW · GW

Yeah, I'm willing to entertain the idea that there's a tradeoff to be made between the short term and the long term, or something like that... but to be honest, I don't think the people who push these ideas are even thinking along those lines. I think a rational discussion would just be a net plus for everyone involved, but people are unwilling to have one, either because it's not in their interest to do so (lobby groups, media agencies) or because they don't understand why they should.

Don't get me wrong, I do think there are some left-wing groups who have had discussions on how to best change things. But mostly I think that people are just unwilling to criticize their own side, allowing the craziest voices to rise to the top. 

The closest thing I've seen to anyone seriously discussing these ideas was when Bill Maher suggested that the US needs a "tea party of the left", full of people so batshit crazy that they make people like him look like the reasonable ones. So maybe I'm not giving progressives enough credit and they actually did do a calculation along those lines at one point, and decided that people being temporarily miserable was a worthwhile sacrifice. But for the most part, I think it's just been reflexive partisanship, and little else.

Comment by Sophronius on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T18:55:39.791Z · LW · GW

Here are a few examples of the sort of thing I have in mind; if you think they're badly unrepresentative, could you explain why?

Representative of the current culture war clashes? Sort of, I guess. But it's weird to me that you're reading e.g. Jordan Peterson asking people to please not attack him or make him say words that he doesn't want to say as "evil conservatives attack trans people for no reason." Is your model of Peterson that he is only pretending to feel threatened, or that he just feels threatened by trans people in general? If so, that seems amazingly uncharitable.

But to be fair to your position, there really are a lot of conservative politicians right now who very much play the victim while actively picking culture war fights. But those people are politicians. It seems to me that a lot of progressives are either unable or unwilling to tell principled conservatives apart from concern trolls. For example, Peterson has now apparently been cast as The Red Skull in the latest Marvel comics (yes, really). Of all the right-wingers they could have chosen to portray as a super-Nazi, they picked a nice Canadian professor who tells young men how to behave themselves. It just seems... really lazy, and dumb. :/

Trans people commonly have (to put it as neutrally and weakly as possible) a strong preference for being referred to using pronouns that match their own gender identity. The cases I've encountered where people flatly refuse to do this have almost all been (at least according to my memory, which may be unreliable) social conservatives.

Well, if you want to spread the message that it's important to be polite and respectful with one's words, it'd make sense to not also use phrases like "TERFS" and "CIS SCUM". Meanwhile, JP has said that he calls people by their preferred pronouns as long as they ask politely. What's wrong with that?

I can't help thinking that maybe you've already got "... and therefore progressives are bad" written on your bottom line. (More specifically, my guess is that what you're really upset about is that progressives keep saying that conservatives are making things worse for trans people, and you've got a whole batch of different arguments for why that might be a bad thing and haven't entirely noticed that e.g. if saying "conservatives hate you" 

I strongly encourage you to at least entertain the idea that I mean exactly what I say, no more and no less. That voice in your head that tells you that I have a secret agenda and that I'm really saying something else... that's the culture war speaking.

Case in point: I started this essay by saying that cultural conservatives were being a bunch of angry shouty people and that I was frustrated by their refusal to actually explain their arguments, and so I'd try to do it for them.

Evangelical Christians have also always said that New Atheists sound "angry" and "shrill". I don't think Dawkins sounds angry at all. I think it just sounds that way because he speaks with great clarity, doesn't hedge or use weasel words, and says exactly what he means. Since people are accustomed to having to "read between the lines", they figure that if he's willing to be that bold on paper his real opinions must be ten times as strong. But in reality that's not the case. He just means exactly what he says. 

And so do I. :)

Comment by Sophronius on The consequentialist case for social conservatism, or “Against Cultural Superstimuli” · 2021-04-15T11:25:16.693Z · LW · GW

Haven't trans people's suicide rates been tremendously high since before there even were any trans activists to speak of?

Hard to say for sure - the rise of activism and the increase in openness on the issue occurred at the same time.

The survey is just there to give an idea of the statistics - I was not trying to push any particular narrative by linking to it. 

because so much of the badness of those lives is because those same social-conservatives are working hard to make those lives bad, or at least to stop them being made less bad.

Do they? Are conservatives really going around trying to make trans people's lives harder?

This sounds very similar to the whole "conservatives want to control women's bodies as part of their war on women" spiel. Again, I have to question whether saying things like this really makes trans people better off.

"trans activists"

Why are you using scare quotes for that term? It's not as if I came up with it. It's trans people like Contrapoints who say that the trans-activist community is toxic. Do you think most trans people are happy with how they've been represented?

This is the frustrating thing about the culture war. People seem to assume that the sides are clearly delineated in black and white. The actual people who are supposedly being represented are much more diverse than you might think.

Comment by Sophronius on Lesswrong 2016 Survey · 2016-04-03T11:21:30.800Z · LW · GW

I'm still not certain if I managed to get what I think is the issue across. To clarify, here's an example of the failure mode I often encounter:

Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it's objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that's wrong, right?
Philosopher: No, we cannot criticize other cultures, because morality is subjective.

The mistake that the Philosopher makes here is conflating two different uses of subjectivity: he is switching between there being no universal system of morality in practice ("morality is subjective" as a descriptive claim about differing preferences) and it not being possible to make moral claims in principle ("morality is subjective" as a philosophical position). We agree that morality is subjective in the sense that moral preferences differ, but that should not preclude you from making object-level moral judgements (which are objectively true or false).

I think it's actually very similar to the error people make when it comes to discussing "free will". Someone argues that there is no (magical non-deterministic) free will, and then concludes from that that we can't punish criminals because they have no free will (in the sense of their preferences affecting their actions).

Comment by Sophronius on Lesswrong 2016 Survey · 2016-03-31T20:04:04.293Z · LW · GW

That makes no sense to me.

I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. "just a matter of opinion". I think (though I could be mistaken) that the test described subjectivism as morality being just a matter of opinion, which I would not agree with: morality depends on individual preferences, but only in the sense that healthcare depends on an individual's health. It does not preclude a science of morality.

However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated

Unfortunate, but understandable as that's a lot harder to prove than the philosophical argument.

I can definitely imagine that we find out that humans terminally value others' utility functions, such that U(Sophronius) = x·U(DanArmak) + ..., and U(DanArmak) = y·U(otherguy) + ..., and so everyone values everybody else's utility in a roundabout way, which could yield something like a human utility function. But I don't know if it's actually true in practice.
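To make that concrete, here's a minimal toy sketch in Python (every name, weight, and number below is invented for illustration; this is my own toy model, not anything Eliezer has written). If each person's utility is a base selfish term plus a weighted sum of everyone else's utilities, U = b + W·U, then as long as the weights are modest the system has a unique fixed point:

```python
# Toy model only: invented weights, not a real theory of human values.
import numpy as np

b = np.array([1.0, 2.0, 0.5])        # each person's base (selfish) utility
W = np.array([[0.0, 0.3, 0.1],       # W[i][j] = how much person i values
              [0.2, 0.0, 0.2],       #           person j's total utility
              [0.1, 0.4, 0.0]])

# Fixed point of U = b + W @ U, i.e. U = (I - W)^-1 @ b.
U = np.linalg.solve(np.eye(3) - W, b)
print(U)  # everyone's "all things considered" utility
```

If everyone weighted everyone else at 1 or more, the roundabout sum would diverge, so the mutual valuing has to be partial for the story to cohere at all.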

Comment by Sophronius on Lesswrong 2016 Survey · 2016-03-28T19:26:16.742Z · LW · GW

Everything you say is correct, except that I'm not sure Subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I've always heard used, says that subjectivism holds that morality is merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would be the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).

All this is disregarding the empirical question of to what extent our preferences actually overlap - and to what extent we value each other's utility functions in themselves. If the overlap/altruism is large enough, we could still end up with a de facto objective morality, depending on the details. Has Eliezer ever tried answering this? It would be interesting.

Comment by Sophronius on Lesswrong 2016 Survey · 2016-03-27T15:22:09.721Z · LW · GW

I had a similar issue: none of the options seemed right to me. Subjectivism seems to imply that one person's judgment is no better than another's (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted for the latter in the end, but am curious how others see this.

Comment by Sophronius on Lesswrong 2016 Survey · 2016-03-27T15:18:43.584Z · LW · GW

RE: The survey: I have taken it.

I assume the salary question was meant to be filled in as gross income (bruto), not net (netto). However, that could result in some big differences depending on the country's tax code...

Btw, I liked the professional format of the test itself. Looked very neat.

Comment by Sophronius on Political Debiasing and the Political Bias Test · 2015-09-12T21:22:28.401Z · LW · GW

No, it's total accuracy on factual questions, not the bias part...

More importantly, don't be a jerk for no reason.

Comment by Sophronius on Political Debiasing and the Political Bias Test · 2015-09-12T20:58:15.864Z · LW · GW

Cool! I've been desperate to see a rationality test that would make improvements in rationality measurable (I think the Less Wrong movement really, really needs this), so it's fantastic to see people working on this. I haven't checked the methodology yet, but the basic principle of measuring bias seems sound.

Comment by Sophronius on The path of the rationalist · 2015-05-06T17:05:14.191Z · LW · GW

Hm, a fair point, I did not take the context into account.

My objection there is based on my belief that Less Wrong over-emphasizes cleverness, as opposed to what Yudkowsky calls 'winning'. I see too many people coming up with clever ways to justify their existing beliefs, or being contrarian purely to sound clever, and I think it's terribly harmful.

Comment by Sophronius on Translating bad advice · 2015-04-16T09:09:15.204Z · LW · GW

My point was that you're not supposed to stop thinking after finding a plausible explanation, and most certainly not after having found the singularly most convenient possible explanation. "Worst of all possible worlds" and all that.

If you feel this doesn't apply to you, then please do not feel as though I'm addressing you specifically. It's supposed to be advice for Less Wrong as a whole.

Comment by Sophronius on Translating bad advice · 2015-04-15T16:43:37.265Z · LW · GW

That is a perfectly valid interpretation, but it doesn't explain why several people independently felt the need to explain this to me specifically, especially since it was worded in general terms and at the time I was just stating facts. This implied that there was something about me specifically that was bothering them.

Hence the lesson: Translate by finding out what made them give that advice in the first place, and only then rephrase it as good advice.

Comment by Sophronius on Translating bad advice · 2015-04-15T15:35:44.202Z · LW · GW

The point is that you don't ignore countless people saying the same thing just because you can think of a reason to dismiss them. Even if you are right and that's all it is, you'll still have sinned for not considering it.

Otherwise clever people would always find excuses to justify their existing beliefs, and then where would we be?

Comment by Sophronius on Translating bad advice · 2015-04-15T09:09:20.114Z · LW · GW

Oh, I've thought of another example:

Less Wrongers and other rationalists frequently get told that "rationality is nice but emotion is important too". Less Wrongers typically react to this by:

1) Mocking it as a fallacy because "rationality is defined as winning so it is not opposed to emotion", before eagerly taking it up as a strawman and posting the erroneous argument all over the place to show everyone how poor the enemies of reason are at reasoning.

Instead of:

2) Actually considering for five minutes whether or not there might be a correlation or even an inverse causal relationship between rationality and emotional control/ability to read emotions, which causes this observation in the first place.

Needless to say, I blame Yudkowsky.

Comment by Sophronius on Translating bad advice · 2015-04-14T16:43:08.751Z · LW · GW

Hm, okay, let me try to make it more concrete.

My main example is one where people (more than once, in fact) told me that "I might have my own truth, but other people have their truth as well". This was incredibly easy to dismiss as people being unable to tell map from territory, but after the third time I started to wonder why people were telling me this. So I asked them what made them bring it up in the first place, and they replied that they felt uncomfortable when I was stating facts with the confidence they warranted. I was reminded of something Richard Dawkins said: "clarity is often seen as offensive." I asked some other people if they felt the same way, and a helpful people-person told me that the reason for this is that those people felt threatened by my intelligence (they were HR) and my stating things with confidence reminded them of this. So I got the advice to phrase my statements of belief in a more friendly way. I hated this because it felt dishonest, having to use weasel words to hide the fact that I felt confident, but I could no longer deny that my current method wasn't working.

The meta-level lesson I learned was the one presented in the OP: when people give you advice/objections, they almost never say what they mean or what the actual problem is. They substitute something that sounds nice and inoffensive, making it easy to dismiss their advice as nonsense. So what you are supposed to do is find out what they originally meant and draw a lesson from that instead.

Another example: My father often tells me not to be cynical, but this doesn't make much sense to me because he is very cynical himself. It turns out that what he actually means is that I should be more upbeat, or as Scott Adams would put it: "Be a huge phony." The reason my father does not state this outright is because he is following his own rule even while giving the advice: he is rephrasing "be a huge phony" as "don't be cynical", because "be a huge phony" sounds cynical.

Comment by Sophronius on Translating bad advice · 2015-04-14T16:30:14.806Z · LW · GW

This surprised me as well when I first heard it, but it's apparently a really common problem for shy people. I tend to hang back and do my own thing, and apparently some people took that as meaning I felt like I was too good to talk to them.

Now that I've trained myself to be more arrogant, it's become much less of an issue.

Comment by Sophronius on The path of the rationalist · 2015-04-14T10:20:28.276Z · LW · GW

This is an extremely important lesson and I am grateful that you are trying to teach it.

In my experience it is almost impossible to actually succeed in teaching it, because you are fighting against human nature, but I appreciate it nonetheless.

(A few objections based on personal taste: Too flowery, does not get to the point fast enough, last paragraph teaches false lesson on cleverness)

Comment by Sophronius on Translating bad advice · 2015-04-14T09:26:47.445Z · LW · GW

Btw, I am curious as to whether a post like this one could be put in Main. I put it in discussion right now because I wrote it down hastily, but I think the lesson taught is important enough for main. Could someone tell me what I would need to change to make this main-worthy?

Comment by Sophronius on Meetup : Effective Altruism Netherlands: Effective Altruism for the masses · 2015-03-15T13:07:41.981Z · LW · GW

Hey, where are you guys? I am terrible at finding people and I see no number I can call.

Comment by Sophronius on Irrationalism on Campus? · 2014-11-26T15:06:31.711Z · LW · GW

My own personal experience in the Netherlands did not show one specific bias, but rather multiple groups within the same university with different convictions. There was a group of people/professors who insisted that people were rational and markets efficient, and then there was the 'people are crazy and the world is mad' crowd. I actually really liked that people held these discussions; it made things much more interesting and reduced bias overall, I think.

In terms of social issues, I never noticed much discussion about this. People were usually pretty open and tolerant of any ideas, as long as they weren't too extreme. The exception was during the debating club, where any and all rhetorical tricks were considered okay.

I do remember some instances where professors were fired/persecuted for professing the "wrong" beliefs, but that was a while ago now. For example, my uncle was not allowed to say that Jewish people were more likely to have diabetes and that medical students should take this into account. Also, there was a scientist who was hounded in the media for 40 years because he said that crime had a large genetic component, until recently when people suddenly went "oops looks like he was right after all, how about that".

Comment by Sophronius on Meetup : Utrecht: Debiasing techniques · 2014-09-20T13:58:01.856Z · LW · GW

Ooh, debiasing techniques, sounds cool. My brother and I will be attending this one. Is there any pre-reading we should do?

Comment by Sophronius on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-20T13:54:23.116Z · LW · GW

Interesting. However, I still don't see why the filter would work similarly to a chemical reaction. Unless it's a general law of statistics that any event is always far more likely to have a single primary cause, it seems like a strange assumption since they are such dissimilar things.

Comment by Sophronius on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-30T13:18:50.780Z · LW · GW

Maybe not explicitly, but I keep seeing people refer to "the great filter" as if it was a single thing. But maybe you're right and I'm reading too much into this.

Comment by Sophronius on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-30T11:57:07.055Z · LW · GW

Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it's most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe. So the total chance would be (1/1,000,000)^12 = 10^-72. Yet everyone talks of a single 'great filter' and I don't get why.
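To spell out the arithmetic (a toy check, assuming as my gut model does that the twelve chances are independent):

```python
# A dozen independent one-in-a-million steps compound to the same
# overall improbability as one enormous single filter would.
p_step = 1e-6     # made-up one-in-a-million chance per step
n_steps = 12
print(p_step ** n_steps)  # 1e-72
```

From the outside, that compound 10^-72 looks exactly like one 'great' filter, which may be why people talk about it in the singular.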

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-30T10:09:26.479Z · LW · GW

1) As far as I understand it, atoms don't have a specific 'location'; there are only probabilities for where that atom might be at any given time. Given that, it is silly to speak of the identity of individual atoms. Even if I misunderstood that part, it is still the case that two entities which have no discernible difference in principle are the same, as a matter of simple logic.

2) Asking "which body do you wake up in" is a wrong question. It is meaningless because there is no testable difference depending on your answer, it is not falsifiable even in principle. The simple fact is that if you copy Sophronius, you then have 2 Sophronius waking up later, each experiencing the sensation of being the original. Asking whose sensation is "real" is meaningless.

3) It is not a non-sequitur. Sleep interrupts your continuity of self. Therefore, if your existence depends on uninterrupted continuity of self, sleep would mean you die every night.

I notice that you keep using concepts like "you", "I" and "self" in your defence of a unique identity. I suggest you try removing those concepts or any other that presupposes unique identity. If you cannot do that then you are simply begging the question.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-05T10:26:36.517Z · LW · GW

That's irrelevant when you're considering whether or not to use the horcrux at all and the alternative is being dead.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-03T20:08:20.080Z · LW · GW

That's not an issue when it comes to acquiring immortality though. I mean, if you lost all knowledge of algebra, would you say that means you "died"?

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-03T05:44:58.754Z · LW · GW

Yes, that ideology is precisely what bothers me. Eliezer has a bone to pick with death so he declares death to be the ultimate enemy. Dementors now represent death instead of depression, patronus now uses life magic, and a spell that is based on hate is now based on emptiness. It's all twisted to make it fit the theme, and it feels forced. Especially when there's a riddle and the answer is 'Eliezer's password'.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-03T05:37:16.983Z · LW · GW

Yeah, the concept of burden of proof can be a useful social convention, but that's all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two 'sceptics' taking turns posting 'you have the burden of proof', 'no, you have the burden of proof!'... you'll see what I mean. Actual rationality isn't supposed to be easy.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T18:54:35.857Z · LW · GW

The appropriateness of that probably depends on what kind of question it is...

I guess it is slightly more acceptable if it's a binary question. But even so it's terrible epistemology, since you are giving undue attention to a hypothesis just because it's the first one you came up with.

An equally awful method of doing things: Reading through someone's post and trying to find anything wrong with it. If you find anything --> post criticism, if you don't find anything --> accept conclusion. It's SOP even on Less Wrong, and it's not totally stupid but it's really not what rationalists are supposed to do.

I think my hackles got raised by the claim that your perception is "what it actually is" -- and that's a remarkably strong claim. It probably works better phrased like something along the lines of "trying to take your ego and preconceived notions out of the picture".

Yes, that is a big part of it, but it's more than that. It means you stop seeing things from one specific point of view. Think of how confused people get about issues like free will. Only once you stop thinking about the issue from the perspective of an agent and ask what is actually happening from the perspective of the universe can you resolve the confusion.

Or, if you want to see some great examples of people who get this wrong all the time, go to the James Randi forums. There's a whole host of people there who will say things during discussions like "Well it's your claim so you have the burden of proof. I am perfectly happy to change my mind if you show me proof that I'm wrong." and who think that this makes them rationalists. Good grief.

Any links to egregious examples? :-)

I have spent some time going through your posts but I couldn't really find any egregious examples. Maybe I got you confused with someone else. I did notice that where politics were involved you're overly prone to talking about "the left" even though the universe does not think in terms of "left" or "right". But of course that's not exactly unique to you.

One other instance I found:

Otherwise, I still think you're confused between the model class and the model complexity (= degrees of freedom), but we've set out our positions and it's fine that we continue to disagree.

It's not a huge deal but I personally would not classify ideas as belonging to people, for the reasons described earlier.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T18:17:42.352Z · LW · GW

This needs to be on posters and T-shirts if it isn't already. Is it a well-known principle?

Sadly not. I keep meaning to post an article about this, but it's really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...

I think the Fundamental Optimization Problem is the biggest problem humanity has right now, and it explains everything that's wrong with society: doing what's good will always feel less good than doing what feels good; people who optimize for altruism will always be seen as more selfish than people who optimize for being seen as altruistic; the people who get into power will always be the ones whose skills are optimized for getting into power, not for knowing what to do once they get there; and people who yell about truth the most are the biggest liars. It's also why "no good deed goes unpunished". Despite what Yoda claims, the dark side really is stronger.

Unfortunately there's no good post about this on LW AFAIK, but Yvain's post about Moloch is related and is really good (and really long).

Also thanks for the music video. Shame I can't upvote you multiple times.

Aww shucks. ^_^

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T17:27:26.892Z · LW · GW

The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example, I notice that the people who make me most angry when they're irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to being offended. People who are bad at getting to the truth of a matter usually care more about how they feel than about what is actually true. (This is related to the fundamental optimization problem: the truth will always sound less truthful than the most truthful-sounding falsehood.) Noticing that kind of thing is often more effective than trying to control emotions the hard way.

Secondly, you want to pay attention to your thoughts as much as possible. This is just meditation, really. If you become conscious of your thoughts, you gain a degree of control over them. Notice what you think, when you think it, and why. If a question makes you angry, don't just suppress the anger, ask yourself why.

For the rest it's just about cultivating a habit of asking the right questions. Never ask yourself what you think, since the universe doesn't care what you think. Instead say "Velorien believes X: How much does this increase the probability of X?".

Bertrand Russell gets it right, of course

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T16:49:07.615Z · LW · GW

Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or heck even acknowledge that this is important to begin with.

I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, and utterly failing to grasp that that's not how you answer a question. (In fact, I may be misremembering, but I think you do this a lot, Lumifer.)

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T16:35:36.089Z · LW · GW

Your attitude makes me happy, thank you. :)

It's the most basic rationalist skill there is, in my opinion, but for some reason it's not much talked about here. I call it "thinking like the universe" as opposed to "thinking like a human". It means you remove yourself from the picture, you forget all about your favourite views and you stop caring about the implications of your answer since those should not impact the truth of the matter, and describe the situation in purely factual terms. You don't follow any specific chain of logic towards finding an answer: You instead allow the answer to naturally flow from the facts.

It means you don't ask "which facts argue in favour of my view and which against?", but "what are the facts?"
It means you don't ask "What is my hypothesis?", you ask "which hypotheses flow naturally from the facts?"
It means you don't ask "What do I believe?" but "what would an intelligent person believe given these facts?"
It means you don't ask "which hypothesis do I believe is true?", but "how does the probability mass naturally divide itself over competing hypotheses based on the evidence?"
It means you don't ask "How can I test this hypothesis?" but "Which test would maximally distinguish between competing hypotheses?"
It means you never ever ask who has the "burden of proof".

And so on and so forth. I see it as the most fundamental skill because it allows you to ask the right questions, and if you start with the wrong question it really doesn't matter what you do with it afterwards.
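For what it's worth, here's a toy Python sketch of that probability-mass framing (priors and likelihoods invented purely for illustration; this is just Bayes' rule, nothing more exotic):

```python
# Toy illustration: let the probability mass "divide itself" over
# competing hypotheses by weighting each prior by its likelihood
# for the observed evidence, then renormalizing.
priors      = {"H1": 0.5, "H2": 0.3, "H3": 0.2}   # made-up starting mass
likelihoods = {"H1": 0.1, "H2": 0.4, "H3": 0.4}   # made-up P(evidence | H)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: round(v / total, 2) for h, v in unnormalized.items()}
print(posterior)  # {'H1': 0.2, 'H2': 0.48, 'H3': 0.32}
```

The point of writing it this way is that you never decide which hypothesis "wins"; you just let the evidence move the mass around.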

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T16:01:12.491Z · LW · GW

Hm, I didn't think I was reacting that strongly... If I was, it's probably because I am frustrated in general by people's inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun instead of UV light. Once they had formed the hypothesis, they could only look at the issue through the 'eyes' of that view. And I see the same mistake made on Less Wrong all the time. I guess it's just frustrating to see EY do the same thing. I don't get why everyone, even practising rationalists, finds this most elementary skill so hard to master.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T15:19:05.902Z · LW · GW

What do you mean with the term "scientifically" in that sentence? If I put identity into Google Scholar I'm fairly sure I will find a bunch of papers in respectable scientific journals that use the term.

I mean that if you have two carbon atoms floating around in the universe, and in the next instant you swap their locations but keep everything else the same, there is no scientific way in which you could say that anything has changed.

Combine this with humans being just a collection of atoms, and you have no meaningful way to say that an identical copy of you is "not really you". Also, 'continuity of consciousness' is just a specific sensation that this specific clump of atoms has at each point in time, except for all the times when it does not exist because the clump is 'sleeping'. So Quirrell's objection seems to have no merit (though I could be missing something).

"Obviously" is a fairly strong word. It makes some sense to label the negation of any emotion a emotionless state. Unfriendly AI doesn't hate humans but is indifferent.

Yes, there is an insight to be had there, I will acknowledge that much.

However, to say that the opposite of a friendly AI is a paperclip maximiser is stupid. The opposite of an AI which wants to help you is very obviously an AI which wants to hurt you. Which is why the whole "AK version 2 riddle" just doesn't work. The Patronus goes from "not thinking about death" (version 1) to "valuing life over death" (version 2). The killing curse goes from "valuing death over life" (version 1) to "not caring about life" (version 2). You can visualise the whole thing as a line measuring a single variable, namely "life-death preference":

Value death over life (-1) ---- don't think about it either way (0) ----- Value life over death (+1)

The Patronus gets a boost by moving from 0 to +1. The killing curse gets a boost by moving from -1 to 0. That makes no sense. Why would the killing curse, which is powered by the exact opposite of the Patronus, receive a boost in power by moving in the same direction as the Patronus, which values life over death?

Only fake wisdom can get ridiculous results like this.

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T14:59:51.977Z · LW · GW

It is a wrong question, because reality is never that simple and clear cut and no rationalist should expect it to be. And as with all wrong questions, the thing you should do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:

A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is intensity of emotion. Another dimension is the type of experience it offers. Love and hate both have strong intensity and in that sense they are similar, but they are totally opposite in the way they make you feel. They are also totally opposite in terms of the effect it has on your preferences: Thinking well vs. thinking poorly of someone (ignoring the fact that there are multiple types of hate and love, and the 9999 other added complexities).

Ordinary people notice that hate and love are totally the opposite in several meaningful ways, and say as much. Then along comes a contrarian who wants to show how clever he is, and he picks up on the one way that love and hate are similar and which can make them go well together: the intensity of emotion towards someone or something. And so the contrarian states that really love and hate are the same and indifference is the opposite of both (somehow), which can cause people who aren't any good at mapping complex subjects along multiple axes in their head to throw out their useful heuristic and award status to the contrarian for his fake wisdom.
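Here's a toy sketch of the two-axis picture (the axes and numbers are mine, purely for illustration):

```python
# Toy two-axis model of emotion: (intensity, valence).
love         = (0.9, +0.9)
hate         = (0.9, -0.9)
indifference = (0.0,  0.0)

print(love[0] == hate[0])    # True: the contrarian's axis (intensity)
print(love[1] == -hate[1])   # True: the ordinary intuition (valence)
```

The contrarian's observation lives entirely on the intensity axis, the ordinary intuition on the valence axis. Both are real, which is why collapsing them into a single axis produces fake wisdom.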

I'm a bit disappointed that Eliezer fell for the number one danger of rationalists everywhere: Too much eagerness to throw out common sense in favour of cleverness.

(Eliezer if you are reading this: You are awesome and HPMOR is awesome. Please keep writing it and don't get discouraged by this criticism)

Comment by Sophronius on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-08-01T14:27:59.788Z · LW · GW

Yeah, I was quite surprised to find that Quirrell believes in continuity of consciousness as being a fundamental problem, since it really is just an illusion to begin with (though you could argue the illusion itself is worthwhile). Surely you could just kill yourself the moment your horcrux does its job if you're worried about your other self living on? But maybe he doesn't know that scientifically there's no such thing as identity. Or maybe he's lying. Personally, I would be MUCH more concerned about the fact that the horcrux implants memories, but does not replace personality. But for some reason Quirrell does not mention that as the obvious drawback.

(I was also surprised that Eliezer seems to buy into the obviously false notion that "the opposite of love is indifference")

Comment by Sophronius on Confused as to usefulness of 'consciousness' as a concept · 2014-07-15T06:02:30.514Z · LW · GW

Again no, a computer being conscious does not necessitate it acting differently. You could add a 'consciousness routine' without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.

Comment by Sophronius on Confused as to usefulness of 'consciousness' as a concept · 2014-07-14T20:56:54.645Z · LW · GW

Well no, of course merely being connected to a conscious system is not going to do anything, it's not magic. The conscious system would have to interact with the laptop in a way that's directly or indirectly related to its being conscious to get an observable difference.

For comparison, think of those scenarios where you're perfectly aware of what's going on, but you can't seem to control your body. In this case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn't going to do anything.

Sorry if this still doesn't answer your question.

Comment by Sophronius on Confused as to usefulness of 'consciousness' as a concept · 2014-07-14T19:17:08.796Z · LW · GW

The role of system A is to modify system B. It's meta-level thinking.

An animal can think: "I will beat my rival and have sex with his mate, rawr!"
but it takes a more human mind to follow that up with: "No wait, I got to handle this carefully. If I'm not strong enough to beat my rival, what will happen? I'd better go see if I can find an ally for this fight."

Of course, consciousness is not binary. It's the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it's meta all the way down). A monkey can just about reach the level of abstraction needed for the second example, but other animals can't. So monkeys come close in terms of consciousness, at least when it comes to consciously thinking about political/strategic issues.

Comment by Sophronius on Confused as to usefulness of 'consciousness' as a concept · 2014-07-14T18:56:08.659Z · LW · GW

Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems

I'm going to go ahead and say yes. Consciousness means a brain/CPU that is able to reflect on what it is doing, thereby allowing it to make adjustments to what it is doing, so it ends up acting differently. Of course, with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: true but irrelevant.

Comment by Sophronius on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-14T18:43:57.391Z · LW · GW

To clarify what I mean, take the following imaginary conversation:

Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don't know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we've defined rationality to mean "winning", or "winning on purpose", so more rationality is always good. You don't want to be like those crazy normals who lose on purpose, do you?
Normal: No, of course I want to succeed at the things I do.
Less Wronger: Great! Then since you agree that more rationality is always good you should join our community of nerdy guys who obsessively use cold logic and abstract reasoning in an attempt to solve their problems.

As usual with the motte and bailey, only the desired definition is used explicitly. However, the connotations with the second mundane use of the word slip in.

Comment by Sophronius on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-14T18:19:52.307Z · LW · GW

I don't really understand your objection. When I say that everything is objectively true or false, I mean that any particular thing is either part of the universe/reality at a given point in time/space or it isn't. I don't see any other possibility*. Perhaps you are confusing the map and the territory? It is perfectly possible to answer questions with "I don't know" or "mu", but that doesn't mean that the universe itself is in principle unknowable. The fact that consciousness is not properly understood yet does not mean that it occupies a special state of existing/not existing: we are the ones who are confused, not the universe.

*Ok, my brain just came up with another possibility but it's irrelevant to the point I'm making.

Comment by Sophronius on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-13T18:58:17.912Z · LW · GW

To be fair, Less Wrong's definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in, of course. It's a cheap tactic also used in the social justice movement, which Yvain recently criticized on his blog (motte and bailey, I think it was called).

Comment by Sophronius on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-13T18:45:43.705Z · LW · GW

Your criticism of rationality for not guaranteeing correctness is unfair, because nothing can do that. Your criticism that rationality still requires action is equivalent to saying that a driver's license does not replace driving, though many Less Wrongers do overvalue rationality, so I guess I agree with that bit. You do, however, seem to make a big mistake in buying into the whole fact-value dichotomy, which is a fallacy, since at the fundamental level only objective reality exists. Everything is objectively true or false, and the fact that rationality cannot dictate terminal values does not contradict this.

I do agree with the general sense that less wrong is subject to a lot of group think however, and agree that this is a big issue.

Comment by Sophronius on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-07T18:32:16.346Z · LW · GW

Uhm, no. I mean, this is exaggerating; we are not having any physical violence here. Worst case: poisoning of minds.

Yes of course it's an exaggeration, but it's the same meta-type of error: Seeing X used for evil and therefore declaring that all X is evil and anyone who says X isn't always evil is either evil or stupid themselves. It's the same mistake as the one Neoreactionaries always complain about: "Perceived differences based on race or sex have been used to excuse evil, therefore anyone who says there are differences between races or sexes is evil!"

And poisoning of minds is very, very bad. People always seem to assume that physical violence is somehow worse than mental violence, but it's just not true. Ideas can be a lot more dangerous than guns.

(of course all of this is a bit moot since I'm not actually proposing banning democrats/republicans/race research/feminism or anything like that)

What if there are two competing religions; each one of them evil in a different way. And one missionary approaches you with an offer that if you help him establish the holy inquisition, he will rid you of those evil heretics from the other side. Is it a good idea to give him the power?

Of course not, why would I? Why are you asking this? Are you implying that in the real world, both sides to any conflict are always equally evil? Because that definitely isn't the case in my experience.

Comment by Sophronius on [LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient · 2014-07-06T14:52:35.177Z · LW · GW

I don't think the first sense and the second are mutually exclusive.

A dog has half as much processing power as a human = a dog can think the same thoughts but only at half the speed.
A dog has half as much consciousness as a human = a dog is only half as aware as a human of what's going on.

And yes, I definitely think that this is how it works. For example, when I get up in the morning I am much less aware of what's going on than when I am fully awake. Sometimes I pause and go "wait, what am I doing right now?" And of course there are those funny times when you end up going to your old school by accident because you're not aware of what you're doing until it's too late...

The only part I disagree with is "A being that is functionally exactly like you, and that is experiencing exactly what you are experiencing". A being that is only half as conscious is going to have a different brain, so will act differently. You definitely notice when someone is only at 50% consciousness.

Finally, I submit that consciousness works just like will power: It is a limited resource which you can allocate to certain tasks/thoughts, and by training it you can get more of it. A good rationalist needs a large pool of consciousness.

Comment by Sophronius on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-06T10:58:55.344Z · LW · GW

Thank you for taking the time to write all that, it helps me see where you are coming from. You clearly have a large framework which you are basing your views on, but the thing you have to keep in mind is that I do, too. I have several partially-written posts about this which I hope to post on Less Wrong one day, but I’m very worried they’ll be misconstrued because it’s such a difficult subject. The last thing I want to do is defend the practices of oppressive regimes, believe me. I’m worried that people just read my posts thinking “oh he is defending censorship, censorship is evil, downvote” without thinking about what I’m actually saying. “Censorship” is just a word. All of my arguments work just as well for “having a community norm against” something as opposed to “censoring” it.

The problem is a framing issue, I think. People keep seeing something like censorship as a bad thing, period, because it is something that's used by oppressive regimes. However, killing people is also used by oppressive regimes, and yet I still wouldn't promote total pacifism. Your post reads to me like Gandhi saying that the Nazis should be opposed non-violently: I do believe that there is wisdom in what you say, but that's going much too far. The thing you have to realize is that if all the nice and reasonable people in the world go around worrying that if they fight monsters they will themselves become monsters, the monsters always win, because they're the only ones willing to fight.

My view on killing is this: The crucial issue is who is being killed and why, and what principle you are using to determine who to kill. My view on censorship is this: The crucial issue is what view is being censored and why, and what principle you are using to determine what to censor. Censoring a view just because you disagree with it is just as wrong as killing someone just because they disagree with you. Getting everybody who disagrees with me to shut up wouldn’t actually make for the kind of world I want to live in. So what views do I think should be censored? Only those ones which seem to serve no purpose other than to destroy everything that’s good and right in this world.

Imagine you are the leader of Utopialand. Everything is going swimmingly: people are working hard, people are happy, and everyone is largely free to do and say as they please. Then one day, a foreign missionary enters your country in order to start spreading his religion among the naïve and carefree people of your land. His religion states things that are not strictly illegal but which are incredibly harmful: you have to believe everything it says or you will be burned to death, you have to verbally abuse your children daily to raise them properly, you must reject anyone from society who holds ideas the religion disagrees with, science and critical thinking are wicked, and so on and so forth. This religion goes against everything you value and, what's worse, it seems to be catching on. What is your reaction?

1) “Well if he wants to spread ideas that destroy everything I love and cherish that’s his right as a citizen. Who am I to tell him he can’t destroy the world? I mean it’s a free country. “
2) “AAAAAAGHHH IT’S A VICTIM OF A MEMETIC PLAGUE! QUICK, ISOLATE HIM BEFORE HE INFECTS THE OTHERS.”

The way I see it, there is a war of ideas spanning across all of human history, with good and helpful ideas on the one side and horrible memetic plagues which destroy everything they touch on the other. The civilisations that have prospered so far are the ones which fought for the good ideas and won. I submit that if your reaction to the above is option 1) and not 2), you are essentially choosing to lose the war of ideas on purpose. There will be nothing left of your empire but fire and ash.