Posts

DRAFT: Ethical Zombies - A Post On Reality-Fluid 2013-01-09T13:38:02.754Z
[LINK] AI-boxing Is News, Somehow 2012-10-19T12:34:27.117Z

Comments

Comment by MugaSofer on Remembering the passing of Kathy Forth. · 2018-06-23T10:19:43.578Z · LW · GW

Any suicide in general, and this one in particular, definitely has multiple causes. I'm really sorry if I gave the opposite impression.

But I think it's reasonable and potentially important to respond to a suicide by looking into those causes and trying to reduce them.

To be more object-level:

  • Kathy was obviously mentally ill, and her particular brand of mental illness seems to have been well-known. I don't know what efforts were made to help her with that (I do get the impression some were made), but I've seen people claim her case was an example of the ways our community habitually fails to help people with mental illness and it certainly seems worth looking into that.
  • Kathy publicly attributed her suicide to the fact that she had been sexually assaulted. Whatever else was in play, it's certainly true that sexual assault is a risk factor for suicide and she really does seem to have been assaulted. It behooves us to check for flaws in our protections against this sort of thing when they fail this dramatically.
  • In particular, it seems she felt she didn't know how to avoid inevitably getting assaulted again. I get the impression this was part of a paranoid/depressive spiral on her part. But it's true that this is a real phenomenon and I've talked to other rationalists who have been concerned with this as well.

To return to the meta level, I'm also very concerned by the fact that this has been taken up by the anti-rationalist crowd and this may be making some people defensive. I don't recall anyone saying that we should be so concerned about suicide contagion as to ignore the object-level issues raised completely when Aaron Swartz committed suicide, for example. Maybe we should have been! But the fact that we as a community potentially failed or simply could have done better here means that we should be more careful about dismissing this, not less.

Comment by MugaSofer on Remembering the passing of Kathy Forth. · 2018-06-22T06:29:15.539Z · LW · GW

It's pretty standard to respond to the suicides of Y victims by rallying to reduce Y.

Making a commitment not to notice when something drives a person to suicide seems like it would probably be a monumental mistake.

Comment by MugaSofer on 37 Ways That Words Can Be Wrong · 2018-01-05T19:37:47.420Z · LW · GW

I don't think so - I think Eliezer's just being sloppy here. "God did a miracle" is supposed to be an example of something that sounds simple in plain English but is actually complex:

One observes that the length of an English sentence is not a good way to measure "complexity". [...] An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, "Maybe a really powerful agent was angry and threw a lightning bolt." The human brain is the most complex artifact in the known universe. [...] The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.

To a human, Maxwell's Equations take much longer to explain than Thor.

Comment by MugaSofer on What's up with Arbital? · 2017-03-29T20:46:54.094Z · LW · GW

Will this "Arbital 2.0" be an entirely unrelated microblogging platform, or are you simply re-branding Arbital 1.0 to focus on the microblogging features?

Comment by MugaSofer on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T12:02:42.304Z · LW · GW

Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.

Comment by MugaSofer on The Psychological Unity of Humankind · 2016-03-20T15:35:32.828Z · LW · GW

It's almost like having a third sex. In fact the winged males look far more like females than they look like wingless males.

That sounds like exactly the kind of situation Eliezer claims as the exception - the adaptation is present in the entire population, but only expressed in a subset based on the environmental conditions during development, because there's a specific advantage to polymorphism.

There's the whole phenomenon of frequency-dependent selection. Most people are familiar with this from blood types, and sickle-cell anaemia.

Those are single genes, not complex adaptations consisting of multiple mutually-dependent genes. Exactly the "froth" he describes.

Comment by MugaSofer on Ethics Notes · 2015-09-17T19:38:53.462Z · LW · GW

Psy-Kosh: Hrm. I'd think "avoid destroying the world" itself to be an ethical injunction too.

The problem is that this is phrased as an injunction over positive consequences. Deontology does better when it's closer to the action level and negative rather than positive.

Imagine trying to give this injunction to an AI. Then it would have to do anything that it thought would prevent the destruction of the world, without other considerations. Doesn't sound like a good idea.

No more so, I think, than "don't murder", "don't steal", "don't lie", "don't let children drown" etc.

Of course, having this ethical injunction - one which compels you to positive action to defend the world - would, if publicly known, rather interfere with the Confessor's job.

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T12:17:01.053Z · LW · GW

Well, that and the differences in the setting/magic (there's no Free Transfiguration in canon, for instance, and the Mirror is different - there are fewer Mysterious Ancient Artefacts generally - and Horcruxes run on different mechanics ... stuff like that.)

And Voldemort is just inherently smarter than everyone else, too, for no in-story reason I can discern; he just is, it's part of the conceit. (Although maybe that was Albus' fault too, somehow?)

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T12:06:27.653Z · LW · GW

To be fair, we don't know when he wrote the note.

Comment by MugaSofer on 2013 Less Wrong Census/Survey · 2015-03-02T11:50:21.002Z · LW · GW

I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.

Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.

But I see we agree on this.

That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is another reason I'd prefer that the capability continue not to exist.

But is it possible to impersonate intelligence? Isn't anything that can "fake" problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient, though potentially not sentient, which could be a problem)?

I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don't accept.

When it comes down to it, ethics are entirely a matter of taste (though I would assert that they're a unique exception to the old saw "there's no accounting for taste" because a person's code of ethics determines whether he's trustworthy and in what ways).

I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.

What makes you think that "individual rights" are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they're the correct moral theory, what evidence would you point to? You might change my mind.

One can't really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.

Oh, everyone is misguided. (Hence the name of the site.) But they generally aren't actual evil strawmen.

Comment by MugaSofer on In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? · 2015-03-02T11:32:05.593Z · LW · GW

Actually, they mention every so often that the Cold War turned hot in the Star Trek 'verse and society collapsed. They're descended from the civilization that rebuilt.

Comment by MugaSofer on [Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions · 2015-01-29T22:21:01.852Z · LW · GW

I'm no expert, but even Kurzweil - who, from past performance, is usually correct but over-optimistic by maybe five, ten years - doesn't expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.

2020 is in five years. The kind of progress that would seem to imply - from where we are now to full-on human-level AI in just five years - seems incredible.

Comment by MugaSofer on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-29T13:28:41.497Z · LW · GW

We randomly rolled the ethics of the AI, rolled random events with dice, and the AI offered various solutions to those problems... You lost points if you failed to deal with the problems and lost lots of points if you freed the AI and they happened to have goals you disagreed with, like annihilation of everything.

Comment by MugaSofer on [Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions · 2015-01-29T13:08:14.533Z · LW · GW

many now believe that strong AI may be achieved sometime in the 2020s

Yikes, but that's early. That's a lot sooner than I would have said, even as a reasonable lower bound.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-27T03:46:20.184Z · LW · GW

Yikes, you're right. I had noticed something odd, but forgot to look into it. Dangit.

I'm pretty sure this is somebody going to the trouble of downvoting every comment of mine, which has happened before.

It's against the rules, so I'll ask a mod to look into it; but obviously, if someone cares this much about something I'm doing or something I'm wrong about, please, PM me. I can't interpret you through meaningless downvotes, but I'll probably stop whatever is bothering you if I know what it is.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-27T03:30:52.548Z · LW · GW

I can give you a little more data - this has happened before, which is why I'm in the negatives. Which I guess makes it more likely to happen again, if I'm that annoying :/

It turned out to be a different person to the famous case; they were reasonable and explained their (accurate) complaint via PM. Probably not the same person this time, but if it happened once ...

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-27T03:11:43.527Z · LW · GW

Yup, definitely. Interested amateur here.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-15T19:45:41.557Z · LW · GW

There's also the problem of people taking things meant to be metaphorical as literal, simply because, well, it's right there, right?

For example (just ran into this today):

Early in the morning, as Jesus was on his way back to the city, he was hungry. Seeing a fig tree by the road, he went up to it but found nothing on it except leaves. Then he said to it, “May you never bear fruit again!” Immediately the tree withered. Matthew 21:18-22 NIV

This is pretty clearly an illustration. "Like this tree, you'd better actually give results, not just give the appearance of being moral". (In fact, I believe Jesus uses this exact illustration in a sermon later.)

And yet, I saw this on a list of "God's Temper Tantrums that Christians Never Mention", presumably interpreted as "Jesus zapped a tree because it annoyed him."

Except that I think another reasonable interpretation is: whoever edited the text into a form that contains both stories did notice that they are inconsistent, didn't imagine that somehow they are both simultaneously correct, but did intend them to be taken at face value -- the implicit thinking being something like "obviously at least one of these is wrong somewhere, but both of them are here in our tradition; probably one is right and the other wrong; I'll preserve them both, so that at least the truth is in here somewhere".

Ooh, I hadn't thought of that.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-15T13:01:07.206Z · LW · GW

I don't see how this is a good example: if anything this is one where the fundamentalists are actually reading the text closer to what a naive reading means, without any stretched attempts to claim a metaphorical intent that is hard to see in the text. Smart people have been trying to read the Genesis text in a way that is consistent with the evidence for a very long time now, so there is a lot of very well done apologetics to choose from, but that doesn't mean it is actually what the text intended.

Well, I'm a Christian, so I might be biased in favour of interpretations that make that seem reasonable. But even so, I find it hard to believe a text that includes two mutually-contradictory creation stories (right next to each other in the text, at that) intended them to be interpreted literally.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-09T21:47:12.498Z · LW · GW

both groups are convinced that this applies to the other group.

Oh, it does apply, generally. That's mindkilling for you.

USian fundamentalist-evangelical Christianity, however, is ... exceptionally bad at reading its supposedly all-important sacred text. And, indeed, at facts in general. We're talking about the movement that came up with and is still pushing "creationism", here.

I'm Irish, and we seem to have pretty much no equivalent movement in Europe; our conservative Christians follow a different, traditionalist-Catholic model. The insanity that (presumably) sparked this article is fairly American in nature, but the metaphor is general enough that it presumably applies to all traditions? The conflict is still largely liberal-vs-conservative here, albeit based on different (and usually more obscure) doctrinal arguments.

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-09T21:35:04.028Z · LW · GW

I don't know nearly as many Muslims as I do Christians, but I generally get the impression that liberal Muslims don't have unusually strong reactions to atheism and other religions? Whereas they are, if anything, more threatened by Muslim terrorists - because of the general name-blackening, in addition to the normal fear response to your tribe being attacked.

Has this not been your experience?

Comment by MugaSofer on A Parable On Obsolete Ideologies · 2015-01-07T16:20:35.263Z · LW · GW

You have noticed, he says, that the new German society also has a lot of normal, "full-strength" Nazis around. The "reformed" Nazis occasionally denounce these people, and accuse them of misinterpreting Hitler's words, but they don't seem nearly as offended by the "full-strength" Nazis as they are by the idea of people who reject Nazism completely.

This part of the metaphor doesn't work.

Religious people generally condemn heretics even more strongly than nonbelievers. Liberal Christians, specifically, are generally more opposed to fundamentalist Christians' policies than liberal atheists' policies - for a variety of reasons, including the fact that they're wildly misinterpreting key passages and it's really really obvious, and the fact that there's a readily-available blue/green divide between them.

Comment by MugaSofer on 2013 Less Wrong Census/Survey · 2014-12-25T07:23:53.621Z · LW · GW

The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.

I've lost the context to understand this question.

How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?

I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)

The usual version of this I hear is from people who've read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated -- maybe not perfectly, but to an arbitrarily high difficulty of disproving it -- by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.

Oh. That's an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.

Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but ... surely it'll have exactly the same impact on society, regardless?

On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.

It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn't open to reason?

ahem ... I'm ... actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, although I'm not sure I'd go quite so far as to say it's "obvious" and anyone who disagrees must be "senseless ... not open to reason".

Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.

I think the trouble with these sort of battle-cries is that they lead to, well, assuming the other side must be evil strawmen. It's a problem. (That's why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)

What's that?

Ahh ... "Boy Who Cried Wolf". Sorry, that was way too opaque, I could barely parse it myself. Not sure why I thought that was a good idea to abbreviate.

Comment by MugaSofer on xkcd on the AI box experiment · 2014-11-22T15:27:32.313Z · LW · GW

Really? I honestly found it pretty unfunny.

Comment by MugaSofer on Offense versus harm minimization · 2014-11-17T19:33:34.968Z · LW · GW

Another thing I find interesting is that such an argument would never be set up using the example of Piss Christ or a desecrated Talmud.

Interestingly, I have seen (less well-written) versions of this argument used for anti-Christian blasphemy, including "Piss Christ".

I live in Ireland, which is known for its strong Catholic values. So ... yup, this seems to fit with your theory.

Comment by MugaSofer on Offense versus harm minimization · 2014-11-17T18:52:12.133Z · LW · GW

To hold that speech is interchangeable with violence is to hold that a bullet can be the appropriate answer to an argument.

I wouldn't consider a picture of Muhammad to be an "argument", would you?

Comment by MugaSofer on Offense versus harm minimization · 2014-11-17T16:53:46.133Z · LW · GW

What if they claimed to experience benefits from the implants? For example, they might cure certain neurological conditions.

Would you then expect them to remove the implants or be jolted?

Comment by MugaSofer on Offense versus harm minimization · 2014-11-17T16:43:44.800Z · LW · GW

This analysis seems be assuming that Muslims will deconvert if only they're shown a sufficient number of pictures of Muhammad.

Comment by MugaSofer on Meetup : Glasgow (Scotland) Meetup · 2014-11-16T11:51:16.676Z · LW · GW

Got another potential b) here.

Comment by MugaSofer on On Caring · 2014-10-18T17:32:29.623Z · LW · GW

That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.

Comment by MugaSofer on On Caring · 2014-10-18T16:02:07.775Z · LW · GW

To better approximate a perfectly-rational Bayesian reasoner (with your values.)

Which, presumably, would be able to model the universe correctly complete with large numbers.

That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.
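(If it helps, here's a minimal simulation sketch of that last point - my own illustration, not anything from this thread - showing that switching wins roughly two-thirds of the time whether or not that feels intuitive:)

```python
# Toy Monty Hall simulation: compare the win rate of switching vs. staying.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.333
```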

Comment by MugaSofer on On Caring · 2014-10-10T13:57:39.741Z · LW · GW

I think this is the OP's point - there is no (human) mind capable of caring, because human brains aren't capable of modelling numbers that large properly. If you can't contain a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".

So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Comment by MugaSofer on On Caring · 2014-10-10T13:47:24.205Z · LW · GW

I'm not a vegetarian, it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then going to eat a chicken at dinner.

This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?

(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)

Comment by MugaSofer on Open thread, September 22-28, 2014 · 2014-09-27T11:33:47.798Z · LW · GW

Thank you!

Comment by MugaSofer on Open thread, September 22-28, 2014 · 2014-09-26T15:24:52.058Z · LW · GW

So ... I suspect someone might be doing that mass-downvote thing again. (To me, at least.)

Where do I go to inform a moderator so they can check?

Comment by MugaSofer on I may have just had a dangerous thought. · 2014-09-26T15:13:55.650Z · LW · GW

Hey, I've listened to a lot of ideas labelled "dangerous", some of which were labeled "extremely dangerous". Haven't gone crazy yet.

I'd definitely like to discuss it with you privately, if only to compare your idea to what I already know.

Comment by MugaSofer on Anthropics doesn't explain why the Cold War stayed Cold · 2014-09-07T16:56:31.973Z · LW · GW

I'm saying that if Sleeping Beauty's goal is to better understand the world, by performing a Bayesian update on evidence, then I think this is a form of "payoff" that gives Thirder results.

From If a tree falls on Sleeping Beauty...:

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.

In this case it is optimal to bet 1/3 that the coin came up heads, 2/3 that it came up tails: [snip table]
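For concreteness, here's a minimal sketch (my own, not from the linked post) of why 1/3 maximizes the expected score under that rule, assuming the standard setup where heads means one awakening and tails means two, and each awakening is scored by the log of the credence assigned to the actual outcome:

```python
# Expected log-score for always answering p_heads at each awakening.
import numpy as np

def expected_log_score(p_heads: float) -> float:
    # Heads (prob 0.5): scored once on the credence given to heads.
    # Tails (prob 0.5): scored twice on the credence given to tails.
    return 0.5 * np.log(p_heads) + 0.5 * 2 * np.log(1 - p_heads)

credences = np.linspace(0.01, 0.99, 9801)
best = credences[np.argmax([expected_log_score(p) for p in credences])]
print(f"score-maximizing credence in heads: {best:.3f}")  # ~0.333
```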

Comment by MugaSofer on Anthropics doesn't explain why the Cold War stayed Cold · 2014-09-06T19:10:14.286Z · LW · GW

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

OK, well by analogy, what's the "payoff structure" for nuclear anthropics?

Obviously, we can't prevent it after the fact. The payoff we get for being right is in the form of information; a better model of the world.

It isn't perfectly analogous, but it seems to me that "be right" is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.

Comment by MugaSofer on [LINK] Speed superintelligence? · 2014-09-06T18:54:33.445Z · LW · GW

So do regular playthroughs, though; it's a video game. The first paragraph still remarks on "how different optimal play can be from normal play."

Comment by MugaSofer on Anthropics doesn't explain why the Cold War stayed Cold · 2014-09-06T17:59:53.461Z · LW · GW

The trouble is, anthropic evidence works. I wish it didn't, because I wish the nuclear arms race hadn't come so close to killing us (and may well have killed others), and was instead prevented by some sort of hard-to-observe cooperation.

But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor's Child, a modified Sleeping Beauty that I could go outside and play a version of right now if I wished.

The winning solution, that gives the right answer, is to use "anthropic" evidence.

If this confuses you, then I (seriously) suggest you re-examine your understanding of how to perform anthropic calculations.


In fact, what you are describing is not "anthropic" evidence, but just ordinary evidence.

I (think I) know that George VI had five siblings (because you told me so.) That observation is more likely in a world where he did have five siblings (because I guessed your line of argument pretty early in the post, so I know you have no reason to trick me.) Therefore, updating on this observation, it is probable that George VI had five siblings.

Is this an explanation? Sort of.

There might be some special reason why George VI had only five siblings - maybe his parents decided to stop after five, say.

More likely, the true "explanation" is that he just happened to have five siblings, randomly. It wasn't unusually probable, it just happened by chance that it was that number.

And if that is the true explanation, then that is what I desire to believe.

Comment by MugaSofer on The Great Filter is early, or AI is hard · 2014-09-06T17:43:56.543Z · LW · GW

If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true.

We have math for calculating these things, based on the probability different options are true.

For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know for anthropic reasons.

But, in fact, we can reason about this uncertainty - we can't get rid of it, but we can quantify it to a degree. We know how soon life appeared after conditions became suitable. So we can consider what kind of frequency that would imply for abiogenesis given Earthlike conditions and anthropic effects.

This doesn't give us any new information - we still don't know how abiogenesis works - but it does give us a rough idea of how likely it is to be nigh-impossible, or near-certain. (See the toy calculation below.)
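As a purely illustrative sketch of that kind of calculation (my own toy model with made-up numbers, not a real estimate): condition on the anthropic fact that life had to appear at some point within the habitable window for us to be asking, and compare how likely an early appearance is under an "abiogenesis is easy" hypothesis versus an "abiogenesis is nigh-impossible" one.

```python
# Toy Bayesian comparison: how much does "life appeared early" favour easy
# abiogenesis over nigh-impossible abiogenesis, after anthropic conditioning?
import math

T_WINDOW = 4.0    # assumed habitable window before observers must arise (Gyr)
T_OBSERVED = 0.5  # assumed time life actually took to appear (Gyr)

def likelihood(rate_per_gyr: float) -> float:
    """P(life by T_OBSERVED | life by T_WINDOW) for a Poisson-style process."""
    return (1 - math.exp(-rate_per_gyr * T_OBSERVED)) / \
           (1 - math.exp(-rate_per_gyr * T_WINDOW))

easy, hard = 10.0, 1e-6   # hypothetical abiogenesis rates per Gyr
bayes_factor = likelihood(easy) / likelihood(hard)
print(f"evidence favours 'easy' by a factor of ~{bayes_factor:.1f}")  # ~8x
```

(Note the modest size of that factor: once you condition on life having appeared at all, an early appearance is only limited evidence against it being a staggering coincidence.)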

Similarly, we can take the evidence we do have about the likelihood of Earthlike planets forming, the number of nearby stars they might form around, the likely instrumental goals most intelligent minds will have, the tools they will probably have available to them ... and so on.

We can't be sure about any of these things - no, not even the number of stars! - but we do have some evidence. We can calculate how likely that evidence would be to show up given the different possibilities. And so, putting it all together, we can put ballpark numbers to the odds of these events - "there is a X% chance that we should have been contacted", given the evidence we have now.

And then - making sure to update on all the evidence available, and recalculate as new evidence is found - we can work out the implications.

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-06T12:58:42.251Z · LW · GW

Ah, interesting! I didn't know that. Props to Limbaugh et al.

(Nationalizing airport security seems orthogonal to the TSA search issue, though.)

Comment by MugaSofer on The Great Filter is early, or AI is hard · 2014-09-06T12:56:53.416Z · LW · GW

Oh, a failed Friendly AI might well do that. But it would probably realize that life would develop elsewhere, and take steps to prevent us.

Comment by MugaSofer on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-09-05T18:37:19.410Z · LW · GW

I'm curious, how do you know they were sociopaths? You seem to imply your evidence was that they were unfaithful and generally skeevy individuals besides, but was there anything else?

(Actually, does anyone know how we know that sociopaths are better at manipulating people? I've absorbed this belief somehow, but I don't recall seeing any studies or anything.)

Comment by MugaSofer on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-09-05T18:33:52.901Z · LW · GW

Firstly, I just want to second the point that this is way too interesting for, what, a fifth-level recursion?

Secondly:

One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them.

Is this ... a winning strategy? In any real sense?

I mean, yes, it's easier to sleep with unattractive people. But you don't want to sleep with unattractive people. That is what "attractiveness" refers to - the quality of people wanting you [as a sexual/romantic partner, by default.]

Now, the fact that it then becomes easy for attractive psychopaths to create relationships for nefarious purposes is ... another matter.

But I'm confused as to why you see the choices as "player, but unethical" or "non-player, but good". Surely you want to be a "player" who has sex with people you are actually attracted to?

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-05T17:05:19.675Z · LW · GW

You can't fit billions of people in the UK.

You can, actually. It's called "the British Empire".

It was widely considered a bad idea the last time it was tried, but it is possible. The United Kingdom is not defined by its current set of borders or locations.

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-05T17:01:22.278Z · LW · GW

If Monroe was a hero, then Monroe's personality really doesn't fit with some of Quirrel's actions.

Also, the Defence Professor lied-with-truth about having stolen Quirrel's body outright "using incredibly Dark magic" when questioned on the real Quirrel's whereabouts.

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-05T16:58:04.855Z · LW · GW

... hmm. You know, depending on how separate the personalities are, it's possible the original ("zombie") Quirrel was simply stressed out of his mind from Voldemort essentially holding him prisoner in his own body.

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-05T16:55:13.114Z · LW · GW

When does the opposition to the Left ever respond with a little tit for tat? In the US, there are all sorts of people mouthing off big words about fighting government tyranny, while meekly standing by while their children are sexually assaulted by the TSA purportedly looking for nuclear weapons in their underwear.

Ah, I'm no expert in US politics, but I thought that was a Right-supported program? With what little of the Overton Window that covers "this is an absurd overreaction" lying on the metaphorical left-hand side?

Comment by MugaSofer on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-09-05T16:50:52.155Z · LW · GW

if wizards were public about their abilities, a higher proportion of wizards (even low-powered wizards) in the muggle population would be identified and trained

There's no such thing as a "low-powered wizard", and all wizards in Britain are automatically detected magically (at birth?)

It is implied that in HPMOR there are - presumably third-world? - countries where they "receive no letters of any kind". So potentially a complete breakdown of the masquerade might allow the least sane Muggle governments to track down and kidnap wizarding children for their own use. (I'm a little confused by this, though, since spontaneous untrained magic should be a serious issue if muggleborns aren't being dealt with.)