On maximising expected value 2011-10-26T11:15:44.269Z · score: -6 (11 votes)
Do not ask what rationalists should do 2011-07-13T10:47:37.971Z · score: 22 (25 votes)
Admit your ignorance 2011-03-15T10:28:33.654Z · score: 16 (18 votes)


Comment by thakil on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-11T16:06:23.975Z · score: 0 (0 votes) · LW · GW

I'm a little confused by your first point. I guess you're pointing out a grammar or spelling error, but the only one I can see is that you've used "a" instead of "an" ("evil" starts with a vowel), so no, I don't understand that point.

Your second point is correct; I meant to mention that as a cost. By appearing more moderate I cost myself support. I've rather hand-waved the idea that I can just convince everyone to fight for me in the first place, which is obviously a difficult problem! That said, I think you could be a little less obviously evil initially and still attract people to your fundamentalist regime.

Comment by thakil on Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) · 2016-01-11T09:14:21.100Z · score: 0 (0 votes) · LW · GW

"More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?"

I think this is an interesting question. If you want to create a new Islamic state, you could do worse than seizing on the chaos caused by a civil war in Syria and a weak state in Iraq. You will be opposed by

1)Local interests, i.e. the governments of Iraq and Syria

2)The allies of those local interests: in the case of Syria, Iran and Russia; in the case of Iraq, the US and Britain.

I think 2 is quite interesting, because how much other nations intervene will depend in part on how much their populations care. I would argue that the attacks on Russia and France represent a strategic mistake, because in both cases they encouraged those nations to be more active in their assault on ISIS.

Arguably the best way to discourage international interests from getting involved is to increase the local costs of intervention. Make sure that any attacks on you will kill civilians, and try to appear as legitimate and as boring as possible.

Essentially, if I wanted to run an evil fundamentalist oppressive state, I would look as cuddly as possible at first. In fact, I would probably pretend to be on the side of the less religiously motivated rebels, so I could get guns and arms. Then, when Assad is toppled, make sure that any oil I have is available. My model here would be to look as much like Saudi Arabia as possible, as they can do horrifying things to their own citizens provided they remain a key strategic ally in the region. Realpolitik will triumph over morality provided you can keep Western eyes off you.

The goal, always, would be to be as non-threatening as possible, to squeeze as many arms as you can out of Western allies (and Russian allies too, if you can work it, but if you topple Assad you probably can't), which puts you in a position to expand your interests. Then you need to provoke other nations into invading you, so you can plausibly claim to be the wronged party in any conflict where the US feels obliged to pick sides.

Comment by thakil on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-08T08:49:09.789Z · score: 0 (0 votes) · LW · GW

"However if the utility is dished out after the number has been spesified then an idler and a ongoer have exactly the same amount of utility and ought to be as optimal. 0 is not a optimum of this game so an agent that results in 0 utility is not an optimiser. If you take an agent that is an optimiser in other context then it ofcourse might not be an optimiser for this game."

The problem with this logic is the assumption that there is a "result" of 0. While it's certainly true that an "idler" will obtain an actual value at some point, so we can assess how they have done, there will never be a point in time that we can assess the ongoer. If we change the criteria and say that we are going to assess at a point in time then the ongoer can simply stop then and obtain the highest possible utility. But time never ends, and we never mark the ongoer's homework, so to say he has a utility of 0 at the end is nonsense, because there is, by definition, no end to this scenario.

Essentially, if you include infinity in a maximisation scenario, expect odd results.

Comment by thakil on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-07T11:54:13.610Z · score: 1 (1 votes) · LW · GW

Indeed. And that's what happens when you give a maximiser perverse incentives and infinity in which to gain them.

This scenario corresponds precisely to pseudocode of the kind

    oldval = 0
    newval = 1
    while newval > oldval:
        oldval = newval
        newval = newval + 1

which never terminates. This is only irrational if you want to terminate (which you usually do), but again, the claim that the maximiser never obtains value doesn't matter, because you are essentially placing an outside judgment on the system.

Basically, what I believe you (and the op) are doing is looking at two agents in the numberverse.

Agent one stops at time 100 and gains X utility.

Agent two continues forever and never gains any utility.

Clearly, you think, agent one has "won". But how? Agent two has never failed. The numberverse is eternal, so there is no point at which you can say it has "lost" to agent one. If the numberverse had a non-zero probability of collapsing at any point in time, then Agent two's strategy would instead be more complex (and possibly uncomputable if we distribute over infinity), but as we are told that agents one and two exist in a changeless universe and their only goal is to obtain the most utility, we can't judge either to have won. In fact, agent two's strategy only prevents it from losing; it can't win.

That is, if we imagine the numberverse full of agents, any agent which chooses to stop will lose in a contest of utility, because the remaining agents can always choose to stop and obtain their far greater utility. So the rational thing to do in this contest is to never stop.

Sure, that's a pretty bleak lookout, but as I say, if you make a situation artificial enough you get artificial outcomes.

Comment by thakil on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-07T08:41:59.007Z · score: 0 (0 votes) · LW · GW

But time doesn't end. The criteria of assessment are:

1)I only care about getting the highest number possible

2)I am utterly indifferent to how long this takes me

3)The only way to generate this value is by speaking this number (or, at the very least, any other methods I might have used instead are compensated explicitly once I finish speaking).

If your argument is that Bob, who stopped at Graham's number, is more rational than Jim, who is still speaking, then you've changed the terms. If my goal is to beat Bob, then I just need to stop at Graham's number plus one.

At any given time, t, I have no reason to stop, because I can expect to earn more by continuing. The only reason this looks irrational is we are imagining things which the scenario rules out: time costs or infinite time coming to an end.

The argument "but then you never get any utility" is true, but that doesn't matter, because I last forever. There is no end of time in this scenario.

If your argument is that in a universe with infinite time, infinite life and a magic incentive button then all everyone will do is press that button forever then you are correct, but I don't think you're saying much.

Comment by thakil on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T13:51:51.576Z · score: 1 (3 votes) · LW · GW

Then the "rational" thing is to never stop speaking. It's true that by never stopping I'll never gain utility, but by stopping early I miss out on future utility.

The behaviour of speaking forever seems irrational, but you have deliberately crafted a scenario where my only goal is to get the highest possible utility, and the only way to do that is to just keep speaking. If you suggest that someone who got some utility after 1 million years is "more rational" than someone still speaking at 1 billion years then you are adding a value judgment not apparent in the original scenario.

Comment by thakil on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T10:24:36.540Z · score: 1 (1 votes) · LW · GW

But apparently you are not losing utility over time? And holding utility over time isn't of value to me, otherwise my failure to terminate early is costing me the utility I didn't take at that point in time? If there's a lever compensating for that loss of utility then I'm actually gaining the utility I'm turning down anyway!

Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop.

Comment by thakil on We really need a "cryonics sales pitch" article. · 2015-08-07T09:14:07.262Z · score: 0 (0 votes) · LW · GW

A fairly small amount. Again, risk aversion says to me that a 1 in 1000 chance isn't worth much if I can only make that bet once.

Comment by thakil on We really need a "cryonics sales pitch" article. · 2015-08-05T09:18:28.612Z · score: 0 (0 votes) · LW · GW

Less than 1%. I haven't thought hard about these numbers, but I would say 1 has a probability of around 50-60%, 2 about 10% (as 2 allows for societal collapse, not just company collapse), 3 about 10% (being quite generous there) and 4 about 40%, which gives us 0.6 x 0.1 x 0.1 x 0.4 = 0.0024. If I'm more generous to 3, bumping it up to 80%, I get 0.0192. I don't think I could be more generous to 2, though. These numbers are snatched from the air without deep thought, but I don't think they're wildly bad or anything.
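For what it's worth, the arithmetic here is just a product of the four guessed component probabilities. A minimal sketch (every number is my own guess from the comment, not data):

```python
# Rough conjunction estimate for the four cryonics steps.
# Every probability here is a guess, not a measured value.
p_frozen = 0.6   # 1) successfully frozen upon death
p_stored = 0.1   # 2) kept preserved long enough
p_tech   = 0.1   # 3) revivification technology turns out to be possible
p_will   = 0.4   # 4) someone actually bothers to revive me

p_success = p_frozen * p_stored * p_tech * p_will
print(round(p_success, 4))   # 0.0024

# Being more generous to 3 (80% instead of 10%):
print(round(p_frozen * p_stored * 0.8 * p_will, 4))   # 0.0192
```

The point of writing it out is just that a conjunction of even moderately likely steps multiplies down quickly.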

Comment by thakil on We really need a "cryonics sales pitch" article. · 2015-08-05T07:50:59.392Z · score: 0 (0 votes) · LW · GW

My argument against cryonics:

The probability of being successfully frozen and then being revived later on is dependent on the following

1)Being successfully frozen upon death (loved ones could interfere, lawyers could interfere, the manner of my death could interfere)

2)The company storing me keeps me in the same (or close to it) condition for however long it takes for revivification technologies to be discovered

3)The revivification technologies are capable of being discovered

4)There is a will to revivify me

These all combine to make the probability of success quite low.

The value of success is obviously high, but it's difficult to assess how high: just because they can revivify me doesn't mean my life will then be endless (at the very least, a violent death could still kill me in the future).

This has to be weighed against the costs. These are

1)The obvious financial ones

2)The social ones. I actually probably weight these higher than 1: explaining my decision to my loved ones, having to endure mockery and possibly quite strong reactions.

The final point here is about risk aversion. While one could probably set up the utility calculation above to come out positive, I'm not sure that a utility calculation is the correct way to decide whether to take such a risk. That is, if the probability of a one-shot event is low enough, the expected value isn't a very useful indicator of my actual returns. For example, even if a lottery has a positive expected gain, it still might not be worth me playing it if the odds are very much against me making any money from it!
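The risk-aversion point can be made concrete with a toy example (all numbers invented): a lottery can have positive expected value per ticket while a one-shot player remains overwhelmingly likely to walk away with nothing.

```python
# Toy lottery, invented numbers: a ticket costs 1 unit and pays
# 2,000,000 units with probability one in a million.
p_win = 1e-6
payout = 2_000_000
cost = 1

# Expected value per ticket is positive...
ev = p_win * payout - cost
print(ev)   # 1.0

# ...but the chance of ever winning across, say, 100 one-shot plays
# is still tiny, so the expectation tells me little about my actual
# returns as a single bounded player.
n_plays = 100
p_any_win = 1 - (1 - p_win) ** n_plays
print(p_any_win)   # about 1e-4
```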

So how would you convince me?

1)Drop the costs, both social and financial. The former is obviously done by making cryonics more mainstream, the latter... well by making cryonics more mainstream, probably

2)Convince me that the probability of all 4 components is higher than I think it is. If the combined probability started hitting >5%, then I might start thinking about it seriously.

Comment by thakil on The horrifying importance of domain knowledge · 2015-08-03T07:37:56.220Z · score: 3 (3 votes) · LW · GW

So while many of these false beliefs are worth noting, it's worth thinking about why programmers make these mistakes in practice. While it might be the case that names can include numbers, it's probably also the case that the majority of numbers get into names via user error. Depending on the purpose of your database, it might be more valuable to prevent user error than to avoid excluding a minority of users.

The reason I mention this is that a lot of things in life are a study in trade-offs. Quantum mechanics isn't very good at describing how big things work, and classical mechanics isn't very good at describing how small things work.

Comment by thakil on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-06-01T16:59:06.632Z · score: 1 (1 votes) · LW · GW

Right, so to try to get to the end of this exhausting thread: your contention is that confrontational arguers would do better in revolutionary Russia (say) than non-confrontational arguers? If so, where is your evidence that this is the case? To be clear, I do not have evidence to the contrary, and would be happy to know where your confident claims are originating from.

Comment by thakil on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-31T16:37:53.060Z · score: 2 (2 votes) · LW · GW

OK. So what point are you making? That when stakes matter, no argumentative style is effective? Yes, "all" was hyperbolic, but I'm actually trying to get at what exactly you are trying to say. You seem to have a strong disagreement with this article, and I'd love to get to the heart of it.

Comment by thakil on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-29T12:49:02.775Z · score: 2 (2 votes) · LW · GW

Could... could you evidence the claim that the non-confrontational arguers in Russia all died while the confrontational arguers didn't?

[I appreciate that I haven't presented evidence that my narrative of what might occur is more likely than yours, but I'm not the one using the phrase "empirical".]

Comment by thakil on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-28T18:52:16.503Z · score: 2 (2 votes) · LW · GW

Um. I'm not sure you and I use empirical the same way.

More importantly, what on earth is your point here? My point was that a non-confrontational argumentative style might have benefits beyond simply getting along with fellow human beings: it might even save your skin in a totalitarian regime. Is your point of view that the way to save your skin in a totalitarian regime is to be aggressively argumentative? I suspect that if Jerry is going to denounce good ol' Bob, then he'll definitely denounce firebrand Bob. The answer might be for Bob to leave the country, but we are literally talking about how to talk to other human beings here.

Comment by thakil on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-28T08:27:42.316Z · score: 7 (7 votes) · LW · GW

You are confusing a dispute with an argument. By way of example, suppose I'm hanging out in Russia in 1917/18. I'm a little unhappy with all these communists who are getting into power, and would like them to have less political power. If I lose this dispute, my family and I may well be killed as traitors!

That still doesn't mean my best method of argument is to start disagreeing with every communist I bump into! Even if my arguments are sound and I'm very persuasive, I'm probably only going to sway a few, and I'll have made a name for myself as trouble. In addition, even if I think this is the best path, I'll need to pick my battles. A communist and I probably disagree on quite a lot, but if I want them to stop that Lenin fellow, I'd be better off focusing on our common ground and bringing them into my circle.

The common mistake I think a lot of people make is believing that you can change people's minds by arguing with them about that very thing in a clear, logical and rational manner. But this probably isn't true. It can sometimes work, if the other person is sympathetic to your views to begin with, and that is key! So the best way to get someone to change their mind is to try to make them like you and feel that you are part of their community. Then, when they think about capitalism, which to them is an evil vice, they'll think "but Bob says he's a capitalist, and he always buys me a round of drinks!", and then maybe you'll have a boozy chat one evening and find common ground, and maybe even tease out some contradictions in their world view, until one day Jerry the communist is Jerry the moderate, and when Lenin wants to point a gun at Bob, Jerry knows Bob is his friend and gives him warning.

Comment by thakil on In Praise of Maximizing – With Some Caveats · 2015-03-16T13:57:35.047Z · score: 1 (1 votes) · LW · GW

You seem to have made a convincing argument that most people are epistemic satisficers. I certainly am. But you don't seem to have made a compelling argument that such people are worse off than epistemic maximisers. I don't really see what benefits I would get from making an additional effort to truly identify my "terminal values". If I found myself dissatisfied with my current situation, that would be one thing, but if I were, I would try to improve it under my satisficer behaviour anyway. What you are proposing is that someone with 40 utility should put in some effort, presumably incurring some disutility from doing so (perhaps dropping to 35 utility), to see if they might be able to achieve 60 utility.

I actually think this is a fundamentally bad approach to how humans think. Take obtaining a romantic life partner, something a lot of people value. If I took this approach, it wouldn't be incredibly difficult to identify flaws in my current romantic situation, and perhaps to think about whether I could achieve something better. At the end of this reasoning chain, I might determine that there is indeed someone better out there and take the plunge for the true romantic bliss I want. However, I might instead come to the conclusion that while my current partner and situation are not perfect, they're probably the best I can achieve given my circumstances. But this is terrible! I can hardly wipe my memory of the last week or so of thought in which I carefully examined the flaws in my relationship and situation, and now all those flaws are going to fly into my mind, and may end up causing the end of a relationship which was actually the best I could achieve. This might sound like a very artificial reasoning pattern, but it's essentially the plot line of many a male protagonist in sitcoms and films who overthinks his relationship into unhappiness. Obviously if I have such behavioural patterns anyway, then I may need to respond to them, but it doesn't seem like a good idea to encourage them where they don't currently exist!

I actually have similar thoughts about many who hold religious beliefs. While I am aware that I am far more likely to be correct about the universe than they are, those beliefs do many of the people holding them fairly little harm and actually a lot of good: they provide a ready-made supportive community. Examination of those beliefs could well be very destructive to them, and provided the beliefs are not currently leading them towards destructive behaviours, I see no reason to encourage that examination.

Comment by thakil on An investment analogy for Pascal's Mugging · 2014-12-09T21:13:46.639Z · score: -2 (2 votes) · LW · GW

This post is essentially my response to Pascal's mugging.

a)If an event is extremely unlikely to occur, then trying to maximise expected utility is foolish unless the event is repeated many times. In the case of the lottery, you need to buy millions of tickets to have a reasonable chance of winning once (ignoring small prizes here). In the mugging, you would have to be mugged an absolutely absurd number of times before any of the muggers was even remotely likely to be telling the truth.

b)The hidden and secret response, which is this:

Even if the lottery has positive expected utility, there are likely to be alternate uses for your money which will give better returns! If you have enough money to buy that many tickets, you can invest it elsewhere and do better. This is true for Pascal's mugging too, where you can go and spend your 10 dollars to save lives immediately via a recommendation from GiveWell.
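Point a) can be made concrete with a back-of-the-envelope calculation (the one-in-a-billion figure is purely illustrative, not a claim about real muggers):

```python
import math

# If each mugger independently has a one-in-a-billion chance of
# telling the truth (an invented figure), how many muggings does it
# take before you've more likely than not met one honest mugger?
p_honest = 1e-9
n_muggings = math.log(0.5) / math.log(1 - p_honest)
print(f"{n_muggings:.3g}")   # on the order of 7e8 muggings
```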

Comment by thakil on xkcd on the AI box experiment · 2014-11-28T11:23:01.639Z · score: 1 (1 votes) · LW · GW

Well, that's what I get for finding a source without checking it properly I suppose.

Comment by thakil on xkcd on the AI box experiment · 2014-11-28T10:31:25.271Z · score: 1 (1 votes) · LW · GW

Indeed, but if Derren Brown guesses your mobile number, it's probably a "trick" rather than "mentalism". ThisSpaceAvailable has claimed that he can manipulate people. I would argue that this is weakly true, and that he uses it for the simpler tricks he performs, but for the really impressive effects he probably falls back on traditional magic tricks most of the time. The card trick by Simon Singh demonstrates that: he hasn't used mind manipulation to pick the cards; he's used a standard card trick and dressed it in the language of "mentalism".

Note that I make no claim that there is anything wrong with any of this! But Derren Brown is trying to fool you, and that is worth remembering. He also does a similar thing to Penn and Teller, where he shows you how some of the trick is done but leaves the most "amazing" part hidden (I'm thinking of the horse racing episode, which was great, and the chess playing trick).

Comment by thakil on xkcd on the AI box experiment · 2014-11-28T08:06:56.550Z · score: 1 (1 votes) · LW · GW

Well. While sleight of hand is a key tool in magic, traditionally confederates and even camera tricks have been too. David Blaine's famous levitation trick, for instance, looks so impressive on TV because they cheated and made it look more impressive than it is.

Mentalism as a magic power is not a real thing, sorry. It is a title magicians sometimes took, and still take, to make their act look different. Simon Singh has written on some of these tricks, and has a list of some of the tricks he performs as well.

Comment by thakil on xkcd on the AI box experiment · 2014-11-28T07:59:57.452Z · score: 0 (0 votes) · LW · GW

Well, put it this way, if Eliezer had performed a trick which skirted the rules, he could hardly weigh in on this conversation and put us right without revealing that he had done so. Again, not saying he did, and my suggestion upthread was one of many that have been posted.

Comment by thakil on xkcd on the AI box experiment · 2014-11-27T08:41:40.065Z · score: 0 (0 votes) · LW · GW

You're quite possibly right, and without access to the transcripts it's all just speculation.

Comment by thakil on xkcd on the AI box experiment · 2014-11-27T08:38:04.901Z · score: 2 (2 votes) · LW · GW

I'm fairly certain he is a fraud by your definition, then. Magicians often do these kinds of things, and Derren Brown is a magician. He does not have access to secret powers others know not of, so for each trick, think how someone else would replicate it. If you can't think of an honest way, then it's probably a trick.

That's not to say some of his tricks aren't done by known mental manipulation techniques (as far as I'm aware, hypnotists are reasonably genuine?), but if he is doing something that seems completely confounding, I am quite happy to guarantee that it is a trick, and not awesome mind powers he has unlocked.

Put it this way. During the Russian roulette trick, do you think it likely that Channel 4 would have okayed it if there was the slightest possibility that he could actually kill himself?

Comment by thakil on xkcd on the AI box experiment · 2014-11-23T10:03:45.850Z · score: 0 (2 votes) · LW · GW

Indeed. Given the lack of released transcripts, I assign a reasonable amount of probability to there being a trick of some sort involved (there have been some proposals of what that might be, e.g. "this will get AI research more donations"), although I don't think that would necessarily defeat the purpose of the experiment: after all, the AI got out of the box either way!

Comment by thakil on xkcd on the AI box experiment · 2014-11-21T12:53:34.421Z · score: 8 (8 votes) · LW · GW

It is, although I found this

"People who aren't familiar with Derren Brown or other expert human-persuaders sometimes think this must have been very difficult for Yudkowsky to do or that there must have been some sort of special trick involved,"

amusing, as Derren Brown is a magician. When Derren Brown accomplishes a feat of amazing human psychology, he is usually just cleverly disguising a magic trick.

Comment by thakil on xkcd on the AI box experiment · 2014-11-21T11:31:54.981Z · score: 7 (7 votes) · LW · GW

Not everyone. But I think an xkcd comic about the AI box experiment would be an opportunity to let everyone know about less wrong, not to have another argument about the basilisk which is a distraction.

Comment by thakil on xkcd on the AI box experiment · 2014-11-21T10:10:42.914Z · score: 10 (10 votes) · LW · GW

There are some times when a fight is worth having, and sometimes when it will do more harm than good. With regards to this controversy, I think that the latter approach will work better than the former. I could, of course, be wrong.

I am imagining here a reddit user who has vaguely heard of Less Wrong and then reads RationalWiki's article on the basilisk (or now, I suppose, an xkcd reader who does similar). I think their takeaway from the reddit argument posted by Eliezer might be to think again about the RationalWiki article, but I don't think they'd be particularly attracted to reading more of what Eliezer has written. Given that I rather enjoy the vast majority of what Eliezer has written, I feel like that's a shame.

Comment by thakil on xkcd on the AI box experiment · 2014-11-21T10:00:23.600Z · score: 14 (14 votes) · LW · GW

Yeah, I've read that, and I feel like it's a miss (at least for me). It's an altogether too serious and non-self-deprecating take on the issue. I appreciate that in that post Eliezer is trying to correct a lot of misperceptions at once, but my problem with that is:

a)A lot of people won't actually know about all these attacks (I'd read the RationalWiki article, which I don't think is nearly as bad as Eliezer says (that is possibly due to its content having altered over time!)), and responding to them all actually gives them the oxygen of publicity.

b)When you've made a mistake, the correct action (in my opinion) is to say "yup, I messed up at that point", give a very short explanation of why, and try to move on. Going into extreme detail gives the impression that Eliezer isn't terribly sorry for his behaviour. Maybe he isn't, but from a PR perspective it would be better to look sorry. Sometimes it's better to move on from an argument rather than trying to keep having it!

Further to that last point, I've found that Eliezer often engages with dissent by having a full argument with the person who is dissenting. This might be a good strategy from the point of view of persuading the dissenter: if I come in and say "cryonics sux", then a reasoned response might change my mind. But engaging so thoroughly with dissent whenever it occurs actually makes him look more fighty.

I'm thinking here about how it appears to outside observers. Just as in a formal debate the goal isn't to convince the person you are arguing with but to convince the audience, so with PR the point isn't to defeat the dissenter with your marvellous wordplay; it is to convince the audience that you are more sane than the dissenter.

Obviously these are my perceptions of how Eliezer comes across, I could easily be an exception.

Comment by thakil on xkcd on the AI box experiment · 2014-11-21T08:55:27.059Z · score: 38 (38 votes) · LW · GW

So I'm going to say this here rather than anywhere else: I think Eliezer's approach to this has been completely wrong-headed. His response has always come tinged with a hint of outrage and upset. He may even be right to be that upset and angry about the internet's reaction to this, but I don't think it looks good! From a PR perspective, I would personally stick with an amused tone. Something like:

"Hi, Eliezer here. Yeah, that whole thing was kind of a mess! I over-reacted, everyone else over-reacted to my over-reaction... just urgh. To clear things up, no, I didn't take the whole basilisk thing seriously, but some members did and got upset about it, I got upset, it all got a bit messy. It wasn't my or anyone else's best day, but we all have bad moments on the internet. Sadly the thing about being moderately internet famous is your silly over reactions get captured in carbonite forever! I have done/ written lots of more sensible things since then, which you can check out over at less wrong :)"

Obviously not exactly that, but I think that kind of tone would come across a lot more persuasively than the angry hectoring tone currently adopted whenever this subject comes up.

Comment by thakil on Using Bayes to dismiss fringe phenomena · 2014-10-08T14:22:54.902Z · score: 1 (1 votes) · LW · GW

As a heuristic, I suspect ignoring things ignored by most scientists will actually work pretty well for you. It's not an unreasonable assumption that "given no other information, the majority of scientists dismissing a subject lowers my probability that that subject has any grounding". That's a sensible thing to do, and does indeed use simple Bayesian logic.

Note that we essentially do this for all science, in that we tend to accept the scientific consensus. We can't be subject specialists in everything, so while we can do a bit of reading, it's probably fine to just think: what most scientists think is probably the closest to correct I am capable of being without further study.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-12T11:05:59.669Z · score: -2 (6 votes) · LW · GW

Ah, apologies: I meant the fact that their magics interact in weird ways. He's known about it for a long time, but hasn't really gone out of his way to research it. Unless I'm forgetting something, I don't think he's even looked it up in the library.

Re: the unicorn, yeah, I'm aware of his thoughts, but that is an extremely uncritical approach. Even if we accept that a unicorn is worth less than a wizard's life, he is basically saying it's fine to kill something to gain at most a year (as far as we can tell? It's implied that unicorn blood is not an indefinite cure). There should be some attempt at moral calculus there at least.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-12T08:50:04.834Z · score: 1 (7 votes) · LW · GW

Harry's blindness to Quirrell being pretty obviously bad news at this point is definitely something I'd like to see explained. I know that as the reader I get to see things more clearly than Harry does, but when you start thinking that painfully murdering magical creatures to preserve your life for a short amount of time is fine as long as the person doing it is someone you like, something is going wrong! I am fully expecting at this point to learn that Harry's thinking on Quirrell is being deliberately suppressed. After all, Harry is meant to be fundamentally curious about magic... why has he not investigated what could cause the anti-magic effect?

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-09T12:23:29.629Z · score: 5 (5 votes) · LW · GW

It seems highly likely that Harry has time-turned to hide Hermione's body and possibly accomplish other objectives to which we are not yet privy. Evidence for this: Harry now knows about the map when he did not before, and he reacts to Hermione's body going missing with no emotional response whatsoever. He broadly speculates on the likely suspects for body snatching, but if he cared (which, for resurrection purposes, he surely would), he would be far more animated: after all, he has been before.

There are some interesting wrinkles in this, though, in that we tend to get Harry's point of view, so we follow him when he travels back in time. There are a couple of exceptions, so this could be to heighten the mystery from the reader's perspective. Worth noting that we don't see any thoughts from Harry indicating that he has made this journey, however. This could be explained by him voluntarily Obliviating himself to be immune to Legilimens-based attacks? Or simply by us not seeing his thoughts.

We have also learned that we're near the end of the fic, with one and a half arcs to go! This is intriguing stuff, and strongly implies to me that Harry is going to have to tap into ultimate power some time soon to tie up loose plot threads. Primary among these is Voldemort's motivation, of course! We as readers can guess, provided we take Quirrell=Voldemort for granted, which I think the text has strongly indicated thus far. He seems to want to unite all of magical Britain/the magical world into a unit strong enough to subjugate mankind and thus prevent scientific destruction. His original plan appeared to be to unite everyone around David Monroe as an opponent to Voldemort. When that didn't work (not clear why... did the Death Eaters get out of control, or was Monroe not charismatic enough?) he decided to fake his death/accidentally got destroyed by Harry (this is, as yet, unclear), and then chose to make Harry the dark hero. It is possible/likely that Harry is either horcruxy or just Voldemort.

One revelation I would really like to see is why characters just don't seem to consider Quirrell as a suspect. With the exception of Moody, who really only suggests it as a precaution, it seems there's nothing Quirrell can do that would make people think he's behind the bad stuff. Now, to be fair, it's clear that Quirrell doesn't want to kill Harry, as he has demonstrated on multiple occasions, but the unseen villain has never attempted to kill Harry either!

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-09T09:04:12.626Z · score: 2 (2 votes) · LW · GW

I'm not entirely sure what this has to do with my comments, other than that it is an issue related to feminism in science fiction and fantasy writing. I don't really want to get into this argument, but would suggest simply that this situation is perhaps more complicated than your post suggests.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-08T14:14:26.120Z · score: 2 (2 votes) · LW · GW

Possibly. It would depend on when and how that is presented in the story, really. There is a problem with critiquing a work in progress, which I am aware of, but I think it's sort of inevitable with the sort of release schedule this story has.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-07T08:08:27.235Z · score: 4 (4 votes) · LW · GW

Indeed. When we are talking about facts about reality, then these kinds of things become a problem. When we are talking about people's critical response to a text, then if someone has that response, it's there for them at least. If multiple people do, then we can argue that either

a) there's something about the text which causes this reaction in a subgroup of people, or
b) this subgroup of people would have this reaction to every single text.

I assign b a lower probability because this is a reaction borne of particular chapters rather than the entire novel.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-07T08:05:51.750Z · score: 5 (9 votes) · LW · GW

Indeed. The point with fridging is that it is not an inherently bad thing, but through repetition, and because it is predominantly women being fridged to motivate men, it begins to be unfortunate.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-07T08:04:33.896Z · score: 5 (7 votes) · LW · GW

The point is that once an author is made aware of a trope which can be off-putting to some readers, they can attempt to avoid it in future. Obviously the author doesn't have to, and sometimes this particular trope might be necessary, but I don't think it's bad to say "hey, this doesn't work for me for x, y, and z reasons".

From a storytelling point of view, ignoring feminism for a minute, I personally find characters dying "randomly" unsatisfying. Joss Whedon does this occasionally, killing off characters essentially at random rather than letting said character have a heroic moment and then die. I appreciate that this is deeply realistic, but the story lover in me rebels. This is, of course, a different issue from the one I'm approaching, but I wonder if it isn't adding to some people's reaction.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T18:45:23.811Z · score: 7 (15 votes) · LW · GW

No, it doesn't indicate a problem with the critique. If I tell you that Super Mario is not a particularly feminist piece of work, I don't think you'd disagree, but I imagine you'd probably not agree that we shouldn't play it.

Criticism isn't about saying that something is unworthy of our time: quite the contrary, it's about looking at worthy pieces of work and seeing where they fail and where they succeed.

Yes, the best friend dying to motivate our hero is a classic device, and not one that is inherently bad. However, because so many heroes in literature and film are men, and so many of the friends that die are women, it begins to be problematic. Pointing out tropes and their abundance in culture isn't to say that an individual instance is necessarily bad, but to say that it might be worth thinking of new ways to approach the problem. For example, being sexually assaulted in one's past might be an excellent motivation for a female character, except it occurs in fiction a hell of a lot, so it has become tiresome.

For more on this I might point to the good (if a little Feminism 101) Tropes vs. Women in Video Games videos.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T17:37:02.612Z · score: 8 (8 votes) · LW · GW

To be more positive, I really did like the letter from Harry's parents. In the previous chapter, where Harry was thinking that he had ruined his relationship with his parents, I remember thinking that it was extremely unlikely that his parents would react that way. And, indeed, it was demonstrated that Harry's beliefs were based on his emotional immaturity rather than an accurate assessment of recent events. I wonder if Harry's undervaluing of the power of emotional bonds is in part caused by Quirrell's influence.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T17:34:35.151Z · score: 4 (4 votes) · LW · GW

Yup, this is pretty much my point. Of course, this fic being what it is, Hermione may be back alive in a couple of chapters' time, which will change things.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T17:33:34.511Z · score: 4 (8 votes) · LW · GW

Yeah, I meant to mention Amelia Bones, who is by far the most competent female character we've encountered thus far. She is not, of course, a particularly major character.

I guess when a character has an exciting fight offstage, and we as readers perceive them as mortally wounded and helpless, and our male character swears vengeance at their death... that's pretty much fridging to me. Regardless of the conclusion, the next few chapters at least will be devoted to Harry's actions, which are entirely predicated on Hermione's death.

If she had, as some suggested, died saving someone as part of her arc, or if we'd seen more of her fight, I do think that scene would have come across better. There may, of course, be excellent reasons that we did not get to observe that scene: perhaps we'll find out. I'm talking about the response now, and the immediate feelings associated with it.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T17:29:56.208Z · score: 2 (8 votes) · LW · GW

Well, quite. When I call this something problematic that can still be enjoyed, I find it problematic and still enjoy it!

With regards to whether an issue exists or not... I mean, if readers can perceive it, then it exists. Eliezer can decide that the story he's going to tell is just going to alienate those readers, or perhaps he can make adjustments now or in future to avoid that. My minor concern is that in some of his responses I don't feel like he has quite grasped the substance of the complaints: the problems exist, and trying to argue that they do not is probably a hiding to nothing.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T17:27:07.749Z · score: 3 (9 votes) · LW · GW

I do understand why the story is like that, and, to be clear, it's fine for HPMOR to fail a feminist critique! Lots of fantastic stories fail feminist critiques: this will bug some readers more than others, and it might be useful for a particular author to consider that a particular choice might alienate some readers because of that history.

Yes, there are lots of great reasons for Moody and Dumbledore to be how they are, but McGonagall is an Order member, so she could easily be different (and in earlier chapters, often is!).

To be clear, I do think this story in general does portray women pretty well, but the bullying arc and this death feel like misfires because they embody certain tropes without, perhaps, intending to.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 22, chapter 93 · 2013-07-06T09:33:55.585Z · score: 16 (38 votes) · LW · GW

Disclaimer: I am thoroughly enjoying HPMOR. That said, I just don't think Eliezer is quite grokking the substance of feminist complaints.

It makes complete sense within the story for all the female characters to do what they do, given what they've been defined to be and what circumstances have arisen. The death of Hermione makes complete sense. But it's a fridging, of course it's a fridging, because you are the author. You created these characters and put them into the situation. If you tell a Superman story where he kills, and you set up circumstances where the only thing he can do is kill, then, sure, within the story, we buy that Superman needed to kill in that circumstance. But you, the author, put him in that circumstance, made him and his opponents make choices which led to that death, because you wanted him to kill.

I don't think Eliezer necessarily intended to make the female characters in this fic weaker than the male ones, more passive, more timid, more prone to mistakes, but that's how it has turned out. And as for the defence that this is what he got from canon? Well, to be honest, it's quite clear that many of these characters aren't the characters from canon. Moody is far more competent, Dumbledore very different, and Quirrell... Yet Hermione and McGonagall are essentially as flawed as they were in the original text.

A feminist reading does not negate the quality of something, and I wouldn't necessarily say the story should be modified at this point at all, but it's something to be aware of. We can enjoy problematic things even while acknowledging they're problematic. HPMOR isn't the first and won't be the last piece of fiction to fail a feminist reading.

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-03T07:12:13.574Z · score: 3 (3 votes) · LW · GW

Indeed it is. In book 6 (?) it is made clear that children in magical families are essentially exempt because of this rule: it is assumed that parents will enforce the rules on their children. It is another example of prejudice in the magical world (which I believe is deliberate. Rowling explicitly and implicitly suggests repeatedly that the current set-up of the magical world is corrupt and prejudicial).

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T12:16:27.459Z · score: 3 (3 votes) · LW · GW

That's because (mild spoiler for the books) every young person has "the Trace" put on them, which can be tracked. Any magic done in the vicinity of someone with the Trace on them will be picked up. That said, the Ministry is apparently aware that it was a hover charm in book 2, so they can clearly detect the type of magic too...

Comment by thakil on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-02T08:21:14.796Z · score: 6 (6 votes) · LW · GW

Huh, reading that quote again it occurs to me that Harry doesn't reach for the oxygenating potion, he reaches for the syringe of glowing orange liquid that was the oxygenating potion. A truly prepared murderer would merely have to replace the syringe with... something else.

Comment by thakil on Fallacies of reification - the placebo effect · 2012-09-13T09:24:03.812Z · score: 4 (4 votes) · LW · GW

Note that I don't believe this is limited to cancer trials. Ethical considerations mean that in any situation where a treatment is known to be effective, withholding it would be wrong, so a new drug must be compared against the most effective existing one. In addition, the goal of a new drug is to be better than its competitors, and comparing it to a placebo wouldn't help with this.