Comment by faws on LW Women- Minimizing the Inferential Distance · 2012-11-24T06:31:57.700Z · score: 5 (7 votes) · LW · GW

The writer and daenerys thought so, apparently, and it made sense when I read it.

My point is that I don't know exactly what they were thinking, which is why I'm asking. If they think that plowing in particular is a feminine activity, that would make it somewhat more understandable, but it's not at all obvious from the post that this (their thinking so) is actually the case, and even then I don't quite see what was supposed to be signified, since Christine was already regularly including things like making tea. Occam's razor suggests that a single misapprehension, the absence of which would make the whole section make sense, is more likely than multiple misapprehensions.

Comment by faws on LW Women- Minimizing the Inferential Distance · 2012-11-24T04:13:40.241Z · score: 10 (14 votes) · LW · GW

I don't understand how Christine, the female dungeon master who has apparently consistently been playing with approximately gender-balanced groups, not accommodating plowing fits in here. Plowing doesn't even seem like a particularly feminine activity (compared to, e.g., trying for peaceful relations with the elves).

Comment by faws on Checklist of Rationality Habits · 2012-11-12T17:47:35.226Z · score: 1 (1 votes) · LW · GW

Do you have an example of a military dictatorship where the immensely rich were allowed to keep their wealth, but couldn't use it to exert political influence?

Comment by faws on A place for casual, non-karmic discussion for lesswrongers? · 2012-11-08T04:20:47.479Z · score: 2 (2 votes) · LW · GW

> Or, you know, they could weight suffering in a continuous, derivable way that doesn't make a fundamental distinction in theory, but achieves that result in practice; amputating a finger is worth more than a billion blood-pricks, one broken arm is worth more than a billion billion nudges, and so on.

That's not (at all realistically) possible with a number as large as 3^^^3. If there is a number large enough to make a difference, 3^^^3 is larger than that number. You say "and so on", but you could list a billion things each second, each a billion times worse than the preceding, continue doing so until the heat death of the universe, and you still wouldn't get anywhere close to a difference even worth mentioning when there's a factor of 3^^^3 involved.
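
The scale argument can be checked with a rough back-of-the-envelope calculation (a sketch only; the ~1e100-year heat-death timescale is an assumption, and `tower_of_threes` is just an illustrative helper for Knuth's up-arrow notation):

```python
import math

def tower_of_threes(n):
    # 3^^n in Knuth up-arrow notation: a tower of n threes.
    # Only feasible for n <= 3; 3^^4 already has about 3.6e12 digits.
    result = 1
    for _ in range(n):
        result = 3 ** result
    return result

assert tower_of_threes(2) == 27
assert tower_of_threes(3) == 7625597484987

LOG10_3 = math.log10(3)

# The hypothetical list: a billion items per second until the heat death
# of the universe (assumed ~1e100 years), each item a billion (1e9) times
# worse than the one before.
seconds = 1e100 * 365.25 * 24 * 3600
log10_last_item = 9 * 1e9 * seconds        # log10 of (1e9)^(1e9 * seconds)

# Compare against the tower: log10(3^^4) = (3^^3) * log10(3) ~ 3.6e12.
log10_tower4 = tower_of_threes(3) * LOG10_3

# The final list item does dwarf 3^^4 ...
assert log10_last_item > log10_tower4
# ... but log10(3^^5) = (3^^4) * log10(3) itself has ~3.6e12 digits, so
# the list never even reaches 3^^5, let alone 3^^^3 = 3^^7625597484987.
log10_of_log10_tower5 = log10_tower4 + math.log10(LOG10_3)
assert log10_of_log10_tower5 > math.log10(log10_last_item)
```

Even that absurd list only lands somewhere between 3^^4 and 3^^5, while 3^^^3 is a tower of 7,625,597,484,987 threes.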

Comment by faws on 2012 Less Wrong Census/Survey · 2012-11-07T01:17:10.146Z · score: 0 (0 votes) · LW · GW

26 - Ebjf, gur guerr yvarf ner svkrq ng n cbvag naq ebgngr. 29, 35, 38 - Zvqqyr pbyhza vf gur genafsbezngvba cresbezrq ba gur yrsg pbyhza gb neevir ng gur evtug pbyhza.

Comment by faws on 2012 Less Wrong Census/Survey · 2012-11-04T17:43:35.695Z · score: 20 (20 votes) · LW · GW

I wonder whether there are visible conversion effects on the redwood question for native metric users? Estimates slightly on the short side and neatly divisible by three because the quick and dirty meter -> feet conversion is multiplying by three?
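
For what it's worth, the suspected artifact is easy to state numerically (a toy sketch; the 100-meter internal estimate is hypothetical):

```python
# A native metric user estimates in meters, then converts with the quick
# rule "feet = meters * 3" instead of the exact factor of 3.28084.
METERS_TO_FEET = 3.28084

estimate_m = 100                       # hypothetical internal estimate in meters
quick_ft = estimate_m * 3              # quick-and-dirty conversion
exact_ft = estimate_m * METERS_TO_FEET

assert quick_ft % 3 == 0               # answer is neatly divisible by three
assert quick_ft < exact_ft             # and biased toward the short side
shortfall = (exact_ft - quick_ft) / exact_ft
assert 0.08 < shortfall < 0.09         # roughly 8.6% too short
```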

Comment by faws on 2012 Less Wrong Census/Survey · 2012-11-04T17:31:47.738Z · score: 2 (2 votes) · LW · GW

Edit: For copulation's sake, whose kneecaps do I have to break to make Markdown leave my indentation the Christian Underworld alone, and who wrote those filthy blatant lies masquerading as comment formatting help?

does
     prefacing 
          with 4 extra spaces 
    work?

EDIT: Apparently not. Very likely a bug then.

Comment by faws on 2012 Less Wrong Census/Survey · 2012-11-04T16:40:14.619Z · score: 26 (26 votes) · LW · GW

Took the survey.

Comment by faws on Raising the forecasting waterline (part 1) · 2012-10-10T12:45:19.091Z · score: -1 (1 votes) · LW · GW

> That's surely an artifice of human languages and even so it would depend on whether the statement is mostly structured using "or" or using "and".

It's true of any language optimized for conveying information. The information content of a statement is inversely related to its prior probability, and therefore more or less proportional to how many other statements of the same form would be false.

In your counterexample, the information content of a statement in that basic form decreases with length.
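
The relationship can be made concrete with the standard surprisal formula, I(s) = -log2 p(s) (a sketch; the 0.5 prior below is an assumption for illustration, not from the thread):

```python
import math

def surprisal_bits(p):
    # Standard information content (surprisal) of a statement whose
    # prior probability of being true is p.
    return -math.log2(p)

p = 0.5  # assumed prior for each independent atomic claim

# "and"-structured statements: probabilities multiply, surprisal adds,
# so longer statements are less likely and carry more information.
p_and = p ** 3
assert math.isclose(surprisal_bits(p_and), 3 * surprisal_bits(p))

# "or"-structured statements: probability rises with each disjunct, so
# the statement's information content *decreases* with length -- the
# counterexample structure the parent comment points to.
p_or = 1 - (1 - p) ** 3
assert p_or > p
assert surprisal_bits(p_or) < surprisal_bits(p)
```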

Comment by faws on Raising the forecasting waterline (part 1) · 2012-10-10T10:56:54.833Z · score: 1 (1 votes) · LW · GW

> I disagree with this. The reason you shouldn't assign 50% to the proposition "I will win the lottery" is because you have some understanding of the odds behind the lottery. If a yes/no question which I have no idea about is asked, I am 50% confident that the answer is yes. The reason for this is point 2: provided I think a question and its negation are equally likely to have been asked, there is a 50% chance that the answer to the question you have asked is yes.

That's only reasonable if some agent is trying to maximize the information content of your answer. The vast majority of possible statements of a given length are false.

Comment by faws on A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified · 2012-09-20T14:54:26.074Z · score: 5 (5 votes) · LW · GW

The logic requires that your donations are purely altruistically motivated and you only care for good outcomes.

E.g., take donating to one of two cancer-research organizations, A or B. If your donations are purely altruistic and the consequences are the same, you should have no preference as to which of the organizations finds a new treatment. You have no reason to distinguish the case of you personally donating $1000 to both organizations and someone else doing the same from the case of you donating $2000 to A and someone else donating $2000 to B. And once the donations are made, you should have no preference between A or B finding the new treatment.

So the equivalent to your personal portfolio when making investments isn't your personal donations, but the aggregate donations of everyone. And since you aren't the only one making donations, the donations are already diversified, so you are free to pick something underrepresented with high yield (which will almost certainly still be underrepresented afterwards). If you manage 0.1% of a $10,000,000 portfolio with 90% in government bonds, it makes no sense to invest any of that 0.1% in government bonds in the name of diversification.
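
The portfolio analogy can be put in toy numbers (all figures illustrative, not from the post):

```python
# The aggregate "portfolio" of everyone's donations, with charity B
# underrepresented relative to where you think the marginal value is.
aggregate = {"A": 9_000_000, "B": 990_000}
my_budget = 10_000                     # your small share of total giving

total = sum(aggregate.values()) + my_budget

# Diversifying your own donation barely changes the aggregate mix:
split_B_share = (aggregate["B"] + my_budget / 2) / total
# Concentrating everything on B moves the aggregate mix the most:
concentrated_B_share = (aggregate["B"] + my_budget) / total

assert concentrated_B_share > split_B_share
# ...and B remains underrepresented in the aggregate either way, so
# concentrating never "overshoots" the diversification already provided
# by everyone else's donations.
assert concentrated_B_share < 0.5
```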

Comment by faws on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-17T16:21:40.738Z · score: 1 (1 votes) · LW · GW

That thought experiment doesn't make much sense. If the experiences were somehow switched, but everything else kept the same (i.e., all your memories and associations of red are still connected to each other and everything else in the same way), you wouldn't notice the difference; everything would still match your memories exactly. If there even is such a thing as raw qualia, there is no reason to suppose they are stable from one moment to the next; as long as the correct network of associations is triggered, there is no evolutionary advantage either way.

Comment by faws on Stupid Questions Open Thread Round 3 · 2012-07-08T09:38:10.634Z · score: 1 (1 votes) · LW · GW

Are you sure you aren't just pattern matching to similarity to known types of blackmail? Do you think it would be useful for an AI to classify it the same way (which was the starting point of this thread)?

Your link doesn't go into much detail, but it seems he was convicted because he was lying about and making up the negative consequences he threatened her with, and because he went out of his way to make the consequences of selling to someone else as bad as possible rather than maximizing revenue (or at least made her believe so). That would qualify this case as blackmail under the definition above, unlike either of our hypothetical examples.

Comment by faws on Stupid Questions Open Thread Round 3 · 2012-07-08T02:51:14.115Z · score: 0 (2 votes) · LW · GW

That's not blackmail at all. It seems like blackmail because of the questionable morality of selling secretly recorded sex tapes, but giving the movie star the chance to buy the tape first doesn't make the whole thing any less moral than it would be without that chance, and unlike real blackmail the movie star being known not to respond to blackmail doesn't help in any way.

Consider this variation: instead of a secret tape, the movie star voluntarily participated in an amateur porno that was intended to be publicly released from the beginning but was held up for some reason, all of which happened before the movie star became famous in the first place. The producer knows that releasing the tape will hurt her career and offers to sell her the tape to prevent it from being released. This doesn't seem like blackmail at all, and the only change was to the moral (and legal) status of releasing the tape, not to the trade.

Comment by faws on Transparency in Insurance (Edit: Solution found) · 2012-07-06T20:27:34.491Z · score: 1 (1 votes) · LW · GW

Cheaper by enough to make up for the extra years you pay premiums? E.g., will getting life insurance at 25 have cost less than getting life insurance at 40 by the time you are 60? If so, why would insurance companies set the rates that way? Are people who get life insurance early so much more responsible that they are significantly less likely to die even at higher ages?

Comment by faws on Can anyone explain to me why CDT two-boxes? · 2012-07-04T18:49:17.091Z · score: 2 (2 votes) · LW · GW

You are using the wrong sense of "can" in "cannot make different decisions". The everyday subjective experience of "free will" isn't caused by your decisions being indeterminate in an objective sense; that's the incoherent concept of libertarian free will. Instead, it seems to be based on our decisions depending on some sort of internal preference calculation, and the correct sense of "can make different decisions" to use is something like "if the preference calculation had a different outcome, that would result in a different decision".

Otherwise, results that are entirely random would feel more free than results that are based on your values, habits, likes, memories, and other character traits, i.e., the things that make you you. Not at all coincidentally, this is also the criterion for whether it makes sense to bother thinking about the decision.

You yourself don't know the result of the preference calculation before you run it, otherwise it wouldn't feel like a free decision. But whether Omega knows the result in advance has no impact on that at all.

Comment by faws on [Link] A superintelligent solution to the Fermi paradox · 2012-05-31T10:55:21.259Z · score: 4 (4 votes) · LW · GW

You mostly talk about your new blog instead of the idea the post claims to be about, and the post largely sounds like an advertisement. Two paragraphs summarizing your idea and one sentence talking about the blog (preferably worded as a disclaimer instead of an advertisement) would have been better.

Comment by faws on Cryonics without freezers: resurrection possibilities in a Big World · 2012-04-25T00:18:13.177Z · score: 2 (2 votes) · LW · GW

(Not Will, but I think I mostly agree with him on this point)

There is no such thing as a uniquely specified "next experience". There are going to be instances of you that remember being you and consider themselves the same person as you, but there is no meaningful sense in which exactly one of them is right. Granted, all instances of you that remember a particular moment will be in the future of that moment, but it seems silly to only care about the experiences of that subset of instances of you and completely neglect the experiences of instances that only share your memories up to an earlier point. If you weight the experiences more sensibly, then in the case of a rigorously executed quantum suicide the bulk of the weight will be in instances that diverged before the decision to commit quantum suicide. There will be no chain of memory leading from the QS to those instances, but why should that matter?

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T19:06:59.516Z · score: 2 (2 votes) · LW · GW

Mere dualism isn't enough to save libertarian free will. To the extent that your decision is characteristic of you, it is at least in principle predictable, at least probabilistically. The non-predictable component of your decision process is by necessity not even in principle distinguishable from that of Gandhi or Hitler. So how can you call the result of the non-predictable component a decision made with your free will?

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T07:28:29.555Z · score: 6 (6 votes) · LW · GW

At a cursory glance, the date you cite seems to be for the time the population they are descended from split from African populations, not for when they arrived in Australia. Genetic evidence cannot show where your ancestors lived, only how they were related to other populations (which might imply things about where they lived, provided you already know that for the other populations).

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T03:07:05.341Z · score: 3 (3 votes) · LW · GW

> If you assume both free will and prescience, it's natural.

You mean libertarian free will, which already doesn't make sense all by itself, and even then the combination doesn't make sense for additional reasons, starting with the fact that seeing anything would usually require that only main characters have free will.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T02:59:01.353Z · score: 1 (1 votes) · LW · GW

And I already remarked in the Luminosity thread that that makes no sense. It makes even less sense in a universe with time turners.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T02:46:27.399Z · score: 11 (11 votes) · LW · GW

Why should the time of an ominous decision be so relevant to seers? Even if the consequences of the decision have a big impact on the future, that future already was the future. It's not like there is a default future before you make your decision and a different future afterwards; your decision itself would already be part of the future of any earlier point in time. From a many-worlds perspective you might have several different possible futures, so your overall prospect of the future might significantly change after an important branching, but Harry's decision doesn't seem particularly influenced by recent random chance; it seems unlikely that from the perspective of six hours ago most future Harrys would make a completely different decision.

Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85

2012-04-18T02:30:26.958Z · score: 9 (12 votes)
Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-16T03:20:29.477Z · score: 2 (2 votes) · LW · GW

It's definitely not just that; otherwise he'd have tossed Harry a Knut portkey into an active volcano or the like. His plan seems to involve Harry's "dark side" taking permanent control (as he expected to be the case before he was surprised by the news that there was a separate dark side in the first place, rather than Harry just sometimes pretending to be non-dark).

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-14T08:37:30.971Z · score: 0 (0 votes) · LW · GW

(I lost track of what you were trying to argue, and the comment in isolation seemed to suggest that the non-trivial change had happened. A clause like "so the fact that this was carefully kept constant is evidence in favor of ..." would have helped. )

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-14T07:47:03.485Z · score: 1 (1 votes) · LW · GW

That would only have changed if the year he started Hogwarts changed, which it did not. The birth date didn't change by a whole year, just from late enough in 1926 to enter Hogwarts in 1938 to early enough in 1927 to enter Hogwarts in that same year.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-13T17:55:59.184Z · score: 4 (4 votes) · LW · GW

In addition to what the others have said, there are numerous other passages and oddities that hint at Quirrell = Voldemort = Harry's dark side. Harry and Quirrell describing a Horcrux between them when they talk about the Pioneer plate. Quirrell saying he "resolved his parental issues to his satisfaction" long ago, just after saying that being grateful for his parents could never have occurred to him. Quirrell having a much better model of Harry than of Draco or Hermione, particularly when Harry's dark side is involved. Harry's dark side and Quirrell both being extremely vulnerable to Dementors. Harry's dark side and Quirrell both being very good at pretending to be other people. Quirrell's extremely odd reaction when Harry talks about having a mysterious dark side, and his going along with the convenient conclusion that it's just another part of Harry. Dumbledore and Snape recognizing Voldemort's hand in Quirrell's actions. Harry's dark side hating Dumbledore. And so on.

The strongest piece of evidence for Lucius believing Harry to be Voldemort is his "I know it was you" message after breaking out Bellatrix.

Comment by faws on Newcomblike problem: Counterfactual Informant · 2012-04-12T20:34:26.698Z · score: 2 (6 votes) · LW · GW

Voted down for neither containing new and interesting ideas nor being funny.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-12T19:40:01.098Z · score: 6 (6 votes) · LW · GW

Harry concluded that it must have been a false memory charm. That was one of the more popular theories before, and Harry agreeing is probably as much confirmation as we are going to get in-story.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-12T19:01:14.059Z · score: 7 (9 votes) · LW · GW
  • Quirrell is pretty much universally agreed to be Voldemort, likely with the 5 places Harry names in chapter 46 as his Horcrux hiding places, among them the Pioneer plate, and probably Harry himself as in canon.
  • Sirius Black is generally agreed never to have been taken to Azkaban, most likely by somehow making Peter Pettigrew take his place.
  • Lucius is agreed to think that Harry is Voldemort (and to be right with respect to Harry's dark side).

There were a bunch of theories regarding the identities of Hat and Cloak and Santa Claus and the details of the prank on Rita Skeeter, but those were mostly settled in recent chapters.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-12T14:30:22.793Z · score: 2 (2 votes) · LW · GW

> The Hugo is a reader award, voted on by the attendees of Worldcon.

Ranked-preference voting, though. I'd expect a significant number of voters to rank "no award" ahead of any fanfic just on general principles. If it were just a single round of single-preference voting, the odds would look much better.
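
For illustration, here is a toy instant-runoff count showing how principled "No Award" rankings can sink a fanfic that would win a single plurality round (a sketch only; the ballots are made up and the Hugos' actual counting rules, including a separate No Award test, are more involved):

```python
def irv_winner(ballots):
    # Minimal instant-runoff: count first choices among remaining
    # candidates, eliminate the last-placed candidate until someone
    # holds a majority.
    candidates = {c for b in ballots for c in b}
    while True:
        counts = {c: 0 for c in candidates}
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    counts[choice] += 1
                    break
        total = sum(counts.values())
        leader = max(counts, key=counts.get)
        if counts[leader] * 2 > total:
            return leader
        candidates.discard(min(counts, key=counts.get))

# 5 voters like the fanfic best, 3 prefer another novel (ranking
# "No Award" above the fanfic on principle), and 4 put "No Award" first.
ballots = (
    [["HPMOR", "Novel", "No Award"]] * 5
    + [["Novel", "No Award", "HPMOR"]] * 3
    + [["No Award", "Novel", "HPMOR"]] * 4
)
# Under single-preference plurality HPMOR leads with 5 first places, but
# under IRV the Novel voters' transfers hand "No Award" a majority.
assert irv_winner(ballots) == "No Award"
```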

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T19:48:06.671Z · score: 1 (1 votes) · LW · GW

I don't have any saved copy, but a clear memory of the bolded part not being there. I think the wording is otherwise identical.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T15:13:43.060Z · score: 17 (21 votes) · LW · GW

"But I -" Her excellent memory helpfully replayed it for the thousandth time, Draco Malfoy telling her with a sneer that she'd never beat him when he wasn't tired, and then proceeding to prove just that, dancing like a duelist between the warded trophies while she frantically scrambled, and dealing the ending blow with a hex that sent her crashing against the wall and drew blood from her cheek - and then - then she'd -

This seems to suggest that her memories of the duel are a fabrication (or that the "Draco" she was fighting was someone else under the influence of Polyjuice). Draco has no particular reason to further provoke her and was genuinely unsure whether he could beat her. It doesn't seem obvious why anyone would do that if there was going to be a genuine duel anyway, though. Maybe the genuine memories were just touched up a bit? Alternatively, why might Draco behave as in that memory when there's no one else around? (The behavior would have made more sense for the second, public duel.)

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T14:43:32.344Z · score: 5 (5 votes) · LW · GW

The edit to 53 recently mentioned seems to be here:

"Your wand," murmured Bellatrix, "I took it from the Potters' house and hid it, my lord... under the tombstone to the right of your father's grave... will you kill me, now, if that was all you wished of me... I think I must have always wanted you to be the one to kill me... but I can't remember now, it must have been a happy thought..."

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T08:35:13.459Z · score: 16 (22 votes) · LW · GW

Harry nodded. " At least nobody's going to try hexing you, not after what the Headmaster said at dinner tonight. Oh, and Ron Weasley came up to me, looking very serious, and told me that if I saw you first, I should tell you that he's sorry for having thought badly of you, and he'll never speak ill of you again."

"Ron believes I'm innocent?" said Hermione.

"Well... he doesn't think you're innocent, per se..."

Ron approves of trying to murder Draco Malfoy?

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T07:34:51.639Z · score: 7 (9 votes) · LW · GW

As of last week Eliezer didn't have any plans to include an allegory to FAI, and expected any such allegory to work very badly in story terms ("suck like a black hole").

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T07:10:43.686Z · score: 2 (2 votes) · LW · GW

Wait, that doesn't work: for Voldemort's being a known Parselmouth to allow Hagrid a retrial after discovering the charm on the Sorting Hat, Tom Riddle and Voldemort would have to be known to be the same person.

EDIT: Eliezer jossed heroic Riddle in the meantime anyway.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T05:08:55.389Z · score: 3 (7 votes) · LW · GW

Thomas Marvolo Gaunt-Riddle, hero of wizarding Britain? Though since Dumbledore knows that Tom Riddle is Voldemort, that seems like quite the narrow escape; his game would be up if Bones and Dumbledore talked openly to each other.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T03:47:33.392Z · score: 14 (16 votes) · LW · GW

Why wasn't one of the first things Harry did when returning from the trial to expose Hermione to the light of the True Patronus while she was still unconscious (it looks like it didn't happen, at least)? He already knows it restores recent Dementor damage, has a plausible reason to know in that he experienced it himself under Dumbledore's eyes, and could have told Dumbledore to secure his cooperation. Is his anger at Dumbledore getting in the way?

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

2012-04-11T03:39:38.702Z · score: 4 (8 votes)
Comment by faws on Rationality Quotes April 2012 · 2012-04-09T17:11:06.706Z · score: 3 (5 votes) · LW · GW

Are you saying that the demographic you are talking about is special in using prejudice as the marker of evilness (as opposed to religious affiliation or whatever), or in taking that sort of attitude at all?

Comment by faws on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T11:41:37.981Z · score: 0 (2 votes) · LW · GW

> the thing that is identical is that you are trading utilities across people,

This is either wrong (the utility functions of the people involved aren't queried in the dust speck problem) or so generic as to be encompassed in the concept of "utility calculation".

Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake just as much, but IMO that's better than utility monsters.
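
A minimal sketch of the normalization idea (all names and numbers are illustrative, not a worked-out proposal): rescale each person's utility function to [0, 1] before summing, so nobody can dominate the aggregate by reporting huge stakes.

```python
def normalize(utilities):
    # Rescale one person's utilities so their worst outcome maps to 0
    # and their best outcome maps to 1.
    lo, hi = min(utilities.values()), max(utilities.values())
    return {outcome: (u - lo) / (hi - lo) for outcome, u in utilities.items()}

# A "utility monster" reports enormous raw stakes in every outcome:
monster = {"cake_to_monster": 1_000_000, "cake_to_other": 0}
other = {"cake_to_monster": 0, "cake_to_other": 1}

social = {
    outcome: normalize(monster)[outcome] + normalize(other)[outcome]
    for outcome in monster
}

# After normalization the monster's huge raw numbers buy no extra weight:
# the two outcomes tie instead of the monster always getting the cake.
assert social["cake_to_monster"] == social["cake_to_other"] == 1.0
```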

Comment by faws on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T11:01:46.701Z · score: 4 (6 votes) · LW · GW

(examples chosen for being at different points in the spectrum between the two options, not for being likely)

Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.

There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.

It might be possible to specify what we want in a more dynamic way than freezing in current customs.

Comment by faws on SMBC comic: poorly programmed average-utility-maximizing AI · 2012-04-06T10:37:42.547Z · score: 12 (18 votes) · LW · GW

The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different (e.g., the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc.).

If anything, it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren't actually the problem with Felix; the incentive to manipulate your own utility function that way is, among other things.)

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-04T20:33:49.053Z · score: 0 (2 votes) · LW · GW

Yes, and the FFN update alert already went out. My guess is that Eliezer posted the chapter and deleted it immediately afterwards, perhaps due to some formatting problem.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-04T14:19:14.345Z · score: 10 (12 votes) · LW · GW

If he apologizes he'll probably either do it in person or in a similar way to last time, when he apologized for being unfair after Fawkes started shouting via Flitwick.

One major problem with such a list is that he currently doesn't know how difficult it would be to earn more money.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-04T12:23:19.075Z · score: 4 (6 votes) · LW · GW

And there is no problem with that if it's restricted to non-blackmail interactions (except perhaps to the degree that it's mistaken by others to also apply to blackmail). Not responding to blackmail as a principled position and not valuing the life of the hostage highly enough for the amount asked are completely different things.

Otherwise it would have made sense for Voldemort (who wouldn't care about Death Eater families) to keep taking family members hostage and asking for lower and lower amounts until hitting the sum at which they are valued. Either that sum would be low enough to devastate the morale of the Order members (e.g., 100 galleons, and Voldemort asks for 101 the next time) or it would be high enough to drain their funds.

A refusal to respond to blackmail needs to be unconditional.

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-04T03:20:17.121Z · score: 6 (8 votes) · LW · GW

Dumbledore seems a bit off in equating the two situations. Lucius isn't threatening to send Hermione to Azkaban in the hope of getting something from Harry/Dumbledore; in fact he made clear that he would rather send her to Azkaban than receive the money. Therefore paying off the blood debt does not equal giving in to blackmail, and Harry can save her while still maintaining a consistent position of not giving in to blackmail. Engineering similar situations without making it apparent that they are engineered (and therefore blackmail) is probably too impractical to be worth the effort.

Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82

2012-04-04T02:53:41.790Z · score: 7 (8 votes)
Comment by faws on What is life? · 2012-04-02T10:07:57.191Z · score: -1 (3 votes) · LW · GW

Hence the word "largely".

Comment by faws on What is life? · 2012-04-01T23:46:46.335Z · score: 0 (2 votes) · LW · GW

Saying that moral weight is based on sentience is IMO largely a tautology. Sentience is mostly the word we use for "whatever poorly defined features of a mind give it moral weight".

Comment by faws on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-04-01T01:00:29.132Z · score: 1 (1 votes) · LW · GW

> Harry could still get a false negative. Remember, Harry will feel the impulse to offer a drink to Alice if and only if Alice is about to be surprised.

Again, we don't know that. The soda working in two steps, as you seem to suggest (detecting future surprise, then determining whether that surprise is sufficient to cause soda-spitting when drunk at the right time), is consistent with what we know about the soda. But that's not the only possibility consistent with what we know. The soda could also work in a single step and detect whether soda drunk at various points would be spat out, without directly detecting surprise at all.

Meta: Karma and lesswrong mainstream positions

2011-04-07T10:44:28.446Z · score: 10 (11 votes)

Luminosity (Twilight fanfic) discussion thread

2010-08-25T08:49:22.156Z · score: 12 (19 votes)

Spring 2010 Meta Thread

2010-03-11T10:27:25.726Z · score: 3 (8 votes)