Posts

Call for volunteers: Publishing the Sequences 2012-06-28T15:08:39.443Z
Dotting i's and Crossing t's - a Journey to Publishing Elegance 2012-03-14T21:23:28.381Z
Cryonics on Castle [Spoilers] 2011-10-04T09:46:55.391Z
Preference For (Many) Future Worlds 2011-07-15T23:31:46.156Z
Wiki: Standard Reference or Original Research? 2011-05-25T13:13:09.250Z
Rationality Quotes: January 2011 2011-01-03T05:24:50.403Z
The 9 Circles of Scientific Hell 2010-12-22T02:59:02.736Z
Explaining information theoretic vs thermodynamic entropy? 2010-11-04T23:41:01.232Z
A Rational Education 2010-06-23T05:48:20.854Z
Rationality Quotes: February 2010 2010-02-01T06:39:35.541Z
Open Thread: February 2010 2010-02-01T06:09:38.982Z

Comments

Comment by wedrifid on Dunbar's Function · 2016-04-24T07:35:48.788Z · LW · GW

We have the instinct to consume sugar because it is the most concentrated form of energy that humans can process, not because it is naturally paired with vitamins.

Sugar is desirable as the most easily accessible form of energy. Concentration matters more for long-term storage in a mobile form, hence the body's use of the more concentrated fat.

Comment by wedrifid on Too good to be true · 2015-01-02T10:43:00.993Z · LW · GW

UPI Reporter Dan Olmsted went looking for the autistic Amish. In a community where he should have found 50 profound autistics, he found 3.

He went looking for autistics in a community mostly known for rejecting Science and Engineering? It 'should' be expected that the rate of autism is the same as in the general population? That's... not what I would expect. Strong social penalties for technology use for many generations would be a rather effective way to cull autistic tendencies from a population.

Comment by wedrifid on Rationality Quotes December 2014 · 2014-12-31T11:14:19.726Z · LW · GW

I think this is about the only scenario on LW that someone can be justifiably downvoted for that statement.

I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.

Comment by wedrifid on Rationality Quotes December 2014 · 2014-12-31T11:06:22.403Z · LW · GW

Hubris isn't something that destroys you, it's something you are punished for. By the gods!

Or by physics. Not all consequences for overconfidence are social.

Comment by wedrifid on The Hostile Arguer · 2014-11-28T22:33:51.145Z · LW · GW

You were willing to engage with me after I said something "inexcusably obnoxious" and sarcastic, but you draw the line at a well reasoned collection of counterarguments? Pull the other one.

For those curious, I stopped engaging after the second offense - the words you wrote after what I quoted may be reasonable but I did not and will not read them. This has been my consistent policy for the last year and my life has been better for it. I recommend it for all those who, like myself, find the temptation to engage in toxic internet argument hard to resist.

It works even better in forums that do not lack the block feature. I was unable to avoid peripheral exposure to the parent comment when I was drawn to the thread to thank Markus.

Comment by wedrifid on The Hostile Arguer · 2014-11-28T22:25:34.627Z · LW · GW

Can't imagine who'd have guessed your exact intention just based on your initial response, though.

You are probably right and I am responsible for managing the predictable response to my words. Thank you for the feedback.

Comment by wedrifid on The Hostile Arguer · 2014-11-28T08:33:47.482Z · LW · GW

I was sarcastic, but you were sarcastic first.

I was not sarcastic. I was entirely straightforward and sincere.

I am afraid your conversation practices make me unable to engage with you further (unless, obviously, I perceive others to be negatively impacted by your words.)

Comment by wedrifid on The Hostile Arguer · 2014-11-28T04:56:51.450Z · LW · GW

Wow, thank God you've settled this question for us with your supreme grasp of rationality. I'm completely convinced by the power of your reputation to ignore all the arguments common_law made, you've been very helpful!

Apart from the inexcusably obnoxious presentation, the point hidden behind your sarcasm suggests you misunderstand the context.

Stating arguments in favour of arguing with hostile arguers is one thing. "You should question your unstated but fundamental premise" is far more than that. It uses a condescending normative dominance attempt to imply that the poster must not have 'questioned' or thought about a central part of the point because, presumably, if they had 'questioned' that they would have ended up agreeing with common_law instead.

In my judgement the opening poster deserves some moral support and protection against that kind of sniping. I chose (largely out of politeness) to express simple agreement with the poster, rather than a more aggressive and detailed rejection of common_law.

Since you (passive aggressively) asked:

Whether an argument is worthwhile depends primarily on the competence of the arguments presented, which isn't strongly related to the sincerity of the arguer.

This argument misses the point. The reason to avoid arguing with hostile arguers is not that it is impossible to learn anything from such people (although the expected information value is likely to be low). It is because doing so is dangerous or costly on a psychological, physical or economic level.

Of course if you enjoy arguing with hostile people or think it is potentially useful practice then go ahead. In much the same way if you think getting into physical fights will teach you self defence skills then go ahead and insult drunk guys at the bar till they take a swing at you.

Comment by wedrifid on The Hostile Arguer · 2014-11-28T02:22:35.021Z · LW · GW

You should question your unstated but fundamental premise: one should avoid arguments with "hostile arguers."

I just questioned that premise. It seems sound.

Comment by wedrifid on The Hostile Arguer · 2014-11-28T02:20:27.263Z · LW · GW

Trying to use reasoned discussion tactics against people who've made up their minds already isn't going to get you anywhere, and if you're unlucky, it might actually be interpreted as backtalk, especially if the people you're arguing against have higher social status than you do--like, for instance, your parents.

At times being more reasonable and more 'mature' sounding in conversation style even seems to be more offensive. It's treating them like you are their social equal and intellectual superior.

Comment by wedrifid on Why I will Win my Bet with Eliezer Yudkowsky · 2014-11-27T13:27:43.910Z · LW · GW

I want the free $10. The $1k is hopeless and were I to turn out to lose that side of the bet then I'd still be overwhelmingly happy that I'm still alive against all expectations.

Comment by wedrifid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-27T13:11:48.009Z · LW · GW

I consider that social policy proposal harmful and reject it as applied to myself or others. You may of course continue to refrain from speaking out against this kind of behaviour if you wish.

In the unlikely event that the net positive votes (at that time) given to Azathoth123 reflect the actual attitudes of the lesswrong community, the 'public' should be made aware so they can choose whether to continue to associate with the site. At least one prominent user has recently disaffiliated himself (and deleted his account) over a far less harmful sociopolitical concern. On the other hand, other people who embrace alternate lifestyles may be relieved to see that Azathoth's prejudiced rabble-rousing is unambiguously rejected here.

Comment by wedrifid on Superintelligence 11: The treacherous turn · 2014-11-27T08:01:09.362Z · LW · GW

Ignorant is fastest - only calculate answer and doesn't care of anything else.

Just don't accidentally give it a problem that is more complex than you expect. Only caring about solving such a problem means tiling the universe with computronium.

Comment by wedrifid on Why I will Win my Bet with Eliezer Yudkowsky · 2014-11-27T06:22:15.859Z · LW · GW

Wow. I want the free money too!

Comment by wedrifid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-26T22:05:59.553Z · LW · GW

2) Gays aren't monogamous. One obvious way to see this is to note how much gay culture is based around gay bathhouses. Another way is to image search pictures of gay pride parades.

This user seems to be spreading an agenda of ignorant bigotry against homosexuality and polyamory. It doesn't even temper the hostile stereotyping with much pretense of just referring to trends in the evidence.

Are the upvotes this account is receiving here done by actual lesswrong users (who, frankly, ought to be ashamed of themselves) or has Azathoth123 created sockpuppets to vote itself up?

Comment by wedrifid on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T21:51:48.840Z · LW · GW

This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T21:46:08.893Z · LW · GW

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI

I don't know who you are quoting but they are someone who considers AIs that will torture me to be friendly. They are confused in a way that is dangerous.

The AI acausally blackmails people into building it sooner, not into building it at all.

It applies to both - causing itself to exist at a different place in time or causing itself to exist at all. I've explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when there are other humans who can defect and create the torture-AI anyhow.

You asked "How could it?". You got an answer. Your rhetorical device fails.

Comment by wedrifid on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T12:34:07.343Z · LW · GW

Is TDT accurately described by "CDT + acausal comunication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
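
As a minimal sketch (not part of the original comment, using the usual hypothetical prisoner's dilemma payoffs): a CDT agent treats the other player's move as a fixed, causally independent fact, and against any fixed move defection pays more, so both agents defect no matter what was said.

    # Hypothetical payoffs: (my_move, their_move) -> my_payoff
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def cdt_choice(p_other_cooperates):
        # A CDT agent maximises expected payoff over a fixed belief about the
        # other player's move; the conversation can only change that belief.
        ev = {m: p_other_cooperates * PAYOFF[(m, "C")]
                 + (1 - p_other_cooperates) * PAYOFF[(m, "D")]
              for m in ("C", "D")}
        return max(ev, key=ev.get)

    # Defection dominates for every belief, so both CDT agents defect.
    assert all(cdt_choice(p) == "D" for p in (0.0, 0.5, 1.0))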

Comment by wedrifid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-26T12:06:38.735Z · LW · GW

Evidence?

Start here.

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T11:46:57.633Z · LW · GW

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T06:00:41.792Z · LW · GW

It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

In this case "be blackmailed" means "contribute to creating the damn AI". That's the entire point. If enough people do contribute to creating it then those that did not contribute get punished. The (hypothetical) AI is acausally creating itself by punishing those that don't contribute to creating it. If nobody does then nobody gets punished.

Comment by wedrifid on Rationality Quotes November 2014 · 2014-11-26T05:32:08.869Z · LW · GW

I'll be sure to ask you the next time I need to write an imaginary comment.

I wasn't the pedant. I was the tangential-pedantry analyzer. Ask Lumifer.

It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Your comment was fine. It would be true of most people; I'm not sure if Faul is one of the exceptions.

Comment by wedrifid on Rationality Quotes November 2014 · 2014-11-26T03:06:12.793Z · LW · GW

Realistically speaking?

Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim - our language doesn't handle it well. It's a realistic claim about the counterfactual response of a real brain to an unrealistic stimulus.

Comment by wedrifid on Breaking the vicious cycle · 2014-11-26T03:01:57.708Z · LW · GW

This seems weird to me.

It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.

XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact.

I concur.

Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth that sincerity did influence my evaluation of his behaviour and personality. XiXiDu doesn't seem like a troll, even when he does things that trolls also would do. My impression is that I would like him if I knew him in person.

Comment by wedrifid on Rationality Quotes November 2014 · 2014-11-26T02:53:33.576Z · LW · GW

I don't think it's literally factually :-D

I think you're right. It's closer to, say... "serious counterfactually speaking".

Comment by wedrifid on Breaking the vicious cycle · 2014-11-26T02:52:04.970Z · LW · GW

False humility? Countersignalling? Depression? I don't want to make an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

From the context I ruled out countersignalling and for what it is worth my impression was that the humility was real, not false. Given that I err on the side of cynical regarding hypocrisy and had found some of XiXiDu's comments disruptive I give my positive evaluation of Xi's sincerity some weight.

I agree that the hypothesis of low intelligence is implausible despite the testimony. Additional possible factors I considered:

  • Specific weakness in intelligence (eg. ADHD, dyslexia or something less common) that produced low self esteem in intelligence despite overall respectable g.
  • Perfectionistic or obsessive tendencies which would lead to harsh self judgements relative to an unrealistic ideal. (Potentially similar to the kind of tendencies which would cause the idealism failure mode described in the opening post.)
  • Not realising just how stupid 'average' is. (This is a common error. This wasn't the first time I've called 'bullshit' on claims to be below average IQ. Associating with highly educated nerds really biases the sample.)

(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

That would have been more accurate, but no, the context ruled that out.

I'm curious whether XiXiDu's confidence/objective self evaluation has changed over the intervening years. I hope it has.

Comment by wedrifid on November 2014 Monthly Bragging Thread · 2014-11-26T02:34:28.582Z · LW · GW

I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one cosmic colonisation, one on xrisks and AI.

I'm impressed. (And will look them up when I get a chance.)

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T02:33:16.649Z · LW · GW

For what it's worth, I don't think anybody understands acausal trade.

It does get a tad tricky when combined with things like logical uncertainty and potentially multiple universes.

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T01:03:13.434Z · LW · GW

Precommitment isn't meaningless here just because we're talking about acausal trade.

Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)

What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.

The time of this kind of decision is irrelevant.

Comment by wedrifid on xkcd on the AI box experiment · 2014-11-26T00:59:19.797Z · LW · GW

The key is that the AI precommits to building it whether we refuse or not.

The 'it' bogus is referring to is the torture-AI itself. You cannot precommit to things until you exist, no matter your acausal reasoning powers.

Comment by wedrifid on Breaking the vicious cycle · 2014-11-26T00:52:26.353Z · LW · GW

It's such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.

The best we can say is that it is a sufficiently predictable conclusion. Had the author not underestimated inferential distance he could easily have pre-empted your accusation with an additional word or two.

Nevertheless, it is still a naive (and incorrect) conclusion to draw based on the available evidence. Familiarity with human psychology (in general), internet forum arguing (in general), XiXiDu in particular or even a complete read of the opening thread would suggest that the advice you dismiss is clearly, obviously and overwhelmingly good advice for XiXIDu. You have also completely misread the style of dominance manoeuvre Anatoly was employing. Petty sniping of the kind you suggest wouldn't naturally fit with the more straightforward aggressively condescending style of the comment. ie. Even when interpreting Anatoly's motives in the worst possible light your interpretation is still sloppy.

Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.

'We' need to go on the expected consequences of our choices. Your choice was to accuse someone of questionable motives and use that as a premise to give advice for how to handle a serious mental health issue. You should expect that your behaviour will be negatively received by those who:

  • Don't want XiXiDu to be distracted by bad advice (that is, to be encouraged to continue exposing himself to a clearly toxic addiction) as a side effect of Jiro playing one-upmanship games. Or,
  • Don't like accusation of questionable motives based on mind-reading when there is reasonable doubt. Or,
  • Think you are wrong (in a way that socially defects against another).

Comment by wedrifid on Breaking the vicious cycle · 2014-11-25T04:28:50.893Z · LW · GW

XiXiDu is generally a smart person and most of his comments are very good.

XiXiDu has repeatedly claimed that he is not a smart person (unless I have confused him with someone else). (I didn't believe him but his claim is at least somewhat relevant.)

Comment by wedrifid on Breaking the vicious cycle · 2014-11-25T04:25:00.298Z · LW · GW

I can't read minds

Yet you spoke with the assumption that you could, even though many observers do not share your mind-reading conclusions. Hopefully in the future when you choose to do that you will not fail to see why you get downvotes. It's a rather predictable outcome.

Comment by wedrifid on Breaking the vicious cycle · 2014-11-25T04:22:29.765Z · LW · GW

XiXiDu should discount this suggestion because it seems to be motivated reasoning.

The advice is good enough (and generalizable enough) that the correlation to the speaker's motives is more likely to be coincidental than causal.

Addicts tend to be hurt by exposing themselves to their addiction triggers.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-24T07:30:07.607Z · LW · GW

When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.

Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. ie. From my perspective I get to argue with (literally) less-wrong people, with a correspondingly higher chance that I'm the one who is confused.

The difference between 2 and 3 becomes more obviously relevant when noise is introduced (eg. 99% accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...

In the simplest formulation the payoff for three is undetermined. But not undetermined in the sense that Omega's proposal is made incoherent. Arbitrary as in Omega can do whatever the heck it wants and still construct a coherent narrative. I'd personally call that an obviously worse decision but for simplicity prefer to define 3 as a defect (Big Box Empty outcome).

As for 4... A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision theoretic purposes.

Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?

Comment by wedrifid on Breaking the vicious cycle · 2014-11-24T02:29:26.181Z · LW · GW

Breaking the vicious cycle

I endorse this suggestion.

Don't Feed The Trolls!

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-24T02:09:27.838Z · LW · GW

If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.

It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.

"Ha! What if I don't choose One box OR Two boxes! I can choose No Boxes out of indecision instead!" isn't a particularly useful objection.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-24T02:04:41.190Z · LW · GW

It's me who has to run on a timer.

No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn't care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.

When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It's a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-24T01:54:18.605Z · LW · GW

As I argued in this comment, however, the scenario as it currently is is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.

Previous discussions of Transparent Newcomb's problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.

I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I'm not sure if consistency in this situation would even be possible for Omega. Any comments?

The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out there are (merely) two possible situations for the player to be in and Omega is able to counter-factually predict the response to either of them, with said responses limited to a boolean. That's not a lot of permutations. You could specify all 4 exhaustively if you are lazy.

IF (Two box when empty AND One box when full) THEN X
IF ...

Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
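
As a rough illustration (a hypothetical sketch, not the original specification), one way to write out all four counterfactual response profiles, with placeholder box contents standing in for the rewards that still need choosing:

    # Hypothetical exhaustive rule for transparent Newcomb. A strategy is a pair:
    # (two-box when the big box looks empty?, one-box when it looks full?).
    # The contents assigned to each profile are placeholders; choosing rewards
    # that best illustrate the problem is, as noted above, the actual work.
    def fill_big_box(two_box_when_empty: bool, one_box_when_full: bool) -> str:
        rule = {
            (True,  True):  "full",   # the "IF ... THEN X" case above
            (True,  False): "empty",
            (False, True):  "full",
            (False, False): "empty",
        }
        return rule[(two_box_when_empty, one_box_when_full)]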

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-24T01:03:37.519Z · LW · GW

I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept.

It may be the least convenient possible world. More specifically it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.

Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have the same problem

What happens when you try to pick the opposite of what you predict Omega does is something like what happens when you try to beat Deep Fritz 14 at chess while outrunning a sports car. You just fail. Your brain is a few pounds of fat approximately optimised for out-competing other primates for mating opportunities. Omega is a super-intelligence. The assumption that Omega is smarter than the player isn't an unreasonable one and is fundamental to the problem. Defying it is a particularly futile attempt to fight the hypothetical by basically ignoring it.

Generalising your proposed class to executing maximally inconvenient behaviours in response to, for example, the transparent Newcomb's problem is where it actually gets (tangentially) interesting. In that case you can be inconvenient without out-predicting the superintelligence and so the transparent Newcomb's problem requires more care with the if clause.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-22T03:41:18.554Z · LW · GW

No, because that's fighting the hypothetical. Assume that he doesn't do that.

It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith sometimes explicit clauses for the distracting edge cases need to be added.

Comment by wedrifid on Rationality Quotes November 2014 · 2014-11-21T01:00:54.607Z · LW · GW

While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of "rationality".

To be fair, while it is possible to have a coherent preference for death, far more often people have a cached heuristic to refrain from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a 'rationality' issue.

Nothing in the quote prescribes the preference; it merely illustrates reasoning that happens to follow from having preferences like those of Boy 2. If Boy 2 was saying (or implying) that Boy 1 should want to live to infinity then there would be a problem.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-20T02:13:36.829Z · LW · GW

when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".

No, much of it is bad professional philosophy. It's like bad amateur philosophy except that students are forced to pretend it matters.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-20T02:01:16.046Z · LW · GW

Curiously enough, I made no claims about ideal CDT agents.

True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.

The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.

That said, the grandparent's point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT or most other naive but not fundamentally insane decision algorithms. In the general case there is a damn good reason to make an abstract precommitment as soon as possible. UDT is an exception only because such precommitment would be redundant.

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-19T14:00:27.356Z · LW · GW

Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.

Curiously, this particular claim is true only because Lumifer's primary claim is false. An ideal CDT agent released at time T with the capability to self modify (or otherwise precommit) will as rapidly as possible (at T + e) make a general precommitment to the entire class of things that can be regretted in advance only for the purpose of influencing decisions made after (T + e) (but continue with two-boxing type thinking for the purpose of boxes filled before T + e).

Comment by wedrifid on Conceptual Analysis and Moral Theory · 2014-11-19T02:49:20.650Z · LW · GW

If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.
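
For instance, with the standard hypothetical payoffs ($1,000,000 in the big box, $1,000 in the small one) and, purely for illustration, a 99%-accurate predictor, the multiplication works out as:

    # Illustrative numbers only: standard Newcomb payoffs, 99%-accurate predictor.
    p_correct = 0.99
    big, small = 1_000_000, 1_000
    ev_one_box = p_correct * big                 # big box is full iff you were predicted to one-box
    ev_two_box = (1 - p_correct) * big + small   # you get the million only if Omega mispredicted
    print(ev_one_box, ev_two_box)                # 990000.0 vs 11000.0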

Comment by wedrifid on A discussion of heroic responsibility · 2014-11-18T07:47:12.898Z · LW · GW

I retract my previous statement based on new evidence acquired.

Comment by wedrifid on A discussion of heroic responsibility · 2014-11-18T03:48:43.651Z · LW · GW

I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."

Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?

Comment by wedrifid on A discussion of heroic responsibility · 2014-11-18T03:34:11.264Z · LW · GW

I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally

I fundamentally disagree. It is better for misleading comments to have lower votes than insightful ones. This helps limit the epistemic damage caused to third parties. Replying to every incorrect claim with detailed arguments is not viable and not my responsibility, either heroic or conventional - even though my comment history suggests that for a few years I made a valiant effort.

Silent downvoting is often the most time-efficient form of positive influence available and I endorse it as appropriate, productive and typically wiser than trying to argue all the time.

Comment by wedrifid on November 2014 Monthly Bragging Thread · 2014-11-16T04:21:51.995Z · LW · GW

If there's something about "ability to learn" outside of this, I'd be interested to hear about it.

Skills, techniques and habits are also rather important.