Posts

Recreational Cryonics 2014-01-15T20:21:36.011Z
Precommitting to paying Omega. 2009-03-20T04:33:35.511Z

Comments

Comment by topynate on The Useful Idea of Truth · 2016-01-08T05:28:15.883Z · LW · GW

The first image is a dead hotlink. It's in the Internet Archive, and I've uploaded it to Imgur.

Comment by topynate on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T20:15:20.537Z · LW · GW

It was considerably easier before the Dunblane massacre (1996).

Comment by topynate on LINK: Superrationality and DAOs · 2015-01-27T00:11:28.787Z · LW · GW

That very much depends on what you choose to regard as the 'true nature' of the AI. In other words, we're flirting with the reification fallacy by regarding the AI as a whole as 'living on the blockchain', or even as being 'driven' by the blockchain. It's important to fix in mind what makes the blockchain important to such an AI and to its autonomy. This, I believe, is always the financial aspect. The on-blockchain process is autonomous precisely because it can directly control resources; it loses autonomy in so far as its control of resources no longer fulfils its goals. If you wish, you can consider the part of the AI which verifies correct computation and interfaces with 'financial reality' as its real locus of selfhood, but bear in mind that even the goal description/fulfilment logic can be zero-knowledge proofed and so exist off-chain. From my perspective, the on-chain component of such an AI looks a lot more like a combination of a robotic arm and an error-checking module.

Comment by topynate on Free Hard SF Novels & Short Stories · 2014-10-09T08:06:44.735Z · LW · GW

That was pretty good, thanks.

Comment by topynate on Confound it! Correlation is (usually) not causation! But why not? · 2014-07-09T15:31:08.940Z · LW · GW

There's an asymptotic approximation in the OEIS: a(n) ~ n! * 2^(n(n-1)/2) / (M * p^n), with M and p constants. So log(a(n)) = O(n^2), as opposed to log(2^n) = O(n), log(n!) = O(n log n) and log(n^n) = O(n log n).
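
Spelling out why the quadratic term dominates (a sketch; M and p are the unspecified constants from the OEIS entry):

```latex
\log a(n) = \log n! + \frac{n(n-1)}{2}\log 2 - \log M - n\log p
          = \Theta(n\log n) + \Theta(n^2) - O(n) = \Theta(n^2)
```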

Comment by topynate on War and/or Peace (2/8) · 2014-03-21T23:13:28.694Z · LW · GW

I want a training session in Unrestrained Pessimism.

Comment by topynate on Learning languages efficiently. · 2014-03-06T01:17:46.560Z · LW · GW

As someone who moved to Israel at the age of 25 with very minimal Hebrew (almost certainly worse than yours), went to an ulpan for five months, and then served in the IDF for 18 months while somehow avoiding the three-month language course I certainly should have been placed in, given my middle-of-ulpan level of fluency:

Ulpan (not army ulpan, real ulpan) is actually pretty good at doing what it's supposed to. I had a great time - it depends on the ulpan, but I haven't heard of a single one that would be psychologically damaging. Perhaps your experience with a less intensive system as a minor has coloured your views? I know that I got put off Hebrew by the quality of teaching I had around the age of 11-13. I'm not sure if you could get benefits to do a free course (it would depend on your status), but that would certainly take off the pressure to learn Hebrew quickly. You'd have to delay your draft date, which is usually possible.

'Army ulpan' is, according to my friends, a bit of a joke, but that's three months you'd be with a bunch of Anglos, being taught by 19-year-old girls and going on semi-regular day trips, which is fun, rather than jumping straight into basic training, which sucks. It's also three months less time being bored to tears at the end of your service, doing the same thing you've been doing for the last two years.

You can't learn spoken Hebrew by reading. No way. Not only do you need grammatical knowledge to know which vowels should be used, but the spoken and written forms become quite divergent above the most basic level. You need to speak and hear Hebrew for most of the day, every day - which could be a pretty lonely experience in the US. Think Hebrew pop music, armed with a copy of the lyrics and the translation. Learn the songs and what they mean - it's just repetition - and you'll automatically pick up the most common vocabulary. Hebrew grammar isn't that hard for an English speaker; the verb conjugation is traditionally considered the hard part, and that's mostly just memorization. Genders are a pain, but guessing a word's gender wrongly won't impair comprehension.

Comment by topynate on Recreational Cryonics · 2014-01-15T22:14:22.555Z · LW · GW

Then perhaps my assessment was mistaken! But in any case, I wasn't referring to the broad idea of cryonics patients ending up in deathcubes, but to their becoming open-access in an exploitative society - cf. the Egan short.

Comment by topynate on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-15T20:22:37.491Z · LW · GW

My attempt at a reply turned into an essay, which I've posted here.

Comment by topynate on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T12:28:35.053Z · LW · GW

It is likely that you would not wish for your brain-state to be available to all and sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

Comment by topynate on [Link] Valproic acid, a drug for brain plasticity · 2014-01-05T19:29:29.156Z · LW · GW

The article is crap, but referring to the sample size without considering the baseline success rate is misleading. If, say, the task were creating a billion-dollar company, and the treated group had even one success, that would be quite serious evidence for an effect, just because of how rare success is.
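
A toy version of that calculation (the base rate and group size are made-up numbers):

```python
# How surprising is one success in a small treated group when the
# baseline rate is tiny? All numbers here are illustrative assumptions.
base_rate = 1e-6    # assumed P(founding a billion-dollar company)
group_size = 30     # assumed size of the treated group

# Probability of seeing at least one success under "no effect":
p_null = 1 - (1 - base_rate) ** group_size
print(f"P(>=1 success | no effect) = {p_null:.1e}")  # ~3.0e-05
# Even a single success would be very surprising under the null,
# in a way that no objection about sample size can explain away.
```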

Comment by topynate on A Brief Overview of Machine Ethics · 2013-12-27T00:50:18.508Z · LW · GW

I can't find it by search, but haven't you stated that you've written hundreds of KLOC?

Comment by topynate on How habits work and how you may control them · 2013-10-14T20:09:48.820Z · LW · GW

The front page is, in my opinion, pretty terrible. The centre is filled with static content, the promoted posts are barely deserving of the title, and any dynamic content loads several seconds after the rest of the page, even though the titles of posts could be cached and loaded far more quickly.

Comment by topynate on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-09-16T16:30:31.993Z · LW · GW

This is analogous to zero-determinant strategies in the iterated prisoner's dilemma, posted about on LW last year. In the IPD, there are certain ranges of payoffs for which one player can enforce a linear relationship between his payoff and that of his opponent. That relationship may be extortionate, i.e. such that the second player gains most by always cooperating, but still gains less than her opponent.
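
For the curious, a minimal simulation of a Press-Dyson extortionate strategy (payoffs and extortion factor follow their 2012 paper; the opponent here is assumed to cooperate unconditionally):

```python
import random

# Standard payoffs T=5, R=3, P=1, S=0; extortion factor chi=3 yields the
# memory-one strategy below: P(cooperate) conditioned on the previous
# round's (my move, their move).
P_COOP = {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0.0}
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5), ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def simulate(rounds=200_000):
    x_total = y_total = 0
    last = ('C', 'C')
    for _ in range(rounds):
        x = 'C' if random.random() < P_COOP[last] else 'D'
        y = 'C'                      # opponent always cooperates
        px, py = PAYOFF[(x, y)]
        x_total, y_total = x_total + px, y_total + py
        last = (x, y)
    return x_total / rounds, y_total / rounds

sx, sy = simulate()
# The enforced linear relationship: (s_X - P) = chi * (s_Y - P).
print(f"ratio = {(sx - 1) / (sy - 1):.2f}")  # close to 3
```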

Comment by topynate on Open thread, September 2-8, 2013 · 2013-09-04T22:40:12.776Z · LW · GW

Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?

Comment by topynate on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-08-28T22:24:33.176Z · LW · GW

I doubt he can Transfigure antimatter. If he can, the containment will be very hard to get right, and he would absolutely have to get it right. How do you even stop it blowing up your wand, if you have to contact the material you're Transfiguring?

Maybe Tasers! They'd work against some shields, are quite tricky to make, and if you want lots of them they're easier to buy. Other things: encrypted radios, Kevlar armour (to avoid Finite Incantatem). Most things that can be bought for 5K could have been bought in Britain in the early 90s, apart from that sort of paramilitary gear. Guns are unlikely, because the twins would have heard of them.

Comment by topynate on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-08-28T22:17:58.117Z · LW · GW

If he can make a model rocket, he can make a uranium gun design. It's one slightly sub-critical mass of uranium with a suitable hole for the second piece, which is shaped like a bullet and fired at it using a single unsynchronised electronic trigger down a barrel long enough to get up a decent speed. Edit: And then he or a friendly 7th year casts a charm of flawless function on it.

Comment by topynate on Two angles on Repetitive Strain Injury · 2013-08-26T21:12:13.881Z · LW · GW

Was I alone in expecting something on recursive self improvement?

Comment by topynate on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T21:36:27.676Z · LW · GW

Perhaps gewunnen, meaning conquered, and not gewunen. I don't think you can use the present subjunctive after béo anyway; béo here is almost surely the 3rd person singular subjunctive of béon, the verb that we know as to be. If the word is gewunnen, then we can interpret it as the past participle, which makes a lot more sense (and fits the provided translation). The past participle of gewunian is gewunod, which clearly isn't the word used here.

Edit: translator's automatic conjugation is broken, sorry for copy-paste.

Comment by topynate on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T19:23:27.638Z · LW · GW

Aha! The prophecy we just heard in chapter 96 is Old English. However, by the 1200s, when, according to canon, the Peverell brothers were born, we're well into Middle English (which Harry might well understand on first hearing). I was beginning to wonder if there was not some old wizard or witch listening, for whom that prophecy was intended.

There's still the problem of why brothers with an Anglo-Norman surname would have Old English as a mother tongue... well, that could happen rather easily with a Norman father and English mother, I suppose.

And the coincidence of Canon!Ignotus Peverell being born in 1214, the estimated year of Roger Bacon's birth, seemed significant too... I shall have to go back over the chapters referring to his diary.

Comment by topynate on Reflection in Probabilistic Logic · 2013-03-23T22:21:19.748Z · LW · GW

If you cock up and define a terminal value that refers to a mutable epistemic state, all bets are off. Like Asimov's robots on Solaria, who act in accordance with the First Law but have 'human' redefined not to include non-Solarians. Oops. The trouble is that in order to evaluate how you're doing, there has to be some coupling between values and knowledge, so you must prove the correctness of that coupling. But what is correct? Usually not too hard to define for the toy models we're used to working with; damned hard as a general problem.

Comment by topynate on [Link] Your genes, your rights – FDA’s Jeffrey Shuren not a fan · 2011-03-11T18:49:51.868Z · LW · GW

I have a comment waiting in moderation on the isteve post Konkvistador mentioned, the gist of which is that the American ban on the use of genetic data by health insurers will cause increasing adverse selection as these services get better and cheaper, and that regulatory restrictions on consumer access to that data should be seen in that light. [Edit: it was actually on the follow-up.]

Comment by topynate on Who's likely to write the AI? A hypothesis · 2011-02-17T00:00:28.896Z · LW · GW

A pertinent question is what problem a government or business (not including a general AI startup) may wish to solve with a general AI that is not more easily solved by developing a narrow AI. 'Easy' here factors in the risk of failure, which will at least be perceived as very high for a general AI project. Governments and businesses may fund basic research into general AI as part of a strategy to exploit high-risk high-reward opportunities, but are unlikely to do it in-house.

One could also try and figure out some prerequisites for a general AI, and see what would lead to them coming into play. So for instance, I'm pretty sure that a general AI is going to have long-term memory. What AIs are going to get long-term memory? A general AI is going to be able to generalize its knowledge across domains, and that's probably only going to work properly if it can infer causation. What AIs are going to need to do that?

Comment by topynate on On Charities and Linear Utility · 2011-02-06T13:56:50.007Z · LW · GW

Consider those charities that expect their mission to take years rather than months. These charities will rationally want to spread their spending out over time. Charities with large endowments in particular will attempt to live off the interest on their money rather than deplete the principal, although those that expect to receive more donations over time can be more liberal.

This means that a single donation slightly increases the rate at which such a charity does good, rather than enabling it to do things which it could not otherwise do. So the scaling factor of the endowment is restored: donating $1000 to a charity with a $10m endowment increases the rate at which it can sustainably spend by 1000/10^7 = 0.01%.
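
As a toy illustration (the 5% draw rate is a made-up assumption):

```python
# A charity spending only the return on its endowment.
endowment = 10_000_000    # $10m endowment
real_return = 0.05        # assumed sustainable draw rate
donation = 1_000

before = real_return * endowment               # $500,000/year
after = real_return * (endowment + donation)   # $500,050/year
print(f"{(after - before) / before:.2%}")      # 0.01%
```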

This does not mean that a charity will say: look, if our sustainable spending rate were 0.1% higher we'd have enough available this year to fund the 'save a million kids from starvation' project - oh well. They'll save the million kids and spend a bit less next year, all other things being equal. In other words, the charity, by maximising the good it does with the money it has, smooths out the change in its utility for small differences in spending relative to the size of its endowment; i.e., the higher-order derivatives are low. So long as the utility you get from a charity comes from it fulfilling its stated mission, your utility will also vary smoothly with small spending differences.

Likewise, rational collaborating charities will each adjust their spending to increase any mutually beneficial effects. So the mixed derivatives are low, too.

The upshot is that unless your donation is of a size that it can permanently and significantly raise the spending power of such a charity, you won't be leaving the approximately linear neighbourhood in utility-space. So if you're looking for counterexamples, you'll need to find one of:

  • charities with both low endowments and low donation rates, which nevertheless can produce massive positive effects with a smallish amount of money
  • charities which must fulfil their mission in a short time and are just short of having the money to do so.

Comment by topynate on post proposal: Attraction and Seduction for Heterosexual Male Rationalists · 2011-02-06T05:31:26.475Z · LW · GW

I don't think you should write the post. Reason: negative externalities.

Comment by topynate on Meta: A 5 karma requirement to post in discussion · 2011-01-27T23:35:11.822Z · LW · GW

It looks like wezm has followed your suggestion, with extra hackishness - he added a new global variable.

Comment by topynate on Karma Motivation Thread · 2011-01-27T23:17:01.480Z · LW · GW

Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.

Comment by topynate on A plan for spam · 2011-01-26T02:54:42.052Z · LW · GW

I just gave myself a deadline to write a patch for that problem.

Edit: Done!

Comment by topynate on Karma Motivation Thread · 2011-01-26T02:52:08.514Z · LW · GW

Task: Write a patch for the Less Wrong codebase that hides deleted/banned posts from search engines.

Deadline: Sunday, 30 January.

Comment by topynate on Trustworthiness of rational agents · 2011-01-25T16:40:43.205Z · LW · GW

The thrust of your argument is that an agent that uses causal decision theory will defect in a one-shot Prisoner's Dilemma.

You specify CDT when you say that

No matter what Agent_02 does, actually implementing Action_X would bear no additional value

because it implies that Agent_01 looks at the causal effects of do(Action_X) and decides what to do based solely on them. It's a Prisoner's Dilemma because Action_X corresponds to Cooperate and not(Action_X) to Defect, with an implied Action_Y that Agent_02 could perform that is of positive utility to Agent_01 (hence, 'trade'). It's one-shot because, without causal interaction between the agents, they can't update their beliefs.

That CDT-using agents unconditionally defect in the one-shot PD is old news. That you should defect against CDT-using agents in the one-shot PD is also old news. So your post rather gives the impression that you haven't done the research on the decision theories that make acausal trade interesting as a concept.
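
To make the first claim concrete, a toy sketch (standard PD payoffs assumed):

```python
# Why a CDT agent defects in the one-shot Prisoner's Dilemma.
# Standard payoffs to the row player: T=5 > R=3 > P=1 > S=0.
R, S, T, P = 3, 0, 5, 1

# CDT evaluates do(my_action) while holding the opponent's (causally
# independent) action fixed:
for their_move, (if_cooperate, if_defect) in {
    'they cooperate': (R, T),   # defecting yields T > R
    'they defect':    (S, P),   # defecting yields P > S
}.items():
    best = 'defect' if if_defect > if_cooperate else 'cooperate'
    print(f"{their_move}: best response is {best}")
# Defection dominates in both cases, so CDT defects unconditionally.
```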

Comment by topynate on What do superintelligences really want? [Link] · 2011-01-25T11:29:50.173Z · LW · GW

And how do you propose to stop them? Put a negative term in their reward functions?

Comment by topynate on Perfectly Friendly AI · 2011-01-24T19:42:50.464Z · LW · GW

This is a TDT-flavoured problem, I think. The process that our TDT-using FAI uses to decide what to do with an alien civilization it discovers is correlated with the process that a hypothetical TDT-using alien-Friendly AI would use on discovering our civilization. The outcome in both cases ought to be something a lot better than subjecting us/them to a fate worse than death.

Comment by topynate on Who are these spammers? · 2011-01-20T20:19:58.392Z · LW · GW

If that's the case, then when a page is hidden, its metadata should be updated to remove it from the search indexes. If you search 'pandora site:lesswrong.com' on Google, all the pages are still there and can be followed back to LW. That is to say, the spammers are still benefiting from every piece of spam they've ever posted here.
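
For reference, two standard mechanisms for this (a sketch; names are illustrative, not the actual LW/Reddit code):

```python
# Two standard ways to ask crawlers to drop a hidden page from
# their indexes.

# 1. A robots meta tag emitted in the page's <head>:
NOINDEX_TAG = '<meta name="robots" content="noindex">'

# 2. An X-Robots-Tag HTTP header set on the response, e.g. in a
#    WSGI/Pylons-style controller (hypothetical helper):
def add_noindex_header(response):
    response.headers['X-Robots-Tag'] = 'noindex'
    return response
```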

Comment by topynate on Theists are wrong; is theism? · 2011-01-20T02:25:31.095Z · LW · GW

All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren't any more evidence for the existence of an intelligent cause of the universe than the existence of humans is: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.
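
In symbols (a sketch; G stands for an intelligent cause of the universe, H for the existence of humans, M for macroeconomics):

```latex
P(G \mid M, H) = P(G \mid H), \quad\text{i.e.}\quad G \perp M \mid H
```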

Comment by topynate on Theists are wrong; is theism? · 2011-01-20T01:28:57.629Z · LW · GW

If you don't mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed, without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed theological explanations for the differences between the Bible and science, but that I couldn't learn them yet; to my recollection, though, I was never actually told that - I just worked it out from the other things I knew.

Comment by topynate on No One Can Exempt You From Rationality's Laws · 2011-01-19T18:21:20.992Z · LW · GW

It's roughly as many words as are spoken worldwide in 2.5 seconds, assuming 7450 words per person per day. It's very probably less than the number of English words spoken in a minute. It's also about the number of words you can expect to speak in 550 years. That means there might be people alive who've spoken that many words, given the variance of word-production counts.

So, a near inconceivable quantity for one person, but a minute fraction of total human communication.
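
A back-of-the-envelope check (the 7450 words/day figure is from above; the world population of seven billion is my added assumption):

```python
words_per_day = 7450          # assumed words spoken per person per day
population = 7e9              # assumed world population

worldwide_per_second = population * words_per_day / 86_400
quantity = worldwide_per_second * 2.5          # words spoken worldwide in 2.5 s
years_for_one_person = quantity / (words_per_day * 365.25)

print(f"{quantity:.1e} words")                 # ~1.5e9
print(f"~{years_for_one_person:.0f} years")    # ~555, i.e. the '550 years' above
```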

Comment by topynate on "Manna" by Marshall Brain · 2011-01-19T17:02:22.800Z · LW · GW

//Not an economist//

The minimum wage creates a class of people whom it isn't worth hiring (their productivity is less than their cost of employment). If you have a device which raises the productivity of these guys, they can enter the workforce at the minimum wage.

Additionally, there may be zero-marginal-product workers - workers whose cost of employment equals the marginal increase in productivity that results from hiring them. This could happen in a contracting job market, if the fear of losing employment causes other workers to increase their productivity enough: you could then fire Jack and see the productivity of John increase enough to match the net-of-costs productivity that Jack provided. If such workers exist, then they can provide a new source of labour even in the absence of minimum wage laws.

I agree with you that there's a lack of economic logic in the story, though.

Comment by topynate on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-19T16:03:17.411Z · LW · GW

I thought we were talking about how to use necessary requirements without risking a suit, not how to conceal racial preferences by using cleverly chosen proxy requirements. But it looks like you can't use job application degree requirements without showing a business need either.

Comment by topynate on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-19T15:31:21.006Z · LW · GW

You can put degree requirements on the job advertisement, which should act as a filter on applications, something that can't be caught by the 80% rule.

(Of course, universities tend to use racial criteria for admission in the US, something which, ironically, can be an incentive for companies to discriminate based on race even amongst applicants with CS degrees.)

Comment by topynate on It's not like anything to be a bat · 2011-01-18T22:41:55.849Z · LW · GW

it says nothing about the properties that really define qualia, like the "redness" that we've been talking about in another thread

So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn't have anything to do with the referent of 'redness'. It looks like your obvious premise that redness isn't reducible implies epiphenomenalism. Which is absurd, obviously.

Edit: Wow, you (nearly) bite the bullet in this comment! You say:

Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.

I claim that mental states can be regarded as causes; that they are indeed a shorthand for immensely complicated physical details (and for computational details that are significantly less, but still quite, complicated); and further that they cause a lot of things. For instance, they're a cause of this comment. I claim that the word 'cause' can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation.

So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?

Comment by topynate on Is there a way to quantify the relationship between Person1's Map and Person2's Map? · 2011-01-09T17:27:11.698Z · LW · GW

Have you heard of the Kullback-Leibler divergence? One way of thinking about it is that it quantifies the amount you learn about one random variable when you learn something about another random variable. I.e., if your variables are X and Y, then D(p(X|Y=y) || p(X)) is the information gain about X when you learn Y=y. It isn't a metric, as it isn't symmetric: D(p(X|Y=y) || p(X)) != D(p(X) || p(X|Y=y)). Nevertheless, with two people with different probability distributions on some underlying space, it's a good way of representing how much more one knows than the other.
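
A concrete toy example (both distributions are made up):

```python
import numpy as np

# KL divergence D(p || q) for discrete distributions, in bits.
def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                 # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

prior = [0.5, 0.5]               # p(X): assumed prior over two hypotheses
posterior = [0.9, 0.1]           # p(X | Y=y): assumed posterior after seeing y

print(kl(posterior, prior))      # ~0.531 bits of information gained about X
print(kl(prior, posterior))      # ~0.737 bits: not symmetric, so not a metric
```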

As jimrandomh says, the representation of beliefs that you use isn't very practical. However your question is a good one, as it applies whatever representation you use.

Your comment about taking emotional salience into account leaves the realm of probability and epistemic rationality - I'm less familiar with the tools available to formalize differences in what's valued than I am with tools to formalize differences in what's known.

Comment by topynate on Luminosity (Twilight Fanfic) Discussion Thread 3 · 2011-01-08T17:51:48.747Z · LW · GW

How about: Allirea's been shielding Bella both from being seen directly and from Alice's power. Addy found them - Bella was in the vicinity - made a deal with them, and then they came back together. Allirea may still be around and using her power, or she may have left. Possibly Addy's taking of Siobhan's power enabled her to take Allirea into account, somehow, which made it easier for her to find them.

By the way Alicorn, I've been thoroughly enjoying your two stories. Your portrayal of Allirea's power is one of my favourite parts.

Comment by topynate on Dark Arts 101: Using presuppositions · 2011-01-02T21:15:26.460Z · LW · GW

It's really not that subtle a trick. If it sounds unnatural, it may be more a consequence of a lack of practice in persuasive writing generally (in which case, bravo for practising, icebrand!) than of special brain chemistry that irreparably cripples and nerdifies you if you try anything socially 'fancy'.

Comment by topynate on Tallinn-Evans $125,000 Singularity Challenge · 2010-12-27T22:09:02.841Z · LW · GW

Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ from charity A and 1 utilon/£ from charity B, you still get a utilon for each pound you donate to B, even though to get the maximum 200 utilons you should donate the whole £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).

Comment by topynate on Tallinn-Evans $125,000 Singularity Challenge · 2010-12-27T17:58:41.639Z · LW · GW

The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.

Comment by topynate on Two questions about CEV that worry me · 2010-12-23T18:54:19.951Z · LW · GW

If Archimedes and the American happen to extrapolate to the same volition, why should that be because the American has values that are a progression from those of Archimedes? It's logically possible that both are about the same distance from their shared extrapolated volition, but they share one because they are both human. Archimedes could even have values that are closer than the American's.

Comment by topynate on How to Convince Me That 2 + 2 = 3 · 2010-12-17T04:39:01.627Z · LW · GW

I do not consider myself a rationalist, because I doubt my own rationality.

This site isn't called Always Right, you know.

Comment by topynate on High Failure-Rate Solutions · 2010-12-17T00:16:36.861Z · LW · GW

That quote completely ignores the risk of worsening the situation each 'solution' might carry. The venture-capital method only works because of limited liability.

Comment by topynate on Torture vs. Dust Specks · 2010-12-16T23:49:12.357Z · LW · GW

Assuming a roughly 50-50 split, the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2 but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn't you unconditionally receiving a speck if you're willing to, but only if half the remainder are also so willing.
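
A sketch of the standard result this presumably refers to: with N voters each equally likely to go either way, the chance that one vote is pivotal falls off as the inverse square root of N (via Stirling's approximation):

```latex
P(\text{pivotal}) = \binom{N}{N/2}\, 2^{-N} \approx \sqrt{\frac{2}{\pi N}}
```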

The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, but to incorporate the preferences of the vast majority (3^^^^3 out of 3^^^^3 + 1) of participants about the situation they find themselves in, into your calculation of what to do. The Torture vs. Specks problem in its standard form asks for you to decide on behalf of 3^^^^3 people what should happen to them; voting is a procedure by which they can decide.

[Edit: On second thought, I retract my assertion that scenario 1) and 2) have roughly the same stakes. That in scenario 1) huge numbers of people who prefer not to be dust-specked can get dust-specked, and in scenario 2) no one who prefers not to be dust-specked is dust-specked, makes much more of a difference than a simple doubling of the number of specks.]

By the way, the problem as stated involves 3^^^3, not 3^^^^3, people, but this can't possibly matter so nevermind.

Comment by topynate on Torture vs. Dust Specks · 2010-12-16T22:17:20.222Z · LW · GW

Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people is dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there's a majority. But these are kind of the same scenario: what's at stake in the second is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question "would you vote in favour of 3^^^3 people, including yourself, being dust-specked?" is the same as "would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?"