Inquiry into community standards 2014-08-06T20:22:42.756Z


Comment by ThisSpaceAvailable on Group rationality -- bridging the gap in a post-truth world · 2016-11-22T06:30:03.028Z · LW · GW

"Everyone on this site obviously has an interest in being, on a personal level, more rational."

Not in my experience. In fact, I was downvoted and harshly criticized for expressing confusion at gwern posting on this site and yet having no apparent interest in being rational.

Comment by ThisSpaceAvailable on Mismatched Vocabularies · 2016-11-22T06:19:39.752Z · LW · GW

"Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me")."

Those aren't really mutually exclusive. "Talking like that just to confuse his listeners is just how he talks". It could be an attribution not of any specific malice, but generalized snootiness.

Comment by ThisSpaceAvailable on Sample means, how do they work? · 2016-11-22T06:08:30.925Z · LW · GW

This may seem pedantic, but given that this post is on the importance of precision:

"Some likely died."

Should be

"Likely, some died".

Also, I think you should more clearly distinguish between the two means, such as saying "sample average" rather than "your average". Or use x bar and mu.

The whole concept of confidence intervals is rather problematic: on the one hand, it's one of the most common statistical measures presented to the public; on the other, it's one of the most difficult concepts to understand.

What makes the concept of CI so hard to explain is that pretty much every time the public is presented with it, they are shown one particular confidence interval along with the 95% figure, but the 95% is not a property of that particular confidence interval; it's a property of the process that generated it. The public understands a "95% confidence interval" as being an interval that has a 95% chance of containing the true mean, but actually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating an interval that contains the true mean.
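A small simulation makes the distinction concrete (a sketch; the normal data, sample size, and z-approximation are my own illustrative assumptions, not anything from the quoted post):

```python
import random

def confidence_interval(sample, z=1.96):
    """Normal-approximation 95% CI for a sample mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    half = z * (var / n) ** 0.5
    return mean - half, mean + half

random.seed(0)
true_mean = 10.0
trials = 10_000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 2.0) for _ in range(50)]
    lo, hi = confidence_interval(sample)
    hits += lo <= true_mean <= hi

coverage = hits / trials  # long-run hit rate of the *procedure*, close to 0.95
```

Any single interval either contains the true mean or it doesn't; the 95% describes `coverage`, the long-run behavior of the interval-generating process.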

Comment by ThisSpaceAvailable on Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! · 2016-08-25T01:58:24.954Z · LW · GW

By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn't invalidate it. There's a rather narrow set of circumstances where your argument doesn't apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what it will be in a year, then you'd better not have turned down any loans with APY less than 900%.

And how does the value of cryonics go up as your mortality rate does? Are you planning on enrolling in a program with a fixed monthly fee?

Comment by ThisSpaceAvailable on The map of the risks of aliens · 2016-08-23T18:06:14.602Z · LW · GW

"Also there are important risks that we are in simulation, but that it is created not by our possible ancestors"

Do you mean "descendants"?

Comment by ThisSpaceAvailable on A Review of Signal Data Science · 2016-08-21T18:26:22.214Z · LW · GW

What about after the program, if you don't get a job, or don't get a job in the data science field?

Comment by ThisSpaceAvailable on Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! · 2016-08-21T18:25:37.174Z · LW · GW

1% of a bad bet is still a bad bet.

Comment by ThisSpaceAvailable on A Review of Signal Data Science · 2016-08-21T18:22:18.647Z · LW · GW

They should have some statistics, even if they're not completely conclusive.

As I understand it, the costs are:

$1400 for lodging (commuting would cost even more)
$2500 deposit (not clear on the refund policy)
10% of next year's income (with the deposit going towards this)

I wouldn't characterize that as "very little". It's enough to warrant asking a lot of questions.

How would you characterize the help you got getting a job? Getting an interview? Knowing what to say in an interview? Having verifiable skills?

Comment by ThisSpaceAvailable on Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! · 2016-08-21T02:59:55.518Z · LW · GW

Are your finances so dire that if someone offered you $1/day in exchange for playing Russian Roulette, you would accept? If not, aren't you being just as irrational as you are accusing those who fail to accept your argument of being?
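To put a number on how lopsided the roulette offer is (a back-of-the-envelope sketch; the one-pull-per-day framing for a year is my own assumption):

```python
# One pull of a six-shooter per day for a year, at $1/day.
p_survive_pull = 5 / 6
days = 365
p_survive_year = p_survive_pull ** days  # on the order of 10**-29
total_earnings = 1 * days                # $365, almost surely never enjoyed
```

No plausible marginal utility of money closes a gap that size, which is the point of the rhetorical question above.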

Comment by ThisSpaceAvailable on Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists · 2016-08-21T02:12:23.849Z · LW · GW

You might want to consider what the objective is, and whether you should have different resources for different objectives. Someone who's in a deeply religious community who would be ostracized if people found out they're an atheist would need different resources than someone in a more secular environment who simply wants to find other atheists to socialize with.

I should also mention that you posted a URL without making it clickable. You should put anchors in your site. For instance, there should at the very least be anchors at "New atheists", "Theists", and "“Old” Atheists", and links to those anchors when you first list the three categories, if not an outline at the beginning with links to the parts. Organizationally, it's a bit of a mess; for instance, the "Communities of Atheists" heading isn't set off from the rest of the text at all.

Comment by ThisSpaceAvailable on Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists · 2016-08-21T01:48:00.569Z · LW · GW

"Just a "Survival Guide for Atheists" "

Are you referring to the one by Hemant Mehta?

"not-particularly-deep-thinking theist."


Comment by ThisSpaceAvailable on A Review of Signal Data Science · 2016-08-21T01:31:02.251Z · LW · GW

I suppose this might be a better place to ask than trying to resurrect a previous thread:

What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don't necessarily expect to get those exact answers, but it would be good to have some data and have it be presented in a manner that is at least partially resistant to cherry picking/massaging, etc.) Basically, what sort of evidence E does Signal have to offer, such that I should update towards it being effective, given both E, and "E has been selected by Signal, and Signal has an interest in choosing E to be as flattering rather than as informative as possible" are true?

Also, the last I heard, there was a deposit requirement. What's the refund policy on that?

Comment by ThisSpaceAvailable on An update on Signal Data Science (an intensive data science training program) · 2016-06-16T02:21:42.655Z · LW · GW

"We're planning another one in Berkeley from May 2nd – July 24th."

Is that June 24th?

Comment by ThisSpaceAvailable on Attention! Financial scam targeting Less Wrong users · 2016-03-05T20:18:01.238Z · LW · GW

Isn't that fraud? That is, if you work for a company that matches donations, and I ask to give you money for you to give to MIRI, aren't I asking you to defraud your company?

Comment by ThisSpaceAvailable on Attention! Financial scam targeting Less Wrong users · 2016-03-05T20:15:21.056Z · LW · GW

It does mean that not-scams should find ways to signal that they aren't scams, and the fact that something does not signal not-scam is itself strong evidence of scam.

Comment by ThisSpaceAvailable on Attention! Financial scam targeting Less Wrong users · 2016-03-05T20:09:49.063Z · LW · GW

Isn't the whole concept of matching donations a bit irrational to begin with? If a company thinks that MIRI is a good cause, they should give money to MIRI. If they think that potential employees will be motivated by them giving money to MIRI, wouldn't a naive application of economics predict that employees would value a salary increase of a particular amount at a utility that is equal or greater than the utility of that particular amount being donated to MIRI? An employee can convert a $1000 salary increase to a $1000 MIRI donation, but not the reverse. Either the company is being irrational, or it is expecting its employees to be irrational.

Comment by ThisSpaceAvailable on Attention! Financial scam targeting Less Wrong users · 2016-03-05T20:01:23.866Z · LW · GW

Shouldn't we first determine whether the amount of effort needed to figure out the costs of the tests is less than the expected value of ((cost of doing tests - expected gain)|(cost of doing tests > expected gain))?
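The proposed comparison can be sketched as a Monte Carlo value-of-information estimate (the uniform priors over costs and gains below are purely illustrative assumptions):

```python
import random

random.seed(1)

# Illustrative priors over what the tests might cost and gain (in dollars).
def draw_cost():
    return random.uniform(0, 200)

def draw_gain():
    return random.uniform(0, 150)

samples = [(draw_cost(), draw_gain()) for _ in range(100_000)]

# Knowing the costs up front lets us skip the tests exactly when cost > gain,
# avoiding a loss of (cost - gain) in those cases. That avoided loss is the
# most the investigation effort could be worth.
value_of_investigating = sum(max(c - g, 0.0) for c, g in samples) / len(samples)
```

Investigate only if the effort of figuring out the costs is cheaper than `value_of_investigating`.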

Comment by ThisSpaceAvailable on Attention! Financial scam targeting Less Wrong users · 2016-03-05T19:54:12.851Z · LW · GW

And if this is presented as some sort of "competition" to see whether LW is less susceptible than the general populace, then if anyone has fallen for it, that can further discourage them from reporting it. A lot of this is exploiting the banking system's lack of transparency as to just how "final" a transaction is; for instance, if you deposit a check, your account may be credited even if the check hasn't actually cleared. So scammers take advantage of the fact that most people aren't familiar with all the intricacies of banking, and think that once their account has been credited, it's safe to send money back.

Comment by ThisSpaceAvailable on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-19T05:03:16.256Z · LW · GW

It is somewhat confusing, but remember that surjectivity is defined with respect to a particular codomain; a function is surjective if its range is equal to its codomain, and thus whether it's surjective depends on what its codomain is considered to be; every function maps its domain onto its range. "f maps X onto Y" means that f is surjective with respect to Y. So, for instance, the exponential function maps the real numbers onto the positive real numbers. It's surjective with respect to the positive real numbers. Saying "the exponential function maps real numbers onto real numbers" would not be correct, because it's not surjective with respect to the entire set of real numbers. So saying that a one-to-one function maps distinct elements onto a set of distinct elements can be considered to be correct, albeit not as clear as saying "to" rather than "onto". It also suffers from a lack of clarity in that it's not clear what the "always" is supposed to range over; are there functions that sometimes do map distinct elements to distinct elements, but sometimes don't?
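A tiny finite example of the codomain-relativity point (the sets and mapping are hypothetical):

```python
# The same mapping, judged against two different codomains.
f = {1: "a", 2: "b", 3: "a"}

def is_surjective(mapping, codomain):
    """Surjective w.r.t. a codomain iff the range equals that codomain."""
    return set(mapping.values()) == set(codomain)

print(is_surjective(f, {"a", "b"}))       # True: f maps {1, 2, 3} onto its range
print(is_surjective(f, {"a", "b", "c"}))  # False: not onto the larger codomain
```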

Comment by ThisSpaceAvailable on Mental Model Theory - Illusion of Possibility Example · 2015-08-19T04:11:54.087Z · LW · GW

So, we have

  1. We don't have both “Either K or A” and “Either Q or A”
  2. Therefore, we either have “Neither K nor A” or “Neither Q nor A”
  3. Since both of the possibilities involve “no A”, there can be no A.

Your post seems to be a rather verbose way of showing something that can be shown in three lines. I guess you're trying to illustrate some larger framework, but it's rather unclear what it is or how it adds anything to the analysis, and you haven't given the reader much reason to look into it further.

The reason that someone might think an Ace would be a good choice is that they misread the puzzle as saying "one of these two statements is true". But it is nowhere stated that either statement is true; rather, it is stated that at least one statement is false. Once one notices that the Ace is involved in both of these statements, of which one has to be false, one's intuition should lead one to choose the King.

Also, if you're using set notation, (K ∪ A) indicates the same thing as (A or K or K ∩ A).

Comment by ThisSpaceAvailable on Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I? · 2015-08-19T00:55:59.005Z · LW · GW

I think that the first step is to unpack "annihilate". How does one "annihilate" a universe? You seem to be equivocating between destroying a universe, and putting it in a state inhospitable to consciousness.

It also seems to me that once we bring the anthropic principle in, that leads to Boltzmann brains.

Comment by ThisSpaceAvailable on Does random reward evoke stronger habits? · 2015-08-18T22:16:22.840Z · LW · GW

Upvote for content, but I think that there's a typo in your second sentence

"Variable schedules maximize what is known as resistance to extinction, the probability a behavior will decrease in frequency goes down."

Perhaps a semicolon instead of a comma, or "as frequency of rewards ... " instead of "in frequency ...", was intended?

Comment by ThisSpaceAvailable on Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels · 2015-08-17T01:17:31.033Z · LW · GW

"show that one serving of Soylent 1.5 can expose a consumer to a concentration of lead that is 12 to 25 times above California's Safe Harbor level for reproductive health"

Concentration, or amount? It seems to me that that is a rather important distinction, and it is worrying that As You Sow doesn't seem to recognize it.

Comment by ThisSpaceAvailable on Examples of AI's behaving badly · 2015-07-17T20:05:15.331Z · LW · GW

I'm not sure you understand what "iid" means. It means that each sample is drawn from the same distribution, and each sample is independent of the others. The term "iid" isn't doing any work in your statement; you could just say "It's not from the distribution you really want to sample", and it would be just as informative.

Comment by ThisSpaceAvailable on Examples of AI's behaving badly · 2015-07-17T06:07:16.783Z · LW · GW

"This isn't an example of overfitting, but of the training set not being iid."

Upvote for the first half of that sentence, but I'm not sure how the second applies. The set of tanks is iid; the issue is that the creators of the training set allowed tank/not-tank to be correlated with an extraneous variable. It's like having a drug trial where the placebos are one color and the real drug is another.

Comment by ThisSpaceAvailable on Examples of AI's behaving badly · 2015-07-17T05:55:54.593Z · LW · GW

Perverse incentives.

Comment by ThisSpaceAvailable on Philosophical differences · 2015-06-13T02:55:56.894Z · LW · GW

I realize that no analogy is perfect, but I don't think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it's just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty that exists in the case of AI, and does not exist in your hypothetical.

Comment by ThisSpaceAvailable on Visions and Mirages: The Sunk Cost Dilemma · 2015-06-06T04:53:27.130Z · LW · GW

"You paid a karma toll to comment on one of my most unpopular posts yet"

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

"to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?"

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

Now, as to "moving the goalposts", the thing that I suspected you of not knowing what you were talking about was knowing the standard meaning of the phrase "sunk cost fallacy", so the goalposts are pretty much where they were in the beginning, with the only difference being that I have gone from strongly suspecting that you don't know what you're talking about to being pretty much certain.

"Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now."

I don't know of any mainstream references defining evolution that way. If you see a parallel between these two cases, you should explain what it is.

"You're not even getting the -populist- definition of the fallacy right."

Ideally, if you are going to make claims, you would actually explain what basis you see for those claims.

"Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5, a negative cost is a positive inlay, which means: Yay, free money?"

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong. I have two issues with that. The first is that you didn't actually present what your thinking was. That shows a lack of rigorous thought, as you failed to make your argument explicit. This leaves me to articulate both your argument and mine, which is rather rude. The second problem is that your syllogism "This is absurd, therefore it is false" is severely flawed. It's called the Sunk Cost Fallacy. The fact that it is illogical doesn't disqualify it from being a fallacy; being illogical is what makes it a fallacy.

Typical thinking is, indeed, that if one has a ticket for X that is priced at $5, then doing X is worth $5. For the typical mind, failing to do X would mean immediately realizing a $5 loss, while doing X would avoid realizing that loss (at least, not immediately). Therefore, when contemplating X, the $5 is considered as being positive, with respect to not doing X (that is, doing X is valued higher than not doing X, and the sunk cost is the cause of the differential).

"Why didn't I bring that up before? Because I'm not here to score points in an argument."

And if you were here to score points, you would think that "You just described X as being a fallacy, and yet X doesn't make sense. Hah! Got you there!" would be a good way of doing so? I am quite befuddled.

"Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument"

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication. I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points". I do not think that I have been particularly rude, and absent prioritizing your feelings over clear communication, I don't see how I could avoid your accusing me of playing "games of trivial social dominance".

"Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do"

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

"We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?"

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

Comment by ThisSpaceAvailable on Approximating Solomonoff Induction · 2015-06-04T22:30:42.932Z · LW · GW

The set of possible Turing Machines is infinite. Whether you consider that to satisfy your personal definition of "seen" or "in reality" isn't really relevant.

Comment by ThisSpaceAvailable on Visions and Mirages: The Sunk Cost Dilemma · 2015-06-04T22:28:39.308Z · LW · GW

If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.

Comment by ThisSpaceAvailable on A Proposal for Defeating Moloch in the Prison Industrial Complex · 2015-06-04T22:20:31.643Z · LW · GW

(a) Prison operators are not currently incentivized to be experts in data science.

(b) Why? And will that fix things? There are plenty of examples of industries taking advantage of vulnerabilities without those vulnerabilities being fixed.

(c) How will it be retrained? Will there be a "We should retrain the model" lobby group, and will it act faster than the prison lobby?

Perhaps we should have a futures market in recidivism. When a prison gets a new prisoner, they buy the associated future at the market rate, and once the prisoner has been out of prison sufficiently long without committing further crimes, the prison can redeem the future. And, of course, there would be laws against prisons shorting their own prisoners.
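A sketch of the incentive such a futures market would create (the face value and probabilities are hypothetical numbers of my own):

```python
FACE = 1000  # hypothetical face value of one recidivism future

def prison_expected_profit(market_p_recidivism, actual_p_recidivism):
    """The prison buys at the market's estimated recidivism rate and redeems
    at face value only if the prisoner stays out, so beating the market's
    recidivism forecast pays."""
    price = FACE * (1 - market_p_recidivism)
    expected_payout = FACE * (1 - actual_p_recidivism)
    return expected_payout - price

# Rehabilitate better than the market expects: profit. Worse: loss.
print(prison_expected_profit(0.40, 0.30) > 0)  # True
print(prison_expected_profit(0.40, 0.50) > 0)  # False
```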

Comment by ThisSpaceAvailable on Approximating Solomonoff Induction · 2015-06-02T04:08:31.070Z · LW · GW

An example of a sense would be to define some quantification of how good an algorithm is, and then show that a particular algorithm's value for that quantity is close to SI's. In order to rigorously state that X approaches Y "in the limit", you have to have some index n, and some metric M, such that |M(Xn) - M(Yn)| -> 0. Otherwise, you're simply making a subjective statement that you find X to be "good". So, for instance, if you can show that the loss in utility from using your algorithm rather than SI goes to zero as the size of the dataset goes to infinity, that would be an objective sense in which your algorithm approximates SI.
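Stated formally (a sketch; the index n and the metric M are whatever the person claiming the approximation supplies):

```latex
X \text{ approximates } Y \;\iff\; \exists\, M :\ \lim_{n \to \infty} \bigl|\, M(X_n) - M(Y_n) \,\bigr| = 0
```

For the utility example, take M to be utility on a dataset of size n, so the claim becomes lim as n goes to infinity of [U(SI, D_n) - U(A, D_n)] = 0.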

Comment by ThisSpaceAvailable on Approximating Solomonoff Induction · 2015-06-02T04:00:44.370Z · LW · GW

You can't do an exhaustive search on an infinite set.

Comment by ThisSpaceAvailable on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-02T03:49:08.514Z · LW · GW

Consequentialist thinking has a general tendency to get one labeled an asshole.


"Hey man, can you spare a dollar?" "If I did have a dollar to spare, I strongly doubt that giving it to you would be the most effective use of it." "Asshole."

Although I think that it's dangerous to think that you can accurately estimate the cost/benefit of tact; I think most people underestimate how much effect it has.

Comment by ThisSpaceAvailable on Stupid Questions June 2015 · 2015-06-02T03:43:26.742Z · LW · GW

There's a laundry section, with detergent, fabric softeners, and other laundry-related products. I don't think the backs generally say what the product is, and even if they do, that's not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the "detergent" label, which means that they don't notice when they're buying fabric softener or another product.

Comment by ThisSpaceAvailable on Visions and Mirages: The Sunk Cost Dilemma · 2015-06-02T03:35:25.400Z · LW · GW

As I said, that is not what the sunk cost fallacy is. If you've spent $100, and your expected net returns are -$50, then the sunk cost fallacy would be to say "If I stop now, that $100 will be wasted. Therefore, I should keep going so that my $100 won't be wasted."

While it is a fallacy to just add sunk costs to future costs, it's not a fallacy to take them into account, as your scenario illustrates. I don't know of anyone who recommends completely ignoring sunk costs; as far as I can tell you are arguing against a straw man in that sense.

Also, it's "i.e.", rather than "i/e".
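The contrast drawn above can be sketched as two decision rules (the dollar figures are hypothetical):

```python
def should_continue(future_benefit, future_cost):
    """Correct rule: only prospective quantities enter the decision."""
    return future_benefit > future_cost

def should_continue_fallacy(future_benefit, future_cost, sunk_cost):
    """The fallacy: 'stopping wastes the money already spent', so the sunk
    cost is effectively subtracted from future costs, biasing toward
    continuing."""
    return future_benefit + sunk_cost > future_cost

# Already spent $100; continuing costs $60 more and returns $50.
print(should_continue(50, 60))               # False: stop
print(should_continue_fallacy(50, 60, 100))  # True: the fallacy says continue
```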

Comment by ThisSpaceAvailable on Stupid Questions June 2015 · 2015-06-01T04:19:03.986Z · LW · GW

What's the deal with laundry detergent packaging? For instance, take a look at this. Nowhere on the package does it actually say it's detergent! I guess they're just relying on people knowing that Tide is a brand of detergent? Except that Tide also makes other products, such as fabric softener. And it's not just Tide.

Doing a google search, the only image that I came across of a bottle that actually says "detergent" is this: If you zoom in, way at the bottom, in tiny print, it says "detergent". Maybe the other ones also say it, but they weren't zoomable.

Comment by ThisSpaceAvailable on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-01T03:40:57.461Z · LW · GW

Suppose we have a set S of n elements, and we ask people to memorize sequences of these elements, and we find that people can generally easily memorize sequences of length k (for some definition of "generally" and "easily"). If we then define a function f(S) := k log n, how will f depend on S? Have there been studies on this issue?
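For concreteness, f just measures the information content of a memorizable sequence (the k values below are illustrative guesses in the spirit of the classic digit-span figure, not data):

```python
import math

def f(k, n):
    """Bits in a sequence of k items drawn from an n-element set."""
    return k * math.log2(n)

print(round(f(7, 10), 1))  # ~7 decimal digits: 23.3 bits
print(round(f(6, 26), 1))  # ~6 letters: 28.2 bits
```

The question is then whether f stays roughly constant across sets S (a bits-limited memory) or whether k does (a chunk-limited one).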

Comment by ThisSpaceAvailable on Approximating Solomonoff Induction · 2015-06-01T03:36:14.927Z · LW · GW

Estimator asked in what sense SI is approximated, not, given a sense, how SI is approximated in that sense. Can you give a metric for which the value is close to SI's value?

Comment by ThisSpaceAvailable on Log-normal Lamentations · 2015-05-30T02:49:25.552Z · LW · GW

Suppose you were given two options, and told that whatever money results would be given to an EA charity. Would you find it difficult to choose a 1% shot at $1000 over a sure $5? What if you were told that there are a thousand people being given the same choice? What if you're not told how the gamble turns out? What if all the gambles are put in a pool, and you're told only how many worked out, not whether yours did?
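The expected values behind that series of questions (a sketch; the pooled simulation is my own framing of the last variant):

```python
import random

ev_gamble = 0.01 * 1000  # $10 expected from the 1% shot at $1000
ev_sure = 5.0            # $5 for certain

# With 1000 people each taking the gamble, the pooled donation concentrates
# near 1000 * $10 = $10,000, which is the pull of the pooling reframes.
random.seed(2)
pools = [
    sum(1000 if random.random() < 0.01 else 0 for _ in range(1000))
    for _ in range(200)
]
mean_pool = sum(pools) / len(pools)
```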

Comment by ThisSpaceAvailable on [Link] Small-game fallacies: a Problem for Prediction Markets · 2015-05-29T18:24:36.619Z · LW · GW

No. If you forecast that the price of gold will go up, and the price instead goes down, then being honest about your forecast loses you money. Prediction markets reward people for making accurate predictions. Whether those predictions were an accurate reflection of beliefs is irrelevant.

Comment by ThisSpaceAvailable on A resolution to the Doomsday Argument. · 2015-05-29T18:17:20.687Z · LW · GW

Most people consider causality to be a rather serious argument. If you're going to unilaterally declare certain lines of argument illegitimate, criticize people for failing to present a "legitimate" argument, and declare that any opinions that disagree with yours don't improve the discussion, that's probably going to piss people off.

Comment by ThisSpaceAvailable on A resolution to the Doomsday Argument. · 2015-05-29T18:13:50.020Z · LW · GW

You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator's argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.

All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a different, fallacious argument, does nothing to show that the argument in question is fallacious as well, and the fact that you insist on pretending otherwise does not speak well to your rationality.

Estimator clearly believes that there is no way that creating simulations can affect whether we are in a simulation. You have presented absolutely no argument for why it can. Instead, you've simply declared that your "theory" is "straightforward", and that disagreeing is unacceptable arrogance. Arguing that your "theory" violates a well-established principle is addressing your "theory". So apparently, when you write "do not need to condescend to address my theory", what you really mean is "have failed to present a counterargument that I have deigned to recognize as legitimate".

Comment by ThisSpaceAvailable on Visions and Mirages: The Sunk Cost Dilemma · 2015-05-29T06:48:20.971Z · LW · GW

Hopefully, I'm not just feeding the troll, but: just what exactly do you think "the sunk cost fallacy" is? Because it appears to me that you believe that it refers to the practice of adding expenses already paid to future expected expenses in a cost-benefit analysis, when in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

Comment by ThisSpaceAvailable on What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy · 2015-05-29T05:58:10.305Z · LW · GW

"Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins"

Shouldn't that be "at most"?

Comment by ThisSpaceAvailable on A resolution to the Doomsday Argument. · 2015-05-29T05:14:45.936Z · LW · GW

What does that mean, "You're not going to just happen to be in one of the first twenty years"? There are people who have survived more than one billion seconds past their twenty-first birthdays. And each one, at one point, was within twenty seconds of their twenty-first birthday. What would you say to someone whose twenty-first birthday was less than twenty seconds ago who says "I'm not going to just happen to be in the first twenty seconds"?

Comment by ThisSpaceAvailable on A resolution to the Doomsday Argument. · 2015-05-29T05:10:30.154Z · LW · GW

Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.

"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."

Listing arguments that you find unconvincing, and simply declaring that you find your opponent's argument to be similar, is not a valid line of reasoning, isn't going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you think exists is that you think all of these arguments are invalid. Saying "this argument is similar to another one because they're both invalid, and because it's so similar to an invalid argument, it's invalid" is just silly.

"My argument shares some similarities to an argument made by someone respected in this community" isn't much of an argument, either.

Comment by ThisSpaceAvailable on A resolution to the Doomsday Argument. · 2015-05-29T04:58:36.667Z · LW · GW

There was no "mockery", just criticism and disagreement. It's rather disturbing that you saying that criticism and disagreement is "not acceptable" has been positively received. And estimator didn't say that the argument is closed, only that zie has a solid opinion about it.

Comment by ThisSpaceAvailable on A quick heuristic for evaluating elites (or anyone else) · 2015-02-28T19:33:52.356Z · LW · GW

"Countries with a lot of specialization are richer, therefore, within a country, the richest people should be people who specialize."


Comment by ThisSpaceAvailable on Rationality Quotes November 2014 · 2015-02-28T04:31:32.705Z · LW · GW

You said "More like the first definition." The first definition is "to name, write, or otherwise give the letters, in order, of (a word, syllable, etc.)". Thus, I conclude that you are saying that it is impossible to name, write, or otherwise give the letters, in order, of the word "complexity". I have repeatedly seen people in this community talk of "verified debating", in which it is important to communicate with other people what your understanding of their statements is, and ask them whether that is accurate. And yet when I do that, with an interpretation that looks quite straightforward to me, I get downvoted, and your only response is "no", with no explanation.