Posts

Pittsburgh meetup Nov. 20 2010-11-16T21:03:42.630Z
Bay Area Meetup Saturday 6/12 2010-06-09T18:18:17.750Z
Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU 2009-09-10T03:06:21.614Z
Pittsburgh Meetup: Survey of Interest 2009-08-27T16:18:27.792Z

Comments

Comment by Nick_Tarleton on "No evidence" as a Valley of Bad Rationality · 2020-03-30T17:33:47.963Z · LW · GW

Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn't, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).

Comment by Nick_Tarleton on "No evidence" as a Valley of Bad Rationality · 2020-03-30T17:27:07.079Z · LW · GW

And the correct reaction (and the study's own conclusion) is that the sample is too small to say much of anything.

(Also, the "something else" was "conventional treatment", not another antiviral.)

Comment by Nick_Tarleton on Zeynep Tufekci on Why Telling People They Don't Need Masks Backfired · 2020-03-18T08:12:36.106Z · LW · GW

I find the 'backfired through distrust'/'damaged their own credibility' claim plausible, it agrees with my prejudices, and I think I see evidence of similar things happening elsewhere; but the article doesn't contain evidence that it happened in this case, and even though it's a priori likely and worth pointing out, the claim that it did happen should come with evidence. (This is a nitpick, but I think it's an important nitpick in the spirit of sharing likelihood ratios, not posterior beliefs.)

Comment by Nick_Tarleton on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T20:35:02.330Z · LW · GW

if there's a domain where the model gives two incompatible predictions, then as soon as that's noticed it has to be rectified in some way.

What do you mean by "rectified", and are you sure you mean "rectified" rather than, say, "flagged for attention"? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn't try to be consistent. I believe 'immediately update your model somehow when you notice an inconsistency' is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don't think this belief is opposed to "rationalism", which should only require not indefinitely tolerating inconsistency.)

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-11T22:40:55.348Z · LW · GW

On the other hand:

We found that viable virus could be detected... up to 4 hours on copper...

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-07T19:41:50.646Z · LW · GW

Here's a study using a different coronavirus.

Brasses containing at least 70% copper were very effective at inactivating HuCoV-229E (Fig. 2A), and the rate of inactivation was directly proportional to the percentage of copper. Approximately 103 PFU in a simulated wet-droplet contamination (20 µl per cm2) was inactivated in less than 60 min. Analysis of the early contact time points revealed a lag in inactivation of approximately 10 min followed by very rapid loss of infectivity (Fig. 2B).

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-07T19:35:14.546Z · LW · GW

That paper only looks at bacteria and does not knowably carry over to viruses.

Comment by Nick_Tarleton on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-31T00:36:21.268Z · LW · GW

I don't see you as having come close to establishing, beyond the (I claim weak) argument from the single-word framing, that the actual amount or parts of structure or framing that Dragon Army has inherited from militaries are optimized for attacking the outgroup to a degree that makes worrying justified.

Comment by Nick_Tarleton on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-05-21T19:58:53.751Z · LW · GW

Doesn't work in incognito mode either. There appears to be an issue with lesserwrong.com when accessed over HTTPS — over HTTP it sends back a reasonable-looking 301 redirect, but on port 443 the TCP connection just hangs.

Comment by Nick_Tarleton on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-05-21T03:44:15.088Z · LW · GW

Similar meta: none of the links to lesserwrong.com currently work due to, well, being to lesserwrong.com rather than lesswrong.com.

Comment by Nick_Tarleton on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-16T01:17:32.159Z · LW · GW

Further-semi-aside: "common knowledge that we will coordinate to resist abusers" is actively bad and dangerous to victims if it isn't true. If we won't coordinate to resist abusers, making that fact (/ a model of when we will or won't) common knowledge is doing good in the short run by not creating a false sense of security, and in the long run by allowing the pattern to be deliberately changed.

Comment by Nick_Tarleton on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-16T01:12:43.988Z · LW · GW

This post may not have been quite correct Bayesianism (... though I don't think I see any false statements in its body?), but regardless there are one or more steel versions of it that are important to say, including:

  • persistent abuse can harm people in ways that make them more volatile, less careful, more likely to say things that are false in some details, etc.; this needs to be corrected for if you want to reach accurate beliefs about what's happened to someone
  • arguments are soldiers; if there are legitimate reasons (that people are responding to) to argue against someone or see them as dangerous, this is likely to bleed over to dismissing other things they say more than is justified, especially if there are other motivations to do so
  • the intelligent social web makes some people both more likely to be abused, and less likely to be believed
    • whether someone seems "off" depends to some extent on how the social web wants them to be perceived, independent of what they're doing
    • seriously I don't know how to communicate using words just how powerful (I claim) this class of effects is
  • there are all kinds of reasons that not believing claims about abuse is often just really convenient; this sounds obvious but I don't see people accounting for it well; this motivation will take advantage of whatever rationalizations it can

Comment by Nick_Tarleton on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-06T23:26:04.312Z · LW · GW

IMO, the "legitimate influence" part of this comment is important and good enough to be a top-level post.

Comment by Nick_Tarleton on Give praise · 2018-05-01T00:06:37.500Z · LW · GW

This is simply instrumentally wrong, at least for most people in most environments. Maybe people and an environment could be shaped so that this was a good strategy, but the shaping would actually have to be done and it's not clear what the advantage would be.

Comment by Nick_Tarleton on Give praise · 2018-05-01T00:00:34.731Z · LW · GW

My consistent experience of your comments is one of people giving [what I believe to be, believing that I understand what they're saying] the actual best explanations they can, and you not understanding things that I believe to be comprehensible and continuing to ask for explanations and evidence that, on their model, they shouldn't necessarily be able to provide.

(to be upfront, I may not be interested in explaining this further, due to limited time and investment + it seeming like a large tangent to this thread)

Comment by Nick_Tarleton on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning · 2016-01-28T16:27:07.803Z · LW · GW

I don't see how we anything like know that deep NNs with ‘sufficient training data’ would be sufficient for all problems. We've seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?

Comment by Nick_Tarleton on LessWrong 2.0 · 2015-12-05T20:36:45.403Z · LW · GW

A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.

Comment by Nick_Tarleton on The Problem with AIXI · 2014-03-22T17:18:47.189Z · LW · GW

Other possible implications of this scenario have been discussed on LW before.

Comment by Nick_Tarleton on Is my view contrarian? · 2014-03-14T02:21:45.483Z · LW · GW

This shouldn't lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.

Comment by Nick_Tarleton on Building Phenomenological Bridges · 2013-12-25T21:37:48.813Z · LW · GW

Solving that problem seems to require some flavor of Paul's "indirect normativity", but that's broken and might be unfixable as I've discussed with you before.

Do you have a link to this discussion?

Comment by Nick_Tarleton on Open Thread, November 1 - 7, 2013 · 2013-11-25T20:35:58.898Z · LW · GW

Why not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)

Comment by Nick_Tarleton on Open Thread, November 1 - 7, 2013 · 2013-11-08T05:04:24.482Z · LW · GW

I wonder if it would be fair to characterize the dispute summarized in/following from this comment on that post (and elsewhere) as over whether the resolutions to (wrong) questions about anticipation/anthropics/consciousness/etc. will have the character of science/meaningful non-moral philosophy (crisp, simple, derivable, reaching consensus across human reasoners to the extent that settled science does), or that of morality (comparatively fuzzy, necessarily complex, not always resolvable in principled ways, not obviously on track to reach consensus).

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-08T01:10:55.481Z · LW · GW

Where Recursive Justification Hits Bottom and its comments should be linked for their discussion of anti-inductive priors.

(Edit: Oh, this is where the first quote in the post came from.)

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-08T00:06:39.358Z · LW · GW

Measuring optimization power requires a prior over environments. Anti-inductive minds optimize effectively in anti-inductive worlds.

(Yes, this partially contradicts my previous comment. And yes, the idea of a world or a proper probability distribution that's anti-inductive in the long run doesn't make sense as far as I can tell; but you can still define a prior/measure that orders any finite set of hypotheses/worlds however you like.)
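The last claim can be made concrete with a small sketch (the function name and hypothesis labels are illustrative, not from the comment): over any finite set of hypotheses, a prior is just a normalized assignment of positive weights, so any desired ordering can be enforced by choosing the weights.

```python
def prior_with_ordering(hypotheses):
    """Return a prior over `hypotheses` that ranks them in the given
    order, most probable first, by assigning geometrically decaying
    weights and normalizing."""
    weights = [2.0 ** -i for i in range(len(hypotheses))]
    total = sum(weights)
    return {h: w / total for h, w in zip(hypotheses, weights)}

# Any ordering works, including one that puts an "anti-inductive"
# hypothesis on top.
p = prior_with_ordering(["anti-inductive", "inductive", "random"])
```

Nothing here says such a prior is reasonable in the long run, only that it is a well-formed probability distribution over a finite set.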

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-05T05:58:49.820Z · LW · GW

I agree with the message, but I'm not sure whether I think things with a binomial monkey prior, or an anti-inductive prior, or that don't implement (a dynamic like) modus ponens on some level even if they don't do anything interesting with verbalized logical propositions, deserve to be called "minds".

Comment by Nick_Tarleton on Open Thread, November 1 - 7, 2013 · 2013-11-04T01:42:46.052Z · LW · GW

Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?

Comment by Nick_Tarleton on 2012 Winter Fundraiser for the Singularity Institute · 2013-06-07T21:04:45.526Z · LW · GW

So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...

an added dollar of marginal spending is more valuable at CFAR (in my estimates).

Is this still your view?

Comment by Nick_Tarleton on Morality is Awesome · 2013-01-11T20:18:30.179Z · LW · GW

I didn't, and still don't... but now I'm a little bit disturbed that I don't, and want to look a lot more closely at Hermione for ways she's awesome.

Comment by Nick_Tarleton on Morality is Awesome · 2013-01-11T20:13:22.181Z · LW · GW

Upvoted; whatever its relationship to what the OP actually meant, this is good.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

Reminding yourself of your confusion, and avoiding privileging hypotheses, by using vague terms as long as you remember that they're vague doesn't seem so bad.

Comment by Nick_Tarleton on Second-Order Logic: The Controversy · 2013-01-05T01:19:13.386Z · LW · GW

I kept expecting someone to object that "this Turing machine never halts" doesn't count as a prediction, since you can never have observed it to run forever.

Comment by Nick_Tarleton on Politics Discussion Thread January 2013 · 2013-01-03T06:53:34.313Z · LW · GW

More sympathetically, people might (well, I'm sure some people do) see avoiding stereotype-based jokes as a step towards there being things you can't say, and prefer some additional risk of saying harmful things to moving in that direction (possibly down a slippery slope).

Comment by Nick_Tarleton on Politics Discussion Thread January 2013 · 2013-01-02T06:47:06.907Z · LW · GW

But the blogger's position is one that is often met with hostility round these parts, for reasons that are unclear to me.

I think some of it is a defensive reaction to perceived possible vaguely-defined moral demands/condemnation. Here's a long comment I wrote about that in a different context.

Also simple contrarianism, though that's not much of an explanation absent a theory of why this is the thing people are contrarian against.

the parts of social engineering that I think LW is worst at.

What are those?

Comment by Nick_Tarleton on You can't signal to rubes · 2013-01-01T23:41:55.025Z · LW · GW

It's a lesswrongian prejudice that the only game anyone would want to play is Highly Competent But Criminally Underappreciated Backroom Boffin.

Yes. The general case of this prejudice is probably something like 'behavior morally should be evaluated according to its stated far-mode purpose; other purposes are possible and important, but dirty'. Of course, this has the large upside of making us seriously evaluate things according to their stated purpose at all....

Comment by Nick_Tarleton on You can't signal to rubes · 2013-01-01T09:05:33.884Z · LW · GW

Out of curiosity, what are the connotations of the word "rube" that make you suspicious?

Low status, contemptibility, etc. I expect making status hierarchies salient to make people less rational (hence fully generic suspicion), and I had the specific hypothesis that you might see people using 'signaling' models as judging others as contemptible and be offended by this.

Relatedly, I dislike calling the behavior in question "pandering", since I expect using condemnatory terms for phenomena to make them aversive to look at closely, and to lead to bias in attribution (against seeing them in oneself/'good' people and towards seeing them in 'bad' people, as well as towards seeing people who unambiguously exhibit them as 'bad').

Comment by Nick_Tarleton on You can't signal to rubes · 2013-01-01T08:32:43.621Z · LW · GW

I have a hard time telling whether you're trying to say that 'signaling' models are inaccurate, or just that calling them 'signaling' is misleading. I agree with the latter insofar as 'signaling' means this specific economic model, because the behaviors in question aren't directed at economically rational agents. I also can't tell if you dislike models that postulate stupidity (the strong status connotations of the word "rube" make me suspicious).

If you mean the former: I think you greatly overestimate median rationality in your take on the manager and butcher examples. All positive traits get conflated with each other by default. People can and do override their affective impressions with explicit reasoning, but more often than not they don't, especially when evaluating performance is difficult — and it's almost always more difficult than evaluating "does this person look like a winner?".

I also used to think that simple non-costly signaling couldn't possibly stably work, but experience (often with my own irrationality) changed my mind. This is less confusing if I think of it as social-primate (rather than general-intelligence) behavior; liking things/people other people like is socially useful. (This would likely be significant in the manager example in real life, e.g., I'll look better to my superiors if I make similar evaluations of my subordinates to them.)

The quality proposed was "status", but outrage is cheap. Any fool can be outraged at a blog post mentioning rape.

Now, status signaling is overused as an explanation. If the "HOW DARE YOU" comments are signaling (or 'signaling') anything, the obvious thing is alignment with the perceived-as-socially-powerful (implicit-Schelling-point-)faction condemning Robin, not status.

Comment by Nick_Tarleton on META: Deletion policy · 2012-12-26T05:50:32.462Z · LW · GW

I agree with this policy.

Comment by Nick_Tarleton on New censorship: against hypothetical violence against identifiable people · 2012-12-24T05:12:23.063Z · LW · GW

I and the one person currently in the room with me immediately took "by all means necessary" to suggest violence. I think you're in a minority in how you interpret it.

Comment by Nick_Tarleton on Rationality Quotes December 2012 · 2012-12-18T02:54:59.030Z · LW · GW

Police seek and preserve public favour not by catering to public opinion, but by constantly demonstrating absolute impartial service to the law.

I know this is meant to be an ideal for the police, but it could also be read as a descriptive claim about public favor, and it's worth noting that that claim is sometimes false: how often do people approve of police bashing the heads of $OUTGROUP?

Comment by Nick_Tarleton on Why you must maximize expected utility · 2012-12-14T04:01:34.419Z · LW · GW

"Apply decision theory to the set of actions you can perform at that point" is underspecified — are you computing counterfactuals the way CDT does, or EDT, TDT, etc?

This question sounds like a fuzzier way of asking which decision theory to use, but maybe I've missed the point.

Comment by Nick_Tarleton on Why you must maximize expected utility · 2012-12-14T03:49:57.254Z · LW · GW

Can you give an example of circular preferences that aren't contextual and therefore only superficially circular (like Benja's Alice and coin-flipping examples are contextual and only superficially irrational), and that you endorse, rather than regarding as bugs that should be resolved somehow? I'm pretty sure that any time I feel like I have intransitive preferences, it's because of things like framing effects or loss aversion that I would rather not be subject to.
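The standard reason to regard genuinely circular preferences as bugs is the money pump, which can be sketched in a few lines (the function and the per-trade fee are illustrative assumptions, not anything from the thread): an agent who prefers A to B, B to C, and C to A will pay a small fee for each "upgrade" around the cycle, ending where it started but poorer.

```python
def money_pump(cycle, fee, rounds):
    """Simulate trading around a preference cycle, charging `fee` per
    trade; returns the final holding and the total amount paid."""
    holding = cycle[0]
    paid = 0.0
    for i in range(rounds * len(cycle)):
        # The agent prefers the next item in the cycle to its current
        # holding, so it accepts the trade and pays the fee.
        holding = cycle[(i + 1) % len(cycle)]
        paid += fee
    return holding, paid

# Ten trips around an A > B > C > A cycle at one cent per trade.
item, total = money_pump(["A", "B", "C"], fee=0.01, rounds=10)
```

Contextual preferences like Benja's examples escape this because the apparent cycle disappears once the options are described fully.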

Comment by Nick_Tarleton on Politics Discussion Thread December 2012 · 2012-12-14T03:39:43.686Z · LW · GW

I'd like to know what you think of this (unfortunately long) piece arguing (persuasively IMO) that Mystery/Roissy-style PUA is solving the wrong problem and is a memetic hazard.

The right thing for these guys to do would be to deal with these core issues of low self-worth feelings and their inferiority feelings so that they can fix them once and for all. What pickup teaches them to do however is not to fix feelings but instead to switch from their current faulty coping strategy, which is surrender, to another faulty coping strategy of overcompensation. Using overcompensation, they repress these unwanted feelings with defense mechanisms so that they end up blocking themselves from consciously accessing this self-hatred. They learn to rationalize away and deny their feelings of low self-worth. They learn to project away their feelings of inferiority and self-hatred onto others. (Ever wonder why pickup artists develop this fanatical hatred of beta males? It’s their hatred of the beta traits they fear still exist within themselves, so they try to destroy these unwanted traits by first projecting them onto other male targets and then destroying those other targets.) They also learn to use another defense mechanism of intellectualization to cope with these low self-worth feelings, which is where all the mental masturbation and books on evolutionary psychology, animal behavior, persuasion, sales, New Age thinking and success literature like Tony Robbins comes in (not that there’s anything inherently wrong with any of this literature but rather in the way they are being used in this specific instance as a way to avoid fixing core issues).

Comment by Nick_Tarleton on Poll - Is endless September a threat to LW and what should be done? · 2012-12-12T05:03:45.674Z · LW · GW

An elite intellectual community can^H^H^H has to mostly reject newcomers, but those it does accept it has to invest in very effectively (while avoiding the Objectivist failure mode).

I think part of the problem is that LW has elements of both a ground for elite intellectual discussion and a ground for a movement, and these goals seem hard or impossible to serve with the same forum.

I agree that laziness and expecting people to "just know" is also part of the problem. Upvoted for the quote.

Comment by Nick_Tarleton on Do I really not believe in God? Do you? · 2012-12-10T23:55:39.511Z · LW · GW

I'm just wondering whether this script (something/someone is responsible for the good/bad stuff that happens to me) is equivalent to an alief in supernatural.

I'm not sure this is a meaningful question. "Alief" is a very fuzzy category.

Comment by Nick_Tarleton on By Which It May Be Judged · 2012-12-10T18:40:10.528Z · LW · GW

Possibly (this is total speculation) Eliezer is talking about the feeling of one's entire motivational system (or some large part of it), while you're talking about the feeling of some much narrower system that you identify as computing morality; so his conception of a Clippified human wouldn't share your terminal-ish drives to eat tasty food, be near friends, etc., and the qualia that correspond to wanting those things.

Comment by Nick_Tarleton on Science, Engineering, and Uncoolness; Here and Now, Then and There · 2012-12-08T21:51:33.549Z · LW · GW

How much does the perception that science and engineering became uncool come from bias in what gets recorded, and in particular the fact that most of us attended high school within the last decade or two?

Comment by Nick_Tarleton on LW Women- Minimizing the Inferential Distance · 2012-11-29T04:42:30.239Z · LW · GW

The three interpretations I mean are:

  • (1) People's behavior is accurately predicted by modeling them as status-maximizing agents.
  • (2) People's subjective experience of well-being is accurately predicted by modeling it as proportional to status.
  • (3) A person is well-off, in the sense that an altruist should care about, in proportion to their status.

Is that clearer?

Comment by Nick_Tarleton on LW Women- Minimizing the Inferential Distance · 2012-11-29T02:29:55.746Z · LW · GW

"Society" is not an agent.

Comment by Nick_Tarleton on LW Women- Minimizing the Inferential Distance · 2012-11-29T02:23:26.325Z · LW · GW

Konkvistador believes that humans are driven primarily by their desire to achieve a higher status, and that this is in fact one of our terminal goals.

This needs to be considered separately as (1) a descriptive statement about actions (2) a descriptive statement about subjective experience (3) a normative statement about the utilitarian good. It seems much more accurate as (1) than (2) or (3), and I think Konkvistador means it as (1); meanwhile, statements about "quality of life" could mean (2) or (3) but not (1).

Comment by Nick_Tarleton on LW Women- Minimizing the Inferential Distance · 2012-11-29T01:16:00.066Z · LW · GW

If we measure quality of life solely in terms of status

Is there a reason we might want to do this? It feels like your comments in this thread unjustifiably privilege this model.

Comment by Nick_Tarleton on Logical Pinpointing · 2012-11-12T20:09:39.724Z · LW · GW

I've heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer's coming from.

Agreed, and disappointed that this comment was downvoted.