Posts

Pittsburgh, PA – ACX Meetups Everywhere 2021 2021-08-23T08:54:36.608Z
Welcome to Pittsburgh Rats [Edit With Your Details] 2018-03-28T04:56:05.085Z
zlrth's Shortform Feed 2018-02-13T18:41:09.849Z
Consider motivated snobbery 2017-11-11T15:49:57.490Z
Leave beliefs that don't constrain experience alone 2017-10-30T04:03:34.677Z

Comments

Comment by zlrth on Pittsburgh, PA – ACX Meetups Everywhere 2021 · 2021-09-10T15:54:09.175Z · LW · GW

(Will you all get this comment as an email?) Looking forward to meeting! I'll bring nametags, a sign-up sheet, and some extemporaneously chosen food.

Comment by zlrth on LessWrong.com URL transfer complete, data import will run for the next few hours · 2018-03-26T22:17:24.462Z · LW · GW

Broken link:

http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14

Expected behavior: You can see the comment, a la archive.org:

https://web.archive.org/web/20170424155218/http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14

(do make sure you hit the '+')

Actual behavior: You can't see the comment on the page unless you click "show more comments." Click it, and the page reloads and scrolls down to the comment. Given that lesswrong.com/{...}/2f14 is a direct link to that comment, the page should show that comment directly.

Comment by zlrth on zlrth's Shortform Feed · 2018-03-13T17:12:40.837Z · LW · GW

Some time ago I stopped telling people I'd be somewhere at ish-o'clock: 4 PM-ish, for example. I really appreciate it when people tell me they'll be somewhere at an exact time, and they're there.

I've heard that people are more on-time for a meeting that starts at 4:05 than one at 4:00, and I've used that tactic (though I'd pick the less-obviously-sneaky 4:15).

Comment by zlrth on June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. · 2018-02-23T20:26:13.812Z · LW · GW

Yeah--when the person asking the question said "90 years" and some of the Turing Award winners raised their hands, couldn't that be interpreted as specifying a wide confidence interval? That's what you should do when you know you don't have the domain expertise to predict the future.

Comment by zlrth on How to not talk about probability estimates · 2018-02-21T03:56:03.465Z · LW · GW
This intuitively feels epistemologically arrogant, but it succeeds in solving the probability language discrepancy.

In general I support the idea that you avoid a lot of pitfalls if you're precise and upfront about what kinds of evidence you will and won't accept. I suspect that kind of planning isn't discussed enough in rationalist circles, so I appreciate this post! You're upfront about the fact that you'll accept a non-explicit signal, and I see nothing wrong with that, given that you're many inferential steps away from a shared understanding of probability.

Comment by zlrth on zlrth's Shortform Feed · 2018-02-13T23:03:26.181Z · LW · GW

Oh! Thanks.

Comment by zlrth on zlrth's Shortform Feed · 2018-02-13T23:02:47.829Z · LW · GW

First: Yes I agree that my thing is a different thing, different enough to warrant a new name. And I am sneaking in negative affect.

Yeah, no kidding it’s easier to catch people doing it—because it’s a completely different thing!

Indeed, I am implicitly arguing that we should be focused on faults-we-actually-have[0], not faults-it's-easy-to-see-we-don't. My example of this is the above-linked podcast, where the hosts hem and haw and, after thinking about it, decide they have no sacred cows and declare that a good thing (full disclosure: I like the podcast).

"Sacred-cow" as "well-formed proposition about the world you'd choose to be ignorant of" is clearly bad to LWers, so much so that it's non-tribal.

[0] And especially, faults-we-have-in-common-with-non-rationalists! I said, "The advantage of this definition is that it’s easier to catch rationalists and non-rationalists doing it." Said Achmiz gave examples using the word "people," but I intended to group rationalists with non-rationalists.

Comment by zlrth on zlrth's Shortform Feed · 2018-02-13T19:36:32.326Z · LW · GW

I sometimes hear rationalist-or-adjacent people say, "I don't have any sacred-cow-type beliefs." This is the perspective of this commenter who says, "lesswrong doesn't scandalize easily." Agreed: rationalists-and-adjacents entertain a wide variety of propositions.

The conventional definition of sacred-cow-belief is: A falsifiable belief about the world you wouldn't want falsified, given the chance. For example: If a theist had the opportunity to open a box to see whether God existed, and refused, and wouldn't let anyone else open the box, that belief is a sacred cow.

A more interesting (to me) definition of sacred cow is: a belief that causes you to not notice mistakes you make. The advantage of this definition is that it's easier to catch rationalists and non-rationalists doing it. Rationalists are much better than average at evaluating well-formed propositions, so they won't be baited by a passionately ignorant person talking about homeopathy.

But there are pre-propositional beliefs[1]. Here are some examples:

Talking about it makes it better.
I'm the man of the house, and the man of the house should be strong.
I'm unworthy.
I can tolerate anything but the outgroup.
Worrying solves nothing.
People enjoy my presence.
People need structure.

You walk into a conversation, start talking, and miss what your interlocutor was actually saying, because you believe things like these.

An example rant: I am damn-near unwilling to falsify my belief that "people need structure." When people ask me for help, what am I supposed to say? Anything I suggest is additional structure. Were I to suggest taking a break, that's still a change, and change is structure. Maybe change itself is a problem, but that's beside the point; my interlocutor is asking for advice. Advice means change, and change entails a change in structure. You might as well make it good structure.

What I just said is intended to show that sacred cows are terse beliefs that constrain decisions in social interaction and offer easy post hoc rationalization. Say I find out my belief is false. Now it turns out I've been making mistakes all my life: I blundered into conversations adding structure. And without that sacred cow, I don't know what to say.

[1] It's probably a spectrum. All those examples can be made falsifiable. They all have in common vagueness, which begets unfalsifiability.

Note: This is a shorter version of a comment I wrote here, written as a comment on an episode of The Bayesian Conspiracy, a podcast I like.

Comment by zlrth on De-Centering Bias · 2017-11-07T03:21:00.412Z · LW · GW
(as Eliezer says, it is dangerous to be half a rationalist, link, there's a better link somewhere, but I can't find it)

This might be it: http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/

Excerpt:

And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid. 

(It also mentions that it's dangerous to be half a rationalist.)

Comment by zlrth on Acknowledging Rationalist Angst · 2017-11-07T02:57:54.504Z · LW · GW

I'm going to write soon about how I don't care about existential risk, and how I can't figure out why. Am I not a good rationalist? Why can't I seem to care?

In one compound sentence: Personal demons made me a rationalist; personal demons decide what I think/feel is important.

I'm still angsty!

Comment by zlrth on Leave beliefs that don't constrain experience alone · 2017-11-07T02:30:20.091Z · LW · GW
I personally will have no strong beliefs about the truth value of their hypothesis if I have too much conflicting evidence. However, I won't want to put much effort into testing the hypothesis unless my plans depend on it being true or false.

I like how you said this.

The people with whom I was speaking were successful members of society, so they fell into the uncanny valley for me when they started pushing the idea that everyone has their own truth. I'm not sure whether it's better or worse that they didn't quite literally believe that, but simply didn't know how to better articulate what they actually believed.

In social situations, I've been trying to find a delicate and concise way to get across: "'Everyone has their own truth' is not an experience-constraining belief. Saying it is a marker of empathy--good for you (seriously!). But if I wanted to falsify that belief, I wouldn't know where to begin. What trade-offs do you think you're making by saying, 'Everyone has their own truth'?"

"Everyone has their own truth" is just one example of these kinds of applause-lights-y nonbeliefs. I say them too when I'm trying to signal empathy, and not much else.

For example, my objection to people believing in poltergeists (which is how the conversation started) isn't that they believe it. It's that they don't see the vast implications of a) transhumanism via ghost transformation, b) undetectable spies, c) remote projection of physical force, or d) possibly unlimited energy. They live as if none of those possibilities exist, which to me is a worse indictment of their beliefs than a lack of evidence, and an indictment of their education even if they're right about the ghosts.

Because they live as if none of these possibilities exist (i.e. their experiences are constrained), couldn't you say that for some definition of "believe," they don't actually believe in poltergeists? They're committing a minor sin by saying out loud that they believe in poltergeists, while not living as though they do.

That said, I'd still say that aligning your stated beliefs with how you behave is admirable and effective.

Comment by zlrth on Acknowledging Rationalist Angst · 2017-11-06T20:48:55.596Z · LW · GW

I think I know what you mean by "rationalists really do wipe the floor with the competition." But in the interest of precision, what do you mean? I'm not convinced they do; I alluded to this in my post here: https://www.lesserwrong.com/posts/dPLLnAJbas97GGsvQ/leave-beliefs-that-don-t-constrain-experience-alone

It may be that the community already has a standard article on this; I'd be happy with a link. It may also be that I should read more rigorously about what exactly a rationalist is. If there is no standard article, I'm curious about your thoughts.

Comment by zlrth on Leave beliefs that don't constrain experience alone · 2017-10-31T20:54:08.655Z · LW · GW

You want to do better than a Nobel Prize? Not the prize itself, of course, but the contribution to society? I'm intrigued. Could you expand on that?

My intrigue comes from my bar-of-what-is-possible, John von Neumann. He probably had more beliefs-that-pay-rent than I do, but he also had a "practically unlimited" capacity for work, tons of "mathematical courage," and "awe-inspiring" speed[0]. It'd be so great if those things were simply beliefs-that-pay-rent!

So I tell myself, "To do better than I have been doing, I must increase my work ethic, mathematical courage, and speed." That's very difficult for me; I'm a lazy, nervous, and slow thinker! I'm not sure what I think (nor do I know what the lesswrong consensus is) about what is and is not a belief-that-pays-rent, and whether changing those beliefs changes your life as much as changing things that aren't.

What do you have in mind as regards the possibility of doing great things? By the way, I agree with and appreciate your comment.

[0] http://stepanov.lk.net/mnemo/legende.html?hn

Comment by zlrth on Leave beliefs that don't constrain experience alone · 2017-10-30T22:04:30.548Z · LW · GW

Thank you! I updated my post.

Comment by zlrth on Trope Dodging · 2017-10-24T16:36:00.473Z · LW · GW

Responding to the prompt for discussion: Once one finds deeply rooted filters with poor calibration, how should one go about fixing them?

I've heard people comment and meta-comment about how rationality seems to help people only in indirect ways. I'd also say that about myself!

I have also rarely asked for help. This is a deeply rooted, poorly calibrated filter. My first answer is: do the hard thing the easy way.

"Complaining more" is my way of "asking for help." Specifically, complaining that's directed toward a tractable problem and that explains my failed attempts and foibles. And does so in socially acceptable ways.

It'd be better to ask for help the way the therapy books tell you. But that's an ugh field for me--at least for the things I have an ugh field about!

I hypothesize that going from having a poorly-calibrated filter to a well-calibrated one takes a lot of effort. That's another reason to do the hard thing the easy way. It's easier to inch away from "complaining more" toward "asking for help" than it is to whole-hog it.

A question I think is worth asking and challenging is: once people know about trope dodging, to what extent can or will they recalibrate? I don't know enough about human nature to hazard an answer. But if the answer is "not much," then that's another point for doing the hard thing the easiest way. Better to reduce friction to permit change.