strangepoop's Shortform

post by a gently pricked vein (strangepoop) · 2019-09-10T17:04:58.375Z · LW · GW · 8 comments

Comments sorted by top scores.

comment by a gently pricked vein (strangepoop) · 2019-11-20T17:52:35.633Z · LW(p) · GW(p)

The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside; they just look like how the world is/would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".

Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? That's not clear at all if you're using territory-free, un-asymmetric, simple rules like "lower". Like any other attempt at truth-finding, it is not magic. It requires thermodynamic work.

The thing is, the payoff is rather amazing. You can just get down to work. As soon as you're free of a constant stream of abuse from beliefs previously housed in your head, you can Choose without Suffering.

The problem is, I'm not sure how to strategically go about doing this, other than using my full brain with Constant Vigilance.

Coda: A large portion of the LW project (or at least, more than a few offshoots) is about noticing you have beliefs that respond to incentives other than pure epistemic ones, and trying not to reload when shooting your foot off with those. So unsurprisingly, there's a failure mode here: when you publicly declare really low expectations (e.g. "everyone's an asshole"), it works to challenge people, urging them to prove you wrong. It's a cool trick for winning games of Chicken, but as usual, it works by handicapping you. So make sure you at least understand the costs and the contexts it works in.

comment by a gently pricked vein (strangepoop) · 2019-09-10T17:04:58.545Z · LW(p) · GW(p)

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(this is also why it has always felt like metarationality only provides comments where Eliezer would've just given you the code)

The part that doesn't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as "all models are false, some models are useful", but I'm not sure that's the right perspective.

Replies from: gworley, Viliam
comment by Gordon Seidoh Worley (gworley) · 2019-09-10T17:31:45.087Z · LW(p) · GW(p)

Coming from within that framing, I'd say yes.

comment by Viliam · 2019-09-10T23:57:41.345Z · LW(p) · GW(p)

From a certain perspective, "more models" becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment. Especially when multiple models, all of them "false but useful", would each suggest taking a different action.

As an analogy, it's like saying that your artificial intelligence will be an artificial meta-intelligence, because instead of following one algorithm, as other artificial intelligences do, it will choose between multiple algorithms. At the end of the day, "if P1 then A1 else if P2 then A2 else A3" still remains one algorithm. So the actual question is not whether one algorithm or many algorithms is better, but whether having a big if-switch at the top level is the optimal architecture. (Dunno, maybe it is, but from this perspective it suddenly feels much less "meta" than advertised.)

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-09-12T04:21:56.669Z · LW(p) · GW(p)

"becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment."

The architecture feels way different when you're not trying to have consistency though. Your rules for switching can themselves switch based on the current model, and the whole thing becomes way more dynamic.
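
A minimal sketch of that contrast (the model names and the toy switching rule here are purely illustrative assumptions, not anyone's actual architecture): a fixed top-level if-switch over models, versus models that each carry their own rule for handing control to the next model.

```python
# Toy contrast between the two pictures above; everything here is illustrative.

# 1) A fixed top-level if-switch: the dispatcher itself never changes.
def static_dispatch(situation):
    if situation == "social":
        return "use model B"
    elif situation == "physical":
        return "use model A"
    else:
        return "use default model C"

# 2) A more dynamic picture: each model carries its own rule for picking
#    the next model, so switching models also switches the switching rule.
class Model:
    def __init__(self, name, act, choose_next):
        self.name = name
        self.act = act                  # what this model recommends doing
        self.choose_next = choose_next  # how this model hands off control

def dynamic_dispatch(initial_model, situations):
    model = initial_model
    actions = []
    for situation in situations:
        actions.append(model.act(situation))
        model = model.choose_next(situation)  # the rule in force depends on the current model
    return actions

# Tiny usage: model A defers to model B on "social" situations; B keeps control.
model_b = Model("B", act=lambda s: f"B handles {s}", choose_next=lambda s: model_b)
model_a = Model("A", act=lambda s: f"A handles {s}",
                choose_next=lambda s: model_b if s == "social" else model_a)
print(dynamic_dispatch(model_a, ["physical", "social", "physical"]))
```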

comment by a gently pricked vein (strangepoop) · 2020-07-25T20:02:42.734Z · LW(p) · GW(p)

Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall

Although Hot Hands has been the subject of enough controversy that it perhaps should no longer be termed a fallacy, there is a sense in which I've fooled myself before with fake momentum. I mean when you change your strategy using a faulty bottom line [LW · GW]: incorrectly updating on your current dynamic.

As a somewhat extreme but actual example from my own life: when filling out answer sheets for multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about having marked almost all of the questions near the end, and then completely, obviously, irrationally decide to mark them all. This came out of some completion urge, and the positive affect around having filled in most of them. It took a fair bit of self-deception to carry out, since I was aware at some level that I had left some of them unanswered because I was in fact unsure, and to mark them I had to feel sure.

Now, for sure you could make the case that maybe there are times when you're thinking clearer and when you know the subject or whatever, where you can additionally infer this about yourself correctly and then rationally ramp up your confidence (even if only slightly). But this wasn't one of those cases [LW · GW]; it was simply that I felt great about myself.

Anyway, the real point of this post is that there's a flipside (or straightforward generalization) of this: we can talk about this fake inertia for subjects at rest or in motion. What I mean is that there's a similar tendency to not feel like doing something because you don't have that dynamic right now [LW · GW], hence all the clichés of the form "the first blow is half the battle". In a sense, that's all I'm communicating here, but seeing it as a simple irrational mistake (as in the example above) really helped me get over this without drama: just remind yourself of the bottom line and start moving in the correct flow, ignoring the uncalibrated halo (or lack thereof) of emotion [LW · GW].

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-07-25T20:16:40.567Z · LW(p) · GW(p)

Above, a visual depiction of strangepoop.

Replies from: strangepoop
comment by a gently pricked vein (strangepoop) · 2020-07-25T20:30:18.902Z · LW(p) · GW(p)

Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.