Posts

Countering Self-Deception: When Decoupling, When Decontextualizing? 2020-12-10T15:28:24.302Z
strangepoop's Shortform 2019-09-10T17:04:58.375Z

Comments

Comment by strangepoop on Wholehearted choices and “morality as taxes” · 2020-12-23T22:04:22.464Z · LW · GW

It occurred to me while reading your comment that I could respond entirely with excerpts from Minding Our Way. Here's a go (it's just fun; if you also find it useful, great!):

You will spend your entire life pulling people out from underneath machinery, and every time you do so there will be another person right next to them who needs the same kind of help, and it goes on and on forever

This is a grave error, in a world where the work is never finished, where the tasks are neverending.

Rest isn't something you do when everything else is finished. Everything else doesn't get finished. Rather, there are lots of activities that you do, some which are more fun than others, and rest is an important one to do in appropriate proportions. 

Rest isn't a reward for good behavior! It's not something you get to do when all the work is finished! That's finite task thinking. Rather, rest and health are just two of the unending streams that you move through. [...]

the scope of the problem, at least relative to your contribution, is infinite

This behavior won't do, for someone living in a dark world. If you're going to live in a dark world, then it's very important to learn how to choose the best action available to you without any concern for how good it is in an absolute sense. [...]

You will beg for a day in which you go outside and don't find another idiot stuck under his fucking car

I surely don't lack the capacity to feel frustration with fools, but I also have a quiet sense of aesthetics and fairness which does not approve of this frustration. There is a tension there.

I choose to resolve the tension in favor of the people rather than the feelings. [...]

somebody else is going to die, you monster

We aren't yet gods. We're still fragile. If you have something urgent to do, then work as hard as you can — but work as hard as you can over a long period of time, not in the moment. [...]

You can look at the bad things in this world, and let cold resolve fill you — and then go on a picnic, and have a very pleasant afternoon. That would be a little weird, but you could do it! [...]

So eventually you either give up, or you put earplugs in your ears and go enjoy some time in the woods, completely unable to hear the people yelling for help.

many people seem to think that there is a privileged "don't do anything" action, that consists of something like curling up into a ball, staying in bed, and refusing to answer emails. It's much easier to adopt the "buckle down" demeanor when, instead, curling up in a ball and staying in bed feels like just another action. It's just another way to respond to the situation, which has some merits and some flaws.

(That's not to say that it's bad to curl up in a ball on your bed and ignore the world for a while. Sometimes this is exactly what you need to recover. Sometimes it's what the monkey is going to do regardless of what you decide. [...])

So see the dark world. See everything intolerable. Let the urge to tolerify it build, but don't relent. Just live there in the intolerable world, refusing to tolerate it. See whether you feel that growing, burning desire to make the world be different. Let parts of yourself harden. Let your resolve grow. It is here, in the face of the intolerable, that you will be able to tap into intrinsic motivation. [...]

Comment by strangepoop on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-13T10:33:09.436Z · LW · GW

You draw boundaries towards questions.

As the links I've posted above indicate, no: you don't necessarily need questions to begin noticing joints and carving around them; lists can do that on their own.

Questions are helpful, however, to convey the guess I might already have and to point at the intension that others might build on/refute. And so...

Your list doesn't have any questions like that

...I have had some candidate questions in the post since the beginning, and later even added some indication of the goal at the end.

EDIT: You also haven't acknowledged/objected to my response to your "any attempt to analyse the meaning independent of the goals is confused", so I'm not sure if that's still an undercurrent here.

Comment by strangepoop on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-12T19:39:17.830Z · LW · GW

In Where to Draw the Boundaries, Zack points out (emphasis mine):

The one replies:

But reality doesn't come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins "fish" and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.

No. Everything we identify as a joint is a joint not "because we care about it", but because it helps us think about the things we care about.

There are more relevant things in there, which I don't know if you have disagreements with. So maybe it's more useful to crux with Zack's main source. In Where to Draw the Boundary, Eliezer gives an example:

And you say to me:  "It feels intuitive to me to draw this boundary, but I don't know why—can you find me an intension that matches this extension?  Can you give me a simple description of this boundary?"

I take it this game does not work for you without a goal more explicit than the one I have in the postscript to the question? 

(Notice that inferring some aspects of the goal is part of the game; in the specific example Eliezer gave, they're trying to define Art, which is as nebulous an example as it could be. Self-deception is surely less nebulous than Art.)

I was looking for this kind of engagement, which asserts/challenges either intension or extension:

You come up with a list of things that feel similar, and take a guess at why this is so.  But when you finally discover what they really have in common, it may turn out that your guess was wrong.  It may even turn out that your list was wrong.

Comment by strangepoop on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-12T12:08:52.869Z · LW · GW

It seemed to me that avoiding fallacies of compression was always a useful thing (independent of your goal, so long as you have the time for computation), even if negligibly. Yet these questions seem to be a bit of a counterexample, namely that I have to be careful when what looks like decoupling might be decontextualizing.

Importantly, I can't seem to figure out a sharp line between the two. The examples were a useful meditation for me, so I shared them. Maybe I should rename the title to reflect this?

(I'm quite confused by my failure to convey the point of the meditation; I might try redoing the whole post.)

Comment by strangepoop on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-11T12:02:12.227Z · LW · GW

Yes, this is the interpretation. 

If I'm doing X wrong (in some way), it's helpful for me to notice it. But then I notice I'm confused about when decoupling context is the "correct" thing to do, as exemplified in the post. 

Rationalists tend to take great pride in decoupling and seeing through narratives (myself included), but I sense there might be some times when you "shouldn't", and those times seem strangely caught up with embeddedness in a way.

Comment by strangepoop on Countering Self-Deception: When Decoupling, When Decontextualizing? · 2020-12-11T11:54:24.048Z · LW · GW

I think I might have made a mistake in putting in too many of these at once. The whole point is to figure out which forms of accusations are useful feedback (for whatever), and which ones are not, by putting them very close to questions we think we've dissolved.

Take three of these, for example. I think it might be helpful to figure out whether I'm "actually" enjoying the wine, or if it's a sort of a crony belief. Disentangling those is useful for making better decisions for myself: say, deciding whether to go to a wine-tasting when the status boost with those people wouldn't help.

Perhaps similarly, I'm better off knowing if my knowledge of whether this food item is organic is interfering with my taste experience.

But then in the movie example, no one would dispute that the knowledge is relevant to the experience! Going back to our earlier ones, maybe the knowledge there was relevant too, and "genuinely" made it a better experience?

Maybe my degree of liking is a function of both "knowledge of organic origin" and "chemical interactions with tongue receptors" just like my degree of liking of a movie is a function of both "contextual buildup from the narrative" and "the currently unfolding scene"?

How about when you apply this to "you only upvoted that because of who wrote it"? Maybe that's a little closer to home.

Comment by strangepoop on Open & Welcome Thread - December 2020 · 2020-12-10T15:24:20.405Z · LW · GW

[ETA: posted a Question instead]

Question: What's the difference, conceptually, between each of the following if any?

"You're only enjoying that food because you believe it's organic"

"You're only enjoying that movie scene because you know what happened before it"

"You're only enjoying that wine because of what it signals"

"You only care about your son because of how it makes you feel"

"You only had a moving experience because of the alcohol and hormones in your bloodstream"

"You only moved your hand because you moved your fingers"

"You're only showing courage because you've convinced yourself you'll scare away your opponent"

For example:

Comment by strangepoop on Open & Welcome Thread – November 2020 · 2020-11-15T15:49:47.060Z · LW · GW

So... it looks like the second AI-Box experiment was technically a loss

Not sure what to make of it, since it certainly imparts the intended lesson anyway. Was it a little misleading that this detail wasn't mentioned? Possibly. Although the bet was likely conceded, a little disclaimer of "overtime" would have been nice when Eliezer discussed it.

Comment by strangepoop on Impostor Syndrome as skill/dominance mismatch · 2020-11-06T14:33:15.872Z · LW · GW

I was also surprised. Having spoken to a few people with crippling impostor syndrome, I'd summarize it as "people think I'm smart/skilled, but it's not Actually True."

I think the claim in the article is that they're still in the game when saying that, just playing another round of downplaying themselves? This becomes really hard to falsify (like internalized misogyny) even if true, so I appreciate the predictions at the end.

Comment by strangepoop on The bads of ads · 2020-10-23T19:04:49.129Z · LW · GW

I like the idea of it being closer to noise, but there are also reasons to consider the act of advertising a kind of theft, or worse:

  • It feels like the integrity of my will is attacked when ads work and I know, somewhere, that I don't want them to; a divide-and-conquer attack on my brain, Moloch in my head.
  • If they get the most out of marketing it to parts of my brain rather than to me as a whole, there is optimization pressure to keep my brain divided, to lower the sanity waterline.
  • Whenever I'm told to "turn off adblocker", for that to work for them, it's premised on me being unable to have an adblocker inside my brain, preying on what's uncontrollably automatic for me. As if to say: "we both know how this works". It makes me think of an abuser saying to their victim: "go fetch my belt".

Comment by strangepoop on The bads of ads · 2020-10-23T18:47:40.017Z · LW · GW

There's a game of chicken in "who has to connect potential buyers to sellers, the buyers or the sellers?" and depending on who's paying to make the transaction happen, we call it "advertisement" or "consultancy".

(You might say "no, that distinction comes from the signal-to-noise ratio", so question: if increasing that ratio is what works, how come advertisements are so rarely informative?)

Comment by strangepoop on Open & Welcome Thread – October 2020 · 2020-10-04T05:14:08.063Z · LW · GW

As a meta-example, even to this I want to add:

  • There's this other economy to keep in mind: readers scrolling past walls of text. Often, I can and want to make what I'm saying cater to multiple attention spans (à la Arbital?), and collapsed-by-default comments allow the reader to explore at will.
    • A strange worry (that may not be true for other people) is that attempting to contribute to someone else's long thread or list feels a little uncomfortable/rude without reading it all carefully. With collapsed-by-default, you could set up norms that it's okay to reply without engaging deeply.
  • It would be nice to have collapsing as part of the formatting.
  • With this I already feel like I'm setting up a large-ish personal garden that would inhibit people from engaging in this conversation even if they want to, because there's so much going on.
    • And I can't edit this into my previous comment without cluttering it.
  • There's obviously no need for norms about "talking too much" when it's decoupled from the rest of the control system.
    • I do remember Eliezer saying in a small comment somewhere long ago that "the rule of thumb is to not occupy more than three places in the Recent Comments page" (paraphrased).

Comment by strangepoop on Open & Welcome Thread – October 2020 · 2020-10-04T04:45:11.028Z · LW · GW

  • I noticed a thing that might hinder the goals of longevity as described here ("build on what was already said previously"): it feels like a huge cost to add a tiny/incremental comment to something because of all the zero-sum attention games it participates in.

    It would be nice to do a silent comment, which:

    • Doesn't show up in Recent Comments
    • Collapsed by default
    • (less confident) Doesn't show up in author's notifications (unless "Notify on Silent" is enabled in personal settings)
    • (kinda weird) Comment gets appended automatically to previous comment (if yours) in a nice, standard format.
  • The operating metaphor is to allow the equivalent of bulleted lists to span across time, which I suppose would mostly be replies to yourself.
    • It feels strange to keep editing one comment, and too silent. Also disrupts flow for readers.
  • I don't often see that people have added several comments (via edit or otherwise) across months, or even days. Yet people seem to use a lot of nested lists here. Hard to believe that those list-erious ways go away if spread out in time.

Comment by strangepoop on Some thoughts on criticism · 2020-09-18T14:21:33.090Z · LW · GW

Often, people like that will respond well to criticism about X and Y but not about Z.

One (dark-artsy) aspect to add here is that the first time you ask somebody for criticism, you're managing more than your general identity; you're also managing your interaction norms with that person. You're giving them permission to criticize you (or sometimes, even think critically about you for the first time), creating common knowledge that there does exist a perspective from which it's okay/expected for them to do that. This is playing with the charity they normally extend to you, which might mean that your words and plans will be given less attention than before, even though there might not be any specific criticism in their head. This is especially relevant for low-legibility/fluid hierarchies, which might collapse and impede functioning from the resulting misalignment, perhaps not unlike your own fears of being "crushed", but at the org level.

Although it's usually clear that you'd want to get feedback rather than manage this (at least, I think so), it's important to notice this as one kind of anxiety surrounding criticism. It is separate from any narcissistic worries about status; it can be a real systemic worry when you're acting prosocially.

Comment by strangepoop on Invisible Frameworks · 2020-09-16T13:51:53.207Z · LW · GW

Incidentally Eliezer, is this really worth your time?

This comment might have caused a tremendous loss of value, if Eliezer had taken Marcello's words seriously here and so stopped talking about his metaethics. As Luke points out here, despite all the ink spilled, very few seemed to have gotten the point (at least, from only reading him).

I've personally had to re-read it many times over, years apart even, and I'm still not sure I fully understand it. It's also been the most personally valuable sequence, the sole cause of significant fundamental updates. (The other sequences seemed mostly obvious --- which made them more suitable as just incredibly clear references, sometimes if only to send to others.)

I'm sad that there isn't more.

Comment by strangepoop on Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff · 2020-09-10T12:05:32.462Z · LW · GW

Ping!

I've read/heard a lot about double crux but never had the opportunity to witness it.

EDIT: I did find one extensive example, but this would still be valuable since it was a live debate.

Comment by strangepoop on Toolbox-thinking and Law-thinking · 2020-09-05T20:41:26.815Z · LW · GW

This one? From the CT-thesis section in A first lesson in meta-rationality.

the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow. If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately. I say “probably” because we don’t know much about how minds work, so we can’t be certain.

What we can be certain of is that, because we don’t know how minds work, we can’t treat them as systems now. That is the case even if, when neuroscience progresses sufficiently, they might eventually be described that way. Even if God told us that “a human, reasoning meta-systematically, is just a system,” it would be useless in practice. Since we can’t now write out rules for meta-systematic reasoning in less than ten kilograms, we have to act, for now, as if meta-systematic reasoning is non-systematic.

Comment by strangepoop on strangepoop's Shortform · 2020-07-25T20:30:18.902Z · LW · GW

Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.

Comment by strangepoop on strangepoop's Shortform · 2020-07-25T20:02:42.734Z · LW · GW

Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall

Although Hot Hands has been the subject of enough controversy to perhaps no longer be termed a fallacy, there is a sense in which I've fooled myself before with a fake momentum. I mean when you change your strategy using a faulty bottomline: incorrectly updating on your current dynamic.

As a somewhat extreme but actual example from my own life: when filling out answer sheets for multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about having marked almost all of the questions near the end, and then completely, obviously, irrationally decide to mark them all. This was out of some completion urge, and the positive affect around having filled in most of them. This involved a fair bit of self-deception to carry out, since I was aware at some level that I had left some of them unanswered because I was in fact unsure, and to mark them I had to feel sure.

Now, for sure you could make the case that maybe there are times when you're thinking clearer and when you know the subject or whatever, where you can additionally infer this about yourself correctly and then rationally ramp up the confidence (even if slight) in yourself. But this wasn't one of those cases, it was the simple fact that I felt great about myself.

Anyway the real point of this post is that there's a flipside (or straightforward generalization) of this: we can talk about this fake inertia for subjects at rest or at motion. What I mean is there's this similar tendency to not feel like doing something because you don't have that dynamic right now, hence all the clichés of the form "first blow is half the battle". In a sense, that's all I'm communicating here, but seeing it as a simple irrational mistake (as in the example above) really helped me get over this without drama: just remind yourself of the bottomline and start moving in the correct flow, ignoring the uncalibrated halo (or lack thereof) of emotion.

Comment by strangepoop on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-07T15:48:22.056Z · LW · GW

There's a whole section on voting in the LDT For Economists page on Arbital. Also see the one for analytic philosophers, which has a few other angles on voting.

From what I can tell from your other comments on this page, you might already have internalized all the relevant intuitions, but it might be useful anyway. Superrationality is also discussed.

Sidenote: I'm a little surprised no one else mentioned it already. Somehow Arbital posts by Eliezer aren't considered as canon as the sequences; maybe it's the structure (rather than just the content)?

Comment by strangepoop on Conversation Halters · 2020-06-28T12:20:32.981Z · LW · GW

I usually call this lampshading, and I'll link this comment to explain what I mean. Thanks!

Comment by strangepoop on How to evaluate (50%) predictions · 2020-05-13T01:30:15.680Z · LW · GW

Thank you for this comment. I went through almost exactly the same thing, and might have possibly tabled it at the "I am really confused by this post" stage had I not seen someone well-known in the community struggle with and get through it.

My brain especially refused to read past the line that said "pushing it to 50% is like throwing away information": Why would throwing away information correspond to the magic number 50%?! Throwing away information brings you closer to maxent, so if true, what is it about the setup that makes 50% the unique solution, independent of the baseline and your estimate? That is, what is the question?

I think it's this: in a world where people can report the probability for a claim or the negation of it, what is the distribution of probability-reports you'd see?

By banning one side of it, as Rafael does, you nudge the reports toward being informative. Anyway, this kind of thinking makes it seem like it's a fact about this flipping trick and not something fundamental to probability theory. I wonder if there are more such tricks/actual psychology to adjust for to get a different answer.
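To make "the distribution of probability-reports you'd see" concrete, here's a minimal toy sketch (my own illustration, not from Rafael's post): assume forecasters hold true beliefs p drawn uniformly on (0, 1), and are free to report either a claim or its negation, always stating the side they assign at least 50% to.

```python
import random

def reported_probability(p: float) -> float:
    """A forecaster free to state either the claim or its negation
    can always report the side they assign >= 50% to."""
    return max(p, 1 - p)

random.seed(0)
beliefs = [random.random() for _ in range(100_000)]   # toy uniform prior over true beliefs
reports = [reported_probability(p) for p in beliefs]

print(min(reports))                                   # ~0.5: nothing is ever reported below 50%
print(sum(r < 0.55 for r in reports) / len(reports))  # ~0.10: reports end up uniform on [0.5, 1)
```

Under these assumptions the reported numbers are uniform on [0.5, 1), and a 50% report is exactly the one the flip leaves unchanged, which is one way to see why it carries no directional information.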

Comment by strangepoop on Is Rationalist Self-Improvement Real? · 2019-12-12T12:42:10.952Z · LW · GW

While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").

Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":

  • if modeling and solving akrasia is, like diet, a hard problem that even "experts" barely have an edge on, and importantly, things that do work seem to be very individual-specific, making it quite hard to stand on the shoulders of giants
  • if a large percentage of people who've found and read through the sequences etc have done so only because they had very important deadlines to procrastinate

...then on average you'd see akrasia over-represented in rationalists. Add to this the fact that akrasia itself makes manually aiming your rationality skills at what you want harder. That can leave it stable even under very persistent efforts.

Comment by strangepoop on Sayan's Braindump · 2019-11-24T07:39:50.491Z · LW · GW

I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.

An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, ie has benefited someone in an exchange not considered a true exchange of value, and so purists can refuse to accept those. Purist communities, if large, would allow stability of such non-contaminated tokens.
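To pin down the Contaminated idea a little, here's a minimal sketch (the names and the notion of flagging an exchange as a "true exchange" are my own illustration, not a worked-out design): each token carries its exchange history, contamination means some exchange in that history wasn't a true exchange of value, and a purist simply refuses contaminated tokens.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    # each entry records (buyer, seller, was_true_exchange)
    history: list = field(default_factory=list)

    def exchange(self, buyer: str, seller: str, true_exchange: bool) -> None:
        self.history.append((buyer, seller, true_exchange))

    @property
    def contaminated(self) -> bool:
        # contaminated once any exchange in its history wasn't a true exchange of value
        return any(not ok for _, _, ok in self.history)

def purist_accepts(token: Token) -> bool:
    return not token.contaminated

t = Token()
t.exchange("alice", "bob", true_exchange=True)
print(purist_accepts(t))   # True: clean history so far
t.exchange("bob", "carol", true_exchange=False)   # a side-market exchange
print(purist_accepts(t))   # False: contamination sticks to the token's history
```

The open problem the sketch ignores is, of course, who gets to decide the true_exchange flag, which is where the need for large purist communities comes in.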

Maybe a better question to ask is "do we have utility functions that are partial orders and thus would benefit from many isolated markets?", because if so, you wouldn't have to worry about enforcing anything: many different currencies would automatically come into existence and be stable.

Of course, more generally, you wouldn't quite have complete isolation, but different valuations of goods in different currencies, without "true" fungibility. I think it is quite possible that our preference orderings are in fact partial and the current one-currency valuation of everything might be improved.

Comment by strangepoop on strangepoop's Shortform · 2019-11-20T17:52:35.633Z · LW · GW

The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside, they just look like how the world is/would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".

Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? It's not clear at all if you're using territory-free un-asymmetric simple rules like "lower". Like any other attempt at truth-finding, it is not magic. It requires thermodynamic work.

The thing is, the payoff is rather amazing. You can just get down to work. As soon as you're free of a constant stream of abuse from beliefs previously housed in your head, you can Choose without Suffering.

The problem is, I'm not sure how to strategically go about doing this, other than using my full brain with Constant Vigilance.

Coda: A large portion of the LW project (or at least, more than a few offshoots) is about noticing you have beliefs that respond to incentives other than pure epistemic ones, and trying not to reload when shooting your foot off with those. So unsurprisingly, there's a failure mode here: when you publicly declare really low expectations (eg "everyone's an asshole"), it works to challenge people, urging them to prove you wrong. It's a cool trick for winning games of Chicken, but as usual, it works by handicapping you. So make sure you at least understand the costs and the contexts it works in.

Comment by strangepoop on [deleted post] 2019-09-26T09:34:39.969Z

I think a counterexample to "you should not devote cognition to achieving things that have already happened" is being angry at someone who has revealed they've betrayed you, which might acause them to not have betrayed you.

Comment by strangepoop on strangepoop's Shortform · 2019-09-10T17:04:58.545Z · LW · GW

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(this is also why it always felt like metarationality seems to only provide comments where Eliezer would've just given you the code)

The part that doesn't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as a case of "all models are false, some models are useful", but I'm not sure if that's the right perspective.

Comment by strangepoop on [deleted post] 2019-07-28T09:05:50.460Z

I want to ask this because I think I missed it the first few times I read Living in Many Worlds: Are you similarly unsatisfied with our response to suffering that's already happened, like the twelfth century that Eliezer asks about? It's boldface "just as real" too. Do you feel the same "deflation" and "incongruity"?

I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.

But that's not so at all: once you've redone your ontology, where the naive idea of time isn't necessarily a fundamental thing and thinking in terms of causal links comes a lot closer to how reality is arranged, it's not a stretch at all. If anything, it follows that you must try and think and feel correctly about the generalized past after being given this information.

Of course, you might modus tollens here.

Comment by strangepoop on Go Do Something · 2019-05-21T17:45:46.082Z · LW · GW

Soares also did a good job of impressing this in Dive In:

In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.

The idea doesn't have to be good, and it doesn't have to be feasible, it just needs to be the best incredibly concrete plan that you can come up with at the moment. Don't worry, it will change rapidly when you start slamming it into reality. The important thing is to come up with a concrete plan, and then start executing it as hard as you can — while retaining a reflective state of mind updating in the face of evidence.

Comment by strangepoop on The concept of evidence as humanity currently uses it is a bit of a crutch. · 2019-05-21T17:33:41.603Z · LW · GW

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.

Being able to describe what you're doing, ie usefully compress existing strategies-in-use, is probably going to be helpful regardless of level of intelligence because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.

Comment by strangepoop on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-15T21:19:13.437Z · LW · GW

To further elaborate 4: your example of the string "1" being a conscious agent because you can "unpack" it into an agent really feels like it shouldn't count: you're just throwing away the "1" and replaying a separate recording of something that was conscious. This sounds about as much of a non-sequitur as "I am next to this pen, so this pen is conscious".

We could, however, make it more interesting by making the computation depend "crucially" on the input. But what counts?

Suppose I have a program that turns noise into a conscious agent (much like generative models can turn a noise vector into a face, say). If we now seed this with a waterfall, is the waterfall now a part of the computation, enough to be granted some sentience/moral patienthood? I think the usual answer is "all the non-trivial work is being done by the program, not the random seed", as Scott Aaronson seems to say here. (He also makes the interesting claim of "has to participate fully in the arrow of time to be conscious", which would disqualify caching and replaying.)

But this can be made a little more confusing, because it's hard to tell which bit is non-trivial from the outside: suppose I save and encrypt the consciousness-generating program. This looks like random noise from the outside, and will pass all randomness tests. Now I have another program, holding the stored key, decrypt it and run it. From the outside, you might disregard the random-seed-looking-thingy and instead try to analyze the decryption program, thinking that's where the magic is.

I'd love to hear about ideas to pin down the difference between Seeding and Decrypting in general, for arbitrary interpretations. It seems within reach, and like a good first step, since the two lie on roughly opposite ends of a spectrum of "cruciality" when the system breaks down into two or more modules.

Comment by strangepoop on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-15T21:13:08.842Z · LW · GW

Responses to your four final notes:

1. This is, as has been remarked in another comment, pretty much Dust theory. See also Moravec's concise take on the topic, referenced in the Dust theory FAQ. Doing a search for it on LW might also prove helpful for previous discussions.

2. "that was already there"? What do you mean by this? Would you prefer to use the term 'magical reality fluid' instead of "exists"/"extant"/"real"/"there" etc, to mark your confusion about this? If you instead feel like you aren't confused about these terms, please provide (a link to) a solution. You can find the problem statement in The Anthropic Trilemma.

3. Eliezer deals with this using average utilitarianism, depending on whether or not you agree with rescuability (see below).

4. GAZP vs GLUT talks about the difference between a cellphone transmitting information of consciousness vs the actual conscious brain on the other end, and generalizes it to arbitrary "interpretations". That is, there are parts of the computation that are merely "interpreting", informing you about consciousness, and others that are "actually" instantiating it. It may not be clear what exactly the crucial difference is yet, but I think it might be possible to rescue the difference, even if you can construct continuums to mess with the notion. This is of course deeply tied to 2.

----

It may seem that my takeaway from your post is mostly negative; this is not the case. I appreciate this post: it was very well organized despite tackling some very hairy issues, which made it easier to respond to. I do feel like LW could solve this somewhat satisfactorily; perhaps some people already have, and either don't bother pointing the rest of us to it or are lost in the noise?

Comment by strangepoop on Epistemic Tenure · 2019-03-07T11:29:57.774Z · LW · GW

it is not as though rationality consisted of some singular epistemesis score that can be raised or lowered

I feel like this is fighting the hypothetical. As Garrabrant says:

Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas.

It doesn't matter whether or not you think it is possible to track rationality through some singular epistemesis score. The question is: you have limited attentional resources and the problem OP outlined; "rationality" is probably complicated; what do you do anyway?

How you divvy them is the score. Or, to replace the symbol with the substance: if you're in charge of divvying those resources, then your particular algorithm will decide what your underlings consider status/currency, and can backpropagate into their minds.

Comment by strangepoop on Out to Get You · 2019-03-05T22:04:04.207Z · LW · GW

Maybe you meant "cutting corners" rather than cutting corners? ie you did understand the distinction between the thing and the appearance of the thing, you just forgot to add the quotes.

Comment by strangepoop on Epistemic Tenure · 2019-03-04T11:00:05.776Z · LW · GW

I think your "attentional resources" are just being Counterfactually Mugged here, so if you're okay with that, you ought to be okay with some attention being diverted away from "real" ideas, if you're reasonably confident in your construction of the counterfactual "Bob’s idea might HAVE BEEN good".

This way of looking at it also says that tenure is a bad metaphor: your confidence in the counterfactual being true can change over time.

(If you then insist that this confidence in your counterfactual is also something that affects Bob, which it kinda does, then I'm afraid we're encountering an instance of unfair problem class in the wild and I don't know what to do)

As an aside, this makes me think: What happens when all consumers in the market are willing to get counterfactually mugged? Where I'm not able to return my defective phone because prediction markets said it would have worked? I suppose this is not very different from the concept of force majeure, only systematized.

Comment by strangepoop on Unconscious Economics · 2019-02-27T15:10:28.938Z · LW · GW

It's worth noting that David Friedman's Price Theory clearly states this in the very first chapter, just three paragraphs down:

The second half of the assumption, that people tend to find the correct way to achieve their objectives, is called rationality. This term is somewhat deceptive, since it suggests that the way in which people find the correct way to achieve their objectives is by rational analysis--analyzing evidence, using formal logic to deduce conclusions from assumptions, and so forth. No such assumption about how people find the correct means to achieve their ends is necessary.

One can imagine a variety of other explanations for rational behavior. To take a trivial example, most of our objectives require that we eat occasionally, so as not to die of hunger (exception--if my objective is to be fertilizer). Whether or not people have deduced this fact by logical analysis, those who do not choose to eat are not around to have their behavior analyzed by economists. More generally, evolution may produce people (and other animals) who behave rationally without knowing why. The same result may be produced by a process of trial and error; if you walk to work every day, you may by experiment find the shortest route even if you do not know enough geometry to calculate it. Rationality in this sense does not necessarily require thought. In the final section of this chapter, I give two examples of things that have no minds and yet exhibit rationality.

I don't think it counts as a standard textbook, but it is meant to be a textbook.

On the whole, I think it's perfectly okay for economists to mostly ignore how the equilibrium is achieved, since like you pointed out, there are so many juicy results popping out from just the fact that they are achieved on average.

Also, I enjoyed the examples in your post!

Comment by strangepoop on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-02-25T21:35:18.842Z · LW · GW

effortless pattern-recognition is what machine learning can do today, while effortful attention, and explicit reasoning (which seems to be a subset of effortful attention) is generally beyond ML’s current capabilities.

Just to be clear, are you or aren't you (or neither) saying that this is only a matter of scale?

It seems to me like you're saying it could indeed only be a matter of scale, we're just in the stage of figuring out what the right dimension to amp up is ("be coherent for longer").

Comment by strangepoop on Double-Dipping in Dunning--Kruger · 2018-11-28T08:58:06.957Z · LW · GW

See, the problem is, now that I've also internalized this (seemingly true) lesson, the +15% might double-boost my ass numbers.

But maybe if we accumulate enough lessons we can get increasingly close to the truth by adding these "higher order terms"?

I don't think so - the error bars do not necessarily diminish. For example:

  • Ass number for drawing ability percentile: ~70%
  • Dunning Kruger correction: ~50%
  • Double-dip correction: ~65%

Did I do it right? I have no idea. Every step might have already been taken into account in the first asstimate. Every system-2 patch that we discover might have immediately patched system-1.

One (admittedly lazy) way out is to chuck all context-sensitive formal rules like 'add/subtract X%' and leave it entirely to system-1: play calibration games for skill-percentiles.

Comment by strangepoop on If You Want to Win, Stop Conceding · 2018-11-26T12:33:46.275Z · LW · GW

I hope we haven't forgotten Stuck in the Middle With Bruce and Soares' Have No Excuses, which starts with a quote from Bonds That Make Us Free.

I think one reason people end up using a minimax strategy is that it's just easier to compute than EV-maximization.

But more importantly, it just feels like there's no downsides - it's free insurance!

If you want to have a convincing excuse however, you might actively impede your chances. (It might be possible to distance yourself from the excuse so the insurance is actually free, but I think this is unlikely/hard.)

If you've already hedged so hard that you've bet a lot against yourself, you might have sufficiently changed the payoffs to make losing rational, especially if you also add a penalty to Being Mediocre. This is why posts like this are needed.

Comment by strangepoop on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-30T07:39:30.115Z · LW · GW

Somehow this comment was really inspiring! I'm glad this exchange happened, so maybe I should upvote grandparent too? :P

BTW,

incredulity-as-attack

[not] acknowledging that your first concern had been addressed

We have terms for these! They are, respectively, stonewalling and logical rudeness.

I'm still split on how I feel about jargon, and of course it's good that you didn't use any here, but it does give the concepts you describe some legitimacy (for better or worse). Legitimacy helps especially in cases where such expressions are dismissed as over-reactions unique to you, and are thus assumed to be your responsibility to fix, by some implicit jargon-efficiency argument ("if this were a thing to be concerned about, we'd have a name for it!").

Comment by strangepoop on Simplicio and Sophisticus · 2018-07-22T21:53:52.909Z · LW · GW

"... natural science has shown a curious mixture of rationalism and irrationalism. Its prevalent tone of thought has been ardently rationalistic within its own borders, and dogmatically irrational beyond those borders. In practice such an attitude tends to become a dogmatic denial that there are any factors in the world not fully expressible in terms of its own primary notions devoid of further generalization. Such a denial is the self-denial of thought."

- A.N. Whitehead, Process and Reality

I can't really tell yet, but David Chapman's work seems to be trying to hint at this phenomenon all the time. See his How to Think Real Good, for example, even if you don't agree with his characterization of Bayesian rationality. There's also Fixation and Denial, where he goes into some failure modes when dealing with hard-to-fully-formalize things. Meta-rationality seems to be mostly about this, AFAICT.

I have to say, most of Chapman's stuff feels like pure lampshading, ie acknowledging that there is a problem and then simply moving on. I suppose he's building up to more practical advice.

If you're getting frustrated (I certainly am) that all anyone seems to be doing about this is offering loose and largely unhelpful tips, I think that's something Alan Perlis anticipated: "One can't proceed from the informal to the formal by formal means."

(of course, that's just another restatement of the fact that there is a problem.)

Comment by strangepoop on Osmosis learning: a crucial consideration for the craft · 2018-07-18T06:19:50.485Z · LW · GW

See also: "show, don't tell"/the iceberg theory in writing and the monad tutorial fallacy in functional programming. These are weakish evidence for the existence of this phenomenon, although they still reside in the lingual realm.

[posting a double comment because it is sufficiently different and the previous one is already too long]

Comment by strangepoop on Sleeping Beauty Resolved? · 2018-07-16T11:04:16.458Z · LW · GW

I'd say your reply is at least a little bit of logical rudeness, but I'll take the "Sure, ...".

I was pointing specifically at the flaw* in bringing Everett branches into the discussion at all, not at whether the context happened to be changing here.

I wouldn't really mind the logical rudeness (if it is so), except for the missed opportunity of engaging more fully with your fascinating comment! (see also *)

It's also nice to see that the followup to OP starts with a discussion of why it's a good/easy first rule to, like I said, just ban non-timeless propositions, even if we can eventually come up with a workable system that deals with them well.

(*) As noted in GP, it's still not clear to me that this is a flaw, only that I couldn't come up with anything in five minutes! Part of the reason I replied was in the hopes that you'd have a strong defense of "everettian-indexicals", because I'd never thought of it that way before!

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T17:58:29.799Z · LW · GW

What rossry said, but also, why do you expect to be "winning" all arms races here? Genes in other people may have led to the development of meme-hacks that you don't know about but that are actually giving someone else an edge in a zero-sum game.

In particular, they might call you fat or stupid or incompetent and you might end up believing it.

Comment by strangepoop on Mathematical Mindset · 2018-07-13T13:25:30.030Z · LW · GW

For mathematics is not about proofs; it is about definitions. The essence of great mathematics is coming up with a powerful definition that results in short proofs.

Or in software terms, coming up with a powerful and elegant exposed API/top level functions that don't require peeking into the abstraction (which would imply a "longer" "proof", following Curry-Howard).

Comment by strangepoop on An Agent is a Worldline in Tegmark V · 2018-07-13T13:11:35.049Z · LW · GW

I'm a little confused.

It seemed to me that the way Tegmark had put it, level IV is meta-closed: any consistent map of (even possibly eventually inconsistent) maps is still just a consistent map in level IV; it doesn't have to model any particular territory, it just has to be a mathematical "structure". Maybe you're saying that this is actually in level V and my view of level IV is too inclusive (but I think Tegmark would disagree with you, see esp. appendix A of his original paper), or maybe I missed your point altogether.

It's not even clear that there would be a notion of an "agent" in every level IV universe (in fact I'd say it's clear that this is NOT the case), so I think the idea of a worldline between them would not be well-defined. Nevertheless, I'm fine with non-standard uses of terms if it helps communicate the idea you have after you've clarified your usage of them (but don't canonize independently in an already nebulous territory!), but I'm having some trouble with that.

So can you clarify what you mean, particularly by level IV? (reasonably precise english is fine :P)

ETA: Okay, given your Mathematical Mindset post, I'm doubly fine with your redefinition, but I still want it :3

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T12:08:32.568Z · LW · GW

I suppose you mean the fallibility of memory. I think Garrabrant meant it tautologically though (ie, as the definition of "past").

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T11:54:17.605Z · LW · GW

I think the LW zeitgeist doesn't really engage with this.

Really? I feel quite the opposite, unless you're saying we could do still more. I think LW is actually one of the few communities that take this sort of non-dualism/naturalism in arriving at a probabilistic judgement (and all its meta levels) seriously. We were exposed to the fact that Newcomblike problems are everywhere a long time ago, and then again relatively recently with Simler's wonderful post on crony beliefs (and now, his even more delightful book with Hanson, of course).

ETA: I'm missing quite a few posts that were even older (Wei Dai's? Drescher's? yvain had something too IIRC), it'd be nice if someone else who does remember posted them here.

Comment by strangepoop on A Sarno-Hanson Synthesis · 2018-07-13T10:57:03.638Z · LW · GW

why private? :(

Comment by strangepoop on A Sarno-Hanson Synthesis · 2018-07-13T10:56:18.526Z · LW · GW

Related to the sleep example, because you didn't say exactly this, and it makes a stronger case:

I noticed some time ago that my misery when waking up was a negotiation tactic from when my parents would wake me up. They were nice enough to let me sleep a little longer if I looked sufficiently upset at being woken up. It became obvious recently that I was pattern-matching my alarm to a parent. How do I know this? Because I knew that if I started loudly singing a cheery tune with a smile on my face I'd automatically become less miserable, but I never did, because I didn't want to be less miserable, even though my parents weren't around anymore. I started doing it when I realized this, and it works pretty well!

(There was one more problem, of feeling like I'm manipulating myself, which seems at first to be at odds with building self-loyalty. I think this went away as I got more comfortable with the idea of sometimes "wanting to be manipulated" for my own success, of desiring less freedom (which would be sacrilege to my younger self). Reading about Kegan's model of adult development and experimenting with BDSM helped me get there somehow.)

One problem with applying this thesis (did I mention I wholeheartedly agree with it?) is that it's hard to refrain from inadvertently reinforcing such negotiation tactics when someone else looks miserable (like my parents did), ie ferberization is painful (not to mention patronizing when done to adults). I think it's possible to be honest about it with someone reasonable and smart enough to grasp the subtleties, and then, usually only after they're done having their episode, but there's no good solution to this AFAIK. Else we wouldn't have hard problems of income redistribution either - the problem of helping those who need it without inducing weakness/dependence.

BTW, is there an economic term for this specific problem?