Posts

The Problem of the Criterion is NOT an Open Problem 2022-01-06T16:31:09.183Z
The Map-Territory Distinction Creates Confusion 2022-01-04T15:49:58.964Z
Bayesian Dharani, Great Dharani for Conserving Evidence 2021-12-20T16:32:34.606Z
Seeking Truth Too Hard Can Keep You from Winning 2021-11-30T02:16:58.695Z
Why the Problem of the Criterion Matters 2021-10-30T20:44:00.143Z
Zen and Rationality: Equanimity 2021-08-16T16:51:02.116Z
Oh No My AI (Filk) 2021-06-11T15:05:48.733Z
What are the gears of gluten sensitivity? 2021-06-08T16:42:52.189Z
Zen and Rationality: Continuous Practice 2021-05-31T18:42:56.950Z
The Purpose of Purpose 2021-05-15T21:00:20.559Z
Yampolskiy on AI Risk Skepticism 2021-05-11T14:50:38.500Z
Identity in What You Are Not 2021-04-24T20:11:49.480Z
Forcing Yourself is Self Harm, or Don't Goodhart Yourself 2021-04-10T15:19:42.130Z
Forcing yourself to keep your identity small is self-harm 2021-04-03T14:03:06.469Z
How I Meditate 2021-03-08T03:34:21.612Z
Bootstrapped Alignment 2021-02-27T15:46:29.507Z
Fake Frameworks for Zen Meditation (Summary of Sekida's Zen Training) 2021-02-06T15:38:17.957Z
The Problem of the Criterion 2021-01-21T15:05:41.659Z
The Teleological Mechanism 2021-01-19T23:58:54.496Z
Zen and Rationality: Karma 2021-01-12T20:56:57.475Z
You are Dissociating (probably) 2021-01-04T14:37:02.207Z
A Model of Ontological Development 2020-12-31T01:55:58.654Z
Zen and Rationality: Skillful Means 2020-11-21T02:38:09.405Z
No Causation without Reification 2020-10-23T20:28:51.831Z
The whirlpool of reality 2020-09-27T02:36:34.276Z
Zen and Rationality: Just This Is It 2020-09-20T22:31:56.338Z
Zen and Rationality: Map and Territory 2020-09-12T00:45:40.323Z
How much can surgical masks help with wildfire smoke? 2020-08-21T15:46:12.914Z
Bayesiance (Filk) 2020-08-18T16:30:00.753Z
Zen and Rationality: Trust in Mind 2020-08-11T20:23:34.434Z
Zen and Rationality: Don't Know Mind 2020-08-06T04:33:54.192Z
Let Your Mind Be Not Fixed 2020-07-31T17:54:43.247Z
[Preprint] The Computational Limits of Deep Learning 2020-07-21T21:25:56.989Z
Comparing AI Alignment Approaches to Minimize False Positive Risk 2020-06-30T19:34:57.220Z
What are the high-level approaches to AI alignment? 2020-06-16T17:10:32.467Z
Pragmatism and Completeness 2020-06-12T16:34:57.691Z
The Mechanistic and Normative Structure of Agency 2020-05-18T16:03:35.485Z
What is the subjective experience of free will for agents? 2020-04-02T15:53:38.992Z
Deconfusing Human Values Research Agenda v1 2020-03-23T16:25:27.785Z
Robustness to fundamental uncertainty in AGI alignment 2020-03-03T23:35:30.283Z
Big Yellow Tractor (Filk) 2020-02-18T18:43:09.133Z
Artificial Intelligence, Values and Alignment 2020-01-30T19:48:59.002Z
Towards deconfusing values 2020-01-29T19:28:08.200Z
Normalization of Deviance 2020-01-02T22:58:41.716Z
What spiritual experiences have you had? 2019-12-27T03:41:26.130Z
Values, Valence, and Alignment 2019-12-05T21:06:33.103Z
Doxa, Episteme, and Gnosis Revisited 2019-11-20T19:35:39.204Z
The new dot com bubble is here: it’s called online advertising 2019-11-18T22:05:27.813Z
Fluid Decision Making 2019-11-18T18:39:57.878Z
Internalizing Existentialism 2019-11-18T18:37:18.606Z

Comments

Comment by G Gordon Worley III (gworley) on Implications of Civilizational Inadequacy (reviewing mazes/simulacra/etc) · 2022-01-24T05:59:35.357Z · LW · GW

Am roughly in middle management. Can confirm. Basically everyone around me, myself included, is trying to walk some line between taking enough responsibility to get results (the primary thing you're evaluated on) and not taking so much that you'll be in trouble if something goes south. Generally we don't want the pain to fall on ICs ("individual contributor" employees whose scope of responsibility is ultimately limited to their own labor, since they need sponsorship from someone else or a process to commit to big decisions) unless they messed up for reasons within their control.

I generally see the important split as who is responsible and who is accountable. Responsible means here something like "who has to do the work" and accountable means something like "who made the decision and thus gets the credit or blame". ICs do well when they do a good job doing whatever they were told to do, even if it's the wrong thing. Management-types do well when the outcomes generate whatever we think is good, usually whatever we believe is driving shareholder value or some proxy of it. ICs get in trouble when they are inefficient, make a lot of mistakes, or otherwise produce low quality work. Management-types are in trouble when they make the wrong call and do something that produces neutral or negative value for something the company is measuring.

Basically I think all the maze stuff is just what happens when middle management manages to wirehead the organization so that we're no longer held accountable for mistakes. I've not actually seen many serious mazes in my own career because I've mostly worked for startups of various sizes, and in startups there's enough pressure from the executives on down to hold people accountable. I think it's only if the executives get on board with duping the board and shareholders so they can wirehead that things fall apart.

Comment by G Gordon Worley III (gworley) on Trying to Keep the Garden Well · 2022-01-16T17:48:27.940Z · LW · GW

This is why we can't have nice things.

Comment by G Gordon Worley III (gworley) on Zen and Rationality: Just This Is It · 2022-01-13T21:52:12.450Z · LW · GW

I still really like this post, and rereading it I'm surprised how well it captures points I'm still trying to push because I see a lot of people out there not quite getting them, especially by mixing up models and reality in creative ways. I had not yet written much about the problem of the criterion at this time, for example, yet it carries all the threads I continue to think are important today. Still recommend reading this post and endorse what it says.

Comment by G Gordon Worley III (gworley) on Paper claims: "Rationality" flavored words rose since 1850, began declining ~1980 · 2022-01-13T04:28:23.616Z · LW · GW

Also skeptical. I think this is tracking something more like the rise and fall of high modernism than the rise and fall of good epistemics.

Comment by G Gordon Worley III (gworley) on Value extrapolation partially resolves symbol grounding · 2022-01-13T04:26:02.476Z · LW · GW

This doesn't really seem like solving symbol grounding, partially or not, so much as an argument that it's a non-problem for the purposes of value alignment.

Comment by G Gordon Worley III (gworley) on Has anyone had weird experiences with Alcor? · 2022-01-11T23:46:08.087Z · LW · GW

+1 on the wording likely being because Alcor has dealt with resistant families a lot, and generally you stand a better chance of being preserved if Alcor has as much legal authority as possible to make that happen. You may have to explain that you're okay with your wife potentially doing something that would have been against your wishes (yes, I realize you don't expect that, but there's a more-than-0% chance it will happen) and that would result in no preservation in a case where Alcor thinks you would have wanted one.

This is actually why I went with Alcor: they have a long record of going to court to fight for patients in the face of families trying to do something else.

Comment by G Gordon Worley III (gworley) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2022-01-11T06:55:09.678Z · LW · GW

This is a great point. Drinking plenty of water before sleep doesn't guarantee no hangover, but I find it works like 80% of the time since I think some hangover effects are just dehydration.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-11T01:25:58.607Z · LW · GW

The point is kinda that you can take it to be a hypothesis and have it approach 100% likelihood. That's not possible if that hypothesis is instead assumed to be true. I mean, you might still run the calculations, but they just don't matter, since you couldn't change your mind in such a situation even if you wanted to.

I think the baked-in absurdity of that last statement (since people do in fact reject assumptions) points at why I think there's actually no contradiction in my statements. It's both true that I don't have access to the "real" God's eye view and that I can reconstruct one but will never be able to be 100% sure that I have. Thus I mean to be descriptive of how we find reality: we don't have access to anything other than our own experience, and yet we're able to infer lots of stuff. I'm just trying to be especially careful not to ground anything prior in the chain of epistemic reasoning on something inferred downstream, and that means not being able to predicate certain kinds of knowledge on the existence of an objective reality, because I need those things to get to the point of being able to infer the existence of an objective reality.
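To make the hypothesis-versus-assumption distinction concrete, here's a minimal sketch (all numbers invented for illustration) of what treating "there is an external reality" as a hypothesis looks like: the posterior climbs as high as the evidence pushes it, but it never becomes unrevisable the way an assumption is.

```python
# Toy Bayesian updating on H = "there is an external reality".
# The prior and likelihoods are made up purely for illustration.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One Bayes update: returns P(H | E)."""
    joint_h = p_e_given_h * prior
    return joint_h / (joint_h + p_e_given_not_h * (1 - prior))

p = 0.5  # start uncommitted
for step in range(1, 21):
    # stipulate that each new experience is twice as likely under H
    p = update(p, p_e_given_h=0.8, p_e_given_not_h=0.4)
    print(f"after experience {step}: P(H) = {p:.6f}")

# P(H) approaches 1 but never reaches it: the hypothesis stays revisable,
# which is exactly what a proposition assumed true can't be.
```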

Comment by G Gordon Worley III (gworley) on Activated Charcoal for Hangover Prevention: Way more than you wanted to know · 2022-01-11T00:52:10.645Z · LW · GW

Really appreciate the relative methanol calculations, and they match my own experience: generally no hangovers if I drink vodka drinks, mild hangovers from beer and other liquors, terrible hangovers from red wine.

Comment by G Gordon Worley III (gworley) on No Abstraction Without a Goal · 2022-01-11T00:43:42.141Z · LW · GW

I think this is exactly right, because without some goal, purpose, or even just a norm to be applied, there's nothing to power knowing anything, since knowing is, at its ground, about picking and choosing what goes into which category in order to carve the world up into things.

Comment by G Gordon Worley III (gworley) on Signaling isn't about signaling, it's about Goodhart · 2022-01-08T04:54:31.575Z · LW · GW

I think a lot of the reason people are desperate to signal is because they are desperate: they need something to happen to feel safe, secure, fulfilled, or whatever and so they greedily grasp for it. Not doing that requires being sufficiently content with the world being as it is to not try to force it to be a particular way, but getting to a place where one has that kind of security is quite hard.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-08T03:32:58.698Z · LW · GW

B

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-08T03:31:41.372Z · LW · GW

If we give up any assumption that there's an external reality and try to reason purely from our experience, then in what sense can there be any difference between "the snow is white" and "the snow looks white to me"? This is, in part, what I'm trying to get at in the post: the map-territory metaphor creates this kind of confusing situation where it looks an awful lot like there's something like a reality where, independent of any observer, it could have some meaning for snow to be white. Part of the point of the post is that this is nonsense: there must always be some observer, they decide what is white and not, and so the truth of snow being white is entirely contingent on the experience of this observer. Since everything we know is parsed through the lens of experience, we have no way to ground truth in anything else, so we cannot preclude the possibility that we only think snow is white because of how our visual system works. In fact, it's quite likely this is so, and we could easily construct aliens who would either disagree or at least be unable to make sense of what "snow is white" would mean, since they would lack something like a concept of "white" or "snow" and thus be unable to parse the proposition.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-07T04:24:49.647Z · LW · GW

Perhaps it seems like I'm not really defending #1 because it still all has to add up to normality, so it's not like I'm going to go around claiming an objective universe is total nonsense except in a fairly technical sense; in an everyday sense I'm going to act not much differently than a person claiming there definitely is an objective reality, because I've still got to respond to the conditions I find myself in.

From a pragmatic perspective, most of the time it doesn't matter what you believe so long as you get the right outcome, and that holds over a surprisingly large space where it can be hard to find the places where things break down. Mostly they break down when you try to justify how things are grounded without stopping when it's practical, instead going until you can't go anymore. Those are the kinds of places where rejecting #3 (except as something contingent) and accepting something more like #1 starts to make sense, because you end up getting underneath the processes that were used to justify the belief in #3.

Comment by G Gordon Worley III (gworley) on The Problem of the Criterion is NOT an Open Problem · 2022-01-07T04:14:01.990Z · LW · GW

I hope they are okay with Neurath's boat, because that seems to be the world we live in. That is, the problem of the criterion shows us there is no solid foundation because we are born into contingency.

There's certainly folks who wanted a typed theory of truth (logical positivists) and there are folks who refashion truth in the image of referential integrity (coherentists), but even for them it either means giving up completeness to get a typed theory or giving up objectivity since truth must be coherent with our subjective experience. So circularity isn't really a sign of failure, it's just how it is, and truth isn't really about truth, it's about signaling (just kidding, it's about what we care about).

Comment by G Gordon Worley III (gworley) on The Problem of the Criterion is NOT an Open Problem · 2022-01-06T17:50:49.736Z · LW · GW

I'm not aware of anything quite so rigorous beyond what we might call the "philosophical math" of using words in a precise way to evaluate doxastic logic. Maybe this is enough, but it does feel like we should at least write it down somewhere in formal notation to make sure there are no gaps.

Comment by G Gordon Worley III (gworley) on You can't understand human agency without understanding amoeba agency · 2022-01-06T16:17:10.959Z · LW · GW

Seems right to me. For example, I think by most natural notions of "agency" that don't introduce anything crazy, we should probably think of thermostats as agents since they go about making the world different based on inputs. But such deflationary notions of agency seem deeply uncomfortable to a lot of people because they violate the very human-centric notion that lots of simple things don't have "real" agency because we understand their mechanism, whereas things with agency seem to be complex in a way that we can't easily understand how they work.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-06T16:09:29.799Z · LW · GW

B: "Okay, cool, but that's information you constructed from within our universe and so is contingent on the process you used to construct it thus it's not actually a God's eye view but an inference of one. Thus you should be very careful what you do with that information because if you start to use it as the basis for your reasoning you're now making everything contingent on it and thus necessarily more likely to be mistaken in some way that will bite you in the ass at the limits even if it's fine 99.99% of the time. And since I happen to know you care about AGI alignment and AGI alignment is in large part about getting things right at the extreme limits, you should probably think hard about if you're not seeing yourself up to be inadvertently turned into a paperclip."

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-06T04:18:25.383Z · LW · GW

So to return to your GoL example, it only works because you exist outside the universe. If you were inside that GoL, you wouldn't be able to construct such a view (at least based on the normal rules of GoL). I see this as exactly analogous to the case we find ourselves in: what we know about physics seems to imply that we couldn't hope to gather enough information to ever successfully construct a God's eye view.
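To make the GoL point concrete, here's a minimal sketch of the standard Conway rules (the finite grid and the blinker seed are my own illustrative choices): computing the next global state means reading every cell at once, a move available only to something outside the grid.

```python
# Conway's Game of Life on a small finite grid (cells off the edge count as
# dead). Note that step() reads the entire grid at once: a God's eye view
# that no pattern living inside the grid has access to.

def step(grid: list[list[int]]) -> list[list[int]]:
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r: int, c: int) -> int:
        return sum(grid[rr][cc]
                   for rr in range(r - 1, r + 2)
                   for cc in range(c - 1, c + 2)
                   if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols)

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            # a live cell survives with 2 or 3 neighbors; a dead cell is born with 3
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A blinker oscillator, viewed from outside the universe:
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Anything inside the grid only ever touches its eight neighbors, one tick at a time; nothing inside can call step() on the whole board.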

This is why I make a claim more like your #1 (though, yes, #2 is obviously the right thing here because nothing is 100% certain): a God's eye view is basically nonsense that our minds just happen to be able to imagine is possible, because we can infer what it would be like if such a thing could exist from the sample set of our experience. But the logic of it seems to be that it just isn't a sensible thing we could ever know about except by hypothesizing its possible existence, putting it on par with thinking about things outside our Hubble volume, for example.

I'm suspicious that someone could endorse #3 and not get confused reasoning about embedded agency, because I'd expect either that assuming #3 causes you to get confused thinking about the embedded agency situation (getting tripped up on questions like "why can't we just do thing X that endorsing #3 allows?") or that thinking about embedded agency hard enough forces you to break down the things that make you endorse #3, after which you would come to no longer endorse it. (My claim here is backed in part by the fact that I and others have basically gone down this path before one way or another, previously having assumed something like #3 and then having to unassume it because it got in the way and was inconsistent with the rest of our thinking.)

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T20:06:46.313Z · LW · GW

Actually, it's totally fine if they're hallucinations. Maybe other people are? Regardless, since these hallucinations seem to act in lawful ways that determine what else happens in my experience, it doesn't really matter much whether they are "real" or not, and so long as reports of others' experiences are causally intertwined with our experience we should care about them just the same.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T20:04:39.932Z · LW · GW

The idea of a "view from nowhere" is basically the idea that there exists some objective, non-observer-based perspective of the world. This is also sometimes called a God's eye view of the world.

However such a thing does not exist except to the extent we infer things we expect to be true independent of observer conditions.

Yes, embedded agency is quite connected to all this. Basically I view embedded agency as a way of thinking about AI that avoids many of the classical pitfalls of non-subjective models of the world. The tricky thing is that for many toy models, like chess or even most AI training today, the world is constrained enough such that we can have a view from nowhere onto the artificially constrained world, but we can't get this same thing onto the universe because, to extend the analogy from above a bit, we are like chess or go pieces on the board and can only see the board from our place on it, not above it.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T15:41:43.797Z · LW · GW

Yes. Let's take the case of building safe AGI, because that's actually why I ever ended up caring so much about all this stuff (modulo some risk that I would have cared about it anyway and my reason for caring is not actually strongly dependent on the path I took to caring).

In my posts on formal alignment I start from a stance of transcendental idealism. I wouldn't necessarily endorse transcendental idealism and all the cruft surrounding it today, but I think it gets at the right idea: basically assume an arealist stance and assume that anything you're going to figure out about the world is ultimately subjective. This was quite useful, for example in the first post of that sequence, because it clears up any possibility of confusion that we can base alignment on some fundamental feature of the universe. Although I hadn't worked it all out at the time of that post, this ultimately led me to realize that, for example, any sort of alignment we might achieve depends on choosing norms to align to, and the source of those norms must be human axiology.

None of this was totally novel at the time, but what was novel was having a philosophical argument for why it must be so, rather than a hand-wavy argument that permitted the idea that other approaches might be workable.

I started out just being your typical LW-style rationalist: sure, let's very strongly assume there's external reality to the point we just take for granted that it exists, no big deal. But this can get you into trouble because it's very easy to jump from "there's very likely external reality" to "there's a view from nowhere in that external reality" and get mixed up about all kinds of stuff. Not having a firm grounding in how subjective everything really is made it hard to make progress without constantly getting tripped up in stupid ways, basically in ways equivalent to thinking "if we just make AGI smart enough it'll align itself". So after I got really tangled up by my own thoughts, I realized the problem was that I was making these strong assumptions that I shouldn't be taking for granted. When I stopped doing that and stopped trying to justify things in terms of those beliefs things got a lot easier to think about.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T02:16:29.299Z · LW · GW

Yes, different tools are useful for different purposes, but sometimes trying to extract a general theory is also quite useful, since tools can get you confused when they are applied outside their domain of function.

Cf. Toolbox Thinking and Law Thinking

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T02:12:49.807Z · LW · GW

What's the type signature of a proposition here?

experience -> experience
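To unpack that signature a little (the encoding below is my own toy illustration, not anything formal): a proposition on this view is a function from experience already had to experience anticipated, rather than a sentence standing in a correspondence relation to a territory.

```python
from typing import Callable

# Purely illustrative placeholder for however experience gets encoded.
Experience = dict

# A proposition maps experience already had to experience anticipated.
Proposition = Callable[[Experience], Experience]

# Toy example: "snow is white" read as a predictor.
def snow_is_white(seen_so_far: Experience) -> Experience:
    return {**seen_so_far,
            "anticipated": "a white visual field when looking at snow"}
```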

  • Can there be multiple incompatible propositions that predict the same experiences, and how does your approach deal with them? In particular, what if they only predict the same experiences within some range of observation, but diverge outside of that? What if you can't get outside, or don't get outside, of the range?

That seems fine. Consistency is often useful, but not always. Sometimes completeness is better at the expense of consistency.

  • How does it deal with things like collider bias? If Nassim Taleb filters for people with high g factor (due to job + interests) and for people who understand long tails (due to his strong opinions on long tails), his experience might become that there is a negative correlation between intelligence and understanding long tails. Would it then be "true" "for him" that there's a tradeoff between g and understanding long tails, even if g is positively correlated with understanding long tails in more representative experiences?

Since experience is subjective and I'm implicitly talking about subjective probability here (this is LessWrong; no frequentists allowed 😛), of course truth becomes subjective. But "subjective" is kind of meaningless anyway, because there's no such thing as objectivity except insofar as we infer that some things are so common among what we classify in our experience as reports of others' experience that maybe there's some stuff out there that is the same for all of us.
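For what it's worth, the filtering effect in the question is easy to reproduce. A quick simulation (all numbers invented) of selecting on a collider:

```python
import random

random.seed(0)

# (g, tail-understanding) pairs; independent here, so population correlation ~0.
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

# Taleb-style filter: only interact with people high on at least one trait.
selected = [(g, t) for g, t in population if g > 1 or t > 1]

def corr(pairs: list[tuple[float, float]]) -> float:
    n = len(pairs)
    mg = sum(g for g, _ in pairs) / n
    mt = sum(t for _, t in pairs) / n
    cov = sum((g - mg) * (t - mt) for g, t in pairs) / n
    vg = sum((g - mg) ** 2 for g, _ in pairs) / n
    vt = sum((t - mt) ** 2 for _, t in pairs) / n
    return cov / (vg * vt) ** 0.5

print(corr(population))  # roughly 0
print(corr(selected))    # clearly negative: Berkson's paradox
```

And that's the sense in which it would be "true for him": the experience he actually has really does contain a negative correlation, even though the unfiltered world doesn't.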

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T02:05:41.062Z · LW · GW

Yeah, privileging prediction doesn't really solve anything. This post is meant to be a bit of a bridge towards a viewpoint that resolves this issue by dropping the privileging of any particular concern, but getting there requires first seeing that a very firm notion of truth based on the assumption of something external can be weakened to a more fluid notion based only on what is experienced (since my bold claim is that this is how the world is anyway, and we're just confused when we make metaphysical claims otherwise).

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T02:02:01.367Z · LW · GW

My reply here feels weird to me because I think you basically get my point, but you're one inferential gap away from my perspective. I'll see if I can close that gap.

It's true that the only way we get any glimpse at the territory is through our sensory experiences. However, the map that we build in response to this experience carries information that sets a lower bound on the causal complexity of the territory that generates it.

We need not assume there is anything more than experience, though. Through experience we might infer the existence of some external reality that sense data is about (this is a realist perspective), and as you say this gives us evidence that perhaps the world really does have some structure external to our experience, but we need not assume it to be so.

This is perhaps a somewhat subtle distinction, but the point is to shift as much as possible from assumption to inference. If we take an arealist stance and do not assume realism, we may still come to infer it based on the evidence we collect. This is, arguably, better even if most of the time it doesn't produce different results, because now everything about external reality in our thinking exists firmly within our minds rather than outside of them, where we could say nothing about it, and now we can make physical claims about the possibility of an external reality rather than metaphysical assumptions about an external reality.

Comment by G Gordon Worley III (gworley) on The Map-Territory Distinction Creates Confusion · 2022-01-05T01:53:07.803Z · LW · GW

Thanks for your reply here, Val! I'll just add the following:

There's a somewhat technical argument that predictions are not the kind of thing classically pointed at by a correspondence theory of truth; such theories instead tend to be about setting up a structured relationship between propositions and reality and having some firm ground by which to judge the quality of the relationship. So in that sense subjective probability doesn't really meet the standard of what is normally expected from a correspondence theory of truth, since it generally requires, explicitly or implicitly, the possibility of a view from nowhere.

That said, it's a fair point that we're still talking about how some part of the world relates to another, so it kinda looks like truth as predictive power is a correspondence theory. However, since we've cut out metaphysical assumptions, there's nothing for these predictions (something we experience) to relate to other than more experience, so at best we have things corresponding to themselves, which breaks down the whole idea of how a correspondence theory of truth is supposed to work (namely, that there's some ground or source, the territory, that we can compare against). A predictive theory of truth is predictions all the way down to unjustified hyperpriors.

I don't get into this above, but this is why I think "truth" in itself is not that interesting; "usefulness to a purpose" is much more in line with how reasoning actually works, and truth is a kind of usefulness to a purpose. My case above is a small claim that accurate prediction does a relatively good job of describing what people mean when they point at truth, grounded in the most parsimonious story I know to tell about how we think.
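If it helps to see "accurate prediction" as a mechanism rather than a slogan, here's a minimal sketch (toy data and encoding, invented for illustration): beliefs are scored only against further experience, and no term ever references a territory directly.

```python
import math

# 1 = "snow looked white this time", 0 = it didn't. Toy data.
experiences = [1, 1, 0, 1, 1, 1, 0, 1]

def evaluate(p_white: float) -> float:
    """Log score of a belief against experience; higher is better."""
    return sum(math.log(p_white if e else 1 - p_white) for e in experiences)

print(evaluate(0.75))  # about -4.50: the better predictor...
print(evaluate(0.50))  # about -5.55: ...outscores the worse one
```

Nothing in the scoring rule compares a belief to "the territory"; it only compares predictions to more experience, all the way down.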

Comment by G Gordon Worley III (gworley) on An Observation of Vavilov Day · 2022-01-04T00:33:54.599Z · LW · GW

A bit of advice from my own experience with fasting.

Hunger is strongest roughly between hours 6 and 12 after you stop eating. There's some variation here, but my vague understanding is this has to do with something about blood sugar and insulin response cycles.

Generally once I make it to 12 hours the rest is smooth sailing, at least in so far as hunger becomes a background annoyance rather than a strong urge.

At around 20 hours or so I find mental changes start to happen due to what I presume is low blood sugar. It's nothing insurmountable, just something like mental fuzziness that makes it hard to do mental activities at the limit of my ability.

I'm not sure what happens past 24 hours; never gone longer than that.

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-03T23:55:01.704Z · LW · GW

Causation is a feature of models, not reality. We need only suppose reality is one thing after another (or not even that! reality is just this moment, which for us contains a sensation we call a memory of past moments), and any causal structure is inferred to exist rather than something we directly observe. I make this argument in some detail here: https://www.lesswrong.com/posts/RMBMf85gGYytvYGBv/no-causation-without-reification

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-03T16:39:51.651Z · LW · GW

Agreed. That said, I don't think counterfactuals are in the territory. I think I said before that they were in the map, although I'm now leaning away from that characterisation as I feel that they are more of a fundamental category that we use to draw the map.

Yes, I think there is something interesting going on where human brains seem to operate in a way that makes counterfactuals natural. I actually don't think there's anything special about counterfactuals, though, just that the human brain is designed such that thoughts are not strongly tethered to sensory input vs. "memory" (internally generated experience). But that's perhaps only subtly different from saying that counterfactuals, rather than something powering them, are a fundamental feature of how our minds work.

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-03T00:37:05.376Z · LW · GW

I don't think they're really at odds. Zack's analysis cuts off at a point where the circularity exists below it. There's still the standard epistemic circularity that exists whenever you try to ground out any proposition, counterfactual or not, but there's a level of abstraction where you can remove the seeming circularity by shoving it lower or deeper into the reduction of the proposition towards grounding out in some experience.

Another way to put this is that we can choose what to be pragmatic about. Zack's analysis chooses to be pragmatic about counterfactuals at the level of making decisions, and this allows removing the circularity up to the purpose of making a decision. If we want to be pragmatic about, say, accurately predicting what we will observe about the world, then there's still some weird circularity in counterfactuals to be addressed if we try to ask questions like "why these counterfactuals rather than others?" or "why can we formulate counterfactuals at all?".

Also I guess I should be clear that there's no circularity outside the map. Circularity is entirely a feature of our models of reality rather than reality itself. That's why, for example, the analysis of epistemic circularity I offer grounds things out in purpose: the circularity was actually an illusion created by trying to ground truth in itself rather than in experience.

I'm not sure I've made this point very clearly elsewhere before, so sorry if that's a bit confusing. The point is that circularity is a feature of the relative rather than the absolute, so circularity exists in the map but not the territory. We only get circularity by introducing abstractions that can allow things in the map to depend on each other rather than the territory.

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-02T05:27:54.426Z · LW · GW

I think this is just agreement then? That minds are influenced, in similar ways, by the structure of the universe they operate in sounds like exactly what we should expect. That doesn't mean we need to elevate such convergence to be something more than intersubjective agreement about reality.

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-02T05:24:53.567Z · LW · GW

I think A is solved, though I wouldn't exactly phrase it like that, more like counterfactuals make sense because they are what they are and knowledge works the way it does.

Zack seems to be making a claim to B, but I'm not expert enough in decision theory to say much about it.

Comment by G Gordon Worley III (gworley) on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-01T18:58:38.577Z · LW · GW

I mostly agree with Zack_M_Davis that this is a solved problem, although rather than talking about a formalization of causality I'd say this is a special case of epistemic circularity and thus an instance of the problem of the criterion. There's nothing unusual going on with counterfactuals other than that people sometimes get confused about what propositions are (e.g. they believe propositions have some sort of absolute truth beyond causality because they fail to realize epistemology is grounded in purpose rather than something eternal and external to the physical world) and then go on to get mixed up into thinking that something special must be going on with counterfactuals due to their confusion about propositions in general.

I don't know if I'll personally get around to explaining this in more detail, but I think this is low hanging fruit since it falls out so readily from understanding the contingency of epistemology caused by the problem of the criterion.

Comment by G Gordon Worley III (gworley) on Bayesian Dharani, Great Dharani for Conserving Evidence · 2021-12-21T05:56:59.876Z · LW · GW

I feel like this really misses the point of what a dharani is and why something like this might be something to chant, by which I mean you get what's literally going on but are missing the big picture by taking it too literally. I feel a bit like I'm presenting an impressionist painting and your complaint is that it's not realistic enough, which is true, but also not the point.

Maybe it would help to know that we think early dharanis just had a clear meaning, like "please make it so bad stuff doesn't happen", and this got lost over time, so in making a new one I start from the literal but in a way that allows it to morph over time to become sounds for an intention. Also compare the two rationalist litanies to similar religious ones. This is meant to have the same energy, but a different form.

Comment by G Gordon Worley III (gworley) on Bayesian Dharani, Great Dharani for Conserving Evidence · 2021-12-20T20:20:42.470Z · LW · GW

This seems to prove too much. You seem to have a fully general argument here against using words.

Comment by G Gordon Worley III (gworley) on Where can one learn deep intuitions about information theory? · 2021-12-18T03:33:49.193Z · LW · GW

Okay, gotta punch up my recommendation a little bit.

About 10 years ago I moved houses and, thanks to the growing popularity of fancy ebooks, I decided to divest myself of most of my library. I donated 100s of books that weighed 100s of pounds and ate up 10s of boxes. I kept only a small set of books, small enough to fit in a single box and taking up only about half a shelf.

An Introduction to Information Theory made the cut and I still have my copy today, happily sitting on a shelf next to me as I type. It's that good and that important.

Comment by G Gordon Worley III (gworley) on Where can one learn deep intuitions about information theory? · 2021-12-16T20:03:33.557Z · LW · GW

I highly recommend An Introduction to Information Theory: Symbols, Signals and Noise by John R. Pierce. Several things that are great about this book:

  • it's short and concise, like many Dover books
  • focuses on helping you build models with gears about how information and related concepts work
  • has a bit of a bias towards talking about signals, like over a phone line, but this is actually really helpful because it's the original application of most of the information theory metaphors
  • dives into thermodynamics without getting bogged down in a lot of calculations you'll probably never do

I think you'll also appreciate that it is self-aware that information theory is a big deal and a deep concept that explains a lot of stuff, and it includes chapters on some of that far-reaching stuff: the end of the book covers cybernetics, psychology, and art.

This was one of the books that had the most impact on me and how I think and I basically can't recommend it highly enough.

Comment by G Gordon Worley III (gworley) on An Open Letter to the Monastic Academy and community members · 2021-12-14T17:17:38.741Z · LW · GW

Agree. I've gotten some communication about my top-level comment on this post, with regard to there being some additional context around this, and would like to see a more detailed, multiparty account of what happened in order to better assess and decide how to respond.

Comment by G Gordon Worley III (gworley) on An Open Letter to the Monastic Academy and community members · 2021-12-14T03:39:27.047Z · LW · GW

Thanks for sharing. I've not been 100% on top of what's been happening with MAPLE et al., but this makes me quite worried given you seem to be familiar with what high-intensity spiritual training should look like. I'm generally willing to give teachers with charismatic authority the benefit of the doubt (after all, Shakyamuni had only charismatic authority if you don't believe the stories about his past lives), but this seems like clear evidence that Soryu may not be sufficiently skilled to do what he's doing, and that carries through to those he's trained to lead.

I've previously been somewhat willing to defend MAPLE as a high-intensity monastic experience, and as such I've previously reasoned that many of people's concerns about it were due to unfamiliarity with that type of environment, but this seems like clear evidence that even given that framing something wrong is going on.

Comment by G Gordon Worley III (gworley) on Privacy and Manipulation · 2021-12-06T03:30:01.965Z · LW · GW

I don't know how it works for therapists, but I know a bit about the priest situation.

Catholics take the relationship pretty seriously. Priests are supposed to give up their own life rather than violate the confidentiality of confession.

Civil law largely takes its lead from the Catholic church by carving out an exception so Catholic priests aren't forced to constantly refuse to testify and find themselves in contempt of court.

However the exception is not complete. In various localities there are carve outs for mandatory reporting about things like child abuse. Catholic priests are supposed to keep the secret anyway and go to jail for violating the law if they refuse to testify.

Other religions are less strict. For example, in my experience with Zen, dokusan ("going alone to the teacher", i.e. a private conversation between teacher and student about practice) is legally protected the same way confession is. Within our tradition there's a more standard assumption of privacy, with a somewhat reasonable expectation that things might be shared among other teachers via your teacher asking for advice or reporting dangerous things to authorities, but it's not absolute like in Catholicism.

However it's very easy to break the confidentiality rules of confession-like situations. In particular, my understanding is that if a person ever talks about what was discussed there at all to anyone else, they forfeit the right to confidentiality altogether under the law, and the priest/etc. can be compelled to testify (civilly, anyway; Catholic priests are still not supposed to, as I understand it, and can be excommunicated if they do).

In the end the choice the Catholic church makes is an absolute one based on being able to grant a religious sacrament. Others make somewhat less absolute guarantees of confidentiality that nonetheless are enough to enable someone to speak openly in ways that they wouldn't without such protection, but not in such an absolute way that reasonable harm cannot be avoided, as in the Catholic situation.

Comment by G Gordon Worley III (gworley) on Can solipsism be disproven? · 2021-12-04T23:08:18.344Z · LW · GW

One of the challenges with solipsism, or any claim about metaphysics, is that we lack access to evidence that could settle disputes about it (once we steelman what we mean so that it's not obviously false). In this sense, it's something we can imagine but cannot justify. The same goes for alternatives to solipsism, like physicalism.

So there's not much to say here. You can't prove or disprove solipsism by our normal epistemological methods because it hinges on information we can't have.

Comment by G Gordon Worley III (gworley) on Seeking Truth Too Hard Can Keep You from Winning · 2021-11-30T15:52:36.987Z · LW · GW

You seem to have anticipated this response. The definition you begin with—truth as "accurate predictions about our experiences"—is fairly narrow. One could respond that what you identify here are the effects of truth (presumably? but maybe not necessarily), while truth is whatever knowledge enables us to make these predictions. In any case, it doesn't seem self-evident that truth is necessarily concerned with making predictions, and I wonder how much of the argument hinges upon this strict premise. How would it alter if etc.

Not much. You could choose some other kind of truth definition if you like. My goal was to use a deflationary definition of truth in order to avoid stumbling into philosophical minefields, and because I'm not committed to metaphysical realism myself, so I'd be dishonest if I used such a definition.

Relatedly, you say that when we seek truth, "we want to know things that tell us what we’ll find as we experience the world." Rather than primarily aiming to predict in advance what we'll find, might we instead aim to know the things that enable us to understand whatever we actually do find, regardless of whether we expected it (or whether it is as we predicted it would be)? Maybe this knowledge amounts to the same thing in the end. I don't know.

I'd say that amounts to the same thing. There's some links in the post relevant to the case for this about Bayesianism and the predictive processing model of the brain.

You refer to the thing outside of truth that grounds the quest for it as purpose. Would belief or faith be an acceptable substitute here?

Maybe. "Purpose" is here a stand-in term for a whole category of things like what Heidegger called Sorge. Although not necessarily exhaustive, I wrote a post about this topic. I could see certain notions of belief and faith fitting in here.

It would seem that [desire for] knowledge of truth already encompasses or takes into account the existence of non-truth-seeking agents and the knowledge requisite to accurately modeling them.

As I think I addressed a couple points up: yes, and humans are, as implemented, formed such that this is insufficient.

Given your statement in the antepenultimate paragraph—"the reality is that you are not yourself actually a truth-seeking-agent, no matter how much you want it to be so"—this piece ultimately appears to be a reflection on self-knowledge. By encouraging the rigidly truth-obsessed dork to more accurately model non-truth-seeking agents, you are in fact encouraging him to more accurately model himself. So again, the desire for truth (as self-knowledge, or the truth about oneself) still guides the endeavor. (This was the best paragraph in the piece, I think.)

Seeking truth starts at home, so to speak. :-)

Comment by G Gordon Worley III (gworley) on Seeking Truth Too Hard Can Keep You from Winning · 2021-11-30T15:39:53.284Z · LW · GW

In theory, yes. In practice this tends to be impractical because of the amount of effort required to think through how other people think in a deliberate way that accurately models them. Most people who succeed in modeling others well seem to do it by having implicit models that are able to model them quickly.

I think the point is that people are complex systems that are too complex to model well if you try to do it in a deliberate, system-2 sort of way. Even if you eventually succeed in modeling them, you'll likely get your answer about what to do way too late to be useful. The limitations of our brains force us to do something else (heck, the limitations of physics seem to force this, since idealized Solomonoff inductors run into similar computability problems, cf. AIXI).

Comment by G Gordon Worley III (gworley) on Why do you need the story? · 2021-11-25T03:09:23.686Z · LW · GW

This points at something I find very hard to work against: a desire to explain why things are the way they are rather than just accept that they are the way they are. Explanations are useful, but things will still be as they are even if I have no explanation for why. Yet when I find something in the world, there's a movement of mind that quickly follows observation and seeks to create a gears-level model of the world. On the one hand, such models are useful. On the other, a desire to explain in the absence of any information to build on is worse than useless: it's the path to confusion.

Comment by G Gordon Worley III (gworley) on Integrating Three Models of (Human) Cognition · 2021-11-24T23:33:51.641Z · LW · GW

Thanks for this thorough summary. At this point the content has become spread over a book's worth of posts, so it's handy to have this high-level, if long, summary!

Comment by G Gordon Worley III (gworley) on Why I am no longer driven · 2021-11-18T04:57:37.373Z · LW · GW

I advised no such thing, notice the /s at the end.

I guess I'm not cool enough to know what that means. Just looks like a typo to me. 🤷

If anecdotal evidence was the standard to be judged by, alternative medicine would be bloody miracle cures - plenty of patients swear it works. And in the absence of empirical data, it's your anecdotal against my anecdotal evidence. I had no intention of being charitable as I think it's a complete snake-oil industry, Tai Lopez & Co. just made it ridiculously obvious in recent years imo. It doesn't even require practitioners to be consciously ill-intentioned.

You make a claim that "[m]otivational videos, speeches and self-help books are essentially modern forms of letters of indulgence", and seem to back it up by saying that there are folks whose experience of self-help is that it just makes you feel good and takes your money without offering anything in return. But this is just opinion and conjecture. The strongest evidence you offer is an example of "Tai Lopez & Co.", who I'm not familiar with, that you say "made it ridiculously obvious in recent years [that it's complete snake-oil]".

Anecdotal evidence is not necessarily the standard to judge by, but anecdotal evidence is sufficient to suggest we cannot dismiss something out of hand. To your point about alternative medicine: that some people find things work means it's worthy of study, not that it can simply be dismissed. And sometimes what looks like alternative medicine turns out to be real medicine, or just inefficient medicine (for example, people eating molds containing antibiotics, or drinking willow bark tea rather than taking aspirin).

It's fine to have your opinion that self help and motivational videos are not helpful, but my claim is that you're not taking seriously the case for things like self help that lots of people think work, including lots of people on this site, and this lack of charity seems to be resulting in a failure to even consider evidence (which to be fair I'm not providing to you, but your position seems to be rejecting even a willingness to consider the possibility that self-help might work, which means you seem to have already written the bottom line.)

Comment by G Gordon Worley III (gworley) on Why I am no longer driven · 2021-11-17T15:36:09.740Z · LW · GW

I think this is an uncharitable strawman of motivational and self help materials.

Is there stuff out there that's trying to get you to buy something that doesn't really help? Yes. Is there also stuff out there that people find transforms their lives because it helps them have insights that unstick them from problems they couldn't unstick themselves from? Absolutely. Evidence: me and lots of people claiming this.

What you advise might work for some, but for others such forced action would actually make the situation worse! I know this has been the case for me at times: forcing myself to "grind" actually made the problem worse over time rather than better.

Comment by G Gordon Worley III (gworley) on Worst Commonsense Concepts? · 2021-11-17T01:59:21.031Z · LW · GW

But some stuff is explicitly outside of science's purview, though not in the way you're talking about here. That is, some stuff is explicitly about, for example, personal experience, which science has limited tools for working with since it has to strip away a lot of information in order to transform it into something that works with scientific methods.

Compare how psychology sometimes can't say much of anything about things people actually experience because it doesn't have a way to turn experience into data.

Comment by G Gordon Worley III (gworley) on Worst Commonsense Concepts? · 2021-11-17T01:57:18.803Z · LW · GW

Probably the most persistent and problem-causing one is the commonsense way of treating things as having essences.

By this I mean that people tend to think of things like people, animals, organizations, places, etc. as having properties or characteristics as if each had a little file inside it with various bits of metadata set that define its behavior. But this is definitely not how the world works! A property like this is at best a useful fiction or abstraction that allows simplified reasoning about complex systems, but it also leads to lots of mistakes, because most people don't seem to realize these are more like aggregations over complex interactions in the world than real things themselves.
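The metadata metaphor can be made nearly literal. A toy contrast (all names invented):

```python
from dataclasses import dataclass, field

# The commonsense model: the property is a fact stored "in the file".
@dataclass
class EssentialistDog:
    friendly: bool = True

# Closer to how the world works: the "property" is an aggregation over many
# interactions, and shifts as the interactions do.
@dataclass
class Dog:
    interactions: list = field(default_factory=list)  # e.g. +1 tail wag, -1 growl

    @property
    def friendly(self) -> bool:
        return sum(self.interactions) > 0

d = Dog(interactions=[+1, +1, -1, +1])
print(d.friendly)  # True given this history; a different history answers differently
```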

You might say this is mistaking the map for the territory, but I think framing it this way makes it a little clearer just what is going on: people act as if there were essential properties of things, think that's how the world actually is, and as a result make mistakes when that model fails to correspond to what actually happens.