Posts

The whirlpool of reality 2020-09-27T02:36:34.276Z · score: 9 (2 votes)
Zen and Rationality: Just This Is It 2020-09-20T22:31:56.338Z · score: 28 (14 votes)
Zen and Rationality: Map and Territory 2020-09-12T00:45:40.323Z · score: 16 (5 votes)
How much can surgical masks help with wildfire smoke? 2020-08-21T15:46:12.914Z · score: 10 (3 votes)
Bayesiance (Filk) 2020-08-18T16:30:00.753Z · score: 8 (3 votes)
Zen and Rationality: Trust in Mind 2020-08-11T20:23:34.434Z · score: 22 (9 votes)
Zen and Rationality: Don't Know Mind 2020-08-06T04:33:54.192Z · score: 22 (10 votes)
Let Your Mind Be Not Fixed 2020-07-31T17:54:43.247Z · score: 47 (22 votes)
[Preprint] The Computational Limits of Deep Learning 2020-07-21T21:25:56.989Z · score: 9 (2 votes)
Comparing AI Alignment Approaches to Minimize False Positive Risk 2020-06-30T19:34:57.220Z · score: 6 (2 votes)
What are the high-level approaches to AI alignment? 2020-06-16T17:10:32.467Z · score: 13 (4 votes)
Pragmatism and Completeness 2020-06-12T16:34:57.691Z · score: 14 (5 votes)
The Mechanistic and Normative Structure of Agency 2020-05-18T16:03:35.485Z · score: 14 (5 votes)
What is the subjective experience of free will for agents? 2020-04-02T15:53:38.992Z · score: 10 (3 votes)
Deconfusing Human Values Research Agenda v1 2020-03-23T16:25:27.785Z · score: 18 (6 votes)
Robustness to fundamental uncertainty in AGI alignment 2020-03-03T23:35:30.283Z · score: 11 (3 votes)
Big Yellow Tractor (Filk) 2020-02-18T18:43:09.133Z · score: 12 (4 votes)
Artificial Intelligence, Values and Alignment 2020-01-30T19:48:59.002Z · score: 13 (4 votes)
Towards deconfusing values 2020-01-29T19:28:08.200Z · score: 13 (5 votes)
Normalization of Deviance 2020-01-02T22:58:41.716Z · score: 60 (23 votes)
What spiritual experiences have you had? 2019-12-27T03:41:26.130Z · score: 22 (5 votes)
Values, Valence, and Alignment 2019-12-05T21:06:33.103Z · score: 12 (4 votes)
Doxa, Episteme, and Gnosis Revisited 2019-11-20T19:35:39.204Z · score: 14 (5 votes)
The new dot com bubble is here: it’s called online advertising 2019-11-18T22:05:27.813Z · score: 55 (21 votes)
Fluid Decision Making 2019-11-18T18:39:57.878Z · score: 9 (2 votes)
Internalizing Existentialism 2019-11-18T18:37:18.606Z · score: 10 (3 votes)
A Foundation for The Multipart Psyche 2019-11-18T18:33:20.925Z · score: 7 (1 votes)
In Defense of Kegan 2019-11-18T18:27:37.237Z · score: 10 (5 votes)
Why does the mind wander? 2019-10-18T21:34:26.074Z · score: 11 (4 votes)
What's your big idea? 2019-10-18T15:47:07.389Z · score: 29 (15 votes)
Reposting previously linked content on LW 2019-10-18T01:24:45.052Z · score: 18 (3 votes)
TAISU 2019 Field Report 2019-10-15T01:09:07.884Z · score: 39 (20 votes)
Minimization of prediction error as a foundation for human values in AI alignment 2019-10-09T18:23:41.632Z · score: 13 (7 votes)
Elimination of Bias in Introspection: Methodological Advances, Refinements, and Recommendations 2019-09-30T20:23:13.139Z · score: 16 (3 votes)
Connectome-specific harmonic waves and meditation 2019-09-30T18:08:45.403Z · score: 12 (10 votes)
Goodhart's Curse and Limitations on AI Alignment 2019-08-19T07:57:01.143Z · score: 21 (8 votes)
G Gordon Worley III's Shortform 2019-08-06T20:10:27.796Z · score: 16 (2 votes)
Scope Insensitivity Judo 2019-07-19T17:33:27.716Z · score: 25 (10 votes)
Robust Artificial Intelligence and Robust Human Organizations 2019-07-17T02:27:38.721Z · score: 17 (7 votes)
Whence decision exhaustion? 2019-06-28T20:41:47.987Z · score: 17 (4 votes)
Let Values Drift 2019-06-20T20:45:36.618Z · score: 3 (11 votes)
Say Wrong Things 2019-05-24T22:11:35.227Z · score: 100 (36 votes)
Boo votes, Yay NPS 2019-05-14T19:07:52.432Z · score: 34 (11 votes)
Highlights from "Integral Spirituality" 2019-04-12T18:19:06.560Z · score: 19 (22 votes)
Parfit's Escape (Filk) 2019-03-29T02:31:42.981Z · score: 40 (15 votes)
[Old] Wayfinding series 2019-03-12T17:54:16.091Z · score: 9 (2 votes)
[Old] Mapmaking Series 2019-03-12T17:32:04.609Z · score: 9 (2 votes)
Is LessWrong a "classic style intellectual world"? 2019-02-26T21:33:37.736Z · score: 31 (8 votes)
Akrasia is confusion about what you want 2018-12-28T21:09:20.692Z · score: 29 (17 votes)
What self-help has helped you? 2018-12-20T03:31:52.497Z · score: 34 (11 votes)

Comments

Comment by gworley on The rationalist community's location problem · 2020-09-25T23:23:54.516Z · score: 4 (2 votes) · LW · GW

The pandemic has updated me in the direction that having any particular place be the center of the physical community is not super important. In some ways, it would almost be better if we anchored less on the idea of trying to get everyone physically together in a single locale, and instead thought of ourselves as distributed across many hubs with strong connections both within and between them, although those two kinds of connections look a bit different (within-hub connections being more about human needs, between-hub connections more about project needs).

For comparison, many businesses operate with multiple offices, the community of academics is highly distributed, and religions have various approaches to this split local/global model. There's no special reason we all need to be physically together in the same city, so I don't think it needs to happen and thus won't.

Put another way, I think of a major rationalist hub that everyone would be happy living in as a kind of fairy tale: it's a nice idea to dream about, but the ground conditions simply aren't conducive to it. We should focus on meeting the conditions as we find them rather than hoping to find a city, which probably doesn't exist, that would enable a hub with features that sadly sit well beyond the current Pareto frontier.

Comment by gworley on EA Relationship Status · 2020-09-20T00:12:27.669Z · score: 3 (2 votes) · LW · GW

I'm also surprised by the relatively low "ever married" rates for the above-45 segments, since marriage rates were higher in the past and those people have had more chances to get married, so barely cresting 60% suggests EA is somehow correlated with people who don't get married, robust to their having had many opportunities to marry before EA coalesced as a movement. I would have expected something closer to 85%.

Comment by gworley on This Territory Does Not Exist · 2020-09-19T00:49:56.364Z · score: 4 (2 votes) · LW · GW

Depends. In a certain vague sense, they are both okay pointers to what I think is the fundamental thing they are about, the two truths doctrine. In another sense, no, because the map and territory metaphor suggests a correspondence theory of truth, whereas ontological and ontic are about mental categories and about being or existence, respectively, and are historically tied to different approaches to truth, namely those associated with transcendental idealism. And if you don't take my stance that they're both different aspects of the same way of understanding reality, contextualized in different ways and thus both wrong at some limit but in different ways, then there is an ocean of difference between them.

Comment by gworley on niplav's Shortform · 2020-09-19T00:39:55.125Z · score: 3 (2 votes) · LW · GW

This is an idea that's been talked about here before, but it's not even exactly clear what philosophical reasoning is or how to train for it, let alone whether it's a good idea to teach an AI to do that.

Comment by gworley on Rationality for Kids? · 2020-09-17T21:32:54.815Z · score: 3 (2 votes) · LW · GW

I can't find it, but I vaguely recall Julia Galef writing something about how her parents raised her and her brother such that they fit naturally with the Rationalist community, even though it didn't exist at the time of their upbringing.

Comment by gworley on This Territory Does Not Exist · 2020-09-17T21:22:38.183Z · score: 2 (1 votes) · LW · GW

Standard rationalist terminology would be roughly territory and map, respectively.

Comment by gworley on Zen and Rationality: Map and Territory · 2020-09-17T21:21:36.020Z · score: 2 (1 votes) · LW · GW

Yep, you got it.

Comment by gworley on Zen and Rationality: Map and Territory · 2020-09-16T21:21:31.959Z · score: 3 (2 votes) · LW · GW

So, he is, but for reasons orthogonal to the way you describe it. The idea of guest and host doesn't really match closely (in my understanding) with this idea of treating experience as a guest coming to a party, but it does present a way to get closer to seeing the host/absolute by not holding on so tightly to the guest/relative (as I'm understanding it from your comments).

Comment by gworley on Against boots theory · 2020-09-14T17:36:30.949Z · score: 18 (12 votes) · LW · GW

I've generally viewed this discussion as an explanation not of why the rich are rich, but why the poor stay poor, i.e. the poverty trap. Dickens expressed a similar idea in another way:

Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.

The point is that below some threshold, it's not possible to acquire capital (to save) and you're constantly underwater financially.

I think much of the appeal of this theory is that it fits with what it feels like to be on the wrong side of the threshold. I was working-poor in America for several years, constantly falling behind in debt, having to choose which bills to pay and which to abandon, constantly choosing worse options that cost me more in the long term because I couldn't afford the things that would have been cheaper in the long run but required capital outlays I didn't have.

My guess is that the people who resonate most with this theory are on the outside looking in at what it must be like to be rich (or just not poor); it's a reasonable (if, as you show, not accurate) guess to assume that being rich must mean a reversal of the thing that seems to be keeping them poor.

Comment by gworley on The Short Case for Verificationism · 2020-09-11T23:47:27.323Z · score: 2 (1 votes) · LW · GW

So, I think the crux of why I don't really agree with your general gist, and why I'm guessing a lot of people don't, is that we see meaningfulness as something bigger than just whether or not something is a fact (a statement that has a coherent truth value). To most people, I think, something is meaningful if it is somehow grounded in external reality, not if it can be assessed to be true, and many things are meaningful to people even though we can't assess their truth. You seem to already agree that there are many things about which we cannot state facts, yet these non-facts are not meaningless to people. For example, you perhaps can't speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them. There's a kind of deflation of what you mean by "meaning" here that, to me, makes this position kind of boring and useless, since most of the interesting stuff we need to deal with now sits outside the realm of facts and "meaning" on your model.

Comment by gworley on The Short Case for Verificationism · 2020-09-11T21:55:45.208Z · score: 5 (2 votes) · LW · GW

It might help if you could be more specific about what it means for a statement to be "meaningless". Simply that it can't be treated as a fact?

Comment by gworley on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-11T16:50:30.481Z · score: 3 (2 votes) · LW · GW

Actually, I'm not sure that sign flips are easier to deal with. A sentiment I've heard expressed before is that it's much easier to trim something to be a little more or less of it, but it's much harder to know whether you've got it pointed in the right direction or not. Ultimately, though, addressing false positives ends up being about these kinds of directional issues.

Comment by gworley on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-11T01:28:59.874Z · score: 4 (2 votes) · LW · GW

You seem to be approaching a kind of reasoning about this similar to something I've explored in a slightly different way (reducing risk of false positives in AI alignment mechanism design). You might find it interesting.

Comment by gworley on Capturing Ideas · 2020-09-11T01:26:12.107Z · score: 4 (2 votes) · LW · GW

I think, down the road, there comes a point where a place to capture ideas is no longer needed.

This is mostly speaking from personal experience, but after years of capturing ideas like this I eventually just didn't need to. I realized I was generating hundreds or thousands of ideas a day, I didn't do anything with most of them, and having them written down was not a useful way to select among them, so it made more sense to just let them come and go and not get stressed if I didn't capture something.

This suggests this sort of technique is useful for a while until it trains you so well that you can drop the capture tool.

Yes, you'll still sometimes need to capture things, but the point is that you continue to carry around the brain that produced the ideas you thought worth capturing, and if you had them once and they're worth doing something with you'll have them again, so you can eventually ease up on the capture side. Of course, if you're not babbling enough, you should do something like this to change that!

Comment by gworley on Loneliness and the paradox of choice · 2020-09-09T20:40:04.590Z · score: 4 (2 votes) · LW · GW

This generally accords with my own understanding of why people feel lonely. Although I might not have chosen to put the emphasis on decision making and choice, loneliness is definitely about being confused and thinking you are apart from others or disconnected from them in ways you are not, and that shows up in making decisions, so decisions are where it's easiest to see loneliness being created from the outside.

Comment by gworley on Emotional valence vs RL reward: a video game analogy · 2020-09-05T22:27:30.167Z · score: 4 (3 votes) · LW · GW

There's this from QRI that I think also points to a similar interpretation of valence and arousal as the one you use here.

Comment by gworley on Li and Vitanyi's bad scholarship · 2020-09-05T20:57:27.736Z · score: 2 (1 votes) · LW · GW

You seem to be making an argument here against statements made in a particular book, and you provide a lot of quotes, but not quotes of the specific statements you are arguing against. You claim they say Solomonoff induction solves the problem of induction. It clearly doesn't in full generality, because that would also mean it solves, in a justified way, the problem of the criterion, epistemic circularity, and other formulations of what we might call the hard problem of epistemology (how do you go from knowing nothing to knowing something). Yet on most accounts Solomonoff induction is argued to formalize and address induction only up to the limit of systematization, which seems more relevant to the rest of what you get at in your post.

So, uh, I guess just what are you trying to argue here, other than that you think these coauthors made a mistake because you don't think they engaged with the literature on induction enough in the text of their book?

Comment by gworley on nostalgebraist: Recursive Goodhart's Law · 2020-08-30T21:33:12.772Z · score: 2 (1 votes) · LW · GW

I'm pretty happy to count all these things as optimization. Much of the issue I find with using the feedback loop definition is, as you point out, the difficulty of figuring out things like "is there a lot here?", suggesting there might be a better, more general model for what I've been pointing to with feedback loops, which are simply the closest, most general model I know. Which actually points back to the way I phrased it before, which isn't formalized but I think does come closer to expansively capturing all the things I think make sense to group together as "optimization".

Comment by gworley on nostalgebraist: Recursive Goodhart's Law · 2020-08-29T01:22:25.694Z · score: 2 (0 votes) · LW · GW

Yeah, if I want to be precise, I mean anytime there is a feedback loop there is optimization.

Comment by gworley on "On Bullshit" and "On Truth," by Harry Frankfurt · 2020-08-28T01:14:39.813Z · score: 4 (3 votes) · LW · GW

On Bullshit argues that bullshitting doesn't necessarily undermine society, at least not up to a point, but I would expect a hypothetical society with even slightly less bullshit than ours to function more smoothly. I also disagree with the position that truth intrinsically gives us joy. Many of us love bullshit more than truth.

I don't think the fact that people love bullshit means they couldn't experience the joy of truth. Many people like abusing others in various ways, yet would probably be happier if they didn't, but they are stuck in a local optimum that helps them deal with some trauma, likely from having been abused themselves. So, too, I expect it is with bullshit: suffer enough of it and you'll become a connoisseur of it, able to enjoy it, but having also fallen into an attractor that keeps you from enjoying the greater joy of what simply is, distrustful even that those who experience it aren't bullshitting you themselves.

Comment by gworley on nostalgebraist: Recursive Goodhart's Law · 2020-08-27T21:49:42.606Z · score: 2 (1 votes) · LW · GW

I think once one begins to enter this alternative frame where lots of things aren't optimization, it starts to become apparent that "hardly anything is just optimization" -- IE, understanding something as optimization often hardly explains anything about it, and there are often other frames which would explain much more.

I guess it depends on whether you want "optimization" to refer to the general motion of making the world more likely to be one way rather than another, or to a specific type of making the world more likely to be one way rather than another. I think the former is the more natural category for the types of things most people seem to mean by optimizing.

None of this is to say, though, that there aren't many processes for which the optimization framing isn't very useful. You mention logic and Bayesian updating as examples, and that sounds right to me, because those are processes operating over the map rather than the territory (even if they are meant to be grounded in the territory), and when you only care about the map it doesn't make much sense to talk about taking actions to make the world one way rather than another, because there is only one consistent way the world can be within the system of a particular map.

Comment by gworley on nostalgebraist: Recursive Goodhart's Law · 2020-08-26T20:53:26.524Z · score: 5 (5 votes) · LW · GW

It's mostly explicated down in the comments on the post where people started getting confused about just how integral the act of measuring is to doing anything. When I wrote the post I considered the point obvious enough to not need to be argued on its own, until I hit the comments.

(On the example, I was a short-sighted optimizer.)

Comment by gworley on nostalgebraist: Recursive Goodhart's Law · 2020-08-26T19:00:40.922Z · score: 7 (6 votes) · LW · GW

I agree and am glad this is getting upvotes, but for what it's worth I made exactly the same point a year ago and several people were resistant to the core idea, so this is probably not an easily won insight.

Comment by gworley on Learning human preferences: black-box, white-box, and structured white-box access · 2020-08-25T18:54:12.466Z · score: 2 (1 votes) · LW · GW

Any model is going to be in the head of some onlooker. This is the tough part about the white box approach: it's always an inference about what's "really" going on. Of course, this is true even of the boundaries of black boxes, so it's a fully general problem. And I think that suggests it's not a problem except insofar as we have normal problems setting up correspondence between map and territory.

Comment by gworley on Charting Is Mostly Superstition · 2020-08-25T18:40:05.730Z · score: 1 (2 votes) · LW · GW

Although you make a case against divining what to do from charts, I think there might still be a case for doing things like this.

I think this because I rely heavily on inference from charts to do my job, but these are charts telling me about the behavior of computer systems via telemetry rather than stocks. Now, there are some big differences here, to be sure: I'm trying to infer the behavior of a mostly deterministic thing that, while complex, is complex like a clock rather than complex like a school of fish.

Nonetheless, this suggests to me that charts should still be useful for inferring things about how the market works and then being able to use that model to create a strategy. To the extent it doesn't work, I would say you probably need more and better charts to help you make sense of what's going on, since that's usually the answer in my world, rather than that things are just random and you can't find evidence of what's happening.

Maybe the argument is something like: financial markets have so much noise that you're more likely to accidentally overfit to noise than to find real patterns that let you infer a useful model. But if that's the case, that's a problem everywhere, and you just have to get more aggressive about dealing with it, up to some limit where there's simply not enough signal to determine anything useful.

Comment by gworley on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-25T17:28:24.029Z · score: 1 (2 votes) · LW · GW

I think you're seeing the problem of the universal prior. It's a free variable and depending on how you like to talk about it, you set it normatively, arbitrarily, on faith, or to serve a particular purpose.

Comment by gworley on Is 'satificing' optimisation? · 2020-08-25T17:23:45.708Z · score: 5 (3 votes) · LW · GW

Satisficing is optimization. I'd say even more generally that any kind of decision process is a kind of optimization, because it drives the world towards a particular state based on the decision method, even if that's something like "optimizing" for higher entropy. The only way not to optimize is to not make decisions, which means not changing what happens based on feedback.

Comment by gworley on Current guidance for COVID-19 self-care? · 2020-08-24T20:43:09.582Z · score: 2 (1 votes) · LW · GW

I don't know how it specifically interacts with COVID-19, but most modern people are vitamin D deficient, it's easy to supplement, and vitamin D levels higher than you'd have without supplementation seem to be correlated with better immune system function.

Comment by gworley on Douglas_Knight's Shortform · 2020-08-24T20:22:28.344Z · score: 2 (1 votes) · LW · GW

I think it's not specifically about slaves, but about the supply of labor relative to the demand for it. For a modern example, the standard argument is that Japan has pushed further ahead on automation than other developed nations in recent decades because, thanks to demographic changes, labor there is in shorter supply relative to demand. Similarly, the argument for why the industrial revolution happened when and where it did is that labor was in short supply in England (and, to a lesser extent, in northern Europe), so it was necessary to automate work to meet demand.

That said, this is just a model and there are likely other factors at play such that even when the labor supply-demand curve supports automation it may not happen, perhaps for cultural reasons (e.g. a society of Butlerians or Luddites).

Comment by gworley on G Gordon Worley III's Shortform · 2020-08-24T19:46:51.367Z · score: 3 (2 votes) · LW · GW

This actually fits the lifting metaphor (which is itself a metaphor)!

Comment by gworley on Inoculating against Psychedelic Woo · 2020-08-21T22:29:45.268Z · score: 5 (3 votes) · LW · GW

This is not to (as my post above suggests) prevent irrational thought during the trip itself, but rather to ensure that one is properly prepared, once sober, to reflect critically about their experiences, and avoid common epistemic mistakes people make post-psychedelics

This seems like the right place to intervene. Psychedelic trips are somewhat similar to mystical experiences, and there the problem tends to be taking what was experienced and trying to turn it into something like a universal truth. So, it quickly jumps from "I felt warm and connected" to "I was in the presence of God" or whatever else is ready at hand in the person's mind. Stated otherwise, you mostly have to watch out for overfitting the data to a pattern you want to be true.

Comment by gworley on G Gordon Worley III's Shortform · 2020-08-21T22:24:37.958Z · score: 8 (4 votes) · LW · GW

Explanations are liftings from one ontology to another.

Comment by gworley on Inoculating against Psychedelic Woo · 2020-08-21T16:13:23.700Z · score: 10 (6 votes) · LW · GW

I'm in no way meaning to dismiss your concerns, but this all seems a bit much to me. Something like "person identified with their conception of themself seeks to remain identified with themself in circumstances where disidentification happens". Or if you like, comparable to trying to get drunk without actually being drunk or to engage in exposure therapy without actually being exposed to anything.

I'm not saying we can't optimize for better and more useful experiences, only that something seems off to me about your approach here where you're trying to have your cake and eat it, too.

Comment by gworley on MakoYass's Shortform · 2020-08-21T15:59:00.925Z · score: 2 (1 votes) · LW · GW

Yes, this seems straightforwardly true, although I don't think it's especially significant unless I'm failing to think of some relevant context about why you think indexical claims matter so much (but then I don't spend a lot of time thinking very hard about semantics in a formal context, so maybe I'm just failing to grasp what all is encompassed by "indexical").

Comment by gworley on How much can surgical masks help with wildfire smoke? · 2020-08-21T15:53:35.672Z · score: 2 (1 votes) · LW · GW

As should be obvious, this question is somewhat motivated because:

  • I'm in Oakland and there's wildfire smoke
  • I have asthma
  • I have a bunch of surgical masks
  • I'm running HEPA filtration but there's enough smoke now that I can still smell smoke in my apartment, so the filtration isn't keeping up

I've already ordered more dakka, but I'm thinking about things I might do in the interim other than wear my P100 mask, since that has the unfortunate tradeoff of making breathing more effortful, so it's not a great option for an extended period of time. I'm actually wearing a surgical mask in my home now as an experiment, but I think it's also worth asking, since it's a bit hard for me to gather enough data to know whether it's making a difference.

(The additional context for people who know me in person is that while I've been in Oakland/Berkeley for the last few years of fires, my asthma is much worse now, so I don't know how much the smoke may end up affecting me.)

Comment by gworley on Should we write more about social life? · 2020-08-19T22:27:34.513Z · score: 8 (4 votes) · LW · GW

I think Jacobian has made a number of cross-posts here about social life.

Comment by gworley on On Defining your Terms · 2020-08-19T18:21:19.016Z · score: 4 (2 votes) · LW · GW

One thing I found helpful was realizing that you can't expect mathematically precise definitions about the territory; you can only get those for definitions of things that exist within the map. Otherwise there is always a gap where uncertainty and fuzziness seep in, but that's okay. If you expect that what you're doing is less like defining mathematical terms or programming and more like painting a picture so that it evokes the expected thoughts (e.g. "oh, I guess that must be a flower"), you'll suffer a lot less strife and be more effective at using and explaining words in ways that are useful.

Comment by gworley on Survey Results: 10 Fun Questions for LWers · 2020-08-19T17:54:13.054Z · score: 2 (1 votes) · LW · GW

9. When you feel emotions, do they mostly help or hinder you in pursuing your goals?

I don't understand how to interpret the results of this question. Does a lower score mean help or hinder?

Comment by gworley on Partially Enlightened AMA · 2020-08-19T02:26:44.608Z · score: 3 (2 votes) · LW · GW

Then keep drinking until the reflection in your cup is you!

Comment by gworley on Matt Goldenberg's Short Form Feed · 2020-08-18T19:25:09.941Z · score: 2 (1 votes) · LW · GW

My memory of The Gervais Principle is that it gets wrapped up in lots of fairly specific models of how people interact, whereas Moral Mazes has a more diffuse "you are contaminated by interacting with the system" vibe. So in the end maybe pretty similar, but with different emphases.

Comment by gworley on Partially Enlightened AMA · 2020-08-18T19:21:30.249Z · score: 3 (2 votes) · LW · GW

But what is it to have this mind of no?

Comment by gworley on Partially Enlightened AMA · 2020-08-18T16:39:33.356Z · score: 3 (2 votes) · LW · GW

What makes your enlightenment partial? Have you only a partial mind?

Comment by gworley on Partially Enlightened AMA · 2020-08-18T16:36:34.247Z · score: 3 (2 votes) · LW · GW

Mmm, the rabbit bounds under the full moon.

Comment by gworley on GPT-3, belief, and consistency · 2020-08-18T01:34:57.079Z · score: 2 (1 votes) · LW · GW

The other side of this is to ask: what do humans believe? As in, what are the mechanisms going on that we then categorize as constructing beliefs?

At a certain level I think if we took a fresh look at beliefs, we'd see humans and GPT are doing similar things, albeit with different optimization pressures. But on another level, as you point out by addressing the question of resolving inconsistency, GPT seems to lack the sort of self-referential quality that humans have, except insofar as GPT, say, is fed articles about GPT.

Comment by gworley on Partially Enlightened AMA · 2020-08-18T01:07:27.737Z · score: 4 (3 votes) · LW · GW

Why speak the emperor's name? Is it not better to cut out your tongue so you may speak eloquently?

Comment by gworley on "The Conspiracy against the Human Race," by Thomas Ligotti · 2020-08-14T04:39:42.461Z · score: 8 (4 votes) · LW · GW

This was really in-depth and I enjoyed it (heh).

To me Ligotti reads like someone stuck in the "pit" of nihilism or a dark night of the soul. From the view of many spiritual traditions and from some theories of developmental psychology, this is a necessary phase where a person sees a side of reality we might here usefully call emptiness (but "vastness", as mentioned in the article, works too), but importantly gets stuck on that emphasis and fails to remember or discover the value of form or what we might here simply think of as the mundane everyday experience of things. If this is right, then it suggests he's been stuck there a long time; at least long enough to bother to write this book!

I say this because this all feels familiar, and yet I keep getting on with life anyway. I guess Ligotti would argue I resumed my role in the conspiracy as a trade off to temporarily suffer a little less or something like that, but I think there's a bit more to it than that. Just what that is, though, I won't say.

Comment by gworley on This Territory Does Not Exist · 2020-08-14T03:24:43.373Z · score: 8 (3 votes) · LW · GW

Okay, circling around on this to maybe say something more constructive now that I've thought about it a bit.

Part of the problem is that your central thesis is not very clear on first read, so I had to think about it a bit to really get what "big idea" or ideas motivate this post. I realize you say right up top that you believe a strong version of verificationism is correct, but to me that's not really getting at the core of what you're thinking; that's just something worked out by other people that you can point at and say "something in that direction seems right".

(FWIW, even some of the people who came up with logical positivism and related ideas like verificationism eventually worked themselves into a corner and realized there was no way out and the whole thing fell apart. The arguments for why it doesn't work eventually get pretty subtle if you really press the issue, and I doubt I could do them justice, so I'll stay higher level and may not have the time and energy to address every objection that you could bring up, but basically there's 50+ years of literature trying to make ideas like this work and then finding there were inevitably problems.)

So, as best I can tell, the insight you had driving this was something like "oh, there's no way to fully ground most statements, thus those statements are meaningless". I'll respond based on this assumption.

Now, up to a point you are right. Because there is an epistemological gap between the ontological and the ontic, statements about the ontic are "meaningless" in that they are not fully grounded. This is due to epistemic circularity, a modern formulation of the problem of the criterion. Thus the only statements that can be true in a classical sense are statements about statements, i.e. you can only get "truth" from inside an ontology, and there is no matter of truth to be assessed about statements of any other kind. This, alas, is not so great, because it takes all the power out of our everyday experience of truth.

One possible response is to hold this classical notion of truth fixed and say all those statements that can't be grounded are false. The alternative is to say we screwed up on the classical notion of truth, and it's not the category we thought it was.

Whichever path you take, they converge to the same place, because if you reject things as not having truth value, you now have to introduce some new concept to talk about what everyone in everyday speech thinks of as truth, and you'll be forced to go around annoying everyone with your jargon, but whatever. The alternative is to accept that the classical notion of truth is malformed and not the category it was thought to be, and to rehabilitate truth in epistemology to match what people actually mean by it.

As I say, in the limit they converge to the same place, but in different language (cf. the situation with moral realism and anti-realism). I'll speak now from the second perspective because it's mine, but there's one from the other side.

So then, if we were wrong that truth is about statements that can be proven true and grounded in reality, then what is truth? I take from your talk of "constrained expectations" that you already see the gist of it: truth can be about predicting experiences, as in what is true is that which is part of the web of causality that generates our perceptions. This is messy, of course, because we're embedded in the world and have to perceive it from inside it, but it gives us a way to make sense of what we really mean when we talk about truth, and to see that classical notions of truth were poor but reasonable first approximations of this, made on the intuitive assumption that there was some special kind of (classical) truth to know. And on this view, statements about the ontic are not meaningless; in fact, they are the only kind of statements you can make, because the ontological is ontic, but the ontic is not ontological.

This is also why you have a comment section full of people arguing with you: of course the natural notion of truth we have is meaningful, and it is only on a particular narrow view that it is not, a view which is right within itself but leaves out much and confusingly repurposes words to mean things they don't normally mean.

Comment by gworley on Rationally Ending Discussions · 2020-08-13T18:48:25.621Z · score: 3 (2 votes) · LW · GW

Mmm, something like using "rational" to really mean "best" or "optimal". Rationality is about a process, not an outcome, even if it claims to make particular promises on the quality of the outcome.

Comment by gworley on This Territory Does Not Exist · 2020-08-13T18:46:19.552Z · score: 4 (2 votes) · LW · GW

Claiming that they are meaningless is also making a claim that there is no there there to make claims about, and implies a metaphysics where there is a causal disconnect between perception and the perceived.

Comment by gworley on This Territory Does Not Exist · 2020-08-13T18:25:15.121Z · score: 5 (3 votes) · LW · GW

General assessment: valid critiques but then you go and make your own metaphysical claims in exactly the opposite direction, missing the point of your own analysis.