Posts

Dealing With Delusions 2022-08-14T21:11:43.937Z
Welcome to Baltimore Lesswrong Meetup [Edit With Your Details] 2018-03-25T21:49:42.255Z
One-Consciousness Universe 2018-01-23T08:46:55.965Z

Comments

Comment by adrusi on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-18T01:24:02.292Z · LW · GW

This hypothesis seems like it should be at or near the top of the list. It explains a lot of Sam's alleged behavior. If she's exhibiting signs of psychosis, then he might be trying to get her to get care, which would explain the strings-attached access to resources. Possibly she is either altering the story or misunderstanding the part about her inheritance being conditional on Zoloft; it might have been an antipsychotic instead.

On the other hand, while psychosis can manifest in subtle ways, I'm skeptical that someone whose psychosis is severe enough that they'd be unable to maintain stable employment or housing would be able to host a podcast where their psychosis isn't clearly visible. (I haven't listened to it yet, but I would expect it to be obvious enough that others would have pointed it out.)

A variation on this hypothesis that I find more likely is that Annie is psychologically unwell in exactly the ways she says she is, and out of some mixture of concern for her wellbeing and fear that her instability could hurt his own reputation or business interests, Sam has used some amount of coercion to get her to seek psychiatric care. She then justifiably got upset about her rich and powerful family members using their financial power to coerce her into taking drugs she knows she doesn't want to take. You don't have to be psychotic to develop some paranoia in a situation like that.

Comment by adrusi on Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer · 2023-08-09T23:18:47.817Z · LW · GW

This is somewhat unconvincing on its own, because clearly at the very least the trans community does some Motte/Bailey on it.

Yeah, I bet that does happen. A more charitable lens that explains some of what might come across that way, though, is that "women trapped in men's bodies" is a neat and succinct way to explain trans women in situations where an extended lecture would be impractical, inappropriate, or unappreciated.

I think autogynephilia is correlated with gender identity?

Extensionally, it's true that learning that someone experiences autogynephilic sexual fantasies should increase your credence that they will report a feminine gender identity.

What I mean is that the Blanchardian model and the gender variance model barely make reference to the same concepts. Orthogonal in theory space, not in people space. But another way of putting my point is that endorsing autogynephilia as an explanation for most trans women's motivation for transition in no way binds you to any position on whether trans women are women.

Comment by adrusi on Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer · 2023-08-09T08:12:06.714Z · LW · GW

The reason autogynephilia is controversial is because it's an alternative to the "woman trapped in a man's body" trope, an etiological story that undermines the "trans women are women" slogan and makes MtFs seem more relevantly M than F, despite their/our efforts.

I don't agree that's the reason autogynephilia theory is controversial! Not that it isn't part of the story, but I'm pretty sure the main reason for the controversy is that it contradicts trans women's own understanding of their motivations for transitioning, and is often presented so as to imply that trans women are either deceiving themselves or others.

In reddit-tier discourse, people do get mad that autogynephilia theory contradicts "trans women are women," but I have no idea how to coherently interpret reddit-tier discourse. When people of the same ideological persuasion as the reddit "trans women are women" crowd want to be coherent, I've seen them often cite Julia Serano on the topic:

In recent papers, proponents of autogynephilia have argued that the theory should be accepted because it has more explanatory potential than what they call the “feminine essence narrative”—that is, the idea forwarded by some transsexuals that they are rather uncomplicatedly “women trapped in men’s bodies”. According to this argument, while the feminine essence narrative may hold true for androphilic transsexual women (whose feminine gender expression and attraction to men allows them to come off as sufficiently “womanly”), nonandrophilic and/or nonfeminine transsexual women fail to achieve conventional ideals of womanhood and, therefore, must comprise a different category and arise from a distinct etiology. However, pitting autogynephilia against an overly simplistic “feminine essence narrative” ignores a more nuanced view that I will refer to here as the gender variance model, which holds that gender identity, gender expression, sexual orientation, and physical sex are largely separable traits that may tend to correlate in the general population but do not all necessarily align in the same direction within any given individual. According to this model, transsexuals share the experience of discordance between their gender identity and physical sex (which leads to gender dysphoria and a desire to physically transition) but are expected to differ with respect to their gender expression and sexual orientation (just as nontranssexuals vary in these aspects).

[source: The Case Against Autogynephilia]

As far as I can tell, "women trapped in men's bodies" hasn't been put forth as a serious model of transness since the theory of sexual inversion in the 19th century. In the ideological framework of mainstream trans activists, autogynephilia doesn't actually threaten "trans women are women," because what makes trans women women is "gender identity," which autogynephilia is entirely orthogonal to. I think the reason that redditors act like it does is that its proponents have a tendency to deny that (at least autogynephilic) trans women are women, not anything to do with the theory itself.

If the spooky number of dimensions I have in common with trans women (like being spectrumy programmers) aren't things we have in common with actual females, that still undermines the slogan

I have no particular attachment to the slogan or its metaphysical agenda, but I want to point out that in my own life, it's seemed like spectrumy trans women sure have a lot in common with spectrumy cis women. Most of my friends growing up were spectrumy cis women, and I think these friends of mine fit into the spectrumy trans woman stereotypes pretty well. I don't know to what degree this is peculiar to me and the people I encountered, but I'm not the first to observe it.

Comment by adrusi on Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer · 2023-07-19T13:51:24.030Z · LW · GW

Not necessarily sexual fantasies themselves! Sexual fantasies are an indicator of the presence of an underlying sexual orientation towards that which is depicted in the fantasies

I see! This is something I associate with Anne Lawrence's contribution to the theory. I had Lawrence on my reading list last year, but I felt it was wise to pull back from that reading for a bit, so sorry if my criticism is a bit basic. I'll be going off just your comment here and what I've heard second hand from Lawrence's critics, who might not be the best of rationalists.

I'll say that I've remarked before that "autogynephilia," if you looked at just the etymology and not its origin in describing cross-sex sexual fantasies (that's definitely how Blanchard used it initially), seemed like as good a description of myself as any. Mostly because it sounds pretty deflationary: I chose to transition because I... like myself as a woman (or more feminine, I'd prefer to say). The alternatives seem like they'd be either cynical-strategic or self-harm.

But "sexual orientation" sounds like it comes with a lot more baggage than that. What account of "sexual orientation" allows calling autogynephilia without concordant sexual fantasies a "sexual orientation?" I've heard people talk about "the desire to become a woman and fall in love with yourself" in the context of Lawrence (and Zack made reference to that here, so I assume it's not made up). But "desire to become a woman" without the "and fall in love with yourself" part doesn't sound like something you'd want to call a "sexual orientation," and the falling in love with yourself part... I don't think I fell in love with myself or that I'm likely to. From my second-hand impressions of Lawrence's work, I think that part is supposed to be involved in explaining why post-transition trans women tend to no longer experience much autogynephilic sexual fantasy, by analogy to a stale sexless marriage.

The classic autogynephile is a male who has a sexual desire to both be a woman and to have sex with other women. But the theory also has to account for asexuals like me, so it describes us as exclusively autogynephilic. So autogynephilia is a sexual orientation that can be present on its own, or in combination with homosexual or bisexual attraction (w.r.t. natal sex). If autogynephilia were something that people had in the place of conventional sexual orientations, then I could see the elegance of calling it a sexual orientation. But not if we've established that it can be found in the place of or in addition to other orientations.

I don't think we get any explanatory value out of this account of autogynephilia-as-a-sexual-orientation without necessary sexual components. Remember that we invoked it in order to explain why some males want to transition and live as women — if "autogynephilia" amounts to nothing more than a desire to be a woman, then we're just begging the question!

And again, if autogynephilia fails to provide an explanation for my desire to transition, then even if it seems to explain other people's it leaves unexplained why I seem so damn similar to them along a spooky number of dimensions, and that should cast the entire claim into doubt.

And there are hypotheses that perform better. Scott explains cross-sex gender identification as causally posterior to ASD:

My guess is something like joint issues → poor proprioception → all sensory experience is noisy and confusing → the brain, which is embodied and spends most of its time trying to process sensory experience, learns a different reasoning style → different reasoning style is less context-dependent (producing symptoms of autism) → different reasoning style when trying to interpret bodily correlates of gender (eg sex hormones) → transgender.

[source: Why Do Transgender People Report Hypermobile Joints?]

Or the hypothesized Meyer-Powers Syndrome, which purports to explain several of the observed commonalities among late-onset trans women, including gender dysphoria, in terms of a disorder of steroidogenesis. I'm skeptical of its empirical validity, but I bet that whatever the true explanation is, it'll involve a similar-looking causal graph.

Comment by adrusi on Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer · 2023-07-19T08:13:15.603Z · LW · GW

I'm trying to make sense of this. If I'm not mistaken you claim:

  1. Autogynephilic sexual fantasies are causally responsible for late-onset not-purely-androphilic trans women's motivations for transition
  2. Some late-onset trans women have never had autogynephilic sexual fantasies

This obviously doesn't make sense as-is. You briefly went into a theory of early-onset HSTS and late-onset not-otherwise-specified gender dysphoria, and you raised internalized misandry as a possible alternate instantiation of that "not-otherwise-specified". And that could resolve the issue I'm pointing at.

This explanation makes a testable prediction. I've noticed that late-onset trans women tend to fall remarkably close together along a number of characteristics that aren't obviously related to gender dysphoria or autogynephilia. Let me know if you don't think that's right, and I can go into more detail, but as a basic example, this group has way higher rates of ASD, and more people who were excellent programmers at a young age, compared to the male baseline. If you're proposing that non-autogynephilic late-onset trans women have significantly different causal explanations for transitioning, then we wouldn't expect to find them also in this autistic computer-kid cluster.

I notice that when offering Ziz as an example of a non-autogynephilic late-onset trans woman, you chose to mention that she's "unusual along a lot of dimensions." So I'm hopeful that I'm on the right track in inferring your thinking here.

From my perspective as a late-onset, not-purely-androphilic trans woman who's on the spectrum and was an excellent programmer at a young age, but who lacks a history of autogynephilic sexual fantasies, I find the "not-otherwise-specified" explanation hard to believe.

Instead of supposing that most late-onset trans women were motivated to transition by their fetish, while I was motivated by some other factor, and that it's just a coincidence that we happen to also share a lot of peculiar features, it would be more parsimonious to say that among these characteristics that we share is some psychological factor that motivated all of our transitions, and which also causes most of us to develop autogynephilic fetishes.

Maybe I'm wrong, and what I perceive as a clear cluster of unusual traits isn't actually enough of a statistical anomaly to support my conclusions (I think it's really anomalous though). Or maybe I'm a victim of social contagion — I ended up friends with a bunch of autogynephiles because we share all these characteristics, then they transitioned because of their autogynephilia, and then I did because I wanted to be ingroup (I'm quite happy with my transition though).

My explanation also has the advantage of matching the reports by most late-onset trans women about the relation between their gender dysphoria and autogynephilic fantasies. I agree there's plenty of evidence that nobody is thinking sanely on this subject — motivated self-delusion is a believable explanation! But it does still incur a complexity penalty.

Comment by adrusi on Deflationism isn't the solution to philosophy's woes · 2021-03-10T23:36:43.370Z · LW · GW

I worry that this doesn't really end up explaining much. We think that our answers to philosophical questions are better than what the analytics have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.

Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.

This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.

I'm not sure what you mean here, but maybe we're getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I'm sure that anyone can give explanations for why their intuitions are more likely to be right, but it's at least more constraining. Some possibilities:

  • LWers are more status-blind, so their intuitions are less distorted by things that are not about being right
  • Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones
  • LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you're right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.

I'm not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.

Comment by adrusi on Deflationism isn't the solution to philosophy's woes · 2021-03-10T05:46:38.033Z · LW · GW

I think an important piece that's missing here is that LW simply assumes that certain answers to important questions are correct. It's not just that there are social norms that say it's OK to dismiss ideas as stupid if you think they're stupid, it's that there's a rough consensus on which ideas are stupid.

LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don't think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.

Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.

And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there's an advantage to making arguments that don't rely on particular answers to questions where there isn't widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.

The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.

Comment by adrusi on The Principle of Predicted Improvement · 2019-04-25T06:40:11.444Z · LW · GW

I also had trouble with the notation. Here's how I've come to understand it:

Suppose I want to know whether the first person to drive a car was wearing shoes, just socks, or no footwear at all when they did so. I don't know what the truth is, so I represent it with a random variable H, which could be any of "the driver wore shoes," "the driver wore socks" or "the driver was barefoot."

This means that P(H) is a random variable equal to the probability I assign to the true hypothesis (it's random because I don't know which hypothesis is true). It's distinct from P(H = shoes) and P(shoes), which are both the same constant, non-random value, namely the credence I have in the specific hypothesis (i.e. "the driver wore shoes").

(P(H = shoes) is roughly "the credence I have that 'the driver wore shoes' is true," while P(shoes) is "the credence I have that the driver wore shoes," so they're equal, and semantically equivalent if you're a deflationist about truth.)

Now suppose I find the driver's great-great-granddaughter on Discord, and I ask her what she thinks her great-great-grandfather wore on his feet when he drove the car for the first time. I don't know what her response will be, so I denote it with the random variable D. Then P(H|D) is the credence I assign to the correct hypothesis after I hear whatever she has to say.

So E[P(shoes|D)] = P(shoes) is equivalent to E[P(H = shoes | D)] = P(H = shoes) and means "I shouldn't expect my credence in 'the driver wore shoes' to change after I hear the great-great-granddaughter's response," while E[P(H|D)] ≥ E[P(H)] means "I should expect my credence in whatever is the correct hypothesis about the driver's footwear to increase when I get the great-great-granddaughter's response."

I think there are two sources of confusion here. First, H was not explicitly defined as "the true hypothesis" in the article. I had to infer that from the English translation of the inequality,

In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D,

and confirm with the author in private. Second, I remember seeing my probability theory professor use sloppy shorthand, and I initially interpreted P(H) as a sloppy shorthand for P(H = h). Neither of these would have been a problem if I were more familiar with this area of study, but many people are less familiar than I am.
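
For anyone who'd like to see the two statements with numbers attached, here's a minimal sketch in Python using the footwear example. The prior and the likelihood table are made up purely for illustration:

```python
# A numerical sketch; the prior and likelihoods below are invented.
hypotheses = ["shoes", "socks", "barefoot"]
prior = {"shoes": 0.5, "socks": 0.3, "barefoot": 0.2}

# P(D = d | H = h): how likely each response is under each hypothesis.
likelihood = {
    "shoes":    {"says shoes": 0.7, "says socks": 0.2, "says barefoot": 0.1},
    "socks":    {"says shoes": 0.2, "says socks": 0.6, "says barefoot": 0.2},
    "barefoot": {"says shoes": 0.1, "says socks": 0.2, "says barefoot": 0.7},
}
responses = ["says shoes", "says socks", "says barefoot"]

# P(D = d), marginalizing over hypotheses.
p_d = {d: sum(prior[h] * likelihood[h][d] for h in hypotheses)
       for d in responses}

def posterior(h, d):
    """P(H = h | D = d) by Bayes' rule."""
    return prior[h] * likelihood[h][d] / p_d[d]

# Conservation of expected evidence: E[P(shoes | D)] equals P(shoes).
e_post_shoes = sum(p_d[d] * posterior("shoes", d) for d in responses)
print(e_post_shoes, "==", prior["shoes"])  # 0.5 == 0.5

# PPI: E[P(H | D)] >= E[P(H)], where H is the *true* hypothesis, so the
# expectation runs over both H and D.
e_p_h = sum(prior[h] * prior[h] for h in hypotheses)
e_p_h_given_d = sum(prior[h] * likelihood[h][d] * posterior(h, d)
                    for h in hypotheses for d in responses)
print(e_p_h_given_d, ">=", e_p_h)  # roughly 0.53 >= 0.38
```

The first print lands exactly on the prior, as conservation of expected evidence requires for any fixed hypothesis; the second shows the strict improvement for the true hypothesis, which is the content of the theorem.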

Comment by adrusi on Debt is an Anti-investment · 2018-07-07T19:16:57.483Z · LW · GW

I think there's some ambiguity in your phrasing and that might explain gjm's disagreement:

You seem to value the (psychological factor of having debt) at zero.

Or

You seem to value the psychological factor of (having debt at zero).

These two ways of parsing it have opposite meanings. I think you mean the former but I initially read it as the latter, and reading gjm's initial comment, I think they also read it as the latter.

Comment by adrusi on Are ethical asymmetries from property rights? · 2018-07-02T05:19:07.702Z · LW · GW

I'm attracted to viewing these moral intuitions as stemming from intuitions about property because the psychological notion of property biologically predates the notion of morality. Territorial behaviors are found in all kinds of different mammals, and prima facie the notion of property seems to be derived from such behaviors. The claim, then, is that during human evolution, moral psychology developed in part by coopting the psychology of territory.

I'm skeptical that anything normative follows from this though.

Comment by adrusi on Notification update and PM fixes · 2018-01-28T09:48:20.725Z · LW · GW

Are there plans to support email notifications? Having to poll the notification tray to check for replies to posts and comments is not ideal.

Comment by adrusi on Dispel your justification-monkey with a “HWA!” · 2018-01-28T09:40:24.157Z · LW · GW

What happens the next time the same thing happens? Am I, Bob, supposed to just “accept reality” no matter how many times Alice messes up and does a thing that harms or inconveniences me, and does Alice owe me absolutely nothing for her mistakes?

If Alice has, to use the phrase I used originally, "acquired a universal sense of duty," then the hope is that the same thing is less likely to happen again. Alice doesn't need to feel guilty or at fault for the actions; she just acknowledges that the outcome was undesirable, and that she should try to adjust her future behavior in such a way as to make similar situations less likely to arise in the future. Bob, similarly, tries to adjust his future behavior to make similar situations less likely to arise (for example, by giving Alice a written reminder of what she was supposed to get at the store).

The notion of "fault" is an oversimplification. Both Alice's and Bob's behavior contributed to the undesirable outcome, it's just that Alice's behavior (misremembering what she was supposed to buy) is socially-agreed to be blameworthy and Bob's behavior (not giving Alice a written reminder) is socially-agreed to be perfectly OK. We could have different norms, and then the blame might fall on Bob for expecting Alice to remember something without writing it down for her. I think that would be a worse norm, but that's not important; the norm that we have isn't optimal because it blinds Bob to the fact that he also has the power to reduce the chance of the bad outcome repeating itself.

HWA addresses this, but not without introducing other flaws. Our norms of guilt and blame are better at compelling people to change their behavior. HWA relies on people caring about and having the motivation to prevent repeat bad outcomes purely for the sake of preventing repeat bad outcomes. Guilt and blame give people external interest and motivation to do so.

Comment by adrusi on Pareto improvements are rarer than they seem · 2018-01-28T09:17:42.119Z · LW · GW

I think philh is using it in the first way you described, just while honoring the fact that potential future deals factor into how desirable a deal is for each party. We do this implicitly all the time when money is involved: coming away from a deal with more money is desirable only because that money makes the expected outcomes of future deals more desirable. That's intuitive because it's baked into the concept of money, but the same consideration can apply in different ways.

Acknowledging this, we have to consider the strategic advantages that each party has as assets at play in the deal. These are usually left implicit and not obvious. So in the case of re-opening Platform 3, the party in favor of making the platform accessible has a strategic advantage if no deal is made, but loses that advantage if the proposed deal is made. The proposed deal, therefore, is not a Pareto improvement compared to not making a deal.

Comment by adrusi on Dispel your justification-monkey with a “HWA!” · 2018-01-24T15:08:08.103Z · LW · GW

I think I more or less try to live my life along the lines of HWA, and it seems to go well for me, but I wonder if that says more about the people I choose to associate with than the inherent goodness of the attitude. HWA works when people are committed to making things go better in the future regardless of whose fault it is. But not everyone thinks that way all the time. Some people haven't acquired a universal sense of duty, they only feel duty when they attribute blame to themselves, and feel a grudging sense of unfairness if asked to care about fixing something that isn't their fault. HWA would not work for them, unless they always understood it to mean "other person's responsibility" and became moral freeloaders.

Even among those better suited to HWA, I think it's still less than ideal, because it suppresses consensus-building. I think inevitably people will still think about whose fault something is, but once someone utters "HWA," they won't share their assessments. When people truly honor the spirit of HWA this won't matter, because they won't ascribe much significance to guilt and innocence, but the stories we tell about our lives are structured around the institutions of guilt and innocence, and by imposing a barrier to sharing our stories with one another, we come to each live our own story, which I fear is what tears communities and societies apart. HWA may be good for friendships, but I'm not sure it's good on larger scales of human interactions.

Comment by adrusi on One-Consciousness Universe · 2018-01-23T21:35:02.540Z · LW · GW

I'm not sure at this point what my goal was with this post, it would be too easy to fall into motivated reasoning after this back-and-forth. So I agree with you that my post fails to give evidence for "consciousness can be based on person-slices," I just don't know if I ever intended to give that positive conclusion.

I do think that person-slices are entirely plausible, and a very useful analytical tool, as Parfit found. I have other thoughts on consciousness which assume person-slices are a coherent concept. If this post is sufficient to make the burden of proof for the existence of person-slices not clearly fall to me, then it's served a useful purpose.

***

By the way, I did give a positive account for the existence of person slices, comparing the notion of a person slice to something that we more readily accept exists:

What would it be like to be a person-slice? This seems to me to be analogous to asking “how can we observe a snapshot of an electron in time?” We can’t! Observation can only be done over an interval of time, but just because we can’t observe electron-slices doesn’t mean that we shouldn’t expect to be able to observe electrons over time, nor does the fact that we can observe electrons over time suggest that electron-slices are a nonsensical concept. Likewise, if there’s nothing it’s like to be a person-slice, that doesn’t mean that person-slices are nonsense.

Comment by adrusi on One-Consciousness Universe · 2018-01-23T18:12:43.660Z · LW · GW

This is precisely the kind of gymnastics you need to do if you want to justify the foundational claim of altruism, that other people should matter to you. But what you've said is not sufficient to justify that. Why should I care about the person-slices the conscion visits if they are not my own?

Comment by adrusi on One-Consciousness Universe · 2018-01-23T17:23:31.091Z · LW · GW

You posted this reply before I finished editing my previous comment to include its second clause, but I'll respond as though the order were more natural.

That's the wrong comparison to be making. Suppose the deist idea about the origin of the universe were dominant, and I proposed that God may not have created the universe. After all, deists, what created God? He was an unmoved mover? Well, why couldn't the universe have just been an unmoved mover in the first place? Sounds like you're just passing the recursive buck, deists! I'm not proposing any kind of better explanation, just offering a different non-explanation to induce doubt.

Doubt in what? Well, I admit I don't know all that much about deism, but let's suppose that deists believed that even though God never intervened in the universe, he had intentions for how the universe should turn out, and it's our job as his creations to honor his intentions like we would honor the intentions of our fathers. This baggage is not entailed by the core theory of deism; it just came along for the ride when deism evolved from older Christian metaphysics. That's why even though my proposed alternative to deism is no more an explanation of the origin of the universe than deism is, it brings to attention the fact that deism's baggage is unnecessary and we should forsake it.

I'm not saying we need to doubt the conventional understanding of consciousness entirely, rather that we should recognize that it has baggage and forsake it. What's the evidence that it has baggage? Well conventional intuition makes the idea of person-slices seem suspect, as I described in the post. Person-slices don't seem suspect when you use the conscion model of consciousness. If the two hypotheses are equally non-explanatory, then it is baggage that causes the different intuition.

Comment by adrusi on One-Consciousness Universe · 2018-01-23T16:25:51.879Z · LW · GW

I'm not trying to explain the theory of flow (not in this post, though I do have some thoughts on the matter). I'm merely trying to induce doubt.

The conventional understanding of consciousness as the Christian soul doesn't explain anything, really, just like the "conscion." But because it's tied up in millennia of Christian scholarship, there are suppositions attached to it that are indefensible.

Comment by adrusi on One-Consciousness Universe · 2018-01-23T15:56:25.093Z · LW · GW

You've woven a story in which I am wrong, and it will be hard for me to admit that I am wrong. In doing so, you've made it tricky for me to defend my point in the case that I'm not wrong.

You're accusing my "conscion" of being the same kind of mysterious answer as phlogiston. It would be, if I were seriously proposing it as an answer to the mystery of consciousness. I'm not.

I view this one-electron universe model as an ontological koan. It makes us think “hey, reality could be this way rather than the way we think it is and we would be none the wiser — let’s try to deepen our understanding of reality in light of that.”

I'll gladly concede a failure of my writing in not making it clearer that I'm not making any claim that the conscion exists, but am rather asking what it would mean for our understanding of consciousness if the conscion did exist, as described. I'm trying to force people to drop their intuition about the "flow" of consciousness. I'm saying that all our observations about consciousness can be explained equally well by this weird conscion hypothesis as by the conventional consciousness-as-the-Christian-soul hypothesis, so we should notice that many of our intuitions about consciousness have simply been transplanted from theology, and we should not trust those.

Comment by adrusi on More Babble · 2018-01-13T04:50:58.531Z · LW · GW

I'm inclined to think that the babble you've been describing is actually just thoughts, and not linguistic at all. You create thoughts by babble-and-prune and then a separate process converts the thoughts into words. I haven't thought much about how that process works (and at first inspection I think it's probably also structured as babble-and-prune), but I think it makes sense to think about it as separate.

If the processes of forming thoughts and phrasing them linguistically were happening at the same level, I'd expect it to be more intuitive to make syntax reflect semantics, like you see in Shakespeare where the phonetic qualities of a character's speech reflect their personality. Instead, writing like that seems to require System 2 intervention.

But I must admit I'm biased. If I were designing a mind, I'd want to have thought generation uncoupled from sentence generation, but it doesn't have to actually work that way.

Edit: If generating linguistic-babble happens on a separate level from generating thought-babble, then that has consequences for how to train thought-babble. Your suggestions of playing scrabble and writing haikus would train the wrong babble (nothing wrong with training linguistic-babble, that's how you become a good writer, but I'm more interested in thought-babble). I think if you wanted to train thought-babble, you'd want to do something like freewriting or brainstorming — rapidly producing a set of related ideas without judgment.

Comment by adrusi on More Babble · 2018-01-13T00:24:39.366Z · LW · GW

Are you referring to the second half of my comment? Because perhaps I wasn't clear enough. I'm confused about what alkjash means, because some of their references to the babble graph seemed perfectly consistent with my understanding, but I got the impression that overall we might not be talking about the same thing. If we are talking about the same thing, then that whole section of my comment is irrelevant.

Comment by adrusi on Babble · 2018-01-12T21:27:29.308Z · LW · GW

I've made a reply to your followup.

Comment by adrusi on More Babble · 2018-01-12T21:26:21.730Z · LW · GW

This is a followup to my comment on the previous post.

This followup (Edit: alkjash's followup post, not my followup comment) addresses my stated motivation for suggesting that the babble-generator is based on pattern-matching rather than mere entropy. I had said that there are too many possible ideas for an entropy generator to produce reasonable ones. For babble to be produced by a random walk along the idea graph is more plausible: it's not obvious that you couldn't produce sufficiently high-quality babble with a random walk along a well-constructed idea-graph.

Now, while I absolutely think the idea graph exists, and I agree that producing babble involves a walk along that graph, I am still committed to the belief that that walk is not random, but is guided by pattern matching. My first reason for holding this belief is introspection: I noticed that ideas are produced by "guess-and-check" (or babble-and-prune) by introspection, and I also noticed that the guessing process is based on pattern matching. That's fairly weak evidence. My stronger reason for believing that babble is produced by pattern matching is that it's safer to assume that a neurological process is based on pattern matching than on random behavior. Neurons are naturally suited to forming pattern-matching machines (please forgive my lay-understanding of cognitive science), and while I don't see why they couldn't also form an entropy generator, I don't suspect that a random walk down the idea graph would be more adaptive than a more "intelligent" pattern matching algorithm.

I also infer that the babble-generator is a pattern matcher from the predictions it makes. If the babble-generator is a random walk down the idea-graph, then the only way to improve your babble should be to improve your idea-graph. If the babble-generator is a pattern-matcher-directed walk down the idea-graph, then you should be able to improve your babble both by training the pattern-matcher well and by improving your idea-graph. Let's say reading nonfiction improves your idea-graph more effectively than it trains a hypothetical pattern-matcher, and that writing novels trains your pattern-matcher more effectively than it improves your idea-graph. Then if the random-walk hypothesis is true, we should see the same kinds of improvements to babble when we read nonfiction and write novels, but if the pattern-matcher hypothesis is true we should expect different kinds of improvements.
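
To make the distinction concrete, here's a toy sketch of the two hypotheses in Python. This is my construction, not anything alkjash proposed: both walks traverse the same idea graph, and they differ only in whether the next node is chosen uniformly at random or weighted by a scoring function standing in for the trained pattern-matcher.

```python
import random

# A toy idea graph: each node's neighbors are the "adjacent" ideas.
idea_graph = {
    "ocean":    ["wave", "salt", "ship"],
    "wave":     ["ocean", "particle", "greeting"],
    "salt":     ["ocean", "pepper"],
    "ship":     ["ocean", "sail"],
    "particle": ["wave"],
    "greeting": ["wave"],
    "pepper":   ["salt"],
    "sail":     ["ship"],
}

def random_walk(start, steps):
    """The random-walk hypothesis: pick each next idea uniformly."""
    node, path = start, [start]
    for _ in range(steps):
        node = random.choice(idea_graph[node])
        path.append(node)
    return path

def guided_walk(start, steps, score):
    """The pattern-matcher hypothesis: the same graph, but a learned
    scoring function biases which neighbor gets followed."""
    node, path = start, [start]
    for _ in range(steps):
        neighbors = idea_graph[node]
        weights = [score(node, n) for n in neighbors]
        node = random.choices(neighbors, weights=weights)[0]
        path.append(node)
    return path

# A stand-in scorer: in the real model this would be trained on what
# the prune filter accepts; here it just prefers longer words.
def toy_score(here, there):
    return len(there)

print(random_walk("ocean", 5))
print(guided_walk("ocean", 5, toy_score))
```

On this picture, improving your idea-graph changes what both walks can reach, while training the scorer only helps the guided walk, which is exactly the diverging prediction described above.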

***

I think for the most part we're talking about the same thing, I'm just suggesting this additional detail of pattern-matching, which has normative consequences (as I sketched out in my previous comment). However I'm not quite sure that we're talking about the same graphs. You say:

What is the Babble graph? It's the graph within which your words and concepts are connected. Some of these connections are by rhyme and visual similarity, others are semantic or personal.

I certainly don't think that this graph is a graph of words, even though I agree that there can be connections representing syntactic relationships like rhyme. I don't think that the babble algorithm is "start at some node of the graph, output the word associated with that node, then select a connected node and repeat." There is an idea-graph, and it's used in the production of babble, but not like that. I'm not sure if you were claiming that it does, but in case you were, I disagree. I would try to elaborate what role I do think the idea-graph plays in babble generation, but this comment is already getting very long.

I'm curious about the details of your model of this "babble-graph." You mention that it can create new connections, which suggests to me that the "graph" is actually a static representation of an active process of connection-drawing. I could be convinced that the pattern-matching I'm talking about is actually a separate process which is responsible for forming these connections. But I'm fuzzy on what exactly you mean, so I'm not sure that's even coherent.

Great posts, I wouldn't mind a part 3!

Comment by adrusi on Babble · 2018-01-12T16:23:51.823Z · LW · GW

I've been thinking about this same idea, and I thought your post captured the heart of the algorithm (and damn you for beating me to it 😉). But I think you got the algorithm slightly wrong, or simplified the idea a bit. The “babble” isn't random: there are too many possible thoughts for random thought generation to ever arrive at something the prune filter would accept. Instead, the babble is the output of a pattern matching process. That's why computers have become good at producing babble: neural networks have become competent pattern matchers.

This means that the algorithm is essentially the hypothetico-deductive model from philosophy of science, most obvious when the thoughts you're trying to come up with are explanations of phenomena: you produce an explanation by pattern matching, then prune the ones that make no goddamn sense (then if you're doing science you take the explanations you can't reject for making no sense and prune them again by experiment). That's why I've been calling your babble-prune algorithm “psychological abductivism.”

Your babble's pattern matching gets trained on what gets accepted by the prune filter; that's why it gets better over time. But if your prune filter is so strict that it seldom accepts any of your babble's output, your babble never improves. That's why you must constrain the tyranny of your prune filter if you find yourself with nothing to say. If you never accept any of your babble, then you will never learn to babble better. You can learn to babble better by pattern matching off of what others say, but if your prune filter is so strict, you're going to have a tough time finding other people who say things that pass your prune filter. You'll think “that's a fine thing to say, but I would never say it, certainly not that way.” Moreover, listening to other people is how your prune filter is trained, so your prune filter will be getting better (that is to say, more strict) at the same time as your straggling babble generator is.
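
Here's a minimal sketch of the loop as I understand it. The function names, the toy generator, and the thresholds are all mine, purely illustrative:

```python
import random

def babble_and_prune(generate, score, threshold, n_candidates=50):
    """One round of the loop: a generator proposes candidates (the
    babble), a filter scores them, and only those above threshold
    survive (the prune). Survivors are the training signal for the
    generator."""
    candidates = [generate() for _ in range(n_candidates)]
    return [c for c in candidates if score(c) >= threshold]

# Toy stand-ins: "ideas" are random word pairs, and the filter only
# passes pairs that don't repeat a word.
words = ["ocean", "mirror", "clock", "lantern", "wave"]
generate = lambda: (random.choice(words), random.choice(words))
score = lambda pair: 0.0 if pair[0] == pair[1] else 1.0

print(babble_and_prune(generate, score, threshold=0.5))  # some survivors
print(babble_and_prune(generate, score, threshold=2.0))  # always []
```

With the threshold set above anything the filter can award, nothing survives, so there is nothing to train the generator on; that's the strict-prune-filter failure mode in miniature.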

I've had success over the past year with making my prune filter less strict in conversational speech, and I think my babble has improved enough that I can put my prune filter back up to its original level. But I need to do the same with my writing, and I find it harder to do. With conversational speech, you have a time constraint, so if your prune filter is too strict you simply end up saying nothing — the other person will say something or leave before you come up with a sufficiently witty response. In writing, you can just take your time. If it takes you an hour to come up with the next sentence, then you sit down and wait that god-forsaken hour out. You can get fine writing out of that process, but it's slow. My writing is good enough that I never proofread (I could certainly still get something out of proofreading, but it isn't compulsory, even for longer pieces of writing), but to get that degree of quality takes me forever, and I can't produce lower-quality writing faster (which would be very useful for finishing my exams on time).

Comment by adrusi on Rationalist Politicians · 2017-12-23T18:19:52.039Z · LW · GW

If a nerd won the presidency, it wouldn't be great because they would say "true" things. It would be great because they would actually be concerned with figuring out what is true. They might actually change their minds if they realized they were wrong.

If you agree with Trump, then let's allow that he "says true things." That doesn't mean that he embodies what would be great about a nerd in the Oval Office. If Trump says true things, it's because it gets him the support of certain segments of the population. If he had evidence that one of his beliefs was false, but his base still believed it, I'm quite certain he would go on professing the false belief. Now I don't actually think Trump would even recognize his belief to be wrong in the presence of evidence, and I think a substantial fraction of politicians are little better than he.

The article didn't claim that Obama was a nerd — rather, it claimed that Obama's path to the presidency could be emulated by nerds.