Posts

Comments

Comment by lucidian on A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats · 2015-04-19T03:43:16.836Z · LW · GW

I suspect you're aware of this already, but you are basically describing E-Prime.

Comment by lucidian on Who are your favorite "hidden rationalists"? · 2015-01-12T18:20:20.905Z · LW · GW

Highly recommend kazerad, for Scott-level insights about human behavior. Here's his analysis of 4chan's anonymous culture. Here's another insightful essay of his. And a post on memetics. And these aren't necessarily the best posts I've read by him, just the three I happened to find first.

By the way, I'm really averse to the label "hidden rationalists". It's like complimenting people by saying "secretly a member of our ingroup, but just doesn't know it yet". Which simultaneously presupposes the person would want to be a member of our ingroup, and also that all worthwhile people are secretly members of our ingroup and just don't know it yet.

Comment by lucidian on Low Hanging fruit for buying a better life · 2015-01-06T16:27:35.330Z · LW · GW

Here are the ten I thought of:

  • decorations for your house/apartment
  • a musical instrument
  • lessons for the musical instrument
  • nice speakers (right now I just have computer speakers and they suck)
  • camping equipment
  • instruction books for crafts you want to learn (I'm thinking stuff like knitting, sewing etc.)
  • materials for those crafts
  • gas money / money for motels, so you can take a random road trip to a place you've never been before
  • gym membership
  • yoga classes (or martial arts or whatever)

Also I totally second whoever said "nice kitchen knives". I got one as a Christmas present once, and it's probably the best holiday gift I've ever received.

Comment by lucidian on I Want To Believe: Rational Edition · 2014-11-19T19:51:01.754Z · LW · GW

Read more things that agree with what you want to believe. Avoid content that disagrees with it or criticizes it.

Comment by lucidian on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-01T21:00:35.388Z · LW · GW

I don't have an answer, but I would like to second this request.

Comment by lucidian on What's the right way to think about how much to give to charity? · 2014-09-24T23:52:49.003Z · LW · GW

This post demonstrates a common failure of LessWrong thinking: assuming there is one right answer to a question when there may not be. There may be many "right ways" for a single person to think about how much to give to charity. There may be different "right ways" for different people, especially if those people have different utility functions.

I think you probably know this; I am just picking on the wording, because I think it nudges us towards thinking about these kinds of questions in an unhelpful way.

Comment by lucidian on Should people be writing more or fewer LW posts? · 2014-09-15T20:21:01.319Z · LW · GW

I think that we should have fewer meta posts like this. We spend too much time trying to optimize our use of this website, and not enough time actually just using the website.

Comment by lucidian on Overcoming Decision Anxiety · 2014-09-11T14:10:41.100Z · LW · GW

Thanks for this post! I also spend far too much time worrying about inconsequential decisions, and it wouldn't surprise me if this is a common problem on LessWrong. In some sense, I think that rationality actually puts us at risk for this kind of decision anxiety, because rationality teaches us to look at every situation and ask, "Why am I doing it this way? Is there a different way I could do it that would be better?" By focusing on improving our lives, we end up overthinking our decisions. And we tend to frame these things as optimization problems: not "How can I find a good solution for X?", but "How can I find the best solution for X?"

When we frame everything as optimization, the perfect can easily become the enemy of the good. Why? Because suppose you're trying to solve problem X, and you come up with a pretty decent solution, x. If you are constantly asking how to improve things, then you will focus on all the negative aspects of x that make it suboptimal. On the other hand, if you accept that some things just don't need to be optimized, you can learn to be content with what you have; you can focus on the positive aspects of x instead.

I think this is how a lot of us develop decision anxiety, actually. In general, we feel anxiety about a decision when we know it's possible for things to go wrong. The worse the possible consequences, the more anxiety we feel. And the thing is, when we focus on the downsides of our decisions, then we have negative feelings about our decisions. The more negative feelings we have about every decision we make, the more it seems like making a decision is an inherently fraught endeavor. Something in our minds says, "Of course I should feel anxiety when making decisions! Every time I make a decision, the result always feels really bad!"

Based on all of this, I'm trying to remedy my own decision anxiety by focusing on the positive more, and trying to ignore the downsides of decisions that I make. Last weekend, I was also looking for a new apartment. I visited two places, and they both looked great, but each of them had its downsides. One was in the middle of nowhere, so it was really nice and quiet, but very inaccessible. The other was in a town, and was basically perfect in terms of accessibility, but if you stood outside, you could vaguely hear the highway. At first I was pretty stressed about the decision, because I was thinking about the downsides of each apartment. And my friend said to me, "Wow, this is going to be a hard decision." But then I realized that both apartments were really awesome, and I'd be very happy in either of them, so I said, "Actually this is a really easy decision." Even if I accidentally picked the 'wrong' apartment, I would still be very happy there.

But here's the thing: whether I'm happy with my decision will depend on my mindset as I live in the apartment. I ended up picking the accessible apartment where you can hear the highway a little. If I spend every day thinking "Wow, I hate that highway, I should have chosen the other apartment," then I'll regret my decision (even though the other place would have also had its faults). But if I spend every day thinking "Wow, this apartment is beautiful, and so conveniently located!", then I won't regret my decision at all.

Comment by lucidian on [QUESTION]: Looking for insights from machine learning that helped improve state-of-the-art human thinking · 2014-07-26T16:13:50.973Z · LW · GW

I think it's worth including inference on the list of things that make machine learning difficult. The more complicated your model is, the more computationally difficult it will be to do inference in it, meaning that researchers often have to limit themselves to a much simpler model than they'd actually prefer to use, in order to make inference actually tractable.
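To make that concrete with a toy sketch of my own (not from the post): even just computing a normalizing constant by brute-force enumeration over n binary variables takes 2^n evaluations, so exact inference stops being an option pretty quickly as models grow.

```python
import itertools
import time

def unnormalized_joint(assignment):
    # Stand-in for some model's unnormalized joint probability.
    return 1.0

# Exact inference by enumeration: the number of terms doubles
# with every variable added to the model.
for n in [10, 15, 20]:
    start = time.time()
    z = sum(unnormalized_joint(a)
            for a in itertools.product([0, 1], repeat=n))
    print(f"n={n}: {2 ** n} terms summed in {time.time() - start:.2f}s")
```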

Comment by lucidian on The Correct Use of Analogy · 2014-07-18T11:04:38.660Z · LW · GW

Analogies are pervasive in thought. I was under the impression that cognitive scientists basically agree that a large portion of our thought is analogical, and that we would be completely lost without our capacity for analogy? But perhaps I've only been exposed to a narrow subsection of cognitive science, and there are many other cognitive scientists who disagree? Dunno.

But anyway I find it useful to think of analogy in terms of hierarchical modeling. Suppose you have a bunch of categories, but you don't see any relation between them. So maybe you know the categories "dog" and "sheep" and so on, and you understand both what typical dogs and sheep look like, and how a random dog or sheep is likely to vary from its category's prototype. But then suppose you learn a new category, such as "goat". If you keep categories totally separate in your mind, then when you first see a goat, you won't relate it to anything you already know. And so you'll have to see a whole bunch of goats before you get the idea of what goats are like in general. But if you have some notion of categories being similar to one another, then when you see your first goat, you can think to yourself "oh, this looks kind of like a sheep, so I expect the category of goats to look kind of like the category of sheep". That is, after seeing one goat and observing that it has four legs, you can predict that pretty much all goats also have four legs. That's because you know that number-of-legs is a property that doesn't vary much in the category "sheep", and you expect the category "goat" to be similar to the category "sheep". (Source: go read this paper, it is glorious.)
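Here's a tiny Python sketch of the idea, using a made-up Gaussian toy model (my own illustration, not the paper's actual setup): the higher level learns how much a feature varies within a typical category, and that knowledge transfers to a brand-new category after a single example.

```python
import numpy as np

# Leg counts observed for two known categories.
dogs = np.array([4, 4, 4, 4, 4, 3, 4, 4])    # one unlucky dog
sheep = np.array([4, 4, 4, 4, 4, 4, 4, 4])

# Higher level: how much does this feature vary *within* a category?
within_category_var = np.mean([dogs.var(), sheep.var()])

# Now we see exactly one goat, with 4 legs. On its own, one example
# says nothing about variability; borrowing the pooled within-category
# variance lets us predict that nearly all goats have 4 legs.
goat_mean = 4.0
goat_std = np.sqrt(within_category_var)

print(f"next goat's legs: {goat_mean:.1f} +/- {goat_std:.2f}")
```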

Anyway I basically think of analogy as a way of doing hierarchical modeling. You're trying to understand some situation X, and you identify some other situation Y, and then you can draw conclusions about X based on your knowledge of Y and on the similarities between the two situations. So yes, analogy is an imprecise reasoning mechanism that occasionally makes errors. But that's because analogy is part of the general class of inductive reasoning techniques.

Comment by lucidian on The Correct Use of Analogy · 2014-07-18T10:50:22.767Z · LW · GW

I'm also reading this book, and I'm actually finding it profoundly unimpressive. Basically it's a 500-page collection of examples, with very little theoretical content. The worst thing, though, is that its hypothesis seems to fundamentally undermine itself. Hofstadter and Sander claim that concepts and analogy are the same phenomenon. But they also say that concepts are very flexible, non-rigid things, and that we expand and contract their boundaries whenever it's convenient for reasoning, and that we do this by making analogies between the original concept (or its instances) and some new concept (or its instances). And I agree with that. But that means that it's essentially meaningless to claim that "concepts and analogy are the same thing". We can draw an analogy between the phenomenon we typically call "categorization" and the phenomenon we typically call "analogy", and I think it's very useful to do so. But deciding whether they're the same phenomenon is just a question of how fine-grained you want your categories to be, and that will depend on the specific reasoning task you're engaged in. So I'm just massively frustrated with the authors for not acknowledging the meaninglessness of their thesis. If they just said "it's useful to think of analogy and categorization as instances of a single phenomenon" then I'd totally agree. But they don't. They say that analogy and categorization are literally the same thing. Arggggghhhh.

(Metaphors We Live By, on the other hand, is one of my favorite books in the universe. It changed my life and I highly recommend it. (Edit: ok that sounds kind of exaggeraty. It changed my life because I study language and it gave me a totally different way of thinking about language.))

Comment by lucidian on Meetup : Sydney Social Meetup - April (Bridge walk) · 2014-04-10T00:26:11.433Z · LW · GW

Potluck means we bring our own food and then share it? Is there a list of what people are bringing, to avoid duplicates?

Comment by lucidian on Meetup : Sydney Meetup - March · 2014-03-25T15:50:40.872Z · LW · GW

Oh hey, this is convenient, I just got to Sydney yesterday and you guys have a meetup tonight. =) I'll probably attend. (I'm in town for three months, visiting from the United States.)

I have an ulterior motive for attending: I am looking for housing near Macquarie University for the next three months. I don't suppose anyone here has a room for rent, or knows of a good place to stay? (Sorry if this is the wrong place to ask about such things!)

Comment by lucidian on Parenting versus career choice thinking in teenagers · 2014-03-17T22:07:25.158Z · LW · GW

That's what I'm wondering.

Comment by lucidian on Parenting versus career choice thinking in teenagers · 2014-03-17T21:06:47.857Z · LW · GW

Sure, but that understanding is very specific to our culture. It's only recently that we've come to see procreation as "recreation" - something unnecessary that we do for personal fulfillment.

Many people don't hold jobs just to avoid being poor; holding a job is also a duty to society. If you can't support yourself, then you're a burden on society and its infrastructure.

Similarly, having children was once thought of as a duty to society. I read an article about this recently: http://www.artofmanliness.com/2014/03/03/the-3-ps-of-manhood-procreate/

Anyway, my point is, our idea that career is necessary but children are not is culture-specific.

Comment by lucidian on Friendly AI ideas needed: how would you ban porn? · 2014-03-17T20:58:09.598Z · LW · GW

To construct a friendly AI, you need to be able to make vague concepts crystal clear, cutting reality at the joints when those joints are obscure and fractal - and then implement a system that implements that cut.

Strongly disagree. The whole point of Bayesian reasoning is that it allows us to deal with uncertainty. And one huge source of uncertainty is that we don't have precise understandings of the concepts we use. When we first learn a new concept, we have a ton of uncertainty about its location in thingspace. As we collect more data (either through direct observation or indirectly through communication with other humans), we are able to decrease that uncertainty, but it never goes away completely. An AI which uses human concepts will have to be able to deal with concept-uncertainty and the complications that arise as a result.
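As a toy sketch of what concept-uncertainty could look like computationally (my own made-up setup, with invented numbers): represent a concept as a boundary in a 1-D thingspace, keep a posterior over where the boundary lies, and let labeled edge cases shrink - but never eliminate - that uncertainty.

```python
import numpy as np

# 1-D thingspace: the concept is "everything above some boundary theta",
# but the agent is uncertain where theta actually lies.
grid = np.linspace(0.0, 1.0, 201)                # candidate boundaries
posterior = np.full_like(grid, 1.0 / len(grid))  # flat prior over theta

def update(x, is_member, noise=0.05):
    """Condition on a human labeling item x, allowing some label noise."""
    global posterior
    p_member = np.where(grid <= x, 1.0 - noise, noise)  # P(label=1 | theta)
    likelihood = p_member if is_member else 1.0 - p_member
    posterior = posterior * likelihood
    posterior /= posterior.sum()

# Each negotiated edge case narrows the posterior a little.
for x, label in [(0.7, True), (0.4, False), (0.55, True), (0.5, False)]:
    update(x, label)

mean = np.sum(posterior * grid)
std = np.sqrt(np.sum(posterior * grid ** 2) - mean ** 2)
print(f"boundary estimate: {mean:.2f} +/- {std:.2f}")  # fuzzy, just less so
```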

The fact that humans can't always agree with each other on what constitutes porn vs. erotica demonstrates that we don't all carve reality up in the same places (and therefore there's no "objective" definition of porn). The fact that individual humans often have trouble classifying edge cases demonstrates that even when you look at a single person's concept, it will still contain some uncertainty. The more we discuss and negotiate the meanings of concepts, the less fuzzy the boundaries will become, but we can't remove the fuzziness completely. We can write out a legal definition of porn, but it won't necessarily correspond to the black-box classifiers that real people are using. And concepts change - what we think of as porn might be classified differently in 100 years. An AI can't just find a single carving of reality and stick with it; the AI needs to adapt its knowledge as the concepts mutate.

So I'm pretty sure that what you're asking is impossible. The concept-boundaries in thingspace remain fuzzy until humans negotiate them by discussing specific edge cases. (And even then, they are still fuzzy, just slightly less so.) So there's no way to find the concept boundaries without asking people about them; it's the interaction between human decision makers that defines the concept in the first place.

Comment by lucidian on Parenting versus career choice thinking in teenagers · 2014-03-14T17:58:04.138Z · LW · GW

I can't help but think that some of this has to do with feminism, at least in the case of teenage girls. I hear a lot of people emphasizing that having children is a choice, and it's not for everyone. People are constantly saying things like "Having children is a huge responsibility and you have to think very carefully about whether you want to do it." The people saying this seem to have a sense that they're counterbalancing societal pressures that say everyone should have children, or that women should focus on raising kids instead of having a career.

It's interesting, though, that no one applies the same advice to careers. (At least, not in my demographic.) No one says "Following a career path is a huge responsibility, so think very carefully whether you want to do it." A lot of people say "think very carefully about which career you want" but not "think carefully about whether you want a career at all".

I wonder, also, if there are gender differences. Do parents talk to male teenagers about their careers, and female teenagers about their future children, or anything like that?

Comment by lucidian on Open Thread: March 4 - 10 · 2014-03-05T04:52:40.752Z · LW · GW

Thanks!

Comment by lucidian on Open Thread: March 4 - 10 · 2014-03-05T03:47:38.778Z · LW · GW

Cog sci question about how words are organized in our minds.

So, I'm a native English speaker, and for the last ~1.5 years, I've been studying Finnish as a second language. I was making very slow progress on vocabulary, though, so a couple days ago I downloaded Anki and moved all my vocab lists over to there. These vocab lists basically just contained random words I had encountered on the internet and felt like writing down; a lot of them were for abstract concepts and random things that probably won't come up in conversation, like "archipelago" (the Finnish word is "saaristo", if anyone cares). Anyway, the point is that I am not trying to learn the vocabulary in any sensible order, I'm just shoving random words into my brain.

While studying today, I noticed that I was having a lot more trouble with certain words than with others, and I started to wonder why, and what implications this has for how words are organized in our minds, and whether anyone has done studies on this.

For instance, there seemed to be a lot of "hash collisions": vocabulary words that I kept confusing with one another. Some of these were clearly phonetic: hai (shark) and kai (probably). Another phonetic pair: toivottaa (to wish) and taivuttaa (to inflect a word). Some were a combination of phonetic and semantic: virhe (error), vihje (hint), vaihe (phase, stage), and vika (fault). Some of them I have no idea why I kept confusing: kertautua (to recur) and kuvastaa (to mirror, to reflect).

There were also a few words that I just had inordinate amounts of trouble remembering, and I don't know why: eksyä (to get lost), ehtiä (to arrive in time), löytää (to find), kyllästys (saturation), sisältää (to include), arvata (to guess). Aside from the last one, all of these have the letter ä in them, so maybe that has something to do with it. Also, the first two words don't have a single English verb as an equivalent.

There were also some words that were easier than I expected: vankkuri (wagon), saaristo (archipelago), and some more that I don't remember now because they quickly vanished from my deck. Both of these words are unusual but concrete concepts.

Do different people struggle with the same words when learning a language? Are some Finnish words just inherently "easy" or "hard" for English speakers to learn? If it's different for each person, how does the ease of learning certain words relate to a person's life experiences, interests, common thoughts, etc.?

What do hash collisions tell us about how words are organized in our minds? Can they tell us anything about the features we might be using to recognize words? For instance, English speakers often seem to have trouble remembering and distinguishing Chinese names; they all seem to "sound the same". Why does this happen? Here's a hypothesis: when we hear a word, based on its features, it is mapped to a specific part of a learned phonetic space before being used to access semantic content. Presumably we would learn this phonetic space to maximize the distance between words in a language, since the farther apart words are, the less chance they have of accessing the wrong semantic content. Maybe certain Finnish words sound the same to me because they map to nearby regions of my phonetic space, but a speaker of some other language wouldn't confuse these particular words because they'd have a different phonetic space? I'm just speculating wildly here.
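Out of curiosity, here's a crude way to probe the hash-collision idea on my own word list, using plain string similarity as a stand-in for whatever the learned phonetic space really is (so this is just a rough proxy, not a serious model):

```python
from difflib import SequenceMatcher
from itertools import combinations

words = ["hai", "kai", "toivottaa", "taivuttaa", "virhe", "vihje",
         "vaihe", "vika", "kertautua", "kuvastaa", "saaristo", "vankkuri"]

def similarity(a, b):
    # 0..1 ratio of matching characters; a stand-in for phonetic distance.
    return SequenceMatcher(None, a, b).ratio()

# Flag pairs that land suspiciously close together.
for a, b in combinations(words, 2):
    s = similarity(a, b)
    if s > 0.6:
        print(f"{a} ~ {b}: {s:.2f}")
```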

I'd be interested to hear everyone else's vocab-learning experiences and crazy hypotheses for what's going on. Also, does anyone know any actual research that's been done on this stuff?

Comment by lucidian on The Rationality Wars · 2014-03-03T23:51:58.151Z · LW · GW

Hmm. If you want to know how Bayesian models of cognition work, this paper might be a good place to start, but I haven't read it yet: "Bayesian Models of Cognition", by Griffiths, Kemp, and Tenenbaum.

I'm taking a philosophy class right now on Bayesian models of cognition, and we've read a few papers critiquing Bayesian approaches: "Bayesian Fundamentalism or Enlightenment?", by Jones and Love, and "Bayesian Just-So Stories in Psychology and Neuroscience", by Bowers and Davis. Iirc, it's the latter that discusses the unfalsifiability of the Bayesian approach.

Comment by lucidian on The Rationality Wars · 2014-02-28T15:24:08.252Z · LW · GW

It might be worth noting that Bayesian models of cognition have played a big role in the "rationality wars" lately. The idea is that if humans are basically rational, their behaviors will resemble the output of a Bayesian model. Since human behavior really does match the behavior of a Bayesian model in a lot of cases, people argue that humans really are rational. (There has been plenty of criticism of this approach, for instance that there are so many different Bayesian models in the world that one is sure to match the data, and thus the whole Bayesian approach to showing that humans are rational is unfalsifiable and amounts to overfitting.)

If you are interested in Bayesian models of cognition I recommend the work of Josh Tenenbaum and Tom Griffiths, among others.

Comment by lucidian on Is love a good idea? · 2014-02-23T19:29:45.132Z · LW · GW

This description/advice is awesome, and I mostly agree, but I think it presents an overly uniform impression of what love is like. I've been in Mature Adult Love multiple times, and the feelings involved have been different every time. I wouldn't necessarily reject your division into obsession, closeness, and sexual desire, but I think maybe there are different kinds (or components) of closeness, such as affection, understanding, appreciation, loyalty, etc., and any friendship or relationship will have these in differing degrees. For instance, for a lot of people, family love seems to involve a lot of loyalty but not as much understanding.

Comment by lucidian on A defense of Senexism (Deathism) · 2014-02-16T21:08:35.883Z · LW · GW

Hmm, I can see arguments for and against calling computationalism a form of dualism. I don't think it matters much, so I'll accept your claim that it's not.

As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro's book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important, that it's a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.

I agree that embodiment could be simulated. But I don't like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.

Comment by lucidian on A defense of Senexism (Deathism) · 2014-02-16T20:11:09.904Z · LW · GW

Ah. I'm not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)

I mean, I don't necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I just am not sure how it could be tested scientifically.

Comment by lucidian on A defense of Senexism (Deathism) · 2014-02-16T20:07:17.520Z · LW · GW

Hmm, I'll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people's preferences, so that any drop in predictive power might be worth it. But I'm not sure I've seen evidence in either direction; I just assumed it based on analogy and priors.

As for why you should care, I don't think you should, necessarily, if you don't already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.

Comment by lucidian on A defense of Senexism (Deathism) · 2014-02-16T19:48:20.038Z · LW · GW

What does it mean to benefit a person, apart from benefits to the individual cells in that person's body? I don't think it's unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.

Comment by lucidian on A defense of Senexism (Deathism) · 2014-02-16T19:46:14.010Z · LW · GW

Thanks for this post. I basically agree with you, and it's very nice to see this here, given how one-sided LW's discussion on death usually is.

I agree with you that the death of individual humans is important for the societal superorganism because it keeps us from stagnating. But even if that weren't true, I would still strongly believe in the value of accepting death, for pretty much exactly the reasons you mentioned. Like you, I also suspect that modern society's sheltering, both of children and adults, is leading to our obsession with preventing death and our excessive risk aversion, and I think that in order to lead emotionally healthy lives, we need to accept risk of failure, pain, and even death. Based on experience and things I've read, I suspect we all have much deeper reserves of strength than we realize, but this strength is only called on in truly dire circumstances, because it's so costly to use. If we never put ourselves into these dire circumstances, we will never accomplish the extraordinary. And if we're afraid to put ourselves in such circumstances, then our personal growth will be stunted by fear.

I say this as someone who was raised by very risk-averse parents who always focused on the worst-case scenario. (For instance, when I was about seven, I was scratching a mosquito bite, and my dad said to me "you shouldn't scratch mosquito bites, because one time your grandfather was cleaning the drain, and he scratched a mosquito bite with drain gunk on his hand, and his arm swelled up black and he had to go to the hospital".) As a kid I was terrified of doing much of anything. It wasn't until my late teens and early twenties that I started to learn how to accept risk, uncertainty, and even death. Learning to accept these things gave me huge emotional benefits - I felt light and free, like a weight had been lifted. Once I had accepted risk, I spent a summer traveling, and went on adventures that everyone told me were terrible ideas, but I returned from them intact and now look back on that summer as the best time in my entire life. I really like the saying that "the coward dies a thousand deaths, the brave man only one". It fits very well with my own experience.

I'm hesitant to say that death is objectively "good" or "bad". (I might even classify the question as meaningless.) It seems like, as technology improves, we will inevitably use it to forestall or even prevent death. Should this happen, I think it will be very important to accept the lack of death, just as now it's important to accept death. And I'm not really opposed to all forms of anti-deathism; I occasionally hear people say things like "Life is so much fun, why wouldn't I want to keep doing it forever?". That doesn't seem especially problematic to me, because it's not driven by fear. What I object to is this idea that death is the worst thing ever, and obviously anyone who is rational would put a lot of money and effort into preventing it, so anyone who doesn't is just failing to follow their beliefs and desires to the logical conclusion. So it's really nice to see this post here, amidst the usual anti-deathism. Thanks again for writing it.

Comment by lucidian on Thoughts on Death · 2014-02-15T21:41:58.028Z · LW · GW

What would it mean to examine this issue dispassionately? From a utilitarian perspective, it seems like choosing between deathism and anti-deathism is a matter of computing the utility of each, and then choosing the one with the higher utility. I assume that a substantial portion of the negative utility surrounding death comes from the pain it causes to close family members and friends. Without having experienced such a thing oneself, it seems difficult to estimate exactly how much negative utility death brings.

(That said, I also strongly suspect that cultural views on death play a big role in determining how much negative utility there will be.)

Comment by lucidian on How to not be a fatalist? Need help from people who care about true beliefs. · 2013-12-08T02:05:05.681Z · LW · GW

I wish I could upvote this comment more than once. This is something I've struggled with a lot over the past few months: I know that my opinions/decisions/feelings are probably influenced by these physiological/psychological things more than by my beliefs/worldview/rational arguments, and the best way to gain mental stability would be to do more yoga (since in my experience, this always works). Yet I've had trouble shaking my attachment to philosophical justifications. There's something rather terrifying about methods (yoga, narrative, etc.) that work on the subconscious, because they imply a frightening lack of control over our own lives (at least if one equates the self with the conscious mind). Particularly frightening to me has been the idea that doing yoga or meditation might change my goals, especially since the teachers of these techniques always seem to wrap the techniques in some worldview or other that I may dislike. Therefore, if I really believe in my goals, it is in my interest not to do these things, even though my current state of (lack of) mental health also prevents me from accomplishing my goals. But I do want to be mentally healthy, so I spent months trying to come up with some philosophical justification for doing yoga that I could defend to myself in terms of my current belief system.

Earlier this week, though, some switch flipped in me and I realized that, in my current state of mental health, I was definitely not living my life in accordance with my values (thanks, travel, for shaking me out of fixed thought-patterns!). I did some yoga and immediately felt better. Now I think I'm over this obsession with philosophical justifications, and I'm very happy about it, but damn, it took a long time to get there. The silly thing is that I've been through this internal debate a million times ("seek out philosophical justifications, which probably don't exist in a form that will satisfy my extreme skepticism and ability to deconstruct everything" vs. "trust intuition because it is the only viable option in the absence of philosophical justifications; also, do more yoga"). Someday I'll just settle on the latter and stop getting in arguments with myself.

Also, sorry if this comment is completely off-topic; it's just something I've been thinking about a lot.

Comment by lucidian on Probability, knowledge, and meta-probability · 2013-09-15T23:47:41.918Z · LW · GW

I was wondering this too. I haven't looked at this A_p distribution yet (nor have I read all the comments here), but having distributions over distributions is, like, the core of Bayesian methods in machine learning. You don't just keep a single estimate of the probability; you keep a distribution over possible probabilities, exactly like David is saying. I don't even know how updating your probability distribution in light of new evidence (aka a "Bayesian update") would work without this.
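For anyone who wants the concrete version, here's a minimal sketch in Python of the standard Beta-Bernoulli setup (made-up data, obviously): instead of a point estimate of a coin's bias p, you keep a whole distribution over p, and a Bayesian update just reshapes that distribution.

```python
from scipy.stats import beta

a, b = 1, 1                           # Beta(1,1): uniform prior over p
flips = [1, 1, 0, 1, 0, 1, 1, 1]      # heads=1, tails=0

for flip in flips:
    a, b = a + flip, b + (1 - flip)   # conjugate update: still a Beta

posterior = beta(a, b)
print(f"posterior mean of p: {posterior.mean():.3f}")
print(f"posterior std of p:  {posterior.std():.3f}")  # uncertainty about p itself
```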

Am I missing something about David's post? I did go through it rather quickly.

Comment by lucidian on Engaging Intellectual Elites at Less Wrong · 2013-08-16T20:06:29.868Z · LW · GW

Forgive me, but the premise of this post seems unbelievably arrogant. You are interested in communicating with "intellectual elites"; these people have their own communities and channels of communication. Instead of asking what those channels are and how you can become part of them, you instead ask how you can lure those people away from their communities, so that they'll devote their limited free time to posting on LW instead.

I'm in academia (not an "intellectual elite", just a lowly grad student), and I've often felt torn between my allegiances to the academic community vs. the LessWrong community. In part, the conflict exists because LessWrong frames itself as an alternative to academia, as better than academia, a place where the true intellectuals can congregate, free from the constraints of the system of academic credibility, which unfairly penalizes autodidacts, or something. Academia has its problems, of course, and I agree with some of the LessWrong criticisms of it. But academia does have higher standards of rigor: peer review, actual empirical investigation of phenomena instead of armchair speculation based on the contents of pop science books, and so on. Real scientific investigation is hard work; the average LW commenter seems too plagued by akrasia to put in the long hours that science requires.

So an academic might look at LW and see a bunch of amateurs and slackers; he might view autodidacts as people who demand that things always be their way and refuse to cooperate productively with a larger system. (Such cooperation is necessary because the scientific problems we face are too vast for any individual to make progress on his own; collaboration is essential.) I'm not making all this up; I once heard a professor say that autodidacts often make poor grad students because they have no discipline, flitting back and forth between whatever topics catch their eye, and lacking the ability to focus on a coherent program of study.

Anyway, I just figured I'd point out what this post looks like from within academia. LessWrong has repeatedly rejected academia; now, finally, you are saying something that could be interpreted as "actually, some academics might be worth talking to". But instead of conceding that academia might have some advantages over LW and thus trying to communicate with academics within their system, you proclaim LessWrong to be "the highest-quality relatively-general-interest forum on the web" (which, to me, is obviously false) and then you ask actual accomplished intellectuals to spend their time conversing with a bunch of intelligent-but-undereducated twenty-somethings who nonetheless think they know everything. I say that if members of LW want to communicate with intellectual elites, they should go to a university and do it there. (Though I'm not sure what to recommend for people who have graduated from college already; I'm going into academia so that I don't have to leave the intellectually stimulating university environment.)

I realize that this comment is awfully arrogant, especially for something that's accusing you of arrogance. And I realize that you are trying to engage with the academic system by publishing papers in real academic journals. I just think it's unreasonable to assume that "intellectual elites" (both inside and outside of academia) would care to spend time on LW, or that it would be good for those people if they did.

Comment by lucidian on Who are some of the best writers in history? · 2013-08-10T15:30:07.416Z · LW · GW

Who are some of the best writers in the history of civilization?

Different writers have such different styles that I'm not sure it's possible to measure them all on a simple linear scale from "bad writing" to "good writing". (Or rather, of course it's possible, but I think it reduces the dimensionality so much that the answer is no longer useful.)

If I were to construct such a linear scale, I might do so by asking "How well does this writer's style serve his goals?" Or maybe "How well does this writer's style match his content?" For instance, many blogs seem to be optimized for quick readability, since most people are unwilling to devote too much time to reading a blog post. On the other hand, some academic writing seems optimized for a certain kind of eloquence and formality.

I guess what I'm trying to say is that you're asking the wrong question. Don't ask "What makes a piece of writing good?". Ask "How does the structure of this piece of writing lead to the effect it has on the reader?". The closer you come to answering this question, the easier it will be to design a structure that serves your particular writing needs.

Comment by lucidian on Suggestions for Rationality Blogs in the Sidebar · 2013-06-28T11:11:17.477Z · LW · GW

Do people here read Ribbonfarm?

Comment by lucidian on Useful Concepts Repository · 2013-06-28T11:07:41.505Z · LW · GW

The tradeoff between efficiency and accuracy. It's essential for computational modeling, but it also comes up constantly in my daily life. It keeps me from being so much of a perfectionist that I never finish anything, for instance.

Comment by lucidian on Life hack request: I want to want to work. · 2013-06-13T03:30:32.353Z · LW · GW

I cannot agree with this more strongly. I was burnt out for a year, and I've only just begun to recover over the last month or two. But one thing that sped my recovery greatly over the last few weeks was to stop worrying about burnout. Every time I sat down to work, I would gauge my wanting-to-work-ness. When I inevitably found it lacking, I would go off on a thought spiral asking "why don't I like working? how can I make myself like working?" which of course distracted me from doing the actual work. Also, the constant worry about my burnout surely contributed to depression, which then fed back into burnout....

It took me a really long time to get rid of these thoughts, not because I have trouble purging unwanted thoughts (this is something I have extensive practice in), but because they didn't seem unwanted. They seemed quite important! Burnout was the biggest problem in my life, so it seemed only natural that I should think about it all the time. I would think to myself, "I have to fix burnout! I must constantly try to optimize everything related to this! Maybe if I rearrange the desks in my office I won't be burnt out anymore." I thought, for a long time, that this was "optimization" and "problem solving". It took a depressingly long time for me to identify it for what it really was, which is just plain old stress and worry.

Once I stopped worrying about my inability to work, it became a lot easier to work.

Of course, there's some danger here - I got rid of the worry-thoughts after I had already started to recover from burnout. They weren't necessary, and my desire to work could just take over and make me work. But if you really have no desire to work, then erasing such thoughts could just lead to utter blissful unproductivity.

Comment by lucidian on Epistemic and Instrumental Tradeoffs · 2013-05-21T06:40:49.093Z · LW · GW

There are also things which are bad to learn for epistemic rationality reasons.

Sampling bias is an obvious case of this. Suppose you want to learn about the demographics of city X. Maybe half of the Xians have black hair, and the other half have blue hair. If you are introduced to 5 blue-haired Xians but no black-haired Xians, you might infer that all or most Xians have blue hair. That is a pretty obvious case of sampling bias. I guess what I'm trying to get at is that learning a few true facts (Xian1 has blue hair, Xian2 has blue hair, ... , Xian5 has blue hair) may lead you to make incorrect inferences later on (all Xians have blue hair).
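A toy simulation of the Xian example, just to make the mechanism explicit (everything here is invented, of course):

```python
import random

random.seed(0)
population = ["blue"] * 5000 + ["black"] * 5000   # the truth: 50/50

# A random sample gets you roughly the right answer...
fair_sample = random.sample(population, 100)
print("random sample, % blue:", fair_sample.count("blue"))

# ...but a filtered introduction process teaches you only true facts
# ("Xian1 has blue hair", ...) that support a false generalization.
biased_sample = [x for x in population if x == "blue"][:5]
print("filtered sample, % blue:",
      100 * biased_sample.count("blue") // len(biased_sample))
```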

The example you give, of debating being harmful to epistemic rationality, seems comparable to sampling bias, because you only hear good arguments for one side of the debate. So you learn a bunch of correct facts supporting position X, but no facts supporting position Y. Thus, your knowledge has increased (seemingly helpful to epistemic rationality), but leads to incorrect inferences (actually bad for epistemic rationality).

There's also the question of what to learn. You could spend all day reading celebrity magazines, and this would give you an increase in knowledge, but reading a math textbook would probably give you a bigger increase in knowledge (not to mention an increase in skills). (Two length-n sets of facts can, of course, increase your knowledge by different amounts. Information theory!)

Comment by lucidian on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' · 2013-05-17T21:02:12.562Z · LW · GW

This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn't call it "safe". These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.

All of our search results come filtered through google's algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what's on the web, and we're scarcely even conscious that the filter bubble exists. If you don't know about sampling bias, how can you correct for it?

With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we'll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I'm sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness - their own personal problems?

Google's continuous user feedback is one of the things that scares me most about its services. Take the search engine for example. When you're typing something into the search bar, google autocompletes - changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly - but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.

And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you're a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other user preferences. It will suggest similar wordings for all people, and suddenly, all essays will become that much more canned. (Certainly, such a tool could add a bit of randomness to the rewording-choice, but one has to be careful - introduce too much randomness and the quality decreases rapidly.)

I guess I'm just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way which least confuses the autocomplete system or the natural language understanding system. As computers become more omnipresent, people might switch to this way of speaking all the time, to make it easier for everyone's mobile devices to understand what they're saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it's hard for the computer to understand?

I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you'll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It's automatic. In a CS crowd, I'll use CS metaphors; in a non-CS crowd I won't. So I'm not opposed to changing the way I speak based on the context. I'm just specifically worried about the sort of speaking patterns NLP systems will force us into. I'm afraid they'll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that's what the system has gotten the most data on, and can respond best to).

Ok, I'm done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don't think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.

Comment by lucidian on Antijargon Project · 2013-05-05T21:18:54.047Z · LW · GW

Hmm, you're probably right. I guess I was thinking that quick heuristics (vocabulary choice, spelling ability, etc.) form a prior when you are evaluating the actual quality of the argument based on its contents, but evidence might be a better word.

Where is the line drawn between evidence and prior? If I'm evaluating a person's argument, and I know that he's made bad arguments in the past, is that knowledge prior or evidence?

Comment by lucidian on Antijargon Project · 2013-05-05T19:52:45.831Z · LW · GW

Unless the jargon perpetuates a false dichotomy, or otherwise obscures relevant content. In politics, those who think in terms of a black-and-white distinction between liberal and conservative may have a hard time understanding positions that fall in the middle (or defy the spectrum altogether). Or, on LessWrong, people often employ social-status-based explanations. We all have the jargon for that, so it's easy to think about and communicate, but focusing on status-motivations obscures people's other motivations.

(I was going to explain this in terms of dimensionality reduction, but then I thought better of using potentially-obscure machine learning jargon. =) )

Comment by lucidian on Antijargon Project · 2013-05-05T18:54:05.552Z · LW · GW

I agree with you that it's useful to optimize communication strategies for your audience. However, I don't think that always results in using shared jargon. Deliberately avoiding jargon can presumably provide new perspectives, or clarify issues and definitions in much the way that a rationalist taboo would.

Comment by lucidian on Antijargon Project · 2013-05-05T18:30:54.422Z · LW · GW

This is very related to something my friend pointed out a couple weeks ago. Jargon doesn't just make us less able to communicate with people from outside groups - it makes us less willing to communicate with them.

As truth-seeking rationalists, we should be interested in communicating with people who make good arguments, consider points carefully, etc. But I think we often judge someone's rationality based on jargon instead of the content of their message. If someone uses a lot of LessWrong jargon, it gives a prior that they are rational, which may bias us in favor of their arguments. If someone doesn't use any LW jargon (or worse, uses jargon from some other unrelated community), then it might give a prior that they're irrational, or won't have acquired the background concepts necessary for rational discussion. Then we'll be biased against their arguments. This contributes to LW becoming a filter bubble.

I think this is a very important bias to combat. Shared jargon reflects a shared conceptual system, and our conceptual systems constrain the sort of ideas that we can come up with. One of the best ways to get new ideas is to try understanding a different worldview, with a different collection of concepts and jargon. That worldview might be full of incorrect ideas, but it still broadens the range of ideas you can think about.

So, thanks for this post. =) I hope you will discuss the results of your attempt to speak without jargon.

Comment by lucidian on Open Thread, May 1-14, 2013 · 2013-05-05T18:17:43.926Z · LW · GW

I think it's a grave mistake to equate self-esteem with social status. Self-esteem is an internal judgment of self-worth; social status is an external judgment of self-worth. By conflating the two, you surrender all control of your own self-worth to the vagaries of the slavering crowd.

Someone can have high self-esteem without high social status, and vice versa. In fact, I might expect someone with a strong internal sense of self-worth to be less interested in seeking high social status markers (like a fancy car, important career, etc.). When I say "a strong internal sense of self-worth", I guess I mean self-esteem that does not come from comparing oneself with others. It's the difference between saying "I'm proud of myself because I coded this piece of software that works really well" and "I'm proud of myself because I'm a better programmer than Steve is."

From what I can tell, the internal kind of self-worth comes from having values, and sticking to them. So if I value honesty, hard work, ability to cook, etc., then I can be proud of myself for being an honest hard-working person who knows how to cook, regardless of whether anyone else shares these traits. Also, I think internal self-worth comes from completing one's goals, or contributing something useful to the world, both of which explain why someone can be proud of coding a great piece of software.

(Sometimes I wonder whether virtue ethicists have more internal motivation/internal self-worth, while consequentialists have more external motivation/external self-worth.)

(It seems that people of my generation (I'm 23) have less internal self-worth than people have had in the past. If this is true, then I'm inclined to blame consumerist culture and the ubiquity of social media, but I dunno, maybe I'm just a proto-curmudgeon.)

Anyway, your theory about there being a "high self-esteem algorithm" and a "low self-esteem algorithm" seems like a reasonable enough model. And the use of these algorithms may very well correlate with social status. I just don't think the relationship is at all deterministic, and an individual can work to decouple them in his own life by developing an internal sense of self-worth.

I don't think this phenomenon is unique to status or self-esteem though. I suspect that people have different cognitive algorithms for all the roles they play in society. I have a different behavior-algorithm when interacting with a significant other than I do when interacting with my coworkers, for instance. Of course status/social dominance/etc. has a huge impact on which role you'll play, but it's not the only thing influencing it.

I think people are probably most comfortable in social roles which feel "in line" with (one of) their identities.

Last thing: I think that social status should not be equated with a direct dominance relationship between two people. Social status seems like a more pervasive effect across relationships, while direct social dominance might play a bigger role in deciding which algorithm to use. If someone big and threatening gives you an order (like "hand me your wallet"), it might activate the "Do what you're told" algorithm regardless of your general social status.

Social status would seem to correlate with how frequently you are the dominant one in social interactions. But it's not always the case. A personal servant of the king might have very high status in society, but always follow the "Do what you're told" algorithm when he's at work taking orders from the king.

(As a last note, this is why I'm really concerned about the shift from traditional manufacturing jobs to service industry jobs. Both "car mechanic" and "fast food employee" are jobs associated with a lower socioeconomic class, but the car mechanic doesn't spend all day being subservient to customers.)

Comment by lucidian on LW Women- Female privilege · 2013-05-05T16:50:57.860Z · LW · GW

Regarding PUA jargon...

I'm female and submissive and I've always been attracted to guys about eight years older than me. (When I say "always", I mean since my first serious crush at age 13.) My parents are feminists, they're the same age as each other, and they strongly believe in power equality in relationships. Thus, growing up, I always thought there was something terribly wrong with me.

In college, I learned about PUA and alpha males and all of that. Suddenly, here was an ideological system that treated my desires as natural instead of perverted. I was immediately entranced, and began to read PUA blogs very seriously. I saw a lot of truth in the PUAs' discussions of male-female interactions. From what I could tell, feminism was just another optimistic belief system built on a very common but very rotten foundation: the idea that humans are rational creatures, that our rationality elevates us high above our brutal and bestial forebears. I saw that while we may have intelligence and cunning far exceeding that of our ancestors, we often use that intelligence to serve animal aims - for instance, procreation. I had many supposedly just-friends relationships with guys, but I saw that sexual desire often lay coiled beneath our calm and innocent intellectual discussions.

So to me, PUA seemed much more honest than the rest of the world of ideas, and much more correct about the facts of human nature (at least as it exists in this society). This gave me a pretty high prior for PUA being right about things, and so I believed their essentialist and evo-psych explanations. These days, I'm less certain about PUA claims that women are inherently submissive, or more at the mercy of their emotions than men - but I still see why these ideas seem plausible and appealing. They fit the data pretty well, and give a nice generative model for it and everything. =P

...I'm not really sure why I'm telling this story. When I read this post, I thought "oh, interesting, submitter E seems to have a lot in common with me - and she also felt drawn to PUA ideas". So I guess I wanted to give some perspective on why that might be. I do agree with you, Multiheaded, that it's dangerous to equate one's own sexual preferences with gender essentialism, evo-psych arguments, etc.

Comment by lucidian on LW Women- Female privilege · 2013-05-05T13:00:24.918Z · LW · GW

I am female, and (to a large extent) my experience agrees with Submitter E's. I'm glad to see this posted here, because after reading the other LW and Women posts, I had begun to suspect that I was a complete outlier, and that I couldn't use my own experiences as a reference point for other women's at all.

Comment by lucidian on Grad Student Advice Repository · 2013-04-14T22:13:45.725Z · LW · GW

Do you know of any features for predicting who will recover from burnout, and who won't?

Comment by lucidian on Open Thread, April 1-15, 2013 · 2013-04-12T17:33:51.341Z · LW · GW

You may be interested in the literature on "concept learning", a topic in computational cognitive science. Researchers in this field have sought to formalize the notion of a concept, and to develop methods for learning these concepts from data. (The concepts learned will depend on which specific data the agent encounters, and so this captures some of the subjectivity you are looking for.)

In this literature, concepts are usually treated as probability distributions over objects in the world. If you google "concept learning" you should find some stuff.
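A toy illustration of that treatment (my own invented example, not from any particular paper): if a concept is a distribution over objects, then membership is graded, and which objects count as central depends on the data the learner happened to see.

```python
import math

# A learner's "dog" concept as an (unnormalized) Gaussian over one
# feature, say body length in meters, fit from whatever dogs they met.
concept_mean, concept_std = 0.6, 0.2

def membership(x):
    # Graded degree of membership in the concept, between 0 and 1.
    return math.exp(-0.5 * ((x - concept_mean) / concept_std) ** 2)

for name, size in [("chihuahua", 0.25), ("labrador", 0.65), ("pony", 1.6)]:
    print(f"{name}: {membership(size):.3f}")
```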

Comment by lucidian on LW Women: LW Online · 2013-02-16T09:28:01.935Z · LW · GW

This is one of the big reasons that niceness annoys me. I think I've developed a knee-jerk negative reaction to comments like "good job!" because I don't want to be manipulated by them. Even when the speaker is just trying to express gratitude, and has no knowledge of behaviorism, "good job!" annoys me. I think it's an issue of one-place vs. two-place predicates - I have no problem with people saying "I like that" or "I find that interesting".

If I let my emotional system process both statements without filtering, I think "good job!" actually does reinforce the behavior regardless, while "I like that!" will depend on my relation to the speaker. I know that my emotional system is susceptible to these behaviorist things, and I think that's part of why I've developed a negative reaction to them - to avoid letting them through to a place where they can influence me.

Another reason niceness annoys me is that it satisfies my craving for recognition and approval, but it's like empty calories. If I can get a quick fix of approval by posting a cat picture on facebook, then it will decrease my motivation to actually accomplish anything I consider worthwhile. This is one of the many reasons I avoid social media and think it encourages complacence. (Also, I get the impression that constant exposure to social media is decreasing people's internal motivation and increasing their external motivation. But I'm not sure if I believe this because it's true, or because I enjoy being a curmudgeon.)

Comment by lucidian on LW Women: LW Online · 2013-02-15T08:13:50.192Z · LW · GW

Hmm, so I'm thinking about smileys and exclamation points now. I don't think they just demonstrate friendliness - I think they also connote femininity. I used to use them all the time on IRC, until I realized that the only people who did so were female, or were guys who struck me as more feminine as a result. I didn't want to be conspicuously feminine on IRC, so I stopped using smileys/exclamation points there.

It never bothered me when other people didn't use smileys/exclamations. But when I stopped using them on IRC, everything I wrote sounded cold or rude. I felt like I should put the smileys in to assure people I was happy and having a good time (just as I always smile in person so that people will know I'm enjoying myself). But no one else was using them, and their comments didn't strike me as unfriendly, so I stuck with leaving them out.

Until I saw this comment, I had forgotten that I had adjusted myself in this way! In light of this, I may have to take back some of my earlier comments, as it really does seem like culturally enforced gender differences are getting in the way here, and that LW has little tolerance for people who sound feminine (perhaps because of an association between femininity and irrationality, which I'll admit to being guilty of myself).

Do other people associate smileys and exclamations with femininity, or is it just me?

(EDIT: Now I'm thinking that smileys vs. lack thereof might also be a formality thing. I also limit the number of smileys/exclamations that I put in work emails, because they seem overly friendly/informal for a professional context. LW feels more like a professional environment than a social gathering to me, I think.)

Comment by lucidian on LW Women: LW Online · 2013-02-15T07:51:28.608Z · LW · GW

Hmm, I definitely see where you're coming from, and I don't (usually) want my comments to hurt anyone. If my comments were consistently upsetting people when I was just trying to have a normal conversation, then I would want to know about this and fix it - both because I actually do care about people's feelings, and because I don't want to prevent every single interesting person from conversing with me. It would take a lot of work, and it would go against my default conversational style, but it would be worth it in the long run.

However, it sounds more like there's a cultural/gender difference on LW. That is, different people prefer different paddings of niceness. Currently, the community has a low-niceness-padding standard, which is great for people who prefer that style of interaction, but which sucks for people who would prefer more niceness-padding, and those people are either driven away from the community or spend much of their time here feeling alienated and upset.

So the question here is, should we change LW culture? I personally would prefer we didn't, because I like the culture we have now. I don't support rationalist evangelism, and I'm not bothered by the gender imbalance, so I don't feel a need to lure more women onto LW by changing the culture. Is this unfair to rationalist women who would like to participate in LW discussions, but are put off by the lack of friendliness? Yes, it is. But similarly, if we encouraged more niceness padding, this would be unfair to the people who prefer a more bare-bones style of interaction.

(It could be that it's easier to adjust in one direction - maybe it's easier to grow accustomed to niceness padding than to the lack thereof. In that case, it might be worth the overhead.)

Regarding your example...

> I feel like it doesn't take away from the discussion to say "Oh sorry! I really meant [this]" instead of "I said [this] not [that]," which sounds pretty unfriendly on the internet.

See, I would have classified this as "disrespect" rather than "unfriendliness". In the first version, the person is admitting that he/she was unclear, and is trying to correct it - a staple of intellectual discussion, which often serves to elucidate things through careful analysis. In the second version, the person is saying "I'm right and you're wrong", which means that the discussion has devolved into an argument, instead of two people working together towards greater understanding.

What about these examples?

"Oh sorry! I really meant [this]" (your example)

"Good point; let me clarify. [Clarification.]"

"Oops, let me clarify. [Clarification.]"

"Clarification: [clarification]"

I would tend towards the second or third, personally. The first has "sorry" in it, which seems unnecessarily apologetic to me. People frequently state things unclearly and then have to elucidate them; it's part of the normal discussion process, and not something to be sorry for. The fourth sounds unnecessarily abrupt to me (though I imagine it'd depend on the context). I'm curious what other people think w.r.t. these examples.

Comment by lucidian on LW Women: LW Online · 2013-02-15T05:14:27.822Z · LW · GW

I agree with your second paragraph completely, and I would be averse to comments whose only content was "niceness". I'm on LW for intellectual discussions, not for feel-goodism and self-esteem boosts.

I think it's worth distinguishing niceness from respect here. I define niceness to be actions done with the intention of making someone feel good about him/herself. Respect, on the other hand, is an appreciation for another person's viewpoint and intelligence. Respect is saying "We disagree on topic X, but I acknowledge that you are intelligent, you have thought about X in detail, and you have constructed sophisticated arguments which took me some thought to refute. For these reasons, even though we disagree, I consider you a worthwhile conversation-partner."

When I began this comment with "I agree with your second paragraph", I wasn't saying it to be nice. I wasn't trying to give fubarobfusco warm fuzzy happiness-feelings. I was saying it because I respect fubarobfusco's thoughts on this matter, to the point where I wanted to comment and add my own elaborations to the discussion.

There's not much purpose to engaging in an intellectual discussion with someone who doesn't respect your ideas. If they're not even going to listen to what you have to say, or consider that you might be correct, then what's the point? So I think respect is integral to intellectual discussions, and therefore it's worthwhile to demonstrate it verbally in comments. But I consider this completely separate from complimenting people for the sake of being nice.

It sounds like part of what Submitter B is complaining about is lack of respect. The guys she dated didn't respect her intellect enough to believe assertions she made about her internal experiences. I suspect this is a dearth of respect that no quantity of friendliness can remedy.

(For what it's worth, I'm female, albeit a rather distant outlier. I'd emphatically prefer that "niceness" not become a community norm. For me, it takes a lot of mental effort to be nice to people (because I have to focus on my internal model of their feelings, as well as on the discussion at hand), and I get annoyed when people are gratuitously nice to me. This post makes me wonder if I'm unusual among LW females in holding this opinion.)