Comment by jessica-liu-taylor on No, it's not The Incentives—it's you · 2019-06-15T18:50:34.446Z · score: 2 (1 votes) · LW · GW

It's an assumption of a pact among fraudsters (a fraud ring). I'll cover for your lies if you cover for mine. It's a kind of peace treaty.

In the context of fraud rings being pervasive, it's valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while precommitting to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the truth coming out. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-09T20:39:46.357Z · score: 6 (3 votes) · LW · GW

An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.

It's possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.

It's possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn't just doing signalling, etc.

Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.

Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)

(Note, EA isn't only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc.; this is an important distinction)

It seems like the concept of "aesthetic identity movement" I'm using hasn't been communicated to you well; if you want to see where I'm coming from in more detail, read the following.

(no need to read all of these if it doesn't seem interesting, of course)

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-09T20:16:43.055Z · score: 6 (3 votes) · LW · GW

I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don't do that much public criticism of EA either, so this seems like a strange complaint about me regardless)

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-08T21:34:28.790Z · score: 18 (6 votes) · LW · GW

The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.

Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn't think that?)

Geeks, Mops, Sociopaths happened to the rationality community, not just EA.

So, in stating as though a fact about EA your personal impression of it based on Sarah’s blog post as if that means something unique about EA that isn’t true about other human communities, you’ve argued for too much.

I don't think it's unique! I think it's extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!

I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.

It's possible that I should feel more moral pressure than I currently do to actively say publicly (not just as comments on other people's posts) what's wrong about the current state of the rationality community. I've already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this.)

Comment by jessica-liu-taylor on Asymmetric Weapons Aren't Always on Your Side · 2019-06-08T20:59:19.767Z · score: 14 (5 votes) · LW · GW

Note, having industrial advantages is quite related to being "good" (in the sense of having productive coordination among individuals to produce usable material and informational goods). Having lots of manpower is also related to being "good" (in the sense of having a system that has enough buy-in from enough people that they will fight effectively on your side, and not depose you). These correlations aren't accidental, they're essential.

(It is, however, true that military power is not identical with goodness, and that there are "bad" ways of getting industry and manpower, although (I claim) their "badness" is essential to their disadvantages)

Comment by jessica-liu-taylor on Asymmetric Weapons Aren't Always on Your Side · 2019-06-08T20:50:35.381Z · score: 18 (6 votes) · LW · GW

He said:

Violence isn’t merely symmetric—it’s asymmetric in a bad direction, since fascists are better than violence than you.

So "Violence is asymmetric in favor of violence" is a misinterpretation; Davis is making a claim about it being asymmetric in a bad direction, and also claiming that fascists are better at violence than "you".

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-07T02:10:35.555Z · score: 4 (5 votes) · LW · GW

My comment was a response to Evan's, in which he said people are reacting emotionally based on identity. Evan was not explaining people's response by referring to actual flaws in Ben's argumentation, so your explanation is distinct from Evan's.

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

b) So, the fact that you associate these phrases with coalitional politics means Ben is attacking GiveWell? What? These phrases have denotative meanings! Those meanings are pretty easy to determine if you aren't willfully misinterpreting them! The fact that things with clear denotative meanings get interpreted as attacking people is at the core of the problem!

To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell, is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he's saying is true], i.e. the opposite of attacking them).

Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-07T01:01:01.164Z · score: 20 (6 votes) · LW · GW

And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of "why do so many things turn into aesthetic identity movements" is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.

Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.

Comment by jessica-liu-taylor on Drowning children are rare · 2019-06-07T00:26:51.268Z · score: 5 (7 votes) · LW · GW

So, EA largely isn't about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these). It's an aesthetic identity movement with GiveWell as a central node, one that claims credit for, literally, evaluating and acting towards the moral good. It's similar in this to most popular environmentalism, which claims credit for evaluating and acting towards the health of the planet, yet opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalist identity/aesthetic and Greenpeace is against it. This makes sense as an explanation of the sociological phenomenon, and it also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.

[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: "I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which their error is taking place is so grievous as to be objectionable in the context they’re presented in."]

Comment by jessica-liu-taylor on All knowledge is circularly justified · 2019-06-04T23:31:16.383Z · score: 14 (5 votes) · LW · GW

This seems like a restatement of Where Recursive Justification Hits Bottom.

Comment by jessica-liu-taylor on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-02T21:48:38.129Z · score: 2 (1 votes) · LW · GW

Seems like a non sequitur, what's the relevance?

Comment by jessica-liu-taylor on What is required to run a psychology study? · 2019-05-29T07:44:10.184Z · score: 20 (7 votes) · LW · GW

For studies in the US, see this flowchart.

If the human subjects research isn't supported by the US Department of Health and Human Services, and isn't supported by an institution that holds an FWA, then the research isn't covered by regulations on human subjects research.

At a cognitive science lab I worked in at Stanford, it was quite common to run studies using Mechanical Turk.

Comment by jessica-liu-taylor on Drowning children are rare · 2019-05-29T04:19:32.580Z · score: 5 (5 votes) · LW · GW

In your previous comment you're talking to Wei Dai, though. Do you think Wei Dai is going to misinterpret the werewolf concept in this manner? If so, why not link to the original post to counteract the possible misinterpretation, instead of implying that the werewolf frame itself is wrong?

(meta note: I'm worried here about the general pattern of people optimizing discourse for a nonspecific "public" assumed to be highly uninformed / willfully misinterpreting / etc., in a way that makes it impossible for specific, informed people (such as you and Wei Dai) to communicate in a nuanced, high-information fashion)

[EDIT: also note that the frame you objected to (the villagers vs. werewolves frame) contains important epistemic content that the "let's incentivize non-obfuscatory behavior" frame doesn't, as you agreed in your subsequent comment after I pointed it out. Which means I object even more to saying "the villagers/werewolf frame is bad", with the defense being that "people might misinterpret this", without offering a frame that contains the useful epistemic content of the misinterpretable frame]

Comment by jessica-liu-taylor on Drowning children are rare · 2019-05-29T03:23:11.164Z · score: 2 (1 votes) · LW · GW

But I think thinking in terms of villagers and werewolves leads you to ask the question ‘who is a werewolf’ moreso than ‘how do we systematically disincentivize obfuscatory or manipulative behavior’, which seems a more useful question.

Clearly, the second question is also useful, but there is little hope of understanding, much less effectively counteracting, obfuscatory behavior, unless at least some people can see it as it happens, i.e. detect who is (locally) acting like a werewolf. (Note that the same person can act more/less obfuscatory at different times, in different contexts, about different things, etc)

Comment by jessica-liu-taylor on Separation of Concerns · 2019-05-24T21:09:02.402Z · score: 4 (2 votes) · LW · GW

You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations.

Why doesn't conservation of expected evidence apply? (How could you expect thinking about something to predictably shift your belief?)
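(For reference, conservation of expected evidence in symbols: the prior is the expectation of the posterior,

$$P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E) \;=\; \mathbb{E}\big[P(H \mid E)\big],$$

so no amount of deliberation can be expected, in advance, to move your credence in a predictable direction.)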

Comment by jessica-liu-taylor on Does the Higgs-boson exist? · 2019-05-24T08:18:10.278Z · score: 2 (1 votes) · LW · GW

She does no such thing.

Technically, she isn't asserting nonrealism; she's asserting that the territory is unreferenceable, in saying that statements such as "quarks exist" don't refer to any physical reality, but instead refer to predictions and observations. But that's still a philosophical idea, and the rest of the comment applies.

She is doing physics, not linguistics or cognitive science or psychiatry.

She does physics in her work, and is doing philosophy in this article. Philosophical ideas can be examined philosophically. (I agree that she isn't examining them philosophically, at least not very well)

Pot. Kettle.

Can you be more specific? Are you saying that I'm doing bad philosophy while claiming not to be doing philosophy? If so, which philosophy is bad, specifically? (I definitely think of myself as doing philosophy, and have for most of my life)

Her philosophy (or non-philosophy) certainly works for her and for most physicists, look at all the successes in physics over the last century.

Do you have evidence for this? I think the most generative physicists, such as Einstein, thought of themselves as studying a mostly-external world, and attempting to discern the truth about it (see: Einstein's Philosophy, Richard Feynman's Philosophy of Science).

Comment by jessica-liu-taylor on Separation of Concerns · 2019-05-24T08:15:31.838Z · score: 12 (3 votes) · LW · GW

Considering a possibility doesn't automatically make you believe it. Why not think about the different possible Nash equilibria in order to select the best one?

Comment by jessica-liu-taylor on Discourse Norms: Justify or Retract Accusations · 2019-05-24T02:06:39.489Z · score: 34 (6 votes) · LW · GW

the consequences of a space being too negative are much more stifling to a community than the consequences of a space being too positive.

I don't agree with this. I've felt pretty silenced by people having high opinions of e.g. certain orgs and seeming actively uninterested in information indicating that such opinions are false. Which means it's harder for me to talk about what I actually think.

I anticipate much more negative social feedback for criticizing things people like, versus for praising things people don't like.

As far as I can tell, "well-calibrated" is actually optimal, and deviations from that are stifling, because they contribute to a sense that everyone is lying all the time, and you have to either join in the lies, stay quiet, or be a rebel.

Comment by jessica-liu-taylor on Separation of Concerns · 2019-05-24T01:37:38.092Z · score: 11 (3 votes) · LW · GW

Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.

This seems very clearly true, such that it seems strange to use "evolution produces individuals who, in some societies, don't seem to value separate epistemic concerns" as an argument.

Well-functioning societies have separation of concerns, such as into different professions (as Plato described in his Republic). Law necessarily involves separation of concerns as well (the court is determining whether someone broke a law, not just directly deciding what consequences they will suffer). Such systems are created sometimes, and they often degrade over time (often through being exploited), but can sometimes be repaired.

If you notice that most people's strategies implicitly don't value epistemic rationality as a separate concern, you can infer that there isn't currently a functioning concern-separating social structure that people buy into, and in particular that they don't believe that desirable rule of law has been achieved.

Comment by jessica-liu-taylor on Separation of Concerns · 2019-05-24T01:16:07.713Z · score: 10 (2 votes) · LW · GW

Since separation of concerns is obviously only applicable to bounded agents, it seems like someone who is clearly optimizing for epistemic rationality as a separate concern is vulnerable to being perceived as lacking the ability to optimize for instrumental rationality in an end-to-end way

No one believes anyone else to be an unbounded agent, so how is the concern with being perceived as a bounded agent relevant? A bounded agent can achieve greater epistemology and instrumental action by separating concerns, and there isn't a reachable upper bound.

Comment by jessica-liu-taylor on Separation of Concerns · 2019-05-24T01:11:36.275Z · score: 12 (3 votes) · LW · GW

I strongly agree that separation of concerns is critical, and especially the epistemic vs. instrumental separation of concerns.

There wouldn’t even be a concept of ‘belief’ or ‘claim’ if we didn’t separate out the idea of truth from all the other reasons one might believe/claim something, and optimize for it separately.

This doesn't seem quite right. Even if everyone's beliefs are trying to track reality, it's still important to distinguish what people believe from what is true (see: Sally-Anne test). Similarly for claims. (The connection to simulacra is pretty clear; there's a level-1 notion of a belief (i.e. a property of someone's world model, the thing controlling their anticipations and which they use to evaluate different actions), and also higher-level simulacra of level-1 beliefs)

Moreover, there isn't an obvious decision-theoretic reason why someone might not want to think about possibilities they don't want to come true (wouldn't you want to think about such possibilities, in order to understand and steer away from them?). So, such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one's life is going regardless of how well it is actually going.

Comment by jessica-liu-taylor on Does the Higgs-boson exist? · 2019-05-23T19:04:52.621Z · score: 30 (12 votes) · LW · GW

In this post she sums up beautifully what I and many physicists believe, and is vehemently opposed by the prevailing realist crowd here on LW.

She seems to vacillate between "realism is a philosophical idea" and "realism is false".

This is about realism being a philosophical idea:

And this is all well and fine, but realism is a philosophy. It’s a belief system, and science does not tell you whether it is correct.

And this is simply asserting nonrealism:

If you want to claim that the Higgs-boson does not exist, you have to demonstrate that the theory which contains the mathematical structure called “Higgs-boson” does not fit the data. Whether or not Higgs-bosons ever arrive in a detector is totally irrelevant.

So, she isn't making a coherent argument against realism; she says "it's philosophical" as if that were a counterargument (wat?).

The issue is, when she says something like "That is what we mean when we say 'quarks exist': We mean that the predictions obtained with the hypothesis agrees with observations," that is itself a philosophical idea, subject to philosophical analysis. (What does it mean for a statement to mean something? What's a prediction? What's an observation? How does this idea behave in unusual cases such as the person claiming there's an invisible pink dragon in their garage?) But she's trying to exclude philosophy from the domain of the conversation... which is inextricably philosophical.

This seems like another instance of "people who say they're not doing philosophy are in fact doing bad philosophy."

Comment by jessica-liu-taylor on Discourse Norms: Justify or Retract Accusations · 2019-05-22T02:56:16.568Z · score: 7 (4 votes) · LW · GW

Sure, this seems reasonable. I am also worried about content-free praise, but, independent of that, content-free criticism seems good to discourage.

Comment by jessica-liu-taylor on Discourse Norms: Justify or Retract Accusations · 2019-05-22T02:46:51.092Z · score: 12 (5 votes) · LW · GW

So, this depends on what is meant by "explain yourself if questioned". If I'm allowed to say "my research aesthetics say this project is useless, and I can point to a couple of details, but not enough to convince that many others", or "I had some negative experiences in relation to this organization that led me to believe that it's causing harm, and I can share a few details, but others are confidential", then fine. But such justification norms could (and probably should) naturally apply to praise as well.

Comment by jessica-liu-taylor on Discourse Norms: Justify or Retract Accusations · 2019-05-22T02:27:23.021Z · score: 41 (11 votes) · LW · GW

I don't want this norm to be adopted. Sometimes someone has personal information that someone did something, but doesn't have sufficient evidence to convince others of this. This is very likely when sexual assault happens, for example. It's also common in cases of e.g. abuse by employers toward employees. Also, some judgments of "this thing is bad" are based on intuitive senses (e.g. aesthetics) that, while often truth-tracking, are difficult to explain to those who don't have the same intuitive sense.

In cases like this, it's important for people to be able to state "I have information that leads me to believe X, and my saying this (and giving the details I can) might or might not be sufficient to convince you". Perhaps others will, upon hearing this, have more relevant information to add, eventually creating common knowledge; and they will also likely have more correct beliefs (and be able to make better decisions) in the meantime, before common knowledge is created.

Ben Hoffman has written on problems with holding criticism to a higher standard than praise:

The problem comes when this standard is applied to critics but not to supporters of EA organizations. This is effectively a tax on internal criticism of EA. If you ask that we impose a higher burden on criticism than on praise for you or your organization, you are proposing that we forgo the benefits of an adversarial system, in order to avoid potentially damaging criticism. If we forgo the benefits of an adversarial system, we can only expect to come to the right answers if the parties that are presenting us with information exhibit an exceptionally honest intent to inform.

If you ask people to hold criticism of you to a higher standard than praise, you are either asserting a right to misinform, or implicitly promising to be honest enough that a balanced adversarial system is not necessary. You are promising to be a reliable, objective source of information, not just a clever arguer.

If you're asserting a right to misinform, then it is clear enough why people might not want to trust you.

So, the norm as stated seems more likely to serve the interest of "create a positive impression of what's going on, regardless of what's actually going on" (i.e. wirehead everyone; this is as scary as it sounds!), than the interest of "share information about what is going on in a way that can at some point lead to common knowledge being created, and the problems being solved".

This norm could be workable if you can distinguish sharing the information "I believe this person did this bad thing" from "I accuse this person of doing this bad thing" (with the first being a denotative statement, and the second being a speech act).

Comment by jessica-liu-taylor on Go Do Something · 2019-05-22T00:39:07.133Z · score: 26 (7 votes) · LW · GW

There are lots of ways for people to improve their own life and those of friends without this being massively massively profitable, though. Like, it seems like you're conflating the coordination required to, say, start a discussion group, with the coordination required to run a tech empire. (I have talked to someone in the rationalist community recently who believes that starting a club is hard because of the social dynamics involved, including expected social discouragement for excluding people).

You can't justifiably reason from "doing this at world-class competence is hard" to "you can't get large gains by being moderately good at this instead of not trying at all".

[EDIT: note that I'm including things like "having more illuminating intellectual discussions", "being less afraid to communicate", and "doing less bullshit work" in "improving one's own life", so these feed into other goals, not just personal ones; put on your own oxygen mask first, and all that]

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-20T17:19:11.287Z · score: 22 (8 votes) · LW · GW

Blocking Zack isn't an appropriate response if, as Vanessa thinks, Zack is attacking her and others in a way that makes these attacks hard to challenge directly. Then he'd still be attacking people even after being blocked, by saying the things he says in a way that influences general opinion.

Feelings are information, not numbers to maximize.

It's possible that your actual concern is with "I feel" language being used for communication.

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-20T16:55:15.489Z · score: 9 (5 votes) · LW · GW

That's not what I meant. I meant specifically someone who is trying to prevent common knowledge from being created (and more generally, to gum up the works of "social decisionmaking based on correct information"), as in the Werewolf party game.

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-20T01:30:50.726Z · score: 37 (9 votes) · LW · GW

I think my actual concern with this line of argumentation is: if you have a norm of "If 'X' and 'X implies Y' then 'Y', EXCEPT when it's net bad to have concluded 'Y'", then the werewolves win.

The question of whether it's net bad to have concluded 'Y', is much, much more complicated than the question of whether, logically, 'Y' is true under these assumptions (of course, it is). There are many, many more opportunities for werewolves to gum up the works of this process, making the calculation come out wrong.

If we're having a discussion about X and Y, someone moves to propose 'Y' (because, as it has already been agreed, 'X' and 'X implies Y'), and then someone else says "no, we can't do that, that has negative consequences!", that second person is probably playing a werewolf strategy, gumming up the works of the epistemic substrate.

If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding 'Y' to the discourse, in fact, has bad consequences. And, to get the right answer, that discussion itself is going to have to be up to high epistemic standards. To be trustworthy, it's going to have to make logical inferences much more complex than "if 'X' and 'X implies Y', then 'Y'". What if someone objects to those logical inference steps, on the basis that they would have negative consequences? Where does that discussion happen?

In practice, these questions aren't actually answered. In practice, what happens is that social epistemology doesn't happen, and instead everything becomes about coalitional politics. Saying 'Y' doesn't mean 'Y is literally true'; it means you're part of the coalition of people who want consequences related to (but not even necessarily directly implied by!) the statement 'Y' to be put into effect, and that makes you blameworthy if those consequences hurt someone sympathetic, or if that coalition is bad. Under such conditions, it is a major challenge to re-establish epistemic discourse, because everything is about violence, including attempts to talk about the "we don't have epistemology and everything is about violence" problem.

We have something approaching epistemic discourse here on LessWrong, but we have to defend it, or it, too, becomes all about coalitional politics.

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-20T00:00:41.049Z · score: 28 (9 votes) · LW · GW

It's important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren't maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).

It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It's still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.

To summarize: you're destroying the substrate. Stop it.

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-19T19:43:38.456Z · score: 16 (6 votes) · LW · GW

I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.

The proper terms for that aren't decoupling vs. contextualizing; they're denotative vs. enactive language. That's an axis orthogonal to how many relevant contextual factors are supposed to be taken into account. You can require lots of contextual factors to be taken into account in epistemic analysis, or require certain enactments to be made independent of context.

Note, the original post makes the conflation I'm complaining about here too!

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-19T17:36:21.086Z · score: 29 (9 votes) · LW · GW

It sounds more like a defense of discussing a political specific by means of abstraction.

Zack said:

Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?

What, realistically, do you expect the atheist—or the racist, or me—to do? Am I supposed to just passively accept that all of my thoughts about epistemology are tainted and unfit for this forum, because I happen to be interested in applying epistemology to other topics (on a separate website, under a pseudonym)?

Which isn't saying specifics should be discussed by discussing abstracts; it says abstracts should be discussed, even when part of the motivation for discussing the abstract is specific. Like, people should be able to collaborate on statistics textbooks even if they don't agree with their co-authors' specific applications of statistics to their non-statistical domains. (It would be pretty useless to discuss abstracts if there were no specific motivations, after all...)

Comment by jessica-liu-taylor on Comment section from 05/19/2019 · 2019-05-19T17:22:58.478Z · score: 24 (7 votes) · LW · GW

I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things.

Zack didn't say this. What he said was:

Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?

Which is compatible with thinking more details should be taken into account when the statistical arguments are applied in other contexts (in fact, I'm pretty sure this is what Zack thinks).

Discussion of abstract epistemology principles, which generalize across different contexts, is perhaps most of the point of this website...

Your points 1, 2, and 3 have nothing to do with the epistemic problem of decoupling vs. contextualizing; they have to do with political tradeoffs in moderating a forum, and they apply to people doing contextualization in their analysis, too. I hate that the phrase "contextualizing norms" is being used to conflate "all sufficiently relevant information should be used" with "everything should be about politics".

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T14:58:05.986Z · score: 12 (3 votes) · LW · GW

I don't believe humans are badly modeled as single agents. Rather, they are single agents that have communicative and performative aspects to their cognition and behavior. See: The Elephant In The Brain, Player vs Character.

If you have strong reason to think "single agent communicating and doing performances" is a bad model, that would be interesting.

In this case, "convincing yourself" is clearly motivated. It doesn't make sense as a random interaction between two subagents (otherwise, why aren't people just as likely to try to convince themselves they have bad qualities?); whatever interaction there is has been orchestrated by some agentic process. Look at the result, and ask who wanted it.

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T14:46:16.830Z · score: 2 (1 votes) · LW · GW

The academic term for the Bayesian part is Bayesian Brain. Also see The Elephant In The Brain. The model itself (humans as singular agents doing performances) has some amount of empirical evidence (note, revealed preference models deductively imply performativity), and is (in my view) the most parsimonious. I haven't seen empirical evidence specific to its application to narcissism, though.

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T14:45:24.998Z · score: 2 (1 votes) · LW · GW

Yes.

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T14:43:48.735Z · score: 3 (2 votes) · LW · GW

We can imagine a world where job applicants can cheaply reveal information about themselves (e.g. programming ability), and can more expensively generate fake information that looks like true information (e.g. cheating on the programming ability test, making it look like they're good at programming). The employer, meanwhile, is doing a Bayesian evaluation of likely features given the revealed info (which may contain lies), to estimate the applicant's expected quality. We could also give the employer audit powers (paying some amount to see the ground truth of some applicant's trait).

This forms a game; each player's optimal strategy depends on the other's, and in particular the evaluator's Bayesian probabilities depend on the applicant's strategy (if they are likely to lie, then the info is less trustworthy, and it's more profitable to audit).

I would not be surprised if this model is already in the literature somewhere. Ben mentioned the costly signalling literature, which seems relevant.

Fine to refer to this in a question, in any case.
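Here's a minimal code sketch of that game, if it helps make it concrete. The prior, costs, payoffs, and the 0.5 hiring threshold are placeholder assumptions for illustration (not anything from the literature), and both sides' strategies are held fixed rather than optimized:

    # Minimal sketch of the applicant/evaluator signalling game with auditing.
    # All numbers below are illustrative assumptions.
    import random

    PRIOR_GOOD = 0.3   # evaluator's prior that an applicant is genuinely skilled
    FAKE_COST = 2.0    # cost for an unskilled applicant to fake the cheap signal
    AUDIT_COST = 1.0   # evaluator's cost of auditing (observing ground truth)
    HIRE_VALUE = 10.0  # value of being hired, from the applicant's perspective

    def posterior_skilled(p_fake):
        """P(skilled | passed the test), when unskilled applicants fake with prob p_fake.
        Skilled applicants always reveal, since revealing is cheap for them."""
        p_pass = PRIOR_GOOD + (1 - PRIOR_GOOD) * p_fake
        return PRIOR_GOOD / p_pass

    def simulate(p_fake, p_audit, n=100_000):
        """Average payoffs under fixed (p_fake, p_audit) strategies."""
        applicant_total = evaluator_total = 0.0
        for _ in range(n):
            skilled = random.random() < PRIOR_GOOD
            passed = skilled or random.random() < p_fake
            applicant_total -= FAKE_COST if (passed and not skilled) else 0.0
            if passed and random.random() < p_audit:
                evaluator_total -= AUDIT_COST
                hired = skilled                      # audit reveals ground truth
            else:
                hired = passed and posterior_skilled(p_fake) > 0.5
            if hired:
                applicant_total += HIRE_VALUE
                evaluator_total += 5.0 if skilled else -5.0
        return applicant_total / n, evaluator_total / n

    if __name__ == "__main__":
        for p_fake in (0.0, 0.5, 1.0):
            for p_audit in (0.0, 0.3):
                print(f"p_fake={p_fake} p_audit={p_audit} -> {simulate(p_fake, p_audit)}")

Sweeping p_fake and p_audit makes the interdependence visible: the evaluator's posterior (and hence the value of auditing) depends on how often applicants fake, and the applicants' incentive to fake depends on how often the evaluator audits.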

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T06:30:37.316Z · score: 9 (4 votes) · LW · GW

There's no such thing as "convincing yourself" if you're an agent, due to conservation of expected evidence. What people describe as "convincing yourself" is creating conditions under which a certain character-level belief is defensible to adopt, and then (character-level) adopting it. It's an act, a simulacrum of having a belief.

(Narcissism is distinct from virtue ethics, which is the pursuit of actual good qualities rather than defensible character-level beliefs of having good qualities)

Comment by jessica-liu-taylor on Narcissism vs. social signalling · 2019-05-12T06:09:27.337Z · score: 22 (5 votes) · LW · GW

Signalling implies an evaluator trying to guess the truth. At equilibrium, a signaller reveals as much information as is cheap to reveal. Not revealing cheap-to-reveal information is a bad sign; if the info reflected well on you, you'd have revealed it, and so at equilibrium, evaluators literally assume the worst about non-revealed but cheap-to-reveal info (see: market for lemons).

This is stage 1 signalling. Stage 2 signalling is this but with convincing lies, which actually are enough to convince a Bayesian evaluator (who may be aware of the adversarial dynamic, and audit sometimes).

At stage 3, the evaluators are no longer attempting to discern the truth, but are instead discerning "good performances", the meaning of which shifts over time, but which initially bears resemblance to stage 2's convincing lies.

Narcissism is stage 3, which is very importantly different from stage 1 signalling (maximal revealing of information and truth-discernment) and stage 2 lying (maximizing for impression convincingly).
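(A minimal sketch of the stage-1 unraveling argument, under the illustrative assumptions that quality $q$ is uniform on $[0,1]$ and disclosure is free: if evaluators expect withholders to have mean quality $m_t$, then every type above $m_t$ prefers to disclose, so

$$m_{t+1} \;=\; \mathbb{E}[\,q \mid q \le m_t\,] \;=\; \frac{m_t}{2} \;\longrightarrow\; 0,$$

and in the limit only the very worst type withholds; this is the sense in which evaluators end up assuming the worst about non-revealed but cheap-to-reveal info.)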

Comment by jessica-liu-taylor on Interpretations of "probability" · 2019-05-10T01:17:06.427Z · score: 4 (4 votes) · LW · GW

There are 0 other days "similar to" this one in Earth's history, if "similar to" is strict enough (e.g. the exact pattern of temperature over time, cloud patterns, etc). You'd need a precise, more permissive definition of "similar to" for the statement to be meaningful.

Comment by jessica-liu-taylor on Towards optimal play as Villager in a mixed game · 2019-05-07T22:00:37.038Z · score: 9 (4 votes) · LW · GW

I agree with the strategy in this comment, for some notions of "absorbed"; being absorbed territorially or economically might be fine, but being absorbed culturally/intellectually probably isn't. Illegibility and good relationships seem like the most useful approaches.

Comment by jessica-liu-taylor on Towards optimal play as Villager in a mixed game · 2019-05-07T21:57:53.187Z · score: 7 (3 votes) · LW · GW

To be clear, I’m very glad you’re working on anti-werewolf tech, I think it’s one of the necessary things to have good guys working on, I just don’t expect it to translate into decisive strategic advantage.

I agree, it's necessary to reach at least the standard of mediocrity on other aspects of e.g. running a business, and often higher standards than that. My belief isn't that anti-werewolf tech immediately causes you to win, so much as that it expands your computational ability to the point where you are in a much better position to compute and implement the path to victory, which itself has many object-level parts to it, and requires adjustment over time.

Comment by jessica-liu-taylor on Towards optimal play as Villager in a mixed game · 2019-05-07T21:36:22.328Z · score: 4 (2 votes) · LW · GW

My core thesis here is that if you have a lower-level manager that is as competent at detecting werewolves, you will be more powerful if you instead promote those people to a higher level, so that you can expand and gain more territory.

Either it's possible to produce people/systems that detect werewolves at scale, or it isn't. If it isn't, we have problems. If it is, you have a choice of how many of these people to use as lower-level managers versus how many to use for expansion. It definitely isn't the case that you should use all of them for expansion, otherwise your existing territories become less useful/productive, and you lose control of them. The most competitive empire will create werewolf detectors at scale and use them for lower management in addition to expansion.

Part of my thesis is that, if you live in a civilization dominated by werewolves and you're the first to implement anti-werewolf systems, you get a big lead, and you don't have to worry about direct competitors (who also have anti-werewolf systems but who want to expand indefinitely/unsustainably) for a while; by then, you have a large lead.

Comment by jessica-liu-taylor on Towards optimal play as Villager in a mixed game · 2019-05-07T20:48:36.102Z · score: 7 (3 votes) · LW · GW

I agree with the general picture that scaling organizations results in wasted motion due to internal competitive dynamics. Some clarifications:

Because at least some orgs have unbounded goals, even if your goal is bounded, if it impacts the worldscale you must contend with cancerous, unbounded agents.

This means every competitive org must be the largest, most rickety versions of themselves that can reasonably function.

This assumes orgs with bounded goals that have big leads can't use their leads to suppress competition, by e.g. implementing law or coordinating with each other. Monopolies/oligopolies are common, as are governments. A critical function of law is to suppress illegitimate power grabs.

werewolves get to flourish up till the point where they could affect the Inner Council of the King. By then, they have revealed themselves, and the Inner Council of the King beheads them.

This assumes that implementing law/bureaucracy internally at lower levels than the inner council is insufficient for detecting effective werewolf behavior. Certainly, it's harder, but it doesn't follow that it isn't possible.

Behind the veil of anthropics, you should mostly expect to be located in one of the kingdoms sacrificed to the werewolves.

This is like saying you should anthropically expect to already have cancer. Kingdoms sacrificed to the werewolves have lower capacity and can collapse or be conquered.

Conditional revealed preference

2019-04-16T19:16:55.396Z · score: 18 (7 votes)
Comment by jessica-liu-taylor on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T22:02:28.777Z · score: 2 (1 votes) · LW · GW

The numbering in this comment is clearly Markdown auto-numbering. Is there a different comment with numbering that you meant?

For reference, this is how Markdown numbers a list in 3, 2, 1 order:

  1. item

  2. item

  3. item

Comment by jessica-liu-taylor on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T21:30:19.057Z · score: 6 (4 votes) · LW · GW

Seems like a bot to me, are there signs of humanity you can point to?

[EDIT: replies by GPT2 come in way too fast (like, 5 seconds) for this to be a human]

Comment by jessica-liu-taylor on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T21:28:00.523Z · score: 6 (2 votes) · LW · GW

Markdown numbers lists in order even if you use different numbers.

Comment by jessica-liu-taylor on Privacy · 2019-03-18T20:42:30.351Z · score: 11 (6 votes) · LW · GW

OK, you're right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.

Comment by jessica-liu-taylor on Privacy · 2019-03-17T17:44:48.433Z · score: 12 (4 votes) · LW · GW

OK, I can defend this claim, which seems different from the "less privacy means we get closer to a world of angels" claim; it's about asymmetric advantages in conflict situations.

In the example you gave, more generally available information about people's locations helps Big Bad Wolf more than Little Red Hood. If I'm strategically identifying with Big Bad Wolf then I want more information available, and if I'm strategically identifying with Little Red Hood then I want less information available. I haven't seen a good argument that my strategic position is more like Little Red Hood's than Big Bad Wolf's (yes, the names here are producing moral connotations that I think are off).

So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren't obvious to the people cooperative with the original goal for a while. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.

Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.

Anyway, I'm not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.

Comment by jessica-liu-taylor on Has "politics is the mind-killer" been a mind-killer? · 2019-03-17T09:45:15.159Z · score: 16 (6 votes) · LW · GW

With stakes so high, how can you justify placing good faith debate above using whatever tactics are necessary to avoid losing?

Local validity!

[EDIT: also, you could actually be uncertain, or could be talking to aligned people who are uncertain, in which case having more-informative discussions about politics helps you and your friends make better decisions!]

Boundaries enable positive material-informational feedback loops

2018-12-22T02:46:48.938Z · score: 30 (12 votes)

Act of Charity

2018-11-17T05:19:20.786Z · score: 152 (58 votes)

EDT solves 5 and 10 with conditional oracles

2018-09-30T07:57:35.136Z · score: 61 (18 votes)

Reducing collective rationality to individual optimization in common-payoff games using MCMC

2018-08-20T00:51:29.499Z · score: 58 (18 votes)

Buridan's ass in coordination games

2018-07-16T02:51:30.561Z · score: 55 (19 votes)

Decision theory and zero-sum game theory, NP and PSPACE

2018-05-24T08:03:18.721Z · score: 109 (36 votes)

In the presence of disinformation, collective epistemology requires local modeling

2017-12-15T09:54:09.543Z · score: 116 (42 votes)

Autopoietic systems and difficulty of AGI alignment

2017-08-20T01:05:10.000Z · score: 3 (3 votes)

Current thoughts on Paul Christiano's research agenda

2017-07-16T21:08:47.000Z · score: 6 (6 votes)

Why I am not currently working on the AAMLS agenda

2017-06-01T17:57:24.000Z · score: 15 (8 votes)

A correlated analogue of reflective oracles

2017-05-07T07:00:38.000Z · score: 4 (4 votes)

Finding reflective oracle distributions using a Kakutani map

2017-05-02T02:12:06.000Z · score: 1 (1 votes)

Some problems with making induction benign, and approaches to them

2017-03-27T06:49:54.000Z · score: 3 (3 votes)

Maximally efficient agents will probably have an anti-daemon immune system

2017-02-23T00:40:47.000Z · score: 3 (3 votes)

Are daemons a problem for ideal agents?

2017-02-11T08:29:26.000Z · score: 5 (2 votes)

How likely is a random AGI to be honest?

2017-02-11T03:32:22.000Z · score: 0 (0 votes)

My current take on the Paul-MIRI disagreement on alignability of messy AI

2017-01-29T20:52:12.000Z · score: 14 (7 votes)

On motivations for MIRI's highly reliable agent design research

2017-01-29T19:34:37.000Z · score: 8 (8 votes)

Strategies for coalitions in unit-sum games

2017-01-23T04:20:31.000Z · score: 3 (3 votes)

An impossibility result for doing without good priors

2017-01-20T05:44:26.000Z · score: 1 (1 votes)

Pursuing convergent instrumental subgoals on the user's behalf doesn't always require good priors

2016-12-30T02:36:48.000Z · score: 7 (5 votes)

Predicting HCH using expert advice

2016-11-28T03:38:05.000Z · score: 3 (3 votes)

ALBA requires incremental design of good long-term memory systems

2016-11-28T02:10:53.000Z · score: 1 (1 votes)

Modeling the capabilities of advanced AI systems as episodic reinforcement learning

2016-08-19T02:52:13.000Z · score: 4 (2 votes)

Generative adversarial models, informed by arguments

2016-06-27T19:28:27.000Z · score: 0 (0 votes)

In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy

2016-06-11T04:05:47.000Z · score: 12 (4 votes)

Two problems with causal-counterfactual utility indifference

2016-05-26T06:21:07.000Z · score: 3 (3 votes)

Anything you can do with n AIs, you can do with two (with directly opposed objectives)

2016-05-04T23:14:31.000Z · score: 2 (2 votes)

Lagrangian duality for constraints on expectations

2016-05-04T04:37:28.000Z · score: 1 (1 votes)

Rényi divergence as a secondary objective

2016-04-06T02:08:16.000Z · score: 2 (2 votes)

Maximizing a quantity while ignoring effect through some channel

2016-04-02T01:20:57.000Z · score: 2 (2 votes)

Informed oversight through an entropy-maximization objective

2016-03-05T04:26:54.000Z · score: 0 (0 votes)

What does it mean for correct operation to rely on transfer learning?

2016-03-05T03:24:27.000Z · score: 4 (4 votes)

Notes from a conversation on act-based and goal-directed systems

2016-02-19T00:42:29.000Z · score: 3 (3 votes)

A scheme for safely handling a mixture of good and bad predictors

2016-02-17T05:35:55.000Z · score: 0 (0 votes)

A possible training procedure for human-imitators

2016-02-16T22:43:52.000Z · score: 2 (2 votes)

Another view of quantilizers: avoiding Goodhart's Law

2016-01-09T04:02:26.000Z · score: 3 (3 votes)

A sketch of a value-learning sovereign

2015-12-20T21:32:45.000Z · score: 11 (2 votes)

Three preference frameworks for goal-directed agents

2015-12-02T00:06:15.000Z · score: 4 (2 votes)

What do we need value learning for?

2015-11-29T01:41:59.000Z · score: 3 (3 votes)

A first look at the hard problem of corrigibility

2015-10-15T20:16:46.000Z · score: 10 (3 votes)

Conservative classifiers

2015-10-02T03:56:46.000Z · score: 2 (2 votes)

Quantilizers maximize expected utility subject to a conservative cost constraint

2015-09-28T02:17:38.000Z · score: 3 (3 votes)

A problem with resource-bounded Solomonoff induction and unpredictable environments

2015-07-27T03:03:25.000Z · score: 2 (2 votes)

PA+100 cannot always predict modal UDT

2015-05-12T20:26:53.000Z · score: 3 (3 votes)

MIRIx Stanford report

2015-05-11T06:11:26.000Z · score: 1 (1 votes)

Reflective probabilistic logic cannot assign positive probability to its own coherence and an inner reflection principle

2015-05-07T21:00:10.000Z · score: 5 (5 votes)

Learning a concept using only positive examples

2015-04-28T03:57:24.000Z · score: 3 (3 votes)

Minimax as an approach to reduced-impact AI

2015-04-02T22:00:04.000Z · score: 3 (3 votes)