Comments
Will there be LaTeX support?
(Not very familiar with math.)
The Heyting-algebraic definition of implication makes intuitive sense to me, or at least it does once you state your confusion. 'One circle lies inside the other' is like saying A is a subset of B, which is a statement describing a relation between two sets, not a statement describing a set, so we shouldn't expect that mental image to correspond to a set. Furthermore, the definition of implication you've given is very similar to the material implication rule: that we may replace 'P implies Q' with 'not-P or Q'.
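For concreteness, here is the standard textbook statement of both definitions (just my notation; nothing here goes beyond the usual definitions):

```latex
% In a Heyting algebra, implication is the relative pseudocomplement:
% a => b is the greatest element whose meet with a lies below b.
\[ a \Rightarrow b \;=\; \max\{\, c \;:\; c \wedge a \le b \,\} \]

% In a Boolean algebra, e.g. an algebra of sets, this reduces to the
% material-implication form, and the subset picture corresponds to the
% implication being the top element rather than to any particular region:
\[ a \Rightarrow b \;=\; \neg a \vee b, \qquad A \subseteq B \iff (A \Rightarrow B) = \top \]
```

So 'one circle lies inside the other' is the statement that the implication equals the top element, not a picture of the implication itself.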
Also, I have personally been enjoying your recent posts with few prerequisites. (Seems to be a thing.)
I have what feels like a naive question. Is there any reason that we can't keep appealing to even higher-order preferences? I mean, when I find that I have these sorts of inconsistencies, I find myself making an additional moral judgment that tries to resolve the inconsistency. So couldn't you show the human (or, if the AI is doing all this in its 'head', a suitably accurate simulation of the human) that their preference depends on the philosopher that we introduce them to? Or in other cases where, say, ordering matters, show them multiple orderings, or their simulations' reactions to every possible ordering where feasible, and so on. Maybe this will elicit a new judgment that we would consider morally relevant. But this all relies on simulation; I don't know whether you can get the same effect without that capability, and this solution doesn't seem even close to being fully general.
I imagine that this might not do much to resolve your confusion, however. It doesn't do much to resolve mine.
Discipline, especially internal psychological, also increases skills.
This is a little ambiguous; does he mean self-control or punishment?
I think these are all points that many people have considered privately or publicly in isolation, but that no one has thus far explicitly written down and connected. In particular, lots of people have independently observed that ontological crises in AIs are apparently similar to existential angst in humans, that ontology identification seems philosophically difficult, and that studying ontology identification in humans is therefore plausibly a promising route to understanding ontology identification for arbitrary minds. So, thank you for writing this up; it seems like something that quite badly needed to be written.
Some other problems that might be easier to tackle from this perspective include mind crime, nonperson predicates, and suffering risk, especially subproblems like suffering in physics.
We had succeeded in obtaining John von Neumann as keynote speaker. He discussed the need for, and likely impact of, electronic computing. He mentioned the "new programming method" for ENIAC and explained that its seemingly small vocabulary was in fact ample: that future computers, then in the design stage, would get along on a dozen instruction types, and this was known to be adequate for expressing all of mathematics. (Parenthetically, it is as true today as it was then that "programming" a problem means giving it a mathematical formulation. Source languages which use "plain English" or other appealing vocabularies are only mnemonic disguises for mathematics.) Von Neumann went on to say that one need not be surprised at this small number, since about 1,000 words were known to be adequate for most situations of real life, and mathematics was only a small part of life, and a very simple part at that. This caused some hilarity in the audience, which provoked von Neumann to say: "If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is."
Franz L. Alt, "Archaeology of computers: Reminiscences, 1945–1947", Communications of the ACM, volume 15, issue 7, July 1972, special issue: Twenty-fifth anniversary of the Association for Computing Machinery, p. 694.
Neuroscience of art & art appreciation
A slightly broader keyword would be 'neuroaesthetics.'
Evolutionary basis for storytelling
I haven't done an exhaustive literature search, but one book I'm going through right now is Brian Boyd's On the Origin of Stories: Evolution, Cognition, and Fiction.
psychopathology* (Genuinely trying to be helpful, not nitpicky; keywords are important.)
Related, broader keyword: abnormal psychology.
Tangentially, I thought you might find repair theory interesting, if not useful. Briefly, when students make mistakes while doing arithmetic, these mistakes are rarely the effect of a trembling hand; rather, most such mistakes can be explained via a small set of procedural skills that systematically produce incorrect answers.
Do you think I disagree with that?
I've had a strong urge to ask about the relation between Project Hufflepuff and group epistemic rationality since you started writing this sequence. This also seems like a good time to ask, because your criticism of the essays that you cite (with the caveat that you believe them to contain grains of truth) seems fundamentally to be an epistemological one. Your final remarks amount to an uncontroversial epistemological prescription: "We have time, and we should use it, because, other things equal, taking more time increases the reliability of our reasoning."
So, if I take it that your criticism of the lack of understanding in this area is an epistemological one, then I can imagine this sequence going one of two ways. One way is that you'll solve the problem, or some of it, with your individual epistemological abilities, or at least start on this and have others assist. The other is that, before discussing culture directly, you'll discuss group epistemic rationality, bootstrapping the community's ability to reason reliably about itself. But I don't really like to press people on what's coming later in their sequence. That's what the sequence is for. Maybe I can ask some pointed questions instead.
Do you think group epistemic rationality is prior to the sort of group instrumental rationality that you're focusing on right now? I'm not trying to stay hyperfocused on epistemic rationality per se. I'm saying that you've demonstrated that the group has not historically done well, in an epistemological sense, at understanding the open problems in this area of group instrumental rationality. Now I'm wondering whether you, or anyone else, think that's just a failure thus far, one that can be corrected by individual epistemological means alone and then distributed to the group, or whether it's a systemic failure of the group to arrive at accurate collective judgments. Of course, it's hardly a sharp dichotomy. If one thought the latter, then one might conclude that it is important to recurse to social epistemology for entirely instrumental reasons.
If group epistemic rationality is not prior to the sort of instrumental rationality that you're focusing on right now, then do you think it would nevertheless be more effective to address that problem first? Have you considered that in the past? Of course, it's not entirely necessary that these topics be discussed consecutively, as opposed to simultaneously.
How common do you think knowledge of academic literature relevant to group epistemic rationality is in this group? Like, as a proxy, what proportion of people do you think know about shared information bias? The only sort of thing like this I've seen as common knowledge in this group is informational cascades. Just taking an opportunity to try and figure out how much private information I have, because if I have a lot, then that's bad.
How does Project Hufflepuff relate to other group projects like LW 2.0/the New Rationality Organization, and all of the various calls for improving the quality of our social-epistemological activities? I now notice that all of those seem quite closely focused on discussion media.
I enjoyed this very much. One thing I really like is that your interpretation of the evolutionary origin of Type 2 processes and their relationship with Type 1 processes seems a lot more realistic to me than what I usually see. Usually the two are made to sound very adversarial, with Type 2 processes having some kind of executive control. I've always wondered how you could actually get this setup through incremental adaptations. It doesn't seem like Azathoth's signature. I wrote something relevant to this in correspondence:
If Type 2 just popped up in the process of human evolution, and magically got control over Type 1, what are the chances that it would amount to anything but a brain defect? You'd more likely be useless in the ancestral environment if a brand new mental hierarch had spontaneously mutated into existence and was in control of parts of a mind that had been adaptive on their own for so long. It makes way more sense to me to imagine that there was a mutant who could first do algorithmic cognition, and that there were certain cues that could trigger the use of this new system, and that provided the marginal advantage. Eventually, you could use that ability to make things safe enough to use the ability even more often. And then it would almost seem like it was the Type 2 that was in charge of the Type 1, but really Type 1 was just giving you more and more leeway as things got safer.
Thank you for following up after all this time. Longitudinal studies seem important.
This is neat.
I find the metaphor plausible. Let's see if I understand where you're coming from.
I've been looking into predecision processes as a means of figuring out where human decision-making systematically goes wrong. One such process is hypothesis generation. I found an interesting result in this paper: the researchers compared the hypothesis sets generated by individuals, natural groups, and synthetic groups. In this study, a synthetic group's hypothesis set is agglomerated from the hypothesis sets of individuals who never interact socially (there's a toy sketch of this construction at the end of this comment). They found that natural groups generate more hypotheses than individuals, and that synthetic groups generate more hypotheses than either. It appears that social interaction somehow reduces the number of alternatives that a group considers, relative to what the sum of their considerations would be if they were not a group.
Now, this could just be biased information search. One person poses a hypothesis aloud, and then the alternatives become less available to the entire group. But information search itself could be mediated by motivational factors: if I instead write "one high-status person poses a hypothesis aloud...", then the hypothesis now involves both biased information search and a zero-sum social-control component. It does seem worth noting that biased search is currently a sufficient explanation by itself, so we might prefer it on grounds of parsimony, but at this level of the world model, it seems like things are often multiply determined.
Importantly, creating synthetic groups doesn't look like punishing memetic-warfare/social-control at all. It looks like preventing it altogether. This seems like an intervention that would be difficult to generate if you thought about the problem in the usual way.
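Here is that toy sketch, in Scheme since that's what I've been working in; the hypothesis names are invented for illustration, and only the union construction itself comes from the study:

```scheme
;; A synthetic group's hypothesis set is just the union of its members'
;; individual sets, so pooling without interaction can never yield fewer
;; alternatives than any one member considered on their own.
;; (Hypothesis names below are made up for illustration.)

(define (set-union sets)
  (fold-left (lambda (acc s)
               (append acc (filter (lambda (h) (not (member h acc))) s)))
             '()
             sets))

(define individual-hypothesis-sets
  '((dead-battery flat-tire)
    (dead-battery empty-tank)
    (empty-tank clogged-fuel-filter)))

(length (set-union individual-hypothesis-sets))  ; => 4 distinct hypotheses
```

The empirical surprise is that natural groups end up below this union, not that the union is large.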
I was familiar with this.
I find the first etiology similar to my model. Did you mean to imply this similarity by use of the word 'indeed'? I can see how one might interpret my model as an algorithm that outputs a little 'gender token' black box that directly causes the self-reports, but I really didn't mean to propose anything besides "Once gendered behavior has been determined, however that occurs, cisgender males don't say "I'm a boy!" for cognitive reasons that are substantially different from the reasons that transgender males say "I'm a boy!" " Writing things like "behaviorally-masculine girls" just sounds like paraphrase to me. Should it not? On the other hand, as I understand it, the second etiology substantially departs from this. In that case it is proposed that transgender people who transition later in life perform similar behaviors for entirely different cognitive reasons.
I'll reiterate that I admit to the plausibility of other causes of self-report. I do find your confidence surprising, however. I realize the visible controversy is not much evidence that you're wrong, because we would expect controversy either way. Do you have thoughts on Thoughts on The Blanchard/Bailey Distinction? I'd just like to read them if they exist.
This is an excellent summary of my argument! Thank you so much for compressing this into a soundbite!
I think this topic is really only as political as you make it.
I did in fact decide not to reply to the grandparent because I estimated that not replying would cause less harm in this respect than replying would. This article is intended to be a contribution to the philosophy of gender identity in the style of EY's executable philosophy, and it is more directly a reply to lucidfox's Gender Identity and Rationality. This topic was perfectly acceptable in 2010.
Can you break that down to the extent that I broke down my confusion above? I'm having a hard time seeing deep similarities between these problems.
This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
Douglas Adams, The Salmon of Doubt
"Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
This never got solved in the comments.
I was sitting in microeconomics class in twelfth grade when I asked myself, "Why am I me? Why am I not Kelsey or David or who-have-you?" Then I remembered that there are no souls, that 'I' was a product of my brain, and thus that the existence of my mind necessitates the existence of my body (or something that serves a similar function). Seeing the contradiction, I concluded that I had reasoned, incoherently, as if 'I' were an ontologically fundamental mental entity with some probability of finding itself living some particular lives. That's unsurprising, because, as a great deal of cognitive science and Eliezer's free will solution have demonstrated, humans often intuitively assess 'possibility' and plausibility by using how easy it is to conceive of something as a proxy. "I can conceive of 'being someone else,' thus there must be some probability that I 'could have been someone else', so what is the distribution, what is its origin, and what probability does it assign to me being me?"
The quote is from this article, section 4.1. There might be other descriptions elsewhere; Lenat himself cites some documents released by the organization hosting the wargame. You might want to check out the other articles in the 'Nature of Heuristics' series too. I think there are free PDFs of all of them on Google Scholar.
Recently in the LW Facebook group, I shared a real-world example of an AI being patched and finding a nearby unblocked strategy several times. Maybe you can use it one day. This example is about Douglas Lenat's Eurisko and the strategies it generated in a naval wargame. In this case, the 'patch' was a rules change. For some context, R7 is the name of one of Eurisko's heuristics:
A second use of R7 in the naval design task, one which also inspired a rules change, was in regard to the fuel tenders for the fleet. The constraints specified a minimum fractional tonnage which had to be held back, away from battle, in ships serving as fuel tenders. R7 caused us to consider using warships for that purpose, and indeed that proved a useful decision: whenever some front-line ships were moderately (but not totally) damaged, they traded places with the tenders in the rear lines. This maneuver was explicitly permitted in the rules, but no one had ever employed it except in desperation near the end of a nearly-stalemated battle, when little besides tenders were left intact. Due to the unintuitive and undesirable power of this design, the tournament directors altered the rules so that in 1982 and succeeding years the act of 'trading places' is not so instantaneous. The rules modifications introduced more new synergies (loopholes) than they eliminated, and one of those involved having a ship which, when damaged, fired on (and sunk) itself so as not to reduce the overall fleet agility.
Why do you mourn when you can contemplate politics no more? What makes you think about it so much in the first place? That just seems like something you wouldn't want to ignore.
I admit I don't quite understand what MINERVA-DM is...I glanced at the paper briefly and it appears to be a...theoretical framework for making decisions which is shown to exhibit similar biases to human thought? (With cells and rows and ones?)
I can't describe it too much better than that. The framework is meant to be descriptive as opposed to normative.
A complete description of MINERVA-DM would involve some simple math, but I can try to describe it in words. The rows of numbers you saw are vectors. We take a vector that represents an observation, called a probe, along with all vectors in episodic memory, which are called traces, and by evaluating the similarity of the probe to each trace and averaging these similarities, we obtain a number that represents a global familiarity signal. By assuming that people use this familiarity signal as the basis of their likelihood judgments, we can simulate some of the results found in the field of likelihood judgment.
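If it helps, here is a minimal sketch of that computation in Scheme (my own procedure names, not code from the paper); I'm assuming the standard formulation, in which each similarity is cubed into an 'activation' before the activations are averaged:

```scheme
;; Probes and traces are equal-length lists over {-1, 0, 1}.
;; Similarity: dot product divided by the number of features that are
;; nonzero in either vector. Activation: similarity cubed. The global
;; familiarity signal (echo intensity) is the mean activation over memory.

(define (similarity probe trace)
  (let loop ((p probe) (t trace) (dot 0) (relevant 0))
    (if (null? p)
        (if (zero? relevant) 0 (/ dot relevant))
        (loop (cdr p) (cdr t)
              (+ dot (* (car p) (car t)))
              (+ relevant (if (and (zero? (car p)) (zero? (car t))) 0 1))))))

(define (activation probe trace)
  (expt (similarity probe trace) 3))

(define (familiarity probe memory)
  (/ (apply + (map (lambda (trace) (activation probe trace)) memory))
     (length memory)))

;; Example: one exact match, one unrelated trace, one partial mismatch.
(familiarity '(1 0 -1 1)
             '((1 0 -1 1) (0 1 0 0) (-1 0 1 1)))  ; => 26/81, roughly 0.32
```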
I suspect that with a bit of work, one could even use MINERVA-DM to simulate retrospective and prospective judgments of task duration, and thus, planning fallacy.
I am assuming the first point is about this post and the second two are about the planning primer?
The first two are about this article and the third is about the planning fallacy primer. I mentioned hypothesis generation because you talked about 'pair debugging' and asking people to state the obvious solutions to a problem as ways to increase the number of hypotheses that are generated, and it pattern matched to what I'd read about hypothesis generation.
There are definitely several papers on memory bias affecting decisions, although I'm unsure if we're talking about the same thing here. What I want to say is something like "improperly recalling how long things took in the past is a problem that can bias predictions we make", and this phenomenon has been studied several times.
I'm definitely talking about this as opposed to the other thing. MINERVA-DM is a good example of this class of hypothesis in the realm of likelihood judgment. Hilbert (2012) is an information-theoretic approach to memory bias in likelihood judgment.
I'm just saying that it looks like there's a lot of fruit to be picked in memory theory and not many people are talking about it.
There is a problem where I say "Your hypothesis is backed by the evidence," when your entirely verbal theory is probably amenable to many interpretations and it's not clear how many virtue points you should get. But, I wanted to share some things from the literature that support your points about using feelings as information and avoiding miserliness.
First, there is something that's actually just called 'feelings-as-information theory', and has to do with how we, surprise, use feelings as sources of information. 'Feelings' is meant to be a more general term than 'emotions.' Some examples of feelings that happen to be classified as non-emotion feelings in this model are cognitive feelings, like surprise, or ease-of-processing/fluency experiences; moods, which are longer-term than emotions and usually involve no causal attribution; and bodily sensations, like contraction of the zygomaticus major muscles. In particular, processing fluency is used intuitively and ubiquitously as a source of information, and that's the hot topic in that small part of cognitive science right now. I have an entire book on that one feeling. I did write about this a little bit on LW, like in Availability Heuristic Considered Ambiguous, which argues that Kahneman and Tversky's availability heuristic can be fruitfully interpreted as a statement about the use of retrieval fluency as a source of information; and Attempts to Debias Hindsight Backfire!, which is about experiments that manipulate fluency experiences to affect people's retroactive likelihood judgments. The idea of 'feelings as information' looks central to the Art.
There is also a small literature on hypothesis generation. See the section 'Hypothesis Generation and Hypothesis Evaluation' of this paper for a good review of everything we know about hypothesis generation. Hardly inspiring, I know. The evidence indicates that humans generate relatively few hypotheses, or, as we may also put it, humans have impoverished hypothesis sets. Also in this paper, I saw studies that compare hypothesis generation between individuals and groups of various sizes. You're right that groups typically generate more hypotheses than individuals. They also tried comparing 'natural' and 'synthetic' groups: natural groups are what you think; the hypothesis sets of synthetic groups are formed from the union of many individual, non-group hypothesis sets. It turns out that synthetic groups do a little better. Social interaction somehow reduces the number of alternatives that a group considers relative to what the sum of their considerations would be if they were not a group.
Also, about your planning fallacy primer, I think the memory bias account has a lot more going for it than a random individual might infer from the brevity of its discussion.
I'll try to write a short post on it at some point.
Please do!
(Tentatively upvoted.)
I find that a good way to make statements criticizing individuals or organizations less provocative is to frame your criticism as a confusion. This simultaneously allows you to demonstrate that you've thought about their reasoning for more than five minutes and tends to make any further discussion less adversarial.
The abstract reasoning about why prison reform is a bipartisan cause makes sense to me: prisons cost lots of money (bad conservative metric) and they're disproportionately inhabited by minorities (bad liberal metric), but if your descriptions of their recommended organizations are charitable, then I too am confused right now.
Does anyone have an electronic copy of the Oxford Handbook of Metamemory that they're willing to share?
I think it's possible to exercise Hufflepuff virtue in the act of encouraging more Ravenclaw virtue, right? That is, getting an arbitrary ball rolling is a Hufflepuff thing to do, even if you roll the ball in a Ravenclaw direction? That's an important distinction to me.
A mid-term goal of mine is to replicate Dougherty et al.'s MINERVA-DM in MIT/GNU Scheme (it was originally written in Pascal; no, I haven't requested the authors' source code, and I don't intend to). I also intend to test at least one of its untested predictions using Amazon Mechanical Turk, barring any future knowledge that makes me think that I won't be able to obtain reliable results (which has only become less plausible as I've learned more; e.g., Turkers are more representative of the U.S. population than the undergraduate population that researchers routinely sample from in behavioral experiments, and there are also a few enthusiasts who have done some work on AMT-specific methodological considerations).
MINERVA-DM is a formal model of human likelihood judgments that successfully predicts the experimental findings on conservatism, the availability heuristic, the representativeness heuristic, the base rate fallacy, the conjunction fallacy, the illusory truth effect, the simulation heuristic, and the hindsight bias. MINERVA-DM can also be described as a modified version of Bayes' Theorem. I'm not too far yet, having just started learning Scheme/programming-in-general, but I have managed to cobble together a one-line program that outputs an n-vector with elements drawn randomly with replacement from the set {-1, 0, 1}, so I guess I've technically started writing the program.
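For illustration, something in the spirit of that starter utility might look like the following (a sketch, not my actual one-liner; in MIT/GNU Scheme, (random 3) returns 0, 1, or 2, so subtracting 1 gives a draw from {-1, 0, 1}):

```scheme
;; Build an n-element vector with entries drawn uniformly at random,
;; with replacement, from {-1, 0, 1}.
(define (random-trace n)
  (make-initialized-vector n (lambda (i) (- (random 3) 1))))

(random-trace 10)  ; e.g. => #(0 -1 1 1 0 -1 0 1 -1 0)
```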
It's worth saying that I'm not very confident that MINERVA-DM won't be overturned by a better model, and that's not the point.
I need some sort of example, and MINERVA-DM has good properties as an example, because its math is exceedingly simple (e.g., capital-sigma notation, the arithmetic mean, basic probability theory (see Bolstad's Introduction to Bayesian Statistics, Ch. 3), etc.). There are probably plenty of improvements that we need to and could make as a community, but my own concern is that it's never been winter-night-clear to me why at least some of us aren't trying to perform (Keyword Alert!) heuristics and biases/judgment and decision making (JDM)/behavioral decision theory research on LW or on whatever conversational focus we may be using in the near- to mid-term future. There is no organization in the community for this; CFAR is the closest thing, and AFAICT, they are not doing basic research into H&B/JDM/BDT. People around here seem to me more likely than most to agree that you're more likely to make progress on applications if you have a deep understanding of the problem that you're trying to solve.
I think the intuitive view is that you simply cannot productively do academic work solely in the blogosphere, and when you're explaining a counterintuitive point, a point that is not universal background knowledge, you should recurse far enough to prevent misunderstandings in advance. I no longer find it intuitive that you can't do a substantial amount of work in the blogosphere. For one, a good deal of academic work, especially the kind we're collectively interested in, doesn't require any special resources. Reviews, syntheses, analyses, critiques, and computational studies can all be done from a keyboard. As for experiments, we don't need to buy a particle accelerator for psych research, you guys; this is where Mechanical Turk comes in. E.g., see these two blog posts wherein a scientist replicates one of Tversky and Kahneman's base rate fallacy results with N = 66 for US$3.30, and replicates one of Tversky and Kahneman's conjunction fallacy results with N = 50 for US$2.50. (Here's a list with more examples.)
Arguing that there's important academic work that doesn't require anything but a computer (reviews, syntheses, analyses, computational studies), and demonstrating that you can test experimental predictions with your lunch money, seems like a good start on preempting the 'you can't do real science outside of academia' criticism. (It's not like there isn't a precedent for this sort of thing around here anyway.) It also prevents people from calling you a hypocrite for proposing that the community steer in a certain direction without your doing any of the pedaling. I probably would've kept quiet for a lot longer if I didn't think it was important to the community to respond to calls like this article, especially considering that we may be moving to a new platform soon.
(for example, focusing seems to be related to nonverbal parts, but it sort of breaks the S1/S2 dichotomy by being nonverbal but slow.)
Noncentral nitpick that is meant to be helpful: Focusing is a counterexample to the lay dual process theory that people sometimes use around here, but not the up-to-date, cognitive-scientific one.
Briefly, the key distinction (and it seems, the distinction that implies the fewest assumptions) is the amount of demand that a given process places on working memory.
nonverbal
Although language is often involved in Type 2 processing, this is likely a mere correlate of the processes by which we store and manipulate information in working memory, and not the defining characteristic per se. To elaborate, we are widely believed to store and manipulate auditory information in working memory by means of a 'phonological store' and an 'articulatory loop', and to store and manipulate visual information by means of a 'visuospatial sketchpad', so we may also consider the storage and processing in working memory of non-linguistic information in auditory or visuospatial form, such as musical tones, or mathematical symbols, or the possible transformations of a Rubik's cube, for example. The linguistic quality of much of the information that we store and manipulate in working memory is probably noncentral to a general account of the nature of Type 2 processes. Conversely, the production and comprehension of language must often be an associative or procedural process, rather than a deliberative one; otherwise you might still be parsing the first sentence of this comment. That's all technically original research and I Am Not A Cognitive Scientist, but I think it should be pretty obvious even from a layperson's perspective.
slow
There's nothing stopping Type 2 from being relatively fast, either; it's just another correlate that doesn't always hold. Trivial example: Have you ever awoken and not been able to make mental sense of what you're seeing for a few seconds? It might take you longer to do that than to perform one transformation of a Rubik's cube while fully awake, even though the former is automatic and the latter deliberate. In general, people sometimes seem to act as if there has never been a judgment that was simultaneously deliberate and fast, because they have come to describe all fast judgments as automatic. Such judgments certainly seem to occur in my experience.
See also: Evans (2013).
Comments on comments
Do you know if you'll be able to maintain their familial relationships as well?
Not sure if this should make me feel better or worse.
Yeah, post hoc rationalization or deception makes more sense than what I said.
Stipulation is obviously sometimes a cheat. I would be surprised if it was always one.
(Upvoted.) Just wanted to say, "Welcome to LessWrong."
I think this is worth pointing out because it seems like an easy mistake to use my reasoning to justify dictatorship. I also think this is an example of two ships passing in the night. Eliezer was talking about a meta-level/domain-general ethical injunction. When I was talking to the student, I was talking about how to avoid screwing up the object-level/domain-specific operationalization of the phrase 'good governance'.
My argument was that if you're asking yourself the question, "What does the best government look like?", assuming that that is indeed a right question, then you should be suspicious if you find yourself confidently proposing the answer, "My democracy." The first reason is that 'democracy' can function as a semantic stopsign, which would stop you dead in your tracks if you didn't have the motivation to grill yourself and ask, "Why does the best government look like my democracy?" The second reason is that the complement of the set containing the best government would be much larger than the set containing the best government, so if you use the mediocrity heuristic, then you should conclude that any given government in your hypothesis space is plausibly not the best government. If you consider it highly plausible that your democracy is the end state of political progress, then you're probably underestimating the plausibility of the true hypothesis. And lastly, we hope that we have thereby permitted ourselves to one day generate an answer that we expect to be better than what we have now, but that does not require the seizure of power by any individual or group.
If, in the course of your political-philosophical investigations, you find yourself attempting to determine your preference ordering over the governments in your hypothesis space, and, through further argumentation, you come to the separate and additional conclusion that dictatorship is preferable to democracy, then the ethical injunction, "Do not seize power for the good of the tribe," should kick in, because no domain is supposed to be exempt from an ethical injunction. It just so happens that you should also be suspicious of that conclusion on epistemic grounds, because the particular moral error that that particular ethical injunction is intended to prevent may often be caused by an act of self-deception. And if you add a new government to your hypothesis space, and this government somehow doesn't fit into the category 'dictatorship', but also involves the seizure of power for the good of the tribe, then the ethical injunction should kick in then too, and you should once more be suspicious on epistemic grounds as well.
What do you think about all of that?
Thanks for clarifying. It was easy for me to forget that as well as being a moderator, you're also just another user with a stake in what happens to LW.
Genuine question: Did the Apolitical Guideline become an Apolitical Rule? Or have I always been mistaken about it being a guideline?
Additional data point: I see [deleted].
I know this is on the blogroll right now, but since it was originally on Facebook I thought it might be nice to start a place for discussion on LW. Linkposts are also quite a bit more visible than the blogroll.
This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!
I at least would be interested in hearing anything else that you have to say about this topic. I'm not averse to private conversation on the matter either; most such conversations of mine are private.
Hypothesis: Fiction silently allows people to switch into truthseeking mode about politics.
A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose for the game's narrative. The conversation was mostly optimized for entertaining one another, but I found that this was a situation where I could slip in my real opinions on politics without getting wide-eyed stares! Like this one:
The question you have to ask yourself is "Do I value democracy because it is a good system, or do I value democracy per se?" A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say "This is good governance, but the purpose of government is not good governance, the purpose of government is democracy." (I do, however, understand democracy as a 'current best bet' or 'local maximum'.)
I have in fact gotten wide-eyed stares for saying things like that, even granting the final pragmatic injunction on democracy as local maximum. I find that weird, because not equating democracy with good governance seems like one of the first steps you would take toward thinking about politics clearly, not even as cognitive work but for the sake of avoiding cognitive anti-work. If you were further in the past, when the fashionable political system was not democracy but monarchy, then, even though you, like many others, consider democracy preferable to monarchy, upon a future human revealing to you the notion of a modern democracy you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance; the purpose of government is monarchy."
But because we were arguing for fictional governments, I seemed to be sending an imperceptibly weak signal that I would defect in a real tossup between democracy and something else, and thus my conversation partner could entertain my opinion whilst looking through truthseeking goggles instead of political ones.
The student is one of two people with whom I've had this precise conversation, and I do mean in the particular sense of "Which Fallout ending do I pick?" I slipped this opinion into both, and both came back weeks later to tell me that they spent a lot of time thinking about that particular part of the conversation and that the opinion I shared seemed deep. If Eliezer's hypothesis about the origin of feelings of deepness is true, then this is because they were actually truthseeking when they evaluated my opinion, and the opinion really got rid of a real cached thought: "Democracy is a priori unassailable."
In the spirit of doing accidentally effective things deliberately, if you ever wanted to flip someone's truthseeking switch, you might do it by placing the debate within the context of a fictional universe.
Something I've been meaning to say for a while:
Keyword: Utopian studies.
Genuine question: why do you anticipate that we'll assume that you're being disingenuous?
If you don't get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)
I've always really liked this idea. I already do the toothbrushing thing. Hygiene's a good category to pull from. A few others I use:
- Feeling of cold air
- Warmth of sunlight
- Warmth of water, be it bathing, dishwashing, etc.
- Smell of clean laundry
- Smell of coffee/warm beverages
- Feel of wearing freshly cleaned clothing
- Feel of a fresh shave
- I often have long hair, and notice that my hair actually starts to feel heavier the longer it goes unwashed, so, the lightness of freshly washed hair
But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.
I'm often afraid of being an unwanted participant, so I've thought about this particular point somewhat. The worst case version of this phenomenon is the Eternal September, when the newbies become so numerous that the non-newbies decide to exit en masse.
I think there's something important that people miss when they think about the Eternal September phenomenon. From Wikipedia:
Every September, a large number of incoming freshmen would acquire access to Usenet for the first time, taking time to become accustomed to Usenet's standards of conduct and "netiquette". After a month or so, these new users would either learn to comply with the networks' social norms or tire of using the service.
The lever that everyone has already thought to pull is the 'minimize number of new users' lever, primed perhaps by the notion that not pulling this lever apparently resulted in the destruction of Usenet. Additionally, social media platforms often have moderation features that make pulling this lever very easy, and thus even more preferable.
But cultures don't have to leave new users to learn social norms on their own; you could pull the 'increase culture's capacity to integrate new users' lever. It makes sense that that lever hasn't been pulled, because it requires more coordination than the alternative. This post calls for a similar sort of coordination, so it seems like a good place for me to mention this possibility.
This applies not just to social norms, but to shared concepts, especially in cultures like this one, where many of the shared concepts are technical. It's easy to imagine that everyone who decreases discussion quality lacks the desire or wherewithal to become someone who increases discussion quality, but some newbies may have the aspirations and capability to become non-newbies, and it's better for everyone if that's made as easy as possible. In that way, I find that some potential improvements are not difficult to imagine.
"letting people sell their organs after they're dead doesn't seem like it would increase the supply that much"
seems very suspect. If you could sell the rights to your organs, there's now incentive to set up a "pay people to be signed up for organ donation" business. This is also not harmful to the donor, unlike kidneys.
True. More than anything I was trying to bite off a small piece of the larger 'organ market question'. Given your comment, a better way to do this would have been to note that even perfectly allocating all cadaveric organs would still be insufficient to get a kidney to everyone who needs one. Although one thing I don't like about your proposal is that things could get very shady if people 'don't consent' to have their organs taken after they've already sold their rights and have therefore 'legally consented'. In my scheme I imagine people not getting paid unless the kidney's already out.
Also, for added horror, a link to this may be worth including somehow.
Just for added horror, or is there a larger point? (It's okay if there's no larger point. I ask because I've seen a general 'you don't want to legally create new incentives around organ trade, look at China' sort of objection that I didn't address in the article and that I would be prepared to address if that's where you're going.)