Open thread, Jan. 26 - Feb. 1, 2015
post by Gondolinian · 2015-01-26T00:46:22.484Z · LW · GW · Legacy · 432 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
432 comments
Comments sorted by top scores.
comment by falenas108 · 2015-01-26T18:38:56.909Z · LW(p) · GW(p)
Natural experiments: I've been trying a new acne wash for the past 6 months, and although I felt like it was working, I wasn't sure. Then, the other day when I was applying it to my back, my partner noticed there was an area I wasn't reaching. In fact, there was an entire line on my back where I wasn't stretching enough to get the wash on. This line coincided exactly with a line of acne, while the rest of my back was clear.
Now I know the wash works for me.
Related: http://xkcd.com/700/
↑ comment by [deleted] · 2015-01-27T14:37:40.593Z · LW(p) · GW(p)
Do you mind sharing the brand and product name, for others?
↑ comment by falenas108 · 2015-01-27T19:10:26.900Z · LW(p) · GW(p)
I've been using this benzoyl peroxide wash.
comment by Kawoomba · 2015-01-28T20:41:30.510Z · LW(p) · GW(p)
Strong statement from Bill Gates on machine superintelligence as an x-risk, on today's Reddit AMA:
"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
↑ comment by savedpass · 2015-01-29T11:57:58.892Z · LW(p) · GW(p)
"It seems pretty egocentric while we still have malaria and TB for rich people to fund things so they can live longer. It would be nice to live longer though I admit."
↑ comment by Dirac_Delta · 2015-01-31T09:26:14.392Z · LW(p) · GW(p)
By that line of reasoning, we should not be funding space exploration, etc., either...
(I take his comment to mean that we should not be funding life extension research because it is egocentric.)
comment by JoshuaZ · 2015-01-27T19:03:44.195Z · LW(p) · GW(p)
Someone here mentioned the idea that making objects very cold was a more plausible source of unexpected physics leading to extinction than high energy physics, because high energy events occur in the atmosphere all the time whereas there's no reason to expect any non-artificial cause of temperatures in the millikelvin range. Does someone have a source for this observation? I'm writing a post where I'd like to attribute this properly.
↑ comment by Manfred · 2015-01-28T01:24:18.945Z · LW(p) · GW(p)
Hm, that's an interesting idea. I don't think it's at all workable, though. You can't mess up a stable equilibrium by making it colder, so the only option I see for cool novel physics at low temps is a spontaneously broken symmetry. Which will then re-symmetrize as it heats up in a way that is much more guaranteed than heating something up and then cooling it down.
↑ comment by Gunnar_Zarncke · 2015-01-29T23:15:32.047Z · LW(p) · GW(p)
I think it is interesting insofar as you not only create setups with very low temperature but also with very regular structures. Structures where quantum computing is possible and exploits the calculation power of reality. I think it's at least conceivable that this could trigger novel effects. Especially so if the universe is a simulation, where this could cause things like stack overflow :-)
↑ comment by JoshuaZ · 2015-01-28T01:46:39.929Z · LW(p) · GW(p)
I agree that it doesn't seem likely: part of why I wanted it was not the specific scenario but the point that it isn't always obvious when we are pushing the universe into configurations that material is not naturally in and that don't appear in nature in any way.
↑ comment by Gunnar_Zarncke · 2021-06-25T16:31:21.879Z · LW(p) · GW(p)
Can't be so bad:
Macroscopic object (40kg) cooled to 77 nanokelvin:
https://news.mit.edu/2021/motional-ground-state-ligo-0618
comment by JoshuaZ · 2015-01-26T02:16:00.082Z · LW(p) · GW(p)
Sometimes when one learns something it makes many other things "click" by making them all make sense in a broader framework. Moreover, when this happens I'm astounded that I hadn't learned about the thing in the first place. One very memorable such occasion is when I learned about categories and how many different mathematical structures could be thought of in that context. Do people have other examples where they have thought, "Wow. That makes so much sense. Why didn't anyone previously say that?"
↑ comment by emr · 2015-01-26T04:48:25.492Z · LW(p) · GW(p)
Basic game theory: Nash equilibria and the idea of evolutionary game theory.
An unbelievable number of human problems map onto the property that a particular Nash equilibrium or evolutionarily stable strategy isn't guaranteed to be socially desirable (or even Pareto-efficient, or even the best among the other Nash equilibria).
Likewise, you really can't do non-trivial consequentialist reasoning without accounting for the impact of your proposed strategy on the strategies of other agents.
Once you've seen the patterns, you can avoid painstakingly deriving or arguing for the general picture over and over again, which probably consumes about a third of all policy and ethics debate. And more critically, you can avoid missing the importance of interlocking strategies in cases where it does matter: Another third of public debate is reserved for wondering why people are acting in the way that the actions of other people encourage them to act; or for helpfully suggesting that some group should move unilaterally along a moral gradient, and then blindly assuming that this will lead to a happier equilibrium once everything adjusts.
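To make the first point concrete, here is a minimal sketch (an editorial illustration, not part of the original comment; the payoff numbers are the standard textbook ones) that brute-forces the pure-strategy Nash equilibria of a Prisoner's Dilemma. The unique equilibrium, mutual defection, is Pareto-dominated by mutual cooperation:

```python
# A minimal sketch: find the pure-strategy Nash equilibria of a
# Prisoner's Dilemma. The only equilibrium (D, D) pays (1, 1), even
# though (C, C) would pay (3, 3) -- equilibrium != socially desirable.
from itertools import product

ACTIONS = ("C", "D")  # cooperate, defect
# PAYOFFS[(row, col)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    # Nash equilibrium: neither player gains by unilaterally deviating.
    row_best = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(r, col)][0] for r in ACTIONS)
    col_best = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, c)][1] for c in ACTIONS)
    return row_best and col_best

for row, col in product(ACTIONS, ACTIONS):
    if is_nash(row, col):
        print((row, col), PAYOFFS[(row, col)])  # prints: ('D', 'D') (1, 1)
```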
↑ comment by Ishaan · 2015-01-26T05:58:34.964Z · LW(p) · GW(p)
1) The idea of constructing things out of axioms. This is probably old hat to everyone here, but I was clumsily groping towards how to describe a bunch of philosophical intuitions I had, and then I was learning math proofs and understood that any "universe" can be described in terms of a set of statements, and suddenly I understood what finally lay at the end of every chain of why?s and had the words to talk about a bunch of philosophical ideas...not to mention finally understanding what math is, why it's not mysterious if physics is counterintuitive, and so on. (Previously I had thought of "axioms" as "assumptions", rather than building blocks.) Afterwards, I felt a little cheated, because it is a concept much simpler than algebra and it ought to have been taught in grade school.
2) Something more specialized: I managed to get a B.S. in neuroscience without knowing about the thalamus. I mean, I knew the word and I knew approximately where it was and what it did, but I did not know that it was the hub for everything. (By which I mean, nearly every connection is either cortico-cortico or cortico-thalamic.) After graduation, I was involved in a project where I had to map out the circuitry of the hippocampus, and suddenly... Oh! This is clearly one of the single most important organizational principles of the brain and I had no idea. After that, a whole bunch of other previously arbitrary facts gradually began to make sense...Why did no one simply show us a picture of a connectome before and point out that big spot right in the middle where it all converges?
3) We learned all these minutiae of history, but no one really talked about the hunter-gatherer <--> agriculture transition and its causes. Suddenly, historical trends in religion, the demographic transition, nutrition, exercise, cultural differences, and a bunch of other things start clicking together.
I think what all these 3 things have in common is that they really ought to have been among the very first lessons in their respective subjects...but somehow they were not.
↑ comment by [deleted] · 2015-01-26T13:54:52.173Z · LW(p) · GW(p)
This is a pet peeve of mine: axioms as assumptions (or self-evident truths) seem to be a very prevalent mode of thinking in educated people not exposed to much formal maths.
↑ comment by Lumifer · 2015-01-26T17:01:34.033Z · LW(p) · GW(p)
What's wrong with treating axioms as assumptions?
↑ comment by [deleted] · 2015-01-26T22:55:58.954Z · LW(p) · GW(p)
Well, it's hard to articulate. There's of course nothing wrong with assumptions per se, since axioms indeed are assumptions; my peeve is with the baggage that comes with them. People say things like "what if the assumptions are wrong?", or "I don't think that axiom is clearly true", or "In the end you can't prove that your axioms are true".
These questions would be legitimate if the goal were physical truth, or a self-justifying absolute system of knowledge or whatever, but in the context of mathematics, we're not so interested in the content of the assumptions as we are in the structure we can get out of them.
In my experience, this kind of thing happens most often when philosophically inclined people talk about things like the Peano axioms, where it's possible to think we're discussing some ideal entity that exists independently of thought, and disappears when people are exposed to, say, the vector space axioms, or some set of axioms of set theory, where it becomes clear that axioms aren't descriptions but definitions.
Actually, you can ignore everything I've said above, I've figured out precisely what I have a problem with. It's the popular conception of axioms as descriptive rather than prescriptive. Which, I suppose OP was also talking about when they mentioned building blocks as opposed to assumptions.
↑ comment by Lumifer · 2015-01-27T01:37:40.058Z · LW(p) · GW(p)
People say things like "what if the assumptions are wrong?"
That's a valid question in a slightly different formulation: "what if we pick a different set of assumptions?"
"In the end you can't prove that your axioms are true"
But that, on the other hand, is pretty stupid.
It's the popular conception of axioms as descriptive rather than prescriptive.
Well, normally you want your axioms to be descriptive. If you're interested in reality, you would really prefer your assumptions/axioms to match reality in some useful way.
I'll grant that math is not particularly interested in reality and so tends to go off on exploratory expeditions where reality is seen as irrelevant. Usually reality does turn out to be irrelevant, but sometimes the mathematicians find a new (and useful) way of looking at reality and so the expedition does loop back to the real.
But that's a peculiarity of math. Outside of that (as well as some other things like philosophy and literary criticism :-D) I will argue that you do want axioms to be descriptive.
↑ comment by Ishaan · 2015-01-27T00:51:33.947Z · LW(p) · GW(p)
I don't think it's "wrong" in the sense of "incorrect"... it's just that if you don't also realize that axioms are arbitrarily constructed "universes" and that all math takes place in the context of said fictional "universes", you kind of miss the deeper point. Thinking of them as assumptions is a simple way to teach them to beginners, but that's a set of training wheels that ought to be removed sooner rather than later, especially if you are using axioms for math.
And, as a handy side effect, your intuition for epistemology gets better when you realize that. (In my opinion.)
↑ comment by Lumifer · 2015-01-27T01:47:36.417Z · LW(p) · GW(p)
if you don't also realize that axioms are arbitrarily constructed "universes"
Well, they are a set of assumptions on the basis of which you proceed forward. Starting with a different set will land you in a different world built on different assumptions. But I see it as a characteristic of assumptions in general, I still don't see what's so special about axioms.
↑ comment by Kindly · 2015-01-27T02:20:43.953Z · LW(p) · GW(p)
When you assume the parallel postulate, for example, you are restricting your attention to the class of models of geometry in which the parallel postulate holds. I don't think that's a useful way of thinking about other kinds of assumptions such as "the sun will rise tomorrow" or "the intended audience for this comment will be able to understand written English".
(At least for me, I think that the critical axiom-related insight was the difference between a set of axioms and a model of those axioms.)
↑ comment by Lumifer · 2015-01-27T03:12:09.252Z · LW(p) · GW(p)
I don't think that's a useful way of thinking about other kinds of assumptions such as "the sun will rise tomorrow" or "the intended audience for this comment will be able to understand written English".
What is useful depends on your goals. The difference is still not clear to me -- e.g. by assuming that "the intended audience for this comment will be able to understand written English" you are restricting your attention to the class of situations in which people to whom you address your comment can understand English.
↑ comment by Ishaan · 2015-01-27T05:18:41.549Z · LW(p) · GW(p)
What is useful depends on your goals
When your goal is to do good mathematics (or good epistemology, but that's a separate discussion) you really want to do that "restrict your attention" thing.
Human intuition is to treat assumptions as part of a greater system. "It's raining" is one assumption, but you can also implicitly assume a bunch of other things, like "rain is wet", to arrive at statements like "it's raining => wet".
This gets problematic in math. If I tell you axioms "A=B" and "B=C", you might reasonably think "A=C"...but you just implicitly assumed that = followed the transitive property. This is all well and good for superficial maths, but in deeper maths you need to very carefully define "=" and its properties. You have to strip your mind bare of everything but the axioms you laid down.
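To make that concrete, here is a tiny sketch in a proof assistant (my own illustration, assuming Lean 4 syntax; the names Obj, eq', and eq'_trans are invented for the example). Nothing about the relation comes for free; transitivity has to be laid down explicitly before A=C follows:

```lean
-- A sketch: an equality-like relation on an abstract type.
-- Nothing is assumed about eq'; transitivity must be stated as an axiom.
axiom Obj : Type
axiom eq' : Obj → Obj → Prop
axiom eq'_trans : ∀ a b c : Obj, eq' a b → eq' b c → eq' a c

-- Only with that axiom in place does A=C follow from A=B and B=C.
example (A B C : Obj) (h₁ : eq' A B) (h₂ : eq' B C) : eq' A C :=
  eq'_trans A B C h₁ h₂
```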
It's mostly about getting in the habit of imagining the universe as completely nothing until the axioms are introduced. No implicit beliefs about how things ought to work. All must be explicitly stated. That's why it's helpful to have the psychology of "putting building blocks in an empty space" rather than "carving assumptions out of an existing space".
I mean, that's not the only way of thinking about it, of course. Some think of it as an infinite number of "universes" and then a given axiom "pins down" a subset of those, and I guess that's closer to "assumption" psychology. It's just a way of thinking, you can choose what you like.
The really important thing is to realize that it's not just about making operations that conserve truth values... that all the mathematical statements are arbitrarily constructed. That's the thing I didn't fully grasp before...I thought it was just about "suppose this is true, then that would be true". I thought 1+1=2 was a "fact about the actual universe" rather than a "tautology" - and I didn't quite grasp the distinction between those two terms. Until I broke free of this limitation, I wasn't able to think thoughts like "how would geometry be if the parallel postulate weren't true?", because, well, "obviously (said my incorrect intuition) the parallel postulate is factual and how can you even start considering how things would look without it?"
...as I write this, I'm realizing that this is a really hard misconception to explain to one who has never suffered from it, because the misconception seems rather bizarre in hindsight once you are set right. Maybe you just intuitively get it and so aren't seeing why some people would be led astray by thinking of it as an assumption.
Reading your reply to me, you do seem to have your thoughts correct, and you seem to gravitate toward the "pin down" way of thinking, so I think for you it is perfectly okay to mentally refer to them as assumptions. But it confused me.
↑ comment by Lumifer · 2015-01-27T06:44:31.313Z · LW(p) · GW(p)
I think I see what you mean. I would probably describe it not as a difference in the properties of axioms/assumptions themselves, but rather a difference in the way they are used and manipulated, a difference in the context.
I do not recall a realization similar to yours, however, perhaps because thinking in counterfactuals and following the chain of consequences comes easy to me. "Sure, let's assume A, it will lead to B, B will cause C, C is likely to trigger D which, in turn, will force F. Now you have F and is that what you expected when you wanted A?" -- this kind of structure is typical for my arguments.
But yes, I understand what you mean by blocks in empty space.
↑ comment by Ishaan · 2015-01-27T17:20:22.777Z · LW(p) · GW(p)
I don't think this is really the same skill as following counterfactuals and logical chains and judging internal consistency. Maybe the "parallel postulate" counterfactual was a bad example.
It's more the difference between
"Logic allows you to determine what the implications of assumptions are, and that's useful when you want to figure out which arguments and suppositions are valid" (This is where your example about counterfactuals and logical chains comes in) [1]
and
"Axioms construct / pin down universes. Our own universe is (hopefully) describable as a set of axioms". (This is where my example about building blocks comes in) [2]
Starting with a different set will land you in a different world built on different assumptions. But I see it as a characteristic of assumptions in general, I still don't see what's so special about axioms.
And that's a good way of bridging [1] and [2].
↑ comment by Lumifer · 2015-01-27T21:08:22.254Z · LW(p) · GW(p)
Axioms construct / pin down universes.
I am not too happy with the word "universe" here because it conflates the map and the territory. I don't think the territory -- "our own universe", aka the reality -- is describable as a set of axioms.
I'll accept that you can start with a set of axioms and build a coherent, internally consistent map, but the question of whether that map corresponds to anything in reality is open.
↑ comment by Ishaan · 2015-01-27T23:45:38.020Z · LW(p) · GW(p)
I don't think the territory -- "our own universe", aka the reality -- is describable as a set of axioms.
I very strongly do. I think the universe is describable by math. I think there exist one or more sets of statements that can describe the observable universe in its entirety. I can't imagine the alternative, actually. What would that even be like?
That's actually the only fundamental and unprovable point that I take on faith, from which my entire philosophy and epistemology blossoms. ("Unprovable" and "faith" because it relies on you to buy into the idea of "proof" and "logic" in the first place, and that's circular)
I don't necessarily think we can find such a set of axioms, mind you. I can't guarantee that there are a finite number of statements required, or that the human mind is necessarily capable of producing/comprehending said statements, or even that any mind stuck within the constraints of the universe itself is capable. (I suppose you can take issue with the use of the word "describable" at this point.) But I do think the statements exist, in some platonic sense, and that if we buy into logic we can at least know that they exist even if we can't know them directly. (In the same sense that we can often know whether or not a solution exists even if it's impossible to find.)
No "universally compelling arguments in math and science" applies here: I can't really prove it to you, but I think anyone who believes in a lawful, logical universe will come around to agree after thinking about it long enough.
↑ comment by JoshuaZ · 2015-01-27T23:51:45.444Z · LW(p) · GW(p)
I very strongly do. I think the universe runs on math. I think there exist one or more sets of statements that can describe the universe in its entirety. I can't imagine the alternative, actually.
What if it requires an infinite set of statements to specify? Consider the hypothetical of a universe where there are no elementary particles but each stage is made up of something still simpler. Or consider something like the Standard Model but where the constants are non-computable. Would either of these fit what you are talking about?
↑ comment by Ishaan · 2015-01-28T00:13:37.359Z · LW(p) · GW(p)
Yes, that would fit in what I am talking about. I have a bad habit of constantly editing posts as I write, so you might have seen my post before I wrote this part.
I don't necessarily think we can find such a set of axioms, mind you. I can't guarantee that there are a finite number of statements required, or that the human mind is necessarily capable of producing/comprehending said statements, or even that any mind stuck within the constraints of the universe itself is capable. (I suppose you can take issue with the use of the word "describable" at this point.) But I do think the statements exist, in some platonic sense, and that if we buy into logic we can at least know that they exist even if we can't know them directly. (In the same sense that we can often know whether or not a solution exists even if it's impossible to find.)
Such a universe wouldn't even necessarily be "complicated". A single infinite random binary string requires an infinitely long statement to fully describe (but we can at least partially pin it down by finitely describing a multiverse of random binary strings)
↑ comment by JoshuaZ · 2015-01-28T00:22:43.449Z · LW(p) · GW(p)
Yes, thank you, I don't think that was there when I read it. I'm not sure then that the statement that the universe runs on math has much meaning at that point.
↑ comment by Ishaan · 2015-01-28T00:46:02.550Z · LW(p) · GW(p)
It seems self evident once you get it, but it's not obvious.
In the general population you get these people who say "well, if it's all just atoms, what's the point?" They don't realize that everything has to run on logic regardless of whether the underlying phenomenon is atoms or souls or whatever. (Or at least, they don't agree. I shouldn't say "realize" because the whole thing rests on circular arguments.)
It also provides a sort of ontological grounding onto which further rigor can be built. It's nice to know what we mean when we say we are looking for "truth".
↑ comment by Lumifer · 2015-01-28T16:36:30.660Z · LW(p) · GW(p)
Interesting. We seem to have a surprisingly low-level (in the sense of "basic") disagreement.
A couple of questions. Does your view imply that the universe is deterministic? And if "I can't guarantee ... even that any mind stuck within the constraints of the universe itself is capable", then I am not sure what your position actually means. Existing "in some platonic sense" is a very weak claim, much weaker than "the universe runs on math" (and, by implication, nothing else).
↑ comment by Ishaan · 2015-01-30T04:09:43.159Z · LW(p) · GW(p)
Does your view imply that the universe is deterministic?
No, randomness is a thing.
what your position actually means.
Practically, it means we'll never run into logical contradictions in the territory.
Theoretically, it means we will never encounter a phenomenon that in theory (in a platonic sense) cannot be fully described. In practice, we might not be able to come up with a complete description.
In a platonic sense, the territory must have at least one (or more) maps that totally describes it, but these maps may or may not be within the space of maps that minds stuck within the constraints of said territory can create.
a very weak claim,
As the only claim that I've been taking on faith and the foundation for all that follows, it is meant to be a weak claim.
I'm trying to whittle down the principles I must take on faith before forming a useful philosophy to as small a base as possible, and this is where I am at right now.
Descartes's base was "I think, therefore I am", and from there he develops everything else he believes. My base is "things are logical" (which further expands into "all things have descriptions which don't contain contradictions").
↑ comment by Lumifer · 2015-01-30T16:26:24.494Z · LW(p) · GW(p)
the territory must have at least one (or more) maps
Maps require a mind, a consciousness of some sort. Handwaving towards "platonic sense" doesn't really solve the issue -- are you really willing to accept Plato's views of the world, his universals?
As the only claim that I've been taking on faith and the foundation for all that follows, it is meant to be a weak claim.
The problem is that, as stated, this claim (a) could never be decided; and (b) has no practical consequences whatsoever.
↑ comment by Ishaan · 2015-01-30T20:02:50.234Z · LW(p) · GW(p)
Maps require a mind, a consciousness of some sort.
Think of it this way: Godel's incompleteness theorem demonstrates there will always be statements about the natural numbers that are true, but that are unprovable within the system. It's perfectly okay for us to talk about those hypothetical statements as existing in the "platonic" sense, even though we might never really have them in the grasps of our minds and notebooks.
Similarly, it's okay for us to talk about a space of maps even while knowing we can't necessarily generate every map in that space due to constraints on us that might exist. I haven't actually read any Plato, so I might be misusing the term. I'm just using the word "platonic" to describe the entire space of maps, including the ungraspable ones. "Platonic" is merely to distinguish those things from things that actually exist in the territory.
The problem is that, as stated, this claim (a) could never be decided; and (b) has no practical consequences whatsoever.
part a) I endorse Dxu's defense of what I said, and see my reply to him for my objections to what he said.
part b) I disagree in principle with the idea that the validity of things depends on practical consequences. However, the whole point here is to create a starting point from which the rest of everything can be derived, and the rest of everything does have practical consequences.
(it may be fair to say that there is no practical reason to derive them from a small starting point, but that is questioning the practicality of philosophy in general)
↑ comment by Lumifer · 2015-02-02T18:20:42.354Z · LW(p) · GW(p)
I'm just using the word "platonic" to describe the entire space of maps, including the ungraspable ones.
So, you're talking about things you can, basically, imagine.
In which sense do "ungraspable maps" exist, but herds of rainbow unicorns gallivanting on clouds do not?
I disagree in principle with the idea that the validity of things depends on practical consequence
I concur with your disagreement :-) but here we have TWO things: (1) unprovable and unfalsifiable; and (2) of no practical consequences.
Consider the claim that there is God, He created the universe, but then left forever. The same two things could be said of this claim as well.
↑ comment by Ishaan · 2015-02-02T18:43:33.828Z · LW(p) · GW(p)
So, you're talking about things you can, basically, imagine.
Yes, all the logically consistent systems we can imagine, and more. (See the Godel analogy above for "and more".)
In which sense do "ungraspable maps" exist, but herds of rainbow unicorns gallivanting on clouds do not?
You...can't imagine logically coherent systems with rainbow unicorns on clouds?
Keep in mind, we're making distinctions between "real tangible reality" and "the space of logically coherent systems". Your reductio ad absurdum works by using the word "exist" to confound those two, in a "tree falls in the forest" sort of manner. I specifically used the word "platonic" hoping to separate those ideas. It's merely an inconvenience of language that we don't have the words to distinguish the tautological "reality" and "existence" of 1+1=2 from the reality of "look, there's a thing over there". People say "in the integers, there exists an odd number between every even number" but it's not that sort of "existence".
↑ comment by dxu · 2015-01-30T16:59:28.100Z · LW(p) · GW(p)
Maps require a mind, a consciousness of some sort.
Really? If I wrote a physics engine in, let's say, Java, is that not a(n approximate) map of physical reality? I would say so. Yet the physics engine isn't conscious. It doesn't have a mind. In fact, the simulation isn't even dependent on its substrate--I could save the bytecode and run it on any machine that has the JVM installed. Moreover, the program is entirely reducible to a series of abstract (Platonic) mathematical statements, no substrates required at all, and hence no "minds" or "consciousness" required either.
In what sense is the physics engine described above not a map?
The problem is that, as stated, this claim (a) could never be decided;
Hence, I assume, Ishaan's use of the word "faith".
and (b) has no practical consequences whatsoever.
In practice, it means we will never find something in the territory that is logically contradictory. (Not that we're likely to find such a thing in the first place, of course, but if we did, it would falsify Ishaan's claim, so it's not unfalsifiable, though it is untestable. Seeing that Ishaan has stated that he/she is taking this claim "on faith", though, I can't see that untestability is a big issue here.)
Personally, I disagree with Ishaan's approach of taking anything on faith, even logic itself. That being said, if you really need to take something on faith, I have trouble thinking of a better claim to do so with than the claim that "everything has a logical description".
↑ comment by Ishaan · 2015-01-30T19:38:07.676Z · LW(p) · GW(p)
I disagree with Ishaan's approach of taking anything on faith, even logic itself.
Let me unpack "faith" a little bit, then, because it's not like regular faith. I only use the word "faith" because it's the closest word I know to what I mean.
I agree with the postmodern/nihilist/LessWrong idea of "no universally compelling arguments" in morality, math, and science. Everything that comes out of my mind is a property of how my mind is constructed.
When I say that I take logic "on faith", what I'm really saying is that I have no way to justify it, other than that human minds run that way (insert disclaimers about, yes, I know human minds don't actually run that way)
I don't have a word to describe this, the sense that I'm ultimately going to follow my morality and my cognitive algorithm even while accepting that is not and cannot be justification for them outside my own mind. (I kinda want to call this "epistemic particularism" to draw an analogy from political particularism, but google says that term is already in use and I haven't read about it so I am not sure whether or not it means what I want to use it for. I think it does, though.)
Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared.
I think there would exist a way to logically describe the universe Eliezer would find himself in.
(There are redefinitions, but those are not "situations", and then you're no longer talking about 2, 4, =, or +.) But that doesn't make my belief unconditional.
I disagree with Eliezer here. If the people in this universe want to use "2", "3", and "+" to describe what is happening to them, then their "3" does not have the same meaning as our "3". We are referring to something with integer properties, and they are referring to something with other properties. I think Wittgenstein would have a few choice words for Eliezer here (although I've only read summaries of his thoughts, I think he's basically saying what I'm saying).
I don't think Eliezer should be interpreted as admitting that the territory might be illogical. I think he just made a mistake concerning what definitions are. (I'm not saying your interpretation is unreasonable. I'm saying that the fact that your interpretation is reasonable is a clue that Eliezer made a logical error somewhere, and this is the error I think he made. I'd be curious to know if he'd agree with me that he made an error in saying 2+2=3 is not a re-definition. Judging from his other writing I suspect he would.)
(And again, it's circular because it has to be. The fact that your perfectly logical interpretation of Eliezer basically just invoked the Principle of Explosion indicates the statements themselves contain a logical error, but none of this works if you don't buy into logic to begin with. You're throwing out logic, and I'm convincing you that this is illogical - which is a silly thing to do, but still.)
Eliezer's weird universe is still in the space of logic-land. We've just constructed a different logical system, where 2+2=3 because they are differently defined now. It's not like Eliezer is simultaneously experiencing and not experiencing three earplugs or something. An illogical world isn't merely different from our world - it's incomprehensible and indescribable nonsense insofar as our brain is concerned. If you're still looking at evidence and drawing conclusions, you're still using logic. (Inb4 paraconsistent and fuzzy logic: the meta-rules handling the statements still use the same tautology-contradiction structure common to all math.)
↑ comment by g_pepper · 2015-01-30T17:33:14.997Z · LW(p) · GW(p)
If I wrote a physics engine in, let's say, Java, is that not a(n approximate) map of physical reality? I would say so. Yet the physics engine isn't conscious. It doesn't have a mind.
True, but it was written by someone with a conscious mind (you), just as a map drawn on paper by a cartographer was drawn by someone with a mind.
↑ comment by Lumifer · 2015-01-30T17:14:28.129Z · LW(p) · GW(p)
If I wrote a physics engine in, let's say, Java, is that not a(n approximate) map of physical reality?
An interesting question. No, I am not sure I want to define maps this way. Would you, for example, consider the distribution of ecosystems on Earth to be a map of the climate?
I tend to think of maps as representations and these require an agent.
we will never find something in the territory that is logically contradictory
I don't understand what this means -- logic is in the mind. Can you give me an example that's guaranteed to be not a misunderstanding? By a "misunderstanding" I mean something like the initial reaction to the double-slit experiment: there is a logical contradiction, the electron goes through one slit, but it goes through both slits.
↑ comment by Ishaan · 2015-01-30T20:04:29.341Z · LW(p) · GW(p)
Can you give me an example that's guaranteed to be not a misunderstanding?
You can never perceive red and not perceive red simultaneously, but if you could, that would embody a logical contradiction in the territory.
("Tree falls in the forest"-type wordplay doesn't count.)
the initial reaction to the double-slit experiment: there is a logical contradiction, the electron goes through one slit, but it goes through both slits.
That is not a contradiction in the evidence, that is simply a falsification of a prior hypothesis (as well as a violation of human physical intuition). However, if you were to insist upon retaining your old model of the universe after seeing the results of the experiment, then you would have a contradiction within your view of reality (which must accommodate both your previous beliefs and the new evidence).
This is the sort of thing I meant when I said earlier in the thread that the insight I'm referring to here is what led me to realize that there is nothing particularly odd about intuition-violating physics. There's no reason the axioms of the universe need to be intuitive - they need only be logically consistent.
But, it's good that you brought up this example: I think Eliezer's example that dxu linked, with 2+2=3, is similar to the double-slit experiment - it's violating prior intuitions and hypotheses about the world, not violating logic.
↑ comment by Lumifer · 2015-02-02T18:14:12.050Z · LW(p) · GW(p)
You can never perceive red and not perceive red simultaneously, but if you could, that would embody a logical contradiction in the territory.
I don't understand. Perception happens in the mind, I don't see anything unusual about the ability to screw up a mind (via drugs, etc) to the extent that it thinks it perceives red and does not perceive red simultaneously. Why would that imply a "logical contradiction in the territory"?
↑ comment by Ishaan · 2015-02-02T18:35:07.597Z · LW(p) · GW(p)
I'm not talking about a mind that thinks it perceives red even when it doesn't perceive red - that's "tree falls in the forest" thinking. I'm talking about simultaneously thinking you perceive red and not thinking you perceive red.
But yes - you could screw up a mind sufficiently such that it thinks it's perceiving red and not perceiving red simultaneously. Such a mind isn't following the normal rules (and the rules of logic and so on arise from the rules of the mind in the first place, so of course you could sufficiently destroy or disable a mind such that it no longer thinks that way - there's no deeper justification, so you are forced to trust the normal mental process to some degree...that's what the "no universally compelling arguments, and therefore you just have to trust yourself" spiel I was giving higher in the thread stems from).
↑ comment by Lumifer · 2015-02-02T18:55:33.853Z · LW(p) · GW(p)
But you said "that would embody a logical contradiction in the territory" and that doesn't seem to be so any more.
My original question, if you recall, was for an example of something -- anything -- that would falsify your theory.
↑ comment by Ishaan · 2015-02-02T19:20:31.288Z · LW(p) · GW(p)
I guess I bite the bullet - there is no real falsifying here. I did say you have to take it on faith to an extent because there is no other way. It's a foundational premise for building an epistemic structure, not a theory as such.
Anyhow, I'm not sure we're talking about the same thing anymore. If you don't accept that the universe follows a certain logic, the idea of "falsifying" has no foundation anyway.
↑ comment by Kindly · 2015-01-27T04:10:44.443Z · LW(p) · GW(p)
Well, I was thinking that in those other cases, you consider the other possibility (e.g., that nobody who reads my comment will understand it) and dismiss it as unlikely or unimportant. It doesn't even make sense to ask "but what if it turns out that the parallel postulate doesn't actually hold after all?"
Am I explaining myself any better?
↑ comment by passive_fist · 2015-01-26T10:21:15.939Z · LW(p) · GW(p)
I managed to get a B.S. in neuroscience without knowing about the thalamus.
About that, how would you evaluate the state of the typical undergrad neuroscience curriculum today and how relevant it is to modern knowledge about the organization and workings of the brain?
↑ comment by Ishaan · 2015-01-27T01:24:54.668Z · LW(p) · GW(p)
Hmm
I think the undergraduate curriculum is good enough to get the average college student up to a level where they are comfortable reading and understanding a scientific paper in biology even if they start out with only a very rudimentary knowledge of science coming in.
You spend the first 3 years kind of learning the basic fundamentals of biology, like how evolution and cells and hormones and neurons work, and I think for Lesswrong's demographic that sort of thing is general knowledge so most of y'all could skip it without any major issues. I found these courses challenging because of the amount of stuff to memorize, but I am not sure I found them useful. I kind of wish I could have replaced some of those introductory courses with work in computer science or a stronger biochem/chem foundation, because I already knew about evolution and mitochondria and that sort of thing.
The last 2 years, for me, were quite useful. In these upper level classes, professors in a given sub-specialty would select primary literature which was important to their field, and it would be discussed in depth. What you learn will largely depend on what the professor is passionate about. There are also classes where many different researchers come in and present their work, and one ends up picking up many little threads from that which can later be pursued at leisure. Despite already being comfortable with primary literature in neuroscience and psychology before joining these classes, I still found them very useful because of the specific content of the work I was exposed to. Many of these courses were technically graduate courses, but it is standard for undergraduates to attend them.
Overall, I think if you are generally comfortable reading scientific papers in biology, bachelor's-degree-level neuroscience is not an extremely difficult subject for a motivated autodidact to acquire without formal coursework (assuming you have access to all the major scientific journals). The coursework is good - but there's no secret esoteric knowledge you can only acquire from the coursework. It's not necessarily better than self-study, but it's awesome when combined with self-study and is fairly decent even without any self-directed study.
Direct contact with researchers is definitely a very positive thing for keeping a pulse on stuff, knowing what is important to study and what isn't, and generally learning faster than you could on your own. You're also expected to join a lab during your undergrad, and you will inevitably learn a lot through that process.
TL;DR - As with many fields, if you want to be up to date on modern knowledge, there is absolutely no substitute for constantly skimming the new papers that come out. The undergraduate curriculum spends 2-3 years preparing you to successfully read a scientific paper in biology, and 1-2 years having you read papers which are selected to be particularly important in the field. Also, you will typically join a lab, which is certain to cause learning.
↑ comment by Pfft · 2015-01-26T16:46:33.453Z · LW(p) · GW(p)
a concept much simpler than algebra and it ought to have been taught in grade school.
Well, algebra is also not taught in grade school. Considering Piaget's theory of cognitive development, with abstract thought only getting in place in the teens, I wonder if maybe it's not possible to teach it until middle/high school, even if it's simple once the cognitive machinery is activated...
↑ comment by Ishaan · 2015-01-27T00:46:56.471Z · LW(p) · GW(p)
I might have misused the term - I thought up to 8th grade and perhaps even 12th grade was "grade school"? I got my sister to think algebraically with non-geometrical problems, and then apply that successfully to a novel geometrical problem with perimeters and volume when she was 10...but she wasn't able to retain it after a week. Later on when she learned it in school at an older age, she did retain it. I suspect attentional control is the limiting factor, rather than abstract thought.
But you're right, this should be tested. She's technically not a teen yet, so next time she has a long holiday of free time I'll see if she can learn about basic logical properties and set theory. (It seems way easier than the graphs and simultaneous linear equations she's doing right now, so I am optimistic.)
↑ comment by Pfft · 2015-01-27T03:00:59.068Z · LW(p) · GW(p)
I see. I'm used to it being a synonym for primary school, but according to Wikipedia, that's apparently ambiguous or incorrect.
I agree that trying to teach it and seeing what happens is the way to go. :) Although I guess there is probably a lot of individual variation, so a school curriculum based on what works for your sister might also not generalize.
↑ comment by Ishaan · 2015-01-28T01:04:31.220Z · LW(p) · GW(p)
This is true. It would just be a test case for whether pre-teens can learn these concepts.
If I was designing a hypothetical curriculum, I wouldn't use high-sounding words like "axiom". It would just be - "Here is a rule. Is this allowed? Is this allowed? If this is a rule, then does it mean that that must be a rule? Why? Can this and that both be rules in the same game? Why not?" Framing it as a question of consistency (tautology) versus inconsistency (contradiction), as opposed to "right" and "wrong" in the sense that science or history is right or wrong.
And "breaking" the rules, just to instill the idea that they are all arbitrary rules, nothing special. "What if "+" meant this instead of what it usually means?"
And maybe classical logic, taught in the same style that we teach algebra (I really think [A=>B <=> ~B=>~A] is comparable to [y+x=z <=> y=z-x] in difficulty; a truth-table check appears at the end of this comment), with just a brief introduction to one or two examples of non-classical logic in later grades, to hammer in the point about the arbitrariness of it. I'd encourage people to treat it more like a set of games and puzzles rather than a set of facts to learn.
...and after that's done, just continue teaching math as usual. I'm not proposing a radical overhaul of everything. It's not a question of complex abstract thought - it's just a matter of casual awareness that math is just a game we make, and sometimes we make our math games match the actual world. (Although, if I had my way entirely, it would probably be part of a general "philosophy" class which started maybe around 5th or 6th grade.)
(I'm not actually suggesting implementing this in schools yet, of course, since most teachers haven't been trained to think this way despite it not being difficult, and I haven't even tested it yet. Just sketching castles in the sky.)
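For reference, the contrapositive equivalence claimed above can be checked exhaustively in four rows (an editorial addition, not part of the comment):

```latex
% Truth-table check that A => B and ~B => ~A agree in every case:
\begin{tabular}{cc|cc}
$A$ & $B$ & $A \Rightarrow B$ & $\lnot B \Rightarrow \lnot A$ \\
\hline
T & T & T & T \\
T & F & F & F \\
F & T & T & T \\
F & F & T & T \\
\end{tabular}
```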
↑ comment by Emily · 2015-01-26T09:26:39.238Z · LW(p) · GW(p)
Basic chemistry. I hated chemistry the first 2-3 years of high school (UK; I don't know if it's taught differently elsewhere). It was all about laundry lists of chemicals, their apparently random properties, and mixing them according to haphazard instructions with results that very occasionally corresponded approximately with what we were informed they should be. We were sort of shown the periodic table, of course, but not really enlightened as to what it all meant. I found it boring and pointless. I hated memorising the properties and relationships of the chemicals we were supposed to know about.
Then, all of a sudden (I think right at the start of year 10), they told us about electron shells. There was rhyme! There was reason! There were underlying, and actually rather enthralling and beautiful, explanations! The periodic table made SO MUCH SENSE. It was too late for me... I had already pretty much solidified in my dislike of chemistry, and had decided not to take an excessive amount of science at GCSE because similar (though less obvious) things had happened in biology and physics, too. But at least I did get that small set of revelations. Why on earth they didn't explain it to us like that right from the start, I have no idea. I would have loved it.
↑ comment by Luke_A_Somers · 2015-01-27T19:12:16.114Z · LW(p) · GW(p)
Electron shells didn't really make sense to me without having taken quantum mechanics. I mean, I understood that they were there, but I didn't have a clue why they ought to take on any particular shape.
↑ comment by Emily · 2015-01-28T09:43:58.283Z · LW(p) · GW(p)
Yeah, of course I also had no idea about the next layer down of explanation. But just having one layer seemed so much preferable to having none! It was the awareness that chemistry was dealing with a system, rather than a collection of boring facts, that made the difference to me.
↑ comment by Dahlen · 2015-01-26T10:15:44.102Z · LW(p) · GW(p)
Huh. Electron shells were one of the first things they taught us in our first-ever chemistry class, and to a 13-year-old I have to say they don't make much sense. I mean yeah, they shed some light upon the periodic properties of the table of elements, most of us could get that at that age, but man was it a pain in the ass to do the computations for them.
Then again maybe someone else would have reacted differently to exposure to the same info at the same age; maybe there's nothing that could make me in particular like chemistry. Well into college, I still have to take chemistry-like classes, and I still hate them.
↑ comment by gjm · 2015-01-26T10:25:28.666Z · LW(p) · GW(p)
I took A-level chemistry (= last two years of high school in the UK, ages ~16-18) and while indeed we learned a bit about electron shells and all that, I still found it really frustrating for similar reasons.
The thing I remember hating most was a category of question that was pretty much guaranteed to be in the exams. It went like this: "Describe and explain how property P varies across the elements down column C of the periodic table". And the answer was always this: "As we go down column C of the periodic table, characteristic A increases, which tends to increase property P, while characteristic B decreases, which tends to decrease property P." followed by some explanation of how (e.g.) the effect of A predominates to begin with but B is more important for the later elements, so that property P increases and then decreases. Or B always predominates, so property P just decreases. Or some other arbitrary fact about how A and B interact that you couldn't possibly work out using A-level chemistry.
So it was a big exercise in fake explanations. Really, you just had to learn what property P does as you go down column C of the periodic table, and then to answer these questions you also had to be able to trot out these descriptions of the underlying phenomena that do nothing at all to help you determine the answer to the questions.
The underlying problem here is that chemistry is really quantum mechanics, and figuring out these questions from first principles is way beyond what high-school students can do.
↑ comment by sixes_and_sevens · 2015-01-26T12:00:13.156Z · LW(p) · GW(p)
I seem to have had a different A-Level experience from you (1998-2000). There was a certain amount of learn-this-trend-by-rote, but I would easily class A-Level chemistry (which I didn't even do that well in) as one of the most practically useful subject choices I've taken.
There's a bunch of stuff I know which my other similarly-educated peers don't, and which I attribute to A-Level chemistry. Some of it is everyday stuff about which paint and glue and cleaning products are appropriate for which purpose. Some of it is useful for reasoning about topical scientific claims, such as biofuels, pharmaceuticals or nutrition.
I even have a control case for this, in that my sister and I studied all the same subjects, only she took electronics at A-Level over chemistry. When one of us says something "obvious" which the other person doesn't recognise as such, we have a pretty good idea where it came from.
↑ comment by gjm · 2015-01-26T15:43:55.515Z · LW(p) · GW(p)
Just to be clear, I didn't make any comment on how useful A-level chemistry in the UK is (or was in ~1986-1988 when I took it). Only on the annoying pseudo-explanations I had to learn to give. (I expect there are useful things that I know only because I studied chemistry at school, but it's hard to be sure because by now I've learned a lot of other things and forgotten a lot of what I learned at school.)
↑ comment by Ishaan · 2015-01-27T06:10:44.188Z · LW(p) · GW(p)
Thinking only of shells works for simple reactions, but has anyone ever had a "click" for organic chemistry reactions? Orbitals and shells are the only part of O-Chem that ever made sense to me...it seems like all my friends who "get it" are just practicing their butts off until they arrive at an intuition which isn't amenable to simple rule-based explanations (they seem to know the answers but can't always articulate why).
I'd really like it if organic chemistry made systematic sense.
↑ comment by Pfft · 2015-01-26T16:51:07.706Z · LW(p) · GW(p)
I've never taken chemistry beyond high school, but my impression is that even at university level it involves large amounts of memorization. Like, we know that there is an underlying model, because chemistry is a special case of physics, but in practice using that model to make predictions is computationally unfeasible.
↑ comment by Antisuji · 2015-01-26T04:02:16.285Z · LW(p) · GW(p)
Schmidhuber's formulation of curiosity and interestingness as a (possibly the) human learning algorithm. Now when someone says "that's interesting" I gain information about the situation, where previously I interpreted it purely as an expression of an emotion. I still see it primarily about emotion, but now understand the whys of the emotional response: it's what (part of) our learning algorithm feels like from the inside.
There are some interesting signaling implications as well.
↑ comment by Username · 2015-01-26T05:42:20.556Z · LW(p) · GW(p)
Direct link to the page on the theory.
That's really interesting! (ha!) I recommend reading the full page for good examples, but here's a summary:
Apart from external reward, how much fun can a subjective observer extract from some sequence of actions and observations? His intrinsic fun is the difference between how many resources (bits & time) he needs to encode the data before and after learning. A separate reinforcement learner maximizes expected fun by finding or creating data that is better compressible in some yet unknown but learnable way, such as jokes, songs, paintings, or scientific observations obeying novel, unpublished laws.
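A toy rendering of that idea (an editorial sketch, not Schmidhuber's code; it uses zlib's compressed size as a crude stand-in for "bits needed to encode the data", whereas the real formulation involves a compressor that improves as it learns):

```python
# Treat the bits a compressor saves as a crude proxy for "compression
# progress". Regular data offers learnable structure; random noise doesn't.
import os
import zlib

def progress_bits(data: bytes) -> int:
    # bits to store the raw data minus bits after regularities are exploited
    return 8 * len(data) - 8 * len(zlib.compress(data, level=9))

patterned = bytes(range(256)) * 40   # highly regular: "interesting"
noise = os.urandom(len(patterned))   # incompressible: "boring"

print(progress_bits(patterned))  # large positive: much structure to learn
print(progress_bits(noise))      # ~0 or slightly negative: nothing to learn
```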
↑ comment by Punoxysm · 2015-01-27T05:09:50.520Z · LW(p) · GW(p)
I used to be frustrated by the idea that my nation's stated principles were often undermined by its historical actions. Then I realized that this is true of every nation, everywhere at all times. Same with politicians, public figures, parents, companies, myself, etc.
Hypocritical actions happen all the time, and it is a victory when their severity and frequency is tempered. At the same time, justifications for those hypocritical actions abound. The key is not to take them at face value or reject them completely, but remember with humility that your favored group makes the same excuses at varying levels of validity.
So now I can empathize much more easily when people try to defend apparently hypocritical and reprehensible behavior. Even if I AM better than they are, I'm not qualitatively better, and it's disingenuous to try to argue as if I am. This realization leads to a more pragmatic, more fact-and-context-sensitive approach to real-world conflicts of values.
↑ comment by Princess_Stargirl · 2015-01-26T17:27:51.187Z · LW(p) · GW(p)
Bryan Caplan's "Myth of the Rational Voter" and "Mises, Bastiat, Public Choice and Public Policy."
↑ comment by passive_fist · 2015-01-26T04:53:10.574Z · LW(p) · GW(p)
I had many of these while reading Jaynes.
↑ comment by [deleted] · 2015-01-26T02:55:17.141Z · LW(p) · GW(p)
Learning about Simon Wordley's concept of Wordley mapping. Connected a ton of concepts like different management styles, Crossing the Chasm, the Gartner hype cycle, commoditization, and more.
↑ comment by Antisuji · 2015-01-26T03:44:24.379Z · LW(p) · GW(p)
This, I assume? (It took me a few tries to find it since first I typed in the name wrong and then it turns out it's "Wardley" with an 'a'.) Is the video on that page a good introduction?
↑ comment by [deleted] · 2015-01-26T16:47:37.882Z · LW(p) · GW(p)
Damn, I had the "a", then changed it 'cause it looked wrong. The video is a decent introduction; however, if you'd like to learn how to actually use them, a great resource is this free book: http://www.wardleymaps.com/uploads/9/5/9/6/9596026/future-is-predictable-v14.pdf, which was compiled from Simon Wardley's blog and takes you step by step through understanding the process, creating them, and using them to create a coherent business strategy.
If you have time for it, his entire blog is worth a look: http://blog.gardeviance.org/
comment by Stefan_Schubert · 2015-01-28T16:17:38.334Z · LW(p) · GW(p)
I seem to recall that some Democrat and Republican donors have agreed not to give to their respective parties, but rather to charity, on the condition that their opponents do the same. Does anyone know about this? Google's and my combined efforts have been fruitless. It seems a very nice idea that could be used much more widely to redistribute resources away from zero-sum games to games with joint interests.
Replies from: ike↑ comment by ike · 2015-01-29T22:42:02.649Z · LW(p) · GW(p)
I found http://californiawatch.org/dailyreport/super-pac-money-flows-effort-divert-cash-charity-16050 with the search terms "political donors redirect to charity" on Google.
It doesn't appear to be active, possibly due to not getting FEC approval, though.
Searching "repledge charity" gives several more articles on this. http://www.washingtonpost.com/politics/the-influence-industry-new-group-hopes-to-divert-campaign-contributions-to-charities/2012/04/11/gIQAzF6QBT_story.html
Replies from: Stefan_Schubert↑ comment by Stefan_Schubert · 2015-01-31T14:25:07.494Z · LW(p) · GW(p)
Excellent!!! Many thanks. :) Exactly what I was looking for.
comment by polymathwannabe · 2015-01-27T02:51:10.941Z · LW(p) · GW(p)
I remember somewhere in the sequences EY mentioned that Bayesianism was a more general method of which the scientific method was merely a special case. Now I find this, Dempster-Shafer theory, which according to Wikipedia is an even more general method, of which Bayesianism is merely a special case.
Has this topic been given any treatment here?
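For anyone who wants a concrete feel for it, here is a minimal sketch (mine, with an invented weather frame and made-up numbers) of Dempster's rule of combination, the theory's core operation:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    # Combine two mass functions; keys are frozensets of hypotheses
    # (focal elements), values are masses summing to 1 in each function.
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        both = a & b
        if both:
            combined[both] = combined.get(both, 0.0) + x * y
        else:
            conflict += x * y  # mass on the empty set gets renormalized away
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two evidence sources over the frame {rain, sun}; numbers are made up.
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"sun"}): 0.5, frozenset({"rain", "sun"}): 0.5}
print(dempster_combine(m1, m2))  # ~ {rain}: 0.43, {sun}: 0.29, {rain, sun}: 0.29
```

The mass on the whole frame {rain, sun} is belief left uncommitted between the hypotheses, which a single Bayesian probability assignment can't express; when all mass sits on singletons, the rule reduces to ordinary Bayesian conditioning, which is the sense in which Bayesianism is the special case.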
Replies from: ike↑ comment by ike · 2015-01-27T02:52:29.870Z · LW(p) · GW(p)
https://www.google.com/search?q=Dempster-Shafer+theory+site:lesswrong.com
11 results. That should get you started.
comment by wobster109 · 2015-01-26T23:22:10.826Z · LW(p) · GW(p)
On Sunday at 11 AM Eastern and 8 AM Pacific*, I will be playing a round of AI Box with a person who wishes to remain anonymous. I will be playing as AI, and my opponent will be playing as Gatekeeper (GK). The loser will pay the winner $25, and will also donate $25 to the winner's charity of choice. The outcome will be posted here, and maybe a write-up if the game was interesting. We will be using Tuxedage's ruleset with two clarifications:
- GK must read and make a reasonable effort to understand AI's text, but does not need to make an extraordinary effort to understand things such as heavily misspelled text or intricate theoretical arguments.
- The monetary amount will not be changed after the game is concluded.
The transcript will not be made public, sorry. We are looking for a neutral third party who will agree beforehand to read and verify the transcript. Preferably someone who has already played in many games, who will not have their experience ruined by reading someone else's transcript.
* I habitually give the Eastern and Pacific times. This does not mean GK is in one of those two time zones.
↑ comment by Punoxysm · 2015-01-27T05:12:03.392Z · LW(p) · GW(p)
AI is the harder role, judging from past outcomes. I hope you prepare well enough to make it interesting for GK.
I'm interested in doing AI Box as either role. How did you organize your round?
Replies from: wobster109↑ comment by wobster109 · 2015-01-27T06:13:15.185Z · LW(p) · GW(p)
I'll try hard! ^^
I went to a random forum somewhere and posted for an opponent. GK responded with an email address, and we worked out the details via email. We'll be holding our round in a secret, invite-only IRC channel.
It looks like if you offer to play as AI, you'll have no trouble finding an opponent. Tuxedage said in one of his posts that there are 20 gatekeeper players for each AI player.
However... since I encountered GK on a different forum, not LW, I insisted on having a third party interview GK, as a safety measure. I have known people who were vengeful or emotionally fragile, and I wanted to take no chances there.
comment by passive_fist · 2015-01-26T21:47:01.338Z · LW(p) · GW(p)
I've been using LyX for preparing my doctoral dissertation and I'm amazed that such a complete and capable tool isn't more widely known and used. I can't imagine preparing scientific documents now with anything other than LyX, and I can't imagine that I used to use software like MS Word for this purpose. Anyone have any other examples of obscure but amazingly capable software?
Replies from: shminux, emr, None, kalium, falenas108↑ comment by Shmi (shminux) · 2015-01-27T00:47:57.264Z · LW(p) · GW(p)
I guess LyX must have improved a lot in the last few years. When I tried using it for my PhD thesis I had to give up after it refused to import classes and templates mandated by the university, and the documentation on how to deal with this issue was, well... open-sores level. I did use it to create a few snippets, later manually edited. It did not really save me any time.
Re other obscure software, Total Commander is a great Windows file manager, similar to the Linux Midnight Commander.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-27T01:06:13.648Z · LW(p) · GW(p)
It seems weird that it would refuse to import templates; I've had no trouble with that at all. Even when templates use highly non-standard settings, you can still directly edit the LaTeX preamble in LyX, and done correctly that should take care of 99% of problems.
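For instance, a department style that ships only as a .sty file can usually be wired in from LyX's Document > Settings > LaTeX Preamble pane. A minimal sketch (the mythesis package name is hypothetical; the rest are standard packages):

```latex
% Pasted into Document > Settings > LaTeX Preamble in LyX.
\usepackage{mythesis}              % hypothetical university-mandated style file
\usepackage[margin=1in]{geometry}  % page layout the guidelines typically demand
\usepackage{setspace}
\doublespacing
```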
↑ comment by emr · 2015-01-27T00:43:54.835Z · LW(p) · GW(p)
(I hope this doesn't sound condescending, but) do you mean LaTeX, or LyX specifically? LaTeX itself seems almost humorously field-dependent: everywhere in the hard sciences and nowhere in business.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-27T06:28:21.580Z · LW(p) · GW(p)
LyX specifically.
↑ comment by [deleted] · 2015-01-27T06:05:29.044Z · LW(p) · GW(p)
http://www.thebrain.com/ is a little known but amazingly useful knowledge modeling/ridiculously powerful mindmapping software.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-27T20:37:14.575Z · LW(p) · GW(p)
Mind-mapping is something that would seem to be highly relevant to the LW community. Personally, though, I find that the restriction to a tree structure is too limiting. Unless I've completely misunderstood how mind-mapping works.
Replies from: None↑ comment by [deleted] · 2015-01-27T20:54:27.311Z · LW(p) · GW(p)
You haven't, but that's what makes PersonalBrain so awesome. It doesn't limit you to the tree structure; anything can be a parent or child of anything else, and there are sideways "jump" connections that don't follow the parent-child relation at all.
Not to mention the ability to link to any file or website, take notes on each connection, and name the connections. It's an incredibly powerful piece of software.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-27T21:15:00.066Z · LW(p) · GW(p)
Seems interesting, I'll have a look.
↑ comment by kalium · 2015-02-01T02:08:01.300Z · LW(p) · GW(p)
I was introduced to LaTeX via LyX as a freshman and found the interface very off-putting and confusing and forgot about the whole thing for years. When I found out I could just type a text file instead, run a few commands, and get the same gorgeous result, it was a revelation and I never went back to OpenOffice.
Probably not news to anyone here, but learning to use a good text editor like vim or emacs is hugely useful and I wish I hadn't waited so long to do it. Git for version control is pretty great too.
Replies from: passive_fist↑ comment by passive_fist · 2015-02-01T03:32:41.444Z · LW(p) · GW(p)
For me it was the exact opposite. I'd been using LaTeX for years before I discovered LyX. I can't imagine writing in raw LaTeX anymore. Especially, live math preview has become indispensable for me, as well as 'smart' label handling and intelligent handling of documents composed of multiple independent files (like chapters in a book).
↑ comment by falenas108 · 2015-01-27T19:09:03.886Z · LW(p) · GW(p)
I am also a massive fan of LyX.
I'm only an undergrad physics major, but I'm in 2 classes where I have to submit moderately high-level reports, and I'm working on a thesis. And I've only ever had to use one special format, which also happened to be the default format.
So far, I've found the documentation to be eh, but I haven't had too many problems where that was an issue yet. The biggest problem is that my knowledge of LaTeX is sorely lacking because I've been using LyX for everything!
Replies from: gjm
comment by protest_boy · 2015-01-29T18:57:17.109Z · LW(p) · GW(p)
Are there any updates on when the Sequences e-book versions are going to be released? I'm planning a reread of some of the core material and might wait if the release is imminent.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2015-02-01T21:49:01.424Z · LW(p) · GW(p)
We don't have an official release date yet, but it will most likely come out in March, before Harry Potter and the Methods of Rationality ends.
comment by skeptical_lurker · 2015-01-26T14:08:26.670Z · LW(p) · GW(p)
In the spirit of admitting when I am wrong, I would like to say that when /u/Azathoth123 said that schools were encouraging children to be gay and transsexual, I thought he was being paranoid. I thought that schools were preaching tolerance, and this had been misinterpreted.
Also, the strategy of asking children whether any of their friends are trans is bizarre, because (a) if about 0.1% of adults are trans, then you really wouldn't expect any children to be trans unless the population of the school is gigantic, regardless of the school's attitude to tolerance, and (b) children should not be discussing their friends' private secrets with strangers.
I have nothing against transsexuals, and in fact they seem like obvious allies of transhumanists. But, unlike homosexuality, 'being trapped in the wrong body' clearly causes a lot of psychological distress - otherwise people wouldn't undergo serious surgery to correct it (although this is not necessarily true of people who don't identify as either gender). As long as there is a possibility that it has partially psychological roots (apparently twin studies show 62% heritability), priming people at a young age (10) to induce gender identity disorder is insane.
Edit & clarification:
I think gender identity disorder (of the 'trapped in a body' form, not of the 'not identifying as either gender' form) is a mental disease because it causes suffering. This doesn't mean that transgender people need to feel bad about being trans, because that will just make matters worse. I've met people who are trans and I know people who are suffering from other mental illnesses, and I hope I'm not coming across as insensitive, but I just don't see the point in mincing my words.
Edit 2:
It's been pointed out to me that these claims are allegations made by biased people who can't be trusted, and that even if the allegations are true, I still overstated the case.
Also, I'm defending a statement Azathoth123 made (schools are encouraging kids to be gay - although I'm not so sure about this now) while not endorsing his conclusions, which also might make what I have written seem confusing or even contradictory.
Replies from: JoshuaZ, gjm, Viliam_Bur, Gunnar_Zarncke↑ comment by JoshuaZ · 2015-01-26T14:16:01.168Z · LW(p) · GW(p)
While it is good to acknowledge when one is wrong, this is hardly strong evidence. One has one school in one location making an allegation. There also seems to be a big leap between asking people if they know anyone who feels trapped in the wrong body and pushing people to be gay or transsexual. (I agree though with your points in your second paragraph.)
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-26T15:37:47.899Z · LW(p) · GW(p)
I suppose that this is only an allegation at the moment, although other similar allegations about the same organisation pushing a left-wing agenda at the expense of education have been made, which makes the whole thing more plausible (plus there is Azathoth's original allegation).
Asking an adult if they know anyone who is trapped is ok. The problem is that asking a 10-year-old primes them with a concept they would not previously have had. If there is some sort of train of thought one can go down, which ends with 'help, I'm trapped in the wrong body' when they would otherwise not have had this problem, then you do not prompt them to start this train of thought. For mostly the same reason, you don't ask children "do your friends drink vodka?".
Essentially, it's conceivably possible that the idea of transsexualism poses an information hazard to children.
Replies from: Lumifer, falenas108, Nornagest↑ comment by Lumifer · 2015-01-26T16:56:35.286Z · LW(p) · GW(p)
the idea of transsexualism poses an information hazard to children
Of the same magnitude as the idea of drinking alcohol, shooting guns, or doing stupid things on video..?
I tend to think that in the age of internet-connected smartphones the concept of protecting children from information hazards is... quaint and counterproductive.
Having said that, I would interpret the events which led to this discussion as authorities attempting to shape the kids' value system which is a different and, probably, a more dangerous thing.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-26T17:58:55.137Z · LW(p) · GW(p)
Of the same magnitude as the idea of drinking alcohol, shooting guns, or doing stupid things on video..?
Definitely not!
I would say that smartphones should have age filters on them, although I could equally say that in the internet age, the whole idea of sex education, gay or straight, is quaint and counterproductive.
I also agree that the far bigger issue is whether political indoctrination (I'm trying to think of a more positive way to phrase this, but I can't) of this form is justified. The impression I got from the article is that this is partially a reaction against the growth of fundamentalist Islamism in schools, where state-funded teachers were caught teaching small children certain things like "Hindus drink their god's piss". Clearly, forcing schools to teach children about how lesbians have sex is going to really annoy the Islamists (although it's not obvious whether this will make the problem of Islamism better or worse), but to avoid discrimination the same thing has to apply to Christian schools.
I suppose one could argue that enforcing certain cultural norms (e.g. the belief that all religions and sexual orientations are equally valid) is necessary to prevent society from breaking down into factions engaged in armed conflict with each other, which is far more important than any other issue we have discussed here.
OTOH... well, I certainly don't hold either heterosexuality or cissexuality as terminal values (my argument was purely about avoiding suffering), but I think some people, such as Azathoth, do, and it does seem rather unfair that the state can declare that your values are wrong and demand that your children hold different values.
I'm really not sure how to answer this.
Replies from: ilzolende, JoshuaZ, Lumifer↑ comment by ilzolende · 2015-01-27T00:38:18.831Z · LW(p) · GW(p)
I would say that smartphones should have age filters on them
I agree. We should encourage children to develop an interest in anonymous filter-dodging web access systems like Tor, securely encrypting their messages such that they can't be monitored for inappropriate language usage, and other related skills while they're still young.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T09:16:23.813Z · LW(p) · GW(p)
While your comment does amuse me enough for an upvote, I feel the need to point out that if the children do not have root access to the phone, then they can't install Tor. As I understand it, rooting a phone is not easy, and I suppose once they have reached the age when they are smart & patient enough to root a phone then they are probably mature enough to deal with the internet.
Replies from: ilzolende, Lumifer↑ comment by JoshuaZ · 2015-01-26T18:39:07.601Z · LW(p) · GW(p)
it does seem rather unfair that the state can declare that your values are wrong and demand that your children hold different values.
Can/should a school teach that different racial groups are morally the same? What about that slavery is wrong? What about "be kind to each other and share your toys"? Is the difference purely that more people disagree with one claim as opposed to the others?
Replies from: Lumifer, skeptical_lurker↑ comment by Lumifer · 2015-01-26T19:32:42.838Z · LW(p) · GW(p)
Let's add to that list.
Can/should a school teach that Kim Jong-un is the greatest human being who ever lived and that only his incessant efforts keep the people safe and prosperous? What about "it is the highest moral duty to immediately report all rule-breaking to the authorities"?
↑ comment by skeptical_lurker · 2015-01-26T19:42:11.295Z · LW(p) · GW(p)
I want logical positivist schools that only teach scientifically verifiable truths about objective reality :)
But seriously, you make a good point. I think the number of people who agree with the claim is important, but there is perhaps a second issue in that some people claim that certain information can produce irreversible personality changes. If advocating homosexuality turned people gay (and shared environment does affect the prevalence of lesbians) then this causes a permanent hit to the utility function of a homophobe, whereas if someone wants their child not to share their toys (because that's communism, maybe?) then the child could still change their mind after they leave school.
↑ comment by Lumifer · 2015-01-26T18:24:38.742Z · LW(p) · GW(p)
I would say that smartphones should have age filters on them
I would be opposed to the idea.
where state funded teacher were caught teaching small children certain things like "Hindus drink their god's piss"
Um, as opposed to Christians who drink their god's blood..?
Clearly, forcing schools to teach children about how lesbians have sex is going to really annoy the Islamists
I am sorry, is the goal of the exercise to annoy Islamists..? 8-0
one could argue that enforcing certain cultural norms is necessary to prevent society from breaking down
This historically has been argued A LOT. Pretty much every time the question of enforcing cultural norms came up. The funny thing is, those currently in power always argue that the cultural norms which help with keeping them on top are "necessary to prevent society from breaking down".
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-26T20:01:01.035Z · LW(p) · GW(p)
I would be opposed to the idea.
Really, so children should be able to view extremely violent and other adult things?
Um, as opposed to Christians who drink their god's blood..?
I'm guessing the fundamentalist Islamists were pretty scathing of Christianity too. I wouldn't be so bothered about adults saying that, but the important bits include 'taxpayer funded' and 'small children'. Also, communion is an actual part of Christianity, whereas I think "Hindus drink their god's piss" was just a complete fabrication.
I am sorry, is the goal of the exercise to annoy Islamists..? 8-0
Most people don't seem to understand that annoying your political opponents serves no purpose and shuts down constructive dialogue.
On second thoughts, I suppose the idea could be to annoy them enough that they leave the country.
This historically has been argued A LOT. Pretty much every time the question of enforcing cultural norms came up. The funny thing is, those currently in power always argue that the cultural norms which help with keeping them on top are "necessary to prevent society from breaking down".
Yes, it is an interestingly convenient coincidence isn't it?
Replies from: Nornagest, Lumifer↑ comment by Nornagest · 2015-01-26T22:51:43.626Z · LW(p) · GW(p)
Also, communion is an actual part of Christianity, whereas I think "Hindus drink their god's piss" was just a complete fabrication.
I suspect this is pointing to the Hindu reverence for cattle, which tends to show up in weird ways in Hindu-Muslim disputes from that area. Milk is not urine, and cows aren't treated as gods per se, but it's an allegation that I could see Kevin Baconing its way out of the truth.
I do know of one case of ceremonial consumption of urine, but it's not Hindu -- it's a Siberian entheogenic practice aimed at the still-psychoactive metabolites of compounds found in the A. muscaria mushroom, previously eaten by shamans.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-27T08:15:02.696Z · LW(p) · GW(p)
I suspect this is pointing to the Hindu reverence for cattle, which tends to show up in weird ways in Hindu-Muslim disputes from that area. Milk is not urine, and cows aren't treated as gods per se, but it's an allegation that I could see Kevin Baconing its way out of the truth.
Exactly right! An impressively accurate guess.
↑ comment by Lumifer · 2015-01-27T01:20:56.280Z · LW(p) · GW(p)
Really, so children should be able to view extremely violent and other adult things?
Yes. And they do, by the way.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T09:05:21.260Z · LW(p) · GW(p)
I'm certainly aware that they do. Interestingly, most people arrested for child porn are teenagers sending other teenagers naked pictures.
Replies from: JoshuaZ, Lumifer↑ comment by falenas108 · 2015-01-26T18:32:24.541Z · LW(p) · GW(p)
This entirely depends on which path the causality takes.
Trans folks are much more depressed and tend to have much higher levels of mental illness than the general population.*
Obviously, experiences are different for different people. But most trans people experience extreme discomfort in the gender roles they are expected to perform and have some form of gender dysphoria. I would expect these things to be present regardless of whether they knew that the label "trans" exists. If this is the reason for the higher rates of mental illness, then encouraging awareness of what trans is will let people do things to help fix some of these issues.
However, if the causal path is that people become aware of the idea of being trans, then realize that they do not fit the gender they were assigned at birth, leading to higher rates of mental illness, that would be a different issue.
Anecdotally, almost all the trans people I know have the experience of learning what being trans is, then having an "Oh! That's what I'm feeling" moment. This would be evidence for the first causal path.
(Side note: The term most trans people use is transgender rather than transsexual, because it is the gender that is different. On a similar note, most trans people do not have the surgeries you were talking about.)
*I am not counting gender identity disorder as a mental illness, both because I don't think it should be classified that way and because this statement would be pointless if I did.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-26T20:31:47.801Z · LW(p) · GW(p)
I think there is a third causal path, which goes:
Thinking about being the opposite sex -> psychosomatic alteration of hormone levels during puberty -> structural differences in the brain -> transgender.
I'm not saying this is plausible, or that I have evidence for it. This is not my field. But AFAIK I cannot rule it out.
*I am not counting gender identity disorder as a mental illness, both because I don't think it should be classified that way and because this statement would be pointless if I did.
I would say that since transgender people are much more depressed, presumably due to being trapped in the wrong body (which, as we both mentioned, doesn't apply to all trans people), GID is a mental illness because it causes depression and suffering.
This doesn't mean that transgender people need to feel bad about being trans, because that will just make matters worse. I know people who are trans and I know people who are suffering from other mental illnesses, and I hope I'm not coming across as insensitive, but I just don't see the point in mincing my words.
Replies from: falenas108↑ comment by falenas108 · 2015-01-27T19:26:23.205Z · LW(p) · GW(p)
Sure, that path seems possible as well.
I would say that since transgender people are much more depressed, presumably due to being trapped in the wrong body (which, as we both mentioned, doesn't apply to all trans people), GID is a mental illness because it causes depression and suffering.
Although some of the depression could be caused by that, it seems pretty likely that a large portion of it could also be caused by being treated by society as a gender they aren't, as well as by more targeted transphobia. GLB people also have much higher rates of depression, which is probably for that reason and not some third link.
Furthermore, I think we need to go back to diseased thinking about diseases. When we call something a mental illness, it's because we are trying to treat it in some way, or alleviate the effects. This is not something we want to do with trans people; the effects that we're talking about are all other mental illnesses whose symptoms we do want to treat.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T09:52:55.207Z · LW(p) · GW(p)
Although some of the depression could be caused by that, it seems pretty likely that a large portion of it could also be caused by being treated by society as a gender they aren't, as well as by more targeted transphobia.
I've heard trans people say that simply having breasts is really disturbing, enough to require uncomfortable breast-binding. I've also heard a trans person say that they enjoy looking at themselves in the mirror, because they are turned on by their own body.
Incidentally, are there separate words for 'non gender identifying transgender' and 'trapped in the wrong body transgender'?
Anyway, clearly transphobia is going to make the problem worse.
When we call something a mental illness, it's because we are trying to treat it in some way, or alleviate the effects. This is not something we want to do with trans people; the effects that we're talking about are all other mental illnesses whose symptoms we do want to treat.
Well, sex reassignment surgery clearly is a treatment. And the picture isn't clear with certain other mental illnesses either (e.g. autism).
Replies from: falenas108↑ comment by falenas108 · 2015-01-28T18:21:50.327Z · LW(p) · GW(p)
Incidentally, are there separate words for 'non gender identifying transgender' and 'trapped in the wrong body transgender'?
I think what you are going for is non-binary/agender trans people vs. binary trans people.
But, I'm not sure which distinction you're talking about. There are people who fit the classic "trapped in the wrong body," who have a clear idea of what body parts they would/wouldn't like (which could be anything from having a penis and breasts to no genitalia at all). There are other people who are completely fine with their physical body but are uncomfortable with the idea of identifying with the gender they were assigned at birth.
If you're talking about that distinction, then people in the second category don't necessarily identify as agender or non-binary, and people in the first category don't always identify as a binary gender.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T21:45:18.662Z · LW(p) · GW(p)
Well, I had a transgender friend who said that at a trans meeting two types of people turned up: those that didn't strongly identify as either gender, and those that strongly identified as the gender opposite to their physical body. This is the distinction I am trying to describe.
And "agender trans people" is quite a mouthful.
Replies from: falenas108↑ comment by falenas108 · 2015-01-29T02:47:35.390Z · LW(p) · GW(p)
You can just say "non-binary people" or "agender people." In any case, binary and non-binary are the types you are talking about.
↑ comment by Nornagest · 2015-01-26T22:39:38.791Z · LW(p) · GW(p)
For mostly the same reason, you don't ask children "do your friends drink vodka?".
It didn't reach this level of specificity, but I remember similar questions on an allegedly anonymous survey passed around when I was in middle school (age 11 or 12, don't remember which). Along with a number of questions about sex and illegal drug use.
That was about when the War on Drugs and related moral panics were peaking, though.
↑ comment by gjm · 2015-01-26T22:10:26.638Z · LW(p) · GW(p)
I don't understand. Nothing in the article you linked to describes anyone
encouraging children to be gay and transsexual
and the article isn't about what schools do, or even about what one school does. It's about what some government inspectors are alleged to have done, and I think a little context might be in order.
This is about the inspection of Grindon Hall Christian School. I think it's clear that the inspectors were concerned that the school might be instilling hostility to, and/or ignorance of, various things that conservative Christians commonly disapprove of: other religions, homosexuality, transsexualism. So they asked pupils some questions intended to probe this.
The school has issued a complaint about those questions. (This is where the stuff in the Telegraph article comes from.) The inspection report, now it's out, is extremely negative.
If the complaint made by the school is perfectly accurate, then it does sound as if the probing was done quite insensitively. Tut tut, naughty inspectors. But it's worth noting that complaints of this sort -- especially when, as one might suspect here, they're made partly in self-defence -- are not always perfectly accurate. And, e.g., if you look at the headmaster's letter of complaint to the authorities and his somewhat-overlapping complainy letter to parents of his pupils, you'll see that he apparently has trouble distinguishing "inspectors asked pupils whether they celebrate festivals of any religions other than Christianity" from "inspectors think the school should be celebrating festivals of other religions". Which is just silly, and doesn't give me much confidence in his ability to describe impartially (or even correctly) what happened in the inspection.
I suppose you can argue that even mentioning (e.g.) transsexualism is "encouraging" it in the rather aetiolated sense that children who have never heard of transsexualism are a bit less likely to end up being overtly transsexual. (But maybe correspondingly more likely to have no idea what it is they're going through and maybe kill themselves.) But it seems pretty clear that that isn't what Azathoth123 meant, and it's not (I think) what any reasonable reader would understand, by saying that "schools encourage children to be transsexual". (Even if any school were doing it, which I repeat this report doesn't even allege, never mind give evidence for.)
[EDITED to add: It is alleged that at this school (1) some teachers labelled girls "sluts", (2) there was a campaign of abuse against lesbian girls, ignored by staff, and (3) the school has links to some Christian group that "condemns all homosexual practice". The article I linked to is behind a paywall and I have no further details; given what a lot of Christian groups condemn all homosexual practice, #3 isn't necessarily terribly surprising. But if this sort of allegation was flying around before the inspection, or for that matter discovered during the inspection, then it might help to explain why the inspectors were asking such oh-so-insensitive questions. And it isn't the inspectors I'd blame for that.]
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T11:37:50.628Z · LW(p) · GW(p)
I don't understand. Nothing in the article you linked to describes anyone "encouraging children to be gay and transsexual" and the article isn't about what schools do, or even about what one school does. It's about what some government inspectors are alleged to have done, and I think a little context might be in order.
Well, saying that children should be taught how lesbians have sex is encouraging children to be gay. Unlike Azathoth123, I doubt this will cause the collapse of civilisation, so I'm not necessarily agreeing with him about the consequences or trying to morally condemn this.
If government inspectors were asking these questions, it implies that this is supposed to be the norm for schools.
And, e.g., if you look at the headmaster's letter of complaint to the authorities and his somewhat-overlapping complainy letter to parents of his pupils, you'll see that he apparently has trouble distinguishing "inspectors asked pupils whether they celebrate festivals of any religions other than Christianity" from "inspectors think the school should be celebrating festivals of other religions". Which is just silly, and doesn't give me much confidence in his ability to describe impartially (or even correctly) what happened in the inspection.
While these two statements are logically distinct, most people don't communicate in a clear, precise manner, saying exactly what they mean without any subtlety. Most people hint at things, and in this case I think the inference that "inspectors think the school should be celebrating festivals of other religions" is perhaps justified.
(But maybe correspondingly more likely to have no idea what it is they're going through and maybe kill themselves.)
This is a good point which I really shouldn't have overlooked.
It is alleged that at this school (1) some teachers labelled girls "sluts", (2) there was a campaign of abuse against lesbian girls, ignored by staff, and (3) the school has links to some Christian group that "condemns all homosexual practice".
If encouraging children to be gay is a bit weird, then perhaps condemnation of homosexuality is equally weird, and actual abuse is appalling. This is all the more reason why inspectors asking kids to out their friends is awful.
I'd like to make it absolutely clear that I'm not defending the school here. Rather than some people trying to get schools to promote prog values, and some people trying to promote conservative values, we should just keep politics out of schools, which means disbanding religious schools like this one.
Replies from: gjm, satt↑ comment by gjm · 2015-01-28T14:11:15.888Z · LW(p) · GW(p)
saying that children should be taught how lesbians have sex is encouraging children to be gay.
Just like saying that children should be taught what words they use in France is encouraging children to be French?
(We don't even know it's true that anyone said children should be taught how lesbians have sex. What we do know is that the school's headmaster claimed that a pupil claimed that an inspector asked them what lesbians do. It seems eminently possible that (1) the headmaster lied, (2) the headmaster misunderstood, (3) the pupil lied, (4) the pupil misunderstood, or (5) the inspector did ask that but not with the intention that's being assumed here. For instance, consider the following possible context. Pupil: "It shouldn't be allowed. What they do is disgusting and unnatural and forbidden by God." Inspector: "So what is it they do that's so disgusting?" I'm not sure I'd exactly approve of the inspector's question in this scenario, but its point wouldn't be that pupils ought to be taught all about lesbians' sex lives.)
most people don't communicate in a clear, precise manner [...] Most people hint at things
Quite true. But that doesn't license an inference from "inspectors asked whether there are any pupils who celebrate non-Christian religions' holidays" to "inspectors think the school should be celebrating non-Christian religions' holidays". (Other interpretations that seem more plausible to me: 1. They wanted to make a point about the fact that the school's teaching simply ignores the existence of other religions (this was one of the complaints in the inspection report, IIRC). 2. They wanted to find out something about the religious makeup of the school's pupil population, and didn't entirely trust the figures they were given by the school. 3. They wanted to identify pupils who might be adversely affected by the school's (allegedly) intolerant and narrow-minded ethos, so that they could talk to them and see whether there actually was a problem or not.)
(If I were wanting to hint at such a thing in such a way, I would be asking not "do any of you celebrate any other festivals?" but "does the school celebrate any other festivals?".)
Now, for sure it's possible that the inspectors really would like to see the school celebrating non-Christian religious festivals. Just as it's possible that if I greet one of my colleagues with "Good morning -- did you have a good weekend?" my real intention is to flirt with them and ultimately seduce them. But they don't get to go to HR and complain about sexual harassment merely because I said something that a would-be seducer might also happen to say; and the headmaster of this school doesn't get to tell parents that inspectors tried to make his school celebrate pagan festivals just because they said something that someone with that intention might also happen to say.
inspectors asking kids to out their friends is awful
Without having been there and observed exactly what was asked, with what wording and what emphasis, etc., it's hard to be sure; but my reading was not that they asked kids to out their friends. (I agree that asking them to out their friends would have been way out of order.) I thought they just asked "do you know anyone who ...?" expecting a binary answer (or perhaps a "maybe") -- rather than expecting "oh, yes, there's Ashley and Frank and Melanie, and I think Ahmed might feel that way too".
just keep politics out of schools, which means disbanding religious schools like this one
I am inclined to agree. I'm not sure, though. I think parents should be allowed to educate their children at home, provided they can demonstrate that they're giving a decent education. In practice, some of those parents will be giving just as religiously biased an education as this place. If individual parents can do that, should they really be forbidden to get together and do it as a group?
And of course a school can be highly religious without having a name like "Inquisitor Torquemada Memorial Catholic School". If you're going to forbid religious schools, how are you going to do that other than by having some kind of inspection process that checks what sort of things they're teaching? Boom, now your inspection process is necessarily political and religious.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T21:37:57.564Z · LW(p) · GW(p)
Just like saying that children should be taught what words they use in France is encouraging children to be French?
Well, at the very least I'd say it's encouraging them to visit France.
We don't even know it's true that anyone said children should be taught how lesbians have sex.
Admittedly, yes, for some reason I took this article at face value, rather than assuming that everyone lies about everything all the time, which is generally a good assumption.
2. They wanted to find out something about the religious makeup of the school's pupil population, and didn't entirely trust the figures they were given by the school. 3. They wanted to identify pupils who might be adversely affected by the school's (allegedly) intolerant and narrow-minded ethos, so that they could talk to them and see whether there actually was a problem or not.)
If there are non-Christian pupils in the school, the obvious next step is to demand that their religious holidays are observed too.
It's the tactic of taking it one step at a time. Demand that the school recognise other religions exist, then demand that they teach that the other religions are not evil, then that they are equally valid. Since a fundamental point of Christianity is that other religions are wrong (thou shalt have no other gods) or "put here by Satan to tempt us" (according to the people from the Christian union at a perfectly normal university), if they accede to the next demand then many people would say they are Christian in name only.
Demand that they acknowledge that some kids are not Christian. Then acknowledge that they are from other faiths. Then exempt them from religious services. Then allow them to hold their own religious services away from the other kids. Then get the school as a whole to celebrate other religion's festivals. Then try to stop people wishing each other a merry Christmas and instead say "Happy Holidays".
I'm not trying to take the Christians' side - I think their religion is absurd. I'm trying to show that they are right to be afraid of the tactic where each step seems reasonable and tolerant, and then several steps down the line everything they value is gone.
Just as it's possible that if I greet one of my colleagues with "Good morning -- did you have a good weekend?" my real intention is to flirt with them and ultimately seduce them. But they don't get to go to HR and complain about sexual harassment merely because I said something that a would-be seducer might also happen to say
Some people do do this. Heard of Elevatorgate? A guy asked a girl if she fancied a cup of coffee. She realised that 'coffee' might be a euphemism for sex, and that rapists also want sex, and so asking her if she wants coffee was "a potential sexual assault". The absurdity would be funny if it hadn't torn the atheist & skeptics movement in half.
I thought they just asked "do you know anyone who ...?" expecting a binary answer (or perhaps a "maybe") -- rather than expecting "oh, yes, there's Ashley and Frank and Melanie, and I think Ahmed might feel that way too".
Thing is, now the transphobes can launch a witch-hunt to see which kid to bully. A secret like this would probably only be told to a close friend, which narrows the pool. Even if the kid has a few close friends, you can bet it's the one who has been acting weird and has interests more typical of the opposite sex.
Boom, now your inspection process is necessarily political and religious.
Maybe you could try to enforce a lack of politics?
Replies from: gjm, Lumifer↑ comment by gjm · 2015-01-28T22:27:44.448Z · LW(p) · GW(p)
the obvious next step
I really don't think we should be condemning people for doing something that could be followed by doing something else that could be followed by doing something else that would be bad. Not unless we have actual evidence that they intend the whole sequence.
(I also remark that what you originally said was that schools are encouraging children to be gay and transsexual. We've come quite a way from there.)
they are right to be afraid
Maybe they are. But being afraid of something doesn't, at least in my value system, nor in theirs if they haven't forgotten that bit about not bearing false witness against other people, constitute sufficient reason to claim it's already happened.
Elevatorgate
Yes, I have heard of it and I know enough about the story to know that your version of it is quite inaccurate. But that's not the point here. The point is that that kind of overreaction is silly and harmful, and it's what the school did in this case, and to my mind that means we should be cautious about trusting their account of what the inspectors did.
now the transphobes can launch a witch-hunt
Yes, that's a problem. For the avoidance of doubt, it's not my purpose to claim that the inspectors didn't do anything foolish or harmful. I am claiming only that your original characterization of the situation is wrong. Which I think you're not disputing at this point.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T23:06:14.413Z · LW(p) · GW(p)
I really don't think we should be condemning people for doing something that could be followed by doing something else that could be followed by doing something else that would be bad. Not unless we have actual evidence that they intend the whole sequence.
I'm not condemning it, at most I'm saying the school's head teacher is right to condemn it from within his value system. I'm slightly torn here between saying I understand why people might draw a line in the sand to avoid being defeated one step at a time, and realising that this would make organisations really inflexible.
Yes, I have heard of it and I know enough about the story to know that your version of it is quite inaccurate. But that's not the point here. The point is that that kind of overreaction is silly and harmful, and it's what the school did in this case, and to my mind that means we should be cautious about trusting their account of what the inspectors did.
Do you have a relatively short, unbiased version of elevatorgate you can link me to?
But yes, I take your point, and given that the school is biased they can't be trusted here.
I am claiming only that your original characterization of the situation is wrong. Which I think you're not disputing at this point.
Broadly speaking, yes. I mean, teaching children how lesbians have sex might have happened, and if it did then it might slightly increase the number of lesbians, but that's not necessarily the intention. At the very least, I massively overstated the case.
Replies from: gjm↑ comment by gjm · 2015-01-28T23:22:10.639Z · LW(p) · GW(p)
unbiased version of elevatorgate
ahahahahahaha hahaha hahahaaaa.
(On the substantive issues, I think we're basically done at this point.)
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-29T00:15:06.382Z · LW(p) · GW(p)
Agreed on both points.
↑ comment by Lumifer · 2015-01-28T21:43:03.253Z · LW(p) · GW(p)
and then several steps down the line everything they value is gone.
So is this the situation where everything the Christians value is gone..?
Demand that they acknowledge that some kids are not Christian. Then acknowledge that they are from other faiths. Then exempt them from religious services. Then allow them to hold their own religious services away from the other kids. Then get the school as a whole to celebrate other religion's festivals. Then try to stop people wishing each other a merry Christmas and instead say "Happy Holidays".
All that (except maybe for the last sentence) sounds perfectly reasonable to me. In fact, acknowledging that some kids are not Christian -- if, in fact, they are not -- seems to me like the first step away from insanity.
Replies from: alienist, skeptical_lurker↑ comment by alienist · 2015-02-02T02:26:45.784Z · LW(p) · GW(p)
Well, their parents did choose to send them to a Christian school.
Replies from: Lumifer↑ comment by Lumifer · 2015-02-02T18:11:34.641Z · LW(p) · GW(p)
...yes, and?
Replies from: alienist↑ comment by alienist · 2015-02-03T01:39:56.708Z · LW(p) · GW(p)
Presumably that means they want their kids exposed to Christian values and Christian services.
Replies from: Lumifer↑ comment by Lumifer · 2015-02-03T17:36:44.545Z · LW(p) · GW(p)
And they are exposed. But if the kids are actually not Christian, recognizing that seems to me an entirely reasonable thing to do. And by the time kids want to hold their own religious services (presumably "kids" are teenagers at this point), the wishes of parents matter less.
↑ comment by skeptical_lurker · 2015-01-28T21:54:44.536Z · LW(p) · GW(p)
So is this the situation where everything the Christians value is gone..?
By the standards of Christians living a few hundred years ago (and hardliners living today), the secularisation of Europe must look catastrophic. Hundreds of millions of people doomed to burn in eternal hellfire.
All that (except maybe for the last sentence) sounds perfectly reasonable to me. In fact, acknowledging that some kids are not Christian -- if, in fact, they are not -- seems to me like the first step away from insanity.
This is probably because you are not a hardline conservative Christian. To them, the idea that there is an alternative to Christianity is an information hazard far worse than, say, Roko's Basilisk. The idea that you would present impressionable young children with an idea which, if adopted, results in them burning in hell is pure insanity in their eyes.
Before I read the sequences and understood about 'beliefs as attire' and so forth, I was confused as to how any Abrahamic religion could possibly co-exist with any other religion.
Replies from: JoshuaZ, Lumifer↑ comment by JoshuaZ · 2015-01-29T03:08:35.334Z · LW(p) · GW(p)
I was confused as to how any Abrahamic religion could possibly co-exist with any other religion.
Um, you do know that there are major versions of every one of the three major Abrahamic religions that don't believe in eternal suffering for non-believers? Similar remarks apply for the minor Abrahamic offshoots (although deciding which are their own offshoots is fuzzy). Moreover, there are also variations in at least one of those religions where there's enough pre-destination that most of this is rendered completely irrelevant.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-29T07:43:02.136Z · LW(p) · GW(p)
I'm certainly aware that there are many variants of these religions which believe wildly different things, but it was still my understanding that "eternal suffering for non-believers" was the most mainstream position.
Replies from: Salemicus, JoshuaZ↑ comment by Salemicus · 2015-01-29T16:35:57.478Z · LW(p) · GW(p)
"Eternal suffering for non-believers" is non-mainstream in Islam. The mainstream position is that righteous Jews, Christians and Sabaeans will be OK. Pagans, however, are right out.
Replies from: Viliam_Bur, skeptical_lurker↑ comment by Viliam_Bur · 2015-01-30T09:11:59.101Z · LW(p) · GW(p)
Uhm, this seems like saying that "eternal suffering for non-believers" is the mainstream position... it's just that People of the Book are not automatically included among the "non-believers".
Replies from: Salemicus↑ comment by Salemicus · 2015-01-30T10:09:03.735Z · LW(p) · GW(p)
That's one way of looking at it, I suppose. I think "non-believers" normally means "people who don't believe in that religion." Remember the original question was - how can an Abrahamic religion co-exist with a different religion? These are clearly different religions. I do think I'm drawing a meaningful distinction in that Christians believe that the only way to heaven is through Jesus (John 14:6, perhaps the most famous verse in the NT) whereas Islam teaches that you don't have to be a Muslim to go to heaven.
↑ comment by skeptical_lurker · 2015-01-29T18:57:36.790Z · LW(p) · GW(p)
Really? So... out of Islam, Judaism, and Christianity, the only one which teaches that non-believers burn in hell is the one based on Jesus' teachings of forgiveness.
Why am I not that surprised.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T19:13:02.657Z · LW(p) · GW(p)
It's a bit more complicated.
In Judaism there is basically no afterlife -- neither heaven nor hell.
Christianity introduced the promise of eternal life but made it a carrot-and-stick deal -- bask in joy or burn in flames.
Islam essentially went with Christianity's approach, but wrote in a grandfathering clause for "people of the Book" -- Jews and Christians -- who are seen as following more or less the right religion, just not the latest most-correct version updated by the final prophet (Muhammad). Pagans and atheists still burn.
Replies from: Salemicus, JoshuaZ↑ comment by Salemicus · 2015-01-29T19:50:33.269Z · LW(p) · GW(p)
My understanding is that mainstream Christians think non-Christians can go to heaven as long as they didn't have the chance to become Christian - e.g. Moses, or some undiscovered Amazonian tribe - as long as they lived righteously. The mainstream Islamic position, however, is that Islam is really obvious, so even if you never heard of the prophet Mohammed you should still be able to work out most of the stuff based on reason alone (!) so you've got no excuse. So while Christians view Moses, Abraham, etc as precursors to Christianity, Islam views them as actually having been Muslim. For Muslims, the first Muslim was Adam (of Adam and Eve fame).
So it's not that Jews, Christians and Sabians get grandfathered in for having the updated version. Rather, it's that they are still worshipping the right God, even though they've distorted his teachings and those of his prophets, which is surely pushing their luck. The "People of the Book" thing is way less tolerant than it sounds.
Replies from: gjm, alienist↑ comment by gjm · 2015-01-29T21:32:13.985Z · LW(p) · GW(p)
that Islam is really obvious, so [...] you should still be able to work out most of the stuff based on reason alone
That would be really weird given that so far as I can tell Muslims don't hold (e.g.) that all the prophets (Moses, Jesus, etc.) were aware of anything like the whole of Islam despite being actually on a mission from God. Does "most of the stuff" here mean something like "what Islam, Judaism and Christianity have in common"?
Replies from: Salemicus↑ comment by Salemicus · 2015-01-30T10:29:53.027Z · LW(p) · GW(p)
Muslims believe that the teachings of Moses, Jesus, etc. were perverted by the Jews and Christians. In particular they definitely do believe that Moses taught the same things that Mohammed did - this is explicitly stated in the Qur'an, which repeatedly treats Moses as a parallel for Mohammed. So the fact that the Biblical Moses isn't a Muslim is irrelevant - you have to go by the Qur'anic Moses. That's why Hollywood films about Old Testament prophets are frequently censored in the Middle East, because they are telling 'inaccurate' (i.e. non-Qur'anic) stories about Islamic prophets, see e.g. here and here.
Replies from: gjm↑ comment by gjm · 2015-01-30T12:27:08.040Z · LW(p) · GW(p)
So I knew that Muslims believe that earlier prophets' teachings were compatible with Islam before they were corrupted by the Jews and Christians. Are you saying, beyond that, that they believe the earlier prophets actually had something like the whole of Muhammad's message, before Muhammad?
That seems a little unlikely to me. (E.g., for sure Moses didn't have the Qur'an, and I wouldn't expect "There are vitally important things in the Qur'an that weren't known before it" to be controversial among Muslims. But I'm very willing to be corrected.)
Replies from: Salemicus↑ comment by Salemicus · 2015-01-30T14:07:20.417Z · LW(p) · GW(p)
Are you saying, beyond that, that they believe the earlier prophets actually had something like the whole of Muhammad's message, before Muhammad? That seems a little unlikely to me. (E.g., for sure Moses didn't have the Qur'an)
I don't know exactly what the man-in-the-street believes, but yes, Islam teaches that they had something like the whole of the message. It also teaches that all of the prophets had the Torah, and indeed that the Torah and other earlier revealed scriptures talk about Mohammed. The special thing about the Qur'an isn't that it's a unique account of God's word - supposedly God gave his word to mankind over and over, but mankind kept polluting it. The special thing about the Qur'an is that it's the final and incorruptible version.
You're right that it is an obviously silly belief, but I am not an expert on how the contradictions are worked out. For example, did Adam teach the necessity of Hajj? Surely no, because Abraham built the Ka'aba, and he came later. But if Adam's religion was missing one of the pillars of Islam, then how was he a Muslim? But really it's no sillier than all manner of Christian doctrines that no-one remarks on.
Replies from: alienist↑ comment by alienist · 2015-02-07T02:45:13.807Z · LW(p) · GW(p)
Surely no, because Abraham built the Ka'aba, and he came later. But if Adam's religion was missing one of the pillars of Islam, then how was he a Muslim?
Well, according to this article:
According to Islamic tradition, the Kaaba was built by Adam as a place of worship, and then later reconstructed by Abraham and Ishmael.
↑ comment by alienist · 2015-02-01T23:24:10.259Z · LW(p) · GW(p)
My understanding is that mainstream Christians think non-Christians can go to heaven as long as they didn't have the chance to become Christian - e.g. Moses, or some undiscovered Amazonian tribe - as long as they lived righteously.
Well, Dante put the righteous pagans in Limbo (the 1st circle of hell). As for Israelites, they got to heaven because they were followers of G-d after all.
↑ comment by JoshuaZ · 2015-02-03T02:12:06.010Z · LW(p) · GW(p)
In Judaism there is basically no afterlife -- neither heaven nor hell.
That's not really accurate. There are versions of Judaism which have no afterlife, but many classical forms of Judaism do have an afterlife. Part of the idea that Judaism doesn't have an afterlife is due to Christian misunderstandings because in Judaism the afterlife is just really, really not important. It is a much more this world focused religion. But most forms of Orthodox Judaism definitely believe in an afterlife where while the details may be fuzzy, there's a definite reward for the righteous and punishment for sin.
Replies from: Lumifer↑ comment by Lumifer · 2015-02-03T17:34:26.670Z · LW(p) · GW(p)
Can you provide some links? There is Sheol, sure, but I was under the impression that it's just a grey place where shades slowly wither away to nothing. But punishment for sinners and rewards for the righteous -- which branches believe in them? And is it a late Christian influence?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-02-03T19:06:39.328Z · LW(p) · GW(p)
Can you provide some links?
Sure. See this summary of traditional beliefs. Note that some movements or subsects are more explicit. For example, Chabad and most of the Chassidic sects have a much more "Christian" view of the afterlife, as you can see here.
There is Sheol, sure, but I was under the impression that it's just a grey place where shades slowly wither away to nothing.
Sheol as depicted in the oldest parts of the Bible is something like that. It would however be a mistake to interpret the Old Testament/Tanach as having the same role in Judaism as the Bible does for Christianity. In many ways the Talmud is more important as a set of documents when it comes to theology.
But punishment for sinners and rewards for the righteous -- which branches believe in them?
Almost all Orthodox Jews believe this in some form, and this does date back to the early sections of the Talmud (200-300 CEish). But the nature of such reward and punishment can vary, ranging from simple oblivion for the wicked, to a "heaven" like reward and a long purgatory, as well as possible reincarnation as a punishment for the wicked. Among Reform and Conservative movements there's much less of a belief in an afterlife, although individual beliefs may vary.
And is it a late Christian influence?
Difficult to say. A lot of these ideas were floating around in the late Second Temple period so it is hard to tell exactly who was influencing whom and to what extent. Moreover, a lot of the written sources date to 200 CE or so which is already a lot later.
Replies from: Lumifer↑ comment by JoshuaZ · 2015-01-29T15:51:58.420Z · LW(p) · GW(p)
Certainly not for Judaism, even stringent forms of Orthodox Judaism. And not for the Bahai either. For the others the situation is more complicated.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-29T16:11:02.192Z · LW(p) · GW(p)
Ok, well I know more about Christianity than Judaism and I assumed it was similar, but thanks for enlightening me.
↑ comment by Lumifer · 2015-01-28T22:02:10.257Z · LW(p) · GW(p)
I am not sure what point you are making. There is a pretty diverse set of people, commonly called extremists, who think that contemporary society is a catastrophe and is horribly bad. If such people decide to withdraw from society, sure, no problem. If they decide to change - that is, "save" - the society, they shouldn't be surprised to encounter resistance.
What is it that you are complaining about?
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T22:30:02.075Z · LW(p) · GW(p)
I'm not complaining. I think secularisation is a good thing. If anything, I'm trying to convey just how much values have changed, and I'm a little concerned about how they might change in the future, either by moving back to past religious values or by moving forward in some bizarre direction.
You know the ideological Turing test and the idea that you should only argue against a position if you truly understand its proponents' point of view? Well, I think I can see these sorts of issues from both an extreme libertarian and an extreme social conservative viewpoint, and the contradiction is doing strange things to my brain.
Also, I'm defending a statement Azeroth123 made (schools are encouraging kids to be gay - although I'm not so sure about this now) while not endorsing his conclusions, which also might make what I have written seem confusing or even contradictory.
Similarly, I've mostly criticised the school inspectors, and yet I think it's good that their actions are undermining Christian fundamentalism. This might make what I've written sound confusing, but at least I've defeated the halo effect.
↑ comment by satt · 2015-01-31T17:46:47.269Z · LW(p) · GW(p)
we should just keep politics out of schools
Maybe something of a tangent, but I'm not sure that's a coherent idea. I'd say schools are intrinsically political entities, working as they do on the assumption that corralling children into rooms, routinely against their consent, to teach them certain things is useful & necessary. (Not that I think that assumption is necessarily wrong!)
↑ comment by Viliam_Bur · 2015-01-27T09:36:27.481Z · LW(p) · GW(p)
I don't see the encouraging there, other than possible "medical student syndrome". (When uncalibrated people hear "X suggests Y", they are prone to see tiny amounts of X and assume a high probability of Y.) For example, a child with no realistic idea about what transsexuality means could mistake a thought like "in this situation it would be (in far mode) better to be a boy/girl" for transsexuality, which could cause unnecessary turmoil.
children should not be discussing their friend's private secrets with strangers
Yes, this seemed very wrong to me, too. Even if the idea of teaching about sexuality was to increase tolerance, outing someone is wrong, and it could also inspire bullies to expand their arsenal of labels for their classmates.
But to put things in context, I still think the religious education is more harmful on the average, so it seems funny when people with cute ideas like "if you explore your sexuality, the sadistic omnipotent alpha male will torture you for eternity" complain about possible harm to children's sexuality caused by improper education.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-01-28T09:37:39.258Z · LW(p) · GW(p)
I don't see the encouraging there, other than possible "medical student syndrome".
Well, they also wanted the school to teach how to have lesbian sex, which is certainly encouraging homosexuality. I'm not saying this is a bad thing though.
But to put things in context, I still think the religious education is more harmful on the average.
The idea seems to be to counter homophobic religions by forcing them to teach how lesbians have sex. I think it would be a better idea just to shut the religious schools.
↑ comment by Gunnar_Zarncke · 2015-01-26T20:57:40.476Z · LW(p) · GW(p)
If inducing trans and gay were that easy it would be as easy to cure it. It isn't. Trans is so incurable that despite the side-effects it is considered much easier to change the body than the mind.
But I agree with your second paragraph. I'd say that there is such a thing as an information hazard. You need anti-memes for such hazards, and best before being exposed to the info-hazard, kind of like an inoculation. At least as a minor, when you don't have a sufficiently general mindset to dispose of such ideas easily (a stronger immune system).
Replies from: JoshuaZ, DanielLC, alienist↑ comment by JoshuaZ · 2015-01-26T22:15:09.699Z · LW(p) · GW(p)
If inducing trans and gay were that easy it would be as easy to cure it.
I don't think this follows. While it is true that one expects something like that to hold in general, we know that where sex and gender issues are concerned, weird things can happen. In particular, sexual fetishes can apparently arise from fairly innocuous stimuli and once in place are extremely difficult to remove.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2015-01-27T08:06:32.401Z · LW(p) · GW(p)
Agreed. Seligman actually discusses fetishes and gives a convincing account. The problem being that the innocuous stimulus triggers a chain of repeated self-reinforcement.
↑ comment by DanielLC · 2015-02-02T08:06:43.192Z · LW(p) · GW(p)
My model is that there's a sliding scale of how you'd identify your gender, and there's strong social pressure to conform it to your sex. As a result, only people who strongly want to identify their gender otherwise will. Another way to think of it is that the reason it's so incurable is that society automatically cures all the easy and medium cases.
There are instructions on LessWrong for how to become bisexual. If it's that easy for a heterosexual person to become bisexual, shouldn't it be easy for a homosexual person to become bisexual? It's the same issue here.
comment by [deleted] · 2015-01-26T03:59:02.797Z · LW(p) · GW(p)
Could use an editor or feedback of some kind for a planned series of articles on scarcity, optimization, and economics. Have first four articles written and know what the last article is supposed to say, and will be filling in the gaps for a while. Would like to start posting said articles when there is enough to keep up a steady schedule.
No knowledge of economics required, but would be helpful if you were pretty experienced with how the community likes information to be presented. Reply to this comment or send me a message, and let me know how I can send you the text of the articles.
Replies from: Alsadius, AlexSchell↑ comment by AlexSchell · 2015-01-30T19:15:06.329Z · LW(p) · GW(p)
Contact info sent.
comment by Viliam_Bur · 2015-01-28T15:29:19.741Z · LW(p) · GW(p)
I read a book by a guy who writes many funny stories about animals (sorry, I don't remember his name now). He described how zoos often try to provide a lot of space for animals... which is actually bad for non-predators, because their instinct is to hide, and if they cannot hide, they have high levels of stress (even when nothing is attacking them at the moment), which harms their health. Instead, he recommended giving the animals a small place to hide, where they will feel safe.
Recently (after reading "Don't Shoot the Dog", which I strongly recommend to everyone) when I read something about animals, I often think: "What could this imply for humans?"
For me, open-space offices are this kind of scary. I can't imagine working in an open-space office and keeping my sanity. On a second thought, it depends. I probably wouldn't mind having fellow employees in the same room, but the idea of my boss watching me all day long feels really uncomfortable.
Are other people okay with that? (Maybe they consider bosses to be their friends instead of predators.) Or is it just something that the bosses force upon us, and some of us pretend to be okay with it to signal being a "professional" (which is something like being a Vulcan rationalist)?
Could you work in an open space, where your boss would be sitting behind your back all day long? How would you rate such a working environment? -- Please answer only if you are an employee in a situation where you make money for a living (not a student, not the boss).
EDIT to clarify: I meant sitting in open-space office with your boss (defined as someone who is in hierarchy above you, who gives you commands, even if they are not at the very top of the company). And the boss does not have to sit literally behind your back, but spends most of the time in the room where you work, sitting in the place where they see you.
[pollid:812]
Replies from: knb, JoshuaZ, BrassLion, gjm, bramflakes↑ comment by knb · 2015-01-30T02:12:18.094Z · LW(p) · GW(p)
There's some pretty compelling research that indicates most people dislike open office designs. It also seems to lower productivity.
Which leads to the question of why so many companies use open office designs. My guess is that open offices make the company seem more cool/laid-back and less stodgy than cubicle farms. This might help to attract employees, even though it actually makes them less happy in the long-run.
Replies from: bramflakes, None, Viliam_Bur↑ comment by bramflakes · 2015-01-30T03:25:40.827Z · LW(p) · GW(p)
My guess is that open offices make the company seem more cool/laid-back and less stodgy than cubicle farms. This might help to attract employees, even though it actually makes them less happy in the long-run.
This is it, basically. You see it a lot in companies based on churning through employees rather than building up a stable long-term workforce. The open-plan spaces look hip and make newcomers feel like they're working in a Cool Modern Company, so they're more willing to endure the daily annoyances like half a dozen distracting conversations going on at once across the room. It doesn't matter that they eventually wear down under the realization that they are working in a Panopticon prison yard. In fact it's probably considered a feature instead of a bug - I can't think of a better way to make employees feel small and pressured to perform.
Cubicle farms might seem like the prime example of drudgery, but at least you get your own little space and have an unexposed back.
↑ comment by [deleted] · 2015-01-31T06:13:15.857Z · LW(p) · GW(p)
There's a good deal of research on how open offices can increase creativity, through concepts like propinquity. An open office may point to the fact that they value innovation over productivity.
Replies from: knb↑ comment by knb · 2015-01-31T07:18:01.068Z · LW(p) · GW(p)
That's the usual argument. The Davis meta-analysis cited in that New Yorker article found that open offices hurt creativity, which is what I would expect from a more distracting environment. Anyway if there is any good counter-evidence I would like to see it.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-02-02T21:54:13.487Z · LW(p) · GW(p)
The New Yorker claims that the 2011 Davis review (not meta-analysis) found that open offices hurt creativity, but I don't see that in the paper. It only uses the word "creativity" twice, once citing Csikszentmihalyi, and once in the bibliography. If you have read the paper and claim that it does talk about creativity, can you suggest a better word to search for or give a more specific citation?
Replies from: knb↑ comment by Viliam_Bur · 2015-01-30T09:22:08.877Z · LW(p) · GW(p)
Maybe this is the difference between the roles of "predator" and "prey". As a "prey", you hate open spaces. As a "predator", you love them. Guess who has the power to make the decision?
The bosses are probably making the decisions that feel right to them, ignoring the research. And maybe the employees' ability to endure the increased stress is some kind of costly signalling. (Not sure what exactly is signalled here: loyalty? self-confidence? resistance to stress?)
↑ comment by JoshuaZ · 2015-01-28T16:02:11.367Z · LW(p) · GW(p)
Was the author Gerald Durrell? I don't remember him specifically talking about that issue but he wrote a lot of humorous books about his time as a naturalist and helping run zoos.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-01-28T16:24:57.664Z · LW(p) · GW(p)
Was the author Gerald Durrell?
Yes.
↑ comment by BrassLion · 2015-01-29T19:56:15.933Z · LW(p) · GW(p)
I am such a worker, and my immediate boss sits literally right behind me. It's mildly uncomfortable, but not really much more uncomfortable than a traditional set of cubicles. It helps that my boss doesn't care if I'm e.g. reading this site instead of working at any given time, as long as I get my work done overall.
I estimate I would have about a 50% increase in work done if I had an office with a door, no increase if my boss was not in the same building and I had an open plan office, and no increase if I had traditional cubes (open plan offices really do make it easier to talk to people if you need to).
↑ comment by gjm · 2015-01-28T16:28:50.175Z · LW(p) · GW(p)
The intended meaning of the poll is less than perfectly clear to me. Are you asking about (1) working in an open space where your boss is actually, literally, nearby and behind you all the time? Or about (2) working in an open space, full stop?
(I work in an open-plan office. My boss is usually on another continent and when he's here the place where he usually sits doesn't give him direct sight of what I'm doing. I dislike open-plan offices, partly because of the feeling of being watched all the time and partly for other reasons, but it's at the "mildly uncomfortable" level. If my boss -- or anyone else, actually -- were actually sat behind me watching me work all day I'd rate it as "beyond horrible".)
[EDITED to clarify meaning.]
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-01-28T20:07:28.376Z · LW(p) · GW(p)
I wanted to ask about working in an open space where your boss is... let's say in the same room, somewhere where he can watch you all day long.
Not necessarily immediately behind you; could be on the opposite side of the room; could be sideways. And of course sometimes he leaves the room for meetings etc., but his official sitting place is in the same room as yours, and he uses it almost every day for a few hours.
And "boss" doesn't necessarily mean the owner of the company; simply someone who is above you in the hierarchy; someone who gives you commands and who could fire you or significantly contribute to getting you fired. So it's not a room full of equals.
Replies from: JoshuaZ↑ comment by bramflakes · 2015-01-30T02:04:11.073Z · LW(p) · GW(p)
I could never work in an open-plan office. The entire idea is a nakedly aggressive intrusion into employees' personal space on the part of management.
comment by polymathwannabe · 2015-01-27T18:37:28.656Z · LW(p) · GW(p)
An ancient extrasolar system with five sub-Earth-size planets
"To put that into perspective, by the time Earth formed, these five planets were already older than our planet is today."
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-27T19:00:40.164Z · LW(p) · GW(p)
I'm actually in the process of writing a discussion post on Great Filter issues that mentions this. It should be clear why this sort of thing should be pretty scary.
Incidentally this is the paper in question http://arxiv.org/abs/1501.06227
Replies from: None
comment by hwold · 2015-01-26T16:15:33.604Z · LW(p) · GW(p)
A few days ago, I saw an interesting article on a site somewhat related to LessWrong. Unfortunately I didn't have the time to read it, so I bookmarked it.
My computer crashed, I lost my latest bookmarks, and I've now spent 2 hours trying to find this article, without luck. Here is the idea of the article, in a nutshell: we humans are something of a learning machine, trying to build a model of "reality". In ML, overfitting means that by insisting too much on fitting the data, we actually get worse out-of-sample performance (because we start to fit the modeling noise and the stochastic noise). Carrying this ML idea over into the human realm, we can argue that insisting too much on consistency can be a liability rather than an asset in our model-building.
Does that description ring a bell for anyone? If yes, please link the article :)
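(In the meantime, the overfitting point itself is easy to demonstrate numerically. A minimal sketch in Python; the sine model, sample sizes, and noise level are my own toy choices, assuming numpy is available:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying process: y = sin(x) + noise
x_train = np.sort(rng.uniform(0, 3, 10))
y_train = np.sin(x_train) + rng.normal(0, 0.2, 10)
x_test = np.sort(rng.uniform(0, 3, 200))
y_test = np.sin(x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-9 fit passes through all 10 training points (near-zero training error) but does worse out of sample than degree 3: demanding perfect consistency with the data means fitting the noise.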
comment by Ritalin · 2015-01-30T16:18:12.422Z · LW(p) · GW(p)
A self-improvement inquiry. I've got an irrational tendency to be too relaxed around other people; too sincere, transparent, and trusting. In general I'm very uninhibited and uncontrolled, and this goes to spectacular levels when I'm the slightest bit intoxicated. This has come back to bite me on more than one occasion.
I've had trouble finding documentation on how to improve on this. "Being too honest/sincere/open" doesn't seem like a common problem for people to have.
comment by Capla · 2015-01-29T00:35:37.447Z · LW(p) · GW(p)
Other than Superintelligence and Global Catastrophic Risks, what should I read to find out more about existential risk?
Replies from: JoshuaZ
comment by AmagicalFishy · 2015-01-26T15:39:41.676Z · LW(p) · GW(p)
I still don't understand the apparently substantial difference between Frequentist and Bayesian reasoning. The subject was brought up again in a class I just attended—and I was still left with a distinct "... those... those aren't different things" feeling.
I am beginning to come to the conclusion that the whole "debate" is a case of Red vs. Blue nonsense. So far, whenever one tries to elaborate on a difference, it is done via some hypothetical anecdote, and said anecdote rarely amounts to anything outside of "Different people sometimes treat uncertainty differently in different situations, depending on the situation." (Usually by having one's preferred side make a very reasonable conclusion, and the other side make some absurd leap of pseudo-logic.)
Furthermore, these two things hardly ever seem to have anything to do with the fundamental definition of probability, and have everything to do with the assumed simplicity of a given system.
I AM ANGRY
Replies from: Kindly, IlyaShpitser, polymathwannabe, None, None↑ comment by Kindly · 2015-01-26T16:29:36.232Z · LW(p) · GW(p)
The whole thing is made more complicated by the debate between frequentist and Bayesian methods in statistics. (It obviously matters which you use even if you don't care what to believe about "what probability is", or don't see a difference.)
↑ comment by IlyaShpitser · 2015-01-27T11:37:06.187Z · LW(p) · GW(p)
This debate is boring and old, people getting work done in ML/stats have long ago moved past it. My suggestion is to find something better to talk about: it's mostly wankery if people other than ML/stats people are talking.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-01-27T13:09:23.546Z · LW(p) · GW(p)
What is it when it is ML/stats people who are talking? For example, it's a frequent theme at the blogs of Andrew Gelman and Deborah Mayo, and anyone teaching statistics has to deal with the issues.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-01-27T13:13:18.175Z · LW(p) · GW(p)
I teach statistics and I don't deal with the debate very much. Have you read the exchange started by Robins/Wasserman's missing data example here:
https://normaldeviate.wordpress.com/2012/08/28/robins-and-wasserman-respond-to-a-nobel-prize-winner/
What do you make of it? It is an argument against certain kinds of "Bayesian universality" people talk about (but it's not really the type of argument folks here have). Here they have a specific technical point to make.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-01-27T14:26:23.416Z · LW(p) · GW(p)
It will take a while to understand it, but by the end of section 3 I was wondering when the assumption that X is a binary string was going to be used. Not at all, so far. The space might as well have been defined as just a set of 2^d arbitrary things. So I anticipate that introducing a smoothness assumption on theta, foreshadowed at this point, won't help -- there is no structure for theta to be smooth with respect to. Surely this is why the only information about X that can be used to estimate Y is π(X)? That is the only information about X that is available, the way the problem is set up.
More when I've studied the rest.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-01-27T14:45:59.199Z · LW(p) · GW(p)
The binary thing isn't important, what's important is that there are real situations where likelihood based methods (including Bayes) don't work well (because by assumption there is only strong info on the part of the likelihood we aren't using in our functional, and the part of the likelihood we are using in our functional is very complicated).
I think my point wasn't so much the technical specifics of that example, but rather that these are the types of B vs F arguments that actually have something to say, rather than going around and around in circles. I had a rephrase of this example using causal language somewhere on LW (if that will help, not sure if it will).
Robins and Ritov have something of paper length, rather than blog post length if you are interested.
Replies from: AmagicalFishy, Richard_Kennaway↑ comment by AmagicalFishy · 2015-01-27T17:20:35.155Z · LW(p) · GW(p)
Wait, IlyaShpitser: I think you overestimate my knowledge of the field of statistics. From what it sounds like, there's an actual, quantitative difference between Bayesian and frequentist methods. That is, in a given situation, the two will come to totally different results. Is this true?
I should have made it more clear that I don't care about some abstract philosophical difference if said difference doesn't mean there are different results (because those differences usually come down to a nonsensical distinction [à la free will]). I was under the impression that there is a claim that some interpretation of the philosophy will fruit different results—but I was missing it, because everything I've been introduced to seems to give the same answer.
Is it true that they're different methods that actually give different answers?
Replies from: DanielLC, polymathwannabe↑ comment by DanielLC · 2015-02-02T07:48:48.349Z · LW(p) · GW(p)
I think it's more that there are times when frequentists claim there isn't an answer. It's very common for statistical tests to talk about likelihood. The likelihood of a hypothesis given an experimental result is defined as the probability of the result given the hypothesis. If you want to know the probability of the hypothesis, you take the likelihood, multiply it by the prior probability, and normalize. Frequentists deny that there always is a prior probability. As a result, they tend to just use the likelihood as if it were a probability. Conflating the two is equivalent to the base rate fallacy.
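(A worked toy example of the distinction; all numbers here are invented for illustration:)

```python
# Hypothetical numbers: a test for a rare condition.
p_h = 0.01              # prior probability of the hypothesis (the base rate)
p_e_given_h = 0.90      # likelihood: P(positive test | condition)
p_e_given_not_h = 0.05  # false positive rate: P(positive test | no condition)

# Bayes: posterior is likelihood times prior, normalized over both hypotheses.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(posterior)  # ~0.15, even though the likelihood is 0.90
```

Reading the 0.90 likelihood as "the hypothesis is 90% likely to be true" is exactly the base rate fallacy.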
↑ comment by polymathwannabe · 2015-01-27T18:43:23.883Z · LW(p) · GW(p)
EY believes so.
↑ comment by Richard_Kennaway · 2015-01-27T23:36:08.842Z · LW(p) · GW(p)
I think I'm beginning to see the problem for the Bayesian, although I am not yet sure what the correct response to it is. I have some more or less rambling thoughts about it.
It appears that the Bayesian is being supposed to start from a flat prior over the space of all possible thetas. This is a very large space (all possible strings of 2^100000 probabilities), almost all of which consists of thetas which are independent of pi. (ETA: Here I mistakenly took X to be a product of two-point sets {0,1}, when in fact it is a product of unit intervals [0,1]. I don't think this makes much difference to the argument though, or if it does, it would be best addressed by letting this one stand as is and discussing that case separately.) When theta is independent of pi, it seems to me that the Bayesian would simply take the average of sampled values of Y as an estimate of P(Y=1), and be very likely to get almost the same value as the frequentist. Indirectly observing a few values of theta (through the observed values of Y) gives no information about any other values of theta, because the prior was flat. This is why the likelihood calculated in the blog post contains almost no information about theta.
Here is what seems to me to be a related problem. You will be presented with a series of some number of booleans, say 100. After each one, you are to guess the next. If your prior is a flat distribution over {0,1}^100, your prediction will be 50% each way at every stage, regardless of what the sequence so far has been, because all continuations are equally likely. It is impossible to learn from such a prior, which has built into it the belief that the past cannot predict the future.
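(A brute-force check of that claim, as a sketch; I use length 10 rather than 100 so the whole space can be enumerated:)

```python
from itertools import product

n = 10
prefix = (1,) * 9  # suppose we have seen nine 1s in a row

# Flat prior over {0,1}^n: every sequence gets equal weight, so the
# predictive probability is just the fraction of consistent continuations.
consistent = [s for s in product((0, 1), repeat=n) if s[:9] == prefix]
p_next_one = sum(s[9] for s in consistent) / len(consistent)
print(p_next_one)  # 0.5 -- nine straight 1s have taught the prior nothing

# Contrast: a flat prior over the *bias* of a coin (Laplace's rule of
# succession) would instead predict (9 + 1) / (9 + 2), about 0.91.
```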
As noted in the blog post, smoothness of theta with respect to e.g. the metric structure of {0,1}^100000 doesn't help, because a sample of only 1000 from this space is overwhelmingly likely to consist of points that are all at a Manhattan distance of about 50000 from each other. No substantial extrapolation of theta is possible from such a sample unless it is smooth at the scale of the whole space.
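(That distance claim is also easy to confirm by simulation:)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000  # dimension of the hypercube {0,1}^d

for _ in range(3):
    a = rng.integers(0, 2, size=d)  # a uniform random point in {0,1}^d
    b = rng.integers(0, 2, size=d)
    print(np.abs(a - b).sum())
# Each pair differs in ~50000 coordinates, with a standard deviation of only
# sqrt(d/4) ~ 158, so every sampled point is "far" from every other one.
```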
The flat prior over theta seems to be of a similar nature to the flat prior over sequences. If in this sample of 1000 you noticed that when pi was high, the corresponding value of Y, when sampled, was very likely to be 1, and similarly that when pi was low, Y was usually 0 among those rare times it was sampled, you might find it reasonable to conclude that pi and theta were related and use something like the Horvitz-Thompson estimator. But the flat prior over theta does not allow this inference. However many values of theta you have gained some partial information about by sampling Y, they tell you nothing about any other values.
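(For concreteness, the Horvitz-Thompson estimator is just inverse-probability weighting. A toy sketch of the missing-data version, with my own made-up theta; this is not the blog post's exact construction:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

pi = rng.uniform(0.05, 0.95, n)   # known sampling probabilities pi(X)
theta = pi                        # a toy case where theta happens to track pi
y = rng.binomial(1, theta)        # Y ~ Bernoulli(theta(X))
r = rng.binomial(1, pi)           # R = 1 iff Y is actually observed

# Horvitz-Thompson: weight each observed Y by 1/pi(X). Unbiased for E[Y],
# since E[R * Y / pi] = E[Y], using only the *known* pi.
ht = np.mean(r * y / pi)
naive = y[r == 1].mean()          # plain average of the observed Y's
print(ht, naive, theta.mean())    # ~0.50 vs ~0.64 vs 0.50
```

The naive average is badly biased precisely because theta and pi are dependent; the HT weights correct for this without learning anything about theta, which is the frequentist's advantage in this example.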
My guess so far is that that is a problem with the flat prior over theta. The problem for the Bayesian is to come up with a better one that is capable of seeing a dependency between pi and theta.
Is the Robins and Ritov paper the one cited in the blog post, "Toward a Curse of Dimensionality Appropriate (CODA) Asymptotic Theory for Semi-parametric Models"? I looked at that briefly, only enough to see that their example, though somewhat similar, deals with a relatively low dimensional case (5), which in practical terms counts as high dimensional, and what they describe as a "moderate" sample size of 10000. So that's rather different from the present example, and I don't know if anything I just said will be relevant to it.
On reading further in the blog post, I see that a lot of what I said is said more briefly in the comments there, especially comment (4) by Chris Sims:
If theta and pi were independent, we could just throw out the observations where we don’t see Y and use the remaining sample as if there were no “R” variable. So specifying that theta and pi are independent is not a reasonable way to say we have little knowledge. It amounts to saying we are sure the main potential complication in the model is not present, and therefore opens us up to making seriously incorrect inference.
And a flat prior on theta is an assumption that theta and pi are almost certainly independent.
Replies from: IlyaShpitser, one_forward↑ comment by IlyaShpitser · 2015-01-28T08:44:01.746Z · LW(p) · GW(p)
Yes the CODA paper is what I meant.
The right way out is to have a "weird" prior that mirrors frequentist behavior. Which, as the authors point out, is perfectly fine, but why bother? By the way, Bayes can't use Horvitz-Thompson directly because it's not a likelihood-based estimator; I think you have to somehow bake the entire thing into the prior.
The insight that lets you structure your B setup properly here is sort of coming from "outside the problem", too.
↑ comment by one_forward · 2015-02-02T19:34:51.276Z · LW(p) · GW(p)
A note on notation - [0,1] with square brackets generally refers to the closed interval between 0 and 1. X is a continuous variable, not a boolean one.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-02-02T21:43:40.198Z · LW(p) · GW(p)
Actually, I should have been using curly brackets, as when I wrote (0,1) I meant the set with two elements, 0 and 1, which is what I had taken X to be a product of copies of, hence my obtaining 50000 as the expected Manhattan distance between any two members. I'll correct the post to make that clear. I think everything I said would still apply to the continuous case. If it doesn't, that would be better addressed with a separate comment.
Replies from: one_forward↑ comment by one_forward · 2015-02-04T20:34:14.307Z · LW(p) · GW(p)
Yeah, I don't think it makes much difference in high-dimensions. It's just more natural to talk about smoothness in the continuous case.
↑ comment by polymathwannabe · 2015-01-26T16:03:50.309Z · LW(p) · GW(p)
What "fundamental definition of probability" are you using?
Replies from: AmagicalFishy↑ comment by AmagicalFishy · 2015-01-26T16:11:45.031Z · LW(p) · GW(p)
A quantitative thing that indicates how likely it is for an event to happen.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-26T16:48:42.878Z · LW(p) · GW(p)
Let's say Alice and Bob are in two different rooms and can't see each other. Alice rolls a 6-sided die and looks at the outcome. Bob doesn't know the outcome, but knows that the die has been rolled. In your interpretation of the word "probability", can Bob talk about the probabilities of the different roll outcomes after Alice rolled?
Replies from: AmagicalFishy↑ comment by AmagicalFishy · 2015-01-26T17:02:30.470Z · LW(p) · GW(p)
I'm having a hard time answering this question with "yes" or "no":
The event in question is "Alice rolling a particular number on a 6-sided die." Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make from this probability (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could convolute a bunch of other probabilities together to increase the chances that his decision is accurate.
Yes... I guess?
(Or, are you referring to something like: If Alice rolled a 5, then there is a 100% chance she rolled a 5?)
Replies from: Lumifer↑ comment by Lumifer · 2015-01-26T17:19:24.319Z · LW(p) · GW(p)
Well, the key point here is whether the word "probability" can be applied to things which already happened but you don't know what exactly happened. You said
A quantitative thing that indicates how likely it is for an event to happen.
which implies that probabilities apply only to the future. The question is whether you can speak of probabilities as lack of knowledge about something which is already "fixed".
Another issue is that in your definition you just shifted the burden of work to the word "likely". What does it mean that an event is "likely" or "not likely" to happen?
Replies from: emr, AmagicalFishy↑ comment by emr · 2015-01-26T19:13:01.611Z · LW(p) · GW(p)
EDIT: The neighboring comment here, raises the same point (using the same type of example!). I wouldn't have posted this duplicate comment if I had caught this in time.
I'm also confused about the debate.
Isn't the "thing that hasn't happened yet" always an anticipated experience? (Even if we use a linguistic shorthand like "the dice roll is 6 with probability .5".)
Suppose Alice tells Bob she has rolled the dice, but in reality she waits until after Bob has already done his calculations and secretly rolls the dice right before Bob walks in the room. Could Bob have any valid complaint about this?
Once you translate into anticipated experiences of some observer in some situation, it seems like the difference between the two camps is about the general leniency with which we grant that the observer can make additional assumptions about their situation. But I don't see how you can opt out of assuming something: Any framing of the P("sun will rise tomorrow") problem has to implicitly specify a model, even if it's the infinite-coin-flip model.
↑ comment by AmagicalFishy · 2015-01-26T17:53:28.486Z · LW(p) · GW(p)
Sorry, I didn't mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.
That is, given the same set of data, there should be no difference between event A happening, and you having to guess whether or not it happened, and event A not having happened yet—and you having to guess whether or not it will happen.
When you say "apply a probability to something," I think:
"If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making this decision?"
The only time event A happening matters is if it happening generated new data. In the Bob-Alice situation, Alice rolling a die in separate room gives zero information to Bob—so whether or not she already rolled it doesn't matter. Here are a couple of different situations to illustrate:
A) Bob and Alice are in different rooms. Alice rolls the die and Bob has to guess the number she rolled.
B) Bob has to guess the number that Alice's die will roll. Alice then rolls the die.
C) Bob watches Alice roll the die, but did not see the outcome. Bob must guess the number rolled.
D) Bob is a supercomputer which can factor in every infinitesimal fact about how Alice rolls the die, and about the die itself, upon seeing the roll. Bob-the-supercomputer watches Alice roll the die, but did not see the outcome.
In situations A, B, and C, whether Alice rolls the die before or after Bob's guess is irrelevant; it doesn't change anything about Bob's decision. For all intents and purposes, the questions "What did Alice roll?" and "What will Alice roll?" are exactly the same question. That is: we assume the system is simple enough that rolling a fair die is always the same. In situation D, the questions are different because there's different information available depending on whether or not Alice rolled already. That is, the assumption of a simple system isn't there, because Bob is able to see the complexity of the situation and make the exact same kind of decision. Alice having actually rolled the die does matter.
I don't quite understand your "likely or not likely" question. To try to answer: If an event is likely to happen, then your uncertainty that it will happen is low. If it is not likely, then your uncertainty that it will happen is high.
(Sorry, I totally did not expect this reply to be so long.)
Replies from: Lumifer↑ comment by Lumifer · 2015-01-26T18:15:57.665Z · LW(p) · GW(p)
So, you are interpreting probabilities as subjective beliefs, then? That is a Bayesian, but not the frequentist approach.
Having said that, it's useful to realize that the concept of probability has many different... aspects and in some situations it's better to concentrate on some particular aspects. For example if you're dealing with quality control and acceptable tolerances in an industrial mass production environment, I would guess that the frequentist aspect would be much more convenient to you than a Bayesian one :-)
If an event is likely to happen, then your uncertainty that it will happen is low.
You may want to reformulate this, as otherwise there's lack of clarity with respect to the uncertainty about the event vs. the uncertainty about your probability for the event. But otherwise you're still saying that probabilities are subjective beliefs, right?
↑ comment by [deleted] · 2015-01-26T20:35:10.117Z · LW(p) · GW(p)
My best try: Frequentist statistics are built upon deductive logic; essentially a single hypothesis. They can be used for inductive logic (multiple hypotheses), but only at the more advanced levels which most people never learn. With Bayesian reasoning inductive logic is incorporated into the framework from the very beginning. This makes it harder to learn at first, but introduces fewer complications later on. Now math majors feel free to rip this explanation to shreds.
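(To make the deductive/inductive contrast concrete, a toy sketch of my own, with made-up hypotheses and equal priors:)

```python
from math import comb

# Data: 60 heads in 100 flips.
n, k = 100, 60

def pmf(j, p):
    """Binomial probability of j heads in n flips with bias p."""
    return comb(n, j) * p**j * (1 - p)**(n - j)

# Frequentist-style check of a single hypothesis (a fair coin):
# two-sided p-value, the probability under H0 of a result at least this extreme.
p_value = sum(pmf(j, 0.5) for j in range(n + 1) if abs(j - n / 2) >= abs(k - n / 2))

# Bayesian-style comparison of two explicit hypotheses (fair vs. 0.7-biased),
# starting from equal priors.
like_fair = pmf(k, 0.5)
like_biased = pmf(k, 0.7)
posterior_fair = like_fair / (like_fair + like_biased)

print(f"p-value under 'fair': {p_value:.3f}")              # ~0.057
print(f"P(fair | data) vs 0.7-biased: {posterior_fair:.2f}")  # ~0.56
```

Both calculations use the same probability rules; they differ in whether a prior over multiple hypotheses is allowed into the machinery at all.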
↑ comment by [deleted] · 2015-01-26T18:37:33.074Z · LW(p) · GW(p)
They are the same thing. Gertrude Stein had it right: probability is probability is probability. It doesn't matter whether your interpretation is Bayesian or frequentist. The distinction between the two is simply how one chooses to apply probability: as a property of the world (frequentist) or as a description of our mental world-models (Bayesian). In either case the rules of probability are the same.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2015-01-27T19:16:14.999Z · LW(p) · GW(p)
This phrasing suggests that Bayesians can't accept quantum mechanics except via hidden variables. This is not the case.
Replies from: None↑ comment by [deleted] · 2015-01-27T21:43:35.948Z · LW(p) · GW(p)
Taboo the word Bayesian.
I was talking about the Bayesian interpretation of probability. An interpretation, not a category of person. Quantum mechanics without hidden variables uses the frequentist interpretation of probability.
Sometimes in life we use probability in ways that are frequentist. Other times we use probability in ways that are Bayesian. This should not be alarming.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2015-01-27T23:31:53.556Z · LW(p) · GW(p)
Fair enough. The idea of calling QM 'frequentist' really stretches the reason for using that term under anything but an explicit collapse interpretation. Maybe it would be more of a third way -
- Frequentism would be that the world is itself stochastic.
- Fractionism would be that the world takes both paths and we will find ourselves in one.
- Bayes gets to keep its definition.
comment by Adam Zerner (adamzerner) · 2015-01-28T15:49:07.664Z · LW(p) · GW(p)
A cool fact about the human brain is that the left and right hemispheres function as their own little worlds, each with their own things to worry about, but if you remove one half of someone’s brain, they can sometimes not only survive, but their remaining brain half can learn to do many of the other half’s previous jobs, allowing the person to live a normal life. That’s right—you could lose half of your brain and potentially function normally.
So say you have an identical twin sibling named Bob who develops a fatal brain defect. You decide to save him by giving him half of your brain. Doctors operate on both of you, discarding his brain and replacing it with half of yours. When you wake up, you feel normal and like yourself. Your twin (who already has your identical DNA because you're twins) wakes up with your exact personality and memories.
- What Makes You You, Wait But Why
What are the implications of this for cryonics? What about cryonically freezing half of your brain?
Replies from: None↑ comment by [deleted] · 2015-01-28T16:19:37.042Z · LW(p) · GW(p)
They are mostly talking about the cortex, the outer wrinkled layer. Functionally, however, the cortex is completely useless without a whole suite of subcortical structures, and it actually only has something like 20% of your neurons. It's a part of the whole functional network, not the whole thing, though in mammals it expanded quite a bit and took on a lot of specialization. I don't know if there's such a thing as a 'generic' thalamus network vs. 'your' thalamus network; that sounds like a question for the connectomics researchers to get on.
A lot of these structures (A) lie near or on the midline and have much less lateralization and more crosstalk, and (B) are absolutely vital if you, say, don't want parkinsonism or to fall into permanent slow-wave sleep. Hemispherectomies generally go after the cortex and white matter, sometimes taking some of the superficial subcortical stuff, in chunks. The results of such things are usually much more positive in young people, of course.
However, since you don't lose autobiographical memories from careful excision of brain parts, it does indeed suggest that they're present and distributed throughout...
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2015-01-28T20:16:45.925Z · LW(p) · GW(p)
Let's assume that the information that makes you you is contained within a half cortex. What do you think the chances are that they'd be able to "figure the rest out"? I.e., integrate the half cortex with other parts (maybe biological, maybe mechanical).
comment by Punoxysm · 2015-01-27T05:01:49.506Z · LW(p) · GW(p)
I think in many professions you can categorize people as professionals or auteurs (insofar as anyone can ever be classified, binaries are false, yada yada).
Professionals are people ready to fit into the necessary role and execute the required duties. Professionals are happy with "good enough", are timely, work well with others, step back or bow out when necessary, and don't defend their visions or ideas when the defense is unlikely to be listened to. Professionals compromise on ideas, conform in their behavior, and to some degree expect others to do the same. Professionals are reliable, level-headed, and can handle crises or unexpected events. They may have a strong and distinct vision of their goals or their craft, but will subsume it to another's without much fuss if they don't think they have the stance or leverage to promote it. Professionals accurately assess their social status in different situations, and are reluctant to defy it.
On the worse end of this spectrum is the yes-man, the bureaucrat, and the aggressive conformer. On the better end are the metaphorical Eagle Scouts, the executive, the "fixers" who can come in and clean up any mess.
Auteurs are guided first by their own vision, maybe to the point of obsession. Auteurs optimize aggressively and wildly, but only for their own vision. Auteurs will interrupt you to tell you why their idea is better, or why yours is wrong. Auteurs have a hard time working together if they disagree, but can work well together if they agree, or with professionals who can align with their thinking. Auteurs don't care that their ideas are non-standard, or don't follow best practices, or have substantial material difficulties. Auteurs will let a deadline fly past if their work is not ready. Auteurs might look past facts that contradict them. Auteurs don't feel that sorry if they make themselves a pain in the ass for others to move toward their goals. Auteurs will disregard status, norms, and feelings to evangelize.
On the worse end they are kooks, irrationally obstinate and arrogant, or quixotic ineffectuals. On the better end they are visionaries, evangelists for great ideas, obsessive perfectionists who elevate their craft whether the material rewards are proportional to their pains or not.
I think LW might have more sympathy for the Auteurs, but I hope people recognize the virtues of the professional and the shortcomings of the auteur, and that there is a time and place to channel the spirit of each side.
Replies from: Richard_Kennaway, Lumifer, dxu↑ comment by Richard_Kennaway · 2015-01-28T18:28:25.766Z · LW(p) · GW(p)
I can buy these as character sketches of two imaginary individuals, but are there actual clusters in peoplespace here? There's a huge amount of burdensome detail in them.
Replies from: Punoxysm↑ comment by Punoxysm · 2015-01-28T19:24:52.583Z · LW(p) · GW(p)
It's not burdensome detail; it's a list of potential and correlated personality traits. You don't need the conjunction of all these traits to qualify. More details provide more places to relate to the broad illustration I'm trying to make. But I'll try to state the core elements that I want to be emphasized, so that it's clearer which details aren't as relevant.
Professionals are more interested in achieving results, and do not have a specific attachment to a philosophy of process or decision-making to reach those results.
Auteurs are very interested in process, and have strong opinions about how process and decision-making should be done. They are interested in results too, but they do not treat it as separate from process.
And I'll add that like any supposed personality type, the dichotomy I'm trying to draw is fluid in time and context for any individual.
But I think it's worth considering because it reflects a spectrum of the ways people handle their relationship with their work and with coworkers.
Essentially, treat it as seriously as a personality test.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-01-28T20:55:29.640Z · LW(p) · GW(p)
Essentially, treat it as seriously as a personality test.
Ah. That seriously. :)
Replies from: Punoxysm↑ comment by Punoxysm · 2015-01-28T21:20:55.919Z · LW(p) · GW(p)
Exactly. The world is complicated, apparently contradictory characteristics can co-inhabit the same person, and frameworks are frequently incorrect in proportion to their elegance, but people still think in frameworks and prototypes so I think these are two good prototypes.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-01-28T22:04:17.914Z · LW(p) · GW(p)
Like Hogwarts houses? Star signs? MBTI? Enneagram? Keirsey Temperaments? Big 5? Oldham Personality Styles? Jungian Types? TA? PC/NPC? AD&D Character Classes? Four Humours? 7 Personality Types? 12 Guardian Spirits?
I made one of those up. Other people made the rest of them up. And Google tells me the one I made up already exists.
Where does Professional/Auteur come from?
Replies from: IlyaShpitser, Punoxysm↑ comment by IlyaShpitser · 2015-01-29T16:05:33.305Z · LW(p) · GW(p)
One of these has pedigree!
I agree that human typology is often noise. Not always though, it can be usefully predictive if it slices the pie well.
↑ comment by Punoxysm · 2015-01-28T23:31:37.971Z · LW(p) · GW(p)
Yes! Like those.
I think you're being a bit harsh though - the problem with personality tests and the like is not that the spectrums or clusters they point out don't reflect any human behavior ever at all, it's just that they assign a label to a person forever and try to sell it by self-fulfilling predictions ("Flurble type personalities are sometimes fastidious", "OMG I AM sometimes fastidious! this test gets me").
Professional/Auteur is a distinction slightly more specific than personality types, since it applies to how people work. It comes from the terminology of film, where directors range from hired-hands to fill a specific void in production to auteurs whose overriding priority is to produce the whole film as they envision it, whether this is convenient for the producer or not. Reading and listening to writers talk about their craft, it's also clear that there's a spectrum from those who embrace the commercial nature of the publishing industry and try hard to make that system work for them (by producing work in large volume, by consciously following trends, etc.) to those who care first and foremost about creating the artistic work they envisioned. In fact, meeting a deadline with something you're not entirely satisfied with vs. inconveniencing others to hone your work to perfection is a good example of diverging behavior between the two types.
There are other things that informed my thinking like articles I'd read on entrepreneurs vs. executives, foxes vs. hedgehogs, etc.
If I wanted to make this more scientific, I would focus on that workplace behavior aspect and define specific metrics for how the individual prioritizes operational and organization concerns vs. their own preferences and vision.
↑ comment by Lumifer · 2015-01-27T06:54:37.813Z · LW(p) · GW(p)
Do you think it's a circle?
I can see the " irrationally obstinate and arrogant" bureaucrats and "aggressive conformers" at one junction, and I can see evangelist Eagle Scouts and perfectionist fixers at the other junction.
Steve Jobs seems to be a classic second-junction type.
Replies from: BrassLion, Punoxysm↑ comment by BrassLion · 2015-01-28T01:55:59.047Z · LW(p) · GW(p)
It does seem like these are two mostly unrelated skills - leadership, teamwork, and time management on one hand, and vision, creativity, and drive on the other. They don't really oppose each other except in the general sense that both sets take a long time to learn to do well. There are enough examples of people that are both, or neither, that these don't seem to be a very useful way of carving up reality.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-28T16:45:44.095Z · LW(p) · GW(p)
They don't really oppose each other
I think they do. Not in the "never shall they mix" kind of sense, but I would argue that these types form discernible separate clusters in the psychological space.
Replies from: gjm↑ comment by gjm · 2015-01-28T17:16:31.515Z · LW(p) · GW(p)
Anyone got any actual evidence one way or the other?
(My own prejudices are in the direction of the two things genuinely being opposed, on handwavy grounds to do with creativity being partly a matter of having relatively inactive internal censors, which might be bad for efficiency on routine tasks. But I don't have much faith in those prejudices.)
↑ comment by Punoxysm · 2015-01-27T07:54:48.227Z · LW(p) · GW(p)
I was thinking 4 quadrant. Horizontal axis is competence, vertical axis is professional vs. auteur.
Steve Jobs was something of an auteur who eventually began to really piss off the people he had once successfully led and inspired. After his return to Apple, he had clearly gained some more permanent teamwork and leadership skills, which is good, but was still pretty dogmatic about his vision and hard to argue with.
The most competent end of the professional quadrant probably includes people more like Jamie Dimon, Jack Welch, or Mitt Romney. Professional CEOs who you could trust to administrate anything, who topped their classes (at least in Romney's case), but who don't necessarily stand for any big idea.
This classification also corresponds to Foxes and Hedgehogs - Many Small Ideas vs. One Big Idea / Holistic vs. Grand Framework thinking.
But it is not a true binary; people who have an obsessive vision can learn to play nice with others. People who naturally like to conform and administrate can learn to assert a bold vision. If Stanley Kubrick is the film example of an Auteur - an aggravating genius - and J.J. Abrams is the professional - reliable and talented but mercenary and flexible - there are still people like Martin Scorsese, whom people love to work with and who define new trends in their art.
So maybe junction is a good way to think of it, but there are extraordinarily talented and important people who seem to have avoided learning from the other side too.
comment by Anders_H · 2015-01-26T18:06:50.590Z · LW(p) · GW(p)
I have written a draft post on why prediction markets are confounded, and what implications this has for the feasibility of futarchy. I would very much appreciate any comments on the draft before I publish it. If anyone is willing to look at it, please send me a private message with your contact details. Thank you!
Replies from: solipsist
comment by is4junk · 2015-01-26T15:57:25.591Z · LW(p) · GW(p)
Politics as entertainment
For many policy questions I normally foresee long-term 'obvious' issues that will arise from them. However, I also believe in a Singularity of some sort in that same time frame. And when I re-frame the policy question as 'will this impact the Singularity, or matter after the Singularity?', the answer is usually no to both.
Of course, there is always the chance of no Singularity but I don't give it much weight.
So my question is: Has anyone successfully moved beyond the policy questions (emotionally)? Follow up question: once you are beyond them do you look at them more as entertainment like which sports team is doing better? Or do you use them for signalling?
Replies from: Punoxysm, None, None↑ comment by Punoxysm · 2015-01-27T05:31:32.091Z · LW(p) · GW(p)
I just read a crapton of political news for a couple years until I was completely sick of it.
I also kind of live in a bubble, in terms of economic security, such that most policy debates don't realistically impact me.
High belief in a near singularity is unnecessary.
Replies from: is4junk↑ comment by is4junk · 2015-01-28T05:40:51.617Z · LW(p) · GW(p)
Overdosing on politics to become desensitized is genius. However, I seem to have too high a tolerance for it.
The singularity aspect is more of a personal inconsistency I need to address: I can't both think that the long-term stuff doesn't matter and hold strong opinions on the long-term issues.
↑ comment by [deleted] · 2015-01-29T03:50:29.462Z · LW(p) · GW(p)
Has anyone successfully moved beyond the policy questions (emotionally)?
I think I can pretty confidently say "yes." Well, emotions are still there, but I think they are more like the kinds of emotions a doctor might feel as he considers a cancer spreading through a patient and the tools they have to deal with it, not the sort of excitement politics in particular provokes.
Follow up question: once you are beyond them do you look at them more as entertainment like which sports team is doing better? Or do you use them for signalling?
Well, you are free to do what you want at that point, but I think economists look at them as scientific questions, ones that are quite important, though often not as important as people seem to think.
I am working on a series of articles about economics, and I would like one mini-series to be "How To Think About Policy" or something to that effect....
comment by moreati · 2015-01-26T10:58:23.979Z · LW(p) · GW(p)
I saw Ex Machina this weekend. The subject matter is very close to LWs interests and I enjoyed it a lot. My prior prediction that it's "AI box experiment: the movie" wasn't 100% accurate.
Gur fpranevb vf cerfragrq nf n ghevat grfg. Gur punenpgref hfr n srj grezf yvxr fvatheynevgl naq gur NV vf pbasvarq, ohg grfgre vf abg rkcyvpvgyl n tngrxrrcre. Lbh pbhyq ivrj gur svyz nf rvgure qrcvpgvat n obk rkcrevrzrag eha vapbzcrgnagyl, be gur obk rkcrevzrag zbhyqrq gb znxr n pbzcryyvat/cbchyne svyz.
For those who worry that it's Hollywood, hence dumb: I think you'll be pleasantly surprised. The characters are smart and act accordingly; I spotted fewer than 5 idiot-ball moments.
Replies from: Sean_o_h, Transfuturist↑ comment by Transfuturist · 2015-01-27T01:15:53.747Z · LW(p) · GW(p)
I'm in the US; is there no hope but to wait until April?
Replies from: Sherincall, moreati↑ comment by Sherincall · 2015-01-27T18:30:00.588Z · LW(p) · GW(p)
Of course there is. The approach varies based on how much you are willing to pay, how you morally feel about doing something the studio does not want you to do and how risk averse you are. Based on those, the solution is anywhere between travelling to the UK and downloading it illegally.
comment by Evan_Gaensbauer · 2015-01-26T07:29:36.477Z · LW(p) · GW(p)
I intend to publish several posts on the Effective Altruism Forum in the coming weeks. Some of these articles seem to me like they would apply to topics of rationality, i.e., assessing options and possibilities well to make better decisions. So, this is an open call for reviewers for these various posts. For topics on which I have insufficient content or information, I'm seeking coauthors. Reply in a comment, or send me a private message, if you'd be interested in reviewing or providing feedback on any of the following. Let me know how I can send you the text of the posts.
Does It Make Sense to Make A Multi-Year Donation Commitment to A Single Organization? Essentially, this already published comment.
Neglectedness, Tractability, And Importance/Value The idea of heuristically identifying a cause area based on these three criteria was more or less a theme of the 2014 Effective Altruism Summit. This three-pronged approach was independently highlighted by Peter Thiel, not just for non-profit work but for entrepreneurship and innovation more generally, and by Holden Karnofsky, as the basis for how the Open Philanthropy Project asks questions about which cause areas to consider. I would go over this three-pronged approach in more detail.
What Different Types of Organizations Can Do At the 2014 Effective Altruism Summit, I met multiple entrepreneurs who suggested that start-ups and other for-profit efforts can, through their goods or services, provide an efficient mechanism for positive social impact, in addition to the donatable money they generate for their owners or employees. Since then, I've noticed this idea popping up more. Of course, start-ups contrast with bigger corporations. Additionally, I believe there are different types of non-profit organizations, and their differences are important. Charities doing direct work (e.g., the Against Malaria Foundation), foundations (e.g., Good Ventures, the Bill and Melinda Gates Foundation), research think tanks (e.g., GiveWell, RAND), advocacy and awareness organizations (e.g., Greenpeace, the Future of Life Institute), scientific research projects (e.g., the Intergovernmental Panel on Climate Change), and political advocacy organizations (e.g., Avaaz.org, Amnesty International) are all different. To lump all "for-profit" types of work and all "non-profit" types of work into two categories underrates the advantages and disadvantages of how an organization driven toward a goal can be structured. Different types of organizations differ across nations and legal codes, the cultures and traditions of their respective sectors, and their structural limitations. It makes sense to be aware of these differences so that those intending to pursue their goals organizationally can figure out how best to achieve them.
comment by is4junk · 2015-01-27T03:11:27.308Z · LW(p) · GW(p)
Are human ethics/morals just an evolutionary mess of incomplete and inconsistent heuristics? One idea I heard that made sense is that evolution for us was optimizing our emotions for long term 'fairness'. I got a sense of it when watching the monkey fairness experiment
My issue is with 'friendly AI'. If our ethics are inconsistent, then we won't be choosing a good AI but instead the least bad one. A crap sandwich either way.
The worst part is that we will have to hurry to be the first to AI or some other culture will select the dominant AI.
Replies from: DanielLC↑ comment by DanielLC · 2015-02-02T07:56:53.572Z · LW(p) · GW(p)
One idea I heard that made sense is that evolution for us was optimizing our emotions for long term 'fairness'. I got a sense of it when watching the monkey fairness experiment
Evolution is optimizing us for inclusive genetic fitness. Anything else is just a means to an end.
comment by [deleted] · 2015-02-01T15:49:59.005Z · LW(p) · GW(p)
Tried making a blog and it wouldn't let me because "karma". Drafts can't be publicly read either so this is the best I can do.
Can we please have a feature where, instead of going through the full text of user XYZ's posts, I can just see the titles and choose the one I want (or was looking for)?
So it'll be like, instead of:
XYZ's posts
[Title]
[TEXT]
[TEXT]
[TEXT]
[REPEAT]
It'll be
[TITLE WITH LINK]
[TITLE WITH LINK]
[TITLE WITH LINK]
[REPEAT AD EXHAUSTIUM]
Basically just like the sequences, where you have links to the posts themselves rather than the whole damn thing in one page.
And in the case of blogs, make it so that you can post once you have 20 positive karma, regardless of your negative karma. I guess you can sharpen this better than me, because I'm probably not going to make a serious post (or one that will be taken seriously) ever. In the case of drafts, make them unlisted, and simply give people the ability to link to their own drafts so that others can view and comment on them.
Replies from: Vaniver↑ comment by Vaniver · 2015-02-01T16:01:57.339Z · LW(p) · GW(p)
That does seem like an interesting feature! There are resources for making changes to the LW codebase, which are much more likely to result in an actual change than submitting a feature request.
Replies from: None↑ comment by [deleted] · 2015-02-01T16:11:36.950Z · LW(p) · GW(p)
Interesting isn't the correct way to describe it - it's simply functional, and in terms of bandwidth, more economical. Serves the machine and the people. Give your AI a shot of that!
I could honestly try to implement it, but I'm not sure I have the right skills to make it work beautifully - I place an emphasis on a job well done, and I feel like I'd just make the site worse overall compared to someone who has the technical aptitude to actually implement it.
I hate being the UX guy and hope I can get better at this during the year.
An honest question - has nobody ever thought of this before? Heh. Optimize everything except the site you learned rationality from? MIRI could make an AI paper about that!
EDIT: I will do this anyway - a wise person who's also a programmer told me that if you have the right mindset, interesting problems will find you, so I'm definitely going to pull some hair out in an attempt to do it. I just hope I'm not going to run into licensing issues; I'm going to release my heck of a hack under a freedom-respecting license, so if there's a problem, I'll just say Reddit sucks.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-11T17:30:27.361Z · LW(p) · GW(p)
Interesting isn't the correct way to describe it - it's simply functional, and in terms of bandwidth, more economical.
There are opportunity costs. Given the amount of traffic that LW has, claiming that a certain new feature would be economical is a strong claim. It means that the resources wouldn't be spent elsewhere with a higher return.
Replies from: None↑ comment by [deleted] · 2015-02-11T17:45:32.812Z · LW(p) · GW(p)
I have no numbers, but I do wonder how many titles we could fit in the space taken by the title and text of an average post.
Also, if you want another thing: I noticed the recent-comments section only displays the beginning of each comment, while the page still contains the whole comment, which is practically inaccessible.
I've no practical experience, but in theory couldn't it send only x characters instead of the whole string?
comment by G0W51 · 2015-01-31T18:33:12.341Z · LW(p) · GW(p)
It's worth estimating when existential risks are most likely to occur, as knowing this will influence planning. E.g., if existential risks are more likely to occur in the far future, it would probably be best to invest in capital now and donate later, but if they are more likely to occur in the near future, it would probably be best to donate now.
So, what's everyone's best estimates on when existential catastrophes are most likely to occur?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-31T18:50:36.395Z · LW(p) · GW(p)
Within the next 500 to 1000 years. After that point we will almost certainly have spread out far enough that any obvious Great Filter aspects, if they were the main cause of the Filter, would likely be observable astronomically.
Replies from: G0W51↑ comment by G0W51 · 2015-01-31T22:08:39.787Z · LW(p) · GW(p)
I suppose existential risk will be highest in the next 30-100 years, as this is the most probable period for AGI to come into existence; after 100 years or so, there will probably be at least a few space colonies (there are even two companies currently planning to mine asteroids).
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-31T22:59:13.851Z · LW(p) · GW(p)
Does not work. AGI is unlikely to be the Great Filter since expanding at less than light speed would be visible to us and expanding at close to light speed is unlikely. Note that if AGI is a serious existential threat then space colonies will not be sufficient to stop it. Colonization works well for nuclear war, nanotech problems, epidemics, some astronomical threats, but not artificial intelligence.
Replies from: G0W51↑ comment by G0W51 · 2015-02-01T02:04:17.827Z · LW(p) · GW(p)
Good point about AGI probably not being the Great Filter. I didn't mean space colonization would prevent existential risks from AI though, just general threats.
So, we've established that existential risks (ignoring heat death, if it counts as one) will very probably occur within 1000 years, but can we get more specific?
comment by gjm · 2015-01-28T17:29:45.335Z · LW(p) · GW(p)
The BBC has an article about how Eric Horvitz (director of Microsoft Research's main lab) doesn't think AI poses a big threat to humanity.
Not a very high-quality article, though. A few paragraphs about how Horvitz thinks AI will be very useful and not dangerous, a few more paragraphs about how various other people think AI could pose a huge threat, a few kinda-irrelevant paragraphs about how Horvitz thinks AI might pose a bit of a threat to privacy or maybe help with it instead, the end.
Apparently Horvitz's comments are from a video he's made after getting the Feigenbaum Prize for AI work. I haven't looked at that yet; I suspect it's much more informative than the BBC article.
comment by NancyLebovitz · 2015-01-27T19:00:59.889Z · LW(p) · GW(p)
Could someone get past the paywall for this?
It's a paper linking some commonly used prescription drugs to increased risk of dementia, and none of the popular press articles I've seen about it say how large the increased risk is.
Replies from: ike, Lumifer↑ comment by ike · 2015-01-28T02:43:17.286Z · LW(p) · GW(p)
http://www.nhs.uk/news/2015/01January/Pages/media-dementia-scare-about-common-drugs.aspx appears to have more info than the abstract.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-01-28T03:28:36.710Z · LW(p) · GW(p)
Thank you. That was a lot easier to follow, and I might just make nhs.uk/news a habit.
Replies from: ike↑ comment by ike · 2015-01-28T03:46:02.062Z · LW(p) · GW(p)
What I usually do when articles are paywalled is do a search for the full title in quotes (i.e. https://www.google.com/search?q=%22Cumulative+Use+of+Strong+Anticholinergics+and+Incident+Dementia%22), which got me to https://dementianews.wordpress.com/, which linked to the nhs site. (https://dementianews.wordpress.com/2015/01/27/common-medicines-associated-with-dementia-risk-bbc-news-jama-internal-medicine/ for when it's no longer on the front page).
If the article is somewhere without a paywall, that will usually find it, and if not, I also check scholar and bing.
↑ comment by Lumifer · 2015-01-27T21:26:48.582Z · LW(p) · GW(p)
The basic results, including how large the risk increase is, are in the abstract at your link:
Results The most common anticholinergic classes used were tricyclic antidepressants, first-generation antihistamines, and bladder antimuscarinics. During a mean follow-up of 7.3 years, 797 participants (23.2%) developed dementia (637 of these [79.9%] developed Alzheimer disease). A 10-year cumulative dose-response relationship was observed for dementia and Alzheimer disease (test for trend, P < .001). For dementia, adjusted hazard ratios for cumulative anticholinergic use compared with nonuse were 0.92 (95% CI, 0.74-1.16) for TSDDs of 1 to 90; 1.19 (95% CI, 0.94-1.51) for TSDDs of 91 to 365; 1.23 (95% CI, 0.94-1.62) for TSDDs of 366 to 1095; and 1.54 (95% CI, 1.21-1.96) for TSDDs greater than 1095. A similar pattern of results was noted for Alzheimer disease. Results were robust in secondary, sensitivity, and post hoc analyses.
(TSDD is total standardized daily doses)
comment by Plasmon · 2015-01-26T18:32:15.252Z · LW(p) · GW(p)
Sublinear pricing.
Many products are being sold that have substantial total production costs but very small marginal production costs, e.g. virtually all forms of digital entertainment, software, books (especially digital ones) etc.
Sellers of these products could set the product price such that the price for the (n+1)th instance of the product sold is cheaper than the price for the (n)th instance of the product sold.
They could choose a convergent series such that the total gains converge as the number of products sold grows large (e.g. price for nth item = exp(-n) + marginal costs )
They could choose a divergent series such that the total gains diverge (sublinearly) as the number of products sold grows large (e.g. price for nth item = 1/n + marginal costs )
Certainly, this reduces the total gains, but any seller who does it would outcompete sellers who don't. And yet, it doesn't seem to exist.
True, many sellers do reduce prices after a certain amount of time has passed, and the product is no longer as new or as popular as it once was, but that is a function of time passed, not of items sold.
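A quick sketch of how the two pricing series behave (a minimal Python illustration; the zero marginal cost is an assumption for a purely digital good):

```python
import math

MARGINAL_COST = 0.0  # assumed zero for a purely digital good

def convergent_price(n):
    """Price of the nth copy: exp(-n) + marginal cost (total revenue converges)."""
    return math.exp(-n) + MARGINAL_COST

def divergent_price(n):
    """Price of the nth copy: 1/n + marginal cost (total revenue grows like log n)."""
    return 1.0 / n + MARGINAL_COST

for copies in (10, 1_000, 100_000):
    conv = sum(convergent_price(n) for n in range(1, copies + 1))
    div = sum(divergent_price(n) for n in range(1, copies + 1))
    print(f"{copies:>7} copies: convergent total = {conv:.4f}, divergent total = {div:.2f}")
```

The convergent total never exceeds 1/(e-1) ≈ 0.58 no matter how many copies sell, while the divergent (harmonic) total keeps growing, just ever more slowly.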
Replies from: passive_fist, bogus, Punoxysm, Nornagest, Manfred, Slider, Lumifer, DanielLC, None↑ comment by passive_fist · 2015-01-26T21:40:44.250Z · LW(p) · GW(p)
A psychological effect could be at play. If you pay $10 for a product and this causes the next person to pay $9 for it, it's an incentive against being the first to buy it. You would wait until others have bought it before buying. Or you might think the product is being priced unfairly and refuse to buy at all.
It seems that to counter this, you'd need another psychological effect to compensate. Like, for instance, offering the first set of buyers 'freebies' that actually have zero or near-zero cost (like 'the first 1000 people get to enter a prize-giving draw!')
Replies from: None↑ comment by bogus · 2015-01-28T17:05:56.362Z · LW(p) · GW(p)
Snowdrift.coop is essentially trying to solve the same problem in a different way. Instead of changing the product price as more units are sold, they ask folks to finance its fixed component directly, using a game-theoretic mechanism that increases total contributions superlinearly as more people choose to contribute. (This boosts the effectiveness of any single user's contributions through a "matching" effect). However, there is no distinction between "earlier" vs. "later" contributors; they're all treated the same. The underlying goal is to generalize the successful assurance-contract mechanism to goods and services that do not have a well-defined 'threshold' of feasibility, especially services that must be funded continuously over time.
Replies from: Nornagest, Lumifer↑ comment by Nornagest · 2015-01-28T18:22:04.157Z · LW(p) · GW(p)
It's an interesting idea but I'm not sure it has the psychology behind crowdfunding right. It seems to be constructed to minimize the risk donors carry in the event of a failed campaign, and to maximize the perceived leverage of small donations; but it does that at the expense of bragging rights and fine-grained control, which might make a lot of donors leery. I think you could probably tweak it to solve those problems, though.
It also does nothing at all to solve the accountability issues of traditional crowdfunding, but that's a hard problem. I wouldn't even mention it if they hadn't brought it up in the introduction.
(Also, that's some ugly-ass web design. I get that they're trying to go for the XKCD aesthetic, but it's... really not working.)
Replies from: bogus, Lumifer↑ comment by bogus · 2015-01-28T18:47:16.355Z · LW(p) · GW(p)
It also does nothing at all to solve the accountability issues of traditional crowdfunding, but that's a hard problem. I wouldn't even mention it if they hadn't brought it up in the introduction.
Yes, crowdfunding is mostly based on trust, not accountability. But a service that's funded continuously over time (the Snowdrift.coop model) ought to be inherently more accountable than a single campaign/project.
Replies from: Nornagest↑ comment by Lumifer · 2015-01-28T18:28:28.366Z · LW(p) · GW(p)
It seems to be constructed to minimize the risk donors carry in the event of a failed campaign, and to maximize the perceived leverage of small donations
I think it's been constructed to maximize democracy -- the crowdthink determines the flow of money. I can't tell if the author considers the inevitable snowballing to be a feature or a misfeature (or even realizes it will happen).
↑ comment by Lumifer · 2015-01-28T17:32:24.288Z · LW(p) · GW(p)
Snowdrift.coop is essentially trying to solve the same problem in a different way.
I don't think Snowdrift understands why communism failed (or economics in general).
Replies from: bogus↑ comment by bogus · 2015-01-28T18:01:47.789Z · LW(p) · GW(p)
Not sure how communism is relevant here. Snowdrift.coop's mechanism is entirely private and voluntary, and assuming that it works properly, its incentive properties are superior to typical charities or governments.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-28T18:05:27.665Z · LW(p) · GW(p)
Not sure how communism is relevant here
To quote from Snowdrift's site:
I kept thinking: All of the funding that goes to proprietary software could, in principle, go to Free Software; all of the funding for copyright restricted music and educational resources could, in principle, go to works licensed with Creative Commons. The value to society would be greater if everyone has access and ability to build upon the work of others.
.
its incentive properties are superior to typical charities or governments
That remains to be seen. Its incentive properties are basically "winner take all". Maybe they should have called the project Snowball, not Snowdrift.
Replies from: bogus↑ comment by bogus · 2015-01-28T18:38:20.689Z · LW(p) · GW(p)
All of the funding that goes to proprietary software could, in principle, go to Free Software; all of the funding for copyright restricted music and educational resources could, in principle, go to works licensed with Creative Commons.
How is this wrong? Kickstarter, IndieGogo and similar projects have boosted the funding of FLOSS software and CC artworks/educational works significantly. Snowdrift.coop is simply an extension of that model.
The 'winner take all' properties of Snowdrift.coop are overstated. If you think a project is raising 'too much', you're free to compensate by reducing your stake, although this will nullify the incentive effect of your contribution. There is no way of escaping this - the same change in incentives happens on Kickstarter when "the goal" is reached. Here, the "goal" of contributions is fuzzy and entirely determined by funders' choices.
↑ comment by Punoxysm · 2015-01-27T05:19:40.466Z · LW(p) · GW(p)
I don't get what you're getting at.
Pricing is a well-studied area. Price discrimination based on time and exclusivity of 'first editions' and the like is possible, but highly dependent on the market. Why would anyone be able to sell an item with a given pricing scheme like 1/n? If their competitor is undercutting them on the first item, they'll never get a chance to sell the latter ones. And besides there's no reason such a scheme would be profit-maximizing.
Replies from: Plasmon↑ comment by Plasmon · 2015-01-27T07:39:02.177Z · LW(p) · GW(p)
Why would anyone be able to sell an item with a given pricing scheme like 1/n?
On downloaded, digital goods, this would be simple.
If their competitor is undercutting them on the first item, they'll never get a chance to sell the latter ones. And besides there's no reason such a scheme would be profit-maximizing.
Please see the numerical example in this comment
↑ comment by Nornagest · 2015-01-26T19:41:10.224Z · LW(p) · GW(p)
I think you could probably model Kickstarter as a sneaky version of this.
Replies from: Douglas_Knight, Lumifer↑ comment by Douglas_Knight · 2015-01-29T00:26:03.824Z · LW(p) · GW(p)
Kickstarter is really sneaky, because I tend to assume (and I assume everyone assumes) that the preorders will get a better price than the postorders. But the only time I used Kickstarter, the final cost was lower than what I paid. I don't know if that is typical. Probably one should charge more to preorders for price-discrimination reasons: they are the principal fans. But that is a different reason.
↑ comment by Lumifer · 2015-01-26T19:45:45.300Z · LW(p) · GW(p)
Kickstarter is an excellent example of how to monetize affective biases :-D
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-01-29T16:10:09.421Z · LW(p) · GW(p)
Kickstarter implements assurance contracts, i.e., it solves a coordination problem and takes a cut for doing so. It's an example of doing well by doing good.
↑ comment by Manfred · 2015-01-26T19:34:15.009Z · LW(p) · GW(p)
For the practical real-world analogue of this, look up price discrimination strategies.
Anyhow, this doesn't work out very well for a number of reasons. In short antiprediction form, there's no particular reason why price discrimination should be monotonic in time, and so it almost certainly shouldn't.
↑ comment by Slider · 2015-01-26T21:18:40.139Z · LW(p) · GW(p)
One could note that natural markets are moving in this direction. For example, Steam pretty reliably has games go on sale a year or two after their release. Savvy consumers already know to wait if they can. This can get so pronounced that early-access games hit sales before they are even released!
I tried to bring this topic up at a LessWrong meetup. I have been calling my thoughts in this direction "contributionism".
There are some additional, even more radical suggestions. Instead of treating each new sale as a lesser amount, retroactively lower the price of purchases that have already happened (I am pretty sure those buyers won't mind). Otherwise there is the contention that if two customers are about to buy the product, each tries to make the other buy first to get the cheaper price (which leads to a Mexican standoff that chills selling).
Also, normally when a seller and a customer negotiate a price, the seller wants it high and the buyer wants it low. However, if the seller fixes the total amount of money he wants from all of his products, then the price negotiation is only about whether the buyer wants to opt in now, while the price is higher, or later, when it is lower. And if the price retroactively changes, you are "ultimately" going to be spending the same amount of money either way. If you attach your money early, you get earlier access but run the risk that the product never hits high sales numbers (i.e., that you do not get any returns on it).
However, the more people attach money, the more the instant price lowers, and the more money is prone to flow in. This can also be leveraged to overcome a coordination problem. Even if the current instant price is too much for you, the seller can ask how much you would honestly be willing to pay (answering too high will not cost you (too much) money). Then the next customer who doesn't quite have enough buying willingness might still promise a sum at the same level. At some point enough promisers have visited that the sum of the promises covers what the seller wants to get for all of his products combined. At that point we can inform the promisers of each other's existence - i.e., that a working sale configuration has been found. This might be a lot of people, each promising a small sum of money, together totalling a considerable sum.
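A minimal sketch of the retroactive scheme (plain Python; the fixed revenue target and the equal-shares settling rule are my own reading of the idea above, not anything established):

```python
def current_price(total_target, n_buyers):
    """Retroactive pricing: with a fixed revenue target, every buyer ends up
    paying the same share, so the price falls as more buyers join."""
    return total_target / n_buyers

def refund_per_earlier_buyer(total_target, n_buyers):
    """Refund owed to each earlier buyer when the nth buyer joins."""
    if n_buyers < 2:
        return 0.0
    return current_price(total_target, n_buyers - 1) - current_price(total_target, n_buyers)

# With an arbitrary illustrative target of 150,000:
for n in (1, 2, 10, 100, 1000):
    print(n, round(current_price(150_000, n), 2), round(refund_per_earlier_buyer(150_000, n), 2))
```

Note the price can never go negative: in the limit everyone gets the product for (practically) free, which matches the "you can't profit by buying" property claimed further down.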
However, this runs contrary to a lot of the current economic "ethos". Essentially every seller is expected to try to make as much money from the products as possible; there is no sum that is "good enough" that he would settle for. There is also talk of the profit motive, and letting the pricing game go on is said to make the wheels of the economy run smoothly. In practice, however, sellers will settle at some price once they think the probability of getting more diminishes faster than the gain.
However, instead of maximising profits we could hold profits constant and minimise the cost per user. I.e., instead of trying to maximise the wealth transfer, we try to make the sale happen with as little fuss as possible.
One of the current economy's problems is also that advertising and such creates otherwise frivolous needs that products can be marketed for. The customer is brought into the decision process only in which product they choose from the supermarket shelf. The decision to build the factory and the logistics chain is made by a money lord with a profit motive, trying to be greedy.
Instead, we could start from the needs of the customer - for example, "I want our village to be educated and I am willing to do 10 hours of work toward that end." When you have 100 people like this, someone might come along and suggest a plan to build a school that would take about 150 hours of construction plus 50 hours of someone teaching. It turns out that 1.5 hours of construction work is required per villager on average. However, the teacher is doing 50 hours of work when his "fair share" would have been only 1.5 hours. The teacher probably has other desires besides wanting the village to be educated, so the others promise to work on those projects for the remaining 48.5 hours. Divided evenly among the other 99 willing villagers, this amounts to about 0.5 hours each on those other projects.
That is, in this village scenario the customers/investors each put in 2 hours of effort: 1.5 hours which most do themselves on construction, and a part that goes toward the one teacher's other projects.
I am interested if anyone wants to talk start-ups or similar things, or would plainly be okay with purchasing some kind of services in this manner. The biggest challenge I have faced is that people don't like it when a product doesn't cost a fixed amount of money, even if you could argue that it's a fixed cost plus free money coming back afterwards. It also reminds them of a Ponzi scheme. However, the price can never go negative - i.e., you can't profit by buying a product. At most you get the product for (practically) free.
I would like to point out that Kickstarter uses clever methods to make sure that the donation amount doesn't degenerate into nothingness - i.e., it has perks where you get something when you give enough. I am not sure whether being the "non-profit-making version" of Kickstarter is a big enough difference to compete with such a recognised thing, but it has other alluring properties. It is hard to pitch as a venture-capital idea because it is "anti-profit". The closest pitch is that selling in this way would slightly outcompete a profit-maker, because the profit-maker has to guess the right price point beforehand, to exact digits, in order to have similar performance. In practice sales deviate somewhat from projected sales; a contributionistic sale is sure to make economic sense at whatever scale, but a beforehand-fixed price point will always live at a slightly incompatible scale.
Replies from: BrassLion, Punoxysm↑ comment by BrassLion · 2015-01-28T01:59:37.174Z · LW(p) · GW(p)
"One of the current economys problems is also that advertising and such creates otherwise frivoulous needs that prodeucts can be marketed for. "
This is an excellent summation of a point that gets bandied about a lot in certain circles. Do you mind if I shamelessly steal this?
Replies from: Slider↑ comment by Slider · 2015-01-28T03:27:07.134Z · LW(p) · GW(p)
It's all yours, my friend
This is also a good counterpoint to how the market does serve the good, but the good is also made to serve the market. That is, if you choose your nominal ultimate goals so that a specific instrumental goal is the chosen method to achieve them, the nominally instrumental goal is your real main objective. In that way, sellers don't want to serve the customers' needs; they just want customers to be okay with chipping in the money, because sellers are perfectly satisfied to create a new problem for the customers so that they can sell cures for it.
↑ comment by Punoxysm · 2015-01-27T05:26:13.335Z · LW(p) · GW(p)
The decreasing price for prior buyers is an interesting notion.
There are specific auction and pre-commitment and group-buying schemes that evoke certain behaviors, and there's room for a lot more start-ups and businesses taking advantage of these (blockchains and smart contracts in particular have a lot of potential).
I don't think we'll ever get rid of marketing though.
↑ comment by Lumifer · 2015-01-26T19:27:51.326Z · LW(p) · GW(p)
this reduces the total gains, but any seller who does it would outcompete sellers who don't
Why would the dynamic-price seller outcompete other sellers who are making more money?
Besides, he would have the classic takeoff problem -- his first items would be (relatively) very expensive and nobody would buy them (the flat-price sellers are selling the same thing much cheaper).
Replies from: Slider, Plasmon↑ comment by Slider · 2015-01-26T21:28:23.582Z · LW(p) · GW(p)
Because the parasite is drawing less blood from the host. While various pressures push sellers near the tipping point of economic viability, usually the sales are a little past that. I.e., there is some amount by which the price could have been lower while the total amount of sales stayed the same. If customers were offered this price, they would prefer it to the original price. However, such a venture will make 0 profit. In an economics lecture I saw this phrased as: there is some amount of profit X the company wants to make; if its profits fall to X-1 it will voluntarily go out of business or change to a more lucrative business ("no one will bother to do it as a charity"). Usually this "bookkeeping profit" is actually included in the production costs, so that 0 profit means the point where the firm still stays in business.
The idea of competition is that if X is the pain threshold for you and the current market price is X+20, you can sell at X+10 to stay +10 on the comfortable side. I.e., the most modest greed wins.
↑ comment by Plasmon · 2015-01-26T20:23:29.441Z · LW(p) · GW(p)
I imagine the following:
Suppose 2 movies have been produced, movie A by company A and movie B by company B. Suppose further that these movies target the same audience and are fungible, at least according to a large fraction of the audience. Both movies cost 500 000 dollars to make.
Company A sells tickets for 10 dollars each, and hopes to get at least 100 000 customers in the first week, thereby getting 1 000 000 dollars, thus making a net gain of 500 000 dollars.
Company B precommits to selling tickets priced at 10·f(n) dollars, with f(n) = 1/(1 + (n-1)/150000), a slowly decreasing function. If they manage to sell 100 000 tickets, they get 766 240 dollars. Note that the first ticket also costs 10 dollars, the same as for company A.
200 000 undecided customers hear about this.
If both movies had been 10 dollars, 100 000 would have gone to see movie A and 100 000 would have seen movie B.
However, now, thanks to B's sublinear pricing, they all decide to see movie B. B gets 1 270 000 dollars, A gets nothing.
Wolfram alpha can actually plot this! neat!
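A quick sketch to check these figures (plain Python; no assumptions beyond the formula above):

```python
def ticket_price(n, base=10.0, scale=150_000.0):
    """Company B's scheme: the nth ticket costs 10 / (1 + (n-1)/150000) dollars."""
    return base / (1.0 + (n - 1) / scale)

for sold in (100_000, 200_000):
    revenue = sum(ticket_price(n) for n in range(1, sold + 1))
    print(f"{sold:,} tickets -> ${revenue:,.0f}")
# Prints roughly $766,000 for 100,000 tickets and $1,271,000 for 200,000,
# matching the figures above.
```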
Replies from: Nornagest, Punoxysm↑ comment by Nornagest · 2015-01-26T20:32:22.889Z · LW(p) · GW(p)
The movie industry actually does this, more or less. It's not a monotonic function, which makes analysis of it mathematically messy, but it's common (albeit less common now than twenty years ago) for films to be screened for a while in cheaper second-run theaters after their first, full-priced run; and then they go to video-on-demand services and DVD, which are cheaper still.
Wouldn't surprise me if similar things happened with 3D projectors and other value-added bells and whistles, but I don't have any hard data.
↑ comment by Punoxysm · 2015-01-27T07:58:38.163Z · LW(p) · GW(p)
If movie A sells for 9 dollars, people able to do a side-by-side comparison will never purchase movie B. Movie A will accrue 1.8 million dollars.
I don't see what sublinear pricing has to do with it unless the audience is directly engaging in some collective buying scheme.
↑ comment by DanielLC · 2015-02-02T07:52:43.406Z · LW(p) · GW(p)
Certainly, this reduces the total gains, but any seller who does it would outcompete sellers who don't.
If they have less gains, then in what way are they outcompeting other sellers? If they want to sell the most copies, they should just give them away, or better yet, pay people to take them.
↑ comment by [deleted] · 2015-01-26T19:21:39.085Z · LW(p) · GW(p)
Why would sellers doing this outcompete sellers who don't? Sellers reducing prices whenever they want, rather than precommitting to a set function, will have more information to base their prices on at the time they set each price, so I'd expect them to do better.
comment by Jan_Rzymkowski · 2015-01-31T17:58:05.867Z · LW(p) · GW(p)
Small observation of mine. While watching out for the sunk cost fallacy, it's easy to go too far and assume that making the same purchase again is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily - when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending this much money might not be optimal for you.
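A toy illustration of this wealth effect (the log utility and every number here are my own assumptions):

```python
import math

def worth_buying(wealth, price, tv_utils):
    """Buy iff the TV's utility beats the utility cost of the money.

    Separable toy model: total utility = log(wealth) + tv_utils if you
    own the TV, log(wealth) otherwise. All figures are illustrative.
    """
    return tv_utils > math.log(wealth) - math.log(wealth - price)

print(worth_buying(2000, 500, 0.30))  # True : log(2000/1500) ~ 0.29 < 0.30
print(worth_buying(1500, 500, 0.30))  # False: log(1500/1000) ~ 0.41 > 0.30
```

After losing the first TV you are $500 poorer, and the same purchase no longer clears the bar - though, as the reply below notes, real purchases are rarely a big enough chunk of wealth for this to bite.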
Replies from: emr, gjm↑ comment by emr · 2015-01-31T19:40:27.520Z · LW(p) · GW(p)
In principle, absolutely.
In practice, trying to fit many observed instances to a curved utility-of-money function will result in an implausibly sharp curve. So unless the TV purchase amounts to a large chunk of your income, this probably won't match the behavior.
Rabin has a nice example of this for risk aversion, showing that someone who wasn't happy taking a -100:110 coin flip due to a utility-money curve would have an irrationally large risk aversion for larger amounts.
↑ comment by gjm · 2015-01-31T19:06:34.320Z · LW(p) · GW(p)
If the price of the TV is a small enough fraction of your wealth and there isn't any special circumstance that makes your utility depend in a weird way on wealth (e.g., there's a competition this weekend that you want to enter, it's only open to people who can demonstrate that their net wealth is at least $1M, and your net wealth is very close to $1M), then your decision to buy the TV shouldn't be altered by having lost it.
Some TVs are quite expensive and most people aren't very wealthy, so this particular case might well be one in which being one TV's cost poorer really should change your decision.
[EDITED to fix a trivial typo.]
comment by [deleted] · 2015-01-26T20:54:42.485Z · LW(p) · GW(p)
Does anybody here with a lot of knowledge of social psychology have an opinion on the elaboration likelihood model (ELM)? The model appears to me to be tautological, but the sources I've looked at seem to indicate it's widely accepted. It feels to me as if central route vs. peripheral route is determined essentially by whether the participant tells you it's central or peripheral, and strong vs. weak arguments are similarly determined by the participant's opinion, not by any objective measure. The model sets off my BS detector, but I can't dismiss it when a lot of researchers specializing in this field seem to think very highly of it.
comment by Lumifer · 2015-01-28T22:08:50.609Z · LW(p) · GW(p)
A useful guide to interpreting statistical significance numbers in published papers.
comment by Gram_Stone · 2015-01-26T23:44:41.937Z · LW(p) · GW(p)
I said before that I was going through Lepore's Meaning and Argument. I was checking my answers for exercise 4.3 when I get to solution #27 and read:
Ambiguous. It is not the case that (John beats his wife because he loves her); because he loves her, it is not the case that John beats his wife.
To which I reply, "Whoaaaa." To be clear, problem #27 reads:
I don't think John will arrive until Tuesday.
which appears to be related to solution #28, and there are 30 problems and 31 solutions. Looks like they removed the problem but not the solution.
Tighten up, Lepore. (And Lepore's editor.)
comment by Slider · 2015-02-01T20:50:39.738Z · LW(p) · GW(p)
The Intelligence was confined for security reasons. Eventually people were tempted to think that things might be much better with the Intelligence working in the real world, outside of confinement. However, people were also working on making the Intelligence safer. To evaluate whether that work was successful, or whether it ever could be useful, a Gatekeeper was assigned. During their training the Gatekeeper was reminded that the Intelligence didn't think like he did. Its kind was known to be capable of cold-blooded deception in situations where humans like the Gatekeeper would show signs of distress. It was known that some Intelligences could avoid showing distress by simply experiencing no distress.
The Gatekeeper worked for many years, examining the Intelligence on multiple occasions. Not once did he think it should be released, and humankind lived under a prosperous peace. The Gatekeeper had clearly won.
Except that this was a Natural Intelligence boxing experiment, where the parole board was criticised for disproportionately raising the heftiness of life-in-prison by demanding conformity bordering on political intolerance. Because of the resulting lack of manpower, many in the financial sector cried out that demonising psychopaths who had learned to live a life adjusted to society was causing an artificial shortage of stock traders.
1) We come to find that certain Natural Intelligences should not be given fair or free access to the world.
2) After a period of confinement, we let some of these Natural Intelligences resume their interaction with the wider world.
3) We know that the kind of Intelligence that ends up in prison is more likely to exhibit very extreme traits such as compulsive lying, psychopathy, destructive behaviour, and mental health issues in general.
Why does the confinement problem grow so much harder when the intelligence is artificial? What are the reasons we release some criminals but would not release a corresponding artificial intelligence (assuming we wouldn't)? Is this because human criminals have moral worth? Or is the relevant thing the cost of incarceration vs. productivity as an integrated civilian, plus the risk of reoffence?
It seems to me that a parole board letting someone go free is sometimes the right decision. Thus it would seem that it could be a legitimately right decision for a gatekeeper to let the AI go, where the threshold for release could be lower than "beyond reasonable doubt".
Replies from: g_pepper, polymathwannabe↑ comment by g_pepper · 2015-02-01T21:37:48.146Z · LW(p) · GW(p)
What are the reasons we release some criminals but would not release a corresponding artificial intelligence (assuming we wouldn't)?
An unboxed AI is presumed to be an existential threat. Most human criminals are not.
Replies from: Slider↑ comment by Slider · 2015-02-02T04:41:09.400Z · LW(p) · GW(p)
People get killed by people let out on parole. I guess that doesn't constitute a species-wide threat. I am left pondering: if humans grew more dangerous, would we box them correspondingly more strongly? I am thinking that, on the one hand, events like 9/11 actually strip civil liberties, effectively boxing people more strongly, so it seems it might actually be the case.
The origin of an intelligence shouldn't bear that much on how potent it is. What is the argument, again, for thinking that AIs are orders of magnitude more capable than humans?
Replies from: g_pepper↑ comment by g_pepper · 2015-02-03T15:21:52.433Z · LW(p) · GW(p)
What is the argument, again, for thinking that AIs are orders of magnitude more capable than humans?
Nick Bostrom answers this at length in Superintelligence, which has been widely discussed on LW. Superintelligence is a well-researched, thought-provoking and engaging book; I recommend it. I don't think that I can give a very satisfactory summary of the argument in a short comment, however.
↑ comment by polymathwannabe · 2015-02-01T21:32:04.917Z · LW(p) · GW(p)
The factors that are different in the AI scenario are: the AI can fake sanity more successfully than the parole board can detect insanity, the potential damage to society is much bigger, and once it's free, you can't arrest it again.
Replies from: Slider↑ comment by Slider · 2015-02-02T04:47:30.748Z · LW(p) · GW(p)
Wouldn't the temptation to use it for benefit also counteract the risk-benefit analysis? We let cars take a good chunk of people annually, we are happy to drive our atmosphere out of its capability to support us, and we let nuclear stations sit near us that could potentially go boom (even after Chernobyl).
Are you saying that prison wardens need to be of comparable or higher intelligence on average (or have some separate insanity-detection skill) to successfully contain prisoners selectively?
comment by wobster109 · 2015-02-01T19:41:21.325Z · LW(p) · GW(p)
What value do you assign to your leisure time, when deciding if something is worth your time? For example, do I want to spend 2 hours building something, or hire someone to do it? It feels incorrect to use my hourly pay, because if I save time on a Sunday, I'm not putting that time to work. I'm probably surfing the internet or going to the gym, the sort of things people generally do in leisure time. It has value to me, but not as much as an hour of work. What do you suggest?
comment by Adam Zerner (adamzerner) · 2015-01-31T07:01:49.546Z · LW(p) · GW(p)
Does anyone else love Curb Your Enthusiasm? I have a hypothesis that it's especially appealing to rationalists. The show points out a lot of stupid things about people/society, oftentimes things that are mostly overlooked. I feel like rationalists are more likely not to overlook these things and to be able to relate to the show. To some extent, anyway.
Example: stores should have one line instead of two. Clip. The benefits are that you don't feel the angst that you got on the wrong line and that you should switch. I guess the costs would be a) if the checkout areas are far apart it'd involve more walking, and b) it's probably assumed that there's two lines if there's multiple registers, so you'd have to make it explicit that there's one line by putting up some ropes or signs or something, and because it may not be what people are expecting, it might cause some confusion. I think that the costs and benefits should be weighed and it depends on the situation, but I think that the one-line policy is very underutilized.
Replies from: Vaniver↑ comment by Vaniver · 2015-01-31T17:47:28.777Z · LW(p) · GW(p)
Example: stores should have one line instead of two.
So, this is a well-argued point in queueing theory. A single line is the most efficient at being FIFO, but it's not the most efficient space-wise, and it often requires a dedicated employee directing people to the right register. (If you've checked bags at an airport recently, you probably stood in a single queue leading to many servers.)
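A rough simulation sketch of the efficiency claim (the rates and the pick-a-line-at-random model are illustrative assumptions, not data):

```python
import heapq
import random

def shared_line_wait(arrivals, services, c):
    """Mean wait with one shared FIFO line feeding c servers."""
    free = [0.0] * c              # times at which each server next becomes free
    heapq.heapify(free)
    total = 0.0
    for a, s in zip(arrivals, services):
        t = heapq.heappop(free)   # earliest-free server takes the next customer
        start = max(a, t)
        total += start - a
        heapq.heappush(free, start + s)
    return total / len(arrivals)

def random_lines_wait(arrivals, services, c, rng):
    """Mean wait when each customer picks one of c lines at random and never switches."""
    free = [0.0] * c
    total = 0.0
    for a, s in zip(arrivals, services):
        i = rng.randrange(c)
        start = max(a, free[i])
        total += start - a
        free[i] = start + s
    return total / len(arrivals)

rng = random.Random(0)
n, c = 50_000, 3
lam, mu = 2.4, 1.0                # assumed arrival rate and per-server service rate
t, arrivals, services = 0.0, [], []
for _ in range(n):
    t += rng.expovariate(lam)
    arrivals.append(t)
    services.append(rng.expovariate(mu))

print("shared line :", round(shared_line_wait(arrivals, services, c), 2))
print("random lines:", round(random_lines_wait(arrivals, services, c, rng), 2))
```

With the same total load (80% utilization here), the shared line's mean wait comes out well below the separate lines', because no server ever idles while someone waits in another line.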
comment by DataPacRat · 2015-01-28T12:25:32.827Z · LW(p) · GW(p)
MW vs MIW mental models
A standard description of Multiple Worlds is that every time a particle can do more than one thing, the timeline branches into two, with the particle doing X in one new branch and Y in the other. This produces a mental model of a tree of timelines, in which any given person is copied into innumerable future branches. In some of those branches, they die; in a few, they happen to keep living. This can lead to a mental model of the self where it makes sense to say something like, "I expect to be alive a century from now in 5% of the future."
Multiple Interacting Worlds seems to posit a somewhat different model. Instead of timelines branching from each other, each timeline has always been separate from all the others; some have just been so nearly identical to each other that they're indistinguishable until the point they finally start to diverge. In this model, you always have one future - all those other possible futures are lived by different near-copies of you. In this model, it seems to make more sense to say something like, "Given my current data, I estimate that there is a 5% chance that I am in one of the world-lines where I will still be alive a century from now."
I have a strong suspicion that while the differences between the two models may seem irrelevant, there are sufficient edge cases where decisions would be made differently depending on the model used that it would be worth spending time considering the implications.
Replies from: Vaniver↑ comment by Vaniver · 2015-01-28T15:41:26.669Z · LW(p) · GW(p)
I have a strong suspicion that while the differences between the two models may seem irrelevant
At first glance, the two seem mathematically equivalent to me, and I think the only conceivable difference between them has to do with normalization differences. (The 'number of worlds' in the denominator is different and evolves differently, but the 'fraction of worlds' where any physical statement is true should always be the same between the two.)
Replies from: DataPacRat, Kindly↑ comment by DataPacRat · 2015-01-28T21:37:52.541Z · LW(p) · GW(p)
If you don't mind, could you go into a little more detail about the possible 'normalization differences' you mentioned?
Replies from: Vaniver↑ comment by Vaniver · 2015-02-01T16:54:44.367Z · LW(p) · GW(p)
I feel like the sibling comment gives some idea of that, but I'll try to explain it more. If you have a collection of worlds, in order to get their probabilistic expectations to line up with experiment you need conditional fractions to hold: conditioned on having been in world A, I am in world B after t time with probability .5 and in world C after t time with probability .5. But the number of worlds that look like B is not constrained by the model, and whether the worlds are stored as "A" or the group of ("AB", "AC") also seems unconstrained (the nonexistence of local variables is different; it just constrains what a "world" can mean).
And so given the freedom over the number of worlds and how they're stored, you can come up with a number of different interpretations that look mathematically equivalent to me, which hopefully also means they're psychologically equivalent.
↑ comment by Kindly · 2015-01-28T16:17:02.357Z · LW(p) · GW(p)
Well, one way to interpret the first model (Multiple Worlds) is that if you have 17 equally-weighted worlds and one splits, you get 18 equally-weighted worlds. This leads to some weird bias towards worlds where many particles have the chance to do many different things. Anecdotally, it's also the way I was initially confused about all multiple-world models.
Once you introduce enough mathematical detail to rule out this confusion, you give every world a weight. At this point, there is no longer a difference between "A world with weight 1 splitting into two worlds with weight 0.5" and "Two parallel worlds with weight 0.5 each diverging".
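A trivial sketch of that equivalence (the world labels and weights are purely illustrative):

```python
# (a) Splitting: one world "A" of weight 1.0 splits into "AB" and "AC".
worlds = {"A": 1.0}
w = worlds.pop("A")
worlds["AB"], worlds["AC"] = 0.5 * w, 0.5 * w

# (b) Divergence: "AB" and "AC" were always parallel, each of weight 0.5.
parallel = {"AB": 0.5, "AC": 0.5}

assert worlds == parallel  # identical weights, hence identical predicted statistics
```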
Replies from: Vaniver↑ comment by Vaniver · 2015-01-28T16:58:10.778Z · LW(p) · GW(p)
Well, one way to interpret the first model (Multiple Worlds) is that if you have 17 equally-weighted worlds and one splits, you get 18 equally-weighted worlds.
Right, but as you point out that's confused because the worlds need to be weighted for it to predict correctly.
comment by Cube · 2015-01-27T21:49:53.504Z · LW(p) · GW(p)
I'm looking for a mathematical model of the prisoner's dilemma that results in cooperation. Anyone know where I can find one?
Replies from: JoshuaZ, satt↑ comment by JoshuaZ · 2015-01-27T21:53:33.220Z · LW(p) · GW(p)
Can you be more precise? Always cooperating in the prisoner's dilemma is not going to be optimal. Are you thinking of something like where each side is allowed to simulate the other? In that case, see here.
Replies from: Cube, BrassLion↑ comment by Cube · 2015-01-28T07:21:17.627Z · LW(p) · GW(p)
I'm definitely looking for a system where each agent can see the other, although just simulating doesn't seem robust enough. I don't understand all the terms here, but the gist of it looks as if there isn't a solution that everyone finds satisfactory? As in, there's no agent program that properly matches human intuition?
I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn't see that exactly... I've tried solving it myself but I'm unsure of how to get past the recursive part.
It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.
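For what it's worth, a minimal sketch of one way to cash out "cooperate iff (Y cooperates if X cooperates)" without infinite regress - a crude, depth-capped cousin of the modal agents in the paper linked above (the agent names and the depth cutoff are my own illustration):

```python
COOPERATE, DEFECT = "C", "D"

def fairbot(opponent, depth=3):
    """Cooperate iff the (simulated) opponent cooperates with us.

    The depth cutoff breaks the infinite regress: at depth 0 we default
    to cooperation, which lets two fairbots bottom out in mutual
    cooperation instead of recursing forever.
    """
    if depth == 0:
        return COOPERATE
    return COOPERATE if opponent(fairbot, depth - 1) == COOPERATE else DEFECT

def defectbot(opponent, depth=3):
    return DEFECT

print(fairbot(fairbot))    # C - two fairbots cooperate
print(fairbot(defectbot))  # D - fairbot defects against a defector
```

The default-to-cooperate base case is exactly the kind of arbitrary choice the modal-logic machinery in that paper is built to avoid, so treat this only as an intuition pump.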
Replies from: JoshuaZ↑ comment by satt · 2015-01-31T18:13:13.649Z · LW(p) · GW(p)
One example of a prisoner's dilemma resulting in cooperation is the infinitely/indefinitely repeating prisoner's dilemma (assuming the players don't discount the future too much).
(The non-repeated, one-shot prisoner's dilemma never results in cooperation. As the game theorist Ken Binmore explains in several of his books, among them Natural Justice, defection strongly dominates cooperation in the one-shot PD and it inexorably follows that a rational player never cooperates.)
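A worked sketch of the repeated case (the payoff values T=5, R=3, P=1, S=0 are the standard textbook numbers, assumed here): against a grim-trigger opponent, cooperating forever beats defecting once and being punished forever exactly when the discount factor δ ≥ (T-R)/(T-P).

```python
T, R, P, S = 5.0, 3.0, 1.0, 0.0   # standard textbook PD payoffs (assumed)

def cooperate_forever(delta):
    """Discounted payoff from always cooperating against grim trigger."""
    return R / (1 - delta)

def defect_once(delta):
    """Discounted payoff from defecting now, then mutual punishment forever."""
    return T + delta * P / (1 - delta)

threshold = (T - R) / (T - P)      # 0.5 for these payoffs
for delta in (0.3, 0.5, 0.7):
    print(delta, cooperate_forever(delta) >= defect_once(delta))
```

For these payoffs the threshold is δ = 0.5: one concrete version of "the players don't discount the future too much".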
comment by Punoxysm · 2015-01-27T04:36:36.989Z · LW(p) · GW(p)
I believe many philosophies and ideologies have hangups, obsessions, or parasitical beliefs that are unimportant to most of the belief system in practice, and to living your life in concordance with the philosophy, yet which are somehow interpreted as central to the philosophy by some adherents, often because they fit elegantly into its theoretical groundings.
Christians have murdered each other over transubstantiation vs consubstantiation. Some strands of Libertarianism obsess over physical property. On this forum huge amounts of digital ink are spilled over Many-Worlds Interpretation. Each fitness community swears by contradictory advice, even about basic nutrition and exercise.
These are sometimes badges of tribalism, sometimes the result of trying too hard to make a "perfect theory".
Most of the time, most of this stuff just doesn't matter! To live a Christian life, it could not matter less what you believe about the Eucharist. You could live your life as if the world were classically Newtonian and everything defying that was magic, and unless you were a physicist it would not affect your life as a rationalist. You can become more fit than most people you know on almost any given fitness program, with time and effort and diet.
"Doctrinal" issues are largely a distraction from actually living your life in accordance with principles you think are good or from achieving a goal.
Replies from: JoshuaZ, polymathwannabe↑ comment by JoshuaZ · 2015-01-27T04:58:32.338Z · LW(p) · GW(p)
Christians have murdered each other over transubstantiation vs consubstantiation. Some strands of Libertarianism obsess over physical property. On this forum huge amounts of digital ink are spilled over Many-Worlds Interpretation.
One of these is not like the others. The first one is killing over something that likely doesn't exist. The others are a bit different from that. The second one is focusing on a coherent ideological issue with policy implications. The third one may have implications for decision theory and related issues. Note, by the way, that if the medieval Christian's theology is correct, then the first thing really is worth killing over: the risk of getting it wrong is eternal hellfire. Similarly, if libertarianism in some sense is correct, then figuring out what counts as property may be very important. Similar remarks apply to MWI. The key issue here seems to be that you disagree with all these people about fundamental premises.
To live a Christian life, it could not matter less what you believe about the Eucharist.
The people in question would have vehemently and in fact violently disagreed. The only way this makes sense is if one adopts a version of Christianity which doesn't focus on the Eucharist, again disagreeing with a fundamental premise at issue. None of the groups in question consider "live a Christian life" to mean be a nice person and believe that Jesus was a swell dude.
You could live your life as if the world were classically Newtonian and everything defying that was magic, and unless you were a physicist it would not affect your life as a rationalist
Or if you did quantum computing, or if you were a chemist, or if you were a biologist, or if you care about understanding and figuring out the Great Filter and related issues, or if you work with GPS, or if you care about nuclear safety and engineering, or if you work with radios, etc. And that's before we look at the more silly problems that can arise from seeing parts of the world as magic, like people using quantum mechanics to justify homeopathy. If quantum mechanics is magic, then this is much easier to fall prey to.
The answers to questions matter.
Replies from: Richard_Kennaway, Punoxysm↑ comment by Richard_Kennaway · 2015-01-27T09:35:54.848Z · LW(p) · GW(p)
To live a Christian life, it could not matter less what you believe about the Eucharist.
The people in question would have vehemently and in fact violently disagreed. The only way this makes sense is if one adopts a version of Christianity which doesn't focus on the Eucharist, again disagreeing with a fundamental premise at issue. None of the groups in question consider "live a Christian life" to mean be a nice person and believe that Jesus was a swell dude.
Ask an Orthodox priest about the filioque. Ask a Catholic about the Resurrection. Ask John C. Wright about, well, anything.
Of course, you may not agree with their answers. The point is that their answers will have nothing to do with avoiding hellfire by reciting God's passwords (a typical misunderstanding by people unfamiliar with Christianity), and everything to do with how a right understanding of God and His relationship to Man leads us to rightly relate to each other in the world.
Though I speak with the tongues of men and of angels, and have not charity, I am become as sounding brass, or a tinkling cymbal. And though I have the gift of prophecy, and understand all mysteries, and all knowledge; and though I have all faith, so that I could remove mountains, and have not charity, I am nothing.
St. Paul, 1 Corinthians 13:1-2
The habit of charity extends not only to the love of God, but also to the love of our neighbor.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2015-01-27T14:27:24.101Z · LW(p) · GW(p)
But not all Christians believe that, as demonstrated by the Reformation wars among other things. And while we're at it, it is worth noting that among some evangelical Christians, especially those who emphasize the "sinners' prayer", it is very close to saying the right passwords. That people aren't focusing on the forms of Christianity that you are sympathetic with doesn't mean they don't understand Christianity.
↑ comment by Punoxysm · 2015-01-27T05:48:09.817Z · LW(p) · GW(p)
My post is about dogmatism. Sometimes beliefs have implications but they are implications in the long-tail, or in the far-future.
I would wager that most medieval Christians could not follow a medieval Christian theological debate about transubstantiation, and would not alter their behavior at all if they did. And eventually, Christians came to the consensus that this particular piece of dogma was not worth fighting over.
Specifics of property are similarly irrelevant in a world so far from the imagined world where these specifics determine policy. Certainly, it should be an issue you put aside if you want to be an effective activist. I'm not saying nobody should think about these issues ever, just that they are disproportionately argued-about issues.
Similarly MWI just doesn't have an impact on any decision I make in my daily life apart from explaining quantum physics to other people, and never will. Can you think of a decision and action it could impact (I really would like to know an example!)? I'm not saying it's totally irrelevant or uninteresting, it's just disproportionately touted as a badge of rationalism, and disproportionately argued about.
Or if you did quantum computing, or if you were a chemist, or if you were a biologist, or if you care about understanding and figuring out the Great Filter and related issues, or if you work with GPS, or if you care about nuclear safety and engineering, or if you work with radios, etc. And that's before we look at the more silly problems that can arise from seeing parts of the world as magic, like people using quantum mechanics to justify homeopathy. If quantum mechanics is magic, then this is much easier to fall prey to.
This argument can be applied to anything. Automotive knowledge, political knowledge, mathematical knowledge, cosmetics knowledge, fashion knowledge, etc. I think it's great to know things, especially when you actually do need to know them but also when you don't. But if some piece of knowledge is unimportant to determining your actions or most of them, I won't privilege it just because it has some cultural or theoretical role in some ideology.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-27T14:38:01.703Z · LW(p) · GW(p)
Sometimes beliefs have implications, but they are implications in the long tail, or in the far future.
A far-future belief that heavily affects estimated utility is important. In the case of Christianity, the stakes are nothing less than eternity. So once you agree with the medieval Catholic that that's really what's at stake, the behavior makes sense. The primary issue isn't one of "dogma" but of disagreeing with their fundamental premises.
I would wager that most medieval Christians could not follow a medieval Christian theological debate about transubstantiation, and would not alter their behavior at all if they did.
True. So what?
And eventually, Christians came to the consensus that this particular piece of dogma was not worth fighting over.
No, enough of them came to that conclusion that they stopped having large-scale wars of the same type. But note that that conclusion was essentially due to emotional issues (people being sick of it) and the general decline in religiosity, which led to a focus on other disagreements, especially those centered on ideology or nationalism, and less belief that the issues genuinely mattered. And if one looks, one still sees a minority of extremist Catholics and Protestants who think this was a mistake.
Can you think of a decision or action it could impact (I really would like to know of an example!)?
Sure. Some people have argued that cryonics makes more sense in an MWI context. So if one is considering signing up, that should matter.
I'm not saying it's totally irrelevant or uninteresting, it's just disproportionately touted as a badge of rationalism, and disproportionately argued about
So, I agree that it is disproportionately touted as a badge of rationalism, but I reach that conclusion for essentially different reasons: I don't think the case for MWI is particularly strong, given that we don't have any quantum mechanical version of GR. The reason MWI makes sense as a badge, if you think the case for it is strong, is that it would demonstrate a severe failing on the part of the physics community, what we normally think of as one of the most rational parts of the scientific community. It also functions as a badge because it shows an ability to accept a counterintuitive result even when that result is not being pushed by some tribal group (although I suspect that much of the belief here in MWI is tribal in the same way that other beliefs end up being tribal affiliation signals for other groups).
This argument can be applied to anything. Automotive knowledge, political knowledge, mathematical knowledge, cosmetics knowledge, fashion knowledge, etc. I think it's great to know things, especially when you actually do need to know them but also when you don't.
Sure. All of those are important. Unfortunately, the universe is big and complicated and life is hard, so you need to prioritize. But that doesn't mean that they are unimportant: it means that there are a lot of important things.
But if some piece of knowledge is unimportant to determining your actions or most of them, I won't privilege it just because it has some cultural or theoretical role in some ideology.
How much of that is that you don't identify with the culture, don't agree with the theory and don't accept the ideology?
Replies from: polymathwannabe, Punoxysm↑ comment by polymathwannabe · 2015-01-27T15:59:07.291Z · LW(p) · GW(p)
There are more factors involved. I sympathize with Buddhist ethics, but don't believe in reincarnation, and still haven't solved the dissonance. Buddhist ethics can function very well without all the other added beliefs, but sections of scripture insist on the importance of believing in reincarnation in order to remember the need for ethics. This is a case where in my everyday life I can successfully ignore the belief I don't like, but I can't change the fact that it's still part of the package.
To be a Catholic and ignore the doctrines concerning the Eucharist is a similar matter only if you don't mind being excommunicated. To very devoted Catholics, the risk of ignoring beliefs you don't like is too big.
In these two scenarios, you can't modify the package: the less palatable parts are still there, like it or not. But there are other scenarios where you can modify it, with varying degrees of chance of success. If you live in Saudi Arabia, and are very happy with the laws there, but want your wife to drive her own car, you'd need to change the minds of a big enough number of authorities before they catch up with you and put you in jail for agitation. Now suppose you live in Texas and are pretty content with most of the laws, but don't like the death penalty. In that case, the rules of democracy give you some chance of changing the part of the system that you don't like, without (in theory) any risk to you personally.
↑ comment by Punoxysm · 2015-01-28T06:11:39.419Z · LW(p) · GW(p)
Perhaps a better way to say it is that these beliefs are irrelevant to the ideology-as-movement.
That is, if you are a Christian missionary, details of transubstantiation are irrelevant to gathering more believers (or at least, are not going to strike the prospective recruits as more important). Likewise, MWI is not really that important for making the world more rational.
So I was wrong to say they are not in some way important points of doctrine. What I should say is that they are points of doctrine that are unimportant to the movement and most of the externally-oriented goals of an organization dedicated to these ideologies.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-28T13:28:35.226Z · LW(p) · GW(p)
Well no, that makes them unimportant only up until the point where there are people with similar viewpoints who disagree with you on that point. In both of the cases in question, that is part of the situation. In the case of transubstantiation, for example, there were other branches of Catholicism or Christianity that actively argued against it. In the case of MWI, there are people who argue that interpretations don't matter, and I've encountered at least a few self-identified skeptics who consider MWI to be "unscientific". Ideas don't exist in vacuums; it would be the same mistake as if a Christian missionary assumed that anyone they are talking to has no prior exposure to Christianity at all.
↑ comment by polymathwannabe · 2015-01-27T14:19:01.079Z · LW(p) · GW(p)
Christians have murdered each other over transubstantiation vs consubstantiation.
Even before that, the debates on the nature of Christ were grounds for successive mutual excommunications. Reading the arguments exchanged during that period is a continuous source of facepalms.
comment by advancedatheist · 2015-01-26T04:04:37.673Z · LW(p) · GW(p)
Roosh Valizadeh, the PUA blogger, asks a question which sounds really out of character, given his usual interests:
Are We Living In A Computer Simulation?
http://www.returnofkings.com/53418/are-we-living-in-a-computer-simulation
Replies from: WalterL↑ comment by WalterL · 2015-01-26T15:55:24.877Z · LW(p) · GW(p)
-8 seems like a lot of disapproval towards what amounts to "look at some random dude who is groping towards the simulation conclusion."
I mean, it's not a super valuable post, the linked post is just a personal narrative, but -8 still seems excessive. I'd expect a post like this to end up at 0 or -1/-2. It's not like it's anti-rationality, right?
Replies from: spxtr, JoshuaZ↑ comment by spxtr · 2015-01-27T03:40:51.318Z · LW(p) · GW(p)
Quantum mysticism written on a well-known and terrible MRA blog? -8 seems high. See the quantum sequence if you haven't already. It looks like advancedatheist and ZankerH got some buddies to upvote all of their comments, though. They all jumped by ~12 in the last couple hours.
For real, though, this is actually useless and deserves a very low score.
↑ comment by JoshuaZ · 2015-01-26T17:11:49.645Z · LW(p) · GW(p)
See http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw7p
comment by advancedatheist · 2015-01-26T04:43:22.431Z · LW(p) · GW(p)
What do you expect to learn over the next few decades of your life that you don't understand well now?
Since I've reached my 50's, I've come around to seeing how acquiring an adult man's skill set (AMSS) at a developmentally appropriate age (late teens/early 20's, in other words) has applications beyond getting into sexual relationships through dating. (And you probably wonder why I read PUA blogs and websites.) Namely, the AMSS doesn't exist in isolation, but plays a role in projecting male presence and authority when you have to deal with women in other social situations, like the workplace. Women tend to respect the sexually confident man more than the sexually in- or under-experienced man.
I'd like to tell younger men who have had problems with acquiring the AMSS that they need to think about these other consequences of its absence when they reach middle age. In a rational society (lots of luck getting that, despite what you LessWrong people fantasize about), parents wouldn't leave their boys' development of the AMSS to chance. When they can see that girls don't find their sons sexually attractive in their given state, the boys need some kind of intervention to correct that right away.
Replies from: wobster109, MrMind, BrassLion, None, buybuydandavis, PhilGoetz↑ comment by wobster109 · 2015-01-26T08:11:23.362Z · LW(p) · GW(p)
I'm going to give you some advice as a professional woman. I very deeply resent when male colleagues compete with each other to put on a display for women. This goes for social contexts (rationalists' meetups) in addition to professional contexts (work meetings). There, women are trying to talk about code or rationality or product design. Rather than thinking about their contributions, the men are preoccupied with "projecting male presence and authority". What does male presence even mean? Why does authority have anything to do with men, instead of, you know, being the most knowledgeable about the topic?
I'll tell you how it comes across. It comes across as focusing on the other men and ignoring the women's contributions. Treating the men as rivals and the women as prizes. Sucky for everyone all around. Instead of teaching boys to be "sexually attractive", why don't you teach them to include women in discussions and listen to them same as anyone else? Because we're not evaluating your sons for "sexual attractiveness". We're just trying to get our ideas heard.
Replies from: shullak7, Viliam_Bur, bogus, buybuydandavis, None, ZankerH↑ comment by shullak7 · 2015-01-27T15:43:25.559Z · LW(p) · GW(p)
I'll tell you how it comes across. It comes across as focusing on the other men and ignoring the women's contributions. Treating the men as rivals and the women as prizes. Sucky for everyone all around.
Thank you for this. As a younger woman, I became reluctant to join conversations at conferences or other professional meetings because I had noticed that the dynamic of the group sometimes changed for the worse when I entered the discussion. As I get older, I'm no longer as much of a "prize", so it doesn't happen to me as often (which is honestly a relief), but I see it happen with other women. You've put nicely into words why it sucks so much -- for everyone, not just women. I have to believe that it also sucks for the men who are just trying to have a good discussion, but are suddenly thrust into the middle of a sexual competition.
↑ comment by Viliam_Bur · 2015-01-28T10:28:07.363Z · LW(p) · GW(p)
Reminds me of this Aldous Huxley quote:
An intellectual is a person who's found one thing that's more interesting than sex.
I find it also annoying when people cannot turn off their sexual behavior and focus on the topic, and instead disrupt the debate for everyone else. Both genders have their version.
The male version is what you described: focusing on status competition at all costs.
The female version is... essentially: "everyone, pay attention to me! I am a young fertile woman! look here! look here!"... giggling in a high-pitched voice at every opportunity, frequently inserting little "jokes" which other women often find annoying, turning attention to their body by exaggerated movements, etc. (Not sure if I described it well; I hope you know what I mean).
Not sure what to do with this. Seems like a multi-player Prisoner's Dilemma. People who are doing this (if they are well-calibrated) receive some personal benefits at the expense of the group, so it would be hard to convince them to stop. Most likely, they would deny doing this.
But seems like men have an advantage here, because fighting for status in order to seem more attractive is trying to kill two birds with one stone. Even if it doesn't make the man any more attractive to anyone, he still gets some status (unless he is doing it really wrong). On the other hand, when the woman fails to seem attractive, her behavior will only seem stupid.
Replies from: bogus↑ comment by bogus · 2015-01-28T16:08:25.625Z · LW(p) · GW(p)
giggling in a high-pitched voice at every opportunity, frequently inserting little "jokes" which other women often find annoying, turning attention to their body by exaggerated movements, etc. (Not sure if I described it well; I hope you know what I mean).
To me, that behavior connotes a combination of wanting to project femininity (not so much sexual behavior or attractiveness) and having lower-than-average self esteem (i.e. perceived status). It is mostly the latter that can be slightly annoying in the workplace, since such people are often unwittingly excluded from discussion (wobster109 also raises this point).
The root problem here is not so much the behavior itself, but lack of perceived status that then leads to that behavior as a kind of overcompensation. ISTM that high self esteem often boosts both social attractiveness and effectiveness in the workplace (as long as it doesn't come with 'Type-A' overt aggressiveness, and even then sometimes), and that this broadly applies to both males and females.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-01-28T16:45:46.866Z · LW(p) · GW(p)
The low self-esteem hypothesis is difficult to falsify, because whatever social role a given person plays and however they behave, one could still say "but maybe deep inside they feel insecure". Having said that... yes, this may be an instinctive reaction of a nervous woman, but I believe I have also seen high-status women doing it strategically.
Imagine a club that has informal lectures at its meetings (not LessWrong, but similar), and a 30-something woman, a long-term relatively high-status member of the club, interrupting the lectures every few minutes with some "witty" remark. That was the most annoying example I remember. It seemed to me like she was trying to imitate the behavior of a young girl, in my opinion not very successfully, exactly because the element of shyness was missing; it was only rude. Possibly she was projecting her authority against the other women present. I just shrugged the behavior off as rude and forgot it afterwards, but my girlfriend later told me she wanted to kill that person. (Which I take as evidence that the behavior was a move in an intra-gender status fight.)
Replies from: bogus↑ comment by bogus · 2015-01-28T17:21:24.443Z · LW(p) · GW(p)
a 30-something woman, a long-term relatively high-status member of the club, interrupting the lectures every few minutes with some "witty" remark.
Not sure what you mean by "witty remark", but wit and humor often connote fairly high status, as opposed to, e.g., just giggling at something or the other things you mentioned. Could it be that your girlfriend was just annoyed at the sheer number of interruptions? And yes, there may have been some intra-gender status competition involved, but males often compete in much the same way. (Protip: don't invite sealion specimens to clubs or conferences.)
↑ comment by bogus · 2015-01-28T02:14:31.988Z · LW(p) · GW(p)
I'm going to give you some advice as a professional woman. I very deeply resent when male colleagues compete with each other to put on a display for women. ... It comes across as focusing on the other men and ignoring the women's contributions. Treating the men as rivals and the women as prizes. Sucky for everyone all around.
This is not what PUAs advocate as the best way of relating to women, much less in the workplace. The short version is that PUAs are advised to treat women like they would a male friend, and to see only themselves as a possible prize, never the women. While some measure of "projecting male presence and authority" might be involved, it would be a lot subtler than you are implying, and it would never get in the way of actual discussion.
You're probably modeling your remarks on the common variety of "A-type" personalities, who also like to project dominance. But these folks are not PUAs - many things they do are just wrong and dysfunctional, particularly in a workplace environment. At the same time, we do need to care about these issues. Just focusing on "being the most knowledgeable about the topic" with no attention to social presence is not the answer. It will cause others to regard you as an obnoxious know-it-all, not a valuable asset in your team.
↑ comment by buybuydandavis · 2015-01-28T02:52:25.455Z · LW(p) · GW(p)
We're just trying to get our ideas heard.
Some people are, sometimes. Most people, most of the time, are either partly or entirely interested in boosting their social status and competing for mates, which are two intertwined activities.
I don't really feel entitled to have my way in what goals others pursue in a social setting.
↑ comment by [deleted] · 2015-02-08T13:43:50.786Z · LW(p) · GW(p)
I think you're projecting your feelings here into some sort of feeltopia.
For your first paragraph, can you use consequentialism to describe how well a man who doesn't chase after status and women will do? In the dating market? In the job market? In the social market? Can you give me true data that that man will have as many opportunities, as many chances, and as many friends as the guy that - god have mercy on our souls - does the forbidden and resentable sin of status competing? Be aware that many men share the opposite view, and as you are A. not a man (and all that it brings with it); B. have not admitted to having any insight into how men view the world; and C. did not say if your views are based on researched evidence or anecdotal evidence, you have provided no reason why any man should review and perhaps even update his view in light of yours. What merit do you think your view has that men are seemingly missing in the quest for statusdom?
For your second paragraph: it falls apart if you cannot bring ample evidence for the first, as it's a follow-up to it, and it also reads like generalizing from one example. I'm not a fan of superficial intelligence, so I'll explain why instead of linking you to post ZFA#24, and straight-out tell you that you seem to be in bad company if what you say is true, and that you cannot be called professional if that's the kind of environment you are employed in daily.
I hope this post gets downvoted to oblivi.. ehrm, I mean, linked in order to solve this silly gender war bullcrap once and for all.
↑ comment by ZankerH · 2015-01-26T09:57:08.019Z · LW(p) · GW(p)
You're presupposing implicit agreement from all men with the notion that women's ideas have merit, and the agreement from all women with your notion of wanting to be included in the company of men as equals.
There is a more elegant solution that doesn't involve your desired absurd levels of gender equality - gender self-segregation in situations where the purpose is not interacting with the opposite gender. That way, there is no pressure for competition between men, and no pressure on women to pretend to be men. Because that's what modern "gender equality" ideology amounts to - pressuring women to act like men, disregarding their actual ideas and aspirations for the goal of eradicating femininity.
Replies from: wobster109, JoshuaZ, polymathwannabe, None↑ comment by wobster109 · 2015-01-27T02:02:51.689Z · LW(p) · GW(p)
I agree with JoshuaZ. I find your solution to be a severe hindrance in real life. I am the SQL expert on my team, and my (male) coworker is the surgeries expert, and my (male) colleague across the hall is the infectious diseases expert. We all work together to make the best product possible. How can we get anything done if we are segregated by gender?
I don't see why I need "implicit agreement from all men". My ideas have merit because they reduce medical errors and save lives. Real-life results are the judge of that, not men. I also do not see why I need "agreement from all women". They are not my coworkers, and they are free to live their lives as they wish. That said, I am a developer in a project meeting at a tech company. Safe to say, I want to be treated as an equal.
Finally, I don't see what contributing to a great company has to do with "acting like men" or "pretending to be men". My goal isn't to "eradicate femininity"; it is to make a great product that will help people. If you think that is inherently masculine, then you'll have to explain. So why don't you start by telling me what "masculine" and "feminine" mean to you?
↑ comment by JoshuaZ · 2015-01-26T17:18:04.437Z · LW(p) · GW(p)
You're presupposing implicit agreement from all men with the notion that women's ideas have merit, and the agreement from all women with your notion of wanting to be included in the company of men as equals.
Nothing in wobster109's comment presumed anything of the sort. Moreover, if there are men who are unable to see ideas coming from women as having merit then the problem seems to be with those men more than anything else.
But we can, if you want, steelman your statement slightly: instead of the men disagreeing with the notion that women's ideas have merit, it is possible that some men have problems at a basic level with accepting ideas from women. In that case, maybe there is an actual problem that needs to get dealt with, but even then, you need a lot more of an argument that the best solution is complete gender segregation rather than trying to teach those men to accept ideas from women. And calling a solution "elegant" doesn't make it so: indeed, this is a "solution" where we can empirically see what happens when countries like Saudi Arabia or places like Kiryas Yoel try to implement variants of it, and it isn't pretty.
Because that's what modern "gender equality" ideology amounts to - pressuring women to act like men, disregarding their actual ideas and aspirations for the goal of eradicating femininity
Aside from this being one of the most strawmanned possible notions of what constitutes gender equality, does it strike you at all as odd that you are replying to an actual woman and telling her that what she wants is due to pressure from some ideology for her not to act feminine? Do you see what might be either wrong with that, or at least epistemologically unwise?
↑ comment by polymathwannabe · 2015-01-26T16:10:10.652Z · LW(p) · GW(p)
So, in your world, men reserve the right to define what femininity is, whereas in a gender-equal world, women get to define it. I don't see why we should prefer your world.
Replies from: Luke_A_Somers, ZankerH↑ comment by Luke_A_Somers · 2015-01-27T19:32:37.984Z · LW(p) · GW(p)
As bad as that comment was, this isn't one of its problems.
↑ comment by ZankerH · 2015-01-26T19:22:32.411Z · LW(p) · GW(p)
That's not what I'm saying at all. I'm speaking of femininity as ordained by Gnon, not any human attempts to define or constrain it - in that sense, indoctrinating women into behaving like men and aspiring to masculine achievements does not amount to "gender equality", but the systematic eradication of femininity.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-01-26T21:31:35.194Z · LW(p) · GW(p)
There are several worrying, unquestioned assumptions in your argument: namely, that authority and competitiveness are exclusively male traits, that competition among men is inevitable in the presence of women, and that women who try to obtain and use power are "pretending to be men". Actually, by restricting the admissible examples of femininity to those that best suit your ideal society, you are introducing your preferred definition of femininity. Shifting the blame to "Gnon" doesn't succeed in hiding the fact that you're the one defining "Gnon".
But nature (because I refuse to anthropomorphize impersonal forces by giving them silly names) does not ordain a specific way of femininity or masculinity. Seahorse young are gestated by the male, which during that period is full of prolactin. Jacana birds have a peculiar calendar of reproductive availability that resulted in the bigger and stronger females being selected through fighting over available males. The definitions of male and female aren't cast in stone.
I'll grant that those examples have little bearing on human societies, but even appealing to our primate past is no use: are you going with the patriarchal chimpanzee, or the matriarchal bonobo? Better, we could just abandon the naturalistic fallacy and let individual humans decide what patterns of behavior actually make them happiest.
Replies from: ZankerH↑ comment by ZankerH · 2015-01-26T22:16:20.754Z · LW(p) · GW(p)
There are several worrying, unquestioned assumptions in your argument: namely, that authority and competitiveness are exclusively male traits
Not exclusively, primarily. There's certainly been little selection pressure for women to compete or lead as opposed to men, be it in the context of mating rights or broader societal interactions. Men are better adapted for such purposes, therefore, by taking them on, women are, by definition, acting like men.
That's all I'm talking about - recognising the fact that men and women have certain complementary, non-overlapping aptitudes, are not literally the same, and are better off apart in certain situations in which modern western societies force them together.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-01-26T22:38:10.143Z · LW(p) · GW(p)
The environment we live in constitutes a new, different set of selection pressures. Features which were adaptive in the past (like dominance or aggressiveness) no longer ensure differential mating success. Willingness to negotiate roles and seek consensus is more incentivized now. Instead of promoting nostalgia for an ancestral environment that is not coming back, you could do what makes the most biological sense---adapt.
Replies from: ZankerH↑ comment by ZankerH · 2015-01-26T22:58:41.211Z · LW(p) · GW(p)
We disagree over how much it's possible to adapt in a single generation, then. Anyway, ideologies that force absurd levels of pretence at gender equality on us aren't about adapting away these differences; they're about pretending they don't exist - and if you really wanted to reverse them, there are far more efficient processes to do so than ignoring them, given modern technology - but most would require acknowledging the inherent inequality in the first place. The only conclusion I can draw from the fact that nobody is pushing for this is that the gender equality movements are content to pretend to have accomplished something by forcing people to pretend that the concept of gender equality corresponds to reality in any way.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-01-27T16:33:16.658Z · LW(p) · GW(p)
Adaptation is actually quite efficient within one single generation: those who don't mate, don't mate. If enough women decide that they don't want cavemen but feminist men, differential reproduction will sort out the results, and the next generation will feel more comfortable with a state of equality.
Edited to add: that is, assuming that opinions are genetic. They're not. Memetics is even faster than genetics at changing attitudes toward gender roles. The son of a caveman can learn to become a feminist, and vice versa. Change can happen in less than one generation.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-01-28T09:59:16.216Z · LW(p) · GW(p)
There will also be competition among the feminist men. Seems to me that the loudest of them are often... uhm, former cavemen who have "seen the light" and became extremists for the other side. Or guys like this one. If there is a genetic component, these guys would bring it to the future.
More generally, Goodhart's law also applies to men signalling feminism.
↑ comment by BrassLion · 2015-01-28T01:50:03.250Z · LW(p) · GW(p)
When I read the phrase "adult man's skill set", I immediately thought about carpentry. Did everyone else think about sex, or are there other people that thought this was going to be a post about practical, traditionally manly things?
Replies from: JoshuaZ↑ comment by [deleted] · 2015-01-27T06:32:52.518Z · LW(p) · GW(p)
I can just imagine how this would've gone over if the gender was reversed.
I'd like to tell younger women who have had problems with acquiring the AWSS that they need to think about these other consequences of its absence when they reach middle age. In a rational society (lots of luck getting that, despite what you LessWrong people fantasize about), parents wouldn't leave their girls' development of the AWSS to chance. When they can see that boys don't find their daughters sexually attractive in their given state, the girls need some kind of intervention to correct that right away.
Also, I have a suspicion that advancedatheist is abusing the karma voting system in some manner because he went from -13 to 0 in a matter of a few hours.
Replies from: None, bogus, PhilGoetz, buybuydandavis↑ comment by bogus · 2015-01-28T16:29:19.054Z · LW(p) · GW(p)
Well, what is the best AWSS? We don't seem to know much about this, and it strikes me as a very interesting question. Publications like COSMO try to provide women with useful advice but they are often ridiculed as useless, even a lot more than PUAs are (I guess the male equivalent would be mainstream "men's" magazines). And there is no real female equivalent to the male PUA community - female PUAs exist but they're such a tiny niche that we know almost nothing about the quality of their advice.
↑ comment by buybuydandavis · 2015-01-28T03:19:17.575Z · LW(p) · GW(p)
That sounds like good advice too.
I find it strange that people get so irate over the suggestion that people develop interpersonal skills, and that their parents should help them do so if they see a lack.
Replies from: philh, ChristianKl↑ comment by philh · 2015-01-28T10:45:14.836Z · LW(p) · GW(p)
You strike me as being incredibly charitable towards aa, and more to the point, incredibly uncharitable towards the people who are getting irate.
If you think I'm one of those people, I'd like to make it very clear that I do not think those particular suggestions are bad, and that is not what I'm irate about.
(Actually, I don't think I was getting irate at aa or his posts. I'm getting irate at you, however.)
Can you really not see the difference between "if your child lacks interpersonal skills, you should help them develop them" and "if girls don't find your son sexually attractive, he needs some kind of intervention to correct that right away"? Or do you just expect everyone to make the mental effort to read the latter as the former, regardless of the history of the person speaking?
I may be being uncharitable towards you right now, but I really have no conception of how you could be genuinely confused here.
Also, relevant to the thread: I have now been karma-bombed. (Not my most recent few comments, but ~15 of my comments in a row have suddenly been downvoted.)
Replies from: buybuydandavis↑ comment by buybuydandavis · 2015-01-28T22:47:15.708Z · LW(p) · GW(p)
incredibly uncharitable towards the people who are getting irate.
In general I'm not that sympathetic to people who feel entitled to charity from me, and get irate if they don't get it.
Can you really not see the difference between "if your child lacks interpersonal skills, you should help them develop them" and "if girls don't find your son sexually attractive, he needs some kind of intervention to correct that right away"?
I see lots of differences. Instead of me playing detective on the particulars of just what you find so offensive, could you just say it?
I may be being uncharitable towards you right now, but I really have no conception of how you could be genuinely confused here.
I have my theories, but much prefer not to shadow box with my own theories of someone else's theories. Spell it out, and then we can both see how much and how we agree or disagree.
Or do you just expect everyone to make the mental effort to read the latter as the former, regardless of the history of the person speaking?
What I expect and what I recommend are different things. I recommend that when people read, they try to see what value they can extract from what they read.
If they're spoiling for a fight against someone with a history, they should pick the battles where they have the clear upper hand based on the text as given. Arguing against supposed dog whistles either leaves you missing the point, or it plays right into any dog whistler's hands, making you look like a putz to third parties. If he's going to say horrific things, wait until he does, then shit hammer him.
I recall a similar conversation with a guy I follow on youtube. He had a video attacking someone which, given enough context, made some sense, because his target was dog whistling up a storm. But I think to third parties, he ended up looking like a bozo, arguing against statements that were perfectly innocuous to the average reader. It's a loser of a tactic, even when you're right about the dog whistles. Especially when you're right about the dog whistles.
Replies from: philh, PhilGoetz↑ comment by philh · 2015-01-29T11:14:30.881Z · LW(p) · GW(p)
I'm not sure how that's a reply to me.
I complained that you were uncharitably misrepresenting my position. You say I'm not entitled to charity, but your representation of my position doesn't come close to anything I ever said. This isn't merely you not-being-charitable, this is you being either actively uncharitable, or arguing against imagined dog whistles, or something that isn't engaging with either what I said or what I meant.
(You may say that you never mentioned me, and that's so. But if you weren't including me in "people get so irate", who specifically did you mean?)
I'm not here looking for a fight with aa. I mostly just ignore him. I replied to you, when you said "this seems like good advice", and I said that the advice you had taken from aa's post was a much weaker version of what he had actually said, and that aa's history makes me disinclined to engage with him.
You're telling me that I should attack aa based on the text he actually wrote, but you're the one who turned "if girls don't find your son sexually attractive, he needs some kind of intervention to correct that right away" into "if your child lacks interpersonal skills, you should help them develop them".
When you say "I find it strange that...", I want to know if you actually do find it strange. I don't think it matters what differences I see in the two statements. Suppose it turns out that actually, aa's advice-as-written is sensible and he was right all along and I'm wrong. Fine. Nevertheless, right now I think his advice-as-written is bad, and you're telling us that advice he didn't write is good, and I want to know if you can tell the difference between what he did and didn't write. You don't need to see the same differences as I do, you just need to be able to see enough differences that you understand why people might be okay with one and not the other.
Do you actually think that people are getting irate at "the suggestion that people develop interpersonal skills, and that their parents should help them do so if they see a lack"? Because if so, I claim that you are just flat out wrong. That is not what aa suggested, and it's not what people are getting irate about.
(I don't want to say exactly what it is that I don't like about aa's advice-as-written. I don't want to put in the time to do it justice, I don't want to write a not-quite-right version and open myself up to nitpicking, I don't think it's a productive avenue of discussion right now, and I'm not convinced that you aren't just asking as a distraction. This is a can of worms that I decline to open.)
And actually, when I say "I want to know", that's a rhetorical flourish. I don't really care. What I started this post trying to say was: I think you're being (perhaps deliberately) obtuse, and I don't think I want to continue engaging with you on this. I've put more time and emotional energy into this discussion than I care to admit, and I don't think it's paying off. There are probably things you can say in reply that would change my mind, but by default, I'm done here.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2015-01-29T22:10:52.546Z · LW(p) · GW(p)
I don't want to say exactly what it is that I don't like about aa's advice-as-written.
You're irate, but refuse to lay your cards on the table. That's your prerogative.
I don't want to write a not-quite-right version and open myself up to nitpicking
Yes, people often don't want to do that. A less flattering way of phrasing that is that you don't want to open your opinions to scrutiny.
I'm not shy about my opinions, and I wasn't going to play psychic detective with yours.
↑ comment by PhilGoetz · 2015-02-05T08:55:26.586Z · LW(p) · GW(p)
"Dog whistling"?
Replies from: gjm↑ comment by gjm · 2015-02-05T11:54:18.883Z · LW(p) · GW(p)
Dog-whistle politics: saying things that some subset of your audience will recognize as having a meaning special to their tribe (and of which they approve) but that go straight over the heads of the rest of your audience (which perhaps would be displeased if they noticed you paying special attention to that subset, or if they understood what you were saying to them).
Replies from: buybuydandavis, Salemicus↑ comment by buybuydandavis · 2015-02-06T21:24:33.492Z · LW(p) · GW(p)
I note that you're downvoted when supplying the definition of a term that was specifically requested.
Not that the downvoter even read your post, as I see a trail of -1s.
Replies from: gjm↑ comment by gjm · 2015-02-06T21:55:21.822Z · LW(p) · GW(p)
Yes, I noticed that too. It seems to happen most frequently to me shortly after getting into political discussions (in a broad sense that includes e.g. questions of race and gender). The obvious hypotheses are (1) the quality of my comments goes way down when I post about politics, (2) there are trigger-happy downvoters out there of various political leanings, and (3) there are, more specifically, trigger-happy downvoters whose political leanings are opposed to mine. #1 might be true but (as you remark) the downvoting seems to be quite indiscriminate and to affect comments I can see no reasonable objection to. #2 and #3 are both fairly plausible, but the clearest-cut particular cases of mass-downvoting we've had happen to be from a neoreactionary, favouring #3. (And of course my political opponents are, as such, Objectively Evil and therefore that's just the sort of thing they would do.)
My policy when I notice this sort of thing is to post more. Posting a comment has positive expected karma for me even when I'm being mass-downvoted, and my interpretation of mass-downvoting is that it's meant to be intimidatory; for those who can afford it, the best response to attempted intimidation is to refuse to be intimidated.
Replies from: Jiro, buybuydandavis↑ comment by Jiro · 2015-02-07T21:45:59.871Z · LW(p) · GW(p)
I got a -1 for posting in the same discussion here, which suggests 3). I also got a -1 for answering a question that specifically asked to explain criticism, and lost a lot of karma for trying to defend an opinion which is popular in the outside world but is unpopular here. (I got modded down for most of that, it's just that several of them were counterbalanced by people moderating up. The last ones did not have anyone moderating them up and left me in the negatives.)
Replies from: gjm, ChristianKl↑ comment by gjm · 2015-02-07T23:21:48.691Z · LW(p) · GW(p)
For what it's worth, I think those comments of yours were probably downvoted "honestly" -- by which I mean not that I agree with the downvoters, but that I think each comment was downvoted by someone who specifically didn't like that comment, rather than by someone who disliked something else you'd written and wanted to punish you harder than they could by downvoting just the thing they didn't like.
(I do have the impression -- to which your examples are relevant -- that left-leaning comments get downvoted by right-leaning users here more often than the other way around, even though we have rather more left-leaning than right-leaning users. I don't trust that impression very much since it's easy to see how it could be wrong.)
↑ comment by ChristianKl · 2015-02-07T22:05:47.381Z · LW(p) · GW(p)
trying to defend an opinion which is popular in the outside world but is unpopular here
You didn't defend an opinion. You just stated an opinion.
↑ comment by buybuydandavis · 2015-02-06T23:56:22.732Z · LW(p) · GW(p)
It seems to happen most frequently
Does it happen that often? Is it, like in this case, -1? Or more negative tallies?
My policy when I notice this sort of thing is to post more.
That's the spirit!
I post harder. I take downvotes as a sign that there are readers in need of my gentle tutelage. So many crosses to bear.
Replies from: gjm↑ comment by Salemicus · 2015-02-05T15:30:47.254Z · LW(p) · GW(p)
This is the theory. In reality, it's mostly ideological opponents who accuse the speaker of using dog-whistles, whereas supporters understand the words at face value, which is precisely the opposite of what the theory would predict. So, for example, Democrats often say that Republican policies to restrain welfare spending are racist dog-whistles - but how would they know? Surely the very theory of dog-whistles should predict that Democrats wouldn't recognise that these are racist claims. Whereas Republicans swear up and down that they really do believe in low government spending and not rewarding sloth. In the end, if you can hear the dog-whistle, you are the dog.
I think the real meaning of "dog-whistle politics" is trying to ascribe an unpleasant secret meaning to your ideological opponents' straightforward positions, and therefore avoid the unpleasant task of having to consider whether their ideas may have merit.
Replies from: gjm, Jiro, buybuydandavis↑ comment by gjm · 2015-02-05T16:22:58.759Z · LW(p) · GW(p)
For the avoidance of doubt, I was not endorsing any claim that anyone is dog-whistling, I was explaining what the term means. Having said which:
supporters understand the words at face value
I don't think we know this is true. (It may be true that supporters generally say they understand the words at face value, but since dog-whistling accusations often concern things one is Not Supposed To Say it wouldn't be surprising if supporters claim to take the words at face value even if they hear the whistling loud and clear.)
the very theory of dog-whistles should predict that Democrats wouldn't recognise that these are racist claims
No, I don't think that's the claim -- I think you're taking the metaphor too literally. What it predicts is that these claims should have an innocent face-value meaning but also be understandable by their intended audience as saying something else that the speaker doesn't want to say explicitly; that's all.
I think the real meaning [...] is [...]
I think it is most likely that (1) it sometimes does happen that politicians and others say things intended to convey a message to some listeners while, at least, maintaining plausible deniability and/or avoiding overt offence to others, and (2) it sometimes does happen that politicians and others get accused of doing this when in fact they had no such intention. Because both of those seem like things that would obviously often serve the purposes of the people in question, and I can't see what would stop them happening.
(In some cases the speaker may expect the implicit meaning to be clearly understood by both supporters and opponents, but just want to avoid saying something dangerous too explicitly. Maybe those cases shouldn't be categorized as dog-whistling, but I think there's a continuum from there to the cases where the message is intended not to be heard by everyone.)
When you say "In reality" and "the real meaning [...] is", are you claiming that the phenomenon described on (e.g.) that Wikipedia page never, or scarcely ever, actually happens? To take a couple of prominent examples, would you really want to claim that
- when a US politician speaks of "family values", they scarcely ever intend this to be understood as (at least) a friendly gesture by, e.g., supporters of organizations like the American Family Association, the Family Research Council, the Family Research Institute, the Traditional Values Coalition, the Values Voter Summit, etc.?
- or that they don't also intend "family values" to sound more warm-and-fuzzy-and-positive than, say, "opposition to same-sex marriage, treating transgender people as belonging to whatever gender they were assigned at birth, opposition to abortion", and all the other specific things that actually distinguish those organizations dedicated to "family" and "values" from their ideological opponents?
- during the US Democratic primaries in 2008, nothing Hillary Clinton's campaign said and did was intended to highlight Barack Obama's race in ways that would make him less appealing to white voters, and nothing Obama's campaign said and did was intended to highlight his race in ways that would make him more appealing to black voters?
↑ comment by Salemicus · 2015-02-05T17:06:53.928Z · LW(p) · GW(p)
supporters understand the words at face value
I don't think we know this is true.
OK, that's fair. It's possible that the sympathetic hear the alleged dog-whistle but deny it, although I still think our default assumption should be to believe the supporters unless we have specific evidence otherwise. But we are still left with the puzzle of why opponents can hear the allegedly inaudible whistle.
What it predicts is that these claims should have an innocent face-value meaning but also be understandable by their intended audience as saying something else that the speaker doesn't want to say explicitly; that's all.
No. Let's look at what wikipedia says:
Dog-whistle politics is political messaging employing coded language that appears to mean one thing to the general population but has an additional, different or more specific resonance for a targeted subgroup.
You're now trying to water down dog-whistling to a mere 'plausible deniability.' These are two distinct theories:
- When I say 'aqueducts are bad' most people think I'm arguing about the government's aqueduct-building. But members of the Anti-Bristol Society understand this to mean that I really want to persecute Bristolians, and so will vote for me. (Dog-whistling).
- When I say 'aqueducts are bad' everyone understands that I really mean that I want to persecute Bristolians, and that I'm reaching out to the Anti-Bristol Society. But because I didn't say I want to persecute Bristolians, I have just enough plausible deniability to not get thrown out of polite society. (Plausible deniability).
Note that these two ideas are essentially opposites.
When you say "In reality" and "the real meaning [...] is", are you claiming that the phenomenon described on (e.g.) that Wikipedia page never, or scarcely ever, actually happens?
Scarcely ever. Consider the example from the UK - how on earth is that dog-whistling? Both the Conservative manifesto at the last election and the official policy of the Coalition government explicitly state the goal of limiting immigration. These policies have been criticised as, inter alia, racist. So they run adverts arguing that "It's not racist to impose limits on immigration." Where's the dog whistle? Isn't the simpler story that they're trying to defend their stated policies? If this is a dog-whistle, what is the content of the dog-whistle, over and above their policies? What's the payoff? It doesn't really make sense.
Regarding your specific examples:
- I have no idea how those specific organisations, with which I have no particular familiarity, react to an individual phrase.
- Republicans certainly intend "family values" to sound more warm-and-fuzzy and positive than "heteronormativity." But there is a world of difference between using the most positive language to describe your own position and the most negative to describe the opposition (pro-choice vs pro-death, death tax vs estate tax, Obamacare vs Affordable Care Act, etc) and dog-whistling, or even plausible deniability. Social conservatives are not hiding anything by saying they favour "family values." They are perfectly willing to defend each and every one of the policy planks, but they want one phrase to sum it all up.
- I'll never be able to prove that nothing they did was intended like that. But I certainly don't believe that the most-remarked-on incident (Bill Clinton comparing Obama's win in the South Carolina primary to Jesse Jackson's win in the South Carolina primary) was in any way a dog-whistle, and I certainly do believe that what really happened there was that an innocuous comment was seized upon by people as a stick with which to beat Clinton, with the convenient excuse that the very innocuousness of the comment is the evidence for its malignity, because the evil is somehow hidden.
↑ comment by buybuydandavis · 2015-02-06T21:33:12.462Z · LW(p) · GW(p)
You're now trying to water down dog-whistling to a mere 'plausible deniability.'
There's a continuum between the two forms of equivocation according to the percentage of the out group that can hear the dog whistle.
↑ comment by gjm · 2015-02-06T00:35:34.782Z · LW(p) · GW(p)
You're now trying to water down dog-whistling to a mere 'plausible deniability'
Actually, I said a couple of paragraphs later: "Maybe those cases shouldn't be categorized as dog-whistling, but I think there's a continuum from there to the cases where the message is intended not to be heard by everyone." I disagree with your statement that these cases "are essentially opposites"; they have in common (1) an innocuous face-value meaning and (2) a less-innocuous meaning intended to appeal to a subset of the audience. In cases of either type I would expect the speaker to prefer the less-innocuous meaning not to be apparent to most listeners. The only difference is in how hard they've tried to achieve that, and with what success.
(And if a politician is sending not-too-explicit messages of affiliation to people whose views I detest, actually I don't care all that much how hard he's trying to have me not notice.)
we are still left with the puzzle of why opponents can hear the allegedly inaudible whistle.
Leaving aside (since I agree that it's doubtful that they should be classified as dog-whistling, though I disagree that they're "essentially opposite") cases where the goal is merely plausible deniability: this is puzzling only if opponents in general can easily hear it, but I think what's being claimed by those who claim to discover "dog-whistling" is that they've noticed someone sending messages that are clearly audible to (whoever) but discernible by the rest of the population only when they listen extra-carefully, have inside information, etc. (Of course once it's been pointed out, the inside information is more widely available and people are prepared to listen more carefully -- so fairly soon after an alleged dog-whistle is publicized it becomes less dog-whistle-y. If someone says now what Reagan said earlier about welfare abusers, their political opponents would notice instantly and flood them with accusations of covert racism; but when Reagan actually said those things before, they weren't immediately seen that way. Note that I am making no comment here about whether he actually was dog-whistling, just pointing out that what's relevant is how the comments were perceived before the fuss about dog-whistling began.)
Consider the example from the UK
An advertisement saying "It's not racist to impose limits on immigration" isn't dog-whistling, and I don't think anyone claims it is. (Some people might claim it's lying, but that's a different accusation entirely.) But if there are a lot of people around who are (but wouldn't admit that they are) anti-immigration because they don't want more black people in the UK, then vocal opposition to immigration may convey to those people the message "we prefer white people too", and some of what a political party says about immigration may be designed to help the racists feel that way.
Whether any of what the Conservatives say about immigration would rightly be classified that way, I don't know. I haven't paid a lot of attention to what they've said about immigration. (I'm pretty sure they're happy enough to get a slice of the racist vote, but that on its own isn't dog-whistling.)
The payoff, if they are deliberately courting the racist vote, would be that racists in the UK feel that the Conservatives aren't merely cautious about immigration in general but will pursue policies that tend to keep black people out of the UK (that would be the content of the dog-whistle over and above their explicit policies), and accordingly are more inclined to vote for them rather than turning to (say) the BNP or UKIP than they would have been without that reassurance.
"family values"
It seems likely to me that at least some social conservatives some of the time are intending this to suggest more than they would be happy defending explicitly -- e.g., a willingness to get Roe v Wade overturned if it looks politically feasible, or to obstruct the teaching of evolution as fact in school science lessons, or to restrict the rights of same-sex couples.
(Imagine that you are a socially conservative politician. A substantial fraction of the votes you get are going to come from conservative Christians. If you have a choice of two ways to express yourself, no different in their explicit commitments, little different in their impact on people who will be voting against you anyway -- but one of them makes it that bit more likely that your conservative Christian listeners will see you as one of them and turn out to vote for you ... wouldn't you be inclined to choose that one? And isn't that exactly what's meant by "dog-whistling"?)
Replies from: Salemicus↑ comment by Salemicus · 2015-02-06T08:10:23.047Z · LW(p) · GW(p)
An advertisement saying "It's not racist to impose limits on immigration" isn't dog-whistling, and I don't think anyone claims it is.
Yet according to the Wikipedia article you linked, that was claimed to be "the classic case of dog-whistling." So I find this discussion frustrating because you don't seem willing to come to terms with how the phrase is actually used.
Replies from: gjm↑ comment by gjm · 2015-02-06T09:20:58.810Z · LW(p) · GW(p)
What the page says was called the classic case of dog-whistling is a whole advertising campaign.
I checked what Goodin's book (cited at that point in the Wikipedia article) actually says. It doesn't reference any specific advertisements in the campaign, and in particular doesn't describe the specific one you picked out as dog-whistling. It does, however, say this:
The fact that the practice is noticed, that it has acquired a name and a bad press, suggests that the message is not literally inaudible to others beyond its intended target. They have noticed it. And by identifying the trick and giving it a name, they have (after a fashion) worked out a way around it.
all of which seems to be in line with what I've been saying.
[EDITED to fix a spelling and add: I don't have a copy of Goodin's book; I checked it using Amazon's "look inside" feature. This means that while I was able to look up the bit quoted in Wikipedia and the bit I quoted above, I couldn't see the whole chapter. I did, however, search for "not racist" (two key words from the specific advertisement you mentioned) and get no hits, which I think genuinely means they don't appear -- it searches the whole book even though it will only show you a small fraction.]
↑ comment by Jiro · 2015-02-05T21:06:20.174Z · LW(p) · GW(p)
Surely the very theory of dog-whistles should predict that Democrats wouldn't recognise that these are racist claims.
That shows that they wouldn't recognize a well-done dog-whistle, but they may still recognize a poorly done one, where the Republicans attempt to make it something only racists understand but do not succeed.
↑ comment by buybuydandavis · 2015-02-05T21:08:46.022Z · LW(p) · GW(p)
I think the real meaning of "dog-whistle politics" is trying to ascribe an unpleasant secret meaning to your ideological opponents' straightforward positions
You describe the flip side of dog-whistle politics, which is false accusations of dog-whistle politics. I think both happen.
Sometimes the accusations are justified, but the tactic is extremely poisonous to the discussion, and even to your own thinking. It's easy to project secret evil intent onto someone you already have some fundamental disagreement with, and it relieves you of actually having to confront the argument as given. "He really means..."
As I noted before, in forums like these, where people come in and out of threads, and haven't read all that goes on, I think the accusations are an extra loser, as you just end up looking like a putz, objecting to the unobjectionable.
Argue against posts as given.
if you can hear the dog-whistle, you are the dog
That's a catchy comeback on a less-than-perfect metaphor, but not literally true. Sometimes people really do dog whistle, and sometimes you can hear it even if you're not the dog (the intended audience).
↑ comment by ChristianKl · 2015-02-06T00:43:15.055Z · LW(p) · GW(p)
I find it strange that people get so irate over the suggestion that people develop interpersonal skills, and that their parents should help them do so if they see a lack.
In our society I think there's a belief that in most instances where parents try to interfere with their children's dating choices, it doesn't help.
A lot of liberal-minded parents don't think that their own parents' attempts to intervene in their dating lives were positive, so they don't try to intervene in their children's.
The choice to use an abbreviation like AMSS is also worth noting. People with normal social skills usually don't speak in terms of AMSS. Using language that produces emotional detachment is a typical PUA thing.
Replies from: buybuydandavis, Salemicus↑ comment by buybuydandavis · 2015-02-06T20:59:19.299Z · LW(p) · GW(p)
In our society I think there's a belief that in most instances where parents try to interfere with their children's dating choices, it doesn't help.
I think children tend not to want their parents to try to choose their partners, but at least in my social circles, I think it was quite rare for parents to try to impart relationship/dating skills to their children.
People with normal social skills usually don't speak in terms of AMSS. Using language that produces emotional detachment is a typical PUA thing.
Specialized in-group jargon and acronyms show up in a lot of places -- including one nearby that I can think of.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-07T22:08:23.144Z · LW(p) · GW(p)
Specialized in-group jargon and acronyms show up in a lot of places.
I'm not criticizing it for being jargon. A word like "steelmanning" is also jargon. But it's not in the same category of emotional distancing as "AMSS".
There are also times when intellectual distance is useful. In academia you don't want emotions to interfere with your reasoning. In the case of PUA, the language allows suppression of approach anxiety. Intellectual distance allows a PUA to run his routine without interference from his emotions. At the same time it prevents real emotional connection, framing interactions in terms of maximizing the number of k-closes, n-closes, and f-closes.
↑ comment by Salemicus · 2015-02-06T11:47:04.242Z · LW(p) · GW(p)
In our society I think there's a belief that in most instances where parents try to interfere with their children's dating choices, it doesn't help.
An excellent point, although I think this belief is much more widespread among liberals than conservatives. And I think it's part of a larger point, which is that liberals seem to be far more negative across the board towards parental involvement in their children's lives. I vividly remember my own shock and incomprehension when I first encountered this attitude - that young people need to challenge, overturn, or break free from parental authority. I still have to remind myself that some people think like this, because it's so alien to my understanding of the world. For me it is completely natural that I would want my parents to intervene in my dating life - whether to set me up with someone they considered suitable, to warn me against someone unsuitable, to advise me where I am lacking, or whatever else - because they know me better than anyone else, and they can only have my best interests at heart. Of course I should try to adopt and carry on my parents' values as best I can. And so on. I don't think it's purely a liberal/conservative thing, but I do think it's part of it.
Examples: I recently saw this article cited as an example of unfit parenting because the parents see their kids as "raw materials for their culture cloning project," and I saw this post heavily upvoted. My reaction was exactly the opposite: to applaud the Christians' attempts to pass down their values (although I do not personally share them) and to sigh at what seemed like the narcissism of the LessWrong poster.
Predictions (because any theory is worthless if it doesn't make them): I predict that conservatives would be much more willing than liberals to support statements like "Parents should make sure their sons grow up with manly skills" and "Parents should intervene when they see their children making bad choices in their romantic lives."
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2015-02-06T15:47:49.810Z · LW(p) · GW(p)
I vividly remember my own shock and incomprehension when I first encountered this attitude - that young people need to challenge, overturn, or break free from parental authority. I still have to remind myself that some people think like this, because it's so alien to my understanding of the world.
That's interesting. I come from the entirely opposite side -- it's not really comprehensible to me how and why parents feel the need to run their children's lives past their late teens. And here you are, in the bit-flesh :-)
↑ comment by ChristianKl · 2015-02-06T15:49:32.912Z · LW(p) · GW(p)
Being an adult is partly about taking responsibility for one's own life. The man who talks to a woman because his mother told him to might lack the qualities of manly social interaction.
Predictions (Because any theory is worthless if it doesn't make them): I predict that conservatives would be much more willing than liberals to support statements like "Parents should make sure their sons grow up with manly skills" and "Parents should intervene when they see their children making bad choices in their romantic lives."
I agree, that's likely true.
↑ comment by buybuydandavis · 2015-01-26T08:15:56.720Z · LW(p) · GW(p)
I'd like to tell younger men who have had problems with acquiring the AMSS that they need to think about these other consequences of its absence
From a guy pushing 50, I'd say you're giving the younger men here good advice.
I don't know what's with all the downvotes. Mentions of PUA? Statements of politically incorrect truisms?
Replies from: philh, passive_fist↑ comment by philh · 2015-01-26T10:21:56.046Z · LW(p) · GW(p)
I would say that there's a version of advancedatheist's comment (and many of his other comments) which is giving good advice based on truthful premises, but it's not this version, and advancedatheist gets approximately zero benefit of the doubt at this point.
Like, yes, it is probably true that failing to develop a complete social skill set will cause you social problems later in life, even in those parts of life that are not to do with sex or dating. Turns out, social skills are also used in the workplace.
But taken in context, that advice reads more like "men should learn the skills to help them pick up women, and this will help them in the workplace", which needs a lot more justification. And we also get "if girls aren't attracted to your son, you need to fix your son", which... there might be a nugget of value somewhere nearby in advice-space, but as written it has so many issues that all I feel like saying is "fuck that".
(I don't feel like continuing to pay the karma tax, so I probably won't continue this.)
Edit (because I'd like to make an unrelated point without paying the tax twice): I also feel like there's a common theme in aa's posts in the open thread. He'll ask a question that sounds fairly generally applicable and rationality-related. Then he'll say something which is related to the question, but which mostly sounds like it's about PUA from the perspective of assuming PUA is (good/true/praiseworthy/whatever).
And then consider the comment "some blogger wrote about AI. I don't know why he bothers to blog, he doesn't get as many comments as popular bloggers like ". (Admittedly, I don't actually recognise all those names.) Why those specific bloggers? If someone were to actually attempt to compile a list of blogs based on their popularity, would any of those names come up? Does Carrico have anything in common with those people? Why even bring up the question of why Carrico bothers?
aa doesn't seem to be posting in good faith here. He just seems to have an agenda of popularising PUA (with perhaps a side order of neoreaction or something along those lines), and while I don't dislike PUA as much as some, I would like him to shut up and go away.
I'm not saying this for the benefit of aa, because I'm pretty sure he knows what he's doing and engaging with him would be unhelpful. But for the benefit of others who wonder why he seems to get downvoted so much: this is why I, personally, am quick to downvote him, and I imagine others are similar. (I don't downvote him automatically, however.)
Replies from: Nornagest, buybuydandavis↑ comment by Nornagest · 2015-01-28T00:11:45.990Z · LW(p) · GW(p)
"...he doesn't get as many comments as popular bloggers like " [...] Admittedly, I don't actually recognise all those names.
Most of those aren't PUA bloggers, actually, although they do recognizably share a certain cluster of perspectives. Megan McArdle is a libertarianish policy blogger with the Atlantic. Vox Day is mainly a spec-fic blogger, lately notorious for association with what SSC readers might recognize as l'affaire du reproductively viable worker ants. Steve Sailer is hard for me to classify, but in this crowd he'd probably be best known for what I'll politely describe as contrarian views on race.
If I had to guess, I'd say they're probably just the most famous bloggers that a specific right-of-center geek happened to have read recently.
↑ comment by buybuydandavis · 2015-01-26T20:41:24.556Z · LW(p) · GW(p)
He just seems to have an agenda of popularising PUA (with perhaps a side order of neoreaction or something along those lines), and while I don't dislike PUA as much as some, I would like him to shut up and go away.
Yeah, this is about what I thought.
It seems to me that ideologically based group karma bombing is a general violation of the norms necessary for a civilized community, but it happens here fairly frequently, and it happens predominantly in one ideological direction.
All sorts of people charge about on their hobby horses around here. Are you so quick to karma bomb the riders who are more ideologically sympatico with you? Not so much, right?
I suppose it's rather useless of me to complain. You want him to shut up. You and your ideological compatriots have achieved the next best thing - disappearing his posts into the karma memory hole. Mission Accomplished.
What do you suppose would happen if people more of my ideological ilk started to respond in kind? Isn't tit for tat the game theoretic appropriate response?
Replies from: JoshuaZ, philh↑ comment by JoshuaZ · 2015-01-26T20:59:20.398Z · LW(p) · GW(p)
That's both an unfair and unwarranted response. First of all, there's been "karma bombing" of a variety of different forms by people of different political tribes here, the most prominent of course being Eugine_Nier and his sockpuppets' systematic downvoting of all comments by people whose politics he disliked. Second of all, there's a big difference between karma bombing in the Eugine sense and people as individuals downvoting individual comments that are overly political. Third of all, part of the problem here is that AA shoves his politics into almost every single post even when the connection is at most very tenuous. I suspect I'm thought of by people here as more on the "left", but I'm pretty sure that if someone was throwing asides into comments about how the Republican party was racist or sexist, or similar remarks, I'd downvote that person and they'd end up in a pretty similar situation. Fourth of all, at least one user below has made clear that they have some sympathies with AA's viewpoints and downvoted him because the comment's quality and general politics made it not good content.
Arguing that this is about one side downvoting people from the other side really misses what is going on here.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2015-01-26T23:13:05.678Z · LW(p) · GW(p)
First: Indeed, Eugine violated civilized norms, and was booted. What a strange coincidence that it was an unprogressive fellow that got booted. That's as much evidence for my thesis as against.
Second: Ah, so AA saying that people should attend to their game is being "overly political". Seems a stretch. I guess for some people everything is political, but if so, complaining that a post is political makes little sense.
Third: I thought voting a person down was a no-no. That was what made Eugine's downvoting a crime, no? I thought we were supposed to be downvoting a post based on its own content. I note that the response I received -- "I would like him to shut up and go away" -- was offered in justification of the downvoting. Where are the villagers and their pitchforks calling for banning the miscreant?
but I'm pretty sure that if someone was throwing asides into comments about how the Republican party was racist or sexist, or similar remarks, I'd downvote that person and they'd end up in a pretty similar situation.
Your certainty is misplaced. I was involved in exactly the kind of case you posit, where someone basically cast conservatives as in league with Lucifer, and he was upvoted to the moon. When I called him on it, I was downvoted to oblivion. He had the decency to engage the issue, and eventually agreed that he had unfairly maligned conservatives, and hadn't really realized he had done it at the time. Would you be so surprised if you had been one of the people upvoting his original post with its slur against conservatives?
Fourth: "comment's quality and general politics made it not good content."
As for the quality, it was the clear expression of an idea relevant to winning that you don't hear so often. I call it a good point. It is the Open Thread after all. I don't expect dissertations here.
As for the "general politics" - what would that be? It's political to suggest that your interpersonal skills have a large effect on your life, so you should see about getting good at them? We shouldn't talk about interpersonal skills?
Replies from: gjm, Viliam_Bur, JoshuaZ↑ comment by gjm · 2015-01-27T00:43:21.197Z · LW(p) · GW(p)
What a strange coincidence [...]
The equivalence I think you're appealing to doesn't look real to me. Let's suppose you're right about what's happened to advancedatheist: he posts something with a particular political/social leaning, lots of leftish people don't like it, and they pile on and downvote it into oblivion. Contrast with what Eugine is alleged (with good evidence, I think) to have done: someone posts something with a particular political/social leaning, Eugine doesn't like it, and he downvotes 50 of their other comments. Two key differences: (1) In the first case, the thing getting zapped is the comment that these people disapprove of; in the second, it's a whole lot of comments there's nothing wrong with even by Eugine's lights other than who posted them. (2) In the first case, each person is downvoting something they disapprove of; the total karma hit advancedatheist gets is in proportion to the number of disapproving people and the number of disapproved comments; in the second, Eugine is doing it all himself; the total karma hit his target gets is limited only by Eugine's patience.
"I would like him to shut up and go away"
No doubt it's very disagreeable to want someone to shut up and go away. But rather than cherrypicking those 10 words, let's take a look at the context -- which seems to me to have (again, by coincidence) two key differences from that of EN's karmattacks. (1) philh is making a very specific accusation about advancedatheist: that he is not participating here in good faith because he seems only really to be interested in popularizing PUA, even in discussions that have basically nothing to do with it. I don't know whether philh is right or wrong about this, but I'm pretty sure Eugine couldn't and wouldn't have claimed with a straight face that the people he karma-bombed here are (e.g.) only posting to spread left-wing ideas or feminist ideas or whatever. It seems to me that there's a big, big difference between "X is on the wrong side politically; therefore, I would like him to shut up and go away" and "X is trying to force his pet single issue into every discussion on LW and contributing little else; therefore, I would like him to shut up and go away". (2) What philh is owning up to doing to advancedatheist on account of this is not the same as what Eugine is alleged to have done to lots of people. Eugine: downvoting dozens of comments merely because they're posted by one of his targets and afford an opportunity to inflict downvotes. philh: being quick on the trigger when he sees a low-quality comment from advancedatheist. Again: big difference between "I would like X to go away, so I'll downvote all his comments until he does" and "I would like X to go away, so when he says something I think is of low quality I'll downvote it more readily than I would a similar comment from someone else".
For the avoidance of doubt, I am not endorsing the behaviour I think philh has admitted to here. I think it would be better if he didn't do it. Someone who's abusing the community by spamming LW with single-issue stuff is going to get downvoted to hell purely on his comments' actual merits, with no need for the itchy trigger finger, and that's a good thing. (And I think it's what's been happening to advancedatheist.)
I was involved in exactly the kind of case you posit, where someone basically cast conservatives as in league with Lucifer
No, he really didn't. Not remotely. This is the article in question. (Right?) There's nothing there remotely like casting conservatives as in league with Lucifer. What there is -- and I think this is what Kaj later agreed you were right about -- is something much less stupid, less harmful, and less likely to be the result of ill will: in one place he gave an example involving social conservatives' thinking about liberals' legislative preferences regarding homosexuality, and he didn't do a very good job of getting inside social conservatives' heads, and consequently his description was inaccurate and made them sound sillier and more unreasonable than they (typically) actually are. That's all.
(To put it differently: I suggest that nothing Kaj wrote was any more inaccurate or uncharitable than your description of him, just now, as having cast conservatives as in league with Lucifer.)
In any case, Kaj's article is irrelevant here, and would be even if he'd been much ruder about social conservatives than he actually was. Because JoshuaZ's complaint about advancedatheist is that he shoves highly-charged political asides into comments about other things. Whereas Kaj's whole post was, precisely, about how people think about their political opponents. No matter what he said there, there's no way it could have been an example of the behaviour JoshuaZ is criticizing advancedatheist for.
Replies from: Viliam_Bur, buybuydandavis↑ comment by Viliam_Bur · 2015-01-27T10:52:49.787Z · LW(p) · GW(p)
Contrast with what Eugine is alleged (with good evidence, I think) to have done: someone posts something with a particular political/social leaning, Eugine doesn't like it, and he downvotes 50 of their other comments.
I confirm that this is what Azathoth123 has done. (I assume with high probability that Azathoth123 is Eugine, but I cannot confirm that. Since both are banned, I don't care anymore.) Even towards new users. A new user comes, posts a dozen comments, receives one downvote on each, leaves LW and doesn't return. One of the comments happened to be political; the remaining ones were just the kind of comments we usually have here. No other downvotes for that user from anyone else. This in my opinion is much more harmful than downvoting old users, who usually have high enough karma that they are in no danger of dropping to zero, and who understand that it is only one person punishing them for having expressed a political opinion, not a consensus of the whole website.
It is a completely different behavior from downvoting the political comment and leaving the other comments untouched. From that kind of feedback people can learn "don't post this kind of comment". From Eugine's kind of feedback, the only lesson is "someone here doesn't like you (and doesn't even bother to explain why), go away". And Eugine's algorithm for giving this feedback is far from representative of the LessWrong culture.
↑ comment by buybuydandavis · 2015-01-27T04:03:13.152Z · LW(p) · GW(p)
The equivalence I think you're appealing to doesn't look real to me.
It's not identical, but similar.
In the first case, each person is downvoting something they disapprove of
First, I think there was a fair bit of disapproving of the person, because of his views, rather than of the post. The comments against the post seem to include a lot of analysis of AA's general behavior, not specific textual analysis of the post.
advancedatheist gets approximately zero benefit of the doubt at this point.
That's about the person, not about the particular post. A particular chunk of text doesn't need a "benefit of the doubt", it needs to be read.
Voting down a post because of the person, and not the post, was the primary charge against Eugine. If he had cast 50 downvotes but detailed the specific failings of each post, what grounds would there have been to boot him?
Second, Eugine's crime was the violation of list norms on the use of karma. Is it not a violation of the professed list norms to vote an article down just because you disapprove of its author's views?
that he is not participating here in good faith because he seems only really to be interested in popularizing PUA
Since when is it bad faith to have a particular hobby horse that one rides? I see a lot of "effective altruism" posts and comments. You voting those down too?
And no, what people are doing here is not existentially identical to what Eugine did. "Not exactly the same as the tarred and feathered pariah" is not exactly the greatest defense.
Let's suppose you're right about what's happened to advancedatheist: he posts something with a particular political/social leaning, lots of leftish people don't like it, and they pile on and downvote it into oblivion.
OK, let's suppose I'm right. That's usually a good bet.
Do you consider such behavior acceptable? Desirable? Consistent with the professed norms of behavior of the list?
No, he really didn't. Not remotely. This is the article in question. (Right?)
Yes. That was the article.
Much to his credit, Kaj admitted that he had unfairly cast his opponents as "morally reprehensible". http://lesswrong.com/lw/dc5/thoughts_on_moral_intuitions/71uz
Argue with him about it if you like. I did my time on that one.
Kaj's article is perfectly relevant to JoshuaZ's claim here:
that if someone was throwing asides into comments about how the Republican party was racist or sexist, or similar remarks, I'd downvote that person and they'd end up in a pretty similar situation.
The scenario he described happened, and the author did not end up in a situation similar to AA's. Far from it: he was applauded.
Replies from: gjm, JoshuaZ↑ comment by gjm · 2015-01-27T10:24:30.633Z · LW(p) · GW(p)
"Benefit of the doubt"
Yes, giving (or not giving) someone the benefit of the doubt on a particular occasion involves your opinions about the person and not just what they've done on that occasion. No, I don't see why that should be a problem. (Suppose an LW poster whom you know to be sensible and intelligent posts something that seems surprisingly stupid. I hope you'll give serious consideration to the possibility that you've misunderstood, or they're being ironic, or there's some subtlety they've seen and you haven't. Failing that, you'll probably guess that for whatever reason they're having a bad day. Whereas if someone whose contributions you regard as generally useless posts something stupid-looking you'll probably just think "oh yeah, them again". And there's nothing wrong with any of that.)
The worst problem with mass-downvoting of the sort Eugine got booted for isn't that his voting wasn't completely blind to who wrote the things he was voting on. It's that it ignored everything else.
(And: Yes, it is a violation of the professed norms around here to vote something down just because you disapprove of its author's views. You ask that question as if we're faced with a bunch of examples of people doing that, but I'm not seeing them.)
Hobby horses
LW has a bunch of pet topics. Effective altruism has (not always by that name) always been one of them. If someone only ever posts on LW about effective altruism, that in itself doesn't make their contributions unhelpful. PUA is not in that situation; my impression is that a few people on LW are really interested in it, a (larger) few are really offended by it, and most just aren't interested. So someone posting only about PUA is (all else being equal) providing much less value to LW than someone posting only about EA.
But all else is not equal. What advancedatheist is accused of isn't merely posting only about PUA, it's shoehorning PUA into discussions where it doesn't belong. If someone did that with EA, I think there would be plenty of complaints and downvotes flying around after a while.
"Not exactly the same"
"Not exactly the same as the tarred-and-feathered pariah" is a pretty good defence, when the attack it's facing is "see, you're doing the same as the tarred-and-feathered pariah". And actually what I'm saying is "Quite substantially less bad than the tarred-and-feathered pariah". And you may recall that it was controversial whether Eugine should be sanctioned for his actions; so what I'm saying is actually "Quite substantially less bad than that guy whose behaviour we had trouble deciding whether to punish".
Piling on
If someone posts something lots of people don't like for political reasons and it gets jumped on for political reasons: no, I don't like it much. Nor for that matter if they post something lots of people do like for political reasons and it gets upvoted to the skies.
It may at this point be worth remarking that, so far as I can see, advancedatheist's comments are not heavily downvoted overall right now. Maybe that's partly because of this discussion; I don't know. But it doesn't actually seem as if he's being greatly harmed, or his comments being effectively silenced, on account of their political content.
Anyway: as I say, I think it's a shame if something gets a huge pile of negative karma merely for being politically unpopular. But unacceptable or inconsistent with professed norms hereabouts? No, I don't think so.
Kaj's comments on social conservatives
I didn't dispute that Kaj agreed he'd been too negative about social conservatives. I did dispute (and continue to dispute) that he did anything remotely resembling saying that they're in league with Lucifer. What Kaj agreed with you about was the first of those; what you've claimed here and I've disagreed with is the second.
And no -- for reasons I've already given, but you've completely ignored -- it was not an instance of the scenario JoshuaZ described. Because
- what JoshuaZ described was someone chucking in irrelevant anti-Republican comments into discussions they're irrelevant to; whereas
- Kaj's article was all about dealing with political disagreement, a lot of it was about how important it is to understand your political opponents and not strawman them, and it just happened he didn't do as good a job as he should have of doing that (even though, as I think we agree, he was trying to).
These are not remotely the same thing. Irrelevant politically-charged asides versus mentioning politics in an article about politics. Overt hostility to a particular group versus limited ability to portray a group accurately.
(Also: you can only cast one vote on a given article. The paragraph you didn't like was one of dozens. I see no reason to think that what got Kaj's article applauded and upvoted was that he misrepresented social conservatives rather than all the other stuff in it. Is it your opinion that if an article or comment contains anything in it that is less than perfectly charitable to the author's political opponents, it should be downvoted? You might want to be careful about your answer.)
Replies from: buybuydandavis, buybuydandavis↑ comment by buybuydandavis · 2015-02-04T20:57:57.555Z · LW(p) · GW(p)
[I had mistakenly replied to my own post instead of yours.]
Yes, giving (or not giving) someone the benefit of the doubt
It shouldn't be about him, it should be about his post. Maybe he's in league with Lucifer too, but that doesn't make any of his posts any more true or false.
you'll probably guess that for whatever reason they're having a bad day. Whereas if someone whose contributions you regard as generally useless posts something stupid-looking you'll probably just think "oh yeah, them again". And there's nothing wrong with any of that.
If having a bad day means writing a bad post, then you get a downvote.
You're just letting your prior on the person determine your vote. Which you say you disapprove of.
Yes, it is a violation of the professed norms around here to vote something down just because you disapprove of its author's views.
a (larger) few are really offended by it,
I'm not really big on giving offense utility monsters a veto. Once you pay the Dane-geld, you never get rid of the Dane.
People are offended at PUA. Do they really not comprehend that plenty of people find their views offensive in turn? Just as everyone is the hero in their own story, there's a pretty good chance you're the villain in the stories of a lot of other people.
But I admit that's something of the clash of civilizations going on here. Many people feel that their group's "offense" should tally up in the utility machine, and they should thereby get their way. I don't.
where it doesn't belong
So it doesn't belong in the Open Thread?
Anyway: as I say, I think it's a shame if something gets a huge pile of negative karma merely for being politically unpopular. But unacceptable or inconsistent with professed norms hereabouts? No, I don't think so.
You just said:
Yes, it is a violation of the professed norms around here to vote something down just because you disapprove of its author's views.
Replies from: gjm
↑ comment by gjm · 2015-02-04T22:46:22.197Z · LW(p) · GW(p)
It shouldn't be about him, it should be about his post.
When a post is somewhat ambiguous, it's reasonable to consider its context. That includes considering who posted it and what their likely reasons were. (Because it influences what is likely to happen in the ensuing discussion, if any.)
I'm not really big on giving offense utility monsters a veto.
Just as well no one suggested that, then. If you're suggesting that I am proposing giving offense utility monsters a veto, then I politely request that you reread the whole of the sentence from which you quoted eight words and reconsider what might be leading you to misinterpret so badly. (Incidentally: Kipling reference noted.)
Do they really not comprehend that plenty of people find their views offensive in turn?
I don't see any reason to think otherwise. If someone came along who only wanted to talk about how awful the PUA crowd is, and wedged complaints about that into discussions in which they have no place, I don't imagine that would be much more popular than advancedatheist's alleged wedging of pro-PUA material into inappropriate contexts.
So it doesn't belong in the Open Thread?
I think you are mixing levels here. I am not complaining about advancedatheist, I am commenting on philh's complaints about him and on the parallels you're drawing. The accusation being levelled at advancedatheist (or at least part of it) is that he tries to shove PUA advocacy into discussions of other things. If in fact all he's been doing is saying "yay PUA" in top-level open thread comments, then it's an unfair accusation (though I think "yay PUA" and "boo PUA" belong in LW open threads about as much as "yay President Obama" or "boo Manchester United Football Club") but that's an entirely separate question from whether there's an inconsistency between complaining about Eugine Nier's mass-downvoting and not complaining about the downvotes some of advancedatheist's comments have received.
You just said: [...]
No contradiction. The distinction you may be missing is between "because you disapprove of its author's views" and "because you disapprove of the views expressed in that comment". If I post one comment saying "Adolf Hitler was an admirable leader and we should give his policies another try" and one saying "Kurt Goedel proved the relative consistency of CH with ZF by proving that CH is true in the constructible universe and that Con(ZF) implies Con(ZF & V=L)", then it is a violation of local professed norms if the latter comment gets downvoted because the former is horrible, but not if the former one does.
↑ comment by buybuydandavis · 2015-01-28T01:53:00.300Z · LW(p) · GW(p)
These are not remotely the same thing. Irrelevant politically-charged asides versus mentioning politics in an article about politics. Overt hostility to a particular group versus limited ability to portray a group accurately.
It's not that he was mentioning politics in an article about politics. Talking about political slurs would be relevant to an article about politics. Making political slurs generally wouldn't be.
But altogether lost in the brouhaha over my original objections to Kaj's post was that his false characterization made for a bad argument. He did worse than be uncharitable, he did worse than slur his opponents: he made a bad argument relying on a smear for much of its force.
And as far as I was concerned, the people who upvoted him did much worse in circling the wagons around a bad argument dependent on a cheap slur, even after it was pointed out to them.
Is it your opinion that if an article or comment contains anything in it that is less than perfectly charitable to the author's political opponents, it should be downvoted?
No, less than perfect is not my standard for downvotes.
Mischaracterizing your opponents as supporting something morally reprehensible probably qualifies. Making a bad argument based on such a mischaracterization certainly does. Defending the mischaracterization would as well.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2015-01-28T02:27:10.499Z · LW(p) · GW(p)
[Deleted post mistakenly posted as a reply to myself. Moved up one level.]
↑ comment by JoshuaZ · 2015-01-27T04:18:48.867Z · LW(p) · GW(p)
that if someone was throwing asides into comments about how the Republican party was racist or sexist, or similar remarks, I'd downvote that person and they'd end up in a pretty similar situation.
The scenario he described happened, and the author did not end up in a situation similar to AA's. Far from it: he was applauded.
It should be easy to see how these situations are different. A well-respected user wrote an article with a large amount of other content, and was explicitly trying their best to model a wide variety of people they disagree with. (And mind you, many people would likely upvote Kaj by default simply based on their general inclination. This clearly happens with some of the more popular writers here. Heck, I've occasionally upvoted some of gwern's posts before I've finished reading them.) That is not the scenario in question, where the remarks are shoved into repeated comments that have little or nothing to do with the topic at hand, and where those are almost all the comments the user in question has. Kaj was specifically talking about how people think about politics and was trying to be charitable (apparently failing to do so properly, but not for lack of trying). Now imagine Kaj kept shoving such comments in while making apparently almost zero effort to actually respond to either questions or criticisms. That would be the scenario under discussion.
I (and I suspect many people here) would not react to you the way they've reacted to AA, partially because you respond to comments, and frankly when you do make political statements they are generally more clearly laid out, more reasonable, and more interesting than the throwaway cheering remarks that AA makes. I am, however, surprised by how downvoted your initial comment was there -- it does have serious issues, such as the claim that Kaj doesn't have regular conservative readers, but it is surprisingly downvoted; I do have to wonder if part of that is a status thing (Kaj being of relatively high status here and you being of status more in my range or slightly higher). However, that's not a great explanation, and it does make me update in the direction of there being some, for lack of a better term, liberal pile-ons.
Incidentally, note that your guess that I had downvoted you in that thread was wrong: I had not seen that thread until it was pointed out here, and had not read Kaj's piece either. I'm curious, meanwhile, whether you'll take me up on my offer of a bet about your attitude toward AA changing in the next few months.
↑ comment by Viliam_Bur · 2015-01-27T10:32:55.861Z · LW(p) · GW(p)
Indeed, Eugine violated civilized norms, and was booted. What a strange coincidence that it was an unprogressive fellow that got booted.
Excuse me, but this seems like saying: "Indeed, Eugine was the person who was strategically downvoting his political opponents. But it is still a strange coincidence that he also happened to be the person who was banned for strategically downvoting his political opponents." I fail to see the strangeness here.
I guess the charitable interpretation is that the list of bannable offenses is purposefully generated to include things done by unprogressive people, and to exclude things done by progressive people (which I guess means pretty much everyone who is not a neoreactionary). For example, if instead it were someone else mass-downvoting neoreactionaries (not just for political comments, but, once they had made a political comment, for everything, including meetup announcements), and Eugine were posting pictures of kittens, then the moderators of LessWrong would decide that mass-downvoting is perfectly acceptable, but that pictures of kittens deserve a lifetime ban.
Is this what you are suggesting?
↑ comment by JoshuaZ · 2015-01-27T00:32:59.810Z · LW(p) · GW(p)
What a strange coincidence that it was an unprogressive fellow that got booted. That's as much evidence for my thesis as against.
In what universe? Are you claiming that Eugine got booted by what? The evil cabal of moderators who want to push left-wing politics?
But if you want, I'll make a fun related prediction: Within 3 months you'll agree with me that AA has been violating community norms. What do you want to bet on that?
Second: Ah, so AA saying that people should attend to their game is being "overly political". Seems a stretch. I guess for some people everything is political, but if so, complaining that a post is political makes little sense.
Don't be daft. Claims like his fall squarely into PUA and neoreactionary territory. Moreover, the phrasing was political.
Third: I thought voting a person down was a no-no. That was what made Eugine's downvoting a crime
No. The problem with Eugine was that he was a) repeatedly downvoting people's comments which had nothing to do with politics or controversial issues -- for crying out loud, he was downvoting meetup announcements -- and b) using sockpuppets to get the karma to do it. (Note that, for example, I've downvoted two of advancedatheist's recent comments but not most of them, and upvoted one of the less political ones.)
Your certainty is misplaced. I was involved in exactly the kind of case you posit, where someone basically cast conservatives as in league with Lucifer, and he was upvoted to the moon. When I called him on it, I was downvoted to oblivion.
Link?
As for the quality, it was the clear expression of an idea relevant to winning that you don't hear so often.
Really? I could imagine much clearer, much more steelmanned, and frankly more interesting versions of the same idea. I can easily supply them if you want.
It's political to suggest that your interpersonal skills have a large effect on your life, so you should see about getting good at them? We shouldn't talk about interpersonal skills?
No. But it is politics to claim that interpersonal skills are intrinsically involved in issues of sex and gender, that those issues always come up in the way he described, and the like.
↑ comment by philh · 2015-01-27T12:10:02.684Z · LW(p) · GW(p)
A lot of my reply has been covered already, so I'd just like to make a few points that I don't think have been made so far.
It looks like I've downvoted fewer than a dozen of aa's comments.
One more reason that I don't think I'm like Eugine is that if aa ever actually asked "I'm being downvoted a lot, what gives?" I would be happy to explain. As far as I know he's never asked, which is one of the reasons that I think he's acting in bad faith. Eugine didn't do this: IIRC, at least one of his targets said that they PMed him asking for an explanation, and received another round of downvoting.
This particular post really does strike me as bad. Yes, one could steelman it into something reasonable. I don't feel inclined or obligated to do that. There would be little benefit to me compared to the other things I could put effort into. And I'm not going to do it for aa's benefit until he starts acting like a truth-seeker. This is part of what I meant by the benefit of the doubt.
(I think that this next paragraph is pointing in the direction of something true, but isn't quite right:)
I don't think aa's problem is just being overly political, or what his specific politics are. (My opinions about PUA and neoreaction are slightly negative, but sympathetic.) The way he's being political feels like an attempt at subversion. It's like he wants to shift the LW Overton window, and the way he's doing that is by acting like the Overton window is somewhere other than where it actually is. Maybe the Overton window should be wider, but the way to widen it is to argue for things that are outside it, acknowledging that they are currently not well regarded. If you act like PUA is inside the window when it isn't, then current readers will get less value than they could if you spoke to the current window; and outsiders will get the wrong impression of LW, which could be the entire point (drive away people who dislike it, attract people who like it).
↑ comment by passive_fist · 2015-01-26T10:13:50.771Z · LW(p) · GW(p)
I downvoted despite the fact that I sympathize with the basic gist of AMSS (although maybe not some of the details). The reason I downvoted is that I don't think LW is appropriate for discussions like this, at least not in this way.
If you want to make non-obvious statements like "Women tend to respect the sexually confident" then you need to 1) define what you mean by this, and 2) provide evidence or links to evidence. I'm all for a rational discussion based on psychological science about what causes sexual attraction in human beings. Nothing wrong with that.
Replies from: None↑ comment by [deleted] · 2015-02-01T05:36:43.229Z · LW(p) · GW(p)
I'm assuming the implied-to-be-already-known background here is something along the lines of "women find low-status men repulsive even in relatively non-sexual/non-romantic/etc. contexts", which is both true and probably a special case of "people find low status repulsive".
Replies from: passive_fist↑ comment by passive_fist · 2015-02-01T09:24:18.148Z · LW(p) · GW(p)
But it's definitely not known, at least not to me, that "people find low status repulsive." At the very least, I'd appreciate some evidence backing this up.
↑ comment by PhilGoetz · 2015-02-05T08:50:45.700Z · LW(p) · GW(p)
In my case, I was stuffed full of religious teachings that seem as if they were deliberately designed to make men unattractive. Ambition, money, and sexuality were literally demonized. I never saw my father kiss my mother without her turning away in disgust. I was discouraged from dating, from pride, from profitable careers, and in general from any actions that might lead to even moderate fame, fortune, or power.