Open thread, July 28 - August 3, 2014
post by polymathwannabe · 2014-07-28T20:27:12.810Z · LW · GW · Legacy · 243 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by sediment · 2014-07-28T22:21:55.822Z · LW(p) · GW(p)
I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog. The comment was on a post about a bracelet which one could wear and which would zap you with a painful (though presumably safe) electric shock at the end of a day if you hadn't done enough exercise that day. The post was decrying this as an example of society's rampant body-shaming and fat-shaming, which had reached such an insane pitch that people are now willing to torture themselves in order to be content with their body image.
I explained as best I could in a couple of shortish paragraphs some ideas about akrasia and precommitment in light of which this device made some sense. I also mentioned in passing that there were good reasons to want to exercise that had nothing to do with an unhealthy body image, such as that it's good for you and improves your mood. For reasons I don't fully understand, these latter turned out to be surprisingly controversial points. (For example, surreally enough, someone asked to see my trainer's certificate and/or medical degree before they would let me get away with the outlandish claim that exercise makes you live longer. Someone else brought up the weird edge case that it's possible to exercise too much, and that if you're in such a position then more exercise will shorten, not lengthen, your life.)
Further to that, I was accused of mansplaining twice, and then was asked to leave by the blog owner on grounds of being "tedious as fuck". (Granted, but it's hard not to end up tedious as fuck when you're picked up on, and hence have to justify, claims like "exercise is good for you".)
This is admittedly minor, so why am I posting about it here? Just because it made me realize a few things:
- It was an interesting case study in memeplex collision. I felt like not only did I hold a different position to the rest of those present, but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
- I felt my otherwise-mostly-dormant tribal status-seeking circuits fire up - nay, go into overdrive. I had lost face and been publicly humiliated, and the only way to regain the lost status was to come up with the ultimate putdown and "win" the argument. (A losing battle if ever there was one.) It kept coming to the front of my mind when I was trying to get other things done and, at a time when I have plenty of more important things to worry about, I wasted a lot of cycles on running over and over the arguments and formulating optimal comebacks and responses. I had to actively choose to disengage (in spite of the temptation to keep posting) because I could see I had more invested in it and it was taking up a greater cognitive load than I'd ever intended. This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you'll find it harder to disengage than you anticipated.
- It made me realize that I am more deeply connected to lesswrong (or the LW-osphere) than I'd previously realized. Up 'til now, I'd thought of myself as an outsider, more or less on the periphery of this community. But evidently I've absorbed enough of its memeplex to be several steps of inference away from an intelligent non-rationalist-identifying community. It also made me more grateful for certain norms which exist here and which I had otherwise come to take for granted: curiosity and a genuine interest in learning the truth, and (usually) courtesy to those with dissenting views.
↑ comment by pianoforte611 · 2014-07-29T00:21:11.066Z · LW(p) · GW(p)
but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
This is very frustrating and when I realize it is happening, I stop the engagement. In my experience, rationalists are not that different from smart science or philosophy types because we agree on very basic things like the structure of an argument and the probabilistic nature of evidence. But in my experience normal people are very difficult to have productive discussions with. Some glaring things that I notice happening are:
a) Different definitions of evidence. The Bayesian definition of evidence for A is anything that is more likely to be observed if A is true than if A is false. But for many people, evidence is anything that would happen given A. For example a conspiracy theorist might say "Well of course they would deny it if it were true, this only proves that I'm right".
b) Aristotelianism: the idea that every statement is either true or false and you can prove statements deductively via reasoning. If you've reasoned that something is true, then you've proved it, so it must be true. Here is a gem from an Aristotelian friend of mine: "The people in the US are big, it must be the food and they use growth hormones in livestock, therefore people in the US are big because of growth hormones".
c) Arguments that aren't actually arguments. Usually these are either insults or signals of tribal affiliation. For example "Good to know you're better than everyone else" in response to a critical comment. But insults can be more subtle and they can masquerade as arguments. For example in response to a call for higher taxes someone might say "If you love taxes so much then why aren't you sending extra money to the treasury?".
d) Arguments that just have nothing to do with their conclusion. An institute called Heartmath stated this gem (rough paraphrase): "The heart sends more information to the brain than the brain does to the heart, therefore the heart is more important than the brain".
e) Statistical illiteracy. I want to grab a flamethrower every time the following exchange happens:
Salviati: "According to this study people who are X tend to be Y"
Simplicio: "Well I know someone who is X but isn't Y, so there goes that theory"
f) Logical illiteracy:
Example 1:
Salviati: " If A then B"
Simplicio: "But A isn't true therefore your argument is invalid"
Example 2:
Simplicio: "X is A therefore X is B"
Salviati: "Let us apply a proof by contradiction. 'A implies B' is false because Y is A, but Y is not B"
Simplicio: "How dare you compare X to Y, they are totally different! Y is only not B because ..."
Sorry if the symbolic statements are harder to read; I didn't want to bring in too many object-level issues.
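The likelihood-ratio point in (a) can be sketched numerically. This is a minimal illustration, and all the probabilities below are hypothetical, chosen only to show why "of course they would deny it" is nearly worthless as evidence:

```python
# A toy Bayesian update, illustrating the definition of evidence in (a):
# an observation E counts as evidence for A only to the extent that E is
# more likely if A is true than if A is false.

def posterior(prior: float, p_e_given_a: float, p_e_given_not_a: float) -> float:
    """P(A | E) by Bayes' theorem."""
    numerator = prior * p_e_given_a
    return numerator / (numerator + (1 - prior) * p_e_given_not_a)

# The conspiracy theorist's "of course they would deny it": a denial is
# almost equally likely under both hypotheses, so it barely moves P(A).
print(posterior(0.01, p_e_given_a=0.99, p_e_given_not_a=0.95))  # ~0.0104

# Genuine evidence: an observation much likelier under A than under not-A.
print(posterior(0.01, p_e_given_a=0.90, p_e_given_not_a=0.05))  # ~0.154
```

The first observation leaves the posterior almost exactly at the 1% prior; only the second, whose likelihood ratio is far from 1, moves it appreciably.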
↑ comment by sediment · 2014-07-29T00:58:58.803Z · LW(p) · GW(p)
Sightings:
- Arguments that aren't actually arguments: argument by tribal affiliation was certainly in full force, as well as a certain general condescension bordering on insult.
- Statistical illiteracy: in an only minor variant of your hypothetical exchange, I said that very few people are doing too much exercise (tacitly, relative to the number of people who are doing too little), to which someone replied that they had once overtrained to their detriment, as if this disproved my point.
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post, which, even if granted, didn't actually detract from the essence of what I was saying. This seemed like a sort of "argument by attrition", or even just a way of saying "go away; we can tell you're not one of us."
A general pattern I've noticed: when processing an argument to which they are hostile, people often parse generalizations as unsympathetically as they can. General statements which would ordinarily pass without a second thought are taken as absolutes and then "disproved" by citations of noncentral examples and weird edge cases. I think this is pretty bad faith, and it seems common enough. Do we have a name for it? (I have to stop myself doing it sometimes.)
Your symbolic arguments made me laugh.
↑ comment by Toggle · 2014-07-29T01:56:02.360Z · LW(p) · GW(p)
Social justice, apropos of the name, is largely an exercise in the manipulation of cultural assumptions and categorical boundaries, especially the manipulation of taboos like body weight. We probably shouldn't expect the habits and standards of the social justice community to be well suited to factual discovery, if only because factual discovery is usually a poor way to convince whole cultures of things.
But the tricky thing about conversation in that style is that disagreement is rarely amicable. In a conversation where external realities are relevant, the 'winner' gets social respect and the 'loser' gets to learn things, so disagreement can be a mutually beneficial, happy event. But if external realities are not considered, debate becomes a zero-sum game of social influence. In that case, you start to see tactics pop up that might otherwise feel like 'bad faith.' For example, you win if the other person finds debate so unpleasant that they stop vocalizing their disagreement, leaving you free to make assertions unopposed. On a site like Less Wrong, this result is catastrophic, but if your focus is primarily on the spread of social influence, then it can be an acceptable cost (or outright free, if you're of the postmodernist persuasion).
My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex. A random conversation about fat acceptance with culturally modal people might be a great deal less stressful. But then again, you probably shouldn't trust somebody else on LW to say that.
(I upvoted Simplicio and Salviati, by the way.)
↑ comment by Scott Garrabrant · 2014-07-29T17:04:23.932Z · LW(p) · GW(p)
My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex.
I am very curious to what extent this is true, and would appreciate any evidence people have in either direction.
What is the cause of this? Is it just random fluctuations in culture that reinforce themselves? Perhaps I do not notice these problems in non-social-justice people just because they do not have an issue they care enough about to argue in this way. Perhaps it is just availability bias, as I spend too much time reading things social justice people say. Perhaps it is a function of the fact that the memes they are talking about have this idea that they are being oppressed, which makes them more fearful of outsiders.
↑ comment by zedzed · 2014-07-29T05:57:14.160Z · LW(p) · GW(p)
I'd call it being uncharitable. Extremely so, in this case.
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT'S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Because we're talking about being uncharitable, let's be charitable for a moment. Simplicio, in fact, made the mathematically proper counterargument: he produced a counterexample to a for-all claim. And finding one flaw with a mathematical proof is, in fact, sufficient to disregard the entire thing.
Clearly, though, Simplicio's argument is horrible and nobody should ever make it. If we check out the errata for Linear Algebra Done Right, we find that Dr. Axler derped some coefficients on page 81. His proof is incorrect, but any reasonable person can easily see how the coefficients were derped and what the correct coefficients were, and it's a trivial matter to change the proof to a correct proof.
Analogously, the proper response to an argument that's technically incorrect, but has an obvious correct argument that you know the author was making even if they phrased it poorly, is to replace the incorrect argument with the correct argument, not scream about the incorrect argument. Anyone who does anything differently should have their internet privileges revoked. It's more than a trivial inconvenience to write (and read) "the overwhelming scientific consensus indicates that, for most individuals, increasing exercise increases lifespan, although there's a few studies that may suggest the opposite, and there's a few outliers for whom increased exercise reduces lifespan" instead of "exercise increases lifespan".
So, now our argument looks like
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT'S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Salviati: Principle of charity, bro
Now, if Simplicio applies principle of charity, then they'll never make arguments like that again, and we've resolved the problem. If they don't, we discontinue debating with them, and we've resolved the problem.
There's a few failure modes here. We create a new route down which debates about akrasia-fighting devices can be derailed. We give a superweapon to people who we probably shouldn't trust with one. They may google it and find our community and we won't be able to keep them out of our walled garden. (I jest. Well, maybe.) But introducing principle of charity to people who have clearly never heard of it feels like it should either improve the quality of discourse or identify places we don't want to spend any time.
↑ comment by A1987dM (army1987) · 2014-07-29T08:11:11.579Z · LW(p) · GW(p)
In regular English, “exercise increases lifespan” doesn't mean ‘all exercise increases lifespan’ any more than “ducks lay eggs” means ‘all ducks [including males] lay eggs’.
↑ comment by sediment · 2014-07-29T10:23:36.913Z · LW(p) · GW(p)
Well, there's a frustrating sort of ambiguity there: it's able to pivot between the two in an uncomfortable way which leaves one vulnerable to exploits like the above.
↑ comment by fubarobfusco · 2014-07-30T17:48:01.234Z · LW(p) · GW(p)
Sure, and it's also vulnerable to abuse from the other side:
"I have bogosthenia and can't exercise because my organs will fall out if I do. How should I extend my lifespan?"
"You should exercise! Exercise increases lifespan!"
"But my organs!"
"Are you saying exercise doesn't increase lifespan? All these studies say it does!"
"Did they study people with no organs?"
"Why are you bringing up organs again? Exercise increases lifespan. If you start telling people it doesn't, you're going to be responsible for N unnecessary deaths per year, you quack."
"... organs?"
↑ comment by Jiro · 2014-08-01T21:38:00.013Z · LW(p) · GW(p)
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post, which, even if granted, didn't actually detract from the essence of what I was saying.
I see this in lots of places where it's clearly not an argument by attrition. There's a sizable fraction of people on the Internet who are just over-literal.
↑ comment by A1987dM (army1987) · 2014-07-29T07:58:46.366Z · LW(p) · GW(p)
I said that very few people are doing too much exercise (tacitly, relative to the number of people who are doing too little), to which someone replied that they had once overtrained to their detriment, as if this disproved my point.
There's this issue though -- what matters is not the fraction of people who exercise too much among the general population, it's the fraction of people who exercise too much among the people you're telling to exercise more.
↑ comment by Luke_A_Somers · 2014-07-29T21:33:36.724Z · LW(p) · GW(p)
Not even that. It's the fraction of people who have known someone who thought they exercised too much at least once in their lives.
↑ comment by IlyaShpitser · 2014-07-29T18:32:35.503Z · LW(p) · GW(p)
It's a first contact situation. You need to establish basic things first, e.g. "do you recognize this is a sequence of primes," "is there such a thing as 'good' and 'bad'," "how do you treat your enemies," etc.
↑ comment by A1987dM (army1987) · 2014-07-29T07:47:08.697Z · LW(p) · GW(p)
Simplicio: "Well I know someone who is X but isn't Y, so there goes that theory"
“Aren't you afraid of flying after that plane was shot down?” “No; flying is still much safer than driving, even taking terrorist attacks into account.” “But that plane was shot down!!!”
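The base-rate reasoning behind "flying is still much safer than driving" can be made concrete with a quick per-distance comparison. The figures below are hypothetical placeholders, not sourced statistics; the point is the structure of the comparison, not the numbers:

```python
# Sketch of base-rate reasoning: compare risk *rates*, not single incidents.
# All figures are hypothetical placeholders chosen for illustration only.

deaths_per_billion_km_driving = 3.0
deaths_per_billion_km_flying = 0.05    # routine aviation risk
attack_risk_per_billion_km = 0.01      # hypothetical terrorism term

total_flying_risk = deaths_per_billion_km_flying + attack_risk_per_billion_km
print(total_flying_risk < deaths_per_billion_km_driving)  # True
```

Even after adding a terrorism term, the per-distance rate for flying stays far below the rate for driving, which is why a single vivid incident doesn't flip the comparison.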
↑ comment by Creutzer · 2014-07-29T04:05:26.431Z · LW(p) · GW(p)
f) Logical illiteracy:
Example 1:
Salviati: " If A then B"
Simplicio: "But A isn't true therefore your argument is invalid"
Sorry for being nit-picky, but that is partly linguistic illiteracy on Salviati's part. Natural language conditionals are not assertible if their antecedent is false. Thus, by asserting "If A then B", he implies that A is possible, with which Simplicio might reasonably disagree.
↑ comment by pianoforte611 · 2014-07-29T11:36:59.307Z · LW(p) · GW(p)
Usually in these exchanges the truth value of A is under dispute. But it is nevertheless possible to make arguments with uncertain premises to see if the argument actually succeeds given its premises.
"But A isn't true" is also a common response to counterfactual conditionals - especially in thought experiments.
↑ comment by fubarobfusco · 2014-07-30T17:39:52.650Z · LW(p) · GW(p)
Well, sometimes thought-experiments are dirty tricks and merit having their premises dismissed.
"If X, Y, and Z were all true, wouldn't that mean we should kill all the coders?"
"Well, hypothetically, but none of X, Y, and Z are true."
"Aha! So you concede that there are certain circumstances under which we should kill all the coders!"
My preferred answer being:
"I can't occupy the epistemic state that you suggest — namely, knowing that X, Y, and Z are true with sufficient confidence to kill all the coders. If I ended up believing X, Y, and Z, it's more likely that I'd hallucinated the evidence or been fooled than that killing all the coders is actually a good idea. Therefore, regardless of whether X, Y, and Z seem true to me, I can't conclude that we should kill all the coders."
But that's a lot more subtle than the thought-experiment, and probably constitutes fucking tedious in a lot of social contexts. The simplified version "But killing is wrong, and we shouldn't do wrong things!" is alas not terribly convincing to people who don't agree with the premise already.
↑ comment by Richard_Kennaway · 2014-08-08T21:59:14.787Z · LW(p) · GW(p)
The simplified version "But killing is wrong, and we shouldn't do wrong things!" is alas not terribly convincing to people who don't agree with the premise already.
There are other ways of saying it. I think Iain Banks said it pretty well.
↑ comment by sediment · 2014-07-29T10:24:33.074Z · LW(p) · GW(p)
Can you give a quick example with the blanks filled in? I'm interested, but I'm not sure I follow.
↑ comment by Creutzer · 2014-07-29T19:41:13.004Z · LW(p) · GW(p)
A: If John comes to the party, Mary will be happy. (So there is a chance that Mary will be happy.)
B: But John isn't going to the party. (So your argument is invalid.)
↑ comment by A1987dM (army1987) · 2014-07-30T11:55:47.030Z · LW(p) · GW(p)
That's what the subjunctive is for. If A had said “If Jon came to the party, Mary would be happy”, ...
↑ comment by Creutzer · 2014-08-01T06:17:17.041Z · LW(p) · GW(p)
The same thing can still happen with a subjunctive conditional, though.
A: If John came to the party, Mary would be happy. (So we could make Mary happy by making John come to the party.) B: But John isn't going to the party, no matter what we do. (So your argument is invalid.)
Also, pace George R. R. Martin, the name is still spelled John. Sorry, no offense, I just couldn't resist. :)
↑ comment by Luke_A_Somers · 2014-07-29T21:37:43.182Z · LW(p) · GW(p)
It depends why Salviati is bringing it up.
"If X(t), then A(t+delta). If A(t') then B(t'+delta')."
"But, not A(now)!"
↑ comment by Creutzer · 2014-07-30T05:10:28.749Z · LW(p) · GW(p)
Even with such a generic conditional (where t and t' are, effectively, universally quantified), the response can make sense with the following implied point: So not "B(now+delta')", hence we can't draw any presently relevant conclusions from your statement, so why are you saying this?
It may or may not be appropriate to dispute the relevance of the conditional in this way, depending on the conversational situation.
↑ comment by Luke_A_Somers · 2014-07-30T15:00:24.441Z · LW(p) · GW(p)
Let me rephrase that with more words:
"If we do X, then A will happen. If A happens, then B happens."
"But A isn't happening."
↑ comment by Viliam_Bur · 2014-07-29T07:45:59.054Z · LW(p) · GW(p)
Here is how to win the argument:
Create another nickname, pretending to be a Native American woman. Say that the idea of precommitment to exercise reminds you that in the ancient times the hunters of your tribe believed that it is spiritually important to be fit. (Then the white people came and ruined everything.) If anyone disagrees with you, act emotional and tell them to check their privilege.
The only problem is that winning in this way is a lost purpose. Unless you consider it expanding your communication skills.
↑ comment by Adele_L · 2014-07-30T01:54:29.944Z · LW(p) · GW(p)
I've actually seen an argument online in which some social justicers (with the same bad habits as in the story above) were convinced that it is acceptable to care about male circumcision on the grounds that it made SRS (sexual reassignment surgery) more difficult for trans women. Typically (in this community), if you thought male circumcision was an issue - you were quickly shouted down as a dreaded MRA (men's rights activist).
↑ comment by Nornagest · 2014-07-29T18:32:50.750Z · LW(p) · GW(p)
Don't think that'd work. Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn't mean they're unassailable -- it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly. It helps, of course, that in this case they'd actually be fabrications.
Focusing on feelings is the right way to go, though. This probably needs more refinement, but I think you should do something along the lines of saying that exercise makes you feel happier and more capable (which happens to be true, at least for me), and that bringing tangible consequences into the picture helps people escape middle-class patriarchal white Western consumer culture's relentless focus on immediate short-term gratification (true from a certain point of view, although not a framing I'd normally use). After that you can talk about how traditional cultures are less sedentary, but don't make membership claims and do not mention outcomes. You're not torturing yourself to meet racist, sexist expectations of health and fitness; you're meeting spiritual, mental, and incidentally physical needs that the establishment's conditioned you to neglect. The shock is a reminder of what they've stolen from you.
You'll probably still get accusations of internalized kyriarchy that way, but it ought to at least be controversial, and it won't get you accused of mansplaining.
↑ comment by Viliam_Bur · 2014-07-30T08:11:59.559Z · LW(p) · GW(p)
I think this is still too logical to work. Each step of an argument is another place that can be attacked. And because attacks are allowed to be illogical, even the most logical step has maybe 50% chance of breaking the chain. The shortest, and therefore the most powerful argument, is simply "X offends me!" (But to use this argument, you must belong to a group whose feelings are included in the social justice utility function.)
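That intuition, that each step of an argument is another attack surface, can be sketched as a simple probability calculation. The 50% per-step figure is just the rough guess from the paragraph above:

```python
# If each step of an argument independently survives attack with
# probability p, an n-step chain survives with probability p ** n.
# The 0.5 per-step figure is the rough guess from the comment above.

def chain_survival(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

print(chain_survival(0.5, 1))  # 0.5    -- the one-step "X offends me!"
print(chain_survival(0.5, 4))  # 0.0625 -- a four-step logical argument
```

Under these (admittedly crude) independence assumptions, even a modest chain of reasoning is overwhelmingly likely to be broken somewhere, which is why the shortest argument dominates.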
Now that I think about it, this probably explains why in this kind of debate you never get an explanation, only an angry "It's not my job to educate you!" when you ask about something. Using arguments and explanations is a losing strategy. (Also, it is what the bad guys do. You don't want to be pattern-matched to them.) Which is why people skilled in playing the game never provide explanations.
I hope your rationalist toucan is signed up for cryonics. :P
↑ comment by NancyLebovitz · 2014-07-31T01:32:07.090Z · LW(p) · GW(p)
I'm sure it depends on where you hang out, but I've seen plenty of explanations from social justice people. A sample
↑ comment by Viliam_Bur · 2014-07-31T09:27:02.211Z · LW(p) · GW(p)
Impressive.
In the linked article the author mentions that there are multiple definitions of racism and people often aren't clear about which one they use; and then decides to use the one without "..., but only when white people do it" as a default. And says that it is okay if white authors decide to write only white characters, but if they write also non-white characters they should describe their experiences realistically. (Then in the comments someone asks whether saying that every human being is racist doesn't render the word meaningless, and there is no outrage afterwards. Other people mention that calling someone racist is usually used just to silence or insult them.)
I am not sure whether this even should be called "social justice". It just seems like common sense to me. (This specific article; I haven't read more from the same author yet.)
Somewhat related -- writing this comment I realized that I am kinda judging the sanity of the author by how much I agree with her. When I put it this way, it seems horrible. ("You are sane if and only if you agree with me.") But I admit it is a part of the algorithm I use. Is that a reason to worry? But then I remembered the parable that all correct maps of the same city are necessarily similar to each other, although finding a set of similar maps does not guarantee their correctness (they could be copies of the same original wrong map). So, if you spend some time trying to make a map that reflects the territory better, and you believe you are sane enough, you should expect the maps of other sane people to be similar to yours. Of course this shouldn't be your only criterion. But, uhm, extraordinary maps require extraordinary evidence; or at least some evidence.
↑ comment by gjm · 2014-08-02T21:15:16.440Z · LW(p) · GW(p)
I am not sure whether this even should be called "social justice". It just seems like common sense to me.
Perhaps social justice done right should just seem like common sense (to reasonable people). I mean, what's the alternative? Social injustice?
It would be a pity to use the term "social justice" to describe only facepalming irrationality. I mean, you then get this No True Scotsman sort of thing (maybe we should call it No True Nazi or something) where you refuse to say that someone's engaged in "social justice" even though what they're doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warrior would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
(Minor vested interest disclosure: I happen to know some people who are both quite social-justice-y and quite rational, and I would find it unfortunate to be unable to say that on account of "social justice" and "rationality" getting gratuitously exclusive definitions.)
↑ comment by [deleted] · 2014-08-04T08:46:52.777Z · LW(p) · GW(p)
even though what they're doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warror would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
Slightly off topic, but can I ask why patriarchy is assumed to be obviously bad?
I can certainly see the negative aspects of even moderate patriarchy, and wouldn't endorse extreme patriarchy or all forms of it, but its positive aspect seems to be civilization as we know it. It makes monogamy viable, reduces the time preferences of the people in a society, makes men invested in society by encouraging them to become fathers and husbands, boosts fertility rates to above replacement, likely makes the average man more attractive to the average woman improving many relationships, results in a political system of easily scalable hierarchy, etc.
↑ comment by gjm · 2014-08-04T16:30:12.584Z · LW(p) · GW(p)
I wasn't assuming it's obviously bad, I was describing it as a thing social-justice types characteristically crusade against.
As to whether moderate patriarchy is good or bad or mixed or neutral -- I imagine it depends enormously on how you define the term.
↑ comment by Viliam_Bur · 2014-08-02T23:18:32.007Z · LW(p) · GW(p)
So, like with "rationality" and "Hollywood rationality", we could have "social justice" and, uhm, "tumblr social justice"? Maybe this would work.
My main objection would be that the words "social justice" already feel like a weird way to express "equality" or something like that. It's already a word that meant something ("justice") with an adjective that allows you to remove or redefine its parts, and make it a flexible applause light.
↑ comment by NancyLebovitz · 2014-08-03T13:28:23.414Z · LW(p) · GW(p)
Historical note, as I understand things-- the emotionally abusive power grab aspects didn't happen by coincidence. A good many people said that if they were polite and reasonable, what they said got ignored, so they started dumping rage.
↑ comment by Viliam_Bur · 2014-08-03T20:30:40.761Z · LW(p) · GW(p)
I propose an alternative explanation. Some people are just born psychopaths; they love to hurt other people.
Whatever nice cause you start, if it gains just a little power, sooner or later one of them will notice it and decide they like it. Then they will try to join it and optimize it for their own purposes. You will recognize that this happened when people around you start repeating memes that hurting other people is actually good for your cause. Now, in such environment people most skilled in hurting others can quickly rise to the top.
(Actually, both our explanations can be true at the same time. Maybe any movement that doesn't open its doors to psychopaths is doomed in the long term, because other people simply don't have enough power to change the society.)
↑ comment by Azathoth123 · 2014-08-03T18:47:15.156Z · LW(p) · GW(p)
A good many people said that if they were polite and reasonable, what they said got ignored, so they started dumping rage.
And then they complain when anybody else is 'uncivil'.
↑ comment by NancyLebovitz · 2014-08-03T19:55:12.613Z · LW(p) · GW(p)
I called it an emotionally abusive power grab because that's how I see it.
Nonetheless, I still think they're right about some of their issues.
↑ comment by Nornagest · 2014-08-03T17:17:35.212Z · LW(p) · GW(p)
I'd expect rage to be better at converting people already predisposed to belief into True Believers, but worse at making believers of the undecided, and much worse at winning over those predisposed to opposition.
↑ comment by NancyLebovitz · 2014-08-03T19:59:15.265Z · LW(p) · GW(p)
The rage level actually drives away some of the people who would be inclined to help them, and has produced something that looks a lot like PTSD in some of the people in the movement who got hit by opposition from others who were somewhat on the same side.
Still, they've gained a certain amount of ground on the average. I have no idea what the outcome will be.
↑ comment by Azathoth123 · 2014-08-03T18:48:38.359Z · LW(p) · GW(p)
Well, if you can vaguely imply that it might be physically dangerous to disagree, a little rage can work wonders.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-08-03T20:05:11.130Z · LW(p) · GW(p)
As far as I can tell, there's very little in the way of physical threats, but (most) people are very vulnerable to emotional attacks.
As I understand it, that's part of what's powering SJWs-- they felt (and I'd say rightly) that they were and are subject to pervasive emotional attack both from the culture and from individuals, and are trying to make a world they can be comfortable in.
That "as I understand it" is not boilerplate-- I read a fair amount of SJ material and (obviously) spent a lot of time thinking and obsessing about it, but this is a huge subject (and isn't the same in all times, places, and sub-cultures), and I've never been an insider.
↑ comment by gjm · 2014-08-03T00:05:45.909Z · LW(p) · GW(p)
That would be one option. Or (this is different because "Hollywood rationality" is not actually a variety of rationality) we could say that both those things really are varieties of social justice, but one of them is social justice plus a bunch of crazy ideas and attitudes that unfortunately happen to have come along for the ride in various social-justice-valuing venues.
I don't think "social justice" is just a weirdly contorted way to say "equality". The addition of an adjective is necessary because "justice" simpliciter covers things like imprisoning criminals rather than innocent bystanders, and not having kleptocratic laws; "social justice" means something like "justice in people's social interactions". In some cases that's roughly the same thing as equality, but in others equality might be the wrong thing (because different groups want different things, or because some historical injustice is best dealt with by a temporary compensating inequality in the other direction). -- Whether such inequality ever is a good approach, and how often if so, is a separate matter, but unless it is inconceivable, "equality" can't be the right word.
Still, I'm not greatly enamoured of the term "social justice". But it's there, and it seems like it means something potentially useful, and it would be a shame if it ended up only being applicable where there's a whole lot of craziness alongside the concern for allegedly marginalized groups.
↑ comment by Lumifer · 2014-07-31T14:36:58.129Z · LW(p) · GW(p)
I realized that I am kinda judging the sanity of the author by how much I agree with her.
That doesn't seem horrible to me. There are many ways of being insane, but one of them is having a very wrong map (and you can express one of the standard criteria for clinical-grade mental illness -- interferes with functioning in normal life -- as "your map is so wrong you can't traverse the territory well").
I think the critical difference here is whether you disagree about facts (which are, hopefully, empirically observable and statements about them falsifiable) or whether you disagree about values, opinions, and forecasts. Major disagreement about facts is a good reason to doubt someone's sanity, but about values and predictions is not.
↑ comment by NancyLebovitz · 2014-08-03T13:25:58.299Z · LW(p) · GW(p)
I'm glad you liked it.
Since I'd have to overcome a really strong ugh field to read it again, I'd like to check on whether my memory of it is correct-- the one thing I didn't like about it was Mohanraj saying (implying?) that if you behave decently you won't be attacked. She was making promises about people who aren't as rational as she is.
Why an ugh field? Those essays came out when racefail was going on, and came with the added info that it took Mohanraj two and a half weeks to write them, and (at least as I read it) I should feel really guilty that a woman of color had to do the work. I just couldn't deal. I'm pretty sure the guilt trip wasn't from Mohanraj.
I read them later, and thought they were good except for the caveat mentioned above.
↑ comment by Lumifer · 2014-07-29T19:00:41.718Z · LW(p) · GW(p)
I don't think reinforcing stupidity is a good idea.
“Never argue with stupid people, they will drag you down to their level and then beat you with experience.” ― Mark Twain
This is that level:
helps people escape middle-class patriarchal white Western consumer culture's relentless focus on immediate short-term gratification
Replies from: Nornagest
↑ comment by Nornagest · 2014-07-29T19:19:39.574Z · LW(p) · GW(p)
That line was somewhat tongue-in-cheek. I wouldn't go that far over the top in a real discussion, although I might throw in a bit of anti-*ist rhetoric as an expected shibboleth.
That being said, these people aren't stupid. They don't generally have the same priorities or epistemology that we do, and they're very political, but that's true of a lot of people outside the gates of our incestuous little nerd-ghetto. Winning, in the real world, implies dealing with these people, and that's likely to go a lot better if we understand them.
Does that mean we should go out and pick fights with mainstream social justice advocates? No, of course not. But putting ourselves in their shoes every now and then can't hurt.
Replies from: sediment, Lumifer↑ comment by sediment · 2014-07-30T18:46:34.201Z · LW(p) · GW(p)
This makes some sense. I think part of the reason my contribution was taken so badly was, as I said, that I was arguing in a style that was clearly different to that of the rest of those present, and as such I was (in Viliam_Bur's phrasing) pattern-matched as a bad guy. (In other words, I didn't use the shibboleths.)
Significantly, no-one seemed to take issue with the actual thrust of my point.
↑ comment by Lumifer · 2014-07-29T19:29:40.410Z · LW(p) · GW(p)
That line was somewhat tongue-in-cheek.
Of course, but only somewhat :-)
these people aren't stupid
"These people" are not homogeneous and there are a lot of idiots among them. However, what most of them are is mindkilled. They won't update, so why bother?
Replies from: Nornagest↑ comment by Nornagest · 2014-07-29T19:31:53.768Z · LW(p) · GW(p)
However what most of them are is mindkilled. They won't update so why bother?
Because we occasionally might want to convince them of things, and we can't do that without understanding what they want to see in an argument. Or, more generally, because it behooves us to get better at modeling people that don't share our epistemology or our (at least, my) contempt for politics.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-29T19:45:52.272Z · LW(p) · GW(p)
Because we occasionally might want to convince them of things, and we can't do that without understanding what they want to see in an argument.
So, um, if you really let Jesus into your heart and accept Him as your personal savior you will see that He wants you to donate 50% of your salary to GiveWell's top charities..?
it behooves us to get better at modeling people that don't share our epistemology or our (at least, my) contempt for politics.
True, but you don't do that by mimicking their rhetoric.
Replies from: Nornagest↑ comment by Nornagest · 2014-07-29T20:25:32.551Z · LW(p) · GW(p)
True, but you don't do that by mimicking their rhetoric.
The point isn't to blindly mimic their rhetoric, it's to talk their language: not just the soundbites, but the motivations under them. To use your example, talking about letting Jesus into your heart isn't going to convince anyone to donate a large chunk of their salary to GiveWell's top charities. There's a Christian argument for charity already, though, and talking effective altruism in those terms might well convince someone that accepts it to donate to real charity rather than some godawful sad puppies fund; or to support or create Christian charities that use EA methodology, which given comparative advantage might be even better. But you're not going to get there without understanding what makes Christian charity tick, and it's not the simple utilitarian arguments that we're used to in an EA context.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-29T20:44:49.363Z · LW(p) · GW(p)
The point isn't to mimic their rhetoric, it's to talk their language
There is a price: to talk in their language is to accept their framework. If you are making an argument in terms of fighting the oppression of white male patriarchy, you implicitly agree that the white male patriarchy is in the business of oppression and needs to be fought. If you're using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.
Replies from: Nornagest↑ comment by Nornagest · 2014-07-29T20:55:44.465Z · LW(p) · GW(p)
If you're using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.
Yes, you are. That's a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in introducing outside ideas and thereby making them less mindkilled. Reject it in favor of some kind of radical honesty policy, and unless you're very lucky and very charismatic you'll find yourself with no allies and few friends. But hey, you'll have the moral high ground! I hear that and $1.50 will get you a cup of coffee.
(My argument in the ancestor wasn't really about fighting the white male patriarchy, though; the rhetoric about that is just gingerbread, like appending "peace be upon him" to the name of the Prophet. It's about the importance of subjective experience and a more general contrarianism -- which are also SJ themes, just less obvious ones.)
Replies from: Lumifer↑ comment by Lumifer · 2014-07-29T21:12:23.771Z · LW(p) · GW(p)
That's a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in making them less mindkilled.
Maybe it's the price you need to pay, but I don't see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs -- why would they become more likely to change them?
some kind of radical honesty policy
I am not going for radical honesty. What I'm suspicious of is using arguments which you yourself believe are bullshit and at the same time pretending to be a bona fide member of a tribe to which you don't belong.
And, by the way, there seems to be a difference between Jesus and SJ here. When talking to a Christian I can be "radically honest" and say something along the lines of "I myself am not a Christian but you are, and don't you recall how Jesus said that ...". But that doesn't work with SJWs -- if I start by saying "I myself don't believe in white male oppression but you do and therefore you should conclude that...", I will be immediately crucified for the first part and no one will pay any attention to the second.
Replies from: Nornagest↑ comment by Nornagest · 2014-07-29T21:26:42.306Z · LW(p) · GW(p)
I don't see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs -- why would they become more likely to change them?
You're not substantially reinforcing their beliefs. Beliefs entangled with your identity don't follow Bayesian rules: directly showing anything less than overpoweringly strong evidence against them (and even that isn't a sure thing) tends to reinforce them by provoking rationalization, while accepting them is noise. If you don't like Christianity, you wouldn't want to use the Christian argument for charity with a weak or undecided Christian; but they aren't going to be mindkilled in this regard, so it wouldn't make a good argument anyway.
On the other hand, sneaking new ideas into someone's internal memetic ecosystem tends to put stress on any totalizing identities they've adopted. For example, you might have to invoke God's commandment to love thy neighbor as thyself to get a fundamentalist Christian to buy EA in the first place; but now they have an interest in EA, which could (e.g.) lead them to EA forums sharing secular humanist assumptions. Before, they'd have dismissed this as (e.g.) some kind of pathetic atheist attempt at constructing a morality in the absence of God. But now they have a shared assumption, a point of commonality. That'll lead to cognitive dissonance, but only in the long run -- timescales you can't work on unless you're very good friends with this person.
That cognitive dissonance won't always resolve against Christianity, but sometimes it will. And when it doesn't, you'll usually still have left them with a more nuanced and less stereotypical Christianity.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-30T05:16:37.244Z · LW(p) · GW(p)
You're not substantially reinforcing their beliefs.
Well, yes, if we're talking about a single conversation, especially over the 'net, you are not going to affect much anything. Still, even if you do not reinforce then you confirm. And there are different ways to get mindkilled, entangling your identity with beliefs is only one of them...
On the other hand, sneaking new ideas into someone's internal memetic ecosystem tends to put stress on any totalizing identities they've adopted.
True, but the same caveat applies -- if we're talking about one or two conversations you're not going to produce much if any effect.
In any case, my line of thinking in this subthread wasn't concerned so much with the effectiveness of deconversion, but rather was more about the willingness to employ arguments that you don't believe but your discussion opponent might. I understand the need to talk to people in the language they understand, but there is a fine line to walk here.
↑ comment by Azathoth123 · 2014-07-30T02:31:31.768Z · LW(p) · GW(p)
Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn't mean they're unassailable -- it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly.
That works a lot less well arguing against someone who is claiming to be from that culture.
It helps, of course, that in this case they'd actually be fabrications.
So? Most of the "traditional practices" SJ types sanctify are fabrications. That doesn't stop them from sanctifying them.
Replies from: Nornagest↑ comment by Nornagest · 2014-07-30T04:36:38.635Z · LW(p) · GW(p)
That works a lot less well arguing against someone who is claiming to be from that culture.
I've more than once seen people accused of not really being whatever they claim to be. "You're wrong about your culture's traditional practices" isn't a legal move, but "you're obviously an imposter" is.
↑ comment by Stabilizer · 2014-07-30T23:55:03.801Z · LW(p) · GW(p)
A lot of people are pointing out that perhaps it wasn't very wise for you to engage with such commenters. I mostly agree. But I also partially disagree. The negative effects of you commenting there, of course, are very clear. But, there are positive effects as well.
The outside world---i.e. outside the rationalist community and academia---shouldn't get too isolated from us. While many people made stupid comments, I'm sure that there were many more people who looked at your argument and went, "Huh. Guess I didn't think of that," or at least registered some discomfort with their currently held worldview. Of course, none of them would've commented.
Also, I'm sure your way of argumentation appealed to many people, and they'll be on the lookout for this kind of argumentation in the future. Maybe one of them will eventually stumble upon LW. Looking at the quality of argumentation was also how I selected which blogs to follow. I tried (and often failed) to avoid those blogs that employed rhetoric and emotional manipulation. One of the good blogs eventually linked to LW.
Thus, while the cost to you was probably great and perhaps wasn't worth the effort, I don't think it was entirely fruitless.
Replies from: sediment↑ comment by sediment · 2014-07-31T11:02:19.120Z · LW(p) · GW(p)
You're right.
I was glad to at least disrupt the de facto consensus. I agree that it's worth bearing in mind the silent majority of the audience as well as those who actually comment. The former probably outnumber the latter by an order of magnitude (or more?).
I suppose the meta-level point was also worth conveying. Ultimately, I don't care a great deal about the object-level point (how one should feel about a silly motivational bracelet) but the tacit, meta-level point was perhaps: "There are other ways, perhaps more useful, to evaluate things than the amount of moral indignation one can generate in response."
↑ comment by Shmi (shminux) · 2014-07-29T22:16:20.490Z · LW(p) · GW(p)
I don't think it's a good idea to get into a discussion on any forum where the term "mansplaining" is used to stifle dissent, even (or especially) if you have "a clear, concise, self-contained point".
Replies from: Lumifer↑ comment by Lumifer · 2014-07-30T05:26:01.507Z · LW(p) · GW(p)
I don't think it's a good idea to get into a discussion on any forum where the term "mansplaining" is used to stifle dissent
True for a serious discussion, but such forums make for interesting ethnographic expeditions :-) And if you're not above occasional trolling for teh lulz... :-D
Replies from: None, None↑ comment by Richard_Kennaway · 2014-07-29T09:57:53.338Z · LW(p) · GW(p)
I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog.
Um, why?
I mean, walking through a monkey house when all they're going to do is fling shit everywhere isn't something I would choose to do.
Replies from: sediment↑ comment by NancyLebovitz · 2014-07-29T13:25:32.915Z · LW(p) · GW(p)
I wasn't sure about doing discussion of the specific point, but other people are....
http://www.moveandbefree.com/blog/laziness-doesnt-exist
Here's an example from someone who believes strongly in cultivating internal motivation-- the opposite of shocking yourself if you don't do enough crudely monitored exercise.
The punishment approach to exercise arguably makes people less likely to exercise at all, and I think it increases the risk of injuries from exercise.
There really is a cultural problem-- how popular is the approach from the link compared to The Biggest Loser and boot camps for civilians?
Sidetrack: I'm imagining a shock bracelet to discourage involvement in pointless internet arguments. How would it identify them? Would people use it?
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-07-30T11:46:21.836Z · LW(p) · GW(p)
Sidetrack: I'm imagining a shock bracelet to discourage involvement in pointless internet arguments. How would it identify them?
That's probably a FAI-complete problem. See also: http://xkcd.com/810/
Would people use it?
A thing I would like is this. I would totally enable this on LW if it was an option. (And if someone volunteered to write a Firefox plugin to achieve the same client-side, they'd have all my gratefulness.)
Replies from: Bakkot↑ comment by ChristianKl · 2014-07-29T08:56:26.950Z · LW(p) · GW(p)
The whole idea of optimisation is controversial among some people because they see it as the opposite of being yourself.
Someone else brought up the weird edge case that it's possible to exercise too much, and that if you're in such a position then more exercise will shorten, not lengthen, your life
It's no weird edge case. If I remember right, there was a recent study that came to that conclusion and went through the media.
↑ comment by A1987dM (army1987) · 2014-07-29T07:35:51.875Z · LW(p) · GW(p)
This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you'll find it harder to disengage than you anticipated.
True that.
↑ comment by Richard_Kennaway · 2014-07-29T11:44:14.612Z · LW(p) · GW(p)
The shock bracelet intrigues me. I imagine it could be interfaced to an app that could give shocks under all manner of chosen conditions. Do you have any more details? Is it a real thing, or (like this) just clickbait that no-one intends actually making?
Replies from: sediment, Lumifer↑ comment by sediment · 2014-07-29T12:38:16.131Z · LW(p) · GW(p)
It's called the Pavlok. It seems to be able to monitor a variety of criteria, some fairly smart.
Replies from: Richard_Kennaway, None↑ comment by Richard_Kennaway · 2014-07-29T13:12:31.026Z · LW(p) · GW(p)
Wow, it is indeed a real thing! Thank you for posting this.
↑ comment by [deleted] · 2014-07-29T19:31:42.886Z · LW(p) · GW(p)
I think this has the same problem as any kind of self-conditioning. I watched the video, and the social community and gaming thing seem actually motivating, but I'm not sure about the punishment, because you can always take the wristband off. Maybe there's a commitment and social pressure not to take the wristband off, but ultimately you yourself are responsible for keeping the wristband on your wrist, and this is basically self-conditioning. Yvain made a good post about it.
Suppose you have a big box of candy in the fridge. If you haven’t eaten it all already, that suggests your desire for candy isn’t even enough to reinforce the action of going to the fridge, getting a candy bar, and eating it, let alone the much more complicated task of doing homework. Yes, maybe there are good reasons why you don’t eat the candy – for example, you’re afraid of getting fat. But these issues don’t go away when you use the candy as a reward for homework completion. However little you want the candy bar you were barely even willing to take out of the fridge, that’s how much it’s motivating your homework.
If the zap had any kind of motivating effect, wouldn't that effect firstly be directed towards taking the wristband off your wrist, and not the much more distant and complex sequence of actions like going to the gym? I don't think a small zap on its own could motivate me to do even anything simple, like leaving the computer. Also, I agree with Yvain that rewards and punishments only seem to have a real effect when they happen unpredictably.
Replies from: Pfft↑ comment by Pfft · 2014-07-30T03:10:16.372Z · LW(p) · GW(p)
A more low-tech solution, which is recommended by countless self-help books/webpages of dubious authority, is to snap a rubber band against your own wrist when you have done something bad. It seems this should work roughly as well as the Pavlok? In theory it should suffer the same "can't condition yourself" problem. On the other hand, if lots of people recommend it, then maybe it works?
Replies from: Richard_Kennaway, MathiasZaman↑ comment by Richard_Kennaway · 2014-07-31T06:36:03.982Z · LW(p) · GW(p)
I suspect that if electric zapping or snapping a rubber band work (I don't know if they do), they do so by raising your level of attention to the problematic behaviour. A claim of Perceptual Control Theory is that reorganisation -- learning to control something better -- follows attention. Yanking your attention onto the situation whenever you're contemplating or committing sinful things may enable you to stop wanting to commit them.
See also the use of the cilice.
↑ comment by MathiasZaman · 2014-07-30T11:28:48.798Z · LW(p) · GW(p)
I've mostly seen that technique described as a way to cope with self-harm.
↑ comment by Lumifer · 2014-07-29T17:25:20.935Z · LW(p) · GW(p)
I imagine it could be interfaced to an app that could give shocks under all manner of chosen conditions.
Classic bash.org :-D
#4281 +(27833)- [X]
<Zybl0re> get up
<Zybl0re> get on up
<Zybl0re> get up
<Zybl0re> get on up
<phxl|paper> and DANCE
* nmp3bot dances :D-<
* nmp3bot dances :D|-<
* nmp3bot dances :D/-<
<[SA]HatfulOfHollow> i'm going to become rich and famous after i invent a device
that allows you to stab people in the face over the internet
↑ comment by Richard_Kennaway · 2014-07-29T10:04:12.368Z · LW(p) · GW(p)
I wonder what they think of Beeminder, that allows you to financially torture yourself over anything you want to. Not that I'm going to go over there, wherever it is, to ask.
Replies from: sediment
comment by Viliam_Bur · 2014-07-29T07:18:30.568Z · LW(p) · GW(p)
Website suggestion: Retracted comments should collapse the thread (just like downvoted comments do now).
comment by A1987dM (army1987) · 2014-07-31T20:31:40.179Z · LW(p) · GW(p)
“Meditations On Moloch” on Slate Star Codex finally convinced me to donate to MIRI.
Replies from: Vulture, None
comment by Kaj_Sotala · 2014-07-29T13:59:45.460Z · LW(p) · GW(p)
The philosopher John Danaher is doing a series of posts on Bostrom's Superintelligence book. Posts that were up at the time of writing this comment:
Bostrom on Superintelligence (1): The Orthogonality Thesis
Bostrom on Superintelligence (2): The Instrumental Convergence Thesis
Bostrom on Superintelligence (3): Doom and the Treacherous Turn
Danaher has also blogged about AI risk topics before: see here, here, here, here, and here. He's also written on mind uploading and human enhancement.
comment by Metus · 2014-07-28T22:00:12.443Z · LW(p) · GW(p)
Since we are way too confident that bad things won't happen to us, I have been researching how to prepare for several rare events with disastrous consequences. Starting the research, I realised I had yet to find out what exactly those events are. So far I have found these, remedy given if known:
- Own death (write a will specifying how property shall be used and funeral arrangements in the interest of next of kin and close friends, if there are people financially dependent buy life insurance, if you believe in cryonics register as a member and make arrangements to pay for it. Prepare for the near-term possibility and long-term inevitability)
- Death of a family member or close friend (see above)
- Loss of possibility of legal consent e.g. through brain damage or disease (prepare a document detailing your views and wants in such a case, buy insurance to pay for aid, if people are financially dependent on you buy insurance for them too)
- Loss of consciousness and/or dependence on machine-assisted living (see above)
- Accidents of any kind such as traffic or work related (ignoring the circumstances above, buy specific insurance)
- Unforseen, non-work related liability (buy liability insurance)
- Damage from third parties without ability to pay or liability insurance (buy proper liability insurance)
- Breach of law not related to contracting work (buy legal insurance)
- Divorce
- Being robbed or theft
- Loss of income because of loss of ability to work
- Loss of income because of loss of employment
- Large-scale catastrophe (consult your local relevant government agency, such as FEMA, buy insurance for non-global events)
- Loss of property, more specifically capital
Some of these are more economic in nature, some take a massive psychological toll. To deal with an event means either to reduce its probability or to reduce its impact. Insurance helps with the latter; psychological preparation further takes the edge off. Reducing the probability can be done through higher expenses, e.g. higher-quality items, or through a change in behaviour.
This project is very much a work in progress. Should I complete or abandon it I will share the state I leave it in. Please post all your thoughts and relevant material. I am especially interested in some numbers such as probabilities of these things happening (like the oft-stated number of 50% divorce rate).
Replies from: Gavin, None↑ comment by Gavin · 2014-07-28T23:24:25.256Z · LW(p) · GW(p)
Anytime you're thinking about buying insurance, double check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise put into insurance in "rainy day fund" rather than buying ten different types of insurance.
In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties on your devices, and if it breaks pay out of pocket to repair or get a new one.
This is a harshly rational view, so I certainly appreciate that some people get "peace of mind" from having insurance, which can have a real value.
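The insure-vs-self-insure rule of thumb above can be made concrete with a toy expected-value comparison. All numbers below are made up for illustration; the point is only the structure of the calculation:

```python
# Toy comparison of insuring vs. self-insuring a small, survivable loss.
# All numbers are hypothetical.

loss = 1000.0    # cost of replacing the device
p_loss = 0.05    # assumed annual probability it breaks
premium = 80.0   # annual cost of the extended warranty

# What self-insuring costs you on average per year:
expected_loss = p_loss * loss

# The "load": what you pay the insurer over the actuarially fair price,
# covering their overhead, profit, and your peace of mind.
premium_load = premium - expected_loss

print(f"expected annual loss if self-insured: {expected_loss:.2f}")
print(f"premium load: {premium_load:.2f}")

# Since premium > expected_loss, insuring is a net loss in expectation;
# it only makes sense when you could NOT absorb the loss, or when you
# value the peace of mind at more than the load.
```

With these numbers the load is 30 per year, which is the price of transferring a risk you could have carried yourself.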
Replies from: Kaj_Sotala, Metus↑ comment by Kaj_Sotala · 2014-07-29T08:47:42.443Z · LW(p) · GW(p)
Though note that an insurance may regardless be useful if you have self-control problems with regard to money. If you've paid your yearly insurance payment, the money is spent and will protect you for the rest of the year. If you instead put the money in a rainy day fund, there may be a constant temptation to dip into that fund even for things that aren't actual emergencies.
Of course, that money being permanently spent and not being available for other purposes does have its downsides, too.
Replies from: Gavin↑ comment by Metus · 2014-07-28T23:44:35.194Z · LW(p) · GW(p)
I appreciate the extension of my thought process. It is very clear to me that since you have to pay an insurance premium, buying insurance is necessarily a net loss in expectation. Buying insurance is very meaningful before a rainy day fund is filled up, if emergency financing methods are not available through a credit card or a very trustworthy person, and if the insurance contracts include other services, e.g. getting liabilities of the other party paid in case of their unwillingness to pay.
This is implicit in my phrasing
rare events with disastrous consequences
but made explicit by your post, and will be included in the end report. Generally I come to the conclusion that buying insurance is a necessity unless you are perversely rich, and even then there is some meaning found in insurance, as even insurance companies themselves are insured. Just go for contracts with a high co-pay to lower your exposure to the insurance premium, which is basically just unnecessary bureaucracy in the case of small claims, as in the example of the $1000 computer. For things in that price class I read an interesting sentence: "if you can not afford to buy it twice, you can't afford it in the first place", alluding to self-insurance.
Replies from: Richard_Kennaway, Gavin↑ comment by Richard_Kennaway · 2014-07-30T08:37:17.508Z · LW(p) · GW(p)
"if you can not afford to buy it twice, you can't afford it in the first place"
An excellent maxim, which has crystallised for me why I am so reluctant to move to a bigger house, even though I would like one, and I could buy one immediately for cash plus the price I'd get for my current house. It's because I can't afford to do that twice. With an extra cost-of-a-house in the bank I might.
↑ comment by [deleted] · 2014-07-29T02:59:26.144Z · LW(p) · GW(p)
Large-scale catastrophe (consult your local relevant government agency, such as FEMA, buy insurance for non-global events)
I believe that some foreign large-scale catastrophes could also negatively impact one’s well-being. Extreme example: imagine if everyone in the world suddenly died except the citizens of your country. Psychological toll aside, your country’s necessary transition from international trader to autarky would be painful. Losing trading partners = losing ability to specialize = reducing economic efficiency = falling wages.
The reinsurance markets often deal in the risks of the disastrous. Maybe you can evaluate reinsurance offering documents and prices in order to extract the market’s implied probability of similar events.
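As a very rough sketch of how one might back a probability out of a price: assume a hypothetical binary contract (pays a fixed amount if the catastrophe happens this year, nothing otherwise) and guess a loading factor, i.e. how much insurers charge over the actuarially fair value. Both assumptions are mine, not from any actual offering document:

```python
# Back out a rough implied annual event probability from a
# reinsurance-style price. Hypothetical binary contract:
# pays `payout` if the event happens this year, costs `premium` up front.
# `loading` is the assumed multiplier over the actuarially fair price.

def implied_probability(premium: float, payout: float, loading: float = 1.5) -> float:
    """premium = loading * p * payout  =>  p = premium / (loading * payout)"""
    return premium / (loading * payout)

# E.g. a contract paying 100M that costs 3M, with a 1.5x loading:
p = implied_probability(premium=3_000_000, payout=100_000_000, loading=1.5)
print(f"implied annual probability: {p:.3f}")  # 0.020
```

Real reinsurance contracts have layered triggers and partial payouts, so this is only an order-of-magnitude device, and the answer is quite sensitive to the assumed loading.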
comment by niceguyanon · 2014-07-29T15:31:58.288Z · LW(p) · GW(p)
A quick search for tDCS did not turn up any major discussion newer than 2012 on LW. tDCS devices are now sub $100. Its safety track record seems to be intact. I bought one. There are places to discuss tDCS like on subreddits but I'd like to restart the conversation here with you rationalists.
Recently Radiolab did a piece about it
Replies from: ChristianKl, witzvo↑ comment by ChristianKl · 2014-07-29T15:35:50.388Z · LW(p) · GW(p)
Its safety track record seems to be intact.
I'm not sure that they did run sufficient experiments to demonstrate safety.
Replies from: niceguyanon↑ comment by niceguyanon · 2014-07-29T16:35:45.081Z · LW(p) · GW(p)
I'll admit that the basis for my statement is from the seemingly lack of much negative user reports or studies that reported high negative reactions regarding safety, rather than experiments specifically demonstrating safety.
↑ comment by witzvo · 2014-08-03T20:11:47.476Z · LW(p) · GW(p)
This seems really interesting. I'd like to learn more about it. So far I'm frustrated with the quality of information I've found. Here's a PMC search and a review behind a firewall.
Replies from: gwern↑ comment by gwern · 2014-08-03T20:51:06.389Z · LW(p) · GW(p)
a review behind a firewall.
Here you go: https://pdf.yt/d/I3KvgDrqP-uVjm0e / https://dl.dropboxusercontent.com/u/85192141/2014-priori.pdf
Replies from: witzvo↑ comment by witzvo · 2014-08-03T22:43:16.020Z · LW(p) · GW(p)
Thanks! Lest it confuse anyone else, please note that that review is all about the effects of tDCS on the cerebellum, and is not a review of tDCS on the cerebrum or other brain structures. Cerebellar tDCS itself does seem to have many effects, though, including cerebellar motor cortical inhibition, gait adaptation, motor behaviour, and cognition (learning, language, memory, attention).
Here's a general review of the effect of tDCS on language
""" Despite their heterogeneities, the studies we reviewed collect- ively show that tDCS can improve language performance in healthy subjects and in patients with aphasia ( fi gure 4). Although relatively transient, the improvement can be remark- able: Monti and colleagues 52 found an improvement of approxi- mately 30% and Holland and Crinion 63 report a gain of approximately 25% in speech performance in aphasic patients. Intriguingly, no report described negative results in aphasic patients. """
EDIT:
Interestingly, there's limited evidence that it can be effective for patients suffering from autism too. E.g. case study finding 40% reduction in abnormal behavior for a severe case and improved language learning for minimally verbal children with autism.
comment by tetronian2 · 2014-07-30T02:30:05.913Z · LW(p) · GW(p)
This is a followup to a post I made in the open thread last week; apologies if this comes off as spammy. I will be running a program equilibrium iterated prisoner's dilemma tournament (inspired by the one last year). There are a few key differences from last year's tournament: First, the tournament is in Haskell rather than Scheme. Second, the penalty for bots that do not finish their computation within the pre-set time limit has been reduced. Third, bots have the ability to run/simulate each other but cannot directly view each other's source code.
Here are the rules and a brief tutorial (which are significantly more fleshed out than they were last week). I intend to open up the tournament and announce it via a discussion post in a few days, but until then, I would love to hear your feedback and suggested changes to the rules/implementation, no matter how major or minor. When the tournament opens, LW users with 50+ karma who do not know Haskell can PM me with an algorithm/pseudocode, and I will translate it into a working bot for them.
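For readers who want to experiment before the Haskell version opens, the core mechanic (bots that can run each other but not read each other's source) can be sketched in Python. The bot names and payoff values here are my own assumptions, not the tournament's actual API:

```python
# Toy iterated prisoner's dilemma where a bot may *run* its opponent
# (passed as the third argument) but never inspect its source.
# Payoff matrix is assumed: mutual cooperation 2, temptation 3, sucker 0.
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

def always_defect(my_hist, opp_hist, opp_bot):
    return 'D'

def tit_for_tat(my_hist, opp_hist, opp_bot):
    return opp_hist[-1] if opp_hist else 'C'

def simulator(my_hist, opp_hist, opp_bot):
    # Run the opponent against our history and mirror its predicted move.
    # Passing always_defect as the opponent's own "simulation handle"
    # avoids infinite regress when two simulators meet.
    return opp_bot(opp_hist, my_hist, always_defect)

def play_match(bot_a, bot_b, rounds=10):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma = bot_a(ha, hb, bot_b)
        mb = bot_b(hb, ha, bot_a)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb
```

Over ten rounds, `play_match(tit_for_tat, always_defect)` gives (9, 12), while two conditional cooperators lock into mutual cooperation and tie.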
Replies from: James_Miller↑ comment by James_Miller · 2014-07-30T03:42:15.662Z · LW(p) · GW(p)
Each bot will play one match against all other bots and against itself
This biases the tournament towards cooperation and makes it no longer a PD.
If multiple people submit identical bots, only one copy of the bot will be entered
This biases the tournament away from the defect every single round strategy.
Consider creating an elimination tournament where you run the game, eliminate the bottom half of the players, then run again, then iterate until only one player remains. If you decide to do this and are willing to go to the effort of programming my entry (since I don't know Haskell) please enter me with a bot that always defects.
Replies from: tetronian2, tetronian2↑ comment by tetronian2 · 2014-07-30T09:33:48.428Z · LW(p) · GW(p)
Thank you! These changes both make sense; I will adjust the tournament structure as you described and enter your bot when the tournament is open.
↑ comment by tetronian2 · 2014-07-30T16:26:04.455Z · LW(p) · GW(p)
Can you explain the rationale behind the elimination setup a little more? An elimination tournament seems less fair than pure round-robin. Moreover, I ran some tests with various combinations of the example bots from the tutorial, and what generally happens is that the bots with strategies along the lines of "I'll cooperate if you do" (i.e. tit-for-tat, justiceBot, mirrorBot) rise to the top and then just cooperate with each other, resulting in a multi-way tie. If the actual pool of submissions contains enough bots with that kind of strategy, a large tie seems inevitable. This result doesn't sound very exciting for a competition, but is there some sense in which it is theoretically "better" than round-robin?
Edit: This outcome doesn't always happen, but it happens most of the time.
Replies from: James_Miller↑ comment by James_Miller · 2014-07-30T20:59:29.242Z · LW(p) · GW(p)
The elimination tournament better simulates evolution and capitalism. With a round robin you can have a successful strategy of taking resources from stupid people. But in nature and the marketplace stupid people won't consistently get resources and so a strategy of taking from them will not be a long-term effective one.
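The structure being proposed (full round-robin, drop the bottom half by total score, repeat) is straightforward to sketch. `play_match` here is a stand-in for a single match between two bots; the scoring details are assumptions:

```python
# Elimination tournament: repeatedly round-robin the pool (including
# self-play, per the rules), then keep only the top-scoring half.
def elimination_tournament(bots, play_match):
    pool = list(bots)
    while len(pool) > 1:
        totals = {bot: 0 for bot in pool}
        for i, a in enumerate(pool):
            for j in range(i, len(pool)):   # j == i is the self-play match
                b = pool[j]
                sa, sb = play_match(a, b)
                totals[a] += sa
                if j != i:
                    totals[b] += sb
        pool.sort(key=lambda bot: totals[bot], reverse=True)  # stable sort
        pool = pool[:max(1, len(pool) // 2)]                  # cut bottom half
    return pool
```

In a one-shot PD, for instance, a pool of two cooperators and two defectors reduces to a single defector within two rounds, illustrating how this format stops rewarding strategies whose score depends on exploitable bots staying in the pool.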
Replies from: tetronian2↑ comment by tetronian2 · 2014-07-30T21:54:25.796Z · LW(p) · GW(p)
Thank you, that is an excellent explanation and you have changed my mind; I will implement an elimination tournament as you described.
comment by fubarobfusco · 2014-07-28T23:47:08.015Z · LW(p) · GW(p)
I was thinking about the idea of lost purposes in my kitchen, and a vivid illustration of the idea occurred to me:
You plan to make homemade ice cream for your partner's birthday party next week, so you put "cream" on your shopping list. The next day, you break up with your partner on surprisingly unfriendly terms. You are no longer going to be attending the birthday party. But then you find yourself at the supermarket, with your shopping list in hand, putting a carton of cream into your cart.
EDIT: The birthday-party/break-up thing is a fictional scenario, not something that actually happened to me. Sorry for any worries!
comment by Punoxysm · 2014-07-28T20:48:43.639Z · LW(p) · GW(p)
I feel like parables here on LW, especially the longer and more tortured ones, are pretty much fallacy and bias breeding grounds. A couple egregious offenders, to my mind, include
Blue and Green Martians; about pick-up artistry
and
The Fable of the Dragon Tyrant; about death
Why do I take issue with them? Because while using analogies, including fanciful ones, can help us take the outside view on a problem where we are irrationally biased, these sorts of parables can also be a selective re-telling of the facts, and conclusions drawn from them simply don't transfer to the real world because of the way those facts are distorted, elided or transformed. Any argument with the conclusion then has to take us back to whatever the real-world analogue is, and explain why the parable is flawed.
In other words, a parable (particularly the long-winded, over-constructed sort people like to post on LW) can give you an outside view, but it often just pulls you away from the only way to actually solve a problem: engaging with it, down to the gritty details that a parable erases or distorts.
Replies from: PeerGynt, pianoforte611↑ comment by PeerGynt · 2014-07-28T21:29:21.641Z · LW(p) · GW(p)
I'm fairly sure this comment was not exactly intended as a compliment, but I can think of worse insults than having my writing put in the same category as Nick Bostrom's. As the author of the first of these parables, even I recognize that these two stories differ very significantly in quality.
The Blue and Green Martians parable was an attempt to discuss a question of ethics that is important to many members of this community, and which it is almost impossible to discuss elsewhere. The decision to use an analogy was an attempt to minimize mindkill. This did not succeed. However, I am fairly sure that if I had chosen not to use an analogy, the resulting flamewar would have been immense. This probably means that there are certain topics we just can't discuss, which feels distinctly suboptimal, but I'm not sure I have a better solution.
Replies from: Punoxysm, None↑ comment by Punoxysm · 2014-07-28T22:10:31.705Z · LW(p) · GW(p)
Well I don't like the dragon parable either. It's overlong, a bit condescending and ignores the core problem that anti-aging research has done a pretty poor job of showing concrete achievements, even if it's right that it's under-prioritized. I was not a fan of yours exactly because I think the parable elides the most important parts of the actual topic. Even if a direct discussion would be flamey, it's not better to discuss a poor proxy. I'm not trying to pick on you, I just think you tried to define the problem with some premises that were well worth dispute.
There are all sorts of other bad analogies out there, though: "If Canada launched missiles at the US, how would it respond?", even though the US hasn't turned Canada into a prison-state over the course of 50 years, is one on the news a lot right now.
Parables about the danger of nuclear weapons that ignore the fact that this danger was successfully handled (there was something on here using it as an analogy for AI).
Also, when parables are kind-of-but-not-really trying to be coy about what they're actually about is a bit annoying, leading to stilted writing (but that's the least of my issues).
EY also has a lot of dubious parables, but tackling those is a subject for a bigger post.
And of course there's the whole genre of parables where two fictional interlocutors are arguing, the strawman 'loses' the argument, and that's supposed to convince us of something. I think LW manages to avoid overt versions of this.
In the realm of politics (both Red/Blue and further-from-mainstream) people often apply "argument by utopia", which suffers similar issues in that it attempts to prematurely define convenient facts and use a narrative to elide gritty, worthwhile details of an issue.
Replies from: ChristianKl, fubarobfusco, blake8086↑ comment by ChristianKl · 2014-07-28T22:46:41.220Z · LW(p) · GW(p)
Parables about the danger of nuclear weapons that ignore the fact that this danger was successfully handled (there was something on here using it as an analogy for AI).
The danger wasn't successfully handled for a lot of values of "successful". The fact that you survive playing Russian roulette doesn't show that you successfully handled danger. Once a nuclear bomb nearly exploded in the US when 3 of 4 safety features of the bomb failed. If I remember right, it would have needed 3 of 3 people in the Russian submarine in the Cuban missile crisis to lunch a nuclear weapon, and 2 of them wanted to lunch it. There are various lost nuclear weapons.
Replies from: sediment↑ comment by sediment · 2014-07-29T15:38:38.186Z · LW(p) · GW(p)
If I remember right, it would have needed 3 of 3 people in the Russian submarine in the Cuban missile crisis to lunch a nuclear weapon, and 2 of them wanted to lunch it.
Two of them got sick of their jobs and decided to just go to lunch. Luckily the third guy stayed at his post and just snacked on a sandwich.
↑ comment by fubarobfusco · 2014-07-29T00:05:24.560Z · LW(p) · GW(p)
Well I don't like the dragon parable either. It's overlong, a bit condescending and ignores the core problem that anti-aging research has done a pretty poor job of showing concrete achievements, even if it's right that it's under-prioritized.
Hmm. I suppose I thought the point of "Dragon Tyrant" was not to narrowly advocate for the anti-aging research program; but rather to get people to take seriously the "naïve" idea that death is bad.
Or, more specifically, to say that even though ① defeating death seems like an insurmountable goal because death has always been around, and ② there are people advocating on a wide variety of grounds against attempting to defeat death, it is nonetheless reasonable and desirable to consider.
"Dragon Tyrant" uses the technique, common to sociology and "soft" science fiction (e.g. Kurt Vonnegut, Douglas Adams), of making the familiar strange — taking something that we are so accustomed to that it is unquestioned, and portraying it as alien.
Replies from: Punoxysm↑ comment by Punoxysm · 2014-07-29T00:33:41.850Z · LW(p) · GW(p)
Well, solving death without solving aging wouldn't be that great, so I tend to look at anti-aging as the main path.
I think that people who don't get on the "death is bad" train do so because they don't take seriously the idea that there's any alternative. And certainly, if you're 80 years old and in poor health and going around denying your own mortality, you're probably in a worse mindset than somebody who accepts that they will soon die. Until research gets to a certain point, being a proud anti-deathist is just pretentious windmill-tilting.
In a few decades maybe technology will hold the serious possibility of living indefinitely, and people will be making choices about whether to receive certain therapies or genetically modify their children in some way; and the arguments to make will be much clearer.
But also, "Dragon Tyrant" is definitely kind of narrow because it very transparently calls politicians murderers for not putting more effort and funds into anti-death research, which is a bit more than the broad stuff you're saying.
↑ comment by blake8086 · 2014-07-30T18:04:15.667Z · LW(p) · GW(p)
I feel like the dragon parable correctly shows, if anything, negative progress being made towards dealing with the dragon, until suddenly, it is dealt with. I suppose one difference is that the anti-dragon projectile seems so much more achievable and imaginable than a cure for aging.
Replies from: Punoxysm↑ comment by [deleted] · 2014-07-29T15:20:07.660Z · LW(p) · GW(p)
What would you think of the following solution?
Announce 'I would like to have conversations about the controversial topic of pick-up artistry. Because talking about it publicly can result in problems, if you want to talk with me about that topic, please send me a message stating your position on it.'
By keeping it open like that and not stating your own position, it seems to be about as not prone to mindkill as you could get.
The downside is, private conversations don't generate as much of a ripple effect. For instance, in the previously mentioned thread, Viliam_Bur essentially created a post which I don't think would ever have been paralleled in a series of private conversations.
(Viliam_Bur's post for reference: http://lesswrong.com/r/discussion/lw/klx/ethics_in_a_feedback_loop_a_parable/b5oz)
↑ comment by pianoforte611 · 2014-07-28T22:05:13.738Z · LW(p) · GW(p)
The point of a parable is not to de-bias a heated political topic and then draw direct conclusions about the original topic. It is an attempt to extract one or two key ideas that tend to get muddied up by contentious object level issues. It is essentially an extended analogy.
If someone makes an argument of the form "If A is X, then Y", then a parable is an attempt to extract this idea from the political arena and then test it on new inputs: "B is X, but is it Y?".
It is not an attempt to get rid of the finer details of an issue, but rather to figure out what those details are, and which details are irrelevant.
Replies from: Punoxysm, Lumifer↑ comment by Punoxysm · 2014-07-28T22:37:03.054Z · LW(p) · GW(p)
I get this. And I think many parables profoundly fail in this. They create a simplistic narrative and conclusion, then make it harder to argue about by transferring the logic over to an analogue.
Then it's hard to get the discussion back onto important details.
De-biasing and removing words that trigger an immediate emotional response is one major use of analogies too though.
↑ comment by Lumifer · 2014-07-29T00:36:57.800Z · LW(p) · GW(p)
It is an attempt to extract one or two key ideas
Given how it portrays women as an undifferentiated passive object with no preferences or any input into the process, I'd say this attempt failed.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-07-29T08:18:05.990Z · LW(p) · GW(p)
What? One of the main premises of the story is that humans, all other things being equal, prefer being tickled by blue Martians to not being tickled, and not being tickled to being tickled by green Martians. Otherwise there would be no ethical problem at all -- Martians could just tickle whomever they wanted.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-29T15:00:10.768Z · LW(p) · GW(p)
Okay, women have a preference along a single axis, which they do nothing about and do not express at all. The framework as described is all about what active, agenty men could or should do to entirely passive NPC women. I'm very far from being a feminist, but come on -- this is objectification and "don't worry your pretty head about it".
Replies from: Viliam_Bur, PeerGynt↑ comment by Viliam_Bur · 2014-07-30T08:46:45.387Z · LW(p) · GW(p)
I have a preference for eating tasty food in restaurants. But I am absolutely not interested in teaching chefs how to cook. If I am not satisfied with the food, I will simply never come back to that restaurant again. There are many restaurants to choose from. I don't really care about what happens to the owner of the bad restaurant; it's their problem, not mine.
Does this make me an entirely passive NPC, because I completely refuse to participate in this "how to get better at cooking" business and merely evaluate my satisfaction with the results? I don't think this would be a fair description. I am not waiting helplessly; my strategy is evaluating different restaurants and choosing the best. Yeah, if we assume that each chef can only make a limited amount of food, I am kinda playing a zero-sum game against other customers here. But still, playing zero-sum games is not passivity.
But a naive chef could complain: "All those customers do is criticize. They never help us, never teach us. How are we supposed to learn? Everyone's first cooked meal is far from perfect. Practice makes perfect, but practice inevitably includes making a few mistakes." From his point of view, the customers are kinda passive: they want better food, but they are not helping anyone to cook better; they merely avoid those who cook worse, which per se does not make them cook better.
(To make it even worse, in this world vocational schools for chefs have a very bad reputation. People believe they all teach you to use the cheapest ingredients and artificial flavors, because they once read an internet forum where a few chefs debated exactly this. Thus most chefs take a great care to avoid anything that could remind their customers of a vocational school.)
Replies from: Lumifer↑ comment by Lumifer · 2014-07-30T14:47:40.475Z · LW(p) · GW(p)
I don't understand your analogy.
First, here you are a consumer. You have no relationship with chefs and are not interested in one. You pay your money, you get your product, and its quality is all you care about. If that product came from a kitchen two blocks down the street, or was flown in frozen from overseas, or was made by a robot chef -- you don't care as long as it's good.
Second, you are active and make decisions. It is not the case that chefs jump on you as you walk down the street and attempt to stuff their food into your mouth. You pick the restaurant you go to. I see no passivity at all, it's just teaching chefs cooking is neither your role nor your desire.
↑ comment by PeerGynt · 2014-07-29T15:33:39.919Z · LW(p) · GW(p)
It is true that some participants in the analogy are "non-player characters". That is because some ethical questions only have implications for the choices of a subset of the agents. It should be permissible to discuss these ethical questions. Doing this properly will require adding information about all stakeholders whenever it is relevant, but it does not necessarily require all stakeholders to be "playable" in the sense that they actively make ethical decisions.
It is also true that the women in my story have a preference on a single axis, and that in real life, they also have preferences on other axes. I did not specify those preferences in the analogy, because I did not see the point in adding complications that do not have relevance to the resolution of the ethical question, which is a choice faced only by Martians.
If you feel that there is an additional axis which has important implications for the ethical choice that the Green Martians are facing, please specify what that axis is and why it is important. This would be an important contribution to the discussion. Otherwise, this comes across as saying "you should have added additional complications that were not relevant, in order to sufficiently signal that women are important ethical agents and not objects".
The fact that women are important ethical agents is so obvious that it is not even worth debating. However, I shouldn't have to signal this at every opportunity as a precondition for taking part in the discussion, especially not when this would require me to add unnecessary information to the story.
As for why the women don't express their preference not to be tickled by green martians, this is simply because I took this preference to be obvious and common knowledge to all participants in the analogy.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-29T16:45:47.059Z · LW(p) · GW(p)
because I don't see the point in adding complications that do not have relevance to the discussion
Your parable is flawed at the core because you made a basic category mistake. Flirting is not an action, not something one person does to another one. It is interaction, something two people do together.
Deciding that one person in that interaction controls the encounter and does things, while the other is just a passive receptacle to the extent that not even her consent is required, never mind active participation, is not a useful framework for looking at how men and women interact.
comment by chaosmage · 2014-07-29T12:42:35.187Z · LW(p) · GW(p)
I've started to play with directed graphs kind of like Bayesian networks to visualize my belief structures. So a node is a belief (with some indication of confidence), while connections between graphs indicate how beliefs influence (my confidence in) other beliefs.
This seems useful for summarizing complex arguments, for memorizing them, and (when looking at a belief structure that's bigger than my working memory) for organizing and revising my thinking.
However, there are a few decisions in how to design the visual language of such graphs that I can't see obvious solutions to. If I include necessity and sufficiency, which seems really useful, how does that square with the confidence calculations? How should I represent negation (the other logical connectives are fairly obvious)? Should I have different types/shapes of nodes, and if so, which?
So I'd like to see the work of others who have done similar diagrammatic depictions of belief networks, to play with them and see what works for me. I've seen influence diagrams, but I'm not convinced the choices made there are obviously the best ones. Does anyone have pointers to other existing Bayesian diagram schemes I should look at?
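For what it's worth, here is one minimal way to encode such a graph: nodes carry a confidence in [0, 1], and edges carry a sign (+1 for support, -1 for negation) and a weight as a crude stand-in for necessity/sufficiency. The blending rule is a naive weighted average of my own invention, not proper Bayesian updating, so treat it as an organizational aid rather than a probability calculation:

```python
# Belief graph: directed signed edges; negation is an edge with sign -1.
# Propagation is deliberately naive: a 50/50 blend of the node's own
# prior and the weighted pull of its parents.
class BeliefGraph:
    def __init__(self):
        self.conf = {}    # belief -> prior confidence in [0, 1]
        self.edges = {}   # belief -> list of (parent, sign, weight)

    def add_belief(self, name, confidence):
        self.conf[name] = confidence
        self.edges.setdefault(name, [])

    def add_link(self, parent, child, sign=+1, weight=1.0):
        self.edges.setdefault(child, []).append((parent, sign, weight))

    def combined(self, name):
        parents = self.edges.get(name, [])
        if not parents:
            return self.conf[name]
        total = sum(w for _, _, w in parents)
        pull = sum(w * (self.conf[p] if s > 0 else 1.0 - self.conf[p])
                   for p, s, w in parents) / total
        return 0.5 * self.conf[name] + 0.5 * pull

g = BeliefGraph()
g.add_belief('exercise improves mood', 0.9)
g.add_belief('I should precommit to exercise', 0.6)
g.add_link('exercise improves mood', 'I should precommit to exercise')
```

With these numbers, `combined('I should precommit to exercise')` comes out at 0.75; flipping the link's sign to -1 would instead pull it toward 1 - 0.9.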
Replies from: blake8086, NancyLebovitz↑ comment by blake8086 · 2014-07-30T17:56:24.083Z · LW(p) · GW(p)
http://systemsandus.com/ uses + and - to denote it, and I guess they just assume you can mostly keep track. I feel like it works on simple diagrams.
↑ comment by NancyLebovitz · 2014-07-29T13:30:15.086Z · LW(p) · GW(p)
Would it work to use red connections to indicate negations? If red is too emphatic, how about connections with dashes crossing the main line? How about thickness of lines to indicate how sure you are of a connection?
comment by MrMind · 2014-07-29T07:05:20.752Z · LW(p) · GW(p)
If I wanted to learn about a precise formulation of UDT, where should I look / who should I ask? Info on the wiki is hopelessly outdated, and there is no single clear exposition.
Replies from: Manfred, Richard_Kennaway↑ comment by Manfred · 2014-07-29T09:30:28.009Z · LW(p) · GW(p)
Have you read Wei Dai's posts already? E.g. here and here.
My one-sentence description is "find the best strategy, then follow it."
If you want a more formal approach, cousin_it's post is probably a good place to start - if you like that, there are more posts by him that might be a good idea to read.
Replies from: cousin_it↑ comment by Richard_Kennaway · 2014-07-29T09:46:03.733Z · LW(p) · GW(p)
Is anyone working on formalising UDT, TDT, and the like, for publication in academic journals? Has any of it appeared there already?
Replies from: protest_boy↑ comment by protest_boy · 2014-07-31T04:02:09.920Z · LW(p) · GW(p)
There is this paper, http://commonsenseatheism.com/wp-content/uploads/2014/04/Hintze-Problem-class-dominance-in-predictive-dilemmas.pdf which was an honors thesis.
More discussion relevant to the state of UDT and TDT in this comment: http://lesswrong.com/lw/k3m/open_thread_2127_april_2014/au6e
comment by mare-of-night · 2014-07-29T23:22:13.776Z · LW(p) · GW(p)
My diet seems to influence my mind and body a lot more strongly than is normal. (Food intolerances that mess with my emotions or focus, apparent hypoglycemia that goes away when I take vitamin B, that sort of thing. I know a lot of people have something like this, but I've got so many that diet is the default first suspect whenever anything goes wrong.) I'm not sure whether this makes me a potentially useful test subject for things like nootropics because the effects might get inflated and easier to notice, or just an outlier whose results won't work on anyone else. I also wonder if this means there might be foods that have good effects on me for no apparent reason, in which case I might experiment to find them.
Could someone who knows more about biology than I do offer some insight?
Replies from: VAuroch, James_Miller↑ comment by VAuroch · 2014-07-30T08:28:15.853Z · LW(p) · GW(p)
I have noticed that maintaining a decent diet makes a massive difference to my mental state, but I have no reason to think this is unusual. You may not be either.
In short, consider generalizing from one example more.
Replies from: mare-of-night↑ comment by mare-of-night · 2014-07-30T18:14:33.710Z · LW(p) · GW(p)
I've suspected that food might have more effect on people in general than popular opinion says it does. But I act really differently on my diet vs. when eating what most Americans eat (I haven't tried eating normally since childhood because the effects are too unpleasant, but I've made enough mistakes in a row to come close on one occasion - see my comment to James_Miller), and most other people act more like me on a good diet than me on a bad diet.
I've considered generalizing from one example when it comes to people who do act similar to me with a bad diet. I tend to keep quiet about it because it comes off as really insensitive to tell someone that their depression might be caused by the candy they eat, when I don't have any evidence for that besides generalizing from my own experience.
↑ comment by James_Miller · 2014-07-30T03:53:05.858Z · LW(p) · GW(p)
My diet seems to influence my mind and body a lot more strongly than is normal.
Alternative hypothesis: Diet has a huge influence on mind and body but most people lack the mindfulness to notice.
The Food Sense App might help you. http://www.bulletproofexec.com/find-your-kryptonite-with-the-free-bulletproof-food-sense-iphone-app/
Replies from: mare-of-night↑ comment by mare-of-night · 2014-07-30T18:00:55.517Z · LW(p) · GW(p)
Good hypothesis. I still don't think I'm completely normal, because when I eat a typical American diet I can't function in society, which most people seem to be able to do. (Mainly thinking of a family trip where I ate out a lot and wasn't as careful as usual, and after a couple days I was breaking down crying about once a day, at things that would normally just annoy me.) But then, I could see the more subtle symptoms being things that people assume are chronic problems they can't change. Alternately, my normal is near some kind of borderline for a mental problem and that's why my diet can push it over so easily.
I've also wondered how much of mental illness could be caused by this sort of thing. I was told that as a child, a doctor thought I had ADHD and was about to have me tested, and then my mom forgot to buy bananas at the store one week and my behavior suddenly improved. It seems likely that other children with the same problems I have exist, and most of their parents weren't already alert to dietary influences.
Thanks for the app link. I don't have iphone, but I bet I can find something similar for android.
I guess what I should do first is hit up a library database and find out if anyone has already researched this. (I've made a few efforts to look it up before, but mostly just google searches - though I did find that mental symptoms for corn and milk allergies aren't unheard of.) If nutritionists don't think food works this way, but also haven't studied this specifically and found it false, I'm not sure if I should try to do my own experiment or not.
Replies from: Lumifer↑ comment by Lumifer · 2014-07-30T18:10:40.736Z · LW(p) · GW(p)
I guess what I should do first is hit up a library database and find out if anyone has already researched this.
Don't think it'll help you much -- you need to find out how you work, not what happens to some sample of some population of people none of whom are you.
In your place I'd start keeping a detailed mood/mental state diary AND a detailed food diary. After a few months you should be able to get a decent idea of what kinds of food do what to you.
You might also want to talk to gwern -- he does "how X affects me" mini-studies and has good methodology.
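A minimal version of the paired food/mood diary analysis might look like this; the foods and mood scores below are invented for illustration:

```python
# Each diary entry: (set of foods eaten that day, mood score 0-10).
# Comparing average mood on days with vs. without a food is the crudest
# possible analysis -- it ignores lag, confounds, and multiple comparisons.
diary = [
    ({'milk', 'bread'}, 3),
    ({'rice', 'chicken'}, 7),
    ({'milk', 'rice'}, 4),
    ({'bread', 'chicken'}, 6),
    ({'rice'}, 8),
]

def mood_split(diary, food):
    """Average mood on days the food was eaten vs. days it wasn't."""
    with_food = [mood for foods, mood in diary if food in foods]
    without = [mood for foods, mood in diary if food not in foods]
    avg = lambda xs: sum(xs) / len(xs) if xs else None
    return avg(with_food), avg(without)
```

Here `mood_split(diary, 'milk')` returns (3.5, 7.0) -- the kind of gap that would flag milk for a proper elimination-and-reintroduction test, rather than prove anything on its own.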
Replies from: mare-of-night↑ comment by mare-of-night · 2014-07-30T18:48:53.198Z · LW(p) · GW(p)
Sorry, I didn't make the intent clear. I do want to do more experiments on myself, and I need to work on figuring out a non-annoying way to collect data so I can do that. But I'm also really curious how common this sort of thing is in other people. So the library research is for testing your alternate hypothesis, and my hypothesis that some people are strongly influenced by food but mistake it for a chronic problem.
Replies from: Lumifer, Zian↑ comment by Lumifer · 2014-07-30T19:22:06.689Z · LW(p) · GW(p)
But I'm also really curious how common this sort of thing is in other people.
Browse forums for non-mainstream diets, e.g. paleo or vegan. You'll find LOTS of stories by people who found out that a change in their diet massively affects their health and/or mental state.
The thing is, though, on paleo forums the stories will be "So I stopped eating carbs and the mental fog just lifted and now I have energy..." and on vegan forums the stories will be "So I stopped eating animal products and the mental fog just lifted and now I have energy..." :-D It's all very individual, you still will need to figure out how you react to stuff.
Replies from: mare-of-night↑ comment by mare-of-night · 2014-07-30T19:38:58.175Z · LW(p) · GW(p)
Ah, thanks :) I figured that different diets are good for different people, since that's what seemed to happen for people I know. But I wanted to find out how common and how extreme that sort of thing is, since if people are getting results like "I can handle going to school now", then people should be more aware of it than they are.
I'm pretty sure I already know the most important reactions for me - I've gotten to the point that there's not anything really really wrong anymore. I didn't expect the rest to just be in a book somewhere, since what I've already found out by experimenting doesn't match up to any known pattern other than "diet does stuff".
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-07-31T01:53:18.214Z · LW(p) · GW(p)
You might want to poke around Chris Kresser's blog-- he's a non-doctrinaire paleo guy, and his commenters include a lot of people with unusual symptoms.
Replies from: mare-of-night↑ comment by mare-of-night · 2014-07-31T02:25:22.911Z · LW(p) · GW(p)
Thanks! I'll have a look.
comment by Ben Pace (Benito) · 2014-07-29T14:30:16.906Z · LW(p) · GW(p)
Please vote on the removal of the brain/head image from the homepage.
[pollid:741]
Replies from: gwern, None, tut↑ comment by gwern · 2014-07-29T15:59:32.686Z · LW(p) · GW(p)
Why discuss it? Wouldn't it be better to A/B test which version encourages more new visitors to click on a link to another page?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-07-29T17:22:14.785Z · LW(p) · GW(p)
Yes, you're right. I didn't like the change, that's all, and was hoping for a majority to back me. But if anyone wants to do that, it would certainly be a good idea.
Replies from: jackk↑ comment by [deleted] · 2014-07-30T09:58:25.425Z · LW(p) · GW(p)
I thought the brain image was removed because it wasn't accurate.
↑ comment by tut · 2014-07-29T16:39:35.958Z · LW(p) · GW(p)
Didn't we have a special poll thread so that the RSS feed for the open thread would work?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-07-29T17:23:06.986Z · LW(p) · GW(p)
Oops. Did I mess something up?
comment by Scott Garrabrant · 2014-07-29T07:52:12.548Z · LW(p) · GW(p)
For a quick reminder of the power of many independent trials, estimate and then answer the following question:
I have 2 biased coins in my pocket. The first comes up heads with probability 51%, while the second comes up heads with probability 49%. I take a coin out of my pocket, uniformly at random, and flip it a million times. I observe that it comes up heads 508,634 times. What is the probability that it is the first coin?
Replies from: Lumifer, army1987, philh, DanielLC, Manfred↑ comment by Lumifer · 2014-07-31T19:53:35.506Z · LW(p) · GW(p)
Under the usual convention of 95% significance, it's neither :-D
> binom.test(508634, 1000000, 0.51)
Exact binomial test
data: 508634 and 1e+06
number of successes = 508634, number of trials = 1e+06, p-value = 0.006304
alternative hypothesis: true probability of success is not equal to 0.51
95 percent confidence interval:
0.5077 0.5096
sample estimates:
probability of success
0.5086
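For comparison, the exact Bayesian posterior (assuming independent flips and a uniform prior over the two coins) is a one-liner: the binomial likelihood ratio telescopes down to (p1/p2)^(heads - tails), since the binomial coefficients and the matched p-vs-(1-p) factors cancel.

```python
from math import log10

p1, p2 = 0.51, 0.49
n, heads = 1_000_000, 508_634
tails = n - heads

# P(data | coin 1) / P(data | coin 2)
#   = (p1/p2)**heads * ((1-p1)/(1-p2))**tails
#   = (p1/p2)**(heads - tails)        since 1-p1 == p2 and 1-p2 == p1
log10_odds = (heads - tails) * log10(p1 / p2)
print(log10_odds)  # about 300
```

So the posterior odds are roughly 10^300 to 1 in favor of the first coin: effectively certain, within the model.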
↑ comment by A1987dM (army1987) · 2014-07-30T12:05:17.619Z · LW(p) · GW(p)
Very very close to 1, if the trials are truly independent. But (as Jaynes mentioned in PT:TLoS) there are ways of flipping a coin that systematically favour one side over the other, and you might be unwittingly doing something like that. IOW inside the argument the probability that you took the second coin is negligible, but outside the argument it isn't.
↑ comment by philh · 2014-07-29T13:50:17.117Z · LW(p) · GW(p)
Gut: vg'f tbvat gb or fhssvpvragyl pybfr gb bar gung gur cebonovyvgl vf whfg tbvat gb ybbx yvxr n ohapu bs avarf gb zr.
Fermi estimate: VVEP bar qrpvory vf nobhg svsgl fvk creprag, naq crepragf punatr snfgre jura qrpvoryf ner fznyy, fb svsgl bar creprag vf yrff guna unys n qrpvory. Fb yrg'f fnl gung svsgl bar creprag vf nobhg mreb cbvag bar qo. Fb rnpu urnq cebivqrf abhtug cbvag bar qrpvoryf sbe pbva bar, naq rnpu gnvy cebivqrf artngvir gung sbe gur fnzr. Lbh unir nobhg gjragl gubhfnaq zber urnqf guna gnvyf, fb nobhg gjb gubhfnaq qo rivqrapr sbe pbva bar. Rirel gra qrpvoryf pbeerfcbaqf gb vapernfvat gur bqqf ol n snpgbe bs gra, fb cebonovyvgl vf gura fbzrguvat yvxr bar zvahf gra gb gur zvahf gjb uhaqerq.
Calculation: svsgl bar creprag vf abhtug cbvag bar frira qrpvoryf. Gurer ner gjb gvzrf rvtug gubhfnaq, fvk uhaqerq naq guvegl sbhe zber urnqf guna gnvyf. Gung znxrf nyzbfg rknpgyl guerr gubhfnaq qrpvoryf sbe urnqf. Fb gur bqqf ner gra gb gur guerr uhaqerq, tvivat cebonovyvgl nyzbfg rknpgyl bar zvahf gra gb gur zvahf guerr uhaqerq.
Compared to Manfred's answer: Jr qvssre ol n snpgbe bs gra gb gur gjb uhaqerq naq guerr. V qba'g pheeragyl unir gur gvzr gb jbex bhg jul, naq jura V qb unir gvzr V znl abg pner rabhtu.
Edit: I think I did it wrong. No time to correct it currently, but the true answer should be higher than mine.
Edit 2: Maybe not? I thought I needed to use both P(H|C1)/P(H|C2) and P(H|C1)/P(T|C1), which are confusingly identical. But when I actually put it on paper, it looks correct.
Replies from: Manfred↑ comment by Manfred · 2014-07-29T18:32:30.191Z · LW(p) · GW(p)
I have us being different by a factor of 10^40, but yeah, that's a bit surprising. Maybe we're far enough out in the tails that the normal approximation is breaking down?
Replies from: philh↑ comment by philh · 2014-07-29T23:24:23.979Z · LW(p) · GW(p)
Oh, I misbracketed your formula. Yes, 10^40.
I don't offhand have a model for why we expect your method to work, so I don't know why it fails. But another approach using the normal approximation gets within a factor of 10, so that shouldn't be it.
Um, I think you're just counting standard deviations in the wrong direction? You're counting standard deviations from 500,000 and doubling them, but the relevant distribution means are 510,000 and 490,000.
But no, those should be equivalent.
Oh! You're squaring a sum, not summing a square. You're counting the correct number of standard deviations in total, but you need the correct number for each distribution.
Dammit LW, stop nerd sniping me.
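A quick numeric check of that point, assuming independent flips: scoring the observed count against each hypothesis' own mean reproduces the exact binomial log-likelihood ratio, while doubling the distance from 500,000 does not.

```python
from math import log, sqrt

p1, p2, n, h = 0.51, 0.49, 1_000_000, 508_634
sd = sqrt(n * p1 * p2)  # same under either hypothesis, about 500

# z-score of the observed head count under each hypothesis' own mean
z1 = (h - n * p1) / sd
z2 = (h - n * p2) / sd

exact = (2 * h - n) * log(p1 / p2)     # exact binomial log-likelihood ratio (nats)
approx = (z2**2 - z1**2) / 2           # normal approx., per-distribution z-scores
wrong = (2 * (h - n / 2) / sd)**2 / 2  # doubling the z-score from 500,000 instead

print(exact, approx, wrong)  # about 691, 691, and 597 nats
```

The per-distribution version matches the exact ratio to within a fraction of a nat; the symmetric shortcut falls short by roughly 94 nats, i.e. about 40 orders of magnitude.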
Replies from: Manfred↑ comment by DanielLC · 2014-07-29T18:39:30.227Z · LW(p) · GW(p)
Do you take into account the possibility that you miscounted, or are hallucinating, or any of the other events that are far more likely explanations than that it comes up heads with probability 49% and it came up heads that often just by chance?
Replies from: othercriteria, othercriteria↑ comment by othercriteria · 2014-07-29T23:22:02.613Z · LW(p) · GW(p)
When I know I'm to be visited by one of my parents and I see someone who looks like my mother, should my first thought be "that person looks so unlike my father that maybe it is him and I'm having a stroke"? Should I damage my eyes to the point where this phenomenon doesn't occur to spare myself the confusion?
Replies from: RowanE↑ comment by othercriteria · 2014-07-31T15:37:48.689Z · LW(p) · GW(p)
This comment rubbed me the wrong way and I couldn't figure out why at first, which is why I went for a pithy response.
I think what's going on is I was reacting to the pragmatics of your exchange with Coscott. Coscott informally specified a model and then asked what we could conclude about a parameter of interest, which coin was chosen, given a sufficient statistic of all the coin toss data, the number of heads observed.
This is implicitly a statement that model checking isn't important in solving the problem, because everything that could be used for model checking, e.g., statistics on runs to verify independence, the number of tails observed to check against a type of miscounting where the number of tosses don't add to 1,000,000, mental status inventories to detect hallucination, etc., is left out of the statistic communicated.
Maybe Coscott (the fictional version who flipped all those coins) did model checking or maybe not, but if it was done and the data suggested miscounting or hallucination, then Coscott wouldn't have stated the problem like this.
So, yeah, the points you raise are valid object-level ones, but bringing them up this way in a problem poser / problem solver context was really unexpected and seemed to violate the norms for this sort of exchange.
Replies from: DanielLC↑ comment by Manfred · 2014-07-29T09:21:56.864Z · LW(p) · GW(p)
Shortcut: Gurfr ner onfvpnyyl abezny qvfgevohgvbaf jvgu fgnaqneq qrivngvba fdeg(a)/2 = svir uhaqerq (gur snpgbe bs 1/2 pbzrf sebz gur c(1-c) grez va gur inevnapr). Gur qvssrerapr orgjrra gur gjb vavgvnyyl-rdhny cbffvovyvgvrf vf gura nobhg 34 fgnaqneq qrivngvbaf. Jung'f bar zvahf r gb gur zvahf 34 fdhnerq bire 2?
comment by Furcas · 2014-07-29T03:31:58.610Z · LW(p) · GW(p)
In his latest newsletter Louie Helm advises taking "activated" vitamin D in the form of Calcitriol or Paricalcitol, to raise one's Klotho levels, which is likely to increase one's IQ and longevity if you don't already have the gene for it. Since Calcitriol and Paricalcitol aren't over-the-counter, what would be the best way to acquire some?
http://rockstarresearch.com/increase-longevity-and-intelligence-with-boosted-klotho-levels/
Replies from: niceguyanon, James_Miller, ChristianKl, polymathwannabe↑ comment by niceguyanon · 2014-07-29T17:26:48.145Z · LW(p) · GW(p)
Check the comments near the bottom. Not the pet pharmacy link.
Replies from: satt↑ comment by satt · 2014-07-31T18:29:05.622Z · LW(p) · GW(p)
Seems Michael Jackson is back from the dead and commenting on Louie's blog. The salubrious powers of vitamin D know no bounds!
↑ comment by James_Miller · 2014-07-29T21:44:33.920Z · LW(p) · GW(p)
If it's true that the GT version gives you increased intelligence then there should be a dating service that matches TT with GG because their children would all be GT.
↑ comment by ChristianKl · 2014-07-29T10:55:30.044Z · LW(p) · GW(p)
How good is the case for ingesting Calcitriol/Paricalcitol rather than standard cholecalciferol?
↑ comment by polymathwannabe · 2014-07-29T04:24:10.030Z · LW(p) · GW(p)
Your liver and kidneys already produce the necessary precursors. Eat enough dairy and get enough sunlight, and your body will do the rest.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-07-29T11:39:12.043Z · LW(p) · GW(p)
if you don't already have the gene for it.
comment by William_Quixote · 2014-07-30T22:33:35.284Z · LW(p) · GW(p)
Less Wrong overlaps with Overcoming Bias, which plugs SciCast. So there should be a bunch of folks on here competing in SciCast. If so, how are you doing?
comment by Toggle · 2014-07-29T17:05:50.348Z · LW(p) · GW(p)
I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us.
A thought occurred to me a while back. Call it the "Ghostbusters" approach to the existential risk of AI research. The basic idea is that rather than trying to make the best FAI on the first try, you hedge your bets. Work to make an AI that is a) unlikely to disrupt human civilization in a permanent way at all, and b) available for study.
Part of the stress of the 'one big AI' interpretation of the intelligence explosion is the sense that we'd better get it right the first time. But on the other hand, surely the space of all nonthreatening superintelligences is larger than the space of all helpful ones, and a comparatively easier target to hit on our first shot. You're still taking a gamble. But minimizing this risk seems much easier when you are not simultaneously trying to change human experience in positive ways. And having performed the action once, there would be a wealth of new information to inform later choices.
So I'm trying to decide if this is obviously true or obviously false: p(being destroyed by a primary FAI attempt) > p(being destroyed by a "Ghostbusters" attempt) * p(being destroyed by a subsequent more informed FAI attempt)
Replies from: drethelin↑ comment by drethelin · 2014-07-29T18:24:31.743Z · LW(p) · GW(p)
If you're making an AI for study, it shouldn't be super-intelligent at all; ideally it should be dumber than you. I can imagine an AGI that can usefully perform some tasks but is too stupid to self-modify into fooming if constrained. You can let it be in charge of opening and closing doors!
Replies from: Toggle↑ comment by Toggle · 2014-07-29T18:57:01.127Z · LW(p) · GW(p)
Well, I definitely agree that we should make non-super intelligent AIs for study, and also for a great many other reasons. But it's perhaps less clear what 'too stupid to foom' actually means for an AGI. There was a moment when a hominid brain crossed an invisible line and civilization became possible; but the mutation precipitating that change may not have obviously been a major event from the perspective of an outside observer. It may just have looked like another in a sequence of iterative steps. Is the foom line in about the same place as the agriculture line? Is it simpler? Harder?
On the other hand, it's possible to imagine an experimental AGI with values like "Fulfill [utility function X] in the strictly defined spatial domain of Neptune, using only materials that were contained in the gravity well of Neptune in the year 2000, including the construction of your own brain, and otherwise avoid >epsilon changes to probable outcomes for the universe outside the domain of Neptune." Then fill in whatever utility function you'd like to test; you could try this with each new iteration of AGI methodology, once you are actionably worried about the possibility of fooming.
comment by [deleted] · 2014-07-29T15:36:45.949Z · LW(p) · GW(p)
I am looking for methods by which I can gain experience working with state or federal (American) organizations. I plan to begin applying for jobs with government libraries and archives next year, and I would like some experience besides what I am doing for my current job. I do not mean that my current job is pointless, only that I feel no reason not to spend time augmenting it.
As I am not a student, I cannot apply for online internships, which was my first plan. I could enroll in an online class but I do not have the money for the tuition. So, I am looking for other options. Anything that will help me get a foot in the door and begin to network with people and organizations within the government.
Does anyone have any suggestions?
Replies from: VAuroch, fubarobfusco↑ comment by VAuroch · 2014-07-30T08:29:09.712Z · LW(p) · GW(p)
Are you sure that these online internships have as an explicit condition the requirement that you be a student? Most internships have no such condition.
Replies from: None↑ comment by [deleted] · 2014-07-30T13:35:56.346Z · LW(p) · GW(p)
The internships I have found so far through resources like the State Department and usajobs.gov are all specifically for students. I've also emailed the person in charge of State Department internships, asking if she knows of any non-student internships, and she has not heard of any.
Of course, this does not mean there is no cache of non-student internships out there. I simply have not found any through the resources I know of.
Replies from: VAuroch↑ comment by VAuroch · 2014-07-31T08:29:05.331Z · LW(p) · GW(p)
It's fairly plausible that most/all government internships are specifically for students, so if you've looked carefully and not found things then they may not exist. However, consider looking for local government; it will probably transfer well, especially if you can find internships with large cities or counties/regions.
Replies from: None↑ comment by [deleted] · 2014-07-31T13:03:50.347Z · LW(p) · GW(p)
From what sources I have found/spoken to, this looks like the case. The only internships appear to be student ones and there does not seem to be any non-student equivalents for getting your foot in the door. Networking and connections seem the best route.
↑ comment by fubarobfusco · 2014-07-30T17:10:33.432Z · LW(p) · GW(p)
If you code, talk to the Code for America folks?
comment by Metus · 2014-07-28T21:34:01.579Z · LW(p) · GW(p)
What are your opinions on professional certificates? Do they actually increase potential earnings, or do they only make money for the certifying body? And are there very broad certificates that are useful in any vaguely quantitative profession that uses mathematical models?
I ask because I am in the later part of studying physics and pretty sure that I neither want to nor will be able to make it in academia, so I am working on alternative plans. I figured that some certificates could enhance my employability or point to some alternatives I haven't been aware of yet. What do you think?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-28T21:49:57.390Z · LW(p) · GW(p)
Where do you want to be employed?
Replies from: Metus↑ comment by Metus · 2014-07-28T22:05:11.846Z · LW(p) · GW(p)
I have yet to find out. Currently I am in talks with an IT company for a paid internship where I could acquire some IT-related experience, such as setting up and understanding IT infrastructure and working in a larger coding project. There is some appeal to that. On the other hand, I love mathematical models, which this kind of field would be lacking. A major problem of mine is that I am absolutely not willing to work long hours, as I value my time very highly and want to use it to learn about the world while I am without a family, and to spend it with my family once I have one.
Replies from: iarwain1↑ comment by iarwain1 · 2014-07-28T22:42:35.721Z · LW(p) · GW(p)
Have you considered working in an IT support capacity for a company that does things using mathematical models? Done right this might allow you to use your love of math while also not working long hours.
Replies from: Metus↑ comment by Metus · 2014-07-28T22:47:53.309Z · LW(p) · GW(p)
I have not; thank you for the suggestion. Examples would be IT support for an insurance company or similar, I think. Would experience as an actuary be beneficial? I am actually working on a (most probably unpaid) internship as an actuary too.
Replies from: iarwain1↑ comment by iarwain1 · 2014-07-29T00:05:53.883Z · LW(p) · GW(p)
I can't say myself since I'm sort of in the same boat as you are. I love research and abstract thinking, but I value time spent on family and relaxation more, so I decided against going into academia. IT / programming support for a research position is just an idea that I'd seen suggested I think in a few places. For myself I'm hoping to go for data science since that's very closely related to many types of research and if a research job doesn't work out then at least it pays quite well.
comment by Zian · 2014-08-03T03:36:18.331Z · LW(p) · GW(p)
In light of how long it usually takes for statistical models and discoveries to crawl out of academic articles and into practice, the LessWrong community will probably appreciate the efforts by the Consortium of Food Allergy Research (established with money from the US National Institutes of Health) to provide online probabilistic calculators for people's long-term prognoses:
comment by Ixiel · 2014-07-31T13:34:01.283Z · LW(p) · GW(p)
Sorry if this has been addressed, but is there a way to buy a copy of the major sequences (I assume it'd be too long without the word "major?") in a dead tree book form? Further, if not, anyone interested in getting relevant permissions and putting one together on Lulu or some such? I like highlighting my books, and know some folks I'd like to give the major sequences, but stapled copies would make me feel like a crackpot pamphleteer. Thanks in advance if there is a solution.
Replies from: None↑ comment by [deleted] · 2014-07-31T13:45:25.039Z · LW(p) · GW(p)
As far as I am aware, this is one of those "it's in the works" things. I know that MIRI is still asking for volunteers to proofread and edit the sequences, so I assume that, no, there is not yet a book form.
Replies from: Ixiel↑ comment by Ixiel · 2014-07-31T14:29:41.682Z · LW(p) · GW(p)
Oh nice! I was planning to match my purchase price in a Miri donation so they got a piece anyway. Thanks for speedy reply.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-07-31T23:11:05.151Z · LW(p) · GW(p)
I am not sure about the plan, but I thought only an ebook version was planned. So -- if that is true -- you would have to print your own copy anyway. The biggest change is that someone else has already selected the articles and made a nice layout. Some of those articles were also redacted. It has over 2000 half letter (A5) pages.
By the way, if you don't mind a few typos and prefer speed, you could ask them to send you a working version. Also, the book contains a lot of hyperlinks (both internal and external), which will not work in the printed version. So, it will not be perfect either way. -- Unless someone also creates a printable version, which would e.g. replace the hyperlinks with footnotes. Again, I am not sure, but I think they are doing it in TeX, so finding a volunteer TeX expert could increase the probability of having the printable version. (Yet another possibility would be to convert their source files automatically in some other format, such as RTF or HTML, add the hyperlinks, and print it.)
Replies from: Ixiel
comment by [deleted] · 2014-07-29T15:23:41.964Z · LW(p) · GW(p)
I am looking for a proofreader or three. The thing I would like proofed is short and so could easily be sent over PMs. I would like an outside view before submitting it.
Replies from: polymathwannabe, ChristianKl↑ comment by polymathwannabe · 2014-07-29T16:55:58.793Z · LW(p) · GW(p)
I'd be glad to help.
↑ comment by ChristianKl · 2014-07-29T15:35:24.362Z · LW(p) · GW(p)
What's the topic?
Replies from: Nonecomment by Qwake · 2014-07-29T09:36:25.109Z · LW(p) · GW(p)
"Memory is the framework of reality." This quote just popped into my head recently and I can't stop thinking about it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-07-29T13:27:41.334Z · LW(p) · GW(p)
Or...
Memory is the framework of "reality".
Damn, now I want quote-marks with percentages attached.
Replies from: sedimentcomment by DanielLC · 2014-08-03T09:02:13.464Z · LW(p) · GW(p)
Regarding where to draw the boundary, are R, W, and Y really vowels? The biggest difference I've noticed between vowels and consonants is that vowels don't involve touching parts of your mouth together. This makes it a lot easier to transition between letters. The second thing is that vowels are always voiced. H is otherwise a vowel, but it seems like it might be worth calling a consonant on that basis.
R at least is always considered a consonant, but the sound W makes is considered a vowel if it's a hard U or an OO, and the sound Y makes is considered a vowel if it's a hard E or the second half of a hard I (which seems to be a soft O followed by a hard E). Also, Y is often stated as being sometimes a vowel, even though it seems to always be a hard E.
W and Y are generally placed like a consonant would be, but this doesn't seem like it means much, since vowels can be placed anywhere. Consonants can't be placed three in a row. (Unless the first two have the same place, the first is not a stop, and the second is, like "mp" or "nt", which makes it particularly easy to pronounce. This leads to things like the Japanese word senpai being consistently mispronounced by Japanese speakers as "sempai", even though the only consonant Japanese syllables can end with is an N.) It looks like W and Y have this rule, but it's not so much that Y is never between two consonants as it is that if a hard E is between two consonants, it's not represented with a Y. And this still doesn't excuse R, which is frequently the only vowel between two consonants. For example, "bird", "herd", and "turd" are all written with different "vowels" next to the R, but they are all pronounced with just the R sound.
This always bugged me a lot. Am I the only one that sees this?
Replies from: polymathwannabe, army1987, Richard_Kennaway, erratio, ChristianKl↑ comment by A1987dM (army1987) · 2014-08-04T13:48:32.502Z · LW(p) · GW(p)
(Whoops, this came out excessively snarky for no good reason; let me make the same points in a more civil manner. I shouldn't comment when tired or stressed out.)
↑ comment by Richard_Kennaway · 2014-08-04T08:23:44.487Z · LW(p) · GW(p)
Consonants can't be placed three in a row. (Unless the first two have the same place, the first is not a stop, and the second is, like "mp" or "nt"
Upthrust. Backstop. Blackstrap (4 in a row!) Axminster (twice, -ksm- and -nst-). Schatzkammer. Opschrijven. All of these violate this proposed rule. Not to mention things like xłp̓x̣ʷłtłpłłskʷc̓, in a language that stretches the very idea of a syllable.
This leads to things like the Japanese word senpai being consistently mispronounced by Japanese speakers as "sempai"
That should be "consistently pronounced." However the native speakers consistently pronounce something is right.
The concept of a consonant has fuzzy edges -- see polymathwannabe's comment. Why is this a problem?
If the data persistently fail to conform to rules abstracted from the data, it is the rules that are wrong.
Replies from: DanielLC, polymathwannabe↑ comment by DanielLC · 2014-08-04T20:33:49.053Z · LW(p) · GW(p)
All of these violate this proposed rule.
After reading some of these comments, there are more exceptions than that, and I wrote it confusingly. So how about this: you cannot have more than one consonant on the same side of a syllable without extenuating circumstances. Having two of them share a place of articulation (like the s and t in backstop) is a common one. s and z seem to only be possible after consonants that are unvoiced and voiced respectively. Neither can be placed after a ʒ (the second half of a j sound).
That should be "consistently pronounced." However the native speakers consistently pronounce something is right.
They pronounce it in a way that violates the theory of what can and cannot be pronounced in Japanese. As far as I can understand, the Japanese syllabary has one character for each syllable. Each syllable has one consonant, then one or more vowels, then possibly an n. There is no syllable "sem".
I don't know if they always pronounce it "sempai". I know it is at least sometimes written "senpai". I just meant that it's very common to pronounce that way, even though it shouldn't be possible at all. If they have gratuitous English that has a syllable ending in a consonant, they stick a vowel after it. For example, "red" becomes "redo" (and the d is particularly t-like, and the r is something that has no English equivalent).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-08-04T22:31:38.286Z · LW(p) · GW(p)
They pronounce it in a way that violates the theory of what can and cannot be pronounced in Japanese.
Then the theory is wrong. Whatever is pronounced in Japanese can be.
I don't know if they always pronounce it "sempai".
In Japanese, "n" regularly sounds as "m" before "p". It's a rule!
↑ comment by polymathwannabe · 2014-08-04T12:19:14.127Z · LW(p) · GW(p)
1) The clusters in "upthrust" and "backstop" actually have three consonantal sounds, even if some of those sounds are written as digraphs. This debate is going to be very difficult if we're stuck to English or any language whose spelling makes little sense, if at all. Ghoti and all that.
2) Those clusters aren't in a single syllable. Up-thrust. Back-stop. Black-strap. Schatz-kam-mer. Op-schrij-ven. Apart from the exotic example you cite (and that time a Muppet tried to pronounce the entire alphabet as a single word), I haven't seen more than three consonants in a single syllable.
Replies from: Richard_Kennaway, DanielLC↑ comment by Richard_Kennaway · 2014-08-04T13:13:41.623Z · LW(p) · GW(p)
The clusters in "upthrust" and "backstop" actually have three consonantal sounds
Yes. I wasn't intending them as examples of more than three, but of counterexamples to the rules that DanielLC proposed.
Those clusters aren't in a single syllable.
The original comment didn't talk about syllables.
I haven't seen more than three consonants in a single syllable.
"Firsts." On the other hand, a phoneticist might analyse the "ts" part as a single sound; except that on the phonetic level it appears to be two phonemes. So is (the sound represented in English spelling by) "ts" one consonant or two? Is the answer different for "tsetse" and for "firsts"? For "Katz" and for "cats"?
Linguistic categories are complicated.
↑ comment by erratio · 2014-08-03T21:19:26.195Z · LW(p) · GW(p)
Syllable-initially they're pretty obviously consonants (yam vs am). There are also lots of languages that have phonological rules that involve replacing semi-vowels with other consonants or vice versa, which is a pretty strong argument for them being part of the class of consonants in those languages. For the other stuff, what polymathwannabe said. This stuff is well-studied in linguistics and particularly in phonetics.
Replies from: DanielLC↑ comment by DanielLC · 2014-08-04T02:07:30.434Z · LW(p) · GW(p)
How do you tell if it's a consonant or just part of a diphthong? For example, is the y in may a consonant? If so, how about the y sound in mate?
Also, by that reasoning, even the r in beard, which has another vowel in it, would have to be a vowel since it's immediately followed by a consonant.
Replies from: erratio↑ comment by erratio · 2014-08-04T15:08:34.656Z · LW(p) · GW(p)
You're confusing orthography and phonology. "may" is spelt in IPA as /mei/, so yes, it's a diphthong there that English represents using a vowel + Y for historical reasons. Also, there isn't a y sound in "mate" if you pronounce it at normal speed.
I don't understand what you mean by "by that reasoning". But there's no reason for the r in "beard" to have to be a vowel since it's followed by a consonant, since that's never stopped most other consonants before.
Replies from: DanielLC↑ comment by DanielLC · 2014-08-04T20:15:15.971Z · LW(p) · GW(p)
You're confusing orthography and phonology.
I know the difference. They always teach vowels and consonants as letters instead of as phonemes, and most people seem to use them that way, so I just have to talk about the phonemes corresponding to those letters. I also don't know IPA very well, and I can't assume anyone else does, so I tend to just say things like "y sound".
Also, there isn't a y sound in "mate" if you pronounce it at normal speed.
http://dictionary.reference.com/ has may as /meɪ/ and mate as /meɪt/. Vowels are a lot more vaguely defined than consonants, so I don't know how consistently dictionaries use the same letter, but it has to have something close to an ɪ in it, or it would just be "met".
But there's no reason for the r in "beard" to have to be a vowel since it's followed by a consonant, since that's never stopped most other consonants before.
You can have multiple consonants in a row like that, but there's always caveats. You can't follow an n with a b, for example. This is because consonants are difficult to pronounce consecutively, unless there's some reason that those two work particularly well. r is like a vowel, and can be placed next to any consonant.
Replies from: erratio↑ comment by erratio · 2014-08-04T21:15:40.070Z · LW(p) · GW(p)
I also don't know IPA very well, and I can't assume anyone else does, so I tend to just say things like "y sound".
That's the problem right there though, you're assuming that 'y sound' corresponds to the letter Y in English. The letter Y can represent either the phoneme /j/ (pronounced as the syllable-initial y), or the smallcaps i. The general rule is that syllable-initially Y represents /j/, elsewhere it represents the smallcaps i. Same goes for W, it's /w/ syllable-initially, /u/ (or smallcaps omega, or barred-u depending on your dialect) elsewhere. R is similar but there's a lot more variability in how it's pronounced by individual people, for some people "bird" has a distinct consonant in there, for others it's just an r-flavored vowel, for people like me it's not there at all because I speak a non-rhotic dialect but I lengthen the preceding vowel somewhat as compensation.
Replies from: DanielLC↑ comment by DanielLC · 2014-08-05T00:23:52.277Z · LW(p) · GW(p)
Linguists tend to be a bit more specific than me. There may be a slight difference between /i/ and /j/, but they're really close. It doesn't seem to be enough to justify one being a vowel and the other being a consonant.
I tried listening to the recordings of /i/ vs /j/ on Wikipedia. /i/ just had /i/, but the recording for /j/ is /ja/, so it's hard to concentrate on the /j/. It sure sounds a lot like /ia/. Similarly, /w/ had /wa/, which sounds a heck of a lot like /ua/.
I feel like /j/ just means that you start out transitioning from /i/ to another vowel. You tend to emphasize the following vowel more. But since you could be transitioning to any vowel, it doesn't make sense that /j/ represents the transition itself. The only constant is that it starts out as /i/.
A particularly interesting case is /jiː/ (Old English pronoun that is now spelled "ye"). It's clearly not just /i/, and /ii/ would sound identical. But it does seem to be somewhat of a palindrome. The /i/ at the end is extended longer, but the sounds are the same forwards and backwards. There's a slight change in the sound or emphasis between them, so it might be /ieiː/ or something where it moves to a subtly different vowel and back.
Replies from: erratio↑ comment by erratio · 2014-08-05T01:23:26.048Z · LW(p) · GW(p)
I am not interested in being an introductory phonology/phonetics textbook, but if you want to know why linguists think that semivowels should be considered a separate category from vowels, there is plenty of writing out there on the subject. I'm bowing out from further participation.
↑ comment by ChristianKl · 2014-08-03T11:47:48.209Z · LW(p) · GW(p)
Take a look at the IPA definition: http://en.wikipedia.org/wiki/International_Phonetic_Alphabet It quite clearly distinguishes the class of sounds that are consonants from those that are vowels.
Given that English orthography is messed up and the sound that letters make isn't always the same sound, it's better to talk about phonemes than about letters.
Replies from: DanielLC↑ comment by DanielLC · 2014-08-04T02:24:34.823Z · LW(p) · GW(p)
It says that it's the middle part of a syllable. That's not clear at all, considering that you can have more than one vowel in the middle and you don't need consonants at the ends.
You don't put a w or a y in the middle of a syllable, but it's frequent to put the corresponding sound there. You just don't spell it with a w or y. r is frequently put in the middle of a syllable with consonants at both ends, and there is often no other vowel sound there. There's always another vowel written there, but the only sound made is an r sound. For example, bird, herd, and turd just have the r sound. Beard has a y sound before the r sound, but since it has a consonant after the r, the r clearly can't be a consonant.
Replies from: ChristianKl, polymathwannabe↑ comment by ChristianKl · 2014-08-04T10:08:25.217Z · LW(p) · GW(p)
It says that it's the middle part of a syllable.
No, the whole article doesn't use the word "middle" even once, if you do a quick search.
You don't put a w or a y in the middle of a syllable, but it's frequent to put the corresponding sound there.
You are still thinking in terms of letters instead of phonemes.
Your definition is not the one in that article. If you look at the vowel chart, they all follow a similar schema.
But there's also a definition on Wikipedia:
In phonetics, a vowel is a sound in spoken language, such as an English ah! [ɑː] or oh! [oʊ], pronounced with an open vocal tract so that there is no build-up of air pressure at any point above the glottis. This contrasts with consonants, such as English sh! [ʃː], where there is a constriction or closure at some point along the vocal tract.
For example, bird, herd, and turd just have the r sound.
In those examples "ir", "er" and "ur" are together the vowel "əː". There's no real "r" sound in those words. In contrast, words like "rare" or "sorry" actually have the "r" phoneme.
"əː" is a vowel and "r" is a consonant. To be more precise, the English "r" is the alveolar approximant, while "ə" is the mid-central vowel, also known as the schwa, and the ":" indicates that it's long.
"ə" even appears twice in "violet", without there being any letter "r" in the word.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-08-04T14:20:14.907Z · LW(p) · GW(p)
In those examples "ir", "er" and "ur" are together the vowel "əː".
Not in American English.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-08-04T14:26:11.371Z · LW(p) · GW(p)
It's true in the kind of English that Google speaks. Maybe Californian English?
Replies from: erratio↑ comment by polymathwannabe · 2014-08-04T03:28:59.190Z · LW(p) · GW(p)
The presence of a consonant after the r in "beard" does not make the r any less consonantal. Words like "mast" and "mats" have two consonants at the end of a syllable and both are still fully consonantal.
Replies from: ChristianKl, DanielLC↑ comment by ChristianKl · 2014-08-04T10:09:56.190Z · LW(p) · GW(p)
The IPA of beard is "bɪəd": b and d are consonants, but the letter "r" belongs to the "ə", which is a vowel.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-08-04T12:05:30.131Z · LW(p) · GW(p)
Merriam-Webster gives \ˈbird\, Cambridge gives /bɪərd/, and Wiktionary gives /bɪɹd/. Unfortunately Oxford requires a subscription, but all of the others seem to agree.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-08-04T13:03:28.919Z · LW(p) · GW(p)
Okay, I was going with Google define, which gives /bɪəd/. It seems like the pronunciation varies significantly between different English dialects.
If you listen to the Merriam-Webster audio file, there's an /r/ sound. But if you listen to the audio file on Google define, there's only the schwa.
Unfortunately Oxford requires a subscription, but all of the others seem to agree.
Given that all three give different transcriptions of how the word is supposed to be pronounced, "agree" is a bit strong. Even if we only look at the "r": Wiktionary suggests an alveolar approximant for US English and implies it's optional in UK English. On the other hand, Merriam-Webster suggests an alveolar trill, and Cambridge also suggests an alveolar trill for US English.
That means whether or not there is a consonant sound for the "r" in beard, and which consonant that might be, depends on the dialect that you speak.
↑ comment by DanielLC · 2014-08-04T03:57:32.928Z · LW(p) · GW(p)
You can have multiple consonants in a row on the same part of a syllable, but it's restricted.
Most of them seem to have the same place of articulation. For example, n and t are alveolar. m and p are bilabial. Thus, ant and amp are allowed, but anp and amt are not.
s and z don't seem to have to follow alveolar consonants, but s, which is unvoiced, must follow an unvoiced consonant, and z, which is voiced, must follow a voiced consonant. I also can't help but suspect this has to do with the fact that in English, you make words plural by adding an s or z at the end. There might be a vowel placed before it, like in bridges, but it might have caused us to get better at sticking those sounds directly after consonants. Can someone who speaks a language that doesn't do that comment on this?
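The place-agreement and voicing patterns above can be written down as toy rules (a rough Python sketch; the phoneme sets and function names are my own invention, and real English phonotactics has plenty of further wrinkles):

```python
# Two cluster rules, as rough approximations:
# 1. A nasal + stop coda must share a place of articulation
#    ("ant"/"amp" are fine, "anp"/"amt" are not, in English).
# 2. The plural suffix agrees in voicing with the final sound:
#    sibilant -> /ɪz/ (bridges), voiceless -> /s/ (cats),
#    voiced -> /z/ (dogs).

PLACE = {"m": "bilabial", "p": "bilabial", "b": "bilabial",
         "n": "alveolar", "t": "alveolar", "d": "alveolar",
         "ŋ": "velar",    "k": "velar",    "g": "velar"}

def coda_cluster_ok(nasal, stop):
    """Nasal + stop clusters must be homorganic (same place)."""
    return PLACE[nasal] == PLACE[stop]

SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}
VOICELESS = {"p", "t", "k", "f", "θ", "s", "ʃ", "tʃ"}

def plural_suffix(final_phoneme):
    """Pick the regular plural allomorph from the final phoneme."""
    if final_phoneme in SIBILANTS:
        return "ɪz"
    return "s" if final_phoneme in VOICELESS else "z"
```

So "ant" and "amp" come out allowed while "anp" and "amt" don't, and the plural rule picks /s/ for cats, /z/ for dogs, and /ɪz/ for bridges.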
It's hard to stick a bunch of consonants together, so you're only allowed to if there's extenuating circumstances. By contrast, vowels are easy. r is easy, so it's a vowel.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-08-04T10:33:57.590Z · LW(p) · GW(p)
Thus, ant and amp are allowed, but anp and amt are not.
"Amt" is a word in German. It is pronounced exactly as it looks, plus a glottal stop at the start.
Where are you getting these rules?
Replies from: DanielLC↑ comment by DanielLC · 2014-08-04T21:00:41.089Z · LW(p) · GW(p)
"Amt" is a word in German. It is pronounced exactly as it looks, plus a glottal stop at the start.
Surprising. It's not that hard to say "amt", but it's not any easier than just "mt". The syllable has a vowel in it, but the vowel doesn't seem to make the final cluster any easier.
I don't know if that's just an odd word, or if German has different rules. For all I know, they frequently have syllables without vowels. I would expect them to follow the same rules, since English is a Germanic language, but I guess getting rid of almost all of their words would lead to getting rid of almost all of their rules about what words are possible.
Where are you getting these rules?
They're rules that I noticed English tends to follow, and the rules seem to make words easier to pronounce.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-08-04T22:21:35.272Z · LW(p) · GW(p)
They're rules that I noticed English tends to follow
It only tends to follow them. Exceptions abound; that is not a problem for the exceptions, but for the rules. An exception is not something that fails to obey the rule, it is something the rule failed to explain.
and the rules seem to make words easier to pronounce.
I think that not all, but a lot of the causality is the other way around: whatever your native language does is easier for you.