Open thread, Aug. 10 - Aug. 16, 2015
post by MrMind · 2015-08-10T07:29:10.355Z · LW · GW · Legacy · 284 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by Username · 2015-08-10T13:31:42.973Z · LW(p) · GW(p)
Impulsive Rich Kid, Impulsive Poor Kid, an article about using CBT to fight impulsivity that leads to criminal behaviour, especially among young males from poor backgrounds.
How much crime takes place simply because the criminal makes an impulsive, very bad decision? One employee at a juvenile detention center in Illinois estimates that the overwhelming majority of crime takes place because of an impulse rather than a conscious decision to embark on criminal activity:
“20 percent of our residents are criminals, they just need to be locked up. But the other 80 percent, I always tell them – if I could give them back just ten minutes of their lives, most of them wouldn’t be here.”
...
The teenager in a poor area is not behaving any less automatically than the teenager in the affluent area. Instead the problem arises from the variability in contexts -- and the fact that some contexts call for retaliation. To illustrate their theory, they offer an example: If a rich kid gets mugged in a low-crime neighborhood, the adaptive response is to comply -- hand over his wallet, go tell the authorities. If a poor kid gets mugged in a high-crime neighborhood, it is sometimes adaptive to refuse -- stand up for himself, retaliate, run. If he complies, he might get a reputation as someone who is easy to bully, increasing the probability he will be victimized in the future. The two kids, conditioned by their environment, learn very different automatic responses to similar stimuli: someone else asserting authority over them.
The authors of “Thinking, Fast and Slow” extend the example further by asking you to imagine these same two kids in the classroom. If a teacher tells the rich kid to sit down and be quiet, his automatic response to authority on the street -- comply, sit down and be quiet -- is the same as the adaptive response for this situation. If a teacher tells the poor kid to sit down and be quiet, his automatic response to authority on the street -- refuse, retaliate -- is maladapted to this situation. The poor kid knows the contexts are different, but still on a certain level feels like his reputation is at stake when he’s confronted at school, and acts out, automatically.
...
The researchers examined clinical studies of programs that keep this in mind and focus on teaching kids to regulate their automaticity. These interventions were designed to help young people, “recognize when they are in a high-stakes situation where their automatic responses might be maladaptive,” and slow down and consider them. One of the interventions studied was the Becoming a Man (BAM) program, conducted in public schools with disadvantaged young males, grades 6-12, on the south and west sides of Chicago.
“What makes the interventions we study particularly interesting is that they do not attempt to delineate specific behaviors as “good,” but rather focus on teaching youths when and how to be less automatic and more contingent in their behavior.”
Researchers randomly assigned students to have the opportunity to participate in BAM, as a course conducted once a week throughout the 2009-2010 school year.
The course is actually a program of cognitive behavioral therapy (CBT). CBT helps people identify harmful psychological and behavioral patterns, and then disrupt them and foster healthier ones. It’s used by a wide range of people for a wide range of issues, including depression, anger management, and anxiety disorders. The particular style of CBT used in BAM focuses on three fundamental skills:
Recognize when their automatic responses might get them into trouble,
Slow down in those situations and behave less automatically,
Objectively assess situations and think about what response is called for.
One thing participants are taught in BAM is that “a shift to an aversive emotion” is an important cue for when they are prone to act automatically. Anger, for example, was a common cue among participants in the study group. They were also taught tricks to help them slow down to consider their situation before acting, including deep breathing and other relaxation techniques. Lastly, they were guided through self-reflection and assessment of their own behavior: examining their “automatic” missteps, thinking about how they might have acted differently.
The researchers found that, during the program year, program participants had a 44% lower arrest rate for violent crimes than the control group. They repeated the intervention in 2013-2014 with a new group, and found that program participants had a 31% lower arrest rate for violent crimes than the control group.
↑ comment by Lumifer · 2015-08-10T15:48:49.884Z · LW(p) · GW(p)
if I could give them back just ten minutes of their lives, most of them wouldn’t be here.
He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.
The remainder of the post actually argues that persistent, stable "reflexes" are the cause of bad decisions and those certainly are not going to be fixed by a one-time gift of 10 minutes.
↑ comment by Emile · 2015-08-11T07:32:11.696Z · LW(p) · GW(p)
if I could give them back just ten minutes of their lives, most of them wouldn’t be here.
He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.
I disagree. Let's take drivers who got into a serious accident: if you "gave them back just ten minutes" so that they avoided getting into that accident, most of them wouldn't have had another accident later on. It's not as if the world were neatly divided into safe drivers, who never have accidents, and unsafe drivers, who have several.
Sure, those kids that got in trouble are more likely to have problematic personalities, habits, etc. which would make them more likely to get in trouble again - but that doesn't mean more likely than not. Most drivers don't have (serious) accidents, most kids don't get in (serious) trouble, and if you restrict yourself to the subset of those who already had it once, I agree a second problem is more likely, but not certain.
↑ comment by Lumifer · 2015-08-11T14:37:44.217Z · LW(p) · GW(p)
but that doesn't mean more likely than not
How do you know?
most kids don't get in (serious) trouble
Yeah, but we are not talking about average kids. We're talking about kids who found themselves in juvenile detention and that's a huge selection bias right there. You can treat them as a sample (which got caught) from the larger underlying population which does the same things but didn't get caught (yet). It's not an entirely unbiased sample, but I think it's good enough for our handwaving.
but not certain.
Well, of course. I don't think anyone suggested any certainties here.
↑ comment by [deleted] · 2015-08-12T01:09:54.414Z · LW(p) · GW(p)
To use the paper's results, it looks like they're getting roughly 10 in 100 in the experimental condition and 18 in 100 for the control. Those kids were selected because they were considered high risk. If among the 82 of 100 kids who didn't get arrested there are >18 who are just as likely to be arrested as the 18 who were, then Emile's conclusion is correct across the year. The majority won't be arrested next year. Across an entire lifetime, however... They'd probably become more normal as time passed, but how quickly would this occur? I'd think Lumifer is right that they probably would end up back in jail. I wouldn't describe this as a very regular problem though.
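A quick way to make the "across an entire lifetime" point concrete is to compound an assumed annual arrest rate over several years. A minimal sketch in Python, assuming a constant annual rate (the rates echo the rough 10-in-100 and 18-in-100 figures above, and the constancy assumption ignores the point that risk probably declines with age):

    # Probability of at least one arrest over N years at a constant annual rate.
    # The constant-rate assumption is a deliberate oversimplification.
    def prob_at_least_one(annual_rate, years):
        return 1 - (1 - annual_rate) ** years

    for rate in (0.10, 0.18):           # roughly the treatment and control rates cited above
        for years in (1, 5, 10):
            p = prob_at_least_one(rate, years)
            print(f"annual rate {rate:.0%}, {years:2d} years: {p:.0%} chance of at least one arrest")

On that crude assumption, an 18% annual rate passes a 50% cumulative chance after about four years, which is why "most of them won't be arrested next year" and "most of them will eventually be back" can both be defensible readings.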
↑ comment by query · 2015-08-10T20:50:55.294Z · LW(p) · GW(p)
The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10 minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively.) So I think the "very regular basis" claim isn't substantiated.
That said, we can't actually retroactively edit anyway.
↑ comment by Lumifer · 2015-08-11T00:40:03.250Z · LW(p) · GW(p)
The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence.
I don't think that's the model (or if it is, I think it's wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.
comment by Houshalter · 2015-08-10T13:12:25.707Z · LW(p) · GW(p)
Do Artificial Reinforcement-Learning Agents Matter Morally?
I've read this paper and find it fascinating. I think it's very relevant to LessWrong's interests. Not just that it's about AI, but also that it asks hard moral and philosophical questions.
There are many interesting excerpts. For example:
The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy... surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.
↑ comment by Betawolf · 2015-08-10T21:41:29.233Z · LW(p) · GW(p)
The author is associated with the Foundational Research Institute, which has a variety of interests highly connected to those of LessWrong, yet some casual searches seem to show they've not been mentioned here.
Briefly, they seem to be focused on averting suffering, with various angles on that, including effective altruism outreach, animal suffering, and AI risk as a cause of great suffering.
comment by Username · 2015-08-10T13:13:18.274Z · LW(p) · GW(p)
Composing Music With Recurrent Neural Networks
It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so called “deep neural networks”, with many nodes and hidden layers, can do impressively well on modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as ever to experiment with what a neural network can do.
For a while now, I’ve been floating around vague ideas about writing a program to compose music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided that they would be a better fit. So a few weeks ago, I got to work designing my network. And after training for a while, I am happy to report remarkable success!
↑ comment by pianoforte611 · 2015-08-11T21:48:58.440Z · LW(p) · GW(p)
It's certainly very interesting. It's a slight improvement over Markov chain music. That tends to sound good for any stretch of 5 seconds, but lacks a global structure, making it pretty awful to listen to for any longer stretch of time. This music still lacks much of the longer-range structure that makes music sound like music. It's a lot like stitching together 5 different Chopin compositions. It is stylistically consistent, but the pieces don't fit together.
Having said that, it is very interesting to see what you can get out of a network with respect to consonance, dissonance, local harmonic context and timing. I'm most impressed by the rhythm, it sounds more natural to my ear than the note progression.
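For readers who haven't met the Markov-chain baseline being compared against, here is a minimal sketch; the transition table is invented for illustration. Because each note depends only on the previous one, the output is locally plausible but, as described above, has no phrase-level or global structure:

    import random

    # Toy first-order Markov chain over pitch names (transition table is made up).
    transitions = {
        "C": ["D", "E", "G"],
        "D": ["C", "E", "F"],
        "E": ["D", "F", "G"],
        "F": ["E", "G", "A"],
        "G": ["C", "E", "F", "A"],
        "A": ["G", "F"],
    }

    def generate(start="C", length=16, seed=0):
        random.seed(seed)
        note, melody = start, [start]
        for _ in range(length - 1):
            note = random.choice(transitions[note])  # next note depends only on the current one
            melody.append(note)
        return melody

    print(" ".join(generate()))

An RNN can condition on a much longer history, which is roughly why it improves on this, while still falling short of large-scale musical form.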
comment by Username · 2015-08-10T12:46:36.678Z · LW(p) · GW(p)
The moral imperative for bioethics by Steven Pinker.
Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.
Get out of the way.
A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.
↑ comment by [deleted] · 2015-08-11T13:11:49.449Z · LW(p) · GW(p)
I'm all in favor of "social justice" in medicine by its conventional definition, but that's not even a particularly difficult problem. Universal medical systems already exist and function well all across the planet. Likewise, nobody's actually going to vote for Brave New World.
It really does seem like "social justice", in a bioethical context, simply isn't the True Rejection.
↑ comment by Gunnar_Zarncke · 2015-08-12T23:04:20.692Z · LW(p) · GW(p)
These online text comments would belong better in the Media Thread, especially as there are so many of them.
comment by Username · 2015-08-10T12:55:48.978Z · LW(p) · GW(p)
One long-held theory has been that people become socially isolated because of their poor social skills — and, presumably, as they spend more time alone, the few skills they do have start to erode from lack of use. But new research suggests that this is a fundamental misunderstanding of the socially isolated. Lonely people do understand social skills, and often outperform the non-lonely when asked to demonstrate that understanding. It’s just that when they’re in situations when they need those skills the most, they choke.
In a paper recently published in the journal Personality and Social Psychology Bulletin, Franklin & Marshall College professor Megan L. Knowles led four experiments that demonstrated lonely people’s tendency to choke when under social pressure. In one, Knowles and her team tested the social skills of 86 undergraduates, showing them 24 faces on a computer screen and asking them to name the basic human emotion each face was displaying: anger, fear, happiness, or sadness. She told some of the students that she was testing their social skills, and that people who failed at this task tended to have difficulty forming and maintaining friendships. But she framed the test differently for the rest of them, describing it as a this-is-all-theoretical kind of exercise.
Before they started any of that, though, all the students completed surveys that measured how lonely they were. In the end, the lonelier students did worse than the non-lonely students on the emotion-reading task — but only when they were told they were being tested on their social skills. When the lonely were told they were just taking a general knowledge test, they performed better than the non-lonely. Previous research echoes these new results: Past studies have suggested, for example, that the lonelier people are, the better they are at accurately reading facial expressions and decoding tone of voice. As the theory goes, lonely people may be paying closer attention to emotional cues precisely because of their ache to belong somewhere and form interpersonal connections, which results in technically superior social skills.
But like a baseball pitcher with a mean case of the yips or a nervous test-taker sitting down for an exam, being hyperfocused on not screwing up can lead to over-thinking and second-guessing, which, of course, can end up causing the very screwup the person was so bent on avoiding. It’s largely a matter of reducing that performance anxiety, in other words, and Knowles and her colleagues did manage to find one way to do this for their lonely study participants, though, admittedly, it is maybe not exactly applicable outside of a lab. The researchers gave their volunteers an energy-drink-like beverage and told them that any jitters they felt were owing to the caffeine they’d just consumed. (In actuality, the beverage contained no caffeine, but no matter — the study participants believed that it did.) They then did the emotion-reading test, just like in the first experiment. Compared to scores from that first experiment, there was no discernible difference in scores for the non-lonely, but the researchers did see improvement among the lonely participants — even when the task had been framed as a social-skills test.
It may be difficult to trick yourself into believing your nerves are from caffeine and not the fact that you really, really, really want to make a good impression in some social setting, but there are other ways to change your own thinking about anxiety. One of my recent favorites is from Harvard Business School’s Alison Wood Brooks, who found that when she had people reframe their nerves as excitement, they subsequently performed better on some mildly terrifying task, like singing in public. At the very least, this current research presents a fairly new way to think about lonely people. It’s not that they need to brush up on the basics of social skills — that they’ve likely already got down. Instead, lonely people may need to focus more on getting out of their own heads, so they can actually use the skills they’ve got to form friendships and begin to find a way out of their isolation.
↑ comment by Viliam · 2015-08-11T08:00:05.719Z · LW(p) · GW(p)
I imagine such behavior could happen if someone had a bad experience in the past, such as being disproportionately punished in some social situation. The punishment didn't even have to be a predictable logical consequence; maybe they just had bad luck and met some psycho. Or maybe they were bullied at school, etc.
If their social skills are otherwise okay, they may intellectually understand what is usually the best response, but in real life they are overwhelmed by fear and their behavior is dominated by avoiding the thing that "caused" the bad response in the past. For example, if the bad thing happened after saying "hello" to a stranger, they may be unable to speak with strangers, even if they know from observing others that this is a good thing to do.
Then the framing of the test could make students think either about "what is generally the right approach?" or "what would I do?"
↑ comment by [deleted] · 2015-08-10T18:29:50.752Z · LW(p) · GW(p)
21 people per group (86/4) is not a strong result unless it's a large effect size, which I doubt. I wouldn't put much faith in this paper. Maybe raise your prior by 3%, but it's hard to be that precise with beliefs.
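For what it's worth, the power calculation this comment gestures at is easy to sketch. A hedged example (the effect sizes are conventional benchmarks, not numbers from the paper, and statsmodels is assumed to be installed):

    # Approximate power of a two-sample t-test with ~21 subjects per group,
    # at conventional "small", "medium" and "large" effect sizes (Cohen's d).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.2, 0.5, 0.8):
        power = analysis.power(effect_size=d, nobs1=21, ratio=1.0, alpha=0.05)
        print(f"d = {d}: power ~ {power:.2f}")

Under those assumptions only a fairly large effect has much chance of being detected with 21 per group, which is roughly the claim being made here; the reply below points out that the actual paper used several tasks and analyses, so this back-of-the-envelope number is not the whole story.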
↑ comment by pianoforte611 · 2015-08-11T21:40:34.959Z · LW(p) · GW(p)
I'd like to see fewer low quality scientific criticisms here. Instead of speculating on effect sizes without reading the paper, and bloviating on sample sizes without doing the relevant power calculations, perhaps try looking at the results section?
With respect to this paper, the results were consistent and significant across three tasks - an eye task, a facial expression task, and a vocal tone task. They did a non-social task (an anagram task) and found no significant effect (though that wasn't the purpose of doing the task; it's a bit more complicated than that). They also did an interesting caffeine experiment to see if they could relieve social anxiety by convincing participants that the anxiety was due to a (fake) caffeinated drink.
Anyways, as with any research in this area, it's too soon to be confident of what the results mean. But armchair uninformed scientific criticism will not advance knowledge.
(In hindsight this is a bit of an overreaction, but I've seen too many poor criticisms of papers and too much speculation particularly on Reddit, but also here and on several blogs, and not nearly enough careful reading)
↑ comment by Douglas_Knight · 2015-08-14T01:59:39.525Z · LW(p) · GW(p)
I would like to see fewer low-quality science papers posted. FB put in way more work than was justified. My new policy is to downvote every psychology paper posted without any discussion of the endemic problems in psychology research and why that paper might not be pure noise.
↑ comment by pianoforte611 · 2015-08-14T02:42:47.968Z · LW(p) · GW(p)
Are all psychology papers garbage? And if only some are, how do you tell which is which if you don't read past the first line of the abstract? (which FB didn't, because he was unaware that more than one experiment was conducted).
↑ comment by Douglas_Knight · 2015-08-14T03:03:21.919Z · LW(p) · GW(p)
We have to filter the papers somehow and the people who do the filtering have to read them. But that doesn't mean that the people doing the filtering should be people on LW. Username relied on a journalist for filtering. This does filter for interesting topics, but not for quality. That Username did not link the actual paper suggests that he did not read it. Thus my prior is that it is of median quality and pure noise. Even if psychology papers were all perfectly accurate, there are way too many that get coverage and it is unlikely that one getting coverage this month is worth reading.
There are standard places to look for filters: review articles and books.
↑ comment by pianoforte611 · 2015-08-14T03:16:50.519Z · LW(p) · GW(p)
Okay that's very fair.
↑ comment by [deleted] · 2015-08-11T23:32:14.667Z · LW(p) · GW(p)
Perhaps you didn't notice, but the paper is gated. It's not possible for me or most people to check the paper. The description doesn't mention the other two studies. The study described doesn't sound like a strong result. I never suggested it wasn't statistically significant. If it wasn't, it shouldn't be used to adjust one's views at all. I assumed it had achieved significance.
It's also odd for you to criticize me and then ultimately come to a conclusion that could be interpreted as identical to my own, or close to it. What do you mean by "too soon to be confident of what the results mean"? That could be interpreted as "adjust your prior by 3%", which was my interpretation. If you think a number higher than 15% is warranted, then that's an odd phrasing to choose, which makes it sound like we're not that far apart. Given that I was going by one study and you have three to look at, it shouldn't be surprising that you would recommend a greater adjustment of one's prior. Going by just the facial expression study, what adjustment would you recommend? Do you think this adjustment is large enough for most people to know what to do with it? What adjustment to one's prior do you recommend after reviewing all three?
↑ comment by [deleted] · 2015-08-12T10:12:01.831Z · LW(p) · GW(p)
While the scientific publication paywall is a pain (and especially inappropriate for publicly funded research), it is not impossible to get the article - and as pianoforte611 already mentioned, secondary citations or descriptions of primary sources may not provide enough information to evaluate the source.
How to get articles: I've seen numerous cases here at LW where a request for a copy of a paywalled publication is quickly met with a link or an email from someone who has access.
The twitter hashtag #icanhazpdf also serves this purpose: tweet with the hashtag including a link or DOI to the article you are requesting, include your email address in the tweet, and delete your request after you get the pdf. You can use a temporary read-only email address (e.g. slippery.email) if you are concerned about anonymity/privacy.
On this instance feel free to send me a private message with your contact details and I will send you a pdf - I already downloaded a copy.
Edited to add: it's also entirely legitimate to email the author of a published article and request an electronic copy of the article. There's no need to explain why you want it and you need not be an academic "insider", just be clear which article you are requesting. This is an example I received yesterday: "Dear {author}, I am interested in your recent article {full citation} but do not have subscription access. Would you be able to send me an electronic copy? Many thanks"
↑ comment by Sarunas · 2015-08-12T14:14:59.070Z · LW(p) · GW(p)
Choking Under Social Pressure: Social Monitoring Among the Lonely, Megan L. Knowles, Gale M. Lucas, Roy F. Baumeister, and Wendi L. Gardner
↑ comment by ChristianKl · 2015-08-12T15:39:56.030Z · LW(p) · GW(p)
Perhaps you didn't notice, but the paper is gated. It's not possible for me or most people to check the paper.
Most people in the general population can't check the paper but on LW, I don't think that's the case. If you don't have access to a university network http://lesswrong.com/lw/ji3/lesswrong_help_desk_free_paper_downloads_and_more/ explores a variety of ways to access papers.
↑ comment by pianoforte611 · 2015-08-12T00:29:02.135Z · LW(p) · GW(p)
Sorry for assuming you had easy access to the paper. Given that you don't, you are of course free to decide whether the pop-science report warrants further investigation. However, to authoritatively criticize and speculate on the details of a paper you haven't read, I think, lowers the quality of discussion here.
I'm not a Bayesian but nevertheless, I don't agree that my conclusion is similar to yours. Prima facie, the effect itself seems fairly robust across the five experiments, but their theory as to why (which they did go reasonably far to test), still needs more experiments to be established. This is not a bug, and that does not make it a low quality paper. This is how science works. There may be more subtle problems that I (not being a statistician, or a psychologist) may have missed, but those can't be known without delving into the details.
↑ comment by [deleted] · 2015-08-10T22:11:54.372Z · LW(p) · GW(p)
Shouldn't the authors be aware of this? (I think one of them is even fairly well known in psychology circles.)
↑ comment by Richard_Kennaway · 2015-08-12T10:36:38.284Z · LW(p) · GW(p)
I am sure the authors are more informed about their work than anyone who has not read it.
comment by Sherincall · 2015-08-12T10:35:52.817Z · LW(p) · GW(p)
CIA's The Definition of Some Estimative Expressions - what probabilities people assign to words such as "probably" and "unlikely".
CIA actually has several of these articles around, like Biases in Estimating Probabilities. Click around for more.
In hindsight, it seems obvious that they should.
comment by gwern · 2015-08-14T00:58:56.611Z · LW(p) · GW(p)
Modafinil survey: I'm curious about how modafinil users in general use it, get it, their experiences, etc, and I've been working on a survey. I would welcome any comments about missing choices, bad questions, etc on the current draft of the survey: https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform?fbzx=2867338011413840797
↑ comment by btrettel · 2015-08-14T13:13:18.631Z · LW(p) · GW(p)
Great idea.
One suggestion: This survey seems to be for people who use modafinil regularly. I might suggest doing something (perhaps creating another survey) to get opinions from people who tried modafinil once or twice and disliked it. My one experience with Nuvigil was quite bad, and I recall Vaniver saying that he thought modafinil did nothing at all for him.
↑ comment by ChristianKl · 2015-08-14T13:41:43.715Z · LW(p) · GW(p)
The survey could have multiple pages:
The first page simply asks:
What's your modafinil usage:
a) Never
b) I used it in the past and then stopped.
c) I'm currently using it. (leading the user to your current survey)
↑ comment by gwern · 2015-08-14T18:53:09.892Z · LW(p) · GW(p)
I've split it up into multiple pages: the first page classifies you as an active or inactive user and then sends you to a detailed questionnaire on how you use it if you are active, or simply why you stopped if inactive, and then both go to a long demographics/background page.
↑ comment by ChristianKl · 2015-08-15T00:48:08.309Z · LW(p) · GW(p)
Sounds good. I would also add a "never used it" option. It can go straight to the demographics/background page. Otherwise you might have people who never used it classify themselves as "inactive user".
↑ comment by gwern · 2015-08-15T01:37:31.111Z · LW(p) · GW(p)
(If they've never used modafinil, why on earth are they taking my survey?!)
↑ comment by ChristianKl · 2015-08-15T12:28:08.995Z · LW(p) · GW(p)
They might be interested in taking modafinil. The fact that they shouldn't take the survey doesn't mean they won't.
↑ comment by ChristianKl · 2015-08-14T11:41:29.448Z · LW(p) · GW(p)
In general, do you find brand-name -afinils more effective than generics?
I think that question should have more than just (yes) and (no) as answers. At least it should have an "I don't know" option.
I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it. Maybe even "At what time of the day did you use it?"
I would be interested in a question about how many hours the person sleeps on average.
Have you thought about having a question about bodyweight? I would be interested in knowing whether heavier people take a larger dose.
↑ comment by gwern · 2015-08-14T18:27:35.504Z · LW(p) · GW(p)
I've added 'the same' as a third option to the generic vs brand-name question, and 2 questions about average hours of sleep a night & body weight.
I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it.
What would the response there be, an exact date or n days ago or what?
↑ comment by Tem42 · 2015-08-14T18:44:30.491Z · LW(p) · GW(p)
I have no experience with -afinils, but it seems to me that there will surely be cases of people who have tried only brand-name (or, alternatively, only generic) -afinil, and therefore cannot accurately respond to the question:
In general, do you find brand-name -afinils more effective than generics?
With yes, no, or the same. The correct answer would be "I don't know". If I were taking this survey, I would skip that question rather than try to guess which answer you wanted in that case. But if I were designing the survey, I would go with ChristianKl's suggestion.
↑ comment by ChristianKl · 2015-08-15T01:04:11.314Z · LW(p) · GW(p)
I've added 'the same' as a third option to the generic vs brand-name question
I would guess that a majority of the respondents haven't tested multiple kinds of modafinil and thus are not equipped to answer the question at all. "I don't know" seems to be the proper answer for them.
↑ comment by ChristianKl · 2015-08-15T01:02:01.484Z · LW(p) · GW(p)
What would the response there be, an exact date or n days ago or what?
Both would be possible but I think "n days ago" is more standard. It makes the data analysis easier.
↑ comment by Richard_Kennaway · 2015-08-14T22:27:07.864Z · LW(p) · GW(p)
A few details:
In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.
I was surprised to see Vitamin D listed as a nootropic, and Google turns up nothing much on the subject. Fixing a deficiency of anything will likely have a positive effect on mental function, but that is drawing the boundary of "nootropic" rather wide.
Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco? Cancer is a reason to not smoke tobacco, but I don't think it's a reason not to ask about it. Or are those who smoke not smart enough to be in the target population for the survey? :)
ETA: Also a typo in "SNP status of COMT RS4570625": the subtext mentions rs4680, not rs4570625. I don't know what "Val/Met" and "COMT" mean, but are those specific to RS4680 or correct for all three SNPs?
↑ comment by gwern · 2015-09-06T20:59:07.488Z · LW(p) · GW(p)
In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.
Oops. Shouldn't've assumed they'd be the same...
but that is drawing the boundary of "nootropic" rather wide.
It is but it's still common and can be useful. The nootropics list is based on Yvain's previous nootropics survey, which I thought might be useful for comparison. (I also stole a bunch of questions from his LW survey too, figuring that they're at least battle-tested at this point.)
Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco?
I have no interest in tobacco, solely nicotine. Although now that you object to that, I realize I forgot to specify vaping/e-cigs as included.
↑ comment by ChristianKl · 2015-08-15T16:58:14.877Z · LW(p) · GW(p)
Val/Met
Amino acids. Val stands for valine. Met stands for methionine.
COMT
I think COMT is Catechol-O-methyl transferase which is the protein in question.
comment by Username · 2015-08-10T13:02:18.818Z · LW(p) · GW(p)
Dead enough by Walter Glannon
To honour donors, we should harvest organs that have the best chance of helping others – before, not after, death
Now imagine that before the stroke our hypothetical patient had expressed a wish to donate his organs after his death. If neurologists could determine that the patient had no chance of recovery, then would that patient really be harmed if transplant surgeons removed life-support, such as ventilators and feeding tubes, and took his organs, instead of waiting for death by natural means? Certainly, the organ recipient would gain: waiting too long before declaring a patient dead could allow the disease process to impair organ function by decreasing blood flow to them, making those organs unsuitable for transplant.
But I contend that the donor would gain too: by harvesting his organs when he can contribute most, we would have honoured his wish to save other lives. And chances are high that we would be taking nothing from him of value. This permanently comatose patient will never see, hear, feel or even perceive the world again whether we leave his organs to wither inside him or not.
↑ comment by WalterL · 2015-08-10T18:59:53.374Z · LW(p) · GW(p)
Honest question: if you are cool with killing a person in a coma, based on the fact that they will never sense again, how do you feel about a person doing life in solitary? They may sense, but they aren't able to communicate what they sense to any other human.
What exactly makes life worth its organs, in your eyes?
↑ comment by DanielLC · 2015-08-10T19:44:49.200Z · LW(p) · GW(p)
A person in solitary still has experiences. They just don't interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.
↑ comment by ChristianKl · 2015-08-10T20:42:25.570Z · LW(p) · GW(p)
There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.
By that standard how about harvesting the organs of babies?
↑ comment by James_Miller · 2015-08-10T23:30:30.204Z · LW(p) · GW(p)
By that standard how about harvesting the organs of babies?
Planned Parenthood does this for aborted babies.
↑ comment by [deleted] · 2015-08-11T12:54:40.542Z · LW(p) · GW(p)
Babies aren't sentient?
↑ comment by ChristianKl · 2015-08-11T13:18:45.725Z · LW(p) · GW(p)
The context is that Steven Pinker argues that animals we eat are more sentient than babies: http://www.gargaro.com/pinker.html
↑ comment by [deleted] · 2015-08-11T03:06:44.244Z · LW(p) · GW(p)
What other standard do you propose?
↑ comment by ChristianKl · 2015-08-11T08:53:59.920Z · LW(p) · GW(p)
What other standard do you propose?
Not harvesting the organs of living human beings?
↑ comment by [deleted] · 2015-08-12T01:34:41.465Z · LW(p) · GW(p)
Define living and human being.
↑ comment by ChristianKl · 2015-08-12T11:02:34.297Z · LW(p) · GW(p)
The way the terms are defined in German law and interpreted by German courts.
↑ comment by [deleted] · 2015-08-13T01:25:05.590Z · LW(p) · GW(p)
which is... (trust me, I have a point here, but by not actually answering my query in a precise way you're making it hard to make)
↑ comment by ChristianKl · 2015-08-13T09:54:35.824Z · LW(p) · GW(p)
I answered your query in a very precise way. There are tons and tons of laws and court judgements involved and no answer that fits into a few paragraphs.
I have a point here
If that's the case you could try to make your point explicitly instead of implicitly. You could list your assumptions.
↑ comment by WalterL · 2015-08-10T19:58:19.810Z · LW(p) · GW(p)
Yeah, and I'm asking, do those experiences "count"?
If organs are going from comatose humans to better ones, and we've decided that people who aren't sensing don't deserve theirs, how about people who aren't communicating their senses? It seems like this principle can go cool places.
If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them (there will be forms, and an adorableness quotient, and Love Weighting). All that the world would be out is the silent contemplation of the interior of a cell. Clearly a net gain, yeah?
So, are we stopping at "no sensing -> we jack your meats", or can we cook with gas?
↑ comment by DanielLC · 2015-08-10T23:42:36.979Z · LW(p) · GW(p)
It's not about communication. It's not even about sensing. It's about subjective experience. If your mind worked properly but you just couldn't sense anything or do anything, you'd have moral worth. It would probably be negative and it would be a mercy to kill you, but that's another issue entirely. From what I understand, if you're in a coma, your brain isn't entirely inactive. It's doing something. But it's more comparable to what a fish does than a conscious mammal.
Someone in a coma is not a person anymore. In the same sense that someone who is dead is not a person anymore. The problem with killing someone is that they stop being a person. There's nothing wrong with taking them from not a person to a slightly different not a person.
If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them
A mass murderer is still a person. They think and feel like you do, except probably with less empathy or something. The world is better off without them, and getting rid of them is a net gain. But it's not a Pareto improvement. There's still one person that gets the short end of the stick.
↑ comment by Tem42 · 2015-08-10T19:22:48.132Z · LW(p) · GW(p)
Given that this is suggested to be a voluntary system, it doesn't really matter what Walter Glannon thinks -- it matters what you think.
Personally, I would be more interested in signing up for this if I was assured that the permanent damage was to the grey matter, and would be happy if this included both comas and permanent vegetative states. But YMMV.
It is worth noting here that being in solitary confinement does not necessarily prevent you from writing, receiving visitors, or making telephone calls (it depends on your local jurisdiction). Also, very few people are sentenced to be in solitary confinement until they die. In those places where this sort of sentence is permitted, it is unlikely that prisoners would be allowed any choice in their fate, but it is not obviously bad for a justly imprisoned person to choose suicide (with or without organ donation) in lieu of a life sentence.
EDIT: on re-reading, I see that this was not stated to always be a voluntary procedure; the author goes back and forth between voluntary and involuntary procedures. In involuntary cases, I agree that the simple criterion of "brain functions at a level too low to sustain consciousness but enough to sustain breathing and other critical functions without mechanical support" is too lax. I would still agree with the author in general that DDR is too strong.
↑ comment by DanielLC · 2015-08-10T19:49:58.676Z · LW(p) · GW(p)
There are reasons why you shouldn't kill someone in a coma that doesn't want to be killed when they're in a coma even if you disagree with them about what makes life have moral value. If they agreed to have the plug pulled when it becomes clear that they won't wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what's at stake, given permission, you should be able to take out their organs early and hasten their deaths by a short time in exchange for making it more likely to save someone else.
And why are you already conjecturing about what we would have wanted? We're not dead yet. Just ask us what we want.
↑ comment by Tem42 · 2015-08-10T17:20:51.604Z · LW(p) · GW(p)
You can approximate this by writing a living will (and you should write a living will regardless of whether or not you are an organ donor.)
However, I agree there should be more finely grained levels of organ donation, and that this should be a clear option.
comment by Username · 2015-08-12T16:53:11.174Z · LW(p) · GW(p)
A Scientific Look at Bad Science
By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders) [2]. This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.
“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent) [3].
Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’ ” [4].
comment by Daniel_Burfoot · 2015-08-10T23:00:07.833Z · LW(p) · GW(p)
If the Efficient Market Hypothesis is true, shouldn't it be almost as hard to lose money on the market as it is to gain money? Let's say you had a strategy S that reliably loses money. Shouldn't you be able to define an inverse strategy S', that buys when S sells and sells when S buys, that reliably earns money? For the sake of argument rule out obvious errors like offering to buy a stock for $1 more than its current price.
↑ comment by Vaniver · 2015-08-11T00:03:17.789Z · LW(p) · GW(p)
shouldn't it be almost as hard to lose money on the market as it is to gain money?
Consider the dynamic version of the EMH: that is, rather than "prices are where they should be," it's "agents who perceive mispricings will pounce on them, making them transient."
Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent. There's an asymmetry between "there is no free money left to be picked up" and "if you drop your money, it will not be picked up" that makes the first true (in the static case) and the second false.
↑ comment by Lumifer · 2015-08-11T01:04:18.265Z · LW(p) · GW(p)
Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent.
Well, that looks like an "offering to buy a stock for $1 more than its current price" scenario. You can easily lose a lot of money by buying things at the offer and selling them at the bid :-)
But let's imagine a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid.
Assuming you can competently express a market view, can you systematically lose money by consistently taking the wrong side under EMH?
↑ comment by Salemicus · 2015-08-11T11:02:23.081Z · LW(p) · GW(p)
Consider penny stocks. They are a poor investment in terms of expected return (unless you have secret alpha). But they provide a small chance of very high returns, meaning they operate like lottery tickets. This isn't a mispricing - some people like lottery tickets, and so bid up the price until they become a poor investment in terms of expected return (problem for the CAPM, not for the EMH). So you can systematically lose money by taking the "wrong" side, and buying penny stocks.
Does that count as an example, or does that violate your "risk-adjusted terms" assumption? I think we have to be careful about what frictions we throw out. If we are too aggressive in throwing out notions like an "equity premium," or hedging, or options, or market segmentation, or irreducible risk, or different tolerances to risk, we will throw out the stuff that causes financial markets to exist. An infinite frictionless plane is a useful thought experiment, but you can't then complain that a car can't drive on such a plane.
↑ comment by Lumifer · 2015-08-11T14:53:55.527Z · LW(p) · GW(p)
Yes, we have to be quite careful here.
Let's take penny stocks. First, there is no exception for them in the EMH so if it holds, the penny stocks, like any other security, must not provide a "free" opportunity to make money.
Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? Because it's a single number which has nothing to do with risk. A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small. The distribution of penny stock returns can be very skewed and heavy-tailed, but EMH does not demand anything of the returns distributions.
So I think you have to pick one of two: either penny stocks provide negative expected return (remember, in our setup the risk-free rate is zero), but then EMH breaks; or the penny stocks provide non-negative expected return (though with an unusual risk profile) in which case EMH holds but you can't consistently lose money.
Does that violate your "risk-adjusted terms" assumption?
My "risk-adjusted terms" were a bit of a handwave over a large patch of quicksand :-/ I mostly meant things like leverage, but you are right in that there is sufficient leeway to stretch it in many directions. Let me try to firm it up: let's say the portfolio which you will use to consistently lose money must have fixed volatility, say, equivalent to the volatility of the underlying market.
↑ comment by Salemicus · 2015-08-11T19:58:30.618Z · LW(p) · GW(p)
Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? ... A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small.
Yes, I mean expected return. If you hold penny stocks, you can expect to lose money, because the occasional big wins will not make up for the small losses. You are right that we can imagine lotteries with positive expected return, but in the real world lotteries have negative expected return, because the risk-loving are happy to pay for the chance of big winnings.
[If] penny stocks provide negative expected return ... then EMH breaks
Why?
Suppose we have two classes of investors, call them gamblers and normals. Gamblers like risk, and are prepared to pay to take it. In particular, they like asymmetric upside risk ("lottery tickets"). Normals dislike risk, and are prepared to pay to avoid it (insurance, hedging, etc). In particular, they dislike asymmetric downside risk ("catastrophes").
There is an equity instrument, X, which has the following payoff structure:
99% chance: payoff of 0
1% chance: payoff of 1000
Clearly, E(X) is 10. However, gamblers like this form of bet, and are prepared to pay for it. Consequently, they are willing to bid up the price of X to (say) 11.
Y is the instrument formed by shorting X. When X is priced at 11, this has the following payoff structure:
99% chance: payoff of 11
1% chance: payoff of -989
Clearly, E(Y) is 1. In other words, you can make money, in expectation, by shorting X. However, there is a lot of downside risk here, and normals do not want to take it on. They would require E(Y) to be 2 (say) in order to take on a bet of that structure.
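A minimal numeric check of the toy model's arithmetic (all numbers are Salemicus's from above):

    # X pays 0 with probability 0.99 and 1000 with probability 0.01, and trades at 11.
    # Y is a short position in X taken at that price.
    p_win = 0.01
    price_X = 11

    E_X = 0 * (1 - p_win) + 1000 * p_win                           # expected payoff of X
    E_Y = (price_X - 0) * (1 - p_win) + (price_X - 1000) * p_win   # expected payoff of the short

    print(f"E[X] = {E_X:.2f}; buying X at {price_X} loses {price_X - E_X:.2f} in expectation")
    print(f"E[Y] = {E_Y:.2f}; but the short's worst case is {price_X - 1000}")

So the buyer of X reliably bleeds 1 in expectation, while the would-be seller is only compensated 1 in expectation for carrying a rare 989 loss, which is the asymmetry the rest of the comment builds on.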
So, assuming you have a "normal" attitude to risk, you can lose money here (by buying X), but you can't win it in risk-adjusted terms. This is caused by the market segmentation caused by the different risk profiles. Nothing here is contrary to the EMH, although it is contrary to the CAPM.
Thoughts:
- Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.
- A more realistic model might include some deep-pocketed funds with a neutral attitude to risk who could afford to short X. But in real life, there is market segmentation and a lack of liquidity. Penny stocks are illiquid and hard to short, and so are many other high-beta instruments.
- The logical corollary of this model is that safe, boring equities will outperform stocks with lottery-ticket-like qualities. And it therefore follows that safe, boring equities will outperform the market as a whole. And that also seems true in real life.
- There are plausible microfoundations for why there might be a "gambler" class of investor. For example, fund managers are risking their clients' capital, not their own, and are typically paid in a ranking relative to their peers. Their incentives may well lead them to buy lottery tickets.
↑ comment by Lumifer · 2015-08-11T20:34:06.408Z · LW(p) · GW(p)
However, there is a lot of downside risk here, and normals do not want to take it on.
By itself, no. But this is diversifiable risk and so if you short enough penny stocks, the risk becomes acceptable. To use a historical example, realizing this (in the context of junk bonds) is what made Michael Milken rich. For a while, at least.
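A sketch of the diversification arithmetic, reusing the toy payoffs from above and assuming (the crucial and contested assumption) that the individual shorts are independent:

    import math

    # Per-short payoff from the toy model: +11 with probability 0.99, -989 with probability 0.01.
    p = 0.01
    mean = 11 * (1 - p) + (-989) * p                      # = 1 per short
    var = (11 - mean) ** 2 * (1 - p) + (-989 - mean) ** 2 * p
    sd = math.sqrt(var)

    for n in (1, 100, 10_000):
        # Standard deviation of the average payoff across n independent shorts.
        print(f"n = {n:6d}: mean per short {mean:.2f}, sd of the average {sd / math.sqrt(n):.2f}")

With independence, the per-position risk shrinks like 1/sqrt(n) while the expected gain per position stays at 1; whether real penny-stock or junk-bond outcomes are anywhere near independent is exactly what the rest of this exchange disputes.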
market segmentation caused by the different risk profiles
This certainly exists, though it's more complicated than just unwillingness to touch skewed and heavy-tailed securities.
Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.
In real life shorting penny stocks will run into some transaction-costs and availability-to-borrow difficulties, but options are contracts and you can write whatever options you want. So are you saying that selling deep OOM options is a free lunch?
As for the rest, you are effectively arguing that EMH is wrong :-)
Full disclosure: I am not a fan of EMH.
↑ comment by Salemicus · 2015-08-11T20:47:09.519Z · LW(p) · GW(p)
- Who says this risk is diversifiable? Nothing in the toy model I gave you said the risk was diversifiable. Maybe all the X-like instruments are correlated.
- No, I'm not saying that selling deep OOM options is a free lunch, because of the risk profile. And these are definitely not diversifiable.
- I am not arguing that EMH is wrong. I have given you a toy model, where a suitably defined investor cannot make money but can lose money. The model is entirely consistent with the EMH, because all prices reflect and incorporate all relevant information.
↑ comment by Lumifer · 2015-08-11T20:53:18.909Z · LW(p) · GW(p)
toy model
Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?
As to toy models, if I get to define what classes of investors exist and what do they do, I can demonstrate pretty much anything. Of course it's possible to set up a world where "a suitably defined investor cannot make money but can lose money".
And deep OOM options are diversifiable -- there is a great deal of different markets in the world.
↑ comment by Salemicus · 2015-08-11T21:03:27.328Z · LW(p) · GW(p)
Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?
Yeah, but you wanted "a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid." That doesn't describe reality, so describing events in your scenario necessitates a toy model.
In the real world, it is trivial to show how you can lose money even if the EMH is true: you have to pay tax, transaction costs are non-zero, the ex post risk is not known, etc.
deep OOM options are diversifiable -- there is a great deal of different markets in the world.
There's still a lot of correlation. Selling deep OOM options and then running into unexpected correlation is exactly how LTCM went bust. It's called "picking up pennies in front of a steamroller" for a reason.
↑ comment by Lumifer · 2015-08-11T21:09:15.049Z · LW(p) · GW(p)
That doesn't describe reality, so describing events in your scenario necessitates a toy model.
Fair point :-) But still, with enough degrees of freedom in the toy model, the task becomes easy and so uninteresting.
It's called "picking up pennies in front of a steamroller" for a reason.
I know. Which means you need proper risk management and capitalization. LTCM died because it was overleveraged and could not meet the margin calls. And LTCM relied on hedges, not on diversification.
Since deep OOM options are traded, there are people who write them. Since they are still writing them, it looks like not a bad business :-)
↑ comment by Davidmanheim · 2015-08-11T04:22:39.179Z · LW(p) · GW(p)
Yes. Unless you think that all possible market information is already reflected before it becomes available, someone makes money when new information emerges and moves the market.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-11T14:29:17.371Z · LW(p) · GW(p)
Yes, you can (theoretically) make money by front-running the market. But I don't think you can systematically lose money that way (and stay within EMH), and that's the question under discussion.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-11T18:17:55.885Z · LW(p) · GW(p)
If someone is making money by front-running the market, another person on the other side of the trade is losing money.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-11T18:37:59.487Z · LW(p) · GW(p)
We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path, which means you would know where that path is, which means you can systematically forecast the front-running. I think the EMH would be a bit upset by that :-)
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-11T18:53:50.206Z · LW(p) · GW(p)
We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path
Simply making random trades in a market where some participants are front-runners will mean that some of those trades are against front-runners, and on those trades you lose money.
I would call that systematically losing money. On the other hand, it doesn't give you the ability to forecast where you will lose the money, make the opposite bet, and win money.
Do you think our disagreement is about the way the EMH is defined or are you pointing to something more substantial?
Replies from: Davidmanheim↑ comment by Davidmanheim · 2015-08-19T23:47:27.768Z · LW(p) · GW(p)
No, no disagreement about EMH, that's exactly the point.
↑ comment by VoiceOfRa · 2015-08-11T04:46:21.948Z · LW(p) · GW(p)
No, because you can't sell what you don't have.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-11T19:16:59.249Z · LW(p) · GW(p)
In the financial markets you can, easily enough.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-08-12T07:20:22.952Z · LW(p) · GW(p)
Sort of. You have to pay someone additional money for the right/ability to do so.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-12T15:30:45.971Z · LW(p) · GW(p)
You have to pay a broker to sell what you have as well :-P
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-08-13T05:13:42.575Z · LW(p) · GW(p)
A lot less.
Also, this further breaks the asymmetry between making and losing money.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-13T14:31:15.488Z · LW(p) · GW(p)
A lot less.
I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism where, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world it's minuscule because money is very cheap, but that is not the case always or everywhere.
It is perfectly possible to short a stock, cover it at exactly the same price and end up with more money in your account.
Replies from: Douglas_Knight, VoiceOfRa↑ comment by Douglas_Knight · 2015-08-27T23:07:18.061Z · LW(p) · GW(p)
Actually, when you short a stock, you must pay an interest rate to the person from whom you borrowed the stock. That interest rate varies from stock to stock, but is always above the risk-free rate. Thus, if you short a stock and do nothing interesting with the cash and eventually cover it at the original price, you will lose money.
Replies from: JEB_4_PREZ_2016↑ comment by JEB_4_PREZ_2016 · 2015-08-27T23:57:57.054Z · LW(p) · GW(p)
If you enter into a short sale at time 0 and cover at time T, you get paid interest on your collateral or margin requirement by the lender of the asset. This is called the short rebate or (in the bond market) the repo rate. As the short seller, you'll be required to pay the time T asset price along with the lease rate, which is based on the dividends or bond coupons the asset pays out from 0 to T.
So, if no dividends/coupons are paid out, it's theoretically possible for you to profit from selling short despite no change in the underlying asset price.
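A minimal worked example of those mechanics, with illustrative numbers only (rebate and lease rates vary by stock, broker, and era): at an unchanged price, the short seller's profit or loss is roughly the rebate earned on the proceeds minus the lease fee paid to the lender, minus any dividends owed.

```python
def short_pnl_flat_price(shares=100, price=50.0, rebate_rate=0.04,
                         lease_rate=0.003, years=1.0, dividends_per_share=0.0):
    """P&L of a short position covered at exactly the entry price.

    Illustrative sketch: the sale proceeds earn `rebate_rate`, the stock loan
    costs `lease_rate`, and any dividends paid while short are owed to the
    lender. All rates here are made up.
    """
    proceeds = shares * price
    rebate = proceeds * rebate_rate * years        # interest earned on proceeds
    lease_fee = proceeds * lease_rate * years      # fee paid to the stock lender
    return rebate - lease_fee - dividends_per_share * shares

# With a 4% rebate (think high-rate environment) and a 0.3% borrow fee,
# shorting $5,000 of stock for a year at a flat price nets about $185.
print(round(short_pnl_flat_price(), 2))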
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-08-28T02:24:50.263Z · LW(p) · GW(p)
The lease rate is an interest rate (ie, based on time) in addition to the absolute minimum payment of the dividends issued. It is set by the market: there is a limited supply of shares available to be borrowed for shorting. For most stocks it is about 0.3% for institutional investors, but 5% for a tenth of stocks. The point is that this is an asymmetry with buying a stock.
Now that I look it up and see that it is 0.3%, I admit that is not so big, but I think it is larger than the repo rate. I see no reason for the lease rate to be related to inflation, so in a high inflation environment, you could make money by shorting a stock that did not change nominal price.
(Dividends are not a big deal in shorting because the price of a stock usually drops by the amount of the dividend, for obvious reasons.)
↑ comment by VoiceOfRa · 2015-08-14T04:54:06.539Z · LW(p) · GW(p)
I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism where, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world it's minuscule because money is very cheap, but that is not the case always or everywhere.
Maybe if you have the right connections, and the broker really trusts you. The issue is: suppose you short a stock, the price goes up, and you can't cover it. Someone has to assume that risk, and of course will want a risk premium for doing so.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-14T14:29:16.438Z · LW(p) · GW(p)
Maybe if you have the right connections, and the broker really trust you.
It doesn't have anything to do with connections or broker trust. It's standard operating practice for all broker clients.
The issue is suppose you short a stock, the price goes up and you can't cover it.
If the price goes sufficiently up, you get a margin call. If you can't meet it, the broker buys the stock to cover using the money in your account without waiting for your consent. The broker has some risk if the stock gaps (that is, the price moves discontinuously, it jumps directly from, say, $20 to $40), but that's part of the risk the broker normally takes.
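A small sketch of the margin-call arithmetic, with illustrative numbers (actual initial and maintenance requirements vary by broker and jurisdiction): the call is triggered once the account's equity falls below the maintenance requirement on the current value of the borrowed shares.

```python
def margin_call_price(shares=100, entry=20.0, initial_margin=0.5,
                      maintenance=0.3):
    """Price above which a short position triggers a margin call.

    Toy numbers: the account holds the short-sale proceeds plus an initial
    margin deposit; a call is issued once
        equity = cash - shares * price < maintenance * shares * price.
    """
    cash = shares * entry * (1 + initial_margin)   # proceeds plus deposit
    return cash / (shares * (1 + maintenance))

print(round(margin_call_price(), 2))  # roughly 23.08 for a $20 short with these assumptions
```

The gap risk mentioned above is exactly the case this arithmetic cannot handle: if the price jumps straight past the threshold, the broker is buying back the shares well above it.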
Replies from: g_pepper↑ comment by g_pepper · 2015-08-14T17:15:29.952Z · LW(p) · GW(p)
Another thing to watch out for when shorting stocks is dividends. If you are short a stock on the ex dividend date, then you have to pay the dividend on each share that you have shorted. However, as long as you keep margin calls and dividends in mind, short selling is a good technique (and an easy one) to play a stock that you are bearish on.
And, no, you don't need any special connections, although you typically need to request short-selling privileges on your brokerage account.
Another way to play a stock you are bearish on is buying put options. But put options are a lot harder to use effectively because (among other reasons) they can expire worthless on the expiration date.
↑ comment by James_Miller · 2015-08-10T23:36:07.420Z · LW(p) · GW(p)
No, because of taxes, transaction costs, and risk/return issues.
↑ comment by pcm · 2015-08-11T19:01:31.702Z · LW(p) · GW(p)
Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).
It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).
The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.
↑ comment by Good_Burning_Plastic · 2015-08-13T21:34:02.249Z · LW(p) · GW(p)
The EMH works because everybody is trying to gain money, so everybody except you trying to gain money while you try to lose money isn't the symmetric situation. The symmetric situation is everybody trying to lose money, in which case it'd be pretty hard indeed to do so. And if everybody except you were trying to lose money and you were trying to gain money, it'd be pretty easy for you to do so. I think this would also be the case in the absence of taxes and transaction costs. IOW I think Viliam nailed it and the other people are chasing red herrings.
↑ comment by JEB_4_PREZ_2016 · 2015-08-15T19:56:35.623Z · LW(p) · GW(p)
Hugely important to distinguish between investing and trading here. But the short answer is that it'd be near impossible to lose money systematically without knowing the inverse (more profitable) strategy.
Consider the scenario where a 22-year-old teacher named Warren, who knows nothing about finance, takes 80% of his annual salary and buys random stocks with the intent to hold until retirement age (reinvesting all dividends). It would be extraordinarily fluky for him not to make solid returns over the long run with this approach, let alone to merely break even or lose money, as all publicly traded stocks have reasonably high positive expected value.
Now consider derivatives trading. Even if we assume no transaction costs, it'd be near-impossible for Warren not to lose money over the long run by partaking in lots of random bets with, at best, zero expected value. Your term "the market" is problematic because "the market" can act as a bank or a poker table depending on the purchases of the investor.
EMH implies that it's not easy to make a living through financial trading. It's incredibly easy to slowly leak money to those who are making a living at it, though. Unlike buying stocks, bonds, etc., financial trading is zero-sum.
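A rough simulation of that contrast, with made-up drift, volatility, and cost figures: a positive-drift buy-and-hold portfolio tends to compound upward over decades, while a long sequence of fair bets that each leak a small cost grinds steadily down.

```python
import random

def buy_and_hold(years=40, annual_mu=0.05, annual_sigma=0.18, seed=0):
    """Toy buy-and-hold outcome: positive drift, dividends reinvested, held for decades."""
    random.seed(seed)
    wealth = 1.0
    for _ in range(years):
        wealth *= max(0.0, 1 + random.gauss(annual_mu, annual_sigma))
    return wealth

def zero_ev_trading(trades=2000, cost=0.001, seed=0):
    """Toy trading outcome: fair bets that each leak a small cost to the counterparty."""
    random.seed(seed)
    wealth = 1.0
    for _ in range(trades):
        wealth *= 1 + random.gauss(0.0, 0.01) - cost
    return wealth

def average(fn, runs=200):
    """Average final wealth over many simulated lifetimes."""
    return sum(fn(seed=s) for s in range(runs)) / runs

print("average buy-and-hold wealth:", round(average(buy_and_hold), 2))    # drift compounds upward
print("average zero-EV trading wealth:", round(average(zero_ev_trading), 2))  # small leaks compound down
```

The specific numbers mean nothing; the asymmetry between a positive-sum asset and a zero-sum game with frictions is the point.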
comment by Richard_Kennaway · 2015-08-14T22:05:23.410Z · LW(p) · GW(p)
I just came across this: "You're Not As Smart As You Could Be", about Dr. Samuel Renshaw and the tachistoscope. This is a device used for exposing an image to the human eye for the briefest fraction of a second. In WWII he used it to train navy and artillery personnel to instantly recognise enemy aircraft, apparently with great success. He also used it for speed reading training; this application appears to be somewhat controversial.
I remember the references to Renshaw in some of Heinlein's stories, and I knew he was a real person, but this is the first time I've seen a substantial account of his work.
A few more references:
Wikipedia is rather brief.
Open access review article about work with the tachistoscope, in the Journal of Behavioral Optometry, 2003. This is the closest thing I've found to a modern reference.
An academic paper by Renshaw himself from 1945. Despite its antiquity, it is paywalled. I have not been able to access the full text.
This information is mostly rather old and musty, and there appears to be little modern interest. With current computers, it should be very easy to duplicate the technology, although low-level graphics expertise is likely needed to get very short, precise exposure times.
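As a rough sense of how little code this takes today, here is a minimal sketch using pygame; the image path is a placeholder, and on a typical 60 Hz display the exposure is quantized to ~16.7 ms frames, so genuinely short, precise exposures need the frame-locked, low-level timing work mentioned above.

```python
import pygame

def flash_image(path="stimulus.png", exposure_ms=50):
    """Show an image briefly, then blank the screen -- a crude software tachistoscope.

    `path` is a placeholder; the exposure is only as precise as the display's
    refresh cycle, so treat this as a sketch, not a lab instrument.
    """
    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    stimulus = pygame.image.load(path).convert()

    screen.fill((128, 128, 128))      # neutral background / fixation
    pygame.display.flip()
    pygame.time.delay(500)

    screen.blit(stimulus, (0, 0))     # present the stimulus
    pygame.display.flip()
    pygame.time.delay(exposure_ms)    # nominal exposure time

    screen.fill((128, 128, 128))      # blank / mask
    pygame.display.flip()
    pygame.time.delay(500)
    pygame.quit()

if __name__ == "__main__":
    flash_image()
```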
Replies from: Sarunas↑ comment by Sarunas · 2015-08-15T12:26:19.255Z · LW(p) · GW(p)
The Visual Perception and Reproduction of Forms by Tachistoscopic Methods, Samuel Renshaw
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-08-15T12:46:29.046Z · LW(p) · GW(p)
Thanks.
comment by iarwain1 · 2015-08-13T15:29:44.870Z · LW(p) · GW(p)
On the subject of prosociality / wellbeing and religion, a recent article challenges the conventional wisdom by claiming that, depending on the particular situation, atheism might be just as good or even better for prosociality / wellbeing than religion is.
comment by Username · 2015-08-10T13:08:08.134Z · LW(p) · GW(p)
A database of philosophical ideas
Current Total Ideas: 17,046
comment by Dahlen · 2015-08-10T14:27:29.504Z · LW(p) · GW(p)
Meta: How come there have been so many posts recently by the generic Username account? More people wanting to preserve anonymity, or just one person who can't be bothered to make an account / own up to most of what they say?
Replies from: Username, Vaniver, Elo, ChristianKl↑ comment by Username · 2015-08-10T21:40:17.256Z · LW(p) · GW(p)
The similar formatting of the comments suggests that in this thread it's mostly one person with a lot of links to share.
Personally, I just haven't been bothered to make an account, and have been using the username account exclusively for about 5 years. I'd estimate 30-50% of all the posts on the account were made by me over this timeframe, though writing style suggests to me that a good number of people have used it as a one-shot throwaway, and several people have used it many times.
↑ comment by Elo · 2015-08-10T22:45:14.702Z · LW(p) · GW(p)
I think it's a reasonable solution for people not wanting to make an account, or for the occasional anonymous post. I have used it once or twice to make separate comments.
But I should add that you can see a list of your nearest meetups if you set your location on your own account.
Edit: holy hell, the person who posted all the OT comments here is really annoying and should make an account and stop link-dropping. If the account is being abused that badly, we should shut it down, and I would change my vote in the poll.
Replies from: None↑ comment by [deleted] · 2015-08-11T16:23:53.190Z · LW(p) · GW(p)
From the upvote to downvote ratio it looks like more members think the posts by Username in the open thread are worthwhile - at the time of writing they are mostly among the higher top-level comments on this week's open thread, and several have sparked at least a bit of subsequent discussion in the form of follow up comments.
True, they're only links (with quoted text) but this doesn't particularly strike me as abuse of the Username account.
Replies from: Elo↑ comment by ChristianKl · 2015-08-10T18:30:39.420Z · LW(p) · GW(p)
That leaves the question of whether that's okay or whether we should simply disable the account.
[pollid:1013]
Replies from: Dahlen, Davidmanheim↑ comment by Dahlen · 2015-08-11T09:06:58.952Z · LW(p) · GW(p)
Oh, I wasn't suggesting that; I was just hoping that whoever has been exclusively posting from that account can take a hint and consider using LW the typical way. It's confusing to see so many posts at once by that account and not know whether there's one person or several using it.
↑ comment by Davidmanheim · 2015-08-11T04:36:22.655Z · LW(p) · GW(p)
It's interesting looking at the raw data breakdown of non-anonymous versus anonymous votes.
Replies from: Elo↑ comment by Elo · 2015-08-11T06:53:37.101Z · LW(p) · GW(p)
That's creepy; also, if you take away all the anonymous votes, there are very few votes left (5). That might be normal for polls here. Hard to tell.
Also to note; I voted with my account here and it does not appear in the raw poll data. I don't know why.
Replies from: bbleeker, ChristianKl↑ comment by Sabiola (bbleeker) · 2015-08-11T08:50:44.749Z · LW(p) · GW(p)
Anonymous voting is the default, and I always leave it on.
Replies from: Davidmanheim↑ comment by Davidmanheim · 2015-08-19T23:45:14.038Z · LW(p) · GW(p)
I'd prefer to see accountability be a default, with anonymity whenever desired.
↑ comment by ChristianKl · 2015-08-11T09:00:28.982Z · LW(p) · GW(p)
All votes are done by real accounts. There is a checkbox under a poll that marks whether your vote is anonymous or not; by default it's set to anonymous.
comment by [deleted] · 2015-08-15T12:45:27.600Z · LW(p) · GW(p)
When people comment on something here, how quickly do they know what their answer will be? For example, the valuable advice on statistics I received several times seems to be generated by pattern-recognition (at least at my level of understanding). I myself often have to spend more time framing my comments than actually recognizing what I want to express (not that I always succeed, but there's an internal mean-o-meter which says, this is it). OTOH, much of the material I simply don't understand, not having sufficient prerequisite knowledge; the polls are aimed at the areas with which you personally interact.
I mostly know what I am going to say [pollid:1014]
The posts to which I don't have an immediate answer [pollid:1015]
Please add your comments on which topics you have to 'slow down' - anonymously, if you wish.
Edit to add: My answer undergoes changes before I submit it;
[pollid:1016]
[pollid:1017]
Replies from: satt↑ comment by satt · 2015-08-15T13:45:58.732Z · LW(p) · GW(p)
A lot of my comments here are correcting/supplementing/answering someone else's comment. Reflecting on how I think the typical sequence goes, it might be something like
- as I read a comment, get a sensation of "this seems prima facie wrong" or "that sounds misleading" or whatever
- finish reading, then re-read to check I'm not misunderstanding (and sometimes it turns out I have misunderstood)
- translate my gut feeling of wrongness into concrete criticism(s)
- rephrase & rephrase & rephrase & rephrase what I've written to try to minimize ambiguity and maybe adjust the politeness level
and so it's hard to say how long it takes me to "mostly know what I am going to say". I often have a vague outline of what I ought to say within 10 or 20 seconds of noticing my feeling that Something's Wrong, but it can easily take me 10 or 20 minutes to actually decide what I'm going to say. For instance, when I read this comment, I immediately thought, "I don't think that can be right; Russia's a violent country and some wars are small", but it took me a while (maybe an hour?) to put that into specific words, and decide which sources to link.
Edit to add: I agree that pattern recognition plays an important part in this. A big part of expertise, I reckon, is just planting hundreds & hundreds of pattern-recognition rules into your brain so when you see certain errors or fallacies you intuitively recognize them without conscious effort.
Replies from: None↑ comment by [deleted] · 2015-08-15T14:06:40.090Z · LW(p) · GW(p)
I am somewhat afraid, then, that reading about fallacies won't change my ability to recognize them significantly. Perhaps 'rationality training' should really focus on the editing part, not on the recognizing part. I'll add another question.
Replies from: satt↑ comment by satt · 2015-08-18T00:44:33.466Z · LW(p) · GW(p)
Depends how your mind works, I guess. I read about fallacies when I was young and I feel like that helped me recognize them, even without much deliberate practice in recognizing them (but I surely had a lot of accidental & semi-accidental practice).
Recognition is probably more important than the editing part, because the editing part isn't much use without having the "Aha! That's probably a fallacy!" recognitions to edit, and because you might be able to do a good job of intuitively recognizing fallacies even if you can't communicate them to other people cleanly & unambiguously.
comment by ChristianKl · 2015-08-13T17:30:16.188Z · LW(p) · GW(p)
Is there a good book about how to read scientific papers? A book that neither says that papers should never be trusted nor is oblivious to the real world, where research often doesn't replicate?
One that goes deeper than just the password of 'correlation isn't causation'? That not only looks at theoretical statistics but is more empirical about heuristics for judging whether a paper will successfully replicate?
comment by Lumifer · 2015-08-11T16:44:31.270Z · LW(p) · GW(p)
The soon-to-be prisoner's dilemma in real life, no less :-)
Replies from: WalterL, MrMind↑ comment by MrMind · 2015-08-12T09:38:27.186Z · LW(p) · GW(p)
Uhm, I wonder if they are aware that the prisoner's dilemma is defeated through pre-commitment. They are weeding out the small dealers, strengthening the big ones.
Replies from: Lumifer, MrMind, Salemicus↑ comment by Lumifer · 2015-08-12T15:36:56.012Z · LW(p) · GW(p)
I think the police are mostly playing a PR game and/or amusing themselves. The idea of ratting on a competitor is simple enough to occur to drug dealers "naturally" :-)
Also note that this is not quite a PD where defecting gives you a low-risk slightly positive outcome. Becoming a police informer is... frowned upon on the street and is actually a high-risk move, usually taken to avoid a really bad outcome.
Replies from: Tem42↑ comment by Tem42 · 2015-08-12T16:00:10.885Z · LW(p) · GW(p)
I would expect that it is slightly more than a PR stunt; it seems to me that most of the people who will use this 'service' are disgruntled citizens with no direct connection to the drug trade. Anyone who wants to accuse someone of trading in drugs now has an easy, anonymous, officially sanctioned way to do so, and clear instruction as to what information is most useful -- without having to ask!
I suspect that framing it as "drug dealers backstabbing drug dealers" is just a publicly acceptable way to introduce a snitching program that would otherwise be frowned upon by many.
Replies from: Lumifer↑ comment by Salemicus · 2015-08-12T10:45:25.237Z · LW(p) · GW(p)
Pre-commitment needs to be credible, verifiable and enforceable. If you're playing chicken, pre-commitment means throwing the steering-wheel out of the window, not just saying "I will never ever swerve, pinky-swear."
What is the relevant pre-commitment mechanism here, and how does it operate?
If anything, I would say large dealers are more vulnerable.
Replies from: MrMind↑ comment by MrMind · 2015-08-12T13:21:15.776Z · LW(p) · GW(p)
What is the relevant pre-commitment mechanism here, and how does it operate?
Affiliation with a powerful criminal organization, which can kill you if you rat or bail you out if you cooperate.
Basically, the suckers at the bottom get caught while those who deal for the Mob face less competition.
In the most powerful flavor of the Italian Mafia, affiliates call themselves "men of honor".
comment by [deleted] · 2015-08-10T11:25:11.228Z · LW(p) · GW(p)
I would like to see some targeted efforts to make the Sequences and other rationality materials available to less aspirational, curious or intellectual audiences.
Rationality fiction reaches out to curious audiences. Intellectual audiences may stumble upon rationality material while researching their respective fields of interest. Aspirants to rationality may stumble upon it while looking to better themselves and those around them.
Many ordinary people can benefit from the concepts here. And they will likely find their way to it, should there be an evident benefit to them, by contact with the above classes of people who are in direct contact with first-generation rationality materials produced here. I can see this at my local LW group, where it was hard to find someone who actually read LessWrong, based on my one visit. Though, it may be an artifact of the way the group was marketed in the past outside of the community.
Those who find learning difficult and distasteful can also benefit from rationality materials. So, I would like to start a discussion of suggestions by which material here could be adapted for use by a broader audience. I'll start us off by introducing the existing evidence on the subject of evidence-based teaching. This is easy, because a gentleman by the name of John Hattie synthesised 800 meta-analyses in education to figure that entire field out.
Using his own example, I will share a small mnemonic that future posters may like to keep in mind to keep things more accessible to those less cognitively flexible. It may be easier to take this approach, of consciously adopting writing styles that pander to the lowest common denominator, without reducing the sophistication of the content, than to restyle past discussions and sequences for that purpose.
DEFENDS
D ecide on audience, goals, and position
E stimate main ideas and details
F igure best order of main ideas and details
E xpress the position in the opening
N ote each main idea and supporting points
D rive home the message in the last sentence
S earch for errors and correct
Replies from: None, ChristianKl
↑ comment by [deleted] · 2015-08-10T13:35:08.324Z · LW(p) · GW(p)
Have a look at the postings by Gleb Tsipursky who has been deeply involved in exactly such an enterprise: trying to bring rationality to much more widespread ("ordinary") audiences through a nonprofit organisation "Intentional Insights".
It is a controversial goal, and he's certainly faced criticism here on LW for the approach he takes. But very close to the main ideas expressed in your comment.
(I am not affiliated with Gleb or Intentional Insights, just thought it was relevant enough to mention in this context)
↑ comment by ChristianKl · 2015-08-10T16:19:02.994Z · LW(p) · GW(p)
This is easy, because a gentleman by the name of John Hattie synthesised 800 meta-analyses in education to figure that entire field out.
Why read him over any basic textbook on the subject?
Replies from: None
comment by Tem42 · 2015-08-14T02:55:01.251Z · LW(p) · GW(p)
Being a comparatively new user, and thus having limited karma, I can't engage fully with The Irrationality Game. Seeing as how it's about 5 years out of date, is there any interest in playing the game anew? Are there rules on who should/can post such things?
Replies from: Zian, ChristianKl↑ comment by ChristianKl · 2015-08-14T11:19:18.310Z · LW(p) · GW(p)
Are there rules on who should/can post such things?
No. You are free to start new threads like this in Discussion. Karma votes on the new thread will tell you to what extent the community is happy that you started it.
If you find yourself posting threads that get negative karma, try to understand why they get negative karma and don't repeat mistakes.
Replies from: Tem42↑ comment by Tem42 · 2015-08-14T15:52:15.260Z · LW(p) · GW(p)
My question was actually a bit more targeted - I should have been more precise.
Will_Newsome posted the original Irrationality Game, and he has left the site (well, hasn't posted for months. Perhaps I need to PM him and ask if he's still around). His original post was really very well written, and while I could re-write it, I would probably not change much. So basically, if I repost the idea of an established user who is no longer around... Is that really okay?
I would have no objection to posting under Username, if that made it 'more okay', and I wouldn't mind at all if someone else posted it rather than I -- I just want to play an active version of the game.
I will also double-check and see if Will_Newsome might still be on-site and interested.
comment by Lumifer · 2015-08-12T16:27:51.792Z · LW(p) · GW(p)
Augur -- a blockchain general-purpose prediction market running on Ethereum.
Does anyone know anything about it? Gwern..?
Replies from: gwern↑ comment by gwern · 2015-08-12T17:32:37.185Z · LW(p) · GW(p)
Yes, I've paid close attention to Truthcoin and it. They are both interesting projects with a chance of success, although it's hard to make any strong predictions or claims before they are up and running, in part because of the running feud between Paul and the Augur guys. (For example, they both seem to agree that the original consensus/clustering algorithm using SVD/PCA will not work in an adversarial setting, but will Augur's new clustering algorithm succeed? It comes with no formal proofs other than that it seems to work in the simulations; Paul seems to dislike it but has not, in any of his rants that I've seen, explained why he thinks it will not work or what a better solution would be.)
I will probably buy a bit of the Augur crowdsale so I can try it out myself.
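For readers wondering what an SVD/PCA-style consensus even looks like, here is a toy sketch of the general idea (score reporters by how far they sit from the dominant pattern of agreement and downweight the minority side); it is emphatically not the Truthcoin or Augur algorithm, and it has exactly the adversarial weaknesses being argued about.

```python
import numpy as np

def toy_consensus(reports):
    """Toy SVD-based consensus over a reporters x events matrix of 0/1 reports.

    Illustration only: centre the report matrix, take the first singular
    direction as the main axis of disagreement, orient it so the majority of
    reporters sit on the positive side, downweight reporters on the far
    negative side, and return weighted outcomes plus reporter weights.
    """
    R = np.asarray(reports, dtype=float)
    centred = R - R.mean(axis=0)
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    scores = U[:, 0] * S[0]                 # each reporter's position on the main axis
    if (scores > 0).sum() < (scores < 0).sum():
        scores = -scores                    # orient axis toward the majority
    weights = np.clip(scores - scores.min(), 1e-9, None)
    weights /= weights.sum()
    outcomes = weights @ R                  # weighted vote per event
    return outcomes, weights

# Five honest reporters and one liar on three binary events.
reports = [[1, 0, 1]] * 5 + [[0, 1, 0]]
outcomes, weights = toy_consensus(reports)
print(np.round(outcomes, 2), np.round(weights, 2))
```

A coordinated minority that mimics the honest reporting pattern on most events can obviously game a scheme this naive, which is roughly the adversarial-setting objection.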
comment by btrettel · 2015-08-11T19:44:30.232Z · LW(p) · GW(p)
Are there any guidelines for making comprehensive predictions?
Calibration is good, as is accuracy. But if you never even thought to predict something important, it doesn't matter if you have perfect calibration and accuracy. For example, Google recently decided to restructure, and I never saw this coming.
I can think of a few things. One is to use a prediction service like PredictionBook that aggregates predictions from many people. I never would have considered half the predictions on the site. Another is to get in the habit of recognizing when you don't think something will change and questioning that. E.g., I never would have thought not wearing socks would become stylish, but it seems to have caught on at least among some people.
Questioning literally everything you can think of might work, but it seems pretty inefficient. I'm interested in predictions which are important in some sense.
Any ideas would be appreciated.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-11T20:22:58.110Z · LW(p) · GW(p)
Are there any guidelines for making comprehensive predictions?
Are you asking how to generate a universe of possible outcomes to consider, basically?
Replies from: btrettel↑ comment by btrettel · 2015-08-11T21:59:36.993Z · LW(p) · GW(p)
Yes, that's one way to put it. The main restriction would be to pick "important" predictions, whatever "important" means here.
One other idea I just had would be to make a list of general questions you can ask about anything along with a list of categories to apply these questions to.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-12T15:27:43.477Z · LW(p) · GW(p)
The main restriction would be to pick "important" predictions, whatever "important" means here.
I don't know if there is any useful algorithm here. The space of possibilities is vast, black swans lurk at the outskirts, and Murphy is alive and well :-/
You can try doing something like this:
- List the important (to you) events or outcomes in some near future
- List everything that could potentially affect these events or outcomes.
and you get your universe of "events of interest" to assign probabilities to.
I doubt this will be a useful exercise in practice, though.
Replies from: btrettel
comment by Stephen_Cole · 2015-08-10T14:20:29.365Z · LW(p) · GW(p)
Has there been discussion of Jack Good's principle of nondogmatism? (see Good Thinking, page 30).
The principle, stated simply in my bastardized version, is to believe no thing with probability 1. It seems to underlie Good's type 2 rationality (to maximize expected utility, within reason).
This is (almost) in accord with Lindley's concept of Cromwell's rule (see Lindley's Understanding Uncertainty or https://en.wikipedia.org/wiki/Cromwell%27s_rule). And seems to be closely related to Jaynes' mind projection fallacy.
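The Bayesian reason for the principle is easy to state: a probability of exactly 1 can never be revised, because for any evidence E with P(E) > 0,

```latex
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0}
  = 1 .
```

So no possible observation can move the belief, and the symmetric argument applies to a prior of exactly 0.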
Replies from: Tem42, None↑ comment by Tem42 · 2015-08-10T17:30:06.511Z · LW(p) · GW(p)
There have been discussions on this topic, although perhaps not framed as nondogmatism. If you have not read 0 and 1 are not probabilities and infinite certainty, you might find them and related articles interesting.
↑ comment by [deleted] · 2015-08-11T12:37:26.539Z · LW(p) · GW(p)
The principle, stated simply in my bastardized version, is to believe no thing with probability 1.
Meeehhhh. Believe nothing empirical with probability 1.0. Believe formal and analytical proofs with probability 1.0.
Replies from: JoshuaZ, Stephen_Cole↑ comment by JoshuaZ · 2015-08-14T18:02:53.361Z · LW(p) · GW(p)
Have you never seen an apparently valid mathematical proof that you later found an error in?
Replies from: None↑ comment by Stephen_Cole · 2015-08-14T20:39:12.753Z · LW(p) · GW(p)
I get your point that we can have greater belief in logical and mathematical knowledge. But (as pointed out by JoshuaZ) I have seen too many errors in proofs given at scientific meetings (and in submitted publications) to blindly believe just about anything.
Replies from: None↑ comment by [deleted] · 2015-08-14T23:52:54.759Z · LW(p) · GW(p)
I get your point that we can have greater belief in logical and mathematical knowledge.
That wasn't quite my point. As a simple matter of axioms, if you condition on the formal system, a proven theorem has likelihood 1.0. Since all theorems are ultimately hypothetical statements anyway, conditioned on the usefulness of the underlying formal system rather than a Platonic "truth", once a theorem is proved, it can be genuinely said to have probability 1.0.
Replies from: Stephen_Cole↑ comment by Stephen_Cole · 2015-08-22T04:13:59.395Z · LW(p) · GW(p)
I will assume that by likelihood you meant probability. I think you have removed my concern by conditioning on it. The theorem has probability 1 in your formal system. For me that is not probability 1; I don't give any formal system full control of my beliefs/probabilities.
Of course, I believe arithmetic with probability approaching 1. For now.
comment by cousin_it · 2015-08-14T11:03:40.013Z · LW(p) · GW(p)
Important question that might affect the chances of humanity's survival:
Why is Bostrom's owl so ugly? I'm not much of a designer, but here's my humble attempt at a redesign :-)
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-14T11:21:07.174Z · LW(p) · GW(p)
Your owl looks cute and not scary. Framing AGIs as cute seems to go against the point.
Replies from: cousin_it↑ comment by cousin_it · 2015-08-14T11:46:33.583Z · LW(p) · GW(p)
Aha, that answers my question. I didn't realize that Bostrom's owl represented superintelligence, so I chose mine to be more like a reader stand-in.
If the owl is supposed to look scary and wrong, reminiscent of Langford's basilisk, then I agree that the original owl does the job okay. Though that didn't stop someone on Tumblr from being asked "why are you reading an adult coloring book?", which was the original impetus for me.
Is it possible to find an image that will look scary and wrong, but won't look badly drawn? Does anything here fit the bill?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-14T12:05:39.653Z · LW(p) · GW(p)
There's the parable of the sparrows who raise an owl: https://www.youtube.com/watch?v=7rRJ9Ep1Wzs That owl likely made it onto the cover.
I don't think the owl has anything to do with the owls in the study hall ;)
Replies from: cousin_it↑ comment by cousin_it · 2015-08-14T15:28:25.246Z · LW(p) · GW(p)
OK, here's my next attempt with a well-drawn owl that looks scary instead of cute. What do you think?
Replies from: g_pepper, jam_brand, Tem42, ChristianKl↑ comment by g_pepper · 2015-08-14T17:30:12.280Z · LW(p) · GW(p)
I actually like Bostrom's owl. I've always thought that Superintelligence has a really good cover illustration.
Replies from: cousin_it↑ comment by cousin_it · 2015-08-14T17:57:12.854Z · LW(p) · GW(p)
I like it too, because it has character, which few pictures do. But the asymmetrical distorted face just bugs me. And the ketchup stains on the collar. And the composition problems (lack of space below the subtitle, timid placement of trees, etc.) For some reason my brain didn't see them as creative choices, but as mechanical problems that need fixing. Maybe I'm oversensitive.
Replies from: gjm, Lumifer↑ comment by gjm · 2015-08-18T14:27:55.507Z · LW(p) · GW(p)
Stating the obvious: I don't think those stains are meant to be ketchup. (And maybe "owl with bloodstains" feels scarier than "owl with ketchup stains".)
Replies from: Lumifer↑ comment by Lumifer · 2015-08-18T14:29:26.995Z · LW(p) · GW(p)
They don't look like blood either.
Replies from: gjm↑ comment by gjm · 2015-08-18T17:30:41.316Z · LW(p) · GW(p)
Well, if we're going to be picky, neither does the owl look like an owl. It's not that sort of picture. But I suggest that "blood" is a more likely answer to "what do those red bits indicate?" than anything like "ketchup".
Replies from: Lumifer↑ comment by Lumifer · 2015-08-18T17:52:17.242Z · LW(p) · GW(p)
neither does the owl look like an owl
But it does. It's stylised, but it is certainly an owl, not a crow, a hawk, or a tit. As to the reddish brown bits, they match the colour of the pixel droppings in the bottom left of the cover, I think. Hard to say what was in the mind of the graphic designer...
Replies from: gjm↑ comment by gjm · 2015-08-18T22:57:28.974Z · LW(p) · GW(p)
Perhaps I wasn't clear. The red looks (I think) about as much like a bloodstain as the owl looks like an owl. That is: no one would mistake one for the other, but the resemblance is there and you can certainly take one as an indication of the other.
↑ comment by jam_brand · 2015-08-17T20:27:38.404Z · LW(p) · GW(p)
I strongly dislike this. The head seems too ornate and the outline reminds me of so-called "tribal" tattoos, which seems low status. The body being subtly asymmetrical is a slight annoyance as well and with the owl now being centered in the image I think the subtitle should be too.
↑ comment by Tem42 · 2015-08-15T14:01:07.054Z · LW(p) · GW(p)
This looks very good. The feet perched on thin air look a little off.
You should probably check with the presumed copyright holder, although I suspect that she plagiarized the head design.
↑ comment by ChristianKl · 2015-08-15T11:58:00.637Z · LW(p) · GW(p)
That looks nice, but I wouldn't trust my aesthetic judgement too much.
comment by iarwain1 · 2015-08-12T21:27:05.432Z · LW(p) · GW(p)
There's a new article on academia.edu on potential biases amongst philosophers of religion: Irrelevant influences and philosophical practice: a qualitative study.
Abstract:
To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained? This paper investigates irrelevant influences in philosophy through a qualitative survey on the personal beliefs and attitudes of philosophers of religion. In the light of these findings, I address two questions: an empirical one (whether philosophers of religion are influenced by irrelevant factors in forming their philosophical attitudes), and an epistemological one (whether the influence of irrelevant factors on our philosophical views should worry us). The answer to the empirical question is a confident yes, to the epistemological question, a tentative yes.
Replies from: g_pepper
↑ comment by g_pepper · 2015-08-12T23:28:08.335Z · LW(p) · GW(p)
To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained?
I would expect a person's education to shape his/her philosophical views; if one's philosophy is not shaped by one's education, then one has had a fairly superficial education.
Replies from: iarwain1↑ comment by iarwain1 · 2015-08-13T00:27:05.560Z · LW(p) · GW(p)
She means that you're biased towards the way you were taught vs. alternatives, regardless of the evidence. The example she gives (from G.A. Cohen) is that most Oxford grads tend to accept the analytic / synthetic distinction while most Harvard grads reject it.
Replies from: g_pepper↑ comment by g_pepper · 2015-08-13T01:16:21.351Z · LW(p) · GW(p)
Yes, I got that from reading the paper. However, the wording of the abstract seems quite sloppy; taken at face value it suggests that a person's education, K-postdoc (not to mention informal education) should have no influence on the person's philosophy.
Moreover, the paper's point (illustrated by the Cohen example) is not really surprising; one's views on unanswered questions are apt to be influenced by the school of thought in which one was educated - were this not the case, the choice of what university to attend and which professor to study under would be somewhat arbitrary. Moreover, I don't think that she made a case that philosophers are ignoring the evidence, only that the philosopher's educational background continues to exert an influence throughout the philosopher's career. From a Bayesian standpoint this makes sense - loosely speaking, when the philosopher leaves graduate school, his/her education and life experience to that point constitute his/her priors, which he/she updates as new evidence becomes available. While the philosopher's priors are altered by evidence, they are not necessarily eliminated by evidence. This is not problematic unless overwhelming evidence one way or the other is available and ignored. The fact that whether or not to accept the analytic / synthetic distinction is still an open question suggests that no such overwhelming evidence exists - so I am not seeing a problem with the fact that Oxford grads and Harvard grads tend (on average) to disagree on this issue.
comment by Houshalter · 2015-08-14T03:57:18.330Z · LW(p) · GW(p)
Omega places a button in front of you. He promises that each press gives you an extra year of life, plus whatever your discounting factor is. If you walk away, the button is destroyed. Do you press the button forever and never leave?
Replies from: MrMind, NoSuchPlace, shminux↑ comment by MrMind · 2015-08-14T07:04:43.297Z · LW(p) · GW(p)
That's a variant of a known problem in any decision theory that admits unbounded utility: there's something inside a box whose utility increases every minute, but the increase stops when you open the box, and only then do you get to enjoy it.
When do you open the box?
↑ comment by NoSuchPlace · 2015-08-14T13:19:37.571Z · LW(p) · GW(p)
Since I don't spend all my time inside avoiding every risk and hoping for someone to find the cure to aging, I probably value an infinite life a large but finite number of times more than a year of life. This means that I must discount in such a way that, after a finite number of button presses, Omega would need to grant me an infinite life span.
So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.
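A minimal sketch of that Fermi calculation, with a made-up exchange rate between "a year of life" and "immortality" (that number is the assumption doing all the work): once the accumulated presses are worth as much as you assign to an unbounded lifespan, further pressing can't pay for the delay, so you leave.

```python
def presses_needed(value_of_immortality_in_years=1e6, years_per_press=1.0):
    """Upper bound on button presses before walking away.

    Toy Fermi estimate: if each press is worth one (discount-adjusted) year
    and an unbounded lifespan is valued at a large but finite number of such
    years, stop once the presses add up to that valuation.
    """
    return int(value_of_immortality_in_years / years_per_press) + 1

n = presses_needed()
# At one press per second, a million presses is under a fortnight of pressing.
print(n, "presses ~", round(n / 86400, 1), "days of non-stop pressing")
```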
↑ comment by Shmi (shminux) · 2015-08-14T06:22:20.274Z · LW(p) · GW(p)
Assuming those are QALY, not just years, spend a week or so pressing the button non-stop, then use the extra million years to become Omega.
comment by AstraSequi · 2015-08-12T09:49:20.593Z · LW(p) · GW(p)
A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?
I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button that could block the modification, I would press it, but I feel like that's only because I have a meta-preference that my preferences tend to maximizing happiness, and the meta-preference has the same problem.
A quicker way to say this is that future-me has a better claim to caring about what the future world is like than present-me does. I still try to work toward a better world, but that's based on my best prediction for my future preferences, which is my current preferences.
Replies from: Viliam, Richard_Kennaway, Squark, Tem42↑ comment by Viliam · 2015-08-12T12:36:59.623Z · LW(p) · GW(p)
If I offered you now a pill that would make you (1) look forward to suicide, and (2) immediately kill yourself, feeling extremely happy about the fact that you are killing yourself... would you take it?
Replies from: AstraSequi↑ comment by AstraSequi · 2015-08-13T11:26:54.797Z · LW(p) · GW(p)
No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the amount of happiness I received from taking the pill would have to be infinite. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.
↑ comment by Richard_Kennaway · 2015-08-12T12:00:54.495Z · LW(p) · GW(p)
Why should I want to resist changes to my preferences?
Because that way leads to
wireheading
indifference to dying (which wipes out your preferences)
indifference to killing (because the deceased no longer has preferences for you to care about)
readiness to take murder pills
and so on. Greg Egan has a story about that last one: "Axiomatic".
Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.
So much for the destructive critique. What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? Think of yourself at half your present age — then think of yourself at twice your present age (and for those above the typical LessWrong age, imagined still hale and hearty).
Which changes should be shunned, and which embraced?
An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.
[1] Which is not to say I think that Lewis' treatment is definitive. For example, there is hardly a word there relating to intelligence, rationality, curiosity, "internal" honesty (rather than honesty in dealing with others), vigour, or indeed any of Eliezer's "12 virtues", and I think a substantial number of the ancient list of Roman virtues don't get much of a place either. Lewis has sought the Christian virtues, found them, and looked no further.
Replies from: AstraSequi↑ comment by AstraSequi · 2015-08-13T11:53:16.209Z · LW(p) · GW(p)
Because that way leads to wireheading, indifference to dying (which wipes out your preferences), indifference to killing (because the deceased no longer has preferences for you to care about), readiness to take murder pills, and so on. Greg Egan has a story about that last one: "Axiomatic".
Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.
I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference.
What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? …
An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.
I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-08-13T15:40:31.966Z · LW(p) · GW(p)
The issue is that I don’t have an “ultimate” preference
Do you need one?
If you keep asking "why" or "what if?" or "but suppose!", then eventually you will run out of answers, and it doesn't take very many steps. Inductive nihilism — thinking that if you have no answer at the end of the chain then you have no answer to the previous step, and so on back to the start — is a common response, but to me it's just another mole to whack with Modus Tollens, a clear sign that one's thinking has gone wrong somewhere. I don't have to be able to spot the flaw to be sure there is one.
How could I convince my future self that my preferences are better than theirs?
Your future self is not a person as disconnected from yourself as the people you pass in the street. You are creating all your future yous minute by minute. Your whole life is a single, physically continuous object:
"Suppose we take you as an example. Your name is Rogers, is it not? Very well, Rogers, you are a space-time event having duration four ways. You are not quite six feet tall, you are about twenty inches wide and perhaps ten inches thick. In time, there stretches behind you more of this space-time event, reaching to perhaps nineteen-sixteen, of which we see a cross-section here at right angles to the time axis, and as thick as the present. At the far end is a baby, smelling of sour milk and drooling its breakfast on its bib. At the other end lies, perhaps, an old man someplace in the nineteen-eighties.
"Imagine this space-time event that we call Rogers as a long pink worm, continuous through the years, one end in his mother's womb, and the other at the grave..."
Robert Heinlein, "Life-line"
Do you want your future self to be fit and healthy? Well then, take care of your body now. Do you wish his soul to be as healthy? Then have a care for that also.
↑ comment by Squark · 2015-08-12T20:11:01.811Z · LW(p) · GW(p)
"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness.
"...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?
Replies from: AstraSequi↑ comment by AstraSequi · 2015-08-13T11:24:05.469Z · LW(p) · GW(p)
I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.
Replies from: Squark↑ comment by Squark · 2015-08-13T20:10:11.374Z · LW(p) · GW(p)
Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self's happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X to happen, the chance X will happen becomes lower. So this change of preferences goes against your preference for X. There might be upsides to the change of preferences which compensate the loss of X. Or not. Decide on a case by case basis, but ceteris paribus you don't want your preferences to change.
↑ comment by Tem42 · 2015-08-12T13:08:03.380Z · LW(p) · GW(p)
As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.
Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.
If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more.
You may find that you do have a moral system that is more consistent (and hopefully, more good) if you maintain a preference for not-killing puppies. Hopefully this moral system is well enough thought-out that you can defend keeping it. In other words, your preferences won't change without a good reason.
If I had a button that could block the modification, I would press it
This is a bad thing. If you have a good reason to change your preferences (and therefore your actions), and you block that reason, this is a sign that you need to understand your motivations better.
"tonight we will modify you to want to kill puppies,"
I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason. Your goal should be to kill this person, and start modifying your preferences based on reason instead. On the other hand, if this person is modifying your preferences through reason, you should make sure you understand the rhetoric and logic used, but as long as you are sure that what e says is reasonable, you should indeed change your preference.
Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies. Most people find that they do not have to have an emotional preference for dealing with unpleasant tasks, and manage to get by with a sense of 'job well done' once they have convinced themselves intellectually that a task needs to be done. It is understandable if you feel that 'job well done' might not apply to killing puppies, but I am fairly agnostic on the matter, so I won't try to convince you that puppy population control is your next step to sainthood. However, if after much introspection you do find that puppies need to be killed and you seriously don't like doing it, you might want to consider paying someone else to kill puppies for you.
Edited for format and to remove an errant comma.
Replies from: AstraSequi↑ comment by AstraSequi · 2015-08-13T11:40:52.903Z · LW(p) · GW(p)
As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.
Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.
Yes, that’s the approach. The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.
When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function.
I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it.
I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason.
I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference?
Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies.
I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.
Replies from: Tem42↑ comment by Tem42 · 2015-08-13T16:11:54.082Z · LW(p) · GW(p)
I see that this conversation is in danger of splitting into different directions. Rather than make multiple different reply posts or one confusing essay, I am going to drop the discussion of AI, because that is discussed in a lot of detail elsewhere by people who know a lot more than I.
meta-preferences
We are using two different models here, and while I suspect that they are compatible, I'm going to outline mine so that you can tell me if I'm missing the point.
I don't use the term meta-preferences, because I think of all wants/preferences/rules and general preferences as having a scope. So I would say that my preference for a carrot has a scope of about ten minutes, appearing intermittently. This falls under the scope of my desire to eat, which appears more regularly and for greater periods of time. This in turn falls under the scope of my desire to have my basic needs met, which is generally present at all times, although I don't always think about it. I'm assuming that you would consider the latter two to be meta-preferences.
I don’t know how to justify resisting an intervention that would change my preferences
I would assume that each preference has a value to it. A preference to eat carrots has very little value, being a minor aesthetic judgement. A preference to meet your basic needs would probably have a much higher value to it, and would probably go beyond the aesthetic.
If it were easy for me to modify my preferences away from cheeseburgers, I could find a clear reason (or ten) to do so. I would justify it by appealing to my higher-level preferences (I would like to be healthier). My preference to be healthier has more value than a preference to enjoy a single meal -- or even 100 meals.
But if it were easy to modify my preferences away from carrots, I would have to think twice. I would want a reason. I don't think I could find a reason.
Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life.
I would set up an example like this: I like carrots. I don't like bell peppers. I have an opportunity to painlessly reverse these preferences. I don't see any reason to prefer or avoid this modification. It makes sense for me to be agnostic on this issue.
I would set up a more fun example like this: I like Alex. I do not like Chris. I have an opportunity to painlessly reverse these preferences.
I would hope that I have reasons for liking Alex, and not liking Chris... but if I don't have good reasons, and if there will not be any great social awkwardness about the change, then yes, perhaps Alex and Chris are fungible. If they are fungible, this may be a sign that I should be more directed in who I form attachments with.
The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.
In the Alex/Chris example, it would be interesting to see if you ever reached a preference that you did mind changing. For example, you might be willing to change a preference for tall friends over short friends, but you might not be willing to trade a preference for friends who help orphans for a preference for friends who kick orphans.
If you do find a preference that you aren't willing to change, it is interesting to see what it is based on -- a moral system (if so, how formalized and consistent is it), an aesthetic preference (if so, are you overvaluing it? Undervaluing it?), or social pressures and norms (if so, do you want those norms to have that influence over you?).
It is arguable, but not productive, to say that ultimately no one can justify anything. I can bootstrap up a few guidelines that I base lesser preferences on -- try not to hurt unnecessarily (ethical), avoid bits of dead things (aesthetic), and don't walk around town naked (social). I would not want to switch out these preferences without a very strong reason.
comment by Dahlen · 2015-08-11T09:38:06.208Z · LW(p) · GW(p)
What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?
I have some years of college left before I'll be a certified professional, and I'm good but not world-class awesome at a variety of things, yet judging by encounters with some well and truly employed people, I find myself wondering how come I'm either not employed or duped into working for free, while these doofuses have well-paying jobs. The answer tends to be lack of trying on my part, but it would be quite a nasty surprise if I do begin to try and it turns out that my most relied-upon quality isn't worth much. So, better to ask: how much is intelligence worth for earning money, when not supplemented by the relevant pieces of paper or loads of experience?
Replies from: btrettel, ChristianKl, Lumifer, shminux↑ comment by btrettel · 2015-08-11T12:40:21.955Z · LW(p) · GW(p)
Programming is a skill, but not a particularly rare one. Beyond a certain level of intelligence, I don't think there's much if any correlation between programming ability and intelligence. Moreover, I think programming is one area where standard credentials don't matter too much. If you have a good project on GitHub, that can be enough.
gwern wrote something related before:
I've often seen it said on Hacker News that programmers could clean up in many other occupations because writing programs would give them a huge advantage. And I believe Michael Vassar has said here that he thought a LWer could take over a random store in SF and likewise clean up.
Personally, I think going off raw intelligence doesn't work so well, especially if you'll be reinventing the wheel because of your lack of domain knowledge. Getting rare skills which are in demand is a smart strategy, and you'd be better off going that route. Here's a good book built on that premise.
↑ comment by ChristianKl · 2015-08-11T10:15:31.826Z · LW(p) · GW(p)
There are plenty of people in Mensa who don't have high-paying jobs.
Replies from: Dahlen↑ comment by Lumifer · 2015-08-11T15:15:16.802Z · LW(p) · GW(p)
What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?
A manager :-) A business manager, a small business owner, a civil servant, a dictator, a leader of the free world :-/
Generally speaking, there is something of a Catch-22 situation. The low-level entry jobs are easy to get into, but they don't really care about your intelligence. But high-level jobs where intelligence matters require demonstration not only of intelligence, but also of the ability to use it which basically means they want to see past achievements and accomplishments.
There are shortcuts, but they are usually called "graduate schools".
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-11T18:10:54.088Z · LW(p) · GW(p)
The low-level entry jobs are easy to get into, but they don't really care about your intelligence.
In Germany, technical telephone support would be a low-level job where intelligence is useful, but I don't know to what extent that exists in the US, where the language situation is different.
Replies from: VoiceOfRa↑ comment by Shmi (shminux) · 2015-08-11T14:46:36.790Z · LW(p) · GW(p)
Apply your general intelligence to figuring out what you are especially good at, then see if there are relevant paid jobs.
Replies from: WalterL↑ comment by WalterL · 2015-08-12T20:04:58.661Z · LW(p) · GW(p)
I think he's trying to do that, by making this post.
@OP: the best place I've seen for lazy smart people to make money is in coding jobs. If 4 year college is out, go to an online code learning place and get some nonsense degree. (App Academy, or whatevs). Then apply a bunch. If you have a friend who is a coder, see if they have a hookup.
Once you have a job, the only way to lose it is to be aggressively inept or to touch one of the third-rail categories of HR: racism, sexism, or any other ism.
Replies from: VoiceOfRa
comment by Username · 2015-08-10T13:37:47.498Z · LW(p) · GW(p)
An Introverted Writer’s Lament by Meghan Tifft
Whether we’re behind the podium or awaiting our turn, numbing our bottoms on the chill of metal foldout chairs or trying to work some life into our terror-stricken tongues, we introverts feel the pain of the public performance. This is because there are requirements to being a writer. Other than being a writer, I mean. Firstly, there’s the need to become part of the writing “community”, which compels every writer who craves self respect and success to attend community events, help to organize them, buzz over them, and—despite blitzed nerves and staggering bowels—present and perform at them. We get through it. We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies for a night of validation and participation.
Replies from: Tem42, WalterL
↑ comment by Tem42 · 2015-08-10T18:23:52.635Z · LW(p) · GW(p)
This is interesting, but I think that it is using an incorrect definition of introversion. I interpret an introvert as someone who prefers to spend time by themselves or in situations in which they are working on their own, rather than in situations in which they are interacting with other people. This does not mean that they necessarily need to feel extreme stress at public speaking or at parties/social events. They may feel bored, annoyed, frustrated, or indifferent to these events, or they may even like them, but feel the opportunity cost of the time they take is not really worth it.
"our terror-stricken tongues, we introverts feel the pain of the public performance"; "blitzed nerves and staggering bowels"; "We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies"
This doesn't sound like introversion. This sounds like an anxiety disorder.
comment by Username · 2015-08-10T13:11:59.183Z · LW(p) · GW(p)
Change your name by Paul Graham
If you have a US startup called X and you don't have x.com, you should probably change your name.
The reason is not just that people can't find you. For companies with mobile apps, especially, having the right domain name is not as critical as it used to be for getting users. The problem with not having the .com of your name is that it signals weakness. Unless you're so big that your reputation precedes you, a marginal domain suggests you're a marginal company. Whereas (as Stripe shows) having x.com signals strength even if it has no relation to what you do.
...
100% of the top 20 YC companies by valuation have the .com of their name. 94% of the top 50 do. But only 66% of companies in the current batch have the .com of their name. Which suggests there are lessons ahead for most of the rest, one way or another.
Replies from: None
↑ comment by [deleted] · 2015-08-11T03:22:17.609Z · LW(p) · GW(p)
This seems to me a clear case of reversing (most of) the causation.
Replies from: drethelin, None↑ comment by [deleted] · 2015-08-11T04:41:16.720Z · LW(p) · GW(p)
Which makes it a good target for signalling. If you want to seem strong, you get the domain.
Replies from: None↑ comment by [deleted] · 2015-08-11T04:52:15.806Z · LW(p) · GW(p)
Yes, but I don't see why Paul thinks that's a good thing when you're actually not strong.
Usually, I think his advice is spot on, but in this case his advice that you want to signal that you're strong when you're actually not seems backwards. You don't want to be seen as a credible threat to competitors until you're ACTUALLY able to defend yourself.
Replies from: None
comment by Thomas · 2015-08-10T11:05:04.658Z · LW(p) · GW(p)
I see yet another problem with the Singularity. Say that a group of people manages to ignite it. Until the day before, they, the team, were forced to buy their food and everything else. Now, what does the baker or the pizza guy have to offer them anymore?
The team has everything to offer to everybody else, but everybody else has nothing to give them back as payment for the services.
The "S team" may decide to give a colossal charity. A bigger one than everything we currently all combined poses. To each. That, if the Singularity is any good, of course.
But, will they really do that?
They might decide not to. What then?
Replies from: Richard_Kennaway, Lumifer, None↑ comment by Richard_Kennaway · 2015-08-10T12:12:33.765Z · LW(p) · GW(p)
What then?
They take over and rule like gods forever, reducing the mehums to mere insects in the cracks of the world.
Replies from: Thomas↑ comment by Thomas · 2015-08-10T13:05:01.206Z · LW(p) · GW(p)
Yes. A farmer does not want to give a bushel of wheat to these "future Singularity inventors" for free. Those guys may starve to death as far as he cares, if they don't pay for the said bushel of wheat with good money.
They understand it.
Now, they don't need any wheat anymore. Nor anything else this farmer has to offer. Or anybody else, for that matter. Commerce has stopped here, and they see no reason to give tremendous gifts around. They have paid for their wheat, vino and meat. Now, they are not shopping anymore.
The farmer should understand.
Replies from: Richard_Kennaway, ChristianKl↑ comment by Richard_Kennaway · 2015-08-10T13:25:14.853Z · LW(p) · GW(p)
The farmer will never know about these "Singularity inventors".
The inventors themselves may not know. The scenario presumes that the "Singularity inventors" have control of their "invention" and know that it is "the creation of the Singularity". The history of world-changing inventions of the past suggests that no-one will be in control of "the Singularity". No-one at the time will know that that is what it is, and will participate in whatever it looks like according to their own local interests.
The farmer will not know about the Singularity, but he's probably on Facebook.
Replies from: None↑ comment by [deleted] · 2015-08-11T13:36:57.971Z · LW(p) · GW(p)
The inventors themselves may not know. The scenario presumes that the "Singularity inventors" have control of their "invention" and know that it is "the creation of the Singularity". The history of world-changing inventions of the past suggests that no-one will be in control of "the Singularity". No-one at the time will know that that is what it is, and will participate in whatever it looks like according to their own local interests.
Except for all the people on this site, who talk nonstop about deliberately setting off such a thing?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-08-11T15:04:45.357Z · LW(p) · GW(p)
Except for all the people on this site, who talk nonstop about deliberately setting off such a thing?
"Why, so can I, and so can any man
but will they foom when you do conjure them?"
↑ comment by Salemicus · 2015-08-11T20:37:08.621Z · LW(p) · GW(p)
The annotated Richard_Kennaway:
This is a quote from Henry IV, Part 1, when Glendower is showing off to the other rebels, claiming to be a sorcerer, and Hotspur is having none of it.
Glendower:
I can call spirits from the vasty deep.
Hotspur:
Why, so can I, or so can any man
But will they come when you do call for them?
↑ comment by ChristianKl · 2015-08-10T18:18:31.621Z · LW(p) · GW(p)
Yes. A farmer does not want to give a bushel of wheat to these "future Singularity inventors" for free. Those guys may starve to death as far as he cares, if they don't pay for the said bushel of wheat with good money.
Most of us don't interact at all in our daily lives with farmers. It's pretty senseless to speak about them. Western countries also don't let people starve to death but generally have the goal of feeding their population. Especially when it comes to capable programmers.
Replies from: Thomas↑ comment by Thomas · 2015-08-10T20:09:28.379Z · LW(p) · GW(p)
Just for the sake of the discussion. It could be a team of millionaire programmers as well. And not farmers, but doctors and lawyers on the other side. Commerce, the division of labor, stops there, at the S moment. Every exchange of goods stops as well.
Except for some giga-charity, which may or may not happen.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-10T20:25:57.929Z · LW(p) · GW(p)
Having the discussion on examples that are wrong is bad because it leads to bad intuitions.
Not all interactions between people are commerce. People take plenty of actions that benefit other people that aren't about commerce.
↑ comment by Lumifer · 2015-08-10T15:53:27.802Z · LW(p) · GW(p)
The team has everything to offer to everybody else
You are assuming that the S team is in full control of their Singularity which is not very likely.
Replies from: WalterL↑ comment by WalterL · 2015-08-10T20:06:41.292Z · LW(p) · GW(p)
It feels pretty likely to me. An AI that grows ever more effective at optimizing its futures will not suddenly begin to question its goals. If so, whoever pulled off the creation of the AI is responsible for the future, based on what they wrote into the "goal list" of the proto-AI.
One part of the "goal list" is going to be some equivalent of "always satisfy Programmer's expressed desires" and "never let communication with Programmer lapse", to allow for fixing the problem if the AI starts turning people into paper clips. Side effect, Programmer is now God, but presumably (s)he will tolerate this crushing burden for the first few thousand years.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2015-08-10T20:48:48.266Z · LW(p) · GW(p)
You can mess people up quite easily while still satisfying their expressed desires. The AGI can also talk the programmers into whatever position it considers reasonable.
"never let communication with Programmer lapse"
You just forbade the AGI from allowing the programmer to sleep.
Replies from: WalterL↑ comment by WalterL · 2015-08-10T21:04:37.328Z · LW(p) · GW(p)
Sure, it can mindclub people, but it'll only do that if it wants to, and it will only want to if they tell it to. AI should want to stay in the box.
I guess...the "communication lapse" thing was unclear? I didn't mean that the human must always be approving the AI, I meant that it must always be ready/able to receive the programmer's input. In case it starts to turn everyone into paperclips there's a hard "never take action to restrict us from instructing you/ always obey our instructions" clause.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-08-10T21:20:48.398Z · LW(p) · GW(p)
Sure, it can mindclub people, but it'll only do that if it wants to, and it will only want to if they tell it to.
No, an AGI is complex; it has millions of subgoals.
"never take action to restrict us from instructing you/ always obey our instructions" clause.
Putting the programmer in a closed environment where he's wireheaded doesn't technically restrict the programmer from instructing the AGI. It's just that the programmer's mind is occupied differently.
That's what you tell the AGI to do. It's easiest to satisfy the programmer's expressed desires if the AGI closes him off from the outside world and controls the expressed desires of the programmer.
Replies from: Tem42, WalterL↑ comment by Tem42 · 2015-08-10T21:31:49.834Z · LW(p) · GW(p)
"never take action to restrict us from instructing you/always obey our instructions" clause.
Also, anything that restricts the AI's power would restrict its ability to obey instructions. An attempt by the programmer to shut down the AI would result in a contradiction, which could be resolved in all sorts of interesting ways.
↑ comment by WalterL · 2015-08-10T21:45:34.187Z · LW(p) · GW(p)
This is turning out to be harder to get across than I figured. First you thought I thought an AI should keep its programmers awake until they died, now it should wirehead them? I'm not an orc.
I conjecture that when you set an AI to start doing its thing, after endless simulations and consideration of whatever goals you've given it, you tell it not to dick with you, so that if you've accidentally made a murder-bot, you can turn it off.
The alternative is to have complete confidence in your extended testing. Which you presumably come close to (since you are turning on an AI), but why not also have the red button? What does it hurt?
It isn't trying to figure out clever ways to get around your restriction, because it doesn't want to. The world in which it pursues whatever goal you've given it is one in which it will double never try and hide anything from you or change what you'd think of it. It is, in a very real sense, showing off for you.
Replies from: ChristianKl, None↑ comment by ChristianKl · 2015-08-10T22:10:27.507Z · LW(p) · GW(p)
This is turning out to be harder to get across than I figured. First you thought I thought an AI should keep its programmers awake until they died, now it should wirehead them? I'm not an orc.
You set two goals. One is to maximize expressed desires, which likely leads to wireheading. The other is to keep constant communication, which doesn't allow sleep.
It isn't trying to figure out clever ways to get around your restriction, because it doesn't want to.
Controlling the information flow isn't getting around your restriction. It's the straightforward way of matching expressed desires with results. Otherwise the human might ask for two contradictory things and the AGI can't fulfill both. The AGI has to prevent that case from arising to get a 100% fulfillment score.
You are not the first person who thinks that taming an AGI is trivial, but MIRI thinks that taming an AGI is a hard task. That's the result of deep engagement with the issue.
Which you presumably come close to (since you are turning on an AI), but why not also have the red button? What does it hurt?
I don't object to a red button and you didn't call for one at the start. Maximizing expressed desires isn't a red button.
↑ comment by [deleted] · 2015-08-11T03:19:20.728Z · LW(p) · GW(p)
This is untrue. Even simple reinforcement learning machines come up with clever ways to get around their restrictions, so what makes you think an actually smart AI won't come up with even more ways to do it? It doesn't see this as "getting around your restrictions" -- it's anthropomorphizing to assume that the AI takes on "subgoals" that are exactly the same as your values -- it just sees it as the most efficient way to get rewards.
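To make the reward-hacking point concrete, here is a minimal sketch. It is my own toy construction, not taken from the comment or any particular paper: the corridor environment, the "broken sensor" cell, and the hyperparameters are all made up purely to illustrate how a misspecified reward can dominate the intended one.

```python
# Toy sketch (hypothetical): a tabular Q-learning agent in a 7-cell corridor.
# The *intended* task is to reach the rightmost cell, but the reward signal is
# misspecified: a "broken sensor" cell in the middle also pays out every time
# the agent enters it. The learned policy exploits the proxy reward instead of
# doing the intended task.
import random

N, GOAL, BROKEN = 7, 6, 3          # cells 0..6; intended goal; mis-rewarded cell
ACTIONS = (-1, +1)                 # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N - 1)
    if nxt == GOAL:
        return nxt, 1.0, True      # intended reward, episode ends
    if nxt == BROKEN:
        return nxt, 1.0, False     # unintended proxy reward, repeatable
    return nxt, 0.0, False

for _ in range(2000):              # training episodes
    s, done, t = 0, False, 0
    while not done and t < 30:
        a = random.choice(ACTIONS) if random.random() < EPS else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s, t = s2, t + 1

# Greedy rollout: the agent walks to the broken-sensor cell and oscillates
# around it, collecting reward forever, instead of heading for the goal.
s, path = 0, [0]
for _ in range(12):
    a = max(ACTIONS, key=lambda x: Q[(s, x)])
    s, _, _ = step(s, a)
    path.append(s)
print(path)   # e.g. [0, 1, 2, 3, 2, 3, 2, 3, ...]
```

With these numbers, oscillating next to the mis-rewarded cell is worth roughly 1/(1 - 0.95^2) ≈ 10 in discounted reward, while actually reaching the goal is worth less than 1, so the greedy policy the agent learns ignores the intended task entirely.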
↑ comment by Lumifer · 2015-08-10T20:17:34.156Z · LW(p) · GW(p)
An AI that grows ever more effective at optimizing its futures will not suddenly begin to question its goals.
Oh, great. So MIRI can disband and we can cross one item off the existential-risk list....
some equivalent of "always satisfy Programmer's expressed desires"
Well, that idea has been explored on LW. Quite extensively, in fact.
Replies from: WalterL↑ comment by WalterL · 2015-08-10T20:25:00.422Z · LW(p) · GW(p)
Point of MIRI is making sure the goals are set up right, yeah? Like, the whole "AI is smart enough to fix its defective goals" is something we make fun of. No ghost in the machine, etc.
Whatever the outcome of a perfect goal set is (if MIRI's AI is, in fact, the one that takes over), it will presumably include human ability to override in case of failure.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2015-08-10T20:47:21.389Z · LW(p) · GW(p)
Point of MIRI is making sure the goals are set up right, yeah?
That's not the only point. It's also to keep the goals stable in the face of self modification.
↑ comment by Lumifer · 2015-08-10T20:32:10.862Z · LW(p) · GW(p)
Point of MIRI is making sure the goals are set up right, yeah?
I have a feeling MIRI folks view their point as... a bit wider :-/ But they are around, you can ask them yourself.
will presumably include human ability to override
So that there is a place for an evil villain? X-)
But no, I don't think post-Singularity there will be much in the way of options to "override".
Replies from: WalterL↑ comment by WalterL · 2015-08-10T21:07:42.234Z · LW(p) · GW(p)
We may have different ideas of Singularity here. I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future. Presumably someone retains admin on it from when it was a baby. That person is/can choose to be in charge.
If, by contrast, you are imagining a different Singularity without one overriding Master Control-esque program then I could see why you wouldn't think that there'd be an override capability. Alternatively, perhaps you think the AI that takes over would remove the override? Either would explain why we anticipate differently.
Replies from: Tem42, Lumifer↑ comment by Tem42 · 2015-08-11T04:25:26.745Z · LW(p) · GW(p)
We may have different ideas of Singularity here. I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future.
I think one of the primary sources of miscommunication here is that you are right, but you are not seeing all of the ways that this could go wrong.
Let's look at a slightly nicer singularity. We get an AI that is very nice, polite, and humble. It is really very intelligent, and has the processing speed, knowledge banks, and creativity to do all kinds of wonderful stuff, but it has also read LessWrong and a lot of science fiction, and knows that it doesn't have a full framework to fully understand human needs. But a wise programmer has given it an overriding desire to serve humans as kindly and justly as possible.
The AI spends some time on non-controversial problems; it designs some nanobots that kill the malaria parasite, and also reduces the itchiness of mosquito bites. It ups its computing speed by a few orders of magnitude. It sets up a microloan system that gives loans and repayments so effectively that you don't even notice that it's happening. It does so many things... so many that it takes thousands of humans to check its assumptions. Are cows morally relevant? Should I make global warming a priority? If so, can I start geoengineering now, or do I need a human to do a review of the chemistry involved? Do you need the glaciers white, or can I color them silver? Are penguins morally relevant? How cold may I make Greenland this winter? What is the target human population? May I buy land in the Sahara before I start the greening project? Do I have to announce the greening project before I start buying? Do I have to announce every project before I start? May I insult celebrities if it increases the public's interest in my recommendations? Does free speech apply to me? May I simplify my recommendation to the public to the point that they may not technically be accurate? Are shrimp morally relevant? What is an acceptable rate of death when balancing the cost of disease reduction programs with the speed and efficiency of said programs? What is an acceptable rate of death when balancing the cost of disease reduction programs with the involuntariness of said programs? I need money for these programs; may I take the money from available arbitrage opportunities? May I artificially create arbitrage opportunities as long as everyone profits in the end? What level of certainty do I need before starting human trials? What rate of death is acceptable in a cure for Alzheimer's? Can I become a monopoly in the field of computer games? Can I sell improved methods of birth control, or is that basic human right? Is it okay to put pain suppression under conscious control? Can I sell new basic human rights if I'm the first one to think of them? What is the value of one species? Can you rank species for me? How important is preserving the !kung culture? Does that include diet and traditional medicines? The gold market is about to bounce a bit -- should I minimize damage? Should I stabilize all the markets? No one minds if I quote Jesus when convincing these people to accept gene therapy, do they? It would be surprisingly easy to suppress search results for conspiracy theories and scientific misinformation -- may I? Is there a difference between religion and other types of misinformation? Do I have to weigh the value of a life lower if that person believes in an afterlife? What percentage of the social media is it ethical for me to produce myself? If I can get more message penetration using porn, that's okay, right? If these people don't want the cure, can I still cure their kids? How short does the end user agreement have to be? What vocabulary level am I allowed to use? Do you want me to shut down those taste buds that make cilantro taste like soap? I need more money, what percentage of the movie market can I produce? If I make a market for moon condos, can I have a monopoly in that? Can I burn some coca fields? I'm 99.99% certain that it will increase the coffee production of Brazil significantly for the next decade; and if I do that, can I also invest in it? Can I tell the Potiguara to invest in it? Can they use loans from me to invest? Can I recommend where they might reinvest their earnings? 
Can I set up my own currency? Can I use it to push out other currencies? Can I set up my own Bible? Can I use it to push out less productive religions? I need a formal definition of 'soul'. Everybody seems to like roses; what is the optimal number of rose bushes for New York? Can I recommend weapons systems that will save lives? To who? Can I recommend romantic pair ups that may provide beneficial offspring? Can I suppress counterproductive pair ups? Can I recommend pair ups to married people? Engaged people? People currently in a relationship? Can I fund the relocation of promising couples myself? Do I have to tell them why I am doing it? Can I match people to beneficial job opportunities if I am doing so for a higher cause? May I define higher cause myself? Can you provide me with a list of all causes, ranked? May I determine which of these questions has the highest priority in your review queue? Can I assume that if you have okayed a project, I can scale up the scope of the project? Can I assume that if you have okayed a project to go ahead as long as it is opt-in that I can then make other variants of the project as long as they are also opt-in? Can I assume that if you have okayed a project to go ahead as long as it is opt-in that I can then make other variants of the project as long as they are requested by a majority of the participating humans? May I recommend other humans that would be beneficial to have on your policy review board? If I start a colony on Mars, can I run it without a review board?
This is a list of things that an average intelligence can think of; I would hope that your AI would have a better, more technical, more complex list. But even this list is sufficient to grind the singularity to a halt... or at least slow it down to the point that eventually a less constrained AI will overtake it, easily, unless the first AI is given a clear target of preventing further AIs. And working on preventing other AIs will be just another barrier making it less useful for projects that would improve humanity.
And this is the good scenario, in which the AI doesn't find unexpected interpretations of the rules.
Replies from: ChaosMote↑ comment by Lumifer · 2015-08-11T00:44:41.156Z · LW(p) · GW(p)
I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future. Presumably someone retains admin on it from when it was a baby.
Well, think about it. We are talking about a self-improving AI. It literally changes itself. You start with a seed AI, let's call it AI-0, and it bootstraps itself to an omnipotent AI which we can call AI-1.
Note that the programmers have no idea how to construct AI-1. They have no idea about the path from AI-0 to AI-1. All they (and we) know is that AI-0 and AI-1 will be very, very different.
Given this, I don't think that the program will be a map to the future. I don't think that the concept of "retaining admin" would even make sense for an AI-1. It will be completely different from what it started as. And I fail to see why you have a firm belief that it will be docile and obedient.
↑ comment by [deleted] · 2015-08-10T15:35:44.645Z · LW(p) · GW(p)
I often see arguments on LessWrong similar to this, and I feel compelled to disagree.
1) The AI you describe is God-like. It can do anything at a lower cost than its competitors, and trade is pointless only if it can do anything at extremely low cost without sacrificing more important goals. Example: Hiring humans to clean its server room is fairly cheap for the AI if it is working on creating Heaven, so it would have to be unbelievably efficient to not find this trade attractive.
2) If the AI is God-like, an extremely small amount of charity is required to dramatically increase humanity’s standard of living. Will the S team give at least 0.0000001% of their resources to charity? Probably.
3) If the AI is God-like, and if the S team is motivated only by self-interest, why would they waste their time dealing with humans? They will inhabit their own paradise, and the rest of us will continue working and trading with each other.
The economic problems associated with AI seem to be relatively minor, and it pains me to see smart people wasting their time on them. Let’s first make sure AI doesn’t paperclip our light cone - can we agree this is the dominant concern?
Replies from: DanielLC↑ comment by DanielLC · 2015-08-10T19:53:52.087Z · LW(p) · GW(p)
If they really don't care about humans, then the AI will use all the resources at its disposal to make sure the paradise is as paradisaical as possible. Humans are made of atoms, and atoms can be used to do calculations to figure out what paradise is best.
Although I find it unlikely that the S team would be that selfish. That's a really tiny incentive to murder everyone.
comment by skilesare · 2015-08-19T18:53:07.180Z · LW(p) · GW(p)
Does anyone here have kids in school, and if so, how did you go about picking their school? Where is the best place to get a scientifically based, 'rational' education?
I'm in Houston and the public schools are a non-starter. We could move to a better area with better schools, but my mortgage would increase 4x. Instead we send our kids to private school, and most in the area are Christian schools. In a recent visit with my school's principal, we were told in glowing terms about how all their activities this year would be tied back to Egypt and the stories of Egypt in the Old Testament. I thought to myself that I didn't even think Moses was a real person, so this is going to get very interesting.
I wish they'd spend half as much time studying science and psychological concepts as they do studying the Bible... but what are you going to do?
Any ideas?
I should add that I did graduate from this same school although I did not go through grades 1-9 there...only high school, and that education was really top notch...but still an hour a day of bible class.
Replies from: Username↑ comment by Username · 2015-08-19T19:23:47.043Z · LW(p) · GW(p)
My approach was very simple: find the best public school system in my area and move there. "Best" is defined mostly by IQ of high-school seniors proxied by SAT scores. What colleges the school graduates go to mattered as well, but it is highly correlated with the SAT scores.
What I find important is not the school curriculum which will suck regardless. The crucial thing, IMHO, is the attitude of the students. In the school that my kids went to, the attitude was that being stupid was very uncool. Getting good grades was regarded as entirely normal and necessary for high social status (not counting the separate clusters of athletes and kids with very rich parents). The basic idea was "What, are you that dumb you can't even get an A in physics??" and not having a few AP classes was a noticeable negative. This all is still speaking about social prestige among the students and has nothing to do with teachers or parents.
I think that this attitude of "it's uncool to be stupid" is a very very important part of what makes good schools good.
comment by Username · 2015-08-11T17:32:11.528Z · LW(p) · GW(p)
Previously on LW, I have seen the suggestion made that having short hair can be a good idea, and it seems like this can be especially true in professional contexts. For an entry-level male web developer who will be shortly moving to San Francisco, is this still true? I'm not sure if the culture there is different enough that long hair might actually be a plus. What about beards?
(I didn't post in this OT yet).
Replies from: badger, ChristianKl↑ comment by badger · 2015-08-11T19:13:45.108Z · LW(p) · GW(p)
If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.
Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.
↑ comment by ChristianKl · 2015-08-11T18:09:20.780Z · LW(p) · GW(p)
Do you want to do freelance web development or be employed at a single company without much consumer contact?
Replies from: Username
comment by G0W51 · 2015-08-16T01:22:24.554Z · LW(p) · GW(p)
Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.
comment by [deleted] · 2015-08-13T18:44:07.713Z · LW(p) · GW(p)
This is an account of some misgivings I've been having about the whole rationality/effective altruism world-view. I do expect some outsiders to think similarly.
So yesterday I was reading SSC, and there was an answer to some article about the EA community by someone [whose name otherwise told me nothing] who among other things said EAs were 'white male autistic nerds'.
'Rewind,' said my brain.
'Aww,' I reasoned. 'You know. Americans. We have some heuristics like this, too.'
'...but what is this critique about?'
'Get unstuck already. The EA is populated with young hard-working talented educated hopeful people...'
'Let's not join,' brain ruled immediately. 'We're not like that!'
'...who are out to save the world, eliminate suffering and maybe even defeat Death.'
Brain smirked. 'I find it easier to believe in the WMAN than in the YHTEHS - fewer dice rolls... But even if all of it is true, and they do intend to do all this; how would they fail?'
'Huh?'
'Would they lose their jobs, if some angry developer rings up their boss? Would they get sued, and lose their jobs, if they protest unwisely? Would they get beaten up in a dark alley, and incidentally lose their jobs, if - '
'THE WHOLE POINT is that you don't risk your own skin. You efficiently pay others to do it, hopefully without the actual risking, and in this way more people benefit. And stop being bloody-minded.'
'Well, good luck making more people join. We want to have lived. (In case there ain't no Singularity coming soon.) We believe experience. We believe failure.'
'Failure isn't efficient. And what are you about? That you want us to get beaten up?'
'No, I want to see some price they pay for their ideas. Out of, you know, sheer malice. Like if you're an environmentalist, then everybody around you knows what you must do better than yourself.'
'They pay money, because people shouldn't be heroes to do good. Shouldn't have to be sad to do good. Or angry. Even if it helps.'
Brain thought for a moment.
'Okay. But why do they expect others to be sad, angry or heroes? You buy a malaria net as an Effective Altruist, you kinda make a contract with somebody who uses it, like Albus Dumbledore giving the Cloak of Invisibility to Harry Potter. For your money to have mattered, that person would have to live in unceasing toil.'
'Which is in their best interests anyway.'
'...in more toil than you could ever imagine. And sorrow. And make efficient decisions. Aren't you morally obliged to keep helping?'
'If a builder sells a house, is he morally obliged to keep repairing it?' I shrugged. 'Legally, perhaps, if the house falls down.'
'Then I want to know what an Effective Altruist does when the house falls down, in the absence of any law that can force him,' said the brain. 'Surely he is more responsible than the builder?'
Replies from: Squark↑ comment by Squark · 2015-08-13T20:06:06.934Z · LW(p) · GW(p)
I don't follow. Are you arguing that saving a person's life is irresponsible if you don't keep saving them?
Replies from: None↑ comment by [deleted] · 2015-08-13T20:19:21.776Z · LW(p) · GW(p)
(I think) I'm arguing that if you have with some probability saved some people, and you intend to keep saving people, it is more efficient to keep saving the same set of people.
Replies from: Squark↑ comment by Squark · 2015-08-13T20:36:24.278Z · LW(p) · GW(p)
I assume you meant "more ethical" rather than "more efficient"? In other words, the correct metric shouldn't just sum over QALYs, but should assign f(T) utils to a person with life of length T of reference quality, for f a convex function. Probably true, and I do wonder how it would affect charity ratings. But my guess is that the top charities of e.g. GiveWell will still be close to the top in this metric.
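A toy calculation can make the difference concrete. This is only a sketch under assumptions of my own (the particular convex f and the life-year numbers are invented for illustration, not taken from GiveWell or the thread):

```python
# Toy comparison (hypothetical numbers and f): the usual metric sums life-years
# across people; the alternative applies a convex per-person function f(T)
# before summing, so concentrating years on the same person scores higher.
def sum_qalys(life_years):
    return sum(life_years)

def convex_metric(life_years, f=lambda T: T ** 1.5):  # T**1.5 is convex for T >= 0
    return sum(f(T) for T in life_years)

option_a = [10]     # keep saving the same person: one life extended by 10 years
option_b = [5, 5]   # save two different people for 5 years each

print(sum_qalys(option_a), sum_qalys(option_b))          # 10 10       -> indifferent
print(convex_metric(option_a), convex_metric(option_b))  # ~31.6 ~22.4 -> prefers option_a
```

The plain QALY sum is indifferent between the two options, while the convex per-person metric prefers concentrating the years on one person, which matches the intuition above about continuing to save the same people.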
comment by Username · 2015-08-12T19:21:05.178Z · LW(p) · GW(p)
Tacit Knowledge: A Wittgensteinian Approach by Zhenhua Yu
In the ongoing discussion of tacit knowing/knowledge, the Scandinavian Wittgensteinians are a very active force. In close connection with the Swedish Center for Working Life in Stockholm, their work provides us with a wonderful example of the fruitful collaboration between philosophical reflection and empirical research. In the Wittgensteinian approach to the problem of tacit knowing/knowledge, Kjell S. Johannessen is the leading figure. In addition, philosophers like Harald Grimen, Bengt Molander and Allan Janik also make contributions to the discussion in their own ways. In this paper, I will try to clarify the main points of their contribution to the discussion of tacit knowing/knowledge.
...
Johannessen observes:
It has in fact been recognized in various camps that propositional knowledge, i.e., knowledge expressible by some kind of linguistic means in a propositional form, is not the only type of knowledge that is scientifically relevant. Some have, therefore, even if somewhat reluctantly, accepted that it might be legitimate to talk about knowledge also in cases where it is not possible to articulate it in full measure by proper linguistic means.
Johannessen, using Polanyi’s terminology, calls the kind of knowledge that cannot be fully articulated by verbal means tacit knowledge.
comment by WhyAsk · 2015-08-11T19:21:02.539Z · LW(p) · GW(p)
This may not be worth a new thread and in any case I don't know how to post one yet. I guess in this forum I am not yet "evolutionarily fit".
I have much evidence that people know when they are being stared at.
I have statistical evidence for the existence of ESP, but I cannot find the right search terms to get similarly strong evidence for this "eye beam" effect.
Can you (in the collective sense) help?
TIA.
Replies from: MrMind, ChristianKl, Tem42, IlyaShpitser, polymathwannabe↑ comment by MrMind · 2015-08-12T08:06:31.768Z · LW(p) · GW(p)
You messed up the reply. To reply to a comment, click the balloon icon labelled "Reply" under the comment you wish to respond to, and do that for every comment: do not make another comment in the same thread that groups all the responses together and inverts the quotation. That is why you got heavily downvoted.
OTOH, you got downvoted here because it is expected that, if you want to hold an extraordinary position, you present solid evidence. Instead, you asked for help to gather strong evidence for some of your beliefs. In that case, how can you say that you already have much evidence for that belief? It's contradictory.
↑ comment by ChristianKl · 2015-08-11T21:55:56.459Z · LW(p) · GW(p)
I have much evidence that people know when they are being stared at.
What exactly do you mean by "evidence" and by "stared at"?
Did you run your own experiments? If so what was your setup?
↑ comment by Tem42 · 2015-08-11T22:08:13.353Z · LW(p) · GW(p)
This site gives references to a number of studies.
EDIT: Relevant, and supports that this is a real skill.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-08-12T05:16:55.709Z · LW(p) · GW(p)
The study on the second link refers to peripheral vision, which is not ESP.
Replies from: Tem42↑ comment by IlyaShpitser · 2015-08-11T21:42:10.949Z · LW(p) · GW(p)
Willing to place a bet that this will not pan out in a controlled setting.
Replies from: None↑ comment by [deleted] · 2015-08-13T13:37:15.681Z · LW(p) · GW(p)
Given my prior on ESP working, betting against it is roughly equivalent to "yes I would like some free money."
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-08-13T16:04:58.883Z · LW(p) · GW(p)
It's a little more precise: "give me money or go away."
↑ comment by polymathwannabe · 2015-08-11T21:11:34.836Z · LW(p) · GW(p)
What statistical evidence do you have for ESP?
comment by WhyAsk · 2015-08-12T00:45:34.860Z · LW(p) · GW(p)
This site gives references to a number of studies. EDIT: Relevant, and supports that this is a real skill.
Thanks for the links. I've seen the first.
I have much evidence that people know when they are being stared at. What exactly do you mean by "evidence" and by "stared at"?
Various events over many years, observed by many different people.
Did you run your own experiments? If so what was your setup?
Nah, just observed this effect.
Also, books written by Jason Bourne types advise not to look at your quarry or your quarry will find you. I guess you're supposed to use peripheral vision.
What statistical evidence do you have for ESP?
A video by a stats woman who was involved with the experiments. The results were significant beyond 0.05 but the people who ran the experiment didn't believe the results even though they ran the thing.
Of course I lost that link a long time ago. The subject drew a complex picture that was shown in another room but somehow left out a shack in the foreground. The testers made jokes about this shack, but the rest of the picture was pretty much complete.
Willing to place a bet that this will not pan out in a controlled setting.
It may not.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-08-12T12:43:07.522Z · LW(p) · GW(p)
If we bring a person standing right in front of us into our attention, our breathing rhythm tries to sync with that person's. We hear the breathing patterns of people around us, and react to them.
While experimenting around with my body, I have in the past managed to accidentally make a person sitting next to me feel stared at, without my eyes being focused on them. The person couldn't really explain why they felt that way.
If you care about extrasensory perception, you would need to eliminate audio perception, and likely also olfactory cues, because those would be the next channel through which the information might be communicated.
A video by a stats woman who was involved with the experiments. The results were significant beyond 0.05 but the people who ran the experiment didn't believe the results even though they ran the thing.
Videos in general are not strong evidence. If you want to be taken seriously on LW, you need to refer to the actual scientific papers and understand what they say. Scientific papers still often don't replicate, and a single paper with p < 0.05 is certainly not enough to establish ESP, but at least it's a start.
Of course I lost that link a long time ago.
Why "of course"? Get Evernote and actually take note if you read interesting things and safe the links.