Brainstorming for post topics
post by NancyLebovitz · 2014-05-31T15:08:41.361Z · LW · GW · Legacy · 149 comments
I suggested recently that part of the problem with LW was a lack of discussion posts, which was caused by people not thinking of much to post about.
When I ask myself "what might be a good topic for a post?", my mind goes blank, but surely not everything that's worth saying that's related to rationality has been said.
So, is there something at the back of your mind which might be interesting? A topic which got some discussion in an open thread that could be worth pursuing?
If you've found anything which helps you generate usable ideas, please comment about it-- or possibly write a post on the subject.
149 comments
Comments sorted by top scores.
comment by Omid · 2014-05-31T17:32:13.445Z · LW(p) · GW(p)
Proposal: Don't fear GATTACA. A post where I explain why people are afraid of the dystopia featured in GATTACA, and why these fears are unjustified.
Replies from: James_Miller↑ comment by James_Miller · 2014-06-02T15:43:56.579Z · LW(p) · GW(p)
Take into account the possible prisoners' dilemma: suppose the technology works out so that maximizing expected IQ means exposing genetically engineered babies to a huge risk of nasty genetic conditions, and China goes for it. The United States must then either accept that in 20 years China will gain a huge advantage, or breed these damaged geniuses ourselves.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-02T17:11:52.737Z · LW(p) · GW(p)
the possible prisoners' dilemma where the technology works out so that to maximize expected IQ you have to expose the genetically engineered babies to a huge risk of nasty genetic conditions
Why is this a prisoner's dilemma? An arms race is a different kind of game.
Replies from: James_Miller↑ comment by James_Miller · 2014-06-02T18:05:06.666Z · LW(p) · GW(p)
It would be a prisoner's-dilemma-like game if both sides would prefer to have a binding credible agreement in which no one takes huge chances with the tech, but since you would expect the other side to cheat (and would cheat yourself even if you knew the other side wouldn't), you use the tech.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-02T18:35:03.645Z · LW(p) · GW(p)
if both sides would prefer to have a binding credible agreement
Is there any evidence for that? Specifically, I don't see why China would want one.
Replies from: James_Miller↑ comment by James_Miller · 2014-06-02T21:54:52.337Z · LW(p) · GW(p)
They might not, but then again we might learn how to create super-high IQ people before we learn to genetically engineer high-loyalty people and consequently the super-geniuses would pose a future risk to the Chinese communist party.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-03T14:49:43.847Z · LW(p) · GW(p)
I don't see the Chinese Communist Party being worried about that. I suspect that if they embark on an IQ-enhancement program, its goal and likely results will be a small rise in average IQ, not a collection of flawed supergeniuses.
Replies from: gwern↑ comment by gwern · 2014-06-04T01:55:38.321Z · LW(p) · GW(p)
IVF is a difficult, painful, somewhat dangerous process which requires a lot of money, cooperation, and apparatus, while embryo selection doesn't sound like it would cost much more for higher levels of selection; if you're going to do it at all, it makes more sense to maximize bang for your buck by going for geniuses than settling for near-invisible increases in averages. If nothing else, where do you get all the gynecologists from? To do a nationwide program would require hundreds of thousands of specialists (at a minimum; China has 18 million babies a year).
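(A hedged back-of-envelope version of that scale arithmetic; the per-specialist capacity figure is my own assumption, not anything from the comment:)

```python
# Rough scale check, not a real estimate: how many specialists would a
# nationwide IVF program need? The capacity figure below is an assumption.
births_per_year = 18_000_000        # China, per the figure cited above
cycles_per_specialist_year = 100    # assumed annual IVF capacity per specialist

specialists_needed = births_per_year / cycles_per_specialist_year
print(f"~{specialists_needed:,.0f} specialists needed")  # ~180,000
```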
Replies from: Lumifer↑ comment by Lumifer · 2014-06-04T04:48:30.606Z · LW(p) · GW(p)
if you're going to do it at all, it makes more sense to maximize bang for your buck by going for geniuses than settling for near-invisible increases in averages.
The problem is that no one knows how to go for geniuses. The first step has to be, essentially, large-scale experimentation which, I suspect, will start with just culling out "defects". China likely has the will and the ethics to do this, the West certainly does not.
Replies from: gwern↑ comment by gwern · 2014-06-04T18:55:13.743Z · LW(p) · GW(p)
The problem is that no one knows how to go for geniuses.
I don't follow. If you have the sorts of genotype/phenotype databases which let you select for a few variants to increase average intelligence a little bit, then you aren't technologically very far from having the databases to select for a lot of variants to increase average intelligence a lot. I don't see any reason to expect long-term stagnation where interventions can easily increase by a few points but a lot of points is just impossible.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-05T01:06:23.611Z · LW(p) · GW(p)
If you have the sorts of genotype/phenotype databases which let you select for a few variants to increase average intelligence a little bit, then you aren't technologically very far from having the databases to select for a lot of variants to increase average intelligence a lot.
First, no one has databases which let you select even a few variants. We know a bunch of mutations which reliably decrease intelligence. I don't think we know what reliably increases it.
Second, the idea that we can just pile all the small improvements together to get a supergenius relies on unlikely assumptions, for example the additivity of these improvements and lack of negative side-effects.
Replies from: gwern↑ comment by gwern · 2014-06-05T01:55:33.499Z · LW(p) · GW(p)
First, no one has databases which let you select even a few variants. We know a bunch of mutations which reliably decrease intelligence. I don't think we know what reliably increases it.
I am aware of this. But you were the one discussing the hypothetical that the Chinese government would be more likely to do an embryo selection program aimed at modest national-wide increases in averages; clearly you are presupposing that such databases exist, and so I'm not sure why you're objecting that your hypothetical is currently a hypothetical.
Second, the idea that we can just pile all the small improvements together to get a supergenius relies on unlikely assumptions, for example the additivity of these improvements
My understanding is that, far from being an 'unlikely assumption', methods like twin studies & GCTA used to estimate aspects of the genetic contribution to intelligence have long shown that the majority (usually something like ~70%, going off Meng's citations) is in fact additive.
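(To make the additivity point concrete, a minimal simulation -- not gwern's calculation -- assuming the additive genetic component among sibling embryos is roughly normal; the 7-point SD is purely illustrative:)

```python
# Minimal sketch: if sibling embryos' additive polygenic scores are roughly
# normal, best-of-n selection gains about sigma * E[max of n standard normals].
import numpy as np

rng = np.random.default_rng(0)
sigma_additive = 7.0    # assumed SD (IQ points) of additive variation among siblings
n_embryos = 10
trials = 100_000

scores = rng.normal(0.0, sigma_additive, size=(trials, n_embryos))
mean_gain = scores.max(axis=1).mean()   # average advantage of picking the best embryo

print(f"best-of-{n_embryos} selection gains ~{mean_gain:.1f} IQ points on average")
```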
and lack of negative side-effects.
High IQ types don't have that many problems.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-05T02:13:48.863Z · LW(p) · GW(p)
hypothetical that the Chinese government would be more likely to do an embryo selection program aimed at modest national-wide increases in averages
I probably wasn't clear. The hypothetical program would not be aimed at modest nation-wide increases. It would be aimed at figuring out how to genetically engineer intelligence. I expect that its first fruits would be modest increases in the averages of the program subjects -- not the averages of the whole population of China.
the majority (usually something like ~70%, going off Meng's citations) is in fact additive.
The studies examined normal ranges of intelligence. The additivity may or may not hold when pushing into genius territory.
High IQ types don't have that many problems.
That's not self-evident to me for very high IQ types. Besides, the attempts to genetically engineer high IQ might find different paths in that general direction, some are likely to have serious side effects.
Replies from: gwern↑ comment by gwern · 2015-02-27T01:32:33.775Z · LW(p) · GW(p)
The additivity may or may not hold when pushing into genius territory.
There's no a priori reason to expect additivity to suddenly fail when going outside the observed range. That's the point of additivity: if the variants depended on the presence or absence of other variants to have an effect, then that would fall into the non-additive parts.
That's not self-evident to me for very high IQ types.
When we look at regressions for IQ, we almost always see strong positive effects going as high as we can meaningfully measure or get sample sizes. Consider the SMPY studies. I'm not aware of any results from their longitudinal results showing worse problems than your average 100 IQ schmoe. And it should be self-evident: do you associate MIT or Stanford or Harvard or Tsinghua graduates with extremely high flameout rates, shorter lifespans, lower incomes, any of that...?
the attempts to genetically engineer high IQ might find different paths in that general direction, some are likely to have serious side effects.
If there were serious common side effects from the common variants detected by current GWAS, as a statistical necessity, those variants would have been disease hits before they were intelligence hits of small effect.
Replies from: Lumifer↑ comment by Lumifer · 2015-02-27T15:48:14.196Z · LW(p) · GW(p)
There's no a priori reason to expect additivity to suddenly fail when going outside.
We just don't know at this point. On general grounds I'm suspicious of claims that in highly complex systems stochastic relationships observed for the middle of the distribution necessarily hold far into the tails. In this case I have no strong opinions on whether it will or will not hold.
MIT or Stanford or Harvard or Tsinghua graduates
By "very high IQ types" I mean geniuses. MIT, Stanford, etc. do not graduate geniuses, they graduate merely high-IQ people. Off the top of my head, I would expect geniuses to have a higher rate of mental/emotional issues and a shorter lifespan, though that's a prior, I haven't looked at data.
common variants detected by current GWAS
I'm talking about different paths.
Replies from: gwern↑ comment by gwern · 2015-02-27T16:24:17.292Z · LW(p) · GW(p)
We just don't know at this point.
Why do you have your skeptical prior? Where have similar genetic engineering efforts failed? Have we not been able to breed cows and cats and dogs and horses for all sorts of things and traits many SDs beyond their ancestral wild populations?
By "very high IQ types" I mean geniuses. MIT, Stanford, etc. do not graduate geniuses, they graduate merely high-IQ people.
If there are large negative effects then you should be able to show it by looking at the available large samples of high-IQ types, of which MIT/Stanford/SMPY/etc are the best ones. There's not going to be any magical trigger point where IQ 150 people have all the benefits we know high IQ types do and which all the extrapolations predict and we verify up to the limits of our research capability, and then just beyond where we can gather reliable sample sizes, at IQ 151, suddenly they start to lose 20 years of life expectancy and go mad.
I would expect geniuses to have a higher rate of mental/emotional issues and a shorter lifespan,
It sounds like your beliefs on this topic are molded by some outdated Romantic myths about genius.
I'm talking about different paths.
I have no idea what you mean. All proposals are for using GWAS results based on existing variation (since no one knows what other genetic changes one would make!), and my argument for safety works there. What different paths?
Replies from: Lumifer↑ comment by Lumifer · 2015-02-27T16:36:33.502Z · LW(p) · GW(p)
Have we not been able to breed cows and cats and dogs and horses for all sorts of things and traits many SDs beyond their ancestral wild populations?
Not for intelligence, as far as I know. Though dog breeds are widely considered to vary in intelligence -- have there been any attempts to quantify it?
As to "failing", traditional genetic engineering certainly ran into some limits. To continue with dogs, large breeds have shorter lifespans. Many breeds have well-known pervasive genetic problems (hip dysplasia in German shepherds, etc.).
It sounds like your beliefs on this topic are molded by some outdated Romantic myths about genius.
I doubt it's Romantic myths, since the people that come to my mind mostly lived in the 20th century, but yes, I've said that it's a prior and I'm open to evidence other than handwaving.
since no one knows what other genetic changes one would make
That's the point of experimenting :-)
Replies from: gwern, Vaniver↑ comment by gwern · 2015-02-28T00:09:54.817Z · LW(p) · GW(p)
I doubt it's Romantic myths since people that come to my mind mostly lived in the XX century
The Romantics invented the myth of insane geniuses touched by divinity, but that doesn't mean people holding that belief suffer from amnesia and are unable to list any examples from after the Romantics... Given the lifetime prevalence of any mental illness in the general population, it would be surprising if one couldn't list some anecdotes like Gödel.
That's the point of experimenting :-)
There's a practically infinite number of genetic changes one could make. Understanding of genetic networks influencing cognition will have to be extraordinarily good before any researchers can write down a completely novel gene or variant which has no natural examples and experiment with it. For better or worse, for the next several decades, we're stuck exploiting natural variants - all interventions are going to look something like 'people with X seem to be smarter, let's try adding X to others or select for it' or 'Y is a rare or de novo variant, maybe it's harmful, let's remove it or select against'.
Replies from: Lumifer↑ comment by Lumifer · 2015-02-28T18:08:16.141Z · LW(p) · GW(p)
the myth of insane geniuses touched by divinity
That's not my mental model at all. I haven't thought deeply about it, but I probably imagine geniuses as an overclocked, supercharged, often highly specialized piece of wetware running on the same-reliability components, possibly crowding out some other capabilities, and frequently having social problems just due to the fact that 99.9%+ of people around you are quite different from yourself.
↑ comment by Vaniver · 2015-02-27T19:51:23.085Z · LW(p) · GW(p)
Though dog breeds are widely considered to vary in intelligence -- have there been any attempts to quantify it?
Yes; take a look at this, and the generic wikipedia page. One of the more visible tests is the number of times a new command must be repeated to be learned. Overall, there's not too much agreement because there are a number of different interpretations of what it means for a dog to be intelligent, and no one has (to my knowledge) done the factor analysis to look for g in dogs.
Replies from: gwern↑ comment by gwern · 2015-02-28T00:16:17.110Z · LW(p) · GW(p)
We do have such a test battery for primates, though, the Primate Cognitive Test Battery (came up 2014 in showing chimp intelligence is, of course, heritable). Cross-species comparisons have been done and don't show much difference aside from humans: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3317657/ (and from a different avenue, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049098/ ). This is a bit surprising to me but I suppose it's not like we've deliberately bred any of those species for intelligence or anything.
comment by Omid · 2014-05-31T17:26:29.574Z · LW(p) · GW(p)
Proposal: You don't need politics. In which I argue that keeping up with the news and political controversies is neither a duty nor effective altruism. Intended to counteract the "Rah political activism!" message I got in school.
Replies from: DanArmak, Viliam_Bur, michaelkeenan, Benito, torekp↑ comment by DanArmak · 2014-05-31T18:32:21.715Z · LW(p) · GW(p)
I endorse this approach. Ever since high school (so for about 12 years), I have deliberately stayed ignorant of all politics local to my country and of local news. I absolutely never watch or read the news, and I rarely find myself discussing these topics with my friends.
News and politics are designed to generate outrage and promote anti-rationalism and epistemic dark arts. They also strongly select for bad and depressing news, and for non-representative surprising incidents. On the other hand, the value from my knowing about the news is very small (e.g. in terms of changing my behavior).
↑ comment by Viliam_Bur · 2014-06-01T22:32:44.107Z · LW(p) · GW(p)
Sometimes politics steps into your life. For example, you want to teach people rationality, but a religious political party just made religious education mandatory in schools. Or you invented a better way to teach maths to kids, but you can't use it, because in your country all schools must strictly follow the plans written by the government. Etc. The idea is that political power can prevent you from doing the right thing, so unless your plan is just to break the law and go to jail, you must somehow get involved with politics.
Of course, you could also just give up this specific topic, and choose some other topic where there is no direct political opposition. Or you could just write a lot about your idea, and hope that someone else will notice it and do the dirty work for you.
↑ comment by michaelkeenan · 2014-06-01T18:45:11.375Z · LW(p) · GW(p)
This is a great topic. I know of three good resources on it:
I Hate The News by Aaron Swartz
News is bad for you – and giving up reading it will make you happier by Rolf Dobelli in The Guardian
Avoid News: Towards a Healthy News Diet (PDF) by Rolf Dobelli - a longer version of the one in The Guardian.
↑ comment by Ben Pace (Benito) · 2014-05-31T20:19:03.501Z · LW(p) · GW(p)
Would almost write this post myself - btw does 'rah' mean 'yay'?
Replies from: free_rip↑ comment by torekp · 2014-06-01T13:44:32.544Z · LW(p) · GW(p)
Be careful how you phrase this. Because I want Friendly AI, I do need politics - in a different sense of the word, one that has little to do with keeping up with the news or what the pundits are saying.
Little, but not nothing. The mundane political wars contain many object lessons on how to win at politics in the broader sense.
comment by James_Miller · 2014-05-31T16:26:52.290Z · LW(p) · GW(p)
I would like to see a post collecting examples from history and great literature of rationality or irrationality that illuminate key LW principles.
comment by TylerJay · 2014-05-31T20:27:08.792Z · LW(p) · GW(p)
A while back, I read "The Little Book of Common Sense Investing" by John Bogle, the founder of Vanguard and creator of the first index fund. It's an analysis of why index funds are a better option than actively managed mutual funds.
I've had some highly-upvoted comments on the merits of index funds in the past, so I've considered doing a writeup on it to give LessWrong a summary, since it seems that a lot of people around here know that they're supposed to be good, but don't really understand all the reasons why. Is there any interest around this?
Edit: Thanks for the positive response. I'll work on it and try to get it out in the next couple weeks. Does anyone have any input on whether it would be appropriate to post in Main?
Replies from: Viliam_Bur, ChristianKl, eggman↑ comment by Viliam_Bur · 2014-06-01T22:40:24.198Z · LW(p) · GW(p)
I would like to read it, as long as there will be more than just the basic idea of "you can't be reliably better than the market, and the index funds copy the market".
For example, there are many index funds. I know they are supposed to be better than all other options, but how do I compare them against each other? Or, how much of the historical success of index funds is survivorship bias, in that the USA was simply not destroyed in a war, while many other countries were? (If you had invested your money in 1900 in Russian or German index funds, how much would you have today? Let's suppose you had put 1/3 in Russian, 1/3 in German, 1/3 in American index funds -- how much would you have today? Is the advice for your readers to pick a random country, to always pick the USA because that worked in the past, or to diversify internationally?)
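(A sketch of that 1900 thought experiment, just to show the shape of the calculation -- all return figures below are made-up assumptions, not historical data:)

```python
# Illustrative only: made-up returns, chosen to show how one total loss
# affects an equally weighted three-country index portfolio.
years = 50
annual_return = {"USA": 0.065, "Germany": 0.01, "Russia": None}  # None = wiped out

final_value = 0.0
for country, r in annual_return.items():
    stake = 1.0 / 3                          # equal thirds of one unit invested in 1900
    final_value += 0.0 if r is None else stake * (1 + r) ** years

print(f"1/3-1/3-1/3 portfolio after {years} years: {final_value:.2f}x initial")
```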
↑ comment by ChristianKl · 2014-06-01T07:57:48.299Z · LW(p) · GW(p)
LessWrong a summary since it seems that a lot of people around here know that they're supposed to be good, but don't really understand all the reasons why
I would think that the reasons are fairly well known, what kind of reasons do you think the average person on LW misses?
Replies from: solipsist
comment by seez · 2014-05-31T23:33:02.693Z · LW(p) · GW(p)
Some questions I'd love to see addressed in posts:
How much can we raise the sanity waterline without transhumanism (i.e. assuming current human biology is a constant)?
Is the sanity waterline rising?
What is the best way to introduce rationality to different groups of people/subcultures?
Do LW and other rationality reading materials unnecessarily signal nerdiness so strongly that it limits their effectiveness and ability to spread?
What are the best things someone with very low tech skills can do for the rationality movement, and for the world?
If LW is declining/failing, why is this happening, could this have been prevented, and are other rationality-related communities infected with the same problem?
Replies from: Viliam_Bur, Will_BC, ChristianKl, TimS, ChristianKl↑ comment by Viliam_Bur · 2014-06-01T23:04:51.698Z · LW(p) · GW(p)
If LW is declining/failing
I like the idea in general, I just recommend caution in evaluating whether LW is declining. I mean, it's obvious from the context of this thread that many people feel so, however...
There was a time when Eliezer wrote a new article every day, for a year. And I loved reading those articles, but writing them was not how Eliezer wanted to spend the rest of his life, so it is natural that he gradually stopped. This feels like a decline from the "less new cool stuff to read every day" point of view. But on the other hand... all the stuff Eliezer wrote, it's still there. We are not in a newspaper business, the old copies are not automatically thrown away, and don't have to be repeated every year. It's being collected into an e-book now (by the way, how's the progress there?). There is CFAR as a separate organization; they do seminars. There are meetups in many countries around the world.
What I'm saying is that the important part is the rationalist movement, not merely its website. If people at meetups actually accomplish something, that is more awesome than debating online. So we shouldn't judge the whole thing only by the daily number of new articles in the Discussion. Ironically, the fact that until recently the Discussion page was cluttered by meetup announcements was a signal of success (and of a bad design -- which later got fixed). Now, if the number of LW meetups were declining, that would be something to worry about; but I haven't looked at specific numbers.
↑ comment by Will_BC · 2014-06-03T04:06:17.559Z · LW(p) · GW(p)
I have been mostly lurking for a couple of months, but organizing people is one of my main areas of interest, and I have some practical experience in doing it. I have had thoughts along these lines, and right now I'm having a biweekly Google hangout with some friends and family to discuss the issue and get feedback on my ideas. I'd like to very gradually introduce the topics to the rationalist community. But the core idea that I'm working on right now is that rationality is not interesting to the general public because rationality is too abstract. I would like to form a community where the main outreach is "Success Clubs" or something like that, basically a support group for improving your life designed by rationalists. I would also like to create a currency that people would earn by attending the meetings and participating in the broader organization. I think the success of cryptocurrencies, video games, and karma systems is evidence that this could be a very useful motivator.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-06-03T08:17:43.147Z · LW(p) · GW(p)
I am interested to hear more about your ideas. Maybe in a separate article?
I think we need some vision of "what next", so those who have a project should describe it. And then it may become true.
Replies from: Will_BC↑ comment by Will_BC · 2014-06-03T19:09:31.706Z · LW(p) · GW(p)
I intend to make a discussion post, once my ideas are more polished and I have sufficient karma. Right now, I'm having a biweekly Google Hangout with a few people and trying to set up a Simple Machine Forum, so if anyone is interested in either of those send me a PM and I'll let you know how they're progressing.
↑ comment by ChristianKl · 2014-06-01T08:00:46.134Z · LW(p) · GW(p)
What are the best things someone without very low tech skills can do for the rationality movement, and for the world?
Probably depends very much on the other skills the person has. I don't see how tech skills are central.
Replies from: seez↑ comment by seez · 2014-06-02T16:58:27.011Z · LW(p) · GW(p)
The original had a typo. It's fixed now. To clarify, I am concerned that special attention is paid to tech skills and how they can be used. I would like to see greater focus on other diverse skills.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-02T21:39:21.302Z · LW(p) · GW(p)
There are a lot of movement-building activities that don't need tech skills. At the Community Camp in Berlin, for example, Jonas Vollmer said that they got permission to hold a TEDx Rationality event, but at the moment they don't have the manpower to organize it as they are focusing their energy on other projects.
A lot of movement building activities don't depend on being able to program.
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-06-03T17:22:03.356Z · LW(p) · GW(p)
A lot of movement building activities don't depend on being able to program.
There's probably an article in that as well.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-03T21:35:41.283Z · LW(p) · GW(p)
One that doesn't need technical skills to be written ;)
↑ comment by TimS · 2014-06-03T01:10:00.164Z · LW(p) · GW(p)
How much can we raise the sanity waterline without transhumanism (i.e. assuming current human biology is a constant)?
As phrased, the parenthetical assumes that biological improvement is the only or primary cause of raising the sanity line. That is not necessarily true - I personally suspect it is false.
↑ comment by ChristianKl · 2014-06-02T21:40:13.705Z · LW(p) · GW(p)
How much can we raise the sanity waterline without transhumanism (i.e. assuming current human biology is a constant)?
The question presupposes mind-body dualism. Biology gets changed through mental interventions and it's not at all clear how many interventions are possible.
Replies from: Luke_A_Somers, NancyLebovitz↑ comment by Luke_A_Somers · 2014-06-03T12:17:05.118Z · LW(p) · GW(p)
The question presupposes mind body dualism.
No, it doesn't. It asks how much of sanity is dependent on nurture vs nature.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-03T12:38:40.239Z · LW(p) · GW(p)
The idea of nurture vs. nature comes out of mind-body dualism.
When biologists debate the influence of genes they look at the amount of variation inside a given population that's due to genes. They don't look at extremes, and they especially can't look at extremes produced by yet-undiscovered methods.
That said, the idea that you can't change biology through nurture doesn't hold up. Far more Americans are overweight today than 200 years ago. Being overweight is a biological difference from being underweight.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-06-03T16:21:09.732Z · LW(p) · GW(p)
The idea of nurture vs. nature comes out of mind-body dualism.
Maybe historically, but in this context? When seez said
(i.e. assuming current human biology is a constant)
seez did not mean that literally everything biological is completely fixed. If that were the case, we would be statues. The qualification meant that we are not considering here modifying humans directly at the biological level, going instead through communication channels. That these communication channels will produce biological effects is beside the point.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-03T21:36:05.765Z · LW(p) · GW(p)
The qualification meant that we are not considering here modifying humans directly at the biological level, going instead through communication channels.
One of the best interventions for increasing the cognitive performance of nerds who do no sports at all is to get them to do sports.
You hear nerds say that they have a body rather than that they are their body. That reflects that people live their lives based on mind-body dualism.
I'm at the moment reading Feldenkrais who says that you have basically four areas that you can approach if you want to improve humans. Sensation, feelings, thoughts and movement. Feldenkrais makes arguments that movement is the area where you can get the most bang for your bucks through intervention.
If you try to solve all issues on the level of thoughts, then you are massively constraining the tools that you can use. If you push someone into an ugh-field, that person has a noticeable physical response. Sometimes it makes sense to simply engage on the physical level.
A hug can be a physical intervention that solves an emotional issue of another person.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-06-04T19:45:25.754Z · LW(p) · GW(p)
Your skill at nitpicking is awe-inspiring.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-04T20:38:22.749Z · LW(p) · GW(p)
Your skill at nitpicking is awe-inspiring.
I sometimes do engage in nitpicking, but in this instance I'm just arguing a position that's very foreign to you.
I have two choices. I can list a bunch of obviously true claims to make my point. Then you say I'm nitpicking because I say things that are trivially true. I could also make bigger claims, and then you would argue that I don't have evidence for them that you find convincing.
Stereotypical nerds don't do sports because they think of their body as something that isn't them but is a tool. That's inherently a meme that comes from mind-body dualism. Unfortunately, simply making a logical argument doesn't help people identify with their body. It's a difficult belief to change on a cognitive level if you limit your toolbox. If you, on the other hand, let a person do Feldenkrais or another somatic framework for long enough, they usually make that switch and start to identify with their body and stop speaking as if their body is something they possess. Of course you can get somebody to say that they changed the belief more easily, but the underlying alief might still be the same even if someone pretends to have changed his mind.
Problem modelling matters a great deal. Being willing to change core assumptions is central for making progress on issues such as raising the sanity line.
Replies from: Luke_A_Somers, Viliam_Bur↑ comment by Luke_A_Somers · 2014-06-05T01:11:20.775Z · LW(p) · GW(p)
No, I agree strongly with everything that you have said in this entire thread except that any of it had anything to do with the post you were responding to.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-05T09:19:58.709Z · LW(p) · GW(p)
No, I agree strongly with everything that you have said in this entire thread except that any of it had anything to do with the post you were responding to.
A while ago I had a LW discussion about listening to one's heart. It took me quite a while to get people to consider that some people actually mean the phrase very literally. For them it was just a metaphor that exists in the mind, and the phrase had little to do with the actual biological heart.
Let's take dual-n-back as an intervention for improving intelligence. As far as I understand, gwern ran the meta-analysis and it doesn't work for that purpose. Purely mental interventions don't get you very far. I do advocate that you need to think more about addressing somatic issues if you actually want to build training that improves intelligence.
David Burns, who did a lot to popularize Cognitive Behavior Therapy (CBT) with his book "The Feeling Good Handbook", doesn't call what he does these days Cognitive Behavior Therapy anymore. Just focusing on the mind and cognition is 20-to-30-year-old thinking. Burns nowadays considers it important that patients feel a warm connection with their therapist.
A lot of psychology academia is still in that old mental frame. Academia isn't really where innovation happens.
I do think we have to consider putting people in flotation tanks or on treadmills while they do dual-n-back or similar tasks if we want to get strong intelligence improvement to work.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-06-05T10:28:27.842Z · LW(p) · GW(p)
Okay. The thing is, all of that stuff would be allowed under the restriction you were objecting to. Everything you are proposing is working within the system of human biology, optimizing it. You're not replacing it with something else altogether like computer chips or genetically re-engineering one's myelin or whatever.
As a reminder, the exchange began:
How much can we raise the sanity waterline without transhumanism (i.e. assuming current human biology is a constant)?
The question presupposes mind-body dualism. Biology gets changed through mental interventions and it's not at all clear how many interventions are possible.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-06-05T11:03:44.173Z · LW(p) · GW(p)
The thing is, all of that stuff would be allowed under the restriction you were objecting to.
I don't care that much about what's allowed but about what people actually do. Even if a nerd intellectually understands that mind-body dualism is wrong, he can still ignore his body and avoid exercising because he doesn't get the idea at a deep level.
Why do you consider something that changes hormone levels keeping biology constant but something that changes genes not keeping biology constant?
More importantly, once you understand that there is a lot of unexplored space, the question of how far we could improve becomes a question that obviously nobody is going to be able to answer.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-06-05T14:40:16.589Z · LW(p) · GW(p)
Couldn't you have said the interesting parts of that without the aggressive 'You're being a mind-body dualist!' part?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-05T21:33:23.813Z · LW(p) · GW(p)
Couldn't you have said the interesting parts of that without the aggressive 'You're being a mind-body dualist!' part?
Why would I? One of the core points of the argument is fighting mind-body dualism. It's the connection to the original sentence I'm challenging. A connection that otherwise didn't seem obvious to you.
As far as the word "aggressive" goes, challenging ideas at a deep level can raise emotions. I don't think that's a reason to avoid deep intellectual debate and only debate superficial issues that don't raise emotions.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-06-06T01:19:50.297Z · LW(p) · GW(p)
How much can we raise the sanity waterline without transhumanism (i.e. assuming current human biology is a constant)?
The only way I can see this as mind-body dualistic is by taking a very strong, restrictive sense of the phrase 'human biology' - one which does not already include those things that humans are biologically capable of without high-technological transhumanist aid. You assumed that the definition in use was one you would strongly disagree with, despite contextual clues that this was not the case: if the poster thinks that transhumanistic modifications CAN impact the sanity waterline, this person is clearly not a mind-body dualist!
Basically, you picked a fight with someone who agreed with you over something you agreed with them about and insisted that they disagreed with you. It's obnoxious.
I don't think that's a reason to avoid deep intellectual debate and only debate superficial issues that don't raise emotions.
When you're dealing with emotionally charged issues, you need to be very careful. It's not the time to run in throwing words into peoples' mouths.
↑ comment by Viliam_Bur · 2014-06-05T23:23:32.892Z · LW(p) · GW(p)
If you on the other hand let a person do Feldenkrais or another somatic framework for long enough they usually do that switch and start to identify with their body and stop speaking as if their body is something they possess.
Akrasia survey data analysis suggests that the most useful anti-akrasia technique (at least for the LW audience) is exercise for increased energy.
Not sure if these things are connected, but if identifying with one's body would lead one to exercise more... then we could have a possible way to overcome akrasia here. Changing your feelings could be strategically better than spending willpower.
Do you think this could work? (I am not sure if it even makes sense.)
If yes, could you write an introductory article about the somatic frameworks?
↑ comment by NancyLebovitz · 2014-06-03T10:56:55.438Z · LW(p) · GW(p)
I don't think raising the sanity waterline requires mind-body dualism. For a small example, it's only in recent years that I've heard people saying "Why not me?" instead of "Why me?". [Source: I'm an NPR (public radio) junkie.]
A large example would be that it's no longer normal for people to think it's alright to own slaves.
Replies from: Lumifer, ChristianKl↑ comment by ChristianKl · 2014-06-03T12:21:23.051Z · LW(p) · GW(p)
The idea of current human biology being constant assumes mind-body dualism. Any mental intervention changes biology.
The fact that you are not perceptive enough to notice the physical difference between a "Why not me?" person and a "Why me?" person doesn't mean that it isn't there.
comment by Omid · 2014-05-31T17:02:40.887Z · LW(p) · GW(p)
Proposal: Quantified risks of gay sex. As a bi-curious man, I have some interest in gay sex, but I'm also worried about STDs. As a nerd, I'd like to weigh my subjective desire to have gay sex against the objective risks of STDs. This has been surprisingly difficult.
The risks of lesbian sex don't need quantification because they're basically zero. The risks of straight sex have been decently-enough quantified here and here. But there's no comparable guide for gay sex.
All of the websites for gay men give vague advice like "wearing a condom is safer than not wearing a condom." Sure, but does wearing a condom make gay sex safe enough to rationally partake in, or is it like wearing a seatbelt while you're drunk driving? I'd like to write a post that told men how risky gay sex was and how much of that risk can be avoided. It would help men decide not just whether they should have gay sex, but whether they should get circumcised or insist their partners be tested.
This post could be a hazard if it exposes Less Wrong to legal risk, or if it says something boneheaded and damages the forum's credibility. So I'd probably need some help researching and editing it and I'd want to show it to whoever is in charge of these forums before I post it.
Replies from: Vaniver, Izeinwinter, pianoforte611, NancyLebovitz, falenas108↑ comment by Vaniver · 2014-05-31T18:28:44.513Z · LW(p) · GW(p)
I would be interested in helping with this post. (I am a gay man who does not partake in casual sex, primarily because of the health risks.) From what I recall when I looked into this last, there's huge value in breaking out the various kinds of sex, because of huge risk differences.
↑ comment by Izeinwinter · 2014-06-02T13:59:33.965Z · LW(p) · GW(p)
There is no "overall risk for gay sex". There is a risk for how you meet partners, and there is a risk level for each particular sex act you engage in and which protections you take.
Anal isn't mandatory -- a pretty high percentage of all gay men (30%) don't do the back door at all, coming or going, and it carries the same risks irrespective of your partner's gender.
Having unprotected anal sex with anyone who isn't a long-term partner who has tested clean is basically nuts, but a lot of guys do it, which is why gay men have such depressing averages. But those are averages. If you have good boundaries and behave reasonably, it's as safe as any recreational activity ever is. That means condoms. On a practical level, get some where you like the flavor and carry them around because, uhm, well... Remember what I said about anal not being mandatory? If you have a problem with oral, that's going to be an issue.
↑ comment by pianoforte611 · 2014-06-03T15:48:12.888Z · LW(p) · GW(p)
I'm also a gay man who would be interested in this topic. It seems extremely narrow though so perhaps not appropriate for discussion. Maybe more suitable for a personal blog with a link post (but I don't have a personal blog). If you're going ahead with it, I'd like to help but I have little experience in this kind of research.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-04T12:58:55.719Z · LW(p) · GW(p)
I think it's appropriate for LW-- it will have implications about how to do research as well as the specific topic.
Replies from: Joshua_Blaine↑ comment by Joshua_Blaine · 2014-06-06T21:30:43.985Z · LW(p) · GW(p)
As a general point, the "off topic" complaint is used too much to shut down what I think would be valuable contributions to the site. If we're only ever allowed to talk about rationality, but not to demonstrate ourselves using it, then we've made a community-crafting mistake.
↑ comment by NancyLebovitz · 2014-05-31T17:11:11.134Z · LW(p) · GW(p)
wearing a condom is safer than wearing a condom
You presumably meant "than not wearing a condom".
Legal risk seems unlikely-- I've never heard of anyone sued for just giving bad advice.
Saying something boneheaded and damaging the site's credibility seems more possible, but not what I'd call extremely likely. A substantial compendium of research may well do more good than harm, but damned if I know how to compute that.
Replies from: Omid↑ comment by Omid · 2014-05-31T17:36:46.517Z · LW(p) · GW(p)
Fixed, thanks.
A lot of websites use a "This is not medical advice" disclaimer, enough to justify a generic template.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-01T07:43:32.775Z · LW(p) · GW(p)
The fact that one website uses a disclaimer doesn't show that the person who created the website knows what he's doing. He might just have copied what other people are doing to be safe.
↑ comment by falenas108 · 2014-05-31T20:18:37.505Z · LW(p) · GW(p)
In my response to this post, I realized I was basically starting to write the post you were talking about.
Anyway, as someone who has done research into this, the answer is the risk is higher, but not all that much. Suggested start here: http://www.aidsmap.com/Consistent-condom-use-in-anal-sex-stops-70-of-HIV-infections-study-finds/page/2586976/
Replies from: solipsist↑ comment by solipsist · 2014-05-31T22:19:55.279Z · LW(p) · GW(p)
The article you linked to says condom use reduces infection rate by 70%. That sounds good (sort of).
But what's the base rate? Gay and bisexual men make up 2% of the population but account for 72% of all new infections among 13-24 year-olds. That sounds really bad.
I would appreciate some well-researched article with numbers and conditional probabilities and the like. Perhaps several people could approach the problem from different angles and each do their own writeup?
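(As a sketch of the conditional-probability structure such a writeup would need -- every number below is a placeholder assumption, apart from the 70% figure from the linked article:)

```python
# Placeholder numbers only; the point is the shape of the calculation.
prevalence = 0.15        # assumed P(partner is HIV-positive) in one's dating pool
per_act_risk = 0.014     # assumed P(transmission | HIV+ partner, no condom)
condom_efficacy = 0.70   # 70% risk reduction, per the aidsmap article above

p_per_act = prevalence * per_act_risk * (1 - condom_efficacy)
n_acts = 100
p_cumulative = 1 - (1 - p_per_act) ** n_acts   # treating acts as independent

print(f"per act: {p_per_act:.4%}; over {n_acts} acts: {p_cumulative:.2%}")
```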
Replies from: falenas108↑ comment by falenas108 · 2014-06-01T14:23:25.043Z · LW(p) · GW(p)
Yeah, that doesn't surprise me all that much. For PIV sex, there is a huge incentive to use condoms: birth control. Even among people using other birth control, condoms are still pretty common.
But in the gay community many people don't use condoms. As stated in the other article, only 1/6 of gay men regularly use condoms. Hence the higher numbers, despite condoms reducing the infection rate by 70% for anal sex, compared to the ~80-85% reduction for PIV sex.
Also, slight nitpick. Gay/bi/MSM make up 2% of the general population, but I'd be willing to bet that they make up a much larger percent of 13-24 year olds.
Replies from: None↑ comment by [deleted] · 2014-06-01T16:39:27.518Z · LW(p) · GW(p)
Also, slight nitpick. Gay/bi/MSM make up 2% of the general population, but I'd be willing to bet that they make up a much larger percent of 13-24 year olds.
Could you explain?
Replies from: solipsist, None↑ comment by solipsist · 2014-06-01T17:05:37.593Z · LW(p) · GW(p)
Younger Americans are more likely to identify as gay or bisexual, one assumes for cultural reasons.
Numbers at the Gallup poll link were:
- 6.2% of 18-29 year olds
- 3.2% of 30-49 year olds
- 2.6% of 50-64 year olds
- 1.9% of 65+ year olds
↑ comment by [deleted] · 2014-06-01T16:55:45.929Z · LW(p) · GW(p)
Presumably some of the barriers to entry have been lowered for this demographic.
Replies from: bramflakes↑ comment by bramflakes · 2014-06-02T11:25:08.787Z · LW(p) · GW(p)
Or Greg Cochran is onto something.
Replies from: None
comment by Omid · 2014-05-31T16:31:03.517Z · LW(p) · GW(p)
Proposal: If you're depressed, maybe your life sucks. A meta-contrarian post where I argue that you can't always "have a positive attitude" towards bad things in your life, and that fixing your life's problems might be a better strategy than learning to cope with them.
Replies from: kalium, Will_BC, TylerJay, DanArmak, Vladimir_Nesov, army1987↑ comment by kalium · 2014-06-02T05:30:22.152Z · LW(p) · GW(p)
Yes! I see so many arguments that the environment simply doesn't matter in depression, and most of them seem to come from, say, grad school administrators who benefit from denying that they're creating a horrible environment with no clear expectations, no positive feedback, no opportunities to socialize, etc. If depression is always a purely random chemical imbalance, well, it's a pretty neat coincidence that mine vanished within a week of my quitting grad school.
Replies from: NancyLebovitz, Barry_Cotter↑ comment by NancyLebovitz · 2014-06-02T15:19:18.900Z · LW(p) · GW(p)
Also, I think something that contributes to depression at my end is a background mental script that I'm always feeling the wrong thing, and part of that script is that I should be tough enough to not be affected negatively.
↑ comment by Barry_Cotter · 2014-06-03T14:04:40.080Z · LW(p) · GW(p)
Please write about this or link me to someone who has already. Congratulations on your escape.
Replies from: kalium↑ comment by kalium · 2014-06-05T06:01:53.779Z · LW(p) · GW(p)
At the time, I had a moral system in which it was not permissible to leave grad school because science was the thing I should be doing. However, towards the end of my first year I became too depressed to do any problem sets and as a result I had to drop all my classes at the last minute and would then have had to reapply to get back in, which wasn't happening. If I'd been slightly less vulnerable to stress-related depression, I suppose I'd still be there (and still be quite unhappy, so maybe the whole thing was adaptive after all).
I don't have a good link to post, but if I write more extensively I'll put it here.
↑ comment by Will_BC · 2014-06-03T04:25:31.536Z · LW(p) · GW(p)
There was an RSA clip about this a while back. Smile or Die
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-03T12:06:26.173Z · LW(p) · GW(p)
It's a summary of Ehrenreich's Bright-Sided (Smile or Die in the UK), and very good.
↑ comment by DanArmak · 2014-05-31T18:28:20.334Z · LW(p) · GW(p)
That seems just obviously true. What precisely is "If you're depressed, maybe your life sucks" designed to refute?
Who/when/why endorses any of these statements:
- Depression is not significantly correlated with actual life problems to feel bad about.
- It's practically impossible to have a positive attitude about some really bad things if they happen to you. And if you could, it probably wouldn't be a good idea.
- If there's a problem that's making you depressed, it's better to learn to cope with it and adopt a positive attitude than to try to fix the problem.
↑ comment by ShardPhoenix · 2014-06-01T00:46:26.671Z · LW(p) · GW(p)
A lot of people (e.g. on Reddit) seem to believe that depression is caused by a "chemical imbalance" and that the solution is antidepressant drugs. That's a bit different from the "positive attitude" case, but also something that might be disagreed with.
Replies from: DanArmak↑ comment by DanArmak · 2014-06-01T08:20:05.741Z · LW(p) · GW(p)
The drugs we have today are not a good solution. I've suffered from depression myself and have been prescribed many different drugs at times. The best match was still only a partial solution. I've read enough to know that this is pretty typical: few people get 'total remission' from depression on psychiatric drugs alone. Most have some degree of improvement, but have to try many drugs first (i.e. there's no good prediction of what a drug will do to a person) and usually have at least minor side effects (i.e. the drugs are not very specific in their action).
It's hard to argue against the claim that some hypothetical not-yet-invented drug might be a perfect cure for depression; that's just one step removed from the truism that our minds are our brains and so susceptible to neurochemical intervention.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-06-02T14:05:57.110Z · LW(p) · GW(p)
I assign a pretty high likelihood that what we call depression is in fact several distinct disorders that merely present similar symptoms, and that for this reason we are never going to get really effective treatments for depression until we get better diagnostics. It would explain why people have such varying drug responses -- if you have depression type A and get medication effective for depression type D...
↑ comment by Vladimir_Nesov · 2014-06-01T15:10:18.098Z · LW(p) · GW(p)
This post seems relevant. Advice shouldn't be formulated as pushing behavior in a certain direction, because different people benefit from pushing behavior in different directions.
↑ comment by A1987dM (army1987) · 2014-05-31T22:13:52.864Z · LW(p) · GW(p)
comment by ChristianKl · 2014-06-01T07:56:53.612Z · LW(p) · GW(p)
How do you give compliments effectively? When do you give them?
What kind of compliments are best? How do you train yourself to give them? When do you give them? What are the effects on the person receiving the compliment?
Replies from: TimS↑ comment by TimS · 2014-06-03T01:15:14.937Z · LW(p) · GW(p)
I've found success by focusing on the appearance of sincerity of what I say. The other key insight was making sure that I was spending time trying to figure out what the other person is actually proud of, instead of what I was most interested in.
In other words, don't worry about originality -- focus on saying something positive about the thing the other person is interested in.
comment by ChristianKl · 2014-06-01T07:48:11.601Z · LW(p) · GW(p)
Review of the literature on the effectiveness of various strategies to mitigate cognitive biases
comment by Error · 2014-05-31T16:21:46.540Z · LW(p) · GW(p)
Spending five minutes thinking about it:
Boredom. What it is, how to notice it, what to do about it. Eliezer wrote some about this in Fun Theory but it could probably be expanded on. (and the topic is amusing to me for...Reasons)
To recurse on the subject: Idea generation. How new ideas work, how to come up with them on demand, how to separate good ones from bad ones.
Dealing with irrationality in others. e.g. if you're part of a group that's mindkilling itself, and you can't just walk away, how can you de-mindkill the group? Successfully, that is. Yvain wrote a bit about this here with regard to individuals, but there's probably room for more.
...I had three more suggestions that I thought of while away from the keyboard for a few minutes, but they all went out of my head before I made it back, even though I specifically tried to note them. If someone knows why this sort of thing happens, it would be nice to write it up, because it happens to me all the time and it drives me insane.
(yes, I know the fix is to carry a notebook everywhere and use it religiously. I still want to know why it happens)
Replies from: William_Quixote↑ comment by William_Quixote · 2014-05-31T21:27:07.495Z · LW(p) · GW(p)
I would suggest the fix is to carry a smartphone at all times rather than a notebook. The phone fits in your pocket and odds are you might need one anyway.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-01T08:03:12.368Z · LW(p) · GW(p)
I would add to use Evernote on that phone. Evernote makes the notes searchable and syncs them.
comment by pragmatist · 2014-05-31T17:22:10.608Z · LW(p) · GW(p)
I'm teaching at a summer school intended to introduce philosophy students to science and the scientific method. I'm teaching a course on probability and statistics, and also one on physics. In both courses, I'm emphasizing conceptual issues that philosophy students might find interesting and relevant. Maybe I'll try polishing up some of my lecture notes and posting them here.
comment by leplen · 2014-06-01T17:26:59.237Z · LW(p) · GW(p)
I'm broadly interested in the question: what physical limits, if any, will a superintelligence face? What problems will it have to solve, and which ones will it struggle with?
Eliezer Yudkowsky has made the claim "A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.”
I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without a substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object, move across a screen tells me nothing unless I already know an enormous amount.
We can put absolute physical limits on the energy cost of a computation, at least in classical physics. How many computations would we expect an AI to need in order to do X or Y? Can we effectively box an AI by only giving it a 50W power supply?
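(One relevant classical bound here is Landauer's principle -- a back-of-envelope sketch, keeping in mind that real hardware runs many orders of magnitude less efficiently than this limit:)

```python
# Landauer's principle: erasing one bit at temperature T costs at least kT*ln(2).
import math

k = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K
power = 50.0        # the hypothetical power budget from the question above, W

joules_per_bit = k * T * math.log(2)          # ~2.9e-21 J per bit erased
max_erasures_per_s = power / joules_per_bit   # ~1.7e22 bit erasures/s

print(f"{joules_per_bit:.2e} J/bit -> at most {max_erasures_per_s:.2e} erasures/s")
```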
I think there are some interesting questions at the intersection of information theory/physics/computer science that seem like they would be relevant for the AI discussion that I haven't seen addressed anywhere. There's a lot of hand-waving, and arguments about things that seem true, but "seem true" is a pretty terrible argument. Unlike math, "seem true" pretty reliably yields whatever you wanted to believe in the first place.
I'm making slow progress on some of these questions, and I'll eventually write it up, but encouragement, suggestions, etc. would be pretty welcome, because it's a lot of work and it's pretty difficult to justify the time/effort expenditure.
Replies from: Pentashagon, jaime2000, NancyLebovitz, shminux↑ comment by Pentashagon · 2014-06-03T09:03:28.012Z · LW(p) · GW(p)
I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without a substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object, move across a screen tells me nothing unless I already know an enormous amount.
This DeepMind paper describes their neural network learning from an emulated Atari 2600 display as its only input and eventually learning to directly use its output to the emulated Atari controls to do very well at several games. The neural network was not built with prior knowledge of Atari game systems or the games in question, except for the training using the internal game score as a direct measurement of success.
More than 3 frames from the display were used for training, but it arguably wasn't a superintelligence looking at them.
↑ comment by jaime2000 · 2014-06-01T19:19:13.197Z · LW(p) · GW(p)
I can't see how this is true. It isn't obvious to me that one could conclude anything from a video like that without a substantial prior knowledge of mathematical physics. Seeing a red, vaguely circular object, move across a screen tells me nothing unless I already know an enormous amount.
He elaborates on the claim a bit in this comment.
Replies from: leplen↑ comment by leplen · 2014-06-02T20:12:56.744Z · LW(p) · GW(p)
And it still doesn't make any sense. Think about the motion of a helium balloon. Think about the motion of a charged particle in a magnetic field. There's literally an infinite number of possible formal mathematical models that could include 12 frames of an apple falling. Think about how enormous the leap of logic is to go from "This one thing moved" to "All things must move the way that one thing does." There are quite simply not enough observations and not enough information in seeing something happen once to prove a theory. I feel like the reason people like this analogy is because an apple falling feels like something we understand, and so it's easy to imagine something smarter than us understanding it too, but we only understand apples falling because we've seen so many things fall.
How much of physics can you generalize from this image? If you want to really get an idea of how hard this problem is, try to tell me how much physics you can learn from this sound clip. It's the same information as the image, just presented in a format that doesn't let you draw on all of the incredibly difficult learning you've already done, the learning that lets you interpret images so easily.
That comment you link to walks directly through a correct chain of reasoning, and it has AI-Einstein miraculously picking the correct needle out of an infinite haystack. But it's a fiction story, and so horrendously improbable things are allowed to happen. The millions and billions of other possible theories that fit the data, which tiny-boxed-Einstein could also have invented, don't warrant a mention. How many curves can you draw that correctly fit two data points? There are an infinite number of possible theories, and no amount of intelligence is going to allow you to count to infinity any faster than anyone else.
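A toy illustration of the two-data-points problem (all the numbers are made up):

    import numpy as np

    # Two observations, say (time, position) of an apple in two frames.
    x = np.array([0.0, 1.0])
    y = np.array([0.0, -4.9])

    # A straight line fits them exactly...
    slope, intercept = np.polyfit(x, y, 1)   # y = -4.9 * x

    # ...but so does y = -4.9 * x**n for every positive n, among endless others.
    for n in [1, 2, 3, 7]:
        assert np.allclose(-4.9 * x**n, y)

Nothing in the data distinguishes these hypotheses; only further observations, or a prior you brought with you, can.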
it's a very obvious hypothesis to the right kind of Bayesian.
It's not at all clear to me what this means given the existence of Aumann's agreement theorem.
Replies from: XiXiDu↑ comment by XiXiDu · 2014-06-03T08:57:09.664Z · LW(p) · GW(p)
...it has AI-Einstein miraculously picking the correct needle out of an infinite haystack. But it's a fiction story, and so horrendously improbable things are allowed to happen.
I've been making similar complaints for years. And the replies I get are along the following lines:
Skeptic01: X is a highly conjunctive hypothesis. There's a lot of hand-waving, and arguments about things that seem true, but "seem true" is a pretty terrible argument.
LW-Member01: This is what we call the "unpacking fallacy" or "conjunction fallacy fallacy". It is very easy to take any event, including events which have already happened, and make it look very improbable by turning one pathway to it into a large series of conjunctions.
Skeptic01: But you are telling a detailed story about the future. You are predicting "the lottery will roll 12345134", while I merely point out that the negation is more likely.
LW-Member02: Not everyone here is some kind of brainwashed cultist. I am a trained computer scientist, and I held lots of skepticism about MIRI's claims, so I used my training and education to actually check them.
Skeptic01: Fine, could you share your research?
LW-Member02: No, that's not what I meant!
LW-Member03: Ignore him, Skeptic01 is a troll!!!
...much later...
Skeptic01: I still think this is all highly speculative...
LW-Member03: We've already explained to Skeptic01 why he is wrong. He's a troll!!!
Replies from: None↑ comment by NancyLebovitz · 2014-06-01T18:30:40.385Z · LW(p) · GW(p)
I'm guessing that the super-intelligence would deduce more from the details of the webcam than from the details of a short film or single image.
It couldn't know whether the image represented something real or a hypothetical construction.
↑ comment by Shmi (shminux) · 2014-06-02T21:09:06.646Z · LW(p) · GW(p)
Eliezer Yudkowsky has made the claim "A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass."
This is one of those grandiose and silly claims that gives this site a bad rap. There is no way to prove this statement of faith in Bayesian Superintelligence (BS for short), because there is no BS (ehm...) around to test it with, and even if there were, the setup itself (BS + webcam) is so ridiculous that it would never be tried. Anyway, as far as I know, Eliezer's absolute faith in Bayesianism is not shared by anyone else at MIRI/CFAR, at least not nearly as fervently.
comment by MathiasZaman · 2014-06-03T22:40:59.118Z · LW(p) · GW(p)
One thing I've been thinking about would be posts specifically designed to elicit discussion, rather than teaching about something.
To give an example, someone on /r/LessWrong posted a question about creating a rationalist sport. It's fun and interesting to talk about, and it's a good way of exercising things like "holding off on proposing solutions."
comment by Qiaochu_Yuan · 2014-06-01T18:43:31.303Z · LW(p) · GW(p)
I keep a list, in Workflowy, of titles for posts, almost none of which I've turned into posts. (I generally recommend using Workflowy for capture in this way.) Here are the ones where I at least remember what the point of the post was supposed to be:
- Against ethical consistency
- Against ethical criteria
- Against verbal reasoning
- The instrumental lens
- Maximizing utility vs. the hedonic treadmill
- Mathematics for rationalists
- Beware cool ideas
- How to not die (RomeoStevens already wrote this post though)
↑ comment by Ben Pace (Benito) · 2014-06-01T23:01:36.095Z · LW(p) · GW(p)
Mathematics for rationalists
Ooh, what was in this one?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2014-06-02T17:51:48.012Z · LW(p) · GW(p)
It was going to be something like a guide to what kind of mathematics it might be good for rationalists to learn, but when I started writing the post I realized it was a gigantic project and I didn't care about it enough to actually give it the time it deserved. Sorry!
Replies from: Benito, None↑ comment by Ben Pace (Benito) · 2014-06-02T17:59:30.317Z · LW(p) · GW(p)
That's too bad. At the moment I'm planning my next five years of study in maths and related areas -- got any quick hints?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2014-06-02T18:42:12.085Z · LW(p) · GW(p)
What do you want to do?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-06-02T20:23:36.275Z · LW(p) · GW(p)
Oh, I just meant what was on your list, so I can take a look at that. Unless the amount of work involved was figuring out which maths was useful, in which case I'll understand if you can't help me.
Edit: I would like to work in FAI research after I get a degree in maths/computer science, so would like to spend the next few years studying appropriately.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-02T21:21:30.938Z · LW(p) · GW(p)
Usefulness depends on the purpose for which you want to learn math.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-06-02T21:22:41.503Z · LW(p) · GW(p)
And he specifically said he was talking about the maths useful for rationalists. I meant to imply that I wanted to know the areas, so I could go study them, because they would help me be more rational.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-02T21:29:28.369Z · LW(p) · GW(p)
Yes, and he asked you what you want to do. Meaning that he might not give every rationalist the same recommendation. Someone who wants to work on AGI needs different math than someone who wants to go into another direction.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-06-02T21:48:11.615Z · LW(p) · GW(p)
Cheers.
↑ comment by [deleted] · 2014-06-02T18:30:45.678Z · LW(p) · GW(p)
Wait, so you've tabled this project?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2014-06-02T18:42:00.821Z · LW(p) · GW(p)
It's extremely tabled. It's chaired.
↑ comment by protest_boy · 2014-06-10T06:10:19.209Z · LW(p) · GW(p)
I would love to see these as posts. (I really enjoyed your posts on the CFAR list about human ethics).
What does "The instrumental lens" hint at?
Replies from: Qiaochu_Yuan, komponisto↑ comment by Qiaochu_Yuan · 2014-06-14T03:20:25.606Z · LW(p) · GW(p)
At the time I had that idea I got the impression that some of the people around me were leaning too heavily on what I was calling the "epistemic lens," where your perspective on people is primarily based on their beliefs. I think this is mostly unhelpful, e.g. it can cause people to be snooty about religion for what I see to be no good reason. I think an "instrumental lens," where your perspective on people is primarily based on their actions, is much more helpful. In general I'm a fan of instrumental rationality, rather than epistemic rationality, being the more foundational thing.
↑ comment by komponisto · 2014-06-16T05:18:50.979Z · LW(p) · GW(p)
CFAR list
How does one get on this list?
comment by moridinamael · 2014-06-02T19:54:54.491Z · LW(p) · GW(p)
Be Impressed by the Status Quo.
It was going to be a post about how the world is actually really complicated and awesome, and how on average one misses a lot by assuming one could do better than the status quo out of mere cynicism and worldliness. I had a few examples in mind, too. It's really just been a lack of focused time, and a feeling that the post might be poorly received, that has prevented me from working on it.
Replies from: David_Gerard↑ comment by David_Gerard · 2014-06-07T10:43:42.308Z · LW(p) · GW(p)
Related: Mundane Magic and Louis CK's "Everything's Amazing, Nobody's Happy" routine.
comment by JoshuaFox · 2014-06-01T16:29:39.705Z · LW(p) · GW(p)
An overview of UDT. It can include a comparison of UDT to TDT.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-06-01T23:00:36.580Z · LW(p) · GW(p)
Can I ask why?
Replies from: JoshuaFox↑ comment by JoshuaFox · 2014-06-02T06:04:28.700Z · LW(p) · GW(p)
I just want to understand UDT. I often need several articles, both popular and more formal, before I really understand something like this.
There have been plenty of articles on UDT, but not an overview.
Replies from: Tyrrell_McAllister, JoshuaFox↑ comment by Tyrrell_McAllister · 2014-06-02T21:32:23.937Z · LW(p) · GW(p)
Here's a brief write-up of the basic idea of UDT that I wrote a while back.
Replies from: JoshuaFox↑ comment by JoshuaFox · 2014-06-04T20:42:47.328Z · LW(p) · GW(p)
Thank you! I worked my way through it, and the level of formalism is fine. As you say, it is not meant to include the motivation. I'd appreciate an article that includes the motivation for each element of the formalism.
Also, some concepts were not defined, like "execution history." If "programs" are pure functions (stateless), I am not sure what a history is. Or maybe there is a temporal model here, like the one in the work of Hutter, Legg, etc.?
Actually, if I understand correctly, the "programs" P1, P2, ... represent the environment (as expressed in Hutter's formalism). (Or perhaps P1, P2, ... represent different programs the agent could run inside itself?) If P1, P2, ... are the environment, why have multiple programs when we could combine them into one thing called "environment"? In your article there is a utility function, and Hutter's model has rewards coming from the environment according to an unknown reward function, but I don't understand the essential difference between the approaches here. Since the final choice is an argmax, I still haven't figured out what this definition of UDT adds to the trivial idea "make the choice with highest expected utility."
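To pin down what I'm comparing against, here is a toy sketch of the "argmax over whole input-to-output mappings" reading of UDT; all the worlds, payoffs, and probabilities below are invented for illustration, and this ignores the logical-uncertainty machinery that the real formalism adds:

    from itertools import product

    inputs = ["heads", "tails"]
    actions = ["A", "B"]

    # Hypothetical "world programs": each maps a policy to a payoff-relevant outcome.
    def world_1(policy):
        return 10 if policy["heads"] == "A" else 0

    def world_2(policy):
        return 6 if policy["tails"] == "B" else 1

    worlds = [(0.5, world_1), (0.5, world_2)]   # (prior probability, program) pairs

    def expected_utility(policy):
        return sum(p * w(policy) for p, w in worlds)

    # The argmax ranges over entire policies, fixed before any input is seen,
    # not over single actions taken after an observation.
    policies = [dict(zip(inputs, acts)) for acts in product(actions, repeat=len(inputs))]
    best = max(policies, key=expected_utility)
    print(best, expected_utility(best))   # {'heads': 'A', 'tails': 'B'} 8.0

In this toy the only visible difference from naive expected-utility maximization is that the choice is over mappings rather than actions; presumably the interesting content of UDT lies in the cases (coordination, being predicted by other agents) where that difference matters.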
The article is great for what it is intended to be, and I am glad we have it. But I'd like to see an intro/overview to UDT.
↑ comment by JoshuaFox · 2014-06-16T14:29:03.177Z · LW(p) · GW(p)
Just read Daniel Hintze's BA thesis (Arizona State University). It is the best intro to UDT and TDT I have seen so far.
(My understanding of Hintze's writing is partly based on lots of other reading on TDT and UDT that I didn't understand as well, but I think that even if I did not have that background, it would be the best intro.)
comment by ChristianKl · 2014-06-01T07:40:48.357Z · LW(p) · GW(p)
How to talk about politics without getting mindkilled, and how to go beyond simply rehearsing talking points.
comment by leplen · 2014-06-01T17:38:37.385Z · LW(p) · GW(p)
A topic that I've been working on recently is folding the rationality and effectiveness lessons from LW, Thinking, Fast and Slow, Getting Things Done, etc. into a model of leadership. Preferably a model I could devise ways to test and tweak.
I think that having accurate beliefs about the world is great, but effectiveness in modifying the world is better, and a big part of the world we're interested in modifying is made up of other people. How do I apply all of this rationality stuff to actually accomplishing things, especially when accomplishing things means working with others?
I'm sure other people are equally or more qualified than I am to discuss this topic, but I haven't really seen it discussed and it seems like something that would be valuable to the community.
comment by Viliam_Bur · 2014-06-03T08:42:24.581Z · LW(p) · GW(p)
From "Go Forth and Create the Art!":
Yet there is, I think, more absent than present in this "art of rationality"—defeating akrasia and coordinating groups are two of the deficits I feel most keenly. I've concentrated more heavily on epistemic rationality than instrumental rationality, in general. And then there's training, teaching, verification, and becoming a proper experimental science based on that. And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...
When you try to develop part of the human art of thinking (...) You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
I hope that someone who learns the part of the Art that I've set down here (...) will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one.
You will need to draw on multiple sources to create your portion of the Art. You should not be getting all your rationality from one author (...) To the best of my knowledge there is no true science that draws its strength from only one person.
I somewhat suspect that you couldn't develop the Art just by sitting around thinking to yourself, "Now how can I fight that akrasia thingy?" You'd develop the rest of the Art in the course of trying to do something. Maybe even (...) some task difficult enough to strain and break your old understanding and force you to reinvent a few things.
...or just read the whole article from 2009.
So, I would like to see a summary of everything we know about (a) defeating akrasia, and (b) coordinating groups. Along with how we know what we think we know: was there a replicated experiment that proved something, or is it just a hypothesis from a popular book that feels right?
Unlike programming a Friendly AI, this is something all of us can try at home. But doing it properly would require some research and experiments.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-03T11:01:04.160Z · LW(p) · GW(p)
Speaking of akrasia, I'm hoping some people will write and post articles about these intriguing topics.
Do we know whether akrasia is just one thing?
Replies from: Viliam_Bur, David_Gerard↑ comment by Viliam_Bur · 2014-06-03T22:42:08.261Z · LW(p) · GW(p)
Do we know whether akrasia is just one thing?
I suspect that it's not, and that treating different situations as the same thing just because there is a surface similarity ("something should be done, but either isn't done or is done too late") means ignoring the details which are critical for understanding the situation and fixing the problem.
Seems to me there are at least three different situations:
a) There is a task that I would enjoy doing, and I would actually do it if you locked me in a room without internet access for an hour or two. But in a normal situation I am likely to start reading something on the internet, and then it's too late and I have to sleep.
b) There is a task that I hate doing, and I already have a serious "ugh field" around it. I would do anything to avoid it. Even if I can't escape and do something else, I would prefer to just close my eyes and pretend the task doesn't exist. In reality I do a small part, and then I find any excuse to run away.
c) The task is not done for objective reasons, e.g. I badly underestimated the amount of work it would take, and overestimated my amount of free time.
And then of course it could be a combination of two or three of these points. But I think they are not reducible to just one (and therefore one solution will not fit all of them). For example, sometimes what feels like an (a) may actually be a (b); there may be a hidden "ugh field" we don't want to admit. Like, I want to program a computer game, and my self-image tells me that I should love programming games, but I am actually afraid that I would fail, and I don't want to admit this fear to myself. On the other hand, sometimes I procrastinate on things like watching movies, which doesn't seem like a hidden fear.
Also, not all work is equivalent. It may be easier to work on something that feels meaningful than on something that feels completely useless. Or it may be easier to work when I know my friends are working too. Etc. There may be important things that we don't even suspect of being important, so we don't include them in the description of the problem. And these things may differ between individuals, because they depend on personality (whether they care about what their friends are doing) or beliefs (whether they consider the given work meaningful).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-04T13:04:12.852Z · LW(p) · GW(p)
I suspect there's a physical component-- sometimes doing things feels like I have to haul myself over a high threshold to get started-- and it probably won't be easy to continue, either. (Some people find it hard to get started, but easy to continue.)
In any case, the difficulty with starting might be a serotonin/dopamine thing, or at least it sounds like a very mild version of a Parkinson's symptom.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-06-05T23:31:42.090Z · LW(p) · GW(p)
It seems to me that sometimes my ability to do things correlates with the weather (high/low pressure), but I don't keep records to prove or disprove this hypothesis. I'm just writing it here as an example of a hypothesis I would usually not think about. It doesn't explain why person X is more productive than person Y, but it could explain why person Y failed on a specific day.
↑ comment by David_Gerard · 2014-06-07T10:35:19.641Z · LW(p) · GW(p)
Do we know whether akrasia is just one thing?
It's always seemed a bit of a magical category to me. "Not doing what I think I'm supposed to be doing," or something.
comment by Gunnar_Zarncke · 2014-05-31T20:25:15.179Z · LW(p) · GW(p)
I also noticed the low number of highly voted posts recently. But I didn't jump to the conclusion that it is a decline. Is it? Didn't such periods of low activity occur before? Is there a real problem, or is there more fear of a problem than an actual problem? Or is it the slow onset of the (natural?) decline of an online forum when the caravan moves on (to avoid the picture of the greener pastures I used recently)?
Someone with access to the DB should be able to quickly generate a histogram of the monthly number of posts with >N votes.
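A minimal sketch of the sort of tally I mean, in Python over a hypothetical export of (creation date, score) pairs -- I don't know the actual schema, so the field layout and threshold are placeholders:

    from collections import Counter

    # Hypothetical stand-in for rows pulled from the posts table.
    posts = [("2014-03-14", 35), ("2014-03-20", 5), ("2014-04-02", 22)]

    N = 20  # vote threshold
    histogram = Counter(date[:7] for date, score in posts if score > N)
    for month, count in sorted(histogram.items()):
        print(month, count)   # e.g. "2014-03 1", then "2014-04 1"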
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-06-01T11:31:55.082Z · LW(p) · GW(p)
Even so, having a place where people can post ideas for posts and see if those ideas are liked isn't a bad idea anyway.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-06-01T18:32:13.033Z · LW(p) · GW(p)
Indeed. Just keep that idea separate from panic and actionism :-)
comment by DataPacRat · 2014-05-31T15:31:40.558Z · LW(p) · GW(p)
I'm writing an attempt at a RationalFic. So far I've posted about it to /r/rational on Reddit and in an Open Thread here, and I've now finished the first story arc. Since there's a new Media Thread tomorrow, I plan on posting about it there.
What I haven't done is make a separate Discussion post. I kind of want to, in order to generate as much useful feedback and commentary as possible, but I've been told in the past that a few of my Discussion posts should have gone in Open Threads instead.
comment by leplen · 2014-06-01T14:53:41.350Z · LW(p) · GW(p)
I think this is a good idea and perhaps should become a recurring thread.
My experience is that I often have ideas for topics/questions/potential discussions that might be of interest to the LW community, but I have a finite amount of time to invest in writing blog posts and preparing my musings for an outside audience. It isn't always clear to me which of these topics are welcome on LW, since they may be only tangentially related to "refining the art of human rationality". A thread like this provides a sounding board for people in a similar situation.
I think there's also some potential for collaboration. If multiple people are interested in the same topic, they could conceivably email back and forth and collaborate on a post that might be of significantly higher quality than either would produce alone.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-01T15:05:04.489Z · LW(p) · GW(p)
I'm interested in seeing your ideas-- you could give them a preliminary test in this comment thread.
Also, an article can be posted at LW and in some other venue, even though it's annoying to have to reformat the links.