What topics would you like to see more of on LessWrong?
post by Emile · 2010-12-13T16:20:11.474Z · LW · GW · Legacy · 138 comments
Are there any areas of study that you feel are underrepresented here, and would be interesting and useful to lesswrongers?
I feel some topics are getting old (Omega, drama about moderation policy, a newcomer telling us our lack of admiration for his ideas is proof of groupthink, Friendly AI, Cryonics, Epistemic vs. Instrumental Rationality, lamenting how we're a bunch of self-centered nerds, etc. ...), and with a bit of luck, we might have some lurkers that are knowledgeable about interesting areas, and didn't think they could contribute.
Please stick to one topic per comment, so that highly-upvoted topics stand out more clearly.
138 comments
Comments sorted by top scores.
comment by Emile · 2010-12-13T17:20:39.666Z · LW(p) · GW(p)
Statistics - we've had a few posts on it, but some intro posts could be nice, or some discussion of things like principal component analysis, statistical significance, misuses of statistics, etc.
(by comparison, it seems like we've had maybe three different introductions to decision theory, even though it's a topic of less general use)
Replies from: Matt_Simpson, jsalvatier↑ comment by Matt_Simpson · 2010-12-13T18:17:55.062Z · LW(p) · GW(p)
In the near future, I plan to write a somewhat detailed discussion of the JAMA paper on vitamins that Phil Goetz and Robin Hanson have written about.
At some point I also plan to write an essay on what statistical models are actually models of, and the implications for statistical practice, just to make sure I thoroughly work out the ideas. When I do, I'll probably post it here for anyone who's interested.
(Posting this to make it more likely that I actually do these things)
↑ comment by jsalvatier · 2011-01-24T18:54:51.143Z · LW(p) · GW(p)
I've been thinking about this. I know a fair amount about Bayesian statistics (less about classical statistics), but I'm not sure where introductions to topics like this would fit in. For example, the topic that springs to my mind is an introduction to least-squares estimation: why it makes sense from a Bayesian perspective, what its assumptions and limitations are, and how to do it. However, this topic seems too parochial to make a post (and it's also covered in the statistics book I recommended). Would it fit in the discussion section?
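(To give a flavour of the Bayesian story, a minimal sketch, assuming a linear model with Gaussian noise and a flat prior on the coefficient: if $y_i = \beta x_i + \epsilon_i$ with $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$, the posterior is

$$p(\beta \mid y) \propto \exp\left(-\frac{1}{2\sigma^2} \sum_i (y_i - \beta x_i)^2\right),$$

so the most probable $\beta$ is exactly the least-squares estimate. The assumptions and limitations then fall straight out of the derivation: Gaussian noise, and a prior that doesn't pull the coefficient toward anything.)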
Replies from: Emile
comment by Scott Alexander (Yvain) · 2010-12-13T20:48:55.393Z · LW(p) · GW(p)
I'd like to see less about the application of rationality to various everyday life skills (which usually end up being bad self-help), less navel-gazing about how great we are and how great rationality is, fewer gimmicks, and more attempts to dissolve confusions, explain previously puzzling things, and point out logical pitfalls that one might otherwise fall into. This is what Eliezer did so well in the Sequences, and I can't believe that there aren't any good problems left to cover.
Replies from: jsalvatier, Emile↑ comment by jsalvatier · 2010-12-13T20:59:27.961Z · LW(p) · GW(p)
For reference, on the main page right now, here are my classifications:
2 navel gazing
3 reduction attempts
2 self help
2 meetup
1 charity (not sure how to classify)
↑ comment by Emile · 2010-12-13T20:51:37.017Z · LW(p) · GW(p)
Agree, agree, but ...
fewer gimmicks
... what do you mean by gimmicks, here?
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2010-12-14T12:19:34.200Z · LW(p) · GW(p)
I don't want to say anything specific enough to offend anyone, but some of the ideas I saw on the "neat stuff" thread seemed sort of gimmicky to me.
Replies from: Emile↑ comment by Emile · 2010-12-14T13:51:21.122Z · LW(p) · GW(p)
... yeah, guest posts and illustration contests are pretty gimmicky, I don't know what kind of fool would propose those :D
I thought you were talking about gimmicky stuff on LessWrong right now, of which I don't see many examples (maybe the Diplomacy games?). Navel gazing and bad self help though, yup, plenty of those (and my last two threads - this one and the "neat stuff" one - probably fall under navel gazing too).
comment by atucker · 2010-12-14T00:32:44.766Z · LW(p) · GW(p)
Direct advice for young (= precollege) people. They have pretty much their whole life ahead of them, and if you can get to them and give them advice before they start down some potentially limiting major life path/choice, then it's a huge gain. I personally want a bit of help with this...
You could do something yourself, but if you get 2 people to do that same thing then you effectively dedicate 2 more lifetimes to the effort. And doing so doesn't even eat up that much of your own time.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T00:36:28.435Z · LW(p) · GW(p)
Also, post college people. I graduated from a university a year ago, and just found myself with a lot of free time to try something. It would be nice to have some help, advice, ideas, etc. This would probably need a separate venue though, aside from lesswrong posts or discussions.
comment by Perplexed · 2010-12-13T18:10:24.689Z · LW(p) · GW(p)
I would like to see sequences of top level postings providing semi-technical tutorials on topics of interest to rationalists.
As one example of a topic: Game Theory
Actually, there is material here for several sequences, dealing with several sub-topics. We need a sequence on games with incomplete information, on iterated games, on two-person cooperative games (we have a couple articles already, but we haven't yet covered Nash's 1953 paper with threats), and on multi-person cooperative games (Shapley value, Core, Nucleolus, and all that).
Replies from: apophenia, prase↑ comment by apophenia · 2011-08-07T10:08:53.799Z · LW(p) · GW(p)
I've studied game theory and rationality, and I don't use game theory even when applying rationality to game design! I've used some of the nontechnical results (threats, from Schelling's book) to negotiate and precommit, but that's about it. Has someone else used game theory in real life?
Unless someone else responds to this comment, my guess is that this topic is of more interest to readers than it is of actual use.
Replies from: satt↑ comment by satt · 2011-08-07T21:30:50.819Z · LW(p) · GW(p)
I'm reading Tom Slee's book No One Makes You Shop at Wal-Mart, and it applies game theory to some dressed-up toy examples (prisoner's dilemma, coordination games, etc.) to demonstrate why agents making individual decisions to maximize their utility (representing consumers using the power of individual choices) can fail to maximize their total utility (representing the failure of individual consumer choice to secure optimal outcomes for consumers).
[Edit: I should note that Slee's book isn't very technical, so maybe it's more evidence against needing the full-blown mathematical machinery of game theory? I'm about 100 pages in and it hasn't gotten much more hardcore than tabulating the results of games in a payoff matrix and an informal explanation of Nash equilibrium.]
Replies from: None↑ comment by [deleted] · 2011-08-07T22:16:37.410Z · LW(p) · GW(p)
I should note that Slee's book isn't very technical, so maybe it's more evidence against needing the full-blown mathematical machinery of game theory?
The machinery is still there, even if you can't see it.
Replies from: satt↑ comment by satt · 2011-08-08T01:21:13.737Z · LW(p) · GW(p)
Good point. What I mean is that lots of readers could still get some mileage out of game theory without having to know the rigorous mathematics underlying it (although as you say the mathematics is still there even if someone doesn't know it's there). For example, I don't need to know how to use a fixed point theorem to prove the existence of Nash equilibria for all finite games to be aware of why the prisoner's dilemma is a sticky situation.
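(To make that concrete, here's roughly everything Slee's informal treatment needs for the prisoner's dilemma, as a few lines of Python; a toy sketch with standard textbook payoff numbers I've picked arbitrarily:)

```python
# Prisoner's dilemma payoffs as (row player, column player).
# Only the ordering of the numbers matters, not the values themselves.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player gains by
    unilaterally switching strategies."""
    other = {"C": "D", "D": "C"}
    row_payoff, col_payoff = payoffs[(row, col)]
    return (row_payoff >= payoffs[(other[row], col)][0]
            and col_payoff >= payoffs[(row, other[col])][1])

for row in "CD":
    for col in "CD":
        print(row, col, is_nash(row, col))
# Only (D, D) comes out True, even though (C, C) pays both players more:
# individually optimal choices failing to maximize total utility.
```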
↑ comment by prase · 2010-12-13T18:39:25.620Z · LW(p) · GW(p)
It is indeed quite surprising that there hasn't been systematic posting on this, given the amount of interest in prisoner's dilemmas and such things. Maybe it's because traditional game theory is bound to traditional causal decision theory, which is not very popular here, but nevertheless I would be interested to learn more about it.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T00:46:20.662Z · LW(p) · GW(p)
I found these Yale lectures on Game Theory to be a wonderful introduction to the topic. I think they cover all the basic points and lay a very good foundation. Perhaps we can find good resources like this and vote on them, taking the best and adding them alongside the Sequences.
comment by cousin_it · 2010-12-13T17:11:14.273Z · LW(p) · GW(p)
I have a meta-answer: I want LW to have a good shot at advancing humanity's understanding. For that it's not enough to be rational, you also have to be at the frontier of an area. As Vaniver puts it, to make progress you need an important unsolved problem "worked out to third order".
So far the only area where LW has succeeded in putting anyone at the frontier and pushing forward is... yeah... decision theory math. So I'm concentrating my effort there, finding and proving new theorems within the concept space discovered by Eliezer and Wei. If another such topic arises, I'll be happy to help there too.
Replies from: Perplexed↑ comment by Perplexed · 2010-12-14T00:08:18.781Z · LW(p) · GW(p)
I want LW to have a good shot at advancing humanity's understanding. For that it's not enough to be rational, you also have to be at the frontier of an area.
I disagree on two counts. For one, I doubt that LW is the best place to develop original research (i.e. to publish ideas and rough drafts and get feedback). For that, you want a different kind of forum or 'subscription list' or wiki or RSS feed. Something with fewer people, most of whom are likely to read what you write. Something more focused. And, more important than a subscription list, you also need a peer-reviewed journal, so that your finished research actually gets academically published. SIAI should create a journal. JFAI or IJFAI.
Second, I disagree that the only way to advance humanity's understanding is by doing original research. We can also advance mankind's intellectual heritage indirectly, by doing original pedagogy - by publicizing ideas and problems in an interesting way for a broader audience, and thus attracting more sharp people to working with you at the cutting edge.
Replies from: cousin_it, Vaniver↑ comment by cousin_it · 2010-12-14T00:57:14.711Z · LW(p) · GW(p)
We have a mailing list on decision theory. Many new results we post here come originally from there. LW served as catalyst for bringing us together, creating interest in decision theory. Hopefully LW can serve as catalyst for other topics too.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-14T11:13:37.734Z · LW(p) · GW(p)
We have a mailing list on decision theory. Many new results we post here come originally from there. LW served as catalyst for bringing us together, creating interest in decision theory.
Who is 'we' and where is said list?
To the extent that the 'we' is exclusive, it is not relevant to LW or this post; and to the extent that it is not, I'd like to know where it is. :)
Replies from: cousin_it↑ comment by cousin_it · 2010-12-14T11:26:35.248Z · LW(p) · GW(p)
http://groups.google.com/group/decision-theory-workshop
Replies from: wedrifid↑ comment by wedrifid · 2010-12-14T11:39:39.320Z · LW(p) · GW(p)
Thanks. If someone there accepts me I'll have a whole lot more stuff to add to my incremental reading collection. :)
↑ comment by Vaniver · 2010-12-14T18:24:41.874Z · LW(p) · GW(p)
We can also advance mankind's intellectual heritage indirectly, by doing original pedagogy
Strong agreement. When you say "advance mankind" instead of "advance an academic discipline" you need either technology or education, and rationality isn't the sort of thing that can be turned into technology easily.
comment by James_Miller · 2010-12-13T20:31:19.680Z · LW(p) · GW(p)
Discussions of general cognitive enhancing tools such as Adderall and N-Back.
Replies from: gwern↑ comment by gwern · 2011-01-02T01:41:26.011Z · LW(p) · GW(p)
I'd like to see more discussion of n-back too, but I'm not sure there is anything to discuss. There's only so much research on it and I think I've covered most of the anecdotal material as well in my DNB FAQ.
Replies from: None↑ comment by [deleted] · 2011-12-12T21:47:44.276Z · LW(p) · GW(p)
Do our own research?
Replies from: gwern↑ comment by gwern · 2011-12-12T22:07:20.582Z · LW(p) · GW(p)
Nobody really cares enough; e.g. some SIAI House people tried and abandoned n-backing.
EDIT: and another example just came in: http://lesswrong.com/lw/19f/open_thread_october_2009/5fvy?context=3
comment by XiXiDu · 2010-12-13T18:31:45.573Z · LW(p) · GW(p)
Teaching usable rationality skills by exemplifying the application of rationality to, and the dissolution of, real-life problems via the breakdown of decision procedures.
Examples: Annual flu vaccination; A Bayesian Take on Julian Assange
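(A minimal sketch of the kind of breakdown I mean, applied to the flu-shot example, in Python. Every number below is an invented placeholder; the point is the procedure, not the values.)

```python
# Assumed placeholder inputs - plug in your own estimates.
p_flu_unvaccinated = 0.10  # chance of catching flu without the shot
p_flu_vaccinated = 0.04    # chance of catching flu with the shot
cost_of_flu = 500.0        # lost work and misery, in dollar-equivalents
cost_of_shot = 30.0        # price plus hassle

eu_skip = -p_flu_unvaccinated * cost_of_flu
eu_shot = -cost_of_shot - p_flu_vaccinated * cost_of_flu

print("skip the shot:", eu_skip)  # -50.0
print("get the shot:", eu_shot)   # -50.0
# With these particular numbers the options are exactly tied, so the
# decision hinges entirely on how good your probability estimates are.
```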
Replies from: XiXiDu↑ comment by XiXiDu · 2010-12-15T11:50:06.792Z · LW(p) · GW(p)
I just came across this article which exemplifies my above comment.
comment by Davorak · 2010-12-13T19:36:08.995Z · LW(p) · GW(p)
I would like to see tests for rationality discussed, criticized and designed. The equivalent of an IQ test, but for rationality. Just like IQ scores, it would not have to be perfect to be useful. Knowing a person's level of rationality would be incredibly useful if it could be measured in a reproducible and accurate way.
Replies from: Nornagest, Emile↑ comment by Nornagest · 2010-12-13T19:43:45.057Z · LW(p) · GW(p)
Absolutely. I'm particularly interested in ways of measuring group rationality, since a lot of the more important dysfunctions of thought show up only through group dynamics and since it's hard to get anything really important done as a lone hacker or researcher.
I can think of some useful ways of measuring subvalues contributing to this value (even something as simple as paying attention at poker night goes a long way), but coming up with a workable general metric has eluded me so far.
↑ comment by Emile · 2010-12-13T20:23:13.670Z · LW(p) · GW(p)
I don't know if it can be reliably measured - my impression is that it's pretty damn hard, and that once you've heard about most tests (the kind used to demonstrate irrationality in the first place, i.e. those already discussed on Less Wrong), you won't fall for them; but that doesn't say much about how likely you are to think right in "real life" situations where your brain isn't primed by "this is a test of rationality".
Some sub-components can be reliably measured - such as calibration. Any others?
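(Calibration at least is straightforward to score: log predictions with stated confidence levels, then check each confidence bucket against the observed frequency. A minimal sketch in Python, with toy data standing in for months of logged predictions:)

```python
from collections import defaultdict

# (stated confidence, whether the prediction came true) - toy data
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

# A well-calibrated predictor's 90% claims come true about 90% of the time.
for confidence, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%}, got {observed:.0%} over {len(outcomes)} predictions")
```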
Replies from: Desrtopa, Craig_Heldreth, Davorak↑ comment by Desrtopa · 2010-12-14T03:36:54.667Z · LW(p) · GW(p)
My thought on hearing the proposal was that it would be impractically difficult, but on further consideration I suspect it would be much easier than creating a reliable test for intelligence. With proper effort, we should at least be able to beat the standard set by IQ tests.
↑ comment by Craig_Heldreth · 2010-12-13T21:27:23.310Z · LW(p) · GW(p)
Replies from: Emile↑ comment by Emile · 2010-12-13T22:21:01.226Z · LW(p) · GW(p)
I got through unscathed, but I suspect most atheists would. I don't think that test is very good at discerning rationality beyond "knows about logic" and "doesn't believe in God".
Replies from: Spurlock↑ comment by Spurlock · 2010-12-14T04:32:35.370Z · LW(p) · GW(p)
FWIW, I started it with "God Exists" as true and also got through it unscathed. But you're right, it doesn't seem to try very hard to present atheists with pitfalls and traps. I was at least expecting some kind of foul trickery about the Big Bang.
↑ comment by Davorak · 2010-12-13T23:06:36.010Z · LW(p) · GW(p)
I would expect it to be extremely hard. I do not limit the scope of these tests to simple paper question-and-answer. I would be OK if an MRI was needed, if the testee could not know the test was about rationality, if it required a group of well-trained actors to fool the testee, or a virtual simulation (within today's technology). Of course, tests that can be used/verified immediately by users are preferable.
In the end the techniques required may be prohibitive, but at least it would make known which techniques/technologies to watch/promote for the future.
comment by Emile · 2010-12-14T10:48:13.801Z · LW(p) · GW(p)
Economics
We had a few posts, but some intro-level posts would be nice.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T00:21:23.461Z · LW(p) · GW(p)
I would love to see how other LW members invest their money. Some questions would be: Do you do your own investing? Do you pick stocks? If yes, what's your method?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-12-15T03:57:22.825Z · LW(p) · GW(p)
Seconded.
Actually, more than this, if we have members who think they've got a good approach to investment, either their own or a published one they'd endorse, I'd love to see a public longitudinal "bake-off" of those approaches. That is, start with $N and see what each approach nets given real-world events over X months.
Similarly, let people go public with their predictions and confidence levels thereof.
Actually give us a chance to see how various approaches work.
comment by Perplexed · 2010-12-13T20:58:36.764Z · LW(p) · GW(p)
I would like to see sequences of top level postings providing semi-technical tutorials on topics of interest to rationalists.
As one example of a collection of related topics: Information Theory, K-complexity, and Computational Complexity
Replies from: jsteinhardt↑ comment by jsteinhardt · 2010-12-14T16:44:00.937Z · LW(p) · GW(p)
Does K-complexity == Kolmogorov complexity?
I am interested in what you want to know about these for. Like many things, I think a partial understanding of these topics can be counterproductive. (In other words, to be productive I think such posts should be as technical as necessary to get to the level of actually using the concept to solve problems.)
ETA: I just realized that my reply sounded rather negative. I do think such topics would be very interesting and a good addition to LW!
Replies from: Perplexed↑ comment by Perplexed · 2010-12-14T17:08:41.026Z · LW(p) · GW(p)
Does K-complexity == Kolmogorov complexity?
Yes, that was my intention.
I am interested in what you want to know about these for.
I didn't say I want to know about them. I said I would like to see tutorials so that other people commenting here would know something about them. :)
Like many things, I think a partial understanding of these topics can be counterproductive. (In other words, to be productive I think such posts should be as technical as necessary to get to the level of actually using the concept to solve problems.)
Yes, partial understanding is counter-productive. But it already exists. For example, I mentioned both K-complexity and computational complexity. What do they have in common? Practically nothing! But I've seen evidence that this is not really understood by all LW commenters. I would like to see presentations that are as technical as necessary to show that they are different concepts dealing with different problems.
As for getting to the technical level necessary to solve problems - well, has the concept of Kolmogorov or Chaitin complexity ever really solved a problem, rather than just clarifying a concept?
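(To gesture at the distinction: Kolmogorov complexity measures the description length of one fixed string,

$$K(x) = \min \{ |p| : U(p) = x \},$$

the length of the shortest program $p$ that makes a fixed universal machine $U$ output $x$; it is uncomputable and says nothing about running time. Computational complexity instead asks how the resources needed to solve a problem grow with input size - e.g. whether a decision problem is solvable in time $O(n^k)$. One is about the compressibility of a single object, the other about the asymptotic cost of a whole family of questions.)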
Replies from: Alexei, jsteinhardt↑ comment by Alexei · 2010-12-15T00:42:20.525Z · LW(p) · GW(p)
I would like to understand and know more about those concepts (as well as TDT, which is often mentioned here). I have a minor in mathematics, but I still feel like I can't do the real math unless it's explained to me step by step. I often try to understand various concepts by going to the respective wiki pages, but my eyes quickly glaze over and I lose interest.
It would be nice if someone described these concepts starting with the problem we are trying to solve and some possible approaches, and then discussed the specific approach. I think it's safe to assume knowledge of algebra and calculus, but all other concepts should be introduced and explained.
↑ comment by jsteinhardt · 2010-12-14T18:22:06.256Z · LW(p) · GW(p)
K-complexity, probably not. Information theory, yes. Complexity theory, well, I would guess so but I don't have any examples at hand.
Your point about partial understanding already existing is well taken.
comment by Emile · 2010-12-13T17:06:03.486Z · LW(p) · GW(p)
Data visualisation - making and understanding neat graphs that present data. It's not something analytical nerds like us are usually that good at, but it's a skill worth developing. Just look at all this sexy stuff!
There's some tension between presenting data honestly and presenting it attractively (see this discussion, which motivated me to write this post), and I think LessWrong can have higher standards than some blogs that just like pretty sexy pictures.
(edit: here's another discussion of the tension between data analysis and neat design)
Replies from: jsteinhardt↑ comment by jsteinhardt · 2010-12-14T16:38:04.350Z · LW(p) · GW(p)
MATLAB is pretty good at this. Are you interested in a tutorial on [data visualization in] MATLAB, or something else?
If people prefer free software, Octave is, I am told, a good free substitute for MATLAB, although I have no experience with it myself and would be unable to write a post on it.
Replies from: Matt_Simpson, Emile↑ comment by Matt_Simpson · 2010-12-14T17:00:03.722Z · LW(p) · GW(p)
If you're clever, R is wonderful for this and free. There's a fairly large working group in the stat department here at Iowa State dedicated to statistical graphics, and they exclusively use R. An alum designed the ggplot2 package.
Replies from: Perplexed, jsteinhardt↑ comment by jsteinhardt · 2010-12-14T18:24:19.921Z · LW(p) · GW(p)
Is there a good existing tutorial for R? If not, do you have the time and background to write one?
Replies from: Matt_Simpson, XFrequentist↑ comment by Matt_Simpson · 2010-12-14T18:55:09.835Z · LW(p) · GW(p)
There are plenty of tutorials for R - just google. I'm sure someone else has written a much better tutorial than I could write.
R is a full-blown programming language without a simple-to-use GUI (at least not in the base package), so if you don't have any programming experience it might be slow going. But the freedom of a programming language makes the learning curve worth it (if you've used SAS, you understand what I mean).
My grad program offers a 1-credit course in R to first-year grad students, and much of it is available online. When it comes to statistical graphics, the stuff on ggplot2 is particularly relevant, and in the first 2-3 sets of lecture notes.
See also the ggplot2 reference manual
↑ comment by XFrequentist · 2011-01-11T23:21:24.562Z · LW(p) · GW(p)
My personal favorite: Quick-R
↑ comment by Emile · 2010-12-14T17:07:29.216Z · LW(p) · GW(p)
That could be useful (R is open source too, and seems to be more popular than Octave or even Matlab).
What would also be useful would be tips on how to present data in an understandable, attractive, and non-misleading way (i.e. not a bunch of tables or a basic unreadable scatterplot).
I'm thinking of Razib Khan's posts, which tend to have nice eye-candy. Developing that kind of skill in LessWrongers would be neat.
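(A sketch of the sort of tips I have in mind, written in Python's matplotlib just because it's quick to sketch; the same moves work in R or MATLAB. The data is invented for illustration.)

```python
import matplotlib.pyplot as plt

years = [2006, 2007, 2008, 2009, 2010]
series = {"Group A": [12, 15, 19, 24, 30], "Group B": [10, 11, 13, 14, 16]}

fig, ax = plt.subplots()
for name, values in series.items():
    ax.plot(years, values)
    ax.text(years[-1], values[-1], " " + name, va="center")  # label lines directly

ax.set_ylim(bottom=0)                  # a zero baseline avoids exaggerating differences
ax.spines["top"].set_visible(False)    # strip chart clutter
ax.spines["right"].set_visible(False)
ax.set_xlabel("Year")
ax.set_ylabel("Widgets per capita")    # always name the units
plt.show()
```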
comment by Morendil · 2010-12-13T18:02:30.171Z · LW(p) · GW(p)
More on the "failures of traditional science" - e.g. Jonah Lehrer's "The Truth Wears Off".
Replies from: Davorak, prase↑ comment by Davorak · 2010-12-13T19:30:51.284Z · LW(p) · GW(p)
Do you have a link to the actual study and data for "The Truth Wears Off"? I have read a few summary articles sprinkled around the web, but it has always come off as pseudoscience to me. This is a huge claim and I have seen no evidence to back it up.
Replies from: Morendil↑ comment by Morendil · 2010-12-14T08:15:39.834Z · LW(p) · GW(p)
It's a popular article, not a study. It references several scientists; Jonathan Schooler on the "decline" of his results on verbal overshadowing, Michael Jennions, and so on. (I was able to find the full text by combining the article title and the term "PDF" in one Google query.)
Seth Roberts has some comments.
This blog, which I've just stumbled across and haven't read yet, has more commentary.
Replies from: Davorak↑ comment by Davorak · 2010-12-14T17:49:55.461Z · LW(p) · GW(p)
I was under the impression that Lehrer was referencing some journal article specifically. I took away that a combination of bad science (unrepeatable experimental setups, because not all of the variables were pinned down by the experimenters), publication bias, and randomness sometimes leads to an over-reporting of positive results. It really does seem like Lehrer overstated his argument to make the topic seem more important than it is.
Is there a point that the article is trying to make that you are interested in that I am missing?
edited for spelling + clarity
↑ comment by prase · 2010-12-13T18:36:57.466Z · LW(p) · GW(p)
It is a very interesting article. Do they have an idea about the cause? That is, do they believe that the diminishing observable effects are the result of earlier bias in experiments, or a reflection of some real change, like viruses evolving immunity against the drugs?
comment by Kingreaper · 2010-12-14T03:33:22.075Z · LW(p) · GW(p)
I think we need more (Defence against the) Dark Arts discussion.
And yes I do think we need to learn to use them, as well as defend against them. An irrational person cannot be convinced that rationality is good through the use of rationality.
Replies from: apophenia↑ comment by apophenia · 2011-08-07T09:47:44.015Z · LW(p) · GW(p)
By "via rationality" I assume you mean "via logical argument or sound science", which is an absurd substitution. Rationalists should win. The Dark Arts therefore are a type of instrumental rationality. That said, I still disagree, at least for some irrational people (let's roughly say anyone I could convince to eating a food that gives them a stomachache).
They can be convinced they should study [instrumental] rationality, it just requires you present unreasonably large amounts of evidence and don't use logical inference or experiments. (And when I say unreasonably large, that's for people in college studying science. For merely average twenty-somethings, you may need to beat them over the head with solid bricks of evidence.) Caveat: I do not often interact with allegedly common people who don't meet the minimum bar of adjusting expectations based on (sufficient) observation, so this comment does not apply to such persons. It is still a useful comment.
I.e. look, I used this thingy called rationality and I made/saved thousands of dollars, got a boyfriend, and fixed significant mental problems. Seemed to work for me okay. You need to go REALLY overkill on the evidence for non-science folks, though. Again, beat them over the head with it. Make it something that will help them personally, too. I've found it useful to get people to agree (not verbally and aloud, though that's an interesting experiment) that whatever mysterious method I used to do that, it would be a good thing to learn, BEFORE I revealed that the answer is something weird or "educational" sounding. This second half is only slightly dark-artsy (consistency bias).
Replies from: Kingreaper↑ comment by Kingreaper · 2011-08-13T08:17:26.992Z · LW(p) · GW(p)
No, by via rationality, I mean via rationality. You cannot use the rational part of their brain to convince them that it is good to be rational, because the rational part of them already knows that, it's just not in charge.
Convincing them, through the rational part of themselves, that eating a certain food gives them a stomachache, is often easy. But that's a completely different problem, with no real relation to the problem I was talking about.
Replies from: apophenia↑ comment by apophenia · 2011-08-20T09:49:16.082Z · LW(p) · GW(p)
So, let's call the thing I'm talking about "winning". It is EXTREMELY helpful although not logically necessary to think winning is a good idea in order to win. I'm talking about how to convince people of that helpful step, so they can, next, learn how to win, and finally, apply the knowledge and win.
Either you're talking about a rationality that doesn't consist of winning, or I'm hearing: "You cannot use the 'winning' part of their brain to convince them that it is good to win, because the 'winning' part of them already knows that, it's just not in charge." Why on earth should I restrict myself to some arbitrary 'winning' part of their brain, if such a thing existed, to convince them that it's good to win? That sounds silly.
Please let me know if I even make sense.
Replies from: Kingreaper↑ comment by Kingreaper · 2011-08-20T16:38:52.092Z · LW(p) · GW(p)
That is in fact what I'm saying. It's rational to use the dark arts to convince people to be rational, and irrational to use rationality to try to persuade people to be rational.
Yes it would be silly (ie. irrational) to think otherwise. However many otherwise rational people do think silly things.
comment by Alexandros · 2010-12-13T16:42:38.993Z · LW(p) · GW(p)
Entrepreneurship would be nice.
Replies from: Louie, wedrifid, Davorak↑ comment by Louie · 2010-12-14T02:20:21.911Z · LW(p) · GW(p)
As an entrepreneur, this strikes me as an incredible waste of our community's intellectual ability.
Being an entrepreneur is like being a rapper. You need to deal in the narrow band of ideas that are just good enough that you can make them look ridiculously popular, but also terribly flawed enough that customers need to pay you for a new version of your stuff next quarter. And these great looking, deeply flawed ideas have to be something that you can use to destroy markets with near monopoly powers.
I'd rather see Less Wrong explore actually good ideas... not business ideas.
Also, good entrepreneurship is the ultimate combo of every "dark art" that people repudiate on this site. Just for starters, it requires TONS of self-deception + other deception if you expect to have the motivation to work on it or the investment needed to succeed.
However, I could be wrong here. The alternative of working for others is even worse. It gives you terrible incentives to improve yourself (or even maintain yourself) since you get no improved earnings for better performance... merit pay raises are a joke.
Replies from: mosasaur↑ comment by mosasaur · 2010-12-14T14:43:32.777Z · LW(p) · GW(p)
Just like certain drugs can trick the human brain into thinking they are beneficial, economic incentives are making us admire billionaires and corporations. But they are evil. They influence our media to report positively on them. Non-cooperative individuals or organizations receive no money, no ads. But ads interrupt our thinking and turn us all into attention-disorder cases. I agree with the points you offer, but I very much disagree with the idea that because such biases distort our minds and our economies they are therefore not worth discussing. To the contrary, the economy drives technological development, which in turn leads to AI and further technology. If the biases in our thinking are present at such a basic level, this cannot be ignored. I would go as far as saying that your comment addresses one of the most important topics I have seen here in a long time.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T01:14:40.075Z · LW(p) · GW(p)
I would really like to hear more about this topic. I am actually trying something entrepreneur-ish right now (see my post on LifeTracking). I want to help people, and I also want to make money, but not in a way that damages or hurts people. I was going to put ads in my program, but then I just couldn't bring myself to do it. I personally hate ads and I wouldn't want to push them onto anyone.
Replies from: mosasaur↑ comment by mosasaur · 2010-12-15T08:24:38.459Z · LW(p) · GW(p)
You'd like to hear more? According to all the down votes there aren't many like you.
Unfortunately you won't like what I have to say either, I think. I hate your life tracking idea. I've made too many attempts to maintain my weight by calorie counting and weighing my food. Even when I was successful, it destroyed my motivation and killed my intelligence. I believe many overweight people are really eating and eating because there are not enough vitamins and other nutrients in their food, especially vitamin D.
I also hate the fact that I up voted the GP when his "save the world" idea, using money and mathematics, is total crap. As stupid as Kaj-Sotala's career post. How can academia save the world when it is the root of corruption? Anyone succeeding in such an environment is at most an idiot savant, not smart or anything; it doesn't matter how much math they know.
I like the idea that you want to avoid ads because they damage people. But what is worse is that they damage your message, by showing you to affiliate with corruption. You still want to make some money, even though the money flow and supply is almost totally controlled by evil banks and corporations.
What to do if the fight is so hard? For one thing, everything helps. I almost deleted my post here because of the downvotes. Now I know there is at least a little controversy. So I'll try and give some ideas.
My main idea is acquiring independence. Just like Wikipedia freed our knowledge from commercial and academic lock-in, Wikileaks and other such organizations are now freeing our news and information sources. Next we need to free ourselves from commercial manufacturing, for example by having 3D printers in our garden sheds. Another form of independence would be freeing the physical basis of the Internet itself by organizing ourselves into wireless mesh networks. Most handhelds are now physically capable of functioning as network nodes that relay signals to other handhelds. If you would write an app for that, it would help.
Myself, I am interested in freeing the world from commercial energy providers by trying to get myself off the grid, using solar energy or wind energy or by finding some other small scale energy generating technology. The fact that I haven't been able to do so yet doesn't change the path I am planning to take.
What we have to do is nothing less than to rebuild the world from the ground up, using open source and small-scale user-controlled technology. We even have to invent our own money system. It feels a little like implementing a computer language in itself, like the Python PyPy project. Except that the new world is not controlled by the corrupt entities of commerce, academia and government that control it now. Luckily, I think it is possible to do this step by step.
So let the downvotes begin. Is there a way to delete this account in case I get fed up again with the downvotes and want to protect myself against further misguided self-damaging attempts to help you ungrateful and hostile testosterone-driven fools?
By the way, EY has turned into a dictator. Or maybe he always was one.
Replies from: Emile↑ comment by Emile · 2010-12-15T09:10:26.472Z · LW(p) · GW(p)
So let the downvotes begin. Is there a way to delete this account in case I get fed up again with the downvotes and want to protect myself against further misguided self-damaging attempts to help you ungrateful and hostile testosterone-driven fools?
Why self-damaging? Your brain may be inclined to tell you that losing karma on a website is "bad for you", but don't believe it.
Also, your comments are a bit of a self-fulfilling prophecy - complaints about how "my comment will be downvoted for failing to conform to groupthink!" are annoying.
(and geez, "testosterone-driven fools"? We're a bunch of nerds! Reading Harry Potter fanfiction! And using gender-neutral pronouns!)
Replies from: mosasaur↑ comment by mosasaur · 2010-12-15T09:54:58.355Z · LW(p) · GW(p)
So all my comments (and those of the people replying to me too!) are now beneath a visibility threshold. Contrary to popular opinion I believe this reflects badly on this site, not on me.
Even you don't react to my content, only to my attitude.
We're now both invisible because a few people prevent all others from seeing us.
"testosteron driven fools"?
QED.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T15:16:11.683Z · LW(p) · GW(p)
Neither my comment nor Emile's is below the threshold. Check your settings. If you want to see bad/"controversial" comments (I put those in quotes because I believe LW is actually pretty good at not down-voting controversial statements; I myself have been down-voted many times, given my # of comments, and each time it has been for a good reason), you can adjust your threshold or just not hide those comments at all.
I think you are being down-voted because you throw out broad statements without any support. I've heard that it actually helps people to track exactly how much they eat and that fact alone usually causes them to eat less. Who is right? Neither of us until we show some concrete evidence. I am citing this interview. What are your sources? Though to be honest, LifeTracking app is a lot more than just a weight tracker and I would hate for people to think about it only in those terms. You can track anything with it.
"Academia is the source of corruption"? "Any one succeeding in such an environment is at most an idiot savant"? "The money flow and supply is almost totally controlled by evil banks and corporations"? Those are all very very heavy statements and, as far as I can tell, not at all obvious. Would do you have to support them?
I understand your desire to be self-contained, self-sustaining, and independent, but you will spend most of your life trying to achieve that without accomplishing much else. It's ok to rely on other people, on tools, even on corporations; you just have to understand and accept the costs. And many times those costs are very reasonable for what you are getting out of it. Starting from scratch is nice to imagine, but if you've done programming, you'll understand that in the end you almost always end up in the same sort of mess you tried leaving. To sustain any kind of technology you need to rely on entities outside yourself, and they will always have the option of cheating you. Punish the cheaters, reward the cooperators. Iterated prisoner's dilemma with tit-for-tat strategy.
Also, if you are interested in "currency from ground up" check out BitCoin.
↑ comment by wedrifid · 2010-12-14T11:15:28.149Z · LW(p) · GW(p)
This has been done by others who are entirely better informed on the subject than us.
Replies from: Alexandros↑ comment by Alexandros · 2010-12-14T11:58:24.312Z · LW(p) · GW(p)
I know of Rolf Nelson's series of posts on the matter, all other links appreciated
Replies from: wedrifid↑ comment by wedrifid · 2010-12-14T13:05:27.220Z · LW(p) · GW(p)
Some of these are ok.
Replies from: Alexandros↑ comment by Alexandros · 2010-12-14T13:13:54.655Z · LW(p) · GW(p)
Yeah, not quite what I had in mind.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-14T13:29:57.497Z · LW(p) · GW(p)
I would like to see LW posts of the kind you are after, but unfortunately the LW community is not particularly suited to the task, and some of the norms here are hostile to accurate epistemic knowledge of elements of the subject in question.
Most of the links there will provide better information than what you would get here.
Replies from: Alexandros↑ comment by Alexandros · 2010-12-14T16:10:46.071Z · LW(p) · GW(p)
At this point I feel like I'm stating the obvious but:
The thread asked what I'd like to see. I answered.
How in the world would you be able to make inclusive claims about what expertise exists in the community?
↑ comment by Davorak · 2010-12-13T19:40:24.472Z · LW(p) · GW(p)
I would like to know your reasoning; could you elaborate? Why would you like to see more entrepreneurship on LessWrong?
Replies from: Alexandros, Nick_Roy↑ comment by Alexandros · 2010-12-13T20:05:21.810Z · LW(p) · GW(p)
Well not entrepreneurship as such, but more like discussions about entrepreneurship. There's all sorts of things in the process that could benefit from rationality, and it'd be nice to see them expanded on: How to pick an idea, How to build a team, Whether starting a business is a good use of time to begin with, etc.
↑ comment by Nick_Roy · 2010-12-14T00:26:02.103Z · LW(p) · GW(p)
Discussions on the relationship between entrepreneurship and rationality would forward SIAI and FHI's common goal of increasing the amount of money spent on existential risks. There is an Existential Risk Reduction Career Network, but no entrepreneurship discussion group devoted to reducing existential risks that I know of. Entrepreneurship offers an income-earning alternative for people facing situations where standard careers are sub-optimal.
comment by Perplexed · 2010-12-13T20:40:35.702Z · LW(p) · GW(p)
I would like to see sequences of top-level postings which consist of systematic, chapter-by-chapter reviews of books which are of interest to rationalists.
As one example of such a book: Pearl: "Causality"
Replies from: wedrifid↑ comment by wedrifid · 2010-12-14T11:03:01.356Z · LW(p) · GW(p)
I like the idea but would perhaps go with a couple of posts on the most significant themes rather than chapter-by-chapter reviews. Or, in the case of chapter-by-chapter reviews, not make it one post per chapter!
(Chapter by chapter reviews have flopped in the past but perhaps wouldn't have if they were not each presented as top level posts.)
comment by komponisto · 2010-12-13T23:07:56.853Z · LW(p) · GW(p)
Applied epistemic rationality:
Using the techniques of rationality and the language of Bayesian probability theory to help ourselves and each other sort out truth from falsehood in the world out there.
I.e., more stuff like this. (I've done mine, and am eager to participate in someone else's!)
Replies from: Louie, DanielVarga, steven0461↑ comment by Louie · 2010-12-14T01:46:20.404Z · LW(p) · GW(p)
I want to do an applied Bayesian analysis of what credence I should give to the Sierpinski conjecture being true.
I've been thinking that perhaps the small covering set sizes for known Sierpinski numbers, and the projections on where we expect to find primes (see A61), are enough to be "effectively certain" of the conjecture's truth even without actually having the prime counter-examples in hand. For instance, I feel like I should be able to quantify the value of Bayesian evidence that each primality test contributes to the overall project goal of proving the Sierpinski conjecture. And if I can show that I expect to update in favour of the conjecture's truth after hearing of the results of X more primality tests, then I should also be able to update on that now, right?
Working this out for a problem I'm familiar with might help us get better at analysing the truth of other scientific conjectures in general. But the reason I haven't done this so far is that despite understanding Bayesian reasoning abstractly and the rules about conserving probability, I don't know how to formally select a prior for the analysis. I realise this is probably not a big problem as long as it's not pathologically bad -- can I just say 50% true or maybe 90% since someone smart who I respect went to the trouble to publish a paper saying he believed it? Guess that's not hard, but how do I calculate the value of incremental evidence in favour of the conjecture's truth that has accumulated over the years as more and more possible ways for it to be false have been eliminated?
I picked this because I've been running the distributed computing system trying to solve this problem through brute-force computational means for the past 8 years, without actually knowing how sure I should be about the actual thing I'm trying to prove. It might be good to know what I'm doing, huh?
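(On that last question, here's a toy sketch of the constraint in Python, with invented likelihoods standing in for the real number-theoretic ones. Conservation of expected evidence says your current credence must equal the expectation of your posterior, so you cannot expect a test to move you in a known direction:)

```python
prior = 0.9             # assumed credence that the conjecture is true
p_pass_if_true = 0.6    # assumed chance the next test is "favourable" if true
p_pass_if_false = 0.2   # assumed chance of the same outcome if false

p_pass = prior * p_pass_if_true + (1 - prior) * p_pass_if_false

# Bayes' rule for each possible outcome of the test.
posterior_if_pass = prior * p_pass_if_true / p_pass
posterior_if_fail = prior * (1 - p_pass_if_true) / (1 - p_pass)

expected_posterior = (p_pass * posterior_if_pass
                      + (1 - p_pass) * posterior_if_fail)
print(expected_posterior)  # 0.9 - exactly the prior
# So if I foresee that X more test results would raise my credence however
# they come out, that update belongs in my prior today.
```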
↑ comment by DanielVarga · 2010-12-15T11:51:07.539Z · LW(p) · GW(p)
Let me mention that Nate Silver at 538 did something quite similar to your Knox post, just today:
http://fivethirtyeight.blogs.nytimes.com/2010/12/15/a-bayesian-take-on-julian-assange/
By the way, I downvoted the parent. It is nothing personal, but debates like those induced by the Knox post are not what I'd like to see more of here.
↑ comment by steven0461 · 2010-12-14T00:04:34.797Z · LW(p) · GW(p)
I keep wanting to offer this as a test case, but I'm worried that if it holds up it will make LW look stupid.
comment by Alexandros · 2010-12-13T19:32:27.013Z · LW(p) · GW(p)
It'd be nice if we could develop some semi-formal process by which we'd collaboratively determine where the evidence points for a certain question.
Replies from: Jack↑ comment by Jack · 2010-12-13T20:33:12.231Z · LW(p) · GW(p)
Along these lines, I still maintain the most useful exercise I ever engaged in here was the Amanda Knox survey. We came at an issue of which we had no better knowledge than the general public. Nor was there any reason to expect us to agree - the issue bore no connection to the beliefs we share mostly as a result of self-selection (materialism, atheism). We independently looked at the evidence, mostly came up with similar results, and then kept talking until nearly all disagreements were resolved. It still strikes me as a stunning example of how successful our methods can be. I'm desperate to do more of these. Of course, the issue is a lack of cases. We might try scientific controversies instead of legal ones. Unfortunately, few options will have the excitement of a murder trial involving pretty upper-middle-class white girls. But using this stuff is so empowering and informative, I have to think we don't do this enough. (Though I have found the Diplomacy game helpful in this regard.)
Replies from: komponisto↑ comment by komponisto · 2010-12-13T23:11:59.293Z · LW(p) · GW(p)
Along these lines, I still maintain the most useful exercise I ever engaged in here was the Amanda Knox survey.
Wow, I really appreciate this comment, especially on the anniversary of my "the answer is..." post!
I've made the suggestion into its own top-level comment.
comment by Perplexed · 2010-12-13T20:30:58.630Z · LW(p) · GW(p)
I would like to see sequences of top level postings providing semi-technical tutorials on topics of interest to rationalists.
As one example of a topic: Moral Philosophy
EY has commented that his MetaEthics sequence is one of his least successful. Can anyone else do better? The 'official' ethical position here seems to be a kind of utilitarianism, but we ought (for some values of 'ought') to also know something about competing approaches to ethics, including deontological ethics, virtue ethics, and naturalistic ethics (Nozick, Gauthier, and Binmore, for example).
I know almost nothing about virtue ethics, for example, but it is intriguing because it seems to provide the most natural solution to Parfit's hitchhiker problem and other decision problems where 'good reputation' is involved.
Replies from: wedrifid, Vaniver↑ comment by wedrifid · 2010-12-14T11:05:22.085Z · LW(p) · GW(p)
EY has commented that his MetaEthics sequence is one of his least successful. Can anyone else do better? The 'official' ethical position here seems to be a kind of utilitarianism, but we ought (for some values of 'ought') to also know something about competing approaches to ethics, including deontological ethics, virtue ethics, and naturalistic ethics (Nozick, Gauthier, and Binmore, for example).
We have deontological ethics covered: there's at least one post by Alicorn, and the actual direct moral assertions around here are usually significantly deontological in nature.
↑ comment by Vaniver · 2010-12-14T18:20:35.242Z · LW(p) · GW(p)
EY has commented that his MetaEthics sequence is one of his least successful. Can anyone else do better?
I started a post and fizzled about 2/3rds of the way through. I may approach it again with a fresh mind, and another topic I'm thinking about touches on it at the fringes, and so may be a good introduction.
comment by Perplexed · 2010-12-13T17:58:00.378Z · LW(p) · GW(p)
In the discussion section, I would like to see more short reviews of books, lectures, debates, computer games, blogs, forums, and software products, with links to online resources.
As one specific example: Video lectures from TED, Edge, Blogging Heads or just YouTube
comment by PeerInfinity · 2010-12-16T16:45:48.821Z · LW(p) · GW(p)
Existential Risks
More specifically, topics other than Friendly AI. Groups other than SIAI and FHI that are working on projects to reduce specific x-risks that might happen before anyone has a chance to create a FAI. Cost/benefit analysis of donating to these projects instead of or in addition to SIAI and FHI.
I thought the recent post on How to Save the World was awesome, and I would like to see more like it. I would like to see each of the points from that post expanded into a post of its own.
Is LW big enough for us to be able to form sub-groups of people who are interested in specific topics? Maybe with sub-reddits, or a sub-wiki? Regular IRC/Skype/whatever chat meetings? I still haven't thought through the details of how this would work. Does anyone else have ideas about this?
comment by XFrequentist · 2010-12-14T17:55:22.768Z · LW(p) · GW(p)
Medicine.
Obvious areas of practical interest are pandemics and aging, but I'm sure there are others.
It's also home to an alternate rationality universe called Evidence-Based Medicine, which may have some lessons for LW on how to engage a hostile audience, and for which LW might have further lessons on reductionism and statistics.
comment by fortyeridania · 2010-12-14T01:00:09.186Z · LW(p) · GW(p)
I'd like to see more posts about emotion. Feelings influence decisions and beliefs quite strongly, yet there's very little on this site (that I've seen) that deals with this topic. Exceptions include posts on misguided [compassion](http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/).
Depression, anger, lust, and, of course, happiness, can all lead people astray; under their influence, people often make poor decisions (i.e. ones that harm their own futures or harm others).
Naturally, emotions can also be a force for good. Nevertheless, the relationship between rationality and emotion is complex, and I'd like to know more.
comment by Emile · 2010-12-13T16:48:19.514Z · LW(p) · GW(p)
Postmodernism - I've been intrigued since David mentioned it :
(Postmodernism is not inherently rubbish - it is indeed a fantastically useful tool in criticism and understanding of human culture, and other human activities that might as well be culture. As Lucidfox points out, rather more is relative than most people assume, and postmodernism is useful in working out what that is. Any writer should IMO have a working familiarity with its tools. However, some proponents really don't realise that reality exists, and they end up slightly embarrassed.)
(There's already an article on RationalWiki)
Replies from: prase, Jack, Daniel_Burfoot↑ comment by prase · 2010-12-13T18:47:04.857Z · LW(p) · GW(p)
So, even if postmodernism is madness, is there a method in it? I doubt it, but am curious. The RationalWiki article is not very convincing; it just asserts that postmodernism is somehow valuable, but does not say how.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-12-13T19:34:40.203Z · LW(p) · GW(p)
People who habitually project their own wiring onto their environment can benefit from a systematic approach to short-circuiting that projection. A lot of postmodernist traditions attempt to encourage such an approach, and when they succeed, they are beneficial.
That said, IME the study of cognition, evopsych, and other cultures achieves much the same benefit more cost-effectively.
↑ comment by Jack · 2010-12-13T19:24:56.429Z · LW(p) · GW(p)
This is rather vague actually. Postmodernism lumps together hundreds of ideas and concepts. These ideas vary greatly in their usefulness and their bullshit content. Any statement broad enough to cover all of postmodernism is trivial and unhelpful- I'm not sure there is a "Postmodernism for rationalists" waiting to be written. For a general overview the wikipedia article is really, really good.
Also, attention RationalWiki writers: Derrida was not the founder of postmodernism.
Replies from: Emile↑ comment by Emile · 2010-12-13T20:19:11.731Z · LW(p) · GW(p)
I'm not sure there is a "Postmodernism for rationalists" waiting to be written.
But maybe there are some specific topics or methods or approaches that would fall under "postmodernism" or "critical theory" or "deconstruction", and that could be interesting or useful to rationalists.
Or something like "this is how postmodernists would approach such-and-such a subject, and their analysis has some value in this case".
↑ comment by Daniel_Burfoot · 2010-12-13T20:47:05.342Z · LW(p) · GW(p)
Postmodernism - I've been intrigued since David mentioned it :
I'm interested in the general category of social science ideas that were big and important 50 or 100 years ago, but have since fallen out of favor. I'm sure guys like Freud, Jung, Weber, Marx, and Durkheim had some interesting ideas. I'd like to know what those ideas were, and why people at the time took them so seriously.
Replies from: Vaniver↑ comment by Vaniver · 2010-12-14T18:33:26.749Z · LW(p) · GW(p)
I'm sure guys like Freud, Jung, Weber, Marx, and Durkheim had some interesting ideas. I'd like to know what those ideas were, and why people at the time took them so seriously.
So... my opinion here is: generally, if someone's originals are worth reading, you'd know already. It is worth reading The Wealth Of Nations, because Adam Smith was that clever and that comprehensive. Is it worth reading Marx? Well... not really. His method of explanation is pretty poor, actually.
With Freud, you have one core insight that is truly revolutionary (the primary human drive is libido) and then a bunch of mind projection fallacy. Humans actually have an anti-incest impulse such that they don't find people they grew up around attractive. Freud's mother was totally hot, though (or something), and so Freud projected his Oedipus complex onto everyone else. The rest of what he posited is generally totally wrong. (Not as familiar with Jung / Weber / Durkheim.)
And so if someone asks me whether they should read Smith xor Hayek, I have a hard time answering. If someone asks me whether they should read Darwin xor Dawkins, I favor Dawkins (but haven't read much of Darwin). When someone asks me whether they should read modern psychology xor Freud, there's no contest. You could read something like The Red Queen by Ridley and get a much fuller expansion of Freud's insight than you could by reading Freud.
(xor = exclusive or; if it's regular or, the answer for the first one, at least, is "both!")
comment by Perplexed · 2010-12-13T20:45:12.094Z · LW(p) · GW(p)
I would like to see sequences of top-level postings which consist of systematic, chapter-by-chapter reviews of books which are of interest to rationalists.
As one example of such a book: Rawls: "A Theory of Justice"
This one is important because Rawls's device of "the original position" is one clever way to make utilitarianism mathematically respectable, in that it permits interpersonal comparison of utilities.
Replies from: Morendil↑ comment by Morendil · 2010-12-14T08:47:00.112Z · LW(p) · GW(p)
This one is important because Rawls's device of "the original position" is one clever way to make utilitarianism mathematically respectable, in that it permits interpersonal comparison of utilities.
What !?
Replies from: Perplexed↑ comment by Perplexed · 2010-12-14T15:19:13.992Z · LW(p) · GW(p)
I would like to thank you for your expression of incredulity. It forced me to look at the question a little more closely, and led me to this excellent paper by Binmore. If I had read it first, I would not have been so complimentary about Rawls. I had forgotten just how different Rawls's 'Veil of Ignorance' is from Harsanyi's. And it is Harsanyi's version that deserves our respect, not Rawls's. Nevertheless, I think I was technically correct. Rawls is mathematically respectable, however deficient he is on more realistic grounds.
There are several issues here worth discussing, including:
- Rawls's maximin vs Harsanyi's lottery (Rawls, in effect, assumes infinite risk aversion)
- Objective vs subjective interpersonal comparisons. Do people agree on their interpersonal comparisons? Does it matter, since decision-making is subjective in any case?
- Our lack of direct access to other people's preferences (which is also a problem in standard complete-information game theory even if you don't attempt interpersonal comparisons).
I probably ought to put together a top-level posting on this topic - if only to clarify my own thinking. But I'm too lazy right now. So instead, I'll just reread Binmore's paper and maybe check out some of his references.
Replies from: Morendil
comment by Pavitra · 2010-12-14T03:07:56.505Z · LW(p) · GW(p)
Something for those of us from the Bardic Conspiracy.
The actual causal reason I suggest this is that I like Less Wrong, think good prose is a valuable skill, and therefore want these things to intersect. However, I could also rationalize...
(beware of persuasion)
Rationality:art::medicine:transhumanism. Clearly there is such a thing as good and bad art; therefore being good at it must be a comprehensible skill, which can be studied and formalized into a proper field of engineering. LW seems uniquely well-qualified to do so, and the project, if successful, should even be able to help raise support to fight existential risk -- "The one thing that actually has seemed to raise credibility, is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board."
comment by Perplexed · 2010-12-13T20:51:20.782Z · LW(p) · GW(p)
I would like to see sequences of top-level postings which consist of systematic, chapter-by-chapter reviews of books which are of interest to rationalists.
As one example of such a book: Kahneman, Slovic, & Tversky: "Judgment Under Uncertainty: Heuristics and Biases"
In some sense, EY has already covered much of this, but it would be worthwhile to have the material covered again by someone else. It is material which is close to the heart of what LessWrong stands for.
comment by Davorak · 2010-12-13T19:27:05.833Z · LW(p) · GW(p)
I would like to see step-by-step guides on how to create and maintain local rationalist meetup groups. Tutorials, example debates, how to moderate, etc.: all of the resources that can possibly be provided online to lower the barrier for people who want to organize locally.
I have nothing more than firsthand observation, but it seems much "easier" to propagate a culture long-term (potentially across generations) with in-person meetups. If LessWrong wants to have the largest possible impact on humanity's rationality, this seems like the best option.
comment by jaimeastorga2000 · 2010-12-13T18:48:26.138Z · LW(p) · GW(p)
I would like to see more practical techniques. I mean, don't get me wrong, reading LessWrong has slowly changed my outlook on life in many ways that are likely to have big practical effects in the future, but there is something gratifying and awesome about the immediate feedback of reading something, trying it out in real life, and seeing its effects.
comment by Perplexed · 2010-12-13T17:55:13.552Z · LW(p) · GW(p)
In the discussion section, I would like to see more short reviews of books, lectures, debates, computer games, blogs, forums, and software products, with links to online resources.
As one specific example: Singularity-related SF&F novels or stories
comment by taryneast · 2011-01-14T19:12:39.184Z · LW(p) · GW(p)
I may have missed them, but what I'd most like to see is a set of posts on interacting harmoniously with non-rationalists, or maybe on getting along in an irrational world.
There are plenty of people in my life who are not (and never will be) interested in becoming more rational, and who vehemently resist when I try to explain why it's more effective to cast out a certain favourite theory/technique of theirs.
Yet as they tend to be family (or colleagues), I still need to get along with them in a way that is happy and productive for both of us. Ignoring them, or getting angry or contemptuous, simply does not help. Yet it's often extremely difficult to make life work when what you believe and what they believe are so diametrically opposed.
The example that springs to mind is work-related: operating harmoniously in a company that prefers over-confident time-estimates...
Are there successful techniques that people have for working alongside a company that does this? (I have plenty that don't work) :)
The posts that most spring to mind would be "Cultish Countercultishness" and "Lonely Dissent"... but I'd love to see a whole sequence on the subject.
...I'm still only partway through the sequences... let me know if I've just missed that one.
comment by RHollerith (rhollerith_dot_com) · 2010-12-14T22:44:52.626Z · LW(p) · GW(p)
The relatively few times people here have written about human health or "applied human biology" have been useful to me, and I'd like to see more.
comment by jsteinhardt · 2010-12-14T18:33:34.442Z · LW(p) · GW(p)
No one has mentioned AI yet? (Not Friendly AI, just regular AI.) Is there any interest in this? I've been intending to write a few posts about it if I ever find the time.
Replies from: Vaniver↑ comment by Vaniver · 2010-12-14T18:36:06.772Z · LW(p) · GW(p)
I have an interest in regular AI but as far as I can tell it's mostly distinct from rationality. AI seems much more about task completion and decision-making in narrow regions with strongly identified problems; AGIs and humans are much more about task selection and decision-making in broad regions with weakly identified problems.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2010-12-14T18:51:06.639Z · LW(p) · GW(p)
I meant the parts of AI relevant to AGI. While certain subfields of AI are specialized task-solvers (and the researchers know it), there are plenty of fields that want to solve the general problem.
Isn't AGI also mostly distinct from rationality? I don't get the connection.
comment by Larks · 2010-12-14T23:23:52.934Z · LW(p) · GW(p)
Idea for a thread: people post a hypothesis H and evidence E such that:
- They believe H is probably true
- They believe most people do not think H true
- They believe most people would rapidly believe H was true if shown E
- They believe most people would appreciate having learned H, once they believed H and knew E.
comment by Kevin · 2010-12-14T16:13:34.058Z · LW(p) · GW(p)
Correct political analysis.
Replies from: Alexei↑ comment by Alexei · 2010-12-15T01:57:27.215Z · LW(p) · GW(p)
People here don't like discussing politics because it's a very charged topic, it's very easy to become defensive/offensive, and it doesn't yield very much utility. Article
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-12-15T02:02:45.014Z · LW(p) · GW(p)
Yes, but we are going to need to deal with it eventually. If we're only rational by staying away from the issues that we are actually very emotional about, then we aren't doing anything very impressive. "Refining the art of human rationality, except in the areas that normally inflame emotions" doesn't sound like a great motto.
Replies from: Emile, Alexei, wedrifid↑ comment by Emile · 2010-12-15T09:23:52.410Z · LW(p) · GW(p)
I don't think we, as a community, are strong enough at rationality to deal with it. I'd like to be wrong, but I'm not in much of a hurry to find out.
And even if we are good enough, and have a big rational discussion on a hot-button and usually divisive topic, and come out mostly agreeing that the Herp position is right and the Derp position is wrong - that will just make the site more attractive to less-rational Herpists, and give Derpists a pretext to dismiss LessWrong because "they're obviously motivated by Herpism".
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-12-15T17:29:32.076Z · LW(p) · GW(p)
And even if we are good enough, and have a big rational discussion on a hot-button and usually divisive topic, and come out mostly agreeing that the Herp position is right and the Derp position is wrong - that will just make the site more attractive to less-rational Herpists, and give Derpists a pretext to dismiss LessWrong because "they're obviously motivated by Herpism".
This argument seems like it might prove too much. If someone said this about an issue that isn't political (say, the existence of God), we would reject it. What gives politics such a unique status? Certainly religious opinions create about as much tribalism.
Replies from: WrongBot, Emile↑ comment by WrongBot · 2010-12-15T18:09:51.199Z · LW(p) · GW(p)
Religious questions are much easier than political questions, and can be answered with much more certainty. Political questions are also detrimental to sane discussion because they often rest upon (disguised) moral questions, which encourage tribalism and are harder still to resolve, if they can be resolved at all.
↑ comment by Emile · 2010-12-15T22:01:15.808Z · LW(p) · GW(p)
Religious opinions are divisive; we took our side, and we don't seem to be considered very highly by those who took the other.
I don't mind closing the community to believers; to a first approximation, their ideas are worthless. But I wouldn't extend that to liberals, libertarians, conservatives, environmentalists, anarchists, etc.
I can't think of any topic other than religion and politics where it's commonly expected that everyone has a position - people can have strong and conflicting opinions on, say, parenting styles, or whether Esperanto is a real language, or interpretations of quantum mechanics, or operating systems, or whether Batman could beat Superman, but they don't go around saying "Oh yeah, he disagrees with me because he's a yellowist" or anything like that.
↑ comment by Alexei · 2010-12-15T02:09:04.531Z · LW(p) · GW(p)
I don't know if I am explaining this well (and maybe someone can do better), but discussing politics tends to create an "us" vs. "them" mentality really quickly. We are all (well, except Eliezer, of course) flawed humans here and are prone to such reactions. I think all of us will get more utility out of this site if we just abstain from discussing politics while we become more rational. An infight would be very deadly to a small community like this. It's like protecting a child from some aspects of the outside world: yes, they'll have to deal with it eventually, but for now it's better if they don't.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-15T02:25:02.033Z · LW(p) · GW(p)
I don't know if I am explaining this well (and maybe someone can do better), but discussing politics tends to create an "us" vs. "them" mentality really quickly.
You explained it well and it is a phenomenon we often reference. Perhaps the greatest source of bias humans have. Certainly the source of the most annoying biases we have. :)
We are all (well, except, Eliezer of course) flawed humans here and are prone to such reactions.
Don't even joke about that (please).
An infight would be very deadly to a small community like this.
From past experience the infighting isn't deadly, but it is certainly distracting and probably does some damage. I note that as a community we don't seem especially prone to strongly identifying with world or national politics in the patriotic, tribal sense. That kind of politics is far closer to mere abstract theory. The real political infighting LessWrong is vulnerable to tends to be moral and local. (And vulnerable it is.)
Replies from: ata, Emile↑ comment by ata · 2010-12-15T02:31:10.062Z · LW(p) · GW(p)
Don't even joke about that (please).
It may or may not have been a sarcastic reference to this, rather than joke-cultishness.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-15T02:58:30.349Z · LW(p) · GW(p)
It may or may not have been a sarcastic reference to this, rather than joke-cultishness.
I understand. I was making a (minimally emphasized) personal request not to be reminded of that, even in jest. It is a source of (mild) negative affect and causes a commensurately mild interference with my ability to maintain respect for Eliezer, with spillover to SIAI and LW. (But I didn't want to include the reasoning with the request because it didn't feel like the time to criticise.)
Replies from: Alexei, ata↑ comment by Alexei · 2010-12-15T03:45:39.504Z · LW(p) · GW(p)
I'm sorry, I am still a bit new to this website, and I miss the finer points sometimes. I'll try not to make jokes like that in the future. I understand where you are coming from and how my sort of remarks, even in jest, could be damaging.
Replies from: JoshuaZ↑ comment by Emile · 2010-12-15T09:14:55.337Z · LW(p) · GW(p)
I note that as a community we don't seem especially prone to strongly identifying with world or national politics in the patriotic, tribal sense. That kind of politics is far closer to mere abstract theory. The real political infighting LessWrong is vulnerable to tends to be moral and local. (And vulnerable it is.)
I agree that there probably isn't that much tribal patriotism here, but there are probably a few posters whose minds are tainted by ideology (libertarianism, left-liberalism, environmentalism, maybe conservatism), which can be equally polarizing. I don't consider my own mind untainted by ideology, either.