CFAR and SI MOOCs: a Great Opportunity 2012-11-13T10:30:15.543Z


Comment by Wrongnesslessness on Research is polygamous! The importance of what you do needn't be proportional to your awesomeness · 2013-05-29T06:55:23.837Z · LW · GW

If you mean the less-fun-to-work-with part, it's fairly obvious. You have a good idea, but a smarter person A has already thought of it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by smarter people B and C. It's like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, which are two of the three basic needs involved in intrinsic motivation.

If you mean the issue of why fun is valued more than doing something that matters, it is less clear. My guess is that this is because boredom is a more immediate and pressing concern than a meaningless existence (where "something that matters" is a cure for meaningless existence, and "fun" is a cure for boredom). Smart people also seem to get bored more easily, so the need to get away from boredom is probably more important for them.

Comment by Wrongnesslessness on Research is polygamous! The importance of what you do needn't be proportional to your awesomeness · 2013-05-28T07:09:02.306Z · LW · GW

When I read this:

9) To want to be the best in something has absolutely no precedence over doing something that matters.

I immediately thought of this.

On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although "something that matters" is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it's certainly fun to hang out with people smarter than you and learn from them, it's much less fun to work with them.

Comment by Wrongnesslessness on Post ridiculous munchkin ideas! · 2013-05-13T10:30:26.203Z · LW · GW

I've always wanted a name like that!

But I'm worried that with such a generic English name people will expect me to speak perfect English, which means they'll be negatively surprised when they hear my noticeable accent.

Comment by Wrongnesslessness on CFAR and SI MOOCs: a Great Opportunity · 2012-11-13T14:31:00.828Z · LW · GW

In my opinion, the second question is far less important than the first. Also, please see these posting guidelines:

These traditionally go in Discussion:

  • a link with minimal commentary
  • a question or brainstorming opportunity for the Less Wrong community

Beyond that, here are some factors that suggest you should post in Main:

  • Your post discusses core Less Wrong topics.
  • The material in your post seems especially important or useful.
  • You put a lot of thought or effort into your post. (Citing studies, making diagrams, and agonizing over wording are good indicators of this.)
  • Your post is long or deals with difficult concepts. (If a post is in Main, readers know that it may take some effort to understand.)
  • You've searched the Less Wrong archives, and you're pretty sure that you're saying something new and non-obvious.

The more of these criteria that your post meets, the better a candidate it is for Main.
Comment by Wrongnesslessness on Rationality Quotes November 2012 · 2012-11-05T09:53:16.736Z · LW · GW

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn't factually true. For they knew nothing of such things as the reach of explanations or the power of science or even laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to the formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola might be. Like every other destruction of optimism, whether in a whole civilization or in a single individual, these must have been unspeakable catastrophes for those who had dared to expect progress. But we should feel more than sympathy for those people. We should take it personally. For if any of those earlier experiments in optimism had succeeded, our species would be exploring the stars by now, and you and I would be immortal.

David Deutsch, The Beginning of Infinity

Comment by Wrongnesslessness on Jews and Nazis: a version of dust specks vs torture · 2012-09-09T05:36:45.499Z · LW · GW

I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.

It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any difference in quantity, however vast, is simply irrelevant. An experience of long, unbearable torture cannot be quantified in terms of minor discomforts.

Comment by Wrongnesslessness on Jews and Nazis: a version of dust specks vs torture · 2012-09-09T05:24:21.702Z · LW · GW

I've always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.

I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer a FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences; those are qualitatively different life experiences.

Since I do not really know how to optimize for any of this, I'm not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I'm given a choice, it's quite clear what my choice will be.

Comment by Wrongnesslessness on Jews and Nazis: a version of dust specks vs torture · 2012-09-08T17:13:01.234Z · LW · GW

It is not a trivial task to define a utility function that could compare such incomparable qualia.


However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.

Has it been shown that this is not the case for dust specks and torture?
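To make the quoted point concrete, here is a minimal Python sketch (my own illustration, not from the original discussion) of a lexicographic preference over hypothetical (torture, dust-specks) outcome pairs. The point is that the first coordinate always dominates, so no amount of the second coordinate ever compensates; this is the structure that a continuous real-valued utility function cannot represent.

```python
def lex_prefers(a, b):
    # Outcomes are (torture_minutes, dust_specks); smaller is better.
    # Python compares tuples lexicographically, so the torture
    # coordinate dominates and specks only break ties.
    return a < b

HUGE = 10**100  # stand-in for an arbitrarily large speck count

assert lex_prefers((0, HUGE), (1, 0))      # no torture beats any torture
assert lex_prefers((5, 0), (5, 1))         # ties on torture fall back to specks
assert not lex_prefers((1, 0), (0, HUGE))  # specks never outweigh torture
```

No matter how HUGE is scaled up, the comparisons above never flip, which is exactly the discontinuity that rules out a continuous utility representation.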

Comment by Wrongnesslessness on Jews and Nazis: a version of dust specks vs torture · 2012-09-08T16:35:18.688Z · LW · GW

I'm a bit confused with this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

Comment by Wrongnesslessness on Cryonics: Can I Take Door No. 3? · 2012-09-07T04:44:40.516Z · LW · GW

With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.

Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger's experiments with 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation to observe itself alive in the end.
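The outside-view arithmetic for the cat is straightforward, assuming the ten rounds are independent; a quick sketch:

```python
# Outside view: 10 independent rounds, each with 1/2 probability of death.
p_survive_round = 0.5
rounds = 10
p_survive_all = p_survive_round ** rounds  # = 1/1024, just under 0.1%

print(p_survive_all)  # 0.0009765625
```

The tension the comment points at is that conditional on the cat observing anything at all afterwards, it observes itself alive with probability 1, while the unconditional probability of that observation is only 1/1024.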

Comment by Wrongnesslessness on Cryonics: Can I Take Door No. 3? · 2012-09-06T12:54:54.602Z · LW · GW

Perhaps at this point you can argue, that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix.

There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what's left of me to all my best parts and memories retrieved from an adequate backup.

Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This nonsensical act will surely happen in some universes, but I do not expect to perceive myself as existing in these cases.

It seems you are right that gradual degradation is a serious problem with QI-based survival in non-simulated universes (unless we move to a more reliable substrate, with backups and all).

Comment by Wrongnesslessness on Cryonics: Can I Take Door No. 3? · 2012-09-06T09:34:26.743Z · LW · GW

since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning

I'm convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live "outside" their simulations after their "deaths". Since one cannot feel one's own nonexistence, I totally expect to experience "afterlife" some day.

Comment by Wrongnesslessness on [SEQ RERUN] Raised in Technophilia · 2012-08-30T07:54:15.894Z · LW · GW

considering that the dangers of technology might outweigh the risks.

This should probably read "might outweigh the benefits".

Comment by Wrongnesslessness on LessWrong could grow a lot, but we're doing it wrong. · 2012-08-21T15:18:57.282Z · LW · GW

We don't have to attract everyone. We should just make sure that the main page does not send away people who would have stayed if they were exposed to some other LW stuff instead.

That's a good point. However, I think there is not much we can do about it by refining the main page. More precisely, I doubt that an intelligent person with even a remote interest in rationality would leave "a community blog devoted to refining the art of human rationality" without at least taking a look at some of the blog posts, regardless of the contents of the main page itself. We all know examples of internet sites with poor design but great information content.

So the question of refining the main page, I think, really comes down to selecting the right articles for the Recent Promoted Articles and Featured Articles sections. The rest is already there.

Comment by Wrongnesslessness on Rationality Quotes August 2012 · 2012-08-17T16:30:47.117Z · LW · GW

And they aren't even regular pentagons! So, it's all real then...

Comment by Wrongnesslessness on Why Don't People Help Others More? · 2012-08-16T12:14:15.694Z · LW · GW

Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!

Comment by Wrongnesslessness on Natural Laws Are Descriptions, not Rules · 2012-08-07T12:11:46.058Z · LW · GW


I cannot imagine a real physicist saying something like that. Sounds more like a bad physics teacher... or a good judge.

Comment by Wrongnesslessness on The Irrationality Game · 2012-04-13T18:24:03.066Z · LW · GW

But humans are crazy! Aren't they?

Comment by Wrongnesslessness on The Irrationality Game · 2012-04-13T17:02:12.548Z · LW · GW

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

Comment by Wrongnesslessness on My Algorithm for Beating Procrastination · 2012-02-09T12:29:29.995Z · LW · GW

Of course, another problem (and that's a huge one) is that our head does not really care much about our goals. The wicked organ will happily do anything that benefits our genes, even if it leaves us completely miserable.

Comment by Wrongnesslessness on My Algorithm for Beating Procrastination · 2012-02-09T12:23:44.133Z · LW · GW

One problem with this equation is that it dooms us to use hyperbolic discounting (which is dynamically inconsistent), not exponential discounting, which would be rational (given rationally calibrated coefficients).
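The dynamic inconsistency can be shown with a small numerical sketch (illustrative values of my own choosing, not from the comment): a hyperbolic discounter's ranking of a smaller-sooner versus a larger-later reward flips as the delays shrink, while an exponential discounter's ranking never does.

```python
def hyperbolic(value, delay, k=1.0):
    # Hyperbolic discounting: value / (1 + k * delay)
    return value / (1 + k * delay)

def exponential(value, delay, d=0.8):
    # Exponential discounting: value * d^delay
    return value * d ** delay

# Smaller-sooner reward: 10 utils at delay t; larger-later: 15 at delay t+3.
for t in (0, 10):
    prefers_sooner_h = hyperbolic(10, t) > hyperbolic(15, t + 3)
    prefers_sooner_e = exponential(10, t) > exponential(15, t + 3)
    print(t, prefers_sooner_h, prefers_sooner_e)

# Hyperbolic: prefers smaller-sooner at t=0 but larger-later at t=10,
# a preference reversal. Exponential: the ranking is the same at both
# distances, because the ratio of discounted values is constant in t.
```

The exponential ranking is delay-invariant because the ratio of the two discounted values, 1.5 · d³, does not depend on t; that invariance is exactly what makes exponential discounting dynamically consistent.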

Comment by Wrongnesslessness on What will rationality look like in the future? · 2012-02-03T05:48:35.110Z · LW · GW

The powers of instrumental rationality in the context of rapid technological progress and the inability/unwillingness of irrational people to listen to rational arguments strongly suggest the following scenario:

After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving humankind from the clutches of zero- and negative-sum status games.

Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).

Comment by Wrongnesslessness on Help! Name suggestions needed for Rationality-Inst! · 2012-01-30T05:58:17.127Z · LW · GW
  • Foundation for Human Sapience (or Foundation for Advanced Sapience)

  • Reality Transplantation Center

  • Thoughtful Organization

  • CORTEX - Center for Organized Rational Thinking and EXperimentation

  • OOPS - Organization for Optimal Perception Seekers

  • BAYES - Bureau for Advancing Yudkowsky's Experiments in Sanity

Comment by Wrongnesslessness on POSITION: Design and Write Rationality Curriculum · 2012-01-20T21:50:05.007Z · LW · GW

I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.

So, what about a nice-looking acronym like "Truth, Rationality, Universe, Eliezer"? :)

Comment by Wrongnesslessness on Completeness, incompleteness, and what it all means: first versus second order logic · 2012-01-18T11:24:28.450Z · LW · GW

Wikipedia is accessible if you disable JavaScript (or use a mobile app, or just Google cache).

Comment by Wrongnesslessness on Open Thread, January 15-31, 2012 · 2012-01-16T07:48:43.570Z · LW · GW

I would prefer this comment to be more like 0

Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and extremes irritate you?

In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.

Comment by Wrongnesslessness on The Savage theorem and the Ellsberg paradox · 2012-01-14T20:52:32.468Z · LW · GW

If ambiguity aversion is a paradox and not just a cognitive bias, does this mean that all irrational things people systematically do are also paradoxes?

What particular definition of "paradox" are you using? E.g., which one of the definitions in the Wikipedia article on paradox?

Comment by Wrongnesslessness on Can the Chain Still Hold You? · 2012-01-13T11:55:09.154Z · LW · GW

Sod off! Overt aggression is a pleasant relief compared to the subtle, catty 'niceness' that the most competitive humans excel at.

Hmm... Doesn't this look like something an aggressive alpha male would say?..


Comment by Wrongnesslessness on Can the Chain Still Hold You? · 2012-01-13T05:37:20.263Z · LW · GW

So the true lesson of this post is that we should get rid of all the aggressive alpha males in our society. I guess I always found the idea obvious, but now that it has been validated, can we please start devising some plan for implementing it?

Comment by Wrongnesslessness on Less wrong has a fitocracy group (invites) · 2012-01-11T17:57:33.262Z · LW · GW

Are there any general-purpose anti-akrasia gamification social sites? I recently found Pomodorium, but it is regrettably single-player.