Karma Motivation Thread
post by Jack · 2010-12-13T21:59:08.756Z · LW · GW · Legacy · 38 comments
This idea is so obvious I can't believe we haven't done it before. Many people here have posts they would like to write but keep procrastinating on. Many people also have other work to do but keep procrastinating on Less Wrong. Making akrasia cost you money is often a good way to motivate yourself, but that can be enough of a hassle to deter the lazy, the ADD-addled, and the executive-dysfunctional. So here is a low-transaction-cost alternative that takes advantage of the addictive properties of Less Wrong karma. Post a comment here with a task and a deadline. Pick tasks whose completion other posters can confirm: either Less Wrong posts or projects that can be linked to or photographed. When the deadline comes, edit your comment to include a link to the completed task. If you complete the task, expect upvotes. If you fail to complete the task by the deadline, expect your comment to be downvoted into oblivion. If you see a completed task, vote that comment up. If you see a missed deadline, vote that comment down. At least one person should reply to the comment noting that the deadline has passed; this way it will come up in the recent comments and more eyes will see it.
Edit: DanArmak makes a great suggestion.
comment by topynate · 2011-01-26T02:52:08.514Z · LW(p) · GW(p)
Task: Write a patch for the Less Wrong codebase that hides deleted/banned posts from search engines.
Deadline: Sunday, 30 January.
comment by topynate · 2011-01-27T23:17:01.480Z · LW(p) · GW(p)
Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework, and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.
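The general technique here is independent of the LW codebase: when rendering a deleted or banned article's page, emit a robots meta tag (or an equivalent X-Robots-Tag header) telling crawlers not to index it. A minimal sketch in Python; the names below are hypothetical stand-ins for illustration, not the actual Reddit/Pylons API:

```python
# Hypothetical sketch of hiding deleted/banned pages from search engines.
# `Article` stands in for the codebase's Link class; the real patch would
# hook this into the page template that renders the <head>.

class Article:
    def __init__(self, deleted=False, banned=False):
        self.deleted = deleted
        self.banned = banned

def robots_meta_tag(article):
    """Return a robots meta tag for the page <head>, or '' if indexable."""
    if article.deleted or article.banned:
        # noindex: drop from search results; nofollow: don't crawl its links.
        return '<meta name="robots" content="noindex, nofollow">'
    return ""

print(robots_meta_tag(Article(deleted=True)))
```

Serving the directive as an HTTP header instead works for the same pages without touching the templates, which can be simpler in a framework like Pylons.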
comment by WrongBot · 2010-12-14T05:23:28.061Z · LW(p) · GW(p)
Task: Relaunch my blog with a substantive (and LessWrong-relevant) introductory post.
Deadline: Monday, December 20.
comment by WrongBot · 2010-12-20T04:33:31.227Z · LW(p) · GW(p)
My blog has returned to the world of the living, accompanied by a post pointing out why solipsism is really silly, aimed at readers without much of a background in philosophy. I would be grateful for any criticisms, suggestions, or other comments LessWrongians have to offer.
I will make a second post by Monday, December 27th, probably about Searle's Chinese Room and why it's dumb.
comment by TheOtherDave · 2010-12-20T05:23:00.671Z · LW(p) · GW(p)
Does anyone other than Searle actually consider Searle's Chinese Room a compelling argument? (And if so, what do they consider it a compelling argument for?)
comment by Sniffnoy · 2010-12-20T05:35:43.936Z · LW(p) · GW(p)
Unfortunately, yes. (Or if not compelling, at least respectable.)
comment by TheOtherDave · 2010-12-20T16:30:49.363Z · LW(p) · GW(p)
Well, when you put it that way, I guess I consider it a respectable argument, myself.
That is, it's a useful exercise for starting to think rigorously about what it means to be a mind. That's what thought experiments are for, after all, to make you think about things you might not have thought about otherwise. That function deserves respect.
If you decide the Chinese Box really does understand Chinese, that implies certain things about the nature of understanding. If you decide the Chinese Box simply can't exist at all, that implies other things. If you decide it could understand Chinese if only X or Y, ditto. If you decide that neither the Chinese Box nor any other system is actually capable of understanding Chinese, ibid.
But Searle really does seem to believe that it provides a reason to conclude one way over another, and that seems downright bizarre to me.
comment by Jack · 2010-12-13T22:01:22.253Z · LW(p) · GW(p)
Task: Publish a top-level, main-page post. (Edit: Wow, by popular demand, the post will be on the Dutch Book argument.)
Deadline: December 20th
comment by jsalvatier · 2010-12-13T23:51:41.998Z · LW(p) · GW(p)
Do you have something specific in mind?
comment by Jack · 2010-12-14T00:33:24.680Z · LW(p) · GW(p)
I've been trying to decide. I have a long list of ideas that I never get around to. Maybe I'll list the likely options and anyone who wants to can weigh in.
Metaethics: in particular, a defense of a tolerant pluralism with regard to normative theories, perhaps an explanation/investigation of the is/ought distinction
Quantum mechanics: probably not what I'll do this week, because it requires more research than the other ideas, but I think we're overconfident in the Many Worlds interpretation and focusing on the wrong alternatives
A summary/explanation of Quinean metaphysics and epistemology
The type-token distinction/ambiguity, perhaps going into its relation to personal identity
An investigation into how mental concepts relate to complexity, by way of refuting Swinburne's Bayesian argument for theism
An explanation of the Dutch Book argument
That's just the top of my list. As you may be able to tell from the way I've won my karma, I'm quite good with quick and dirty ideas but far less successful with post-length explanations. Hopefully I can get through a lot of these (and hopefully they actually end up being things people want to read).
comment by Louie · 2010-12-14T01:00:28.612Z · LW(p) · GW(p)
I've always wanted to see a list of real-world Dutch-booking examples. Something like "Dutch Booking for Fun and Profit". I'd like to Dutch-book more people if possible, like on Intrade.
comment by Jack · 2010-12-15T02:23:44.344Z · LW(p) · GW(p)
I'm going to talk about its role in Bayesian epistemology, not so much doing it to people. But I'll definitely talk about some real-life examples. I'll use Intrade as one of the examples (unfortunately, you won't be able to make any money off it; Intrade's commissions are too high).
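For readers who want the core of the argument in miniature (a sketch of my own, not an excerpt from the planned post): if someone quotes betting prices for an event and its complement that don't sum to 1, a bettor can lock in a profit no matter how the event turns out.

```python
# Minimal illustration of a Dutch book. A "ticket" on a proposition
# costs p * stake (the bookie's quoted probability times the stake)
# and pays out `stake` if the proposition turns out true.

def guaranteed_profit(p_event, p_not_event, stake=1.0):
    """Profit from buying tickets on BOTH an event and its complement.

    Exactly one of the two tickets pays out, so the payout is `stake`
    whatever happens; the cost is (p_event + p_not_event) * stake.
    If the quoted probabilities sum to less than 1, the profit is
    positive in every possible world; that is the Dutch book.
    """
    cost = (p_event + p_not_event) * stake
    payout = stake  # exactly one ticket wins, whatever happens
    return payout - cost

# Incoherent bookie: P(rain) = 0.4, P(no rain) = 0.5 (sums to 0.9).
# Buying both tickets costs 0.9 and pays 1.0: a risk-free 0.1 profit.
print(round(guaranteed_profit(0.4, 0.5), 10))  # -> 0.1
```

If the quoted probabilities sum to exactly 1, the guaranteed profit is zero; that is the sense in which vulnerability to a Dutch book is used to motivate the probability axioms in Bayesian epistemology.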
comment by DanArmak · 2010-12-14T21:49:04.370Z · LW(p) · GW(p)
I suggest that discussion about the idea itself is placed under this comment, to avoid cluttering the main thread where people publish commitments.
comment by DanArmak · 2010-12-14T21:54:04.149Z · LW(p) · GW(p)
Several people have now used this to commit to doing something others can benefit from, like LW posts. I suggest an alternative method: when a user commits to doing something, everyone who is interested in that thing being done will upvote that comment. However, if the task is not complete by the deadline, everyone who upvoted commits to coming back and downvoting the comment instead.
This way, people can judge whether the community is interested in their post, and the karma being gained or lost is proportional to the amount of interest. Also, upvoting and then downvoting effectively doubles the amount of karma at stake.
comment by ata · 2010-12-13T22:30:20.995Z · LW(p) · GW(p)
Cool idea. I've actually been working on a web app called Accomplishment Karma based on a similar mechanism; I hope (though I don't promise) to have it up by mid-January.
comment by Kingreaper · 2010-12-14T00:02:42.232Z · LW(p) · GW(p)
Maybe you should put your karma where your keyboard is?
comment by JoshuaZ · 2010-12-13T22:17:10.823Z · LW(p) · GW(p)
Task: Finish next draft of integer complexity work with User:Sniffnoy, including upper bound material.
Deadline: December 29.
comment by orthonormal · 2010-12-15T20:18:53.110Z · LW(p) · GW(p)
Task: Write a top-level LW post on one of several interesting topics (vote below).
Deadline: Wednesday, December 22.
comment by orthonormal · 2010-12-15T20:23:18.140Z · LW(p) · GW(p)
Topic #2: An introductory post on consequentialist ethics, focusing on the usual misconceptions about consequentialism (that it's inherently self-centered, shortsighted, etc.).
Vote this comment up (and the karma balance down) if you prefer this topic.
comment by Jack · 2011-01-06T14:22:26.414Z · LW(p) · GW(p)
Doesn't look like this got done either. DOWNVOTE IT
comment by orthonormal · 2011-01-06T15:18:12.884Z · LW(p) · GW(p)
Yup, I failed.
comment by orthonormal · 2010-12-15T20:25:41.389Z · LW(p) · GW(p)
Topic #3: The beginning of a sequence on interpretations of experience, meant to introduce rationality to people tempted toward philosophical relativism. A little bit of Nietzsche isn't actually a bad thing.
Vote this comment up (and the karma balance down) if you prefer this topic.
comment by orthonormal · 2010-12-15T20:21:28.206Z · LW(p) · GW(p)
Topic #1: My layman's view of existential risks introduced by various intelligence-related technologies, and the proper response to the massive uncertainties about them.
Vote this comment up (and the karma balance down) if you prefer this topic.
comment by orthonormal · 2010-12-21T01:11:09.602Z · LW(p) · GW(p)
I'll break the tie and write on Topic #3.
comment by orthonormal · 2010-12-15T20:26:14.216Z · LW(p) · GW(p)
Karma balance: vote me down if you've voted above.
Yaka-Wow.