Posts

Anti-akrasia tool: like stickK.com for data nerds 2011-10-10T02:09:57.297Z
Anti-Akrasia Reprise 2010-11-16T11:16:16.945Z
How a pathological procrastinator can lose weight [Anti-akrasia] 2009-04-18T20:05:49.049Z

Comments

Comment by dreeves on Making 2023 ACX Prediction Results Public · 2024-03-05T22:22:44.466Z · LW · GW

I guess in practice it'd be the tiniest shred of plausible deniability. If your prior is that alice@example.com almost surely didn't enter the contest (say, a 1% chance she did) but her hash is in the table (which happens by chance with p=1/1000) then you Bayesian-update to a 91% chance that she did in fact enter the contest. If you think she had even a 10% chance on priors then her hash being in the table makes you 99% sure it's her.
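For concreteness, here's that arithmetic as a minimal Python sketch (the 1/1000 false-match rate is the chance-collision figure from above; the function name is just for illustration):

```python
def posterior(prior: float, false_match_rate: float = 1/1000) -> float:
    """P(entered | hash is in the table), assuming every real
    entrant's hash is in the table with certainty."""
    return prior / (prior + (1 - prior) * false_match_rate)

print(posterior(0.01))  # ~0.91, the 91% figure above
print(posterior(0.10))  # ~0.99
```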

Comment by dreeves on Making 2023 ACX Prediction Results Public · 2024-03-05T20:35:27.041Z · LW · GW

To make sure I understand this concern:

It may be better to use a larger hash space to avoid internal (within-the-data-set) collisions, but then you lower the number of external collisions.

Are you thinking someone may want plausible deniability? "Yes, my email hashes to this entry with a terrible Brier score but that could've been anyone!"
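Back-of-the-envelope, with made-up numbers for the number of entrants n and the hash-space size M:

```python
# Rough collision math: expected internal collision pairs follow the
# birthday bound, n*(n-1)/(2*M); the chance that one given outside
# email matches some entry is about n/M (union bound).
n, M = 3_000, 10**6  # hypothetical: 3000 entrants, a million buckets
print(n * (n - 1) / (2 * M))  # ~4.5 expected internal collision pairs
print(n / M)                  # ~0.003 external false-match chance
```

So growing M suppresses both kinds of collision at once, which is the tension above.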

Comment by dreeves on Making 2023 ACX Prediction Results Public · 2024-03-05T20:08:29.961Z · LW · GW

This should be fine. In past years, Scott has had an interface where you could enter your email address and get your score. So the ability to find out other people's scores by knowing their email address is apparently not an issue. And it makes sense to me that one's score in this contest isn't particularly sensitive private information.

Source: Comment from Scott on the ACX post announcing the results

Comment by dreeves on Snake Eyes Paradox · 2023-07-14T04:35:52.817Z · LW · GW

"At some point one of those groups will be devoured by snakes" is erroneous

I wouldn't say erroneous but I've added this clarification to the original question:

"At some point one of those groups will be devoured by snakes and then I stop" has an implicit "unless I roll snake eyes forever". I.e., we are not conditioning on the game ending with snake eyes. The probability of an infinite sequences of non-snake-eyes is zero and that's the sense in which it's correct to say "at some point snake eyes will happen" but non-snake-eyes forever is possible in the technical sense of "possible".

It sounds contradictory but "probability zero" and "impossible" are mathematically distinct concepts. For example, consider flipping a coin an infinite number of times. Every infinite sequence like HHTHTTHHHTHT... is a possible outcome but each one has probability zero.

So I think it's correct to say "if I flip a coin long enough, at some point I'll get heads" even though we understand that "all tails forever" is one of the infinitely many possible sequences of coin flips.
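A quick numeric illustration of that limit (plain Python, nothing specific to the snake-eyes setup):

```python
# P(no heads in the first n flips of a fair coin) = 0.5**n -> 0,
# even though "all tails forever" stays possible in the technical sense.
for n in (10, 100, 1000):
    print(n, 0.5**n)
```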

Comment by dreeves on Fatebook for Slack: Track your forecasts, right where your team works · 2023-05-12T00:22:18.584Z · LW · GW

Just set it up in the Beeminder work Slack and I am immediately in love 😍

First forecast: Will at least 4 of us (including me) play this reindeer game? (96% probability so far)

Comment by dreeves on Strategy of Inner Conflict · 2022-11-17T07:42:09.025Z · LW · GW

Ooh, I think there's a lot of implicit Beeminder criticism here that I'm eager to understand better. Thanks for writing this up! 

We previously argued against similar claims -- https://blog.beeminder.com/blackmail/ -- and said that the "just get the different parts of yourself to get along" school of thought was insufficiently specific about how to do that. But here you've suggested some smart, specific ideas and they sound good!

My other Beeminder defense is that there are certain bare minimums that you know it would be irrational to fall below. So I recommend having the Beeminder goal as insurance and then also implementing all the strategies you describe. If those strategies work and it's easy-peasy to stay well above Beeminder's bright red line, then wonderful. Conflict avoided. If those strategies happen to fail, Beeminder will catch you. (Also you get a nice graph of your progress, quantified-self-style.)

PS: More recently we had a post about how compatible Beeminder turns out to be with CBT which I think also argues against the dichotomy you're implying here with Conflict vs Cooperation. https://blog.beeminder.com/cbt/ 

Comment by dreeves on [Crosspost] On Hreha On Behavioral Economics · 2021-11-10T23:05:27.637Z · LW · GW

Btw, Scott mentioned having to read a bunch to figure out the subtle difference between loss aversion and the endowment effect. I attempted a full explainer: https://blog.beeminder.com/loss/

Comment by dreeves on App and book recommendations for people who want to be happier and more productive · 2021-11-08T01:22:12.905Z · LW · GW

I don't necessarily endorse it but the moral argument would go like so: "I'm definitely not going to pay to read that article so me bypassing the paywall is not hurting the newspaper. The marginal cost is zero. Stealing from a kiosk, on the other hand, deprives the newspaper of a sale (and is just obvious plain old stealing)." In other words, "I'm not stealing a newspaper from the kiosk, I'm just breaking in, photocopying it real quick, and putting it right back. No harm no foul!"

A counterargument might be that you're contributing to demand for paywall-bypassing which does deprive the newspaper of sales, just less directly.

Comment by dreeves on App and book recommendations for people who want to be happier and more productive · 2021-11-07T23:59:58.821Z · LW · GW

This list is pretty amazing (and I'm not just saying that because Beeminder is on it!) and you've persuaded me on multiple things already. Some comments and questions:

  1. CopyQ: I use Pastebot and I see there are So Many of these and would love a recommendation from someone who feels strongly that there's a particular one I should definitely be using.
  2. Google Docs quick-create: You inspired me to make a link in my bookmarks bar (built in to Chrome) to https://doc.new which I think is simpler and just as good. (Yes, it's kind of ridiculous that "doc.new" is their URL for that instead of something sane like "docs.google.com/new".)
  3. Beeminder vs StickK: Obviously I'm absurdly biased but I can't imagine anyone here preferring StickK and it would actually help us a ton to understand why someone might prefer StickK. Their referee feature is better, but everything else is so much worse! Especially anti-charities -- those are an abomination from an EA perspective, right??
  4. Pocket: We have an argument at https://blog.beeminder.com/pocket for why this is a big deal. (I guess I should also mention our Habitica integration; and Focusmate is probably coming soon!)

Comment by dreeves on Is there a beeminder without the punishment? · 2021-09-14T20:49:59.109Z · LW · GW

Good question and good answers! Someone mentioned that the fancy/expensive Beemium plan lets you cap pledges at $0. On the non-premium plan you can cap pledges at $5, so another conceivable solution is to combine that with a conservative slope on your graph, plus setting alarms or something, and chalk up occasional failures, if rare enough, as effectively the cost of the service.

Or like another person said, you can make the slope zero (no commitment at all), but that may defeat the point, with the graph offering no guidance on how much you'd like to be doing.

Comment by dreeves on My Marriage Vows · 2021-07-24T00:32:20.566Z · LW · GW

PS: Of course this was also prompted by us nerding out about your and Marcus's vows so thank you again for sharing this. I'm all heart-eyes every time I think about it!

Comment by dreeves on My Marriage Vows · 2021-07-24T00:10:20.245Z · LW · GW

Ah, super fair. Splitting any outside income 50/50 would still work, I think. But maybe that's not psychologically right in y'all's case, I don't know. For Bee and me, the ability to do pure utility transfers feels like powerful magic!

Me to Bee while hashing out a decision auction today that almost felt contentious, due to messy bifurcating options, but then wasn't:

I love you and care deeply about your utility function and if I want to X more than you want to Y then I vow to transfer to you U_you(Y)-U_you(X) of pure utility! [Our decision auction mechanism in fact guarantees that.]

Then we had a fun philosophical discussion about how much better this is than the Hollywood concept of selfless love where you set your own utility function to all zeros in order for the other's utility function to dominate. (This falls apart, of course, because of symmetry. If we both do that, where does that leave us?? With no hair, an ivory comb, no watch, and a gold watchband, is where!)

Comment by dreeves on My Marriage Vows · 2021-07-22T22:26:41.951Z · LW · GW

Ooh, this is exciting! We have real disagreements, I think!

It might all be predicated on this: Rather than merge finances, include in your vows an agreement to, say, split all outside income 50/50. Or, maybe a bit more principled, explicitly pay your spouse for their contributions to the household.

One way or another, rectify whatever unfairness there is in the income disparity directly, with lump-sum payments. Then you have financial autonomy and can proceed with mechanisms and solution concepts that require transferrable utility!

Comment by dreeves on My Marriage Vows · 2021-07-22T17:55:57.641Z · LW · GW

I love this so much and Bee (my spouse) and I have started talking about it. Our first question is whether you intend to merge your finances. We think you shouldn't! Because having separate finances means having transferrable utility which puts more powerful and efficient and fair decision/bargaining mechanisms at your disposal.

My next question is why the Kalai-Smorodinsky (KS) solution vs the Nash solution to the bargaining problem?
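To spell out the distinction, with disagreement point d and ideal ("bliss") point b: the Nash solution maximizes the product of utility gains, (u_1-d_1)*(u_2-d_2), while KS picks the Pareto-efficient point where each player's gain is the same fraction of their ideal gain: (u_1-d_1)/(b_1-d_1) = (u_2-d_2)/(b_2-d_2).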

But also are you sure the Shapley value doesn't make more sense here? (There's a Hart & Mas-Colell paper that looks relevant.) Either way, this may be drastically simplifiable for the 2-player case.

Thanks so much for sharing this. It's so sweet and nerdy and heart-warming and wonderful! And congratulations!

Comment by dreeves on Bayes' theorem, plausible deniability, and smiley faces · 2021-04-19T16:49:06.769Z · LW · GW

Oh, Quirrell is referring to what game theorists call Cheap Talk. If the thing I'm trying to convince you of is strictly in my own brain -- like whether I intend to cooperate or defect in an upcoming Prisoner's Dilemma -- then any promises I make are, well, cheap talk. This is related to costly signals and strategic commitment, etc etc.

Anyway, I think that's the missing piece there. "Nothing you can do to convince me [about your own intentions] [using only words]".

Comment by dreeves on Bayes' theorem, plausible deniability, and smiley faces · 2021-04-12T22:18:15.086Z · LW · GW

This is indeed a fun way to illustrate Bayesian thinking! But I have a monkey wrench! There exist people who view smileys as almost explicitly connoting passive-aggression or sarcasm. Like the whole reason to add a smiley is to soften something mean. I'm not quite sure if there are enough such people to worry about but I think that that perception of smileys is out there.

Comment by dreeves on Applied Picoeconomics · 2021-04-08T09:00:34.153Z · LW · GW

Correction to the Ainslie link: http://picoeconomics.org/breakdown.htm

Comment by dreeves on Applied Picoeconomics · 2021-04-04T05:26:37.142Z · LW · GW

Hi from the future [1]! Beeminder has a version of this built in: the one-week akrasia horizon. You can change anything about a Beeminder goal, including ending it, at any time, but the change doesn't take effect for a week. As Katja Grace once said on Overcoming Bias: "[you] can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this)."

 

[1] I'm mildly terrified that it's against the norms to reply to something this old. I've been thinking hard about your (Scott's) recent ACX post, "Towards A Bayesian Theory Of Willpower," and am digging up all your previous thoughts on the topic, so here I am.

Comment by dreeves on Your Cheerful Price · 2021-02-21T21:57:15.126Z · LW · GW

Good thought experiment! I replied in the form of another Yudkowsky vignette. :)

Summary: "Infinity" is a perfectly coherent Cheerful Price for, say, something sufficiently repugnant to you or something very unethical. (But also you must have a finite Cheerful Price for anything, no matter how bad, if the badness happens with sufficiently small probability.)

Comment by dreeves on Your Cheerful Price · 2021-02-21T21:45:25.166Z · LW · GW

That reminds me of this delightful and hilarious (edit: and true!) thing Eliezer said once:

Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.

There doesn't have to be a financial price you'd accept to kill every sentient being on Earth except you. There doesn't even have to be a price you'd accept to kill your spouse. It's allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won't exchange it for a trillion dollars.

Now, it *does* have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have -- or any possible event you can bring about -- *at some probability* for that sum of money. So it *is* true that as a rational agent, there is some *probability* of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.

I hope that clears up exactly what sort of heartless creatures economically rational agents are.

Comment by dreeves on Your Cheerful Price · 2021-02-20T21:29:59.078Z · LW · GW

Interesting! It hadn't occurred to me that this could be read as any kind of repudiation of "shut up and multiply". My previous comment on this post takes a stab at reconciling Cheerful Prices with my own extreme shut-up-and-multiply way of thinking.

Comment by dreeves on Your Cheerful Price · 2021-02-20T19:13:19.598Z · LW · GW

Oh my goodness I love this. I'm actually so philosophically on board that I'm confused about treating Cheerful Prices as single real numbers. In my homo-economicus worldview, there exists a single price at which I'm exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it "the" cheerful price I have.

(My answer is to turn the nerdery up to 11 and compute a Shapley value, etc etc, but let me save that for another time or place. Jacob Falkovich and I have been talking about jointly blogging about this. We'll definitely want to tie it in to the concept of Cheerful Prices if we do!)

Translated into this delightful new language of Cheerful Prices, the rough version of my approach is like so:

I as the buyer name my highest possible Cheerful Price (where I just barely find it worth it) and you as the seller name your lowest possible Cheerful Price (below which it's just not worth it to you) and we settle on the mean of those two.

But maybe the point of Cheerful Prices is to simplify that. Let one person on one side of the trade make a guess about the consumer surplus and name something in that range. I.e., by naming my Cheerful Price I'm saying that at that price I'd be getting a big enough chunk of the consumer surplus that I don't need to know the size of your chunk. If you, as my counterparty, feel the same then we're golden.
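A minimal sketch of the split-the-difference version (the numbers and the name "settle" are made up for illustration):

```python
def settle(buyer_max: float, seller_min: float) -> float:
    """Meet in the middle: each side keeps half the surplus.

    buyer_max: the most the buyer would cheerfully pay.
    seller_min: the least the seller would cheerfully accept.
    """
    assert buyer_max >= seller_min, "no mutually cheerful price exists"
    return (buyer_max + seller_min) / 2

print(settle(100, 60))  # 80.0 -- each side captures $20 of the $40 surplus
```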

Comment by dreeves on The Power to Solve Climate Change · 2019-10-21T23:28:53.669Z · LW · GW

Really good points. It's funny, I have a draft of a similar point about personal behavior change that I tried to make as provocative-sounding as possible:

http://doc.dreev.es/carbonfoot (Trying To Limit Your Personal Carbon Footprint Hurts The Environment)

But note the PS where I suggest a counterargument: making personal sacrifices for climate change may shape your identity, drive you to greater activism, and make your activism and climate evangelism more persuasive (to those who don't appreciate the economics and game theory of it).

Comment by dreeves on Being the (Pareto) Best in the World · 2019-06-25T07:28:17.376Z · LW · GW

Nice! I've heard a similar idea called a "talent stack" or "skill stack" but explaining it in terms of staking out a chunk of the Pareto frontier is much better.

Coincidentally, I just wrote a post explaining the idea of Pareto dominance -- http://blog.beeminder.com/pareto -- in case that's useful to anyone.

Comment by dreeves on Calibrate your self-assessments · 2017-05-30T22:38:20.982Z · LW · GW

Now resurrected!

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-02-13T22:50:54.614Z · LW · GW

Thank you! See above ("Better to not have people feel like their desperation is being capitalized on.") for my response to your first question. And we actually believe that our system is, in practice if not in theory, strategy-proof. It's explicitly ok to game the system to our hearts' delight. It seems to be quite robust to that. Our utilities tend to either be uncannily well-matched, in which case it's kind of a coin flip who wins, or they're wildly different, but we never seem to have enough certainty about how different they'll be for it to be fruitful to distort our bids much.

The strategy of "just say a number such that you're torn about whether you'd rather win or lose" seems to be close enough to optimal.

Comment by dreeves on White Lies · 2014-02-12T08:16:34.978Z · LW · GW

How about adding a tiny bit of ambiguity (or evasion of the direct question) and making up for it with more effusiveness, eg, "it's not only my job but it feels really good to know that I'm helping you so I really want you to bug me about even trivial-seeming things!" All true, and all she's omitting is her immediate annoyance, but that is truly secondary, as she points out below about first-order vs second-order desires.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-02-01T06:09:15.338Z · LW · GW

Yes, we're super keen to make sure the efficient thing happens regardless of the initial distribution of resources/responsibilities/property-rights/etc. And we use yootling as a bargaining mechanism to make that happen. In general we're always willing to shove work to each other or redistribute resources as efficiency dictates, using payments to make that always be fair.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-02-01T06:03:57.335Z · LW · GW

In practice the sealed-bid version seems to be ungameable, at least for us! None of the problems you mentioned have arisen. My parents have tried this and had more problems but as far as I could tell it always involved contention about what to consider to be joint 50/50 decisions. Bethany and I seem to have no problem with that, using the heuristic of "when in doubt, just call it a 50/50 decision and yootle for it".

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-02-01T05:37:36.081Z · LW · GW

Fixed and fixed. Thank you!

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-23T00:02:04.706Z · LW · GW

I'm impressed! That's kind of the conclusion we gradually came to as well, after a lot of trial and error. Better to not have people feel like their desperation is being capitalized on.

Another way to put it: when you're really desperate to win a particular auction it's really nice to be able to just say so honestly, with a crazy high bid. Trying to allocate the surplus equitably means that I have to carefully strategize on understating my desperation. (And worst of all, a mistake means a highly inefficient outcome!)

PS: To be clear about first-price vs second-price, it's technically neither since there's no distinct seller.

Here's the n-player, arbitrary shares version:

Each participant starts with some share of the decision. Everyone submits a sealed bid, the second-highest of which is taken to be the Fair Market Price (FMP). The high bidder wins, and buys out everyone else's shares, ie, pays them the appropriate fraction of the FMP.

"Even yootling", or just "yootling", refers to the special case of two players and 50/50 shares. In that case, instead of bidding a fair market price (FMP), you say how much you're willing to pay if you win. True FMP is twice that, since you only have to pay half of FMP with even yootling. So instead of deciding what you'd pay, doubling it to get FMP, then halving FMP to get the actual payment, we short circuit that and you just say the payment as your bid. For yootling with uneven shares it's easier to bid FMP and then pay the appropriate fraction of that.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T19:11:20.277Z · LW · GW

Bethany and I philosophically bite the bullet on this, which is basically to just agree with your second point: the wealthy person gets their way all the time and the poor person gets what's to them a lot of money and everyone is happy.

If that's unpalatable or feels unfair then I think the principled solution is for the wealthy person to simply redress the unfairness with a lump sum payment to redistribute the wealth.

I don't think it's reasonable -- ignoring all the psychology and social intricacies, as I'm wont to do [1] -- to object both to auctions with disparate wealth and to lump sum redistribution to achieve fairness.

Now that I'm introspecting, I suppose it's the case that Bethany and I tend to seize excuses to redistribute wealth, but they have to be plausible ones.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T18:53:56.798Z · LW · GW

You're right that it's similar to a Vickrey auction in that the 2nd highest bid (in the 2-player case) is used as the price, but it's different in that there's no 3rd-party seller. The good is jointly owned and the payment will go from one player to the other. In particular, yootling is not strictly incentive compatible like Vickrey is (though in practice it seems to be close enough).

Thanks for the pointer to Landsburg! Looks like he worked out a way (by enlisting another economist couple) to have meaningful auctions despite having joint money with his spouse. I predict that system didn't hold together though. I should email him!

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T07:27:46.474Z · LW · GW

Specifically, here's the little add-on for Loqi that conducts auctions: https://github.com/aaronpk/zenircbot-bid

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T07:25:00.195Z · LW · GW

Agreed, we just haven't gotten to that yet. The auctioneer chatroom bot is pretty new.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T07:23:20.035Z · LW · GW

Upvoted for the delightfully flattering implication for my and Bethany's relationship. :)

But, yes, a prerequisite is that everyone think like an economist, where everything you care about can be assigned a dollar value.

See also the core assumptions at the top of Bethany's article [http://messymatters.com/autonomy].

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T07:18:13.946Z · LW · GW

We have a protocol for deciding when to yootle: if the possibility of yootling is so much as mentioned then we must yootle. The only fair way to object to yootling is to dispute that it's a 50/50 decision. If it is a fundamentally joint decision then how would you object? "I want to get my way but not pay anything"? Not so nice. You could say "I don't want to yootle, I'll just do it your way". But that's equivalent to bidding 0, so might as well go through with the yootling. And after 9 years we do have quite efficient ways to conduct these auctions, with fingers or our phones or out loud.

Comment by dreeves on Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" · 2014-01-22T06:56:15.788Z · LW · GW

Great question, and upon reflection (I actually looked this up in my PhD dissertation just now!) I agree. I actually can't remember the last time Bethany and I used a joint purchase auction. For some reason it never comes up -- we just each buy things and don't worry about joint ownership. If we did disagree about whether to buy a household item we'd probably just straight up yootle for whether to buy it (with the cost split 50/50 if we did).

Comment by dreeves on Group Rationality Diary, December 16-31 · 2013-12-17T07:15:02.051Z · LW · GW

Holy cow, thank you so much for this. Speaking of WTF reactions, I hope that won't be how this is perceived. Yours is a perfect example of both the insidiousness and the genius of Beeminder's exponential pledge schedule.

The fact that there's no doubt in your mind that you got more value out of Beeminder than the $130-something you paid is, I hope, evidence that it's more genius than insidiousness. :)

Yours is a textbook case of using Beeminder exactly as intended, to ride the pledge schedule up to the point where the amount of money at risk scares you into never actually paying it. For some people paying even the first $5 is sufficiently aversive. Others go all the way to $810, which has been, almost universally, sufficient to keep people toeing the line. (Ie, only one person has ever actually defaulted with $810 at stake.)

Some people (Katja Grace is an example) prefer to cap the amount at risk and are happy to pay a small fee occasionally. That has the danger of being more expensive in the long term as each particular derailment isn't a big deal and you can keep delusionally being like "ok, but this time for real!". Mostly, though, I think it depends on the severity of the akrasia for the specific thing you're beeminding.
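To make the ride-it-up dynamic concrete (a sketch assuming Beeminder's roughly-tripling pledge steps; the derailment pattern is invented):

```python
pledges = [5, 10, 30, 90, 270, 810]  # the roughly-tripling schedule

# Ride the schedule up: pay each pledge you derail on, until the next
# amount at stake is scary enough that you never derail again.
derailed_through = 90  # hypothetical: derailed at $5, $10, $30, $90
paid = sum(p for p in pledges if p <= derailed_through)
print(paid)  # 135 -- in the ballpark of the "$130-something" above
```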

Comment by dreeves on Group Rationality Diary, October 1-15, plus frequency poll · 2013-10-08T23:21:35.902Z · LW · GW

I very much agree with the parenthetical about pushups. I beemind 30 pushups per day -- http://beeminder.com/d/push -- with the idea that I'll gradually ramp that up as my max reps increases. Except I'm failing to ever do that and have been at 30/day forever. If I cared more I'd ramp it up though. Right now I'm just happy to be forced to maintain some semblance of baseline upper-body strength.

The general point: beemind inputs, not outputs. Ie, things you have total control over.

PS: The Beeminder android app has a pushup counter built in, where you put your phone on the floor and touch your nose to it on each pushup and it tallies them for you.

Comment by dreeves on Rationality, competitiveness and akrasia · 2013-10-03T06:31:11.750Z · LW · GW

Pomodoros is a great metric. Katja Grace makes the case for that here: http://www.overcomingbias.com/2012/08/on-the-goodness-of-beeminder.html (she just calls them blocks of time).

I think raw number of hours is a fine metric too though. Discretizing into pomodoros has both advantages and disadvantages.

If you can quantify actual output, that might be ideal. Like how we track User-Visible Improvements to Beeminder. You might expect that to be too fuzzy a metric but we found a criterion that's been rock solid for years now: If we're willing to publicly tweet it then it counts. Pride prevents us from ever getting too weaselly about it.

Comment by dreeves on Open thread, August 19-25, 2013 · 2013-09-23T01:37:26.029Z · LW · GW

Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do then apply the requisite technology to fix it (glasses, commitment devices, decision auctions).

But actually I think decision auctions are different. There's no such thing as not having the problem they solve. Preferences will conflict sometimes. Just that normal people have perfectly adequate approximations (turn taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.

Comment by dreeves on Post ridiculous munchkin ideas! · 2013-05-20T18:21:40.616Z · LW · GW

See also the digit-sound method: http://www.decisionsciencenews.com/2012/01/06/how-to-remember-numbers/

(I have the vague intention to create a handy tool based on that, which I'd call digimaphone: http://digimaphone.com )
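If I'm remembering the linked method right (it's the classic Major mnemonic system; this digit-to-sound mapping is one common variant), a sketch:

```python
# Digits map to consonant sounds; vowels are free, so you can turn
# any number into a pronounceable word. E.g. 3,2 -> m,n -> "moon".
MAJOR = {"0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
         "5": "l", "6": "j/sh/ch", "7": "k/g", "8": "f/v", "9": "p/b"}

def sounds(number: str) -> str:
    return " ".join(MAJOR[d] for d in number)

print(sounds("32"))  # "m n" -> eg, "moon"
```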

Comment by dreeves on Post ridiculous munchkin ideas! · 2013-05-20T16:43:36.326Z · LW · GW

Too funny; those are the middle names of my kids! :)

Comment by dreeves on Why is it rational to invest in retirement? I don't get it. · 2013-05-20T15:40:55.308Z · LW · GW

I wrote an article with a similar conclusion: http://messymatters.com/savings

It includes this caricature of traditional financial advice: "You want to stop working when you’re 60ish, right? And you don’t want to be dirt poor at that point, right? So here’s what you do: live as if you’re dirt poor from now till you’re 60. Problem solved."

Comment by dreeves on Programming the LW Study Hall · 2013-03-15T00:50:04.545Z · LW · GW

Gah! :) I did not think of that! Kind of like how I did not think of how much "beeminder" looks like "beerminder".

Comment by dreeves on Programming the LW Study Hall · 2013-03-14T23:22:47.605Z · LW · GW

I have a donation to the cause: the domain "pomochat.com". (I owe the LessWrong community bigtime -- I don't think Beeminder would've gotten off the ground without it!)

I bequeath the domain with no strings attached. I can transfer ownership of the domain or just point it at wherever folks suggest. Assuming of course that no one comes up with a better domain!

Comment by dreeves on MetaMed: Evidence-Based Healthcare · 2013-03-07T21:09:34.717Z · LW · GW

Interesting question! Since it's an especially interesting question for those not fully in the in-crowd I thought it might be worth rephrasing in less technical language:

Is MetaMed composed of LessWrong folks or significantly influenced by LessWrong folks, or that style of thinking? If so, this sounds like a great test of the real-world efficacy of LessWrong ideas. In other words, if MetaMed succeeds that's some powerful evidence that this rationality shit works! (And to be intellectually honest we have to also precommit to admitting that -- should MetaMed fail -- it's evidence that it doesn't.)

PS: Since Michael Vassar is involved it's safe to say the answer to the first part is yes!

Comment by dreeves on Co-Working Collaboration to Combat Akrasia · 2013-03-07T18:12:22.884Z · LW · GW

Shannon, this sounds really valuable! Thanks to you and Mqrius for kicking this off.

I just wanted to mention that if there's demand for more social features in Beeminder, we're definitely listening. (Outsiders often tell us we should have more social features but LessWrong (and similar communities like Quantified Self) are our bread and butter so if we hear it here we'll pay more attention.)

Comment by dreeves on Nov 16-18: Rationality for Entrepreneurs · 2012-11-11T09:15:29.618Z · LW · GW

Thanks so much, Robert!

And breaking news: I'm now part of the program!

(I'm really excited about this!)