Posts
Comments
Ah, thank you! Sounds like Obsidian users will find this more convenient than eat-the-richtext. Maybe we could start a list of other editors or tools that solve this problem...
A couple days ago I wanted to paste a paragraph from Sarah Constantin's latest post on AGI into Discord and of course the italicizing disappeared which drives me bananas and I thought there must exist tools for solving that problem and there are but they're all abominations so I said to ChatGPT (4o),
can you build a simple html/javascript app with two text areas. the top text area is for rich text (rtf) and the bottom for plaintext markdown. whenever any text in either text area changes, the app updates the other text area. if the top one changes, it converts it to markdown and updates the bottom one. if the bottom one changes, it converts it to rich text and updates the top one.
aaaand it actually did it and I pasted it into Replit and... it didn't work but I told it what errors I was seeing and continued going back and forth with it and ended up with the following tool without touching a single line of code: eat-the-richtext.dreev.es
PS: Ok, I ended up going back and forth with it a lot (12h45m now in total, according to TagTime) to get to the polished state it's in now with tooltips and draggable divider and version number and other bells and whistles. But as of version 1.3.4 it's 100% ChatGPT's code with me guiding it in strictly natural language.
Here's a decade-old gem from Scott Alexander, "If Climate Change Happened To Have Been Politicized In The Opposite Direction"
I think this is a persuasive case that commitment devices aren't good for you. I'm very interested in how common this is, and if there's a way you could reframe commitment devices to avoid this psychological reaction to them. One idea is to focus on incentive alignment that avoids the far end of the spectrum. With Beeminder in particular, you could set a low pledge cap and then focus on the positive reinforcement of keeping your graph pretty by keeping the datapoints on the right side of the red line.
I guess in practice it'd be the tiniest shred of plausible deniability. If your prior is that alice@example.com almost surely didn't enter the contest (p=1%) but her hash is in the table (which happens by chance with p=1/1000) then you Bayesian-update to a 91% chance that she did in fact enter the contest. If you think she had even a 10% chance on priors then her hash being in the table makes you 99% sure it's her.
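That Bayesian update can be sanity-checked numerically. Here's a minimal sketch, using the numbers from above (1% and 10% priors, 1/1000 chance collision rate), assuming a hash match is certain if she entered:

```python
def posterior_entered(prior, collision_rate):
    """P(entered | hash in table): a match is certain if she entered,
    and happens by chance with collision_rate if she didn't."""
    return prior / (prior + (1 - prior) * collision_rate)

print(round(posterior_entered(0.01, 1 / 1000), 2))  # 0.91
print(round(posterior_entered(0.10, 1 / 1000), 2))  # 0.99
```

So even a 1-in-1000 collision rate only buys a shred of deniability once the prior is non-negligible, matching the 91% and 99% figures above.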
To make sure I understand this concern:
It may be better to use a larger hash space to avoid internal (within the data set) collisions, but then you also lower the number of external collisions.
Are you thinking someone may want plausible deniability? "Yes, my email hashes to this entry with a terrible Brier score but that could've been anyone!"
This should be fine. In past years, Scott has had an interface where you could enter your email address and get your score. So the ability to find out other people's scores by knowing their email address is apparently not an issue. And it makes sense to me that one's score in this contest isn't particularly sensitive private information.
Source: Comment from Scott on the ACX post announcing the results
"At some point one of those groups will be devoured by snakes" is erroneous
I wouldn't say erroneous but I've added this clarification to the original question:
"At some point one of those groups will be devoured by snakes and then I stop" has an implicit "unless I roll snake eyes forever". I.e., we are not conditioning on the game ending with snake eyes. The probability of an infinite sequence of non-snake-eyes is zero and that's the sense in which it's correct to say "at some point snake eyes will happen" but non-snake-eyes forever is possible in the technical sense of "possible".
It sounds contradictory but "probability zero" and "impossible" are mathematically distinct concepts. For example, consider flipping a coin an infinite number of times. Every infinite sequence like HHTHTTHHHTHT... is a possible outcome but each one has probability zero.
So I think it's correct to say "if I flip a coin long enough, at some point I'll get heads" even though we understand that "all tails forever" is one of the infinitely many possible sequences of coin flips.
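To make the distinction concrete, here's a minimal sketch: the probability of n tails in a row is (1/2)^n, which is strictly positive for every finite n but tends to zero, so "all tails forever" ends up with probability zero while every finite prefix of it remains possible.

```python
from fractions import Fraction

def p_all_tails(n):
    """Probability that n fair-coin flips all come up tails."""
    return Fraction(1, 2) ** n

for n in (1, 10, 100):
    print(n, float(p_all_tails(n)))
# Every finite run of tails has positive probability, but the
# probability shrinks toward 0 as n grows -- "probability zero"
# in the limit, yet never "impossible" at any finite stage.
```

The same argument applies to any particular infinite sequence like HHTHTT..., which is why each one is possible but has probability zero.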
Just set it up in the Beeminder work Slack and I am immediately in love 😍
First forecast: Will at least 4 of us (including me) play this reindeer game? (96% probability so far)
Ooh, I think there's a lot of implicit Beeminder criticism here that I'm eager to understand better. Thanks for writing this up!
We previously argued against similar claims -- https://blog.beeminder.com/blackmail/ -- and said that the "just get the different parts of yourself to get along" school of thought was insufficiently specific about how to do that. But here you've suggested some smart, specific ideas and they sound good!
My other Beeminder defense is that there are certain bare minimums that you know it would be irrational to fall below. So I recommend having the Beeminder goal as insurance and then also implementing all the strategies you describe. If those strategies work and it's easy-peasy to stay well above Beeminder's bright red line, then wonderful. Conflict avoided. If those strategies happen to fail, Beeminder will catch you. (Also you get a nice graph of your progress, quantified-self-style.)
PS: More recently we had a post about how compatible Beeminder turns out to be with CBT which I think also argues against the dichotomy you're implying here with Conflict vs Cooperation. https://blog.beeminder.com/cbt/
Btw, Scott mentioned having to read a bunch to figure out the subtle difference between loss aversion and the endowment effect. I attempted a full explainer: https://blog.beeminder.com/loss/
I don't necessarily endorse it but the moral argument would go like so: "I'm definitely not going to pay to read that article so me bypassing the paywall is not hurting the newspaper. The marginal cost is zero. Stealing from a kiosk, on the other hand, deprives the newspaper of a sale (and is just obvious plain old stealing)." In other words, "I'm not stealing a newspaper from the kiosk, I'm just breaking in, photocopying it real quick, and putting it right back. No harm no foul!"
A counterargument might be that you're contributing to demand for paywall-bypassing which does deprive the newspaper of sales, just less directly.
This list is pretty amazing (and I'm not just saying that because Beeminder is on it!) and you've persuaded me on multiple things already. Some comments and questions:
- CopyQ: I use Pastebot and I see there are So Many of these and would love a recommendation from someone who feels strongly that there's a particular one I should definitely be using.
- Google Docs quick-create: You inspired me to make a link in my bookmarks bar (built in to Chrome) to https://doc.new which I think is simpler and just as good. (Yes, it's kind of ridiculous that "doc.new" is their URL for that instead of something sane like "docs.google.com/new".)
- Beeminder vs StickK: Obviously I'm absurdly biased but I can't imagine anyone here preferring StickK and it would actually help us a ton to understand why someone might prefer StickK. Their referee feature is better, but everything else is so much worse! Especially anti-charities -- those are an abomination from an EA perspective, right??
- Pocket: We have an argument at https://blog.beeminder.com/pocket for why this is a big deal. (I guess I should also mention our Habitica integration; and Focusmate is probably coming soon!)
Good question and good answers! Someone mentioned that the fancy/expensive Beemium plan lets you cap pledges at $0. On the non-premium plan you can cap pledges at $5, so another conceivable solution is to combine that + a conservative slope on your graph + setting alarms or something? + chalking up occasional failures, if rare enough, as effectively the cost of the service.
Or like another person said, you can make the slope zero (no commitment at all), but that may defeat the point, with the graph offering no guidance on how much you'd like to be doing.
PS: Of course this was also prompted by us nerding out about your and Marcus's vows so thank you again for sharing this. I'm all heart-eyes every time I think about it!
Ah, super fair. Splitting any outside income 50/50 would still work, I think. But maybe that's not psychologically right in y'all's case, I don't know. For Bee and me, the ability to do pure utility transfers feels like powerful magic!
Me to Bee while hashing out a decision auction today that almost felt contentious, due to messy bifurcating options, but then wasn't:
I love you and care deeply about your utility function and if I want to X more than you want to Y then I vow to transfer to you U_you(Y)-U_you(X) of pure utility! [Our decision auction mechanism in fact guarantees that.]
Then we had a fun philosophical discussion about how much better this is than the Hollywood concept of selfless love where you set your own utility function to all zeros in order for the other's utility function to dominate. (This falls apart, of course, because of symmetry. Both of us do that and where does that leave us?? With no hair, an ivory comb, no watch, and a gold watchband, is where!)
Ooh, this is exciting! We have real disagreements, I think!
It might all be predicated on this: Rather than merge finances, include in your vows an agreement to, say, split all outside income 50/50. Or, maybe a bit more principled, explicitly pay your spouse for their contributions to the household.
One way or another, rectify whatever unfairness there is in the income disparity directly, with lump-sum payments. Then you have financial autonomy and can proceed with mechanisms and solution concepts that require transferrable utility!
I love this so much and Bee (my spouse) and I have started talking about it. Our first question is whether you intend to merge your finances. We think you shouldn't! Because having separate finances means having transferrable utility which puts more powerful and efficient and fair decision/bargaining mechanisms at your disposal.
My next question is why the KS solution vs the Nash solution to the bargaining problem?
But also are you sure the Shapley value doesn't make more sense here? (There's a Hart & Mas-Colell paper that looks relevant.) Either way, this may be drastically simplifiable for the 2-player case.
Thanks so much for sharing this. It's so sweet and nerdy and heart-warming and wonderful! And congratulations!
Oh, Quirrell is referring to what game theorists call Cheap Talk. If the thing I'm trying to convince you of is strictly in my own brain -- like whether I intend to cooperate or defect in an upcoming Prisoner's Dilemma -- then any promises I make are, well, cheap talk. This is related to costly signals and strategic commitment, etc etc.
Anyway, I think that's the missing piece there. "Nothing you can do to convince me [about your own intentions] [using only words]".
This is indeed a fun way to illustrate Bayesian thinking! But I have a monkey wrench! There exist people who view smileys as almost explicitly connoting passive-aggression or sarcasm. Like the whole reason to add a smiley is to soften something mean. I'm not quite sure if there are enough such people to worry about but I think that that perception of smileys is out there.
Correction to the Ainslie link: http://picoeconomics.org/breakdown.htm
Hi from the future [1]! Beeminder has a version of this built in: the one-week akrasia horizon. You can change anything about a Beeminder goal, including ending it, at any time, but the change doesn't take effect for a week. As Katja Grace once said on Overcoming Bias: "[you] can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this)."
[1] I'm mildly terrified that it's against the norms to reply to something this old. I've been thinking hard about your (Scott's) recent ACX post, "Towards A Bayesian Theory Of Willpower," and am digging up all your previous thoughts on the topic, so here I am.
Good thought experiment! I replied in the form of another Yudkowsky vignette. :)
Summary: "Infinity" is a perfectly coherent Cheerful Price for, say, something sufficiently repugnant to you or something very unethical. (But also you must have a finite Cheerful Price for anything, no matter how bad, if the badness happens with sufficiently small probability.)
That reminds me of this delightful and hilarious (edit: and true!) thing Eliezer said once:
Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.
There doesn't have to be a financial price you'd accept to kill every sentient being on Earth except you. There doesn't even have to be a price you'd accept to kill your spouse. It's allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won't exchange it for a trillion dollars.
Now, it *does* have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have -- or any possible event you can bring about -- *at some probability* for that sum of money. So it *is* true that as a rational agent, there is some *probability* of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.
I hope that clears up exactly what sort of heartless creatures economically rational agents are.
Interesting! It hadn't occurred to me that this could be read as any kind of repudiation of "shut up and multiply". My previous comment on this post takes a stab at reconciling Cheerful Prices with my own extreme shut-up-and-multiply way of thinking.
Oh my goodness I love this. I'm actually so philosophically on board that I'm confused about treating Cheerful Prices as single real numbers. In my homo-economicus worldview, there exists a single price at which I'm exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it "the" cheerful price I have.
(My answer is to turn the nerdery up to 11 and compute a Shapley value, etc etc, but let me save that for another time or place. Jacob Falkovich and I have been talking about jointly blogging about this. We'll definitely want to tie it in to the concept of Cheerful Prices if we do!)
Translated into this delightful new language of Cheerful Prices, the rough version of my approach is like so:
I as the buyer name my lowest possible Cheerful Price (where I just barely find it worth it) and you as the seller name your highest possible Cheerful Price (above which it's just not worth it to you) and we settle on the mean of those two.
But maybe the point of Cheerful Prices is to simplify that. Let one person on one side of the trade make a guess about the consumer surplus and name something in that range. I.e., by naming my Cheerful Price I'm saying that at that price I'd be getting a big enough chunk of the consumer surplus that I don't need to know the size of your chunk. If you, as my counterparty, feel the same then we're golden.
Really good points. It's funny, I have a draft of a similar point about personal behavior change that I tried to make as provocative-sounding as possible:
http://doc.dreev.es/carbonfoot (Trying To Limit Your Personal Carbon Footprint Hurts The Environment)
But note the PS where I suggest a counterargument: making personal sacrifices for climate change may shape your identity, drive you to greater activism, and make your activism and climate evangelism more persuasive (to those who don't appreciate the economics and game theory of it).
Nice! I've heard a similar idea called a "talent stack" or "skill stack" but explaining it in terms of staking out a chunk of the Pareto frontier is much better.
Coincidentally, I just wrote a post explaining the idea of Pareto dominance -- http://blog.beeminder.com/pareto -- in case that's useful to anyone.
Now resurrected!
Thank you! See above ("Better to not have people feel like their desperation is being capitalized on.") for my response to your first question. And we actually believe that our system is, in practice if not in theory, strategy-proof. It's explicitly ok to game the system to our hearts' delight. It seems to be quite robust to that. Our utilities tend to either be uncannily well-matched, in which case it's kind of a coin flip who wins, or they're wildly different, but we never seem to have enough certainty about how different they'll be for it to be fruitful to distort our bids much.
The strategy of "just say a number such that you're torn about whether you'd rather win or lose" seems to be close enough to optimal.
How about adding a tiny bit of ambiguity (or evasion of the direct question) and making up for it with more effusiveness, eg, "it's not only my job but it feels really good to know that I'm helping you so I really want you to bug me about even trivial-seeming things!" All true and all she's omitting is her immediate annoyance but that is truly secondary, as she points out below about first-order vs second-order desires.
Yes, we're super keen to make sure the efficient thing happens regardless of the initial distribution of resources/responsibilities/property-rights/etc. And we use yootling as a bargaining mechanism to make that happen. In general we're always willing to shove work to each other or redistribute resources as efficiency dictates, using payments to make that always be fair.
In practice the sealed-bid version seems to be ungameable, at least for us! None of the problems you mentioned have arisen. My parents have tried this and had more problems but as far as I could tell it always involved contention about what to consider to be joint 50/50 decisions. Bethany and I seem to have no problem with that, using the heuristic of "when in doubt, just call it a 50/50 decision and yootle for it".
Fixed and fixed. Thank you!
I'm impressed! That's kind of the conclusion we gradually came to as well, after a lot of trial and error. Better to not have people feel like their desperation is being capitalized on.
Another way to put it: when you're really desperate to win a particular auction it's really nice to be able to just say so honestly, with a crazy high bid. Trying to allocate the surplus equitably means that I have to carefully strategize on understating my desperation. (And worst of all, a mistake means a highly inefficient outcome!)
PS: To be clear about first-price vs second-price, it's technically neither since there's no distinct seller.
Here's the n-player, arbitrary shares version:
Each participant starts with some share of the decision. Everyone submits a sealed bid, the second-highest of which is taken to be the Fair Market Price (FMP). The high bidder wins, and buys out everyone else's shares, ie, pays them the appropriate fraction of the FMP.
"Even yootling", or just "yootling", refers to the special case of two players and 50/50 shares. In that case, instead of bidding a fair market price (FMP), you say how much you're willing to pay if you win. True FMP is twice that, since you only have to pay half of FMP with even yootling. So instead of deciding what you'd pay, doubling it to get FMP, then halving FMP to get the actual payment, we short circuit that and you just say the payment as your bid. For yootling with uneven shares it's easier to bid FMP and then pay the appropriate fraction of that.
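The general mechanism above can be sketched in a few lines. This is a minimal illustration, not any official implementation, and it uses the bid-the-FMP convention from the n-player description (in the even 50/50 shorthand you'd bid the payment directly, which is just this with bids doubled):

```python
def yootle(bids, shares):
    """N-player decision auction as described above: each participant
    submits a sealed bid of fair market price (FMP); the second-highest
    bid is taken as the FMP; the high bidder wins and buys out each
    other participant's share at that price.

    bids, shares: dicts mapping participant -> FMP bid / initial share.
    Shares must sum to 1. Returns (winner, payments), where payments
    maps each non-winner to what the winner pays them.
    """
    assert abs(sum(shares.values()) - 1) < 1e-9, "shares must sum to 1"
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    fmp = bids[ranked[1]]  # second-highest bid sets the price
    payments = {p: shares[p] * fmp for p in bids if p != winner}
    return winner, payments

# Two players, 50/50 shares: the loser's bid is the FMP and the
# winner pays half of it.
winner, payments = yootle({"dan": 10, "bee": 6}, {"dan": 0.5, "bee": 0.5})
print(winner, payments)  # dan {'bee': 3.0}
```

With uneven shares the same code applies unchanged: each non-winner is simply paid their share of the FMP.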
Bethany and I philosophically bite the bullet on this, which is basically to just agree with your second point: the wealthy person gets their way all the time and the poor person gets what's to them a lot of money and everyone is happy.
If that's unpalatable or feels unfair then I think the principled solution is for the wealthy person to simply redress the unfairness with a lump sum payment to redistribute the wealth.
I don't think it's reasonable -- ignoring all the psychology and social intricacies, as I'm wont to do [1] -- to object both to auctions with disparate wealth and to lump sum redistribution to achieve fairness.
Now that I'm introspecting, I suppose it's the case that Bethany and I tend to seize excuses to redistribute wealth, but they have to be plausible ones.
You're right that it's similar to a Vickrey auction in that the 2nd highest bid (in the 2-player case) is used as the price, but it's different in that there's no 3rd-party seller. The good is jointly owned and the payment will go from one player to the other. In particular, yootling is not strictly incentive compatible like Vickrey is (though in practice it seems to be close enough).
Thanks for the pointer to Landsburg! Looks like he worked out a way (by enlisting another economist couple) to have meaningful auctions despite having joint money with his spouse. I predict that system didn't hold together though. I should email him!
Specifically, here's the little add-on for Loqi that conducts auctions: https://github.com/aaronpk/zenircbot-bid
Agreed, we just haven't gotten to that yet. The auctioneer chatroom bot is pretty new.
Upvoted for the delightfully flattering implication for my and Bethany's relationship. :)
But, yes, a prerequisite is that everyone think like an economist, where everything you care about can be assigned a dollar value.
See also the core assumptions at the top of Bethany's article [http://messymatters.com/autonomy].
We have a protocol for deciding when to yootle: if the possibility of yootling is so much as mentioned then we must yootle. The only fair way to object to yootling is to dispute that it's a 50/50 decision. If it is a fundamentally joint decision then how would you object? "I want to get my way but not pay anything"? Not so nice. You could say "I don't want to yootle, I'll just do it your way". But that's equivalent to bidding 0, so might as well go through with the yootling. And after 9 years we do have quite efficient ways to conduct these auctions, with fingers or our phones or out loud.
Great question, and upon reflection (I actually looked this up in my PhD dissertation just now!) I agree. I actually can't remember the last time Bethany and I used a joint purchase auction. For some reason it never comes up -- we just each buy things and don't worry about joint ownership. If we did disagree about whether to buy a household item we'd probably just straight up yootle for whether to buy it (with the cost split 50/50 if we did).
Holy cow, thank you so much for this. Speaking of WTF reactions, I hope that won't be how this is perceived. Yours is a perfect example of both the insidiousness and the genius of Beeminder's exponential pledge schedule.
The fact that there's no doubt in your mind that you got more value out of Beeminder than the $130-some dollars you paid is I hope evidence that it's more genius than insidiousness. :)
Yours is a textbook case of using Beeminder exactly as intended, to ride the pledge schedule up to the point where the amount of money at risk scares you into never actually paying it. For some people paying even the first $5 is sufficiently aversive. Others go all the way to $810, which has been, almost universally, sufficient to keep people toeing the line. (Ie, only one person has ever actually defaulted with $810 at stake.)
Some people (Katja Grace is an example) prefer to cap the amount at risk and are happy to pay a small fee occasionally. That has the danger of being more expensive in the long term as each particular derailment isn't a big deal and you can keep delusionally being like "ok, but this time for real!". Mostly, though, I think it depends on the severity of the akrasia for the specific thing you're beeminding.
I very much agree with the parenthetical about pushups. I beemind 30 pushups per day -- http://beeminder.com/d/push -- with the idea that I'll gradually ramp that up as my max reps increases. Except I'm failing to ever do that and have been at 30/day forever. If I cared more I'd ramp it up though. Right now I'm just happy to be forced to maintain some semblance of baseline upper-body strength.
The general point: beemind inputs, not outputs. Ie, things you have total control over.
PS: The Beeminder android app has a pushup counter built in, where you put your phone on the floor and touch your nose to it on each pushup and it tallies them for you.
Pomodoros is a great metric. Katja Grace makes the case for that here: http://www.overcomingbias.com/2012/08/on-the-goodness-of-beeminder.html (she just calls them blocks of time).
I think raw number of hours is a fine metric too though. Discretizing into pomodoros has both advantages and disadvantages.
If you can quantify actual output, that might be ideal. Like how we track User-Visible Improvements to Beeminder. You might expect that to be too fuzzy a metric but we found a criterion that's been rock solid for years now: If we're willing to publicly tweet it then it counts. Pride prevents us from ever getting too weaselly about it.
Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do then apply the requisite technology to fix it (glasses, commitment devices, decision auctions).
But actually I think decision auctions are different. There's no such thing as not having the problem they solve. Preferences will conflict sometimes. Just that normal people have perfectly adequate approximations (turn taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.
See also the digit-sound method: http://www.decisionsciencenews.com/2012/01/06/how-to-remember-numbers/
(I have the vague intention to create a handy tool based on that, which I'd call digimaphone: http://digimaphone.com )
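For the curious, the digit-sound method (aka the Major system) maps each digit to a consonant sound, so a number becomes a consonant skeleton you can flesh out into a memorable word. A hypothetical sketch of the core lookup such a tool might start from (the function name is mine, not from any existing tool):

```python
# Conventional Major-system digit-to-consonant mapping (simplified;
# some variants also include sounds like th, soft g, etc.).
MAJOR = {
    "0": ["s", "z"],
    "1": ["t", "d"],
    "2": ["n"],
    "3": ["m"],
    "4": ["r"],
    "5": ["l"],
    "6": ["j", "sh", "ch"],
    "7": ["k", "g"],
    "8": ["f", "v"],
    "9": ["p", "b"],
}

def sounds_for(number):
    """Consonant-sound skeleton for a number, one sound-group per
    digit. E.g., 32 -> m, n -- suggesting a word like "moon"."""
    return [MAJOR[d] for d in str(number)]

print(sounds_for(32))  # [['m'], ['n']]
```

Vowels carry no value in the system, so you're free to pad the skeleton with whatever vowels make the most vivid word.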
Too funny; those are the middle names of my kids! :)
I wrote an article with a similar conclusion: http://messymatters.com/savings
It includes this caricature of traditional financial advice: "You want to stop working when you’re 60ish, right? And you don’t want to be dirt poor at that point, right? So here’s what you do: live as if you’re dirt poor from now till you’re 60. Problem solved."
Gah! :) I did not think of that! Kind of like how I did not think of how much "beeminder" looks like "beerminder".