Could we solve this email mess if we all moved to paid emails?
post by jacobjacob · 2019-08-11T16:31:10.698Z · LW · GW · 27 comments
This is a question post.
Contents
- Background
- Costly signalling and avoiding information asymmetries
- How the signalling problem is currently solved, and why that’s bad
- Brief FAQ
  - What if replacing email with paid emails puts us in another equilibrium that’s bad for unexpected reasons?
  - What if people don’t have enough money?
  - Wouldn’t this waste a lot of money?
- If this is basically right: then what do we do?
- Answers: Kaj_Sotala (10), Stuart Anderson (9), ChristianKl (6), spencerb (4), Dagon (2)
- 27 comments
Have you ever…
- Sent an email to someone in rationality and not heard back for many weeks (or more)?
- Avoided sending an email to someone because you wanted to spare their attention, despite thinking there was a fair chance they’d be genuinely interested?
- Wanted some way to signal that you actually cared more than usual about this email, but without having to burn social capital (such as by saying “urgent” or “please read”)?
- Had to ignore an email because, even though it might have been interesting, figuring that out would simply have been too effortful?
I think that 1) problems like these are prevalent, 2) they have pretty bad consequences, and 3) they could be partly solved by using services where you can pay to send someone an email (N.B. payment is conditional on reply).
I’m considering running a coordination campaign to move the community to using paid emails (in addition to their ordinary inbox), but before launching that unilaterally I want more confidence it is a good idea.
It would be very helpful data if people who'd use this if >=50 other people also did would post just saying "I'd use this if >=50 particular other people did."
Background
Email seems broken. This is not that surprising: your email is basically a to-do list where other people (and companies) can add items for free, without asking; and where you’re the only one who can remove them. We should do something about this.
More broadly, the attention economy seems broken [LW · GW]. Recognising this, many rationalists use various software tools to protect themselves from apps that are engineered to be addictive. This helps at an individual level, but it doesn’t help solve the collective action problem of how to allocate our attention as a community. We should do something about this.
Costly signalling and avoiding information asymmetries
An “information asymmetry” is a situation where someone has true information which they are unable to communicate. For example, suppose 10 economists are trying to influence government policy on issue X, and one of them actually, really knows what the most effective thing is. Yet, they might not be able to communicate this to the decision-makers, since the remaining 9 have degrees from equally prestigious institutions and arguments that sound equally rigorous to someone without formal training in economics. Information asymmetries are a key mechanism that generate bad equilibria.
When it comes to email, this might look as follows: Lots of people write to senior researchers asking for feedback on papers or ideas, yet most of them are crackpots or their ideas uninteresting, so most such email is not worth reading. A promising young researcher without many connections would want their feedback (and the senior researcher would want to give it!), but it simply takes too much effort to figure out that the paper is promising, so it never gets read. In fact, expecting this, the junior researcher might not even send it in the first place.
This could be avoided if people who genuinely believed their stuff was important could pay some money as a costly signal of this fact. Actual crackpots could of course also pay up, but 1) they might be less likely to, and 2) the payment would offset some of the cost of the recipient figuring out whether the email is important or not.
How the signalling problem is currently solved, and why that’s bad
Currently, the signalling problem is solved by things like:
- Spending lots of effort crafting interesting-sounding intros which signal that the thing is worth reading, instead of just getting to the point
- Burning social capital -- adding tags like “[Urgent]” or “[Important]” to the subject line
This is bad, because:
1) It’s a slippery slope to a really bad equilibrium. I’ve gotten emails with titles like “Jacob, is everything alright between us?” because I didn’t buy a water bottle from some company. This is what we should expect when companies fight for my attention without any way to just directly pay for it. Even within the rationality community, if our only way of allocating importance is by drawing upon very serious vocabulary, we’ll create an incentive for exaggeration, differentially favouring those less scrupulous about this practice [LW · GW], and chip away at our ability to use shared-cues-of-importance when it really matters.
2) The main thing protecting us from this inside a smaller community is that people want to preserve their reputations. But if you’re unsure how important your thing is, and mislabeling it means potentially crying-wolf and risking your reputation, this usually makes it more worth it to just avoid the tag. Which means that we lose out on all those times when your thing actually was important and using the tag would have communicated that.
3) It puts the recipient between a rock and a hard place, and they’re not being compensated for it. If you mark something as “[Urgent]” that actually is urgent, and the person responds and does what you want, you’ve still presented them with the choice between sacrificing some ability to freely prioritise their tasks, and sacrificing some part of the quality of your relationship. There should be some easy way for you to compensate them for that.
4) It’s way too coarse-grained. There’s not really any way of saying:
“This is kinda important, but not that urgent, though it would probably be good if you read it at some point, though that depends on what else is on your plate”
apart from writing exactly that -- but then you’re making a complicated cognitive demand [LW · GW], which has already burnt lots of attention for the recipient.
Brief FAQ
What if replacing email with paid emails puts us in another equilibrium that’s bad for unexpected reasons?
At the moment, it doesn’t seem feasible for us to use this to replace email. There isn’t even software available for doing that completely. Rather, people would consent to receiving paid messages (for example via earn.com, see below) in addition to having their regular inbox.
What if people don’t have enough money?
As mentioned above, sending standard emails is still an option. Yet this becomes a problem if we move to an equilibrium where a standard email is taken to signal “I didn’t pay for this, so it’s not that important”. Then I can imagine grants for “email costs” being a thing, or that the benefits of the new equilibrium outweigh this cost, or that they don’t. I’m uncertain.
Wouldn’t this waste a lot of money?
Not really, assuming that the people who you send money to are at least as effective at spending it as you are, which seems likely if this gets used within the rationality community.
If this is basically right: then what do we do?
Earn.com is a site which offers paid emails. For example, you can pay to message me at earn.com/jacobjacob/ EDIT: note that payment is conditional on actually getting a reply.
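For concreteness, the reply-conditional payment mechanism works like an escrow: the fee leaves the sender's balance on send, is transferred to the recipient only if they reply, and is refunded if the message goes unanswered. Here is a toy sketch of that flow (the class and method names are illustrative assumptions, not earn.com's actual API):

```python
from dataclasses import dataclass


@dataclass
class PaidMessage:
    sender: str
    recipient: str
    amount: float
    replied: bool = False


class Escrow:
    """Toy model of reply-conditional payment: the fee is held in
    escrow and only transferred if the recipient actually replies."""

    def __init__(self):
        self.balances = {}   # account -> net balance change
        self.pending = []    # messages awaiting a reply

    def send(self, sender, recipient, amount):
        # Fee is deducted immediately, but the recipient gets nothing yet.
        self.balances[sender] = self.balances.get(sender, 0.0) - amount
        msg = PaidMessage(sender, recipient, amount)
        self.pending.append(msg)
        return msg

    def reply(self, msg):
        # Only on reply does the fee move to the recipient.
        msg.replied = True
        self.balances[msg.recipient] = self.balances.get(msg.recipient, 0.0) + msg.amount

    def expire(self, msg):
        # No reply within the window: refund the sender.
        if not msg.replied:
            self.balances[msg.sender] = self.balances.get(msg.sender, 0.0) + msg.amount
```

Note the incentive this creates: replying is what costs the sender money, which is why ignoring a paid message is free for both parties.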
If this seems like something that could solve the current email mess, we should coordinate to get a critical mass of the community to sign up, and make their profile URLs available. (Compare this to how we’ve previously started using things like reciprocity.io and Calendly.)
I’d be happy to coordinate such a campaign, but I don’t want to do it until I’m more confident it would be a good thing.
(For the record, I have no relation to earn.com and would not benefit personally by others joining, beyond the obvious positive effects on the community. They simply seem like the best available option for doing this. They have a pretty solid team, and are used by some very senior VCs like Marc Andreessen and Keith Rabois.)
Answers
To the extent that I've experienced these kinds of problems, their core cause has been that I haven't had the time or energy to answer my messages, not that there would have been particularly many of them or because of any information asymmetry. So I wouldn't use this service because I don't recognize the problem that it's describing from my own experience.
↑ comment by jacobjacob · 2019-08-12T09:54:08.788Z · LW(p) · GW(p)
Thanks, this is a good data-point.
Though I want to ask: if people know that you have this problem, as things stand currently, they might just avoid messaging you (since they don't have any way of compensating you for marginally making the burden on you worse)? Moreover, your time and energy are presumably exchangeable for money (though not indefinitely so)?
So it still seems paid emails might help with that?
(PS. I don't think it's only about information asymmetries and having too many emails, though I realise the OP quite strongly implies that. Might rewrite to reflect that.)
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2019-08-12T10:49:58.510Z · LW(p) · GW(p)
Moreover, your time and energy are presumably exchangeable for money (though not indefinitely so)?
I guess to some extent, but it feels like the world is already full of things that I could be getting money for if I had the time and energy to pursue them on top of other things that I am already doing. I feel like monetary rewards for answering messages that I wouldn't have answered otherwise would have to be relatively significant to have an impact, like upwards from 20€ or something. But then in that case I would start feeling concerned that people who didn't want to spend that much money messaging me would feel discouraged from doing so.
Also, if I try to simulate in my head the experience of getting a paid email, it feels like a signal that the message isn't worth responding to? Like, some part of my mind has the model "if the sender knew that this was an important message, then they could count on me responding to it, so if they feel the need to add a monetary incentive they must know that this isn't very important". Or something - I'm not sure if I endorse that reasoning on an intellectual level, but it seems to trigger some kind of an emotional association to things like using money to buy status when you don't have other qualities that would earn you status. (In contexts like romantic relationships or an author paying money to a vanity press to self-publish when they can't get a real publisher to agree to publish their work and pay them.)
Re: this part in your post -
This could be avoided if people who genuinely believed their stuff was important could pay some money as a costly signal of this fact.
As I understand it, the point of a costly signal is that it's supposed to be relatively more affordable if you actually have that quality. If you have lots of health, then you can afford to burn health on things which aren't directly useful, more than people with little health can. But the amount of money that you have seems independent of how important your stuff is. You could be a millionaire who wanted my opinion on something totally unimportant. You say that actual crackpots might be less likely to pay, but I would expect that if anything they would be even more likely to pay.
Replies from: jacobjacob
↑ comment by jacobjacob · 2019-08-13T17:29:45.533Z · LW(p) · GW(p)
As I understand it, the point of a costly signal is that it's supposed to be relatively more affordable if you actually have that quality.
Yeah. In hindsight the terminology of "costly signal" is a bit unfortunate, because the payment here would actually work a bit like Mario's jump or LessWrong karma -- it's a very simple mechanism which can be co-opted to solve a large number of different problems. In particular, the money is not intended to be burnt (as would be the case with status signals, or proof-of-work as mentioned in some other comments), but actually paid to you.
Overall appreciate you writing up those points, they're pretty helpful in understanding how people might (and might not) use this.
-
↑ comment by jacobjacob · 2019-08-12T09:35:44.117Z · LW(p) · GW(p)
This seems to be saying:
"assuming a large team of full-time devs and ability to push whatever solution out to everyone who uses email, what should we build?"
which is quite different from what the post is asking for:
"should just the rationality community coordinate to make this move, to marginally adding on something to current email, using an already existing software tool?"
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-21T10:15:15.059Z · LW(p) · GW(p)
-
↑ comment by Matt Goldenberg (mr-hire) · 2019-08-11T22:34:18.209Z · LW(p) · GW(p)
What is the difference between proof of work (paying with electricity) or just paying with the much more fungible money?
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-12T07:44:40.237Z · LW(p) · GW(p)
-
Replies from: jacobjacob, rhollerith_dot_com
↑ comment by jacobjacob · 2019-08-12T09:46:46.737Z · LW(p) · GW(p)
I don't get this.
If businesses can simply buy their way around the problem they'll do exactly that.
There's a finite amount of money such that, if you got paid that amount, you'd be happy receiving an unsolicited ad email. There's also a finite amount of money such that, if they had to pay that amount, it wouldn't be worth it for advertisers to send you an email.
In equilibrium (probably given lots of additional specification of details) this makes the content you receive worth it.
I don't see where this goes wrong in a way that's solved by PoW.
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-21T10:55:36.161Z · LW(p) · GW(p)
-
↑ comment by RHollerith (rhollerith_dot_com) · 2019-08-13T00:31:41.069Z · LW(p) · GW(p)
If businesses can simply buy their way around the problem they'll do exactly that. . . . you're forced to wait by PoW
I don't understand. An employee of the business writes the message, then hits send, which causes the provable work to be done by some computer somewhere after which the message is delivered. (The code to do that when a person hits send does not currently exist, but it is only a few lines of code, and if your proposal gets adopted by many people, then such code will come into existence.)
When is this waiting that you refer to? Is the fact that there is a delay between the hitting of the send button and the delivery of the message supposed to act as a deterrent somehow?
If the provable work is something that only a human can do, i.e., cannot effectively be automated, then why did you mention BOINC?
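(For reference, the "few lines of code" would be something like a hashcash-style stamp: the sender's machine grinds through nonces until a hash meets a difficulty target, and the recipient verifies with a single hash. This is an illustrative sketch; the `message:nonce` format and difficulty parameter are my own assumptions, not any deployed standard.)

```python
import hashlib
from itertools import count


def mint_stamp(message: str, difficulty_bits: int = 20) -> str:
    """Sender side: find a nonce so that sha256(message:nonce) falls
    below a target, requiring ~2**difficulty_bits hashes on average."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        stamp = f"{message}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return stamp


def verify_stamp(stamp: str, difficulty_bits: int = 20) -> bool:
    """Recipient side: a single hash checks that the work was done."""
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: minting is expensive in proportion to the difficulty, verifying is one hash, and the burnt CPU time goes to nobody (unlike a payment, which the recipient keeps).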
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-21T11:03:28.277Z · LW(p) · GW(p)
-
↑ comment by remizidae · 2019-08-11T21:55:15.984Z · LW(p) · GW(p)
You should probably say at the beginning of this what “paid email” is. I figured it out by the end, but it’s not a well-known term.
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-12T07:45:57.466Z · LW(p) · GW(p)
-
It feels to me like it will produce weird social dynamics.
If people can freely set their price for receiving messages, setting a high price would be a signal for being high status. That produces weird signaling interactions in a community like ours.
There's plenty of literature on how people value an interaction less if they are paid for the interaction.
I don't feel like getting spam from people in the rationality community is a problem that I'm having and don't feel a need to discourage people from sending me messages.
I agree that email is an attention-sucking mess, but I see the problem differently from jacobjacob. I would happily get all the emails from the rationality community; my problem is that email is dominated by marketing, mailing lists, etc.
I think that using earn.com is likely to exacerbate the problem of unrequested marketing emails. It is free to send messages, and the sender only pays upon receipt of a response. This is like selling advertising on a per-click-through basis rather than on a per-view basis, and if it took off I would expect spam to quickly dominate the platform. Even if the message model were modified to incur costs for sending a message, I still think that many companies would gladly pay to send messages over a trusted high-status channel (similar to how fundraisers include stickers or cash in their mailings to raise the likelihood of you reading the materials). I'm not sure that friends and contacts would value my responses enough to match corporate calculations.
↑ comment by Gurkenglas · 2019-08-12T21:34:32.918Z · LW(p) · GW(p)
Spam dominating the platform is fine, because you are expected to sort by money attached, and read only until you stop being happy to take people's money.
If your contacts do not value your responses more than corporations do, that actually sounds like a fine Schelling point for choosing between direct research participation and earning to donate.
If you feel that a contact's question was intellectually stimulating, you can just send them back some or all of the fee to incentivize them sending you such.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-08-13T14:20:05.303Z · LW(p) · GW(p)
In the proposed platform all messages have the same money attached. There's no way to sort by money attached.
I will happily accept payment for reading and responding to e-mail. I will not pay to send one, and I don't know of any cases where I feel the need to pay for someone's initial reading of an e-mail (I may want to pay for their attention, but that will be a negotiation or fee for a thing, not for a mail).
What _might_ be valuable is a referral service - a way to have someone who (lightly) knows you and who (somewhat) knows the person you want to correspond with, who can vouch for the fact that there's some actual reason not to ignore your mail. No payment in money, some payment (and reinforcement) in reputation.
Basically, e-mail isn't the problem, the variance in quality of things for me to look at is the problem. Curation is the answer, not payment.
↑ comment by Raemon · 2019-08-12T00:27:42.240Z · LW(p) · GW(p)
The referral service is basically already what we have – you're much more likely to get responses to an email to a busy person if you have someone send an intro on your behalf. (You can't automate this much, because part of the whole point is that it's a costly signal. Automating it just gets you LinkedIn, where people mindlessly click a button saying "sure I know this person well enough to click a button for them" but people learn to tune out that signal).
Replies from: jacobjacob
↑ comment by jacobjacob · 2019-08-12T10:14:19.161Z · LW(p) · GW(p)
Yeah, referrals are important.
They also have some problems:
- Requires you to already know the right people
- If the referrer doesn't know how valuable the recipient will think this is, they might just avoid it as opposed to risking burning reputation (same problem as outlined above)
- It's still costly for the recipient. E.g. it doesn't have any way of compensating the recipient for giving them the "reduce option value" vs "slightly harm relationship" trade-off
There are probably large differences between how important each of these problems are, though I'm moderately confident that at least the first presents a real and important use case. If the options are "Pay $50 to message X" or "Try to network with people who can then introduce you to X", the first might be better and save us some social games.
Replies from: Dagon
↑ comment by Dagon · 2019-08-12T13:38:16.468Z · LW(p) · GW(p)
It doesn't require you to know the right people, it requires you to expend effort to determine the right people, and then to convince THOSE less-busy people that the referral is valuable.
For many, they have assistants or employees who perform this as an actual task - filter the volume of contacts and handle most things, escalating those that warrant it. That's great. For others, this is more informal - they have other communication channels like LessWrong, or twitter, or social network meshes, and you can get their attention by getting the attention of any of their friends or posting interesting stuff on those channels.
Either way (or ways in between and outside this), it uses the community to signal value of communication between individuals, rather than only discrete per-message signals that ignore any context.
Basically, there are two cases:
1) the recipient will want to talk with you, but doesn't know it. In this case, you need to show that you're interesting, not that you're interested. Spending money isn't interesting. Being interesting to people around me is interesting.
2) the recipient won't care, even after reading. In this case, money may compensate for their time, but probably not and it doesn't get you the attention you want anyway. A useless reply isn't worth their time nor your money.
Note that I'm assuming you're talking about trivial amounts of money (less than full-time equivalent pay for their time), and for more than a trivial form-letter response to collect the bounty. I'd be very interested in a SINGLE concrete example where any amount of money is a good value for both parties who wouldn't otherwise connect. Ideally, you'd give two examples: one of someone you wouldn't respond to without your $5, and one of someone who's not responding to you, who you'd pay $X to do so (including what X you'd pay and what kind of response would qualify).
After some more thought, I think my main objection is that adding small amounts of money to a communication is a pretty strong NEGATIVE signal that I want to read the communication. I want to read interesting things that lead to more interesting things. The fact that someone will pay to have me read it is an indication that I don't want to read it otherwise.
27 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2019-08-12T02:03:53.283Z · LW(p) · GW(p)
I think it's great that you're trying this and I hope it succeeds.
But I won't be using it. For me, the biggest problem is lowering the sense of obligation I feel to answer other people's emails. Without a sense of obligation, there's no problem - I just delete it and move on. But part of me feels like I'm incurring a social cost by doing this, so it's harder than it sounds.
I feel like using a service like this would make the problem worse, not better. It would make me feel a strong sense of obligation to answer someone's email if they had paid $5 to send it. What sort of monster deletes an email they know the other person had to pay money to send?
In the same way, I would feel nervous sending someone else a paid email, because I would feel like I was imposing a stronger sense of obligation on them to respond to my request, rather than it being a harmless ask they can either answer or not. This would be true regardless of how important my email was. Meanwhile, people who don't care about other people's feelings won't really be held back, since $5 is not a lot of money for most people in this community.
I think the increased obligation would dominate any tendency for me to get fewer emails, and make this a net negative in my case. I still hope other people try it and report back.
Replies from: Raemon
↑ comment by Raemon · 2019-08-12T02:28:22.078Z · LW(p) · GW(p)
Quick check (I think Jacob should update the OP since I think a couple people have made this interpretation) – the service Jacob uses only charges people if you respond to their email. Curious if that changes your take on the situation?
Replies from: Yvain, pktechgirl, jacobjacob
↑ comment by Scott Alexander (Yvain) · 2019-08-13T04:09:28.693Z · LW(p) · GW(p)
I'm sorry, I didn't understand that. Yes, this answers my objection (although it might cause other problems, like making me less likely to answer "sorry, I can't do that" compared to just ghosting someone).
Replies from: jacobjacob, Raemon
↑ comment by jacobjacob · 2019-08-13T22:52:06.415Z · LW(p) · GW(p)
I also don't know if "this answers my objection" means "oh, then I'd use it" or "other problems still seem too big" (though I'd bet on the latter).
↑ comment by Raemon · 2019-08-13T19:45:03.576Z · LW(p) · GW(p)
Not saying there's an existing service that does this, but this sounds like a pretty important use case for such a service to have. I think it'd be good for such services to have a button that's like "hey, read this but don't think it makes sense. Bye." (I could imagine having that button charge a smaller amount or something)
(That said, I'm not sure this whole equilibrium actually makes sense. I don't personally feel the need to use it)
↑ comment by Elizabeth (pktechgirl) · 2019-08-12T23:38:24.459Z · LW(p) · GW(p)
only charges people if you respond to their email
This doesn't seem to solve the problem, which is compensating someone for the attention to evaluate if your email is worth responding to. If I'm sending a substantive response, I'm probably glad I got the e-mail.
I assume this is done to keep people from soliciting lots of email solely for the money, but it doesn't solve that problem, since you can always send a pro-forma response.
↑ comment by jacobjacob · 2019-08-12T10:21:38.373Z · LW(p) · GW(p)
Updated the OP to clarify this. Will hold off on replying until I know whether this changes Scott's mind or not!
comment by Gurkenglas · 2019-08-12T21:39:34.565Z · LW(p) · GW(p)
Instead of email, we could use Less Wrong direct messages with karma instead of money. Going further, we could set up a karma-based prediction market on what score posts will reach, and use its predictions to set post visibility. Compare Stack Exchange's bounty system.
comment by Dagon · 2019-08-12T17:14:33.574Z · LW(p) · GW(p)
May I ask for a resolution comment (or follow-up questions if not resolved) when you've decided that this question has sufficient answers to make a decision or summarize a consensus?
It's not fair to pick on this one, and I apologize for that, but this is one of a number of recent topics that generate opinions and explore some models (some valuable, many interesting), but then kind of die out rather than actually concluding anything.
Replies from: jacobjacob
↑ comment by jacobjacob · 2019-08-13T09:38:25.915Z · LW(p) · GW(p)
That's a great point, I will do that.
comment by philh · 2019-08-16T14:29:01.803Z · LW(p) · GW(p)
Tangential, but I confess I'm surprised that the model is "pay if you get a reply". I would have expected "pay if they think you wasted their time" (i.e. you attach an amount of money, they read your email and then choose to collect the money or return it to you).
I guess that would be solving a different problem. Of the four "have you ever"s from the beginning, I think it would help with like, one and a half.
comment by Zack_M_Davis · 2019-08-12T03:28:26.946Z · LW(p) · GW(p)
someone in rationality [...] the community [...] many rationalists [...] the collective action problem of how to allocate our attention as a community. [...] within the rationality community [...] positive effects on the community
What community?
The problems with email that you mention are real and important. I'm glad that people are trying to solve it. If you think one particular solution (such as earn.com) is unusually good and you want it to win, then it might make sense for you to do some marketing work on their behalf, such as the post you just wrote.
What I don't understand (or rather, what I understand all too well and now wish to warn against after realizing just how horribly it's fucked with my ability to think in a way that I am only just now beginning to recover from) is this incestuous CliqueBot-like behavior that makes people think in terms of sending email to "someone in rationality", rather than just sending email to someone.
In the late 'aughts, Eliezer Yudkowsky wrote a bunch of really insightful blog posts about how to think. I think they got collected into a book? I can't recommend that book enough—it's really great stuff. ("AI to Zombies" is a lame subtitle, though.) Probably there are some other good blog posts on the lesswrong.com website, too? (At least, I like mine [LW · GW].)
But this doesn't mean you should think of the vague cluster of people who have been influenced by that book as a coherent group, "rationalists", the allocation of whose attention is a collective action problem (more so than any other number of similar clusters of people like "biologists", or "entrepreneurs", or "people with IQs above 120"). Particularly since mentally conflating rationality (the true structure of systematically correct reasoning) with the central social tendency of so-called "rationalists" (people who socially belong to a particular insular Bay Area-centered subculture) is likely to cause information cascades [LW · GW], as people who naïvely take the "rationalist" brand name literally tend to blindly trust the dominant "rationalist" opinion as the correct one, without actually checking whether "the community" is doing the kind of information processing that would result in systematically correct opinions.
And if you speak overmuch of the Way you will not attain it.
Replies from: Wei_Dai, elityre, Douglas_Knight
↑ comment by Wei Dai (Wei_Dai) · 2019-08-12T08:25:54.126Z · LW(p) · GW(p)
You seem to be bringing up a hobbyhorse (mentally conflating rationality with "rationalists") under a post that is at most tangentially related, which I personally think is fine but should be noted as such. (In other words I don't think this comment is a valid criticism of the OP, if it was intended as such.)
But this doesn’t mean you should think of the vague cluster of people who have been influenced by that book as a coherent group, “rationalists”, the allocation of whose attention is a collective action problem (more so than any other number of similar clusters of people like “biologists”, or “entrepreneurs”, or “people with IQs above 120”).
Given that biologists do in fact face a collective action problem of allocating attention (which they solve using conferences and journals), it seems perfectly fine to me to talk about such a problem for rationalists as well. What is LW2 if not a (partial) solution to such a problem? (Perhaps "entrepreneurs" and "people with IQs above 120" can't be said to face such a problem but they're also much bigger and less cohesive groups than "rationalists" or "biologists".)
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-08-13T03:52:58.837Z · LW(p) · GW(p)
Thanks, the hobbyhorse/derailing concern makes sense. (I noticed that too, but only after I posted the comment.) I think going forward I should endeavor to be much more reserved about impulsively commenting in this equivalence class of situation. A better plan: draft the impulsive comment, but don't post it, instead saving it as raw material for the future top-level post I was planning to eventually write anyway.
Luckily the karma system was here to keep me accountable and prevent my bad blog comment from showing up too high on the page (3 karma in 21 votes (including a self-strong-upvote), a poor showing for me).
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-08-13T03:59:24.877Z · LW(p) · GW(p)
3 karma in 21 votes (including a self-strong-upvote)
Actually, jeez, the great-grandparent doesn't deserve that self-strong-upvote; let me revise that to a no-self-vote.
↑ comment by Eli Tyre (elityre) · 2019-08-13T03:24:29.507Z · LW(p) · GW(p)
The supermajority of the people that I interact with, in person and online, are people who were influenced by that book, and like me, make substantial life decisions on the basis of associated arguments. Many of them likewise interact largely with other people who were influenced by that book.
Even stronger than that, people of this category are densely socially connected. The fact that someone identifies as "a rationalist" is pretty strong evidence that I know them, or know of them. This is in contrast with "entrepreneurs", for instance. Even the most well-connected entrepreneurs don't know most of the people who identify as entrepreneurs. Ditto for "people with IQs over 120", and biologists.
Why wouldn't I draw a boundary around that cluster of people, and attempt interventions on that cluster in particular?
It seems to me that the "rationality community" is both a natural category, and a useful category.
But perhaps you're claiming that I should use this category, but I shouldn't give it the label of "rationality", because then I'm making the connotation (to myself) that this group is unusually rational?
↑ comment by Douglas_Knight · 2019-08-26T15:41:02.205Z · LW(p) · GW(p)
A community is not relevant to the statement of the problem, but a community is relevant to the collective action problem of adopting a solution (depending on the solution). I agree that the opening sentence about sending "an email to someone in rationality" is unhealthy and condemn it with you.
But, as others said, Jacob is right to talk of "a coordination campaign to move the community" and at some point he has to name the community. (There are additional issues of whether the community exists and whether its existence or name is bad. Those are hobbyhorses.)
comment by Donald Hobson (donald-hobson) · 2019-08-14T00:01:36.218Z · LW(p) · GW(p)
If I am sending you an email, it could be because I have some info that I believe would benefit you and am honestly trying to be helpful in sending it. I am unlikely to do this if I have to pay you.
comment by jacobjacob · 2019-08-12T10:43:48.652Z · LW(p) · GW(p)
This was crossposted to the EA forum [EA(p) · GW(p)] replacing all mentions of "rationality" with "EA", mutatis mutandis.
comment by FactorialCode · 2019-08-11T17:51:10.769Z · LW(p) · GW(p)
I like this idea, and I think for it to take off, it would have to be implemented by easily piggybacking off of the existing email system. If I could download some kind of browser extension that allowed me to accept payment for emails while letting me continue to use my existing email, I would consider having that option.
However, I think this could face some adoption problems. I could easily imagine there being negative social consequences to advertising a paid email address, as it makes the statement "I am more likely to ignore your messages unless you pay me for my time" common knowledge.
Replies from: Raemon, stuart-anderson↑ comment by Raemon · 2019-08-11T18:45:55.828Z · LW(p) · GW(p)
My guess is that paid email services are tailored for (or marketed to) the sort of person who's already happy to send that signal (i.e. CEOs, founders, etc).
[edit: went and looked at earn.com, which looked _differently_ weird than I was expecting, something something tailored for people who are up for saying 'I am into weird things like bitcoin']
Replies from: jacobjacob↑ comment by jacobjacob · 2019-08-12T10:25:21.150Z · LW(p) · GW(p)
Yup, I think one of the main use cases is to enable a way of contacting people much higher status/busier than you that doesn't require a ton of networking or make their lives terrible. (N.b. I have lots of uncertainty over whether there's actually demand for this among that cohort, which is partly why I'm writing this.)
Replies from: ChristianKl↑ comment by ChristianKl · 2019-08-12T10:57:42.547Z · LW(p) · GW(p)
To me, contacting people much higher status than you is a different process than rationalists contacting fellow rationalists.
↑ comment by Stuart Anderson (stuart-anderson) · 2019-08-12T07:52:26.568Z · LW(p) · GW(p)
-
comment by Donald Hobson (donald-hobson) · 2019-08-13T23:58:16.207Z · LW(p) · GW(p)
Having these norms would attract scammers who try to look prestigious. And if you only get paid when you reply to a message, lots of low-value replies are going to be sent.
comment by Slider · 2019-08-11T21:26:23.503Z · LW(p) · GW(p)
I could not glean enough information to get a picture of what the perceived problem is. The closest I got was guessing that the game Uplink had a mission-tracking system with one mission per email.
If someone tells you to do something, you can exercise your own judgement about whether you will do it. If a person tells you to jump into a well and you do jump into a well, the problem is not that people are able to talk to you (unrestrictedly) but that you are more suggestible than is good for your own health.
You can say no. You can control which email address you reveal to whom. You can control what kind of connectivity you ask for. (I have to pay because you lack filtering skills?)
It's also weird that burning social capital is undesirable but burning money would be fine. And the Moloch toolbox link mainly advised to the contrary: the "magic tower" metaphor criticizes taking four years of people's lives as a bad way to select employees. It would seem you need to believe that burning $1 or $10 would somehow make the content higher quality? It would also seem that paying to cry wolf would become a more tempting joke, as more extreme reactions would be expected (and the cost would read as "adequate compensation" for the disruption; there was a phenomenon where putting a trivial cost on picking your kids up late from kindergarten made the parents late more often, and not sorry).
There could be something interesting about how the mechanics of email lead to "drive-by burdening", where you skip the negotiation phase of whether some commitment can be formed and simply proceed to assume that it will be done. But I would assume the solution, or the problem formulation, would be more social or communication-centric. And I would guess the solutions could be: "stop making everyone your boss, don't make commitments that you don't plan on keeping (spamming your business card with your email on it to 'network' and then not wanting to reply when people use it: are you in or out on this networking thing?), and don't avoid legitimately turning people down just because it's icky". And in the case of your actual boss sending too many emails, the line would be "you have to talk about overburdening with management, and be prepared to leave employment that doesn't fit your life".
I also have a bad feeling about the baggage that economic thinking will bring with it. It is dissatisfying to me that my reservations are nebulous even to me. But if communication is free, I can focus on whether the idea has merit; with paid communication I might tend to shift to a frame where I "assume merit" rather than "verify merit". And rather than being confident in my communication because I speak the truth, I could be confident because of sunk-cost fallacies etc.