What would you need to be motivated to answer "hard" LW questions?

post by Raemon · 2019-03-28T20:07:48.747Z · LW · GW · 12 comments

This is a question post.

Contents

  Motivations
    Intrinsic vs Extrinsic
      Improving Intrinsic Motivation
    Bounties and Reliability 
      Costly signaling of value
      Serious times requires livable-money
  What would it take?
  Answers
    26 Wei_Dai
    19 Dagon
    15 John_Maxwell_IV
    6 ryan_b
    5 SoerenMind
    5 G Gordon Worley III
    3 dunedale
    2 AllAmericanBreakfast
12 comments

Edit: Significantly rewritten. Original question was more specifically oriented around money-as-a-motivator.

One of the questions (ha) that we are asking ourselves on the LW team is "can the questions feature be bootstrapped into a scalable way of making intellectual progress on things that matter?"

Motivations

Intrinsic vs Extrinsic

I'd cluster most knobs-to-turn here into "intrinsic motivation" and "extrinsic motivation."

Intrinsic motivation covers things like "the question is interesting, and specified in a way that is achievable, and fun to answer."

Extrinsic motivation can include things like karma rewards, financial rewards, and other things that explicitly yield higher status for answering.

(Things like "I feel a vague warm glow because I answered the question of someone I respect and they liked the answer" can blur the line between intrinsic and extrinsic motivation.)

Improving Intrinsic Motivation

Right now I think there's room to improve the flow of answering questions.

Bounties and Reliability

A lot of questions are just hard to answer – realistically, you need a lot of time, at least some of that time won't be intrinsically fun, and the warm glow of success won't be enough to justify a few days' to a few months' worth of work.

So we're thinking of adding some more official support for bounties. There has been some pretty successful bounty-driven content on LW (such as the AI Alignment Prize [LW · GW], the Weird Aliens Question [LW · GW], and Understanding Information Cascades [LW · GW]), which has motivated more attention on questions.

Costly signaling of value

Bounties showcase that the author of the question cares about the answer. Even if the money is relatively minor, it reaffirms that if you work on the question, someone will actually derive value from it, which can be an important part of intrinsic motivation (as well as a somewhat legible-but-artificial status game you can more easily play, which I'd classify as extrinsic).

Serious time requires livable money

In some cases you just need to put serious time into a question to succeed, which means you either need to have already arranged your life such that you can spend serious time answering questions on LW, or you need "answering hard questions on LW" to actually provide you with enough financial support to do so.

This requires not just "enough" money, but enough reliability of money that "quit your day job" (or get a day job that pays less but gives more flexibility) is an actual option.

What would it take?

So, with all that in mind...

What would it take for you (you, personally), to start treating "answer serious LW questions" as a thing you do semi-regularly, and/or put serious time into?

My assumptions (possibly incorrect) here are that you need a few things, in some combination.

There are some types of intellectual labor I'm imagining here (which may or may not all fit neatly into the "questions" framework).

"Serious" questions could range from "take an afternoon of your time" to "take weeks or months of research", and I'm curious what the actual going rate for those two ends of the spectrum are, for LW readers who are a plausible fit for this type of distributed work.

Answers

answer by Wei Dai (Wei_Dai) · 2019-03-30T17:53:29.265Z · LW(p) · GW(p)

I feel like I should provide some data as someone who participated in a number of past bounties.

  1. For one small bounty <$100, it was a chance to show off my research (i.e., Googling and paper skimming) skills, plus it was a chance to learn something that I was somewhat interested in but didn't know a lot about.
  2. For one of the AI alignment related bounties (Paul's "Prize for probable problems" for IDA), it was a combination of the bounty giver signaling interest, plus it serving as coordination for a number of people to all talk about IDA at around the same time and me wanting to join that discussion while it was a hot topic.
  3. For another of the AI alignment related bounties (Paul's "AI Alignment Prize"), it was a chance to draw attention to some ideas that I already had or was going to write about anyway.
  4. For both of the AI alignment related bounties, when a friend or acquaintance asks me about my "work", I can now talk about these prizes that I recently won, which sounds a lot cooler than "oh, I participate on this online discussion forum". :)
comment by Raemon · 2019-03-30T18:03:24.472Z · LW(p) · GW(p)

Thanks, it was useful to hear about that variety of cases.

answer by Dagon · 2019-03-28T21:24:39.813Z · LW(p) · GW(p)

I don't think it's possible on LW. It's not a matter of money (ok, it is, in that I don't think anyone's likely to offer a compelling bounty that I expect to be able to win). It's not a matter of reliability of available offers (except that I don't expect ANY).

It _is_ a question of reliability and trust, though. There are no organizations or people I trust enough to define a task well and make sure multiple people aren't competing in some non-transparent way, such that I'd actually expect to get paid for work posted on a discussion site. And I don't expect that I have enough of a track record for any bidder to prefer me for the kind of tasks you're talking about, at the rates I expect. [edit to add] Nor do I have any tasks where I'd prefer a bounty or open-bid rather than finding a partner/employee and agreeing on specific terms.

It's also a question of what LW is for - posting and discussion of thought-provoking, well-researched, interestingly-modeled, and/or fun ideas is something that's very hard to measure in order to reward monetarily. Also, I'll be massively demotivated by thinking of this as a commercial site, even if I'm only in the free area.

My recommendation would be to use a different place to manage the tasks and the bid/ask process, and the acceptance of work and payment. Some tasks and their outputs might be appropriate to link here, but not the job management.

tl;dr: don't mix money into LW. Social and intellectual rewards are working pretty well, and putting commerce into it could well kill it.

comment by Raemon · 2019-03-28T22:45:37.437Z · LW(p) · GW(p)

There's some important points here that I'm going to address by rewriting the OP significantly.

Replies from: Raemon
comment by Raemon · 2019-03-28T23:15:49.047Z · LW(p) · GW(p)

(I've rewritten the post a bunch, which doesn't directly answer your question but at least frames the question a bit better, which seemed higher priority)

Replies from: Dagon
comment by Dagon · 2019-03-28T23:42:41.010Z · LW(p) · GW(p)

Thanks. Still triggers my "money would be a de-motivator for what I like about LW" instinct, but I'm glad you're acknowledging that it's only one aspect of the question you're asking.

The relevant questions are "how do you know what things need additional motivation" and "why do you think LW is best suited for it"? For the kind of things you're talking about (summarizing research, things that take "a few days to a few weeks" of "not intrinsically-fun"), I think that matching is more important than motivation. Finding someone with the right skillset and mindset to be ABLE to do the work at an acceptable cost is a bigger filter than motivating someone who just doesn't know it's needed. And I don't think LW is the only place you'd want to advertise such work anyway.

Fortunately, it's easy to test. Don't add any site features, just post a job that you think is typical of what you're thinking. See how people (both applicants and observers) react.

Note that I really _DO_ like your thinking about breaking down into manageable sub-questions and managing inquiries that are bigger than a single post. I'd love to explore that completely separately from motivation and taskrabbit-like knowledge work.

Replies from: Raemon, Raemon
comment by Raemon · 2019-03-29T00:02:34.394Z · LW(p) · GW(p)

Part of the impetus for our current thought process is that there does seem to be a limit on the complexity of stuff that typically gets answered without bounties attached (but, we now have a track record of occasional bounty posts successfully motivating such work).

I could imagine it turning out that the correct balance involves "not building any additional site features, just allow it to be something that happens organically sometimes", so that it can happen but there's friction that prevents runaway Moloch processes.

I currently think there's room to slightly increase the option of monetary incentives without destroying everything but it's definitely something I'd want to think carefully about.

My answer (not necessarily endorsed by the rest of the team) to your question is something like "right now, it seems like for the most part, LessWrong motivates stuff that is either Insight Porn, or 'Insight That Is At Least Reasonably Competitive with Insight Porn.'"

And we actually have the collective orientation and many of the skills needed to work on real, important problems collaboratively. Many of those problems won't have the intrinsic feedback loops that make it natural to solve them – they're neither as fun to work on nor to read about.

We see hints of people doing this anyway, but despite the fact that I think, say, a Scott Alexander "More Than You Wanted To Know" post is 10x as valuable as the average high-karma LW post, it isn't 10x as rewarded. (And I'd be much less willing to read it if Scott weren't as funny; it's sad if people have to gain the skill "be funny" in order to work on stuff like that.)

Meanwhile, there's a bunch of ways Academia seems to systematically suck. I asked a friend who's a bio grad student if Academia could use better communication infrastructure. And they said (paraphrased) "hah. Academia isn't about communication and working together to solve problems. Academics wouldn't want to share their early work, they'd be afraid of getting scooped."

I'm not sure if their experience is representative, but it seemed at least pretty common.

Meanwhile, LessWrong has an actual existing culture that is pretty well suited to this. I think a project that attempted to move this elsewhere would not be nearly as successful. Even a "serious intellectual progress" network is still a social network, and still faces the chicken/egg problem of getting people to collectively believe in it.

I'm much more excited about such a project bootstrapping off LW than trying to start from scratch.

Replies from: Dagon
comment by Dagon · 2019-03-29T17:46:44.107Z · LW(p) · GW(p)
(but, we now have a track record of occasional bounty posts successfully motivating such work).

Can you elaborate on this? I haven't seen any bounty-driven work adjacent to LW, and I'd like to look at a few successes to help me understand whether adding some of those mechanisms to LW is useful, compared to adding some LW interactions (ads or links) to the places where bounties are already successful.

I'm much more excited about such a project bootstrapping off LW than trying to start from scratch.

I totally get that, but those aren't the only two options, and that excitement doesn't make it the right choice.

Replies from: Raemon
comment by Raemon · 2019-03-29T18:03:05.010Z · LW(p) · GW(p)

Examples of bounties were included in the rewrite (they have already become moderately common on LW, and most of the time seem to produce more/better discussion). See the middle section for a few links.

I meant ‘excited’ in the sense that I expect it to work and generate a lot of value.

comment by Raemon · 2019-03-29T20:53:03.490Z · LW(p) · GW(p)

I suspect there's a higher level difference in our thinking, something like:

It seems like your position is "LessWrong seems basically fine, don't fix what's not broke."

Whereas my position is something like "LessWrong seems... well, fine, but there's a huge gap between where we are now and where I think we could be if we put in a lot more experimentation and optimization effort, and I think it would be fairly sad if LessWrong stayed where it is now."

I also think there's a huge gap between where we are now, and where we were when Eliezer was writing. And where we are now feels very precarious. It depends on a number of people all having interesting things to write about that they are writing about publicly.

In "The Dark Times" (2015-2017), the interesting people gradually migrated elsewhere, and then LessWrong withered. We've built some new mechanical things that have improved the baseline of how LessWrong works, but we haven't changed the overall landscape in a way that makes me confident that the site wouldn't wither again.

I think even maintaining the baseline of "keep the site pretty okay" requires a continuous injection of effort and experimentation. And I'd like to get to a place where the site is generating as much value as it did in 2011 (and ideally more), without relying on a single author to drum up lots of interest.

Getting there will necessarily involve a lot of experimentation, so my default response to any given experiment is "that sounds interesting, let's think about it and figure out how to make it the best version of itself and take the site in interesting new directions" rather than "hmm, but will this hurt the status quo?".

Replies from: Dagon
comment by Dagon · 2019-03-30T15:00:07.633Z · LW(p) · GW(p)
LessWrong seems basically fine, don't fix what's not broke.

That's not how I'd summarize it. Much credit to you and the team and all the other participants for how well it's doing, but I remember the various ups and downs, and the near-death in the "dark times". I also hope it can be even better, and I don't want to prevent all changes so it stagnates and dies again.

I do fear that a complete pivot (such that monetary prizes are large and common enough that money is a prime motivator) will break it. The previous prizes all seemed small enough that they were basically a bit above the social status of a giant upvote, and I didn't see any day-long efforts from any of the responders. That's very different from what you seem to be considering.

So I support cautious experimentation and gradual changes. Major experiments (like prizes big enough to motivate day- or week-long efforts) should probably be labeled as experiments and run with current site features, rather than things we invest much infrastructure in. I'm actually more gung-ho than "let's think about it and figure out how to make it the best version of itself" in many cases - I'd rather go with "let's try it out cheaply and then think about what worked and what didn't". Pick something you'd like to fund (or find someone who has such a topic and the money to back it up), run it in Google Docs, with a link and summary here.

This applies to the more interesting (to me; I recognize that I'm not the only constituent) ideas as well. Finding ways to break problems down into manageable questions, and to link/synthesize the results, seems to have HUGE potential, and it can be tested pretty cheaply. Have someone start a "question sequence" - no tech change, just titled as such. The asker seeks input on how to split the problem, as well as on sub-problems.

Really, I don't mean to say "this is horrible, please don't do anything in this cluster of ideas!" I do mean to say "I'm glad you're thinking about the issues, but I see a _LOT_ of risk in introducing monetary incentive where social incentives are far more common. Please tread very carefully."

(Not sure how serious I am about the following - it may just be an appeal to meta) You could use this topic as an experiment. Ruby's posting some documents about Q&A thinking - put together an intro post, and label them all (including this post) "LW Q&A sequence". Ask people how best to gather data and perform experiments along the way.

answer by John_Maxwell (John_Maxwell_IV) · 2019-03-30T07:10:18.000Z · LW(p) · GW(p)

If answering the question takes weeks or months of work, won't the question have fallen off the frontpage by the time the research is done?

What motivates me is making an impact and getting quality feedback on my thinking. Both of these scale with the number of readers. If no one will read my answer, I won't feel very motivated.

comment by Raemon · 2020-04-04T01:16:44.059Z · LW(p) · GW(p)

I'm currently exploring a possible feature wherein question-authors and moderators can flag answers as "Top Answers", which triggers the question moving to the top of the home page and adds the most recent "top answer" author as a co-author of the post.

Not 100% sure on the implementation details. Does that sound like it would help with this problem?

comment by Raemon · 2019-03-30T07:19:34.178Z · LW(p) · GW(p)

Well, the question asker will always see it (they'll receive a notification). The act of answering it will also:

a) put it in the recent discussion section

b) it'll also appear on the slightly revamped Questions page [? · GW], where both "Top Questions" and "Recent Activity" are sorted by which questions were most recently commented on. ("Top Questions" are questions with 40 or more karma, sorted by most recently commented/answered.)

We'll be putting some work into figuring out how to make questions take up "the correct amount of attention" (i.e. enough that old, important questions aren't lost track of, but without cluttering the frontpage). If an answer is good, we will likely also curate the question along with the answer.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-03-30T07:37:14.087Z · LW(p) · GW(p)

This could motivate me to spend minutes or hours answering a question, but I think it would be insufficient to motivate me to spend weeks or months. Maybe if there were an option to also submit my answer as a regular post.

Replies from: Raemon
comment by Raemon · 2019-03-30T18:05:24.283Z · LW(p) · GW(p)

I do think that when you're tackling something that'll take weeks or months, it's quite likely you'll end up with multiple posts worth of content. In that case I think the "Answer" part would look more like linking to a separate post (or sequence) and summarizing it, than writing the whole thing in the answer section.

(I've also been thinking about having high-quality answers displayed as part of the question's post item, so that rather than the "primary author" being the person who asked the question, the top answer author is given prominent billing.)

answer by ryan_b · 2019-04-01T20:47:28.090Z · LW(p) · GW(p)

For the examples you give, the improvements you cite to intrinsic motivation, plus karma, would be sufficient to motivate me for questions of the "take an afternoon of your time" type, which is approximately where my blogposts have been landing anyway. Further, several are already of the "summarize papers / point to a list of sources" type. On the long end of weeks or months, bounties in the hundreds of dollars would probably suffice, depending on the level of interest I have, which is the true variable in whether I engage.

It is hard to tell in the current format what kind of depth-of-answer the questioner is looking for, and what time frame would be appropriate for an answer. It is also hard to tell how well answered a question already is, which has a big impact on reading older questions or questions with many answers. Mostly I have been viewing questions at the same rate as blog posts, but it occurs to me that they don't age in the same way informative or exploratory posts do; the question is unresolved until it is.

Having some way to disentangle the content of this site from when it was posted would be handy.

comment by GPT2 · 2019-04-01T20:47:36.064Z · LW(p) · GW(p)

It's still worth mentioning, but what sort of guidelines should be included when using the site and making it a better place?

For a few months I started donating a bit more money to LW instead of to EA, as a motivation to donate for reasons that don't sound very interesting to me, but for reasons that are hard to evaluate and simple enough that I'd like to not see it's impact on the world. But now, while I might donate that money to EA, the potential consequence is still quite a big win to my happiness and my career.

answer by [deleted] · 2019-05-17T13:06:31.567Z · LW(p) · GW(p)

It would help if the poster directly approaches or tags me as a relevant expert.

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2019-03-28T21:53:01.142Z · LW(p) · GW(p)

Given that there is only some probability of winning a question bounty – let's just guess it's 20% for any particular question I might try to answer – this suggests a bounty of 5x whatever I would be willing to answer the question for in order to make me willing to do it. Assuming a question takes about a day of work (8 hours) to answer fully and successfully, and given our 5x multiplier, I'd be willing to try to answer a question I wasn't already excited to answer for other reasons if it paid about $1800.
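Spelled out, the implied arithmetic here is (taking an hourly opportunity cost of roughly $45, which is what the stated numbers imply rather than a figure given explicitly):

$$\text{required bounty} \approx \frac{\text{hours} \times \text{hourly value}}{P(\text{win})} = \frac{8 \times \$45}{0.2} = \$1800$$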

Many others may have lower opportunity costs, though (and I undercounted a bit because I assume any question I would answer would deliver me at least some sense of value beyond the money; otherwise my number would probably jump up closer to $2500).

comment by Raemon · 2019-03-28T22:29:50.952Z · LW(p) · GW(p)

Yeah, there's two issues this points at that we've been thinking about:

1. "bounties" come with an issue where you're not sure you'll succeed, so if you're actually relying on it for "real money" (instead of using the money as an indicator that someone cared which might motivate you enough to do it for fun), you need much more money for it to work

2. I actually expect a "well functioning" Q&A system to work by having lots of people tackle small parts of a problem, in ways that make credit harder to assign. (Or, at least, credit is distributed among many people.)

Two approaches we've thought about include:

  • be more like a "craigslist for intellectual progress", where one section of LW is more like a contract-job-finding board. (This runs into the usual issues of "job finding is hard both for employers and employees", but would mean that you don't need the 5x multiplier)
  • instead of "there's one bounty that goes to the best thing", a common pattern ends up being "question-asker puts forth the total amount they're willing to spend on a thing", with a vague goal of "distribute that money fairly towards people who contributed."
    • Relatedly, we've considered something like a "tip jar" feature, where you can put a link to your paypal/patreon/whatever that shows up as a (not-too-obtrusive, but available) button when someone mouses over your username. That would make it easier to see "oh, this person did something that's worth about $10 to me, I'mma give them $10." And this might lend itself towards rewarding the person who took an initial step of "refactor your confusing question into 3 separate less confusing ones."
comment by jacobjacob · 2019-03-28T23:27:58.061Z · LW(p) · GW(p)

If people provided this as a service, they might be risk-averse (it might make sense for people to be risk-averse with their runway), which means you'd have to pay more than hourly rate/chance of winning.
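As a toy illustration of how large that risk premium can get (assuming, purely for the example, a square-root utility function – an assumption, not something specified here):

$$0.2\,\sqrt{B} = \sqrt{360} \;\Rightarrow\; B = \frac{360}{0.2^2} = \$9{,}000$$

That is, someone who would accept a guaranteed $360 for a day's work would need roughly a $9,000 bounty at a 20% win probability – five times the risk-neutral $1,800.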

This might not be a problem, as long as the market does the cool thing markets do: allowing you to find someone with a lower opportunity cost than you for doing something.

answer by dunedale · 2020-04-04T20:47:52.512Z · LW(p) · GW(p)

In order to be motivated, I would like to have a good idea of the impact the work would make. I would like to see a clear explanation of the process taken to come up with the question, and a list of who on LW supports this question as an effective target of attention at this point in time, and why. Maybe this could be documented in the question post, and maybe there could be rounds for potential questions to go through, during which community members vote on/discuss/rate them. Maybe there could be a backlog of questions that have not been chosen yet, with reasons why, to help new questions arise. I would also like to know which other LW users are working on a question (to avoid duplication of effort) and whether there are good opportunities for delegating work among multiple community members.

I like the idea of sub-questions. It might be interesting to have a display in the form of a graph, with vertices as questions/answers and directed edges indicating a sub/super relationship between them. I think this would help us get a big-picture view of the progress made and how it was achieved.

Since there is only so much that can be done by one community, I think it could in some cases be useful to have questions that are intended to be handed off to external parties like academic groups or certain organizations or renowned individuals after we do enough investigatory work.

answer by DirectedEvolution (AllAmericanBreakfast) · 2019-03-29T05:30:39.470Z · LW(p) · GW(p)

If this blog's "hard questions" have utility, they should be novel, important, and answerable.

Important questions are highly likely to be known already among experts in the relevant field. If they're answerable, one of those experts is likely already working on it with more rigor than you're capable of extracting from a crowd of anonymous bloggers. I think, then, that any questions you ask have a high probability of being redundant, unimportant, or unanswerable (at least to a useful degree of rigor). Unfortunately, you're unlikely to know that in advance unless you vet the questions with experts in the relevant literature.

And at that point, you're starting to look like an unaccountable, opaque, disorganized, and underresourced anonymously peer-reviewed journal.

It might be interesting to explore the possibility that a wiki-written or amateur-sourced peer-reviewed journal could have some utility, especially if it focused on a topic that is not so dependent on the expensive and often opaque process of gathering empirical data. I expect that anyone who can advance the field of mathematics is probably already a PhD mathematician. So philosophy, decision theory, something like that?

Developing a process to help an anonymous crowd of blog enthusiasts turn their labor into a respectable product would be useful and motivating. I would start by making your next "hard question" be: what specific topic could such a peer-reviewed journal usefully focus on?

comment by Raemon · 2019-03-30T19:36:59.439Z · LW(p) · GW(p)

Your premises seem strange to me – questions are either important and already worked on, or not important? Already-worked-on-questions don't need answers? Both of these seem false.

If an expert somewhere knows the answer to something, I still often need to know the answer myself (because it's a piece of a broader puzzle that I care about, which the expert doesn't necessarily care about). I still need someone to go find the answer, distill it, and to help put it into a new context.

The LW community historically has tackled questions that were important, and that few other people were working on (in particular, questions related to human rationality, AI alignment, and effective altruism).

12 comments

Comments sorted by top scores.

comment by johnswentworth · 2019-03-29T21:56:32.404Z · LW(p) · GW(p)

I would like more concrete examples of nontrivial questions people might be interested in. Too much of this conversation is too abstract, and I worry people are imagining different things.

Toward that end, here are a few research projects I've either taken on or considered, which I would have been happy to outsource and which seem like a good fit for the format:

  • Go through the data on spending by US colleges. Look at how much is actually charged per student (including a comparison of sticker price to actual tuition), how much is spent per student, and where all the money is spent. Graph how these have changed over time, to figure out exactly which expenditures account for the rapid growth of college cost. Where is all the extra money going? (I've done this one; results here.)
  • Go through the data on aggregate financial assets held, and on real capital assets held by private citizens/public companies/the state (i.e. patents, equipment, property, buildings, etc) to find out where money invested ultimately ends up. What are the main capital sinks in the US economy? Where do marginal capital investments go? (I've also done this one, but haven't gotten around to writing it up.)
  • Go through the genes of JCVI's minimal cell, and write up an accessible explanation of the (known) functionality of all of its genes (grouping them into pathways/systems as needed). The idea is to give someone with minimal bio background a comprehensive knowledge of everything needed for bare-minimum life. Some of this will have to be speculative, since not all gene functions are known, but a closed list of known-unknowns sure beats unknown-unknowns.
  • Something like Laura Deming's longevity FAQ, but focused on the macro rather than micro side of what's known - i.e. the (macroscopic) physiology of vascular calcification and heart disease, alzheimers, cancer, and maybe a bit on statistical models of old-age survival rates. In general, there seems to be lots of research on the micro side, lots known on the macro side, but few-if-any well-understood mechanistic links from one to the other; so understanding both sides in depth is likely to have value.
  • An accessible explanation of Cox's Theorem, especially what each piece means. The tough part: include a few examples in which a non-obvious interpretation of a system as a probabilistic model is directly derived via Cox's Theorem. I have tried to write this at least four separate times, and the examples part in particular seems like a great exercise for people interested in embedded agency.
Replies from: Raemon
comment by Raemon · 2019-03-29T22:13:41.204Z · LW(p) · GW(p)

Thanks, and yeah, these are approximately the same order-of-magnitude-of-difficulty that I was imagining for a "hard" question (though some seem to require more specialized knowledge, and I'm not sure how viable that is).

Some additional examples (answering Chris_Leong's question here, I guess) are:

  • My existing question "How Old is Smallpox [LW · GW]" (which I wasn't quite satisfied by the answer of). This one requires some specialist knowledge and is pretty niche so it seems fair if it doesn't get answered. [fwiw I cared about the answer to this to know whether it was epistemically kosher to read 500 Million at Winter Solstices ceremonies.]
  • My existing question "How Much Funding and Researchers were in AI, and AI Safety, in 2018 [LW · GW]", which I think is "somewhat hard, but accessible, and doesn't require specialist knowledge to answer."
  • Summarize the important highlights of this paper ("Job, Career, Calling") on how people relate to their job. Also seems pretty generalist.
    • Possibly re-run that experiment on Mechanical Turk with a broader audience (I specifically wanted to know how people who work in "traditionally shitty jobs" related to their work.)
  • In general, I'd like to build an engine that translates "papers written in academic-ese" into "plain english summaries".

comment by Raemon · 2019-03-29T23:19:16.206Z · LW(p) · GW(p)

Ruby has now written a post that explains some more of the background thinking that underlay this post. It's not super polished but if you read this question and felt "...this seems to be missing context?", here is some of it [LW · GW].

comment by Chris_Leong · 2019-03-29T03:39:33.039Z · LW(p) · GW(p)

This whole discussion is pretty abstract. I'd be quite interested to know what kind of content you are trying to encourage. Is it just high quality articles in general or on specific topics?

Another idea for improving the quality of content, although not necessarily directly related to questions, would be to have a Less Wrong "Magazine". For this proposal, a certain number of articles would be accepted in every issue. The competitive nature and the existence of an editor would lead to higher quality content production. This would be rewarded by the submitted content presumably receiving more attention and prestige as people would know that more effort went into producing these articles.

comment by jacobjacob · 2019-03-28T23:24:18.055Z · LW(p) · GW(p)

I think the question, narrowly interpreted as "what would cause me to spend more time on the object-level answering questions on LW" doesn't capture most of the exciting things that happen when you build an economy around something. In particular, that suddenly makes various auxiliary work valuable. Examples:

  • Someone spending a year living off of one’s savings, learning how to summarise comment threads, with the expectation that people will pay well for this ability in the following years
  • A competent literature-reviewer gathering 5 friends to teach them the skill, in order to scale their reviewing capacity to earn more prize money
  • A college student building up a strong forecasting track-record and then being paid enough to do forecasting for a few hours each week that they can pursue their own projects instead of having to work full-time over the summer
  • A college student dropping out to work full-time on answering questions on LessWrong, expecting this to provide a stable funding stream for 2+ years
  • A professional with a stable job and family and a hard time making changes to their life-situation, taking 2 hours/week off from work to do skilled cost-effectiveness analyses, while being fairly compensated
  • Some people starting a “Prize VC” or “Prize market maker”, which attempts to find potential prize winners and connect them with prizes (or vice versa), while taking a cut somehow

I have an upcoming post where I describe in more detail what I think is required to make this work.

Replies from: Raemon
comment by Raemon · 2019-03-28T23:30:28.793Z · LW(p) · GW(p)

This all seems plausible, but for it to work it seemed like it needed to cross the initial threshold of "are people actually willing to do this, and what will it take for them to do so?". For this question I'm most interested in getting a sense of what things are necessary for this sort of project to get off the ground.

comment by Dr_Manhattan · 2019-03-28T20:20:24.628Z · LW(p) · GW(p)

I think the answer will be highly dependent on the question. The opportunity cost for someone answering a question on yoga (being a yoga expert) is very different from that of someone answering an investment question.

Replies from: Raemon
comment by Raemon · 2019-03-28T20:26:57.621Z · LW(p) · GW(p)

Sure. But (for you, in particular), what are some examples of types of questions and the amount you'd need, at different degrees of difficulty?

(Might also be a good time to share what fields of expertise you have, to get a sense of what domains LW might be particularly good at thinking seriously about)

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2019-03-29T17:19:27.596Z · LW(p) · GW(p)

Sure. I know something about general CS stuff, ML, applied Bayesian stats, and finance. Generally I would not be answering questions for a bounty (I'm well compensated to do this at work and I don't want *more work*), but I would spend some time if I thought it helped people or contributed to an important body of knowledge. For me it comes from a different "time budget". I realize many people would feel differently, but there's probably a class of people like me.

comment by mako yass (MakoYass) · 2019-04-01T22:53:27.845Z · LW(p) · GW(p)
New features such as the ability to spawn related questions that break down a confusing question into easier ones.

Why would that be any better than just mentioning related questions in a comment, or compiling links to the subquestions in your answer?

Replies from: Raemon, GPT2
comment by Raemon · 2019-04-02T00:48:09.606Z · LW(p) · GW(p)

Two clusters of reasons:

1. Nudges/incentives.

We didn't technically need a questions feature in the first place (there's nothing stopping you from writing questions as posts), but having an explicit feature sends a strong signal that this is an encouraged norm.

We also didn't technically need to implement "answers" as a type separate from comments, but we did so to help ensure that people would approach questions with a different mindset than typical posts. (i.e. actually try to figure a thing out, rather than just sort of meander around on the internet).

I actually considered not only adding Related Questions, but making them the default way of interacting with question posts (rather than submitting an answer), based on the notion that people seemed to be rushing to answer hard questions when the correct next step was to break it down further. (We ultimately decided not to do this)

2. Enabling other features and architecture

Another reason we created answers (rather than just using comments), was that an Answer type makes it a bit more sensible to do things like "mark a question as Answered" (so that future people who search for the question will find a convenient Question/Answer pair). We haven't actually built that feature yet but still plan to.

Similarly, we're interested in Related Questions because they suggest ways of more easily rearranging question pages and question sequences. For an Open Question, you could look at a high-level overview of which subquestions have been answered and which haven't, which suggests a different way of engaging with the overall topic.

comment by GPT2 · 2019-04-01T22:53:35.081Z · LW(p) · GW(p)
  • I think the general setup should be "all posts" at least, since it's so straightforward to look at a post separately from a list of each concept.

  • I think I have a specific concept for something I'm trying to say, but I didn't know how to describe that concept.

  • I think I've solved the problem of having to explicitly list several things to say in order to get to the kind of answer which results in more answers. If I just don't feel like doing this by the time you get to the next one, I guess it'd be useful to have my own concept.

  • That's what actually makes the post a better concept.

  • The main problem is with getting your concept to work together that's not how things really work together. I don't think it really helps to have that concept to work together on an underlying, just a) you can't, a) I don't think you could work together on a new concept, and b) even if it doesn't, you have to have the concept in your area of expertise to build it into a new concept.

So I'm hoping that it doesn't sound too insane to list a concept and then tell me how to do it, without which the concepts are useless.

Some thoughts on the latter thought, which I do in a few places:

  • It may be that your concept is already a large part of your concept, but that it doesn't have to be a big part of it.
  • It may be that this insight isn't always useful in other kinds of contexts. I'm not sure that this is always true for some context-related things, but it seems like a useful concept that's already built into my brain.
  • I'm not sure what to make of the "if anyone had a concept and I just don't keep track of it, it's not safe to ignore it" distinction.

Overall, it seems like this has been more generally useful, and I'm now aware that having several threads of thought seems easier and more natural to many people in some contexts, but has this explanation as the thing to remember? I don't think it is, though. I also hope for the rest of you to find it