Posts

The why and how of daily updates 2019-05-05T15:21:42.300Z · score: 29 (11 votes)
Wikipedia pageviews: still in decline 2017-09-26T23:03:27.902Z · score: 38 (19 votes)
Wikipedia usage survey results 2016-12-25T13:55:33.123Z · score: 7 (8 votes)
The great decline in Wikipedia pageviews (condensed version) 2015-03-27T14:02:23.518Z · score: 24 (16 votes)
Tentative tips for people engaged in an exercise that involves some form of prediction or forecasting 2014-07-30T05:24:35.963Z · score: 6 (7 votes)
Claim: Scenario planning is preferable to quantitative forecasting for understanding and coping with AI progress 2014-07-25T03:43:34.501Z · score: 1 (2 votes)
[QUESTION]: Looking for insights from machine learning that helped improve state-of-the-art human thinking 2014-07-25T02:10:10.859Z · score: 3 (4 votes)
[QUESTION]: Academic social science and machine learning 2014-07-19T15:13:28.704Z · score: 11 (12 votes)
How deferential should we be to the forecasts of subject matter experts? 2014-07-14T23:41:39.019Z · score: 13 (13 votes)
Scenario analyses for technological progress for the next decade 2014-07-14T16:31:59.625Z · score: 10 (10 votes)
Communicating forecast uncertainty 2014-07-12T21:30:54.053Z · score: 5 (6 votes)
Forecasting rare events 2014-07-11T22:48:43.212Z · score: 5 (5 votes)
[QUESTION]: Driverless car forecasts 2014-07-11T00:25:35.644Z · score: 7 (7 votes)
Domains of forecasting 2014-07-09T13:45:50.501Z · score: 6 (6 votes)
The insularity critique of climate science 2014-07-09T01:17:03.466Z · score: 9 (13 votes)
[QUESTION]: What are your views on climate change, and how did you form them? 2014-07-08T14:52:06.756Z · score: 5 (7 votes)
Carbon dioxide, climate sensitivity, feedbacks, and the historical record: a cursory examination of the Anthropogenic Global Warming (AGW) hypothesis 2014-07-08T01:58:40.238Z · score: 3 (9 votes)
[QUESTION]: LessWrong web traffic data? 2014-07-07T21:41:08.435Z · score: 4 (4 votes)
Time series forecasting for global temperature: an outside view of climate forecasting 2014-07-07T16:25:08.038Z · score: 3 (7 votes)
Climate science: how it matters for understanding forecasting, materials I've read or plan to read, sources of potential bias 2014-07-07T16:18:57.061Z · score: 3 (7 votes)
Weather and climate forecasting: how the challenges differ by time horizon 2014-07-04T15:28:08.735Z · score: 6 (8 votes)
Futures studies: the field and the associated community 2014-07-02T23:47:29.810Z · score: 5 (5 votes)
Scenario planning, its utility, and its relationship with forecasting 2014-07-02T02:32:25.167Z · score: 6 (9 votes)
General-purpose forecasting and the associated community 2014-06-26T02:49:51.005Z · score: 3 (5 votes)
Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming 2014-06-25T21:47:07.424Z · score: 24 (24 votes)
An overview of forecasting for politics, conflict, and political violence 2014-06-24T22:10:39.093Z · score: 7 (7 votes)
Lessons from weather forecasting and its history for forecasting as a domain 2014-06-23T17:08:58.453Z · score: 12 (12 votes)
Moving on from Cognito Mentoring 2014-05-16T22:42:56.907Z · score: 52 (55 votes)
Paradigm shifts in forecasting 2014-05-08T19:38:11.822Z · score: 3 (8 votes)
Some historical evaluations of forecasting 2014-05-07T02:42:56.582Z · score: 8 (11 votes)
The track record of survey-based macroeconomic forecasting 2014-04-22T04:57:40.998Z · score: 3 (6 votes)
Utilitarian discernment bleg 2014-04-20T23:33:26.974Z · score: 1 (6 votes)
Human capital or signaling? No, it's about doing the Right Thing and acquiring karma 2014-04-20T21:04:18.401Z · score: 21 (22 votes)
The usefulness of forecasts and the rationality of forecasters 2014-04-17T03:49:09.378Z · score: 0 (5 votes)
Stories for exponential growth 2014-04-16T15:15:46.369Z · score: 4 (8 votes)
Different time horizons for forecasting 2014-04-16T03:30:26.853Z · score: 1 (4 votes)
Using the logarithmic timeline to understand the future 2014-04-16T02:00:52.368Z · score: 4 (7 votes)
Beware technological wonderland, or, why text will dominate the future of communication and the Internet 2014-04-13T17:34:23.869Z · score: 11 (12 votes)
Evaluating GiveWell as a startup idea based on Paul Graham's philosophy 2014-04-12T14:04:12.832Z · score: 13 (18 votes)
How relevant are the lessons from Megamistakes to forecasting today? 2014-04-12T04:53:59.786Z · score: 8 (11 votes)
Quote dump for "megamistakes" 2014-04-12T04:53:56.731Z · score: 1 (4 votes)
Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth? 2014-04-11T19:07:09.786Z · score: 14 (16 votes)
Bleg: Read and learn, or become an activist? 2014-04-09T21:39:36.526Z · score: 4 (7 votes)
The value of the online hive mind 2014-04-09T16:52:38.818Z · score: 4 (5 votes)
The failed simulation effect and its implications for the optimization of extracurricular activities 2014-04-08T19:27:53.567Z · score: 9 (12 votes)
A summary and broad points of agreement and disagreement with Cal Newport's book on high school extracurriculars 2014-04-08T01:55:54.292Z · score: 10 (11 votes)
High school students and epistemic rationality 2014-03-15T17:40:58.192Z · score: 3 (4 votes)
Biomedical research, superstars, and innovation 2014-03-14T22:38:40.340Z · score: 2 (5 votes)
High school students and effective altruism 2014-03-14T19:04:38.727Z · score: 9 (12 votes)
What attracts people to learning things that they consider neither interesting nor important? 2014-03-14T17:32:43.188Z · score: 5 (6 votes)

Comments

Comment by vipulnaik on The why and how of daily updates · 2019-05-06T13:48:07.602Z · score: 2 (2 votes) · LW · GW

That's a normal part of life :). Anything that I decide to do on a future day, I'll copy/paste over there, but I usually won't delete the items from the checklist for the day on which I didn't complete them (thereby creating a record of things I expected or hoped to do, but didn't).

For instance, at https://github.com/vipulnaik/daily-updates/issues/54 I have two undone items.

Comment by vipulnaik on Raemon's Scratchpad · 2018-07-29T20:13:53.440Z · score: 3 (2 votes) · LW · GW

There is some related stuff by Carl Shulman here: https://www.greaterwrong.com/posts/QSHwKqyY4GAXKi9tX/a-personal-history-of-involvement-with-effective-altruism#comment-h9YpvcjaLxpr4hd22 that largely agrees with what I said.

Comment by vipulnaik on Raemon's Scratchpad · 2018-07-16T05:11:18.628Z · score: 24 (6 votes) · LW · GW

My understanding is that the Against Malaria Foundation is a relatively small player in the space of ending malaria, and it's not clear that funders who wish to make a significant dent in malaria would choose to donate to AMF.

One of the reasons GiveWell chose AMF is that there's clear marginal value for small donation amounts in AMF's operational model -- with a few extra million dollars they can finance bednet distribution in another region. It's not necessarily that AMF itself is the most effective charity to donate to in order to end malaria -- it's just the one with the best proven cost-effectiveness for donors at the scale of a few million dollars. But it isn't necessarily the best opportunity for somebody with much larger amounts of money who wants to end malaria.

For comparison:

The main difference I can make out between the EA/GiveWell-sphere and the general global health community is that malaria interventions (specifically ITNs) get much more importance in the EA/GiveWell-sphere, whereas in the general global health spending space, AIDS gets more importance. I've written about this before: http://effective-altruism.com/ea/1f9/the_aidsmalaria_puzzle_bleg/

Comment by vipulnaik on Should we be spending no less on alternate foods than AI now? · 2017-10-31T05:35:43.651Z · score: 0 (0 votes) · LW · GW

I tried looking in the IRS Form 990 dataset on Amazon S3, specifically searching the text files for forms published in 2017 and 2016.

I found no match for (case-insensitive) openai (other than one organization that was clearly different; its name had "openair" in it). Searching (case-insensitive) "open ai" gave matches that all had "open air" or "open aid" in them. So, it seems like either they have a really weird legal name or their Form 990 has not yet been released. Googling didn't reveal any articles of incorporation or legal name.
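
For concreteness, here is a minimal sketch of the kind of case-insensitive text search described above; the directory name and file pattern are placeholders for a local copy of the filings, not the actual layout of the S3 dataset.

```python
# Minimal sketch of a case-insensitive search over a directory of text files.
# "filings_2016_2017/" is a placeholder path, not the real dataset layout.
import re
from pathlib import Path

PATTERN = re.compile(r"open\s?ai", re.IGNORECASE)  # matches "openai" and "open ai"

def search_filings(directory):
    for path in Path(directory).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for match in PATTERN.finditer(text):
            # Print surrounding context to weed out false positives such as
            # "open air" and "open aid".
            start, end = max(match.start() - 40, 0), match.end() + 40
            print(f"{path.name}: ...{text[start:end]}...")

search_filings("filings_2016_2017/")
```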

Comment by vipulnaik on Writing That Provokes Comments · 2017-10-04T16:45:17.312Z · score: 42 (18 votes) · LW · GW

In my experience, writing full-fledged, thoroughly researched material is pretty time-consuming, and if you push that out to the audience immediately, (1) you've sunk a lot of time and effort that the audience may not appreciate or care about, and (2) you might have too large an inferential gap with the audience for them to meaningfully engage.

The alternative I've been toying with is something like this: when I'm roughly halfway through an investigation, I publish a short post that describes my tentative conclusions, without fully rigorous backing, but with (a) clearly stated conclusions, and (b) enough citations and other signals that there's decent research backing my process. Then I ask people what they think of the thesis, which parts they are interested in, and what they are skeptical of. Then after I finish the rest of the investigation I push a polished writeup only for those parts (for the rest, it's just informal notes + general pointers).

For examples, see https://www.lesserwrong.com/posts/ghBZDavgywxXeqWSe/wikipedia-pageviews-still-in-decline and http://effective-altruism.com/ea/1f9/the_aidsmalaria_puzzle_bleg/ (both are just the first respective steps for their projects).

I feel like this both makes comments more valuable to me and gives more incentive to commenters to share their thoughts, but the jury is still out.

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-30T14:50:26.209Z · score: 3 (1 votes) · LW · GW

FWIW, my impression is that data on Wikipedia has gotten somewhat more accurate over time, due to the push for more citations, though I think much of this effect occurred before the decline started. I think the push for accuracy has traded off a lot against growth of content (both growth in number of pages and growth in amount of data on each page). These are crude impressions (I've read some relevant research, but I don't have strong reason to believe it should be decisive in this evaluation), but I'm curious to hear what specific impressions you have that are contrary to this.

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-30T14:46:35.195Z · score: 3 (1 votes) · LW · GW

If you have more fine-grained data at your disposal on different topics and how much each has grown or shrunk in terms of number of pages, data available on each page, and accuracy, please share :).

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-30T04:05:04.345Z · score: 3 (1 votes) · LW · GW

In the case of Wikipedia, I think the aspects of quality that matter most for explaining pageviews are readily proxied by quantity. Specifically, the main quality factors in people reading a Wikipedia page are (a) the existence of the page (!), and (b) whether the page has the stuff they were looking for. I proxied the first by number of pages, and the second by length of the pages that already existed. Admittedly, there are a lot more subtleties to quality measurement (which I can go into in depth at some other point), some of which can have indirect, long-term effects on pageviews, but on most of these dimensions Wikipedia hasn't declined in the last few years (though I think it has grown more slowly than it would have with a less dysfunctional mod culture, and arguably too slowly to keep pace with the competition).

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-29T04:51:46.927Z · score: 3 (1 votes) · LW · GW

Great point. As somebody who has been in the crosshairs of Wikipedia mods (see ANI) my bias would push me to agree :). However, despite what I see as problems with Wikipedia mod culture, it remains true that Wikipedia has grown quite a bit, both in number of articles and length of already existing articles, over the time period when pageviews declined. I suspect the culture is probably a factor in that it represents an opportunity cost: a better culture might have led to an (even) better Wikipedia that would not have declined in pageviews so much, but I don't think the mod culture led to a quality decline per se. In other words, I don't think the mechanism:

counterproductive mod culture -> quality decline -> pageview decline

is feasible.

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-28T05:03:43.539Z · score: 6 (3 votes) · LW · GW

Great points. As I noted in the post, search and social media are the two most likely proximal mechanisms of causation for the part of the decline that's real. But neither may represent the "ultimate" cause: the growth of alternate content sources, or better marketing by them, or changes in user habits, might be what's driving the changes in social media and search traffic patterns (in the sense that the reason Google's showing different results, or Facebook is making some content easier to share, is itself driven by some combination of what's out there and what users want).

The main challenge with search engine ranking data is that (a) the APIs forbid downloading the data en masse across many search terms, and (b) getting historical data is difficult. Some SEO companies offer historical data, but based on research Issa and I did last year, we'd have to pay a decent amount to even be able to see if the data they have is helpful to us, and it may very well not be.

The problem with Google Trends is that (a) it does a lot of normalization (it normalizes search volume relative to total search volume at the time), which makes it tricky to interpret data over time, and (b) it's hard to download data en masse. Also, a lot of Google Trends results are just amusingly weird, e.g. https://trends.google.com/trends/explore?date=all&q=Facebook (see https://www.facebook.com/vipulnaik.r/posts/10208985033078964 for more discussion) -- are we really to believe that interest in Facebook spiked in October 2012, and that it has returned in 2017 (after a 5-year decline) to what it used to be back in 2009? Google Trends is just yet another messy data series that I would have to acquire expertise in the nuances of, not a reliable beacon of truth against which Wikipedia data can be compared.

The one external data source I have been able to collect with reasonable reliability is Facebook share counts. At the end of each month, I record Facebook share counts for a number of Wikipedia pages by hitting the Facebook API (a process that takes several days because of Facebook's rate limiting). Based on this I now have decent time series of cumulative Facebook share counts, such as https://wikipediaviews.org/displayviewsformultiplemonths.php?tag=Colors&allmonths=allmonths-api&language=en&drilldown=cumulative-facebook-shares. If I do a more detailed analysis, this data will be important for evaluating the social media hypothesis.
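
As an illustration only, here is a rough sketch of the kind of rate-limited monthly collection loop described above; get_share_count is a hypothetical stand-in for whatever call actually returns a share count (not a real Facebook endpoint), and the delay and page list are illustrative.

```python
# Rough sketch of a rate-limited monthly collection run. get_share_count is a
# hypothetical placeholder, not an actual Facebook API call.
import csv
import time
from datetime import date

def get_share_count(url):
    """Placeholder for the real share-count lookup; returns a dummy value here."""
    return 0

def record_share_counts(page_urls, out_path, delay_seconds=2.0):
    today = date.today().isoformat()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in page_urls:
            writer.writerow([today, url, get_share_count(url)])
            time.sleep(delay_seconds)  # stay under the rate limit

record_share_counts(
    ["https://en.wikipedia.org/wiki/Color", "https://en.wikipedia.org/wiki/Blue"],
    "facebook_share_counts.csv",
)
```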

How interested are you in seeing an exploration of the search engine ranking and increased use of social media hypotheses?

Comment by vipulnaik on Beta - First Impressions · 2017-09-27T18:42:59.543Z · score: 2 (1 votes) · LW · GW

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-27T15:10:27.060Z · score: 4 (2 votes) · LW · GW

The Wikimedia Foundation has not ignored the decline. For instance, they discuss the overall trends in detail in their quarterly readership metrics reports, the latest of which is at https://commons.wikimedia.org/wiki/File:Wikimedia_Foundation_Readers_metrics_Q4_2016-17_(Apr-Jun_2017).pdf. The main differences between what they cover and what I intend to cover are (a) they only cover overall rather than per-page pageviews, (b) they focus more on year-over-year comparisons than long-run trends, and (c) related to (b), they don't discuss the long-run causes. However, these reports are a great way of catching up on incremental overall traffic level updates as well as any analytics or measurement discrepancies that might be driving weird numbers.

The challenge of raising more funds with declining traffic has also been noted in fundraiser discussions, such as at https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2015-10-14/News_and_notes which has the quote:

Better performing banners are required to raise a higher budget with declining traffic. We’ll continue testing new banners into the next quarter and sharing highlights as we go.

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-27T01:43:49.377Z · score: 2 (1 votes) · LW · GW

They still show up in the total comment count :).

Comment by vipulnaik on Wikipedia pageviews: still in decline · 2017-09-26T23:04:47.956Z · score: 1 (1 votes) · LW · GW

Comment by vipulnaik on LessWrong analytics (February 2009 to January 2017) · 2017-04-18T21:25:40.463Z · score: 2 (3 votes) · LW · GW

For all the talk about the "decline" of LessWrong, total pageviews and sessions to LessWrong have stayed 5-10 times higher than those to the Effective Altruism Forum (the EAF numbers are documented in my post).

Comment by vipulnaik on Wikipedia usage survey results · 2017-03-17T22:00:37.860Z · score: 0 (0 votes) · LW · GW

The 2017 SSC Survey had 5500 respondents. Presumably this survey was more widely visible and available than mine (which was one link in the middle of a long link list).

https://slatestarcodex.com/2017/03/17/ssc-survey-2017-results/

Comment by vipulnaik on Wikipedia usage survey results · 2017-01-07T23:58:42.498Z · score: 1 (1 votes) · LW · GW

Varies heavily by context. Typical alternatives:

(a) Google's own answers for simple questions.

(b) Transactional websites for search terms that denote possible purchase intent, or other websites that are action-oriented (e.g., Yelp reviews).

(c) More "user-friendly" explanation sites (e.g., for medical terminology, a website that explains it in a more friendly style, or WikiHow)

(d) Subject-specific references (some overlap with (c), but could also include domain Wikias, or other wikis)

(e) When the search term is trending because of a recent news item, then links to the news item (even if the search query itself does not specify the associated news)

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-28T23:19:40.185Z · score: 0 (0 votes) · LW · GW

Interesting. I suspect that even among verbal elites, there are further splits in the type of consumption. Some people are heavy on reading books since they want a full, cohesive story of what's happening, whereas others consume information in smaller bits, building pieces of knowledge across different domains. The latter would probably use Wikipedia more.

Similarly, some people like opinion-rich material whereas others want factual summaries more. The factual summary camp probably uses Wikipedia more.

However, I don't know if there are easy ways of segmenting users, i.e., I don't know if there are websites or communities that are much more dominated by users who prefer longer content, or users who prefer factual summaries.

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-26T05:22:30.690Z · score: 1 (1 votes) · LW · GW

Good idea, but I don't think he does the census that frequently. The most recent one I can find is from 2014: http://slatestarcodex.com/2015/11/04/2014-ssc-survey-results/

The annual LessWrong survey might be another place to consider putting it. I don't know who's responsible for doing it in 2017, but when I find out I'll ask them.

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T15:00:31.486Z · score: 2 (2 votes) · LW · GW

It's not too late, if I do so decide :). In other words, it's always possible to spend later for larger samples, if that actually turns out to be something I want to do.

Right now, I think that:

  • It'll be pretty expensive: I'd probably want to run the survey using several different survey tools, since each has its strengths and weaknesses (so SurveyMonkey, Google Surveys, maybe Survata and Mechanical Turk as well). Then with each I'd need 1000+ responses to be able to regress against all variables and variable pairs. The costs do add up quickly to over a thousand dollars.

  • I don't currently have that much uncertainty: It might show that age and income actually do explain a little more of the variation than it seems right now (and that would be consistent with the Pew research). But I feel that we already have enough data to see that these demographics don't have anywhere near the effect that SSC membership has.

I'm open to arguments to convince me otherwise.

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:50:17.418Z · score: 0 (0 votes) · LW · GW

I've published a new version of this post where the takeaways are more clearly highlighted (I think!). The post is longer but the takeaways (which are summarized on top) should be quick to browse if you're interested.

It's at http://lesswrong.com/r/discussion/lw/odb/wikipedia_usage_survey_results/

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:48:55.856Z · score: 2 (2 votes) · LW · GW

Good point! Something I thought a bit about but didn't get around to discussing in this post. The Slate Star Codex audience returned a total of 618 responses. I don't have a very good idea of how many people read the SSC blog carefully enough to go through all the links, but my best guess is that that number is in the low thousands. If that's the case, the response rate is 15% or higher (618 out of, say, 4,000 readers is a bit over 15%). This is still low but not that low.

Another way of framing this: how low would the response rate have to be for the true SSC readership to be like the SurveyMonkey Audience or Google Surveys audiences? Based on the numbers it seems like the selection bias would have to be really strong for that to happen.

So while I don't think selection for Wikipedia specifically is the driving factor here, it could be that rather than talk about SSC readership, it makes more sense to talk about "SSC readers who are devoted enough and curious enough to read through every link in the link roundup."

On a related note, effective response rates for on-site Wikipedia surveys (which we didn't discuss here, but might be the subject of future posts) can be around 0.1% to 0.2%; see for instance Why We Read Wikipedia (to get the response rate you would need to use existing info on the number of pageviews to Wikipedia; I have emailed the researchers and confirmed that the response rate was in that ballpark). Compared to that, the SSC response rate seems pretty high and more clearly informative about the population.

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:17:45.249Z · score: 1 (1 votes) · LW · GW

Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the third takeaway:

The gap between elite samples of Wikipedia users and general United States Internet users is significantly greater than the gap between the different demographics within the United States that we measured. It is comparable to the gap between United States Internet users and Internet users in low-income countries.

I'm still a little surprised at the low effect sizes of demographic differences within the United States. Still, a lot of questions can be raised about the methodology. Other than gender, we didn't really collect large samples for anything. And Google Surveys uses inferred values for age and income for most respondents, so it's probably not that reliable.

The Pew Internet surveys offer some independent evidence of the strength of the correlation of Wikipedia use with gender, age, and income, but the questions there are too coarse (just asking people whether they use Wikipedia).

Could there be other demographic variables that we didn't explore that could have higher predictive power?

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:12:44.929Z · score: 0 (0 votes) · LW · GW

Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the second takeaway:

we’ve revised upward our estimate of the impact per pageview, and revised downward our estimate of the broad appeal and reach of Wikipedia.

A lot of this comes down to whether the indicators we've identified for heavy Wikipedia use actually are things to be optimistic about. Is the typical SSC or LessWrong reader better able to use information gleaned from Wikipedia?

And what about the alleged downside that Wikipedia is being read by fewer people than we might think? How much does that cut into the value of writing pages with hopefully broad appeal?

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:06:39.929Z · score: 0 (0 votes) · LW · GW

On a related note, one of famous LessWronger Carl Shulman's research suggestions mentions Wikipedia:

Try to get datasets (Wikipedia lists, World Bank info, USDA, etc.) as a primary step in thinking about a question.

From his research advice document

Comment by vipulnaik on Wikipedia usage survey results · 2016-12-25T14:04:35.295Z · score: 1 (1 votes) · LW · GW

Per the suggestion at Improve comments by tagging claims, here is a comment to collect discussion of the first takeaway:

Wikipedia consumption is heavily skewed toward a profile of “elite” people, and these people use the site in qualitatively different ways.

I didn't talk about it much in the post since it would be too speculative, but I'm interested in more concrete thoughts on predicting what websites or online communities would have a high degree of Wikipedia use. The SurveyMonkey Audience and Google Surveys results plausibly show that crude demographic proxies such as intelligence, education, wealth, income, gender, age, etc. have very little predictive power compared with something like "reads Slate Star Codex and is willing to click to a survey link from there."

I wonder what sort of attributes might be most predictive of using Wikipedia a lot. I'd say it's something like "general intellectual curiosity": curiosity of an intellectual kind, but general, across domains, rather than narrowly related to one domain where one can achieve enough mastery so as not to need Wikipedia. I do know of curious people who don't use Wikipedia much, because their curiosity is specific to domains where they have far surpassed Wikipedia, or that Wikipedia doesn't cover well.

I wonder what other websites similar to SSC might qualify. Would LessWrong? Wait But Why? EconLog? Overcoming Bias? XKCD? SMBC Comics?

I also wonder what friend networks or other online community filters would predict high Wikipedia use. Does being a Yudkowsky follower on Facebook predict high Wikipedia use? What about being in particular subreddits?

Comment by vipulnaik on Fact Posts: How and Why · 2016-12-05T01:28:51.956Z · score: 3 (3 votes) · LW · GW

I like the spirit of the suggestion here, but have at least two major differences of opinion regarding:

  • The automatic selection of venue: I think that blogs are only a place of "last resort" for facts and not the go-to place. I would suggest venues like Wikipedia (when it's notable enough and far enough away from original research), wikiHow and Wikia (for cases somewhat similar to Wikipedia but suited to the specifics of those sites), and domain-specific sharing fora as better choices in some contexts.
  • The filtering out of opinion and biased sources: I think separating out factual sources from opinion-based sources is harder than it looks, that many numbers, esp. in the social sciences, are based on a huge amount of interpretation conventions that you can't fully grok without diving into the associated opinion pieces from different perspectives, and that epistemic value is greater when you integrate it all. That said, a "facts-only" approach can be a nice starting point for bringing priors into a conversation.

Automatic selection of venue

Collecting and organizing facts is great not just for the fact-gatherer but also for others who can benefit from the readymade process. In some cases, your exploration heavily includes personal opinion or idiosyncratic selection of direction. For these cases, posting to a personal site or blog, or a shared discussion forum for the topic, is best. In other cases, a lot of what you've uncovered is perfectly generic. In such cases, places like Wikipedia, wikiHow, Wikia, or other wikis and fact compendiums can be good places to share your facts. I've done this quite a bit, and also sponsored others to do similar explorations. This provides more structure and discipline to the exercise and significantly increases the value to others.

Filtering out of opinion and biased sources

There are a few different aspects to this.

First, the numbers you receive don't come out of thin air; they are usually a result of several steps of recording and aggregation. Understanding and interpreting how this data is aggregated, what it means on the ground, etc. are things that require both an understanding of the mathematical/statistical apparatus and of the real-world processes involved. Opinion pieces can point to different ways of looking at the same numbers.

For instance, if you just download a table of fertility rates and then start opining on how population is changing, you're likely to miss out on the complex dynamics of fertility calculations, e.g., all the phenomena such as tempo effects, population momentum, etc. You could try deriving all these insights yourself (which isn't that hard, just takes several days of looking at the numbers and running models) or you could start off by reading existing literature on the subject. Opinionated works often do a good job of covering these concepts, even when they come to fairly wrong conclusions, just because they have to cover the basics to even have an intelligent conversation.

Moreover, there are many cases where people's opinions ARE the facts that you are interested in. To take the example of fertility, let's say that you uncover the concepts of ideal, desired, and expected fertility and are trying to use them to explain how fertility is changing. How will you understand how men's and women's ideal fertility numbers are changing over time? Surveys only go so far and are fairly inadequate. Opinion pieces can shed important light on the opinions of the people writing them, and comments on them can be powerful indicators of the opinions of the commenters. In conjunction with survey data, this could give you a richer sense of theories of fertility change.

It's also hard to keep your own biases, normative or factual, out of the picture.

My experience and view is that it's better to read opinion pieces from several different perspectives to better identify your own biases and control for them, as well as get a grounding in the underlying conceptual lexicon. This could be done before, after, or in conjunction with the lookup of facts -- each approach has its merits.

Comment by vipulnaik on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T18:38:51.957Z · score: 3 (3 votes) · LW · GW

Whoops, sorry for missing that. Upvoted, hopefully it gets to zero and resurfaces.

Comment by vipulnaik on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T18:38:22.840Z · score: 6 (6 votes) · LW · GW

It could also be a good way for the Internets to give up on trying to talk in a forum where you are around.

Comment by vipulnaik on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T02:43:19.405Z · score: 10 (10 votes) · LW · GW

The impression I form based on this is that the main blocker to LessWrong revitalization is getting people to write sufficiently attractive posts. This seems to mostly agree with the emerging consensus in the comments, but the empirical backing from the survey is nice. Also, it's good to know that software or interface improvements aren't a big blocker.

As for what's blocking content creators from contributing to LessWrong, here are a few hypotheses that don't seem to have been given as much attention as I'd like:

  1. Contributing novel content becomes harder as people's knowledge base and expectations grow: Shooting off a speculative missive no longer works in 2016 the way it might have worked in 2011 -- people have already seen a lot of the basic speculation, and need something more substantive to catch their attention. But the flip side is that something that's truly substantive is going to require a lot of work to research and write, and then even more work to simplify and explain elegantly. This problem is stronger on LessWrong because of the asymmetric nature of rewards. On Facebook, you can still shoot off a speculative missive -- it's your own Facebook post -- and you won't get blasted for being unoriginal or boring. A lot of people will like, comment, and share your status if you're famous enough or witty enough. On LessWrong, you'll be blasted more.
  2. Negative reception and/or lack of reception is more obvious on LessWrong: Due to the karma system of LessWrong, it's brutally obvious when your posts aren't liked enough by people, and/or don't get enough comments. On personal blogs, this is a little harder for outsiders to make out (unless the blogger explicitly makes the signals obvious) and even then, harder to compare with other people's posts. This means that when people have heavy personal investment in something (e.g., they've spent months working on it), they may feel reluctant to post it on LW and see it upvoted less than a random post that fits more closely with LW norms. The effects are mediated purely through psychological impact on the author. For most starting authors, the audience one reaches through LW, and the diversity of feedback one gets, are still way larger than what one would get on one's own blog (though social media circulation has lessened the gap). But the psychological sense of having gotten "only" three net upvotes compared to the 66 of the top-voted post can make people hesitant. I remember a discussion with somebody who was disheartened about the lack of positive response, but I pointed out that in absolute terms it was still more than a personal blog would have provided.
  3. Commenters' confidence often exceeds their competence, but the commenters still sound prima facie reasonable: On newspaper and magazine blogs, the comments are terrible, but they're usually obviously terrible. Readers can see them and shrug them off. On LessWrong, star power commenters often make confident comments that seem prima facie reasonable yet misunderstand the post. This is particularly the case as we move beyond LW's strong areas and into related domains, which any forum dedicated to applying rationality to the real world should be able to do. The blame here isn't solely on the commenters who make the mistaken assertions but also on the original post for not being clear enough, and on upvoters for not evaluating things carefully enough. Still, this does add to the burden of the original poster, who now has to deal with potential misconceptions and misguided but confident putdowns that aren't prima facie wrong. Hacker News has a similar problem though the comments on HN are more obviously bad (obviously ill-informed uncharitable criticism) so it might be less of a problem there.
  4. Commitment to topics beyond pet rationality topics isn't strong and clear enough: LessWrong is fairly unique as a forum with the potential for reasonably high quality discussion of just about any topic (except maybe politics and porn and sex stuff). But people posting on non-pet topics aren't totally sure how much their post belongs on LessWrong. A more clear embrace of "all topics under the sun" -- along with more cooperative help from commenters to people who post on non-conventional topics -- can help.

Comment by vipulnaik on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T02:23:05.490Z · score: 8 (8 votes) · LW · GW

I might have missed it, but reading through the comment thread here I don't see prominent links to past discussions. There's LessWrong 2.0 by Vaniver last year, and, more recently, there is LessWrong use, successorship, and diaspora. Quoting from the section on rejoin conditions in the latter:

A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write ins for conditions to rejoin, what did people say they'd need to rejoin the site?

(links to rejoin condition write-ins)

Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is how you deal with this.

Comment by vipulnaik on Linkposts now live! · 2016-09-28T22:56:22.166Z · score: 4 (4 votes) · LW · GW

I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.

Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.

I also checked the post afterward to see if the edit had gone through, and it hadn't. In other words, my edit did not get saved.

Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.

Comment by vipulnaik on A Review of Signal Data Science · 2016-08-22T23:48:33.857Z · score: 0 (0 votes) · LW · GW

One relevant consideration in such an evaluation is that Signal's policies with respect to various things (like percentage of income taken, initial deposit, length of program) may have changed since the program's inception. Of course, the program itself has changed since it started. Therefore, feedback or experiences from students in initial cohorts need to be viewed in that light.

Disclosure: I share an apartment with Jonah Sinick, co-founder of Signal. I have also talked extensively about Signal with Andrew J. Ho, one of its key team members, and somewhat less extensively with Bob Cordwell, the other co-founder. ETA: I also conducted a session on data science and machine learning engineering in the real world (drawing on my work experience) with Signal's third cohort on Saturday, August 20, 2016.

Comment by vipulnaik on Wikipedia usage survey results · 2016-07-17T00:16:46.878Z · score: 3 (3 votes) · LW · GW

I think Issa might write a longer reply later, and also update the post with a summary section, but I just wanted to make a quick correction: the college-educated SurveyMonkey population we sampled in fact did not use Wikipedia a lot (in S2, CEYP had fewer heavy Wikipedia users than the general population).

It's worth noting that the general SurveyMonkey population as well as the college-educated SurveyMonkey population used Wikipedia very little, and one of our key findings was the extent to which usage is skewed toward a small subset of the population that uses it heavily (although almost everybody has heard of it and used it at some point). Also, the responses to S1Q2 show that the general population rarely seeks out Wikipedia actively, in contrast with the small subset of heavy users (including many SSC readers and people who filled out my survey through Facebook).

Your summary of the post is an interesting take on it (and consistent with your perspective and goals) but the conclusions Issa and I drew (especially regarding short-term value) were somewhat different. In particular, both in terms of the quantity of traffic (over a reasonably long time horizon) and the quality and level of engagement with pages, Wikipedia does better than a lot of online content. Notably, it does best in terms of having sustained traffic, as opposed to a lot of "news" that trends for a while and then drops sharply (in marketing lingo, Wikipedia content is "evergreen").

Comment by vipulnaik on An update on Signal Data Science (an intensive data science training program) · 2016-07-07T05:51:45.993Z · score: 1 (1 votes) · LW · GW

Following up!

Comment by vipulnaik on The Science of Effective Fundraising: Four Common Mistakes to Avoid · 2016-04-16T16:16:59.928Z · score: 2 (2 votes) · LW · GW

[Comment cross-posted to the Effective Altruism Forum]

[I will use "Effective Altruists" or "EAs" to refer to the people who self-identify as members of the community, and "effective altruists" (without capitalization) for people to whom effectiveness matters a lot in altruism, regardless of whether they self-identify as EAs.]

I think this post makes some important and valuable points. Even if not novel, the concise summary here could make for a good WikiHow article on how to be a more effective fundraiser. However, I believe that this post falls short by failing to mention, let alone wrestle with, the tradeoffs involved with these strategies.

I don't believe there is a clear and obvious answer to the many tradeoffs involved with adopting various sales tactics that compromise epistemic value. I believe, however, that not even acknowledging these tradeoffs can lead to potentially worse decisions.

My points below overlap somewhat.

First, effective altruists in general, and EAs in particular, are a niche segment in the philanthropic community. The rules for selling to this niche can differ from the rules of selling to the general public. So much so that sales tactics that are considered good for the general public are actively considered bad when selling to this niche. Putting an identifiable victim may help with, say, 30% of potential donors in the general public, but alienate 80% of potential donors among effective altruists, because they have (implicitly or explicitly) learned to overcome the identifiable victim effect. In general, using messaging targeted at the public for a niche that is often based, implicitly or explicitly, on rejecting various aspects of such messaging, is a bad thing. A politician does not benefit from taking positions held by the majority of people all the time; rather, whereas some politicians are majoritarian moderates, others seek specific niches where their support is strong, often with the alienation of a majority as a clear consequence (for instance, a politician in one subregion of a country may adopt rhetoric and policies that make the politician unpopular countrywide but guarantee re-election in that subregion). Similarly, not every social network benefits from adopting Facebook's approach to partial openness and diversity of forms of expression. Snapchat, Pinterest, and Twitter have each carved a niche based on special features they have.

Second, in addition to the effect in rhetorical terms, it's also important to consider the effect in substantive terms on how the organizations involved spend their money and resources, and make decisions. Ideally, you can imagine a wall of separation: the organization focuses on being maximally effective, and a separate sales/fundraising group optimizes the message for the general public. However, many of the strategies suggested here actually affect the organization's core functions. Pairing donors with individual recipients significantly affects the organization's operations on the ground, raising costs. Could this in the long run lead to, e.g., organizations choosing to operate in areas where recipients have characteristics that make them more appealing for donors to communicate with (e.g., because they are more familiar with the language of the donor's country)? I don't see a way of keeping overall effectiveness, in the way that many EAs care about, the dominant evaluation criterion if fundraising success is tied heavily to these other outreach strategies.

Third (building somewhat on the first), insofar as there is a tradeoff between being able to sell more to effective altruists versus appealing more to the general public, the sign of the financial effect is actually ambiguous. The number of donors in the general public is much larger, but the amount that they donate per capita tends to be smaller. One of the ingredients of EA success is that its strength lies not so much in its numbers but in the depth of convictions of many self-identified EAs, plus other effective altruists (such as GiveWell donors). People who might have previously donated a few hundred dollars a year for an identifiable victim may now be putting in tens of thousands of dollars because the large-scale statistics have touched them in a deeper way. GiveWell moved $103 million to its top charities in 2015, of which $70 million was from Good Ventures (that's giving away money from a Facebook co-founder) and another $20 million was from individual donors who gave amounts in excess of $100,000 each. To borrow sales jargon, these deals are highly lucrative and took a long time to close. Closing them required high confidence in the epistemic rigor of the recommendations on the part of a number of donors, many of whom were probably jaded by psychologically pitch-perfect campaigns. I'm not even saying that GiveWell's reviews are actually rigorous, but rather, that the perception of rigor surrounding them was a key factor in many people donating to GiveWell-recommended charities.

Fourth, if the goal is to spread better, more rational giving habits, then caving in to sales tactics that exploit known forms of irrationality hampers that goal.

None of these imply that the ideas you suggest are inapplicable in the context of EA or for effective altruists in general. Nor am I suggesting that EAs (or effective altruists in general) are bias-free and rational demigods: I think many EAs have their own sets of biases that are more sophisticated than those of the general public but still real. I also think that many of the biases, such as the identifiable victim, can actually be epistemically justified somewhat, and you could make a good epistemic case for using individual case studies as not just a sales strategy but something that actually helps provide yet another sanity check (this is sort of what GiveWell tried to do by sponsoring field trips to the areas of operation of its top charities). You could also argue that the cost of alienating some people is a cost worth bearing in order to achieve a somewhat greater level of popularity, or that a wall of separation is not that hard to achieve.

But acknowledging these tradeoffs openly is a first step to letting others (including the orgs and fundraisers you are targeting) make a careful, informed decision. It can also help people figure out new, creative compromises. Perhaps, for instance, showing an identifiable victim and, after people are sort-of-sold, then pivoting to the statistics, provides the advantages of mass appeal and epistemic rigor. Perhaps there are ways to use charities' own survey data to create composite profiles of typical beneficiaries that can help inform potential donors as well as appeal to their desire for an identifiable victim. Perhaps, at the end of the day, raising money matters more than spreading ideas, and getting ten million people to donate a few hundred dollars a year is better than the current EA donor profile or the current GiveWell donor profile.

Comment by vipulnaik on The great decline in Wikipedia pageviews (condensed version) · 2015-03-31T23:07:42.592Z · score: 1 (1 votes) · LW · GW

Eh? Desktop is still more than half:

https://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm

https://stats.wikimedia.org/EN/TablesPageViewsMonthlyMobile.htm

https://stats.wikimedia.org/EN/TablesPageViewsMonthly.htm

Comment by vipulnaik on The great decline in Wikipedia pageviews (condensed version) · 2015-03-31T23:06:04.758Z · score: 2 (2 votes) · LW · GW

I didn't pick them as points that were most extreme as of earlier years, I picked them as generically popular topics. There should be no particular temporal directionality to view counts for such pages.

Comment by vipulnaik on [QUESTION]: LessWrong web traffic data? · 2015-02-17T23:08:49.727Z · score: 0 (0 votes) · LW · GW

No

Comment by vipulnaik on [QUESTION]: Looking for insights from machine learning that helped improve state-of-the-art human thinking · 2014-07-25T06:49:15.752Z · score: 0 (0 votes) · LW · GW

Thanks, both of these look interesting. I'm reading the Google paper right now.

Comment by vipulnaik on Scenario analyses for technological progress for the next decade · 2014-07-16T05:37:31.100Z · score: 0 (0 votes) · LW · GW

Good question.

I'm not an expert in machine learning either, but here is what I meant.

If you're running an algorithm such as linear or logistic regression, then there are two relevant dimensions: the number of data points and the number of features (i.e., the number of parameters). In the design matrix of the regression, the number of data points is the number of rows and the number of features/parameters is the number of columns.

Holding the number of parameters constant, it's true that if you increase the number of data points beyond a certain amount, you can get most of the value through subsampling. And even if not, more data points is not such a big issue.

But the main advantage of having more data is lost if you still use the same (small) number of features. Generally, when you have more data, you'd try to use that additional data to use a model with more features. The number of features would still be less than the number of data points. I'd say that in many cases it's about 1% of the number of data points.

Of course, you could still use the model with the smaller number of features. In that case, you're just not putting the new data to much good use. Which is fine, but not an effective use of the enlarged data set. (There may be cases where even with more data, adding more features is no use, because the model has already reached the limits of its predictive power).

For linear regression, the algorithm to solve it exactly (using normal equations) takes time that is cubic in the number of parameters (if you use the naive inverse). Although matrix inversion can in principle be done faster than cubic, it can't be faster than quadratic, which is a general lower bound. Other iterative algorithms aren't quite cubic, but they're still more than linear.
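
To make the dimensions and costs concrete, here is a minimal sketch (assuming NumPy); the sizes are purely illustrative, with the number of features at roughly 1% of the number of data points, as suggested above.

```python
# Minimal sketch: the design matrix X has one row per data point (n) and one
# column per feature (p). Solving the normal equations costs roughly
# O(n * p^2) to form X^T X plus O(p^3) to solve it, so the feature count p,
# not the data count n, dominates as the model grows.
import numpy as np

def fit_normal_equations(X, y):
    """Solve (X^T X) w = X^T y for the regression weights w."""
    XtX = X.T @ X                     # p x p matrix, about O(n * p^2) to form
    Xty = X.T @ y                     # length-p vector, about O(n * p)
    return np.linalg.solve(XtX, Xty)  # about O(p^3) for the solve

# Illustrative sizes only.
n, p = 20_000, 200
rng = np.random.default_rng(0)
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
w = fit_normal_equations(X, y)
```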

Comment by vipulnaik on Scenario analyses for technological progress for the next decade · 2014-07-16T01:06:01.868Z · score: 1 (1 votes) · LW · GW

My scenario #1 explicitly says that even in the face of a slowdown, we'll see doubling times of 10-25 years: "If the doubling time reverts to the norm seen in other cutting-edge industrial sectors, namely 10-25 years, then we'd probably see the introduction of revolutionary new product categories only about once a generation."

So I'm not predicting complete stagnation, just a slowdown where computing power gains aren't happening fast enough for us to see new products every few years.

Comment by vipulnaik on Scenario analyses for technological progress for the next decade · 2014-07-15T00:29:46.519Z · score: 3 (3 votes) · LW · GW

I think continued progress of Moore's law is quite plausible, and that was one of the scenarios I considered (Scenario #2). That said, it's interesting that you express high confidence in this scenario relative to the other scenarios, despite the considerable skepticism of computer scientists, engineers, and the McKinsey report.

Would you like to make a bet for a specific claim about the technological progress we'll see? We could do it with actual money if you like, or just an honorary bet. Since you're claiming more confidence than I am, I'd like the odds in my favor, at somewhere between 2:1 and 4:1 (details depend on the exact proposed bet).

My suggestion to bet (that you can feel free to ignore) isn't intended to be confrontational. cf.

http://econlog.econlib.org/archives/2012/05/the_bettors_oat.html

Comment by vipulnaik on Forecasting rare events · 2014-07-12T14:34:03.893Z · score: 0 (0 votes) · LW · GW

Thanks! I added pandemics (though not in the depth I should have). I'll look at some of the others.

Comment by vipulnaik on The insularity critique of climate science · 2014-07-11T03:11:38.830Z · score: 0 (0 votes) · LW · GW

The full correspondence is here:

http://www.theclimatebet.com/?page_id=4

Maybe it's lame (?) but I don't think they're being deceptive -- they're quite explicit that Gore refused to bet.

The fact that he refused to bet could be interpreted either as evidence that the bet was badly designed and didn't reflect the fundamental point of disagreement between Gore and Armstrong, or as evidence that Gore was unwilling to put his money where his mouth is.

I'm not sure what interpretation to take.

btw, here's a bet that was actually properly entered into by both parties (neither of them a climate scientist):

http://econlog.econlib.org/archives/2014/06/bauman_climate.html

Comment by vipulnaik on The insularity critique of climate science · 2014-07-10T15:08:05.745Z · score: 0 (0 votes) · LW · GW

Have you looked at http://www.theclimatebet.com (mentioned in an UPDATE at the end of Critique #1 in my post)?

Comment by vipulnaik on The insularity critique of climate science · 2014-07-10T06:17:00.820Z · score: 1 (1 votes) · LW · GW

Thanks, fixed!

Comment by vipulnaik on Domains of forecasting · 2014-07-09T21:55:28.754Z · score: 1 (1 votes) · LW · GW

Thanks for both the appreciation and the suggestion.

I intend to do a concluding post on the MIRI blog, linking to all of these; if Luke agrees, I can cross-post that to LessWrong and accompany that with a full listing of blog posts.

I'll also put a list of all my posts on my personal website later on.

Comment by vipulnaik on Domains of forecasting · 2014-07-09T21:35:40.205Z · score: 1 (1 votes) · LW · GW

Good point. I'd looked at financial market forecasting along with macroeconomic forecasting, when I was investigating survey-based macroeconomic forecasting. I have some of the collected material, but I don't think I ever wrote it up. Thanks for reminding me! I'll add it to this post later.

Comment by vipulnaik on The insularity critique of climate science · 2014-07-09T20:24:56.364Z · score: -1 (1 votes) · LW · GW

Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.

Co-author Green wrote a paper later claiming that the IPCC models did not do better than the no change model when tested over a broader time period:

http://www.kestencgreen.com/gas-improvements.pdf

But it's just a draft paper and I don't know if the author ever plans to clean it up or have it published.

I would really like to see more calibrations and scorings of the models from a pure outside view approach over longer time periods.

Armstrong was (perhaps wrongly) confident enough of his views that he decided to make a public bet claiming that the No Change scenario would beat out the other scenario. The bet is described at:

http://www.theclimatebet.com/

Overall, I have high confidence in the view that models of climate informed by some knowledge of climate should beat the No Change model, though a lot depends on the details of how the competition is framed (Armstrong's climate bet may have been rigged in favor of No Change). That said, it's not clear how well climate models can do relative to simple time series forecasting approaches or simple (linear trend from radiative forcing + cyclic trend from ocean currents) type approaches. The number of independent out-of-sample validations does not seem to be large enough, and the edge in predictive power of complex models over simple curve-fitting models seems to be small (probably negative). So, I think that arguments that say "our most complex, sophisticated models show X" should be treated with suspicion and should not necessarily be given more credence than arguments that rely on simple models and historical observations.
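
Purely as an illustration of this kind of outside-view scoring (with synthetic numbers, not real climate data), here is a minimal sketch comparing a no-change forecast against a simple linear-trend forecast:

```python
# Illustration only: score a "no change" (persistence) forecast against a simple
# linear-trend forecast on a synthetic temperature series. All numbers are fake.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2011)
temps = 0.015 * (years - 1960) + rng.normal(0, 0.1, size=years.size)  # fake anomalies

train, test = slice(0, 40), slice(40, None)  # fit on 1960-1999, score on 2000-2010

# No-change model: forecast every test year as the last observed training value.
no_change = np.full(temps[test].shape, temps[train][-1])

# Simple trend model: ordinary least squares line fit to the training years.
slope, intercept = np.polyfit(years[train], temps[train], 1)
trend = slope * years[test] + intercept

print("MAE, no change:", np.mean(np.abs(temps[test] - no_change)))
print("MAE, linear trend:", np.mean(np.abs(temps[test] - trend)))
```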