Feed the spinoff heuristic!

post by CarlShulman · 2012-02-09T07:41:28.468Z · LW · GW · Legacy · 91 comments

Follow-up to:

Parapsychology: the control group for science

Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields

Recent renewed discussions of the parapsychology literature and Daryl Bem's precognition article brought to mind the "market test" of claims of precognition. Bem tells us that random undergraduate students were able to predict with 53% accuracy where an erotic image would appear in the future. If this effect was actually real, I would rerun the experiment before corporate earnings announcements, central bank interest rate changes, etc., and change the images based on the reaction of stocks and bonds to the announcements. In other words, I could easily convert "porn precognition" into "hedge fund trillionaire precognition."
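
To make the arithmetic concrete, here is a minimal sketch (in Python) of how a 53%-accurate binary predictor compounds when market-direction bets are sized with the Kelly criterion. Only the 53% figure comes from the study; the bet count, even-odds payoff, and absence of transaction costs are illustrative assumptions.

```python
# Sketch: compounding a 53%-accurate binary predictor with Kelly-sized,
# even-odds bets. Only the 53% figure comes from the study; the rest
# (1000 independent bets, no transaction costs) is illustrative.
import random

p = 0.53                    # chance of calling the direction correctly
kelly_fraction = 2 * p - 1  # optimal stake for an even-odds bet: 6% of bankroll

def simulate(n_bets=1000, bankroll=1.0, seed=0):
    rng = random.Random(seed)
    for _ in range(n_bets):
        stake = kelly_fraction * bankroll
        bankroll += stake if rng.random() < p else -stake
    return bankroll

if __name__ == "__main__":
    # Expected log-growth per bet is p*ln(1+f) + (1-p)*ln(1-f), about 0.0018,
    # i.e. roughly 6x growth over 1000 bets; slow for a trillionaire, but a
    # real, demonstrable edge that could attract outside capital.
    print(simulate())
```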

If I was initially lacking in the capital to do trades, I could publish my predictions online using public key cryptography and amass an impressive track record before recruiting investors. If anti-psi prejudice was a problem, no one need know how I was making my predictions. Similar setups could exploit other effects claimed in the parapsychology literature (e.g. the remote viewing of the Scientologist-founded Stargate Project of the U.S. federal government). Those who assign a lot of credence to psi may want to actually try this, but for me this is an invitation to use parapsychology as a control group for science, and to ponder a general heuristic for crudely estimating the soundness of academic fields for outsiders.

One reason we trust that physicists and chemists have some understanding of their subjects is that they produce valuable technological spinoffs with concrete and measurable economic benefit. In practice, I often make use of the spinoff heuristic: If an unfamiliar field has the sort of knowledge it claims, what commercial spinoffs and concrete results ought it to be producing? Do such spinoffs exist? What are the explanations for their absence?

For psychology, I might cite systematic desensitization of specific phobias such as fear of spiders, cognitive-behavioral therapy, and military use of IQ tests (with large measurable changes in accident rates, training costs, etc.). In financial economics, I would raise the hundreds of billions of dollars invested in index funds, founded in response to academic research, and their outperformance relative to managed funds. Auction theory powers tens of billions of dollars of wireless spectrum auctions, not to mention evil dollar-auction sites.

This seems like a great task for crowdsourcing: the cloud of LessWrongers has broad knowledge, and sorting real science from cargo cult science is core to being Less Wrong. So I ask you, Less Wrongers, for your examples of practical spinoffs (or suspicious absences thereof) of sometimes-denigrated fields in the comments. Macroeconomics, personality psychology, physical anthropology, education research, gene-association studies, nutrition research, wherever you have knowledge to share.

ETA: This academic claims to be trying to use the Bem methods to predict roulette wheels, and to have passed statistical significance tests on his first runs. Such claims have been made for casinos in the past, but always trailed away in failures to replicate, repeat, or make actual money. I expect the same to happen here. 

91 comments

Comments sorted by top scores.

comment by lukeprog · 2012-02-09T16:49:03.040Z · LW(p) · GW(p)

If psychology worked, I would expect marketing firms to use it to make millions of people buy tons of shit that they don't need and that won't make them happy.

Replies from: AnnaSalamon, waveman, MichaelVassar, play_therapist, Dmytry
comment by AnnaSalamon · 2012-02-10T01:42:47.917Z · LW(p) · GW(p)

Is there any evidence, one way or the other, as to whether marketers draw useful info from academic psychology?

Replies from: lukeprog, JoachimSchipper
comment by lukeprog · 2012-02-10T18:58:01.777Z · LW(p) · GW(p)

More cheap evidence: marketing textbooks are stuffed full of mainstream psychological results and applications to the business of marketing.

comment by JoachimSchipper · 2012-02-10T12:42:52.265Z · LW(p) · GW(p)

Cheap evidence: Hacker News is full of people trying to get rich by selling something (usually access to web applications), and e.g. "Predictably Irrational" has been mentioned. The marketing guru Seth Godin says he's been influenced quite a bit by Poundstone's "Priceless", which apparently "dives into the latest psychological findings".

Of course, this is only informal evidence, only shows "some marketers" and only shows "believed to be useful".

Waveman's comment also seems relevant.

comment by waveman · 2012-02-09T22:18:06.297Z · LW(p) · GW(p)

Indeed. Case study of Freud's nephew who basically invented modern PR.

http://en.wikipedia.org/wiki/Edward_Bernays

comment by MichaelVassar · 2012-02-11T23:15:51.230Z · LW(p) · GW(p)

Don't they?

Replies from: lukeprog
comment by lukeprog · 2012-02-12T03:41:02.636Z · LW(p) · GW(p)

Yes they do. That was my intended meaning. :)

comment by play_therapist · 2012-02-09T18:59:39.224Z · LW(p) · GW(p)

I believe marketers do use psychology and many, if not most, Americans do buy "tons of shit that they don't need and that won't make them happy!"

Replies from: MBlume
comment by MBlume · 2012-02-09T21:15:09.708Z · LW(p) · GW(p)

I believe Luke intended this to be understood =)

comment by Dmytry · 2012-02-13T21:42:29.189Z · LW(p) · GW(p)

Well, I think to a large extent marketing firms rely on their own know-how, which I imagine is rather scientific. I have first-hand experience with this (I am selling a computer game through Steam). Various statistics are used to see what does better. Their marketing people are really great at e.g. picking the most-clickable banner design, versus the one that I thought would be the most clickable (I did my own stats and confirmed their choice).
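
For the curious, here is a minimal sketch of what such banner stats might look like: a two-proportion z-test on click-through rates, with click and view counts invented purely for illustration.

```python
# Sketch: comparing two banner designs by click-through rate with a
# two-proportion z-test. All counts are made up for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

if __name__ == "__main__":
    # Banner A: 120 clicks in 10,000 views; banner B: 80 clicks in 10,000 views.
    z, p = two_proportion_z(120, 10_000, 80, 10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")
```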

comment by TimS · 2012-02-09T14:10:40.592Z · LW(p) · GW(p)

Shorter version of OP's argument.

comment by Grognor · 2012-02-09T11:14:22.940Z · LW(p) · GW(p)

If personality psychology holds water, I would expect dating sites to use it and produce better results than Traditional Romance. Does it? From the outside looking in, it looks like it does.

It would also be useful in selecting for dorm room compatibility, which I can tell you from the inside looking out either does not work at all or isn't being used. I wouldn't expect it to be used in this context, though. No money in it.

Replies from: khafra, Jayson_Virissimo, Nornagest
comment by khafra · 2012-02-11T01:15:23.280Z · LW(p) · GW(p)

Contrary evidence:

“To date, there is no compelling evidence that any online dating matching algorithm actually works,” Finkel observes. “If dating sites want to claim that their matching algorithm is scientifically valid, they need to adhere to the standards of science, which is something they have uniformly failed to do. In fact, our report concludes that it is unlikely that their algorithms can work, even in principle, given the limitations of the sorts of matching procedures that these sites use.”

Replies from: Desrtopa, thomblake
comment by Desrtopa · 2012-02-12T05:57:40.361Z · LW(p) · GW(p)

I don't know if any of the dating sites they reviewed use a system similar to OkCupid's (users answer questions, pick how they want matches to answer those questions, and say how important the questions are to them), but I don't think OkCupid was included in that study. The author wrote that the matching algorithms of the companies they reviewed are proprietary, and were not shared with the researchers, but OkCupid's matching algorithm is publicly available.
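
For reference, the system described above corresponds to the algorithm OkCupid itself published: each user weights each common question by importance, each side's "satisfaction" with the other's answers is scored from those weights, and the match percentage is roughly the geometric mean of the two scores minus a small-sample margin of error. A rough sketch follows; the importance point values are the ones OkCupid stated at the time, but treat the specifics as approximate rather than the site's exact production code.

```python
# Rough sketch of OkCupid-style matching as publicly described: mutual
# weighted "satisfaction" scores combined by geometric mean. The importance
# weights are the published values; the margin-of-error step is simplified.
from math import sqrt

IMPORTANCE_POINTS = {"irrelevant": 0, "a_little": 1, "somewhat": 10,
                     "very": 50, "mandatory": 250}

def satisfaction(asker_prefs, answerer_answers):
    """How satisfied the asker is with the answerer, over shared questions.

    asker_prefs: {question: (acceptable_answers, importance)}
    answerer_answers: {question: answer}
    """
    earned = possible = 0
    for q, (acceptable, importance) in asker_prefs.items():
        if q not in answerer_answers:
            continue
        pts = IMPORTANCE_POINTS[importance]
        possible += pts
        if answerer_answers[q] in acceptable:
            earned += pts
    return earned / possible if possible else 0.0

def match_percent(a_prefs, a_answers, b_prefs, b_answers):
    s_ab = satisfaction(a_prefs, b_answers)
    s_ba = satisfaction(b_prefs, a_answers)
    n = len(set(a_prefs) & set(b_answers))  # shared questions (simplified)
    margin = 1 / n if n else 1.0            # crude small-sample penalty
    return max(0.0, sqrt(s_ab * s_ba) - margin) * 100
```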

comment by thomblake · 2012-02-13T16:36:38.257Z · LW(p) · GW(p)

In fact, our report concludes that it is unlikely that their algorithms can work, even in principle

That's a rather strong claim. Matching people up completely at random can work in principle.

Replies from: khafra
comment by khafra · 2012-02-13T18:05:26.189Z · LW(p) · GW(p)

Perhaps by "work" they meant "do better than letting people choose solely based on reading a short essay and seeing a picture," although that sounds difficult to make precise. Maybe just "do better than random." We might have to wait until they publish.

Replies from: thomblake
comment by thomblake · 2012-02-13T18:26:05.103Z · LW(p) · GW(p)

Again, it's the "even in principle" I was objecting to. Picking people at random can in principle do better than letting people choose solely based on reading a short essay and seeing a picture. And uniformly random algorithm A can in principle do better than uniformly random algorithm B.

Saying something isn't possible "even in principle" specifically means that it cannot happen in any logically possible world - that's the entire difference between saying "even in principle" and leaving it out. It can't even accidentally win.

comment by Jayson_Virissimo · 2012-02-14T05:48:46.329Z · LW(p) · GW(p)

This week's issue of The Economist has a summary of the scientific evidence behind the popular Internet dating websites.

comment by Nornagest · 2012-02-11T02:01:07.379Z · LW(p) · GW(p)

I don't think OKCupid contains a good way of tracking long-term romantic success once a relationship escapes from the site, but it certainly has the data to correlate any one of several personality metrics with length of correspondence, which strikes me as a half-decent proxy: there's a huge library of personality tests on the site, including some well-known ones like the MBTI and the Big 5. OKTrends has almost certainly touched on this before, although you'd probably have to apply a lot of logical glue yourself to get a theory to stick together properly.

OKC's primary metric, however, relies on self-selected answers to a large pool of crowdsourced questions. If there's been any academic research done in that exact space I'm not aware of it, but it wouldn't be too much of a stretch to view correlations between match metrics and actual romantic success as answering the question "how well do people know their own romantic preferences?" -- or conversely to see academic answers to that question as informing OKC's methodology.

comment by Dmytry · 2012-02-13T14:51:06.094Z · LW(p) · GW(p)

There's an interesting thing here: some people have managed to acquire a lot of wealth via trading. That would lead you to believe their claims that the methods they use are effective.

However, if you simulate the stock market with identically skilled agents, you obtain basically the same wealth distribution as observed in the real world, with a few agents ending up extremely 'rich'. One can imagine that such agents, if they were people, would rationalize their undeserved wealth to feel better about themselves.
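
A minimal sketch of the kind of simulation described above: agents with identical (zero) skill make random multiplicative bets, and the wealth distribution still develops a heavy tail with a few extreme "winners". The specific update rule and parameters are illustrative choices, not a claim about any particular market model.

```python
# Sketch: identically skilled agents making random multiplicative bets.
# Despite identical skill, wealth spreads into a heavy-tailed (lognormal)
# distribution with a few extreme "winners". Parameters are illustrative.
import random

def simulate(n_agents=10_000, n_rounds=200, stake=0.1, seed=0):
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_rounds):
        for i in range(n_agents):
            # each round an agent wins or loses a fixed fraction of its wealth
            wealth[i] *= (1 + stake) if rng.random() < 0.5 else (1 - stake)
    return sorted(wealth, reverse=True)

if __name__ == "__main__":
    w = simulate()
    total = sum(w)
    print("richest agent's share of total wealth:", round(w[0] / total, 4))
    print("share held by the top 1%:", round(sum(w[:100]) / total, 4))
```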

Replies from: argonz
comment by argonz · 2012-02-15T21:07:29.430Z · LW(p) · GW(p)

The problem of the silent cemetery (survivorship bias?). If we start with a large enough cohort of "equally skilled" traders who just make their investments at random, we will still end up with a handful of "old foxes" who are still standing purely because of luck. In the meantime the failed ones lie silently in the cemetery, and nobody asks them.

Of course the lucky ones' skills will be rationalized (the narrative fallacy, in Taleb's terms), and not just by themselves but by the majority around them, the media, etc.

comment by Dr_Manhattan · 2012-02-09T15:48:42.330Z · LW(p) · GW(p)

I would add to this that having a method would often (but not always) produce a clear "leader in the field" (first-mover advantage going to the discoverer). So seeing Google's share of the market is a strong indicator (even without first-hand knowledge) that "they have a serious advantage in search", whereas the existence of many competing diet companies does not tell me "they figured out nutrition".

Replies from: CarlShulman
comment by CarlShulman · 2012-02-09T19:33:40.327Z · LW(p) · GW(p)

Good point, but to nitpick Google wasn't a first-mover in search, it defeated AltaVista and other search competitors based on superior performance. They were a first-mover with PageRank, though.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-02-09T20:25:25.281Z · LW(p) · GW(p)

Yes, thanks for clarification, that's what I meant by the first mover: relative to "the thing that gives them a lot of power".

comment by Eugine_Nier · 2012-02-11T04:55:52.231Z · LW(p) · GW(p)

Two issues with this heuristic:

1) It doesn't work well for credence goods.

2) Sometimes it takes a long time for sciences to find an application; two modern examples are astrophysics and particle physics.

Replies from: Robert_Unwin
comment by Robert_Unwin · 2012-02-11T11:10:28.076Z · LW(p) · GW(p)

(2) is a useful point, but doesn't generalize fully. To take your own examples, if some theories in astrophysics and particle physics were extremely well supported by the standards of physics, then the lack of spinoffs would not undermine them very much. If the theories are well supported, then they've made lots of novel predictions that have been verified. That a particular spinoff works is just evidence that a particular novel prediction is verified.

Replies from: BrianNachbar
comment by BrianNachbar · 2012-02-16T23:25:57.427Z · LW(p) · GW(p)

Today, the many spinoffs of physics in general can lend support to branches that haven't produced spinoffs yet. But what about the first developments in physics? How soon after Newton's laws were published did anyone use them for anything practical? Or how long did it take for early results in electromagnetics (say, the Coulomb attraction law) to produce anything beyond parlor tricks? I don't know the answers here, and if there were highly successful mathematical engineers right on Newton's heels, I'd be fascinated to hear about it, but there very well may not have been.

Of course, theory always has to precede spinoffs; it would make no sense to reject a paper from a journal due to lack of spinoffs. To use the heuristic, we need some idea of how long is a reasonable time to produce spinoffs. If there is such a "spinoff time," it probably varies with era, so fifty years might have been a reasonable delay between theory and spinoff in the seventeenth century but not in the twenty-first.

comment by Robert_Unwin · 2012-02-10T12:01:35.339Z · LW(p) · GW(p)

Tetlock's political judgment study was a test for macroeconomics, political science and history. Yet people with PhDs in these areas did no better at predicting macro political and economic events than those without any PhD. Maybe macro helps in producing good econometric models, but it doesn't help in making informal predictions. (Whereas one suspects that a physics or chemistry PhD would help in a test of quick predictions about a novel physical or chemical system, compared to people without a PhD in those fields.)

Replies from: Eugine_Nier, Dmytry
comment by Eugine_Nier · 2012-02-11T04:45:58.479Z · LW(p) · GW(p)

Another analogy is that having a PhD in the relevant sciences doesn't help you play sports.

Replies from: Robert_Unwin
comment by Robert_Unwin · 2012-02-11T11:16:22.122Z · LW(p) · GW(p)

In some sports, applied science seems important to improving expert performance. The PhD knowledge is used to guide the sportsperson (who has exceptional physical abilities). Likewise, our skill at making reliably sturdy buildings has dramatically improved due to knowledge of physics and materials science. But the PhDs don't actually put the buildings up, they just tell the builders what to do.

Replies from: dbaupp
comment by dbaupp · 2012-02-11T13:03:58.586Z · LW(p) · GW(p)

In some sports, applied science seems important to improving expert performance. The PhD knowledge is used to guide the sportsperson (who has exceptional physical abilities).

I can't find the references now, but I have seen several stories about sports (specifically, some football teams in Australia) using psychology and other scientific knowledge (and improving because of it).

comment by Dmytry · 2012-02-13T21:34:07.039Z · LW(p) · GW(p)

Well, some disciplines are a bit too hard for humans to actually reason about (such as predicting complex interactions of many people), so the demand for something that looks like science results in a supply of pseudoscience. That was the case for medicine through history until relatively recently - very strong demand for solutions, a lack of any genuine solutions, resulting in a situation where fraud and self-deception were the best effort on offer.

With economics, perhaps an extremely intelligent individual may be able to make interesting predictions, but an individual only as intelligent as most traders can't predict anything interesting. 'Political science' is an altogether non-scientific discipline that calls itself a science, and thus is even worse than garden-variety pre-science, which is at least scientific enough to see how unscientific it is.

History would only help predictively if the agents (politicians, etc.) were really unaware of history and if little had changed since the closest precedent, which isn't at all the case.

comment by Robert_Unwin · 2012-02-09T13:58:16.651Z · LW(p) · GW(p)

Re: your examples of successful spin-offs for psychology, to what extent did these therapies come out of well-established theory? Maybe someone can weigh in here. It seems possible that these are good therapies but ones that don't have a strong basis in theory (in contrast to technologies from physics or chemistry).

Replies from: katydee
comment by katydee · 2012-02-09T16:59:50.060Z · LW(p) · GW(p)

While cognitive-behavioral therapy could in some ways be characterized as an offshoot of the philosophy known as Stoicism (which oddly seems to have "lucked into" quite a set of effective beliefs, especially when compared to most other philosophies) rather than an offshoot of psychology, the psychological research process and psychological theory as a whole have definitely acted to inform and refine CBT.

Replies from: Robert_Unwin
comment by Robert_Unwin · 2012-02-10T11:52:26.306Z · LW(p) · GW(p)

I was looking for someone to specify a well supported psychological theory that predicts that CBT should be effective. What's the theory, and what's the evidence that people believed it before CBT came along?

I also think Shulman's example of IQ is different from the physics/chemistry case. It was discovered that scores on a short IQ test predicted long-term job performance on a range of tasks. Organizations that used IQ in hiring were then able to obtain better long-term job performance. But IQ was not something that was predicted from a model of how the brain or mind works. Even now, a century after the development of IQ tests, I'm not sure we have a good bottom up account of why a few little reasoning questions can be as informative about human cognitive performance as IQ seems to be. (Not saying that IQ gives you all the information you want, but a few short questions provide a surprising amount of information).

Replies from: katydee
comment by katydee · 2012-02-10T16:14:09.658Z · LW(p) · GW(p)

The issue here is that the theory that predicts that CBT should be effective is called "Stoicism" and has been around for a long while prior to the concept of a psychological research process.

If you are looking for a therapy or action that arose from psychological theory directly, I would recommend looking into the treatment of PTSD (not even recognized as a treatable condition until the 1970s) or something-- CBT has been informed and refined by the research process, but its underpinnings existed prior to the research process itself.

comment by dvasya · 2012-02-09T18:50:34.399Z · LW(p) · GW(p)

Physicist Ilya Prigogine developed his famous theory of dissipative systems, which was expected to explain a lot of things, from the thermodynamics of living systems to the nature of the arrow of time. It is a very well-developed and deep theory. Yet, in my scientific life, I have never seen an actual numerical calculation of a measurable quantity utilizing any of Prigogine's concepts such as "rate of entropy production". Looks definitely like a missing spinoff!
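
For readers who haven't met the term, "rate of entropy production" is the central quantity of Prigogine-style non-equilibrium thermodynamics; the standard textbook definition is below, and the point is that one rarely sees it actually computed for a real system.

```latex
% Entropy change splits into an exchange term and an internal production term,
% with the second law requiring the production term to be non-negative:
\frac{dS}{dt} = \frac{d_e S}{dt} + \frac{d_i S}{dt}, \qquad \frac{d_i S}{dt} \ge 0
% In the linear regime, the local rate of entropy production is a bilinear
% sum of thermodynamic fluxes J_k and their conjugate forces X_k:
\sigma = \sum_k J_k X_k \ge 0
```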

Replies from: asr
comment by asr · 2012-02-12T08:44:42.641Z · LW(p) · GW(p)

People do use thermodynamics. Are you in a position to say whether Prigogine's work is ever relevant to professional chemical engineers?

Replies from: dvasya
comment by dvasya · 2012-02-22T03:05:38.493Z · LW(p) · GW(p)

That's the point: what people use is normal equilibrium or close-to-equilibrium thermodynamics. Even in situations that seem far out of the scope of equilibrium thermodynamics and where one would normally expect Prigogine physics to be the perfect candidate - one example being CVD or VLS growth of various nanotubes/nanowires/etc. - I have never seen the latter applied. Everybody just goes with good old (near-)equilibrium chemical thermodynamics. Now this might be just a manifestation of Maslow's hammer, and Prigogine physics is hard, but for what it's worth, here's one example of a big hole that should be covered by the theory but is, in fact, not.

comment by Daniel_Burfoot · 2012-03-01T05:19:25.483Z · LW(p) · GW(p)

Computer vision is suspiciously lacking in practical spinoffs, even though people have been studying it for 40 years.

Replies from: DanielFilan, khafra
comment by DanielFilan · 2021-06-02T00:55:05.875Z · LW(p) · GW(p)

This is no longer true.

comment by khafra · 2012-03-01T13:23:16.061Z · LW(p) · GW(p)

Really? Flashy stuff like Word Lens is rare, but stuff like more prosaic OCR, increasingly automated consumer photography, and face-recognizing CCTV seems to be economically effective.

I certainly agree that there are unsolved difficulties in computer vision with probably profitable solutions which may be Hard Problems, though.

comment by sark · 2012-02-20T17:51:05.946Z · LW(p) · GW(p)

I really like this. It emphasizes the fundamentally instrumental nature of rationality.

comment by JoachimSchipper · 2012-02-09T15:52:27.110Z · LW(p) · GW(p)

If I was initially lacking in the capital to do trades, I could publish my predictions online using public key cryptography and amass an impressive track record before recruiting investors.

This is a nitpick, but this protocol is at least underspecified. Aside from the need to prove that you made the predictions before the events, you also need to be able to prove that you made no other predictions before the event.
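
A minimal sketch of the commit-then-reveal part of such a protocol, using only Python's standard library (a real deployment would additionally sign each commitment with a public key and post it to one well-known, append-only, numbered channel, which is what addresses the "no other predictions" problem raised above):

```python
# Sketch: commit-then-reveal for public predictions. Publish the digest before
# the event; reveal the prediction and nonce afterwards so anyone can verify.
# Proving you committed to *no other* predictions additionally requires a
# single numbered, append-only public channel (and ideally signatures).
import hashlib
import secrets

def commit(prediction: str):
    nonce = secrets.token_hex(16)  # prevents guessing the prediction from its hash
    digest = hashlib.sha256(f"{nonce}:{prediction}".encode()).hexdigest()
    return digest, nonce           # publish digest now; keep nonce secret until reveal

def verify(digest: str, prediction: str, nonce: str) -> bool:
    return hashlib.sha256(f"{nonce}:{prediction}".encode()).hexdigest() == digest

if __name__ == "__main__":
    d, n = commit("FOMC meeting: no rate change; S&P 500 closes up")  # hypothetical prediction
    # ... the event happens, then the prediction and nonce are revealed ...
    print(verify(d, "FOMC meeting: no rate change; S&P 500 closes up", n))
```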

(I've always wondered why no pump-and-dump scammers use this: after ten "buy/short" mails, 1/1024 of your mailing list will have received 10/10 correct predictions from you (and another 10/1024 will have received 9/10 correct predictions.) Which should be enough to convince quite a few to buy up some penny stock (with the scammer taking the other, profitable side of the trade.) In the spirit of this post, it's probably not profitable enough. Or spammers are stupid.)
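
The 1/1024 and 10/1024 figures are just binomial counts for ten binary calls; a quick check:

```python
# Quick check of the fractions above: if the list is split evenly on each of
# ten binary calls, the fraction of recipients who saw exactly k correct
# calls is C(10, k) / 2**10.
from math import comb

print(comb(10, 10) / 2**10)  # 0.0009765625 = 1/1024  (saw 10/10 correct)
print(comb(10, 9) / 2**10)   # 0.009765625  = 10/1024 (saw exactly 9/10 correct)
```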

Replies from: gwern, jimrandomh, MBlume, thomblake, CarlShulman
comment by gwern · 2012-02-09T21:29:19.459Z · LW(p) · GW(p)

I've always wondered why no pump-and-dump scammers use this: after ten "buy/short" mails

They used to, in the days of snail-mail, and that scam became one of the common examples of selection bias and other issues because it's so nifty; why don't they with email? Probably the difficulty of getting through, as pointed out.

Replies from: khafra
comment by khafra · 2012-02-10T14:20:24.306Z · LW(p) · GW(p)

I wonder if scammers know that you can still send snail mail.

comment by jimrandomh · 2012-02-09T16:36:52.059Z · LW(p) · GW(p)

(I've always wondered why no pump-and-dump scammers use this: after ten "buy/short" mails, 1/1024 of your mailing list will have received 10/10 correct predictions from you (and another 10/1024 will have received 9/10 correct predictions.)

Getting someone to receive 11 mails in a row is hard, because of immune responses to spam. Getting someone to actually read those mails is hard, for a similar reason. Needing a large number of recipients and using stock-related terminology both make it harder. And then, even if you got through defenses and actually convinced people that you could correctly predict stock prices, most of them still wouldn't do anything about it.

comment by MBlume · 2012-02-09T18:39:34.373Z · LW(p) · GW(p)

Thanks for the stamper link, I was hoping something like that existed.

The latter could be helped by some stamping service that would allow you to include your name in the stamping request, with some publicly available provision for finding out how many requests someone made in a time period. If Carl actually attached "Carl Shulman" to the request, and to no others, and we had independent reason to believe that was his True Name, we could assume he wasn't running the 10^1024 scam.

Replies from: CarlShulman
comment by CarlShulman · 2012-02-09T18:45:09.299Z · LW(p) · GW(p)

10^1024 scam

Typo.

comment by thomblake · 2012-02-09T17:27:18.663Z · LW(p) · GW(p)

You're looking for people smart enough to understand the scam and dumb enough to fall for it. That seems much less profitable than existing scams.

comment by CarlShulman · 2012-02-09T18:40:55.558Z · LW(p) · GW(p)

Right, the thought would be to do this in a public fashion, so that recipients can search for other results to see you hadn't posted others.

comment by Douglas_Knight · 2012-02-09T22:51:56.633Z · LW(p) · GW(p)

It seems to me that there are two different heuristics here and it is worth separating them.

But first I should explain why I think my initial reading of this post suggests heuristics that I think are problematic. The mere existence of CBT does not seem like strong evidence for psychology. It is no more evidence for modern mainstream psychology than Freudian psychoanalysis is evidence for Freudian psychology. As I understand it, CBT is gaining market share against other forms of talk therapy, but largely because of academic authority, roughly the same way that the other therapies got established. I am a fan of CBT because its proponents claim to do experiments distinguishing its efficacy from that of other talk therapies and failing to distinguish other talk therapies from talking to untrained people (which is still useful). But why do I need CBT for that? I can check that mainstream psychologists are more enthusiastic about experiments than Freudian ones without resorting to the particular case of CBT. Similarly, competing nutritional theories are successful in the marketplace, sold both by large organizations with advertising budgets (Weight Watchers vs Atkins) and by personal trainers working by word of mouth. But I agree that the example of CBT sheds light on psychology.

One heuristic is that experiments with every-day comprehensible goals are more useful for evaluating a field than experiments of technical claims. Most obviously, it is easier to evaluate the value of the knowledge demonstrated by such experiments than technical knowledge. Knowing that statins lower cholesterol is only useful if I trust the medical consensus on cholesterol, but knowing that they lower all-cause mortality is inherently valuable (though if the population of the experiment was chosen using cholesterol, this is also evidence that the doctors are correct about cholesterol). Similarly, the efficacy of CBT shows that psychologists know useful things, and not just trivia about what people do in weird situations. Moreover, I suspect that such experiments are more reliable than technical experiments. In particular, I suspect that they are less vulnerable to publication bias and data-mining. Certainly, I have to learn about technical measures to determine how vulnerable technical experiments are to experimenter bias.

The other heuristic is that selling a theory to someone else is a good sign. Unfortunately, this seems to me of limited value because people buy a lot of nonsense, not just competing psychological and nutritional theories, but also horoscopes. How does the military differ from academic psychologists? I'm sure it hires a lot of them. They do much larger and longer experiments than academics. They do more comprehensive experiments, with better measures of success, analogous to the advantage of all-cause mortality over number of heart attacks (let alone cholesterol). They could eliminate publication bias because they know all the studies they're doing, but only if the people in charge understand this issue; and there still is some kind of bias in the kind of studies they let me read. These are all useful advantages, but in the end it does not look very different to me from the academic psychology we're trying to evaluate. Similarly, industry consumes a lot of biological and chemical research, which is evidence that the research is, as a whole, real, but it fails to publish attempts to replicate, so the information is indirect. On the other hand, these industries, like the military, use the knowledge internally, which is better evidence than commercial CBT and nutrition, which try to sell the knowledge directly, and mainly demonstrate the value of academic credentials to selling knowledge.

Replies from: CarlShulman
comment by CarlShulman · 2012-02-10T00:56:34.584Z · LW(p) · GW(p)

Right, my examples were selected for a) presence of spinoffs, and b) evidence that the spinoffs were substantive. E.g. I excluded psychic hotlines and Freudian analysis.

comment by waveman · 2012-02-09T22:20:34.236Z · LW(p) · GW(p)

I have been unable to find any practical spinoffs of gender studies.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-02-16T01:47:18.121Z · LW(p) · GW(p)

That seems to me to be something which, if they can produce correct results, would be used to prevent things from going wrong in a public fashion, or by private consultants... somewhat like how you don't expect much in the way of spinoffs from criminal justice studies, except for specialists (i.e. lawyers).

comment by EllaDeker · 2018-01-05T12:50:11.982Z · LW(p) · GW(p)

I think it's interesting topics for research papers, I've read something like that here: https://essays-service.com/blog/540-argumentative-essay-topics. It's great that students conduct similar studies. There are currently no qualitative content that is interesting to learn.

comment by chaosmosis · 2012-04-28T04:15:20.599Z · LW(p) · GW(p)

There's this great XKCD that totally makes this exact same point, except with more lolz.

comment by JoachimSchipper · 2012-02-13T15:21:28.748Z · LW(p) · GW(p)

If stock market economics worked, Nobel prize winners would make money.

(This is slightly unfair, since the Black-Merton-Scholes theory does make other people money, to an extent. Additionally, while Merton and Scholes were on the board, LTCM was not strictly based on their theories. Still, I'm surprised to see that this hasn't been mentioned.)

comment by gwern · 2012-02-10T00:43:54.014Z · LW(p) · GW(p)

Did you steal this from XKCD?

comment by lukeprog · 2012-03-03T20:56:12.236Z · LW(p) · GW(p)

Related: the generative heuristic.

comment by Desrtopa · 2012-02-12T05:43:22.158Z · LW(p) · GW(p)

Bem tells us that random undergraduate students were able to predict with 53% accuracy where an erotic image would appear in the future. If this effect was actually real, I would rerun the experiment before corporate earnings announcements, central bank interest rate changes, etc., and change the images based on the reaction of stocks and bonds to the announcements. In other words, I could easily convert "porn precognition" into "hedge fund trillionaire precognition."

This doesn't just assume that the effect is reproducible, it assumes that the effect generalizes to things other than erotic images. Considering that erotic imagery gets special treatment in our brain's processes that finance does not, this seems like a dubious assumption even given the premise that the effect is real.

Replies from: nshepperd
comment by nshepperd · 2012-02-12T06:30:51.613Z · LW(p) · GW(p)

No it doesn't?

The idea is to (say) show an erotic image on the right if the stock goes up, and one on the left if the stock goes down. It's still porn precognition, except the "randomness" source is the stock market rather than whatever they used in the original experiment.

Replies from: Desrtopa
comment by Desrtopa · 2012-02-12T06:43:52.482Z · LW(p) · GW(p)

Ah, you're right, I misinterpreted that.

It does still assume though that the effect allows one to predict better than chance where the image will appear regardless of the process that determines the location.

Suppose humans had some sort of telepathy that allowed them to read the state of the computer on some subconscious level and thereby predict the location where the image would appear, if the location were determined by the computer that was displaying the images. Predicting corporate earning announcements, interest rate changes, etc. would be an entirely different matter.

comment by Will_Newsome · 2012-02-10T06:00:25.615Z · LW(p) · GW(p)

In other words, I could easily convert "porn precognition" into "hedge fund trillionaire precognition."

Not if psi is capricious, and the evidence suggests it is. (I say this to emphasize that psi is singular in this respect; your heuristic might work for other fields.) (ETA: I guess macroeconomics has similar problems.)

(ETA2: Think about it from the simulation hypothesis perspective: you're trying to manipulate the gods into doing something for you. You're dealing with transhumanly intelligent agents. It's likely not a good idea to try to be clever.

Black magic is not a myth. It is a totally unscientific and emotional form of magic, but it does get results — of an extremely temporary nature. The recoil upon those who practice it is terrific.
It is like looking for an escape of gas with a lighted candle. As far as the search goes, there is little fear of failure!
To practice black magic you have to violate every principle of science, decency, and intelligence. You must be obsessed with an insane idea of the importance of the petty object of your wretched and selfish desires.
I have been accused of being a "black magician." No more foolish statement was ever made about me. I despise the thing to such an extent that I can hardly believe in the existence of people so debased and idiotic as to practice it.

Aleister Crowley)

Replies from: CarlShulman, Grognor
comment by CarlShulman · 2012-02-10T07:19:21.037Z · LW(p) · GW(p)

Not if psi is capricious, and the evidence suggests it is.

Name me some parapsychologists who believe that, preferably ones who score highly on your other quality measures. Bem and Broderick and Radin and Goertzel and such claim that psi stuff is replicable, and don't claim that it would bend over backwards to avoid doing anything useful.

Evidence for the capriciousness of X is also evidence against X existing.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-10T15:57:35.727Z · LW(p) · GW(p)

Name me some parapsychologists who believe that

Too lazy. If you check out the references of these papers you might find various examples. I trust Kennedy and thus trust who he trusts.

By the way, have you tested your psi abilities? If so what were the results?

Replies from: CarlShulman
comment by CarlShulman · 2012-02-10T18:50:40.604Z · LW(p) · GW(p)

I trust Kennedy and thus trust who he trusts.

Why?

By the way, have you tested your psi abilities? If so what were the results?

I have had no spooky experiences, and can't predict RPS or dice better than chance over moderately-sized datasets. Have you had psi-experiences, or positive results in some kind of self-experiment?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-11T06:38:00.529Z · LW(p) · GW(p)

Why?

He was involved in calling out some fraud going on where he worked, he's honest about what motivated him to get involved in psi research (various personal experiences), he understands the statistics well enough to know the weaknesses of meta-analyses and the necessity of having powerful methods, he's pointed out various methodological problems with psi research as it's usually practiced, he doesn't try to hide weird results or pretend that weird results are the ones that the experiment was intended to find, he recognizes that most claimed psi experiences can be explained away by purely mundane factors, with a few exceptions he's very careful to pay attention to all reasonable hypotheses about possible mechanisms for psi given the limited and ambiguous evidence, et cetera.

can't predict RPS or dice better than chance

Nor worse than chance, I presume? I'd figure you a goat after all.

I haven't done any rigorous self-experimentation as I'm superstitious and am mildly freaked out about the idea that reality actively corroborates whatever inductive biases you happen to have. Rationality is hard enough in a non-agentic world. Truth would seem to be about having terms in your utility function pertaining to cooperation with other agents, so if the information I get doesn't help me cooperate with others then I don't see any grounds for me to trust it or for me to go out and find it. Yay anti-epistemology. That's a rationalization; I'm not entirely sure why I'm afraid of rigorous self-tests.

Replies from: CarlShulman
comment by CarlShulman · 2012-02-11T08:30:14.335Z · LW(p) · GW(p)

You can have other Bay Area LessWrongers watch or help set up the experiments. That will at least help in cooperation with this community.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-11T11:02:21.646Z · LW(p) · GW(p)

Good point, but to some extent that might defeat the purpose. Since my model is that psi is evasive, I expect that the more people I clue in to the results or even the existence of the experiments, the less likely it is I'll get significant or sensible results. And with the retrocausal effects demonstrated by PEAR and so on, if I ever intend to publicize the results in the future then that itself is enough to cause psi to get evasive. Kennedy actually recommends keeping self-experimentation to oneself and precommitting to telling no one about the results for these reasons. So basically even if you get incredibly strong results you're left with a bunch of incommunicable evidence. Meh.

I have various responses ready for our other conversation by the way, which I'd like to get back to soon. I was finally able to get a solid twenty-two hours of sleep. My fluid intelligence basically stops existing when sleep-deprived.

Replies from: Vaniver, FeepingCreature, wallowinmaya
comment by Vaniver · 2012-02-11T16:08:55.112Z · LW(p) · GW(p)

And with the retrocausal effects demonstrated by PEAR and so on, if I ever intend to publicize the results in the future then that itself is enough to cause psi to get evasive.

This reminds me of the story of the poker player who concluded it was unlucky to track his winnings and losses because whenever he did it, he lost way more than he expected to.

Replies from: gwern, Will_Newsome
comment by gwern · 2012-02-11T16:11:12.390Z · LW(p) · GW(p)

http://lesswrong.com/lw/20y/rationality_quotes_april_2010/1ugy

Replies from: Vaniver
comment by Vaniver · 2012-02-11T19:55:45.861Z · LW(p) · GW(p)

Thanks for the link! (I think I saw it first in Rational Decisions, since I hadn't upvoted that quote before.)

comment by Will_Newsome · 2012-02-11T16:27:02.935Z · LW(p) · GW(p)

Seems plausible his observations were correct if he had a small sample size, if not his judgment about what to do given his observations. (I say this only because the default reaction of "what an impossibly idiotic person" might deserve a slight buffer when as casual readers we don't know many actual details of the case in question. What evidence is filtered/fictional evidence and what not.)

comment by FeepingCreature · 2012-02-12T15:27:05.798Z · LW(p) · GW(p)

Sorry for butting in, but don't you find it strangely convenient that your psi effect is defined just so as to move it outside the domain of scientific inquiry? Do you anticipate ever finding a way to reliably distinguish it from random chance, or do you anticipate forming another excuse, ahem, reason why you should have expected from the start that the way you just tried would not reliably show it? I'd claim you're chasing invisible dragons, but I find it hard to believe that you haven't thought of the comparison yourself, which leaves me confused. What does an effect look like that is real but cannot be distinguished from random chance by any reliable method? How would you extract utility from such an effect? And is it worth it to break your tools of inquiry, which otherwise work very well, just so you can end up believing in an effect that is true but useless? Food for thought.

Replies from: Will_Newsome, Eugine_Nier
comment by Will_Newsome · 2012-02-13T13:45:35.629Z · LW(p) · GW(p)

I am aware of this. I would have to be incredibly stupid not to be aware of it.

Do you anticipate ever finding a way to reliably distinguish it from random chance

I can reliably distinguish it from random chance, but by hypothesis I just can't tell you about it. I can get evidence, just not communicable evidence.

I think maybe every time I post about evasive psi I should include a standard disclaimer along the lines of "Yes, I realize how incredibly dodgy this sounds and I also find it rather frustrating, but bringing it up and harping on it never leads anywhere."

comment by Eugine_Nier · 2012-02-12T21:56:19.682Z · LW(p) · GW(p)

How about trying to leave a line of retreat and imagine what the world would be like if the theory Will is proposing were correct?

Replies from: Will_Newsome, FeepingCreature
comment by Will_Newsome · 2012-02-13T13:58:38.276Z · LW(p) · GW(p)

(E.g., imagine a transhumanly intelligent agent who only hangs out with you when it knows that no one will believe that it hung out with you. This means that when it hangs out with you it can do arbitrarily magical things, but you'll never be able to tell anyone about it, because the agent went out of its way to keep that from happening, and it's freakin' transhumanly intelligent so you know that any apparent chance of convincing others of its visit is probably not actually a chance. Is this theory improbable? Absolutely. But supposing that the agent actually does hang out with you and does arbitrarily magical stuff, you don't have any way of convincing others that the theory is a posteriori probable, and you'll probably just end up making a fool out of yourself if you try, as the agent predicted.

I think a problem might be when people think of psi they think 'ability to shoot fireballs' rather than 'convincing superintelligences to act on your behalf' (note that that's just one possible mechanism of many and we shouldn't privilege any hypotheses yet). If people thought they were dealing with intelligent agents then they'd use the parts of their brain designed for dealing with agents, and those parts are pretty good at what they do. Note we only want to use those parts because, at least in my opinion, psi as a relatively passive phenomenon seems to be a falsified hypothesis, or at the very least it doesn't explain a ton of things that seem just as real as passive psi phenomena.)

Replies from: thomblake
comment by thomblake · 2012-02-13T16:35:26.063Z · LW(p) · GW(p)

Oh, you mean Bill Murray.

comment by FeepingCreature · 2012-02-12T22:08:37.623Z · LW(p) · GW(p)

That's my point, I don't expect to be able to make consistently differing observations! If his theory is correct, we still wouldn't be able to reliably exploit that feature.

I'm not saying it's wrong, I'm saying even if it's right it's useless to believe.

I mean if there is some form of reliable Psi I'll have a party because that'd be awesome.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-13T13:48:12.890Z · LW(p) · GW(p)

I think you should look more closely at the arguments I made above: my hypothesis makes testable predictions, but if verified the evidence isn't reliably communicable to other people. By my hypothesis psi is perhaps "exploitable" but I cringe at the thought of trying to "exploit" a little-understood agentic process in the case that it actually exists.

Replies from: Desrtopa
comment by Desrtopa · 2012-02-13T13:50:25.028Z · LW(p) · GW(p)

but I cringe at the thought of trying to "exploit" a little-understood agentic process in the case that it actually exists.

Why?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-13T14:05:28.554Z · LW(p) · GW(p)

A safety heuristic. Just say no to demons, for the same reason you should say no to drugs until you figure out what they are, what they do, and the intentions of the agent offering them to you.

comment by David Althaus (wallowinmaya) · 2012-02-13T17:50:30.203Z · LW(p) · GW(p)

Kennedy actually recommends keeping self-experimentation to oneself and precommiting to telling no one about the results for these reasons.

Does Kennedy recommend a specific type of self-experimentation? What's the best way to test one's psi-abilities in your opinion?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-13T18:02:12.328Z · LW(p) · GW(p)

I don't remember if he has any specific recommendations. I don't know what the best way to test one's abilities would be, but the REG (random event generator) paradigm seems highly conducive to rigorous and thorough experimentation. Alas, I forget what the literature says about pseudo-random generators. I can't in good faith recommend psi experiments; on the one hand, if psi is for real then we're probably doing it all the time without realizing it (which is I think the typical Eastern perspective), and on the other hand it seems like a generally bad idea to go out of one's way to play around with a little-understood, perhaps-agentic process. Playing with Thor seems significantly dumber than playing with fire.

comment by Grognor · 2012-02-11T16:31:53.668Z · LW(p) · GW(p)

I must ask, since you are a known troll, do you really believe in psychic fucking powers, or are you just testing LW's ability to distinguish sanity by your comments' karma?

Replies from: Eugine_Nier, Will_Newsome
comment by Eugine_Nier · 2012-02-11T19:41:12.301Z · LW(p) · GW(p)

My understanding is that he thinks the LW consensus underestimates the likelihood of psychic powers.

comment by Will_Newsome · 2012-02-11T17:16:30.192Z · LW(p) · GW(p)

I do believe something weird akin to "psychic fucking powers" is going on.

Normally it is clear when I am or am not trolling. The vast majority of my contributions to Less Wrong have positive karma for a reason.

Replies from: None
comment by [deleted] · 2012-02-11T17:34:22.520Z · LW(p) · GW(p)

Oh, glub.

Normally it is clear when I am or am not trolling.

Obviously not.

The vast majority of my contributions to Less Wrong have positive karma for a reason.

Karma doesn't mean that.

Replies from: Grognor
comment by Grognor · 2012-02-11T17:50:51.452Z · LW(p) · GW(p)

Normally it is clear when I am or am not trolling.

Obviously not.

Beg to differ with both.