Original Research on Less Wrong
post by lukeprog · 2012-10-29T22:50:35.671Z · LW · GW · Legacy · 48 comments

Contents: General philosophy · Decision theory / AI architectures / mathematical logic · Ethics · AI Risk Strategy
Hundreds of Less Wrong posts summarize or repackage work previously published in professional books and journals, but Less Wrong also hosts lots of original research in philosophy, decision theory, mathematical logic, and other fields. This post serves as a curated index of Less Wrong posts containing significant original research.
Obviously, there is much fuzziness about what counts as "significant" or "original." I'll be making lots of subjective judgment calls about which suggestions to add to this post. One clear rule is: I won't be linking anything that merely summarizes previous work (e.g. Stuart's summary of his earlier work on utility indifference).
Update 09/20/2013: Added Notes on logical priors from the MIRI workshop, Cooperating with agents with different ideas of fairness, while resisting exploitation, Do Earths with slower economic growth have a better chance at FAI?
Update 11/03/2013: Added Bayesian probability as an approximate theory of uncertainty?, On the importance of taking limits: Infinite Spheres of Utility, Of all the SIA-doomsdays in the all the worlds...
Update 01/22/2014: Added Change the labels, undo infinitely good, Reduced impact AI: no back channels, International cooperation vs. AI arms race, Naturalistic trust among AIs: The parable of the thesis advisor’s theorem
General philosophy
-
Highly Advanced Epistemology 101 for Beginners. Eliezer's bottom-up guide to truth, reference, meaningfulness, and epistemology. Includes practical applications and puzzling meditations.
-
Seeing Red, A Study of Scarlet, Nature: Red in Tooth and Qualia. Orthonormal dissolves Mary's room and qualia.
-
Counterfactual resiliency test for non-causal models. Stuart Armstrong suggests testing non-causal models for "counterfactual resiliency."
-
Thoughts and problems with Eliezer's measure of optimization power. Stuart Armstrong examines some potential problems with Eliezer's concept of optimization power.
-
Free will. Eliezer's particular compatibilist-style solution to the free will problem from a reductionist viewpoint.
-
The absolute Self-Selection Assumption. A clarification on anthropic reasoning, focused on Wei Dai's UDASSA framework.
-
SIA, conditional probability, and Jaan Tallinn's simulation tree. Stuart Armstrong bridges Nick Bostrom's Self-Indication Assumption (SIA) and Jaan Tallinn's simulation-tree model of superintelligence reproduction.
-
Mathematical Measures of Optimization Power. Alex Altair tackles one approach to mathematically formalizing Yudkowsky's Optimization Power concept.
-
Caught in the glare of two anthropic shadows. Stuart_Armstrong provides a detailed analysis of the "anthropic shadow" concept and its implications.
-
Bayesian probability as an approximate theory of uncertainty?. Vladimir Slepnev argues that Bayesian probability is an imperfect approximation of what we want from a theory of uncertainty.
-
Of all the SIA-doomsdays in the all the worlds.... Stuart_Armstrong on the doomsday argument, the self-sampling assumption and the self-indication assumption.
Decision theory / AI architectures / mathematical logic
-
Towards a New Decision Theory, Explicit Optimization of Global Strategy (Fixing a Bug in UDT1). Wei Dai develops his new decision theory, UDT.
-
Counterfactual Mugging. Vladimir Nesov presents a new Newcomb-like problem.
-
Cake, or death! Summary: "the naive cake-or-death problem emerges for a value learning agent when it expects its utility to change, but uses its current utility to rank its future actions," and "the sophisticated cake-or-death problem emerges for a value learning agent when it expects its utility to change predictably in certain directions dependent on its own behavior."
-
An angle of attack on Open Problem #1, How to cheat Löb's Theorem: my second try. Benja tackles the problem of "how, given Löb's theorem, an AI can replace itself with a better expected utility maximizer that believes in as much mathematics as the original AI."
-
A model of UDT with a concrete prior over logical statements. Benja attacks the problem of logical uncertainty.
-
Decision Theories: A Less Wrong Primer, Decision Theories: A Semi-Formal Analysis, Part I, Decision Theories: A Semi-Formal Analysis, Part II, Decision Theories: A Semi-Formal Analysis, Part III, Halt, Melt, and Catch Fire, Hang On, I Think This Works After All. Orthonormal explains the TDT/UDT approach to decision theory and then develops his own TDT-like algorithm called Masquerade.
-
Decision Theory Paradox: PD with Three Implies Chaos?. Orthonormal describes "an apparent paradox in a three-agent variant of the Prisoner's Dilemma: despite full knowledge of each others' source codes, TDT agents allow themselves to be exploited by CDT, and lose completely to another simple decision theory."
-
Naive TDT, Bayes nets, and counterfactual mugging. Stuart Armstrong suggests a reason for TDT's apparent failure on the counterfactual mugging problem.
-
Bounded versions of Gödel's and Löb's theorems, Formalising cousin_it's bounded versions of Gödel's theorem. Vladimir Slepnev proposes bounded versions of Gödel's and Löb's theorems, and Stuart Armstrong begins to formalize one of them.
-
Satisficers want to become maximisers. Stuart Armstrong explains why satisficers want to become maximisers.
-
Would AIXI protect itself?. Stuart Armstrong argues that "with practice the AIXI [agent] would likely seek to protect its power source and existence, and would seek to protect its memory from 'bad memories' changes. It would want to increase the amount of 'good memory' changes. And it would not protect itself from changes to its algorithm and from the complete erasure of its memory. It may also develop indirect preferences for or against these manipulations if we change our behaviour based on them."
-
The mathematics of reduced impact: help needed. Stuart Armstrong explores some ways we might reduce the impact of maximizing AIs.
-
AI ontology crises: an informal typology. Stuart Armstrong builds a typology of AI ontology crises, following de Blanc (2011).
-
In the Pareto-optimised crowd, be sure to know your place. Stuart Armstrong argues that "In a population playing independent two-player games, Pareto-optimal outcomes are only possible if there is an agreed universal scale of value relating each players' utility, and the players then acts to maximise the scaled sum of all utilities."
-
If you don't know the name of the game, just tell me what I mean to you. Stuart Armstrong summarizes: "Both the Nash Bargaining solution (NBS), and the Kalai-Smorodinsky Bargaining Solution (KSBS), though acceptable for one-off games that are fully known in advance, are strictly inferior for independent repeated games, or when there exists uncertainty as to which game will be played."
-
The Blackmail Equation. Stuart Armstrong summarizes Eliezer's result on blackmail in decision theory.
-
Expected utility without the independence axiom. Stuart Armstrong summarizes: "Deprived of independence, expected utility sneaks in via aggregation."
-
An example of self-fulfilling spurious proofs in UDT, A model of UDT without proof limits. Vladimir Slepnev examines several problems related to decision agents with spurious proof-searchers.
-
The limited predictor problem. Vladimir Slepnev describes the limited predictor problem, "a version of Newcomb's Problem where the predictor has limited computing resources. To predict the agent's action, the predictor simulates the agent for N steps. If the agent doesn't finish in N steps, the predictor assumes that the agent will two-box."
-
A way of specifying utility functions for UDT. Vladimir Slepnev advances a method for specifying utility functions for UDT agents.
-
A model of UDT with a halting oracle, Formulas of arithmetic that behave like decision agents. Vladimir Slepnev specifies an optimality notion which "matches our intuitions even though the universe is still perfectly deterministic and the agent is still embedded in it, because the oracle ensures that determinism is just out of the formal system's reach." Then, Nisan revisits some of this result's core ideas by representing the decision agents as formulas of Peano arithmetic.
-
AIXI and Existential Despair. Paul Christiano discusses how an approximate implementation of AIXI could lead to an erratically behaving system.
-
The Absent-Minded Driver. In this post, Wei Dai examines the absent-minded driver problem. He tries to show how professional philosophers failed to solve the problem of time inconsistency, while rejecting Eliezer's "people are crazy" explanation.
-
Clarification of AI Reflection Problem. A clear description for the commonly discussed problem in Less Wrong of reflection in AI systems, along with some possible solutions, by Paul Christiano.
-
Motivating Optimization Processes. Paul Christiano addresses the question of how and when we can expect an AGI to cooperate with humanity, and how it might be easier to implement than a completely Friendly AGI.
-
Universal agents and utility functions. Anja Heinisch replaces AIXI's reward function with a utility function.
-
Ingredients of Timeless Decision Theory, Timeless Decision Theory: Problems I Can't Solve, Timeless Decision Theory and Meta-Circular Decision Theory. Eliezer's posts describe the main details of his Timeless Decision Theory and some problems for which he doesn't possess decision theories, and reply to Gary Drescher's comment describing Meta-Circular Decision Theory. These insights later culminated in Yudkowsky (2010).
-
Confusion about Newcomb is confusion about counterfactuals, Why we need to reduce "could", "would", "should", Decision theory: why Pearl helps reduce "could" and "would", but still leaves us with at least three alternatives. Anna Salamon's posts use causal Bayes nets to explore the difficulty of interpreting counterfactual reasoning and the related concepts of "should," "could," and "would."
-
Bayesian Utility: Representing Preference by Probability Measures. Vladimir Nesov presents a transformation of the standard expected utility formula.
-
A definition of wireheading. Anja attempts to reach a definition of wireheading that encompasses the intuitions about the concept that have emerged from LW discussions.
-
Why you must maximize expected utility. Benja presents a slight variant on the Von Neumann-Morgenstern approach to the axiomatic justification of the principle of maximizing expected utility.
-
A utility-maximizing variant of AIXI. Alex Mennen builds on Anja's specification of a utility-maximizing variant of AIXI.
-
A fungibility theorem. Nisan's alternative to the von Neumann-Morgenstern theorem, proposing the maximization of the expectation of a linear aggregation of one's values.
-
Logical uncertainty, kind of. A proposal, at least. Manfred's proposed solution for applying the basic laws of belief manipulation to cases where an agent is computationally limited.
-
Save the princess: A tale of AIXI and utility functions. Anja discusses utility functions, delusion boxes, and Cartesian dualism in attempting to improve upon the original AIXI formalism.
-
Naturalism versus unbounded (or unmaximisable) utility options. Stuart Armstrong poses a series of questions regarding unbounded utility functions.
-
Beyond Bayesians and Frequentists. Jacob Steinhardt compares two approaches to statistics and discusses when to use them.
-
VNM agents and lotteries involving an infinite number of possible outcomes. AlexMennen summarizes: The VNM utility theorem only applies to lotteries that involve a finite number of possible outcomes. If an agent maximizes the expected value of a utility function when considering lotteries that involve a potentially infinite number of outcomes as well, then its utility function must be bounded.
-
A Problem with playing chicken with the universe. Karl explains how a model of UDT with a halting oracle might have some problematic elements.
-
Intelligence Metrics and Decision Theories, Metatickle Intelligence Metrics and Friendly Utility Functions. Squark reviews a few previously proposed mathematical metrics of general intelligence and proposes his own approach.
-
Probabilistic Löb theorem, Logic in the Language of Probability. Stuart Armstrong looks at whether reflective theories of logical uncertainty still suffer from Löb's theorem.
-
Notes on logical priors from the MIRI workshop. Vladimir Slepnev summarizes: "In Counterfactual Mugging with a logical coin, a 'stupid' agent that can't compute the outcome of the coinflip should agree to pay, and a 'smart' agent that considers the coinflip as obvious as 1=1 should refuse to pay. But if a stupid agent is asked to write a smart agent, it will want to write an agent that will agree to pay. Therefore the smart agent who refuses to pay is reflectively inconsistent in some sense. What's the right thing to do in this case?"
-
Cooperating with agents with different ideas of fairness, while resisting exploitation. Eliezer Yudkowsky investigates some ideas from the MIRI workshop that he hasn’t seen in informal theories of negotiation.
-
Naturalistic trust among AIs: The parable of the thesis advisor’s theorem. Benja discusses Nik Weaver's suggestion for 'naturalistic trust'.
Ethics
-
The Metaethics Sequence. Eliezer explains his theory of metaethics. Many readers have difficulty grokking his central points, and may find clarifications in the discussion here.
-
No-Nonsense Metaethics. Lukeprog begins to outline his theory of metaethics in this unfinished sequence.
-
The Fun Theory Sequence. Eliezer develops a new subfield of ethics, "fun theory."
-
Consequentialism Need Not Be Near-Sighted. Orthonormal's summary: "If you object to consequentialist ethical theories because you think they endorse horrible or catastrophic decisions, then you may instead be objecting to short-sighted utility functions or poor decision theories."
-
A (small) critique of total utilitarianism. Stuart Armstrong analyzes some weaknesses of total utilitarianism.
-
In the Pareto world, liars prosper. Stuart Armstrong presents a new picture proof of a previously known result, that "if there is any decision process that will find a Pareto outcome for two people, it must be that liars will prosper: there are some circumstances where you would come out ahead if you were to lie about your utility function."
-
Politics as Charity, Probability and Politics. Carl Shulman analyzes the prospects for doing effective charity work by influencing elections.
-
Value Uncertainty and the Singleton Scenario. Wei Dai examines the problem of value uncertainty, a special case of moral uncertainty in which consequentialism is assumed.
-
Pascal's Mugging: Tiny Probabilities of Vast Utilities. Eliezer describes the problem of Pascal's Mugging, later published in Bostrom (2009).
-
Ontological Crisis in Humans. Wei Dai presents the ontological crisis concept applied to human existence and goes over some examples.
-
Ideal Advisor Theories and Personal CEV. Luke Muehlhauser and crazy88 place CEV in the context of mainstream moral philosophy, and use a variant of CEV to address a standard line of objections to ideal advisor theories in ethics.
-
Harsanyi's Social Aggregation Theorem and what it means for CEV. Alex Mennen describes the relevance of Harsanyi's Social Aggregation Theorem to possible formalizations of CEV.
-
Three Kinds of Moral Uncertainty. Kaj Sotala attempts to explain what it means to be uncertain about moral theories.
-
A brief history of ethically concerned scientists. Kaj_Sotala gives historical examples of ethically concerned scientists.
-
Pascal's Mugging for bounded utility functions. Benja's post describing the Pascal's Mugging problem under truly bounded utility functions.
-
Pascal's Muggle: Infinitesimal Priors and Strong Evidence. Eliezer Yudkowsky discusses the role of infinitesimal priors and their decision-theoretic consequences in a "Pascal's Mugging"-type situation.
-
An Attempt at Preference Uncertainty Using VNM. nyan_sandwich tackles the problem of making decisions when you are uncertain about what your object-level preferences should be.
-
Gains from trade: Slug versus Galaxy - how much would I give up to control you? and Even with default points, systems remain exploitable. Stuart_Armstrong provides a suggestion as to how to split the gains from trade in some situations and how such solutions can be exploitable.
-
On the importance of taking limits: Infinite Spheres of Utility. aspera shows that "if we want to make a decision based on additive utility, the infinite problem is ill posed; it has no unique solution unless we take on additional assumptions."
-
Change the labels, undo infinitely good. Stuart_Armstrong talks about a small selection of paradoxes connected with infinite ethics.
AI Risk Strategy
-
AI Risk and Opportunity: A Strategic Analysis. The only original work here so far is the two-part history of AI risk thought, including descriptions of works previously unknown to the LW/SI/FHI community (e.g. Good 1959, 1970, 1982; Cade 1966).
-
AI timeline predictions: Are we getting better?, AI timeline prediction data. A preview of Stuart Armstrong's and Kaj Sotala's work on AI predictions, later published in Armstrong & Sotala (2012).
-
Self-Assessment in Expert AI Prediction. Stuart_Armstrong suggests that the predictive accuracy of self-selected experts might differ from that of less selected groups.
-
Kurzweil's predictions: good accuracy, poor self-calibration. A new analysis of Kurzweil's predictions, by Stuart Armstrong.
-
Tools versus agents, Reply to Holden on Tool AI. Stuart Armstrong and Eliezer examine Holden's proposal for Tool AI.
-
What is the best compact formalization of the argument for AI risk from fast takeoff? A suggestion of a few steps for compacting and clarifying the argument for the Singularity Institute's "Big Scary Idea", by LW user utility monster.
-
The Hanson-Yudkowsky AI-Foom Debate. The 2008 debate between Robin Hanson and Eliezer Yudkowsky, largely used to exemplify the difficulty of resolving disagreements even between expert rationalists. It focuses on the likelihood of a hard AI takeoff, the need for a theory of Friendliness, the future of AI, brain emulations, and recursive improvement.
-
What can you do with an Unfriendly AI? Paul Christiano discusses how we could eventually turn an Unfriendly AI into a useful system by filtering the way it interacts and constraining how its answers are given to us.
-
Cryptographic Boxes for Unfriendly AI. Following the tone of the previous post and AI boxing in general, this post by Paul Christiano explores the possibility of using cryptography as a way to guarantee friendly outputs.
-
How can I reduce existential risk from AI?. A post by Luke Muehlhauser describing the Meta, Strategic and Direct work one could do in order to reduce the risk for humanity stemming from AI.
-
Intelligence explosion vs Co-operative explosion. Kaj Sotala's description of how an intelligence explosion could emerge from the cooperation of individual artificially intelligent systems forming a superorganism.
-
Assessing Kurzweil: The Results; Assessing Kurzweil: The Gory Details. Stuart_Armstrong's attempt to evaluate the accuracy of Ray Kurzweil's model of technological intelligence development.
-
Domesticating reduced impact AIs. Stuart Armstrong attempts to give a solid foundation from which one can build a 'reduced impact AI'.
-
Why AI may not foom. John_Maxwell_IV explains how the intelligence of a self-improving AI may not grow as fast as we might think.
-
Singleton: the risks and benefits of one world governments. Stuart_Armstrong attempts to lay out a reasonable plan for tackling the singleton problem.
-
Do Earths with slower economic growth have a better chance at FAI?. Eliezer Yudkowsky argues that GDP growth acceleration may actually decrease our chances of getting FAI.
-
Reduced impact AI: no back channels. Stuart_Armstrong presents a further development of the reduced impact AI approach.
-
International cooperation vs. AI arms race. Brian_Tomasik talks about the role of government in a possible AI arms race.
48 comments, sorted by top scores.
comment by incariol · 2012-10-30T22:04:03.878Z · LW(p) · GW(p)
Um... perhaps Wei Dai's analysis of the absent-minded driver problem (with it's subsequent resolution in the comments) and paulfchristiano's AIXI and existential despair would qualify?
Replies from: lukeprog
comment by Yoav Ravid · 2021-11-14T13:40:30.304Z · LW(p) · GW(p)
Wow, this is an awesome post! It would be difficult (though perhaps still worth it) to update this post with all the original research done since it was written and keep the format of the short summaries, but perhaps this should be a tag?
comment by roland · 2012-11-03T13:54:42.311Z · LW(p) · GW(p)
Privileging the Hypothesis. I don't think this bias has been described anywhere else.
Replies from: Unnamed
↑ comment by Unnamed · 2012-11-03T22:24:04.886Z · LW(p) · GW(p)
I don't think it has been named elsewhere, but there is related research. Kahneman described something very similar to it in Thinking, Fast and Slow:
“The probability of a rare event will (often, not always) be overestimated, because of the confirmatory bias of memory. Thinking about that event, you try to make it true in your mind. A rare event will be overweighted if it specifically attracts attention. …. And when there is no overweighting, there will be neglect. When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.”
comment by Epiphany · 2012-10-30T00:29:20.730Z · LW(p) · GW(p)
According to an article on PLOS Medicine, Most Published Research Findings Are False. Feynman provides an unsettling perspective on what's happened with research as well in Cargo Cult Science.
Have we done any better?
Replies from: Eliezer_Yudkowsky, lukeprog, gwern
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-30T01:07:14.878Z · LW(p) · GW(p)
That's for experimental statistical reports. Trying to do math runs into a different set of dangers.
You can easily beat "Most published research findings are false" by reporting Bayesian likelihood ratios instead of "statistical significance", or even just keeping statistical significance and demanding p < .001 instead of the ludicrous p < .05. It should only take <2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05 and the proportion of false findings would go way down immediately. That's what current grantmakers and journals would ask for if they cared.
Replies from: gwern, Daniel_Burfoot, Deleet, Emily
↑ comment by gwern · 2012-10-30T02:34:04.119Z · LW(p) · GW(p)
It should only take <2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05 and the proportion of false findings would go way down immediately.
If anyone is curious about the details here - you can derive from the basic t-test statistics (easier to understand is the z-test) an equation for the sample size needed at significance 0.05 and 90% power, which goes

$$1.64 + z_{\text{power}} < \frac{\sqrt{n}}{\hat{\sigma}_D}$$

The "1.64" here is a magic value derived from a big table for the normal distribution: it indicates a result which for the normal distribution of the null is 1.64 standard deviations out towards the tails, which turns out to happen 0.05 or 5% of the time when you generate random draws from the null hypothesis's normal distribution. But we need to know how many standard deviations out we have to go in order to have a result which appears only 0.001 or 0.1% of the time when you randomly draw. This magic value turns out to be 3.09. So we plug that in and now the equation looks like:

$$3.09 + z_{\text{power}} < \frac{\sqrt{n'}}{\hat{\sigma}_D}$$

$z_{\text{power}}$ can be given a value too from the table, but it's smaller, just 1.28 (10% of the population will be that far out). So we can substitute in for both:

$$1.64 + 1.28 < \frac{\sqrt{n}}{\hat{\sigma}_D} \quad\text{and}\quad 3.09 + 1.28 < \frac{\sqrt{n'}}{\hat{\sigma}_D}$$

Simplify:

$$2.92 < \frac{\sqrt{n}}{\hat{\sigma}_D} \quad\text{and}\quad 4.37 < \frac{\sqrt{n'}}{\hat{\sigma}_D}$$

Multiply the standard deviation by both sides to start to get at n/n':

$$\sqrt{n} > 2.92 \times \hat{\sigma}_D \quad\text{and}\quad \sqrt{n'} > 4.37 \times \hat{\sigma}_D$$

Square to expose the naked n/n':

$$n > (2.92 \times \hat{\sigma}_D)^2 \quad\text{and}\quad n' > (4.37 \times \hat{\sigma}_D)^2$$

Distribute:

$$n > 2.92^2 \times \hat{\sigma}_D^2 \quad\text{and}\quad n' > 4.37^2 \times \hat{\sigma}_D^2$$

Simplify:

$$n > 8.53\,\hat{\sigma}_D^2 \quad\text{and}\quad n' > 19.10\,\hat{\sigma}_D^2$$

Obviously we can divide both n and n' by $\hat{\sigma}_D^2$ to get rid of that, leaving us with:

$$n > 8.53 \quad\text{and}\quad n' > 19.10$$

So, how much bigger is n' than n? This requires an advanced operation known as division:

$$\frac{n'}{n} = \frac{19.10}{8.53} \approx 2.23$$

And 2.23 is indeed <2.5.
We can double check by firing up a power calculator and messing around with various sample sizes and effect sizes and powers to see how n changes with more stringent significances:
Replies from: Eliezer_Yudkowsky

```r
$ R
library(pwr)
pwr.t.test(d=0.1, sig.level=0.05, power=0.90)

     Two-sample t test power calculation

              n = 2102.445
              d = 0.1
      sig.level = 0.05
          power = 0.9
    alternative = two.sided

NOTE: n is number in each group

pwr.t.test(d=0.1, sig.level=0.001, power=0.90)

     Two-sample t test power calculation

              n = 4183.487
              d = 0.1
      sig.level = 0.001
          power = 0.9
    alternative = two.sided

NOTE: n is number in each group

4183 / 2102
[1] 1.99001
```
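(The gap between the 2.23 from the hand derivation and the 1.99 from pwr.t.test is just one-sided versus two-sided quantiles; a quick base-R check of my own, assuming the same normal approximation as above, reproduces both numbers:)

```r
# Ratio of required sample sizes n'/n under the normal approximation:
# n ~ ((z_alpha + z_power) * sigma_D)^2, so sigma_D cancels in the ratio.
z_power <- qnorm(0.90)   # 1.28: quantile for 90% power

# One-sided test, as in the derivation above: alpha = 0.05 vs. 0.001
one_sided <- ((qnorm(0.999) + z_power) / (qnorm(0.95) + z_power))^2
# Two-sided test, which is what pwr.t.test assumes by default
two_sided <- ((qnorm(0.9995) + z_power) / (qnorm(0.975) + z_power))^2

round(c(one_sided = one_sided, two_sided = two_sided), 2)
# one_sided two_sided
#      2.23      1.99
```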
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-30T04:45:05.582Z · LW(p) · GW(p)
I was just mentally approximating log(.001)/log(.05) = 2.3.
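(For anyone who wants to check that mental approximation numerically, a one-liner of my own; the base of the logarithm cancels in the ratio:)

```r
# Exponent n such that (.05)^n = .001
log(0.001) / log(0.05)   # 2.306..., which rounds to the 2.3 quoted above
```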
Replies from: Mark_Eichenlaub
↑ comment by Mark_Eichenlaub · 2012-10-30T15:34:28.912Z · LW(p) · GW(p)
Thanks. Sometimes I learn a lot from people saying fairly-obvious (in retrospect) things.
In case anyone is curious about this, I guess that Eliezer knew it instantly because each additional data point brings with it a constant amount of information. The log of a probability is the information it contains, so an event with probability .001 has 2.3 times the information of an event of probability .05.
If that's not intuitive, consider that p=.05 means that you have a .05 chance of seeing the effect by statistical fluke (assuming there's no real effect present). If your sample size is n times as large, the probability becomes (.05)^n. (Edit: see comments below) To solve
(.05)^n = .001
take logs of both sides and divide to get
n = log(.001)/log(.05)
Replies from: MTGandP, jsteinhardt
↑ comment by MTGandP · 2012-11-02T20:52:51.425Z · LW(p) · GW(p)
The log of a probability is the information it contains
Why?
Replies from: gwern
↑ comment by gwern · 2012-11-02T21:08:38.014Z · LW(p) · GW(p)
You mean why isn't the information of a particular number just its length, or its size, and is its log of all things?
Because you can think of each part of the number as telling you how to navigate a binary tree to the target node's meaning, and the opposite of a binary tree is the logarithm; at least, that's how I think of it when I use it in my essays like Death Note anonymity.
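(A small illustration of my own, not from gwern's essays: measuring each p-value as bits of surprisal, -log2(p), makes the 2.3 ratio above fall straight out of the logarithm:)

```r
# Self-information (surprisal) of an event with probability p, in bits
bits <- function(p) -log2(p)

bits(0.05)                # ~4.32 bits
bits(0.001)               # ~9.97 bits
bits(0.001) / bits(0.05)  # ~2.31, the same ratio as log(.001)/log(.05) above
```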
↑ comment by jsteinhardt · 2012-10-31T09:12:24.969Z · LW(p) · GW(p)
If your sample size is n times as large, the probability becomes (.05)^n
I'm not sure that follows.
Replies from: Eliezer_Yudkowsky, Mark_Eichenlaub
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-31T09:37:32.805Z · LW(p) · GW(p)
If a given piece of evidence E1 provides Bayesian likelihood for theory T1 over T2, and E2 was generated by an isomorphic process, then we get the likelihood ratio squared, providing that T1 and T2 are single possible worlds and have no parameters being updated by E1 or E2 so that the probability of the evidence is conditionally independent.
Thus sayeth Bayes, so far as I can tell.
As for the frequentists...
Well, logically, we're allegedly rejecting a null hypothesis. If the "null hypothesis" contains no parameters to be updated and the probability that E1 was generated by the null hypothesis is .05, and E2 was generated by a causally conditionally independent process, the probability that E1+E2 was generated by the null hypothesis ought to be 0.0025.
But of course gwern's calculation came out differently in the decimals. This could be because some approximation truncated a decimal or two. But it could also be because frequentism actually calculates the probability that E1 is in some amazing class [E] of other data we could've observed but didn't, to be p < 0.05. Who knows what strange class of other data we could've seen but didn't, a given frequentist method will put E1 + E2 into? I mean, you can make up whatever the hell [E] you want, so who says you've got to make up one that makes [E+E] have the probability of [E] squared? So if E1 and E2 are exactly equally likely given the null hypothesis, a frequentist method could say that their combined "significance" is the square of E1, less than the square, more than the square, who knows, what the hell, if we obeyed probability theory we'd be Bayesians so let's just make stuff up. Sorry if I sound a bit polemical here.
See also: http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/
Replies from: Unnamed, Kindly, jsteinhardt
↑ comment by Unnamed · 2012-11-01T04:00:29.902Z · LW(p) · GW(p)
You can't just multiply p-values together to get the combined p-value for multiple experiments.
A p-value is a statistic that has a uniform(0,1) distribution if the null hypothesis is true. If you take two independent uniform(0,1) variables and multiply them together, the product is not a uniform(0,1) variable - it has more of its distribution near 0 and less near 1. So multiplying two p-values together does not give you a p-value; it gives you a number that is smaller than the p-value that you would get if you went through the appropriate frequentist procedure.
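(A quick simulation sketch of my own that checks this numerically: under the null, the product of two independent p-values falls below 0.0025 far more than 0.25% of the time, and the calibrated combined p-value for a product x is x(1 - ln x):)

```r
set.seed(1)
prod_p <- runif(1e6) * runif(1e6)      # product of two independent null p-values

mean(prod_p <= 0.0025)                 # ~0.0175, not 0.0025: the product is not a p-value

# P(U1*U2 <= x) = x*(1 - log(x)), so the properly calibrated combined p-value is:
combined <- function(x) x * (1 - log(x))
combined(0.0025)                       # ~0.0175, matching the simulation
mean(combined(prod_p) <= 0.05)         # ~0.05: calibrated, as a p-value should be
```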
Replies from: DaFranker
↑ comment by DaFranker · 2012-11-01T14:42:20.950Z · LW(p) · GW(p)
In the course of figuring out what the hell the parent comment was talking about and how one was supposed to do the calculation, I found this. p-values are much clearer for me now, thanks for bringing this up.
Replies from: Eliezer_Yudkowsky, Kindly
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T15:42:17.883Z · LW(p) · GW(p)
Don't get me wrong, this is a good paper, well-written to be clearly understandable and not to be deliberately obtuse like far too many math papers these days, and the author's heart is clearly in the right place, but I still screamed while reading it.
How can anyone read this, and not bang their head against the wall at how horribly arbitrary this all is... no wonder more than half of published findings are false.
Replies from: DaFranker
↑ comment by DaFranker · 2012-11-01T15:59:19.510Z · LW(p) · GW(p)
Unfortunately, walls solid enough to sustain the force of the bang I wanted to produce were not to be found within a radius of five meters when I was reading it. I did want to bang my head on my desk, though.
The arbitrari-ness of all the decisions (who decides the cutoff point to reject the null and on what basis? "Meh, whatever" seems to be the ruling methodology) did strike me as unscientific. Or, well, as un-((Some Term For What I Used To Think "Science" Meant Until I Saw That Most Of It Was About Testing Arbitrary Hypotheses Rather Than Deliberate Cornering Of Facts)) as something actually following the scientific method can get.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T16:09:30.545Z · LW(p) · GW(p)
I don't mind the arbitrary cutoff point. That's like a Bayesian reporting likelihood ratios and leaving the prior up to the reader.
It's more things like, "And now we'll multiply all the significances together, and calculate the probability that their multiplicand would be equal to or lower than the result, given the null hypothesis" that make me want to scream. Why not take the arithmetic mean of the significances and calculate the probability of that instead, so long as we're pretending the actual result is part of an arbitrary class of results? It just seems horribly obvious that you just get further and further away from what the likelihood ratios are actually telling you, as you pile arbitrary test on arbitrary test...
↑ comment by Kindly · 2012-11-01T15:25:45.178Z · LW(p) · GW(p)
That is a really interesting paper.
Also, I found that the function R_k in Section 2 has the slightly-more-closed formula $R_k(\alpha) = \alpha \cdot P_k(-\ln \alpha)$, where $P_k(x)$ is the first k terms of the Taylor series for $e^x$ (and has the formula with factorials and everything). Just in case anyone wants to try this at home.
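(A quick numerical check of that formula, for anyone who does want to try it at home; the code is mine, not Kindly's:)

```r
# R_k(a) = a * P_k(-log(a)), where P_k(x) is the first k Taylor terms of e^x
R_k <- function(a, k) a * sum((-log(a))^(0:(k - 1)) / factorial(0:(k - 1)))

# Compare with the simulated CDF of a product of k = 3 independent uniform p-values
set.seed(1)
a <- 0.01
prod_p <- runif(1e6) * runif(1e6) * runif(1e6)
c(formula = R_k(a, 3), simulated = mean(prod_p <= a))   # both ~0.162
```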
↑ comment by Kindly · 2012-10-31T13:58:22.776Z · LW(p) · GW(p)
A more generous way to think about frequentism (which can be justified by some conditional probability sleight-of-hand) is that the significance of some evidence E is actually the probability that the null hypothesis is true, given E and also some prior distribution that is swept under the rug and (mostly) not under the experimenter's control. Which is bad, yes, but in many cases the prior distribution is at least close to something reasonable. And there are some cases in which we can somewhat change the prior distribution to reflect our real priors: for example, when choosing to conduct a 1-tailed test rather than a 2-tailed one.
Under this interpretation, it is silly to expect significances to multiply. You'd really be saying something like Pr[H|E1+E2] = Pr[H|E1] Pr[H|E2]. And that's simply not true: you are double-counting the prior probability Pr[H] when you do this. The frequentist approach is a correct way to combine these probabilities, although this isn't obvious because nobody actually knows what the frequentist Pr[H] is.
But if you read about two experiments with a p-value of 0.05, and think of them as one experiment with a p-value of 0.0025, you are very very very wrong; not just frequentist-wrong but Bayesian-wrong as well.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-31T14:39:46.035Z · LW(p) · GW(p)
the significance of some evidence E is actually the probability that the null hypothesis is true, given E
No frequentist says this. They don't believe in P(H|E). That's the explicit basis of the whole philosophy. People who talk about the probability of a hypothesis given the evidence are Bayesians, full stop.
Statistical significance is, albeit in a strange and distorted way, supposed to be about P(E|null hypothesis), and so, yes, two experiments with a p-value of 0.05 should add up to somewhere in the vicinity of p < 0.0025, because it's about likelihoods, which do multiply, and not posteriors.
Replies from: jsteinhardt, Kindly
↑ comment by jsteinhardt · 2012-10-31T16:31:10.882Z · LW(p) · GW(p)
While some frequentist methods do use likelihoods, the mapping from likelihood to p-value is non-linear, so multiplying them would still be a mistake, at least as far as I can tell.
↑ comment by Kindly · 2012-10-31T14:50:39.840Z · LW(p) · GW(p)
I'm not saying that frequentists believe this. I'm saying that the frequentist math (which computes Pr[E|H0]) is equivalent to computing Pr[H0|E] with respect to a prior distribution under which Pr[H0]=Pr[E]. Furthermore, this is a reasonable thing to look at, because from that point of view the way statistical significances combine actually makes sense.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T04:41:59.984Z · LW(p) · GW(p)
Pr[H0]=Pr[E]
Whaa?
Replies from: Kindly
↑ comment by Kindly · 2012-11-01T04:55:36.775Z · LW(p) · GW(p)
Well, we have, in general, Pr[H0|E] = Pr[E|H0] * Pr[H0]/Pr[E]. Frequentists compute Pr[E|H0] instead of Pr[H0|E], but this turns out not to matter if Pr[H0]/Pr[E] cancels, which happens when the above equality holds.
From a certain point of view, this is just mathematical sleight of hand, of course. Also, the "E" is actually some class of outcomes that are grouped together (e.g. all outcomes in which 8 or more coins, out of 10, came up heads). But if we combine sequences of experimental results in the correct way, then this means that the frequentist and Bayesian result differ only by a constant factor (precisely the factor which we assumed, above, to be 1).
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T07:09:31.423Z · LW(p) · GW(p)
Why the heck would the probability of seeing the evidence, conditional on the mix of all hypotheses being considered, exactly equal the prior probability of the null hypothesis?
Replies from: Kindly
↑ comment by Kindly · 2012-11-01T13:42:59.511Z · LW(p) · GW(p)
It wouldn't. Probably a better way to explain it would have been to factor their ratio out as a constant.
Anyway, I've totally messed up explaining this, so I will fold for now and direct you to a completely different argument made elsewhere in the comments which is more worthy of being considered.
↑ comment by jsteinhardt · 2012-10-31T16:29:00.073Z · LW(p) · GW(p)
Suppose that our data are coin flips, and consider three hypotheses: H0 = always heads, H1 = fair coin, H2 = heads with probability 25%. Now suppose that the two hypotheses we actually want to test between are H0 and H' = 0.5(H1+H2). After seeing a single heads, the likelihood of H0 is 1 and the likelihood of H' is 0.5(0.5+0.25). After seeing two heads, the likelihood of H0 is 1 and the likelihood of H' is 0.5(0.5^2+0.25^2). In general, the likelihood of H' after n heads is 0.5(0.5^n+0.25^n), i.e. a mixture of multiple geometric functions. In general if H' is a mixture of many hypotheses, the likelihood will be a mixture of many geometric functions, and therefore could be more or less arbitrary.
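(A small sketch of my own tabulating the likelihoods in this example: the likelihood of H' after n heads is 0.5(0.5^n + 0.25^n), and its ratio between successive n is not constant, so the evidence from two flips is not simply the square of the evidence from one:)

```r
n <- 1:6
lik_H0     <- 1^n                      # H0 = always heads
lik_Hprime <- 0.5 * (0.5^n + 0.25^n)   # H' = 0.5*(fair coin) + 0.5*(25% heads)

# Successive likelihood ratios for H': not constant, i.e. not geometric in n
round(lik_Hprime[-1] / lik_Hprime[-length(lik_Hprime)], 3)
# 0.417 0.450 0.472 0.485 0.492 -- drifts toward 0.5 as the fair-coin term dominates
```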
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T04:43:15.531Z · LW(p) · GW(p)
That's why I specified single possible worlds / hypotheses with no internal parameters that are being learned.
Replies from: jsteinhardt
↑ comment by jsteinhardt · 2012-11-01T05:30:46.950Z · LW(p) · GW(p)
Oops, missed that; but that specification doesn't hold in the situation we care about, since rejecting the null hypotheses typically requires us to consider the result of marginalizing over a space of alternative hypotheses (well, assuming we're being Bayesians, but I know you prefer that anyways =P).
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-01T07:05:19.678Z · LW(p) · GW(p)
Well, right, assuming we're Bayesians, but when we're just "rejecting the null hypothesis" we should mostly be concerned about likelihood from the null hypothesis which has no moving parts, which is why I used the log approximation I did. But at this point we're mixing frequentism and Bayes to the point where I shan't defend the point further - it's certainly true that once a Bayesian considers more than exactly two atomic hypotheses, the update on two independent pieces of evidence doesn't go as the square of one update (even though the likelihood ratios still go as the square, etc.).
↑ comment by Mark_Eichenlaub · 2012-10-31T12:42:26.949Z · LW(p) · GW(p)
You're right. That would be true if we did n independent tests, not one test with n-times the subjects.
e.g. probability of 60 or more heads in 100 tosses = .028
probability of 120 or more heads in 200 tosses = .0028
but .028^2 = .00081
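(These tail probabilities are easy to reproduce in base R; my check, not part of the original comment:)

```r
p100 <- pbinom(59, 100, 0.5, lower.tail = FALSE)    # P(>= 60 heads in 100): ~0.0284
p200 <- pbinom(119, 200, 0.5, lower.tail = FALSE)   # P(>= 120 heads in 200): ~0.0028
p100^2                                              # ~0.00081, noticeably smaller than p200
```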
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-31T14:47:21.504Z · LW(p) · GW(p)
Amazing, innit? Meanwhile in the land of the sane people, the likelihood function from any given propensity to come up heads, to the observed data, is exactly squared for 120 in 200 vs. 60 in 100.
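(The "exactly squared" claim is easy to verify numerically; a sketch of my own: for any propensity theta, the likelihood ratio against a fair coin for 120-in-200 is the square of the one for 60-in-100, since the binomial coefficients cancel within each ratio:)

```r
theta <- seq(0.05, 0.95, by = 0.05)

# Likelihood ratios against the fair coin (binomial coefficients cancel within each ratio)
lr_60_100  <- dbinom(60, 100, theta)  / dbinom(60, 100, 0.5)
lr_120_200 <- dbinom(120, 200, theta) / dbinom(120, 200, 0.5)

all.equal(lr_120_200, lr_60_100^2)   # TRUE: the likelihood function is exactly squared
```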
↑ comment by Daniel_Burfoot · 2012-10-30T16:14:32.253Z · LW(p) · GW(p)
It should only take <2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05 and the proportion of false findings would go way down immediately.
But then people could only publish 1/50 as many papers!
Replies from: DaFranker
↑ comment by DaFranker · 2012-10-30T17:13:15.489Z · LW(p) · GW(p)
I had to do a double-take before I realized that this probably wasn't a serious attempt at a counterargument. I'm still not quite convinced that it isn't. Poe's Law and related things.
Yes, it does seem like some people's true rejections might turn out to be fewer opportunities for appealing to the public and gaining popularity / funding.
↑ comment by Deleet · 2012-10-30T13:10:10.104Z · LW(p) · GW(p)
I have made a habit of ignoring p<.05 values when they are reported, unless it's a special case where getting more subjects is too difficult or impossible.* I normally go with p<0.01 results unless it's very easy to gather more subjects, in which case going with p<0.001 or lower is good.
* For those cases, one can rely on repeated measurements of the same subjects over time. For instance, when comparing cross-country scores where the number of subjects is maxed out at 100-200. E.g. in The Spirit Level (book).
↑ comment by Emily · 2012-11-03T15:37:27.221Z · LW(p) · GW(p)
it should only take <2.5 times as many subjects to detect a real effect at p < .001 instead of p < .05
Not to disagree with the overarching point, but the use of "only" here is inappropriate under some circumstances. Eg, a neuropsychological study requiring participants with a particular kind of brain injury is going to find more than doubling its n extremely difficult and time-consuming. For this kind of study (presuming insistence on working with p-values) it seems better to roll with the "ludicrous" p < .05 and rely on replication elsewhere for improved reliability. "Ludicrous" is too strong in fields with small effect sizes and small subject pools; they just need a much higher rate of replication.
↑ comment by lukeprog · 2012-10-30T01:01:06.274Z · LW(p) · GW(p)
Dunno.
I bet math and logic papers have a higher frequency of valid results than medicine papers have of true results, and that LW math and logic results are more often valid than not.
Mainstream philosophy, however, is vastly less truth-tracking than medicine. I bet LW has a better philosophical track record than mainstream philosophy (merely by being naturalistic, reductionistic, relatively precise, cogsci-informed, etc.), but I'm not sure by how much.
comment by lukeprog · 2012-10-30T17:54:25.488Z · LW(p) · GW(p)
No help from LW whatsoever? I was at least expecting people to mention the obvious stuff, like Eliezer's free will sequence. :(
Replies from: None, Benito, DaFranker
↑ comment by [deleted] · 2012-10-30T18:04:23.102Z · LW(p) · GW(p)
Could you say something about your methods of deciding originality and significance? One of the problems with figuring something like this out is that LW and mainstream academia often use significantly different jargon. It may be that LW will be unhelpful here because in order to work out what's original and significant, you'd have to be an expert both in LW stuff and in mainstream academic discussions.
↑ comment by Ben Pace (Benito) · 2012-10-31T19:06:05.374Z · LW(p) · GW(p)
Are you going to put on the Free Will sequence? And other important contributions from the sequences, like the reductionism post and dissolving the question (and Zombies too!)? (They're pretty important to philosophy.)
Replies from: lukeprog
↑ comment by DaFranker · 2012-10-30T18:11:29.269Z · LW(p) · GW(p)
"I would help, but I suck at researching papers to compare LW stuff with mainstream science, so I can't really do much."
Unfortunately, I might also want to post that (for mostly status reasons) if I wouldn't help, which makes the information content near-zero and qualifies as noise.
Unless it's a meta post explaining why I wasn't posting and that I think many other users might judge themselves unable to help.
comment by A1987dM (army1987) · 2012-11-02T21:49:56.848Z · LW(p) · GW(p)
You forgot to escape the underscores in cousin_it's username.
Replies from: lukeprog
↑ comment by lukeprog · 2012-11-02T22:41:53.594Z · LW(p) · GW(p)
It's an error in LW's rendering. Not sure how to fix it.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-11-03T04:57:15.779Z · LW(p) · GW(p)
Fixed. (If you edit HTML source, LW software typically isn't involved in deciding how the post gets rendered.)