oscar_cunningham feed - LessWrong 2.0 Readeroscar_cunningham’s posts and comments on LessWrongen-usComment by Oscar_Cunningham on Wolf's Dice
https://lw2.issarice.com/posts/zd89utY4afA59p58k/wolf-s-dice#5A6ywv26Mkigs4GmF
<p>Right. But also we would want to use a prior that favoured biases which were near fair, since we know that Wolf at least thought they were a normal pair of dice.</p>oscar_cunningham5A6ywv26Mkigs4GmF2019-07-17T13:06:40.118ZComment by Oscar_Cunningham on Open Thread April 2019
https://lw2.issarice.com/posts/dYMih9oqYuFzQaS3c/open-thread-april-2019#Zkr5yLu99PY3aoNks
<p>Suppose I'm trying to infer probabilities about some set of events by looking at betting markets. My idea was to visualise the possible probability assignments as a high-dimensional space, and then for each bet being offered remove the part of that space for which the bet has positive expected value. The region remaining after doing this for all bets on offer should contain the probability assignment representing the "market's beliefs".</p><p>My question is about the situation where there is no remaining region. In this situation for every probability assignment there's some bet with a positive expectation. Is it a theorem that there is always an arbitrage in this case? In other words, can one switch the quantifiers from "for all probability assignments there exists a positive expectation bet" to "there exists a bet such that for all probability assignments the bet has positive expectation"?</p>oscar_cunninghamZkr5yLu99PY3aoNks2019-04-04T07:04:15.002ZComment by Oscar_Cunningham on The Kelly Criterion
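<p>For finite outcome spaces and finitely many bets the quantifier swap does go through, essentially by linear-programming duality (Farkas' lemma): if no probability assignment makes every offered bet non-positive in expectation, then some combination of the bets has positive payoff in every outcome. A two-outcome sketch with made-up prices (the bets and numbers here are purely illustrative, not from the question above):</p>

```python
# Two-outcome illustration (prices made up): bet 1 pays $1 if A occurs and
# costs $0.40; bet 2 pays $1 if A does not occur and also costs $0.40.

def ev_bet1(p):  # expected profit of bet 1 when P(A) = p
    return p * 1.00 - 0.40

def ev_bet2(p):  # expected profit of bet 2 when P(A) = p
    return (1 - p) * 1.00 - 0.40

# For every probability assignment, at least one bet has positive expectation...
assert all(ev_bet1(p) > 0 or ev_bet2(p) > 0
           for p in [i / 100 for i in range(101)])

# ...and buying both bets is the arbitrage: total cost $0.80, and exactly one
# of them pays $1.00 whichever way A goes, so the profit is positive in every outcome.
profit_if_A = 1.00 - 0.80
profit_if_not_A = 1.00 - 0.80
assert profit_if_A > 0 and profit_if_not_A > 0
```

<p>Here the "no remaining region" condition shows up as ev_bet1 + ev_bet2 being a positive constant ($0.20) for every p.</p>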
https://lw2.issarice.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion#mcGHEp79miyMXBJtu
<p>I believe you missed one of the rules of Gurkenglas' game, which was that there are at most 100 rounds. (Although it's possible I misunderstood what they were trying to say.)</p>
<p>If you assume that play continues until one of the players is bankrupt then in fact there are lots of winning strategies. In particular, betting any constant proportion less than 38.9% of your bankroll is a winning strategy. The Kelly criterion isn't unique among them.</p>
<p>My program doesn't assume anything about the strategy. It just works backwards from the last round and calculates the optimal bet and expected value for each possible amount of money you could have, on the basis of the expected values in the next round which it has already calculated. (Assuming each bet is a whole number of cents.)</p>
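<p>The backward induction described above can be sketched as follows. The comment doesn't restate the game's parameters, so the ones here are assumptions: even-odds bets won with probability 0.6, whole-cent bets, and utility equal to wealth capped at $2.00. Exact rational arithmetic avoids floating-point precision problems.</p>

```python
from fractions import Fraction
from functools import lru_cache

P = Fraction(3, 5)   # chance of winning each bet (assumed)
CAP = 200            # utility cap in cents (assumed)

@lru_cache(maxsize=None)
def value(wealth, rounds):
    """Expected final utility from `wealth` cents with `rounds` bets left,
    betting optimally."""
    if rounds == 0 or wealth == 0:
        return Fraction(min(wealth, CAP))
    # Work backwards: try every whole-cent bet, using the already-computed
    # expected values for the following round.
    return max(P * value(wealth + bet, rounds - 1)
               + (1 - P) * value(wealth - bet, rounds - 1)
               for bet in range(wealth + 1))

def best_bet(wealth, rounds):
    """The optimal whole-cent bet in the current round."""
    return max(range(wealth + 1),
               key=lambda b: P * value(wealth + b, rounds - 1)
                             + (1 - P) * value(wealth - b, rounds - 1))
```

<p>With one round left and the cap out of reach, utility is linear and the program bets everything; at the cap it bets nothing, since a win gains no utility.</p>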
oscar_cunninghammcGHEp79miyMXBJtu2018-10-17T00:38:14.810ZComment by Oscar_Cunningham on The Kelly Criterion
https://lw2.issarice.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion#MmLjtQtvgMxzinHPP
<blockquote>If you wager one buck at a time, you win almost certainly.</blockquote><p>But that isn't the Kelly criterion! Kelly would say I should open by betting <em>two</em> bucks.</p><p>In games of that form, it seems like you should be more and more careful as the number of bets gets larger. The optimal strategy doesn't tend to Kelly in the limit.</p><p>EDIT: In fact my best opening bet is $0.64, leading to expected winnings of $19.561.</p><p>EDIT2: I reran my program with higher precision, and got the answer $0.58 instead. This concerned me so I reran again with infinite precision (rational numbers) and got that the best bet is $0.21. The expected utilities were very similar in each case, which explains the precision problems.</p><p>EDIT3: If you always use Kelly, the expected utility is only $18.866.</p>oscar_cunninghamMmLjtQtvgMxzinHPP2018-10-16T18:39:28.929ZComment by Oscar_Cunningham on The Kelly Criterion
https://lw2.issarice.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion#7zMv5WAWr9nYvLGZE
<p>Can you give a concrete example of such a game?</p>oscar_cunningham7zMv5WAWr9nYvLGZE2018-10-16T16:33:39.971ZComment by Oscar_Cunningham on The Kelly Criterion
https://lw2.issarice.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion#JkMZB4qZ2op5cBQDg
<blockquote>even if your utility outside of the game is linear, inside of the game it is not.</blockquote><p>Are there any games where it's a wise idea to use the Kelly criterion even though your utility outside the game is linear?</p>oscar_cunninghamJkMZB4qZ2op5cBQDg2018-10-16T14:06:19.033ZComment by Oscar_Cunningham on The Kelly Criterion
https://lw2.issarice.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion#yLNEiw5wBmMNmTPiQ
<blockquote>Marginal utility is decreasing, but in practice falls off far less than geometrically.</blockquote><p>I think this is only true if you're planning to give the money to charity or something. If you're just spending the money on yourself then I think marginal utility is literally zero after a certain point.</p>oscar_cunninghamyLNEiw5wBmMNmTPiQ2018-10-16T13:34:57.081ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#7hATmraChHhCxsmD6
<p>Yeah, I think that's probably right.</p><p>I thought of that before but I was a bit worried about it because Löb's Theorem says that a theory can never prove this axiom schema about itself. But I think we're safe here because we're assuming "If T proves φ, then φ" while not actually working in T.</p>oscar_cunningham7hATmraChHhCxsmD62018-09-26T12:34:52.460ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#oDmXRbug5TmWZPRDS
<p>I'm arguing that, for a theory T and Turing machine P, "T is consistent" and "T proves that P halts" aren't together enough to deduce that P halts. And as a counterexample I suggested T = PA + "PA is inconsistent" and P = "search for an inconsistency in PA". This P doesn't halt even though T is consistent and proves it halts.</p><p>So if it doesn't work for that T and P, I don't see why it would work for the original T and P.</p>oscar_cunninghamoDmXRbug5TmWZPRDS2018-09-26T11:00:25.121ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#wdXRyDDgW4S8rjcx3
<p>Consistency of T isn't enough, is it? For example the theory (PA + "The program that searches for a contradiction in PA halts") is consistent, even though that program doesn't halt.</p>oscar_cunninghamwdXRyDDgW4S8rjcx32018-09-25T18:07:25.927ZComment by Oscar_Cunningham on Quantum theory cannot consistently describe the use of itself
https://lw2.issarice.com/posts/pxpiGtyZpxmXg8hHW/quantum-theory-cannot-consistently-describe-the-use-of#eHcRvvncyhQpmdpaY
<p>https://www.scottaaronson.com/blog/?p=3975</p>oscar_cunninghameHcRvvncyhQpmdpaY2018-09-25T09:53:31.606ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#3SzEaArQ9hfzgT4XM
<p>This is a good point. The Wikipedia pages for other sites, like <a href="https://en.wikipedia.org/wiki/Reddit">Reddit</a>, also focus unduly on controversy.</p>oscar_cunningham3SzEaArQ9hfzgT4XM2018-09-20T20:11:55.066ZComment by Oscar_Cunningham on Zut Allais!
https://lw2.issarice.com/posts/zNcLnqHF5rvrTsQJx/zut-allais#PGDFqhK8aByguJemf
<p>And the fact that situations like that occurred in humanity's evolution explains why humans have the preference for certainty that they do.</p>oscar_cunninghamPGDFqhK8aByguJemf2018-09-06T20:35:20.377ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#2zQEMXcy9Exw5eNdw
<p>As well as ordinals and cardinals, Eliezer's construction also needs concepts from the areas of computability and formal logic. A good book to get introduced to these areas is Boolos' "Computability and Logic".</p>oscar_cunningham2zQEMXcy9Exw5eNdw2018-09-03T09:10:35.767ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#f8c7fZio2rKgMjRyT
<blockquote>being unable to imagine a scenario where something is possible</blockquote><p>This isn't an accurate description of the mind projection fallacy. The mind projection fallacy happens when someone thinks that some phenomenon occurs in the real world but in fact the phenomenon is a part of the way their mind works.</p><p>But yes, it's common to almost all fallacies that they are in fact weak Bayesian evidence for whatever they were supposed to support.</p>oscar_cunninghamf8c7fZio2rKgMjRyT2018-09-03T08:08:29.260ZComment by Oscar_Cunningham on Open Thread September 2018
https://lw2.issarice.com/posts/fteSdEFCv4r43rhj7/open-thread-september-2018#q9b3zGQFshbie26h7
<p>Eliezer made <a href="http://forums.xkcd.com/viewtopic.php?f=14&t=7469&sid=3b6d016c7172b82d00b7789cf293b232&start=1240#p3254229">this attempt</a> at naming a large number computable by a small Turing machine. What I'm wondering is exactly what axioms we need to use in order to prove that this Turing machine does indeed halt. The description of the Turing machine uses a large cardinal axiom ("there exists an I0 rank-into-rank cardinal"), but I don't think that assuming this cardinal is enough to prove that the machine halts. Is it enough to assume that this axiom is consistent? Or is something stronger needed?</p>oscar_cunninghamq9b3zGQFshbie26h72018-09-01T17:12:14.313ZComment by Oscar_Cunningham on You Play to Win the Game
https://lw2.issarice.com/posts/ggzWxjrGGCMJDssBn/you-play-to-win-the-game#gJ9ubmptGdFkZuuEB
<blockquote>games are a specific case where the utility (winning) is well-defined</blockquote><p>Lots of board games have badly specified utility functions. The one that springs to mind is Diplomacy; if a stalemate is negotiated then the remaining players "share equally in a draw". I'd take this to mean that each player gets utility 1/n (where there are n players, and 0 is a loss and 1 is a win). But it could also be argued that they each get 1/(2n), sharing a draw (1/2) between them (to get 1/n each wouldn't they have to be "sharing equally in a win"?).</p><p>Another example is <a href="https://en.wikipedia.org/wiki/Castle_Panic">Castle Panic</a>. It's allegedly a cooperative game. The players all "win" or "lose" together. But in the case of a win one of the players is declared a "Master Slayer". It's never stated how much the players should value being the Master Slayer over a mere win.</p><p>Interesting situations occur in these games when the players have different opinions about the value of different outcomes. One player cares more about being the Master Slayer than everyone else, so everyone else lets them be the Master Slayer. They think that they're doing much better than everyone else, but everyone else is happy so long as they all keep winning.</p>oscar_cunninghamgJ9ubmptGdFkZuuEB2018-08-31T19:10:06.529ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#by6Rcyu4SvCrYJcG8
<p>I actually learnt quantum physics from that sequence, and I'm now a mathematician working in Quantum Computing. So it can't be too bad!</p><p>The explanation of quantum physics is the best I've seen anywhere. But this might be because it explained it in a style that was particularly suited to me. I really like the way it explains the underlying reality first and only afterwards explains how this corresponds with what we perceive. A lot of other introductions follow the historical discovery of the subject, looking at each of the famous experiments in turn, and only building up the theory in a piecemeal way. Personally I hate that approach, but I've seen other people say that those kinds of introductions were the only ones that made sense to them.</p><p>The sequence is especially good if you don't want a math-heavy explanation, since it manages to explain exactly what's going on in a technically correct way, while still not using any equations more complicated than addition and multiplication (as far as I can remember).</p><p>The second half of the sequence talks about interpretations of quantum mechanics, and advocates for the "many-worlds" interpretation over "collapse" interpretations. Personally I found it sufficient to convince me that collapse interpretations were bullshit, but it didn't quite convince me that the many-worlds interpretation is obviously true. I find it plausible that the true interpretation is some third alternative. Either way, the discussion is very interesting and worth reading.</p><p>As far as "holding up" goes, I once read through the sequence looking for technical errors and <a href="https://www.lesswrong.com/posts/JrhoMTgMrMRJJiS48/decoherence#myhBWJMN7gg8jkZZs">only found one</a>. Eliezer says that the wavefunction can't become more concentrated because of Liouville's theorem. This is completely wrong (QM is time-reversible, so if the wavefunction can become more spread out it must also be able to become more concentrated). 
But I'm inclined to be forgiving to Eliezer on this point because he's making exactly the mistake that he repeatedly warns us about! He's confusing the distribution described by the wavefunction (the uncertainty that we <em>would</em> have if we performed a measurement) with the uncertainty we <em>do</em> have <em>about</em> the wavefunction (which is what Liouville's theorem actually applies to).</p>oscar_cunninghamby6Rcyu4SvCrYJcG82018-08-16T19:31:54.613ZComment by Oscar_Cunningham on [deleted post]
https://lw2.issarice.com/posts/AA5Cd5cWzthHJuZhZ/the-ever-expanding-moral-circle#NNBDokfvXCKRkDcsJ
<p>Really, the fact that different sizes of moral circle can incentivize coercion is just a trivial corollary of the fact that value differences in general can incentivize coercion.</p>oscar_cunninghamNNBDokfvXCKRkDcsJ2018-08-15T13:34:41.626ZComment by Oscar_Cunningham on [deleted post]
https://lw2.issarice.com/posts/AA5Cd5cWzthHJuZhZ/the-ever-expanding-moral-circle#8CuZt62sWcDJdqBmM
<blockquote>
<p>When people have a wide circle of concern and advocate for its widening as a norm, this makes me nervous because it implies huge additional costs forced on me, through coercive means like taxation or regulations</p>
</blockquote>
<p>At the moment I, like many others on LW, am experiencing the opposite. We would prefer to give money to people in Africa, but instead we are forced by taxes to give to poor people in the same country as us. Since charity to Africa is much more effective, this means that (from our point of view) 99% of the taxed money is being wasted.</p>
oscar_cunningham8CuZt62sWcDJdqBmM2018-08-15T07:04:31.147ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#bWtdaQ4wGnydYrLjb
<p>Okay, sure. But an idealized rational reasoner wouldn't display this kind of uncertainty about its own beliefs, but it would still have the phenomenon you were originally asking about (where statements assigned the same probability update by different amounts after the introduction of evidence). So this kind of second-order probability can't be used to answer the question you originally asked.</p>oscar_cunninghambWtdaQ4wGnydYrLjb2018-08-14T13:06:38.396ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#6AWzJy2Ed9ohiStnz
<blockquote>It seems like you're describing a Bayesian probability distribution over a frequentist probability estimate of the "real" probability.</blockquote><p>Right. But I was careful to refer to f as a frequency rather than a probability, because f isn't a description of our beliefs but rather a physical property of the coin (and of the way it's being thrown).</p><p></p><blockquote>Agreed that this works in cases which make sense under frequentism, but in cases like "Trump gets reelected" you need some sort of distribution over a Bayesian credence, and I don't see any natural way to generalise to that.</blockquote><p>I agree. But it seems to me like the other replies you've received are mistakenly treating all propositions as though they do have an f with an unknown distribution. Unnamed suggests using the beta distribution; the thing which it's the distribution of would have to be f. Similarly rossry's reply, containing phrases like "something in the ballpark of 50%" and "precisely 50%", talks as though there is some unknown percentage to which 50% is an estimate.</p><p>A lot of people (like in the paper Pattern linked to) think that our distribution over f is a "second-order" probability describing our beliefs about our beliefs. I think this is wrong. The number f doesn't describe our beliefs at all; it describes a physical property of the coin, just like mass and diameter.</p><p>In fact, any kind of second-order probability must be trivial. We have introspective access to our own beliefs. So given any statement about our beliefs we can say for certain whether or not it's true. Therefore, any second-order probability will either be equal to 0 or 1.</p>oscar_cunningham6AWzJy2Ed9ohiStnz2018-08-13T11:22:04.778ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#p7Wn36MCwYn2ZohmW
<p>The Open Thread appears to no longer be stickied. Try pushing the pin in harder next time.</p>oscar_cunninghamp7Wn36MCwYn2ZohmW2018-08-13T10:06:29.757ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#itG5vwp4Gg9wyA4eN
<p>It doesn't really matter for the point I was making, so long as you agree that the probability moves further for the second coin.</p>
oscar_cunninghamitG5vwp4Gg9wyA4eN2018-08-12T06:43:20.191ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#LkGYtas228Beohu6B
<p>This is related to the problem of predicting a coin with an unknown bias. Consider two possible coins: the first which you have inspected closely and which looks perfectly symmetrical and feels evenly weighted, and the second which you haven't inspected at all and which you got from a friend who you have previously seen cheating at cards. The second coin is much more likely to be biased than the first.</p><p>Suppose you are about to toss one of the coins. For each coin, consider the event that the coin lands on heads. In both cases you will assign a probability of 50%, because you have no knowledge that distinguishes between heads and tails.</p><p>But now suppose that before you toss the coin you learn that the coin landed on heads for each of its 10 previous tosses. How does this affect your estimate?</p><ul><li>In the case of the first coin it doesn't make very much difference. Since you see no way in which the coin could be biased you assume that the 10 heads were just a coincidence, and you still assign a probability of 50% to heads on the next toss (maybe 51% if you are beginning to be suspicious despite your inspection of the coin).</li><li>But when it comes to the second coin, this evidence would make you very suspicious. You would think it likely that the coin had been tampered with. Perhaps it simply has two heads. But it would also still be possible that the coin was fair. Two-headed coins are pretty rare, even in the world of degenerate gamblers. So you might assign a probability of around 70% to getting heads on the next toss.</li></ul><p>This shows the effect that you were describing; both events had a prior probability of 50%, but the probability changes by different amounts in response to the same evidence. We have a lot of knowledge about the first coin, and compared to this knowledge the new evidence is insignificant. 
We know much less about the second coin, and so the new evidence moves our probability much further.</p><p>Mathematically, we model each coin as having a fixed but unknown frequency with which it comes up heads. This is a number 0 ≤ f ≤ 1. If we knew f then we would assign a probability of f to any coin-flip except those about which we have direct evidence (i.e. those in our causal past). Since we don't know f we describe our knowledge about it by a probability distribution P(f). The probability of the next coin-flip coming up heads is then the expected value of f, the integral of P(f)f.</p><p>Then in the above example our knowledge about the first coin would be described by a function P(f) with a sharp peak around 1/2 and almost zero probability everywhere else. Our knowledge of the second coin would be described by a much broader distribution. When we find out that the coin has come up heads 10 times before our probability distribution updates according to Bayes' rule. It changes from P(f) to P(f)f^10 (or rather the normalisation of P(f)f^10). This doesn't affect the sharply pointed distribution very much because the function f^10 is approximately constant over the sharp peak. But it pushes the broad distribution strongly towards 1 because 1^10 is 1024 times larger than 1/2^10 and P(f) isn't 1024 times taller near 1/2 than near 1.</p><p>So this is a nice case where it is possible to compare between two cases how much a given piece of evidence moves our probability estimate. However I'm not sure whether this can be extended to the general case. A proposition like "Trump gets reelected" can't be thought of as being like a flip of a coin with a particular frequency. Not only are there no "previous flips" we can learn about, it's not clear what another flip would even look like. 
The election that Trump won doesn't count, because we had totally different knowledge about that one.</p>oscar_cunninghamLkGYtas228Beohu6B2018-08-11T22:13:19.688ZComment by Oscar_Cunningham on Open Thread August 2018
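<p>The two-coin update above can be made concrete with a discretized distribution over the frequency f. The priors here are made up for illustration: a sharp peak near 1/2 for the inspected coin, and (as a simple stand-in for the suspect coin) a flat prior.</p>

```python
# Discretize f into a grid of midpoints and represent each prior as
# unnormalised weights on the grid.
N = 1000
grid = [(i + 0.5) / N for i in range(N)]

def posterior_mean(prior, heads=10):
    """Probability of heads on the next toss after seeing `heads` heads in
    a row: update P(f) to P(f) * f**heads, normalise, take the mean of f."""
    weights = [p * f**heads for p, f in zip(prior, grid)]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, grid)) / total

# Made-up priors for the two coins.
sharp = [1.0 if abs(f - 0.5) < 0.01 else 0.0 for f in grid]  # inspected coin
broad = [1.0] * N                                            # suspect coin
```

<p>The sharp prior barely moves (posterior mean still within 1% of 50%), while the flat prior is pushed most of the way to 1, landing near 11/12 ≈ 0.92; the ~70% figure in the comment corresponds to a suspect-coin prior that still puts most of its mass near fair.</p>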
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#xzoSF5EDkvTJaX28t
<p>I see, thanks. I had been looking at the page https://www.lesswrong.com/daily, linked to from the sidebar under the same phrase "All Posts".</p>oscar_cunninghamxzoSF5EDkvTJaX28t2018-08-04T19:28:29.654ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#3tyWNF3p3bZdiSkXN
<p>I don't see it there. Have you done the update yet?</p>
oscar_cunningham3tyWNF3p3bZdiSkXN2018-08-04T11:59:37.965ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#b3ur3ZLZiGHYydCq9
<p>What does "stickied" do?</p>oscar_cunninghamb3ur3ZLZiGHYydCq92018-08-03T20:59:30.993ZComment by Oscar_Cunningham on What are your plans for the evening of the apocalypse?
https://lw2.issarice.com/posts/hxJBc5Qo3oea3X443/what-are-your-plans-for-the-evening-of-the-apocalypse#3gGbwnbw6nm6SbrT4
<p>The financial effects would be immediate and extreme. All sorts of mad things would happen to stock prices, inflation, interest rates, etc. The people who quit their jobs to live off their savings might well find that their savings don't stretch as far as they thought, which is probably a good thing since the whole system would collapse much faster than five years if a significant proportion of people were to quit their jobs.</p>oscar_cunningham3gGbwnbw6nm6SbrT42018-08-02T09:42:00.526ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#QXZdCSA6FQZESfNAt
<p>Okay, great.</p>
oscar_cunninghamQXZdCSA6FQZESfNAt2018-08-01T20:20:49.510ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#mDMNwAy5wAN57Eg4u
<p>Is it possible to subscribe to a post so you get notifications when new comments are posted? I notice that individual <em>comments</em> have subscribe buttons.</p>oscar_cunninghammDMNwAy5wAN57Eg4u2018-08-01T11:45:15.744ZComment by Oscar_Cunningham on Open Thread August 2018
https://lw2.issarice.com/posts/8xc43uA5nyxkAudiR/open-thread-august-2018#tFaCNwJofiJSws2BS
<p>Old LW had a link to the open thread in the sidebar. Would it be good to have that here so that comments later in the month still get some attention?</p>oscar_cunninghamtFaCNwJofiJSws2BS2018-08-01T11:16:46.942ZComment by Oscar_Cunningham on Applying Bayes to an incompletely specified sample space
https://lw2.issarice.com/posts/Mxj6DtRgLg88vEdti/applying-bayes-to-an-incompletely-specified-sample-space#pxEXqWX5ScqkpFYiH
<p>I've always thought that chapter was a weak point in the book. Jaynes doesn't treat probabilities of probabilities in quite the right way (for one thing they're really probabilities of frequencies). So take it with a grain of salt.</p>
oscar_cunninghampxEXqWX5ScqkpFYiH2018-07-30T22:01:44.566ZComment by Oscar_Cunningham on Bayesianism (Subjective or Objective)
https://lw2.issarice.com/posts/EmhfawXSZ7FRHALCe/bayesianism-subjective-or-objective#JkzpikjjscoBgz5xh
<p>I'm not quite sure what you mean here, but I don't think the idea of calibration is directly related to the subjective/objective dichotomy. Both subjective and objective Bayesians could desire to be well calibrated.</p>oscar_cunninghamJkzpikjjscoBgz5xh2018-07-30T14:42:59.792ZComment by Oscar_Cunningham on Bayesianism (Subjective or Objective)
https://lw2.issarice.com/posts/EmhfawXSZ7FRHALCe/bayesianism-subjective-or-objective#o4P9PEpvysstNEkus
<p>Also, here's Eliezer on the subject: <a href="https://www.lesswrong.com/posts/XhaKvQyHzeXdNnFKy/probability-is-subjectively-objective">Probability is Subjectively Objective</a></p><p>Under his definitions he's subjective. But he would definitely say that agents with the same state of knowledge must assign the same probabilities, which rules him out of the very subjective camp.</p>oscar_cunninghamo4P9PEpvysstNEkus2018-07-30T12:48:53.195ZComment by Oscar_Cunningham on Bayesianism (Subjective or Objective)
https://lw2.issarice.com/posts/EmhfawXSZ7FRHALCe/bayesianism-subjective-or-objective#j6nNxNd9J5J8Hsu8B
<p>I think everyone agrees on the directions "more subjective" and "more objective", but they use the words "subjective"/"objective" to mean "more subjective/objective than me".</p><p>A very subjective position would be to believe that there are no "right" prior probabilities, and that it's okay to just pick any prior depending on personal choice. (i.e. Agents with the same knowledge can assign different probabilities)</p><p>A very objective position would be to believe that there are some probabilities that must be the same even for agents with different knowledge. For example they might say that you must assign probability 1/2 to a fair coin coming up heads, no matter what your state of knowledge is. (i.e. Agents with different knowledge must (sometimes) assign the same probabilities)</p><p>Jaynes and Yudkowsky are somewhere in between these two positions (i.e. agents with the same knowledge must assign the same probabilities, but the probability of any event can vary depending on your knowledge of it), so they get called "objective" by the maximally subjective folk, and "subjective" by the maximally objective folk.</p><p>The definitions in the SEP above would definitely put Jaynes and Yudkowsky in the objective camp, but there's a lot of room on the scale past the SEP definition of "objective".</p>oscar_cunninghamj6nNxNd9J5J8Hsu8B2018-07-30T12:38:21.300ZComment by Oscar_Cunningham on Bayesianism (Subjective or Objective)
https://lw2.issarice.com/posts/EmhfawXSZ7FRHALCe/bayesianism-subjective-or-objective#JyF6hX7Dno3KFkHRq
<p><a href="https://plato.stanford.edu/entries/epistemology-bayesian/">The SEP</a> is quite good on this subject:</p><p></p><blockquote><strong>Subjective and Objective Bayesianism. </strong>Are there constraints on prior probabilities other than the probability laws? Consider a situation in which you are to draw a ball from an urn filled with red and black balls. Suppose you have no other information about the urn. What is the prior probability (before drawing a ball) that, given that a ball is drawn from the urn, that the drawn ball will be black? The question divides Bayesians into two camps:</blockquote><blockquote>(a) <em>Subjective Bayesians</em> emphasize the relative lack of rational constraints on prior probabilities. In the urn example, they would allow that any prior probability between 0 and 1 might be rational (though some Subjective Bayesians (e.g., Jeffrey) would rule out the two extreme values, 0 and 1). The most extreme Subjective Bayesians (e.g., de Finetti) hold that the only rational constraint on prior probabilities is probabilistic coherence. Others (e.g., Jeffrey) classify themselves as subjectivists even though they allow for some relatively small number of additional rational constraints on prior probabilities. Since subjectivists can disagree about particular constraints, what unites them is that their constraints rule out very little. For Subjective Bayesians, our actual prior probability assignments are largely the result of non-rational factors—for example, our own unconstrained, free choice or evolution or socialization.</blockquote><blockquote>(b) <em>Objective Bayesians</em> (e.g., Jaynes and Rosenkrantz) emphasize the extent to which prior probabilities are rationally constrained. In the above example, they would hold that rationality requires assigning a prior probability of 1/2 to drawing a black ball from the urn. 
They would argue that any other probability would fail the following test: Since you have no information at all about which balls are red and which balls are black, you must choose prior probabilities that are invariant with a change in label (“red” or “black”). But the only prior probability assignment that is invariant in this way is the assignment of prior probability of 1/2 to each of the two possibilities (i.e., that the ball drawn is black or that it is red).</blockquote><blockquote>In the limit, an Objective Bayesian would hold that rational constraints uniquely determine prior probabilities in every circumstance. This would make the prior probabilities <em>logical probabilities</em> determinable purely <em>a priori</em>.</blockquote><p></p><p>Under these definitions, Eliezer and LW in general fall under the Objective category. We tend to believe that two agents with the same knowledge should assign the same probability.</p>oscar_cunninghamJyF6hX7Dno3KFkHRq2018-07-29T14:41:41.883ZComment by Oscar_Cunningham on Open Thread July 2018
https://lw2.issarice.com/posts/dPGTt8pTMA2oyKjE9/open-thread-july-2018#tfTRRsf9CLYzcosbt
<p>Sure, the inductor doesn't know which systems are consistent, but nevertheless it eventually starts believing the proofs given by any system which is consistent.</p>oscar_cunninghamtfTRRsf9CLYzcosbt2018-07-17T15:54:16.771ZComment by Oscar_Cunningham on Open Thread July 2018
https://lw2.issarice.com/posts/dPGTt8pTMA2oyKjE9/open-thread-july-2018#mppHezqAgis2YgHSC
<p>Is there a preferred way to flag spam posts like this one: https://www.lesswrong.com/posts/g7LgqmEhaoZnzggzJ/teaching-is-everything-and-more ?</p>oscar_cunninghammppHezqAgis2YgHSC2018-07-11T12:43:15.305ZComment by Oscar_Cunningham on Open Thread July 2018
https://lw2.issarice.com/posts/dPGTt8pTMA2oyKjE9/open-thread-july-2018#3doMxPjXAqAaf6KZD
<p>Could logical inductors be used as a partial solution to <a href="https://en.wikipedia.org/wiki/Hilbert%27s_second_problem">Hilbert's Second Problem</a> (of putting mathematics on a sure footing)? Thanks to Gödel we know that there are lots of things that any given theory can't prove. But by running a logical inductor we could at least say that these things are true with some probability. Of course a result proved in the <a href="https://arxiv.org/abs/1609.03543">"Logical Induction" paper</a> is that the probability of an undecidable statement tends to a value that is neither 0 nor 1, so we can't use this approach to justify belief in a stronger theory. But I noticed a weaker result that does hold. There's a certain class of statements such that (assuming ZF is consistent) an inductor over PA will think that they're very likely as soon as it finds a proof for them in ZF.</p><p></p><p>This class of statements is those with only <a href="https://en.wikipedia.org/wiki/Bounded_quantifier">bounded quantifiers</a>; those where every "∀" and "∃" are restricted to a predefined range. This class of statements is decidable, meaning that there's a Turing machine that will take a bounded sentence and will always halt and tell you whether or not it holds in <em>ℕ</em>. Because of this every bounded sentence has a proof (or a proof of its negation) in both PA and ZF (and PA and ZF agree which it is).</p><p></p><p>But the proofs of a bounded sentence in PA and ZF can have very different lengths. Consider the self-referential bounded sentence "PA cannot prove this sentence in fewer than 1000000 symbols". This must have a proof in PA, since we can just check all proofs with fewer than 1000000 symbols by brute force, but its proof must be longer than 1000000 symbols, or else we would get a contradiction. But the preceding sentences constitute a proof in ZF with far fewer than 1000000 symbols. 
So the sentence is provable in both PA and ZF, but the ZF proof is much shorter.</p><p></p><p>It might seem like the bounded sentences can't express many interesting concepts. But in fact I'd contend that they can express most (if not all) things that you might actually need to know. For example, it seems like the fact "For all x and y, x + y = y + x" is a useful unbounded sentence. But whenever you face a situation where you would want to use it, there are always some particular x and y that apply in that situation. So then we can use the bounded sentence "x + y = y + x" instead, where x and y stand for whichever values actually occurred.</p><p></p><p>Now I'll show that logical inductors over PA eventually trust proofs in ZF of bounded sentences (assuming ZF is consistent). Consider the Turing machine that takes as input a number n and searches through all strings in length order, keeping track of any that are a ZF proof of a bounded sentence. When it's been searching for n timesteps, it stops and outputs whichever bounded sentence it found a proof for last. Call this sentence ϕ_n. Now let P be a logical inductor over PA. Assuming that ZF is consistent, the sentences ϕ_n are all theorems of PA, and by construction there's a polynomial-time Turing machine that outputs them. So by a theorem in the logical inductor paper, we have that P_n(ϕ_n) tends to 1 as n goes to infinity, meaning that for large n the logical inductor becomes confident in ϕ_n sometime around day n. If a bounded statement ϕ has a ZF proof in m symbols, then it's equal to ϕ_n for n ~ exp(m). So P begins to think that ϕ is very likely from day exp(m) onward.</p><p></p><p>Assuming that the logical inductor is working with a deductive process that searches through PA proofs in length order, this can occur long before the deductive process actually proves that ϕ is true. The exponential doesn't really make a difference here, since we don't know exactly how fast the deductive process is working.
But it hardly matters, because ZF proofs can be arbitrarily shorter than PA proofs. For example, the shortest proof in PA of the sentence "PA cannot prove this sentence in fewer than exp(exp(exp(n))) symbols" is longer than exp(exp(exp(n))) symbols, whereas the length of the shortest proof in ZF is about log(n).</p><p></p><p>So in general what we have proved is that weak systems will accept proofs given in stronger theories as very good evidence, so long as the target of the proof is a bounded sentence, and so long as the stronger theory is in fact consistent. This is an interesting partial answer to Hilbert's question, since it explains why we would care about proofs in ZF, even if we only believe in PA.</p>oscar_cunningham3doMxPjXAqAaf6KZD2018-07-10T21:41:21.485ZComment by Oscar_Cunningham on What could be done with RNA and DNA sequencing that's 1000x cheaper than it's now?
https://lw2.issarice.com/posts/RBjLCdKZj9LhP6Rir/what-could-be-done-with-rna-and-dna-sequencing-that-s-1000x#M68n8dpDoSmL5Howz
<p>If we can do testing quickly then we could use it for security. Perhaps (further into the future) your phone will test your DNA when you try to use it?</p>oscar_cunninghamM68n8dpDoSmL5Howz2018-06-26T19:17:14.270ZComment by Oscar_Cunningham on UDT can learn anthropic probabilities
https://lw2.issarice.com/posts/ma5Jc4wPT36j3X84P/udt-can-learn-anthropic-probabilities#xzPfNfXEeDfobHsJJ
<p>Can I actually do this experiment, and thereby empirically determine (for myself but nobody else) which of SIA and SSA is true?</p>oscar_cunninghamxzPfNfXEeDfobHsJJ2018-06-25T19:39:02.806ZComment by Oscar_Cunningham on Set Up for Success: Insights from 'Naïve Set Theory'
https://lw2.issarice.com/posts/WPtdQ3JnoRSci87Dz/set-up-for-success-insights-from-naive-set-theory#WkLrdzwgPgZpaPNZi
<blockquote>This was valuable feedback for calibration, and I intend to continue this practice. I'm still worried that down the line and in the absence of teachers, I may believe that I've learnt the research guide with the necessary rigor, go to a MIRIx workshop, and realize I hadn't been holding myself to a sufficiently high standard. Suggestions for ameliorating this would be welcome.</blockquote><p>I think if you read more textbooks you'll naturally get used to the correct level of rigour.</p>oscar_cunninghamWkLrdzwgPgZpaPNZi2018-02-28T09:00:45.042ZComment by Oscar_Cunningham on ProbDef: a game about probability and inference
https://lw2.issarice.com/posts/gJ75o6czqifoTcSFg/probdef-a-game-about-probability-and-inference#W2D2AfsyMNx2ZjSdk
<p>Fun game! And the music is <em>really</em> nice. By the way, you have a typo in there somewhere. It says you refresh to ten shields in a level where you only get three.</p>oscar_cunninghamW2D2AfsyMNx2ZjSdk2018-01-02T07:01:33.240ZComment by Oscar_Cunningham on Can we see light?
https://lw2.issarice.com/posts/NN77egsmkXoNfYHxW/can-we-see-light#B7YiNp3rGvgNJSSuW
<p>What about photon-photon interactions? :-)</p>oscar_cunninghamB7YiNp3rGvgNJSSuW2017-12-08T18:48:13.081ZComment by Oscar_Cunningham on Simple refutation of the ‘Bayesian’ philosophy of science
https://lw2.issarice.com/posts/QjxYbo9yotsAH647Z/simple-refutation-of-the-bayesian-philosophy-of-science#Brnc8hGqPxgp4vRXk
<blockquote>
<p>However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all.</p>
</blockquote>
<p>The words "explanatory theory" seem to me to have a lot of fuzziness hiding behind them. But to the extent that "the sun is powered by nuclear fusion" is an explanatory theory I would say that the proposition ~T is just the union of many explanatory theories: "the sun is powered by oxidisation", "the sun is powered by gravitational collapse", and so on for all explanatory theories except "nuclear fusion".</p>
<blockquote>
<p>Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.</p>
</blockquote>
<p>There are lots of negative facts that are worth knowing and that scientists did good work to discover. When Michelson and Morley discovered that light did <em>not</em> travel through luminiferous aether, that was a fact worth knowing, and it led to the discovery of special relativity. So even if you don't call ~T an explanatory theory, it seems like it still has a lot of "the property that science strives to maximise".</p>
<blockquote>
<p>Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.</p>
</blockquote>
<p>A Bayesian might instead define theories T₁' = "quantum theory leads to approximately correct results in the following circumstances ..." and T₂' = "relativity leads to approximately correct results in the following circumstances ...". Then T₁' and T₂' would both have a high probability and be worth knowing, and so would their conjunction. The original conjunction, T₁ & T₂, would mean "both quantum theory and relativity are exactly true". This of course is provably false, and so has probability 0.</p>
<blockquote>
<p>Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.</p>
</blockquote>
<p>Right, right. The statement T₁ is false; but the statement T₁' is true.</p>
<blockquote>
<p>What science really seeks to ‘maximise’ (or rather, create) is explanatory power.</p>
</blockquote>
<p>Does Deutsch write anywhere about what a precise definition of "explanation" would be?</p>
oscar_cunninghamBrnc8hGqPxgp4vRXk2017-11-01T14:45:56.176ZComment by Oscar_Cunningham on Just a photo
https://lw2.issarice.com/posts/iQXMiyRtRN7ps8ShM/just-a-photo#tjLfJA23L8PsAvf8P
<p>It's also similar to this image:</p>
<p><a href="https://i.pinimg.com/736x/60/16/5b/60165bfd56829bb95563a36cd69a5825--art-optical-optical-illusions.jpg">https://i.pinimg.com/736x/60/16/5b/60165bfd56829bb95563a36cd69a5825--art-optical-optical-illusions.jpg</a></p>
<p>It's difficult to see it as anything until it "snaps" and then it's impossible to not see it.</p>
oscar_cunninghamtjLfJA23L8PsAvf8P2017-10-20T11:11:15.691ZComment by Oscar_Cunningham on Stupid Questions - September 2017
https://lw2.issarice.com/posts/rhJ75a6FqwtAxXTP9/stupid-questions-september-2017#rEvMBpnZAssbNbJ5z
<div class="ory-row"><div class="ory-cell ory-cell-sm-12 ory-cell-xs-12"><div class="ory-cell-inner ory-cell-leaf"><div><p>This is a good question. The answer is that it shouldn't take any energy to hold something in place, but your arms are very inefficient. When you keep one of your muscles contracted, the individual cells in that muscle are all contracting and relaxing repeatedly. This burns energy. So for a human, holding a dumbbell takes energy. But this is just an unfortunate consequence of the way muscles work. If the human body had some way to "lock" the skeleton into place, then you would be able to hold a dumbbell for as long as you wanted.</p></div></div></div></div>oscar_cunninghamrEvMBpnZAssbNbJ5z2017-09-27T15:59:49.144ZComment by Oscar_Cunningham on Open thread, September 25 - October 1, 2017
https://lw2.issarice.com/posts/T6xoNuMdyF8gSbxgm/open-thread-september-25-october-1-2017#tHWmJKN3Cke7mZaiT
<p>If you fail to get your n heads in a row, your expected number of flips on that attempt is the sum from i = 1 to n of i*2^-i, divided by (1-2^-n). This gives (2-(n+2)/2^n)/(1-2^-n). Let E be the expected number of flips needed in total. Then:</p>
<blockquote>
<p>E = (2^-n)n + (1-2^-n)[(2-(n+2)/2^n)/(1-2^-n) + E]</p>
</blockquote>
<p>Hence (2^-n)E = (2^-n)n + 2 - (n+2)/2^n, so E = n + 2^(n+1) - (n+2) = 2^(n+1) - 2.</p>
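A quick way to sanity-check the closed form E = 2^(n+1) - 2 is the standard run-length recurrence E_k = 2E_{k-1} + 2; this sketch is mine, not part of the original comment:

```python
from fractions import Fraction

def expected_flips(n):
    # E_k = expected fair-coin flips until k heads in a row.
    # After reaching k-1 heads, one more flip either completes the run
    # (probability 1/2) or resets it: E_k = E_{k-1} + 1 + E_k / 2,
    # which rearranges to E_k = 2 * E_{k-1} + 2, with E_0 = 0.
    e = Fraction(0)
    for _ in range(n):
        e = 2 * e + 2
    return e

# Agrees with the closed form 2^(n+1) - 2 for the first few n.
for n in range(1, 11):
    assert expected_flips(n) == 2**(n + 1) - 2
```

Using exact rationals rather than floats means the agreement is exact, not just approximate.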
oscar_cunninghamtHWmJKN3Cke7mZaiT2017-09-25T10:03:03.833ZComment by Oscar_Cunningham on Open thread, August 28 - September 3, 2017
https://lw2.issarice.com/posts/3F5zp2SiQbhTnBwWk/open-thread-august-28-september-3-2017#B7q9bnjcL9pHYRRpx
<p>I think you must just have an error in your code somewhere. Consider going round 3. Let the probability you say "3" be p_3. Then according to your numbers</p>
<blockquote>
<p>164/512 = 15/64 + (1 - 15/64)*(1/2)*p_3</p>
</blockquote>
<p>This is because the probability of escaping by round 3 is the probability of escape by round 2, plus the probability you don't escape by round 2, multiplied by the probability the coin lands tails, multiplied by the probability you say "3".</p>
<p>But then p_3 = 11/49, and 49 is not a power of two!</p>
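The arithmetic can be verified with exact rational numbers; this is a sketch I've added, and the variable names are mine, not from the code under discussion:

```python
from fractions import Fraction

# Solve 164/512 = 15/64 + (1 - 15/64) * (1/2) * p_3 for p_3 exactly.
escape_by_round_2 = Fraction(15, 64)
escape_by_round_3 = Fraction(164, 512)
p_3 = (escape_by_round_3 - escape_by_round_2) / ((1 - escape_by_round_2) / 2)
assert p_3 == Fraction(11, 49)

# A probability produced by finitely many fair coin flips must have a
# power-of-two denominator; 49 fails the usual bit test n & (n-1) == 0.
is_power_of_two = p_3.denominator & (p_3.denominator - 1) == 0
assert not is_power_of_two
```

Since 11/49 cannot arise from fair coin flips, the quoted numbers really are mutually inconsistent.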
oscar_cunninghamB7q9bnjcL9pHYRRpx2017-09-01T09:57:53.785Z[SEQ RERUN] The Cartoon Guide to Löb's Theorem
https://lw2.issarice.com/posts/9XQwYhkiSLvAMXGpc/seq-rerun-the-cartoon-guide-to-loeb-s-theorem
<p>Today's post, <a href="/lw/t6/the_cartoon_guide_to_lobs_theorem/">The Cartoon Guide to Löb's Theorem </a> was originally published on 17 August 2008. A summary (taken from the <a href="http://wiki.lesswrong.com/wiki/Less_Wrong/2008_Articles/Summaries">LW wiki</a>):</p>
<blockquote>An explanation, using cartoons, of Lob's theorem.</blockquote>
<p><br />Discuss the post here (rather than in the comments to the original post).<br /><br /><em>This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was <a href="/lw/t5/when_anthropomorphism_became_stupid/">When Anthropomorphism Became Stupid</a>, and you can use the <a href="/r/discussion/tag/sequence_reruns/">sequence_reruns tag</a> or <a href="/r/discussion/tag/sequence_reruns/.rss">rss feed</a> to follow the rest of the series.<br /><br />Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go <a href="/r/discussion/lw/5as/introduction_to_the_sequence_reruns/">here</a> for more details, or to have meta discussions about the Rerunning the Sequences series.</em></p>oscar_cunningham9XQwYhkiSLvAMXGpc2012-08-05T08:28:19.128ZFocus on rationality
https://lw2.issarice.com/posts/bBFLpfKCgndBw9iGp/focus-on-rationality
<p>(This is my view in the recent debate about posts giving a "rational" discussion of some random topic. It was originally at <a href="/lw/crd/only_say_rational_when_you_cant_eliminate_the_word/6pvq">comment</a> level but I've extended it and posted it in discussion because I want to know if and where people disagree with me, and for what reasons.)</p>
<p>I come to Less Wrong to learn about how to think and how to act effectively. I care about general algorithms that are useful for many problems, like "Hold off on proposing solutions" or "Habits are ingrained faster when you pay conscious attention to your thoughts when you perform the action". These posts have very high value to me because they improve my effectiveness across a wide range of areas.</p>
<p>Another such technique is "Dissolving the question". Yvain's "<a href="/lw/2as/diseased_thinking_dissolving_questions_about/">Diseased thinking: dissolving questions about disease</a>" is valuable as an exemplary performance of this technique. It adds to Eliezer's description of question-dissolving by giving a demonstration of its use on a real question. Its main value comes from this; anything I learnt about disease whilst reading it is just a bonus.</p>
<p>To quote badger in the recent thread "<a href="/lw/cqz/rational_toothpaste_a_case_study/">Rational Toothpaste: A Case Study</a>"</p>
<blockquote>
<p>I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum.</p>
</blockquote>
<p>But we don't need more than one or two such examples! Yvain's post about question-dissolving is the only such post I'll ever need to read.</p>
<p>Posts about toothpaste, house-buying, room-decoration, fashion, shaving or computer hardware only tell me about that particular thing. As good as many of them are they'll never be as useful as a post that teaches me a general method of thought applicable on many problems. And if I want to know about some particular topic I'll just look it up on Google, or go to a library.</p>
<p>It's not possible for LessWrong to give a rational treatment of every subject. There are just too many of them. Even if we did I wouldn't be able to carry all that info around in my head. That's why I need to learn general algorithms for producing rational decisions.</p>
<p>Even though badger makes it clear in the quote I gave that the post is supposed to be about the algorithms used, in the rest of the post almost all the discussion is on the object level (although the conclusion is good). That is, even though badger talks about which methods he's using and why, the focus is still on "What can these methods teach us about toothpaste?" and not "What can optimising toothpaste teach us about our methods?". I'd prefer it if posts tried to answer questions more like the latter. The comments exhibit the same phenomenon. Only one of the comments (kilobug's) is talking about the methods used. Most of the rest are actually <em>talking about toothpaste</em>.</p>
<p>So what I'm suggesting is that LessWrong posts (don't forget there's a whole internet to post things on) should <em>focus on rationality</em>. They can talk about other things too, but the question should always be "What can X teach us about rationality?" and not "What can rationality teach us about X?"</p>oscar_cunninghambBFLpfKCgndBw9iGp2012-06-02T19:25:37.362Z[META] Recent Posts for Discussion and Main
https://lw2.issarice.com/posts/k56pPwSSBfxcRfYFL/meta-recent-posts-for-discussion-and-main
<p>This link</p>
<p><a href="/r/all/recentposts">http://lesswrong.com/r/all/recentposts</a></p>
<p>gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.</p>oscar_cunninghamk56pPwSSBfxcRfYFL2012-05-13T10:42:39.986ZRationality Quotes April 2012
https://lw2.issarice.com/posts/EbQWLTuxwJpJFdebn/rationality-quotes-april-2012
<p><span style="font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 11px; text-align: justify;">Here's the new thread for posting quotes, with the usual rules:</span></p>
<ul>
<li><span style="font-family: Arial,Helvetica,sans-serif; font-size: 12px; line-height: 11px; text-align: justify;">Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)</span></li>
<li><span style="font-family: Arial,Helvetica,sans-serif; font-size: 12px; line-height: 11px; text-align: justify;">Do not quote yourself</span></li>
<li><span style="font-family: Arial,Helvetica,sans-serif; font-size: 12px; line-height: 11px; text-align: justify;">Do not quote comments/posts on LW/OB</span></li>
<li><span style="font-family: Arial,Helvetica,sans-serif; font-size: 12px; line-height: 11px; text-align: justify;">No more than 5 quotes per person per monthly thread, please.</span></li>
</ul>oscar_cunninghamEbQWLTuxwJpJFdebn2012-04-03T00:42:04.135ZHarry Potter and the Methods of Rationality discussion thread, part 11
https://lw2.issarice.com/posts/8yEdpDpGgvDWHeodM/harry-potter-and-the-methods-of-rationality-discussion-17
<div>
<p><strong>EDIT: New discussion thread <a href="/lw/b5s/harry_potter_and_the_methods_of_rationality/">here</a>.</strong></p>
<p> </p>
<p>This is a new thread to discuss Eliezer Yudkowsky's <em><a href="http://www.fanfiction.net/s/5782108/1/">Harry Potter and the Methods of Rationality</a></em> and anything related to it. With two chapters posted recently, the previous thread has very quickly reached 500 comments. The latest chapter as of 17th March 2012 is <a href="http://www.fanfiction.net/s/5782108/79/Harry_Potter_and_the_Methods_of_Rationality">Ch. 79</a>.</p>
<p>There is now a site dedicated to the story at <a href="http://hpmor.com/">hpmor.com</a>, which is now the place to go to find the <a href="http://hpmor.com/notes/">author's notes</a> and all sorts of other goodies. AdeleneDawner has kept an <a href="http://www.evernote.com/pub/adelenedawner/Eliezer">archive of Author's Notes</a>. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)</p>
<p><br />The first 5 discussion threads are on the main page under the <a href="/tag/harry_potter/">harry_potter tag</a>. Threads 6 and on (including this one) are in the <a href="/r/discussion/tag/harry_potter/">discussion section</a> using its separate tag system. Also: <a href="/lw/2ab/harry_potter_and_the_methods_of_rationality">one</a>, <a href="/lw/2ie/harry_potter_and_the_methods_of_rationality">two</a>, <a href="/lw/2nm/harry_potter_and_the_methods_of_rationality">three</a>, <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality">four</a>, <a href="/lw/30g/harry_potter_and_the_methods_of_rationality">five</a>, <a href="/r/discussion/lw/364/harry_potter_and_the_methods_of_rationality/">six</a>, <a href="/r/discussion/lw/3rb/harry_potter_and_the_methods_of_rationality/">seven</a>, <a href="/lw/797/harry_potter_and_the_methods_of_rationality/">eight</a>, <a href="/lw/7jd/harry_potter_and_the_methods_of_rationality/">nine</a>, <a href="/lw/ams/harry_potter_and_the_methods_of_rationality/">ten</a>.<br /><br />As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.<br /><br /><strong>Spoiler Warning</strong>: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality/2v1l">More specifically</a>:</p>
<blockquote>
<p>You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).<br /><br />If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.</p>
</blockquote>
</div>oscar_cunningham8yEdpDpGgvDWHeodM2012-03-17T09:41:23.620ZHarry Potter and the Methods of Rationality discussion thread, part 10
https://lw2.issarice.com/posts/LKFR5pBA3bBkERDxL/harry-potter-and-the-methods-of-rationality-discussion-2
<div>
<p><strong>(The HPMOR discussion thread after this one is <a href="/r/discussion/lw/axe/harry_potter_and_the_methods_of_rationality/">here</a>.)</strong></p>
<p>This is a new thread to discuss Eliezer Yudkowsky's <em><a href="http://www.fanfiction.net/s/5782108/1/">Harry Potter and the Methods of Rationality</a></em> and anything related to it. There haven't been any chapters recently, but it looks like there are a bunch in the pipeline and the old thread is nearing 700 comments. The latest chapter as of 7th March 2012 is <a href="http://www.fanfiction.net/s/5782108/77/Harry_Potter_and_the_Methods_of_Rationality">Ch. 77</a>.</p>
<p>There is now a site dedicated to the story at <a href="http://hpmor.com/">hpmor.com</a>, which is now the place to go to find the <a href="http://hpmor.com/notes/">author's notes</a> and all sorts of other goodies. AdeleneDawner has kept an <a href="http://www.evernote.com/pub/adelenedawner/Eliezer">archive of Author's Notes</a>.</p>
<p><br />The first 5 discussion threads are on the main page under the <a href="/tag/harry_potter/">harry_potter tag</a>. Threads 6 and on (including this one) are in the <a href="/r/discussion/tag/harry_potter/">discussion section</a> using its separate tag system. Also: <a href="/lw/2ab/harry_potter_and_the_methods_of_rationality">one</a>, <a href="/lw/2ie/harry_potter_and_the_methods_of_rationality">two</a>, <a href="/lw/2nm/harry_potter_and_the_methods_of_rationality">three</a>, <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality">four</a>, <a href="/lw/30g/harry_potter_and_the_methods_of_rationality">five</a>, <a href="/r/discussion/lw/364/harry_potter_and_the_methods_of_rationality/">six</a>, <a href="/r/discussion/lw/3rb/harry_potter_and_the_methods_of_rationality/">seven</a>, <a href="/lw/797/harry_potter_and_the_methods_of_rationality/">eight</a>, <a href="/lw/7jd/harry_potter_and_the_methods_of_rationality/">nine</a>.<br /><br />As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.<br /><br /><strong>Spoiler Warning</strong>: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality/2v1l">More specifically</a>:</p>
<blockquote>
<p>You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).<br /><br />If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.</p>
</blockquote>
</div>oscar_cunninghamLKFR5pBA3bBkERDxL2012-03-07T16:46:49.993ZOpen thread, November 2011
https://lw2.issarice.com/posts/s9oafmAMQKwEWP4vb/open-thread-november-2011
<p>Discuss things here if they don't deserve a post in <a href="/promoted/">Main</a> or <a href="/r/discussion/new/">Discussion</a>.</p>
<p>If a topic is worthy and receives much discussion, make a new thread for it.</p>oscar_cunninghams9oafmAMQKwEWP4vb2011-11-02T18:19:16.423ZLessWrong running very slow?
https://lw2.issarice.com/posts/KneLbHhkMfxtkXRzY/lesswrong-running-very-slow
<p>LessWrong pages are taking a long time to load. Today they are especially bad, to the point where if I make a comment, the page times out before it is posted. Is this true for other people? Do those who run the site know the cause? Can it be fixed?</p>
<p> </p>
<p>EDIT: Confirmed: It's not just me, it's probably everyone.</p>
<p>EDIT2: I also apologise for my appalling grammar in the title.</p>oscar_cunninghamKneLbHhkMfxtkXRzY2011-09-30T20:15:32.128ZWord Pronunciation
https://lw2.issarice.com/posts/fYyLstMBTP5WyhdTZ/word-pronunciation
<p>How does one pronounce these words?</p>
<ul>
<li>Modus Ponens (EDIT: <a href="/lw/7k1/word_pronunciation/4sx1">Pronunciation given here.</a>)</li>
<li>Modus Tollens (<a href="/lw/7k1/word_pronunciation/4sx1">Here</a>)</li>
<li>Hofstadter (as in Douglas) (<a href="/r/discussion/lw/7k1/word_pronunciation/4sx1">Here</a>)</li>
<li>Jaynes (as in Edwin) (<a href="/r/discussion/lw/7k1/word_pronunciation/4sx1">Here</a>)</li>
<li>Parfit (as in Derek) (<a href="/r/discussion/lw/7k1/word_pronunciation/4sx1">Here</a>)</li>
<li>Deutsch (as in David) (<a href="/lw/7k1/word_pronunciation/4sx8">Here</a>)</li>
<li>Thiel (as in Peter) (<a href="/r/discussion/lw/7k1/word_pronunciation/4t1i">Here</a>)</li>
<li>Muehlhauser (as in Luke (as in <a href="/user/lukeprog">lukeprog</a>)) (<a href="/r/discussion/lw/7k1/word_pronunciation/4t29">Here</a>)</li>
</ul>
<p>Thanks.</p>
<p>(If there are any other words commonly used here that you don't know how to pronounce, mention them in the comments and I'll copy them into the post, to make a handy reference.)</p>oscar_cunninghamfYyLstMBTP5WyhdTZ2011-09-10T14:25:58.162ZHarry Potter and the Methods of Rationality discussion thread, part 9
https://lw2.issarice.com/posts/WQ7XMjqvuRRj8nkpu/harry-potter-and-the-methods-of-rationality-discussion-3
<div>
<p><strong>(The HPMOR discussion thread after this one is <a href="/lw/ams/harry_potter_and_the_methods_of_rationality/">here</a>.)</strong></p>
<p>The previous thread is over the 500-comment threshold, so let's start a new <em><a href="http://www.fanfiction.net/s/5782108/1/">Harry Potter and the Methods of Rationality</a></em> discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it. The latest chapter as of 09/09/2011 is <a href="http://www.fanfiction.net/s/5782108/77/Harry_Potter_and_the_Methods_of_Rationality">Ch. 77</a>.</p>
<p><br />The first 5 discussion threads are on the main page under the <a href="/tag/harry_potter/">harry_potter tag</a>. Threads 6 and on (including this one) are in the <a href="/r/discussion/tag/harry_potter/">discussion section</a> using its separate tag system. Also: <a href="/lw/2ab/harry_potter_and_the_methods_of_rationality">one</a>, <a href="/lw/2ie/harry_potter_and_the_methods_of_rationality">two</a>, <a href="/lw/2nm/harry_potter_and_the_methods_of_rationality">three</a>, <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality">four</a>, <a href="/lw/30g/harry_potter_and_the_methods_of_rationality">five</a>, <a href="/r/discussion/lw/364/harry_potter_and_the_methods_of_rationality/">six</a>, <a href="/r/discussion/lw/3rb/harry_potter_and_the_methods_of_rationality/">seven</a>, <a href="/lw/797/harry_potter_and_the_methods_of_rationality/">eight</a>. The <a href="http://www.fanfiction.net/u/2269863/Less_Wrong">fanfiction.net author page</a> is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an <a href="http://www.evernote.com/pub/adelenedawner/Eliezer">archive of Author's Notes</a>.<br /><br />As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.<br /><br /><strong>Spoiler Warning</strong>: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. <a href="/lw/2tr/harry_potter_and_the_methods_of_rationality/2v1l">More specifically</a>:</p>
<blockquote>
<p>You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).<br /><br />If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.</p>
</blockquote>
</div>oscar_cunninghamWQ7XMjqvuRRj8nkpu2011-09-09T13:29:52.355ZWriting guide?
https://lw2.issarice.com/posts/PA7e2PpKYq8noJGdB/writing-guide
<p>I remember seeing a short writing guide written by Eliezer for use by the SIAI, but now I can't find it. Anyone have a link for it?</p>oscar_cunninghamPA7e2PpKYq8noJGdB2011-07-26T07:06:29.801ZSignatures for posts
https://lw2.issarice.com/posts/SzSwn8qvW5Z5i2ad6/signatures-for-posts
<p>Kaj_Sotala suggested <a href="/lw/6lm/community_norm_question_brief_text_ad_signatures/">here</a> that some people may wish to add signatures to posts (i.e. top-level posts, not comments (hell no!)) to link to the author's homepage and such, and this idea was supported (<a href="/lw/6lm/community_norm_question_brief_text_ad_signatures/4hxk">poll</a>). I <a href="/lw/6lm/community_norm_question_brief_text_ad_signatures/4hy1">suggested</a> that we make such signatures into an official looking standard template, and this suggestion was upvoted. This post contains my design for such a template. The last time I learnt HTML was back in 2003 when I was about eleven, so this is probably bad code by modern standards, but I'm hoping that people will criticise until we have a version that looks good on all browsers.</p>
<p>Code (Improved by Dreaded_Anomaly):</p>
<blockquote>
<p><div style="background:#f7f7f8; display:table;"> <br /> <a href="/user/<strong>Your user name goes here</strong>/submitted/"><img style="float: left; margin: 5px;" src="<strong>URL of the image goes here</strong>" alt="<strong>Your name</strong>'s posts" width="64" height="64" /></a><strong>Text goes here (links entered as usual) </strong><br /> </div></p>
</blockquote>
<p>Which produces a signature like the one at the bottom of this post. To use the code in the article editor press the HTML button and enter it at the bottom of the page. (Note that having your image be 64*64 to begin with will mean that it doesn't need to be scaled. Scaling sometimes makes images look weird or pixelly.)</p>
<p>I suggest that everyone who uses such a signature writes about themselves formally and in the third person. Think of an "About the Author" section on the dust-cover of a book. This will raise the status of the site by making it reminiscent of an edited publication.</p>
<div style="background:#f7f7f8; display:table;"><a href="/user/Oscar_Cunningham/submitted/"><img style="float: left; margin: 5px;" src="http://i157.photobucket.com/albums/t43/Macbi/LWAvatar-1.jpg" alt="Oscar's posts" width="64" height="64" /></a>Oscar Cunningham is a Mathematics student at <a href="http://www.trin.cam.ac.uk/">Trinity College</a>, Cambridge (UK). Interests include probability, decision theory, and <a href="http://en.wikipedia.org/wiki/Ultimate_%28sport%29">Ultimate</a>.</div>oscar_cunninghamSzSwn8qvW5Z5i2ad62011-07-11T18:45:13.050ZRationality Quotes: June 2011
https://lw2.issarice.com/posts/DxzJY2rpsYYSFZXqb/rationality-quotes-june-2011
<p>Y'all know the rules:</p>
<ul style="margin: 10px 2em; list-style-type: disc; list-style-position: outside; padding: 0px;">
<li>Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)</li>
<li>Do not quote yourself.</li>
<li>Do not quote comments/posts on LW/OB.</li>
<li>No more than 5 quotes per person per monthly thread, please.</li>
</ul>oscar_cunninghamDxzJY2rpsYYSFZXqb2011-06-01T08:17:07.695Z