Counterfactual outcome state transition parameters

2018-07-27T21:13:12.014Z · score: 39 (13 votes)
Comment by anders_h on The New Riddle of Induction: Neutral and Relative Perspectives on Color · 2017-12-02T18:56:46.422Z · score: 5 (2 votes) · LW · GW

In my view, "the problem of induction" is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty. This is true, but not very interesting. We should instead spend our energy thinking about how to make better predictions, and how we can evaluate how much confidence to have in our predictions. I agree with you that the fields you mention have made immense progress on that.

I am not convinced that computer programs are immune to Goodman's point. AI agents have ontologies, and their predictions will depend on those ontologies. Two agents with different ontologies but the same data can reach different conclusions, and unless they have access to their source code, it is not obvious that they will be able to figure out which one is right.

Consider two humans who are both writing computer functions. Both the "green" and the "grue" programmer will believe that their perspective is the neutral one, and therefore write a simple program that takes light wavelength as input and outputs a constant color predicate. The difference is that one of them will be surprised after time t, when the computer suddenly starts outputting colors that differ from the programmer's experienced qualia. At that stage, we know which one of the programmers was wrong, but the point is that it might not be possible to predict this in advance.
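To make the symmetry concrete, here is a toy sketch (my addition; the wavelength cutoffs and the switch time t are made-up placeholders):

```python
import time

# The critical time t, as an arbitrary placeholder Unix timestamp.
T = 1_900_000_000

def color_green(wavelength_nm):
    """The predicate as the 'green' programmer writes it: a constant
    mapping from wavelength to color, with no time dependence."""
    return "green" if 495 <= wavelength_nm <= 570 else "blue"

def color_grue(wavelength_nm, now=None):
    """The 'grue' programmer's predicate, as seen from the 'green'
    ontology: the mapping flips at time t. In the grue ontology, this
    is the simple constant function and color_green is the one that flips."""
    now = time.time() if now is None else now
    base = color_green(wavelength_nm)
    if now < T:
        return base
    return {"green": "blue", "blue": "green"}[base]

# Before t the two predicates agree on every input, so no observation
# made before t can tell the two programmers apart.
assert color_grue(520, now=T - 1) == color_green(520)
assert color_grue(520, now=T + 1) != color_green(520)
```

The point of the sketch is that each predicate is the simple, time-independent one in its own ontology and the time-dependent one in the other's, so simplicity alone cannot settle who is right before t.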

Comment by anders_h on The New Riddle of Induction: Neutral and Relative Perspectives on Color · 2017-12-02T17:20:29.841Z · score: 6 (3 votes) · LW · GW

I am not sure I fully understand this comment, or why you believe my argument is circular. It is possible that you are right, but I would very much appreciate a more thorough explanation.

In particular, I am not "concluding" that humans were produced by an evolutionary process, but rather using it as background knowledge. Moreover, this statement seems uncontroversial enough that I can bring it in as a premise without having to argue for it.

Since "humans were produced by an evolutionary process" is a premise and not a conclusion, I don't understand what you mean by circular reasoning.

The New Riddle of Induction: Neutral and Relative Perspectives on Color

2017-12-02T16:15:08.912Z · score: 14 (7 votes)
Comment by anders_h on Odds ratios and conditional risk ratios · 2017-02-02T15:03:19.943Z · score: 0 (0 votes) · LW · GW

Update: The editors of the Journal of Clinical Epidemiology have now rejected my second letter to the editor, and thus helped prove Eliezer's point about four layers of conversation.

Comment by anders_h on Odds ratios and conditional risk ratios · 2017-01-25T06:02:48.605Z · score: 0 (0 votes) · LW · GW

"Why do you think two senior biostats guys would disagree with you if it was obviously wrong? I have worked with enough academics to know that they are far far from infallible, but curious on your analysis of this question."

Good question. I think a lot of this is due to a cultural difference between those of us who have been trained in the modern counterfactual causal framework, and an older generation of methodologists who felt the old framework worked well enough for them and never bothered to learn about counterfactuals.

Comment by anders_h on Odds ratios and conditional risk ratios · 2017-01-25T03:55:43.955Z · score: 1 (1 votes) · LW · GW

I wrote this on my personal blog; I was reluctant to post this to Less Wrong since it is not obviously relevant to the core interests of LW users. However, I concluded that some of you may find it interesting as an example of how the academic publishing system is broken. It is relevant to Eliezer's recent Facebook comments about building an intellectual edifice.

Odds ratios and conditional risk ratios

2017-01-25T03:55:04.420Z · score: 4 (5 votes)

Comment by anders_h on Is Caviar a Risk Factor For Being a Millionaire? · 2017-01-25T02:32:53.938Z · score: 0 (0 votes) · LW · GW

VortexLeague: Can you be a little more specific about what kind of help you need?

A very short, general introduction to Less Wrong is available at http://lesswrong.com/about/

Essentially, Less Wrong is a reddit-type forum for discussing how we can make our beliefs more accurate.

Comment by anders_h on Choosing prediction over explanation in psychology: Lessons from machine learning · 2017-01-18T13:58:16.346Z · score: 1 (1 votes) · LW · GW

Thank you for the link, that is a very good presentation and it is good to see that ML people are thinking about these things.

There certainly are ML algorithms that are designed to make the second kind of prediction, but generally they only work if you have a correct causal model.

It is possible that there are some ML algorithms that try to discover the causal model from the data. For example, /u/IlyaShpitser works on these kinds of methods. However, these methods only work to the extent that they are able to discover the correct causal model, so it seems disingenuous to claim that we can ignore causality and focus on "prediction".

Comment by anders_h on Choosing prediction over explanation in psychology: Lessons from machine learning · 2017-01-18T01:23:22.918Z · score: 2 (2 votes) · LW · GW

I skimmed this paper and plan to read it in more detail tomorrow. My first thought is that it is fundamentally confused. I believe the confusion comes from the fact that the word "prediction" is used with two separate meanings: Are you interested in predicting Y given an observed value of X (i.e. Pr[Y | X=x]), or are you interested in predicting Y given an intervention on X (i.e. Pr[Y | do(X=x)])?

The first of these may be useful for certain purposes, but if you intend to use the research for decision making and optimization (i.e. you want to intervene to set the value of X in order to optimize Y), then you really need the second type of predictive ability, in which case you need to extract causal information from the data. This is only possible if you have a randomized trial, or if you have a correct causal model.
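To make the distinction concrete, here is a minimal simulation (my own sketch, with made-up numbers) in which a confounder U makes X look strongly predictive of Y even though intervening on X does nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A confounder U (e.g. disease severity) affects both treatment and outcome.
U = rng.binomial(1, 0.5, n)
X = rng.binomial(1, 0.2 + 0.6 * U)   # sicker patients are treated more often
Y = rng.binomial(1, 0.1 + 0.5 * U)   # X has no effect on Y whatsoever

# Observational prediction Pr[Y=1 | X=x]: X looks strongly "predictive".
print(Y[X == 1].mean())  # ~0.50
print(Y[X == 0].mean())  # ~0.20

# Interventional quantity Pr[Y=1 | do(X=x)], via backdoor adjustment on U.
# This works only because we know the correct causal model (U is the only
# confounder, and it is measured).
for x in (0, 1):
    p = sum(Y[(X == x) & (U == u)].mean() * (U == u).mean() for u in (0, 1))
    print(p)  # ~0.35 for both x: intervening on X changes nothing
```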

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

In the conclusions, the authors write:

"By contrast, a minority of statisticians (and most machine learning researchers) belong to the “algorithmic modeling culture,” in which the data are assumed to be the result of some unknown and possibly unknowable process, and the primary goal is to find an algorithm that results in the same outputs as this process given the same inputs. "

The definition of "algorithmic modeling culture" is somewhat circular, as it just moves the ambiguity surrounding "prediction" to the word "input". If by "input" they mean that the algorithm observes the value of an independent variable and makes a prediction for the dependent variable, then you are talking about a true prediction model, which may be useful for certain purposes (diagnosis, prognosis, etc.) but which is unusable if you are interested in optimizing the outcome.

If you instead claim that the "input" can also include observations about interventions on a variable, then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial), or unless you have a correct causal model.

Machine learning algorithms are not magic, they do not solve the problem of confounding unless they have a correct causal model. The fact that these algorithms are good at predicting stuff in observational datasets does not tell you anything useful for the purposes of deciding what the optimal value of the independent variable is.

In general, this paper is a very good example to illustrate why I keep insisting that machine learning people need to urgently read up on Pearl, Robins or Van der Laan. The field is in danger of falling into the same failure mode as epidemiology, i.e. essentially ignoring the problem of confounding. In the case of machine learning, this may be more insidious because the research is dressed up in fancy math and therefore looks superficially more impressive.

Comment by anders_h on Triple or nothing paradox · 2017-01-05T23:59:28.417Z · score: 0 (0 votes) · LW · GW

Thanks for catching that, I stand corrected.

Comment by anders_h on Triple or nothing paradox · 2017-01-05T22:52:14.657Z · score: 3 (3 votes) · LW · GW

The rational choice depends on your utility function. Your utility function is unlikely to be linear with money. For example, if your utility function is log(X), then you will accept the first bet, be indifferent to the second bet, and reject the third bet. Any risk-averse utility function (i.e. any monotonically increasing function with negative second derivative) reaches a point where the agent stops playing the game.
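As a concrete sketch (assuming, for illustration, a 50/50 bet that returns triple the stake or nothing, with the three bets being stakes of 1/4, 1/2 and 3/4 of current wealth; these specifics are my assumptions, not part of the original problem):

```python
from math import log

def expected_log_utility_change(f):
    """Expected change in log-utility from staking a fraction f of current
    wealth on a bet that returns 3x the stake with probability 1/2 and
    nothing otherwise (the assumed setup)."""
    return 0.5 * log(1 + 2 * f) + 0.5 * log(1 - f)

for f in (0.25, 0.50, 0.75):
    print(f, round(expected_log_utility_change(f), 3))
# 0.25  0.059  -> accept
# 0.50  0.0    -> indifferent
# 0.75 -0.235  -> reject
```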

A VNM-rational agent with a linear utility function over money will indeed always take this bet. From this, we can infer that linear utility functions do not represent the utility of humans.

(EDIT: The comments by Satt and AlexMennen are both correct, and I thank them for the corrections. I note that they do not affect the main point, which is that rational agents with standard utility functions over money will eventually stop playing this game)

Comment by anders_h on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T17:36:32.640Z · score: 2 (2 votes) · LW · GW

"Because I didn't perceive a significant disruption to the event, I was mentally bucketing you with people I know who severely dislike children and would secretly (or not so secretly) prefer that they not attend events like this at all; or that they should do so only if able to remain silent (which in practice means not at all.) I suspect Anders_H had the same reaction I did."

Just to be clear, I did not attend Solstice this year, and I was mentally reacting to a similar complaint that was made after last year's Solstice event. At last year's event, I did not perceive the child to be at all noteworthy as a disturbance. From reading this thread, it seems that the situation may well have been different this year, and that my reaction might have been different if I had been there. I probably should not have commented without being more familiar with what happened at this year's event.

I also note that my thinking around this may very well be biased, as I used to live in a group house with this child.

Comment by anders_h on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-22T06:00:51.168Z · score: 4 (7 votes) · LW · GW

While I understand that some people may feel this way, I very much hope that this sentiment is rare. The presence of young children at the event only adds to the sense of belonging to a community, which is an important part of what we are trying to "borrow" from religions.

Comment by anders_h on Feature Wish List for LessWrong · 2016-12-19T07:57:05.056Z · score: 2 (2 votes) · LW · GW

I'd like each user to have their own subdomain (i.e. such that my top-level posts can be accessed either from Anders_h.lesswrong.com or from LW discussion). If possible it would be great if users could customize the design of their subdomain, such that posts look different when accessed from LW discussion.

Comment by anders_h on This one equation may be the root of intelligence · 2016-12-12T02:56:28.887Z · score: 3 (3 votes) · LW · GW

Given that this was posted to LW, you'd think this link would be about a different equation...

Is Caviar a Risk Factor For Being a Millionaire?

2016-12-09T16:27:14.760Z · score: 40 (41 votes)
Comment by anders_h on Open thread, Nov. 21 - Nov. 27 - 2016 · 2016-11-22T20:58:25.516Z · score: 7 (7 votes) · LW · GW

The one-year embargo on my doctoral thesis has been lifted, it is now available at https://dash.harvard.edu/bitstream/handle/1/23205172/HUITFELDT-DISSERTATION-2015.pdf?sequence=1 . To the best of my knowledge, this is the first thesis to include a Litany of Tarski in the introduction.

Comment by anders_h on On Trying Not To Be Wrong · 2016-11-11T22:08:48.111Z · score: 5 (5 votes) · LW · GW

Upvoted. I'm not sure how to phrase this without sounding sycophantic, but here is an attempt: Sarah's blog posts and comments were always top quality, but the last couple of posts seem like the beginning of something important, almost comparable to when Scott moved from squid314 to Slatestarcodex.

Comment by anders_h on Open Thread, Sept 5. - Sept 11. 2016 · 2016-09-07T23:07:05.720Z · score: 7 (7 votes) · LW · GW

Today, I uploaded a sequence of three working papers to my website at https://andershuitfeldt.net/working-papers/

This is an ambitious project that aims to change fundamental things about how epidemiologists and statisticians think about choice of effect measure, effect modification and external validity. A link to an earlier version of this manuscript was posted to Less Wrong half a year ago; the manuscript has since been split into three parts and improved significantly. This work was also presented in poster form at EA Global last month.

I want to give a heads up before you follow the link above: Compared to most methodology papers, the mathematics in these manuscripts is definitely unsophisticated, almost trivial. I do however believe that the arguments support the conclusions, and that those conclusions have important implications for applied statistics and epidemiology.

I would very much appreciate any feedback. I invoke "Crocker's Rules" (see http://sl4.org/crocker.html) for all communication regarding these papers. Briefly, this means that I ask you, as a favor, to please communicate any disagreement as bluntly and directly as possible, without regard to social conventions or to how such directness may affect my personal state of mind.

I have made a standing offer to give a bottle of Johnnie Walker Blue Label to anyone who finds a flaw in the argument that invalidates the paper, and a bottle of 10-year-old single malt Scotch to anyone who finds a significant but fixable error, or makes a suggestion that substantially improves the manuscript.

If you prefer giving anonymous feedback, this can be done through the link http://www.admonymous.com/effectmeasurepaper .

Link: The Economist on Paperclip Maximizers

2016-06-30T12:40:33.942Z · score: 5 (6 votes)
Comment by anders_h on Secret Rationality Base in Europe · 2016-06-17T19:47:36.910Z · score: 4 (6 votes) · LW · GW

This is almost certainly a small minority view, but from my perspective as a European based in the Bay Area who may be moving back to Europe next summer, the most important aspect would be geographical proximity to a decent university where staff and faculty can get away with speaking only English.

Comment by anders_h on Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance · 2016-06-14T17:57:25.679Z · score: 1 (1 votes) · LW · GW

What do you mean by "no risk"? This sentence seems to imply that your decisions are influenced by the sunk cost fallacy.

Try to imagine an alien who has been teleported into your body, who is trying to optimize your wealth. The fact that the coins were worth a third of their current price 18 months ago would not factor into the alien's decision.

Comment by anders_h on Open thread, Jun. 13 - Jun. 19, 2016 · 2016-06-14T01:05:37.629Z · score: 4 (4 votes) · LW · GW

There may be an ethically relevant distinction between a rule that tells you to avoid being the cause of bad things, and a rule that says you should cause good things to happen. However, I am not convinced that causality is relevant to this distinction. As far as I can tell, these two concepts are both about causality. We may be using words differently; could you explain why you think this distinction is about causality?

Comment by anders_h on The Valentine’s Day Gift That Saves Lives · 2016-05-18T17:30:51.862Z · score: 1 (1 votes) · LW · GW

It would seem that the existence of such contractors follows logically from the fact that you are able to hire people despite requiring contractors to volunteer 2/3 of their time.

Comment by anders_h on Open Thread May 16 - May 22, 2016 · 2016-05-17T21:07:59.908Z · score: 2 (4 votes) · LW · GW

The Economist published a fascinating blog entry where they use evidential decision theory to establish that tattoo removal results in savings to the prison system. See http://www.economist.com/blogs/freeexchange/2014/08/tattoos-jobs-and-recidivism . Temporally, this blog entry corresponds roughly to the time I lost my respect for the Economist. You can draw your own causal conclusions from this.

Comment by anders_h on How do you learn Solomonoff Induction? · 2016-05-17T18:21:35.480Z · score: 8 (8 votes) · LW · GW

Solomonoff Induction is uncomputable, and implementing it will not be possible even in principle. It should be understood as an ideal which you should try to approximate, rather than something you can ever implement.

Solomonoff Induction is just Bayesian epistemology with a prior determined by information-theoretic complexity. As an imperfect agent trying to approximate it, you will get most of your value from simply grokking Bayesian epistemology. After you've done that, you may want to spend some time thinking about the philosophy of science of setting priors based on information-theoretic complexity.
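Here is a toy sketch of that last idea; the description lengths are hand-picked stand-ins for real program lengths, not an actual universal prior:

```python
# Each "hypothesis" is a rule predicting an infinite bit sequence, paired
# with a hand-picked description length in bits (standing in for the
# length of the shortest program that computes it).
hypotheses = {
    "all zeros":              (3,  lambda i: 0),
    "alternating":            (5,  lambda i: i % 2),
    "alternating, then ones": (18, lambda i: i % 2 if i < 4 else 1),
}

data = [0, 1, 0, 1]  # observed bits

posterior = {}
for name, (length, rule) in hypotheses.items():
    prior = 2.0 ** -length  # complexity prior: shorter programs more likely
    consistent = all(rule(i) == bit for i, bit in enumerate(data))
    posterior[name] = prior * (1.0 if consistent else 0.0)

z = sum(posterior.values())
for name, p in posterior.items():
    print(name, p / z)
# "all zeros" is falsified; both remaining hypotheses fit the data, but the
# shorter one ("alternating") gets almost all of the posterior mass.
```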

Comment by anders_h on Lesswrong 2016 Survey · 2016-03-26T22:10:21.611Z · score: 42 (42 votes) · LW · GW

I took the survey

Comment by anders_h on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-21T17:28:37.253Z · score: 2 (2 votes) · LW · GW

Thanks. Good points. Note that many of those words are already established in the literature with the same meaning. For the particular example of "doomed", this is the standard term for this concept, and was introduced by Greenland and Robins (1986). I guess I could instead use "response type 1", but the word "doomed" will be much more effective at pointing to the correct concept, particularly for people who are familiar with the previous literature.

The only new term I introduce is "flip". I also provide a new definition of effect equality, and it therefore seems correct to use quotation marks in the new definition. Perhaps I should remove the quotation marks for everything else since I am using terms that have previously been introduced.

Comment by anders_h on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-20T17:00:36.339Z · score: 0 (0 votes) · LW · GW

"Do you mean probability instead of probably?"

Yes. Thanks for noticing. I changed that sentence after I got the rejection letter (in order to correct a minor error that the reviewers correctly pointed out), and the error was introduced at that time. So that is not what they were referring to.

"If the reviewers don't succeed in understanding what you are saying, you might have explained yourself in casual language but still failed."

I agree, but I am puzzled by why they would have misunderstood. I spent a lot of effort over several months trying to be as clear as possible. Moreover, the ideas are very simple: The definitions are the only real innovation: Once you have the definitions, the proofs are trivial and could have been written by a high school student. If the reviewers don't understand the basic idea, I will have to substantially update my beliefs about the quality of my writing. This is upsetting because being a bad writer will make it a lot harder to succeed in academia. The primary alternative hypotheses for why they misunderstood are either (1) that they are missing some key fundamental assumption that I take for granted or (2) that they just don't want to understand.

Comment by anders_h on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-20T05:32:34.388Z · score: 2 (2 votes) · LW · GW

Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.

I am also very frustrated with the peer review system. The reviewers found some minor errors, and some of their other comments were helpful in the sense that they reveal which parts of the paper are most likely to be misunderstood. However, on the whole, the comments do not change my belief in the soundness of the idea, and in my view they mostly show that the reviewers simply didn’t understand what I was saying.

One comment does stand out, and I’ve spent a lot of energy today thinking about its implications: Reviewer 3 points out that my language is “too casual”. I would have had no problem accepting criticism that my language is ambiguous, imprecise, overly complicated, grammatically wrong or idiomatically weird. But too casual? What does that even mean? I have trouble interpreting the sentence to mean anything other than an allegation that I fail at a signaling game where the objective is to demonstrate impressiveness by using an artificially dense and obfuscating academic language.

From my point of view, “understanding” something means that you are able to explain it in a casual language. When I write a paper, my only objective is to allow the reader to understand what my conclusions are and how I reached them. My choice of language is optimized only for those objectives, and I fail to understand how it is even possible for it to be “too casual”.

Today, I feel very pessimistic about the state of academia and the institution of peer review. I feel stronger allegiance to the rationality movement than ever, as my ideological allies in what seems like a struggle about what it means to do science. I believe it was Tyler Cowen or Alex Tabarrok who pointed out that the true inheritors of intellectuals like Adam Smith are not people publishing in academic journals, but bloggers who write in a casual language. I can't find the quote, but today it rings more true than ever.

I understand that I am interpreting the reviewers' choice of words in a way that is strongly influenced both by my disappointment in being rejected, and by my pre-existing frustration with the state of academia and peer review. I would very much appreciate it if anybody could steelman the sentence “the writing is too casual”, or otherwise help me reach a less biased understanding of what just happened.

The paper is available at https://rebootingepidemiology.files.wordpress.com/2016/03/effect-measure-paper-0317162.pdf . I am willing to send a link to the reviewers’ comments by private message to anybody who is interested in seeing it.

Comment by anders_h on Link: Evidence-Based Medicine Has Been Hijacked · 2016-03-16T22:43:42.576Z · score: 4 (4 votes) · LW · GW

I think the evidence for the effectiveness of statins is very convincing. The absolute risk reduction from statins will depend primarily on your individual baseline risk of coronary disease. From the information you have provided, I don't think your baseline risk is extraordinarily high, but it is also not negligible.

You will have to make a trade-off where the important considerations are (1) how bothered you are by the side-effects, (2) what absolute risk reduction you expect based on your individual baseline risk, and (3) the marginal price (in terms of side effects) that you are willing to pay for a slightly better chance of avoiding a heart attack. I am not going to tell you how to make that trade-off, but I would consider giving the medications a try simply because it is the only way to get information on whether you get any side effects, and if so, whether you find them tolerable.
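As a rough sketch of how consideration (2) plays out (the numbers are purely illustrative, and not a statement about any actual drug or patient):

```python
# Purely illustrative numbers: a relative risk reduction of ~25% is in the
# range commonly reported for statins, but the baseline risk below is made
# up for the example.
baseline_risk = 0.10             # assumed 10-year risk of a coronary event
relative_risk_reduction = 0.25   # assumed RRR from treatment

arr = baseline_risk * relative_risk_reduction  # absolute risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")     # ARR = 0.025, NNT = 40
# Halve the baseline risk and the ARR halves too, which is why the
# trade-off depends primarily on your individual baseline risk.
```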

(I am not licensed to practice medicine in the United States or on the internet, and this comment does not constitute medical advice)

Link: Evidence-Based Medicine Has Been Hijacked

2016-03-16T19:57:49.294Z · score: 17 (18 votes)
Comment by anders_h on If there was one element of statistical literacy that you could magically implant in every head, what would it be? · 2016-02-26T07:40:28.484Z · score: 0 (0 votes) · LW · GW

Why do you want to be able to do that? Do you mean that you want to be able to look at a spreadsheet and move around numbers in your head until you know what the parameter estimates are? If you have access to a statistical software package, this would not give you the ability to do anything you couldn't have done otherwise. However, that is obvious, so I am going to assume you are more interested in grokking some part of the underlying epistemic process. But if that is indeed your goal, the ability to do the parameter estimation in your head seems like a very low priority, almost more of a party trick than actually useful.

Comment by anders_h on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-09T22:38:10.445Z · score: 1 (1 votes) · LW · GW

I disagree with this. In my opinion QALYs are much superior to DALYs for reasons that are inherent to how the measures are defined. I wrote a Tumblr post in response to Slatestarscratchpad a few weeks ago; see http://dooperator.tumblr.com/post/137005888794/can-you-give-me-a-or-two-good-article-on-why .

Comment by anders_h on The Fable of the Burning Branch · 2016-02-08T18:51:30.203Z · score: 3 (11 votes) · LW · GW

Richard, I don't think Less Wrong can survive losing both Ilya and you in the same week. I hope both of you reconsider. Either way, we definitely need to see this as a wake-up call. This forum has been in decline for a while, but this week I definitely think it hit a breaking point.

Comment by anders_h on Lesswrong Survey - invitation for suggestions · 2016-02-08T18:49:04.965Z · score: 7 (7 votes) · LW · GW

How about asking "What is the single most important change that would make you want to participate more frequently on Less Wrong?"

This question would probably not be useful for the census itself, but it seems like a great opportunity to brainstorm.

Comment by anders_h on Disguised Queries · 2016-02-07T19:45:59.705Z · score: 0 (0 votes) · LW · GW

I run the Less Wrong meetup group in Palo Alto. Since we started announcing the events on Meetup.com, we often get a lot of guests who are interested in rationality but who have not read the LW sequences. I have an idea for an introductory session where we have the participants do a sorting exercise. Therefore, I am interested in getting 3D-printed versions of rubes, bleggs and other items referenced in this post.

Does anyone have any thoughts on how to do this cheaply? Is there sufficient interest in this to get a kickstarter running? I expect that these items may be of interest to other Less Wrong meetup groups, and possibly to CFAR workshops and/or schools?

Comment by anders_h on Clearing An Overgrown Garden · 2016-01-31T04:50:39.820Z · score: 5 (5 votes) · LW · GW

"Less Wrong doesn't seem 'overgrown' to me. It actually seems dried out and dying because the culture is so negative people don't want to post here. I believe Eliezer has talked about how whenever he posted something on LW, the comments would be full of people trying to find anything wrong with it."

"Overgrown" was probably a bad analogy, I tried too hard to reference the idea of well-kept gardens. What I was trying to say is that there are too many hostile elements who are making this website an unwelcoming place, by unnecessary criticism, ad hominem attacks and downvotes; and that those elements should have been removed from the community earlier. I actually think we agree on this.

"I think trying to impose strict new censorship rules and social control over communication is more likely to deal the death blow to this website than to help it. LessWrong really needs an injection of positive energy and purpose. In the absence of this, I expect LW to continue to decline."

OK, from reading this and other comments I accept that this was the weakest part of my post. Also, after re-reading the Wiki entry on Crocker's rule, I don't think I intended to suggest anything quite that extreme. Crocker's rules say that rudeness is acceptable simply in order to provide a precise and accurate signal of annoyance. This is certainly not what I had in mind.

I apologize for my incorrect usage of the term "Crocker's rules", and I recognize that this was probably not a good idea. I hope someone can come up with a policy that achieves the objectives I had in mind when I wrote that sentence.

Comment by anders_h on Clearing An Overgrown Garden · 2016-01-29T23:05:56.726Z · score: 4 (4 votes) · LW · GW

"Would there be a way for people who already maintain blogs elsewhere to cross-post to their LW subdomain? (Would this even be desirable?)"

We would have to discuss this with people who run blogs elsewhere to find out what solutions would work for them. My preferred solution would be for people to import their old blog posts, and then redirect their domain to the LW subdomain. I do not know whether outside bloggers would find this acceptable. In some cases we may also have to consider the question of advertisement revenues.

"Do you envision LW2 continuing to include applied rationality type posts? Does that work with 'everything should work towards Aumann agreement'?"

My apologies, I did not intend to declare that posts about applied rationality should be avoided. I guess my phrasing reveals my bias towards the "epistemic" part of this community rather than the "instrumental" side. My personal preference is to shift the community focus back towards epistemic rationality, but that is a separate discussion which I did not intend to raise here. The community should discuss this separately.

"users may not repeatedly bring up the same controversial discussion outside of their original context"

"How could we track this, other than relying on mods to be like 'ugh, this poster again'?"

There would have to be some moderator discretion on this issue. My personal view is that we should err on the side of allowing most content. This language was intended for extreme cases where the community consensus is clear that a line has been crossed, such as Eugine, AdvancedAtheist or Jim Donald.

"professionally edited rationality journal"

"Woah. Is this really a thing that MIRI could (resources permitting) just like ... do?"

Yes, they can certainly do this if they have the resources. Initially, academics may not take the journal seriously and it definitely will not be indexed in academic databases. If the quality is sufficiently high, it is conceivable that this may change.

Clearing An Overgrown Garden

2016-01-29T22:16:56.620Z · score: 15 (20 votes)
Comment by anders_h on Study partner matching thread · 2016-01-27T18:25:03.029Z · score: 5 (5 votes) · LW · GW

I'm a postdoctoral scholar at METRICS and I'd be happy to talk to you about this. Get in touch by e-mail or private message. Also, I'm giving a talk about a new research idea at the METRICS internal lab meeting this coming Monday at 12:00 at Stanford. You are welcome to attend if you want to meet the METRICS group (but the professors are probably going to be busy and may not have time to talk with you)

Comment by anders_h on Rationality Quotes Thread January 2016 · 2016-01-26T00:30:46.609Z · score: 2 (2 votes) · LW · GW

Whoops, my apologies. Thanks for noticing. Corrected.

Comment by anders_h on Rationality Quotes Thread January 2016 · 2016-01-25T22:09:46.502Z · score: 15 (17 votes) · LW · GW

This note is for readers who are unfamiliar with The_Lion:

This user is a troll who has been banned multiple times from Less Wrong. He is unwanted as a participant in this community, but we are apparently unable to prevent him from repeatedly creating new accounts. Administrators have extensive evidence for sockpuppetry and for abuse of the voting system. The fact that The_Lion's comment above is heavily upvoted is almost certainly entirely due to sockpuppetry. It does not reflect community consensus.

Comment by anders_h on Rationality Quotes Thread January 2016 · 2016-01-25T21:41:27.703Z · score: 5 (5 votes) · LW · GW

As expected, my karma fell by 38 points and my "positive percentage" fell from 97% to 92% shortly after leaving this comment.

Comment by anders_h on Rationality Quotes Thread January 2016 · 2016-01-25T18:26:51.467Z · score: 6 (10 votes) · LW · GW

Cloud Atlas is my favorite movie ever and I recommend it to anyone reading this. In fact, it is my opinion that it is one of the most important pieces of early 21st century art.

The downvote is however not for your bad taste in movies, but for intentionally misgendering Lana. More generally, you can consider it payback for your efforts to make Less Wrong an unwelcoming place. I care about this community, and you are doing your best to break it.

At this stage, I call for an IP ban.

Comment by anders_h on Map:Territory::Uncertainty::Randomness – but that doesn’t matter, value of information does. · 2016-01-24T19:34:21.276Z · score: 0 (0 votes) · LW · GW

"So thinking probabilities existing as 'things itself' taken to the extreme could lead one to the conclusion that one can't say much for example about single-case probabilities."

Thinking that probabilities can exist in the territory leads to no such conclusion. Thinking that probabilities exist only in the territory may lead to such a conclusion, but that is a strawman of the points that are being made.

It would be insane to deny that frequencies exist, or that they can be represented by a formal system derived from the Kolmogorov (or Cox) axioms.

It would also be insane to deny that beliefs exist, or that they can be represented by a formal system derived from the Kolmogorov (or Cox) axioms.

I think this confusion would all go away if people stopped worrying about the semantic meaning of the word "probability" and just specified whether they are talking about frequency or belief. It puzzles me when people insist that the formal system can only be isomorphic to one thing, and it is truly bizarre when they take sides in a holy war over which of those things it "really" represents. A rational decision maker genuinely needs both the concept of frequency and the concept of belief.

For instance, an agent may need to reason about the proportion (frequency) P of Everett branches in which he survives if he makes a decision, and also about how certain he is about his estimate of that probability. Let's say his beliefs about the probability P follow a beta distribution, or any other distribution bounded by 0 and 1. In order to make a decision, he may do something like calculate a new probability Q, which is the expected value of P under his prior. You can interpret Q as the agent's beliefs about the probability of dying, but it also has elements of frequency.
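A sketch of that calculation, with arbitrary illustrative Beta parameters:

```python
from scipy.stats import beta

# Arbitrary illustrative parameters for the agent's beliefs about P.
prior = beta(8, 2)

Q = prior.mean()  # expected value of P under the prior: a / (a + b)
print(Q)          # 0.8 -- the single number the agent uses to decide

# The same Q can encode very different states of knowledge:
print(beta(8, 2).std())      # ~0.12: quite uncertain about the frequency P
print(beta(800, 200).std())  # ~0.013: nearly certain that P is about 0.8
```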

You can make the untestable claim that all Everett branches have the same outcome, and therefore that Q is determined exclusively by your uncertainty about whether you will live or die in all Everett branches. This would be Bayesian fundamentalism. You can also go to the other extreme and argue that Q is determined exclusively by P, and that there is no reason to consider uncertainty. That would be Frequentist fundamentalism. However, there is a spectrum between the two and there is no reason we should only allow the two edge cases to be possible positions. The truth is almost certainly somewhere in between.

Meetup : Palo Alto Meetup: Lightning Talks

2016-01-20T20:04:34.593Z · score: 2 (2 votes)
Comment by anders_h on Stupid Questions, 2nd half of December · 2016-01-12T22:34:27.652Z · score: 1 (1 votes) · LW · GW

Sure, this is true, thanks for noticing. Sorry about the inaccurate/incorrect wording. It does not, however, affect the main idea.

Comment by anders_h on Open Thread, January 11-17, 2016 · 2016-01-12T18:18:51.679Z · score: 3 (3 votes) · LW · GW

I finally gave in and opened a Tumblr account at http://dooperator.tumblr.com/ . This open-thread comment is just to link my identity on Less Wrong with my username on websites where I do not want my participation to be revealed by a simple Google search for my name, such as SlateStarCodex and Tumblr.

Comment by anders_h on Stupid Questions, 2nd half of December · 2016-01-12T01:51:56.644Z · score: 0 (0 votes) · LW · GW

There is a difference between "How sure are you that if we looked at the coin now, it is heads?" and "How sure are you that if we looked at the coin only once, at the end of the experiment, it is heads?"

In the first variant, the thirder position is unambiguously true.

In the second variant, I suspect that you really need more precision in the words to answer it. I think a halfer interpretation of this question is at least plausible under some definitions of "how sure".

Unless "how sure" refers explicitly to well specified bet, many attempts to define it will end up being circular.

Comment by anders_h on Variations on the Sleeping Beauty · 2016-01-11T09:33:43.744Z · score: 1 (1 votes) · LW · GW

This post caused me to type up some old, unrelated thoughts about Sleeping Beauty. I posted it as a comment to the stupid questions thread at http://lesswrong.com/lw/n3v/stupid_questions_2nd_half_of_december/d14z . I'd very much appreciate any feedback on this idea. This comment is just to catch the attention of readers interested in Sleeping Beauty who may not see the comment in the stupid questions thread.

Comment by anders_h on Stupid Questions, 2nd half of December · 2016-01-11T06:14:50.498Z · score: 2 (2 votes) · LW · GW

I have an intuition that I have dissolved the Sleeping Beauty paradox as semantic confusion about the word "probability". I am aware that my reasoning is unlikely to be accepted by the community, but I am unsure what is wrong with it. I am posting this to the "stupid questions" thread to see if it helps me gain any insight either on Sleeping Beauty or on the thought process that led to me feeling like I've dissolved the question.

When the word "probability" is used to describe the beliefs of an agent, we are really talking about how that agent would bet, for instance in an ideal prediction market. However, if the rules of the prediction market are unclear, we may get semantic confusion.

In general, when you are asked "What is the probability that the coin came up heads?", we interpret this as "how much are you willing to pay for a contract that will be worth 1 dollar if the coin came up heads, and nothing if it came up tails?". This seems straightforward, but in the Sleeping Beauty problem, the agent may make the same bet multiple times, which introduces ambiguity.

Person 1 may then interpret the question as follows: "Every time you wake up, there is a new one-dollar bill on the table. How much are you willing to pay for a contract that gives you the dollar if the coin came up heads?" In this interpretation, you get to keep all the dollars you won throughout the experiment.

In contrast, person 2 may interpret the question as follows: "There is one dollar on the table. Every time you wake up, you are given a chance to revise the price you are willing to pay for the contract, but all earlier bets are cancelled such that only the last bet counts." In this interpretation, there is only one dollar to be won.

Person 1 will conclude that the probability is 1/3, and person 2 will conclude that the probability is 1/2. However, once they agree on what bet they are asked to make, the disagreement is dissolved.
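A quick Monte Carlo check of the two interpretations (my own sketch, not part of the original comment):

```python
import random

random.seed(0)
n = 100_000
p1_contracts = p1_wins = 0   # person 1: a fresh $1 contract every awakening
p2_wins = 0                  # person 2: only the final bet counts, one per run

for _ in range(n):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    p1_contracts += awakenings           # one contract per awakening
    p1_wins += awakenings if heads else 0
    p2_wins += 1 if heads else 0         # one (final) contract per experiment

print(p1_wins / p1_contracts)  # ~1/3: person 1's fair price for the contract
print(p2_wins / n)             # ~1/2: person 2's fair price for the contract
```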

The first definition is probably better matched to current usage of the word. This gives most rationalists a strong intuition that the thirder position is "correct". However, if you want to know which definition is most useful or applicable, this really depends on the disguised query, and on which real world scenario the parable is meant to represent. If the payoff utility is only determined once (at the end of the experiment), then the halfer definition could be more useful?

ETA: After reading the Wikipedia:Talk section for Sleeping Beauty, it appears that this idea is not original and that in fact a lot of people have reached the same conclusion. I should have read that before I commented...

Meetup : Palo Alto Meetup: Introduction to Causal Inference

2016-01-03T02:22:37.793Z · score: 2 (2 votes)

Meetup : Palo Alto Meetup: The Economics of AI

2016-01-03T02:20:41.540Z · score: 2 (2 votes)
Comment by anders_h on Rationality Quotes Thread December 2015 · 2016-01-01T18:19:55.895Z · score: 2 (2 votes) · LW · GW

The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.

This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?

Comment by anders_h on Rationality Quotes Thread December 2015 · 2016-01-01T00:16:10.285Z · score: 2 (4 votes) · LW · GW

"Rate how much each intervention (or decision not to intervene) helped or hurt the situation, in retrospect, on a scale from -10 to +10."

How do you plan to do this without counterfactual knowledge?

Comment by anders_h on In defense of philosophy · 2015-12-23T06:24:36.201Z · score: 4 (4 votes) · LW · GW

"Contrarily to LukeProg, knowledge of the Gettier Problem improves one's epistemology because it shows that knowledge equals justified true belief is not a viable stance."

Consider two agents who are communicating with each other in an attempt to reach Aumann Agreement. These agents will certainly need precise words for the following concepts:

Reality: "Is statement A true?"

Belief: "Does agent M believe that statement A is true?" and "With what probability does Agent M believe that statement A is true?"

Map/Territory correspondence: "Does Agent M's belief that Statement A is true correspond to reality?"

Calibration: "Are Agent M's beliefs well calibrated?"

Epistemic process: "What method did agent M use to generate his posterior beliefs?" "Is that method reliable?"

Gettier problems show that you won't be able to project these five dimensions onto a single binary, which is true but not very insightful. Moreover, I can't imagine that the ability to reach Aumann agreement will ever depend on the definition of "knowledge". Therefore, this is mostly an empty semantics discussion.

Post-doctoral Fellowships at METRICS

2015-11-12T19:13:12.419Z · score: 12 (13 votes)

On stopping rules

2015-08-02T21:38:08.617Z · score: 5 (6 votes)

Meetup : Boston: Trigger action planning

2015-05-24T20:00:31.195Z · score: 1 (2 votes)

Meetup : Boston: Making space in Interpersonal Interactions

2015-05-24T19:58:56.123Z · score: 1 (2 votes)

Meetup : Boston: How to Beat Perfectionism

2015-05-08T17:42:02.809Z · score: 2 (3 votes)

Meetup : Boston: Unconference

2015-03-19T16:58:45.803Z · score: 1 (2 votes)

Prediction Markets are Confounded - Implications for the feasibility of Futarchy

2015-01-26T22:39:33.638Z · score: 17 (18 votes)

Meetup : Boston: Antifragile

2015-01-02T20:04:48.211Z · score: 1 (2 votes)

Meetup : Boston: Self Therapy

2014-11-13T17:20:19.375Z · score: 1 (2 votes)

Meetup : The Design Process

2014-10-24T03:37:24.473Z · score: 1 (2 votes)

Meetup : Boston Meetup - New Location

2014-10-15T04:39:13.533Z · score: 1 (2 votes)

Meetup : Meta Meetup

2014-10-02T16:49:44.048Z · score: 1 (2 votes)

Meetup : Social Skills

2014-09-10T00:55:23.241Z · score: 1 (2 votes)

Meetup : Passive Investing and Financial Independence

2014-09-10T00:53:05.455Z · score: 1 (2 votes)

Meetup : Prediction Markets and Futarchy

2014-09-02T14:13:33.018Z · score: 1 (2 votes)

Meetup : Nick Bostrom Talk on Superintelligence

2014-09-02T14:09:43.925Z · score: 4 (5 votes)

Meetup : The Psychology of Video Games

2014-08-11T04:58:57.663Z · score: 1 (2 votes)

Ethical Choice under Uncertainty

2014-08-10T22:13:38.756Z · score: 3 (6 votes)

Causal Inference Sequence Part II: Graphical Models

2014-08-04T23:10:02.285Z · score: 8 (9 votes)

Causal Inference Sequence Part 1: Basic Terminology and the Assumptions of Causal Inference

2014-07-30T20:56:31.866Z · score: 27 (28 votes)

Sequence Announcement: Applied Causal Inference

2014-07-30T20:55:41.741Z · score: 24 (25 votes)