## Posts

Book Review: Discrete Mathematics and Its Applications (MIRI Course List) 2015-04-14T09:08:38.981Z · score: 15 (16 votes)

Comment by lawrencec on How good a proxy for accuracy is precision? · 2019-09-01T17:52:53.663Z · score: 2 (2 votes) · LW · GW

I wonder if that's because they're using the ISO definition of accuracy? A quick google search for these diagrams led me to this reddit thread, where the discussion below reflects the fact that people use different definitions of accuracy.

EDIT: here's a diagram of the form that Elizabeth is complaining about (source: the aforementioned reddit thread):

Comment by lawrencec on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-31T08:43:49.844Z · score: 8 (6 votes) · LW · GW

Hyperbolic discounting leads to preference reversals over time: the classic example is always preferring a certain $1 now to $2 tomorrow, but preferring a certain $2 in a week to $1 in 6 days. This is a pretty clear sign that it never "should" be done - an agent with these preferences might find themselves paying a cent to switch from $1 in 6 days to $2 in 7, then, 6 days later, paying another cent to switch it back and get the $1 immediately. However, in practice, even rational agents might exhibit hyperbolic-discounting-like preferences (though no preference reversals): for example, right now I might not believe you're very trustworthy and worry you might forget to give me money tomorrow, so I prefer $1 now to $2 tomorrow. But if you actually are going to give me $1 in 6 days, I might update to thinking you're quite trustworthy and then be willing to wait another day to get $2 instead. (See this paper for a more thorough discussion of this possibility: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1689473/pdf/T9KA20YDP8PB1QP4_265_2015.pdf)
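To make the reversal concrete, here's a minimal sketch of a hyperbolic discounter; the discount function and the rate k = 1.5/day are my own illustrative choices, not from the comment above:

```python
# Hyperbolic discounting: the present value of an amount A delivered after
# a delay of D days is A / (1 + k * D), for some discount rate k.
def present_value(amount, delay_days, k=1.5):
    return amount / (1 + k * delay_days)

# Viewed from today, $1 now beats $2 tomorrow...
assert present_value(1, 0) > present_value(2, 1)
# ...yet $2 in 7 days beats $1 in 6 days: the same one-day delay matters
# less when both options are far away, so the preference reverses as the
# payout dates approach.
assert present_value(2, 7) > present_value(1, 6)
```

An exponential discounter (value A · d^D for a fixed d) can never reverse like this, which is why such reversals are taken as a signature of hyperbolic discounting.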

Comment by lawrencec on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-31T08:12:12.156Z · score: 3 (4 votes) · LW · GW

Something something near mode vs far mode?

Comment by lawrencec on How good a proxy for accuracy is precision? · 2019-08-31T07:08:03.039Z · score: 4 (4 votes) · LW · GW

I believe your definition of accuracy differs from the ISO definition (which is the usage I learned in undergrad statistics classes, and also the usage most online sources seem to agree with): a measurement is accurate insofar as it is close to the true value. By this definition, the second graph is accurate but not precise because all the points are close to the true value. I'll be using that definition in the remainder of my post. That being said, Wikipedia does claim your usage is the more common one.

I don't have a clear sense of how to answer your question empirically, so I'll give a theoretical answer.

Suppose our goal is to predict some value $Y$. Let $\hat{Y}$ be our predictor for $Y$ (for example, we could ask a subject to predict $Y$). A natural way to measure accuracy for prediction tasks is the mean squared error $\mathbb{E}[(Y - \hat{Y})^2]$, where a lower mean squared error is higher accuracy. The bias-variance decomposition of mean squared error gives us:

$$\mathbb{E}[(Y - \hat{Y})^2] = (Y - \mathbb{E}[\hat{Y}])^2 + \mathbb{E}[(\hat{Y} - \mathbb{E}[\hat{Y}])^2]$$

The first term on the right is the squared bias of your estimator - how far the expected value of your estimator is from the true value. An unbiased estimator is one that, in expectation, gives you the right value (what you mean by "accuracy" in your post, and what ISO calls "trueness"). The second term is the variance of your estimator - how far your estimator is, on average, from its own expected value. Rephrasing a bit, this measures how imprecise your estimator is, on average.

As both terms on the right are always non-negative, the squared bias and the variance of your estimator each lower-bound your mean squared error.

However, it turns out that there's often a trade-off between having an unbiased estimator and having a more precise estimator, known appropriately as the bias-variance trade-off. In fact, there are many classic examples in statistics of estimators that are biased but have lower MSE than any unbiased estimator. (Here's the first one I found while Googling.)
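As a sanity check, here's a quick simulation of both the decomposition and the trade-off, using a simple shrinkage estimator as a stand-in for those classic examples (the true value, noise level, and shrinkage factor are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 5.0
n_obs, trials = 10, 100_000

# Each trial: n_obs noisy measurements of the true value.
samples = rng.normal(true_value, 4.0, size=(trials, n_obs))
sample_mean = samples.mean(axis=1)   # unbiased estimator
shrunk = 0.9 * sample_mean           # biased toward 0, but less variable

for name, est in [("sample mean", sample_mean), ("shrunk mean", shrunk)]:
    mse = np.mean((est - true_value) ** 2)
    bias_sq = (est.mean() - true_value) ** 2
    var = est.var()
    # Decomposition: MSE = bias^2 + variance (exact, up to float error)
    print(f"{name}: mse={mse:.3f}, bias^2 + var={bias_sq + var:.3f}")
```

With these numbers the shrunk estimator's extra squared bias of about 0.25 buys a variance reduction of about 0.3, so its MSE comes out lower despite the bias.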

Comment by lawrencec on 2017 LessWrong Survey · 2017-09-14T00:41:03.787Z · score: 22 (22 votes) · LW · GW

I took the survey!

Comment by lawrencec on Open thread, August 14 - August 20, 2017 · 2017-08-16T09:47:30.266Z · score: 1 (1 votes) · LW · GW

Why do you think this doesn't exist?

Comment by lawrencec on Open thread, August 14 - August 20, 2017 · 2017-08-16T09:42:16.909Z · score: 0 (0 votes) · LW · GW

For what it's worth, though, as far as I can tell we don't have the ability to create an AI that will reliably maximize the number of paperclips in the real world, even with infinite computing power. As Manfred said, model-based goals seem to be a promising research direction for getting AIs to care about the real world, but we don't currently have the ability to get such an AI to reliably actually "value paperclips". There are a lot of problems with model-based goals that occur even in the POMDP setting, let alone when the agent's model of the world or observation space can change. So I wouldn't expect anyone to be able to propose a fully coherent, complete answer to your question in the near term.

It might be useful to think about how humans "solve" this problem, and whether or not you can port this behavior over to an AI.

If you're interested in this topic, I would recommend MIRI's paper on value learning as well as the relevant Arbital Technical Tutorial.

Comment by lawrencec on Game Theory & The Golden Rule (From Reddit) · 2017-07-30T04:44:43.306Z · score: 1 (1 votes) · LW · GW

This is because of the 5% chance of mistakes. Copycat does worse against both Simpleton and Copycat than Simpleton does against itself.
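For anyone who wants to check this themselves, here's a rough re-implementation of the noisy iterated game, under my assumptions that "Copycat" is tit-for-tat, "Simpleton" is win-stay/lose-shift, and the payoffs are the standard 3/3 (mutual cooperation), 1/1 (mutual defection), and 5/0 (defection against cooperation) - the original post's setup may differ in details:

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def copycat(my_last, their_last):
    # Tit-for-tat: cooperate first, then copy the opponent's last move.
    return their_last if their_last is not None else 'C'

def simpleton(my_last, their_last):
    # Win-stay, lose-shift: repeat your move if the opponent cooperated,
    # switch if they defected.
    if my_last is None:
        return 'C'
    if their_last == 'C':
        return my_last
    return 'D' if my_last == 'C' else 'C'

def average_payoffs(strat_a, strat_b, rounds=100_000, noise=0.05, seed=0):
    rng = random.Random(seed)
    a_last = b_last = None
    total_a = total_b = 0
    for _ in range(rounds):
        a = strat_a(a_last, b_last)
        b = strat_b(b_last, a_last)
        # With probability `noise`, a move is executed incorrectly.
        if rng.random() < noise:
            a = 'D' if a == 'C' else 'C'
        if rng.random() < noise:
            b = 'D' if b == 'C' else 'C'
        pa, pb = PAYOFF[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        a_last, b_last = a, b
    return total_a / rounds, total_b / rounds

copycat_self, _ = average_payoffs(copycat, copycat)
simpleton_self, _ = average_payoffs(simpleton, simpleton)
print(f"copycat vs copycat:     {copycat_self:.2f}")
print(f"simpleton vs simpleton: {simpleton_self:.2f}")
```

A single mistake between two Copycats triggers a long chain of tit-for-tat retaliation, while two Simpletons re-synchronize on cooperation within a couple of rounds, so Simpleton self-play scores noticeably higher.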

Comment by lawrencec on Epistemological Implications of a Reduction of Theoretical Implausibility to Cognitive Dissonance · 2017-07-25T04:33:06.235Z · score: 0 (0 votes) · LW · GW

I'm really confused by this.

Comment by lawrencec on The dark arts: Examples from the Harris-Adams conversation · 2017-07-23T06:20:40.834Z · score: 8 (7 votes) · LW · GW

I think the term "Dark Arts" is used by many in the community to refer to generic, truth-agnostic ways of getting people to change their mind. I agree that Scott Adams demonstrates mastery of persuasion techniques, and that this is indeed not necessarily evidence that he is not a "rationalist".

However, the specific claim made by James_Miller is that it is a "model rationalist disagreement". I think that since Adams used the persuasion techniques that Stabilizer mentioned above, it's pretty clear that it isn't a model rationalist disagreement.

Comment by lawrencec on MILA gets a grant for AI safety research · 2017-07-23T05:06:41.985Z · score: 1 (1 votes) · LW · GW

Awesome! I heard a rumor that David Krueger (one of Bengio's grad students) is one of the main people pushing the safety initiative there, can anyone confirm?

Comment by lawrencec on Book Review: Mathematics for Computer Science (Suggestion for MIRI Research Guide) · 2017-07-23T05:03:12.354Z · score: 3 (3 votes) · LW · GW

Thanks for the review! I definitely had the sense that Rosen was doing a lot of hand holding and handwaving - it's certainly a very introductory text. I've read both Rosen and Eppstein and actually found Rosen better. The discrete math class I took in college used Scheinerman's Mathematics: A Discrete Introduction, which I also found to be worse than Rosen.

At the time I actually really enjoyed the fact that Rosen went on tangents and helped me learn how to write a proof, since I was relatively lacking in mathematical maturity. I'd add that Rosen does cover proof writing earlier in the book, but I suspect that MCS might do this job better. Given the target audience of the MIRI research guide, I think it makes sense to switch over to MCS from Rosen.

Comment by lawrencec on AI Safety reading group · 2017-01-30T17:03:31.407Z · score: 1 (1 votes) · LW · GW

Thanks Søren! Could I ask what you're planning on covering in the future? Is this mainly going to be a technical or non-technical reading group?

I noticed that your group seems to have covered a lot of the basic readings on AI safety, but I'm curious what your future plans are.

Comment by lawrencec on Ideas for Next Generation Prediction Technologies · 2016-12-22T16:14:06.510Z · score: 0 (0 votes) · LW · GW

> I haven’t heard much about machine learning used for forecast aggregation. It would seem to me like many, many factors could be useful in aggregating forecasts. For instance, some elements of one’s social media profile may be indicative of their forecasting ability. Perhaps information about the educational differences between multiple individuals could provide insight on how correlated their knowledge is.

I think people are looking into it: The Good Judgment Project team used simple machine learning algorithms as part of their submission to IARPA during the ACE Tournament. One of the PhD students involved in the project wrote his dissertation on a framework for aggregating probability judgments. In the Good Judgment team at least, people are also interested in using ML for other aspects of prediction - for example, predicting if a given comment will change another person's forecasts - but I don't think there's been much success.

I think a major problem is that there's a real paucity of data for ML-based prediction aggregation compared to most machine learning projects - a good prediction tournament gets at most a couple hundred forecasts resolving in a year.

> Probability density inputs would also require additional understanding from users. While this could definitely be a challenge, many prediction markets already are quite complicated, and existing users of these tools are quite sophisticated.

I think this is a bigger hurdle than you'd expect if you're implementing these for prediction tournaments, though it might be possible to do for prediction markets. (However, I'm curious how you're going to implement the market mechanism in this case.) Anecdotally speaking, many of the people involved in GJ Open are not particularly math- or tech-savvy, even amongst the people who are good at prediction.

Comment by lawrencec on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T20:56:48.543Z · score: 2 (2 votes) · LW · GW

Fair point.

Comment by lawrencec on Stupid Questions December 2016 · 2016-12-20T20:53:18.349Z · score: 2 (2 votes) · LW · GW

> I'm just saying that you have an infinite sequence of spheres with the property X. You're saying that because the sequence is infinite I can't point to the last sphere and therefore can't say anything about it. I'm saying that because all spheres in this sequence have the property X, it doesn't matter that the sequence is infinite.

This isn't true in general. Each natural number is finite, but the limit of the natural numbers is infinite. Just because each of the intermediate shapes has property X doesn't mean the limiting shape has property X. Notably, in this case each of the intermediate shapes has a non-zero amount of empty space, but the limiting shape has no empty space.
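A toy numeric illustration of the same point (this is not the sphere construction itself, just the schema "holds at every stage, fails in the limit"):

```python
# Suppose each stage of a construction fills half of the remaining empty
# space. Every finite stage leaves strictly positive empty space...
empty_fraction = 1.0
for stage in range(60):
    empty_fraction /= 2
    assert empty_fraction > 0   # property X holds at every finite stage

# ...but the empty fraction tends to 0, so the limiting shape has no
# empty space: property X fails in the limit.
print(empty_fraction)  # 2**-60, vanishingly small
```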

Comment by lawrencec on Stupid Questions December 2016 · 2016-12-20T20:51:58.925Z · score: 1 (1 votes) · LW · GW

Maybe think about the problem this way:

Suppose there was some small ball inside of your super-packed structure that isn't filled. Then we can fill this ball, and so the structure isn't super-packed. It follows that the volume of the empty space inside of your structure has to be 0.

Now, what does your super-packed structure look like, given that it's an empty cube that's been filled?

EDIT: Nevermind, just saw that Villiam gave a similar answer.

Comment by lawrencec on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T20:44:01.894Z · score: 2 (2 votes) · LW · GW

I think they're equivalent in a sense, but that bucket diagrams are still useful. A bucket can also occur when you conflate multiple causal nodes. So in the first example, the kid might not even have a conscious idea that there are three distinct causal nodes ("spelled oshun wrong", "I can't write", "I can't be a writer"), but instead treats them as a single node. If you're able to catch the flinch, introspect, and notice that there are actually three nodes, you're already a big part of the way there.

Comment by lawrencec on Nassim Taleb on Election Forecasting · 2016-12-20T20:37:54.163Z · score: 0 (0 votes) · LW · GW

Thanks for posting this! I have a longer reply to Taleb's post that I'll post soon. But first:

> When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal or better than any estimate you could come up with? Or do you do a mental adjustment or discounting based on some factor you think they've left out?

I think it depends on the model. First, note that all forecasting models only take into account a specific set of signals. If there are factors influencing the vote that I'm aware of but don't think are reflected in the signals, then I should update their forecast to reflect this. For example, because Nate Silver's model was based on polls that lag behind current events, if you had some evidence that a given event was really bad or really good for one of the two candidates, such as the Comey letter or the Trump video, you should update in favor of or against a Trump presidency before it becomes reflected in the polls.

> The math is based on assumptions though that with high uncertainty, far out from the election, the best forecast is 50-50.

Not really. The key assumption is that your forecasts are a Wiener process - a continuous-time martingale with normally-distributed increments. (I find this funny because Taleb spends multiple books railing against normality assumptions.) This is kind of a troubling assumption, as Lumifer points out below. If your forecast is continuous (though it need not be), then it can be thought of as a time-changed Wiener process, but as far as I can tell he doesn't account for the time change.

Everyone agrees that as uncertainty becomes really high, the best forecast is 50-50. Conversely, if you make a confident forecast (say 90-10) and you're properly calibrated, you're also implying that you're unlikely to change your forecast by very much in the future (with high probability, you won't forecast 1-99).

I think the question to ask is - how much volatility should make you doubt a forecast? If someone's forecast varied daily between 1-99 and 99-1, you might learn to just ignore them, for example. Taleb tries to offer one answer to this, but makes some questionable assumptions along the way and I don't really agree with his result.
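To see why calibration constrains volatility, here's a toy simulation (my own construction, not Taleb's model): the "vote margin" is a Wiener process on [0, 1], the candidate wins iff it ends positive, and the calibrated forecast at time t is P(win | current margin).

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)
paths = 40_000

# W_t is a Wiener process; the candidate "wins" iff W_1 > 0.
# The calibrated forecast at t = 1/2 is Phi(W_{1/2} / sqrt(1/2)).
w_half = rng.normal(0.0, sqrt(0.5), paths)
p_half = np.array([norm_cdf(w / sqrt(0.5)) for w in w_half])
w_final = w_half + rng.normal(0.0, sqrt(0.5), paths)
won = (w_final > 0).astype(float)

# Martingale property: midpoint forecasts average back to the t=0
# forecast of 0.5.
print(round(float(p_half.mean()), 2))
# Calibration: among paths with a midpoint forecast near 0.7, the
# candidate wins roughly 70% of the time.
near_70 = (p_half > 0.65) & (p_half < 0.75)
print(round(float(won[near_70].mean()), 2))
```

A forecaster whose numbers swung wildly between extremes would fail these checks: confident forecasts that later reverse en masse are exactly what calibration rules out.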

Comment by lawrencec on Nassim Taleb on Election Forecasting · 2016-12-20T20:12:32.073Z · score: 0 (0 votes) · LW · GW

> We have the election estimate F a function of a state variable W, a Wiener process WLOG

> That doesn't look like a reasonable starting point to me.

That's fine, actually: if you assume your forecasts are continuous in time, then they're continuous martingales and thus equivalent to some time-changed Wiener process. (EDIT: your forecasts need not be continuous, my bad.) The problem is that he doesn't take into account the time change when he claims that you need to weight your signal by 1/sqrt(t).

He also has a typo in his statement of Ito's Lemma which might affect his derivation. I'll check his math later.

Comment by lawrencec on Celebrating All Who Are in Effective Altruism · 2016-01-24T18:03:55.036Z · score: 0 (0 votes) · LW · GW

Can you give a link to posts showing elitism in EA that weren't written in response to this one?

Comment by lawrencec on Open thread, Dec. 21 - Dec. 27, 2015 · 2015-12-22T17:31:41.763Z · score: 0 (0 votes) · LW · GW

Wait, how would you get P(H) = 1?

Comment by lawrencec on An Introduction to Löb's Theorem in MIRI Research · 2015-12-01T23:27:00.739Z · score: 0 (0 votes) · LW · GW

This is several months too late, but yes! Gödel machines run into the Löbstacle, as seen in this MIRI paper. From the paper:

> it is clear that the obstacles we have encountered apply to Gödel machines as well. Consider a Gödel machine G1 whose fallback policy would "rewrite" it into another Gödel machine G2 with the same suggester (proof searcher, in Schmidhuber's terminology). G1's suggester now wants to prove that it is acceptable to instead rewrite itself into G′2, a Gödel machine with a very slightly modified proof searcher. It must prove that G′2 will obtain at least as much utility as G2. In order to do so, naively we would expect that G′2 will again only execute rewrites if its proof searcher has shown them to be useful; but clearly, this runs into the Löbian obstacle, unless G1 can show that theorems proven by G′2 are in fact true.

Comment by lawrencec on Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally · 2015-08-24T20:23:24.856Z · score: 2 (2 votes) · LW · GW

No, it isn't. Being curious is a good heuristic for most people, because most people are in the region where gathering information is cheaper than its expected value. I don't think we disagree on anything concrete: I don't claim that curiosity is rational in itself, a priori, but that it's a fairly good heuristic.

Comment by lawrencec on Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally · 2015-08-24T16:20:46.949Z · score: 1 (1 votes) · LW · GW

> I agree denotationally, but object connotatively with 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe by winning Elizer meant winning at newcomb's problem, but the idea of winning is normally extended into everything.

I think that Eliezer has disavowed using this statement precisely because of the connotations that people associate with it.

> It is because of this that rationality is often considered to be split into two parts: normative and descriptive rationality.

What happened to prescriptive rationality?

Comment by lawrencec on Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally · 2015-08-24T16:20:05.614Z · score: 2 (2 votes) · LW · GW

I'm not sure if this is correct, but my best guess is:

It maximizes utility, in so far as most goals are better achieved with more information, and people tend to systematically underestimate the value of collecting more information or suffer from biases that prevent them from acquiring this information. Or, in other words, curiosity is virtuous because humans are bounded and flawed agents, and it helps rectify the biases that we fall prey to. Just like being quick to update on evidence is a virtue, and scholarship is a virtue.

Comment by lawrencec on Complex Novelty · 2015-08-14T06:57:37.990Z · score: 1 (1 votes) · LW · GW

Yes, I think he recognizes this in this post. He also writes about this (from a slightly different perspective) in High Challenge.

Comment by lawrencec on The Error of Crowds · 2015-08-07T01:23:17.535Z · score: 1 (2 votes) · LW · GW

Results from the Good Judgment Project suggest that putting people into teams lets them significantly outperform (have lower Brier scores than) both unweighted averaging of probabilities across all predictors and unweighted averaging restricted to the better portion of predictors. This seems to offer weak evidence that what goes on in a group is not simple averaging.
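For reference, the Brier score is just the mean squared error of probability forecasts, which also makes the averaging baseline easy to state: by convexity, the unweighted average forecast always scores at least as well as the average of the individual scores. A quick sketch with made-up forecasts and outcomes:

```python
import numpy as np

def brier(probs, outcomes):
    # Mean squared error between forecast probabilities and 0/1 outcomes;
    # lower is better, 0 is perfect.
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

# Three hypothetical forecasters on four events that resolved 1, 0, 1, 1.
forecasts = np.array([[0.9, 0.2, 0.6, 0.8],
                      [0.7, 0.4, 0.9, 0.6],
                      [0.8, 0.1, 0.7, 0.9]])
outcomes = np.array([1, 0, 1, 1])

pooled = brier(forecasts.mean(axis=0), outcomes)
individual = [brier(f, outcomes) for f in forecasts]
print(f"pooled: {pooled:.4f}, mean individual: {np.mean(individual):.4f}")
```

Team discussion can beat even this pooled baseline, which is the sense in which what goes on in a group is not simple averaging.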

Comment by lawrencec on You have a set amount of "weirdness points". Spend them wisely. · 2015-08-03T15:43:41.338Z · score: 1 (1 votes) · LW · GW

That being said, I'm confident that I would pass ideological turing tests.

Cool! You can try taking them here: http://blacker.caltech.edu/itt/

Comment by lawrencec on State-Space of Background Assumptions · 2015-07-31T00:07:21.938Z · score: 1 (1 votes) · LW · GW

Wow, that was a long survey. Done! I'm not sure how good my answers were; like others mentioned, a lot of the questions felt underspecified.

Comment by lawrencec on MIRI's Approach · 2015-07-30T21:28:12.143Z · score: 6 (6 votes) · LW · GW

Thanks Nate, this is a great summary of the case for MIRI's approach!

Out of curiosity, is there an example of algorithms leading to surprising solutions other than Bird and Layzell? That paper seems to be cited a lot in MIRI's writings.

Comment by lawrencec on MIRI's Approach · 2015-07-30T21:23:28.359Z · score: 3 (3 votes) · LW · GW

I'm not sure what you're looking for in terms of the PAC-learning summary, but for a quick intro, there's this set of slides or these two lecture notes from Scott Aaronson. For a more detailed review of the literature in the field up until the mid-1990s, there's this paper by David Haussler, though given its length you might as well read Kearns and Vazirani's 1994 textbook on the subject. I haven't been able to find a more recent review of the literature, though - if anyone has a link, that'd be great.

Comment by lawrencec on The Brain as a Universal Learning Machine · 2015-07-07T09:03:27.211Z · score: 0 (0 votes) · LW · GW

This was a great post, thanks!

One thing I'm curious about is how the ULH explains the fact that human thought seems to be divided into System 1/System 2 - is this solely a matter of education history?

Comment by lawrencec on Making Beliefs Pay Rent (in Anticipated Experiences) · 2015-06-30T21:18:47.647Z · score: 2 (2 votes) · LW · GW

You're definitely right that there are some areas where it's easier to make beliefs pay rent than others! I think there are two replies to your concern:

1) First, many theories from math DO pay rent (the ones I'm most aware of are statistics and computer-science related ones). For example, better algorithms in theory (say Strassen's algorithm for multiplying matrices) often correspond to better results in practice. Even more abstract stuff like number theory or recursion theory yields testable predictions.

2) Even things that can't pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.

Comment by lawrencec on Rationality Quotes Thread June 2015 · 2015-06-02T15:18:52.750Z · score: 12 (12 votes) · LW · GW

"Mystics exult in mystery and want it to stay mysterious. Scientists exult in mystery for a different reason: it gives them something to do."

Richard Dawkins, The God Delusion, on the topic of mysterious answers to mysterious questions.

Comment by lawrencec on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-05-29T02:15:48.456Z · score: 0 (0 votes) · LW · GW

Here's a thing that's been bugging me for a while.

For Gryffindors there's "Gryffindorks". Are there any similarly good insults for the other three houses?

Comment by lawrencec on Open Thread, May 25 - May 31, 2015 · 2015-05-26T04:23:28.692Z · score: 3 (3 votes) · LW · GW

I've noticed recently that listening to music with lyrics significantly hampers my reading comprehension and essay-writing ability, but has no (or even a slightly positive) effect on doing math. My informal model of the problem is that the words of the song disrupt the words being formed in my head. Has anyone else experienced anything similar?

Comment by lawrencec on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-03T20:43:41.708Z · score: 0 (0 votes) · LW · GW

Ah, I see. Fair enough!

Comment by lawrencec on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-02T02:49:03.610Z · score: 1 (1 votes) · LW · GW

I'm not sure your argument proves your claim. I think what you've shown is that there exist reasons other than the inability to create perfect boxes to care about the value alignment problem.

We can flip your argument around and apply it to your claim: imagine a world where there was only one team with the ability to make superintelligent AI. I would argue that it'll still be extremely unsafe to build an AI and try to box it. I don't think that this lets me conclude that a lack of boxing ability is the true reason that the value alignment problem is so important.

Comment by lawrencec on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T13:41:27.723Z · score: 1 (1 votes) · LW · GW

And then 7 days later, you die.

Comment by lawrencec on Book Review: Discrete Mathematics and Its Applications (MIRI Course List) · 2015-04-22T12:59:56.543Z · score: 2 (2 votes) · LW · GW

Wow. That's pretty impressive.

If you have a decent background in Math already, I've been told that Knuth's Concrete Mathematics might be more interesting (though it's really not appropriate as an introductory text). I've skimmed through a copy, and it seems to cover series and number theory at a much higher level, if that's what you're looking for.

Comment by lawrencec on Book Review: Discrete Mathematics and Its Applications (MIRI Course List) · 2015-04-22T12:53:40.102Z · score: 2 (2 votes) · LW · GW

In my experience there have been three kinds of books: easy books, which I can skim and then do the exercises for, medium books, which I can read carefully one or two times and then do the exercises for, and hard books, which I need to read multiple times + take notes on to do the exercises for.

In most cases I try to do a majority of the exercises either in the sections indicated by the research guide, or, in the case where the research guide doesn't offer any section numbers, the whole textbook.

Comment by lawrencec on Book Review: Discrete Mathematics and Its Applications (MIRI Course List) · 2015-04-19T07:44:30.657Z · score: 1 (1 votes) · LW · GW

Okay, cool! Word of warning, though: I don't think the MIRI list is really good for people just starting out. Most of the books assume a decent amount of mathematical background. They're also oriented toward a specific goal (and most people probably don't know half the stuff on the list).

If you insist on using the MIRI list, I recommend starting with either this one, the Linear Algebra Book, or the Logic and Computability book. They're well written and don't require much mathematical background.

Speaking of which, how much math background do you have?

Comment by lawrencec on Book Review: Discrete Mathematics and Its Applications (MIRI Course List) · 2015-04-17T08:43:52.841Z · score: 2 (2 votes) · LW · GW

Comment by lawrencec on Why isn't the following decision theory optimal? · 2015-04-16T11:13:52.953Z · score: 3 (3 votes) · LW · GW

Fair enough. A less formal version of UDT, then. UDT at least has a formulation in Gödel-Löb provability logic.

Comment by lawrencec on Why isn't the following decision theory optimal? · 2015-04-16T05:18:45.181Z · score: 6 (6 votes) · LW · GW

Actually, if you push the precommitment time all the way back, this sounds a lot like an informal version of Updateless Decision Theory, which, by the way, seems to get everything that TDT gets right, plus counterfactual mugging and a lot of thought experiments that TDT gets wrong.

Comment by lawrencec on Book Review: Discrete Mathematics and Its Applications (MIRI Course List) · 2015-04-14T23:17:06.557Z · score: 3 (3 votes) · LW · GW

Yes, I think that's true. There are gaps, but they're mainly "trust me" results way out of the scope of the book, like the existence of NP-complete problems and so forth. He definitely doesn't have proofs that require large leaps in intuition.

Comment by lawrencec on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-15T03:07:01.666Z · score: 3 (3 votes) · LW · GW

I think he's referring to the definition of ambition Quirrell uses in Chapter 70:

> The Defense Professor's fingers idly spun the button, turning it over and over. "Then again, only a very few folk ever do anything interesting with their lives. What does it matter to you if they are mostly witches or mostly wizards, so long as you are not among them? And I suspect you will not be among them, Miss Davis; for although you are ambitious, you have no ambition."

Comment by lawrencec on How to debate when authority is questioned, but really not needed? · 2015-03-03T21:24:08.467Z · score: 0 (0 votes) · LW · GW

If you're looking for well-policed blogs, you can try Slate Star Codex and any of the other "rationality blogs" listed in the LW wiki.

Comment by lawrencec on How to debate when authority is questioned, but really not needed? · 2015-03-03T21:15:48.144Z · score: 0 (0 votes) · LW · GW

Who the OP is does affect the prior probability that he is wrong. If the majority of economics viewpoints held by non-economists are wrong (which is a big if), then the commentators would be justified in assigning near-zero credence to what he's saying. If the OP presented a detailed, technical argument in favor of his positions, then this would "screen off" OP's level of experience. But barring such an argument, the commentators may have a point.

That being said, the average internet commentator may not be the best conversation partner.