Comment by chris_leong on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T21:39:00.511Z · score: 2 (1 votes) · LW · GW

Thanks, very interesting. I guess when I said I was imagining a situation where oranges were twice as valuable, I was imagining them as worth X utility in situation A and 2X in situation B, and suggesting we could just double the number of oranges instead. So it seems like you're talking about a slightly different situation than the one I was envisaging.

Is there a difference between uncertainty over your utility function and uncertainty over outcomes?

2019-03-18T18:41:38.246Z · score: 15 (6 votes)
Comment by chris_leong on Renaming "Frontpage" · 2019-03-09T01:48:53.302Z · score: 5 (3 votes) · LW · GW

Art definitely needs its own section in order to flourish and co-ordination would be an interesting section. I believe that the impact of creating a section is hard to determine in advance and so running experiments is important.

I've written before that I feel meta is underrated. Too much meta is definitely negative, but too little can be just as negative. I don't have any issue with meta normally being relegated to its own section, but I feel that meta discussions are occasionally important enough that they should be promoted to the frontpage/curated.

Comment by chris_leong on Asymptotically Benign AGI · 2019-03-07T11:16:06.567Z · score: 2 (1 votes) · LW · GW

I suspect that AI Safety via Debate could be benign for certain decisions (like whether to release an AI) if we were to weight the debate more towards the safer option.

Comment by chris_leong on An Extensive Categorisation of Infinite Paradoxes · 2019-02-27T00:24:33.386Z · score: 10 (2 votes) · LW · GW

My primary response to this comment will take the form of a post, but I should add that I wrote: "I will provide informal hints on how surreal numbers could help us solve some of these paradoxes, although the focus of this post is primarily categorisation, so please don't mistake these for formal proofs".

Your comment seems to completely ignore this stipulation. Take for example this:

"Of course, your solution seems to involve implicitly changing the setting to have surreal-valued time and space... You might want to make more of an explicit note of it, though"

Yes, there's a lot of philosophical groundwork that would need to be done to justify the surreal approach. That's why I said that it was only an informal hint.

I'm going to assume, since you're talking about surreals and didn't specify otherwise, that you mean exp(s log 2), using the usual surreal exponential

Yes, I actually did look up that there was a way of defining 2^s where s is a surreal number.

Let's accept the premise that you're using a surreal-valued probability measure instead of a real one

I wrote a summary of a paper by Chen and Rubio that provides the start of a surreal decision theory. This isn't a complete probability theory as it only supports finite additivity instead of countable additivity, but it suggests that this approach might be viable.
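(To spell out the standard distinction between the two notions of additivity being referred to here:)

\[ P(A \cup B) = P(A) + P(B) \quad \text{for disjoint } A, B \quad \text{(finite additivity)} \]

\[ P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_1, A_2, \ldots \quad \text{(countable additivity)} \]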

I could keep going, but I think I've made my point that you're evaluating these informal comments as though I'd claimed they were a formal proof. This post was already long enough and took enough time to write as is.

I will admit that I could have been clearer that many of these remarks were speculative, in the sense of being arguments that I believed were worth working towards formalising, even if all of the mathematical machinery doesn't necessarily exist at this time. My point is that justifying the use of surreal numbers doesn't necessarily require solving every paradox; it should also be persuasive to solve a good number of them and then demonstrate that there is good reason to believe we may be able to solve the rest in the future. In this sense, informal arguments aren't valueless.

Comment by chris_leong on An Extensive Categorisation of Infinite Paradoxes · 2019-02-26T12:35:46.161Z · score: 2 (4 votes) · LW · GW

This is quite a long post, so it may take some time to write a proper reply, but I'll get back to you when I can. The focus of this post was on gathering together all the infinite paradoxes that I could manage. I also added some informal thoughts on how surreal numbers could help us conceptualise the solution to these problems, although this wasn't the main focus (it was just convenient to put them in the same space).

Unfortunately, I haven't continued the sequence since I've been caught up with other things (travel, AI, applying for jobs), but hopefully I'll write up some new posts soon. I've actually become much less optimistic about surreal numbers for philosophical reasons which I'll write up soon. So my intent is for my next post to examine the definition of infinity and why this makes me less optimistic about this approach. After that, I want to write up a bit more formally how the surreal approach would work, because even though I'm less optimistic about this approach, perhaps someone else will disagree with my pessimism. Further, I think it's useful to understand how the surreal approach would try to resolve these problems, even if only to provide a solid target for criticism.

Comment by chris_leong on Blackmail · 2019-02-23T21:45:20.965Z · score: 3 (2 votes) · LW · GW

You've assumed here that the default is for Alice not to share, while it might seem positive if the default was for her to share. So in practice, it'll depend on how many new shares it incentivises (including from people who only went to the effort of discovering the information because blackmail is legal) vs. how many people benefit from being able to trade. In practice, I suspect the first factor will easily outweigh the second.

Comment by chris_leong on How the MtG Color Wheel Explains AI Safety · 2019-02-16T14:31:11.256Z · score: 2 (1 votes) · LW · GW

"Finally, note that one way to stop a search from creating an optimization daemon is to just not push it too hard." - An "optimisation demon" doesn't have to try to optimise itself to the top. What about a "semi-optimisation demon" that tries to just get within the appropriate range?

Deconfusing Logical Counterfactuals

2019-01-30T15:13:41.436Z · score: 18 (7 votes)
Comment by chris_leong on Deconfusing Logical Counterfactuals · 2019-01-29T12:52:10.331Z · score: 2 (1 votes) · LW · GW

I'm confused: I'm claiming determinism, not indeterminism.

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-25T18:49:28.210Z · score: 2 (1 votes) · LW · GW

BTW, I published the draft, although fairness isn't the main topic and only comes up towards the end.

Comment by chris_leong on For what do we need Superintelligent AI? · 2019-01-25T16:30:26.241Z · score: 2 (1 votes) · LW · GW

"But to solve many x-risks we don't probably need full-blown superintelligence, but just need a good global control system, something which combines ubiquitous surveillance and image recognition" - unlikely to happen in the forseeable future

Comment by chris_leong on For what do we need Superintelligent AI? · 2019-01-25T15:07:13.781Z · score: 5 (4 votes) · LW · GW

I've actually had similar thoughts myself about why developing AI sooner wouldn't be that good. In most places, the barrier to human flourishing isn't technology but governance.

Prevention of the creation of other potentially dangerous superintelligences

Solving existential risks in general

Comment by chris_leong on Solve Psy-Kosh's non-anthropic problem · 2019-01-25T13:48:04.533Z · score: 2 (1 votes) · LW · GW

Further update: Do you want to cause good to be done or do you want to be in a world where good is done? That's basically what this question comes down to.

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-25T12:45:17.103Z · score: 2 (1 votes) · LW · GW

"It still doesn't seem like defining a 'fair' class of problems is that useful" - discovering one class of fair problems led to CDT. Another led to TDT. This theoretical work is separate from the problem of producing pragmatic algorithms that deal with unfairness, but both approaches produce insights.

"This meta decision theory would itself be a decision theory that does well on both types of problems so such a decision theory ought to exist" - I currently have a draft post that does allow some kinds of rewards based on algorithm internals to be considered fair and which basically does the whole meta-decision theory thing (that section of the draft post was written a few hours after I asked this question which is why my views in it are slightly different).

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-25T12:35:13.988Z · score: 9 (2 votes) · LW · GW

I don't quite understand the question, but unfair refers to the environment requiring the internals to be a particular way. I actually think it is possible to allow some internal requirements to be considered fair and I discuss this in one of my draft posts. Nonetheless, it works as a first approximation.

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-25T01:05:16.396Z · score: 2 (1 votes) · LW · GW

"ASP doesn’t seem impossible to solve (in the sense of having a decision theory that handles it well and not at the expense of doing poorly on other problems) so why define a class of “fair” problems that excludes it?" - my intuition is the opposite, that doing well on such problems means doing poorly on others.

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-24T23:33:24.284Z · score: 2 (1 votes) · LW · GW

I already acknowledged in the real post that there exist problems that are unfair, so I don't know why you think we disagree there.

Comment by chris_leong on Is Agent Simulates Predictor a "fair" problem? · 2019-01-24T23:29:52.009Z · score: 4 (2 votes) · LW · GW

"My thinking about this is that a problem is fair if it captures some aspect of some real world problem" - I would say that you have to accept that the real world can be unfair, but that doesn't make real world problems "fair" in the sense gestured at in the FDT paper. Roughly, it is possible to define a broad class of problems such that you can have an algorithm that optimally handles all of them, for example if the reward only depends on your choice or predictions of your choice.

"It seems unsatisfactory that increased predictive power can harm an agent" - that's just life when interacting with other agents. Indeed, in some games, exceeding a certain level of rationality provides an incentive for other players to take you out. That's unfair, but that's life.

Is Agent Simulates Predictor a "fair" problem?

2019-01-24T13:18:13.745Z · score: 22 (6 votes)
Comment by chris_leong on Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell · 2019-01-23T14:20:41.937Z · score: 5 (3 votes) · LW · GW

But why?

Comment by chris_leong on Link: That Time a Guy Tried to Build a Utopia for Mice and it all Went to Hell · 2019-01-23T12:40:51.538Z · score: 2 (1 votes) · LW · GW

It doesn't really explain what happened though?

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2019-01-21T15:58:01.294Z · score: 3 (2 votes) · LW · GW

"probably AGI complete" - As I said, B is equivalently powerful to A, so the idea is that both should be AGIs. If A or B can break out by themselves, then there's no need for a technique to decide whether to release A or not.

Comment by chris_leong on Capability amplification · 2019-01-20T19:15:19.725Z · score: 3 (2 votes) · LW · GW

The concept of reachability lets you amplify a policy, select a policy worse than the amplification, then amplify this worse policy. The problem is that a policy that is worse may actually amplify better than a policy that is better. So I don't find this definition very useful unless we also have an algorithm for selecting the optimal worse policy to amplify. Unfortunately, that problem doesn't seem very tractable at all.
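(In symbols, the worry is just that amplification need not be monotone in the policy ordering, with \(\preceq\) standing for whatever notion of "worse than" is in play:)

\[ \pi_1 \preceq \pi_2 \;\not\Rightarrow\; \mathrm{Amplify}(\pi_1) \preceq \mathrm{Amplify}(\pi_2) \]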

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2019-01-19T17:22:35.942Z · score: 2 (1 votes) · LW · GW

There could be specific circumstances where you know that another team will release a misaligned AI next week, but most of the time there'll be a decent chance that you could just make a few more tweaks before releasing.

Comment by chris_leong on Anthropics is pretty normal · 2019-01-19T14:03:36.370Z · score: 3 (2 votes) · LW · GW

1) "Subjectively distinguishable" needs to be clarified. It can mean either a) that a human receives enough information/experience to distinguish themselves, or b) that a human will remember information/experience in enough detail to distinguish themselves from another person. The latter is more important for real-world anthropics problems and results in significantly more copies.

2) "In most areas, we are fine with ignoring the infinity and just soldiering on in our local area" - sure, but SSA is inherently non-local. It applies over the whole universe, not just the Hubble Volume. If we're going to use an approximation to handle our inability to model infinities, we should be using a large universe, large enough to break your model, rather than a medium sized one.

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2019-01-18T16:06:27.069Z · score: 2 (1 votes) · LW · GW

A considers A' to be a different agent, so it won't help A' for nothing. But there could be some issues with acausal cooperation that I haven't really thought about enough to have a strong opinion on.

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2019-01-18T12:24:21.509Z · score: 2 (1 votes) · LW · GW

"And we hope that gives an advantage to the side of truth" - we aren't even relying on that. We're handicapping the AI that wants to be released in terms of message length.

Comment by chris_leong on Debate AI and the Decision to Release an AI · 2019-01-18T12:22:15.691Z · score: 2 (1 votes) · LW · GW

That's a good point, except you aren't addressing my scheme as explained by Gurkenglas

Debate AI and the Decision to Release an AI

2019-01-17T14:36:53.512Z · score: 8 (3 votes)
Comment by chris_leong on In SIA, reference classes (almost) don't matter · 2019-01-17T13:26:49.296Z · score: 4 (3 votes) · LW · GW

Thanks, that's helpful. Actually, now that you've put it that way, I recall having known this fact at some point in the past.

Comment by chris_leong on In SIA, reference classes (almost) don't matter · 2019-01-17T12:15:07.438Z · score: 2 (1 votes) · LW · GW

This result seems strange to me, even though the maths seems to check out. Is there a conceptual explanation of why this should be the case?

Comment by chris_leong on Buy shares in a megaproject · 2019-01-16T23:19:35.264Z · score: 2 (1 votes) · LW · GW

It isn't clear to me how this resolves the problem of megaprojects. If the shares fall, then perhaps we can tell that the project is likely to fall behind and be assessed a penalty, and knowing that will allow some mitigation, but that's a pretty minor fix.

Comment by chris_leong on In SIA, reference classes (almost) don't matter · 2019-01-16T13:00:05.418Z · score: 4 (2 votes) · LW · GW

pR(Ui) already had a factor of R(Ui), then you divided by it, but the original factor disappears, so you are left with a division by R(Ui). I don't see where the original factor of R(Ui) went, which would have resulted in it cancelling.

Comment by chris_leong on What are the open problems in Human Rationality? · 2019-01-16T11:30:34.561Z · score: 4 (3 votes) · LW · GW

"It took someone deciding to make it their fulltime project and getting thousands of dollars in funding, which is roughly what such things normally take" - lots of open source projects get off the ground without money being involved

Comment by chris_leong on In SIA, reference classes (almost) don't matter · 2019-01-15T21:41:40.257Z · score: 4 (2 votes) · LW · GW

When you calculate pR(Ui|sub), you perform the following transformation pR(Ui)→pR(Ui)×R0(Ui)/R(Ui), but an R(Ui) seems to go missing. Can anyone explain?

Comment by chris_leong on Sleeping Beauty Not Resolved · 2019-01-15T21:30:33.828Z · score: 5 (3 votes) · LW · GW

I just thought I'd add a note in case anyone stumbles upon this thread: Stuart has actually now changed his views on anthropic probabilities as detailed here.

Comment by chris_leong on Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent · 2019-01-15T21:26:45.575Z · score: 4 (2 votes) · LW · GW

Full non-indexical conditioning is broken in other ways too. As I argued before, the core of this idea is essentially a cute trick where, by precommitting to only guess on a certain sequence, you can manipulate the chance that at least one copy of you guesses and that the guesses of your copies are correct. Except full non-indexical conditioning doesn't precommit, so the probabilities calculated are for a completely different situation. Hopefully the demonstration of time inconsistency will make it clearer that this approach is incorrect.

Comment by chris_leong on What are the open problems in Human Rationality? · 2019-01-14T14:10:37.753Z · score: 24 (8 votes) · LW · GW

What do you consider to be his core insights? Would you consider writing a post on this?

Comment by chris_leong on What are the open problems in Human Rationality? · 2019-01-14T13:46:39.541Z · score: 6 (7 votes) · LW · GW

Group rationality is a big one. It wouldn't surprise me if rationalists are less good on average at co-ordinating than other groups, because rationalists tend to be more individualistic and have their own opinions of what needs to be done. As an example, how long did it take for us to produce a new LW forum despite half of the people here being programmers? And rationality still doesn't have its own version of CEA.

Comment by chris_leong on AlphaGo Zero and capability amplification · 2019-01-11T21:11:53.286Z · score: 2 (1 votes) · LW · GW

I don't suppose you could explain how it uses P and V? Does it use P to decide which path to go down and V to avoid fully playing it out?

Comment by chris_leong on Which approach is most promising for aligned AGI? · 2019-01-08T10:33:06.810Z · score: 2 (1 votes) · LW · GW

This question is specifically about building it, but that's a worthwhile clarification.

Comment by chris_leong on Optimizing for Stories (vs Optimizing Reality) · 2019-01-08T02:30:49.233Z · score: 2 (1 votes) · LW · GW

I think it might be interesting to discuss how story analysis differs from signalling analysis, since I expect most people on Less Wrong to be extremely familiar with the latter. One difference is that people are happy to be given a story about you, even if it is imperfect, so that they can slot you into a box. Another is that signalling analysis focuses on whether something makes you look good or bad, while story analysis focuses on how engaging a narrative is. It also focuses more on how cultural tropes shape perspectives - e.g. the romanticisation of bank robbers.

Which approach is most promising for aligned AGI?

2019-01-08T02:19:50.278Z · score: 6 (2 votes)
Comment by chris_leong on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-07T02:33:55.361Z · score: 2 (1 votes) · LW · GW

Seems possible, though malaria nets seem like such a niche industry that it wouldn't result in much additional human or infrastructural capital.

On Abstract Systems

2019-01-06T23:41:52.563Z · score: 14 (8 votes)
Comment by chris_leong on In what ways are holidays good? · 2018-12-28T02:22:31.458Z · score: 6 (3 votes) · LW · GW

You missed conversational and social signalling value. Travel is an excellent conversation topic, as almost everyone has some memories that they'd love to share. Or at least I find it more interesting than most other small-talk topics, as you're at least learning about other parts of the world. And people who have travelled a lot are seen as more adventurous.

Comment by chris_leong on Can dying people "hold on" for something they are waiting for? · 2018-12-27T20:56:34.671Z · score: 5 (3 votes) · LW · GW

Maybe we should differentiate holding off losing consciousness from holding off dying? Because I know that I can definitely hold off on falling asleep and maybe holding onto consciousness is the same?

On Disingenuity

2018-12-26T17:08:47.138Z · score: 34 (15 votes)
Comment by chris_leong on Boundaries enable positive material-informational feedback loops · 2018-12-23T15:02:58.386Z · score: 4 (4 votes) · LW · GW

If EA focused more on feedback loops, then there'd be less focus on donating money to charity. How would you like these resources to be deployed instead?

Best arguments against worrying about AI risk?

2018-12-23T14:57:09.905Z · score: 15 (7 votes)
Comment by chris_leong on Anthropic probabilities and cost functions · 2018-12-21T23:07:43.616Z · score: 5 (2 votes) · LW · GW

See also: If a tree falls on Sleeping Beauty.

Comment by chris_leong on Anthropic paradoxes transposed into Anthropic Decision Theory · 2018-12-21T22:44:39.660Z · score: 2 (1 votes) · LW · GW

I suppose that makes sense if you're a moral non-realist.

Also, you may care about other people for reasons of morality, or simply because you like them. Ultimately, why you care doesn't matter; only the fact that you have a preference matters. The morality aspect is inessential.

Comment by chris_leong on Anthropic paradoxes transposed into Anthropic Decision Theory · 2018-12-21T13:10:54.288Z · score: 2 (1 votes) · LW · GW

"So you can't talk about anthropic "probabilities" without including how much you care about the cost to your copies" - Yeah, but that isn't anything to do with morality, just individual preferences. And instead of using just a probability, you can define probability and the number of repeats.

Comment by chris_leong on Anthropic paradoxes transposed into Anthropic Decision Theory · 2018-12-21T12:59:02.286Z · score: 2 (1 votes) · LW · GW

Well, if anything that's about your preferences, not morality.

Comment by chris_leong on Anthropic paradoxes transposed into Anthropic Decision Theory · 2018-12-21T12:04:53.009Z · score: 2 (1 votes) · LW · GW

The way I see it, your morality defines a preference ordering over situations and your decision theory maps from decisions to situations. There can be some interaction there, in that different moralities may want different inputs, e.g. consequentialism only cares about the consequences, while others care about the actions that you chose. But the point is that each theory should be capable of standing on its own. And I agree that probability is somewhat ambiguous for anthropic situations, but our decision theory can just output betting outcomes instead of probabilities.
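As a toy sketch of this division of labour (the types and function names below are my own illustration, not anything from the thread, and it only covers the consequentialist case):

```python
# A toy formalisation of the division of labour described above. The types and
# names are purely illustrative, and this only covers the consequentialist case
# where the morality looks at the resulting situation alone.
from typing import Callable, List

Situation = str
Decision = str

# Morality: a preference ordering over situations, represented here as a
# utility function (higher = more preferred).
Morality = Callable[[Situation], float]

# Decision theory: maps each available decision to the situation it produces.
DecisionTheory = Callable[[Decision], Situation]

def choose(decisions: List[Decision],
           decision_theory: DecisionTheory,
           morality: Morality) -> Decision:
    # Pick the decision whose resulting situation the morality ranks highest.
    return max(decisions, key=lambda d: morality(decision_theory(d)))
```

A morality that also cares about the action itself would simply need the decision passed to it alongside the resulting situation, which is the kind of interaction between the two components mentioned above.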

Comment by chris_leong on Anthropic paradoxes transposed into Anthropic Decision Theory · 2018-12-21T10:07:42.302Z · score: 2 (1 votes) · LW · GW

The point is that ADT is a theory of morality + anthropics, when your core theory of anthropics conceptually shouldn't refer to morality at all, but should be independent.

Comment by chris_leong on The Mad Scientist Decision Problem · 2018-12-21T00:13:21.813Z · score: 2 (1 votes) · LW · GW

This is an interesting reformulation of Counterfactual Mugging. In the case where the cooperation of the paperclip maximiser is provable, I don't see it as any different from a Counterfactual Mugging taking place before the AI comes into existence. The only way I see this becoming more complicated is when the AI tries to blackmail you in the counterfactual world.

What are some concrete problems about logical counterfactuals?

2018-12-16T10:20:26.618Z · score: 26 (6 votes)

An Extensive Categorisation of Infinite Paradoxes

2018-12-13T18:36:53.972Z · score: -4 (20 votes)

No option to report spam

2018-12-03T13:40:58.514Z · score: 38 (14 votes)

Summary: Surreal Decisions

2018-11-27T14:15:07.342Z · score: 27 (6 votes)

Suggestion: New material shouldn't be released too fast

2018-11-21T16:39:19.495Z · score: 24 (8 votes)

The Inspection Paradox is Everywhere

2018-11-15T10:55:43.654Z · score: 26 (7 votes)

One Doubt About Timeless Decision Theories

2018-10-22T01:39:57.302Z · score: 15 (7 votes)

Formal vs. Effective Pre-Commitment

2018-08-27T12:04:53.268Z · score: 9 (4 votes)

Decision Theory with F@#!ed-Up Reference Classes

2018-08-22T10:10:52.170Z · score: 10 (3 votes)

Logical Counterfactuals & the Cooperation Game

2018-08-14T14:00:34.032Z · score: 17 (7 votes)

A Short Note on UDT

2018-08-08T13:27:12.349Z · score: 11 (4 votes)

Counterfactuals for Perfect Predictors

2018-08-06T12:24:49.624Z · score: 13 (5 votes)

Anthropics: A Short Note on the Fission Riddle

2018-07-28T04:14:44.737Z · score: 12 (5 votes)

The Evil Genie Puzzle

2018-07-25T06:12:53.598Z · score: 21 (8 votes)

Let's Discuss Functional Decision Theory

2018-07-23T07:24:47.559Z · score: 27 (12 votes)

The Psychology Of Resolute Agents

2018-07-20T05:42:09.427Z · score: 11 (4 votes)

Newcomb's Problem In One Paragraph

2018-07-10T07:10:17.321Z · score: 8 (4 votes)

The Prediction Problem: A Variant on Newcomb's

2018-07-04T07:40:21.872Z · score: 28 (8 votes)

What is the threshold for "Hide Low Karma"?

2018-07-01T00:24:40.838Z · score: 8 (2 votes)

The Beauty and the Prince

2018-06-26T13:10:29.889Z · score: 9 (3 votes)

Anthropics: Where does Less Wrong lie?

2018-06-22T10:27:16.592Z · score: 17 (4 votes)

Sleeping Beauty Not Resolved

2018-06-19T04:46:29.204Z · score: 16 (6 votes)

In Defense of Ambiguous Problems

2018-06-17T07:40:58.551Z · score: 8 (6 votes)

Merging accounts

2018-06-16T00:45:00.460Z · score: 6 (1 votes)

Resolving the Dr Evil Problem

2018-06-10T11:56:09.549Z · score: 11 (4 votes)

Principled vs. Pragmatic Morality

2018-05-29T04:31:04.620Z · score: 22 (4 votes)

Decoupling vs Contextualising Norms

2018-05-14T22:44:51.705Z · score: 127 (38 votes)

Hypotheticals: The Direct Application Fallacy

2018-05-09T14:23:14.808Z · score: 45 (14 votes)

Rationality and Spirituality - Summary and Open Thread

2018-04-21T02:37:29.679Z · score: 41 (10 votes)

Raven Paradox Revisited

2018-04-15T00:08:01.907Z · score: 18 (4 votes)

Have you considered either a Kickstarter or a Patreon?

2018-04-13T01:09:41.401Z · score: 23 (6 votes)

On Dualities

2018-03-15T02:10:47.612Z · score: 8 (4 votes)

Welcome to Effective Altruism Sydney

2018-03-14T23:32:31.443Z · score: 11 (2 votes)

Using accounts as "group accounts"

2018-03-09T03:44:42.322Z · score: 21 (4 votes)

Monthly Meta: Common Knowledge

2018-03-03T00:05:09.488Z · score: 47 (12 votes)

Experimental Open Threads

2018-02-26T03:13:16.999Z · score: 63 (13 votes)

Clarifying the Postmodernism Debate With Skeptical Modernism

2018-02-16T09:40:30.757Z · score: 29 (13 votes)

Monthly Meta: Referring is Underrated

2018-02-08T01:36:48.200Z · score: 42 (13 votes)

Pseudo-Rationality

2018-02-06T08:25:46.527Z · score: 54 (23 votes)

Conflict Theorists pretend to be Mistake Theorists

2018-01-28T12:06:31.307Z · score: 2 (12 votes)

Making Exceptions to General Rules

2018-01-17T23:47:09.156Z · score: 47 (20 votes)

In Defence of Meta

2018-01-08T00:43:40.399Z · score: 39 (15 votes)