Comment by chris_leong on A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans · 2019-07-17T01:19:50.044Z · score: 2 (1 votes) · LW · GW

I find it strange to say that we don't have any plan. Surely the government could set up scholarships, a research institute, or some kind of committee to look into this?

Comment by chris_leong on What would be the signs of AI manhattan projects starting? Should a website be made watching for these signs? · 2019-07-04T21:37:21.408Z · score: 6 (3 votes) · LW · GW

Would having this information actually be beneficial? Perhaps it'd be good for us to know what is going on, but it might be negative for certain governments to know about this, as it might increase the chance of an AI arms race.

Comment by chris_leong on How/would you want to consume shortform posts? · 2019-07-03T05:38:19.877Z · score: 5 (3 votes) · LW · GW

I'd suggest initially making short-form a separate section of the site, as my suspicion is that if it really is a compelling feature, it should be able to succeed on its own without homepage integration. Otherwise, it likely doesn't provide enough value to make up for the loss of nuance.

Comment by chris_leong on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T17:20:52.783Z · score: -1 (3 votes) · LW · GW

Submission (low bandwidth): This is a pretty obvious one, but: should we release an AI X that we're convinced is aligned?

Submission: Wei Dai wanted to ask about the best future posts. Why not ask about the best past posts as well to see if any major insights were overlooked?

Submission: What would I think about problem X if I had ten years to think about it?

Comment by chris_leong on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-01T17:05:48.047Z · score: 2 (3 votes) · LW · GW

What if another AI would have counterfactually written some of those posts to manipulate us?

Comment by chris_leong on How does one get invited to the alignment forum? · 2019-06-26T06:17:24.601Z · score: 2 (1 votes) · LW · GW

Well, you don't have to answer this now. You'll probably have a better idea once you've promoted a few more people.

How does one get invited to the alignment forum?

2019-06-23T09:39:20.042Z · score: 17 (7 votes)
Comment by chris_leong on Should rationality be a movement? · 2019-06-21T16:52:22.684Z · score: 10 (6 votes) · LW · GW

To be honest, I'm not happy with my response here. There was also a second simultaneous discussion topic about whether CEA was net positive, and even though I tried simplifying this into a single discussion, it seems that I accidentally mixed in part of the other discussion (the original title of this post in draft was "EA vs. rationality").

Update: I've now edited this response out.

Should rationality be a movement?

2019-06-20T23:09:10.555Z · score: 53 (22 votes)
Comment by chris_leong on Should rationality be a movement? · 2019-06-19T16:39:21.394Z · score: 5 (2 votes) · LW · GW

I also need to add some discussion of the worries about movements, but mobile is playing up for me again.

Comment by chris_leong on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-19T05:06:38.856Z · score: 2 (1 votes) · LW · GW

I didn't systematically review his work, just clicked on random articles to see how much value I could extract. Feel free to link me to any reasonably accessible articles.

Comment by chris_leong on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-16T16:18:52.956Z · score: 4 (2 votes) · LW · GW

Thanks for writing this; I've briefly attempted looking at his ideas, but most of his writing is unreadable. Most of his remaining ideas seem at least somewhat mystical, which makes me skeptical, but it's useful to know!

Comment by chris_leong on All knowledge is circularly justified · 2019-06-13T14:26:38.091Z · score: 3 (2 votes) · LW · GW

I don't have a rigorous argument against an infinite chain, but here's my current set of intuitions. Let's suppose that we have an infinite chain of reasons. Where does the chain come from? Does it pop out of nowhere? Or is there some intuition or finite collection of intuitions that we can posit as an explanation for the chain? While it's technically possible that the infinite chain could require infinitely many different intuitions to justify, this seems rather unlikely to me. What then if we accept that there is an intuition or there are intuitions behind the chain? Well, now we ask why these intuitions are reliable. And if we hit an infinite chain again, we can try the same trick, and so on until we actually find a cycle.

What kind of thing is logic in an ontological sense?

2019-06-12T22:28:47.443Z · score: 13 (4 votes)
Comment by chris_leong on Some Thoughts on Metaphilosophy · 2019-06-12T15:31:34.860Z · score: 2 (1 votes) · LW · GW

"The point here is that no matter how we measure complexity, it seems likely that philosophy would have a "high computational complexity class" according to that measure." - I disagree. The task of philosophy is to figure out how to solve the meta problem, not to actually solve all individual problems or the worst individual problem

Comment by chris_leong on Long Term Future Fund applications open until June 28th · 2019-06-12T11:51:29.900Z · score: 2 (1 votes) · LW · GW

How strongly do you think improving human meta-philosophy would improve computational meta-philosophy?

Comment by chris_leong on Long Term Future Fund applications open until June 28th · 2019-06-12T11:46:17.674Z · score: 4 (2 votes) · LW · GW

Perhaps it'd be useful if there was a group that took more of a dialectical approach, such as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well and try to help people understand the assumptions under which the project they are considering would be valuable.

Comment by chris_leong on Raven Paradox Revisited · 2019-06-12T11:31:49.945Z · score: 2 (1 votes) · LW · GW

Yeah, this is better than my example of food & medicine.

Comment by chris_leong on Liar Paradox Revisited · 2019-06-12T11:27:06.566Z · score: 2 (1 votes) · LW · GW

Unfortunately, I didn't see this comment and now I can't remember, so I just removed it and we'll never know.

Comment by chris_leong on Who Wants To Start An Important Startup? · 2019-06-12T11:14:41.635Z · score: 11 (3 votes) · LW · GW

This comment didn't age well

Comment by chris_leong on Dissolving the zombie argument · 2019-06-12T10:24:28.666Z · score: 4 (2 votes) · LW · GW

"Noting that there is a certain level of verbal confusion does not imply that there is nothing going on except verbal confusion" - I'm not claiming that verbal confusion is all that is going on, but I will admit that I could have been clearer about what I meant. You are correct that Chalmer's aim was to highlight something about consciousness and for many people discussion of zombies can be a useful way of illustrating how reductive the materialist theory of consciousness is. But from a logical standpoint, it's not really any different from the argument you'd make if you were discussing consciousness directly. So if the zombie argument is easier to grasp for you, great; otherwise you can ignore it and focus on direct discussion of consciousness instead.

Comment by chris_leong on Dissolving the zombie argument · 2019-06-11T14:50:48.719Z · score: 2 (1 votes) · LW · GW

I've had at least one or two conversations with materialists who claim that the concept of a philosophical zombie is incoherent, although I couldn't promise you that is a majority of them.

Anyway, if both parties agree on the definition of p-zombies, then they won't fall into the issue that this post is trying to help people avoid, of using the same word "zombie" in different ways and mistaking a linguistic dispute for something more substantial. Indeed, when trying to determine whether we are all zombies or none of us are zombies, we end up trying to determine whether consciousness is substantial or reductive - this moves the question away from zombies to consciousness itself.

Comment by chris_leong on Dissolving the zombie argument · 2019-06-11T12:16:49.334Z · score: 2 (1 votes) · LW · GW

How does this apply here?

Comment by chris_leong on Dissolving the zombie argument · 2019-06-11T01:19:36.882Z · score: 2 (1 votes) · LW · GW

Is your point that we have no reason to label such processes as consciousness? If so, I'd agree with you, and I actually intend to write a post on this soon.

Comment by chris_leong on Dissolving the zombie argument · 2019-06-11T00:09:22.550Z · score: 6 (3 votes) · LW · GW

On the contrary, it is reasonable for people to update in response to this argument, such as if they realise they hold views that are inconsistent. For example, if they identify as a materialist, but haven't actually thought through what the materialist view of consciousness would entail, they might discover that this is not something they actually endorse.

Comment by chris_leong on Dissolving the zombie argument · 2019-06-10T14:41:32.923Z · score: 4 (2 votes) · LW · GW

Good work with the chart. That would have taken a lot of effort!

Dissolving the zombie argument

2019-06-10T04:54:54.716Z · score: 1 (5 votes)
Comment by chris_leong on Natural Structures and Definitions · 2019-06-09T09:45:36.078Z · score: 4 (2 votes) · LW · GW

I just reread this comment, which is lucky since I failed to appreciate it the first time round. It's a very useful framing, even though I still agree with my original assessment that there really isn't a dichotomy between trying to figure out what a word means and how the world is, as you're often trying to figure out how the world is so that you can define a word in a way that is socially useful.

Comment by chris_leong on Visiting the Bay Area from 17-30 June · 2019-06-08T13:55:07.627Z · score: 4 (2 votes) · LW · GW

Hi Avturchin, I think it'll be easier to discuss off-site. Please PM me either your email address or Facebook and we can see if we can sort something out. I don't know anything about aging, but the other topics you've mentioned sound interesting. Also, what dates will you be in San Francisco?

Visiting the Bay Area from 17-30 June

2019-06-07T02:40:03.668Z · score: 18 (4 votes)
Comment by chris_leong on All knowledge is circularly justified · 2019-06-05T10:27:08.938Z · score: 2 (1 votes) · LW · GW

My response here is along the lines of Pascal's Wager: if it has no validity, then nothing has validity, since everything is circular, so we may as well assume validity. But of course that is circularly justified as well.

Comment by chris_leong on All knowledge is circularly justified · 2019-06-05T08:32:33.497Z · score: 2 (1 votes) · LW · GW

"so if you dig deeper, you hit the limits of introspection and slide into rationalizing without realizing it. I guess it's what you are saying." - that's not what I'm saying. It's almost impossible to achieve, but I think it is possible to understand the exact extent to which you are rationalising

Comment by chris_leong on All knowledge is circularly justified · 2019-06-05T08:30:35.467Z · score: 2 (1 votes) · LW · GW

Yeah, I think you're right. I read that post a while back, but forgot about it until you mentioned it. Nonetheless, I think I'll find this post useful, as he discusses a few different ideas there, while sometimes I want to pick out just one. And apart from this one post, these ideas seem to have been mainly ignored, when there should be a whole host of consequences.

Comment by chris_leong on Seeking suggestions for EA cash-prize contest · 2019-05-29T23:28:38.453Z · score: 7 (4 votes) · LW · GW

For $50-$100, it should probably all go to the winner, but you might want to divide it up if you get more responses.

As for compelling reasons not to run a cash-prize contest: there's a chance people might be discouraged from entering by a low prize (not saying this is the case, just that this is possible). That said, it sounds like an entirely appropriate amount for questions like "What is the most effective way to run a cash-prize EA essay contest?", as it doesn't require as much effort to write an essay on this topic as on other topics which would require more research.

It also might be the case that the contest needs to be run by someone with high social status or endorsed by an EA organisation in order for people to care about winning.


Comment by chris_leong on mAIry's room: AI reasoning to solve philosophical problems · 2019-05-24T13:14:56.807Z · score: 4 (2 votes) · LW · GW

I don't necessarily see those two as opposed. A good verbal explanation can provide enough information for you to simulate a formal model in your head. Obviously it'll never be as reliable as working through a formal description step by step, but often that level of reliability isn't required.

Comment by chris_leong on mAIry's room: AI reasoning to solve philosophical problems · 2019-05-24T12:54:49.987Z · score: 4 (2 votes) · LW · GW

Hmm, interesting. Now that you're stating the opposite, it's pretty clear to me that there are very particular assumptions underlying my claim that, "the valuable contribution here is not the formalisation, but the generator behind the formalisation" and maybe I should be more cautious about generalising to other people.

One of my underlying assumptions was a particular model of becoming good at maths - focusing on what ideas might allow you to generate the proof yourself, rather than trying to remember the exact steps. Of course, it is a bit parochial for me to act as though this is the "one true path".

Comment by chris_leong on Would an option to publish to AF users only be a useful feature? · 2019-05-21T05:27:41.496Z · score: 6 (3 votes) · LW · GW

That wouldn't stay secret. I'm pretty confident that someone would leak all this information at some point. But beyond this, it creates difficulties around who gets access to the Alignment Forum, as it then wouldn't just be about having sufficient knowledge to comment on these issues, but also about trust.

Comment by chris_leong on "UDT2" and "against UD+ASSA" · 2019-05-21T05:17:21.014Z · score: 2 (1 votes) · LW · GW

One aspect of UD+ASSA that is weird is that the UD is itself uncomputable. This seems to contradict the notion of assuming that everything is computable, although maybe there is a special justification that can be given for this?

I don't think the 0.91 probability is necessarily incorrect. You just have to remember that as long as you care about your family and not your experience of knowing your family is looked after, you only get paid out once in the universe, not once per copy.

Comment by chris_leong on mAIry's room: AI reasoning to solve philosophical problems · 2019-05-21T04:25:41.545Z · score: 6 (3 votes) · LW · GW

This post clearly helped a lot of other people, but it follows a pattern that many other posts on Less Wrong also follow, and which I consider negative. The valuable contribution here is not the formalisation, but the generator behind the formalisation. The core idea appears to be the following:

"Human brains contain two forms of knowledge: - explicit knowledge and weights that are used in implicit knowledge (admittedly the former is hacked on top of the later, but that isn't relevant here). Mary doesn't gain any extra explicit knowledge from seeing blue, but her brain changes some of her implicit weights so that when a blue object activates in her vision a sub-neural network can connect this to the label "blue"."

Unfortunately, there is a wall of maths that you have to wade through before this is explained to you. I feel it is much better when you provide your readers with a conceptual understanding of what is happening and only then include the formal details.

Comment by chris_leong on Offer of collaboration and/or mentorship · 2019-05-19T02:57:55.447Z · score: 2 (1 votes) · LW · GW

I mean that there isn't a property of logical counterfactuals in the universe itself. However, once we've created a model (/map) of the universe, we can then define logical counterfactuals as asking a particular question about this model. We just need to figure out what that question is.

Comment by chris_leong on Offer of collaboration and/or mentorship · 2019-05-17T22:24:28.977Z · score: 2 (1 votes) · LW · GW

You've explained the system. But what's the motivation behind this?

Even though I only have a high-level understanding of what you're doing, I generally disagree with this kind of approach on a philosophical level. It seems like you're reifying logical counterfactuals, when I see them more as an analogy, i.e. positing a logical counterfactual is an operation that takes place on the level of the map, not the territory.

Comment by chris_leong on Offer of collaboration and/or mentorship · 2019-05-16T22:46:52.643Z · score: 4 (2 votes) · LW · GW

Can you tell me more about your ideas related to logical counterfactuals? They're an area I've been working on as well.

Comment by chris_leong on Feature Request: Self-imposed Time Restrictions · 2019-05-16T10:16:55.312Z · score: 2 (1 votes) · LW · GW

I'm hugely in favour of this. There have been quite reasonable questions raised about how much Less Wrong improves us and how much it sucks up our time.

Comment by chris_leong on Coherent decisions imply consistent utilities · 2019-05-14T18:06:29.991Z · score: 4 (2 votes) · LW · GW

Okay, so there is an additional assumption that these strings are all encoded as infinite sequences. Instead, they could be encoded with a system that starts by listing the number of digits, or -1 if the sequence is infinite, and then provides those digits. That's a pretty key property to not mention (then again, I can't criticise too much as I was too lazy to read the PDF). Thanks for the explanation!
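
To illustrate why that encoding matters, here's a minimal sketch of my own (not from the thread; the function name and the 1000-digit cutoff are arbitrary choices). Under the infinite-sequence encoding, any procedure that halts has only inspected a finite prefix of its input, so it can't certify that the input is exactly the point x=1 rather than a sequence that deviates further out, which is why it can't implement a jump at a single point.

```python
from itertools import islice, repeat, chain

def looks_like_all_zeros(stream, budget=1000):
    """Attempt to decide whether an infinite 0/1 stream is all zeros.

    Any procedure that halts can only examine finitely many terms (capped
    here by `budget`), so its answer depends only on a finite prefix.
    """
    return all(bit == 0 for bit in islice(stream, budget))

all_zeros = repeat(0)                                    # e.g. the digits of exactly 1.000...
deviates_late = chain(repeat(0, 5000), [1], repeat(0))   # agrees on the first 5000 digits

print(looks_like_all_zeros(all_zeros))      # True
print(looks_like_all_zeros(deviates_late))  # also True: the deviation lies past the prefix we saw
```

Since both inputs agree on the finite prefix actually examined, a computable rule can't assign them different outputs (p vs. p+1) based on exact equality with the point.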

Comment by chris_leong on Physical linguistics · 2019-05-14T02:54:08.262Z · score: 5 (3 votes) · LW · GW

I agree that "Why is this rock this rock instead of that rock?" is a good place to start, even if they aren't perfectly analogous. Now, it isn't entirely clear what is being asked. The first question that we could be asking is: "Why is this rock the way that it is instead of the way that rock is?", in which case we could talk about the process of rock formation and the rock's specific history. Another question we could be asking is, "Why is this rock here at this time instead of that rock?" and again we'd be talking about history and how people or events moved it. We could even make anthropic arguments, "This rock isn't a million degrees because if it were that hot it would not longer be a rock" or "This rock isn't a diamond and this is unsurprising as they are rare". Here we'd be asking, "Given a random rock, why are we most likely to be observing certain characteristics?"

One difference with the human example is that the human is asking the question, "Why am I me instead of someone else?" So you can also reason about your likely properties on the basis of being the kind of being who is asking that question. Here the question is interpreted as, "Why is the entity asking this question this entity instead of another entity?".

Another issue which becomes clearer is the symmetry. Barack Obama might ask, "Why am I me instead of the Pope?" whilst at the same time the Pope asks, "Why am I me instead of Barack Obama?". So even if you had been someone else, you might very well have been asking the same question. I think this ties well into the notion of surprise. Let's suppose a million people receive a social security number and you receive 235,104. You might argue, "How surprising, there was only a one in a million chance of receiving this number!". However, you could have said this regardless of which number you'd been given, so it isn't that surprising after all.

Another question that could be asked is, "Why is my consciousness receiving the qualia (subjective experience) from this physical body?" In this case, the answer depends on your metaphysics. Materialists would say this is a mistaken question as qualia don't exist. Christianity might say it's because God chose to attach this soul to this body. Other spiritual theories might have souls floating around which inhabit any body which is free (although this raises questions such as: what if no soul chooses to inhabit a body, and which soul gets to inhabit which body?). Lastly, there are theories like property dualism where consciousness is a result of the mental properties of particles, so that the consciousness corresponding to any one particular body couldn't be attached to anyone else without breaking the laws of the universe. So as described in my post Natural Structures and Definitions, this last interpretation is one of those questions that is conditionally meaningful to ask.

Comment by chris_leong on Coherent decisions imply consistent utilities · 2019-05-14T02:09:04.395Z · score: 2 (1 votes) · LW · GW

Hmm, I'm still not following. Limits are uncomputable in general, but I just need one computable function where I know the limits at one point, and then I can set it to p+1 instead. Why wouldn't this function still be computable? Maybe "computable function" is being defined differently than I would expect.

Comment by chris_leong on Coherent decisions imply consistent utilities · 2019-05-13T04:29:45.463Z · score: 5 (5 votes) · LW · GW

My understanding of the arguments against using a utility maximiser is that proponents accept that this will lead to sub-optimal or dominated outcomes, but they are happy to accept this because they believe that these AIs will be easier to align. This seems like a completely reasonable trade-off to me. For example, imagine that choosing option A is worth 1 utility. Option B is worth 1.1 utility if 100 mathematical statements are all correct, but -1000 otherwise (we are ignoring the costs of reading through and thinking about all 100 mathematical statements). Even if each of the statements seems obviously correct, there is a decent chance that you messed up on at least 1 of them, so you'll most likely want to take the outside view and pick option A. So I don't think it's necessarily an issue if the AI is doing things that are obviously stupid from an inside view.
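
As a rough worked version of that example (my own numbers: I'm assuming, purely for illustration, that each of the 100 statements independently has a 99% chance of being correct):

```python
# Worked version of the option A vs. option B comparison above, under the
# illustrative assumption that each of the 100 statements independently has a
# 99% chance of being correct (that probability is my own addition).

p_each_correct = 0.99
p_all_correct = p_each_correct ** 100          # ~0.366

expected_A = 1.0
expected_B = p_all_correct * 1.1 + (1 - p_all_correct) * (-1000)

print(round(p_all_correct, 3))   # ~0.366
print(expected_A)                # 1.0
print(round(expected_B, 1))      # ~-633.6
```

Under that assumption the "obviously correct" chain still fails more than 60% of the time, so the outside view favours option A by a wide margin.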

Comment by chris_leong on Coherent decisions imply consistent utilities · 2019-05-13T04:19:36.345Z · score: 4 (2 votes) · LW · GW

"Because all computable functions are continuous" - how does this make any sense? Why can't I just pick a value x=1 and if it's left limit and right limit are p, set the function to p+1 at x=1.

Comment by chris_leong on Coherent decisions imply consistent utilities · 2019-05-13T04:15:54.303Z · score: 2 (1 votes) · LW · GW

"Is a fleeting emotional sense of certainty over 1 minute, worth automatically discarding the potential $5-million outcome?" - I know it's mostly outside of what is being modelled here, but suspect that someone who takes the 90% bet and wins nothing might experience much more than just a fleeting sense of disappointment, much more than someone who takes the 45% chance and doesn't win.

Comment by chris_leong on Narcissism vs. social signalling · 2019-05-12T09:23:13.608Z · score: 2 (1 votes) · LW · GW

Do you have any empirical evidence?

Comment by chris_leong on Mixed Reference: The Great Reductionist Project · 2019-05-12T09:19:25.278Z · score: 2 (1 votes) · LW · GW

This is a really good point, I'm disappointed that he didn't respond to it.

Comment by chris_leong on Probability interpretations: Examples · 2019-05-12T08:42:33.358Z · score: 5 (3 votes) · LW · GW

"The propensity and frequentist views regard as nonsense the notion that we could talk about the probability of a mathematical fact" - couldn't a frequentist define a reference class using all the digits of Pi? And then assume that the person knows nothing about Pi so that they throw away the place of the digit?

Comment by chris_leong on Narcissism vs. social signalling · 2019-05-12T04:23:00.893Z · score: 2 (1 votes) · LW · GW

What did he believe changed?

Narcissism vs. social signalling

2019-05-12T03:26:31.552Z · score: 15 (7 votes)
Comment by chris_leong on Natural Structures and Definitions · 2019-05-01T10:29:11.993Z · score: 2 (1 votes) · LW · GW

Sure, you have to be using the word in some way, but there's no guarantee that there's a meaningful concept that can be extracted from it, rather than the term just being used in ways that are hopelessly confused.

Comment by chris_leong on Natural Structures and Definitions · 2019-05-01T09:25:36.206Z · score: 2 (1 votes) · LW · GW

"You don't need to guess what someone means, or what level of discussion they're looking for" - yes, that is part of the point of providing possible interpretations - to help with the clarification

Natural Structures and Definitions

2019-05-01T00:05:35.698Z · score: 21 (8 votes)

Liar Paradox Revisited

2019-04-17T23:02:45.875Z · score: 11 (3 votes)

Agent Foundation Foundations and the Rocket Alignment Problem

2019-04-09T11:33:46.925Z · score: 13 (5 votes)

Would solving logical counterfactuals solve anthropics?

2019-04-05T11:08:19.834Z · score: 23 (-2 votes)

Is there a difference between uncertainty over your utility function and uncertainty over outcomes?

2019-03-18T18:41:38.246Z · score: 15 (4 votes)

Deconfusing Logical Counterfactuals

2019-01-30T15:13:41.436Z · score: 26 (9 votes)

Is Agent Simulates Predictor a "fair" problem?

2019-01-24T13:18:13.745Z · score: 22 (6 votes)

Debate AI and the Decision to Release an AI

2019-01-17T14:36:53.512Z · score: 10 (4 votes)

Which approach is most promising for aligned AGI?

2019-01-08T02:19:50.278Z · score: 6 (2 votes)

On Abstract Systems

2019-01-06T23:41:52.563Z · score: 15 (9 votes)

On Disingenuity

2018-12-26T17:08:47.138Z · score: 33 (16 votes)

Best arguments against worrying about AI risk?

2018-12-23T14:57:09.905Z · score: 15 (7 votes)

What are some concrete problems about logical counterfactuals?

2018-12-16T10:20:26.618Z · score: 26 (6 votes)

An Extensive Categorisation of Infinite Paradoxes

2018-12-13T18:36:53.972Z · score: -2 (23 votes)

No option to report spam

2018-12-03T13:40:58.514Z · score: 38 (14 votes)

Summary: Surreal Decisions

2018-11-27T14:15:07.342Z · score: 27 (6 votes)

Suggestion: New material shouldn't be released too fast

2018-11-21T16:39:19.495Z · score: 24 (8 votes)

The Inspection Paradox is Everywhere

2018-11-15T10:55:43.654Z · score: 26 (7 votes)

One Doubt About Timeless Decision Theories

2018-10-22T01:39:57.302Z · score: 15 (7 votes)

Formal vs. Effective Pre-Commitment

2018-08-27T12:04:53.268Z · score: 9 (4 votes)

Decision Theory with F@#!ed-Up Reference Classes

2018-08-22T10:10:52.170Z · score: 10 (3 votes)

Logical Counterfactuals & the Cooperation Game

2018-08-14T14:00:34.032Z · score: 17 (7 votes)

A Short Note on UDT

2018-08-08T13:27:12.349Z · score: 11 (4 votes)

Counterfactuals for Perfect Predictors

2018-08-06T12:24:49.624Z · score: 13 (5 votes)

Anthropics: A Short Note on the Fission Riddle

2018-07-28T04:14:44.737Z · score: 12 (5 votes)

The Evil Genie Puzzle

2018-07-25T06:12:53.598Z · score: 21 (8 votes)

Let's Discuss Functional Decision Theory

2018-07-23T07:24:47.559Z · score: 27 (12 votes)

The Psychology Of Resolute Agents

2018-07-20T05:42:09.427Z · score: 11 (4 votes)

Newcomb's Problem In One Paragraph

2018-07-10T07:10:17.321Z · score: 8 (4 votes)

The Prediction Problem: A Variant on Newcomb's

2018-07-04T07:40:21.872Z · score: 28 (8 votes)

What is the threshold for "Hide Low Karma"?

2018-07-01T00:24:40.838Z · score: 8 (2 votes)

The Beauty and the Prince

2018-06-26T13:10:29.889Z · score: 9 (3 votes)

Anthropics: Where does Less Wrong lie?

2018-06-22T10:27:16.592Z · score: 17 (4 votes)

Sleeping Beauty Not Resolved

2018-06-19T04:46:29.204Z · score: 18 (7 votes)

In Defense of Ambiguous Problems

2018-06-17T07:40:58.551Z · score: 8 (6 votes)

Merging accounts

2018-06-16T00:45:00.460Z · score: 6 (1 votes)

Resolving the Dr Evil Problem

2018-06-10T11:56:09.549Z · score: 11 (4 votes)

Principled vs. Pragmatic Morality

2018-05-29T04:31:04.620Z · score: 22 (4 votes)

Decoupling vs Contextualising Norms

2018-05-14T22:44:51.705Z · score: 129 (39 votes)

Hypotheticals: The Direct Application Fallacy

2018-05-09T14:23:14.808Z · score: 47 (14 votes)

Rationality and Spirituality - Summary and Open Thread

2018-04-21T02:37:29.679Z · score: 41 (10 votes)

Raven Paradox Revisited

2018-04-15T00:08:01.907Z · score: 18 (4 votes)

Have you considered either a Kickstarter or a Patreon?

2018-04-13T01:09:41.401Z · score: 23 (6 votes)

On Dualities

2018-03-15T02:10:47.612Z · score: 8 (4 votes)