What is the strongest argument you know for antirealism?

post by Michele Campolo · 2021-05-12T10:53:33.152Z · LW · GW · 58 comments

This is a question post.

Other questions you can answer:

What is the strongest argument against moral realism?

If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?

I am interested in these arguments because I am trying to guess the behaviour of an AI system that, roughly speaking:
1) knows a lot about the physical world;
2) has some degree of control over its own actions and what goals to pursue—something like the human brain.
(See this [LW · GW] if you want more details.)

If you could also write the precise statement about realism/antirealism that you are arguing against/for, that would be great. Thanks!

Answers

answer by Kaj_Sotala · 2021-05-12T14:52:26.344Z · LW(p) · GW(p)

My position is something like "I haven't yet seen anyone compellingly both define and argue for moral realism, so until then the whole notion seems confused to me".

It is unclear to me what it would even mean for a moral claim to actually be objectively true or false. At the same time, there are many evolutionary and game-theoretical reasons for why various moral claims would feel objectively true or false to human minds, and that seems sufficient for explaining why many people have an intuition of moral realism being true. I have also personally found some of my moral beliefs changing as a result of psychological work - see the second example here [LW · GW] - which makes me further inclined to believe that moral beliefs are all psychological (and thus subjective, as I understand the term).

So my argument is simply that there doesn't seem to be any reason for me to believe in moral realism, somewhat analogous to how there doesn't seem to be any reason for me to believe in a supernatural God.

comment by Jay · 2021-05-12T22:17:57.673Z · LW(p) · GW(p)

I think a simpler way to state the objection is to say that "value" and "meaning" are transitive verbs.  I can value money; Steve can value cars; Mike can value himself.  It's not clear what it would even mean for objective reality to value something.  Similarly, a subject may "mean" a referent to an interpreter, but nothing can just "mean" or even "mean something" without an implicit interpreter, and "objective reality" doesn't seem to be the sort of thing that can interpret.

comment by Capybasilisk · 2021-05-13T03:40:36.437Z · LW(p) · GW(p)

I guess you could posit natural selection as being objective reality's value system, but I have the feeling that's not the kind of thing moral realists have in mind.

comment by Jay · 2021-05-15T16:44:44.320Z · LW(p) · GW(p)

Indeed.  A certain coronavirus has recently achieved remarkable gains in Darwinist terms, but this is not generally considered a moral triumph.  Quite the opposite, as a dislike for disease is a near-universal human value.

It is often tempting to use near-universal human values as a substitute for objective values, and sometimes it works.  However, such values are not always internally consistent because humanity isn't.  Values such as disease prevention came into conflict with other values such as prosperity during the pandemic, with some people supporting strict lockdowns and others supporting a return to business as usual.

And there are words such as "justice" which refer to ostensibly near-universal human values, except that people don't always agree on what those values are or what they demand in any specific case.

comment by Michele Campolo · 2021-05-13T10:34:25.156Z · LW(p) · GW(p)

How do you feel about:

1. There is a procedure/algorithm that doesn't seem biased towards a particular value system, such that a class of AI systems that implement it end up having a common set of values, and they endorse the same values upon reflection.

2. This set of values might have something in common with what we, humans, call values.

If 1 and 2 seem at least plausible or conceivable, why can't we use them as a basis to design aligned AI? Is it because of skepticism towards 1 or 2?

comment by TAG · 2021-05-13T14:25:01.339Z · LW(p) · GW(p)

The "might" in 2. Implies a "might not".

comment by Kaj_Sotala · 2021-05-13T13:17:38.592Z · LW(p) · GW(p)

It seems very hard for me to imagine how one could create a procedure that wasn't biased towards a particular value system. E.g. Stuart Armstrong has written about how humans can be assigned any values whatsoever [? · GW] - you have to decide what parts of their behavior are because of genuine preferences and what parts are because of irrationality, and what values that implies. And the way you decide what's correct behavior and what's irrationality seems like the kind of a choice that will depend on your own values. Even something like "this seems like the simplest way of assigning preferences" presupposes that it is valuable to pick a procedure based on its simplicity - though the post argues that even simplicity would fail to distinguish between several alternative ways of assigning preferences.

Of course, just because we can't be truly unbiased doesn't mean we couldn't be less biased [LW · GW], so maybe something like "pick the simplest system that produces sensible agents, distinguishing between ties at random" could arguably be the least biased alternative. But human values seem quite complex [? · GW]; if there was some simple and unbiased solution that would produce convergent values to all AIs that implemented it, it might certainly have something in common with what we call values, but that's not a very high bar. There's a sense in which all the bacteria share the same goal, "making more (surviving) copies of yourself is the only thing that matters", and I'd expect the convergent value system to end up as being something like that. That has some resemblance to human values, since many humans also care about having offspring, but not very much.
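The underdetermination point can be sketched in a few lines of toy code (a hypothetical illustration of my own, not Armstrong's actual formalism; the states and actions are made up): the same observed behavior is consistent with opposite (rationality, values) decompositions.

```python
# Toy illustration: behavior alone underdetermines values, because the
# same observed policy admits several (planner, reward) decompositions.

def policy(state):
    """The only observable: what the agent does in each state."""
    return "left" if state % 2 == 0 else "right"

# Decomposition 1: the agent is fully rational and values these actions.
def reward_1(state, action):
    return 1.0 if action == policy(state) else 0.0

def planner_1(state, reward):
    # Rational planner: pick the action maximizing reward.
    return max(["left", "right"], key=lambda a: reward(state, a))

# Decomposition 2: the agent is fully *anti*-rational and values the opposite.
def reward_2(state, action):
    return -reward_1(state, action)

def planner_2(state, reward):
    # Anti-rational planner: pick the action minimizing reward.
    return min(["left", "right"], key=lambda a: reward(state, a))

# Both decompositions reproduce the observed behavior exactly,
# yet they attribute opposite values to the agent.
for s in range(6):
    assert planner_1(s, reward_1) == policy(s) == planner_2(s, reward_2)
```

Any rule for picking between such decompositions (e.g. "prefer the simpler reward") is itself a value-laden choice, which is the point above.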

answer by johnswentworth · 2021-05-12T16:26:52.307Z · LW(p) · GW(p)

There's a counterargument-template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"

Applied to the most strawmannish version of moral realism, this might say something like "Suppose the ground-truth source of morality is a set of stone tablets inscribed with rules. If one day someone finds the tablets, examines them, and notices some previously-overlooked text at the bottom saying that it's good to torture babies, would you then accept this truth and spend your resources to torture babies? Does the tablets saying it's good actually make it good?"

Applied to a stronger version of moral realism, it might say something like "Suppose the ground-truth source of morality is game-theoretic cooperation. If it turns out that, in our universe, we can best cooperate with most other beings by torturing babies (perhaps as a signal that we are willing to set aside our own preferences in order to cooperate [LW · GW]), would you then accept this truth and spend your resources to torture babies? Does the math saying it's good actually make it good?"

The point of these templated examples is not that the answer is obviously "no". (Though "no" is definitely my answer.) A true moral realist will likely respond by saying "yes, but I do not believe that X would actually say that". That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?

(Often, the reasoning goes something like "I'm fairly confident that torturing babies is bad, therefore I'm fairly confident that the ground-truth source of morality will say it's bad". But then we have to ask: why are my beliefs about morality evidence for the ground-truth? What physical process entangled these two? If the ground-truth source had given the opposite answer, would I currently believe the opposite thing?)

In the strawmannish case of the stone tablets, there is pretty obviously no causal link. Humans' care for babies' happiness seems to have arisen for evolutionary fitness reasons; it would likely be exactly the same if the stone tablets said something different.

In the case of game-theoretic cooperation, one could argue that evolution itself is selecting according to the game-theoretic laws in question. On the other hand, thou art godshatter [LW · GW], and also evolution is entirely happy to select for eating other people's babies [LW · GW] in certain circumstances. The causal link between game-theoretic cooperation and our particular evolved preferences is unreliable at best.

At this point, one could still self-consistently declare that the ground-truth source is still correct, even if one's own intuitions are an unreliable proxy. But I think most moral realists would update away from the position if they understood, on a gut level, just how often their preferred ground-truth source diverges from their moral intuitions. Most just haven't really attacked the weak points of that belief. (And in fact, if they would update away upon learning that the two diverge, then they are not really moral realists, regardless of whether the two do diverge much.)

Side-note: Three Worlds Collide [LW · GW] is a fun read, and is not-so-secretly a great thinkpiece on moral realism.

comment by Michele Campolo · 2021-05-12T17:23:54.850Z · LW(p) · GW(p)

Thank you for the detailed answer! I'll read Three Worlds Collide.

That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?

I think a world full of people who are always blissed out is better than a world full of people who are always depressed or in pain. I don't have a complete ordering over world-histories, but I am confident in this single preference, and if someone called this "objective value" or "moral truth" I wouldn't say they are clearly wrong. In particular, if someone told me that there exists a certain class of AI systems that end up endorsing the same single preference, and that these AI systems are way less biased and more rational than humans, I would find all that plausible. (Again, compare this [AF · GW] if you want.) 

Now, why do I think this?

I am a human and I am biased by my own emotional system, but I can still try to imagine what would happen if I stopped feeling emotions. I think I would still consider the happy world more valuable than the sad world. Is this proof that objective value is a thing? Of course not. At the same time, I can also imagine an AI system thinking: "Look, I know various facts about this world. I don't believe in golden rules written in fire etched into the fabric of reality, or divine commands about what everyone should do, but I know there are some weird things that have conscious experiences and memory, and this seems something valuable in itself. Moreover, I don't see other sources of value at the moment. I guess I'll do something about it." (Taken from this comment [AF(p) · GW(p)])

comment by johnswentworth · 2021-05-12T18:04:43.278Z · LW(p) · GW(p)

Look, I know various facts about this world. I don't believe in golden rules written in fire etched into the fabric of reality, or divine commands about what everyone should do, but I know there are some weird things that have conscious experiences and memory, and this seems something valuable in itself. Moreover, I don't see other sources of value at the moment. I guess I'll do something about it.

Why would something which doesn't already have values be looking for values? Why would conscious experiences and memory "seem valuable" to a system which does not have values already? Seems like having a "source of value" already is a prerequisite to something seeming valuable - otherwise, what would make it seem valuable?

At the very least, we have strong theoretical reasoning models (like Bayesian reasoners, or Bayesian EU maximizers), which definitely do not go looking for values to pursue, or adopt new values.

comment by Michele Campolo · 2021-05-12T18:58:03.082Z · LW(p) · GW(p)

At the very least, we have strong theoretical reasoning models (like Bayesian reasoners, or Bayesian EU maximizers), which definitely do not go looking for values to pursue, or adopt new values.

This does not imply one cannot build an agent that works according to a different framework. VNM Utility maximization requires a complete ordering of preferences, and does not say anything about where the ordering comes from in the first place.
(But maybe your point was just that our current models do not "look for values")
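The VNM point can be made concrete with a toy sketch (my own illustration; the outcomes are made up): the utility maximizer consumes a complete preference ordering as input, and the framework is silent about where that ordering comes from.

```python
# Minimal sketch: VNM-style utility maximization takes a complete
# preference ordering as *input*; it says nothing about its origin.

def utility_from_ordering(ordering):
    """Build a utility function from a complete ranking (worst to best).

    Any order-preserving assignment of numbers works; we use the index.
    """
    rank = {outcome: i for i, outcome in enumerate(ordering)}
    return lambda outcome: rank[outcome]

def choose(options, utility):
    """An EU-maximizer's choice rule over (deterministic) options."""
    return max(options, key=utility)

# The ordering itself is an unexplained free parameter:
u = utility_from_ordering(["pain", "neutral", "bliss"])
assert choose(["pain", "bliss"], u) == "bliss"

# An agent with the reversed ordering is just as valid within the framework:
u_rev = utility_from_ordering(["bliss", "neutral", "pain"])
assert choose(["pain", "bliss"], u_rev) == "pain"
```

Nothing in `choose` constrains which ordering gets fed in, which is why the framework alone cannot settle where values come from.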

Why would something which doesn't already have values be looking for values? Why would conscious experiences and memory "seem valuable" to a system which does not have values already? Seems like having a "source of value" already is a prerequisite to something seeming valuable - otherwise, what would make it seem valuable?

An agent could have a pre-built routine or subagent that has a certain degree of control over what the other subagents do—in a sense, it determines the "values" of the rest of the system. If this routine looks unbiased / rational / valueless, we have a system that treats some things as valuable (acts to pursue them) without having a pre-value, or at least with a pre-value that doesn't look like anything we would normally consider a value.

comment by johnswentworth · 2021-05-12T20:00:26.226Z · LW(p) · GW(p)

We do have real-world examples of things which do not themselves have anything humans would typically consider values, but do determine the values of the rest of some system. Evolution determining human values is a good example: evolution does not itself care about anything, yet it produced human values. Of course, if we just evolve some system, we don't expect it to robustly end up with Good values - e.g. the Babyeaters (from Three Worlds Collide) are a plausible outcome as well. Just because we have a value-less system which produces values, does not mean that the values produced are Good.

This example generalizes: we have some subsystem which does not itself contain anything we'd consider values. It determines the values of the rest of the system. But then, what reason do we have to expect that the values produced will be Good? The most common reason to believe such a thing is to predict that the subsystem will produce values similar to our own moral intuitions. But if that's the case, then we're using our own moral intuitions as the source-of-truth to begin with, which is exactly the opposite of moral realism.

To reiterate: the core issue with this setup is why we expect the value-less subsystem to produce something Good. How could we possibly know that, without using some other source-of-truth about Goodness to figure it out?

comment by Michele Campolo · 2021-05-13T09:59:38.040Z · LW(p) · GW(p)

"How the physical world works" seems, to me, a plausible source-of-truth. In other words: I consider some features of the environment (e.g. consciousness) as a reason to believe that some AI systems might end up caring about a common set of things, after they've spent some time gathering knowledge about the world and reasoning. Our (human) moral intuitions might also be different from this set.

comment by Diagonalore · 2021-05-12T23:40:30.431Z · LW(p) · GW(p)

There's a counterargument-template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"

I'm not sure if I'm able to properly articulate my thoughts on this but I'd be interested to know if it's understandable and where it might fit. Sorry if I repeat myself.

From my perspective, it's like applying a similar template to verify/refute the cogito.

I know consciousness exists because I'm conscious of it. If you asked me if I'd accept the truth that I'm not conscious, supposing this were the result of the cogito, I'd consider that question incoherent.

If someone concluded that they're not conscious, by leveraging consciousness to assess whether they're conscious, then I could only conclude that they misunderstand consciousness.

My version of moral realism would be similar. The existence of positive and negative moral value is effectively self evident to all beings affected by such values.

To me, saying "what if the ground truth of morality is that (all else equal) an instance of suffering is preferable to its absence" is like saying "what if being conscious of one's own experience isn't necessarily evidence for consciousness."

comment by johnswentworth · 2021-05-13T01:02:45.288Z · LW(p) · GW(p)

I actually don't think this is a statement of moral realism; I think it's a statement of moral nonrealism. Roughly speaking, you're saying that the ground-truth source of values is the self-evidence of those values to agents holding them. If some other agents hold some other values, then those other values can presumably seem just as self-evident to those other agents. (And of course we humans would then say that those other agents are immoral.)

This all sounds functionally-identical to moral nonrealism. In particular, it gives us no reason at all to expect some alien intelligence or AI to converge to similar values to humans, and it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.

comment by Diagonalore · 2021-05-13T05:35:08.862Z · LW(p) · GW(p)

I appreciate your input; these are my first two comments here, so apologies if I'm out of line at all.

>Roughly speaking, you're saying that the ground-truth source of values is the self-evidence of those values to agents holding them. 

In the same way that the ground-truth proof for the existence of conscious experience comes from conscious experience. This doesn't imply that consciousness is any less real, even if it means that one agent can't entirely assess the "realness" of another agent's claims to be experiencing consciousness. Agents can also be mistaken about the self-evident nature/scope of certain things relevant to consciousness, and other agents can justifiably reject the inherent validity of those claims; however, those points don't suggest doubting that the existence of consciousness can be arrived at self-evidently.

For example, someone might suggest that it is self-evident that a particular course of events occurred because they have a clear memory of it happening. Obviously they're wrong to call that self-evident, and you could justifiably dismiss their level of confidence.

Similarly, I'm not suggesting that any given moral value held to be self-evident should be considered as such, just that the realness of morality is arrived at self-evidently.

I realise that probably makes it sound like I'm trying to rationalise attributing the awareness of moral reality to some enlightened subset who I happen to agree with, but I'm suggesting there's a common denominator which all morally relevant agents are inherently cognizant of. I think experiencing suffering is sufficient evidence for the existence of real moral truth value. 

If an alien intelligence claimed to prefer to experience suffering on net, I think it would be a faulty translation or a deception, in the same sense as if an alien intelligence claimed to exhibit a variety of consciousness that precluded experiential phenomenon.

>it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.

Does moral realism necessarily imply that a sufficiently intelligent system can bootstrap moral knowledge without evidence derived via conscious agents? That isn't obvious to me. 

comment by ChristianKl · 2021-05-13T07:55:44.395Z · LW(p) · GW(p)

In this debate, "real" means objective, which means something like independent of observers. Consciousness is dependent on you observing it, and the idea that you could be conscious without observing it seems incoherent.

The moral realist position is that it's coherent to say that there are things that have moral value even if there's no observer that judges them to have moral value.

comment by Jay · 2021-05-15T16:57:31.012Z · LW(p) · GW(p)

I'm suggesting there's a common denominator which all morally relevant agents are inherently cognizant of.

This naturally raises the question of whether people who don't agree with you are not moral agents or are somehow so confused or deceitful that they have abandoned their inherent truth.  I've heard the second version stated seriously in my Bible-belt childhood; it didn't impress me then.  The first just seems ... odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinction).

Any position claiming that everyone, deep down, agrees tends to founder on the observation that we simply don't - or to seem utterly banal (because everyone agrees with it).

answer by Yair Halberstadt · 2021-05-12T14:30:53.928Z · LW(p) · GW(p)

If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?

I think that's the wrong way round. If you want to claim things have some property, then you have to put forward evidence they do. My strongest argument that things do not objectively have value is, why on earth would you think they do?

It's also clear that this discussion is fruitless. The only way to make progress will be to give some sort of definition for "objective value" at which point this will degenerate into an argument about semantics.

comment by Michele Campolo · 2021-05-12T16:15:18.475Z · LW(p) · GW(p)

I didn't want to start a long discussion. My idea was to get some quick feedback to see whether I was missing important ideas I hadn't considered.

answer by Dagon · 2021-05-12T16:19:21.644Z · LW(p) · GW(p)

The strongest argument I know of for this is "that's the default, simplest explanation".  My prior is quite low that there's any external force which values or judges things.  I have yet to see any evidence that there is such a thing.

Mostly: it's hard to prove a negative, but you shouldn't have to, unless there's some positive evidence to explain otherwise.

answer by ike · 2021-05-12T12:22:37.704Z · LW(p) · GW(p)

My own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism [LW · GW] and the post it links back to.

It seems that if external reality is meaningless, then it's difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.

comment by Michele Campolo · 2021-05-12T16:34:27.011Z · LW(p) · GW(p)

That is an interesting point. More or less, I agree with this sentence in your first post:

As far as I can tell, we can do science just as well without assuming that there's a real territory out there somewhere.

in the sense that one can do science by speaking only about their own observations, without making a distinction between what is observed and what "really exists".

On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine. How does this fit in your framework? (Might be irrelevant, sorry if I misunderstood)

comment by ike · 2021-05-12T17:13:32.530Z · LW(p) · GW(p)

>On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine.

That's just part of my model. To the extent that empathy of this nature is useful for predicting what other people will do, that's a useful thing to have in a model. But to then say "other people have subjective experiences somewhere 'out there' in external reality" seems meaningless - you're just asserting your model is "real", which is a category error in my view. 

comment by TAG · 2021-05-12T17:19:50.603Z · LW(p) · GW(p)

you’re just asserting your model is “real”, which is a category error in my view.

"The model is the territory" is a category error, but "the model accurately represents the territory" is not.

comment by ike · 2021-05-12T17:21:00.906Z · LW(p) · GW(p)

What does it mean for a model to "represent" a territory?

comment by TAG · 2021-05-12T17:38:14.068Z · LW(p) · GW(p)

You're assuming that the words you are using can represent ideas in your head.

comment by ike · 2021-05-12T18:46:20.276Z · LW(p) · GW(p)

Not at all, to the extent that my head is a territory.

comment by TAG · 2021-05-12T19:29:23.786Z · LW(p) · GW(p)

Tell me what you are doing, then.

comment by ike · 2021-05-12T20:13:00.454Z · LW(p) · GW(p)

I'm communicating, which I don't have a fully general account of, but is something I can do and has relatively predictable effects on my experiences. 

comment by TAG · 2021-05-12T20:36:38.824Z · LW(p) · GW(p)

Your objection to representation was that there is no account of it.

comment by ike · 2021-05-12T21:03:10.891Z · LW(p) · GW(p)

Yes, it appears meaningless; I and others have tried hard to figure out a possible account of it.

I haven't tried to get a fully general account of communication but I'm aware there's been plenty of philosophical work, and I can see partial accounts that work well enough.

comment by TAG · 2021-05-12T21:14:54.070Z · LW(p) · GW(p)

You're implicitly assuming it works by using it. So why can't I assume that representation works, somehow?

comment by ike · 2021-05-12T21:23:21.046Z · LW(p) · GW(p)

I know what successful communication looks like. 

What does successful representation look like? 

comment by TAG · 2021-05-13T17:08:11.839Z · LW(p) · GW(p)

Communication uses symbols, which are representations.

answer by Waddington · 2021-05-12T21:43:09.467Z · LW(p) · GW(p)

Moral realism:

I think determinism qualifies. Morality implies right versus wrong which implies the existence of errors. If everything is predetermined according to initial conditions, the concept of error becomes meaningless. You can't correct your behavior any more than an atom on Mars can; que sera, sera. Everything becomes the consequence of the initial conditions of the universe at large and so morality becomes inconsequential. You can't even change your mind on this topic because the only change possible is that dictated by initial conditions. If you imagine that you can, you do so because of the causal chain of events that necessitated it.

There's no rationality or irrationality either because these concepts imply, once again, the possibility of errors in a universe that can't err.

You're an atheist? Not your choice. You're a theist? Not your choice. You disagree with this sentiment? Again; que sera, sera.

How can moral realism be defended in a universe where no one is responsible for anything?

comment by Michele Campolo · 2021-05-13T09:40:33.106Z · LW(p) · GW(p)

I disagree. Determinism doesn't make the concepts of "control" or "causation" meaningless. It makes sense to say that, to a certain degree, you often can control your own attention, while in other circumstances you can't: if there's a really loud sound near you, you are somewhat forced to pay attention to it.

From there you can derive a concept of responsibility, which is used e.g. in law. I know that the book Actual Causality focuses on these ideas (but there might be other books on the same topics that are easier to read or simply better in their exposition).

comment by Waddington · 2021-05-13T16:07:54.784Z · LW(p) · GW(p)

That only works if you reject determinism. If the initial conditions of the universe resulted in your decision by necessity, then it's not your decision, is it?

answer by weathersystems · 2021-05-12T15:16:25.662Z · LW(p) · GW(p)

If you're just looking for the arguments, this is what you're looking for:
https://plato.stanford.edu/entries/moral-anti-realism

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?

What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?

comment by Michele Campolo · 2021-05-12T17:35:11.524Z · LW(p) · GW(p)

I can't say I am an expert on realism and antirealism, but I have already spent time on metaethics textbooks and learning about metaethics in general. With this question I wanted to get an idea of what are the main arguments on LW, and maybe find new ideas I hadn't considered.

What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?

I see a relation with realism. If certain pieces of knowledge about the physical world (how human and animal cognition works) can motivate a class of agents that we would also recognise as unbiased and rational, that would be a form of altruism that is not instrumental and not related to game theory.

answer by TAG · 2021-05-12T12:39:29.841Z · LW(p) · GW(p)

If you think nothing is “valuable in itself” / “objectively valuable”, why do you think so?

Value isn't a physical property, even an emergent one.

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn’t make any sense?

That's a different question. Rationality is defined in terms of values, but they don't have to be objective values. There can even be facts about how ethics should work, but, in view of the above, they would be facts of a game-theoretic sort, not facts by virtue of correspondence to moral properties. If you want to be altruistic, then the buck stops there, and it makes sense for you -- where "makes sense" means instrumental rationality. But altruism isn't a fact about the world that you are compelled to believe by epistemic rationality.

comment by Michele Campolo · 2021-05-12T14:21:56.152Z · LW(p) · GW(p)

Thanks for your answer, but I am looking for arguments, not just statements or opinions. How do you know that value is not a physical property? What do you mean when you say that altruism is not a consequence of epistemic rationality, and how do you know?

comment by TAG · 2021-05-13T13:27:08.200Z · LW(p) · GW(p)

Value isn't a physical property because it doesn't feature in physics. That's not an opinion, it's a fact about physics, like saying there is no physical theory of ghosts.

What do you mean when you say that altruism is not a consequence of epistemic rationality,

I mean that every argument for altruism I have seen is either based directly on personal preference, or based on an assumption about objective values. But objective value isn't a physical thing: people who talk about it are assuming the world works that way... because they have a personal preference. Objective values are a construction, because you can't measure value with an instrument and show it was there all along. People construct value that way because they think things ought to be that way -- but a subjective preference for objectivity is still a subjective preference. So actually both arguments come down to subjective preference.

A lot of the problem is that you have been assuming that moral realism and obligatory altruism are true by default, and demanding arguments against them. But Occam's razor is against both of them, so it is for you to argue for them.

Sam Harris thinks the flourishing of conscious beings is valuable. That's his opinion... where's the objectivity? You agree... where's the objectivity? Two subjective beliefs that coincide don't add up to objectivity.

answer by ACrackedPot · 2021-05-12T14:50:24.189Z · LW(p) · GW(p)

What is the strongest argument you know for antirealism? 

From Aella: the external world is a meaningless hypothesis; given a set of experiences and a consistent set of expectations about what form those experiences will take in the future, positing an external world doesn't add any additional information.  That is, the only thing that "external world" would add would be an expectation of a particular kind of consistency to those experiences; you can simply assume the consistency, and then the external world adds no additional informational content or predictive capacity.

What is the strongest argument against moral realism?

Just as an external world changes nothing about your expectations of what you will experience, moral realism, the claim that morality exists as a natural feature of the external world, changes nothing about your expectations of what you will experience.

If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?

Consider a proposal to replace all the air around you with something valuable.  Consider a proposal to replace some percentage of the air around you with something valuable.

The ideal proposal replaces neither all of the air, nor none of the air.  In the limit of all of the air being replaced, the air achieves infinite relative value.  In the limit of none of the air being replaced, the air has, under normal circumstances, no value.

Consider the value of a vacuum tube; vacuum, the absence of anything, has particular value in that case.

Which is all to say - value is strictly relative, and it is unfixed.  The case of the vacuum tube demonstrates that there are cases where having nothing at all in a given region is more valuable than having something there.  If the vacuum tube is part of a mechanical contraption that is keeping you alive, there is nothing you want in that vacuum tube more than vacuum itself.  Thus there is nothing that has objective value in that specific situation: the only sense in which we can make sense of objective value is by comparison to nothing, and in that particular case nothing is more valuable than the something.

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?

Because you've tautologically defined it to be so when you said the altruism is disinterested.  If I have no interest in a thing, it makes no sense to behave as if I have an interest in that thing.  Any sense in which it would make sense for me to have an interest in a thing, is a claim that I have an interest in that thing.

comment by TAG · 2021-05-12T17:32:16.404Z · LW(p) · GW(p)

From Aella; the external world is a meaningless hypothesis; given a set of experiences and a consistent set of expectations about what form those experiences will take in the future, positing an external world doesn’t add any additional information. That is, the only thing that “external world” would add would be an expectation of a particular kind of consistency to those experiences; you can simply assume the consistency, and then the external world adds no additional informational content or predictive capacity.

You can assume inexplicable consistency, but assuming a world is assuming explicable consistency.

Just as an external world changes nothing about your expectations of what you will experience, moral realism, the claim that morality exists as a natural feature of the external world, changes nothing about your expectations of what you will experience

Neither does subjective morality, except by changing your actions. But moral realism would change your actions too, if true and compelling. Ethics is supposed to relate to behaviour. You can make it look irrelevant by portraying people as purely passive entities that do nothing but attempt to predict their experiences, but the premise is clearly false.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-12T18:48:51.245Z · LW(p) · GW(p)

You can assume inexplicable consistency, but assuming a world is assuming explicable consistency.

That is an additional assumption, not the same assumption.  Additionally, the claim that the world's consistency is explicable is just another assumption; you can't explain why the external world exists, nor why it is consistent.

If you think "The universe exists" is a simpler explanation than "The universe exists because God created it", because the former assumes only the existence of the universe, and the latter assumes an additional unprovable entity, then you should notice that "My experiences exist" is a simpler explanation than "My experiences exist because there is an external world I interact with".  In both cases the latter is an unprovable statement that only increases the complexity of the necessary assumptions.

Neither does subjective morality, except by changing your actions. But moral realism would change your actions too, if true and compelling. Ethics is supposed to relate to behaviour. You can make it look irrelevant by portraying people as purely passive entities that do nothing but attempt to predict their experiences, but the premise is clearly false.

"Compelling" is doing all the work there, and doesn't require that the ethics objectively exist in the external world.

Replies from: TAG
comment by TAG · 2021-05-12T19:27:55.306Z · LW(p) · GW(p)

That is an additional assumption, not the same assumption

Yes. It's an additional assumption that leads to greater explanatory power. If it had no such advantage, you should not make it, but since it does, it is not obviously ruled out by parsimony.

Additionally, the claim that the world’s consistency is explicable is just another assumption; you can’t explain why the external world exists, nor why it is consistent.

Even if you can't reduce the amount of inexplicability to zero, you can still reduce it.

“My experiences exist” is a simpler explanation than “My experiences exist because there is an external world I interact with”.

Explanation of what? The rival accounts don't do the same amount of work as each other.

“Compelling” is doing all the work there, and doesn’t require that the ethics objectively exist in the external world

I wasn't arguing for moral realism; I was arguing against ignoring agency.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-12T20:04:22.975Z · LW(p) · GW(p)

Yes. It's an additional assumption that leads to greater explanatory power. If it had no such advantage, you should not make it, but since it does, it is not obviously ruled out by parsimony.

It doesn't add any explanatory power; it only seems to, because you've attached all your explanations to that external world.  They don't actually change when you get rid of the external world.

Suppose you live in a simulation.  Do any observations become invalid?  Are you going to stop expecting the things you have labeled apples to fall in concordance with the inverse-square law?

Suppose the external world isn't real.  Do any observations become invalid? Are you going to stop expecting the things you have labeled apples to fall in concordance with the inverse-square law?

The "external world" hypothesis adds no information to any of your models of your experiences; it predicts nothing.

I wasn't arguing for moral realism; I was arguing against ignoring agency.

Agency isn't relevant?

Replies from: TAG
comment by TAG · 2021-05-13T14:02:52.345Z · LW(p) · GW(p)

It doesn’t add any explanatory power; it only seems to, because you’ve attached all your explanations to that external world. They don’t actually change when you get rid of the external world

The value of the external world theory is that it explains why science works at all, not that it explains anything in particular.

Suppose you live in a simulation. Do any observations become invalid?

Who says that's all I care about? Everyone knows that it's a short step from instrumentalism to anti-realism, but not everyone is starting at instrumentalism.

I wasn't arguing for moral realism; I was arguing against ignoring agency.

Agency isn’t relevant?

Arguing against ignoring agency is two negatives.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-13T14:18:59.463Z · LW(p) · GW(p)

The value of the external world theory is that it explains why science works at all, not that it explains anything in particular.

No - the validity of inductive logic explains why science works at all.  There's no prior reason to expect inductive logic to be valid in an arbitrary external world.

Replies from: TAG
comment by TAG · 2021-05-13T17:07:22.205Z · LW(p) · GW(p)

There's no reason to expect inductive logic to work in no world.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-13T17:27:10.036Z · LW(p) · GW(p)

Note the importance of "external" - if we omit external, then the word "world" just refers to the common factor of our experiences whatever that is, and we don't actually disagree.

That is, anti-realism holds that nothing provably exists outside the mind.  The argument comes down to "A world which is internal to our mind, and a world that is external to our mind, is not differentiable".  For what reason would you expect the internality or externality of the world to have a bearing on whether or not inductive logic applies?

Suppose the entire universe boils down to a mathematical equation; everything is one equation, maybe a fractal, which from a point of simplicity gives rise to complexity.  What difference do we expect to encounter if that mathematical equation exists inside of our mind, as opposed to outside of it?  If the universe is the expression of that equation, and the equation is compatible with induction, then we should expect induction to work without regard to whether the universe is internal or external to our mind.

Replies from: TAG
comment by TAG · 2021-05-13T18:04:47.515Z · LW(p) · GW(p)

That is, anti-realism holds that nothing provably exists outside the mind.

That's a very weak form of anti-realism. If 0 and 1 aren't probabilities, nothing is absolutely provable.

What difference do we expect to encounter if that mathematical equation exists inside of our mind, as opposed to outside of it?

What do "inside" and "outside" mean?

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-13T18:26:45.705Z · LW(p) · GW(p)

That's a very weak form of anti-realism. If 0 and 1 aren't probabilities, nothing is absolutely provable.

Sure.  But is realism the claim that reality probably exists, or that it definitely exists?

What do "inside" and "outside" mean?

In a sense, it's a statement of dependence.  If our minds are inside the world, then if the world stops existing, so do our minds.  If the world is inside our mind, then if our mind stops existing, then so does the world.

In another sense, it's a statement of correspondence.  If our minds are inside the world, then the map-territory distinction is ontologically important (note that the map-territory distinction is itself an anti-realist position, as it argues that there is no direct correspondence between the world and the contents of our mind).  If the world is inside our mind, then the map-territory distinction is indistinguishable from any other state of confusion.

Replies from: TAG
comment by TAG · 2021-05-15T15:16:30.517Z · LW(p) · GW(p)

Is realism the claim that reality probably exists, or that it definitely exists?

There are multiple versions of both realism and anti-realism. You can even make them overlap.

In another sense, it's a statement of correspondence. If our minds are inside the world, then the map-territory distinction is ontologically important (note that the map-territory distinction is itself an anti-realist position, as it argues that there is no direct correspondence between the world and the contents of our mind). If the world is inside our mind, then the map-territory distinction is indistinguishable from any other state of confusion.

Does any of that make an observable difference?

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-15T17:26:03.565Z · LW(p) · GW(p)

Does any of that make an observable difference?

Not really, no.  And that's sort of the point; the claim that the world is external is basically an empty claim.

Replies from: TAG
comment by TAG · 2021-05-15T18:41:19.036Z · LW(p) · GW(p)

But you seem to favour a rather specific alternative involving fractals and stuff. Why wouldn't that be empty? Isn't evidence for realism evidence against anti-realism, and vice versa?

If making predictions really is the only game in town, then your alternative physics needs to make predictions. Can it?

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-16T03:13:28.009Z · LW(p) · GW(p)

Well, if my crackpot physics is right, it actually kind of reduces the probability I'd assign to the world I inhabit being "real".  Seriously, the ideas aren't complicated; somebody else really should have noticed them by now.

But sure it makes predictions.  There should be a repulsive force which can be detected when the distance between two objects is somewhere between the radius of the solar system and the radius of the smallest dwarf galaxy.  I'd guess somewhere in the vicinity of 10^12 meters.

Also electrical field polarity should invert somewhere between 1 and 10^8 meters.  That is, if you have an electrical field, and you measure it to be positive or negative, if you move some distance away, it should invert to be negative or positive.

Are these predictions helpful?  Dunno.

Either way, however, it doesn't really say anything about whether the world is internal or external.
