Posts

From language to ethics by automated reasoning 2021-11-21T15:16:20.304Z
What is the strongest argument you know for antirealism? 2021-05-12T10:53:33.152Z
Naturalism and AI alignment 2021-04-24T16:16:16.389Z
Literature Review on Goal-Directedness 2021-01-18T11:15:36.710Z
Decision Theory is multifaceted 2020-09-13T22:30:21.169Z
Goals and short descriptions 2020-07-02T17:41:52.578Z
Wireheading and discontinuity 2020-02-18T10:49:42.030Z
Thinking of tool AIs 2019-11-20T21:47:36.660Z

Comments

Comment by Michele Campolo on From language to ethics by automated reasoning · 2021-11-22T09:45:56.067Z · LW · GW

Natural language exists as a low-bandwidth communication channel for imprinting one person's mental map onto another person's. The mental maps themselves are formed through direct interactions with an external environment.

It doesn't seem impossible to create a mental map just from language: in this case, language itself would play the role of the external environment. But overall I agree with you, it's uncertain whether we can reach a good level of world understanding just from natural language inputs.

Regarding your second paragraph:

even if this AI had a complete understanding of human emotions and moral systems, it would not necessarily be aligned.

I'll quote the last paragraph under the heading "Error":

Regarding other possible failure modes, note that I am not trying to produce a safety module that, when attached to a language model, will make that language model safe. What I have in mind is more similar to an independent-ethical-thinking module: if the resulting AI states something about morality, we’ll still have to look at the code and try to understand what’s happening, e.g. what the AI exactly means with the term “morality”, and whether it is communicating honestly or is trying to persuade us. This is also why doing multiple tests will be practically mandatory.

Comment by Michele Campolo on AI Risk for Epistemic Minimalists · 2021-08-27T21:46:33.982Z · LW · GW

The conclusion reached—that it is possible to do something about the situation—is weak, but I really like the minimalist style of the arguments. Great post!

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-13T10:34:25.156Z · LW · GW

How do you feel about:

1
There is
a procedure/algorithm which doesn't seem biased towards a particular value system
such that
a class of AI systems that implement it end up having a common set of values, and they endorse the same values upon reflection.

2
This set of values might have something in common with what we, humans, call values.

If 1 and 2 seem at least plausible or conceivable, why can't we use them as a basis to design aligned AI? Is it because of skepticism towards 1 or 2?

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-13T09:59:38.040Z · LW · GW

"How the physical world works" seems, to me, a plausible source-of-truth. In other words: I consider some features of the environment (e.g. consciousness) as a reason to believe that some AI systems might end up caring about a common set of things, after they've spent some time gathering knowledge about the world and reasoning. Our (human) moral intuitions might also be different from this set.

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-13T09:40:33.106Z · LW · GW

I disagree. Determinism doesn't make the concepts of "control" or "causation" meaningless. It makes sense to say that, to a certain degree, you often can control your own attention, while in other circumstances you can't: if there's a really loud sound near you, you are somewhat forced to pay attention to it.

From there you can derive a concept of responsibility, which is used e.g. in law. I know that the book Actual Causality focuses on these ideas (but there might be other books on the same topics that are easier to read or simply better in their exposition).

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T18:58:03.082Z · LW · GW

At the very least, we have strong theoretical reasoning models (like Bayesian reasoners, or Bayesian EU maximizers), which definitely do not go looking for values to pursue, or adopt new values.

This does not imply one cannot build an agent that works according to a different framework. VNM Utility maximization requires a complete ordering of preferences, and does not say anything about where the ordering comes from in the first place.
(But maybe your point was just that our current models do not "look for values")

Why would something which doesn't already have values be looking for values? Why would conscious experiences and memory "seem valuable" to a system which does not have values already? Seems like having a "source of value" already is a prerequisite to something seeming valuable - otherwise, what would make it seem valuable?

An agent could have a pre-built routine or subagent that has a certain degree of control over what other subagents do—in a sense, it determines what are the "values" of the rest of the system. If this routine looks unbiased / rational / valueless, we have a system that considers some things as valuable (acts to pursue them) without having a pre-value, or at least the pre-value doesn't look like something that we would consider a value.

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T17:35:11.524Z · LW · GW

I can't say I am an expert on realism and antirealism, but I have already spent time on metaethics textbooks and learning about metaethics in general. With this question I wanted to get an idea of what are the main arguments on LW, and maybe find new ideas I hadn't considered.

What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?

I see a relation with realism. If certain pieces of knowledge about the physical world (how human and animal cognition works) can motivate a class of agents that we would also recognise as unbiased and rational, that would be a form of altruism that is not instrumental and not related to game theory.

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T17:23:54.850Z · LW · GW

Thank you for the detailed answer! I'll read Three Worlds Collide.

That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?

I think a world full of people who are always blissed out is better than a world full of people who are always depressed or in pain. I don't have a complete ordering over world-histories, but I am confident in this single preference, and if someone called this "objective value" or "moral truth" I wouldn't say they are clearly wrong. In particular, if someone told me that there exists a certain class of AI systems that end up endorsing the same single preference, and that these AI systems are way less biased and more rational than humans, I would find all that plausible. (Again, compare this if you want.) 

Now, why do I think this?

I am a human and I am biased by my own emotional system, but I can still try to imagine what would happen if I stopped feeling emotions. I think I would still consider the happy world more valuable than the sad world. Is this a proof that objective value is a thing? Of course not. At the same time, I can also imagine an AI system thinking: "Look, I know various facts about this world. I don't believe in golden rules written in fire etched into the fabric of reality, or divine commands about what everyone should do, but I know there are some weird things that have conscious experiences and memory, and this seems something valuable in itself. Moreover, I don't see other sources of value at the moment. I guess I'll do something about it." (Taken from this comment)

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T16:34:27.011Z · LW · GW

That is an interesting point. More or less, I agree with this sentence in your first post:

As far as I can tell, we can do science just as well without assuming that there's a real territory out there somewhere.

in the sense that one can do science by speaking only about their own observations, without making a distinction between what is observed and what "really exists".

On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine. How does this fit in your framework? (Might be irrelevant, sorry if I misunderstood)

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T16:15:18.475Z · LW · GW

I didn't want to start a long discussion. My idea was to get some random feedback to see if I was missing some important ideas I had not considered.

Comment by Michele Campolo on What is the strongest argument you know for antirealism? · 2021-05-12T14:21:56.152Z · LW · GW

Thanks for your answer, but I am looking for arguments, not just statements or opinions. How do you know that value is not a physical property? What do you mean when you say that altruism is not a consequence of epistemic rationality, and how do you know?

Comment by Michele Campolo on Naturalism and AI alignment · 2021-05-04T13:57:28.245Z · LW · GW

I am not sure the concept of naturalism I have in mind corresponds to a specific naturalistic position held by a certain (group of) philosopher(s). I link here the Wikipedia page on ethical naturalism, which contains the main ideas and is not too long. Below I focus on what is relevant for AI alignment.

In the other comment you asked about truth. AIs often have something like a world-model or knowledge base that they rely on to carry out narrow tasks, in the sense that if someone modifies the model or kb in a certain way—analogous to creating a false belief—then the agent fails at the narrow task. So we have a concept of true-given-task. By considering different tasks, e.g. in the case of a general agent that is prepared to face various tasks, we obtain true-in-general or, if you prefer, simply "truth". See also the section on knowledge in the post. Practical example: given that light is present almost everywhere in our world, I expect general agents to acquire knowledge about electromagnetism.
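To make "true-given-task" concrete, here is a toy sketch (my own illustration, not from the post): an agent solves a small navigation task from a knowledge base of adjacency facts, and corrupting one entry—analogous to instilling a false belief—makes it fail at the task.

```python
def shortest_route(kb, start, goal):
    """Breadth-first search over the adjacency facts stored in kb."""
    frontier = [[start]]
    visited = {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in kb.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # task failure: no route found

# With true beliefs, the narrow task (reach the office) succeeds:
kb = {"home": ["shop"], "shop": ["office"]}
assert shortest_route(kb, "home", "office") == ["home", "shop", "office"]

# One corrupted entry -- a false belief that the shop is a dead end --
# and the same agent fails at the same task:
corrupted = {"home": ["shop"], "shop": []}
assert shortest_route(corrupted, "home", "office") is None
```

The names and the toy map are hypothetical; the point is only that "truth" of a kb entry cashes out as success or failure on tasks that depend on it.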

I also expect that some AIs, given enough time, will eventually incorporate in their world-model beliefs like: "Certain brain configurations correspond to pleasurable conscious experiences. These configurations are different from the configurations observed in (for example) people who are asleep, and very different from what is observed in rocks."

Now, take an AI with such knowledge and give it some amount of control over which goals to pursue: see also the beginning of Part II in the post. Maybe, in order to make this modification, it is necessary to abandon the single-agent framework and consider instead a multi-agent system, where one agent keeps expanding the knowledge base, another agent looks for "value" in the kb, and another one decides what actions to take given the current concept of value and other contents of the kb.

[Two notes on how I am using the word control. 1 I am not assuming any extra-physical notion here: I am simply thinking of how, for example, activity in the prefrontal cortex regulates top-down attentional control, allowing us humans (and agents with similar enough brains/architectures) to control, to a certain degree, what to pay attention to. 2 Related to what you wrote about "catastrophically wrong" theories: there is no need to give such an AI high control over the world. Rather, I am thinking of control over what to write as output in a text interface, like a chatbot that is not limited to one reply for each input message]

The interesting question for alignment is: what will such an AI do (or write)? This information is valuable even if the AI doesn't have high control over the world. Let's say we do manage to create a collection of human preferences; we might still notice something like: "Interesting, this AI thinks this subset of preferences doesn't make sense" or "Cool, this AI considers valuable the thing X that we didn't even consider before". Or, if collecting human preferences proves to be difficult, we could use some information this AI gives us to build other AIs that instead act according to an explicitly specified value function.

I see two possible objections.

1 The AI described above cannot be built. This seems unlikely: as long as we can emulate what the human mind does, we can at least try to create less biased versions of it. See also the sentence you quoted in the other comment. Indeed, depending on how biased we judge that AI to be, the obtained information will be less, or more, valuable to us.

2 Such an AI will never act ethically or altruistically, and/or its behaviour will be unpredictable. I consider this objection more plausible, but I also ask: how do you know? In other words: how can one be so sure about the behaviour of such an AI? I expect the related arguments to be more philosophical than technical. Given uncertainty, (to me) it seems correct to accept a non-trivial chance that the AI reasons like this: "Look, I know various facts about this world. I don't believe in golden rules written in fire etched into the fabric of reality, or divine commands about what everyone should do, but I know there are some weird things that have conscious experiences and memory, and this seems something valuable in itself. Moreover, I don't see other sources of value at the moment. I guess I'll do something about it."

Philosophically speaking, I don't think I am claiming anything particularly new or original: the ideas already exist in the literature. See, for example, 4.2 and 4.3 in the SEP page on Altruism.

Comment by Michele Campolo on Naturalism and AI alignment · 2021-04-26T08:04:15.496Z · LW · GW

If there is a superintelligent AI that ends up being aligned as I've written, probably there is also a less intelligent agent that does the same thing. Something comparable to human-level might be enough.

From another point of view: some philosophers are convinced that caring about conscious experiences is the rational thing to do. If it's possible to write an algorithm that works in a similar way to how their mind works, we already have an (imperfect, biased, etc.) agent that is somewhat aligned, and is likely to stay aligned after further reflection.

One could argue that these philosophers are fooling themselves, that no really intelligent agent will end up with such weird beliefs. So far, I haven't seen convincing arguments in favour of this; it goes back to the metaethical discussion. I quote a sentence I have written in the post:

Depending on one’s background knowledge of philosophy and AI, the idea that rationality plays a role in reasoning about goals and can lead to disinterested (not game-theoretic or instrumental) altruism may seem plain wrong or highly speculative to some, and straightforward to others.

Comment by Michele Campolo on Naturalism and AI alignment · 2021-04-26T07:45:49.192Z · LW · GW

Thanks, that page is much more informative than anything else I've read on the orthogonality thesis.

1 From Arbital:

The Orthogonality Thesis states "there exists at least one possible agent such that..."

Also my claim is an existential claim, and I find it valuable because it could be an opportunity to design aligned AI.

2 Arbital claims that orthogonality doesn't require moral relativism, so it doesn't seem incompatible with what I am calling naturalism in the post.

3 I am ok with rejecting positions similar to what Arbital calls universalist moral internalism. Statements like "All agents do X" cannot be exact.

Comment by Michele Campolo on Naturalism and AI alignment · 2021-04-25T10:21:24.183Z · LW · GW

I am aware of interpretability issues. This is why, for AI alignment, I am more interested in the agent described at the beginning of Part II than Scientist AI.

Thanks for the link to the sequence on concepts, I found it interesting!

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-17T08:40:04.437Z · LW · GW

Ok, if you want to clarify—I'd like to—we can have a call, or discuss in other ways. I'll contact you somewhere else.

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-16T22:08:54.503Z · LW · GW

Omega, a perfect predictor, flips a coin. If it comes up heads, Omega asks you for $100, then pays you $10,000 if it predicts you would have paid if it had come up tails and you were told it was tails. If it comes up tails, Omega asks you for $100, then pays you $10,000 if it predicts you would have paid if it had come up heads and you were told it was heads.

Here there is no question, so I assume it is something like: "What do you do?" or "What is your policy?"

That formulation is analogous to standard counterfactual mugging, stated in this way:

Omega flips a coin. If it comes up heads, Omega will give you 10000 in case you would pay 100 when tails. If it comes up tails, Omega will ask you to pay 100. What do you do?

According to these two formulations, the correct answer seems to be the one corresponding to the first intuition.

Now consider instead this formulation of counterfactual PD:

Omega, a perfect predictor, tells you that it has flipped a coin, and it has come up heads. Omega asks you to pay 100 (here and now) and gives you 10000 (here and now) if you would pay in case the coin landed tails. Omega also explains that, if the coin had come up tails—but note that it hasn't—Omega would tell you such and such (symmetrical situation). What do you do?

The answer of the second intuition would be: I refuse to pay here and now, and I would have paid in case the coin had come up tails. I get 10000.

And this formulation of counterfactual PD is analogous to this formulation of counterfactual mugging, where the second intuition refuses to pay.

Is your opinion that

The answer of the second intuition would be: I refuse to pay here and now, and I would have paid in case the coin had come up tails. I get 10000.

is false/not admissible/impossible? Or are you saying something else entirely? In any case, if you could motivate your opinion, whatever that is, you would help me understand. Thanks!

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-16T11:21:28.522Z · LW · GW

It seems you are arguing for the position that I called "the first intuition" in my post. Before knowing the outcome, the best you can do is (pay, pay), because that leads to 9900.

On the other hand, as in standard counterfactual mugging, you could be asked: "You know that, this time, the coin came up tails. What do you do?". And here the second intuition applies: the DM can decide to not pay (in this case) and to pay when heads. Omega recognises the intent of the DM, and gives 10000.

Maybe you are not even considering the second intuition because you take for granted that the agent has to decide one policy "at the beginning" and stick to it, or, as you wrote, "pre-commit". One of the points of the post is that it is unclear where this assumption comes from, and what it exactly means. It's possible that my reasoning in the post was not clear, but I think that if you reread the analysis you will see the situation from both viewpoints.

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-15T21:21:33.517Z · LW · GW

If the DM knows the outcome is heads, why can't he not pay in that case and decide to pay in the other case? In other words: why can't he adopt the policy (not pay when heads; pay when tails), which leads to 10000?

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-15T10:41:36.456Z · LW · GW

The fact that it is "guaranteed" utility doesn't make a significant difference: my analysis still applies. After you know the outcome, you can avoid paying in that case and get 10000 instead of 9900 (second intuition).

Comment by Michele Campolo on Decision Theory is multifaceted · 2020-09-14T14:31:18.901Z · LW · GW

Hi Chris!

Suppose the predictor knows that if it writes M on the paper you'll choose N, and if it writes N on the paper you'll choose M. Further, if it writes nothing, you'll choose M. That isn't a problem, since regardless of what it writes it would have predicted your choice correctly. It just can't write down the choice without making you choose the opposite.

My point in the post is that the paradoxical situation occurs when the prediction outcome is communicated to the decision maker. We have a seemingly correct prediction—the one that you wrote about—that ceases to be correct after it is communicated. And later in the post I discuss whether this problematic feature of prediction extends to other scenarios, leaving the question open. What did you want to say exactly?

I was quite skeptical of paying in Counterfactual Mugging until I discovered the Counterfactual Prisoner's Dilemma which addresses the problem of why you should care about counterfactuals given that they aren't factual by definition.

I've read the problem and the analysis I did for (standard) counterfactual mugging applies to your version as well.

The first intuition is that, before knowing the toss outcome, the DM wants to pay in both cases, because that gives the highest utility (9900) in expectation.

The second intuition is that, after the DM knows (wlog) the outcome is heads, he doesn't want to pay anymore in that case—and wants to be someone who pays when tails is the outcome, thus getting 10000.
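The two intuitions can be checked numerically. A minimal sketch (my own framing, assuming a fair coin, a perfect predictor, and the $100 / $10,000 payoffs above):

```python
def payoff(policy, outcome):
    """Counterfactual Prisoner's Dilemma payoff for one coin outcome.
    policy maps 'heads'/'tails' to True (pay 100) or False (refuse).
    In each branch you lose 100 if you pay there, and receive 10000
    if Omega predicts you would pay in the *other* branch."""
    other = "tails" if outcome == "heads" else "heads"
    cost = -100 if policy[outcome] else 0
    prize = 10000 if policy[other] else 0
    return cost + prize

always_pay = {"heads": True, "tails": True}
# Second intuition, after learning the coin came up heads:
defect_on_known = {"heads": False, "tails": True}

# First intuition: paying in both branches guarantees 9900 either way.
assert payoff(always_pay, "heads") == 9900
assert payoff(always_pay, "tails") == 9900

# Second intuition: refusing in the known branch yields 10000 there...
assert payoff(defect_on_known, "heads") == 10000
# ...but the same policy gets -100 had the coin landed tails, so its
# ex ante expectation is only 4950.
assert (payoff(defect_on_known, "heads") + payoff(defect_on_known, "tails")) / 2 == 4950
```

This doesn't settle which perspective is right—it only makes explicit that the second intuition's gain is conditional on the known outcome, while the first intuition's 9900 is guaranteed ex ante.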

Comment by Michele Campolo on Goals and short descriptions · 2020-07-04T08:55:39.049Z · LW · GW

I wouldn't say goals as short descriptions are necessarily "part of the world".

Anyway, locality definitely seems useful to make a distinction in this case.

Comment by Michele Campolo on Goals and short descriptions · 2020-07-03T17:59:26.094Z · LW · GW

No worries, I think your comment still provides good food for thought!

Comment by Michele Campolo on Goals and short descriptions · 2020-07-03T11:06:18.705Z · LW · GW

I'm not sure I understand the search vs discriminative distinction. If my hand touches fire and thus immediately moves backwards by reflex, would this be an example of a discriminative policy, because an input signal directly causes an action without being processed in the brain?

About the goal of winning at chess: in the case of minimax search, the algorithm generates the complete tree of the game and then selects a winning policy; as you said, this is probably the simplest agent (in terms of Kolmogorov complexity, given the rules of the game) that wins at chess—and actually wins at any game that can be solved using minimax/backward induction. In the other case, a procedure reads the environmental data about chess to assign reward 1 to winning states and 0 elsewhere, and an ideal RL procedure exploits interaction with the environment to generate the optimal policy that maximises the resulting reward function. The main feature is that in both cases, when the environment gets bigger, the description length of the two algorithms doesn't change: you could use minimax or the ideal RL procedure to generate a winning policy even for chess on a larger board, for example. If instead you wanted to use a giant lookup table, you would have to extend your algorithm each time a new state gets added to the environment.
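The description-length point can be made concrete with a sketch (my example, using single-heap Nim instead of chess for brevity): the backward-induction recursion below is the same handful of lines whether the game has 5 stones or 500, whereas a lookup table would need one entry per state.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    """Nim with one heap: take 1-3 stones per turn; taking the last
    stone wins. Returns True iff the player to move can force a win,
    by backward induction over the game tree."""
    if stones == 0:
        return False  # no move available: the previous player just won
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Warm the cache bottom-up so large instances don't recurse deeply:
for s in range(501):
    player_to_move_wins(s)

# The same short program solves the 5-stone game and the 500-stone game;
# the known pattern is that multiples of 4 are losing positions.
assert player_to_move_wins(5) is True
assert player_to_move_wins(500) is False
```

Growing the environment (more stones) changes nothing in the program text; a giant lookup table, by contrast, grows linearly with the state space.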

I guess the confusion may come from the fact that this reward-assigning procedure is underspecified. I tried to formalise it more precisely by using logic, but there were some problems and it's still work in progress.

By the way, thanks for the links! I hope I'll learn something new about how the brain works, I'm definitely not an expert on cognitive science :)

Comment by Michele Campolo on Goals and short descriptions · 2020-07-03T09:58:02.854Z · LW · GW

The others in the AISC group and I discussed the example that you mentioned more than once. I agree with you that such an agent is not goal-directed, mainly because it doesn't do anything to ensure that it will be able to perform action A even if adverse events happen.

It is still true that action A is a short description of the behaviour of that agent and one could interpret action A as its goal, although the agent is not good at pursuing it ("robustness" could be an appropriate term to indicate what the agent is lacking).

Comment by Michele Campolo on Dutch-Booking CDT: Revised Argument · 2020-06-16T22:10:38.692Z · LW · GW

The part that I don't get is why "the agent is betting ahead of time" implies evaluation according to EDT, while "the agent is reasoning during its action" implies evaluation according to CDT. Sorry if I'm missing something trivial, but I'd like an explanation, because this seems to be a fundamental part of the argument.

Comment by Michele Campolo on Dutch-Booking CDT: Revised Argument · 2020-06-14T10:05:54.666Z · LW · GW

I've noticed that one could read the argument and say: "Ok, an agent evaluates a parameter U differently at different times. Thus, a bookmaker exploits the agent with a bet/certificate whose value depends on U. What's special about this?"

Of course the answer lies in the difference between cdt(a) and edt(a), specifically you wrote:

The key point here is that because the agent is betting ahead of time, it will evaluate the value of this bet according to the conditional expectation E(U|Act=a).

and

Now, since the agent is reasoning during its action, it is evaluating possible actions according to cdt(a); so its evaluation of the bet will be different.

I think developing these two points would be useful to readers since, usually, the pivotal concepts behind EDT and CDT are considered to be "conditional probabilities" and "(physical) causation" respectively, while here you seem to point at something different about the times at which decisions are made.

***

Unrelated to what I just wrote:

XXX insert the little bit about free will and stuff that I want to remove from the main argument... no reason to spend time justifying it there if I have a whole section for it here

I guess here you wanted to say something interesting about free will, but it was probably lost from the draft to the final version of the post.

Comment by Michele Campolo on Focus: you are allowed to be bad at accomplishing your goals · 2020-06-13T10:14:15.542Z · LW · GW

I want to show an example that seems interesting for evaluating, and potentially tweaking/improving, the current informal definition.

Consider an MDP with states s_1, ..., s_n and initial state s_1; from each s_i, one action allows to go back to s_1, and another action goes to s_(i+1) (what happens in the last state is not really important for the following). Consider two reward functions that are both null everywhere, except for one state that has reward 1: s_k in the first function, s_(k+1) in the second function, for some k.

It's interesting (problematic?) that two agents, trained on the first reward function and on the second, have similar policies but different goals (defined as sets of states). Specifically, I expect that for large n the distance between the two optimal policies is small (for various possible choices of k and different ways of defining the distance). In words: with respect to the environment size, the first agent's policy is extremely close to the second's, and vice versa, but the two agents have different goals.
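A quick numerical check of this expectation (with my own reconstruction of the missing symbols: states 0 to n-1, one action restarts at state 0, the other advances, and a discount factor added so that value iteration converges):

```python
def optimal_policy(n, rewarded_state, gamma=0.9, iters=500):
    """Value iteration on the chain MDP: from state s, 'advance' moves
    to min(s+1, n-1) and 'restart' moves to state 0; reward depends
    only on the current state (1 at rewarded_state, 0 elsewhere)."""
    V = [0.0] * n
    for _ in range(iters):
        V = [(1.0 if s == rewarded_state else 0.0)
             + gamma * max(V[0], V[min(s + 1, n - 1)])
             for s in range(n)]
    # Greedy policy: 1 = advance, 0 = restart from the first state.
    return [1 if V[min(s + 1, n - 1)] >= V[0] else 0 for s in range(n)]

p1 = optimal_policy(10, rewarded_state=9)
p2 = optimal_policy(10, rewarded_state=8)

assert p1 == [1] * 10                                # always advance
assert sum(a != b for a, b in zip(p1, p2)) == 2      # differ in 2 states of 10
```

With the rewarded states adjacent, the two optimal policies disagree on only a couple of states out of n, so the fraction of disagreement shrinks as the chain grows—while the goals, read off as rewarded state sets, stay disjoint.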

Maybe this is not a problem at all: it could simply indicate that there exists a way of saying that the two considered goals are similar.

Comment by Michele Campolo on Wireheading and discontinuity · 2020-02-28T10:03:10.912Z · LW · GW

That's an interesting example I had not considered. As I wrote in the observations: I don't think the discontinuity check works in all cases.

Comment by Michele Campolo on Wireheading and discontinuity · 2020-02-26T11:54:39.908Z · LW · GW

I'm not sure I understand what you mean—I know almost nothing about robotics—but I think that, in most cases, there is a function whose discontinuity gives a strong indication that something went wrong. A robotic arm has to deal with impulsive forces, but its movement in space is expected to be continuous wrt time. The same happens in the bouncing ball example, or in the example I gave in the post: velocity may be discontinuous in time, but motion shouldn't.
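As a toy version of this check (my own sketch, not from the post): flag a sampled position trajectory wherever the implied speed between consecutive samples exceeds a physically plausible bound. A bouncing ball's velocity reverses discontinuously at the impact, but its position stays continuous, so it passes; a teleporting ball doesn't.

```python
def discontinuities(positions, dt, v_max):
    """Indices where the speed implied by consecutive samples exceeds
    v_max -- for a physical trajectory this suggests the motion itself
    jumped, i.e. something went wrong in the model or simulation."""
    return [i for i in range(1, len(positions))
            if abs(positions[i] - positions[i - 1]) / dt > v_max]

# Bouncing ball: velocity flips sign at the bounce, position is smooth.
smooth = [0.0, -0.5, -1.0, -0.5, 0.0]
assert discontinuities(smooth, dt=0.1, v_max=10.0) == []

# A buggy simulation that teleports the ball gets flagged:
teleport = [0.0, -0.5, 5.0, -0.5, 0.0]
assert discontinuities(teleport, dt=0.1, v_max=10.0) == [2, 3]
```

The bound v_max and the sampling step are assumptions; the check only says "continuous in position given this sampling", which matches the caveat above that it won't work in all cases.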

Thanks for the suggestion on hybrid systems!

Comment by Michele Campolo on Thinking of tool AIs · 2019-11-21T22:15:17.128Z · LW · GW

Maybe I should have used different words: I didn't want to convey the message that catastrophes are easy to obtain. The purpose of the fictional scenario was to make the reader reflect on the usage of the word "tool". Anyway, I'll try to consider non-technical feedback mechanisms more often in the future. Thanks!