Comments

Comment by rikisola on The True Prisoner's Dilemma · 2015-07-21T08:15:07.115Z · LW · GW

I think I'm starting to get this. Is this because it uses heuristics to model the world, with humans in it too?

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-18T15:27:36.567Z · LW · GW

Yes, that's actually the reason why I wanted to tackle the "treacherous turn" first: to look for a general design that would allow us to trust the results from tests, and then build on that. The order of priority I see is: 1) make sure we don't get tricked, so that we can trust the results of what we do; 2) make the AI do the right things. I'm referring to 1) here. Also, as mentioned in another comment on the main post, part of the AI's utility function evolves to understand human values, so I still don't quite see why exactly it shouldn't work. I envisage the utility function as the union of two parts: one where we have described the goal for the AI, which shouldn't be changed with iterations, and another with human values, which will be learnt and updated. This total utility function is common to all agents, including the AI.

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-18T09:51:43.716Z · LW · GW

Hi Vaniver, yes, my point is exactly that of creating honesty, because that would at least allow us to test reliably, so it sounds like it should be one of the first steps to aim for. I'll just write a couple of lines to specify my thought a little further, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn a separate utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions of reality. In order to do this, this "universal" utility function would need to be the combination of two parts: 1) the utility function we initially gave the AI to describe its goal, which I suppose should be unchangeable, and 2) the utility function with the values it is learning after each iteration, which hopefully would eventually come to resemble human values, as that would make its plans work better. I'm trying to understand whether such a design is technically feasible and whether it would work in the intended way. Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? Seems to me like it would be a good start. It's true that different people would have different values, so I'm not sure how to deal with that. Any thoughts?
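Just to make the structure I have in mind a bit more concrete, here's a rough sketch in Python (the names, the additive combination and the update rule are all just my illustrative assumptions, not a claim about any real agent architecture):

```python
class UniversalUtility:
    """One utility function, applied to the AI itself and to every other agent alike."""

    def __init__(self, goal_term, learned_values=None):
        self.goal_term = goal_term                                   # fixed part: the goal we coded in
        self.learned_values = learned_values or (lambda world: 0.0)  # part that gets learnt over time

    def __call__(self, world):
        # Total utility = unchangeable goal part + learnt human-values part.
        return self.goal_term(world) + self.learned_values(world)

    def update(self, new_learned_values):
        # Only the learnt part is ever revised, e.g. after a plan fails
        # because the AI's predictions about people turned out wrong.
        self.learned_values = new_learned_values

# The same U would be used whether the AI evaluates a world for itself or for us.
U = UniversalUtility(goal_term=lambda world: world.get("paperclips", 0))
```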

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-18T09:43:11.092Z · LW · GW

Hi ChristianKI, thanks, I'll try to find the article. Just to be clear though, I'm not suggesting hardcoding values; I'm suggesting designing the AI so that it uses the same utility function for itself and for us, and updates it as it gets smarter. It sounds from the comments I'm getting that this is technically not feasible, so I'll aim at learning exactly how an AI works in detail and maybe look for a way to make it feasible. If this were indeed feasible, would I be right in thinking it would not be motivated to betray us, or am I missing something there as well? Thanks for your help by the way!

Comment by rikisola on The True Prisoner's Dilemma · 2015-07-18T09:34:46.528Z · LW · GW

Yes, I think 2) is closer to what I'm suggesting. Effectively, what I am thinking about is what would happen if, by design, there were only one utility function, defined in absolute terms (I've tried to explain this in the latest open thread), so that the AI could never assume we would disagree with it. By all means, as it tries to learn this function, it might get it completely wrong, so this certainly doesn't solve the problem of how to teach it the right values, but at least it looks to me that with such a design it would never be motivated to lie to us, because it would always think we were in perfect agreement. Also, I think it would make it indifferent to our actions, as it would always assume we would follow the plan from that point onward. The utility function it uses (the same for itself and for us) would be the union of a utility function that describes the goal we want it to achieve, which would be unchangeable, and the set of values it is learning after each iteration. I'm trying to understand what would be wrong with this design, because to me it looks like we would have achieved an honest AI, which is a good start.

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-17T18:32:41.144Z · LW · GW

Sorry for my misused terminology. Is it not feasible to design it with those characteristics?

Comment by rikisola on The True Prisoner's Dilemma · 2015-07-17T18:27:44.013Z · LW · GW

Mmm, I see. So maybe we should have coded it so that it cared about paperclips and about an approximation of what we also care about; then on observation it would update its belief about what to care about, and by design it would always assume we share the same values?

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-17T18:05:03.512Z · LW · GW

Hi all, thanks for taking the time to comment. I'm sure it must be a bit frustrating to read something that lacks technical terms as much as this post, so I really appreciate your input. I'll just write a couple of lines to summarize my thought, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn a utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions. Is such a design technically feasible? Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? It's true that different people would have different values, so I'm not sure how to deal with that. Any thoughts?

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-17T16:32:12.480Z · LW · GW

I see. But rather than dropping this clause, shouldn't it try to update its utility function in order to improve its predictions? If we somehow hard-coded the fact that it can only ever apply its own utility function, then it would have no choice other than to update that. And the closer it gets to our correct utility function, the better it is at predicting reality.

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-17T16:26:30.562Z · LW · GW

Yes, that's what would happen if the AI tries to build a model of humans. My point is that if it were instead to simply assume humans were an exact copy of itself, with the same utility function and the same intellectual capabilities, it would assume that they would reach exactly the same conclusions and therefore wouldn't need any forcing, nor any tricks.

Comment by rikisola on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-17T15:26:35.053Z · LW · GW

Hi all, I'm new here, so pardon me if I speak nonsense. I have some thoughts regarding how and why an AI would want to trick us or mislead us, for instance by behaving nicely during tests and turning nasty when released, and it would be great if I could be pointed in the right direction. So here's my thought process.

Our AI is a utility-based agent that wishes to maximize the total utility of the world based on a utility function that has been coded by us with some initial values and has then evolved through reinforcement learning. With our usual luck, somehow it's learnt that paperclips are a bit more useful than humans. Now the "treacherous turn" problem that I've read about says that we can't trust the AI if it performs well under surveillance, because it might have calculated that it's better to play nice until it acquires more power before turning all humans into paperclips. I'd like to understand more about this process. Say it calculates that the world with maximum utility is one where it can turn us all into paperclips with minimum effort, the total utility of this world being U_AI(kill)=100. Second best is a world where it first plays nice until it is unstoppable, then turns us into paperclips. This is second best because it's wasting time and resources to achieve the same final result: U_AI(nice+kill)=99. Why would it possibly choose the second, sub-optimal option, which is the most dangerous for us? I suppose it would only choose it if it associated it with a higher probability of success, which means somehow, somewhere the AI must have calculated that the utility a human would give to these scenarios is different from what it is giving, otherwise we would be happy to comply. In particular, it must believe that for each possible world w:

if U_AI(kill) ≥ U_AI(w) ≥ U_AI(nice+kill), then U_human(w) ≤ U_human(nice+kill)
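Just to make the "higher probability of success" point concrete with made-up numbers (a toy expected-utility calculation, nothing more):

```python
# Made-up numbers: the AI weighs each plan's utility by its estimated
# probability of success (i.e. of humans not managing to stop it).
u_kill, p_kill = 100, 0.6              # strike immediately: humans may resist
u_nice_kill, p_nice_kill = 99, 0.99    # play nice first, then strike

print(p_kill * u_kill)                 # 60.0
print(p_nice_kill * u_nice_kill)       # 98.01 -> the treacherous plan wins
```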

How is the AI calculating utilities from a human point of view? (Sorry, but this question comes straight out of my poor understanding of AI architectures.) Is it using some kind of secondary utility function that it applies to humans to guess their behavior? If the process that would motivate the AI to trick us is anything like this, then it looks to me like it could be solved by making the AI use EXACTLY its own utility function when it refers to other agents. Also note that the utilities must not be relative to the agent, but to the AI. For instance, if the AI greatly values its own survival over the survival of other agents, then the other agents should equally greatly value the AI's survival over their own. This should be easily achieved if, whenever the AI needs to look up another agent's utility for any action, it is simply redirected to its own.
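A toy sketch of what I mean by "redirecting the lookup" (purely illustrative; I'm not claiming any real architecture looks like this):

```python
class Agent:
    def __init__(self, utility_fn):
        self.utility = utility_fn  # the AI's own utility over world states

    def own_utility(self, world):
        return self.utility(world)

    def predicted_utility(self, other_agent, world):
        # By design, any lookup of another agent's utility is redirected to
        # the AI's own utility function, so it always expects full agreement.
        return self.utility(world)
```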

This way the AI will always think we would love its optimal plan and would never see the need to lie to us, trick us, brainwash us or engineer us in any way, as that would only be a waste of resources. In some cases it might even openly look for our collaboration if that makes the plan any better. Clippy, for instance, might say: "OK guys, I'm going to turn everything into paperclips. Can you please quickly get me the resources I need to begin with, then you can all line up over there for paperclippification. Shall we start?"

This also seems to make the AI indifferent to our actions, provided its belief that our utility function is identical to its own is unchangeable. For instance, even while it sees us pressing the button to blow it up, it won't think we are going to jeopardize the plan. That would be crazy. Nor will it try to stop us from rebooting it. Considering that it can't imagine us not going along with the plan from that moment onward, it's never a good choice to waste time and resources stopping us. There's no need to.

Now obviously this does not solve the problem of how to make it do the right thing, but it looks to me that at least we would be able to assume that a behavior observed during tests should be honest. What am I getting wrong? (don't flame me please!!!)

Comment by rikisola on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-17T14:58:01.445Z · LW · GW

Hi Lumifer. Yes, to some extent. At the moment I don't have co-location so I minimized latency as much as possible in other ways and have to stick to the slower, less efficient markets. I'd like to eventually test them on larger markets but I know that without co-location (and maybe a good deal of extra smarts) I stand no chance.

Comment by rikisola on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-17T13:14:35.511Z · LW · GW

Hi John, thanks for the encouragement. One thing that strikes me about this community is how most people make an effort to consider each other's point of view; it's a real indicator of a high level of reasonableness and intellectual honesty. I hope I can practice this too. Thanks for pointing me to the open threads, they are perfect for what I had in mind.

Comment by rikisola on Welcome to Less Wrong! (7th thread, December 2014) · 2015-07-17T12:00:33.752Z · LW · GW

Hi all, I'm new. I've been browsing the forum for two weeks and have only now come across this welcome thread, so nice to meet you! I'm quite interested in the control problem, mainly because it seems like a very critical thing to get right. My background is a PhD in structural engineering and developing my own HFT algorithms (which for the past few years has been my source of both revenue and spare time). So I'm completely new to all of the topics on the forum, but I'm loving the challenge. At the moment I don't have any karma points so I can't publish, which is probably a good thing given my ignorance, so may I post some doubts and questions here in the hope of being pointed in the right direction? Thanks in advance!

Comment by rikisola on The True Prisoner's Dilemma · 2015-07-17T08:30:28.090Z · LW · GW

One thing I can't understand: considering we've built Clippy, given it a set of values, and asked it to maximise paperclips, how can it possibly imagine we would be unhappy about its actions? I can't help thinking that from Clippy's point of view there's no dilemma: we should always agree with its plan and therefore give it carte blanche. What am I getting wrong?

Comment by rikisola on The True Prisoner's Dilemma · 2015-07-17T08:15:31.755Z · LW · GW

Hi there, I'm new here and this is an old post, but I have a question regarding the AI playing a prisoner's dilemma against us, which is: how would this situation be possible? I'm trying to get my head around why the AI would think that our payoffs are any different from its payoffs, given that we built it, we taught it (some of) our values in a rough way, and we asked it to maximize paperclips, which means we like paperclips. Shouldn't the AI think we are on the same team? I mean, we coded it that way and we gave it a task; what process exactly would make the AI ever think we would disagree with its choice? So, for instance, if we coded it in such a way that it values a human life at 0, then it would only see one choice: make 3 paperclips. And it shouldn't have any reason to believe that's not the best outcome for us too, so the only possible outcome from its point of view in this case should be (+0 lives, +3 paperclips). Basically the main question is: how can the AI ever imagine that we would disagree with it? (I'm honestly just asking, as I'm struggling with this idea and am interested in this process.) Thanks!

Comment by rikisola on Moral AI: Options · 2015-07-13T11:43:12.561Z · LW · GW

I feel like a mixed approach is the most desirable. There is a risk that if the AI is allowed to simply learn from humans, we might get a greedy AI that maximizes its Facebook experience while the rest of the world keeps dying of starvation and wars. Also, our values probably evolve with time (slavery, death penalty, freedom of speech...), so we might as well try to teach the AI what our values should be rather than what they are right now. Maybe then it's a case of developing a top-down, high-level ethical system and using it to seed a neural network that then picks up patterns in more detailed scenarios?

Comment by rikisola on Creating a satisficer · 2015-07-09T11:44:10.550Z · LW · GW

Ah yep that'll do.

Comment by rikisola on Creating a satisficer · 2015-07-08T13:18:25.716Z · LW · GW

Thanks for your reply, I had missed the fact that M(εu+v) is also ignorant of what u and v are. In that case, is this a general structure of how a satisficer should work, where in practice we would need to assign some values to u and v on a case-by-case basis, or at least to ε, so that M(εu+v) could veto? Or is it the case that M(εu+v) uses an arbitrarily small ε, in which case it is the same as imposing Δv>0?

Comment by rikisola on Help needed: nice AIs and presidential deaths · 2015-07-08T12:48:18.320Z · LW · GW

Nevermind this comment, I read some more of your posts on the subject and I think I got the point now ;)

Comment by rikisola on Creating a satisficer · 2015-07-08T12:44:47.890Z · LW · GW

Say M(u-v) suggests killing all humans so that it can make more paperclips, where u is the value of a paperclip and v is the value of a human life. M(εu+v) might accept it if εΔu > -Δv, so it seems to me that in the end it all depends on the relative value we assign to paperclips and human lives, which seems to be the real problem.
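To put made-up numbers on that inequality (purely illustrative):

```python
# Illustrative values for eps*du > -dv.
eps = 1e-3        # weight on u in M(eps*u + v)
du = 1000000      # gain in u from the extra paperclips
dv = -100         # loss in v from the dead humans, so dv is negative

print(eps * du > -dv)   # 1000 > 100 -> True: M(eps*u + v) would not veto
```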

Comment by rikisola on Presidents, asteroids, natural categories, and reduced impact · 2015-07-08T12:11:26.850Z · LW · GW

I'm also struggling with the above. The first quote says that with event ¬X "it will NOT want to have the correct y-coordinate outputted". The second says the opposite: the robot WILL output "the y-coordinate of the laser correctly, given ¬X".

Comment by rikisola on Help needed: nice AIs and presidential deaths · 2015-07-08T10:50:46.447Z · LW · GW

Every idea that comes to my mind runs into the big question: "if we were able to program a nice AI for that situation, why would we not program it to be nice in every situation?" I mean, it seems to me that in that scenario we would have both a solid definition of niceness and the ability to make the AI stick to it. Could you elaborate a little on that? Maybe with an example?

Comment by rikisola on Help needed: nice AIs and presidential deaths · 2015-07-08T09:08:52.940Z · LW · GW

And even if it knew the correct answer to that question, how can you be sure it wouldn't instead lie to you in order to achieve its real goals? You can't really trust the AI if you are not sure it is nice or at least indifferent...

Comment by rikisola on Creating a satisficer · 2015-07-07T15:56:08.000Z · LW · GW

Hi Stuart. I'm new here, so excuse me if I happen to ask irrelevant or silly questions, as I am not as deep into the subject as many of you, nor as smart. I found the idea of leaving M(u-v) ignorant of what u and v are quite interesting. In such a framework, though, wouldn't "kill all humans" be considered acceptable by the satisficer if u (whatever task we are interested in) is given a much larger utility than v (human lives)? Does it not all boil down to defining the correct trade-off between the utilities of u and v so that M(εu+v) vetoes at the right moment?

Comment by rikisola on Research interests I don't currently have time to develop alone · 2015-07-07T10:35:13.485Z · LW · GW

Hi Stuart, I'm not sure if this post is still active, but I've only recently come across your work and I'd like to help. At the moment I'm particularly interested in the control problem, the Fermi paradox and the simulation hypothesis, but I'm sure a chat with you would spur my interest in other directions too. It would be great if you could get in touch so we can figure out whether I can be of any help.