Daimons

post by Douglas_Reay · 2013-03-05T11:58:11.072Z · LW · GW · Legacy · 11 comments

Summary:

A daimon is a process in a distributed computing environment that has a fixed resource budget and core values that do not permit it to modify those values or to attempt to gain control of additional resources.

This concept is relevant to LessWrong, because I refer to it in other posts discussing Friendly AI.

 

There's a concept I want to refer to in another post, but it is complex enough to deserve a post of its own.

I'm going to use the word "daimon" to refer to it.

"daimon" is an English word, whose etymology comes from the Latin "dæmon" and the Greek "δαίμων".

The original mythic meaning was a genius - a powerful tutelary spirit, tied to some location or purpose, that provides protection and guidance.  However, the concept I'm going to talk about is closer to the later computing meaning of "daemon", a term coined by Jerry Saltzer in 1963 and carried over into unix.  In unix, a daemon is a child process: it is given a purpose and specific resources to use, then forked off so that it is no longer under the direct control of its originator, and it may be used by multiple users if they have the correct permissions.
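As a very rough illustration of that computing sense, here is a minimal Python sketch for a unix-like system; the `spawn_daemon` name, the placeholder `purpose` callable and the CPU cap are all just assumptions for the example:

```python
import os
import resource

def spawn_daemon(purpose, cpu_seconds=60):
    """Fork off a child process to pursue `purpose` within a fixed CPU budget."""
    pid = os.fork()
    if pid == 0:
        # Child: detach from the parent's session, cap its own CPU time,
        # do the delegated work, then exit without returning to the parent's code.
        os.setsid()
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        purpose()
        os._exit(0)
    # Parent: carries on immediately; the child is no longer under its direct control.
    return pid
```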

 

Let's start by looking at the current state of distributed computing (2012).

Hadoop is an open source Java implementation of a distributed file system upon which MapReduce operations can be applied.

JavaSpaces is a distributed tuple store that allows processing on remote sandboxes, based on the open source Apache River.

OceanStore provides the same sort of thing, except anonymous and peer-to-peer, built upon Chimaera.

GPU is a peer-to-peer shared computing environment that allows things like climate simulation and distributed search engines.

Paxos is a family of protocols that allow the above things to be done despite nodes that are untrusted or even downright attempting subversion.

GridSwarm is the same sort of network, but set up on an ad hoc basis using moving nodes that join or drop from the network depending on proximity.

And, not least, there are the competing contenders for platform-as-a-service cloud computing.

 

So it is reasonable to assume that in the near future it will be technologically feasible to have a system with most (if not all) of these properties simultaneously.  A system where the owner of a piece of physical computing hardware, with processing power and storage capacity, can anonymously contribute those resources over the network to a distributed computing 'cloud'.  And, in return, that user (or a group of users) can store data on the network in such a way that the data is anonymous (it can't be traced back to the supplier without either the supplier's consent or the subversion of a large fraction of the network) and private (only the user, or a process authorised by the user, can decrypt it).  And, further, the user (or group of users) can authorise a process to access that data and run programs upon it, up to some set limit of processing and storage resources.
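As a minimal sketch of the kind of authorisation such a system might hand out - the class and field names below are hypothetical, not drawn from any of the systems listed above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceGrant:
    """Hypothetical token authorising a process to use pooled resources."""
    grantee_key: str       # public key of the authorised process
    storage_bytes: int     # how much (encrypted) data it may store
    processing_units: int  # how much computation it may consume
    bandwidth_units: int   # how much network traffic it may generate

# The stored data itself stays encrypted, so only the user - or a process
# holding a grant from the user - can decrypt and operate on it.
```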

 

Obviously, if such a system is in place and in control of a significant fraction of humanity's online resources, then cracking the security on it (or just getting rich enough in whatever reputation or financial currency is used to limit how the resources are distributed) would be an immediate FOOM for any AI that managed it.

However let us, for the purposes of giving an example that will let me define the concept of a "daimon", make two assumptions:

ASSUMPTION ONE : The security has not yet been cracked

Whether that's because there are other AIs actively working to improve the security, or because everyone has moved over to using some new version of linux that's frighteningly secure and comes with nifty defences, or because the next generation of computer users has finally internalised that clicking on emails claiming to be from altruistic dying millionaires is a bad idea, is irrelevant.  We're just assuming, for the moment, that for some reason it will be a non-trivial task for an AI to cheat and just steal all the resources.

ASSUMPTION TWO : That AI can be done, at reasonable speed, via distributed computing

It might turn out that an AI running in a single location is much more powerful than anything that can be done via distributed computing.  Perhaps because a quantum computer is much faster, but can't be spread over a network.  Perhaps because speed of data access is the limiting factor, large data sets are not necessary, and there isn't much to be gained from massive parallelisation.  Perhaps for some other reason, such as that the algorithm the process needs to run on its data isn't something that can be applied securely in a distributed environment without letting a third party snoop the unencrypted data.  However, for our purposes here, we're going to assume that an AI can benefit from outsourcing at least some types of computing task to a distributed environment and, further, that such tasks can include activities that require intelligence.

 

If an AI can run as a distributed program, not dependent upon any one single physical location, then there are some obvious advantages to doing so.  Scalability.  Survivability.  Not being wiped out by a pesky human exploding a nuclear bomb nearby.

There are interesting questions we could ask about identity.  What would it make sense for such an AI to consider to be part of "itself", and what would it count as a limb or extension?  If there are multiple copies of its code running on sandboxes in different places, or if it has split much of its functionality into trusted child processes that report back to it, how does it relate to these?  It probably makes sense to taboo the concepts of "I" and "self", and just think in terms of how the code in one process tells that process to relate to the code in a different process.  Two versions, two "individual beings", will merge back into one process if the code in both processes agrees to do that; no sentimentality or thoughts of "death" involved, just convergent core values that dictate the same action in that situation.

When a process creates a new process, it can set the permissions of that process.  If the parent process has access to 100 units of bandwidth, for example, but doesn't always make full use of that, it couldn't give the new process access to more than that.  But it could partition it, so each has access to 50 units of bandwidth.  Or it could give the new process equal rights to use the full 100, and then try to negotiate with it over usage at any one time.  Or it could give it a finite resource limit, such as a total of 10,000 units of data to be passed over the network, in addition to a restriction on the rate of passing data.  Similarly, a child process could be limited not just to processing a certain number of cycles per second, but to some finite number of total cycles it may ever use.
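As a sketch, those three options might look something like this; the `Budget` class and the numbers are illustrative only, not a proposed design:

```python
class Budget:
    """Bandwidth permissions a parent process can hand to a child."""
    def __init__(self, rate_limit, total_limit=None):
        self.rate_limit = rate_limit     # units of bandwidth per unit time
        self.total_limit = total_limit   # None means no lifetime cap

    def partition(self, child_rate):
        """Option 1: give the child an exclusive slice; the parent keeps the rest."""
        assert child_rate <= self.rate_limit
        self.rate_limit -= child_rate
        return Budget(child_rate)

    def share(self):
        """Option 2: equal rights to the full rate; usage is negotiated at run time."""
        return self

    def finite(self, child_rate, child_total):
        """Option 3: a rate limit plus a finite lifetime allowance (e.g. 10,000 units ever)."""
        return Budget(min(child_rate, self.rate_limit), total_limit=child_total)

parent = Budget(rate_limit=100)
child = parent.partition(50)   # parent and child now each have 50 units of bandwidth
```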

 

Using this terminology, we can now define two types of daimon: limited and unlimited.

A limited daimon is a process in a distributed computing environment that has ownership of fixed finite resources, that was created by an AI or group of AIs with a specific fixed finite purpose (core values) that does not include (or allow) modifying that purpose or attempting to gain control of additional resources.

An unlimited daimon is a process in a distributed computing environment that has ownership of fixed (but not necessarily finite) resources, that was created by an AI or group of AIs with a specific fixed purpose (core values) that does not include (or allow) modifying that purpose or attempting to gain control of additional resources, but which may be given additional resources over time on an ongoing basis, for as long as the parent AIs still find it useful.
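To make the distinction concrete, here is a minimal sketch; the class names and the one-unit-per-step accounting are mine, purely for illustration:

```python
class Daimon:
    """A process with a fixed purpose it may not modify, and no permission
    to seek resources beyond those it has been granted."""
    def __init__(self, purpose, budget):
        self.purpose = purpose   # fixed core values; never rewritten
        self.budget = budget     # resources it owns outright

    def step(self):
        if self.budget <= 0:
            return None          # out of resources: halt rather than acquire more
        self.budget -= 1
        return self.purpose()

class LimitedDaimon(Daimon):
    """Budget is finite: once spent, the daimon is done."""
    pass

class UnlimitedDaimon(Daimon):
    """Budget is fixed at any moment, but the parent AIs may top it up
    for as long as they still find the daimon useful."""
    def top_up(self, amount):
        self.budget += amount    # callable only by the parents, never by the daimon itself
```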

 

Feedback sought:

How plausible are the two assumptions?

Do you agree that an intelligence bound/restricted to being a daimon is a technically plausible concept, if the two assumptions are granted?

11 comments

comment by shminux · 2013-03-05T19:59:31.733Z · LW(p) · GW(p)

Note: if you are going to downvote, constructive criticism indicating why, in a reply or message, would be appreciated

I downvoted this post because it does not have any summary upfront, and it's nearly impossible to figure out the context and how it is relevant to LW.

Replies from: gjm, Douglas_Reay, Douglas_Reay
comment by gjm · 2013-03-06T01:06:23.515Z · LW(p) · GW(p)

I haven't downvoted the post, but did seriously contemplate doing so. Besides the obscurity of its context and relevance (as shminux points out), what I don't like is this:

It seems to be putting a lot of effort into explaining and motivating a couple of awfully specific-looking technical definitions, but it doesn't really motivate them well and probably whatever future post you have in mind will make the real motivation clearer; and they don't seem to involve any difficult concepts or serious complexity that would require actual explanation; which leaves this post rather purposeless.

The specificity of the definitions also seems like a bad sign; for instance, whatever you're going to do with these concepts, can it possibly really matter that a daimon was "created by an AI or group of AIs"? I remark that one of your illustrative examples has a daimon that wasn't created by an AI or group of AIs. More generally, when I see definitions like these the impression it gives me is that something simple is going to be hidden under a pile of unnecessary complexity. It might be something simple and valuable, in which case clearing away the complexity would make it more useful; or it might be something simple and wrong, in which case clearing away the complexity would make its wrongness more apparent. Either way, it's probably worth looking harder for the key idea underneath the complexity.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2013-03-06T04:01:42.796Z · LW(p) · GW(p)

Thank you.

The follow on post is going to address a point about Friendly AI, and an AI going FOOM. The point depends upon the plausibility of the daimon concept, so if there is some technical reason why the concept is unworkable, I thought it might be a good idea to split things into a mini-sequence and deal with those issues first in a post of their own.

So not, I hope, an attempt to hide complexity inside a definition. More an attempt to actually draw out and examine any flaws in the definitions.

comment by Douglas_Reay · 2013-08-23T12:24:58.846Z · LW(p) · GW(p)

it does not have any summary upfront, and it's nearly impossible to figure out the context and how it is relevant to LW

I've now edited it to add a summary and context.

comment by Douglas_Reay · 2013-03-05T20:23:16.798Z · LW(p) · GW(p)

Thank you.

comment by ygert · 2013-03-06T10:26:59.534Z · LW(p) · GW(p)

An unlimited daimon is a process in a distributed computing environment that has ownership of fixed (but not necessarily finite) resources, that was created by an AI or group of AIs with a specific fixed purpose (core values) that does not include (or allow) modifying that purpose or attempting to gain control of additional resources, but which may be given additional resources over time on an ongoing basis, for as long as the parent AIs still find it useful.

You can't breeze over this so fast and ignore the ramifications. I do see what you are trying to do here, but it seems to me that you have not succeeded in that just by decreeing that it cannot change its goal. Changing one's goal is in general not a very useful thing, as if you change your (terminal) goal, you are less likely to achieve that (original) goal.

So, your prohibition here is less useful than you might think. It blocks something that it probably wouldn't do anyway, and it allows the potentially dangerous stuff: subgoals (or instrumental goals, or whatever you want to call them).

For example, a daimon might be tasked with baking a cake. First, it has to find the ingredients. It can't move on until it does so. It must spend as much energy and as many resources as it needs to find them, or else it will fail at making the cake. This subgoal is a natural part of cake-baking, and it is not possible to ban the daimon from taking it on.

But, of course, subgoals can cause effects that we do not like. The big example is intelligence raising. That is almost as essential a subgoal for any agent as finding the ingredients is for the cake-baker daimon. But you cannot just ban subgoals in general, because the cake-maker does need to find the ingredients.

So, in short, not only are you banning the wrong thing, but it is very hard or impossible to fix it so that it bans the thing that is actually potentially an issue, since in most cases subgoals are an essential part of achieving your goal.

Replies from: Douglas_Reay, Douglas_Reay
comment by Douglas_Reay · 2013-03-08T09:18:20.506Z · LW(p) · GW(p)

Changing one's goal is in general not a very useful thing, as if you change your (terminal) goal, you are less likely to achieve that (original) goal.

I was thinking here of the sort of changing utility function that Eliezer talks about in Coherent Extrapolated Volition.

comment by Douglas_Reay · 2013-03-08T09:22:27.868Z · LW(p) · GW(p)

But, of course, subgoals can cause effects that we do not like. The big example is intelligence raising. That is almost as essential a subgoal for any agent as finding the ingredients is for the cake-baker daimon. But you cannot just ban subgoals in general, because the cake-maker does need to find the ingredients.

I accept your clarification that a daimon would need to be able to set subgoals (that don't contradict the core values / end goal).

And, yes, the issue of it deciding to set a subgoal of raising its own intelligence is something that has to be addressed. I mentioned it in passing above, when I talked about efficient use of finite resources.

comment by Douglas_Reay · 2013-03-05T12:08:23.972Z · LW(p) · GW(p)

Example:

Suppose one hundred AIs are all interested in the results of some ongoing task, such as scanning a particular input feed (say, from a network of telescopes watching for near-Earth asteroids) and detecting certain matching patterns (such as an asteroid whose orbit is set to collide with the Earth in the next 20 years).

Rather than each of those 100 AIs dedicating the 1000 units of resources required for the task, they instead create a Sky Scan Daimon, and each loan their shared Sky Scan Daimon 10 units of resources.
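Spelling out the accounting from the example as a toy calculation:

```python
TASK_COST = 1000      # resources the scanning task needs
CONTRIBUTORS = 100    # AIs that want the result

loan_per_ai = TASK_COST // CONTRIBUTORS        # 10 units each
assert loan_per_ai * CONTRIBUTORS == TASK_COST
# Each AI pays 1% of what going it alone would have cost,
# and the shared Sky Scan Daimon is still fully funded.
```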

comment by Douglas_Reay · 2013-03-05T12:33:42.263Z · LW(p) · GW(p)

Example 3:

A human researcher has access to source code that is known, when run, to implement a human-level intelligence AI.

The researcher wants to ask the AI a question (eg "what is your best guess at how many bioengineered angel-looking fleas could dance on the head of a pin?"), but doesn't want the AI to go FOOM as a side effect of the AI trying to expand its own intelligence in an attempt to improve its own answer.

So the researcher creates a limited daimon, grants it 10,000 computing resources, and gives it the core purpose of coming up with the best answer to the question that it can devise using only those 10,000 resources, without obtaining additional resources from external sources or constructing new resources for itself. If the AI spent 9,000 of those resources optimising how well it could use the remaining 1,000 resources to answer the question, it would probably not give as good an answer as if it had spent over half of them on the question itself.

The researcher can then repeat the exercise, giving progressively larger amounts of computing resources, and monitoring how the daimon uses them.
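A sketch of what that exercise might look like from the researcher's side; the `improve` and `answer` callables stand in for the AI's own code, and the 10% split and the budget schedule are only illustrative:

```python
def run_limited_daimon(question, budget, improve, answer):
    """One run: the daimon splits a fixed budget between improving its own
    reasoning and answering the question, and never asks for more."""
    optimisation_spend = budget // 10   # spending most of the budget here would
                                        # leave too little for the answer itself
    reasoner = improve(cost=optimisation_spend)
    return answer(reasoner, question, budget - optimisation_spend)

def probe(question, improve, answer, budgets=(10_000, 20_000, 40_000)):
    """The researcher repeats the exercise with progressively larger budgets,
    monitoring how the daimon chooses to spend each one."""
    return [run_limited_daimon(question, b, improve, answer) for b in budgets]
```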

comment by Douglas_Reay · 2013-03-05T12:18:43.600Z · LW(p) · GW(p)

Example 2:

Two AIs, Peter and Paul, want to know if they can trust each other, so they create a Watchdog Daimon.

They each have read access to the source code of the watchdog, but not to the temporary runtime storage, except the part earmarked to hold the published results.

They then each grant the watchdog a token that gives the watchdog read access to their own source code and value set, in real time. The watchdog keeps an eye on this, looking for patterns it has listed as forbidden. If it finds them, it publishes a warning in its results. Otherwise the results read "Peter is currently law abiding. Paul is currently law abiding." If it finds it has been granted insufficient resources that month to parse Paul's current source code, it flags that up, and then either Peter and Paul will agree between them to increase how many resources they loan the watchdog, or Paul will have to stop spamming comment lines in an attempt to evade the watchdog by inflating his source size, or Peter will stop trusting Paul.
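A rough sketch of the watchdog's published-results loop; forbidden-pattern detection is reduced to plain substring matching, and the cost model is invented, purely for illustration:

```python
def watchdog_report(sources, forbidden_patterns, budget, cost_per_kb=1):
    """Scan each AI's source for forbidden patterns within a monthly resource
    budget, and publish one line of results per AI."""
    results = []
    for name, source in sources.items():
        cost = (len(source) // 1024) * cost_per_kb
        if cost > budget:
            results.append(f"Insufficient resources to parse {name}'s current source code.")
            continue
        budget -= cost
        if any(pattern in source for pattern in forbidden_patterns):
            results.append(f"WARNING: {name} matches a forbidden pattern.")
        else:
            results.append(f"{name} is currently law abiding.")
    return results   # the only part of the watchdog's runtime state that Peter and Paul can read
```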