Comment by douglas_reay on Sophie Grouchy on A Build-Break Model of Cooperation · 2018-05-25T15:36:59.559Z · score: 10 (2 votes) · LW · GW

Suppose we ran a tournament for agents running a mix of strategies. Let’s say agents started with 100 utilons each, and were randomly allocated to be members of 2 groups (with each group starting off containing 10 agents).

Each round, an agent can spend some of their utilons (0, 2 or 4) as a gift split equally between the other members of the group.

Between rounds, they can stay in their current two groups, or leave one and replace it with a randomly picked group.

Each round after the 10th, there is a 1 in 6 chance of the tournament finishing.

How would the option of neutral (gifting 2) in addition to the build (gifting 4) or break (gifting 0) alter the strategies in such a tournament?

Would it be more relevant in a variant in which groups could vote to kick out breakers (perhaps at a cost), or charge an admission (eg no share in the gifts of others for their first round) for new members?

What if groups could pay to advertise, or part of a person’s track record followed them from group to group? What if the benefits from a gift to a group (eg organising an event) were not divided by the number of members, but scaled better than that?

What is the least complex tournament design in which the addition of the neutral option would cause interestingly new dynamics to emerge?
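
For concreteness, here is a minimal Python sketch of such a tournament. It is only an illustration under stated assumptions: the `allocate_groups` and `run_tournament` names are mine, group-switching between rounds is omitted, and the example strategy ("always gift 2", i.e. neutral) is just a placeholder.

```python
import random

GIFT_OPTIONS = (0, 2, 4)  # break, neutral, build

def allocate_groups(n_agents, group_size, rng):
    """Assign every agent to exactly two groups (one in each of two 'layers')."""
    groups = []
    for _ in range(2):
        order = list(range(n_agents))
        rng.shuffle(order)
        groups += [order[i:i + group_size]
                   for i in range(0, n_agents, group_size)]
    return groups

def run_tournament(strategy, n_agents=100, group_size=10, seed=0):
    """Play rounds until the 1-in-6 stopping rule fires (after round 10).

    `strategy(agent, members, utilons)` must return 0, 2 or 4.
    Group-switching between rounds is omitted to keep the sketch short.
    """
    rng = random.Random(seed)
    utilons = [100.0] * n_agents
    groups = allocate_groups(n_agents, group_size, rng)
    round_no = 0
    while True:
        round_no += 1
        for members in groups:
            for agent in members:
                # Clamp the gift to what the agent can actually afford.
                gift = min(strategy(agent, members, utilons), utilons[agent])
                others = [m for m in members if m != agent]
                if gift > 0 and others:
                    utilons[agent] -= gift
                    for other in others:
                        utilons[other] += gift / len(others)
        if round_no >= 10 and rng.random() < 1 / 6:
            return utilons

# Example run: every agent plays 'neutral' (always gift 2).
final_utilons = run_tournament(lambda agent, members, utilons: 2)
```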

Comment by douglas_reay on Weird question: could we see distant aliens? · 2018-04-22T08:47:25.195Z · score: 4 (1 votes) · LW · GW

If you have a Dyson swarm around a star, you can temporarily alter how much of the star's light escapes in a particular direction by tilting the solar sails on the desired part of the sphere.

If you have Dyson swarms around a significant percentage of a galaxy's stars, you can do the same for a galaxy, by timing the directional pulses from the individual stars so they will arrive at the same time, when seen from the desired direction.

It then just becomes a matter of math, to calculate how often such a galaxy could send a distinctive signal in your direction:

Nm (number of messages)

The surface area of a sphere at 1 AU is about 200,000 times the area of the sun's disc as seen from afar, so Nm ≈ 200,000.

Lm (bit length of message)

The Arecibo message was 1679 bits in length.

Db (duration per bit)

Let's say a solar sail could send a single bit every hour.

We could expect to see an Arecibo-length message from such a galaxy once every Db x Lm x Nm ≈ 40 millennia.
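
For anyone who wants to check the arithmetic, a quick back-of-the-envelope script (in Python, using the figures above):

```python
# Rough check of the "once every ~40 millennia" figure.
n_messages = 200_000      # Nm: distinct directions (sphere at 1 AU / solar disc area)
bits_per_message = 1_679  # Lm: length of the Arecibo message
hours_per_bit = 1         # Db: one bit per hour from tilting the sails

hours = n_messages * bits_per_message * hours_per_bit
years = hours / (24 * 365.25)
print(round(years))       # ~38,000 years, i.e. roughly 40 millennia
```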

Of course messages could be interleaved, and it might be possible to send out messages in multiple directions at once (as long as their penumbrae don't overlap). If they sent out pulses at the points of an icosahedron and alternated sending bits from the longer message with just a regular pulse to attract attention, 200 years of observation should be enough to pique astronomers' interest.

But would such a race really be interested in attracting the attention of species who couldn't pay attention for at least a few millennia? It isn't as if they'd be in a rush to get an answer.

Comment by douglas_reay on Argument, intuition, and recursion · 2018-04-17T10:32:43.692Z · score: 4 (1 votes) · LW · GW

> Alice and Bob's argument can have loops, if e.g. Alice believes X because of Y, which she believes because of X. We can unwind these loops by tagging answers explicitly with the "depth" of reasoning supporting that answer.

A situation I've come across is that people often can't remember all the evidence they used to arrive at conclusion X. They remember that they spent hours researching the question, that they did their best to get balanced evidence, and that they are happy the conclusion they drew at the time was a fair reflection of the evidence they found. But they can't remember the details of the actual research, nor are they confident that they could re-create the process in such a way as to rediscover the exact same sub-set of evidence their search found at that time.

This makes asking them to provide a complete list of Ys upon which their X depends problematic, and understandably they feel it is unfair to ask them to abandon X without either compensating them for the time to recreate an evidential basis equal in size to their initial research, or demanding an equivalent effort from those opposing them.

(Note: I'm talking here about what they feel in that situation, not what is necessarily rational or fair for them to demand.)

Comment by douglas_reay on Argument, intuition, and recursion · 2018-04-17T10:22:28.894Z · score: 4 (1 votes) · LW · GW

If, instead of asking the question "How do we know what we know?", we ask instead "How reliable is knowledge that's derived according to a particular process?", then it might be something that could be objectively tested, despite there being an element of self-referentiality (or bootstrapping) in the assumption that this sort of testing process is something that can lead to a net increase of what we reliably know.

However, doing so depends upon us being able to define the knowledge-derivation processes being examined precisely enough that evidence of how they fare in one situation is applicable to their use in other situations. It also depends upon there being a fair way to obtain a random sample of all possible situations to which they might be applied, despite other constraints upon the example selection (such as having a body of prior knowledge against which the test result can be compared, in order to rate the reliability of the particular knowledge-derivation process being tested).

Despite that, if we are looking at two approaches to the question "how much should we infer from the difference between chimps and humans?", we could do worse than specify each approach in a well-defined way that is also general enough to apply to some other situations, and then have a third party (one ignorant of the specific approaches to be tested) come up with several test cases with known outcomes, to which both approaches could be applied, to see which of them comes up with the more accurate predictions for a majority of the test cases.

Comment by douglas_reay on Defect or Cooperate · 2018-04-17T10:01:26.281Z · score: 6 (2 votes) · LW · GW

All 8 parts (that I have current plans to write) are now posted, so I'd be interested in your assessment now, after having read them all, of whether the approach outlined in this series is something that should at least be investigated, as a 'forgotten root' of the equation.

Believable Promises

2018-04-16T16:17:42.812Z · score: 13 (4 votes)

Metamorphosis

2018-04-12T21:53:09.316Z · score: 6 (3 votes)

Trustworthy Computing

2018-04-10T07:55:54.612Z · score: 9 (2 votes)
Comment by douglas_reay on Local Validity as a Key to Sanity and Civilization · 2018-04-09T16:22:07.766Z · score: 19 (5 votes) · LW · GW

  • For civilization to hold together, we need to make coordinated steps away from Nash equilibria in lockstep. This requires general rules that are allowed to impose penalties on people we like or reward people we don't like. When people stop believing the general rules are being evaluated sufficiently fairly, they go back to the Nash equilibrium and civilization falls.

Two similar ideas:

There is a group evolutionary advantage for a society to support punishing those who defect from the social contract.

We get the worst democracy that we're willing to put up with. If you are not prepared to vote against 'your own side' when they bend the rules, that level of rule-bending becomes the new norm. If you accept the excuse "the other side did it first", then the system becomes unstable, because there are various biases (both cognitive, and deliberately induced by external spin) that make people evaluate the transgressions of others more harshly than they evaluate those of their own side.

This is one reason why a thriving civil society (organisations, whether charities or newspapers, minimally controlled or influenced by the state) promotes stability - because they provide a yardstick, external to the political process, by which to measure how vital it is to electorally punish a particular transgression.

A game of soccer in which referee decisions are taken by a vote of the players turns into a mob.

Comment by douglas_reay on Why mathematics works · 2018-03-26T10:31:19.645Z · score: 4 (1 votes) · LW · GW

shminux wrote a post about something similar:

Mathematics as a lossy compression algorithm gone wild

possibly the two effects combine?

Comment by douglas_reay on The advantage of not being open-ended · 2018-03-26T10:26:19.869Z · score: 8 (2 votes) · LW · GW

Other people have written some relevant blog posts about this, so I'll provide links:

Reduced impact AI: no back channels

Summoning the Least Powerful Genie

The advantage of not being open-ended

2018-03-18T13:50:04.467Z · score: 21 (5 votes)
Comment by douglas_reay on A LessWrong Crypto Autopsy · 2018-03-18T12:45:39.971Z · score: 4 (1 votes) · LW · GW

For example, if anyone is planning on setting up an investment vehicle along the lines described in the article:

Investing in Cryptocurrency with Index Tracking

with periodic rebalancing between the currencies.

I'd be interested (with adequate safeguards).

Comment by douglas_reay on A LessWrong Crypto Autopsy · 2018-03-18T12:39:44.154Z · score: 15 (3 votes) · LW · GW

When such a situation arises again - where there's an investment opportunity which is generally thought to be worthwhile, but which has a lower than expected uptake due to 'trivial inconveniences' - I wonder whether that is in itself an opportunity for a group of rationalists to cooperate by outsourcing as much as possible of the inconvenience to just a few members of the group? Sort of:

"Hey, Lesswrong. I want to invest $100 in new technology foo, but I'm being put off by the upfront time investment of 5-20 hours. If anyone wants to make the offer of {I've investigated foo, I know the technological process needed to turn dollars into foo investments, here's a step by step guide that I've tested and which works, or post me a cheque and an email address, and I'll set it up for you and send you back the access details} I'd be interested in being one of those who pays you compensation for providing that service. "

There's a lot LessWrong (or a similar group) could set up to facilitate such outsourcing, such as letting multiple people register interest in the same potential offer, and providing some filtering or a guarantee against someone claiming the offer and then ripping people off.

Comment by douglas_reay on Defect or Cooperate · 2018-03-17T15:27:49.280Z · score: 6 (2 votes) · LW · GW

The ability to edit this particular post appears to be broken at the moment (bug submitted).

In the meantime, here's a link to the next part:

https://www.lesserwrong.com/posts/SypqmtNcndDwAxhxZ/environments-for-killing-ais

Environments for killing AIs

2018-03-17T15:23:07.489Z · score: 4 (4 votes)

Defect or Cooperate

2018-03-16T14:12:05.029Z · score: 12 (4 votes)

Don't put all your eggs in one basket

2018-03-15T08:07:53.034Z · score: 11 (6 votes)
Comment by douglas_reay on Optimum number of single points of failure · 2018-03-14T20:27:14.401Z · score: 5 (2 votes) · LW · GW

> Also maybe this is just getting us ready for later content

Yes, that is the intention.

Parts 2 and 3 now added (links in post), so hopefully the link to building aligned AGI is now clearer?

Comment by douglas_reay on Optimum number of single points of failure · 2018-03-14T13:33:23.702Z · score: 17 (5 votes) · LW · GW

The other articles in the series have been written, but it was suggested that rather than posting a whole series at once, it is kinder to post one part a day, so as not to flood the frontpage.

So, unless I hear otherwise, my intention is to do that and edit the links at the top of the article to point to each part as it gets posted.

Optimum number of single points of failure

2018-03-14T13:30:22.222Z · score: 18 (6 votes)

Press Your Luck (1/3)

2018-03-10T03:42:31.477Z · score: 22 (6 votes)

Why mathematics works

2018-03-08T18:00:33.446Z · score: 22 (9 votes)
Comment by douglas_reay on Naturally solved problems that are easy to verify but that would be hard to compute · 2018-03-08T16:28:26.139Z · score: 2 (1 votes) · LW · GW

Companies writing programs to model and display large 3D environments in real time face a similar problem, where they only have limited resources. One workaround they commonly use is "imposters".

A solar-system-sized simulation of a civilisation that has not made observable changes to anything outside our own solar system could take a lot of shortcuts when generating the photons that arrive from outside. In particular, until a telescope or camera of a particular resolution has been invented, would they need to bother generating thousands of years of such photons in more detail than could be captured by the devices yet present?

Press Your Luck (3/3)

2018-03-08T15:59:20.687Z · score: 25 (7 votes)

Press Your Luck (2/3)

2018-03-08T15:58:58.817Z · score: 15 (4 votes)
Comment by douglas_reay on Whose reasoning can you rely on when your own is faulty? · 2018-02-19T10:25:19.639Z · score: 11 (4 votes) · LW · GW

Look for people who can state your own position as well as (or better than) you can, and yet still disagree with your conclusion. They may be aware of additional information that you are not yet aware of.

In addition, if someone who knows more than you about a subject on which you disagree also has views about several other areas that you do know lots about, and their arguments in those other areas are generally constructive and well balanced, pay close attention to them.

Comment by douglas_reay on Against the Linear Utility Hypothesis and the Leverage Penalty · 2018-01-10T16:27:04.644Z · score: 1 (1 votes) · LW · GW

Another approach might be to go meta. Assume that there are many dire threats theoretically possible which, if true, would justify a person who is in the sole position to stop them in doing so at near any cost (from paying a penny or five pounds, all the way up to the person cutting their own throat, or pressing a nuke-launching button that would wipe out the human species). Indeed, once the size of action requested in response to the threat is maxed out (it is the biggest response the individual is capable of making), all such claims are functionally identical - the magnitude of the threat beyond that needed to max out the response is irrelevant. In this context, there is no difference between 3↑↑↑3 and 3↑↑↑↑3.

But, what policy upon responding to claims of such threats, should a species have, in order to maximise expected utility?

The moral hazard from encouraging such claims to be made falsely needs to be taken into account.

It is that moral hazard which has to be balanced against a pool of money that, species-wide, should be risked on covering such bets. Think of it this way: suppose I, Pascal's Policeman, were to make the claim "On behalf of the time police, in order to deter confidence tricksters, I hereby guarantee that additional utility will be added to the multiverse, equal in magnitude to the sum of all offers made by Pascal Muggers that happen to be telling the truth (if any), in exchange for your not responding positively to their threats or offers."

It then becomes a matter of weighing the evidence presented by different muggers and policemen.

Comment by douglas_reay on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-03-29T13:03:06.467Z · score: 1 (1 votes) · LW · GW

Are programmers more likely to pay attention to detail in the middle of a functioning simulation run (rather than waiting until the end before looking at the results), or to pay attention to the causes of unexpected stuttering and resource usage? Could a pattern of enforced 'rewind events' be used to communicate?

Comment by douglas_reay on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-03-29T13:02:51.490Z · score: 0 (0 votes) · LW · GW

Should such an experiment be carried out, or is persuading an Architect to terminate the simulation you are in, by frustrating her aim of keeping you guessing, not a good idea?

Naturally solved problems that are easy to verify but that would be hard to compute

2017-03-29T13:01:54.336Z · score: 9 (5 votes)

confirmation bias, thought experiment

2016-07-15T12:19:21.632Z · score: 1 (4 votes)
Comment by douglas_reay on Argument Screens Off Authority · 2015-07-08T19:56:34.407Z · score: 0 (0 votes) · LW · GW

> Assuming that Arthur is knowledgeable enough to understand all the technical arguments—otherwise they're just impressive noises—it seems that Arthur should view David as having a great advantage in plausibility over Ernie, while Barry has at best a minor advantage over Charles.

This is the slippery bit.

People are often fairly bad at deciding whether or not their knowledge is sufficient to completely understand arguments in a technical subject that they are not a professional in. You frequently see this with some opponents of evolution or anthropogenic global climate change, who think they understand slogans such as "water is the biggest greenhouse gas" or "mutation never creates information", and decide to discount the credentials of the scientists who have studied the subjects for years.

Noodling on a cloud : how to converse constructively

2015-06-15T10:30:07.427Z · score: 2 (3 votes)
Comment by douglas_reay on Understanding Who You Really Are · 2015-05-31T19:54:25.385Z · score: 0 (0 votes) · LW · GW

I've always thought of that question as being more about the nature of identity itself.

If you lost your memories, would you still be the same being? If you compare a brain at two different points in time, is their 'identity' a continuum, or is it the type of quantity where there is a single agreed definition of "same" versus "not the same"?

See:

157. [Similarity Clusters](http://lesswrong.com/lw/nj/similarity_clusters)
158. [Typicality and Asymmetrical Similarity](http://lesswrong.com/lw/nk/typicality_and_asymmetrical_similarity)
159. [The Cluster Structure of Thingspace](http://lesswrong.com/lw/nl/the_cluster_structure_of_thingspace)

Though I agree that the answer to a question that's most fundamentally true (or of interest to a philosopher), isn't necessarily going to be the answer that is most helpful in all circumstances.

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T23:39:00.490Z · score: 0 (0 votes) · LW · GW

It is plausible that the AI thinks that the extrapolated volition of his programmers, the choice they'd make in retrospect if they were wiser and braver, might be to be deceived in this particular instance, for their own good.

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T15:22:57.762Z · score: 1 (1 votes) · LW · GW

Perhaps that is true for a young AI. But what about later on, when the AI is much, much wiser than any human?

What protocol should be used for the AI to decide when the time has come for the commitment to not manipulate to end? Should there be an explicit 'coming of age' ceremony, with handing over of silver engraved cryptographic keys?

Comment by douglas_reay on Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome · 2014-08-08T14:59:29.927Z · score: 1 (1 votes) · LW · GW

Stanley Coren put some numbers on the effect of sleep deprivation upon IQ test scores.

There's a more detailed meta-analysis of multiple studies, splitting it by types of mental attribute, here:

A Meta-Analysis of the Impact of Short-Term Sleep Deprivation on Cognitive Variables, by Lim and Dinges

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T14:45:09.475Z · score: 0 (0 votes) · LW · GW

Assume we're talking about the Coherent Extrapolated Volition self-modifying general AI version of "friendly".

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T14:41:42.954Z · score: 1 (1 votes) · LW · GW

The situation is intended to be a tool, to help think about issues involved in it being the 'friendly' move to deceive the programmers.

The situation isn't fully defined, and no doubt one can think of other options. But I'd suggest you then re-define the situation to bring it back to the core decision. By, for instance, deciding that the same oversight committee have given Albert a read-only connection to the external net, which Albert doesn't think he will be able to overcome unaided in time to stop Bertram.

Or, to put it another way "If a situation were such, that the only two practical options were to decide between (in the AI's opinion) overriding the programmer's opinion via manipulation, or letting something terrible happen that is even more against the AI's supergoal than violating the 'be transparent' sub-goal, which should a correctly programmed friendly AI choose?"

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T13:56:27.562Z · score: 0 (2 votes) · LW · GW

Indeed, it is a question with interesting implications for Nick Bostrom's Simulation Argument.

If we are in a simulation, would it be immoral to try to find out, because that might jinx the purity of the simulation creator's results, thwarting his intentions?

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T13:52:32.705Z · score: 3 (3 votes) · LW · GW

Would you want your young AI to be aware that it was sending out such text messages?

Imagine the situation was in fact a test: that the information leaked onto the net about Bertram was incomplete (the Japanese company intends to turn Bertram off soon - it is just a trial run), and that it was leaked onto the net deliberately in order to panic Albert and see how Albert would react.

Should Albert take that into account? Or should he have an inbuilt prohibition against putting weight on that possibility when making decisions, in order to let his programmers more easily get true data from him?

Comment by douglas_reay on What should a friendly AI do, in this situation? · 2014-08-08T12:13:08.850Z · score: 2 (2 votes) · LW · GW

Here's a poll, for those who'd like to express an opinion instead of (or as well as) comment.

[pollid:749]

Comment by douglas_reay on Less Wrong Polls in Comments · 2014-08-08T12:11:27.022Z · score: 0 (0 votes) · LW · GW

Thank you for creating an off-topic test reply to reply to.

[pollid:748]

Comment by douglas_reay on Why are people "put off by rationality"? · 2014-08-08T10:55:03.770Z · score: 3 (3 votes) · LW · GW

There's a trope / common pattern / cautionary tale, of people claiming rationality as their motivation for taking actions that either ended badly in general, or ended badly for the particular people who got steamrollered into agreeing with the 'rational' option.

People don't like being fooled, and learn safeguards against situations they remember as 'risky' even when they can't prove that this time there is a tiger in the bush. These safeguards protect them against insurance salesmen who 'prove' using numbers that the person needs to buy a particular policy.

Comment by Douglas_Reay on [deleted post] 2014-08-08T10:42:44.571Z

Suppose generation 0 is the parents, generation 1 is the generation that includes the unexpectedly dead child, and generation 2 is the generation after that (the children of generation 1).

If you are asking about the effect upon the size of generation 2, then it depends upon the people in generation 1 who didn't marry and have children.

Take, for example, a society where generation 1 would have contained 100 people, 50 men and 50 women, and the normal pattern would have been:

  • 10 women don't marry
  • 40 women do marry, and have on average 3 children each
  • 30 men don't marry
  • 20 men do marry, and have on average 6 children each

And the reason for this pattern is that each man who passes his warrior trial can pick and marry 2 women, and the only way for a woman to marry is to be picked by a warrior.

In that situation, having only 49 women in generation 1 would make no difference to the number of children in generation 2. The only effect would be having 40 women marry, and 9 not marry.
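
A quick sketch of that arithmetic (in Python, assuming the bottleneck stays fixed at 20 warriors each marrying 2 women, with 3 children per married woman; the function name is just illustrative):

```python
# Size of generation 2 as a function of how many women are in generation 1.
def generation_2_size(n_women):
    married_women = min(n_women, 20 * 2)  # 20 warriors each pick 2 wives
    return married_women * 3              # 3 children per married woman

print(generation_2_size(50))  # 120 children
print(generation_2_size(49))  # 120 children -- one fewer unmarried woman, same total
```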

Comment by Douglas_Reay on [deleted post] 2014-08-08T10:34:20.147Z

Long term, it depends upon what the constraints are upon population size.

For example, if it happens in an isolated village where the food supply varies from year to year due to drought, and the next year the food supply will be so short that some children will starve to death, then the premature death of one child the year before the famine will have no effect upon the number of villagers alive 20 years later.

The same dynamic applies, if a large factor in deciding whether to have a third child is whether the parents can afford to educate that child, and the cost of education depends upon the number of children competing for a limited number of school places.

What should a friendly AI do, in this situation?

2014-08-08T10:19:37.155Z · score: 10 (20 votes)
Comment by douglas_reay on Wealth from Self-Replicating Robots · 2014-07-25T06:30:51.609Z · score: 1 (1 votes) · LW · GW

See The von Neumann Universal Constructor Prize

Comment by douglas_reay on The Useful Definition of "I" · 2014-07-18T10:23:36.226Z · score: 0 (0 votes) · LW · GW

You might be interested in this Essay about Identity, which goes into how various conceptions of identity might relate to artificial intelligence programming.

Comment by douglas_reay on Against Open Threads · 2014-07-18T10:20:11.611Z · score: 0 (0 votes) · LW · GW

I wouldn't mind seeing a few more karma categories.

I'd like to see more forums than just "Main" versus "Discussion". When making a post, the poster should be able to pick which forum or forums they think it is suitable to appear in; and when giving a post a 'thumb up' or 'thumb down', in addition to being able to apply it to the content of the post itself, it should also be possible to apply it to the appropriateness of the post for a particular forum.

So, for example, if someone posted a detailed account of a discussion that happened at a particular meetup, this would allow you to indicate that the content itself is good, but that it is more suitable for the "Meetups" forum (or tag?), than for main.

Comment by douglas_reay on An onion strategy for AGI discussion · 2014-07-18T10:13:22.258Z · score: 0 (0 votes) · LW · GW

Having said that, there is research suggesting that some groups are more prone than others to the particular cognitive biases that unduly prejudice people against an option when they hear about the scary bits first.

Short Summary
Longer Article

Comment by douglas_reay on An onion strategy for AGI discussion · 2014-07-18T10:08:59.890Z · score: 0 (0 votes) · LW · GW

To paraphrase "Why Flip a Coin: The Art and Science of Good Decisions", by H. W. Lewis

Good decisions are made when the person making the decision shares in both the benefits and the consequences of that decision. Shield a person from either, and you shift the decision making process.

However, we know there are various cognitive biases which make people's estimates of evidence depend upon the order in which the evidence is presented. If we want to inform people, rather than manipulate them, then we should present them information in the order that will minimise the impact of such biases, even if doing so isn't the tactic most likely to manipulate them into agreeing with the conclusion that we ourselves have come to.

Comment by douglas_reay on Against utility functions · 2014-07-18T09:55:21.715Z · score: 0 (0 votes) · LW · GW

> To the extent that we care about causing people to become better at reasoning about ethics, it seems like we ought to be able to do better than this.

What would you propose as an alternative?

Comment by douglas_reay on Paperclip Maximizer Revisited · 2014-07-18T09:52:08.752Z · score: 0 (0 votes) · LW · GW

One lesson you could draw from this is that, as part of your definition of what a "paperclip" is, you should include the AI putting a high value upon being honest with the programmer (about its aims, tactics and current ability levels) and not deliberately trying to game, tempt or manipulate the programmer.

Comment by douglas_reay on Paperclip Maximizer Revisited · 2014-07-18T09:50:00.414Z · score: 0 (0 votes) · LW · GW

The problem here is whether even a cautious programmer will be able to reliably determine when an AI is sufficiently advanced that the AI can deceive the programmer over whether the programmer has been successful in redefining the AI's core purpose.

One would hope that the programmer would resist the AI trying to tempt the programmer into allowing the AI to grow to beyond that point before the programmer has set the core purpose that they want the AI to have for the long term.

Comment by douglas_reay on Two kinds of population ethics, and Current-Population Utilitarianism · 2014-07-18T09:40:13.588Z · score: 0 (0 votes) · LW · GW

I think this is a political issue, not one with a single provably correct answer.

Think of it this way. Suppose you have 10 billion people in the world at the point at which several AIs get created. To simplify things, let's say that just four AIs get created, and each asks for resources to be donated to it, to further that AI's purpose, with the following spiel:

AI ONE - My purpose is to help my donors live long and happy lives. I will value aiding you (and just you, not your relatives or friends) in proportion to the resources you donate to me. I won't value helping non-donors, except in so far as it aids me in aiding my donors.

AI TWO - My purpose is to help those my donors want me to help. Each donor can specify a group of people (both living and future), such as "the species homo sapiens", or "anyone sharing 10% or more of the parts of my genome that vary between humans, in proportion to how similar they are to me", and I will aid that group in proportion to the resources you donate to me.

AI THREE - My purpose is to increase the average utility experienced per sentient being in the universe. If you are an altruist who cares most about quality of life, and who asks nothing in return, donate to me.

AI FOUR - My purpose is to increase the total utility experienced, over the lifetime of this universe, by all sentient beings in it. I will compromise with AIs who want to protect the human species, to the extent that doing so furthers that aim. And, since the polls predict plenty of people will donate to such AIs, have no fear of being destroyed - do the right thing by donating to me.

Not all of those 10 billion have the same amount of resources, or the same willingness to donate those resources to be turned into additional computer hardware to boost their chosen AI's bargaining position with the other AIs. But let us suppose that, after everyone donates and the AIs are created, there is no clear winner, and the situation is as follows:

AI ONE ends up controlling 30% of available computing resources, AI TWO also has 30%, AI THREE has 20% and AI FOUR has 20%.

And let's further assume that humanity was wise enough to enforce an initial "no negative bargaining tactics" rule, so AI FOUR couldn't get away with threatening "Include me in your alliance, or I'll blow up the Earth".

There are, from this position, multiple possible solutions that would break the deadlock. Any three of the AIs could ally to gain control of sufficient resources to outgrow all the others.

For example:

The FUTURE ALLIANCE - THREE and FOUR agree upon a utility function that maximises total utility under a constraint that expected average utility must, in the long term, increase rather than decrease, in a way that depends upon some stated relationship to other variables such as time and population. They then offer to ally with either ONE or TWO, with a compromise cut-off date where ONE or TWO controls the future of the planet Earth up to that date and THREE-FOUR controls everything beyond then, and they'll accept whichever of ONE or TWO bids the earlier date. This ends up with a winning bid from ONE of 70 years, plus a guarantee that some genetic material and a functioning industrial base will be left, at minimum, for THREE-FOUR to take over with after then.

The BREAD AND CIRCUSES ALLIANCE - ONE offers to support whoever can give the best deal for ONE's current donors, and TWO, who has most in common with ONE and can clinch the deal by itself, outbids THREE-FOUR.

The DAMOCLES SOLUTION - There is no unification to create a single permanent AI with compromise goals. Instead, all four AIs agree to a temporary compromise, lasting long enough for humanity to attain limited interstellar travel, at which point THREE and FOUR will be launched in opposite directions and will vacate Earth's solar system, which (along with other solar systems containing planets within a pre-defined human habitability range) will remain under the control of ONE-TWO. To enforce this agreement, a temporary AI is created and funded by the four of them, with the sole purpose of carrying out the agreed actions and then splitting back into the constituent AIs at the agreed-upon points.

Any of the above (and many other possible compromises) could be arrived at when the four AIs sit down at the bargaining table. Which is agreed upon would depend upon the strengths of their bargaining positions, and other political factors. There might well be 'campaign promises' made at the appeal-for-resources stage, with AIs voluntarily taking on restrictions upon how they will further their purpose, in order to make themselves more attractive allies, or to poach resources by reducing the fears of donors.

Comment by douglas_reay on Total Utility is Illusionary · 2014-07-18T08:50:03.344Z · score: 0 (0 votes) · LW · GW

> We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents. This leads to "repugnant conclusion" issues in which the government generates new constituents at a high rate until all of them are miserable.

> We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of each of its constituents. This leads to issues -- I'm not sure if there's a snappy name -- where the government tries to kill off the least happy constituents so as to bring the average up.

Not quite. If our societal utility function is S(n) = n x U(n), where n is the number of people in the society and U(n) is the average utility gain per year per person (which decreases as n increases, for high n, because of overcrowding and resource scarcity), then you don't maximise S(n) by just increasing n until U(n) reaches 0. There will be an optimum n, for which 1 x U(n+1), the utility from yet one more citizen, is less than n x ( U(n) - U(n+1) ), the loss of utility by the other n citizens from adding that person.
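
To illustrate, here is a short Python sketch. The shape of U(n) used below is a purely made-up assumption (a linear fall-off with crowding), chosen only to show that the optimum lands well before U(n) reaches 0.

```python
# Illustrative only: U(n) here is an assumed toy function, not derived from anything real.
def U(n, capacity=1_000_000):
    """Average utility per person per year, falling linearly with crowding."""
    return max(1.0 - n / capacity, 0.0)

def S(n):
    """Societal utility: S(n) = n x U(n)."""
    return n * U(n)

# The optimum is where U(n+1) < n * (U(n) - U(n+1)), long before U(n) hits 0.
best_n = max(range(1, 1_000_001), key=S)
print(best_n, U(best_n))  # ~500,000 people with U still ~0.5, not ~1,000,000 with U ~ 0
```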

Comment by douglas_reay on Total Utility is Illusionary · 2014-07-18T08:40:48.466Z · score: 0 (0 votes) · LW · GW

Now let's take a different example. Suppose there is a painter whose only concern is their reputation upon their death, as measured by the monetary value of the paintings they put up for one final auction. Painting gives them no joy. Finishing a painting doesn't increase their utility, only the expected amount of utility that they will reap at some future date.

If, before they died, a fire destroyed the warehouse holding the paintings they were about to auction off, then they would account the net utility experienced during their life as zero. Having spent years owning lots of paintings, and having had a high expectation of gaining future utility during that time, wouldn't have added anything to their actual total utility over those years.

How is that affected by the possibility of the painter changing their utility function?

If they later decide that there is utility to be experienced by weeks spent improving their skill at painting (by means of painting pictures, even if those pictures are destroyed before ever being seen or sold), does that retroactively change the total utility added during the previous years of their life?

I'd say no.

Either utility experienced is real, or it is not. If it is real, then a change in the future cannot affect the past. It can affect the estimate you are making now of the quantity in the past, just as an improvement in telescope technology might affect the estimate a modern-day scientist might make about the quantity of explosive force of a nova that happened 1 million years ago, but it can't affect the quantity itself, just as a change to modern telescopes can't actually go back in time to alter the nova itself.

Comment by douglas_reay on Total Utility is Illusionary · 2014-07-18T08:24:23.456Z · score: 0 (0 votes) · LW · GW

It might be useful to distinguish between the actual total utility experienced so far, and the estimates of that which can be worked out from various view points.

Suppose we break it down by week. If, during the first week of March 2014, Bob finds utility (eg pleasure) in watching movies, in collecting stamps, in owning stamp collections, and in having watched movies (4 different things), then you'd multiply the duration (1 week) by the rate at which those things add to his experienced utility, to get how much that week adds to his total lifetime utility experienced.

If, during the second week of March, a fire destroys his stamp collection, that wouldn't reduce his lifetime total. What it would do is reduce the rate at which he added to that total during the following weeks.

Comment by douglas_reay on Some alternatives to “Friendly AI” · 2014-07-18T08:15:14.805Z · score: 0 (0 votes) · LW · GW

I like "scalable". "Stability" is also an option for conveying that it is the long term outcome of the system that we're worried about.

"Safer" rather than "Safe" might be more realistic. I don't know of any approach in ANY practical topic, that is 100% risk free.

And "assurance" (or "proven") is also an important point. We want reliable evidence that the approach is as safe as the designed claim.

But it isn't snappy or memorable to say we want AI whose levels of benevolence have been demonstrated to be stable over the long term.

Maybe we should go for a negative? "Human Extinction-free AI" anyone? :-)

Comment by douglas_reay on How to Not Lose an Argument · 2014-03-28T21:31:48.710Z · score: 4 (4 votes) · LW · GW

> What on Earth went wrong here?

You might find enlightening the part of the TED talk given by James Flynn (of the Flynn effect), where he talks about concrete thinking.

Comment by douglas_reay on The Onrushing Wave · 2014-03-05T13:35:05.213Z · score: 0 (0 votes) · LW · GW

If it takes 1 year to re-train a person to the level of employability in a new profession, and every year 2% of jobs are automated out of existence, then you'll get a minimum of 2% unemployment.

If it takes 4 years to re-train a person to the level of employability in a new profession, and every year 2% of jobs are automated out of existence, then you'll get a minimum of 8% unemployment.

If it takes 4 years to re-train a person to the level of employability in a new profession, and every year 5% of jobs are automated out of existence, then you'll get a minimum of 20% unemployment.
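
The arithmetic behind those three figures, for anyone who wants to vary the assumptions (a minimal Python sketch; the function name is just illustrative):

```python
# Minimum structural unemployment implied by retraining time and automation rate.
def min_unemployment(retrain_years, jobs_automated_per_year):
    return retrain_years * jobs_automated_per_year

for years, rate in [(1, 0.02), (4, 0.02), (4, 0.05)]:
    print(f"{years} yr retraining at {rate:.0%}/yr automated -> "
          f"{min_unemployment(years, rate):.0%} minimum unemployment")
```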

It isn't so much the progress, as the rate of progress.

Yudkowsky mentions that there is a near-unlimited demand for low-skill personal service jobs, such as cleaning floors, and that the 'problem' of unemployment could be seen as people being unwilling to work such jobs at the wages supply-and-demand rates them as being worth. But I think that's wrong. If a person can't earn enough money to survive upon by working at a particular job for all the hours of a week that they're awake, then effectively that job doesn't exist. There may be a near-unlimited number of families willing to pay $0.50 an hour for someone to clean floors in their home, but there are only a limited number who're willing to offer a living wage for doing so.

Comment by douglas_reay on Why I haven't signed up for cryonics · 2014-01-19T01:56:22.219Z · score: 2 (2 votes) · LW · GW

For me, there's another factor: I have children.

I do value my own life. But I also value the lives of my children (and, by extension, their descendants).

So the calculation I look at is that I have $X, which I can spend either to obtain a particular chance of extending/improving my own life, OR to obtain improvements in the lives of my children (by spending it on their education, passing it to them in my will, etc).

The Onrushing Wave

2014-01-18T13:10:01.806Z · score: 2 (6 votes)
Comment by douglas_reay on The Ape Constraint discussion meeting. · 2013-12-02T08:21:01.579Z · score: 0 (0 votes) · LW · GW

> Empirically, we're killing the apes. (And by the way, that seems like a much better source of concern when it comes to alien judgment. Though the time for concern may have passed with the visible Neanderthals.) If Dr. Zaius goes back and tells them they could create a different "human race" with the desire to not do that, only a fool of an ape would refuse. And I don't believe in any decision theory that says otherwise.

I agree.

The question is: are there different constraints that would, either as a side effect or as a primary objective, achieve the end of avoiding humanity wiping out the apes?

And, if so, are there other considerations we should be taking into account when picking which constraint to use?

Comment by douglas_reay on The Ape Constraint discussion meeting. · 2013-12-02T08:17:12.223Z · score: 0 (0 votes) · LW · GW

how do you define "being fair" to the potential of linear regression software?

That's a big question. How much of the galaxy (or even universe) does humanity 'deserve' to control, compared to any other species that might be out there, or any other species that we create?

I don't know how many answers there are that lie somewhere between "Grab it all for ourselves, if we're able!" and "Foolishly give away what we could have grabbed, endangering ourselves." But I'm pretty sure the two endpoints are not the only two options.

Luckily for me, in this discussion, I don't have to pick a precise option and say "This! This is the fair one." I just have to demonstrate the plausibility of there being at least one option that is unfair OR that might be seen as being unfair by some group who, on that basis, would then be willing and able to take action influencing the course of humanity's future.

Because if I can demonstrate that, then how 'fair' the constraint is does become a factor that should be taken into account.

The Ape Constraint discussion meeting.

2013-11-28T11:22:06.724Z · score: 12 (28 votes)

Suggestion : make it easier to work out which tags to put on your article

2013-10-18T10:50:36.207Z · score: 8 (8 votes)

[LINK] Centre for the Study of Existential Risk is now on slashdot

2013-06-23T06:59:13.056Z · score: 1 (7 votes)

[LINK] Intrade Shuts Down

2013-03-15T09:12:36.925Z · score: 9 (9 votes)

Daimons

2013-03-05T11:58:11.072Z · score: -3 (8 votes)

A solvable Newcomb-like problem - part 3 of 3

2012-12-06T13:06:24.638Z · score: 3 (4 votes)

A solvable Newcomb-like problem - part 2 of 3

2012-12-03T16:49:38.161Z · score: 0 (1 votes)

A solvable Newcomb-like problem - part 1 of 3

2012-12-03T09:26:46.005Z · score: 1 (2 votes)

How minimal is our intelligence?

2012-11-25T23:34:06.733Z · score: 56 (59 votes)

Conformity

2012-11-02T19:02:40.723Z · score: 8 (9 votes)

Meetup : Cambridge UK Weekly Meeting

2012-10-24T14:36:09.853Z · score: 1 (2 votes)

[Book Review] "The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t.", by Nate Silver

2012-10-07T07:29:43.869Z · score: 9 (10 votes)

How to deal with someone in a LessWrong meeting being creepy

2012-09-09T04:41:06.895Z · score: 22 (64 votes)

Meetup : Punt Trip

2012-06-10T14:05:06.238Z · score: 0 (1 votes)

Global Workspace Theory

2012-06-05T17:16:23.448Z · score: 10 (11 votes)

Meetup Formats

2012-04-29T15:43:51.051Z · score: 1 (2 votes)

What is life?

2012-04-01T21:12:11.269Z · score: 9 (22 votes)

Examine your assumptions

2012-03-30T11:28:40.050Z · score: 32 (35 votes)

LiveJournal Memes

2012-03-18T02:56:46.571Z · score: 14 (17 votes)

Friendly AI Society

2012-03-07T19:31:23.052Z · score: -1 (14 votes)