A Proposal for a Simpler Solution To All These Difficult Observations and Problems

post by Flinter · 2017-01-16T18:13:42.905Z · LW · GW · Legacy · 82 comments


I am not perfectly sure how this site works (although I skimmed the "tutorials"), and I am notorious for not understanding systems as easily and quickly as the general public might. At the same time I suspect a place like this is for me, for what I can offer but also for what I can receive (i.e. I intend to (fully) traverse the various canons).

I also value compression and time in this sense, and so I think I can propose a subject that might serve as an "ideal introduction" (I have a precise meaning for this phrase that I won't introduce at the moment).

I've read a lot of posts/blogs/papers that are arguments founded on certain difficulties, where the observation and admission of the difficulty leads the author and the reader (and perhaps the originator of the problem/solution outlined) to defer to some form of a (relative to what will follow) long-winded solution.

I would like to suggest, as a blanket observation and proposal, that most of the difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.


I think maybe at first this will seem like an empty proposal. I also think some will see it as devilry (which I doubt anyone here thinks exists). And I think I will be accused of many of the fallacies and pitfalls that have already been warned about in the canons.

That latter point suggests I might learn well and fast from this post, as interested and helpful people can point me to specific articles and I WILL read them with sincere intent to understand them (so far they are very well written, in the sense that I feel I understand them because they are simple enough), and I will ask questions.

But I also think it will ultimately be shown that my proposal and my understanding of it don't really fall into any of these traps, and as I learn the canonical arguments I will be able to show how my proposal properly addresses them.

82 comments

Comments sorted by top scores.

comment by gjm · 2017-01-16T19:45:08.418Z · LW(p) · GW(p)

helpful people can point me to specific articles

I suggest taking a look at the Complexity of Value page on the LW wiki, not because "complexity of value" as defined there is exactly what I think you're missing (it isn't) but because several of the links there will take you to relevant stuff in (as you put it) the canons. The "Fake Utility Functions" post mentioned there and its predecessors are worth a read, for instance. Also "Value is Fragile" and "The Hidden Complexity of Wishes".

(All this talk of "canons" makes me a little uneasy, so let me make it absolutely explicit that I'm not pointing you at these things because you are not allowed to disagree with them, or because I am certain that everything they say is right.)

Replies from: Flinter
comment by Flinter · 2017-01-16T19:53:06.432Z · LW(p) · GW(p)

Yup, you are speaking perfectly to my point. Thankfully I am familiar with Szabo's works to some degree, which are very relevant and interlinked with the link you gave. In regard to canons, I don't mean it in the derogatory sense, although I think ultimately it might be shown that they are that too.

So I think you are speaking to the problem of creating such a metric. But I am urging us to push past that and take it as a given that we have such a unit of objective value.

Replies from: gjm
comment by gjm · 2017-01-16T20:08:28.269Z · LW(p) · GW(p)

I've no objection to the strategy of decomposing problems into parts ("what notion of value shall we use?" and "now, how do we proceed given this notion of value?") and attacking the parts separately. Just so long as we remember, while addressing the second part, that (1) we haven't actually solved the first part yet, (2) we don't even know that it has a solution, and (3) we haven't ruled out the possibility that the best way to solve the overall problem doesn't involve decomposing it in this fashion after all.

(Also, in some cases the right way to handle the second part may depend on the actual answer to the first part, in which case "let us suppose we have an answer" isn't enough.)

But in this case I remark that you aren't in fact proposing "a simpler solution to all these difficult observations and problems", you are proposing that instead of those difficult problems we solve a probably-much-simpler problem, namely "how should we handle AI alignment etc. once we have an agreed-upon notion of value that matches our actual opinions and feelings, and that's precisely enough expressed and understood that we could program it into a computer and be sure we got it right?".

Replies from: Flinter
comment by Flinter · 2017-01-16T20:26:42.713Z · LW(p) · GW(p)

Yes, I mean to outline the concept of a stable metric of value, and then I will be able to show how to solve such a problem.

"But in this case I remark that you aren't in fact proposing "a simpler solution to all these difficult observations and problems", you are proposing that instead of those difficult problems we solve a probably-much-simpler problem, namely "how should we handle AI alignment etc. once we have an agreed-upon notion of value that matches our actual opinions and feelings, and that's precisely enough expressed and understood that we could program it into a computer and be sure we got it right?"."

No I don't understand this, and I suspect you haven't quite understood me (even though I don't think I can be clearer). My proposal, a stable metric of value, immediately resolves many problems that are effectively paradoxes (or rendered such by my proposal, and then re-solved).

I'm not sure what you would disagree with. I think maybe you mean that the introduction of a stable metric still requires "solutions" to be invented to deal with all of these "problems". I'm not sure I would even agree with that, but it doesn't detract from the usefulness of what I suggest if I can come up with such a metric for stable valuation.

Replies from: gjm
comment by gjm · 2017-01-16T21:30:34.266Z · LW(p) · GW(p)

You have not, so far, proposed an actual "stable metric of value" with the required properties.

If you do, and convince us that you have successfully done so, then indeed you may have a "simpler solution to all these difficult problems". (Or perhaps a not-simpler one, but at any rate an actual solution, which is more than anyone currently claims to have.) That would be great.

But so far you haven't done that, you've just said "Wouldn't it be nice if there were such a thing?". (And unless I misunderstood you earlier, you explicitly said that that was all you were doing, that you were not purporting to offer a concrete solution and shouldn't be expected to do so.)

In which case, no solution to the original problems is on the table. Only a replacement of the original problems with the easier ones that result when you add "Suppose we have an agreed-upon notion of value that, etc., etc., etc." as a premise.

Or -- this is just another way of putting the same thing, but it may match your intentions better -- perhaps you are proposing not a solution to an easier problem but an incomplete solution to the hard problem, one that begins "Let's suppose we have a metric of value such that ...".

Would you care to be absolutely explicit about the following question? Do you, or do you not, have an actual notion of value in mind, that you believe satisfies all the relevant requirements? Because some of the things you're saying seem to presuppose that you do, and some that you don't.

Replies from: Flinter
comment by Flinter · 2017-01-16T21:36:01.646Z · LW(p) · GW(p)

Yes I understand what you mean to say here. And so I mean to attend to your questions:

"Would you care to be absolutely explicit about the following question? Do you, or do you not, have an actual notion of value in mind, that you believe satisfies all the relevant requirements? Because some of the things you're saying seem to presuppose that you do, and some that you don't."

Yes I do have an actual notion of value in mind, that does satisfy all of the relevant requirements. But first we have to find a shared meaning for the word "ideal": http://lesswrong.com/r/discussion/lw/ogt/do_we_share_a_definition_for_the_word_ideal/

Because the explanation of the notion is difficult (which you already know because you quote the owner of this site as being undecided on it etc.)

Replies from: moridinamael
comment by moridinamael · 2017-01-16T23:23:45.827Z · LW(p) · GW(p)

There is some hope that the desires/values of human beings might converge in the limit of time, intelligence, and energy. Prior to such a convergence, globally recognized human value is likely not knowable.

Replies from: Flinter
comment by Flinter · 2017-01-16T23:26:17.891Z · LW(p) · GW(p)

It converges on money. And it IS knowable. Nash defined it and gave it to us before he left. Why won't you open the dialogue on the subject?

Replies from: gjm, moridinamael
comment by gjm · 2017-01-17T00:34:42.223Z · LW(p) · GW(p)

It converges on money.

At the risk of sounding like a pantomime audience: Oh no it doesn't. A little more precisely: I personally do not find that my values are reducible to money, nor does it appear to me that other people's generally are, nor do I see any good reason to think that they are or should be tending that way. There are some theorems whose formulations contain the words "market" and "optimal", but so far as I can tell it is not correct to interpret them as saying that human values "converge on money".

And at the risk of sounding like a cultist, let me point you at "Money: the unit of caring", suggest that the bit about "within the interior of a [single] agent" is really important, and ask whether you are sure you have good grounds for making the further extension you appear to be making.

Replies from: Flinter
comment by Flinter · 2017-01-17T00:50:11.951Z · LW(p) · GW(p)

No, you haven't interpreted what I said correctly (and it's a normal mistake), so you haven't spoken to it, but you still might take issue. I am more suggesting that by definition we all agree on money. Money is the thing we most agree on; the nature of it is that it takes the place of complex barter and optimizes trade, and it does so as the introduction of a universally accepted transferable utility. If it didn't do this it would serve no purpose and cease to be money.

That you don't value it is probably less true than it is irrational, and it speaks more than anything to the lack of quality of the money we are offered (which is something I haven't yet shown to be true).

"And at the risk of sounding like a cultist, let me point you at "Money: the unit of caring", suggest that the bit about "within the interior of a [single] agent" is really important, and ask whether you are sure you have good grounds for making the further extension you appear to be making."

I think you mean that I have extended the principle of caring through money to AI, and you feel that article objects (or perhaps I don't know what you refer to). It is perfectly in line and reasonable to suggest that AI will be a part of us and that money will evolve to bridge the two "entities" to share values and move forward in a (super) rational manner (one money will allow us to function as a single entity).

Replies from: gjm
comment by gjm · 2017-01-17T01:16:13.666Z · LW(p) · GW(p)

I am more suggesting that by definition we all agree on money.

If this is true by definition then, necessarily, the fact can't tell us anything interesting about anything else (e.g., whether an AI programmed in a particular way would reliably act in a way we were happy about).

If you mean something with more empirical consequences -- and, now I think about it, even if you really do mean "by definition" -- then I think it would help if you were more explicit about what you mean by "agree on money". Do you mean we all agree on the appropriate price for any given goods? I think that's the reverse of the truth. The reason why trade happens and benefits us is that different people value different goods differently. In a "good enough" market we will all end up paying the same amount for a given good at a given time, but (1) while there's a sense in which that measures how much "the market" values the good at that time, there's no reason why that has to match up with how any individual feels about it, and (2) as I've pointed out elsewhere in this discussion there are many things people care about that don't have "good enough" markets and surely never will.

That you don't value it is probably less true than it is irrational

I didn't say I don't value money, and if you thought I said that then I will gently suggest that you read what I write more carefully and more charitably. What I said is that my values are not reducible to money, and what I meant is that there are things I value that I have no way of exchanging for money. (And also, though you would have a case of sorts for calling it "irrational", that the values I assign to things that are exchangeable for money aren't always proportional to their prices. If I were perfectly rational and perfectly informed and all markets involved were perfectly liquid for short as well as long trading and buying and selling things carried no overheads of any kind and for some reason I were perfectly insulated from risks associated with prices changing ... then arguably I should value things in proportion to their prices, because if I didn't I could improve my lot by buying some things while selling others until the prices and/or my values adjusted. In practice, every one of those conditions fails, and I don't think it is mostly because of irrationality that my values and the market's prices don't align perfectly.)

I think you mean that I have extended the principle of caring through money to AI

Nope. I mean that what you're saying about money seems to require extending the principle from one agent to all agents.

Replies from: Flinter
comment by Flinter · 2017-01-17T01:30:47.770Z · LW(p) · GW(p)

I don't think that it's founded in economics or any monetary theory to suggest that it is something that we don't collectively agree on. It also goes against market theory and the efficient market hypothesis to suggest that the price of a good is not an equilibrium related to the wants of the market, is it not?

" (1) while there's a sense in which that measures how much "the market" values the good at that time, there's no reason why that has to match up with how any individual feels about it"

Yup, you have perfectly highlighted the useful point, and also (at the end of the quote) shown the perspective that you could continue to argue against for no reason.

"What I said is that my values are not reducible to money"

I can't find it. It was about your wife's love. I think we could simply cut out things not reducible to money from the dialogue, but I also suspect that you would put a value on your wife's love.

But if it's in relation to USD that doesn't make sense, because it's not stable over time. But you could do it in relation to a stable value metric: for example, would you pay for a movie if she expressed her love to you for it?

Nope. I mean that what you're saying about money seems to require extending the principle from one agent to all agents.

I'm not sure what the problem here is, but you aren't speaking to any of my argument. A metric for value is super useful for humans, and solves the problem of how to keep AI on the correct track. Why aren't we speaking to Nash's proposal for such a metric?

You are fighting a strawman by arguing against me.

And I still think it's bs that the mod buried Nash's works and any dialogue on it. And that we aren't attending to it in this dialogue speaks to that.

Replies from: gjm
comment by gjm · 2017-01-17T03:26:27.371Z · LW(p) · GW(p)

It also goes against market theory and the efficient market hypothesis to suggest that the price of a good is not an equilibrium related to the wants of the market, is it not?

Given time, markets reach equilibria related to the wants of the participants. Sure. So far as I know, there are no guarantees on how long they will take to do so (and, e.g., if you're comparing finding market equilibria with solving chess, the sort of market setup you'd need to contrive to get something equivalent to solving chess surely will take a long time to converge, precisely because finding the optimum would be equivalent to solving chess; in the real world, what would presumably actually happen is that the market would settle down to something very much not equivalent to actually solving chess, and maybe a few hedge funds with superpowered computers and rooms full of PhDs would extract a little extra money from it every now and then); and there are any number of possible equilibria related to the wants of the participants.

I think we could simply cut out things not reducible to money from the dialogue

Well, we could cut out anything from the dialogue. But you can't make bits of human values go away just by not talking about them, and the fact is that lots of things humans value are not practically reducible to money, and probably never will be.

you could do it in relation to a stable value metric: for example, would you pay for a movie if she expressed her love to you for it?

That ... isn't quite how human relationships generally work. But, be that as it may, I'm still not seeing a market here that's capable of assigning meaningful prices to love or even to concrete manifestations of love. I mean, when something is both a monopoly and a monopsony, you haven't really got much of a market.

Why aren't we speaking to Nash's proposal for such a metric?

I don't know why you aren't. I'm not because I have only a hazy idea what it is (and in fact I am skeptical that what he was trying to do was such a metric), and because it's only after much conversation that you've made it explicit that your proposal is intended to be exactly Nash's proposal (if indeed it is). Was I supposed to read your mind? Regrettably, that is not among my abilities.

And I still think it's bs that the mod buried Nash's works and any dialogue on it.

Perhaps there is some history here of which I'm unaware; I have no idea what you're referring to. I haven't generally found that the moderators here zap things just out of spite, and if something you posted got "buried" I'm guessing there was a reason.

Replies from: Flinter
comment by Flinter · 2017-01-17T03:38:25.466Z · LW(p) · GW(p)

Given time, markets reach equilibria related to the wants of the participants. Sure. So far as I know, there are no guarantees on how long they will take to do so (and, e.g., if you're comparing finding market equilibria with solving chess, the sort of market setup you'd need to contrive to get something equivalent to solving chess surely will take a long time to converge, precisely because finding the optimum would be equivalent to solving chess; in the real world, what would presumably actually happen is that the market would settle down to something very much not equivalent to actually solving chess, and maybe a few hedge funds with superpowered computers and rooms full of PhDs would extract a little extra money from it every now and then); and there are any number of possible equilibria related to the wants of the participants.

That's where poker is relevant. Firstly, I am not speaking to reality; that is your implication. I spoke about a hypothetical future from an asymptotic limit. In regard to poker, I have redesigned the industry to foster a future environment in which the players act like a market that brute-force solves the game. The missing element from chess, or axiom in regard to poker, is that it should be arranged so players can accurately assess who the skilled players are. So we are saying theoretically it COULD be solved this way, and not speaking to how reality will unfold (yet).

Well, we could cut out anything from the dialogue. But you can't make bits of human values go away just by not talking about them, and the fact is that lots of things humans value are not practically reducible to money, and probably never will be.

I will show this isn't wholly true, but I cannot do it before we understand Nash's proposal together; therefore I cannot speak to it at the moment.

That ... isn't quite how human relationships generally work. But, be that as it may, I'm still not seeing a market here that's capable of assigning meaningful prices to love or even to concrete manifestations of love. I mean, when something is both a monopoly and a monopsony, you haven't really got much of a market.

Same as the above: I can't speak to this intelligibly yet.

I don't know why you aren't. I'm not because I have only a hazy idea what it is (and in fact I am skeptical that what he was trying to do was such a metric),

Yes it was:

The metric system does not work because french chefs de cuisine are constantly cooking up new and delicious culinary creations which the rest of the world then follows imitatively. Rather, it works because it is something invented on a scientific basis…Our view is that if it is viewed scientifically and rationally (which is psychologically difficult!) that money should have the function of a standard of measurement and thus that it should become comparable to the watt or the hour or a degree of temperature.~Ideal Money

It's exactly his proposal.

and because it's only after much conversation that you've made it explicit that your proposal is intended to be exactly Nash's proposal (if indeed it is). Was I supposed to read your mind? Regrettably, that is not among my abilities

Perhaps there is some history here of which I'm unaware; I have no idea what you're referring to. I haven't generally found that the moderators here zap things just out of spite, and if something you posted got "buried" I'm guessing there was a reason.

Yes, exactly: the mod messed up our dialogue. You weren't properly introduced; the introduction was written, and moderated away. The mod said Hayek > Nash

It was irrational.

Replies from: gjm
comment by gjm · 2017-01-17T04:20:59.009Z · LW(p) · GW(p)

Yes it was: [...] money should have the function of a standard of measurement [...]

I agree that Nash hoped that ("ideal") money could become "a standard of measurement" comparable in some way to scientific units. The question, though, is how broad a range of things he expected it to be able to measure. Nothing in the lecture you linked to a transcript of makes the extremely strong claim you are making, that we should use "ideal money" as a measure for literally all human values.

The mod said "Hayek > Nash"

One of three things [EDITED to add: oops, I meant two things] is true. (1) Your account of what "the mod" did is inaccurate, or grossly misleading by omission, or something. (2) My notion of what the LW moderators do is surprisingly badly wrong. If whoever did what Flinter is describing would care to comment (PM me if you would prefer not to do it in public), I am extremely curious.

(For what it's worth, my money is on "grossly misleading by omission". I am betting that whatever was removed was removed for some other reason -- e.g., I see some signs that you tried to post something in Main, which is essentially closed at present -- and that if indeed the mod "said Hayek > Nash" this was some kind of a joke, and they said some other things that provided an actual explanation of what they were doing and why. But that's all just guesswork; perhaps someone will tell me more about what actually happened.)

Replies from: Flinter
comment by Flinter · 2017-01-17T04:42:38.549Z · LW(p) · GW(p)

You are not using the standard accepted definition of the word ideal; please look it up, and/or create a shared meaning with me.

"Nothing in the lecture you linked to a transcript of makes the extremely strong claim you are making, that we should use "ideal money" as a measure for literally all human values."

This is a founded extrapolation of mine and an implication of his.

Yes, the mod said Hayek > Nash, and it's not a joke; it's a prevailing ignorant attitude among those that read Hayek but won't read and address Nash.

It's not significant if I omitted something. I was told it was effectively petty (my words), but it isn't. It's significant because John Nash said so.

Replies from: gjm
comment by gjm · 2017-01-17T12:10:52.728Z · LW(p) · GW(p)

You are not using the standard accepted definition of the word ideal

As I have just said elsewhere in our discussion, I am not using any definition of the word "ideal". I may of course have misunderstood what you mean by "ideal money", but if so it is not because I am assuming it means "money which is ideal" according to any more general meaning of "ideal".

This is a founded extrapolation of mine and an implication of his.

I have so far seen nothing that convinces me that he intended any such implication. In any case, of course the relevant question is not what Nash thought about it but what's actually true; even someone as clever as Nash can be wrong (as e.g. he probably was when he thought he was the Pope) so we could do with some actual arguments and evidence on this score rather than just an appeal to authority.

It's not significant if I omitted something.

That depends on what you omitted. For instance, if the person who removed your post gave you a cogent explanation of why and it ended with some jokey remark that "personally I always preferred Hayek anyway", it would be grossly misleading to say what you did (which gives the impression that "Hayek > Nash" was the mod's reason for removing your post).

I do not know who removed your post (for that matter I have only your word that anything was removed, though for the avoidance of doubt I would bet heavily that you aren't lying about that) but my impression is that on the whole the LW community is more favourably disposed towards Nash than towards Hayek. Not that that should matter. In any case: I'm sorry if this is too blunt, but I flatly disbelieve your implication that your post was removed because a moderator prefers Hayek to Nash, and I gravely doubt that it was removed with a given reason that a reasonable person other than you would interpret as being because the moderator prefers Hayek to Nash.

Replies from: Flinter
comment by Flinter · 2017-01-17T16:01:51.317Z · LW(p) · GW(p)

As I have just said elsewhere in our discussion, I am not using any definition of the word "ideal". I may of course have misunderstood what you mean by "ideal money", but if so it is not because I am assuming it means "money which is ideal" according to any more general meaning of "ideal".

Ya you misunderstood. And you still haven't double-checked your definition of ideal. Are you sure it's correct?

I have so far seen nothing that convinces me that he intended any such implication. In any case, of course the relevant question is not what Nash thought about it but what's actually true; even someone as clever as Nash can be wrong (as e.g. he probably was when he thought he was the Pope) so we could do with some actual arguments and evidence on this score rather than just an appeal to authority.

Ya you are a smart person that can completely ignore the argument posed by Nash but can still kinda sorta backhandedly show that he is wrong, without risking your persona... you are a clever arguer, aren't you?

That depends on what you omitted. For instance, if the person who removed your post gave you a cogent explanation of why and it ended with some jokey remark that "personally I always preferred Hayek anyway", it would be grossly misleading to say what you did (which gives the impression that "Hayek > Nash" was the mod's reason for removing your post).

It is the reason, and you would call it grossly misleading. Let's find the significance of Nash's work, and then it will be obvious the mod moderated me because of their own (admitted) ignorance.

I do not know who removed your post (for that matter I have only your word that anything was removed, though for the avoidance of doubt I would bet heavily that you aren't lying about that) but my impression is that on the whole the LW community is more favourably disposed towards Nash than towards Hayek. Not that that should matter. In any case: I'm sorry if this is too blunt, but I flatly disbelieve your implication that your post was removed because a moderator prefers Hayek to Nash, and I gravely doubt that it was removed with a given reason that a reasonable person other than you would interpret as being because the moderator prefers Hayek to Nash.

So you are stuck in trying to win arguments, which is the root reason why you haven't even heard of the main body of work that Nash was working on nearly his whole life. You are ignorant of the entire purpose of his career and the thesis of his works. It's an advanced straw man to continue to suggest a mod wouldn't mod me the way I said, and not to address Nash's works.

Nash is not favored over Hayek; Nash is being ignored here. The most significant work he has produced, nobody here even knows existed (if you find one person that has heard of it here, do you think that would prove me wrong?).

Ignorance towards Nash is the reason the mod moved my thread; unsurprisingly they came to the public thread to say Hayek > Nash... You don't know, but that is a theme among many players in regard to their theories on economics and money... but the Hayekians are simply ignorant and wrong. And they haven't traversed Nash's works.

Replies from: gjm
comment by gjm · 2017-01-17T16:43:53.489Z · LW(p) · GW(p)

Ya you misunderstood.

OK. Would you care to help me understand correctly, or are you more interested in telling me how stupid I am?

And you still haven't double-checked your definition of ideal. Are you sure it's correct?

There is no possible modification I could make to my definition of "ideal" that would make any difference to my understanding of your use of the phrase "ideal money". I have already explained this twice.

Ya you are a smart person [...] you are a clever arguer, aren't you?

If you would like to make some actual arguments rather than sneering at me then I am happy to discuss things.

It is the reason, and you would call it grossly misleading.

At this point, I simply do not believe you when you say it is the reason. Not because I think you are lying; but it doesn't look to me as if you are thinking clearly at any time when your opinions or actions are being challenged.

Let's find the significance of Nash's work, and then it will be obvious the mod moderated me because of their own (admitted) ignorance.

In the absence of more information about what the moderator said, no possible information about the significance of Nash's work could bring that about.

So you are stuck in trying to win arguments, which is the root reason why you haven't even heard of the main body of work that Nash was working on nearly his whole life.

Er, what? No, I haven't (more accurately, hadn't) heard of it because no one mentioned it to me before. Is that difficult to understand?

Nash is being ignored here. The most significant work he has produced, nobody here even knows existed

Nash is not being ignored here. "Ideal money" has not been a topic of conversation here before, so far as I can recall. If your evidence that Nash is "ignored" here is that we have not been talking about "ideal money", you should consider two other hypotheses: (1) that the LW community is interested in Nash but not in "ideal money" and (2) that the LW community is interested in Nash but happens not to have come across "ideal money" before. I think #2 is probably the actual explanation, however preposterous you may find it that anyone would read anything about Nash and not know about your pet topic.

(I think I already mentioned that Nasar's book about Nash doesn't see fit to mention "ideal money" in its index. It's a popular biography rather than an academic study, and the index may not perfectly reflect the text, but I think this is sufficient to show that it's possible for a reasonable person to look quite deeply at Nash's life and not come to the conclusion that "ideal money" is "the main body of work that Nash was working on nearly his whole life".)

The person who removed your earlier post has already explained that what he was actually saying about Hayek was not "Hayek is better than Nash" but "please don't think I'm removing this because I dislike the ideas; I am a fan of Hayek and these ideas of Nash's resemble Hayek's". This is more or less the exact opposite of your characterization of what happened.

Replies from: Flinter
comment by Flinter · 2017-01-17T17:00:55.277Z · LW(p) · GW(p)

OK. Would you care to help me understand correctly, or are you more interested in telling me how stupid I am?

There is no possible modification I could make to my definition of "ideal" that would make any difference to my understanding of your use of the phrase "ideal money". I have already explained this twice.

Existing merely as an image in the mind:

An ideal is a concept or standard of perfection, existing merely as an image in the mind, or based upon a person or upon conduct: We admire the high ideals of a religious person

I think you erred saying there is no possible modification.

Er, what? No, I haven't (more accurately, hadn't) heard of it because no one mentioned it to me before. Is that difficult to understand?

Yes and you are going to suggest we are not ignoring Nash, but we are.

Nash is not being ignored here. "Ideal money" has not been a topic of conversation here before, so far as I can recall. If your evidence that Nash is "ignored" here is that we have not been talking about "ideal money", you should consider two other hypotheses: (1) that the LW community is interested in Nash but not in "ideal money" and (2) that the LW community is interested in Nash but happens not to have come across "ideal money" before. I think #2 is probably the actual explanation, however preposterous you may find it that anyone would read anything about Nash and not know about your pet topic.

Yes and in the future everyone is going to laugh at you all for claiming and pretending to be smart, and pretending to honor Nash, when the reality is, Nash spanked you all.

(I think I already mentioned that Nasar's book about Nash doesn't see fit to mention "ideal money" in its index. It's a popular biography rather than an academic study, and the index may not perfectly reflect the text, but I think this is sufficient to show that it's possible for a reasonable person to look quite deeply at Nash's life and not come to the conclusion that "ideal money" is "the main body of work that Nash was working on nearly his whole life".)

Yup she ignored his life's work, his greatest passion, and if you watch his interviews he thinks it's hilarious.

The person who removed your earlier post has already explained that what he was actually saying about Hayek was not "Hayek is better than Nash" but "please don't think I'm removing this because I dislike the ideas; I am a fan of Hayek and these ideas of Nash's resemble Hayek's". This is more or less the exact opposite of your characterization of what happened.

No, it will be shown they thought it was an inconsequential move because they felt Nash's Ideal Money was insignificant. It was a subjective play.

Replies from: gjm
comment by gjm · 2017-01-17T18:29:17.826Z · LW(p) · GW(p)

I think you erred saying there is no possible modification.

Nope. But if you tell me that when you say "ideal money" you mean "a system of money that is ideal in the sense of existing merely as an image in the mind", why then I will adjust my understanding of how you use the phrase. Note that this doesn't involve any change at all in my general understanding of the word "ideal", which I already knew sometimes has the particular sense you mention (and sometimes has other senses); what you have told me is how you are using it in this particular compound term.

In any case, this is quite different from how Nash uses the term. If you read his article in the Southern Economic Journal, you will see things like this, in the abstract:

I present [...] a specific proposal about how a system or systems of "ideal money" might be established and employed

(so he is thinking of this as something that can actually happen, not something that by definition exists merely as an image in the mind). Similarly, later on,

the possibilities with regard to actually establishing a norm of money systems that could qualify as "ideal" are dependent on the political circumstances of the world.

(so, again, he is thinking of such systems as potentially actualizable) and, a couple of paragraphs later,

We of Terra could be taught how to have ideal monetary systems if wise and benevolent extraterrestrials were to take us in hand and administer our national money systems [...]

(so he says explicitly it could actually happen if we had the right guidance; note also that he here envisages ideal monetary systems, plural, suggesting that in this instance he isn't claiming that there's one true set of value ratios that will necessarily be reached). In between those last two he refers to

"good" or "ideal" money

so he clearly has in mind (not necessarily exclusively) a quite different meaning of "ideal", namely "perfect" or "flawless".

you are going to suggest we are not ignoring Nash, but we are.

It doesn't look much like that to me.

in the future everyone is going to laugh at you all

In the future everyone will have forgotten us all, most likely.

for claiming and pretending to be smart

Do please show me where I either claimed or pretended to be smart. (If you ask my opinion of my intelligence I will tell you, but no such thing has come up in this discussion and I don't see any reason why it should. If my arguments are good, it doesn't matter if I'm generally stupid; if my arguments are bad, it doesn't matter if I'm generally clever.)

she ignored his life's work, his greatest passion

So far, you have presented no evidence at all that this is a reasonable description. Would you care to remedy that? In any case, what's relevant right now is not whether "ideal money" was Nash's greatest passion, but whether it should have been obvious to us that it was (since you are taking the absence of earlier discussion of "ideal money" as indication that LW has been ignoring Nash). And, however hilarious Nash may or may not have found it, I repeat that the fact that a reasonable person writing a fairly lengthy biography of Nash didn't make a big deal of "ideal money" is strong evidence that other people may reasonably think likewise. Even if we are wrong.

comment by moridinamael · 2017-01-16T23:40:38.563Z · LW(p) · GW(p)

This is a dialogue. We are dialoguing.

You're saying value converges on money, correct? Money is indeed a notion that humans have invented in order to exchange things that we individually value. Having more money lets you get more of things you value. This is all good and fine. But "converges" and "has converged upon" are very different things.

I'm also not sure what it would look like for money to "be" value. A bad person can use money to do something horrible, something that no other person on earth approves of. A bad person can destroy immense value by buying a bomb, for example.

Replies from: Flinter
comment by Flinter · 2017-01-16T23:47:56.265Z · LW(p) · GW(p)

This is all the respect Nash's Ideal Money gets on this forum? He spent 20 years on the proposal. I think that is shameful and disrespectful.

Anyways, no. I am saying that "we" all converge on money. We all agree on it; that is the nature of it. And it is perfectly reasonable to suggest that (intelligent (and super bad)) AI would be able to as well. And (so) it would, because that is the obviously rational thing to do (I mean to show that Nash's argument explains why this is).

Replies from: moridinamael, Filipe
comment by moridinamael · 2017-01-17T00:01:35.065Z · LW(p) · GW(p)

It would help move things along if you would just lay out your argument rather than intimating that you have a really great argument that nobody will listen to. Just spell it out, or link to it if you've done so elsewhere.

Replies from: Flinter
comment by Flinter · 2017-01-17T00:04:39.205Z · LW(p) · GW(p)

It was removed by a mod.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T00:13:44.175Z · LW(p) · GW(p)

It should still be in your drafts. Just copy it here.

Replies from: Flinter
comment by Flinter · 2017-01-17T00:19:16.069Z · LW(p) · GW(p)

Yup so I can get banned. I didn't expect this place to be like this.

Replies from: moridinamael
comment by moridinamael · 2017-01-17T00:21:19.271Z · LW(p) · GW(p)

Just send it to me as a private message.

comment by Filipe · 2017-01-16T23:55:15.418Z · LW(p) · GW(p)

You mean Money is the Unit of Caring? :)

Replies from: Flinter
comment by Flinter · 2017-01-17T00:00:33.286Z · LW(p) · GW(p)

"In our society, this common currency of expected utilons is called "money". It is the measure of how much society cares about something."

"This is a brutal yet obvious point, which many are motivated to deny."

"With this audience, I hope, I can simply state it and move on."

Yes, but only to an extent. If we start to spend the heck out of our money to incite a care bear care-a-thon, we would only be destroying what we have worked for. Rather, it is other causes that determine whether spending is or is not a measure of caring.

So I don't like the way the essay ends.

Furthermore, it is more to the point to say it is reasonable that we all care about money, and so will AI. That is the nature of money; it is intrinsic to it.

comment by moridinamael · 2017-01-16T20:17:09.410Z · LW(p) · GW(p)

This topic comes up every once in a while. In fact, one of the more recent threads was started by me, though it may not be obvious to you at first how that thread is related to this topic.

I think it's actually fun to talk about the structure of an "ultra-stable metric" or even an algorithm by which some kind of "living metric" may be established and then evolved/curated as the state of scientific knowledge evolves.

Replies from: Flinter
comment by Flinter · 2017-01-16T20:21:31.524Z · LW(p) · GW(p)

Yes that is relevant. Now I have said something to this end. It is a stable VALUE metric we should be wanting to levate. And that will allow us to (also) quantify our own sanity. I think I can suggest this and speak to that link.

comment by MrMind · 2017-01-17T10:03:31.732Z · LW(p) · GW(p)

For a shared and stable value metric to function as a solution to the AI alignment problem, it would also need to be:

  • computable;
  • computable in new situations where no comparable examples exist;
  • convergent under self-evaluation.

To illustrate the last requirement, let me give an example. Let's suppose that a new AI is given the task of dividing some fund between four existing prototypes of nuclear fusion plants. It will need to calculate the value of each prototype and of their very different supply chains. But it also needs to calculate the value of those calculations, since its computational power is not infinite, and decide how much to ponder and to what extent to calculate the details of those simulations. But then it would also need to calculate the value of those calculations, and so on. Only a value that is convergent under self-evaluation can be guaranteed to point to an optimal solution.
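
As a minimal toy sketch (my own illustration, with made-up numbers, not anything from Nash or this thread): suppose the worth of each further meta-level of "evaluating the evaluation" shrinks geometrically. Then the total worth of deliberation converges and the agent can stop after finitely many levels:

```python
# Toy model of "convergent under self-evaluation" (hypothetical numbers throughout).
# The worth of each extra meta-level of deliberation shrinks by `discount`, so the
# series of deliberation values converges and the loop terminates.

def bounded_deliberation(option_values, discount=0.5, epsilon=1e-6):
    """option_values: first-pass value estimates, e.g. for four fusion-plant prototypes."""
    level_worth = max(option_values) - min(option_values)  # worth of the first meta-level
    total_deliberation_worth = 0.0
    levels = 0
    # Keep evaluating the value of further evaluation until it is negligible.
    while level_worth * (discount ** levels) > epsilon:
        total_deliberation_worth += level_worth * (discount ** levels)
        levels += 1
    best_option = max(range(len(option_values)), key=lambda i: option_values[i])
    return best_option, total_deliberation_worth, levels

print(bounded_deliberation([3.0, 5.0, 4.5, 2.0]))  # terminates after a couple dozen meta-levels
```

If the worth of a meta-level did not shrink (discount >= 1), the loop would never terminate; that non-convergence is exactly what the third requirement rules out.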

If that's what we had available, then I think FAI would be mostly solved.

Replies from: Flinter
comment by Flinter · 2017-01-17T10:09:36.209Z · LW(p) · GW(p)

How is it going to calculate such things without a metric for valuation?

If that's what we had available, then I think FAI would be mostly solved.

Yes, so you are seeing the significance of Nash's proposal, but you don't believe he is that smart; who is that on?

Replies from: MrMind
comment by MrMind · 2017-01-17T10:45:33.080Z · LW(p) · GW(p)

How is it going to calculate such things without a metric for valuation?

Sure, I'm just pointing out that objective and stable are necessary but not sufficient conditions for a value metric to solve the FAI problem; it would also need to have the three features that I detailed, and possibly others.
It's not a refutation; it's an expansion.

Replies from: Flinter
comment by Flinter · 2017-01-17T16:36:10.696Z · LW(p) · GW(p)

Right, but you subtly, backhandedly agree it's a necessary component of AI. If you come back to say "Sure, but it's not necessarily the ONLY missing component," I will think you dumb.

comment by Davidmanheim · 2017-01-16T18:48:31.231Z · LW(p) · GW(p)

I think what you're missing is that metrics are difficult - I've written about that point in a number of contexts; www.ribbonfarm.com/2016/06/09/goodharts-law-and-why-measurement-is-hard/

There are more specific metric / goal problems with AI; Eliezer wrote this https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ - and Dario Amodei has been working on it as well; https://openai.com/blog/faulty-reward-functions/ - and there is a lot more in this vein!

Replies from: Flinter
comment by Flinter · 2017-01-16T19:28:34.141Z · LW(p) · GW(p)

Ok. I skimmed it, and I think I understand your post well enough (if not, I'll read deeper!). What I am introducing into the dialogue is a theoretical and conceptually stable unit of value. I am saying: let's address the problems stated in your articles as if the problem of defining our base unit didn't exist, and the unit exists, is agreed upon, and is stable for all time.

So here is an example from one of your links:

"Why is alignment hard?

Why expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?

Here’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later.

With that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen."

Do we see how we can solve this problem now? We simply optimize the AI system for value, and everyone is happy.

If someone creates "bad" AI we could measure that, and use the measurement for a counter program.

Replies from: gjm, Davidmanheim, Davidmanheim, gjm
comment by gjm · 2017-01-16T19:38:43.514Z · LW(p) · GW(p)

We simply optimize the AI system for value

"Simply"?

If we had a satisfactory way of doing that then, yes, a large part of the problem would be solved. Unfortunately, that's because a large part of the problem is that we don't have a clear notion of "value" that (1) actually captures what humans care about and (2) is precise enough for us to have any prospect of communicating it accurately and reliably to an AI.

Replies from: Flinter
comment by Flinter · 2017-01-16T19:46:51.366Z · LW(p) · GW(p)

Yes, but my claim was that IF we had such a clear notion of value then most of the problems on this site would be solved (by this site I mean, for example, the interesting problems that the popular canons are built around). I think you have simply agreed with me.

Replies from: gjm
comment by gjm · 2017-01-16T19:59:37.091Z · LW(p) · GW(p)

When you say "problem X is easily solved by Y" it can mean either (1) "problem X is easily solved, and here is how: Y!" or (2) "if only we had Y, then problem X would easily be solved".

Generally #1 is the more interesting statement, which is why I thought you might be saying it. (That, plus the fact that you refer to "my proposal", which does rather suggest that you think you have an actual solution, not merely a solution conditional on another hard problem.) It transpires that you're saying #2. OK. In that case I think I have three comments.

First: yes, given a notion of value that captures what we care about and is sufficiently precise, many of the problems people here worry about become much easier. Thus far, we agree.

Second: it is far from clear that any such notion actually exists, and as yet no one has come up with even a coherent proposal for figuring out what it might actually be. (Some of the old posts I pointed you at elsewhere argue that if there is one then it is probably very complicated and hard to reason about.)

Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI's values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI -- which may work very differently, and think very differently, from the one we start with -- will do things we are happy about?

Replies from: Flinter
comment by Flinter · 2017-01-16T20:18:08.561Z · LW(p) · GW(p)

"When you say "problem X is easily solved by Y" it can mean either (1) "problem X is easily solved, and here is how: Y!" or (2) "if only we had Y, then problem X would easily be solved"."

Yes, I am speaking to (2), and once we understand the value of it, I will explain why it is not insignificant.

"Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI's values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI -- which may work very differently, and think very differently, from the one we start with -- will do things we are happy about?"

You would create the first AI to seek value, and then, knowing that it is getting smarter and smarter, it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing, by your own admission of how the problem you are stating works.

Replies from: gjm
comment by gjm · 2017-01-16T21:24:14.302Z · LW(p) · GW(p)

it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing

I am not sure which of two things you are saying.

Thing One: "We program the AI with a simple principle expressed as 'seek value'. Any sufficiently smart thing programmed to do this will converge on the One True Value System, which when followed guarantees the best available outcomes, so if the AIs get smarter and smarter and they are programmed to 'seek value' then they will end up seeking the One True Value and everything will be OK."

Thing Two: "We program the AI with a perhaps-complicated value system that expresses what really matters to us. We can then be confident that it will program its successors to use the same value system, and they will program their successors to use the same value system, etc. So provided we start out with a value system that produces good outcomes, everything will be OK."

If you are saying Thing One, then I hope you intend to give us some concrete reason to believe that all sufficiently smart agents converge on a single value system. I personally find that very difficult to believe, and I know I'm not alone in this. (Specifically, Eliezer Yudkowsky, who founded the LW site, has written a bit about how he used to believe something very similar, changed his mind, and now thinks it's obviously wrong. I don't know the details of exactly what EY believed or what arguments convinced him he'd been wrong.)

If you are saying Thing Two, then I think you may be overoptimistic about the link between "System S follows values V" and "System S will make sure any new systems it creates also follow values V". This is not a thing that reliably happens when S is a human being, and it's not difficult to think of situations in which it's not what you'd want to happen. (Perhaps S can predict the behaviour of its successor T, and figures out that it will get more V-aligned results if T's values are something other than V. I'm not sure that this can be plausible when T is S's smarter successor, but it's not obvious to me that the possibility can be ruled out.)

Replies from: Flinter
comment by Flinter · 2017-01-16T21:33:20.245Z · LW(p) · GW(p)

I REALLY appreciate this dialogue. Yup, I am suggesting #1. It's observable reality that smart agents converge to value the same thing, yes, but that is the wrong way to say it. "Natural evolution will levate (aka create) the thing that all agents will converge to": this is the correct perspective (or the more valuable perspective). Also I should think that is obvious to most people here.

Eliezer Y will rethink this when he comes across what I am proposing.

Replies from: gjm
comment by gjm · 2017-01-17T00:26:58.565Z · LW(p) · GW(p)

Natural evolution will levate (aka create) the thing that all agents will converge to

This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things -- e.g., us -- that produce values.)

I should think that is obvious to most people here.

My guess is that you are wrong about that; in any case, it certainly isn't obvious to me.

Replies from: Flinter
comment by Flinter · 2017-01-17T00:34:25.808Z · LW(p) · GW(p)

"This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things -- e.g., us -- that produce values.)"

I am saying that, in a Newtonian-versus-quantum sense, money naturally evolves as a thing that the collective group wants, and I am suggesting this phenomenon will spread to and drive AI. This is both natural and a rational conclusion, and something favorable that re-solves many paradoxes and difficult problems.

But money is not the correct word; it is an objective metric for value that is the key, because money can also be a poor standard for objective measurement.

Replies from: gjm
comment by gjm · 2017-01-17T01:04:52.731Z · LW(p) · GW(p)

Given the actual observed behaviour of markets (e.g., the affair of 2008), I see little grounds for hoping that their preferences will robustly track what humans actually care about, still less that they will do so robustly enough to answer the concerns of people who worry about AI value alignment.

Replies from: Flinter
comment by Flinter · 2017-01-17T01:08:01.176Z · LW(p) · GW(p)

Nash speaks to the crisis of 2008 and explains how it is the lack of an incorruptible standard basis for value that stops us from achieving such a useful market. You can't target optimal spending for optimal caring, though; I just want to be clear on that.

Replies from: gjm
comment by gjm · 2017-01-17T01:31:09.814Z · LW(p) · GW(p)

OK. And has Nash found an incorruptible standard basis for value? Or is this meant to emerge somehow from The Market, borne aloft no doubt by the Invisible Hand? So far, that doesn't actually seem to be happening.

I'm afraid I don't understand your last sentence.

Replies from: Flinter
comment by Flinter · 2017-01-17T01:31:54.873Z · LW(p) · GW(p)

Yes. And why do we ignore him?

Replies from: gjm
comment by gjm · 2017-01-17T03:01:25.792Z · LW(p) · GW(p)

The things I've seen about Nash's "ideal money" proposal -- which, full disclosure, I hadn't heard of until today, so I make no guarantee to have seen enough -- do not seem to suggest that Nash has in fact found an incorruptible standard basis for value. Would you care to say more?

Replies from: Flinter
comment by Flinter · 2017-01-17T03:15:53.856Z · LW(p) · GW(p)

Yup. Firstly, you fully admit that you are, previous to my entry, ignorant of Nash's life's work, what he spoke of and wrote of for 20 years, country to country. It is what he fled the US over when he was younger, to exchange his USD for the Swiss franc because it was of superior quality, and the US Navy tracked him down and took him back in chains (this is accepted fact, not conspiracy).

Nash absolutely defined an incorruptible basis for valuation, and most people have it labeled as an "ICPI", an industrial consumption price index. It is effectively an aggregate of stable prices across global commodities, and it can be said that if our money were pegged to it then it would be effectively perfectly stable over time. And of course it would need to be adjusted, which means it is politically corruptible, but Nash's actual proposal solves for this too:

…my personal view is that a practical global money might most favorably evolve through the development first of a few regional currencies of truly good quality. And then the “integration” or “coordination” of those into a global currency would become just a technical problem. (Here I am thinking of a politically neutral form of a technological utility rather than of a money which might, for example, be used to exert pressures in a conflict situation comparable to “the cold war”.)

Our view is that if it is viewed scientifically and rationally (which is psychologically difficult!) that money should have the function of a standard of measurement and thus that it should become comparable to the watt or the hour or a degree of temperature.~All quotes are from Ideal Money

Ideal Money is an incorruptible basis for value.
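To make the index idea concrete, here is a minimal toy sketch of a basket-style price index. The commodities, weights and prices below are invented purely for illustration; Nash did not publish a specific basket.

```python
# Illustrative sketch only: a toy "industrial consumption price index" (ICPI).
# All commodities, weights and prices here are invented for illustration.

# Reference-period prices (arbitrary units) and basket weights.
base_prices = {"crude_oil": 50.0, "copper": 6000.0, "wheat": 200.0, "shipping": 1500.0}
weights     = {"crude_oil": 0.4,  "copper": 0.3,    "wheat": 0.2,   "shipping": 0.1}

def icpi(current_prices, base_prices=base_prices, weights=weights):
    """Weighted average of price relatives versus the base period."""
    return sum(weights[c] * current_prices[c] / base_prices[c] for c in weights)

later_prices = {"crude_oil": 55.0, "copper": 6300.0, "wheat": 190.0, "shipping": 1450.0}
print(icpi(base_prices))   # 1.0 by construction
print(icpi(later_prices))  # ~1.04: prices in this currency have drifted about 4% from the target
```

In this toy picture, a currency managed so that the index stays near 1.0 over time is what "pegging money to the ICPI" would amount to.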

Now it is important you attend to this thread; it's quick, very quick: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/

Replies from: gjm
comment by gjm · 2017-01-17T04:04:19.689Z · LW(p) · GW(p)

you fully admit that you are, previous to my entry, ignorant of Nash's life's work: what he spoke of and wrote of for 20 years, country to country.

First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don't know about. However, nothing I have read about Nash suggests to me that it's correct to describe "ideal money" as "his life's work".

It is what he fled the US about when he was younger [...] and in which the US navy tracked him down and took him back in chains

You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.

industrial consumption price index

Let me see if I've understood this right. You want a currency pegged to some basket of goods ("global commodities", as you put it), which you will call "ideal money". You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.

What exactly do you expect to happen to the values of those "global commodities" in the presence of such an AI?

it would need to be adjusted which means it is politically corruptible [...]

(Yep.)

[...] but Nash's actual proposal solves for this too:

But nothing in what you quote does any such thing.

Now it is important you attend to this thread

It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.

Replies from: Flinter
comment by Flinter · 2017-01-17T04:15:11.209Z · LW(p) · GW(p)

First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don't know about. However, nothing I have read about Nash suggests to me that it's correct to describe "ideal money" as "his life's work".

He lectured and wrote on the topic for the last 20 years of his life, and it is something he had been developing since his 30s.

You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.

Yes, he was running around saying the governments are colluding against the people and he was going to be their savior. In Ideal Money he explains how the Keynesian view of economics is comparable to Bolshevik communism. These are facts, and they show that he never abandoned his views when he was "schizophrenic", and that they are in fact based on rational thinking. And yes, it is his own admission that this is why he fled the US and renounced his citizenship.

Let me see if I've understood this right. You want a currency pegged to some basket of goods ("global commodities", as you put it), which you will call "ideal money". You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.

What exactly do you expect to happen to the values of those "global commodities" in the presence of such an AI?

Yup, exactly, and we are to create AI that bases its decisions on optimizing value in relation to procuring what would effectively be "ideal money".

But nothing in what you quote does any such thing.

I don't need to do anything to show Nash made such a proposal of a unit of value except quote him saying it is his intention. I don't need to put the unit in your hand.

It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.

It's simple and quick: your definition of ideal is not in line with the standard definition. Google it.

Replies from: gjm
comment by gjm · 2017-01-17T11:58:14.680Z · LW(p) · GW(p)

Yes he was running around saying the governments are colluding against the people and he was going to be their savior. [...]

He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.

Of course that doesn't mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.

Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar's book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for "ideal money" (the concept may crop up but not be indexed, of course) and (2) its account of Nash's time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).

Yup exactly

Let me then repeat my question. What do you expect to happen to the values of those "global commodities" in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?

[EDITED to add:] Having read a bit more about Nash's proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn't a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we're talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI's control. None of this is good if we're trying to use "ideal money" thus defined as a basis for the AI's values.

(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren't able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn't exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)

Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in "ideal money" won't be to manipulate the markets to change the correspondence between "ideal money" and other things in the world?

I don't need to do anything [...] except quote him saying it is his intention.

But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought "ideal money" could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven't read anything like everything he said and wrote about this, and what I have read isn't perfectly clear; so to some extent I'm taking your word for it.) But what I haven't yet seen any sign of is that Nash thought "ideal money" could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.

your definition of ideal is not in line with the standard definition.

I have (quite deliberately) not been assuming any particular definition of "ideal"; I have been taking "ideal money" as a term of art whose meaning I have attempted to infer from what you've said about it and what I've seen of Nash's words. Of course I may have misunderstood, but not by using the wrong definition of "ideal" because I have not been assuming that "ideal money" = "money that is ideal in any more general sense".

Replies from: Flinter
comment by Flinter · 2017-01-17T16:33:14.379Z · LW(p) · GW(p)

He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.

No you aren't going to tell Nash how he could have brought about Ideal Money. In regard to, for example, communicating with aliens, again you are being wholly ignorant. Consider this (from Ideal Money):

We of Terra could be taught how to have ideal monetary systems if wise and benevolent extraterrestrials were to take us in hand and administer our national money systems analogously to how the British recently administered the currency of Hong Kong.~Ideal Money

See? He has been "communicating" with aliens. He was using his brain to think beyond not just nations and continents but worlds. "What would it be like for outside observers?" Is he not allowed to ask these questions? Do we not find it useful to think about how extraterrestrials would have an effect on a certain problem like our currency systems? And you call this "crazy"? Why can't Nash make theories based on civilizations external to ours without you calling him crazy?

See, he was being logical, but people like you can't understand him.

Of course that doesn't mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.

This is the most sick (ill) paragraph I have traversed in a long time. You have said "Nash was saying crazy things, so he was sick, therefore the things he was saying were crazy, and so we have to take them with a grain of salt."

Nash birthed modern complexity theory at that time and did many other amazing things when he was "sick". He also recovered from his mental illness not because of medication but by willing himself so. These are accepted points in his bio. He says he started to reject politically orientated thinking and return to a more logical basis (in other words he realized running around telling everyone he is a god isn't helping any argument).

Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar's book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for "ideal money" (the concept may crop up but not be indexed, of course) and (2) its account of Nash's time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).

"I emerged from a time of mental illness, or what is generally called mental illness..."

"...you could say I grew out of it."

Those are relevant quotes otherwise.

https://www.youtube.com/watch?v=7Zb6_PZxxA0 It starts at 12:40, but he explains about the francs at 13:27: "When I did become disturbed I changed my money into Swiss francs."

There is another interview in which he explains that the US Navy took him back in chains; I can't recall the video.

Let me then repeat my question. What do you expect to happen to the values of those "global commodities" in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?

You are messing up (badly) the accepted definition of ideal. Nonetheless Nash deals with your concerns:

We can see that times could change, especially if a “miracle energy source” were found, and thus if a good ICPI is constructed, it should not be expected to be valid as initially defined for all eternity. It would instead be appropriate for it to be regularly readjusted depending on how the patterns of international trade would actually evolve.

Here, evidently, politicians in control of the authority behind standards could corrupt the continuity of a good standard, but depending on how things were fundamentally arranged, the probabilities of serious damage through political corruption might become as small as the probabilities that the values of the standard meter and kilogram will be corrupted through the actions of politicians.~Ideal Money


[EDITED to add:] Having read a bit more about Nash's proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn't a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we're talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI's control. None of this is good if we're trying to use "ideal money" thus defined as a basis for the AI's values.

No, he gets more explicit though, and so something like CPUs etc. would be sort of reasonable (but I think it is probably better to look at the underlying commodities used for these things). For example:

Moreover, commodities with easily and reliably calculable prices are most suitable, and relatively stable prices are very desirable. Another basic cost that could be used would be a standard transportation cost, the cost of shipping a unit quantity of something over long international distances.

(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren't able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn't exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)

Yes, we are simultaneously saying the super smart thing to do would be to have ideal money (i.e. money comparable to an optimally chosen basket of industrial commodity prices), while also worrying that a super smart entity wouldn't support the smart action. It's clear FUD.

Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in "ideal money" won't be to manipulate the markets to change the correspondence between "ideal money" and other things in the world?

Ideal money is not corruptible. Your definition of ideal is not accepted as standard.

But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought "ideal money" could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven't read anything like everything he said and wrote about this, and what I have read isn't perfectly clear; so to some extent I'm taking your word for it.) But what I haven't yet seen any sign of is that Nash thought "ideal money" could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.

I might ask if you think such things could be quantified WITHOUT a standard basis for value? I mean it's strawmanny. Nash has an incredible proposal with a very long and intricate argument, but you are stuck arguing MY extrapolation, without understanding the underlying base argument by Nash. Gotta walk first: "What IS Ideal Money?"

I have (quite deliberately) not been assuming any particular definition of "ideal"; I have been taking "ideal money" as a term of art whose meaning I have attempted to infer from what you've said about it and what I've seen of Nash's words. Of course I may have misunderstood, but not by using the wrong definition of "ideal" because I have not been assuming that "ideal money" = "money that is ideal in any more general sense".

Yes, this is a mistake, and it's an amazing one to see from everyone. Thank you for at least partially addressing his work.

comment by Davidmanheim · 2017-01-23T16:22:00.686Z · LW(p) · GW(p)

Another point - "What I am introducing into the dialogue is a theoretical and conceptually stable unit of value."

Without a full and perfect system model, I argued that creating perfectly aligned metrics, like a unit of value, is impossible. (To be fair, I really argued that point in the follow-up piece: www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/ ) So if our model for human values is simplified in any way, it's impossible to guarantee convergence to the same goal without a full and perfect systems model to test it against.
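To make that concrete, here is a minimal sketch with an invented two-dimensional model of "value": optimizing a metric that omits part of the true goal gives no guarantee of converging to it.

```python
import numpy as np

# Toy model of the point above: "true" value depends on two dimensions, but our
# simplified metric only measures the first one. All numbers are invented.
def true_value(x):
    return -(x[0] - 1.0) ** 2 - (x[1] - 1.0) ** 2   # maximized at (1, 1)

def simplified_metric(x):
    return -(x[0] - 1.0) ** 2                        # ignores the second dimension

# Hill-climb on the simplified metric.
rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
for _ in range(2000):
    step = rng.normal(scale=0.05, size=2)
    if simplified_metric(x + step) > simplified_metric(x):
        x = x + step

print(x)               # x[0] ends up near 1.0; x[1] is left wherever it happened to drift
print(true_value(x))   # stays well below the true optimum of 0, despite a near-perfect metric score
```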

comment by Davidmanheim · 2017-01-23T15:41:01.964Z · LW(p) · GW(p)

"If someone creates "bad" AI we could measure that, and use the measurement for a counter program."

(I'm just going to address this point in this comment.) The space of potential bad programs is vast - and the opposite of a disastrous values misalignment is almost always a different values misalignment, not alignment.

In two dimensions, think of a misaligned wheel; it's very unlikely to be exactly 180 degrees (or 90 degrees) away from proper alignment. Pointing the car in a relatively nice direction is better than pointing it straight at the highway divider wall - but even a slight misalignment will eventually lead to going off-road. And the worry is that we need to have a general solution before we allow the car to get to 55 MPH, much less 100+. But you argue that we can measure the misalignment. True! If we had a way to measure the angle between its alignment and the correct one, we could ignore the misaligned wheel angle, and simply minimize the misalignment - which means the measure of divergence implicitly contains the correct alignment.

For an AI value function, the same is true. If we had a measure of misalignment, we could minimize it. The tricky part is that we don't have such a metric, and any correct such metric would be implicitly equivalent to solving the original problem. Perhaps this is a fruitful avenue, since recasting the problem this way can help - and it's similar to some of the approaches I've heard Dario Amodei mention regarding value alignment in machine learning systems. So it's potentially a good insight, but insufficient on its own.
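A minimal sketch of that point, with an arbitrary target vector standing in for "correct alignment" (the metric and optimizer here are invented for illustration): if a correct misalignment measure existed, minimizing it would hand us the answer, so defining the measure was the hard problem all along.

```python
import numpy as np

# The target vector is arbitrary; it plays the role of the unknown correct alignment.
correct_alignment = np.array([0.3, -1.2, 0.8])

def misalignment(v):
    """Hypothetical metric: squared distance from the correct alignment."""
    return float(np.sum((v - correct_alignment) ** 2))

# If such a metric existed, simply minimizing it would recover the target.
v = np.zeros(3)
for _ in range(200):
    grad = 2.0 * (v - correct_alignment)   # gradient of the squared-distance metric
    v = v - 0.1 * grad

print(v)                  # converges to correct_alignment
print(misalignment(v))    # ~0: the metric implicitly contained the answer
```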

comment by gjm · 2017-01-16T19:34:45.448Z · LW(p) · GW(p)

If someone creates "bad" AI then we may all be dead before we have the chance to "use the measurement for a counter program". (Taking "AI" here to mean "terrifyingly superintelligent AI", because that's the scenario we're particularly keen to defuse. If it turns out that that isn't possible, or that it's possible but takes centuries, then these problems are much less important.)

Replies from: Flinter
comment by Flinter · 2017-01-16T19:50:03.591Z · LW(p) · GW(p)

That's sort of moot for 2 reasons. Firstly, what I have proposed would be the game-theoretically optimal approach to solving the problem of a super terrible AI. There is no better approach against such a player. I would also suggest there is no other reasonable approach. And so this speaks to the speed in relation to other possible proposed solutions.

Now of course we are still being theoretical here, but its relevant to point that out.

Replies from: gjm
comment by gjm · 2017-01-16T20:03:06.679Z · LW(p) · GW(p)

The currently known means for finding game-theoretically optimal choices are, shall we say, impractical in this sort of situation. I mean, chess is game-theoretically trivial (in terms of the sort of game theory I take it you have in mind) -- but actually finding an optimal strategy involves vastly more computation than we have any means of deploying, and even finding a strategy good enough to play as well as the best human players took multiple decades of work by many smart people and a whole lot of Moore's law.
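Some back-of-envelope arithmetic on the scale involved, using commonly cited rough estimates (only the orders of magnitude matter, and the figures below are approximate assumptions, not exact values):

```python
# Back-of-envelope only; figures are commonly cited rough estimates, not exact.
game_tree_nodes   = 10 ** 120   # Shannon's classic estimate of the chess game tree
legal_positions   = 10 ** 45    # rough estimates of distinct legal positions (~1e44 to 1e47)
world_ops_per_sec = 10 ** 22    # a generous guess at all computing hardware on Earth combined
seconds_per_year  = 3.15e7
age_of_universe   = 1.4e10      # years

years_just_for_positions = legal_positions / world_ops_per_sec / seconds_per_year
print(f"{years_just_for_positions:.1e} years")         # ~3e15 years at one position per operation
print(years_just_for_positions / age_of_universe)      # roughly 200,000 ages of the universe
```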

Perhaps I'm not understanding your argument, though. Why does what you say make what I say "sort of moot"?

Replies from: Flinter
comment by Flinter · 2017-01-16T20:15:29.495Z · LW(p) · GW(p)

So let's take poker for example. I have argued (let's take it as an assumption, which should be fine) that poker players never have enough empirical evidence to know their own winrates. It's always a guess, and since the game isn't solved they are really guessing about whether they are profitable and how profitable they are. IF they had a standard basis for value then it could be arranged that players brute-force the solution to poker. That is to say, if players knew who was playing correctly then they would tend towards the correct player's strategy.
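For a sense of why a winrate stays a guess, here is a minimal sketch. The 5 bb/100 winrate and 90 bb/100 standard deviation are ballpark figures assumed for illustration, not measured data.

```python
import math

true_winrate = 5.0    # big blinds per 100 hands (assumed for the sketch)
sd_per_100   = 90.0   # standard deviation in big blinds per 100 hands (assumed)

for hands in (10_000, 100_000, 1_000_000):
    blocks = hands / 100
    std_err = sd_per_100 / math.sqrt(blocks)   # standard error of the observed winrate
    print(f"{hands:>9} hands: winrate = {true_winrate} +/- {1.96 * std_err:.1f} bb/100 (95% CI)")
```

Even after 100,000 hands the 95% interval in this toy example still straddles break-even.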

So there is an argument, to be explored, that the reason we can't solve chess is that we are not using our biggest computer, which is the entirety of our markets.

The reason your points are "moot", or not significant, is that there is no theoretically possible "better" way of dealing with AI than having a stable metric of value.

This happens because objective value is perfectly tied to objective morality. That which we all value is that which we all feel is good.

Replies from: gjm
comment by gjm · 2017-01-16T21:14:39.834Z · LW(p) · GW(p)

there is an argument [...] that the reason we can't solve chess is because we are not using our biggest computer which is the entirety of our markets.

"The entirety of our markets" do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that's much cleverer than anything currently known.)

That which we all value is that which we all feel is good.

It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I'm not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument's success given that "we all" don't value or feel the same things as one another.

Replies from: Flinter
comment by Flinter · 2017-01-16T21:24:02.786Z · LW(p) · GW(p)

"The entirety of our markets" do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that's much cleverer than anything currently known.)

It is the opinion of some well established (and historical) economic philosophers that markets can determine the optimal distribution of our commodities. Such an endeavor requires computing power at least several orders of magnitude greater than that required to solve chess.

"It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I'm not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument's success given that "we all" don't value or feel the same things as one another."

You have stepped outside the premise again. The premise is a stable metric of value; this implies objectivity, which implies we all agree on the value of it. This is the premise.

Replies from: gjm
comment by gjm · 2017-01-17T00:22:13.499Z · LW(p) · GW(p)

It is the opinion of some well established [...] economics philosophers that markets can determine the optimal distribution of our commodities. [...]

Let me know when they get their Fields Medals (or perhaps, if it turns out that they're right but that the ways in which markets do this are noncomputable, their Nobel prizes), and then we can discuss this further.

[...] we all agree on the value of it. This is the premise.

Oh. Then your premise is flatly wrong, since people in fact don't all agree about value.

(In any case, "objective" doesn't imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)

Replies from: Flinter
comment by Flinter · 2017-01-17T00:28:51.465Z · LW(p) · GW(p)

Well, I am speaking of Hayek, Nash and Szabo (and Smith), and I don't think medals make for a strong argument (especially vs the stated fellows).

"Oh. Then your premise is flatly wrong, since people in fact don't all agree about value."

By what definition and application of the word premise, is it "wrong"? I am suggesting we take the premise as given, and I would like to speak of the implications. Calling it wrong is silly.

"(In any case, "objective" doesn't imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)"

The nature of money is such that "everyone agrees"; that is how it becomes money, and it is therefore and thus "objective". But I am not yet speaking to that; I am speaking to the premise, which is a value metric that everyone DOES agree on.

Replies from: gjm
comment by gjm · 2017-01-17T01:03:00.434Z · LW(p) · GW(p)

I don't think medals make for a strong argument (especially vs the stated fellows).

Maybe you are misunderstanding my argument, which isn't "a bunch of clever people think differently, so Hayek et al must be wrong" but "if you are correctly describing what Hayek et al claim, and if they are right about that, then someone has found either an algorithm worthy of the Fields medal or a discovery of non-algorithmic physics worthy of a Nobel prize".

I am suggesting we take the premise as given, and I would like to speak of the implications.

I am suggesting that if I take at face value what you say about the premise, then it is known to be false, and I am not very interesting in taking as given something that is known to be false. (But very likely you do not actually mean to claim what on the face of it you seem to be claiming, namely that everyone actually agrees about what matters.)

The nature of money is such that "everyone agrees" that is how it becomes money

I think this is exactly wrong. Prices (in a sufficiently free and sufficiently liquid market) tend to equalize, but not because everyone agrees but because when people disagree there are ways to get rich by noticing the fact, and when you do that the result is to move others closer to agreement.
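A minimal sketch of that mechanism, with invented prices and an invented price-impact rule: the quotes converge because disagreement is profitable to exploit, not because anyone agreed in advance.

```python
# Toy sketch: two venues quote different prices; arbitrageurs buy where it is cheap
# and sell where it is dear, and each round of trading nudges the quotes together.
price_a, price_b = 90.0, 110.0
impact = 0.05   # fraction of the gap closed per round of arbitrage (invented)

for round_ in range(100):
    gap = price_b - price_a
    if abs(gap) < 0.01:
        break
    price_a += impact * gap   # buying on the cheap venue pushes its price up
    price_b -= impact * gap   # selling on the dear venue pushes its price down

print(round_, price_a, price_b)   # the quotes converge without prior agreement
```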

In any case, this only works when you have markets with no transaction costs, and plenty of liquidity. There are many things for which no such markets exist or seem likely to exist. (Random example: I care whether and how dearly my wife loves me. No doubt I would pay, if need and opportunity arose, to have her love me more rather than less. But there is no market in my wife's love, it's hard to see how there ever could be, if you tried to make one it's hard to see how it would actually help anything, and by trading in such a market I would gravely disturb the very thing the market was trying to price. This is not an observation about the fuzziness of the word "love"; essentially all of that would remain true if you operationalized it in terms of affectionate-sounding words, physical intimacy, kind deeds, and so forth.)

Replies from: Flinter
comment by Flinter · 2017-01-17T01:14:00.446Z · LW(p) · GW(p)

Yes Nash will get the medals for Ideal Money, this is what I am suggesting.

I am not proposing something "false" as a premise. I am saying, assume an objective metric for value exists (and then let's tend to the ramifications/implications). There is nothing false about that....

What I am saying about money, that you want to suggest is false, is that it is our most objective valuation metric. There is no more objective device for measuring value, in this world.

The rest you are suggesting is a way of saying we don't have free markets now, but that if we continue to improve we will asymptotically approach them at the limit. Then you might agree that at the limit our money will be stable in the valuation sense and COULD be such a metric (but its value isn't stable at the present time!)

In regard to your wife's love, the market values it at a constant in relation to this theoretical notion; that your subjective valuation disagrees with the ultimate objective metric (remember it's a premise that doesn't necessarily exist) doesn't break the standard.

Replies from: gjm
comment by gjm · 2017-01-17T01:29:21.798Z · LW(p) · GW(p)

I am saying, assume an objective metric for value exists. [...] There is nothing false about that

If, in fact, no objective metric for value exists, then there is something false about it. If, less dramatically, your preferred candidate for an objective metric doesn't exist (or, perhaps better, exists but doesn't have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there's something unsatisfactory about it even if not quite "false" (though in that case, indeed, it might be reasonable to say "let's suppose there is, and see what follows").

What I am saying about money [...] is that it is our most objective valuation metric.

Ah, now that's a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.

a way of saying we don't have free markets now, but if we continue to improve we will asymptotically approach it at the limits.

The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being "free". For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, "only kinda". And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence "not so much".

I don't understand your last paragraph at all. "The market values it at a constant in relation with this theoretical notion" -- what theoretical notion? what does it mean to "value it at a constant"? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if "the market" disagrees; that sounds pretty ridiculous but I can't tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don't.

Replies from: Flinter
comment by Flinter · 2017-01-17T01:39:56.252Z · LW(p) · GW(p)

If, in fact, no objective metric for value exists, then there is something false about it

I doubt it is accepted logic to suggest a premise is intrinsically false.

If, less dramatically, your preferred candidate for an objective metric doesn't exist (or, perhaps better, exists but doesn't have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there's something unsatisfactory about it even if not quite "false" (though in that case, indeed, it might be reasonable to say "let's suppose there is, and see what follows").

Yes this. I will make it satisfactory, in a jiffy.

Ah, now that's a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.

No we need both. They are both useful, and I present both, in the context of what is useful (and therefore wanted).

The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being "free". For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, "only kinda". And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence "not so much".

Yes, all these things I mean to say, as friction and inefficiency, would suggest it is not free, and you speak to all of Szabo's articles and Nash's works, which I am familiar with. But I also say this in a manner such as "provided we continue to evolve rationally" or "provided technology continues to evolve". I don't need to prove we WILL evolve rationally and that our tech will not take a step back. I don't need to prove that to show in this thought experiment what the end game is.

I don't understand your last paragraph at all. "The market values it at a constant in relation with this theoretical notion" -- what theoretical notion? what does it mean to "value it at a constant"? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if "the market" disagrees; that sounds pretty ridiculous but I can't tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don't.

You aren't expected to understand how we get to the conclusion, just that there is a basis for value, a unit of it, that everyone accepts. It doesn't matter if a person disagrees, they still have to use it because the general society has deemed it "that thing". And "that thing" that we all generally accept is actually called money. I am not saying anything that isn't completely accepted by society.

Go to a store and try to pay with something other than money. Go try to pay your taxes in a random good. They aren't accepted. It's silly to argue you could do this.

Replies from: gjm
comment by gjm · 2017-01-17T03:11:04.653Z · LW(p) · GW(p)

I doubt it is accepted logic to suggest a premise is intrinsically false.

I'm not sure what your objection actually is. If someone comes along and says "I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity" then it is perfectly in order to say no, those things aren't actually true, and there's little point discussing what would follow if they were. If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat's premise. I expect you aren't actually saying quite that; perhaps at some point you will clarify just what you are saying.

all these things [...] as friction and inefficiency, would suggest it is not free

Many of them seem to me to have other obvious causes.

"provided we continue to evolve rationally" or "provided technology continues to evolve"

I don't see much sign that humanity is "evolving rationally", at least not if that's meant to mean that we're somehow approaching perfect rationality. (It's not even clear what that means without infinite computational resources, which there's also no reason to think we're approaching; in fact, there are fundamental physical reasons to think we can't be.)

You aren't expected to understand how we get to the conclusion

If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time.

I am not saying anything that isn't completely accepted by society.

You are doing a good job of giving the impression that you are. There is certainly nothing resembling a consensus across "society" that money answers all questions of value.

Replies from: Flinter
comment by Flinter · 2017-01-17T03:23:31.073Z · LW(p) · GW(p)

I'm not sure what your objection actually is. If someone comes along and says "I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity" then it is perfectly in order to say no, those things aren't actually true, and there's little point discussing what would follow if they were. If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat's premise. I expect you aren't actually saying quite that; perhaps at some point you will clarify just what you are saying.

Yes, exactly. You want to say that because the premise is silly or not reality, it cannot be useful. That is wholly untrue, and I think I recall reading an article here about this. Can we not use premises that lead to useful conclusions that don't rely on the premise? You have no basis for denying that we can. I know this. Can I ask you if we share the definition of ideal: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/

I don't see much sign that humanity is "evolving rationally", at least not if that's meant to mean that we're somehow approaching perfect rationality. (It's not even clear what that means without infinite computational resources, which there's also no reason to think we're approaching; in fact, there are fundamental physical reasons to think we can't be.)

Yes because you don't know that our rationality is tied to the quality of our money in the Nashian sense, or in other words if our money is stable in relation to an objective metric for value then we become (by definition of some objective truth) more rational. I can't make this point though, without Nash's works.

If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time.

Yes I am in the process of it, and you might likely be near understanding, but it takes a moment to present and the mod took my legs out.

You are doing a good job of giving the impression that you are. There is certainly nothing resembling a consensus across "society" that money answers all questions of value.

No that is not what I said or how I said it. Money exists because we need to all agree on the value of something in order to have efficiency in the markets. To say "I don't agree with the American dollar" doesn't change that.

Replies from: gjm
comment by gjm · 2017-01-17T15:14:53.945Z · LW(p) · GW(p)

You want to say because the premise is silly or not reality then it cannot be useful.

Not quite. It can be interesting and useful to consider counterfactual scenarios. But I think it's important to be explicit about them being counterfactual. And, because you can scarcely ever change just one thing about the world, it's also important to clarify how other things are (counterfactually) changing to accommodate the main change you have in mind.

So, in this case, if I understand you right what you're actually saying is something like this. "Consider a world in which there is a universally agreed-upon currency that suffers no inflation or deflation, perhaps by being somehow pegged to a basket of other assets of fixed value; and that is immune to other defects X, Y, Z suffered by existing currencies. Suppose that in our hypothetical world there are markets that produce universally-agreed-upon prices for all goods without exception, including abstract ones like "understanding physics" and emotionally fraught ones like "getting on well with one's parents" and so forth. Then, let us consider what would happen to problems of AI value alignment in such a world. I claim that most of these problems would go away; we could simply tell the AI to seek value as measured by this universally-agreed currency."

That might make for an interesting discussion (though I think you will need to adjust your tone if you want many people to enjoy discussions with you). But if you try to start the same discussion by saying or implying that there is such a currency, you shouldn't be surprised if many of the responses you get are mostly saying "oh no there isn't".

Even when you do make it clear that this is a counterfactual, you should expect some responses along similar lines. If what someone actually cares about is AI value alignment in the real world, or at least in plausible future real worlds, then a counterfactual like this will be interesting to them only in so far as it actually illuminates the issue in the real world. If the counterfactual world is too different from the real world, it may fail to do that. At the very least, you should be ready to explain the relevance of your counterfactual to the real world. ("We can bring about that world, and we should do so." "We can make models of what such a currency would actually look like, and use those for value alignment." "Considering this more convenient world will let us separate out other difficult issues around value alignment." Or whatever.)

No that is not what I said or how I said it.

OK. But it looks to me as if something like the stronger claim I treated you as making is actually needed for "ideal money" to be any kind of solution to AI value alignment problems. And what you said before was definitely that we do all agree on money, but now you seem to have retreated to the weaker claim that we will or we might or we would in a suitably abstracted world or something.

Replies from: Lumifer, Flinter
comment by Lumifer · 2017-01-17T16:06:20.113Z · LW(p) · GW(p)

or implying that there is such a currency

I have a suspicion that there is a word hanging above your discussion, visible to Flinter but not to you. It starts with "bit" and ends with "coin".

Replies from: Flinter, gjm
comment by Flinter · 2017-01-17T16:51:10.209Z · LW(p) · GW(p)

Ideal Money is an enthymeme. But Nash speaks FAR beyond the advent of an international e-currency with a stably issued supply.

comment by gjm · 2017-01-17T16:11:14.940Z · LW(p) · GW(p)

Actually, it was visible to me too, but I didn't see any particular need to introduce it to the discussion until such time as Flinter sees fit to do so. (I have seen a blog that I am pretty sure is Flinter's, and a few other writings on similar topics that I'm pretty sure are also his.)

(My impression is that Flinter thinks something like bitcoin will serve his purposes, but not necessarily bitcoin itself as it now is.)

Replies from: Flinter, Lumifer
comment by Flinter · 2017-01-17T16:54:18.515Z · LW(p) · GW(p)

After painting the picture of what Ideal Money is, Nash explains the intrinsic difficulties of bringing it about. Then he comes up with the concept of "asymptotically ideal money":

The idea seems paradoxical, but by speaking of “inflation targeting” these responsible officials are effectively CONFESSING…that it is indeed after all possible to control inflation by controlling the supply of money (as if by limiting the amount of individual “prints” that could be made of a work of art being produced as “prints”).~Ideal Money

M. Friedman acquired fame through teaching the linkage between the supply of money and, effectively, its value. In retrospect it seems as if elementary, but Friedman was as if a teacher who re-taught to American economists the classical concept of the “law of supply and demand”, this in connection with money.

Nash explains the parameters of gold in regard to why we have historically valued it; he is methodical, and he also explains gold's weaknesses in this context.
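For reference, the textbook relationship usually invoked for the supply-of-money point credited to Friedman is the equation of exchange, M * V = P * Q. A toy illustration with invented numbers follows; this is standard monetarist bookkeeping, not a quotation from Ideal Money.

```python
# Equation of exchange: M * V = P * Q, so the price level P = M * V / Q.
# All numbers are invented for illustration.
M = 1_000.0   # money supply
V = 5.0       # velocity of money (held fixed here)
Q = 2_500.0   # real output (held fixed here)

P = M * V / Q
print(P)                    # price level 2.0

M_expanded = 1.10 * M       # grow the money supply 10% with V and Q unchanged
print(M_expanded * V / Q)   # price level 2.2 -- roughly 10% inflation
```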

comment by Lumifer · 2017-01-17T16:19:45.149Z · LW(p) · GW(p)

I'm impatient and prefer to cut to the chase :-)

Replies from: Flinter
comment by Flinter · 2017-01-17T16:55:22.066Z · LW(p) · GW(p)

It's too difficult to cut to, because the nature of this problem is such that we all have an incredible cognitive bias towards not understanding it or seeing it.

comment by Flinter · 2017-01-17T15:50:50.391Z · LW(p) · GW(p)

To the first set of paragraphs...ie:

But if you try to start the same discussion by saying or implying that there is such a currency

If I start by saying there IS such a currency? What does "ideal" mean to you? I think you aren't using the standard definition: http://lesswrong.com/r/discussion/lw/ogt/do_we_share_a_defintion_for_the_word_ideal/

But it looks to me as if something like the stronger claim I treated you as making is actually needed for "ideal money" to be any kind of solution to AI value alignment problems.

I did not come here to specifically make claims in regard to AI. What does it mean to ignore Nash's works, his argument, and the general concept of what Ideal Money is...and then to say that my delivery and argument is weak in regard to AI?

And what you said before was definitely that we do all agree on money, but now you seem to have retreated to the weaker claim that we will or we might or we would in a suitably abstracted world or something.

No you have not understood the nature of money. A money is chosen by the general market, it is propriety. This is what I mean to say in this regard, no more, no less. To tell me you don't like money therefore not "everyone" uses it is petty and simply perpetuating conflict.

There is nothing to argue about in regard to pointing out that we converge on it, in the sense that we all socially agree to it. If you want to show that I am wrong by saying that you specifically don't, or one or two people don't, then you are not interested in dialogue; you are being petty and silly.

Replies from: gjm
comment by gjm · 2017-01-17T18:11:04.024Z · LW(p) · GW(p)

What does "ideal" mean to you?

It means, in this context, "the first word of the technical term 'ideal money' which Flinter has been using, and which I am hoping at some point he will give us his actual definition of".

If I start by saying there IS such a currency? What does "ideal" mean to you?

You began by saying this:

I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.

which, as I said at the time, looks at least as much like "There is such a metric" as like "Let's explore the consequences of having such a metric". Then later you said "It converges on money" (not, e.g., "it and money converge on a single coherent metric of value"). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes.

I appreciate that when asked explicitly whether such a thing exists you say no. But you don't seem to be taking any steps to avoid giving the impression that it's already around.

I did not come here to specifically make claims in regard to AI.

Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash's) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.

What does it mean to ignore Nash's works, his argument, and the general concept of what Ideal Money is ... and then to say that my delivery and argument is weak in regard to AI?

I'm here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn't, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?

But your question is an odd one. It seems to be asking, more or less, "How dare you have interests and priorities that differ from mine?". I hope it's clear that that question isn't actually the sort that deserves an answer.

No you have not understood the nature of money. A money is chosen by the general market, it is propriety.

I think I understand the nature of money OK, but I'm not sure I understand what you are saying about it. "A money"? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is "the general market", in a world where there are lots and lots of different markets, many of which use different currencies? In the language I speak, "propriety" mostly means "the quality of being proper" which seems obviously not to be your meaning. It also (much less commonly) means "ownership", which seems a more likely meaning, but I'm not sure what it actually means to say "money is ownership". Would you care to clarify?

This is what I mean to say in this regard, no more, no less.

It seems to me entirely different from your earlier statements to which I was replying. Perhaps everything will become clearer when you explain more carefully what you mean by "A money is chosen by the general market, it is propriety".

To tell me you don't like money therefore not "everyone" uses it [...]

Clearly our difficulties of communication run both ways. I have told you neither of those things. I like money a great deal, and while indeed not everyone uses it (there are, I think, some societies around that don't use money) it's close enough to universally used for most purposes. (Though not everyone uses the same money, of course.)

I genuinely don't see how to get from anything I have said to "you don't like money therefore not everyone uses it".

There is nothing to argue about in regard to pointing out that we converge on it, in the sense that we all socially agree to it.

I think, again, some clarification is called for. When you spoke of "converging on money", you surely didn't just mean that (almost) everyone uses money. The claim I thought you were making, in context, was something like this: "If we imagine people getting smarter and more rational without limit, their value systems will necessarily converge to a particular limit, and that limit is money." (Which, in turn, I take to mean something like this: to decide which of X and Y is better, compute their prices and compare numerically.) It wasn't clear at the time what sort of "money" you meant, but you said explicitly that the results are knowable and had been found by John Nash. All of this goes much, much further than saying that we all use money, and further than saying that we have (or might in the future hope to have) a consistent set of prices for tradeable goods.

It would be very helpful if you would say clearly and explicitly what you mean by saying that values "converge on money".

[...] you specifically [...] or one, or two people [...]

I mentioned my own attitudes not in order to say "I am a counterexample, therefore your universal generalization is false" but to say "I am a counterexample, and I see no reason to think I am vastly atypical, therefore your universal generalization is probably badly false". I apologize if that wasn't clear enough.

Replies from: Flinter
comment by Flinter · 2017-01-17T18:30:37.350Z · LW(p) · GW(p)

It means, in this context, "the first word of the technical term 'ideal money' which Flinter has been using, and which I am hoping at some point he will give us his actual definition of".

Ideal, by the standard definition, implies that it is conceptual.

You began by saying this:

I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.

which, as I said at the time, looks at least as much like "There is such a metric" as like "Let's explore the consequences of having such a metric". Then later you said "It converges on money" (not, e.g., "it and money converge on a single coherent metric of value"). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes.

Yes he did, and he explains it perfectly. And it's a device I introduced into the dialogue, and I showed how it is to be properly used.

I appreciate that when asked explicitly whether such a thing exists you say no. But you don't seem to be taking any steps to avoid giving the impression that it's already around.

It's conceptual in nature.

Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash's) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.

Yup we'll get to that.

I'm here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn't, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?

Nope, those are past sentiments, my new ones are I appreciate the dialogue.

But your question is an odd one. It seems to be asking, more or less, "How dare you have interests and priorities that differ from mine?". I hope it's clear that that question isn't actually the sort that deserves an answer.

Yes, but it's a product of never actually entering sincere dialogue with intelligent players on the topic of Ideal Money, so I have to be sharp when we are not addressing it and are instead addressing a complex subject, AI, in relation to Ideal Money, but before understanding Ideal Money (which is FAR more difficult to understand than AI).

I think I understand the nature of money OK, but I'm not sure I understand what you are saying about it. "A money"? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is "the general market", in a world where there are lots and lots of different markets, many of which use different currencies? In the language I speak, "propriety" mostly means "the quality of being proper" which seems obviously not to be your meaning. It also (much less commonly) means "ownership", which seems a more likely meaning, but I'm not sure what it actually means to say "money is ownership". Would you care to clarify?

Why aren't you using generally accepted definitions?

The state or quality of conforming to conventionally accepted standards of behavior or morals; the details or rules of behavior conventionally considered to be correct; the condition of being right, appropriate, or fitting.

Yes, money can mean many things, but if we think of its purpose and of how and why it exists, it is effectively that thing which we all generally agree on. If one or two people play a different game, that doesn't invalidate the money. Money serves a purpose that involves all of us supporting it through an unwritten social contract. There is nothing else that serves that purpose better. It is the nature of money.

It seems to me entirely different from your earlier statements to which I was replying. Perhaps everything will become clearer when you explain more carefully what you mean by "A money is chosen by the general market, it is propriety".

Money is the generally accepted form of exchange. There is nothing here to investigate; it's a simple statement.

Clearly our difficulties of communication run both ways. I have told you neither of those things. I like money a great deal, and while indeed not everyone uses it (there are, I think, some societies around that don't use money) it's close enough to universally used for most purposes. (Though not everyone uses the same money, of course.)

Yes.

I genuinely don't see how to get from anything I have said to "you don't like money therefore not everyone uses it".

Money has the quality that it is levated by our collective need for an objective value metric. But if I say "our" and someone says "well, you are wrong because not EVERYONE uses money", then I won't engage with them, because they are being dumb.

I think, again, some clarification is called for. When you spoke of "converging on money", you surely didn't just mean that (almost) everyone uses money. The claim I thought you were making, in context, was something like this: "If we imagine people getting smarter and more rational without limit, their value systems will necessarily converge to a particular limit, and that limit is money." (Which, in turn, I take to mean something like this: to decide which of X and Y is better, compute their prices and compare numerically.) It wasn't clear at the time what sort of "money" you meant, but you said explicitly that the results are knowable and had been found by John Nash. All of this goes much, much further than saying that we all use money, and further than saying that we have (or might in the future hope to have) a consistent set of prices for tradeable goods.

We all converge on money, and on using a single money; it is the nature of the universe. It is obvious that money will bridge us with AI and help us interact. And yes, this convergence will be such that we will solve all complex problems with it, but we need it to be stable to begin to do that.

So in the future, you will do what money tells you. You won't say "I'm going to do something that doesn't procure much money", because that would be the irrational thing to do.

It would be very helpful if you would say clearly and explicitly what you mean by saying that values "converge on money".

Does everyone believe in Christianity? Does everyone converge on it? Does everyone converge on their beliefs about the afterlife?

No, but the nature of money is such that it's the one thing we all agree on. Again, telling me "no we don't" just shows you are stupid. This is an obvious point (it is the purpose of money), and I'm not continuing on this path of dialogue because it's asinine.

I mentioned my own attitudes not in order to say "I am a counterexample, therefore your universal generalization is false" but to say "I am a counterexample, and I see no reason to think I am vastly atypical, therefore your universal generalization is probably badly false". I apologize if that wasn't clear enough.

Yes, you live in a reality in which you don't acknowledge money, and I am supposed to believe that. You don't use money, you don't get paid in money, you don't buy things with money, you don't save money. And I am supposed to think you are intelligent for pretending this?

We all agree on money; it is the thing we all converge on. Here is the accepted definition of "converge":

Tend to meet at a point; approximate in the sum of its terms toward a definite limit.
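
As an illustration of the second sense only (a standard textbook example, not anything taken from Nash): the geometric series

$$1 + r + r^2 + r^3 + \dots = \frac{1}{1-r}, \qquad |r| < 1,$$

converges; with $r = \tfrac{1}{2}$ the partial sums $1, 1.5, 1.75, 1.875, \dots$ approach the definite limit $2$.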