post by [deleted] · GW · 0 comments

This is a link post for


Comments sorted by top scores.

comment by AlexMennen · 2017-08-30T16:49:11.542Z · LW(p) · GW(p)

If downvoting were enabled, I would have downvoted this (as well as pretty much all your other posts here that I've read).

Replies from: cousin_it, DragonGod
comment by cousin_it · 2017-08-31T11:03:58.983Z · LW(p) · GW(p)

I have mod power but don't want to use it unilaterally. Should I limit DragonGod to one post per week?

Replies from: Elo
comment by Elo · 2017-08-31T11:48:23.814Z · LW(p) · GW(p)

I have discussed this with DragonGod and have already told him twice. He disregarded my instruction and posted this anyway.

Replies from: cousin_it
comment by cousin_it · 2017-08-31T15:07:46.334Z · LW(p) · GW(p)

In this case, perhaps we should delete the post?

Replies from: username2
comment by username2 · 2017-09-01T16:07:16.230Z · LW(p) · GW(p)

I appreciate that you are working under challenging circumstances, but surely moderators should be permitted, and willing, to remove undesirable content. Please, yes.

Even an empty forum is preferable to embarrassments like this post series.

Replies from: cousin_it
comment by cousin_it · 2017-09-01T17:04:05.210Z · LW(p) · GW(p)

Done.

comment by DragonGod · 2017-08-30T17:00:26.859Z · LW(p) · GW(p)

Care to tell me why? What exactly is wrong with the post?

Replies from: AlexMennen
comment by AlexMennen · 2017-08-30T17:07:40.848Z · LW(p) · GW(p)

Your argument runs on pure wishful thinking, and others have already looked into actually correct ways of finding (C,C) equilibria in variants of the prisoner's dilemma in which agents are given more information about each other.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T17:44:37.256Z · LW(p) · GW(p)

Please explain how it runs on wishful thinking.

Replies from: AlexMennen
comment by AlexMennen · 2017-08-30T19:15:50.338Z · LW(p) · GW(p)

There isn't a good justification for the prohibition on constant strategies, and the alternatives are too vague for it to be clear what the effects of prohibiting constant strategies actually are. You assumed the agents had the ability to "predispose themselves" to a particular action, but if they do that, then they still ultimately end up just making decisions based on what they think the other player is thinking, and not based on what they predisposed themselves to do, so it shouldn't be called a predisposition. You're trying to get something (mutual cooperation) out of nothing (a so-called predisposition that never gets acted on).

Replies from: DragonGod
comment by DragonGod · 2017-08-30T22:24:47.507Z · LW(p) · GW(p)

I edited the article to explain what I meant by predisposition, so please reread it.

As for constant strategies: adopting a constant strategy, if it is predicted by the opponent, leads to an outcome ranked 3rd or 4th in the adopter's preferences. If one of them adopts a constant strategy and the other predicts this, the other will play the invariant defect strategy regardless of which constant strategy the opponent adopted. Thus they would not adopt a constant strategy, because it is suboptimal.
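For illustration, here is a minimal sketch of that best-response argument (the payoff numbers are only an assumed example of the standard ordering, temptation > reward > punishment > sucker; only the ranking matters):

```python
# Assumed standard prisoner's dilemma payoffs for the row player; only the
# preference ranking (temptation > reward > punishment > sucker) matters here.
PAYOFF = {
    ("C", "C"): 3,  # reward: 2nd-ranked outcome
    ("C", "D"): 0,  # sucker: 4th-ranked outcome
    ("D", "C"): 5,  # temptation: 1st-ranked outcome
    ("D", "D"): 1,  # punishment: 3rd-ranked outcome
}

def best_response(predicted_opponent_move):
    """Best reply against an opponent whose (constant) move has been predicted."""
    return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, predicted_opponent_move)])

# Against either constant strategy the predictor defects, leaving the
# constant-strategy player with their 3rd- or 4th-ranked outcome.
for constant_move in ("C", "D"):
    print(constant_move, "->", best_response(constant_move))  # both print "D"
```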

Replies from: entirelyuseless
comment by entirelyuseless · 2017-08-31T02:37:39.471Z · LW(p) · GW(p)

One problem is that you seem to be assuming that the agents can predict each other. That is impossible, for the reason you explained yourself: if each agent predicts the other and then acts on that prediction, there is circular causality, which is impossible. So they cannot predict each other.

Replies from: DragonGod
comment by DragonGod · 2017-08-31T16:47:21.104Z · LW(p) · GW(p)

I explained how to resolve the circular causality with predisposition.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-01T01:54:44.360Z · LW(p) · GW(p)

You cannot resolve the circular causality. What you are saying is that each agent realizes, "Oh, I can't base my decision on predicting the other one. So I just have to decide what to do, without predicting." Correct. But then they still cannot base their decision on predicting the other, since they just decided not to do that.

Replies from: DragonGod
comment by DragonGod · 2017-09-01T10:37:08.434Z · LW(p) · GW(p)

Yes, but they update that decision based on how they predict the other agent will react to their predisposition. I added a diagram explaining it.

They temporarily decide on a choice (a predisposition), say q. They then update q based on how they predict the other agent would react to q.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-01T14:17:04.749Z · LW(p) · GW(p)

The question is where you stop in that procedure, and if each agent can perfectly predict where the other will stop thinking about it and act, the original circular causality will return.

Replies from: DragonGod
comment by DragonGod · 2017-09-01T15:45:03.989Z · LW(p) · GW(p)

Explain please?

At (D,D) no agent would change their strategy, because it is a Nash equilibrium.

(D,C) collapses into (D,D). (C,D) collapses into (D,D).

At (C,C) any attempt to change strategy leads to either (D,C) or (C,D) which both collapse into (D,D).

So (C,C) forms (for lack of a better name) a reflective equilibrium. I don't understand how you reached circular causality.
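A rough sketch of that check (my illustration, not from the post; it assumes the standard payoff ordering and, crucially, that any unilateral switch is perfectly predicted and answered with defection, which is exactly the assumption under dispute):

```python
# Profiles are (A's move, B's move). Assumed standard PD payoffs; only the
# ranking (5 > 3 > 1 > 0) matters for the argument.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff(profile, player):
    a, b = profile
    return PAYOFF[(a, b)] if player == 0 else PAYOFF[(b, a)]

def outcome_after_switch(profile, player):
    """Where a unilateral switch ends up, assuming the switch is perfectly
    predicted and the opponent then defects (the contested assumption)."""
    switched = list(profile)
    switched[player] = "D" if switched[player] == "C" else "C"
    if switched[0] != switched[1]:
        return ("D", "D")  # (C,D) and (D,C) collapse to mutual defection
    return tuple(switched)

def is_reflective_equilibrium(profile):
    """No player gains by switching, given the rule above."""
    return all(
        payoff(outcome_after_switch(profile, p), p) <= payoff(profile, p)
        for p in (0, 1)
    )

for profile in [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]:
    print(profile, is_reflective_equilibrium(profile))
# (C, C) and (D, D) come out stable under this rule; (C, D) and (D, C) do not.
```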

comment by IlyaShpitser · 2017-08-30T21:34:05.031Z · LW(p) · GW(p)

Advice: read a lot more, write a lot less, and when doing papers, do them with someone who has done them before.

EY's first paper sucked (he agrees).


Paper-writing is a complex skill; I am advising apprenticing to learn it. It's why I mentioned graduate school earlier.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T22:36:59.242Z · LW(p) · GW(p)

Hmmm... Maybe I'll consider it in more depth later. I have started my education (self-teaching using books) on decision theory; this was a stab at the subject before I learned the formal approaches and such. It was my best-effort attempt at solving the prisoner's dilemma with the little understanding of decision theory I had at the time.

I am thinking of formalising the concept I outlined here, and developing it into a paper after completing my decision theory education; is there any problem with my theory?

comment by Friendly-HI · 2017-08-30T19:52:39.397Z · LW(p) · GW(p)

This entire thing is super confused. A lot of complexity and assumptions are hidden inside your words, seemingly without you even realizing it.

The whole point of using a formal language is that IF your premises / axioms are correct, AND you only use logically allowed operations, THEN what comes out at the tail end should equal truth. However, you are really just talking with letters acting as placeholders for what could just as well be simply more words:

Committing on A's part, causes B to commit to defect (and vice versa). committing leads to outcomes ranked 3rd and 4th in their preferences. As A and B are rational, they do not commit.

What does "commit" mean?

As A is not committing, A's strategy is either predict(B) or !predict(B).

What could it even mean to not commit and !predict(B)? How is not predicting B just another way of committing to defect/collaborate?

If A predicts B will defect, A can cooperate or defect. If A predicts B will cooperate, A can cooperate or defect, Vice versa.

They can cooperate or defect whether or not they predict each other, because that is all they can do anyway, so this statement has zero information content.

The above assignment is circular and self-referential. If A and/or B tried to simulate it, it leads to a non terminating recursion of the simulations.

What makes you possibly say that with such confidence at this point in all this confusion?

You should google "cargo cult science" by Feynman, because it seems like there is also such a thing as cargo cult rationality, and frankly I think you're doing it. I'm not trying to be mean, and it's not like I could do this kind of mental gymnastics better than you, but you can't do it either yet, and the sooner you realize it the better for you.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T20:52:22.118Z · LW(p) · GW(p)

What does "commit" mean?

To commit to a choice means to decide to adopt that choice irrespective of all other information. Committing means not taking into account any other information in deciding your choice.
 

What could it even mean to not commit and !predict(B)? How is not predicting B just another way of committing to defect/collaborate?

To not commit is to decide to base your strategy on your prediction of the choice the opponent adopts. You can either choose the same choice as what you predicted, or choose the opposite of what you predicted (any other choice is adopting an invariant strategy).

They can cooperate or defect whether or not they predict each other, because that is all they can do anyway, so this statement has zero information content.

Yes, I was merely outlining A's options when they reach the point in their decision making at which they predict the opponent's strategy.
 

What makes you possibly say that with such confidence at this point in all this confusion?

It is obvious. Self-referential assignments do not compute. If the above assignment were implemented as a program, it would not terminate. Trying to implement a self-referential assignment leads to an infinite recursion.
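A toy demonstration (hypothetical agents of my own construction, just to show the non-termination): two agents that each decide by first simulating the other never bottom out.

```python
def agent_a():
    # A decides by first predicting B, i.e. by simulating B's deliberation.
    prediction_of_b = agent_b()
    return "C" if prediction_of_b == "C" else "D"

def agent_b():
    # B decides the same way, by simulating A.
    prediction_of_a = agent_a()
    return "C" if prediction_of_a == "C" else "D"

try:
    agent_a()
except RecursionError:
    # Each simulation spawns another simulation; the mutual "predict the
    # other first" assignment never terminates on its own.
    print("mutual simulation did not terminate")
```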
 

You should google "cargo cult science" by Feynman, because it seems like there is also such a thing as cargo cult rationality, and frankly I think you're doing it. I'm not trying to be mean, and it's not like I could do this kind of mental gymnastics better than you, but you can't do it either yet, and the sooner you realize it the better for you.

I am not. If you do not understand what I said, then you can ask for clarification rather than assuming poor epistemic hygiene on my part; that is not being charitable.

Replies from: philh
comment by philh · 2017-08-31T13:27:02.949Z · LW(p) · GW(p)

if you do not understand what I said, then you can ask for clarification rather than assuming poor epistemic hygiene on my part

The problem with this strategy is that one can end up wasting arbitrary amounts of one's time on idiots and suicide rocks.

Replies from: DragonGod
comment by DragonGod · 2017-08-31T16:54:51.088Z · LW(p) · GW(p)

I'm pretty sure that charity is a good principle to have in general.

comment by dogiv · 2017-08-30T20:54:40.183Z · LW(p) · GW(p)

The first section is more or less the standard solution to the open source prisoner's dilemma, and the same as what you would derive from a logical decision theory approach, though with different and less clear terminology than what is in the literature.

The second section, on application to human players, seems flawed to me (as does the claim that it applies to superintelligences who cannot see each other's source code). You claim the following conditions are necessary:

  1. A and B are rational

  2. A and B know each other's preferences

  3. They are each aware of 1 and 2

But in fact, your concept of predisposing oneself relies explicitly on having access to the other agent's source code (and them having access to yours). If you know the other agent does not have access to your source code, then it is perfectly rational to predispose yourself to defect, whether or not you predict that the other agent has done the same. Cooperating only makes sense if there's a logical correlation between your decision to cooperate and your opponent's decision to cooperate; both of you just being "rational" does not make your decision processes identical.
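To make the source-code point concrete, here is a minimal sketch of one crude, syntactic version of that idea (my own toy example, with `inspect.getsource` equality standing in for real program analysis): cooperation is justified by an actual correlation between the two programs, not by both players merely being "rational".

```python
import inspect

def mirror_bot(opponent):
    """Cooperate only if the opponent is literally running the same code.
    A crude, syntactic version of the open-source prisoner's dilemma idea."""
    if inspect.getsource(opponent) == inspect.getsource(mirror_bot):
        return "C"
    return "D"

def defect_bot(opponent):
    return "D"

print(mirror_bot(mirror_bot))  # "C": the correlation between the programs is real
print(mirror_bot(defect_bot))  # "D": no correlation, so defection is the safe reply
```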

"Recurrent Decision Theory" is not a meaningful idea to develop based on this post; just read and understand the existing work on UDT/FDT and you will save yourself some trouble.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T22:34:13.155Z · LW(p) · GW(p)

I want to discuss the predisposition part. My argument for human players depends on this. If I were going to predispose myself, that is, decide in advance to choose an option, which option would I predispose myself to?

If the two players involved don't have mutual access to each other's source code, how would they pick up on the predisposition? Well, if B is perfectly rational and has these preferences, then B is, for all intents and purposes, equivalent to a version of me with those preferences. So I engage in a game with A. Now, because A also knows that I am rational and have these preferences, A* would simulate me simulating him.

This leads to a self-referential algorithm which does not compute. Thus, at least one of us must predispose ourselves. Predisposition to defection leads to (D, D), and predisposition to cooperation leads to (C, C). (C, C) > (D, D), so the agents predispose themselves to cooperation.

Remember that the agents update their choice based on how they predict the other agent would react to an intermediate decision step. Because they are equally rational, their decision-making processes mirror each other.

Thus A* is a high-fidelity prediction of B, and B* is a high-fidelity prediction of A.

Please take a look at the diagrams.

Replies from: dogiv
comment by dogiv · 2017-08-31T13:42:07.334Z · LW(p) · GW(p)

You are assuming that all rational strategies are identical and deterministic. In fact, you seem to be using "rational" as a stand-in for "identical", which reduces this scenario to the twin PD. But imagine a world where everyone makes use of the type of superrationality you are positing here: basically, everyone assumes people are just like them. Then any one person who switches to a defection strategy would have a huge advantage. Defecting becomes the rational thing to do. Since everybody is rational, everybody switches to defecting, because this is just a standard one-shot PD. You can't get the benefits of knowing the opponent's source code unless you know the opponent's source code.
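A quick numeric illustration of the invasion point (assumed standard payoffs; the "superrational" population that simply assumes every partner is just like them is modelled as always cooperating):

```python
# Assumed standard PD payoffs for the row player: R=3, T=5, S=0, P=1.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def average_payoff(my_move, population_moves):
    """Average score from one-shot games against everyone in the population."""
    return sum(PAYOFF[(my_move, other)] for other in population_moves) / len(population_moves)

# A population that assumes every partner reasons identically, and so cooperates.
population = ["C"] * 99

print(average_payoff("C", population))  # 3.0 for a conforming cooperator
print(average_payoff("D", population))  # 5.0 for a lone defector: defection invades
```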

Replies from: DragonGod
comment by DragonGod · 2017-08-31T16:53:35.529Z · LW(p) · GW(p)

In this case, I think the rational strategy is identical. If A and B are perfectly rational and have the same preferences, then even assuming they didn't both know the above two facts, they would converge on the same strategy.

I believe that for any formal decision problem, a given level of information about that problem, and a given set of preferences, there is only one rational strategy (not a choice, but a strategy; the strategy may suggest a set of choices as opposed to any particular choice).

I speculate that everyone knows that if a single one of them switched to defect, then all of them would, so I doubt it.

However, I haven't analysed how RDT works in prisoner's dilemma games with n > 2, so I'm not sure.

comment by Dagon · 2017-08-30T17:20:41.949Z · LW(p) · GW(p)

The hard part is having any security in one's prediction. This whole thing is based on the absence of deception possibilities. Neither player can believe that either player can be predicted to cooperate and then actually defect. If you believe you can defect and still be predicted to cooperate, you should defect. And if you believe that your opponent can defect even if you predict they'll cooperate, you should defect.

In humans (and probably most complex agents), there is some uncertainty and non-identical context between players. This means that all 4 outcomes are possible, not just C,C and D,D.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T17:46:23.701Z · LW(p) · GW(p)

In recursive decision theory, I think only C,C and D,D are possible. If A and B are perfectly rational, know each other's preferences, and know the above two facts, then they can model the other by modelling how they would behave if they were in the other's shoes.

Replies from: Dagon
comment by Dagon · 2017-08-30T19:25:54.537Z · LW(p) · GW(p)

This is a simple game and you state the preferences in the problem statement. Perfect rationality is pretty easy for this constrained case. These are NOT sufficient to perfectly predict the outcome (and to know that the opponent can perfectly predict it too).

Such prediction is actually impossible. It's not a matter of being "sufficiently intelligent", as a perfect simulation is recursive: it includes simulating your opponent's simulation of your simulation of your opponent's simulation of you (etc. etc.). Actual mirroring or cross-causality (where your decision CANNOT diverge from your opponent's) requires full state duplication and prevention of identity divergence. This is not a hurdle that can be overcome in general.

This is similar (or maybe identical) to Hofstadter's https://en.wikipedia.org/wiki/Superrationality, and is very distinct from mere perfect rationality. It's a cute theory, which fails in all imaginable situations where agential identity is anything like our current experience.

Replies from: DragonGod
comment by DragonGod · 2017-08-30T20:43:31.524Z · LW(p) · GW(p)

The agents are provided with the information that they are both perfectly rational and that they know each other's preferences. I showed a resolution to the recursive simulation problem: the agents can avoid it by predisposing themselves.