Comments

Comment by jamespayor on AI Alignment Open Thread August 2019 · 2019-08-08T04:05:03.217Z · score: 13 (5 votes) · LW · GW

It wasn't meant as a reply to a particular thing - mainly I'm flagging this as an AI-risk analogy I like.

On that theme, one thing "we don't know if the nukes will ignite the atmosphere" has in common with AI risk is that the risk comes from reaching new configurations (e.g. temperatures of the sort you get out of a nuclear bomb inside the Earth's atmosphere) that we have no experience with. That's an entirely different question from "what happens with the nukes after a test explosion doesn't ignite the atmosphere".

I like thinking about coordination from this viewpoint.

Comment by jamespayor on AI Alignment Open Thread August 2019 · 2019-08-06T22:11:52.309Z · score: 20 (10 votes) · LW · GW

There is a nuclear analog for accident risk. A quote from Richard Hamming:

Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."

https://en.wikipedia.org/wiki/Richard_Hamming#Manhattan_Project

Comment by jamespayor on Coherent behaviour in the real world is an incoherent concept · 2019-02-12T18:15:18.927Z · score: 26 (7 votes) · LW · GW

First problem with this argument: there are no coherence theorems saying that an agent needs to maintain the same utility function over time.

This seems pretty false to me. If you can predict in advance that some future version of you will be optimizing for something else, you could trade with that future "you" and merge utility functions, which seems strictly better than not doing so. (Side note: I'm pretty annoyed with all the use of "there's no coherence theorem for X" in this post.)
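To gesture at why merging beats not merging, here's a toy Python sketch (the payoffs and the two-period setup are numbers I made up; it's just the standard gains-from-trade picture):

# Made-up payoffs: period 1 is controlled by current-me (utility U1),
# period 2 by future-me (utility U2); each action pays out (U1, U2).
period_1 = {"A": (3, 0), "B": (2, 2)}
period_2 = {"C": (0, 3), "D": (2, 2)}

def totals(pick_1, pick_2):
    # Sum the (U1, U2) payoffs when each period's controller uses its own rule.
    a, b = pick_1(period_1), pick_2(period_2)
    return (period_1[a][0] + period_2[b][0], period_1[a][1] + period_2[b][1])

selfish_1 = lambda acts: max(acts, key=lambda a: acts[a][0])   # optimize U1 only
selfish_2 = lambda acts: max(acts, key=lambda a: acts[a][1])   # optimize U2 only
merged = lambda acts: max(acts, key=lambda a: sum(acts[a]))    # optimize U1 + U2

print(totals(selfish_1, selfish_2))  # (3, 3): each self optimizes its own utility
print(totals(merged, merged))        # (4, 4): the merged utility does better for both

Here both the current and future selves end up strictly better off under the merged utility, which is the sense in which the trade looks like a free improvement.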

As a separate note, the "further out" your goal is, and the more your actions are taken for their instrumental value, the more behaviour should look like world 1, in which agents value abstract properties of world states, and the less we should observe preferences over the trajectories taken to reach those states.

(This is a reason in my mind to prefer the approval-directed-agent frame, in which humans get to inject preferences that are more about trajectories.)

Comment by jamespayor on Diagonalization Fixed Point Exercises · 2018-12-07T10:05:05.603Z · score: 9 (2 votes) · LW · GW

Q7 (Python):

Y = lambda s: eval(s)(s)
Y('lambda s: print(f"Y = lambda s: eval(s)(s)\\nY({s!r})")')
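
(If I've traced the escaping right, running those two lines prints back exactly those two lines, so the program's output is its own source. Needs Python 3.6+ for the f-string.)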

Q8 (Python):

Not sure about the interpretation of this one. Here's a way to have it work for any fixed Python function f:

f = 'lambda s: "\\n".join(s.splitlines()[::-1])'
go = 'lambda s: print(eval(f)(eval(s)(s)))'
eval(go)('lambda src: f"f = {f!r}\\ngo = {go!r}\\neval(go)({src!r})"')
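
(If I've traced this right: eval(s)(s) reconstructs the program's own three lines of source, eval(f) reverses them, and the result gets printed, i.e. the program prints itself bottom-to-top. Any other Python function-of-a-string can be dropped in for f.)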

Comment by jamespayor on Rationalist Lent · 2018-02-14T22:18:19.752Z · score: 15 (5 votes) · LW · GW

I've recently noticed something about myself: attempting to push away or not have an experience actually means pushing away the parts of myself that are having that experience.

I then feel an urge to remind readers of the view of Rationalist Lent as an experiment. Don't let this be another way that you look away from what's real for you. But do let it be a way to learn more about what's real for you.

Comment by jamespayor on Beta-Beta Testing: Frontpage Rework [Update - further tweak] · 2018-02-10T05:22:07.610Z · score: 10 (3 votes) · LW · GW

Just a PSA: right-clicking or middle-clicking the posts on the frontpage toggles whether the preview is open. Please make them expand only on left clicks, or equivalent!

Comment by jamespayor on Against Instrumental Convergence · 2018-01-28T06:13:19.156Z · score: 13 (4 votes) · LW · GW

Let's go a little meta.

It seems clear that an agent that "maximizes utility" exhibits instrumental convergence. I think we can state a stronger claim: any agent that "plans to reach imagined futures", with some implicit "preferences over futures", exhibits instrumental convergence.

The question then is how much you can weaken the constraint "looks like a utility maximizer" before instrumental convergence breaks. Where is the point in between "formless program" and "selects preferred imagined futures" at which instrumental convergence starts/stops applying?

---

This moves in the direction of working out exactly which components of utility-maximizing behaviour are necessary. (Personally, I think you might only need to assume "backchaining".)
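
To make the "backchaining is enough" intuition concrete, here's a minimal toy sketch (the graph, state names, and counts are all invented for illustration): a program that just searches for paths to whatever future it's pointed at ends up routing through the high-optionality state for most goals.

from collections import deque

# Invented toy graph: nodes are abstract world states, edges are available actions.
graph = {
    "start": ["acquire_resources", "goal_1"],
    "acquire_resources": ["goal_2", "goal_3", "goal_4", "goal_5", "goal_6"],
}

def plan(start, goal):
    # Plain breadth-first search, standing in for whatever planning/backchaining
    # the agent actually does; returns a shortest path of states.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

goals = [f"goal_{i}" for i in range(1, 7)]
plans = [plan("start", g) for g in goals]
via_power = sum("acquire_resources" in p for p in plans)
print(f"{via_power}/{len(goals)} goals are reached via acquire_resources")  # 5/6

The details of the graph don't matter; the point is that whatever the implicit "preference over futures" turns out to be, most of the imagined futures are easiest to reach from the state with the most options.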

So, I'm curious: What do you think a minimal set of necessary pieces might be before a program is close enough to "goal-directed" for instrumental convergence to apply?

This might be a difficult question to answer, but it's probably a good way to understand why instrumental convergence feels so real to other people.

Comment by jamespayor on Against Instrumental Convergence · 2018-01-28T06:00:15.382Z · score: 11 (3 votes) · LW · GW

Hm, I think an important piece of the "intuitionistic proof" didn't transfer, or is broken. Drawing attention to that part:

Regardless of the details of how "decisions" are made, it seems easy for the choice to be one of the massive array of outcomes possible once you have control of the light-cone, made possible by acquiring power.

So here, I realize, I am relying on something like "the AI implicitly moves toward an imagined realizable future". I think that's a lot easier to get than the pipeline you sketch.

I think I'm being pretty unclear - I'm having trouble conveying my thought structure here. I'll go make a meta-level comment instead.

Comment by jamespayor on Against Instrumental Convergence · 2018-01-27T21:10:55.038Z · score: 31 (10 votes) · LW · GW

I think there's an important thing to note, if it doesn't already feel obvious: the concept of instrumental convergence applies to roughly anything that exhibits consequentialist behaviour, i.e. anything that does something like backchaining in its thinking.

Here's my attempt at a poor intuitionistic proof:

If you have some kind of program that understands consequences, or backchains, etc., then perhaps it's capable of recognizing that "acquire lots of power" will then let it choose from a much larger set of possibilities. Regardless of the details of how "decisions" are made, it seems easy for the choice to be one of the massive array of outcomes possible once you have control of the light-cone, made possible by acquiring power. And thus I'm worried about "instrumental convergence".

---

At this point, I'm already much more worried about instrumental convergence, because backchaining feels damn useful. It's the sort of thing I'd expect most competent mind-like programs to be using in some form somewhere. It certainly seems more plausible to me that a random mind does backchaining than that it looks like "utility function over here" and "maximizer over there".

(For instance, even setting aside how AI researchers are literally building backchaining/planning into RL agents, one might expect most powerful reinforcement learners to benefit a lot from being able to reason in a consequentialist way about actions. If you can't literally solve your domain with a lookup table, then causality and counterfactuals let you learn more from data, and better optimize your reward signal.)
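
A tiny sketch of that last point (the world model and numbers are invented for illustration): even a crude model of consequences lets you rank actions you've never taken, which a bare lookup table over past rewards can't do.

# Invented toy world model: action -> resulting state, state -> reward.
transition = {"explore": "new_area", "stay": "home", "build": "workshop"}
reward = {"new_area": 1.0, "home": 0.2, "workshop": 2.0}

# Consequentialist evaluation: simulate each action's outcome, even untried ones.
model_based_pick = max(transition, key=lambda a: reward[transition[a]])
print(model_based_pick)  # "build", despite never having tried it

# A lookup table over past experience can only rank actions it has already sampled.
experience = {"explore": 1.0, "stay": 0.2}  # "build" was never tried
print(max(experience, key=experience.get))  # "explore"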

---

Finally, I should point at some relevant thinking about how consequentialists probably dominate the universal prior. (Meaning: if you do an AIXI-like random search over programs, you mostly get back consequentialists.) See this post from Paul, and a small discussion on agentfoundations.