Have you lost your purpose?

post by toonalfrink · 2019-05-30T22:35:38.295Z · score: 29 (15 votes) · LW · GW · 6 comments

Not long ago, I noticed myself wondering: why am I working on AI Safety again?

I remembered caring deeply about it. Doing essentially everything for the sake of that singular point in the future. Resting for the sake of work. Brooding over it every moment. But not now.

Now I found myself just following the path of least resistance that my past self had carved out for me. Simply carrying out my social role [LW · GW]. Sheepishly. Kinda ignoring the point of it.

My sincere worry had turned into a fake belief [LW · GW]. One that I kept to preserve social capital, while conveniently forgetting the mentally taxing worldview that motivated my work in the first place.

This had been going on for at least a year, and it worries me. It shows that losing purpose [LW · GW] can happen internally, quietly, long before it manifests outwardly in obviously malign choices. Long before you notice. I was still acting as if I were pursuing AI Safety, but merely to placate social control mechanisms. I wonder how much thinking was wasted while those mechanisms were being appeased.

I thought long and hard about it. What was the point again? I simulated what I thought might happen. I imagined seeing hours of work being done in the 3 seconds after uttering a command. I imagined everything changing overwhelmingly fast, being lost in horrible confusion. I imagined the joy of idea generation being my bottleneck, instead of boring execution.

And it all came back. The fire lit. My beliefs paid rent [LW · GW] again.

Do yours?


6 comments


comment by shminux · 2019-05-31T02:33:42.569Z · score: 15 (4 votes) · LW · GW
> I thought long and hard about it. What was the point again? I simulated what I thought might happen. I imagined seeing hours of work being done in the 3 seconds after uttering a command. I imagined everything changing overwhelmingly fast, being lost in horrible confusion. I imagined the joy of idea generation being my bottleneck, instead of boring execution.

I would like to have a peek into this thought process in more detail, if you feel like sharing.

comment by toonalfrink · 2019-05-31T15:19:20.832Z · score: 4 (2 votes) · LW · GW

Sure.

It starts with the sense that, if something doesn't feel viscerally obvious, there is something left to be explained.

It's a bottom-up process. I don't decide that images will convince me, then think of some images and play them in front of me in the hope that they will convince my s1.

Instead I "become" my s1, take on a skeptical attitude, and ask myself what the fuss is all about.

Warning: the following might give you nightmares, if you're imaginative enough.

In this case, what happened was something like "okay, well I guess at some point we're going to have pretty strong optimizers. Fine. So what? Ah, I guess that's gonna mean we're going to have some machines that carry out commands for us. Like what? Like *picture of my living room magically tidying itself up*. Really? Well yeah I can see that happening. And I suppose this magical power can also be pretty surprising [LW · GW]. Like *blurry picture/sense of surprising outcome*. Is this possible? Yeah like *memory of this kind of surprise*. What if this surprise was like 1000x stronger? Oh fuck..."

I guess the point is that convincing a person, or a subagent, is best explained as an internal decision to be convinced, not as an outside force of convincingness. So if you want to convince a part of you that feels like something outside of you, first you have to become it. You do this by sincerely endorsing whatever it has to say. Then, once that part feels like you, you (formerly it) decide to re-evaluate the thing that the other subagent (formerly you) disagreed with.

A bit like internal double crux, but instead of going back and forth you just do one round. I guess you could call it an internal ITT (Ideological Turing Test).

comment by robertskmiles · 2019-06-03T23:07:40.630Z · score: 11 (3 votes) · LW · GW

Oh yes. I think for me some of this has come from the growth of the AI Safety field, and the shift in the Overton window around it, in the time since I started thinking about it. In 2011 I had this feeling of "We are barrelling towards an apocalypse and nobody is paying it any attention". A lot of my fire came from the fact that drastic things clearly needed to be done and almost nobody was doing anything, so, shit, I guess it's on me.

And now the situation has changed a fair bit, and my personal situation has changed a lot, in that I'm now surrounded by people who also care about this and are working on it, or at least recognise it as an important issue. Sys2 sees pretty clearly that what we've got is nowhere near enough and the problem is very far from solved, but Sys1 sees all these smart and competent people working hard on it, and feels like "Well, the whole tribe is oriented to this threat pretty well, so if it can be met, we'll meet it". So what keeps me going is the social stuff, in the sense of "We're all working on this thing in some way, and nobody else seems to be set up to do the specific job I'm doing, so I can be useful to the group".

comment by Stuart_Armstrong · 2019-05-31T13:57:05.999Z · score: 4 (3 votes) · LW · GW

I sometimes feel like that. But I've set things up so that my social role is to work on what I decided was important (AI safety), so I can let my social role carry me for part of the time.

comment by toonalfrink · 2019-05-31T15:26:09.036Z · score: 3 (2 votes) · LW · GW

Does that still lead to good outcomes, though? I found that being motivated by my social role makes me a lot less effective, because signalling and the actual thing come apart considerably, at least in the short term.

comment by Stuart_Armstrong · 2019-05-31T15:43:56.908Z · score: 5 (3 votes) · LW · GW

Meh. My lack of social awareness helps here. I don't tend to signal, I just lose urgency (which is not ideal, but better than other options). I mean, the problem *is* fascinating, which helps.