Comments

Comment by sampe on Building up to an Internal Family Systems model · 2019-02-17T20:20:11.674Z

Thank you very much for the detailed reply! You answered all my questions.

I got the Self-Therapy audiobook after writing my comment. Looks great so far.

Comment by sampe on Building up to an Internal Family Systems model · 2019-02-17T13:28:20.382Z

Wow, this is all very interesting.

I have been using this framework for a bit, and I think I have found some important clues about exile-manager-firefighter dynamics in myself. Although I'm just starting out and still have to work out my next steps, I feel hopeful that this is the right direction.

There are some things which I would like to know more about. Feel free to answer any.

Which agent should the sympathetic listener be talking to? The manager, the exile, or both?

Assuming that one correctly identifies which thoughts (and ultimately, which situations) a manager deems dangerous, and that one successfully does cognitive defusion: in your opinion, to what extent is it feasible to have the manager (or the exile) update just by talking to them, versus by re-experiencing the dangerous situation with a positive outcome? To what extent is it possible that, even when a sympathetic listener talks with the manager/exile, they still don't update easily until they directly encounter experiences that contradict what they believe? What makes updating by talking, or by experiencing, harder or easier?

Comment by sampe on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-27T08:46:39.742Z

Alex felt it was bad that Alex felt that Bailey felt that Alex leaving out the milk was bad.

I want to point out that not all instances of the word "felt" mean the same thing here, and I want to split the two meanings into new words for clarity. I think this has consequences for what it means to have feelings about one's own feelings.

Bailey felt that Alex leaving out the milk was bad.

In this case, "feeling" stands for "evaluating". Bailey gives a low value to Alex's action, and in some sense, maybe even to Alex. E.g. Alex is dumb, or Alex's actions are suboptimal.

Alex felt that Bailey felt that Alex leaving out the milk was bad.

In this case, the top level "feeling" stands for "sensing". This is a complex phenomenon:

  • X happens inside someone's mind;
  • the person holding X produces some observable effect Y (e.g. they make a facial expression);
  • the part of your brain specialized in this kind of work observes Y, infers X from it (you can't observe X!), and produces a conscious sensation Z (e.g. feeling judged);
  • you consciously observe Z in yourself.

Alex felt it was bad that Alex felt that Bailey felt that Alex leaving out the milk was bad.

The top level "feeling" stands again for "evaluating". After observing Z in yourself, you give it a low value. But Z is a proxy for X given that X is true (you may be wrong inferring X from Y). Giving a low value to Z roughly means "I wish it was not the case that X", but it's difficult to recognize as such because I think it happens very rapidly and as a complex mental sensation rather than a sentence in your head. I posit that all feelings about one's own feelings are of this kind, and I'm interested in hearing counterexamples.

So, to recap:

Alex evaluated it was bad that Alex sensed that Bailey evaluated that Alex leaving out the milk was bad.

I think the unrolling is clearer this way.
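
To make the evaluate/sense split concrete, here is a toy sketch of the recap sentence as a nested data structure (entirely my own illustration; the type names are made up, nothing from the original post):

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Event:
        # Ground-level fact, e.g. "Alex left out the milk".
        description: str

    @dataclass
    class Evaluates:
        # "Felt" in the evaluating sense: holder assigns a (here, low) value to content.
        holder: str
        content: "State"

    @dataclass
    class Senses:
        # "Felt" in the sensing sense: holder infers content (X) from observable
        # cues (Y), experiencing it as a conscious sensation (Z).
        holder: str
        content: "State"

    State = Union[Event, Evaluates, Senses]

    # "Alex evaluated it was bad that Alex sensed that Bailey evaluated
    # that Alex leaving out the milk was bad."
    unrolled = Evaluates("Alex",
                         Senses("Alex",
                                Evaluates("Bailey",
                                          Event("Alex left out the milk"))))

Each "felt" becomes a different constructor, so the nesting is explicit instead of hiding behind one overloaded word.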

Comment by sampe on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-07-28T20:24:34.512Z

Kaj, where can I read more about the three marks of existence? Preferably something as detailed as possible while still being readable in no more than a full day.

Comment by sampe on Melatonin: Much More Than You Wanted To Know · 2018-07-13T22:24:55.874Z

How exactly does one go about getting a diagnosis (and also treatment, but removing the huge doubt is the main thing) of Non-24-Hour Sleep-Wake Disorder?

And can the diagnosis be used as a medical certificate to obtain health benefits?

Comment by sampe on OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August · 2018-06-28T14:44:39.616Z

As someone who is just starting to teach myself machine learning, here are my thoughts on what's happening (probably an incoherent ramble).

I wouldn't have expected gradient descent to find particularly great solutions for Dota. When OpenAI released the 1v1 bot, with all of its restrictions, I figured that was about the maximum achievable through gradient descent. I also figured that if they managed to do more in the following years (as they just did), and if someone then made a similar architecture work for even a slightly broader range of games, e.g. first-person shooters (analogous to the step from AlphaGo to AlphaZero), then that architecture, call it GamesZero, would basically have to be equivalent to a full human-level AGI. (Even in that case, though, I expected the 5v5 bot to come out after 3-5 more years, if at all, not after one year.)

There was also some confusion I had at the time, which I guess I still have, though it's gradually dissolving: if GamesZero could come out of gradient descent, then gradient descent itself also had to be an AGI, in the sense that, e.g., our brains do something very similar to gradient descent, or that any AGI must implement some sort of gradient descent. It just wasn't behaving like one yet because we needed faster algorithms or something.

My current opinion is a bit different. Gradient descent is an optimization process more similar to natural selection than to what our brains do. GamesZero can come out of gradient descent in the same way that human brains came out of natural selection, and while all four of them can be called optimization processes, there is a sense in which I would put human brains and GamesZero in the "AGI category" (whatever that means), while I would put natural selection and gradient descent in the "blind idiot god category". Which is to say, I don't expect either human brains or GamesZero to make use of gradient descent, any more than I expect human brains to make use of simulated natural selection. But of course I could be horribly wrong.
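
To gesture at what I mean by "blind": the whole procedure is just a loop of local downhill steps, with no model of the world and no goal beyond the number being minimized. A minimal sketch (my own toy example, nothing like OpenAI's actual setup):

    import numpy as np

    def loss(theta):
        # Stand-in objective; for the Dota bot this would be, roughly,
        # an estimate of expected negative reward over many games.
        return np.sum((theta - 3.0) ** 2)

    def grad(theta, eps=1e-6):
        # Numerical gradient: probe each coordinate locally, nothing more.
        g = np.zeros_like(theta)
        for i in range(len(theta)):
            step = np.zeros_like(theta)
            step[i] = eps
            g[i] = (loss(theta + step) - loss(theta - step)) / (2 * eps)
        return g

    theta = np.random.randn(4)       # parameters, randomly initialized
    for _ in range(1000):
        theta -= 0.01 * grad(theta)  # a blind local step; no plans, no lookahead
    print(theta)                     # ends up near [3, 3, 3, 3]

Whatever looks like agency in the trained bot lives in the final parameters, not in this loop, just as whatever agency humans have lives in brains, not in natural selection.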

Now that gradient descent has accomplished 30% of what I thought would be sufficient to make an AGI, and which I thought it could never do, I wouldn't be surprised if GamesZero came out in two years, although I would be panicked. (I will be surprised, in addition to panicked, if it comes out next year.)

The question then is: is GamesZero an AGI?

What makes me suspect that the answer is yes is the categorization I mentioned earlier. If GamesZero can work in a variety of games ranging from MOBAs to FPSs, then I also expect it to work in the outside world, similarly to how human brains could only work in certain environments and only behave in certain ways before discovering fire (or whatever the critical historical point was), and then natural selection made something happen that made them able, in principle, to go to the moon.