How some algorithms feel from inside

post by cousin_it · 2011-05-17T11:26:02.885Z · LW · GW · Legacy · 43 comments

So I decided to list the known applications of this popular LW phrase:

What else?

43 comments

Comments sorted by top scores.

comment by jsalvatier · 2011-05-17T16:09:41.326Z · LW(p) · GW(p)

Hope you add these to the wiki.

comment by Emile · 2011-05-17T12:44:11.129Z · LW(p) · GW(p)

Liking and disliking are how feedback-based control feels from the inside.
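Read literally, this maps valence onto the error signal of a feedback controller. A minimal proportional-control sketch; the setpoint, gain, and the mapping of error sign to "liking" are my own illustrative assumptions, not Emile's:

```python
def feedback_step(setpoint, measurement, gain=0.5):
    """One proportional-control step: the signed error is the 'valence'."""
    error = setpoint - measurement
    # In the analogy: positive error feels like wanting more (liking),
    # negative error like wanting less (disliking).
    return gain * error

adjust = feedback_step(setpoint=1.0, measurement=0.4)  # positive: "liking"
```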

Replies from: Nisan, jsalvatier
comment by Nisan · 2011-05-17T17:21:39.497Z · LW(p) · GW(p)

Alternatively, value and desirability are how a prediction-value machine feels from the inside.

comment by jsalvatier · 2011-05-18T00:31:43.993Z · LW(p) · GW(p)

There are some related LW posts.

comment by Kaj_Sotala · 2011-05-18T07:42:28.190Z · LW(p) · GW(p)

Having pictures of your screw-up replay themselves in your mind is how (I speculate) a temporal difference learning algorithm feels from the inside.
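The analogy can be made concrete with a toy TD(0) value update, where a large prediction error (the "screw-up") is the natural candidate for replay. The states, rewards, and replay threshold below are invented purely for illustration:

```python
def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step; mutates the value table and returns the TD error."""
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error
    return td_error

value = {"approach": 0.5, "outcome": 0.0}
# A surprisingly bad outcome produces a large negative TD error...
err = td_update(value, "approach", "outcome", reward=-1.0)
# ...and a replay buffer keyed on |error| would keep revisiting it.
replay_worthy = abs(err) > 0.5
```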

comment by Giles · 2011-05-17T19:34:29.436Z · LW(p) · GW(p)
  • Surprise is how updating feels from the inside
  • Happiness is how an increase in expected reproductive success feels from the inside
  • Depression is how a major reevaluation of strategy feels from the inside
  • Liking or disliking a person is the feeling associated with someone's karma score in reciprocal altruism games (e.g. the iterated prisoner's dilemma; I don't understand control theory, so I don't know whether I agree with Emile here).
  • You know how you're sometimes supposed to abandon Causal Decision Theory in order to punish someone? From the inside that feels like anger.
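The first item can be read almost literally: in information-theoretic terms, surprise tracks the surprisal of an observation under your prior. A minimal sketch, with the probabilities invented for illustration:

```python
import math

def surprisal(p):
    """Surprisal in bits: how far a Bayesian 'jumps' on seeing the event."""
    return -math.log2(p)

# An expected event barely moves you; a rare one forces a big update.
assert surprisal(0.5) == 1.0    # a fair coin flip: 1 bit
assert surprisal(0.125) == 3.0  # a 1-in-8 event: 3 bits
```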
Replies from: wedrifid, None, cousin_it
comment by wedrifid · 2011-05-17T23:40:28.511Z · LW(p) · GW(p)

Depression is how a major reevaluation of strategy feels from the inside

We group rather a lot of different things under the label 'depression'. So it can also be how executing a strategy of caution and conservation feels from the inside.

comment by [deleted] · 2011-05-18T06:06:49.422Z · LW(p) · GW(p)

Depression is how a major reevaluation of strategy feels from the inside

Depression is how broken hardware feels from the inside.

Replies from: wedrifid
comment by wedrifid · 2011-05-18T06:23:01.883Z · LW(p) · GW(p)

Depression is how broken hardware feels from the inside.

I have to disagree with this as a matter of fact. The hardware is operating as intended; most depression really is a feature, not a bug. Unfortunately the hardware just doesn't care how we feel about it when it executes its adaptations. Fortunately we don't have to care about what it is trying to achieve - we can go ahead and eradicate depression, aging and death regardless.

Replies from: None
comment by [deleted] · 2011-05-18T06:56:11.415Z · LW(p) · GW(p)

I have difficulty imagining how depression - not "feeling sad", but clinical depression - could be adaptive, either in the evolutionary environment or the modern one. Perhaps I am insufficiently imaginative (but I imagine not).

Replies from: wedrifid
comment by wedrifid · 2011-05-18T07:05:52.773Z · LW(p) · GW(p)

I have difficulty imagining how depression - not "feeling sad", but clinical depression - could be adaptive, either in the evolutionary environment or the modern one. Perhaps I am insufficiently imaginative (but I imagine not).

Yes, I'm afraid that is a failure of imagination. Theorists have imagined various adaptive causes.

Replies from: Kaj_Sotala, None
comment by Kaj_Sotala · 2011-05-18T08:22:57.468Z · LW(p) · GW(p)

Those are interesting hypotheses, but it still seems overconfident to claim "as a matter of fact" that depression is a feature. "Adaptation is an 'onerous concept' to be invoked only when alternative explanations fail". I'd still assign at least a 20% chance for depression to be a bug.

(One could also argue that even if depression were a feature in the EEA, it's still a bug in the sense of being too easily activated in a modern environment... but that's starting to sound too much like a debate over definitions.)

Replies from: wedrifid
comment by wedrifid · 2011-05-18T08:40:44.448Z · LW(p) · GW(p)

"Adaptation is an 'onerous concept' to be invoked only when alternative explanations fail"

I disagreed with (the strength of) your position back when you made that post. I believe you privileged 'alternative explanations' too highly - not being an adaptation is not a default that gets special epistemic rights until onerous proof is supplied. When something is as prevalent as depression, and also has a blatantly obvious precautionary social role in less civilised cultures (and even in abusive situations in otherwise civilised cultures), it is 'not being an adaptation' that becomes the onerous claim.

That said:

I'd still assign at least a 20% chance for depression to be a bug.

I only go a couple of bits lower than that myself.

"As a matter of fact" was meant to convey that the disagreement was not to the spirit or sentiment (which I share), merely to a historical detail. This is distinct from the usage "this is known with complete confidence so there".

comment by [deleted] · 2011-05-18T07:18:37.255Z · LW(p) · GW(p)

That's interesting, although "it's a failure mode, like chronic pain, of a normal mechanism" is the most immediately reasonable-sounding one to me.

If any of the others are even partially true, I would characterize them as "hardware has evolved to be broken". Which I believe puts us into violent agreement.

Replies from: wedrifid
comment by wedrifid · 2011-05-18T07:40:01.087Z · LW(p) · GW(p)

That's interesting, although "it's a failure mode, like chronic pain, of a normal mechanism" is the most immediately reasonable-sounding one to me.

There is a failure mode - much like 'autoimmune disease' is a failure mode of the infection fighting feature. But most of depression just doesn't fit that category. The 'reasonableness' here tends to be a social judgement more than a descriptive one. But evolution just doesn't have our social instincts.

Which I believe puts us into violent agreement.

About the desirability of the feature and the approach to managing it, if not its nature, cause and adaptive purpose. Depression is a normal and natural feature of the human animal. It is also bad, evil, deprecated and an enemy to be terminated with extreme prejudice.

comment by cousin_it · 2011-05-17T19:37:05.085Z · LW(p) · GW(p)

Nice, but 3 and 4 feel more tenuous to me than the rest. About depression, why would reevaluation of strategy sometimes lead to suicide? About liking, can't I like someone even if they didn't do anything good for me?

Replies from: Giles
comment by Giles · 2011-05-17T19:46:35.357Z · LW(p) · GW(p)

Also, what I put for "happiness" isn't an algorithm. Sorry about that one.

Suicides are pretty rare, and there are rare cases where suicide works as an evolutionary strategy - essentially where you are taking resources away from more successful kin.

Re. liking - the iterated prisoner's dilemma starts off with "cooperate", so you should like everyone to begin with. Would you continue to like someone if they kept defecting?
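The dynamic described here - start by cooperating, stop "liking" a persistent defector - is essentially tit-for-tat. A minimal sketch, with move encodings invented for illustration:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; then mirror the opponent's last move."""
    if not opponent_history:
        return "C"  # like everyone to begin with
    return opponent_history[-1]

# Against a persistent defector, the initial goodwill lasts exactly one round.
moves = [tit_for_tat(["D"] * n) for n in range(4)]
```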

Replies from: cousin_it, Perplexed
comment by cousin_it · 2011-05-17T20:01:55.607Z · LW(p) · GW(p)

About liking, I don't like everyone by default. And sometimes people like people who defect against them. There's some correlation, but it doesn't seem to be a complete explanation.

About suicide, do you think suicide is much less likely if you have no kin? That seems empirically checkable and I'd be amazed if it turned out to be true.

Replies from: Giles
comment by Giles · 2011-05-17T20:53:24.317Z · LW(p) · GW(p)

I would expect suicide to be less likely if you have no kin, as that's my hypothesis. A very quick Googling doesn't turn anything up though, so I'll update in favor of "other hypothesis I haven't thought of yet".

[Edit: in order to help vindicate or demolish this hypothesis, some other things it would predict: suicidal intentions would be correlated with loss of appetite, frugal spending, and family-wide poverty]

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-05-17T22:16:14.326Z · LW(p) · GW(p)

The original journal article is behind a paywall, but here is Jesse Bering's SciAm blog post elucidating Dr. Denys deCatanzaro's theory of adaptive suicide.

His theory seems to be in alignment with your beliefs on the subject.

comment by Perplexed · 2011-05-18T01:36:14.061Z · LW(p) · GW(p)

Happiness is how an increase in expected reproductive success feels from the inside

Also, what I put for "happiness" isn't an algorithm. Sorry about that one.

I might have written instead that happiness is what you feel when your algorithm for tracking your plan for achieving reproductive success reports that the trajectory is close to being as you had planned. Happiness strikes me as an emotion which celebrates the maintenance of a pleasant status-quo, rather than one that goes around looking for a boost.

comment by loqi · 2011-05-17T17:51:08.499Z · LW(p) · GW(p)

"I" is how feeling stuff from the inside feels from the inside.

comment by XiXiDu · 2011-05-17T15:47:59.706Z · LW(p) · GW(p)

Feeling from the inside is how a human contemplating consciousness feels from inside.

comment by David_Gerard · 2011-05-17T13:06:06.531Z · LW(p) · GW(p)

This is an excellent idea for a post, as it really makes clear what "feels like from the inside" means: to reverse the process of reducing our feelings and instead start from the drives, building up to the feelings. (It took me a while to get this.)

comment by Emile · 2011-05-17T19:02:46.544Z · LW(p) · GW(p)

Fun is how learning skills feels from the inside.

Replies from: wedrifid, SilasBarta, jsalvatier
comment by wedrifid · 2011-05-17T23:32:29.971Z · LW(p) · GW(p)

Fun is how learning skills feels from the inside.

Or doing things that are likely to build social connections.

comment by SilasBarta · 2011-05-18T17:13:07.706Z · LW(p) · GW(p)

Game design (the book you linked to) tries to get the player into a "sweet spot" between complete predictability of the game (boring; an easily-learned classifier) and complete unpredictability of the game (confusion that doesn't look like it will go away).

So, more generally, maybe one could say that fun is how anticipation of learning a classifier feels from inside?
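That sweet-spot idea can be caricatured as an inverted-U in predictability. The quadratic functional form below is invented; it just encodes "fun peaks between boredom and chaos":

```python
def predicted_fun(predictability):
    """Toy inverted-U: fun is low at total chaos (0.0) and total
    predictability (1.0), and peaks somewhere in between."""
    return predictability * (1 - predictability)

# Maximal fun at intermediate predictability, per the sweet-spot idea.
samples = {p: predicted_fun(p) for p in (0.1, 0.5, 0.9)}
```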

comment by jsalvatier · 2011-05-17T23:25:48.327Z · LW(p) · GW(p)

Perhaps with the caveat that the learning involved doesn't necessarily mean 'skill learning'; drinking with your friends is fun but doesn't usually involve learning a skill.

Replies from: Emile
comment by Emile · 2011-05-18T17:51:56.788Z · LW(p) · GW(p)

Agreed; if we want to be more precise one could say that humans find various activities inherently enjoyable: sex, eating, seeing certain things, hearing certain sounds, and learning. The pleasure of learning is roughly what is meant by "fun" in the context of solo games, though other things (socializing, competition, nice graphics, story) can also make the game enjoyable.

comment by Emile · 2011-05-17T13:40:45.341Z · LW(p) · GW(p)

Memorization without understanding is how a Giant Look-Up Table feels from the inside.

Practicing to train your reflex actions is how pre-populating the cache of important functions feels from the inside (OK, I admit that one is quite far-fetched).

Replies from: CuSithBell
comment by CuSithBell · 2011-05-17T23:34:42.806Z · LW(p) · GW(p)

These seem more like metaphors - my understanding was that this exercise was more literal? (Not that I've gone through the comments and found yours most nonliteral, this just happened to be the post that came to my eye.)

Edit: Or, did you mean these to be literal? I may have misconstrued your definitions.

comment by Nisan · 2011-05-17T17:24:48.754Z · LW(p) · GW(p)

A particular kind of "consciousness" or self-awareness is what a Cartesian camcorder feels like from the inside.

comment by sixes_and_sevens · 2011-05-17T12:01:01.142Z · LW(p) · GW(p)

I would propose that kinship and sexual attraction are two mostly distinct cases of how a nearest-neighbour algorithm feels from the inside.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-05-18T14:55:12.559Z · LW(p) · GW(p)

resisting attraction to neighbor jokes...

This and some of the examples in the comments seem to deviate from the pattern. Attraction is what comes after the nearest-neighbor algorithm has run (even if I buy this claim), not what the algorithm feels like from the inside.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-05-18T22:50:58.252Z · LW(p) · GW(p)

I was thinking of the strength of attraction (or kinship, although that's a lot more static) as the continuous updating of a person's perceived distance from us along multiple salient axes.
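That reading can be sketched as a weighted distance in a feature space. The axes, weights, and values below are invented purely for illustration:

```python
import math

def perceived_distance(a, b, weights):
    """Weighted Euclidean distance along the 'salient axes'."""
    return math.sqrt(sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items()))

# Hypothetical axes; in the analogy, smaller distance = stronger attraction.
me = {"humour": 0.9, "values": 0.8}
them = {"humour": 0.7, "values": 0.2}
weights = {"humour": 1.0, "values": 2.0}  # salience of each axis
d = perceived_distance(me, them, weights)
```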

comment by faul_sname · 2012-01-30T23:35:50.693Z · LW(p) · GW(p)

Time might be what an increase in information/entropy feels like from the inside.

comment by SilasBarta · 2011-05-18T17:03:17.098Z · LW(p) · GW(p)

My conjecture:

(I know, in a sense, all "feeling from the inside" is qualia, but that's my best attempt to fit this to the template.)

Never developed into an article, but my initial exposition of the idea got high karma.

comment by Alexandros · 2011-05-17T15:55:03.729Z · LW(p) · GW(p)

erm... OCD is how a sorting algorithm feels from the inside?

comment by timtyler · 2011-05-17T19:06:30.151Z · LW(p) · GW(p)
  • Believing in essences is how a classifier feels from inside

  • Free will is how an action-chooser feels from inside

  • Self esteem is how a status calculation feels from inside

Am pretty sure all of those things can be done unconsciously.

Replies from: Perplexed
comment by Perplexed · 2011-05-18T01:41:17.710Z · LW(p) · GW(p)

I'm not sure why that is an objection. For example, if I have just unconsciously chosen an action, I might then (consciously) claim that the choice was an exercise of free will. Having unconsciously classified, I might then (consciously) report my belief that I have apprehended the essence.

comment by AlphaOmega · 2011-05-17T21:44:18.832Z · LW(p) · GW(p)

Consciousness is how the algorithms of the universal simulation feel from the inside. We are a self-aware simulation program.

Replies from: faul_sname
comment by faul_sname · 2012-01-08T09:47:39.847Z · LW(p) · GW(p)

Not quite sure why this comment was voted down. We do try to simulate the world around us (or at least I do). While I don't know if that is the root of consciousness, it seems to be a plausible claim that consciousness is the feeling of trying to simulate the universes resulting from different choices.