[LINK] Causal Entropic Forces

post by Qiaochu_Yuan · 2013-04-20T23:57:34.160Z · LW · GW · Legacy · 14 comments

This paper seems relevant to various LW interests. It smells like The Second Law of Thermodynamics, and Engines of Cognition, but I haven't wrapped my head around either one enough to say more than that. Abstract:

Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human “cognitive niche”—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.

14 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2013-04-21T06:37:50.559Z · LW(p) · GW(p)

Also linked in the open thread.

comment by Douglas_Reay · 2013-04-28T06:12:06.821Z · LW(p) · GW(p)

I think any program designed to maximise some quantity within a simulated situation will have the potential to solve some problems. It is interesting that when the quantity you choose to maximise is the entropy of the situation, some of the problems this solves are useful ones. But I don't think this is as significant for understanding the nature of, and reason for, intelligence in a universe with our particular set of physical laws as some are claiming.

Take, for example, Wissner-Gross' explanation of "tool use" in his video.

[Relevant still image from the video illustrating tool use]

Set a simulation going. See where the disks end up under the rules you set for the simulation. THEN label the disks as things (a hand, a tool and a piece of food) that would provide a plausible explanation for why an intelligent creature would want the disks to finish up in that particular end configuration.
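For concreteness, the procedure being described amounts to something like the sketch below (placeholder names, not the paper's actual code): simulate each candidate action forward, score the resulting futures with whatever quantity you have chosen to maximise, and take the best-scoring action. The quantity being maximised is just a plug-in parameter.

```python
def greedy_maximising_agent(state, candidate_actions, simulate, objective,
                            horizon=20, rollouts=50):
    """Pick the action whose simulated futures score highest on `objective`.

    `simulate(state, action, horizon)` and `objective(state)` are placeholders
    for whatever dynamics and quantity you choose to maximise; the interesting
    work lies in that choice, not in this loop.
    """
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        # Average the chosen quantity over several (possibly noisy) rollouts.
        total = sum(objective(simulate(state, action, horizon))
                    for _ in range(rollouts))
        score = total / rollouts
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```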

If a creature were actually doing it, the intelligence would lie at least as much in selecting in advance which quantity to maximise in order to achieve a desired result as in carrying out such an algorithm (and there's no evidence that this is actually how we implement the algorithm in our heads).

There's also the matter that the universe isn't particularly efficient at maximising entropy. Through the statistical properties underlying thermodynamics there is a ratchet effect: entropy tends to increase rather than decrease, which will eventually lead to the universe ending up at maximum entropy. But that's rather different from localised seeking behaviour intended to find a maximum-entropy situation in order to solve a problem.

comment by DanielLC · 2013-04-21T02:52:37.996Z · LW(p) · GW(p)

If I understand this right, they're not talking about entropy. They're talking about putting yourself in a position where you have more choices. I think a better word would be power.

Replies from: timtyler
comment by timtyler · 2013-04-21T12:12:25.787Z · LW(p) · GW(p)

They clearly say they are talking about entropy. As do most of their cites. I think "power" would be the wrong term.

Power maximisation and entropy maximisation are closely related concepts - but these ideas can be teased apart:

Given the opportunity, evolved organisms can be expected to spitefully destroy resources accessible only to other unrelated agents - assuming that doing so is inexpensive. I think that entropy maximisation is more clearly consistent with such behaviour than power maximisation is.

Also, entropy is a global measure - while power is a measure of flux in some kind of pipe. Entropy maximisation is thus a simpler idea.

Replies from: DanielLC
comment by DanielLC · 2013-04-21T19:51:57.026Z · LW(p) · GW(p)

I can't actually read the paper, but according to the accompanying article jamesf linked to:

Hoping to firm up such notions, Wissner-Gross teamed up with Cameron Freer of the University of Hawaii at Manoa to propose a “causal path entropy.” This entropy is based not on the internal arrangements accessible to a system at any moment, but on the number of arrangements it could pass through on the way to possible future states.

They are talking about "causal path entropy", a term they defined, not "entropy", the well-known physics term. Confusing them would be a bad idea.
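To illustrate the distinction with a toy example (my own construction, not the paper's formalism): ordinary entropy depends on the arrangements available to the system right now, while causal path entropy depends on how many distinct futures the system can still pass through. A rough Monte Carlo sketch:

```python
import math
import random
from collections import Counter

def empirical_entropy(counts):
    """Shannon entropy (in nats) of an empirical distribution of counts."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

def causal_path_entropy(state, step, horizon=5, rollouts=20000):
    """Toy Monte Carlo estimate of the entropy over future *paths* from `state`.

    `step(state)` is a placeholder stochastic dynamics function; for continuous
    dynamics you would also need to coarse-grain states before counting paths.
    """
    paths = Counter()
    for _ in range(rollouts):
        s, path = state, []
        for _ in range(horizon):
            s = step(s)
            path.append(s)
        paths[tuple(path)] += 1
    return empirical_entropy(paths)

def step(x):
    """Random walk with a hard wall at 0: moves that would cross it stop at 0."""
    return max(0.0, x + random.choice([-2.0, -1.0, 1.0, 2.0]))

# A state pressed against the wall has fewer distinct reachable futures than one
# in open space, so its causal path entropy comes out noticeably lower, even
# though both are single configurations at the present moment.
print(causal_path_entropy(0.0, step), causal_path_entropy(50.0, step))
```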

Power maximisation and entropy maximisation are closely related concepts

Power maximization is what an intelligent agent that values power does. What you linked to is a statistical tool for finding priors. They are unrelated. Am I misunderstanding something?

Replies from: Richard_Kennaway, timtyler
comment by Richard_Kennaway · 2013-04-21T20:48:28.977Z · LW(p) · GW(p)

I can't actually read the paper

Here.

comment by timtyler · 2013-04-22T00:48:11.339Z · LW(p) · GW(p)

You need some background, by the sound of it. The main link between power maximization and entropy maximization is that power is usually acquired in order to perform work, and doing work eventually leads to generating entropy. So: the two ideas often make similar predictions.

As for the link between the Maximum entropy principle of E.T. Jaynes and Maximum entropy thermodynamics, the best resource on that topic which I am aware of is:

Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states by Roderick Dewar, particularly the historical overview at the beginning of the introduction.

comment by timtyler · 2013-04-21T01:52:19.366Z · LW(p) · GW(p)

This is an idea I've been attempting to promote since 2001.

There's a literature on the topic dating back almost 100 years. Here's me on it in 2009.

Replies from: curationary
comment by curationary · 2013-04-24T05:16:12.484Z · LW(p) · GW(p)

Do you have a transcript of your talk? It's very hard to make out the phrases in the video for some reason.

Replies from: timtyler
comment by timtyler · 2013-04-24T23:29:28.532Z · LW(p) · GW(p)

The transcript is just below the video.

comment by timtyler · 2013-04-21T12:47:38.566Z · LW(p) · GW(p)

It smells like The Second Law of Thermodynamics, and Engines of Cognition, but I haven't wrapped my head enough around either to say more than that.

Both articles mention "entropy" - but I think that's about it.

comment by royf · 2013-04-29T18:28:57.178Z · LW(p) · GW(p)

Our research group and collaborators, foremost Daniel Polani, have been studying this for many years now. Polani calls an essentially identical concept empowerment. These guys are welcome to the party, and as former outsiders it's understandable (if not totally acceptable) that they wouldn't know about these piles of prior work.

comment by ESRogs · 2013-04-27T19:48:47.307Z · LW(p) · GW(p)

Another article on the paper, with comments from Wissner-Gross, including possible implications for AI Friendliness: http://io9.com/how-skynet-might-emerge-from-simple-physics-482402911.

comment by jamesf · 2013-04-21T00:22:15.398Z · LW(p) · GW(p)

Here's the accompanying article on the paper.

This is a utility function that says "maximize entropy in the environment". The sequence post says, to condense it into an inadequate analogy, "evidence is to thinking as entropy is to thermodynamics".

I'm not asserting this isn't a deep and important finding, but it reminds me of the NES-game-playing AI that plays games by taking whatever action maximizes some value in memory over a short time span. It works really well for some games, but it doesn't seem promising as a general NES-game player in the way a human can be (after countless hours of frustrating practice).
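For what it's worth, my rough understanding of that approach, sketched with made-up names (this is not the actual program's code or API): snapshot the emulator, try every short sequence of button presses, and keep whichever one makes the chosen memory value climb fastest.

```python
from itertools import product

def greedy_button_masher(emulator, buttons, score_fn, lookahead=3):
    """Toy sketch: try every short button sequence and keep whichever one makes
    `score_fn` (some value read out of memory) end up highest.

    `emulator` is assumed to expose save_state/load_state/step(button); these
    are placeholder names, not a real emulator API.
    """
    best_seq, best_score = None, float("-inf")
    snapshot = emulator.save_state()
    for seq in product(buttons, repeat=lookahead):
        emulator.load_state(snapshot)
        for button in seq:
            emulator.step(button)
        score = score_fn(emulator)
        if score > best_score:
            best_seq, best_score = seq, score
    emulator.load_state(snapshot)
    return best_seq
```

It works well when the chosen memory value actually tracks progress, which is presumably why it shines in some games and falls flat in others.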