Cognitive Bias of AI Researchers?

post by Mindey · 2018-12-22T09:20:52.045Z · LW · GW · Legacy · 7 comments

I find it inconvenient that many AI discussions circle around "agents", "environments" and "goals". These are non-mathematical words, and by using this vocabulary we are anthropomorphizing the natural world's phenomena.

While an "agent" may well be a set of interacting processes that produce emergent phenomena (like volition, cognition, and action), it is not a fundamental, pragmatic mathematical concept. The truly fundamental, pragmatic mathematical concepts may be:

(1) states: a set of possible world states (each being a set of conditions).
(2) processes: a set of world processes progressing towards some of those states.

If so, how could we avoid that anthropomorphic cognitive bias in our papers and discussions?

Would terms (1) and (2) be a good alternative for our discussions, and for expressing the ideas in most AI research papers? E.g., Bob is a process, Alice is a process, ... and collectively they progress towards some desired convergent state, defined by process addition.
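To make this concrete, here is a minimal sketch of one possible encoding, assuming we model a state as a set of conditions and a process as a function from states to states (the condition names and the "process addition" operator below are invented purely for illustration):

```python
from typing import Callable, FrozenSet

State = FrozenSet[str]               # a state: a set of conditions that currently hold
Process = Callable[[State], State]   # a process: maps a world state to a successor state

def bob(state: State) -> State:
    """Bob as a process: works towards the condition 'door_open'."""
    return state | {"door_open"}

def alice(state: State) -> State:
    """Alice as a process: works towards the condition 'room_lit'."""
    return state | {"room_lit"}

def add(*processes: Process) -> Process:
    """'Process addition': compose several processes into a single process."""
    def combined(state: State) -> State:
        for p in processes:
            state = p(state)
        return state
    return combined

def converge(process: Process, state: State, max_steps: int = 100) -> State:
    """Iterate a process until the state stops changing - a convergent state."""
    for _ in range(max_steps):
        next_state = process(state)
        if next_state == state:
            return state
        state = next_state
    return state

world: State = frozenset({"initial"})
print(converge(add(bob, alice), world))
# a convergent state containing 'initial', 'door_open' and 'room_lit'
```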

What fundamental concepts would you consider to be a better alternative to talk formally about the AI domain?

7 comments


comment by Wei Dai (Wei_Dai) · 2018-12-22T19:10:03.932Z · LW(p) · GW(p)

In AI research, "agents", "environments" and "goals" sometimes refer to intuitive concepts, and sometimes refer to mathematics that attempts to formalize those concepts. So they are both mathematical and non-mathematical words, just like "point" and "line" are. This is just how almost every field of research works. Consider "temperature" in physics, "secure" in cryptography, "efficient" in economics, etc.

Replies from: Inyuki, Mindey
comment by Inyuki · 2018-12-26T08:01:08.058Z · LW(p) · GW(p)

New terminology only makes sense when the phenomena it describes have new qualities on top of the basic phenomena. Every process is an optimizer, because anything that changes states optimizes towards something (say, a new state). Thus, "maximizer," "intelligent agent," etc. may be said to be redundant.

comment by Mindey · 2018-12-23T05:22:19.925Z · LW(p) · GW(p)

Certainly true, yet just because this is how almost every field of research works doesn't mean it's how those fields should work, and I like shminux's point [LW(p) · GW(p)].

comment by Ben Pace (Benito) · 2018-12-22T09:30:50.582Z · LW(p) · GW(p)

I feel a bit confused reading this. The notion of an expected utility maximiser is standard in game theory and economics, and is (mathematically) defined as having a preference (ordering) over states that is complete, transitive, continuous and independent.
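For reference, here is a rough sketch of the standard von Neumann-Morgenstern statement behind that definition (the lottery notation is chosen here for illustration, not quoted from any particular textbook):

```latex
% Sketch of the von Neumann-Morgenstern representation theorem (standard result;
% lottery notation chosen for illustration).
% If a preference relation \succeq over lotteries is complete, transitive,
% continuous and independent, then there exists a utility function u such that
\[
  L \succeq M
  \iff
  \sum_i p_i \, u(x_i) \;\ge\; \sum_i q_i \, u(x_i),
\]
% where lottery L assigns probability p_i and lottery M assigns probability q_i
% to outcome x_i. An expected utility maximiser then simply picks an available
% lottery with the highest \sum_i p_i \, u(x_i).
```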

Did you not really know about the concept when you wrote the OP? Perhaps you've mostly done set theory and programming, and not run into the game theory and economics models?

Or maybe you find the concept unsatisfactory in some other way? I agree that it can give license to bring in all of one's standard intuitions surrounding goals and such, and Rohin Shah has written [? · GW] a post trying to tease those apart. Nonetheless, I hear that the core concept is massively useful in economics and game theory, suggesting it's still a very useful abstraction.

Similarly, concepts like 'environment' are often specified mathematically. I once attended an 'intro to AI' course at a top university, and it repeatedly defined the 'environment' (the state space) of a search algorithm in toy examples - the course had me code A* search for a pretend 'mars rover' to drive around and find its goal. Things like defining a graph, the weights of its edges, etc., or otherwise enumerating the states and how they connect to each other, are ways of defining such concepts.
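A minimal sketch of that kind of toy setup, assuming the 'environment' is just a weighted graph plus an admissible heuristic (the node names, edge costs and heuristic values are made up for illustration):

```python
import heapq

# A toy 'environment' for a pretend rover: a weighted graph over named states,
# plus a heuristic estimate of the remaining cost to the goal (all values invented).
graph = {
    "start":  {"crater": 2, "ridge": 5},
    "crater": {"ridge": 1, "goal": 7},
    "ridge":  {"goal": 3},
    "goal":   {},
}
heuristic = {"start": 4, "crater": 3, "ridge": 2, "goal": 0}

def a_star(graph, heuristic, start, goal):
    """Return (path, cost) of a cheapest path from start to goal, or None."""
    frontier = [(heuristic[start], 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + heuristic[neighbour], new_g,
                                          neighbour, path + [neighbour]))
    return None

print(a_star(graph, heuristic, "start", "goal"))
# (['start', 'crater', 'ridge', 'goal'], 6)
```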

If you have any examples of people misusing the words - situations where an argument is made by association and falls apart if you replace the common word with a technically precise definition - that would also be interesting.

Replies from: Inyuki
comment by Inyuki · 2018-12-22T10:38:12.739Z · LW(p) · GW(p)

I feel a bit confused reading this. The notion of an expected utility maximiser is standard in game theory and economics. Or maybe you find the concept unsatisfactory in some other way?

The latter. Optimization is more general than expected utility maximization. By applying expected utility theory, one tries to minimize the expected distance to a set of conditions (a goal), rather than the distance to a set of conditions (a state) in an abstract, general sense.
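In symbols, the contrast I have in mind is roughly the following (the notation is invented here for illustration):

```latex
% Expected-utility view: pick a policy \pi maximizing expected utility over outcomes.
% State/process view: pick a process p minimizing the distance from the reached state
% to a target set of conditions S*.
\[
  \max_{\pi}\; \mathbb{E}_{s \sim \pi}\big[\, u(s) \,\big]
  \qquad \text{vs.} \qquad
  \min_{p}\; d\big(\, p(s_0),\, S^{*} \big),
\]
% where s_0 is the current state, p ranges over processes, and d is some distance
% between a state and the set of conditions S*.
```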

The original post (OP) is about refactoring the knowledge tree in order to make discussions less biased and more accessible across disciplines. For example, the use of abbreviations like "OP" may make a discussion less accessible to some audiences. Similarly, using well-defined concepts like "agent" may make discussions less accessible to those who know only the informal definitions (similar to how the mathematical abstractions of point and interval may confuse the uninitiated).

The concepts of "states" and "processes" may be less confusing because they are generic and don't seem to have other everyday interpretations within similar domains, the way "environments", "agents", "intervals", "points" and "goals" do.

Replies from: habryka4
comment by habryka (habryka4) · 2018-12-22T17:27:46.694Z · LW(p) · GW(p)

Are you the same person as the author of the top-level post? (You seem to have a different username)

comment by Shmi (shminux) · 2018-12-22T17:27:55.972Z · LW(p) · GW(p)

I agree that states and processes match the underlying physical and biological mechanisms better. AI researchers tend to violently ignore this level of reality, however, as it does not translate well into the observable behavior of what look to us like agents making decisions based on hypotheticals and counterfactuals. I've posted about it here multiple times before. I am skeptical you will get more traction.