What is Wrong?

post by Inyuki · 2019-02-01T12:02:13.023Z · LW · GW · 2 comments


I've always looked at LessWrong as a community that aims to reduce errors of reasoning. However, the word "wrong" has always seemed to carry connotations of ethics and logic rather than of goal pursuit. Something being "right" or "wrong" is generally thought of as the state of a logical proposition with respect to some logic axioms and ontological assumptions, rather than as a pragmatic matter.

However, it may be that the logical axioms one accepts are a result of one's interests and one's observations about the world. For example, if one is interested in binary-tree-like understanding, one chooses to accept the law of excluded middle. If one is interested in understanding the universe through simulation, one may choose to accept the axioms of constructive logic. If one is interested in disproving obviously true statements, one chooses Trump logic, and so on. It is pragmatic...

So, what do we do if we want to *explore* rather than adhere to any predefined logic axioms? In general, if one has a goal Y, one searches for logic axioms X that would help one reason one's way to Y. Therefore, with respect to an agent with a goal Y, "wrong" is any X that does not minimize one's distance to Y, and being "Less Wrong" implies *not just* reducing cognitive or reasoning errors, but *generally* "optimizing": not just in the domain of logical functions, or ethical functions, but in general.
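To make that reading of "less wrong" concrete, here is a minimal, purely illustrative sketch in Python (the names, the toy "distance" measure, and the candidate toolkits are all hypothetical, not anything from the post): among candidate axiom sets or toolkits X, the least wrong one is whichever minimizes the distance to the goal Y.

```python
# Purely illustrative sketch: names, data, and the "distance" measure are hypothetical.
# It frames "wrong" as any choice of X that fails to minimize distance to the goal Y.

def distance_to_goal(x, y):
    """Toy measure of how far toolkit x leaves us from goal y: count of unmet requirements."""
    return len(y - x)

def least_wrong(candidates, goal):
    """Pick the candidate X that minimizes distance to goal Y."""
    return min(candidates, key=lambda x: distance_to_goal(x, goal))

# Toy example: "toolkits" are sets of capabilities, the goal is a set of requirements.
goal_Y = {"predict", "simulate", "prove"}
candidates = [
    {"predict", "prove"},       # classical-logic-flavoured toolkit
    {"predict", "simulate"},    # constructive / simulation-flavoured toolkit
    {"prove"},                  # narrow toolkit
]

print(least_wrong(candidates, goal_Y))  # a toolkit at distance 1 (ties broken by order)
```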

The answer as to which specific domain we have to optimize in order to be less wrong in general has been elusive to me, but it seems that the creation of new, more capable media that transmit, hold, and preserve all of our systems, minds, and cultures, and that let them evolve and flourish, is the one domain with respect to which we should consider what is wrong or right.

So, when judging something (X) to be right or wrong, we should look at how it affects the world's total information content (Y).

Is AI wrong?

AI is a way to compress information by building powerful models. Once a model is built, the information may be thrown away. When a social network like G+ acquires all the information it needs about its participants, a company like Google learns it and then prunes it (closes the service). After all, once it has figured out how you produce the content, you may no longer be valuable as an information source.
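As a toy illustration of that compression claim (a sketch with made-up data; not a description of how G+ or Google actually operated), one can fit a simple model to a pile of observations, throw the observations away, and keep only a couple of parameters:

```python
# Illustrative sketch with hypothetical data: modeling as lossy compression.
# Once a model of how the data is produced has been fit, the raw data can be
# discarded and regenerated approximately from a handful of parameters.

observations = [(x, 2.0 * x + 1.0 + (0.1 if x % 2 else -0.1)) for x in range(1000)]

# Fit a one-line model y = a*x + b by ordinary least squares.
n = len(observations)
sx = sum(x for x, _ in observations)
sy = sum(y for _, y in observations)
sxx = sum(x * x for x, _ in observations)
sxy = sum(x * y for x, y in observations)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The "service" can now be closed: 1000 points are replaced by 2 parameters.
del observations
model = (a, b)

def regenerate(x, a=a, b=b):
    """Reproduce an approximate observation from the compressed model alone."""
    return a * x + b

print(model)           # roughly (2.0, 1.0)
print(regenerate(10))  # roughly 21.0
```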

It may be wrong to concentrate AI power in the hands of a few. It may be right to empower everyone to refactor (compress) their own minds and let them be more efficient agents of society, cooperating to maximize the world's information content (Y).

2 comments


comment by Bae's Theorem (Senarin) · 2019-02-01T21:10:45.398Z · LW(p) · GW(p)

If I understand you correctly, I wholeheartedly agree that "Less Wrong" is not just referring to a dichotomy of "what is true" and "what is wrong" (although that is part of it, à la 'Map and Territory').

There's a reason rationality is often called "systematized winning"; while the goals you are trying to achieve are entirely subjective, rationality can help you decide what is optimal in your pursuit of goal Y.

Replies from: Inyuki
comment by Inyuki · 2019-02-05T14:09:53.925Z · LW(p) · GW(p)

Well, my main point was that errors can be of arbitrary type: one may be in modeling what is ("Map"), another in modeling what we want to be ("Territory"), and one can think of an infinite number of types of "errors": logical, ethical, pragmatic, moral, ecological, cultural, situational; the list goes on and on. And if we think of each type of error as a "suboptimality", then to "err less" or be "less wrong" would be etymologically equivalent to "optimize". So, we're a community for optimization. And that's actually equivalent to intelligence.

No matter whether we are seeking truth or pragmatics, the methods of rationality remain largely the same: the general mathematical methods of optimization.