Why hasn't the technology of Knowledge Representation (e.g., semantic networks, concept graphs, ontology engineering) been applied to create tools to help human thinkers?

post by Polytopos · 2020-03-09T06:11:13.632Z · 6 comments

This is a question post.

I've been studying the field of knowledge representation (KR), and everything I've come across is focused on building knowledge systems for machine processing. I wonder why nobody seems to have applied these ideas to make digital KR systems for human beings to use as tools for active inquiry or personal knowledge management. Reading this stuff, I get the sense that it is mostly about encoding a lot of boring facts rather than helping us at the edge of discovery.

Two off-the-cuff hypotheses:

1. Lack of economic incentives to develop high-quality, general-purpose, user-facing software for KR. These tools are too hard to use effectively in their current state to see any kind of widespread adoption outside of profit-driven business interests.

2. Inability of existing KR systems to conform ergonomically to human patterns of learning and reasoning. If so, this might be due to an insufficient understanding of how to transition between informal, natural-language reasoning and formalized reasoning, or it may simply be that the chosen formalisms are not the best ones for empowering human thought.

On the other hand, I might be dead wrong. Maybe there has been some brilliant use of this stuff for human discovery that I am unaware of. If so, I would love to know about it.

Answers

answer by quanticle · 2020-03-09T07:07:54.070Z

Third hypothesis: knowledge representation isn't actually a good paradigm for either human or machine learning. Neural networks don't have to be initialized with a structure; they infer the structure from the data, just as humans do.
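To illustrate with a toy sketch (the network size, seed, and learning rate here are arbitrary choices of mine): a tiny network starts from random weights, with no schema or ontology built in, yet recovers the structure of XOR from the data alone.

```python
# Toy sketch: a two-layer net with random initial weights learns XOR.
# No structure is hand-coded; it is inferred from the four data points.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # predicted probability
    # Gradients of squared error, computed by hand.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad           # gradient descent step

print(np.round(p.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```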

comment by johnswentworth · 2020-03-09T17:47:07.830Z

"Infer the structure from the data" still implies that the NN has some internal representation of knowledge. Whether the structure is initialized or learned isn't necessarily central to the question - what matters is that there is some structure, and we want to know how to represent that structure in an intelligible manner. The interesting question is then: are the structures used by "knowledge representation" researchers isomorphic to the structures learned by humans and/or NNs?

I haven't read much on KR, but my passing impression is that the structures they use do not correspond very well to the structures actually used internally by humans/NNs. That would be my guess as to why KR tools aren't used more widely.

On the other hand, there are representations of certain kinds of knowledge which do seem very similar to the way humans represent knowledge - causal graphs/Bayes nets are an example which jumps to mind. And those have seen pretty wide adoption.

comment by Polytopos · 2020-03-09T12:54:38.910Z

Good hypothesis; here is why I don't think it's likely to be true.

It seems to me that when humans make explicit arguments in written language, we are doing a natural-language form of knowledge representation. In science and philosophy, the process of making conceptual models explicit is very useful for theory formulation and evaluation. That is, in conceptual domains, human thinkers don't learn like today's neural nets: we don't just immerse ourselves in a sea of raw numbers and absorb the correlations. We might do something like that at the perceptual level, but in scientific and philosophical thought we are able to abstract over experience and explicitly formulate hypotheses, theories, and arguments. We name patterns to form concepts, and then we reason about those concepts. We make arguments to contextualize and interpret the significance of observations.

All of these operations of human thinking involve a natural-language version of knowledge representation. But natural language is imprecise, and it doesn't scale well. It is transmitted through books and articles that pile up as information silos. I'm not saying we can or should eliminate natural language from intellectual inquiry; it will always have a role. My question is: why haven't we supplemented it with a formal knowledge representation system designed for human thinkers?
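To make the kind of tool I'm imagining concrete, here is a toy sketch - entirely my own invention, not an existing system - of claims and evidence encoded as typed relations rather than prose, so that lines of support and objection can be traversed mechanically:

```python
# Toy sketch of a formal argument graph: claims and evidence as typed
# relations, traversable by machine but written for a human inquirer.
from collections import defaultdict

triples = [
    ("continental drift", "is_supported_by", "fit of Atlantic coastlines"),
    ("continental drift", "is_supported_by", "matching fossil distributions"),
    ("continental drift", "is_contradicted_by", "lack of a driving mechanism"),
    ("lack of a driving mechanism", "is_answered_by", "mantle convection"),
]

index = defaultdict(list)
for subject, relation, obj in triples:
    index[subject].append((relation, obj))

def trace(claim, depth=0):
    """Recursively print everything bearing on a claim."""
    for relation, obj in index[claim]:
        print("  " * depth + f"{relation}: {obj}")
        trace(obj, depth + 1)

trace("continental drift")
```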

answer by Polytopos · 2020-03-10T18:37:58.957Z

Thanks to the comments and discussion, I was motivated to do more research into my own question. What I've found is that there have been some attempts to use semantic technologies for personal knowledge management (PKM).

I have not found evidence one way or the other as to whether these tools have been helpful for knowledge discovery, but they seem promising.

The main tool that would be accessible to the average user is Semantic MediaWiki, an extension to the popular MediaWiki software (which powers Wikipedia) that adds KR functionality based on semantic web technologies.
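Under the hood, these tools rest on the semantic web's subject-predicate-object triples. Here is a minimal sketch of that machinery in Python using the rdflib library (the note names and properties are invented for illustration):

```python
# Minimal sketch: personal notes stored as RDF triples and queried
# with SPARQL, the same machinery semantic wikis build on.
# Requires: pip install rdflib
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/pkm/")  # invented namespace
g = Graph()
g.bind("ex", EX)

# Notes as subject-predicate-object triples.
g.add((EX.BayesNets, EX.isA, EX.Formalism))
g.add((EX.BayesNets, EX.cites, EX.PearlCausality))
g.add((EX.ConceptGraphs, EX.isA, EX.Formalism))

# Query: every note classified as a formalism.
results = g.query("""
    PREFIX ex: <http://example.org/pkm/>
    SELECT ?note WHERE { ?note ex:isA ex:Formalism . }
""")
for row in results:
    print(row.note)
```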

Here is an article about how to set this up for PKM.

"Semantic Wikis for Personal Knowledge Management" (PDF, journal article)

- This article does a good job of outlining a general theory of how to build a semantic knowledge application for PKM. The arguments are not tied to a specific software implementation.

"Learning with Semantic Wikis" (PDF, journal article)

- I haven't read this article yet, but from the abstract it sounds generally useful.

Comments

comment by johnswentworth · 2020-03-09T17:57:51.094Z
> 2. Inability of existing KR systems to conform ergonomically to human patterns of learning and reasoning. If so, this might be due to an insufficient understanding of how to transition between informal, natural-language reasoning and formalized reasoning, or it may simply be that the chosen formalisms are not the best ones for empowering human thought.

I haven't studied knowledge representation much, but my passing impression is that this is the main problem. I suspect that KR people tried too hard to make their structures look like natural language, when in fact the underlying structures of human thought are not particularly language-shaped.

Central example driving my intuition here: causal graphs/Bayes nets. These seem to basically-correctly capture human intuition about causality. Once you know the language of causal graphs, it's really easy to translate intuition about causality into the graphical language - indicating a "knowledge representation" which lines up quite well with human reasoning. And sure enough, causal graphs have been pretty widely adopted.
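As a toy illustration of that graphical language, here is the classic rain/sprinkler/wet-grass network, with a posterior computed by brute-force enumeration (the probabilities are made up for the example):

```python
# Toy Bayes net:  rain -> sprinkler,  (rain, sprinkler) -> wet grass.
# Inference by enumerating all assignments; numbers are illustrative.
from itertools import product

P_rain = 0.2
P_sprinkler = {True: 0.01, False: 0.4}          # P(sprinkler | rain)
P_wet = {                                       # P(wet | sprinkler, rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """Joint probability, factored along the graph's edges."""
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler[rain] if sprinkler else 1 - P_sprinkler[rain]
    p_w = P_wet[(sprinkler, rain)]
    return p * (p_w if wet else 1 - p_w)

# P(rain | grass is wet): sum out the hidden variable (sprinkler).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet) = {num / den:.3f}")   # ~0.358 with these numbers
```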

On the other hand, somewhat ironically, things like concept graphs and semantic networks do a pretty crappy job of capturing concepts and the semantics of words. Try to glean the meaning of "cat" from a semantic graph, and you'll learn that it has a "tail", and "whiskers", is a "mammal", and so forth. Of course, we don't really know what any of those words mean either - just a big network of links to other strings. It would be a great tool for making a fancy Markov language model, but it's not great for actually capturing human knowledge.
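The circularity is easy to see once the network is written down - every node is just another string:

```python
# The "cat" example as a semantic network: following links never
# bottoms out in meaning, only in more undefined strings.
semantic_net = {
    "cat":    [("has", "tail"), ("has", "whiskers"), ("is_a", "mammal")],
    "mammal": [("is_a", "animal"), ("has", "fur")],
    "tail":   [("part_of", "animal")],
}

def describe(term):
    for relation, other in semantic_net.get(term, []):
        print(f"{term} --{relation}--> {other}")

describe("cat")
describe("mammal")  # the "definitions" only point to further strings
```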

comment by Polytopos · 2020-03-09T19:38:38.148Z

Interesting. Can you give some examples to illustrate how causal graphs/Bayes nets are used to aid reasoning and discovery?

I see merit in the idea that semantic networks may focus too much on the structure of language, and not enough on the structure of the underlying domain being modelled. As active thinkers, we are looking to build an understanding of the domain, not an understanding of how we talked about that domain.

Attention to language use, such as avoiding ambiguity, can sometimes be useful, especially in more abstract argumentation, but more important is being able to track all of the relationships among domain-specific entities and to organize lines of evidence.

comment by Pattern · 2020-03-11T22:25:45.340Z
> 1. Lack of economic incentives to develop high-quality, general-purpose, user-facing software for KR. These tools are too hard to use effectively in their current state to see any kind of widespread adoption outside of profit-driven business interests.

Other possibilities:

(Following quanticle's convention, I will continue the count. Hypotheses 1 and 2 are in the original post; 3 is in quanticle's answer.)


4.

It's a work in progress, but hasn't progressed to the point where:

a) ads would be useful

b) it has the cash to spend on ads (it may provide some or a lot of value, but in small domains, for a few people)

5.

The idea that some people are more visual (the eye), others more linguistic (the ear), etc., is, if not accurate in detail, then accurate in the abstract: such a variety of preferences requires a tool that does a lot of things.

6.

a) Knowledge Representation requires knowledge to be in a specific form, or captured. After data gathering takes off, so will KR.

b) Or it's a chicken-and-egg problem - knowledge isn't useful unless it's "managed well", and managing it well only pays off once there are large quantities of information.

c) Knowledge is a misnomer here - it's information, not knowledge, and knowledge is what is important.

d) A variation of b - the perfect tool here needs a lot of other functionalities integrated (thinking/information tools have a lot of benefits when combined: representation, memorization, etc.).

comment by Said Achmiz (SaidAchmiz) · 2020-03-09T06:44:20.407Z

To learn about attempts to develop user-facing “knowledge representation” software (and related) tools, read about “mind mapping” (and follow the links in the “Information mapping” sidebar).

comment by Polytopos · 2020-03-09T12:41:14.560Z

Hi Said. I'm new here; would you mind explaining what a sidebar is, and maybe providing a link or instructions to find said sidebar? Thanks.