The concept of evidence as humanity currently uses it is a bit of a crutch.

post by TheSkeward · 2019-05-20T23:09:22.893Z · LW · GW · 5 comments


Just a thought I had today. I'm sure that it's trivial to the extent that it's correct, but it's a slow work day and I've been lurking here for too long.

Superintelligent AI (or other post-human intelligence) is unlikely to use the concept of "evidence" in the same way we do. It's very hard for neural networks (including human brains) to explain what they "know". The human brain is a set of information-gathering tools plugged into various levels of pattern-recognition systems. When we say we know something, that's an entirely intuitive process. There's no manual tallying going on - the tallying is happening deep in our subconscious, pre-System 1 thinking.

The idea of scientific thinking and evidence is not gathering more information - it's throwing out all the rest of the information we've gathered. It's saying "I will rely on only these controlled variables to come to a conclusion, because I think that's more trustworthy than my intuition." We do this because our intuitions are optimized for winning tribal social dynamics and escaping tigers.

In fact, it's so hard for neural networks to explain why they know what they know that one suggested approach is a secondary network with read access to the main network, optimized solely for explaining the main network's decisions to humans.
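To make that "read access" idea concrete, here is a minimal sketch (PyTorch, all names and sizes hypothetical - the post doesn't specify an architecture) of a main classifier plus a separate explainer head that can only observe the main network's hidden activations, not change them:

```python
import torch
import torch.nn as nn

class MainNet(nn.Module):
    """The network that does the actual task."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.body(x)           # hidden activations the explainer can read
        return self.head(h), h

class ExplainerNet(nn.Module):
    """Read-only observer: trained separately, never updates MainNet."""
    def __init__(self, hidden, n_explanations):
        super().__init__()
        self.map = nn.Linear(hidden, n_explanations)

    def forward(self, h):
        # detach() gives read access without gradients flowing back
        return self.map(h.detach())

main = MainNet(n_features=10, n_classes=3)
explainer = ExplainerNet(hidden=64, n_explanations=5)

x = torch.randn(4, 10)
logits, h = main(x)
explanation_scores = explainer(h)  # scores over candidate human-legible explanations
```

The explanations here are just scores over a small fixed vocabulary of candidate reasons; the point is only the division of labor, not any particular explanation format.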

The nature of reality is such that diseases are diagnosable (or will be very soon) by neural networks with the help of a ton of uninteresting, uncompelling micro-bits of evidence, such as "people wearing this color shirt/having this color eyes/of this age-gender-race combination have a slightly higher prior for having these diseases". These things, while true in a statistical sense, don't make a compelling narrative that you could encode as Solid Diagnostic Rules (to say nothing of the way one could game the system if they were encoded that way).
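A toy illustration of the "micro-bits of evidence" point (plain Python, all numbers invented for the example): each weak signal nudges the log-odds of a diagnosis by a tiny, individually uncompelling amount, and only the aggregate is decision-relevant.

```python
import math

prior_log_odds = math.log(0.01 / 0.99)   # assumed 1% base rate for the disease

# hypothetical weak signals and their (tiny) log-likelihood-ratio contributions
weak_signals = {
    "shirt_color_blue": 0.02,
    "eye_color_brown": 0.01,
    "age_45_to_50": 0.15,
    "reported_fatigue": 0.30,
    # ... hundreds more in a real model
}

posterior_log_odds = prior_log_odds + sum(weak_signals.values())
posterior_prob = 1 / (1 + math.exp(-posterior_log_odds))
print(f"posterior probability: {posterior_prob:.3f}")
```

No single entry in that dictionary would survive as a Solid Diagnostic Rule, which is the point: the model's "reason" for its output is the whole sum, not any nameable feature.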

As an example, OpenAI Five is able to outperform top humans at Dota 2, but the programmers have no idea 'why'. They make statements like 'we had OpenAI Five run a probability analysis based only on the starting hero selection screen, and it gave itself a 96% chance of winning, so it evidently thinks this composition is very strong.' And the actual reason doesn't boil down to human-compatible narratives like "well, they've got a lot of poke and they match up well in lane", which is close to the limit of narrative complexity the human concept of 'evidence' can support.

5 comments


comment by shminux · 2019-05-21T02:47:29.954Z · LW(p) · GW(p)

I wonder if the current state of the art corresponds to the pre-conscious level of evolution, before the internal narrator and self-awareness. Maybe soon the neural networks will develop the skill of explaining (or rationalizing) their decisions.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-05-21T22:16:14.915Z · LW(p) · GW(p)

This seems pretty likely. An AI that does internal reasoning will find it useful to have its own opinions on why it thinks things, which need bear about as much relationship to their internal microscopic function as human opinions about thinking do to human neurons.

comment by ChristianKl · 2019-05-21T15:13:30.677Z · LW(p) · GW(p)

The concept of evidence that we have in Anglo-American discourse isn't universal to humanity as a whole.

If you go to a good humanities department people can tell you about knowledges that are structured quite differently.

comment by a gently pricked vein (strangepoop) · 2019-05-21T17:33:41.603Z · LW(p) · GW(p)

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.

Being able to describe what you're doing, i.e. usefully compress your existing strategies-in-use, is probably going to be helpful regardless of level of intelligence, because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.

comment by TAG · 2019-05-22T10:40:37.218Z · LW(p) · GW(p)

There's no general agreement among humans about what constitutes evidence, which is why Aumann's theorem has so little to do with reality. How can two agents be exposed to the same evidence when they don't agree on what constitutes evidence?