Hello, is it you I'm looking for?

post by Knuckels McGinty · 2020-01-28T20:56:09.191Z · LW · GW · 13 comments

This is a question post.


Hi

Sorry if diving in with my question is a breach of your etiquette, but I have a kind of burning question I was hoping some of you guys could help me with. I've been reading the core texts and clicking around but can't quite figure out if this has been covered before.

Does anyone know of any previous attempts at building a model for ranking the quality of statements? By which I mean ranking things like epistemic claims, claims about causation, and that kind of thing: something that aims to distill the complexity of the degrees of certainty and doubt we should have into something simple like a number. Really importantly, I mean something that would be universally applicable and objective (or something like it), not just based on an estimate of one's own subjective certainty (which is my understanding of Bayesian reasoning and Alvin Goldman-style social epistemology).

I've been working on something like that for a couple of years as a kind of hobby. I've read a lot on adjacent subjects (probability, epistemology, social psychology) but never found anything that seems like an attempt to do that.

I think that means I'm either a unique genius, a crazy person, or bad at describing/searching for what I'm looking for. Option 1 seems unlikely, option 2 is definitely possible, but I suspect that option 3 is the real one. Does anyone know of any work in this area they can point me towards?

Cheers - M

Answers

answer by Bucky · 2020-02-05T10:28:34.247Z · LW(p) · GW(p)

I'm not sure this is an exact match to your question but it sounds like maybe what you're looking for is something like Solomonoff induction.

In Bayes the subjectivity comes from choosing priors. Solomonoff induction includes an objective way to calculate the priors (see also Kolmogorov complexity). Unfortunately it isn't actually computable - I asked a kind of similar question [LW · GW] last year which has some answers about this.
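To make the idea concrete, here is a minimal, purely illustrative Python sketch of a complexity-weighted prior. The hypothesis names, "descriptions", and likelihoods are all made up for the example; real Solomonoff induction would sum over every program that could generate the data, which is exactly the part that isn't computable.

```python
# Toy illustration of a Solomonoff-style prior: shorter descriptions get
# higher prior probability, then we update on observed data as usual.
# Real Solomonoff induction quantifies over all programs and is uncomputable;
# the hypotheses, descriptions, and likelihoods below are invented.

hypotheses = {
    # name: (description string, P(data | hypothesis) for one head then one tail)
    "always heads": ("1", 0.0),          # predicts heads only, so the tail rules it out
    "fair coin":    ("rand 0.5", 0.25),  # 0.5 * 0.5
    "biased 0.9":   ("rand 0.9", 0.09),  # 0.9 * 0.1
}

# Prior proportional to 2^(-description length): the "objective" part.
prior = {h: 2.0 ** -len(desc) for h, (desc, _) in hypotheses.items()}
total = sum(prior.values())
prior = {h: p / total for h, p in prior.items()}

# Standard Bayesian update on the observed data.
unnormalised = {h: prior[h] * lik for h, (_, lik) in hypotheses.items()}
evidence = sum(unnormalised.values())
posterior = {h: p / evidence for h, p in unnormalised.items()}

for h in hypotheses:
    print(f"{h:12s}  prior={prior[h]:.3f}  posterior={posterior[h]:.3f}")
```

The point is just that the prior is read off the description length rather than chosen by the modeller; everything after that is ordinary Bayesian updating.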

I asked a follow-up question [LW · GW] regarding complexity whose answers were super useful to my understanding of these kinds of things - particularly the sequence [? · GW] which johnswentworth wrote.

comment by Knuckels McGinty · 2020-02-08T19:39:58.776Z · LW(p) · GW(p)

That was really interesting. Some of it was a little too technical for me, but hopefully I can spend some time learning the parts that threw me and see exactly how close it is to what I have in mind.

My first impression is that it would be the microscopic view of one part of the whole model. I actually had in mind something much more basic, but where that level of complexity could be added slowly as the overall model is built. It's a kind of never-ending project that improves its accuracy as more is added to it.

In one imaginary iteration of this, I just hire people to do that level of work for me and tell me what the answer is.

Anyway, thanks.

13 comments

Comments sorted by top scores.

comment by rsaarelm · 2020-01-30T06:30:02.492Z · LW(p) · GW(p)

Your first problem is that you need a theory of just how statements relate to the state of the world. Have you read Wittgenstein's Philosophical Investigations?

Overall, this basically sounds like analytical philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what's really going on, you end up in the same morass that AI research and modern philosophy are stuck in.

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-04T17:49:19.753Z · LW(p) · GW(p)

Thanks for the reply

I haven't read anything besides overviews of (or takes on) Wittgenstein, but if you think it's worthwhile I'll definitely give it a shot.

I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?

I really am mostly just anxious not to waste my time on things that have been done before and failed.

Replies from: ESRogs, rsaarelm
comment by ESRogs · 2020-02-05T05:52:49.097Z · LW(p) · GW(p)
I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?

You might want to take a look at the A Human's Guide to Words [? · GW] sequence. (Or, for a summary, see just the last post in that sequence: 37 Ways That Words Can Be Wrong [? · GW].)

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-08T19:46:57.688Z · LW(p) · GW(p)

I read "37 ways...". Thanks. I think I understand what you mean now.

I think those would definitely be the sorts of problems I would run into if I were to do this via a philosophy PhD (something I've thought about, but don't think I'm very likely to pursue) or by building an AI algorithm.

I think they are problems I would need to be cognizant of, but I think I have a workaround that still lets me create something valuable, even if not something that would satisfy philosophers.

comment by rsaarelm · 2020-02-05T17:34:11.824Z · LW(p) · GW(p)

The problem is that we think statements have a fairly straightforward relation to reality because we can generally make sense of them quite easily. In reality, that ease comes from a lot of hidden work our brain does, being smart on the spot every time it needs to fit a given sentence to the given state of reality. Nobody really appreciated this until people started trying to build AIs that do anything similar and repeatedly ended up with systems that couldn't distinguish between realistically plausible statements and incoherent nonsense.

I'm not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onwards, despite thousands of extremely clever people putting their minds to it. The sequences ESRogs suggests in the sibling reply also deal with stuff like this.

comment by ESRogs · 2020-02-05T06:08:35.408Z · LW(p) · GW(p)
I think that means I'm [...] bad at describing/ searching for what I'm looking for.

One thing that might help, in terms of understanding what you're looking for, is -- how do you expect to be able to use this "model of ranking"?

It's not quite clear to me whether you're looking for something like an algorithm (somebody could code it up as a computer program, you feed in sentences, and it spits out scores), something more like a framework or rubric (the work of understanding and evaluating sentences is still done by people, who use the framework/rubric as a guide to how they rate the sentences), or something else.

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-08T20:22:26.041Z · LW(p) · GW(p)

Definitely the "framework or rubrik" option. More like a rubrik than anything else, but with some fun nuance here and there. Work would be done by humans but all following the same rules.

There are a number of ways that I would like to use it in the future, but in the most immediate and practical sense, what I'm working on is a plan to create internet content that answers people's questions (via Google, Siri, Alexa, etc.) but makes declarative statements about the quality of the information used to create those answers.

So for example, right now (02/08/20) if somebody asks Google "does the MMR vaccine cause autism?" you get this page:

https://www.google.com/search?q=does+the+MMR+vaccine+cause+autism%3F&oq=does+the+MMR+vaccine+cause+autism%3F&aqs=chrome..69i57j0.9592j1j8&sourceid=chrome&ie=UTF-8

Which is a series of articles from various sites all pointing you in the direction of the right answer, but ultimately dancing around it and really just inviting you to make up your own mind.

What I would want to do is create content that directly answers even difficult questions, and trades some of the satisfaction of a direct answer for the intellectual work of making you think about the quality rating we give it.

Creating a series of rules that gets to the heart of how the quality of evidence varies for different types of claims is obviously quite difficult. I think I've found a way to do it, but I would really like to know if it's been tried before and failed for some reason, or if someone has a better or faster way than mine.

I think that my way around the problems mentioned in the above replies is just conceding from the start that my model is not and can never be a perfect representation of the world. However, if it's done well enough it could bring a lot of clarity to a lot of problems.

Replies from: ESRogs
comment by ESRogs · 2020-02-08T21:13:15.628Z · LW(p) · GW(p)

Ah! It's much clearer to me now what you're looking for.

Two things that come to mind as vaguely similar:

1) The habit of some rationalist bloggers of flagging claims with "epistemic status". (E.g. here or here)

2) Wikipedia's guidelines for verifiability (and various other guidelines that they have)

Of course, neither is exactly what you're talking about, but perhaps they could serve as inspiration.

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-09T19:47:43.703Z · LW(p) · GW(p)

I'm glad I managed to finally be understandable. Part of the problem is that my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet. The other problem is that I'm frequently straying into academic territories I don't know that well, so I think I tend to use words to describe it that are probably not the correct ones.

Thanks for those. It was interesting to see how some other people have approached the problem, and if nothing else it tells me that other people are trying to take the epistemology of everyday discourse seriously, so hopefully there will be an appetite for my version.

Replies from: ESRogs
comment by ESRogs · 2020-02-09T21:29:18.275Z · LW(p) · GW(p)
my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet

FWIW, it may be worth keeping in mind the Silicon Valley maxim that ideas are cheap, and execution is what matters. In most cases you're far more likely to make progress on the idea if you get it out into the open, especially if execution at all depends on having collaborators or other supporters. (Also helpful to get feedback on the idea.) The probability that someone else successfully executes on an idea that you came up with is low.

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-10T02:36:29.133Z · LW(p) · GW(p)

I've heard similar things and agree completely. It's just difficult to fight the impulse to bury away the details!

comment by Pattern · 2020-01-29T20:34:55.729Z · LW(p) · GW(p)

Probability?

Replies from: Knuckels McGinty
comment by Knuckels McGinty · 2020-02-04T17:50:15.600Z · LW(p) · GW(p)

Yeah, I suppose in a way it is!