Questions about AGI's Importance

post by curi · 2017-10-31T20:50:22.094Z · LW · GW · Legacy · 116 comments

Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?

comment by Brendan Long (korin43) · 2017-10-31T22:15:25.048Z · LW(p) · GW(p)

I suspect this has been answered on here before in a lot more detail, but:

  • Evolution isn't necessarily trying to make us smart; it's just trying to make us survive and reproduce
  • Evolution tends to find local optima (see: obviously stupid designs like how the optic nerve works)
  • We seem to be pretty good at making things that are better than what evolution comes up with (see: no birds on the moon, no predators with natural machine guns, etc.)

Also, specifically in AI, there is some precedent for only a few years separating "researchers get AI to do something at all" from "this AI is better at its task than any human who has ever lived". Chess crossed that line a while ago. It just happened with Go. I suspect we're crossing that point with image recognition now.

Replies from: curi
comment by curi · 2017-10-31T22:29:20.324Z · LW(p) · GW(p)

Do you expect AGI to be qualitatively or quantitatively better at thinking than humans?

Do you think there are different types of intelligence? If so, what types? And would AGI be the same type as humans?

EDIT: By "intelligence" I mean general intelligence.

comment by curi · 2017-11-10T18:47:37.175Z · LW(p) · GW(p)

I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.

Isn't it convenient that I don't have to care about these infinitely many theories?

why not?

Why not what?

Why don't you have to care about the infinity of theories?

you can criticize categories, e.g. all ideas with feature X

How can you know that every single theory in that infinity has feature X? or belongs to the same category?

It depends which infinity we're talking about. Suppose the problem is persuading LW ppl about Paths Forward and you say "Use a shovel". That refers to infinitely many different potential solutions. However, they can be criticized as a group by pointing out that a shovel won't help solve the problem. What does a shovel have to do with it? Irrelevant!

This criticism only applies to the infinite category of ideas about shovels, not everything. I'm able to criticize that whole infinite group as a unit because it was brought up as a unit, and defined according to a particular feature shared by all the theories in the group (that they involve trying to solve the problem specifically with a shovel).

The criticism is also contextual. It relates to using shovels for this particular problem. But shovels still help with some other problems. The context the criticism works in is broader than the single problem about paths forward persuasion of LW ppl – e.g. it also applies to anti-induction persuasion of Objectivists. This is typical – the point has some applicability to multiple contexts, but not universal applicability.

If you instead said "Do something" then you'd be bringing up a different infinity with more stuff in it, and I'd have a different reply: "Do what? That isn't helpful because you're pointing me to a large number of non-solutions without pointing out any solution. I agree there is a solution contained in there, somewhere, but I don't know what it is, and you don't seem to either, so I can't use it currently. So I'm stuck with the regular options like doing a solution I do know of or spending more time looking for solutions."

I will admit that there may be a solution with a shovel that actually would work (one way to get this is to take some great solution and then tack on a shovel, which is not optimal but may still be way better than anything we currently know of). So my criticism doesn't 100% rule shovels out. However, it rules shovels out for the time being, as far as is known, pending a new idea about how to make a shovel work. We can only act on solutions we know of, and I have a criticism of the shovel category of ideas as we currently understand it. Our current understanding is that shovels help us dig, and can be used as weapons, and can be salvaged for resources like wood and metal, and can be sold, but that just vaguely saying "use a shovel somehow" does not help me solve a problem of intellectually persuading people.

you can't observe entities

My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat's nervous system also makes perfectly good entities out of its sensory stream regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate.

I don't think humans think like rats, and I propose we don't debate animal "intelligence" at this time. I'll try to speak to the issue in a different way.

I think humans have enough control over their observing that they don't get stuck and unable to make progress due to built-in biases and errors. For example, people can consciously think "that looked like a dog at first glance, but actually it's a painting of a dog". So you can put thought into what the entities are. To the extent you have a default, you can partly change what that default is, and partly reinterpret it after doing the observation. And you're capable of observing in a sufficiently non-lossy way to get whatever information you need (at least with tools like microscopes for some cases). You aren't just inherently, permanently blind to some ways of dividing up the world into entities, or some observable things.

And whatever default your genes gave you about entities is not super reliable. It may be pretty good, but it's very much capable of errors. So I'll make a weaker claim: you can't infallibly observe entities. You need to put some actual thought into what the entities are and aren't, and the inductivist perspective doesn't address this well. (As to rats, they actually start making gross errors in some situations, due to their inability to think like a human to deal with situations they weren't evolved for.)

you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)

or not

Or not? Prediction matters, but entities are an awfully convenient way to make predictions.

but when two ways of thinking about entities (or, a third option, not thinking about entities at all) give identical predictions, then you said it doesn't matter which you do? one entity (or none) is as good as another as long as the predictions come out the same?

but i don't think all ways of looking at the world in terms of entities are equally convenient for aiding us in making predictions (or for some other important things like coming up with new hypotheses!)

Replies from: Lumifer
comment by Lumifer · 2017-11-10T19:52:33.790Z · LW(p) · GW(p)

Huh, that shaft ended in a loud screech and a clang... Let's drop another shaft!

Why don't you have to care about the infinity of theories?

I don't have to care about the infinity of theories because if they all make exactly the same predictions, I don't care that they are different.

This is highly convenient because I am, to quote an Agent, "only human" and humans are not well set up to deal with infinities.

they can be criticized as a group by pointing out that a shovel won't help solve the problem

How do you know that without examining the specific theories?

We can only act on solutions we know of, and I have a criticism of the shovel category of ideas as we currently understand it.

Right, but the point is that you do not have a solution at the moment and there is an infinity of theories which propose potential shovel-ready solutions. You have no basis for rejecting them because "I don't know of a solution with a shovel" -- they are new-to-you solutions; that's the whole point.

To the extent you have a default, you can partly change what that default is, and partly reinterpret it after doing the observation.

Yes, of course, but you were claiming there are no such things as observations at all, merely some photons and such flying around. Being prone to errors is an entirely different question.

one entity (or none) is as good as another as long as the predictions come out the same?

Predictions do not come out of nowhere. They are made by models (= imperfect representations of reality) and "entity" is just a different word for a "model". If you don't have any entities, what exactly generates your predictions?

Replies from: curi
comment by curi · 2017-11-10T20:33:02.082Z · LW(p) · GW(p)

I don't find these replies very responsive. Are you trying to understand what I'm getting at, or just writing local replies to a selection of my points? This is not the first time I've tried to write some substantial explanation and gotten little engagement from you (IMO).

Replies from: Lumifer
comment by Lumifer · 2017-11-10T21:08:07.849Z · LW(p) · GW(p)

Oh, I understand what you are getting at. I just think that you're wrong.

I'm writing local replies because fisking walls of text gets tedious very very quickly. There is no point in debating secondary effects when it's pretty clear that the source disagreement is deeper.

Replies from: curi
comment by curi · 2017-11-10T21:14:43.880Z · LW(p) · GW(p)

I'm going to end the discussion now, unless you object. I'm willing to consider objections.

I'm stopping for a variety of reasons, some of which I talked about previously, like your discussion limitations regarding references. I think you don't understand and aren't willing to do what it takes to understand.

If we stop and you later want to get these issues addressed, you would be welcome to post to the FI forum: http://fallibleideas.com/discussion-info

Replies from: Lumifer
comment by Lumifer · 2017-11-10T21:20:59.666Z · LW(p) · GW(p)

I think you don't understand and aren't willing to do what it takes to understand.

s/understand/be convinced/g and I'll agree :-)

Was a fun ride!

comment by ImmortalRationalist · 2017-11-02T18:00:08.274Z · LW(p) · GW(p)

Here is a somewhat relevant video.

comment by whpearson · 2017-11-01T12:21:21.732Z · LW(p) · GW(p)

Has anyone here put much thought into parenting/educating AGIs?

I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.

I wrote a bit on my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.

comment by siIver · 2017-11-01T09:32:32.932Z · LW(p) · GW(p)

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10000 times faster (arbitrary conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
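
A rough back-of-envelope sketch, in Python, of where figures like that come from (the rates below are order-of-magnitude assumptions, not measurements):

```python
# Back-of-envelope hardware comparison. All figures are rough
# order-of-magnitude assumptions, not measurements.
SPEED_OF_LIGHT = 3e8          # m/s
AXON_SIGNAL_SPEED = 3e2       # m/s -- "a millionth the speed of light"
NEURON_FIRING_RATE = 1e2      # Hz  -- "firing at 100 Hz"
TRANSISTOR_SWITCH_RATE = 1e9  # Hz  -- assumed ~1 GHz electronic switching

print(f"signal speed gap:   {SPEED_OF_LIGHT / AXON_SIGNAL_SPEED:.0e}x")           # ~1e6
print(f"switching rate gap: {TRANSISTOR_SWITCH_RATE / NEURON_FIRING_RATE:.0e}x")  # ~1e7
```

Which ratio matters depends on the workload, but any of them alone already clears the 10000x bound.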

Replies from: Lumifer, curi
comment by Lumifer · 2017-11-01T15:57:45.253Z · LW(p) · GW(p)

AI will be quantitatively smarter because it'll be able to think over 10000 times faster

My calculator can add large numbers much, much faster than I. That doesn't make it "quantitatively smarter".

an algorithm far better than evolution

Given that no one has any idea about what that algorithm might look like, statements like this seem a bit premature.

Replies from: Tehuti, curi
comment by Tehuti · 2017-11-05T20:26:07.341Z · LW(p) · GW(p)

My calculator can add large numbers much, much faster than I. That doesn't make it "quantitatively smarter".

Your brain actually performs much more analysis each second than any computer we have:

At the time of this writing, the fastest supercomputer in the world is the Tianhe-2 in Guangzhou, China, and has a maximum processing speed of 54.902 petaFLOPS. A petaFLOP is a quadrillion (one thousand trillion) floating point calculations per second. That’s a huge amount of calculations, and yet, that doesn’t even come close to the processing speed of the human brain. In contrast, our miraculous brains operate on the next order higher. Although it is impossible to precisely calculate, it is postulated that the human brain operates at 1 exaFLOP, which is equivalent to a billion billion calculations per second.

https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html

Of course the brain is structurally very different from a CPU or a GPU, but its overall processing power is still far greater.
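
Taking the quoted figures at face value, the implied gap is only about 18x (a quick sanity check; both numbers are rough estimates):

```python
# Ratio implied by the two quoted figures (both rough estimates).
tianhe2_pflops = 54.902  # Tianhe-2 peak, in petaFLOPS (quoted above)
brain_pflops = 1000.0    # 1 exaFLOP = 1000 petaFLOPS (quoted estimate)
print(f"{brain_pflops / tianhe2_pflops:.1f}x")  # ~18.2x
```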

comment by curi · 2017-11-01T21:22:22.065Z · LW(p) · GW(p)

I think AGIs will be built by evolution, and use evolution for their own thinking, because I think human thinking uses evolution (replication with variation and selection of ideas). I don't think any other method of knowledge creation is known, other than evolution.

Replies from: Lumifer
comment by Lumifer · 2017-11-02T00:37:23.251Z · LW(p) · GW(p)

I don't think any other method of knowledge creation is known, other than evolution.

The scientific method doesn't look much like evolution to me. At a simpler level, things like observation and experimentation don't look like it, either.

Replies from: username2, curi
comment by username2 · 2017-11-10T13:23:40.690Z · LW(p) · GW(p)

I went down the rabbit hole of your ensuing discussion and it seems to have broken LW, but it didn't look like you were very convinced yet. Thanks for taking one for the team.

Replies from: Lumifer
comment by Lumifer · 2017-11-10T15:39:26.997Z · LW(p) · GW(p)

Too deep we delved there, and woke the nameless fear...

I suspect there is an implicit max thread depth and once it's reached, LW's gears and cranks (if only!) screech to a halt.

comment by curi · 2017-11-02T00:48:54.072Z · LW(p) · GW(p)

The scientific method involves guesses (called "hypotheses") and criticism (including by experimental tests). That follows the pattern of evolution (exactly, not by analogy): replication with variation (guessing), and selection (criticism).

Replies from: Lumifer, Elo
comment by Lumifer · 2017-11-02T16:10:00.544Z · LW(p) · GW(p)

That follows the pattern of evolution (exactly, not by analogy)

Not at all. Hypothesis generation doesn't look like taking the current view and randomly changing one element in it. More importantly, science is mostly teleological and evolution is not.

But let's take a trivial example. Let's say I'm walking by a food place and I notice a new to me dish. I order it, eat it, and decide that it's tasty. I have acquired knowledge. How's that like evolution?

Replies from: curi
comment by curi · 2017-11-02T17:59:39.991Z · LW(p) · GW(p)

the way you decide it's tasty is by guessing it's tasty, and guessing some other things, and criticizing those guesses, and "it's tasty" survives criticism while its rivals don't.

lots of this is done at an unconscious level.

it has to be this way b/c it's the only known way of creating knowledge that could actually work. if you find it awkward or burdensome, that doesn't make it impossible – which puts it ahead of its rivals.

Replies from: Lumifer
comment by Lumifer · 2017-11-02T18:23:20.381Z · LW(p) · GW(p)

The word you're looking for is "testing". I test whether that thing is tasty.

Testing is not the same thing as evolution.

it has to be this way b/c it's the only known way of creating knowledge that could actually work

That's an entirely circular argument.

Replies from: curi
comment by curi · 2017-11-02T18:30:30.341Z · LW(p) · GW(p)

Evolution is an abstract pattern which makes progress via the correction of errors using selection. If something fits the pattern, then it's evolution.

Would you agree with something like: if induction doesn't work, and CR does, then it's a good idea to accept CR? Even if you find it counter-intuitive and awkward from your current perspective?

Replies from: Lumifer
comment by Lumifer · 2017-11-02T20:01:39.707Z · LW(p) · GW(p)

Evolution is an abstract pattern which makes progress via the correction of errors using selection

I think we might be having terminology problems -- in particular I feel that you stick the "evolution" label on vastly broader things.

First, the notion of progress. Evolution doesn't do progress, not being teleological. Evolution does adaptation to the current environment. A decrease in complexity is not an uncommon event in evolution, for example. A mass die-off is not an uncommon event, either.

Second, evolution doesn't correct "errors". Those are not errors, those are random exploratory steps. A random walk. And evolution does not correct them, it just kills off those who misstep (which is 99.99%+ of steps).

if induction doesn't work, and CR does, then it's a good idea to accept CR?

Sure. Please provide empirical evidence.

And I still don't understand what's wrong with plain-vanilla observation as a way to acquire knowledge.

Replies from: curi
comment by curi · 2017-11-02T20:27:18.738Z · LW(p) · GW(p)

killing off a misstep is a way of getting rid of that error. the stuff that doesn't work is probabilistically removed from later generations – so the effect there is error correction. (experimenting itself isn't a mistake, but some of the experiments work badly – error).

Evolution adapts, yes. Adapting something to solve a particular problem = creating knowledge of how to solve that problem. Biological evolution is limited in what problems it solves but still powerful enough to create human intelligence b/c of the ability for a single piece of knowledge to solve multiple problems.

abstractly, guesses and criticism fits the pattern of evolution: there are generations of ideas. the ideas in the next generation aren't purely random, they retain some things that worked in the previous generation (to some extent we're seeing variation instead of something totally separate), and then criticism is selection. if you keep applying the same criticism over and over, you'll get ideas adapted to not being refuted by that criticism.
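
to make the abstract pattern concrete, here's a minimal sketch of that loop (the functions `vary` and `survives_criticism` are hypothetical stand-ins for guessing and criticism):

```python
import random

def vary(idea: str) -> str:
    """Produce a variant that retains most of the parent idea."""
    return idea + random.choice(["-a", "-b", "-c"])

def survives_criticism(idea: str) -> bool:
    """Stand-in for critical arguments / experimental tests."""
    return not idea.endswith("-c")  # toy criticism: refute one family of ideas

ideas = ["guess-1", "guess-2"]
for generation in range(3):
    candidates = ideas + [vary(i) for i in ideas]             # replication + variation
    ideas = [i for i in candidates if survives_criticism(i)]  # selection by criticism
print(ideas)  # survivors are adapted to not being refuted by that criticism
```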

Please provide empirical evidence.

Our disagreement is about philosophy.

And I still don't understand what's wrong with plain-vanilla observation as a way to acquire knowledge.

What do you observe (observation is lossy and there are many choices about where to focus your attention), and then what do you learn from it? Any set of observations fits infinitely many patterns.

Replies from: Lumifer
comment by Lumifer · 2017-11-02T20:41:01.224Z · LW(p) · GW(p)

abstractly, guesses and criticism fits the pattern of evolution

I still don't think so, but as I mentioned it's merely a terminology problem: you are using the word "evolution" in an unexpected way.

Please provide empirical evidence.

Our disagreement is about philosophy.

Ah, well then. In this case I probably should inform you that your mistakes are due to the invisible dragon in my garage. When he gets gas, his dreams are troubled and seep into the minds of humans, corrupting their epistemology. See, he is a Philosophical Dragon.

What do you observe ... and then what do you learn from it?

I observe a rock and learn that there is a rock in front of me.

Replies from: curi
comment by curi · 2017-11-02T21:43:25.836Z · LW(p) · GW(p)

why did you learn there was a rock in front of you, instead of an alien that looks like a rock?

do you, perhaps, have a criticism of the alien suggestion?

Replies from: Lumifer
comment by Lumifer · 2017-11-03T00:50:40.706Z · LW(p) · GW(p)

I cannot guarantee that it's not an alien that looks like a rock, but my priors insist that it's highly improbable.

do you, perhaps, have a criticism of the alien suggestion?

Me, no, but you might want to talk to that chap over there, William of Ockham...

Replies from: curi
comment by curi · 2017-11-03T04:09:12.473Z · LW(p) · GW(p)

so you prefer a dogmatic prior over criticisms which are themselves exposed to criticism?

Replies from: Lumifer
comment by Lumifer · 2017-11-03T14:22:40.721Z · LW(p) · GW(p)

a dogmatic prior

How is it dogmatic when a prior's sole purpose in life is literally to be updated, to change?

over criticisms

Which criticisms? Where do they come from? Who makes them and for what reason?

Replies from: curi
comment by curi · 2017-11-03T17:34:22.075Z · LW(p) · GW(p)

How is it dogmatic when a prior's sole purpose in life is literally to be updated, to change?

not by critical arguments.

Which criticisms? Where do they come from? Who makes them and for what reason?

humans make critical arguments, like the ones in this discussion.

Replies from: Lumifer
comment by Lumifer · 2017-11-03T19:38:34.628Z · LW(p) · GW(p)

So we started here:

I observe a rock and learn that there is a rock in front of me.

There are just two of us here, me and the rock. If there are no humans around to make criticisms, I cannot acquire knowledge?

not by critical arguments

If these critical arguments get to count as evidence, yes, by them, too. If they don't, well, that raises interesting questions.

Replies from: curi
comment by curi · 2017-11-03T20:39:46.352Z · LW(p) · GW(p)

There are just two of us here, me and the rock. If there are no humans around to make criticisms, I cannot acquire knowledge?

You are a human who is present and can criticize.

If these critical arguments get to count as evidence, yes, by them, too. If they don't, well, that raises interesting questions.

You're defining "evidence" differently than I am. I think evidence refers to what you might call empirical evidence. How do you incorporate critical arguments into probability updating?

Replies from: Lumifer
comment by Lumifer · 2017-11-04T00:24:11.043Z · LW(p) · GW(p)

You are a human who is present and can criticize.

But I don't do that. My eyes send some information to my brain, my brain does, basically, pattern-matching and says "looks like a rock". Another part of the brain runs a sanity check ("Would seeing a rock be reasonable here? Yes.") and I'm done.

In particular, I do NOT generate a large number of hypotheses about what that thing might be and internally criticize them.

How do you incorporate critical arguments into probability updating?

Easily enough. Valid critical arguments tend to point to empirical evidence which contradicts the hypothesis. Other than that, the only valid arguments that come to mind are those which demonstrate incoherency or internal contradictions.
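
For concreteness, a minimal sketch of that kind of update, where a criticism enters as evidence via a likelihood ratio (the numbers are purely illustrative assumptions):

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
prior = 0.9            # P(H): e.g. "that thing is a rock"
p_e_given_h = 0.01     # the contradicting evidence is unlikely if H is true
p_e_given_not_h = 0.5  # ...but unsurprising otherwise

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(posterior)  # ~0.15 -- the hypothesis takes a large hit
```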

Replies from: curi
comment by curi · 2017-11-04T01:36:38.502Z · LW(p) · GW(p)

We have massive philosophical differences. I think you're wrong in important ways and that your school of thought has been refuted by literature it hasn't answered (by e.g. Popper and Deutsch).

Are you interested in resolving this in a serious, thorough way to a conclusion? I understand this would take a large effort by each of us.

Replies from: Lumifer
comment by Lumifer · 2017-11-05T00:12:30.685Z · LW(p) · GW(p)

Are you interested in resolving this in a serious, thorough way to a conclusion?

Depends on what "resolving" means. If you have in mind pinpointing the precise issues from which our disagreement stems, sure. But I don't think it would take a large effort.

On the other hand, if what you have in mind is teaching me the proper way to do philosophy, that's much more problematic...

Replies from: curi
comment by curi · 2017-11-05T00:20:13.216Z · LW(p) · GW(p)

i mean figuring out the disagreements and discussing them and resolving the disagreements. actually figuring out which positions are correct and why. not just agreeing to disagree.

Replies from: Lumifer
comment by Lumifer · 2017-11-05T02:46:58.105Z · LW(p) · GW(p)

I suspect we have disagreements about what does "correct" mean and what criteria of correctness we can use to establish it :-)

But we can start by figuring out the precise questions to which we answer differently. Do you have any guesses?

Replies from: curi
comment by curi · 2017-11-05T06:23:33.693Z · LW(p) · GW(p)

You believe Bayesian Epistemology and I believe Critical Rationalism. They disagree about e.g. induction, empiricism, instrumentalism.

Replies from: Lumifer
comment by Lumifer · 2017-11-05T06:57:46.176Z · LW(p) · GW(p)

Labels aren't terribly useful.

Let's start with the basics. We'll probably agree that external/objective reality exists. That we can gain some knowledge of that reality, and that this knowledge cannot be perfect. So far so good?

Thus we have reality and we have imperfect models of this reality in our heads. What happens when we have multiple models for the same piece of reality?

Replies from: curi
comment by curi · 2017-11-05T08:02:13.496Z · LW(p) · GW(p)

Labels aren't terribly useful.

Why not? They have meanings which people familiar with the field have substantial convergence about.

Let's start with the basics. We'll probably agree that external/objective reality exists. That we can gain some knowledge of that reality, and that this knowledge cannot be perfect. So far so good?

Yes.

Thus we have reality and we have imperfect models of this reality in our heads.

Yes, and these models are not merely about prediction.

What happens when we have multiple models for the same piece of reality?

Critical arguments.

Replies from: Lumifer
comment by Lumifer · 2017-11-05T22:16:52.506Z · LW(p) · GW(p)

Why not?

Because most discussions suffer from the problem of different people understanding the same word differently. This is especially pronounced for labels (aka shortcuts to complicated concepts).

Critical arguments.

Hold on. First, is it acceptable to have multiple models at the same time? Do you have to declare one of them the best? It's not uncommon to have many models none of which you can falsify at the moment; how do you sort them out?

Replies from: curi
comment by curi · 2017-11-05T23:06:21.172Z · LW(p) · GW(p)

You can "have" multiple models in the sense of knowing about them and being able to use them if you wanted to.

But if your N models all contradict one another, then at least N-1 of them are wrong. So you shouldn't simultaneously believe 2+ of them are true.

You can always have a non-refuted idea about how to proceed in life (with low enough resource cost, not with e.g. infinite time). This stuff is covered at length but is complicated to learn. Are you interested in doing things like reading a bunch and discussing it as you go along so you can learn it?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T04:07:57.354Z · LW(p) · GW(p)

you shouldn't simultaneously believe 2+ of them are true

Is "true" a binary value or you can have fractions? Is it possible for a model to be X% true?

Also, you have N models which contradict somewhere (otherwise they would be identical). You can't falsify any of them at the moment. How do you go about selecting between them?

Are you interested in doing things like reading a bunch and discussing it as you go along so you can learn it?

No. As I pointed out before, I am not interested in being taught.

Replies from: curi
comment by curi · 2017-11-06T04:58:29.344Z · LW(p) · GW(p)

Is "true" a binary value or you can have fractions? Is it possible for a model to be X% true?

Binary.

Also, you have N models which contradict somewhere (otherwise they would be identical). You can't falsify any of them at the moment. How do you go about selecting between them?

I said we have answers to this but they are complicated, and you said you don't want to read enough to understand them. I don't know why you're repeating the question. Do you disbelieve me that understanding it in a few paragraphs of forum discussion is unrealistic?

I'm totally open to discussing approaches to resolving disagreement, but I'm not open to you simply ignoring my suggestions about how to proceed and then trying to proceed in a way I don't think will work without saying why I'm mistaken about it being a bad approach. I'm also open to discussing where it's worth spending time and why, and how to decide that, and addressing skepticism. One approach is to start reading things and stop at the first thing you think is a mistake or have a question about, then comment. If you think you find a mistake, you only read more if it's fixed or you then discover you were mistaken about the mistake.

No. As I pointed out before, I am not interested in being taught.

What if you're mistaken? Is the plan to stay mistaken, or is there a way to become less wrong? Do you have some kind of alternative you think is better which lets you learn all the important things while e.g. avoiding reading?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T06:23:36.413Z · LW(p) · GW(p)

Binary.

That's interesting. We've agreed that all models are imperfect representations of reality. Why are some imperfect models true? From a certain point of view all of them are false.

Do you disbelieve me that understanding it in a few paragraphs of forum discussion is unrealistic?

Why, yes, I do. If you can't concisely draw at least the outlines of your position, I might even disbelieve that you understand your own views.

simply ignoring my suggestions

You have reading comprehension problems. I didn't ignore them -- I explicitly said I don't agree to them. Let me repeat: I am not interested in being assigned a reading list.

What if you're mistaken?

That's quite possible. But as I noted, reading comprehension is useful: I am quite interested in learning, I am quite uninterested in being taught, at least in this context.

Replies from: curi
comment by curi · 2017-11-06T06:32:54.806Z · LW(p) · GW(p)

Why are some imperfect models true?

I didn't say that they are.

You didn't ask for an outline, you asked for an answer. Those are different things.

It's hard to give outlines of solutions to people who are unfamiliar with the framework your solution is designed within, and who don't want to put in effort. Do you understand that problem? I also don't see the point of trying to write custom material for you (under the extra constraint of keeping it very short, and while having very limited info about you to customize with), when you don't want to read the canonical stuff, and I don't expect you to still be speaking to me in a few days (because of repeated indicators of hostility and disinterest).

Under what circumstances would you learn from or answer David Deutsch on epistemology?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T15:37:35.057Z · LW(p) · GW(p)

I didn't say that they are.

So what is it that you are saying?

All models are imperfect. If some are true, my question stands. If none are true, I don't see any use for the concept of "true".

Do you understand that problem?

No, I don't. If you are unable to explain your position other than by saying "Go read the book, it explains everything", I have an inclination to think you yourself don't understand what you are trying to say.

I don't expect you to "write custom material". I expect you to be able to hold a conversation where you are can put forward your views in a clear and concise manner.

answer David Deutsch on epistemology

The thing is, if I want to go read Popper or Deutsch, I can go read Popper or Deutsch. I don't see what you will be able to add to my reading -- I can do it myself.

Replies from: curi
comment by curi · 2017-11-06T18:25:02.778Z · LW(p) · GW(p)

No, I don't. If you are unable to explain your position other than by saying "Go read the book, it explains everything", I have an inclination to think you yourself don't understand what you are trying to say.

Would that be your belief if I wrote the book? If I was in the acknowledgements of the book? If my best friend wrote it and I'd discussed the material at length with him? If the author was a fan of mine? You seem to be trying to judge credentials in some way without saying where the lines are, and without asking any questions about mine.

And I didn't say go read the book it explains everything, I said I don't want to rewrite the book for you, so start reading it and reply when you have your first criticism or question – just as if it was a forum post.

The thing is, if I want to go read Popper or Deutsch, I can go read Popper or Deutsch. I don't see what you will be able to add to my reading -- I can do it myself.

You will have comments when you read – questions and criticisms – which we can discuss as they come up. That's different than reading it alone.

Why do you want me to rewrite canonical material? Are you going to refuse to read any links or references of any kind, ever? What are the rules for when you do read those?

Why don't you go read DD/Popper, with or without me? Have you answered them? Have you any reference answering them? If not, why leave outstanding criticisms unanswered? Isn't that problematic?

The point is, you seem to draw some important distinction between 1) material I take responsibility for but didn't personally write (maybe you're assuming I won't take responsibility for things I reference the same as if I wrote them? I will.) 2) material I wrote in the past. 3) material I wrote specifically for this conversation. We have a massive archive of writing, some of it very polished. It refutes various claims LW makes that you seem to believe. You haven't answered it. You don't seem to know of anything that answers it. Yet you aren't interested. Under what circumstances would you be interested?

If you want material to be customized for you in some way, i don't know what way it is. If you read a little canonical stuff and said "This isn't working for me b/c it targets audience X and I have trait Y" then I could help bridge that gap for you. But you haven't expressed any objection of that type.

Replies from: Lumifer
comment by Lumifer · 2017-11-06T18:58:26.510Z · LW(p) · GW(p)

I looked you up. That clarifies a lot:

I’m a philosopher. ... I sell educational philosophy material and philosophy consulting.

All problems can be solved by knowing how. I tell you how. I figure out your problems and their solutions. I help you learn anything you’d like to learn. ... Your life could be better. I can help.

Oh, and LOL at

I have expert knowledge in parenting/education, physics, economics, evolution, psychiatry, social dynamics, relationships, business, politics, and some parts of history.

So, Mr. Expert On Everything (and "world class at several computer games"), I am sorry but I'm looking neither for a philosophy tutor nor for a life mentor. You want to teach me: I do not desire to be taught.

We can talk about interesting problems, e.g. in epistemology, but if your position is that reading your favourite book has to be the beginning of all discussions, we're not going to get anywhere.

Would that be your belief if I wrote the book?

Yes. Inability to clearly and succinctly formulate the main tenets points to a lack of understanding. I don't care if you're friends with Deutsch and spent a lot of time chatting with him.

Have you answered them? Have you any reference answering them?

"Answering" Popperianism requires a book-length effort at least. I don't see any reason for me to spend that effort. As to references, Popper published many decades ago. Since the entire world hasn't converted to his views, I would expect to find a lot of references which disagree with him and Deutsch. Surely you're not arguing that there are none?

you seem to draw some important distinction

No, I do not. I'm attempting to hold a small, local, mostly self-contained conversation about epistemology where we can build certain structures out of certain well-defined words and see if they fail under stress. You want to turn it into an educational "read the textbook" session.

refutes various claims LW makes that you seem to believe

So, be specific. Which claims do you think I believe? Please list and refute.

As to "a massive archive of writing", yes, indeed we do. Much of it disagrees with Popper and Deutsch. So what?

If you want material to be customized for you

Sigh. Let me repeat once again, in caps and bold:

I DO NOT WANT TO BE TAUGHT BY YOU.

Replies from: curi
comment by curi · 2017-11-06T19:10:39.481Z · LW(p) · GW(p)

When I rewrite canonical material for this discussion, what should I change from the original? How should it differ from copy/pasting passages? Should I just paste stuff from sources and not tell you it's pasted, and then you'll engage with it?

As to "a massive archive of writing", yes, indeed we do. Much of it disagrees with Popper and Deutsch. So what?

Where is the argument that CR is mistaken? CR provides arguments that Bayesian Epistemology is mistaken.

"Answering" Popperianism requires a book-length effort at least. I don't see any reason for me to spend that effort.

Has anyone done it? If not, do you see a problem there? If so, can you give a reference that you will take responsibility for?

Surely you're not arguing that there are none?

I claim none of the existing criticism of CR is correct. I take it you don't know of any that's correct, but wish to ignore the matter anyway. Why? Under what circumstances do you think arguments should be answered instead of ignored? Only when popular?

Laughing at me is rude and a non-argument. Yelling is also rude.

You seem to be hostile to the idea of discussing methodology before discussing a particular topic, even though we disagree about methodology. Do you think discussion methodology is unimportant and boring?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T19:35:39.259Z · LW(p) · GW(p)

When I rewrite canonical material for this discussion, what should I change from the original?

I don't expect you to be a copy-and-paste bot, I expect you to be able to hold a conversation. I don't particularly care whether you quote, modify, or invent from scratch. You have been remarkably resistant to giving a concise formulation of your views relevant to our discussion -- if you feel that nothing less than Deutsch's whole book will do, well, we have a problem.

I claim none of the existing criticism of CR is correct.

Of course you do :-)

Under what circumstances do you think arguments should be answered instead of ignored?

Notice how you have NOT presented any arguments to be answered. You merely pointed in the general direction of a philosophical theory which claims (don't they all?) to have the answers.

Laughing at me is rude and a non-argument. Yelling is also rude.

I laugh a lot. Laughing is good.

As to yelling, well, you ignored that sentence only what, three or four times? Do you hear me now? :-)

You seem to be hostile to the idea of discussing methodology before discussing a particular topic

Au contraire. I would like to discuss methodology, but "go read a book" is not discussion.

Replies from: curi
comment by curi · 2017-11-06T19:39:55.284Z · LW(p) · GW(p)

I'm trying to discuss the methodology of reading text until your first comment/question/criticism and then replying. You have been ignoring this.

I did not ignore you about teaching; I heard you, and I'm trying to have a peer discussion. But you keep interpreting things differently than I do. OK, understandable, but you need to be tolerant of different perspectives instead of yelling. I am not yelling about being ignored on the methodology point about looking for the first mistake.

I can give you specific sources with details but first I asked if you'd be willing to look at them and you said no, so that's why I didn't actually give you a specific reference. I'm also bringing up that there is literature criticizing your school of thought, which your school of thought seems to have no answer to – isn't that a problem? Or what is your methodology such that that is ignorable? Or do you deny this is the case?

We disagree about e.g. induction. So you want me to rewrite one of the arguments about induction I've written in the past, because you don't want a reference. Right? I don't understand the purpose of this. Explain? It sounds like duplication of work to me.

Replies from: Lumifer
comment by Lumifer · 2017-11-06T20:13:29.641Z · LW(p) · GW(p)

I'm trying to discuss the methodology of reading text until your first comment/question/criticism and then replying

So you want to do exegesis. That makes the subject of the inquiry the text itself and the meaning contained in it.

The issue is that I'm not particularly interested in the text and CR. I'm interested in basic epistemological approaches of which CR is merely one. It's basically the difference between dissecting frogs and reading a book about the proper ways to dissect frogs and what you would find if you cut one open. In this case I want to dissect frogs and not read books.

I am not yelling about being ignored on the methodology point

I'm not ignoring it -- I'm explicitly telling you I don't want it.

there is literature criticizing your school of thought

Oh boy. From the fact that you found me on LW you immediately deduced what my school of thought is? That might have been... hasty :-) And remember, I told you that labels are not terribly useful?

We disagree about e.g. induction.

We do? Did I say anything about induction? I'm sure there is a strawman waiting in the wings to be conveniently demolished, but what does it have to do with me?

Replies from: curi
comment by curi · 2017-11-06T20:21:28.849Z · LW(p) · GW(p)

We disagree about e.g. induction.

We do? Did I say anything about induction? I'm sure there is a strawman waiting in the wings to be conveniently demolished, but what does it have to do with me?

I have been paying attention to what you wrote, e.g.:

And I still don't understand what's wrong with plain-vanilla observation as a way to acquire knowledge.

This statement indicates to me that we disagree about induction.

The issue is that I'm not particularly interested in the text and CR. I'm interested in basic epistemological approaches of which CR is merely one. It's basically the difference between dissecting frogs and reading a book about the proper ways to dissect frogs and what you would find if you cut one open. In this case I want to dissect frogs and not read books.

What exactly do you think is different btwn a text by DD, a text by me, and new text typed by me into this forum? To me they are all text, but you treat them totally different. Plz explain the methodology.

You express your disinterest in CR. Since I'm writing CR ideas, I take that as disinterest in what I'm saying. What would it take for you to become interested and try to address all known criticisms of your positions? Also do you have a website where you've written down your views to expose them to criticism, or do you have a reference which does this for you and which you'll take responsibility for?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T20:35:30.906Z · LW(p) · GW(p)

This statement indicates to me that we disagree about induction.

Induction isn't about acquiring knowledge from observations, induction is about generalizing from some limited set of observations to universal rules/laws.

To me they are all text, but you treat them totally different

The key is a couple of comments up:

I'm attempting to hold a small, local, mostly self-contained conversation about epistemology where we can build certain structures out of certain well-defined words and see if they fail under stress.

Note: local. Note: self-contained.

As I said, I don't care if you quote or write original text. What I'm looking for is small, specific, limited in scope.

Since I'm writing CR ideas, I take that as disinterest in what I'm saying.

To the extent you're promoting/popularizing CR, yes, I'm uninterested in being swayed to its side.

What would it take for you to become interested and try to address all known criticisms of your positions?

Time. Loads and loads of free time :-D

do you have a website where you've written down your views to expose them to criticism, or do you have a reference which does this for you and which you'll take responsibility for?

Nope and nope. Sorry.

Replies from: curi
comment by curi · 2017-11-06T20:42:05.512Z · LW(p) · GW(p)

Induction isn't about acquiring knowledge from observations, induction is about generalizing from some limited set of observations to universal rules/laws.

I don't understand how arguing with me about induction is going to prove your point that we don't disagree.

Note: local. Note: self-contained.

Why do you want it to be local and self-contained? I don't want to exclude important ideas based on their source. I want to judge ideas by their substance, regardless of their source. But you started objecting to that, so here we are and I've tried many times to get you to clarify your methodology. I'm now trying again, despite the yelling and ridiculing.

I also don't know what your rules are – if I wrote something a month ago, can I link that? Yesterday, but it was originally for some other conversation? So I've been trying to find out what your methodology rules are, because I literally don't know what you consider allowable in the conversation or not, plus also I think I disagree with your methodology (but I'm still trying to clarify it).

To the extent you're promoting/popularizing CR, yes, I'm uninterested in being swayed to its side.

What if it's correct and you're mistaken? This isn't a matter of sides, but truth. I read you as saying you don't care about the truth if CR is true, but I guess you mean something else – what?

Time. Loads and loads of free time :-D

What would convince you to reallocate time? If you don't have time to think much, we could just stop now... I organized my life to have time to deal with ideas.

Nope and nope. Sorry.

Why not? Are you very interested in ideas? Are you young and new to trying to understand things? Old and new to it? Don't see the value of a website or any kind of canonical statements of your views?

Replies from: Lumifer
comment by Lumifer · 2017-11-06T21:12:11.902Z · LW(p) · GW(p)

I don't understand how arguing with me about induction is going to prove your point that we don't disagree.

Oh, but it's meta arguing! :-)

In any case, the point is that you assume I hold some positions without any... support for these assumptions.

Why do you want it to be local and self-contained?

("Local" doesn't mean you can't bring it quotes from a book. It means none of your arguments are incorporated by reference but instead have to be fully included in the text of the thread)

Basically to prevent the conversation from losing shape and clarity. Most philosophical discussions tend to sink into the quicksand of subtly (or not so subtly) different definitions for words used and degenerate into mutually-incomprehensible stand-offs or splotchy messes.

Also -- a fun observation -- a lot of people adept at quoting from sources turn out to have a very shaky understanding of what these sources actually mean and what the implications are (this is a general observation, not aimed at you in particular).

The rules are the rules of a conversation: you talk/type in easily digestible chunks, you can quote anything you want but don't use "pointers" (points over yonder: "that thing over there proves my point, go check it out if you doubt it"), pass your variables by value. It would help if you give hard definitions for the terms you use.

What if it's correct and you're mistaken?

We haven't figured out what does "correct" mean :-)

I read you as saying you don't care about the truth if CR is true, but I guess you mean something else – what?

My time and attention are limited. I don't feel establishing the validity of CR should be at the top of my to-do list.

What would convince you to reallocate time?

Changes in relative importance of things. There is a local saying coming from Eliezer that beliefs should pay rent. If the validity of CR starts to affect my life in major ways, I would reallocate time to thinking about it.

And you realize, of course, that there are a great many more ideas than CR, so even if you decided to dedicate your life to "deal with ideas", CR is still not the obvious choice.

Why not?

There is a variety of reasons. One is that I'm not particularly interested in converting everyone to my worldview. Another is that it changes on occasion. Yet another is that putting up a vanity website would do pretty much nothing useful for me.

Replies from: curi
comment by curi · 2017-11-06T21:21:21.836Z · LW(p) · GW(p)

If the validity of CR starts to affect my life in major ways, I would reallocate time to thinking about it.

I bet it does. What do you do and what are some of your main philosophical beliefs which you would think it's important if they're mistaken? (I'll be happy to answer the same question, though not under a rule banning pointers to my websites.)

And you realize, of course, that there are a great many more ideas than CR, so even if you decided to dedicate your life to "deal with ideas", CR is still not the obvious choice.

I reviewed all the well known options (and some but not all obscure ones – and I don't mind reviewing more obscure ones when someone interested in conversation brings one up) and made a judgement about which is correct and non-refuted, and that all the others are refuted by arguments I know. In epistemology, that one is CR.

I would expect other people to attempt something like this, but I find they normally haven't – and don't want to begin. Does this sort of project interest you? If not, what sort of truth-seeking does interest you?

And if you want me to put in extra work to use fewer references than I normally would – do you have any value to offer to motivate me to do this? For example, do you think you'll continue the conversation to a conclusion? Most people don't, and I currently don't expect you to, and I'd rather not jump through a bunch of hoops for you and then you just stop responding.

Replies from: Lumifer
comment by Lumifer · 2017-11-06T21:40:12.501Z · LW(p) · GW(p)

I bet it does.

What exactly is the falsifiable claim that you're making and how would you expect it to be falsified? :-)

some of your main philosophical beliefs which you would think it's important if they're mistaken?

Oh, there are a lot. Existence of afterlife, for example. The nature of morality. Things like that.

I reviewed all the well known options (and some but not all obscure ones ... ) and made a judgement about which is correct

How confident are you of your judgement?

Does this sort of project interest you?

Not particularly because of lack of relevancy (see above about paying rent). I don't feel the need to pass a judgement on a set of options if that choice will lead to zero change.

do you think you'll continue the conversation to a conclusion?

I don't expect this conversation to have a conclusion in the sense of general agreement that A is wrong and B is correct. I view it more as a -- to use a Culture name -- A Frank Exchange Of Views which might lead to new information being exchanged, new angles of view opened, maybe even new perspectives -- but nothing as decisive as a sharp-edged black-and-white conclusion.

Replies from: curi
comment by curi · 2017-11-06T21:49:21.488Z · LW(p) · GW(p)

Oh, there are a lot. Existence of afterlife, for example. The nature of morality. Things like that.

Will you briefly indicate some specifics, especially things you think CR might disagree about?

How confident are you of your judgement?

Very, because I've put a great deal of effort (as have some others) into doing this investigation, finding people who believe I'm mistaken and are willing to discuss, etc. There are no major outstanding leads left that need checking but haven't been checked. I genuinely don't know what more I could do that would make a big difference. I can do some lesser things like double-check more things that have been single-checked, or make more websites and optimize them more and get more traffic to them so that there's more potential criticism (both raw traffic quantity and also getting specific smart ppl).

Not particularly because of lack of relevancy (see above about paying rent). I don't feel the need to pass a judgement on a set of options if that choice will lead to zero change.

Why do you think knowing what way of thinking is correct would lead to zero change? It led to tons of change for me. For you, I'd expect it to mean re-evaluating more or less your entire life and making huge changes. Areas of change-implication include parenting, relationships/marriage, how to discuss, induction, views on science and ways of judging scientific claims, approach to AGI, etc.

but nothing as decisive as a sharp-edged black-and-white conclusion.

Do you think that sort of conclusion is a valuable thing to reach in general? About some issues? I do.

Replies from: Lumifer
comment by Lumifer · 2017-11-06T22:03:58.131Z · LW(p) · GW(p)

things you think CR might disagree about?

These things are orthogonal to CR; CR standing or falling does not affect them.

That's precisely the reason I'm not terribly interested in heavily engaging with CR.

Very

From my point of view it's a bad sign.

I'd expect it to mean re-evaluating more or less your entire life and making huge changes. Areas of change-implication include parenting, relationships/marriage, how to discuss, induction, views on science and ways of judging scientific claims, approach to AGI, etc.

How so? I don't see why changing views on epistemology would lead to a different approach to, say, marriage or parenting.

Do you think that sort of conclusion is a valuable thing to reach in general?

Valuable, but rarely available for issues of importance.

Replies from: curi
comment by curi · 2017-11-06T22:08:19.003Z · LW(p) · GW(p)

Epistemology is the field which says how knowledge is created.

Solutions to problems are a type of knowledge.

How to solve problems in a marriage is therefore determined substantially by epistemology.

Education of children is primarily an issue of helping them create knowledge. How to do this depends on how knowledge is created.

You're mistaken about what is orthogonal to CR. You mentioned afterlife – what to believe about that is a matter of judging arguments (or put another way: creating knowledge of whether there is or isn't an afterlife), and for that you need epistemology, which is the field that tells you the methods of discussing and evaluating ideas. You also mentioned morality. Moral argument is governed by epistemology, and also lots of morality is basically derived from epistemology, because morality is about how to live and some of the key things about how to live are to live in a rational, error-correcting and problem-solving way.

Valuable, but rarely available for issues of importance.

What if it was routinely available, if you knew how? That's what my epistemology says. So there's impact-on-life there!

From my point of view it's a bad sign.

If you can suggest a way I should change my methods for judging this, please share it. (If you have preliminary questions first, feel free to ask them!)

Replies from: Lumifer
comment by Lumifer · 2017-11-06T22:20:30.851Z · LW(p) · GW(p)

How to solve problems in a marriage is therefore determined substantially by epistemology.

Cute play with words, but bears no relationship to the real world. Ditto for parenting. Ditto for afterlife.

You're offering a version of the argument that since physics deals with the lowest (most basic) levels of matter, all other sciences are (or should be) physics: chemistry, biology, sociology, etc. So solving problems in marriage is physics because you are both made out of atoms.

What if it was routinely available

We have a basic disagreement: you think that models are either true or not, and I think, to quote George Box, that "All models are wrong but some are useful".

change my methods for judging this

Rely less on whether someone can successfully argue something and more on empirical reality.

Replies from: curi
comment by curi · 2017-11-06T22:24:08.930Z · LW(p) · GW(p)

I'm not playing with words, I'm expressing the CR perspective. You apparently disagree, but if CR is correct then what I said is correct. So CR's correctness has consequences for your life.

I am not offering reductionism. Married people literally do things like discuss disagreements and try to solve problems – exactly the kind of thing CR governs. That doesn't mean CR is the only thing you need to know – you also need to know relationship-specific stuff (which you btw need to learn – and so CR is relevant there).

We have a basic disagreement: you think that models are either true or not,

I think many ideas aren't models. This is a CR belief which would have impacts on your thinking if you understood it and decided it was correct.

Rely less on whether someone can successfully argue something and more on empirical reality.

Can you be more specific? How does anything I'm doing or saying clash with reality? Arguments about reality are totally welcome, and I've both sought them out and created them myself.

BTW CR philosopher David Deutsch is literally a founder of a parenting/educational movement. Here is one of my essays about CR and parenting: http://fallibleideas.com/taking-children-seriously

Replies from: Lumifer
comment by Lumifer · 2017-11-07T16:10:31.808Z · LW(p) · GW(p)

I'm expressing the CR perspective

So what is the domain that CR claims? I thought it was merely epistemology, but apparently it includes marital counseling and parenting advice?

By the way, your style pattern-matches to religious proselytizing very well.

I think many ideas aren't models.

So far we had the underlying reality and imperfect representations thereof which we called "models". What is an "idea"?

Can you be more specific?

You said

I've put a great deal of effort (as have some others) into doing this investigation, finding people who believe I'm mistaken and are willing to discuss, etc. ... make more websites and optimize them more and get more traffic to them so that there's more potential criticism

You're looking for criticism from people, not from reality.

Think about it this way: let's say you have an idea about how to make a killing in financial markets. Your understanding of how to figure out whether it works is to ask all your friends and interested strangers (IRL and on the 'net) to criticize it. If they can't convince you it's bad, you declare it good.

But there is another way -- you don't ask anyone's opinion, but instead actually attempt to trade it and see if it works.

I prefer the second type of testing claims to the first one.

Replies from: curi
comment by curi · 2017-11-07T17:54:03.889Z · LW(p) · GW(p)

So what is the domain that CR claims?

CR is an epistemology. It has implications, not domain claims.

Methods of thinking are used in every field!

By the way, your style pattern-matches to religious proselytizing very well.

Can you link an example? I'm skeptical but I'd like to read something similar to my writing.

You're looking for criticism from people, not from reality.

I've done both. But the primary issue here is critical argument, not testing, b/c it's about philosophy, not science. My tests are anecdotal and don't really matter to the discussion.

If there's a particular test you think is important for me to do, what is it?

EDIT: forgot link about ideas: http://fallibleideas.com/ideas

Replies from: Lumifer
comment by Lumifer · 2017-11-07T18:23:29.928Z · LW(p) · GW(p)

It has implications, not domain claims.

You were much more gung ho about it just a little bit earlier:

Epistemology is the field which says how knowledge is created. Solutions to problems are a type of knowledge. How to solve problems ... is therefore determined substantially by epistemology.

...Moral argument is governed by epistemology, and also lots of morality is basically derived from epistemology

and on your website you're quite explicit that your approach can solve ALL problems.

Can you link an example?

Not so much writing style, but argumentative style. Basically, your comments try to set a number of hooks (like "This stuff is covered at length but is complicated to learn. Are you interested in doing things like reading a bunch and discussing it as you go along so you can learn it?" or "What do you do and what are some of your main philosophical beliefs which you would think it's important if they're mistaken?"); these hooks have a line, and all lines lead back to "start reading this book and let's discuss it", which is where you really want to end up. And there is the promise that this philosophy will significantly influence my entire life.

I see this as having a lot of parallels with classic proselytizing, say, Christian, where you set your hooks ("Are you unhappy? Does life make no sense to you?"), all lines lead to reading the Good News and inviting Jesus into your heart and, of course, once you accept Him into your life, that life is supposed to change dramatically.

But the primary issue here is critical argument, not testing, b/c it's about philosophy, not science.

Note another disagreement point: about the relative value of critical arguments vs empirical testing :-)

If there's a particular test you think is important for me to do, what is it?

The standard one: does it work?

For example, you are offering parenting advice. Does it work? How do you know? Ditto for all the other kinds of life advice that you offer and want to charge for.

Replies from: curi
comment by curi · 2017-11-07T18:33:31.435Z · LW(p) · GW(p)

Yes my philosophy works great. I have a great life, lots of success, etc, etc.

This is anecdotal and open to debate about how to interpret the test results. I don't wish to switch from debating ideas to sharing tons of personal info and debating my life choices (some of which are successful at non-standard values, and so will appear unsuccessful, and the right values have to be debated to judge it, and etc etc).

Even if my personal life was a mess, that still wouldn't refute my philosophy. That wouldn't be an argument which refutes any particular epistemology claim.

You seem to object to the concept of critical argument, and its role as the method of dealing with many issues.

You were much more gung ho about it just a little bit earlier:

I don't see the difference. Implications are a big deal.

Replies from: Lumifer
comment by Lumifer · 2017-11-07T19:14:37.639Z · LW(p) · GW(p)

This is anecdotal

I don't mean your personal life.

You offer advice professionally. How do you know that your advice leads to desired outcomes? Does it? In which percentage of cases? Did you measure anything?

You seem to object to the concept of critical argument

I don't object to the concept. I object to it being sufficient to determine whether something is "true" (using your terminology) and to the idea that enough critical arguments can replace real-life testing.

I don't see the difference.

When people say "X has implications for this" and "This is determined substantially by X", these sentences usually have different meanings.

Replies from: curi
comment by curi · 2017-11-07T19:28:35.044Z · LW(p) · GW(p)

I have no interest in violating the privacy of my clients, or claiming my philosophy is good b/c of my consulting results. I'm not claiming that, so you don't need to challenge it.

Such methods could not settle the philosophical issues, anyway. I might communicate badly. My clients might be a non-random sample of people with very ambitious goals. My clients might not do what I advised. Etc., etc. Any empirical results would be logically compatible with my philosophy being true.

"This is determined substantially by X"

please don't paraphrase me incorrectly, in quote marks, while omitting any actual quote.

Replies from: Lumifer
comment by Lumifer · 2017-11-07T19:34:28.639Z · LW(p) · GW(p)

What does this have to do with the privacy of your clients? I am not asking you to tell me stories, I'm asking whether you have any metrics of the performance of the product that you're selling.

Any empirical results would be logically compatible with my philosophy being true.

I thought you were Popperian. Is your philosophy empirically falsifiable, then?

please don't paraphrase me incorrectly

Direct quote:

How to solve problems in a marriage is therefore determined substantially by epistemology.

Replies from: curi
comment by curi · 2017-11-07T19:50:34.232Z · LW(p) · GW(p)

Thanks for the quote; I was mistaken to say your paraphrase was incorrect. They're big implications. I don't see the point of this part of the discussion.

Popperians say scientific ideas should be (empirically) falsifiable. Philosophy isn't empirically falsifiable, it's addressed by critical arguments.

I do not use consulting metrics in marketing or other public statements; they relate to private matters; I'm not going to discuss them. However I thought of a better way to approach this:

I’ve given lots of advice, for free, in public, with permalinks. So, unlike my private consulting, I’ll talk about that. Broadly here are the results:

Some people love my advice. Super fans! A larger number of people don’t want to talk with me. Haters! (I'm intentionally saying the results are pretty polarized.)

How is that to settle anything? Are we to go by popular opinion? You brought this topic up to try to get away from people. But I regard this as being about people! And btw I don't know what metrics you would consider appropriate for this.

What I wanted to look at isn’t people but critical arguments, and my claim is that FI is non-refuted – meaning not just that no refutation is known to me, but also that no one else knows one that they're willing to share. I think it’s wise to survey the literature, take public comments, seek out discussions at a variety of forums, etc, in addition to thinking about it personally. That’s a worthwhile extra step to help find refutations.

So the thing I was talking about, as I see it, was fundamentally about ideas (particularly critical arguments), not people; and the thing you’re bringing up is about what people do, how they react to advice, etc – about people rather than arguments/ideas.

I was trying to talk about the current objective state of the intellectual debate; you’re bringing up the issue of how people react to me and what happens in their lives.

Replies from: Lumifer
comment by Lumifer · 2017-11-08T02:08:21.969Z · LW(p) · GW(p)

Philosophy isn't empirically falsifiable

Hold on, hold on. Your philosophy isn't abstract ruminations about the numbers of angels on the head of a pin. Your philosophy has implications. BIG implications. In fact, you're saying it changes people's lives!

And these are phenomena of the empirical realm. We can look at them. We can evaluate them. We can see if the "implications" actually lead to consequences that your philosophy predicts and expects. Unless your philosophy just shrugs and says "Beats me, I have no idea what these interventions will do", it makes predictions about these implications.

And the good thing about all these is that they are verifiable and falsifiable.

So... how about testing these implications? If they fail, would you insist it has no bearing on the philosophy?

I thought of a better way to approach this ... How is that to settle anything?

I agree, the public reaction to ideas doesn't tell you much. But how is this "a better way", then?

What I wanted to look at isn’t people but critical arguments

I was talking mostly about the whole of reality, not just people, and my point is that critical arguments by themselves are insufficient.

the current objective state of the intellectual debate

What is the word "objective" doing in there?

you’re bringing up the issue of how people react to me

No, I don't. You just did. I'm talking about testing your ideas in reality, in particular, by the simplest test of whether they work.

Replies from: curi
comment by curi · 2017-11-08T04:08:54.494Z · LW(p) · GW(p)

As before, you don't know how CR works, we have massive philosophical differences, and your questions are based on assuming aspects of your philosophy are true. Are you interested in understanding a different perspective, or do you just want to challenge my ideas to meet the criteria your framework says matter?

Replies from: Lumifer
comment by Lumifer · 2017-11-08T16:04:40.583Z · LW(p) · GW(p)

your questions are based on assuming aspects of your philosophy are true

I don't think so. At the moment we are operating in a very simple, almost crude, framework: there's reality, there are models, we can detect some mismatches between the reality and the models. Isn't falsification one of the favourite Popperian ideas?

Are you interested in understanding a different perspective

I am asking you questions, am I not? And offering you -- what do you call them? ah -- critical arguments.

Replies from: curi, ChristianKl
comment by curi · 2017-11-08T18:39:46.772Z · LW(p) · GW(p)

Popperians say scientific ideas should be (empirically) falsifiable. Philosophy isn't empirically falsifiable, it's addressed by critical arguments.

I let you take substantial control over conversation flow. You took it here – you overestimated your knowledge of Popper and were totally wrong. You do not seem to have learned from this error.

You didn't answer my question about your interest, and you seem totally lost as to what we disagree about. You're still, in response to "your questions are based on assuming aspects of your philosophy are true", making the same assumptions while denying it. You don't have anything like a sense of what we disagree about, but you're trying to lead the conversation anyway. Your questions are in service of lines of argument, not finding out what I think – and the lines of argument don't make sense because you don't know what to target.

Replies from: Lumifer
comment by Lumifer · 2017-11-08T18:51:03.454Z · LW(p) · GW(p)

and were totally wrong

What exactly did I say that was totally wrong? Quote, please.

making the same assumptions

These assumptions take half a sentence. There are exactly three of them:

there's reality, there are models, we can detect some mismatches between the reality and the models

Which one do you think is unjustified?

the lines of argument don't make sense because you don't know what to target

Supply me with targets, then :-D

Replies from: curi
comment by curi · 2017-11-08T19:09:49.094Z · LW(p) · GW(p)

Quoting:

Any empirical results would be logically compatible with my philosophy being true.

I thought you were Popperian. Is your philosophy empirically falsifiable, then?

Popperians say scientific ideas should be (empirically) falsifiable. Philosophy isn't empirically falsifiable, it's addressed by critical arguments.

I regard this as indicating you misunderstand CR.

Then later:

Isn't falsification one of the favourite Popperian ideas?

In science, yes, testing is a favored idea, though even in science most ideas are rejected without being tested:

http://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science

But you don't want references, and I don't want to rewrite or copy/paste my blog post which is itself summarizing some information from books that would be better to look at directly.


I have a lot of targets on my websites, like http://fallibleideas.com and https://reasonandmorality.com, but you've said you don't want to look at them.

Do you have a website with information I could skim to find disagreements? Earlier, IIRC, I tried to ask about some of your important beliefs, but you didn't put forward any positions to debate.

Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes? Or do you just think the ideas in your head are correct but they aren't written down, and you'd like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?

What do you want to get out of this discussion? Coming to agree about some major philosophy issues would be a big effort. Under what sort of circumstances do you expect you would stop discussing? Do you have a discussion methodology which is written down anywhere? I do. http://curi.us/1898-paths-forward-short-summary

I have a philosophy I think is non-refuted. I don't know of any mistakes and would be happy to find out. It's also written down in public to expose it to scrutiny.

Replies from: Lumifer
comment by Lumifer · 2017-11-08T19:53:32.336Z · LW(p) · GW(p)

I regard this as indicating you misunderstand CR

Your philosophy is advertised as "All problems can be solved by knowing how. I tell you how."

This looks to me like crossing the demarcation threshold. Would you insist that there are no possible empirical observations which can invalidate your advice?

Do you have a website with information I could skim to find disagreements? ... Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes?

You asked before. Still nope and nope.

Under what sort of circumstances do you expect you would stop discussing?

When you stop being interesting.

I don't know of any mistakes and would be happy to find out.

Define "mistake".

Replies from: curi
comment by curi · 2017-11-08T20:01:09.411Z · LW(p) · GW(p)

You can bring up observations in a discussion of a piece of advice, but as always the role of the evidence is governed by arguments stating its role. And the primary issue here is argument.

All problems can be solved by knowing how.

This is a theory claim.

I tell you how.

This is a claim that I have substantial problem solving knowledge for sale, but is not intended to indicate I already know full solutions to all problems. It's sufficiently non-specific that I don't think it's a very good target for discussion.

When you stop being interesting.

Why are you interested now?

Define "mistake".

http://fallibleideas.com/definitions

And are you really unfamiliar with this common English word? Do you know what being wrong is? Less wrong? Error? Flaw?

Are you trying to raise some sort of philosophical issue? If so, please state it directly.

You asked before. Still nope and nope.

What about the rest?

Or do you just think the ideas in your head are correct but they aren't written down, and you'd like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?

Replies from: Lumifer
comment by Lumifer · 2017-11-08T20:29:48.153Z · LW(p) · GW(p)

Why are you interested now?

I'm interested in smart weird people :-P

And are you really unfamiliar with this common English word?

Oh, boy. We are having fundamental philosophical disagreements and you think dictionary definitions of things like "wrong" are adequate?

You say that philosophy is not falsifiable. OK, let's assume that for the time being. So can we apply the term "wrong" to some philosophies and "right" to others? On which basis? You will say "critical arguments". What is a critical argument? Within which framework are you going to evaluate them? You want "mistakes" pointed out to you. What kind of things will you accept as a "mistake" and what kind of things will you accept as indicating that it's valid?

I disagree that definitions are not all that important.

do you just think the ideas in your head are correct

Well, obviously I think they are correct to some degree (remember, for me "truth" is not a binary category).

and you'd like to learn about mistakes in those?

See above: what is a "mistake", given that we're deliberately ignoring empirical testing?

Things I'd like to learn are more like new to me frameworks, angles of view, reinterpretations of known facts. To use Scott Alexander's terminology, I want to notice concept-shaped holes.

Replies from: curi
comment by curi · 2017-11-08T20:52:58.404Z · LW(p) · GW(p)

Criteria of mistakes are themselves open to discussion. Some typical important ways to point out mistakes are:

1) internal contradictions, logical errors

2) non sequiturs

3) a reason X wouldn't solve problem Y, even though X is being offered as a solution to Y

4) an idea assumes/uses and also contradicts some context (e.g. background knowledge)

5) pointing out a contradiction with evidence

6) pointing out ambiguity, vagueness

there are many other types of critical arguments. for example, sometimes an argument, X, claims to refute Y, but X, if correct, refutes everything (or everything in a relevant category). it's a generic argument that could equally well be used on everything, and is being selectively applied to Y. that's a criticism of X's capacity to criticize Y.


Ideas solve problems (put another way, they have purposes), with "problem" understood very broadly (including answering questions, explaining an issue, accomplishing a goal). A mistake is something which prevents an idea from solving a problem it's intended to solve (it fails to work for its purpose).

By correcting mistakes we get better ideas. We fix issues preventing our problems from being solved and our purposes achieved (including the purpose of correctly intellectually understanding philosophy, science, etc). We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).

Replies from: Lumifer
comment by Lumifer · 2017-11-08T21:07:46.642Z · LW(p) · GW(p)

Some typical important ways to point out mistakes

Ways to point out mistakes? Then the question remains: what is a "mistake"? A finger pointing at the moon is not the moon.

Your (4) is the same thing as (1) -- or (5), take your pick. Your (5) is forbidden here -- remember, we are deliberately keeping to one side of the demarcation threshold -- no empirical evidence or empirical testing allowed. (6) is quite curious -- is being vague a "mistake"?

Ideas solve problems

In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.

By correcting mistakes we get better ideas

Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas -- similar to the difference between writing a new program and debugging an existing one.

We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).

I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)

Replies from: curi
comment by curi · 2017-11-08T21:37:50.782Z · LW(p) · GW(p)

Ways to point out mistakes? Then the question remains: what is a "mistake"? A finger pointing at the moon is not the moon.

I actually wrote a sentence

A mistake is [...]

Do you not read ahead before replying, and not go back and edit either?

(6) is quite curious -- is being vague a "mistake"?

In general, yes. It technically depends on context (like the problem specification details). Normally, e.g., the context of answering a question is that you want an adequately clear answer, so an inadequately clear answer fails.

In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.

Ideas solve intellectual problems, and some of those solutions can be used to solve problems we care about in the real world by acting according to a solution. Some problems (e.g. in math) are more abstract and it's unclear what to use the solutions for.

I have nothing against the real world. But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate. The intellectual debate is always primary. You can't just directly look at the world and know the answers, though sometimes the arguments involved with getting from evidence X to rejecting idea Y are sufficiently standard that people don't write them out.

You are welcome to mention some evidence in a criticism of my philosophy claims if you think you see a way to relevantly do that.

Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas -- similar to the difference between writing a new program and debugging an existing one.

You have idea X (plus context) to solve problem P. You find a mistake, M. You come up with a new idea to solve P which doesn't have M. Whether it's a slightly adjusted version of X (X2) or a very different idea that solves the same problem is kinda immaterial. Both are acceptable. Methodologically, the standard recommendation is to look for X2 first.
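
To make the loop concrete, here's a toy Python sketch (the problem and names are invented for illustration, nothing more):

```python
# Toy version of the loop: guess a solution to problem P, look for a
# mistake M, replace the refuted guess with a variant that avoids M.
def criticisms(candidate):
    """Reasons the candidate fails to solve P (here P: find n with n*n == 144)."""
    return [] if candidate * candidate == 144 else [f"{candidate}^2 != 144"]

def solve():
    candidate = 1                 # initial guess X
    while criticisms(candidate):  # mistake M found: this guess is refuted
        candidate += 1            # move to a variant X2
    return candidate              # non-refuted: accept it

print(solve())  # 12
```

Whether X2 is a small tweak (here, the next integer) or a very different guess doesn't change the structure of the loop.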

I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)

I consider solving a problem to be binary – X does or doesn't solve P. And I consider criticisms to be binary – either they are decisive (saying why the idea doesn't work) or not.

Problems without success/failure criteria I consider inadequately specified. Informally we may get away with that, but when trying to be precise and running into difficult issues then we need to specify our problems better.

Replies from: Lumifer
comment by Lumifer · 2017-11-09T01:31:36.504Z · LW(p) · GW(p)

I actually wrote a sentence

That's a curious definition of a "mistake". It's very... instrumental and local. A "mistake" is a function of both an idea and a problem -- therefore, it seems, if you didn't specify a particular problem you can't talk about ideas being mistaken. And yet your examples -- e.g. an internal logical inconsistency -- don't seem to require a problem to demonstrate that an idea is broken.

I have nothing against the real world

Oh, I'm sure it's relieved to hear that

But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate.

Why is that?

The intellectual debate is always primary.

That's an interesting claim. An intellectual debate is what's happening inside your head. You are saying that it's primary compared to the objective reality outside of your head. Am I understanding you correctly?

I consider solving a problem to be binary – X does or doesn't solve P.

Only if a problem has a binary outcome. Not all problems do.

And I consider criticisms to be binary – either they are decisive (says why the idea doesn't work) or not.

A black-and-white vision seems unnecessarily limiting.

Consider standard statistics. Let's say we're trying to figure out the influence of X on Y (where both are real values). First, there is no sharp boundary between a solution and a not-solution. You can build a variety of statistical models which will make different trade-offs and produce different results. There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which will be a solution.

Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it's difficult to say that this one is outright wrong and that one is clearly right. There's a reason they're called trade-offs.

Typically at the end you pick a statistical model or an ensemble of models, but the question "is the problem solved, yes or no?" is silly: it is solved to some extent, not fully, but it's not at the "we have no idea" stage either.

Problems without success/failure criteria I consider inadequately specified.

Life must be very inconvenient for you.

By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What's the criterion for success?

Replies from: curi
comment by curi · 2017-11-09T01:58:13.338Z · LW(p) · GW(p)

That's a curious definition of a "mistake". It's very... instrumental and local.

This is not local – I specified context matters (whether the context is stated as part of the problem, or specified separately, is merely a matter of terminology.)

You can't determine whether a particular sentence is a correct or incorrect answer without knowing the context – e.g. what is it supposed to answer? The same statement can be a correct answer to one issue and an incorrect answer to a different issue. If you don't like this, you can build the problem and the context into the statement itself, and then evaluate it in isolation.

I'm guessing the reason you consider my view on mistakes "instrumental" is that I think one has to look at the purpose of an idea instead of just the raw data. It's because I add a philosophy layer where you don't. So your alternative to "instrumental" is to say something like "mistakes are when ideas fail to correspond to empirical reality" – and to ignore non-empirical issues, interpretation issues, and the fact that answers to questions need to correspond to the question, which could e.g. be about a hypothetical scenario. To the extent that questions, goals, human problems, etc., are part of reality, then sure, this is all about reality. But I'm guessing we can both agree that's a difference of perspective.

And yet your examples -- e.g. an internal logical inconsistency -- don't seem to require a problem to demonstrate that an idea is broken.

Self-contradictory ideas are broken for many problems. In general, we try to criticize an idea as a solution to a range of problems, not a single one. Those criticisms are more interesting. If your criticism is too narrow, it won't work on a slight variant of the idea. You normally want to criticize all the variants sharing a particular theme.

Self-contradictory ideas can (as far as we know) only be correct solutions to some specific types of problems, like for use in parody or as a discussion example.

But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate.

Why is that?

Because facts are not self-explanatory. Any set of facts is open to many interpretations. (Not equally correct interpretations or anything like that, merely logically possible interpretations. So you have to talk about your interpretation, unless the other person can guess it. And you have to talk about how your interpretation of the evidence fits into the debate – e.g. that it contradicts a particular claim – though, again, in simple cases other people may guess that without you saying it.)

That's an interesting claim. An intellectual debate is what's happening inside your head. You are saying that it's primary compared to the objective reality outside of your head. Am I understanding your correctly?

You may prefer to think of it as the philosophy issues are always prior to the other issues. E.g. the role of a particular piece of evidence in reaching some conclusion is governed by ideas and methodology about the role of evidence in general, an interpretation of the raw data in this case, some general epistemology about how conclusions are reached and judged, etc.

Oh, I'm sure it's relieved to hear that

Please stop the sarcasm or tell me how/why it's productive and non-hostile.

A black-and-white vision seems unnecessary limiting.

it's intentional in order to solve epistemology problems which (I claim) have no other (known) solution. And it's not limiting because things like statistics are used in a secondary role. E.g. you can say "if the following statistical metric gives us 99% or more confidence, i will consider that an adequate solution to my problem". (approaches like that, which use a cutoff amount to determine binary success or failure, are common in science).

First, there is no sharp boundary between a solution and a not-solution.

that depends, as i said, on how the problem is specified.

in the final analysis, when it comes to human action and decision making, for any given issue you decide yes to a particular thing and no to its rivals. if you hedge, then you're deciding yes about that particular hedge.

There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which will be a solution.

depends on the problem domain. e.g. in school sometimes you need an 87 on the test to pass the class, and an 86 will result in failing. so a slightly better test performance can cross a large dividing line. breakpoints like this come up all over the place, e.g. with faster casting speed in diablo 2 (when you hit 37% faster casting speed the casting animation drops by 1 frame. it doesn't drop another frame until 55%. so gear sets totaling 40% and 45% FCR are actually equal. (not the actual numbers.)).
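
a minimal sketch of how such a breakpoint table behaves (thresholds invented, per the parenthetical above):

```python
# hypothetical FCR breakpoints: frames per cast change only at thresholds,
# so any two totals between the same pair of thresholds are exactly tied.
BREAKPOINTS = [(0, 14), (37, 13), (55, 12)]  # (min FCR %, frames per cast)

def cast_frames(fcr_percent):
    frames = BREAKPOINTS[0][1]
    for threshold, f in BREAKPOINTS:
        if fcr_percent >= threshold:
            frames = f
    return frames

print(cast_frames(40), cast_frames(45))  # 13 13 -- tied despite different FCR
print(cast_frames(55))                   # 12 -- only crossing a threshold matters
```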

Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it's difficult to say that this one is outright wrong and that one is clearly right. There's a reason they're called trade-offs.

it may be difficult, but nevertheless you have to make a decision. the decision should itself be judged in a binary way and be non-refuted – you don't have a criticism of making that particular decision.

i've addressed this stuff at great length. https://yesornophilosophy.com/argument

By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What's the criterion for success?

then do whatever maximizes it. anything with a lower score would be refuted (a mistake to do) since there's an option which gets a higher score. since the problem is to do the thing with the best score (implicitly limited to only options you know of after allocating some amount of resources to looking for better options), second best fails to address that problem.

more typically you don't want to maximize a single factor. i go into this at length in my yes or no philosophy.

Replies from: Lumifer
comment by Lumifer · 2017-11-09T03:51:35.399Z · LW(p) · GW(p)

one has to look at the purpose of an idea instead of just the raw data

Oh, I agree. It's just that you were very insistent about drawing the line between unfalsifiable philosophy and other empirically-falsifiable stuff and here you're coming back into the real-life problems realm where things are definitely testable and falsifiable. I'm all for it, but there are consequences.

So you have to talk about your interpretation, unless the other person can guess it.

Sure, but that's not an intellectual debate. If someone asks how to start a fire and I explain how you arrange kindling, get a flint and a steel, etc. there is no debate -- I'm just transferring information.

the philosophy issues are always prior to the other issues

Not necessarily. If you put your hand into a fire, you will get a burn -- that's easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?

Please stop the sarcasm

No can do. But tell you what, the fewer silly things you say, the less often you will encounter overt sarcasm :-)

in order to solve epistemology problems

Which problems can't you solve otherwise?

for any given issue you decide yes to a particular thing and no to its rivals

There are lot of issues with continuous (real number) decisions. Let's say you're deciding how much money to put into your retirement fund this year and the reasonable range is between $10K and $20K. You are not going to treat $14,999 and $15,000 as separate solutions, are you?

breakpoints like this come up all over the place

Sure they do, but not always. And your approach requires them.

the decision should itself by judged in a binary way and be non-refuted

I still don't see the need for these rather severe limitations. You want to deal with reality as if it consists of discrete, well-delineated chunks and, well, it just doesn't. I understand that you can impose thresholds and breakpoints any time you wish, but they are artifacts and if your method requires them, it's a drawback.

then do whatever maximizes it

Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum; at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?

Replies from: curi
comment by curi · 2017-11-09T04:33:48.863Z · LW(p) · GW(p)

It's just that you were very insistent about drawing the line between unfalsifiable philosophy and other empirically-falsifiable stuff

if you have an empirical argument to make, that's fine. but i don't think i'm required to provide evidence for my philosophical claims. (btw i criticize the standard burden of proof idea in Yes or No Philosophy. in short, if you can't criticize an idea then it's non-refuted and demanding some sort of burden of proof is not a criticism since lack of proof doesn't prevent an idea from solving a problem.)

in order to solve epistemology problems

Which problems can't you solve otherwise?

the problem of induction. problems about how to evaluate arguments (how do you score the strength of an argument? and what difference does it really make if one scores higher than another? either something points out why a solution doesn't work or it doesn't. unless you specifically try to specify non-binary problems. but that doesn't really work. you can specify that a set of solutions are all equal. ok then either pick any one of them if you're satisfied, or else solve some other more precise problem that differentiates. you can also specify that higher scoring solutions on some metric are better, but then you just pick the highest scoring one, so you get a single solution or maybe a tie again. and whether you've chosen a correct solution given the problem specification, or not, is binary.) and various problems about how you decide what metrics to use (the solution to that being binary arguments about what metrics to use – or in many cases don't use a metric. metrics are overrated but useful sometimes.)

Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum, at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?

Yes so then you guess what to do and criticize your guesses. Or, if you wish, define a metric with positive points for a higher score and negative points for resources spent (after you guess-and-criticize to figure out how to put the positive score and all the different types of resources into the same units) and then guess how to maximize that (e.g. define a metric about resources allocated to getting a higher score on the first metric, spend that much resources, and then use the highest scoring solution).

multi-factor metrics don't work as well as people think, but are ok sometimes (but you have to make a binary judgement about whether to use a particular metric for a particular situation, or not – so the binary judgement is prior and governs the use of the metric). here's a good article about issues with them:

https://www.newyorker.com/magazine/2011/02/14/the-order-of-things

scoring systems are overrated but are allowable in binary epistemology given that their use is governed by binary judgements (should I proceed by doing the thing that scores the highest on this metric? make critical arguments about that and make a binary judgement. so the binary judgement is prior but then things like metrics and statistics are allowable as secondary things which are sometimes quite useful.)
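
a rough sketch of that ordering, with the binary judgement prior and the metric secondary (every function here is a hypothetical placeholder):

```python
# binary judgements come first: refuted options are out, and the decision
# "use this metric here" must itself be non-refuted before any score counts.
def choose(options, is_refuted, metric, use_of_metric_is_refuted):
    survivors = [o for o in options if not is_refuted(o)]
    if not survivors:
        return None  # no non-refuted option yet: keep problem-solving
    if use_of_metric_is_refuted:
        raise ValueError("find a non-refuted decision method first")
    return max(survivors, key=metric)  # metric used only under a yes judgement

plans = {"A": 1, "B": 5, "C": 2}
print(choose(plans, is_refuted=lambda p: p == "B", metric=plans.get,
             use_of_metric_is_refuted=False))  # C
```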

You are not going to treat $14,999 and $15,000 as separate solutions, are you?

depends how precise the problem or context says to be. (or, bigger picture, it depends how much precision is worth the resources – which you should either specify in the problem or consider part of the context.)

if you don't care about single-dollar precision (cuz you want to save resources like effort to deal with details), just e.g. specify in the problem that you only care about increments of $500, or that (to save problem solving resources like time) you just want to use the first acceptable solution you come up with that you determine meets some standard of good enough (these are no longer strictly single variable maximization problems).

breakpoints like this come up all over the place

Sure they do, but not always. And your approach requires them.

they aren't required, you can specify the problem however you want (subject to criticism) so that it makes clear what is a solution or not (or a set of tied solutions you're indifferent btwn, which you can then tiebreak arbitrarily if you have no criticism of doing it arbitrarily).

if the problem specifies that some solutions are better than others (not my preferred way to specify problems – i think it's epistemologically misleading), then when you act you should pick one of the solutions in the highest tier you have a solution in, and reject the others. whether this method (pick a highest tier solution) is correct, and whether you've used it in this case, are both binary issues open to criticism.

At which point do you stop expending resources to look for a better solution?

when you guess it's best to stop and your guess is non-refuted and the guess to continue looking is refuted. (you may, if you want to, define some stopping metric and make a subject-to-criticism binary yes-or-no judgement about whether to use that stopping metric.)

the philosophy issues are always prior to the other issues

Not necessarily. If you put your hand into a fire, you will get a burn -- that's easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?

i think small kids do guesses and criticism, and use methods of learning (what I would call philosophical methods), even if they can't state those methods in English. i also think ppl who have never studied philosophy use philosophy methods, which they picked up from their culture here and there, even if they can't consciously describe what they're doing or name it. and to the extent ppl learn, i think it's guesses and criticism in some form, since that's the only known method of learning (at a low level, it's evolution – the only known solution to the problem of where the appearance of design comes from – saying it comes from "intelligence" is like attributing it to God or an intelligent designer – it doesn't tell you how god/intelligence does it. my answer to that is, at a low level, evolution. layers of abstraction are built on top of that so it looks more varied at a higher level.).

Replies from: Lumifer
comment by Lumifer · 2017-11-09T17:29:42.372Z · LW(p) · GW(p)

i don't think i'm required to provide evidence for my philosophical claims

It depends on what do you want to do with them. If all you want to do is keep them on a shelf and once in a while take them out, dust them, and admire them, then no, you don't. On the other hand, if you want to persuade someone to change their mind, evidence might be useful. And if you want other people to take action based on your claims', ahem, implications, evidence might even be necessary.

the problem of induction. problems about how to evaluate arguments

It seems that the root of these problems is your insistence that truth is a binary category. If you are forced to operate with single-bit values and have to convert every continuous function into a step one, well, sure you will have problems.

The thread seems to be losing shape, so let's do a bit of a summary. As far as I can see, the core differences between us are:

  • You think truth (and arguments) are binary, I think both have continuous values;
  • You think intellectual debates are primary and empirical testing is secondary, I think the reverse;

Looks reasonable to you?

Replies from: curi
comment by curi · 2017-11-09T18:24:50.114Z · LW(p) · GW(p)

the two things you listed are ok with me. i'd add induction vs guesses-and-criticism/evolution to the list of disagreements.

do you think there's a clear, decisive mistake in something i'm saying?

can you specify how you think induction works? as a fully defined, step-by-step process i can do today?

though what i'd prefer most is replies to the things i said in my previous message.

Replies from: Lumifer
comment by Lumifer · 2017-11-09T19:03:43.240Z · LW(p) · GW(p)

do you think there's a clear, decisive mistake in something i'm saying?

I would probably classify it as suboptimal. It's not a "clear, decisive mistake" to see only black and white -- but it limits you.

can you specify how you think induction works?

In the usual way: additional data points increase the probability of the hypothesis being correct; however, their influence tends to rapidly decline to zero, and they can't lift the probability over the asymptote (which is usually less than 1). Induction doesn't prove anything, but then in my system nothing proves anything.
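
To illustrate with a standard two-hypothesis coin example (my numbers, not anything from this thread):

```python
# Posterior of H1 ("coin always lands heads") vs H2 ("fair coin") after
# successive heads: each new head adds less probability than the last.
p = 0.5  # prior for H1
for n in range(1, 6):
    new_p = p * 1.0 / (p * 1.0 + (1 - p) * 0.5)  # Bayes: likelihoods 1.0 vs 0.5
    print(n, round(new_p, 4), "gain:", round(new_p - p, 4))
    p = new_p
# Gains shrink toward zero. Here the asymptote is 1; it sits below 1 once
# some prior mass is reserved for alternatives the data can't separate.
```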

What you said in the previous message is messy and doesn't seem to be terribly impactful. Talking about how you can define a loss function or how you can convert scores to a yes/no metric is secondary and tertiary to the core disagreements we have.

Replies from: curi
comment by curi · 2017-11-09T19:09:20.636Z · LW(p) · GW(p)

In the usual way: additional data points increase the probability of the hypothesis being correct,

the probability of which hypotheses being correct, how much? how do you differentiate between hypotheses which do not contradict any of the data?

Replies from: Lumifer
comment by Lumifer · 2017-11-09T19:18:49.315Z · LW(p) · GW(p)

the probability of which hypotheses being correct, how much?

For a given problem I would have a set of hypotheses under consideration. A new data point might kill some of them (in the Popperian fashion) or might spawn new ones. Those which survive -- all of them -- gain some probability. How much, it depends. No simple universal rule.

how do you differentiate hypotheses which do not contradict any of the data?

For which purpose and in which context? I might not need to differentiate them.

Occam's razor is a common heuristic, though, of course, it is NOT a guide to whether a particular theory is correct or not.

Replies from: curi
comment by curi · 2017-11-09T19:20:43.934Z · LW(p) · GW(p)

Do all the non-contradicted-by-evidence ideas gain equal probability (so they are always tied and i don't see the point of the "probabilities"), or differential probability?

EDIT: I'm guessing your answer is that you start them with different amounts of probability. after that they gain different amounts accordingly (e.g. the one at 90% gains less from the same evidence than the one at 10%). but the ordering (by amount of probability) always stays the same as how it started, apart from when something is dropped to 0% by contradicting evidence. is that it? or do you have a way (which is part of induction, not critical argument?) to say "evidence X neither contradicts ideas Y nor Z, but fits Y better than Z"?

Replies from: Lumifer
comment by Lumifer · 2017-11-09T20:00:28.051Z · LW(p) · GW(p)

Different hypotheses (= models) can gain different amounts of probability. They can start with different amounts of probability, too, of course.

to say "evidence X neither contradicts ideas Y nor Z, but fits Y better than Z"?

Of course. That's basically how all statistics work.

Say, if I have two hypotheses that the true value of X is either 5 or 10, but I can only get noisy estimates, a measurement of 8.7 will add more probability to the "10" hypothesis than to the "5" hypothesis.
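
Worked through (assuming Gaussian noise with sigma = 2 and equal priors; the sigma is my choice for the example):

```python
from math import exp

def likelihood(x, mu, sigma=2.0):
    # Gaussian likelihood up to a constant factor (it cancels when normalizing)
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2))

l5, l10 = likelihood(8.7, 5), likelihood(8.7, 10)
print(round(l10 / (l5 + l10), 3))  # ~0.818: "10" gains more than "5" does
```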

Replies from: curi
comment by curi · 2017-11-09T20:03:34.613Z · LW(p) · GW(p)

what do you do about ideas which make identical predictions?

Replies from: gjm, Lumifer
comment by gjm · 2017-11-09T20:36:14.005Z · LW(p) · GW(p)

They get identical probabilities -- if their prior probabilities were equal.

If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here "simpler" means something like "when turned into a completely explicit computer program, has shorter source code". Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)
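
A crude sketch of such a prior (the bit counts are invented stand-ins for program lengths):

```python
# Prior proportional to 2^(-description length): simpler hypotheses start ahead.
lengths_in_bits = {"simple theory": 120, "complex rival, same predictions": 450}
weights = {h: 2.0 ** -bits for h, bits in lengths_in_bits.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}
print(priors)  # the simpler hypothesis dominates by a factor of 2^330
```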

In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)

comment by Lumifer · 2017-11-09T20:11:03.989Z · LW(p) · GW(p)

I don't need to distinguish between them, then.

Replies from: curi
comment by curi · 2017-11-09T20:15:54.131Z · LW(p) · GW(p)

so you don't deal with explanations, period?

Replies from: Lumifer
comment by Lumifer · 2017-11-09T20:38:04.076Z · LW(p) · GW(p)

I do, but more or less only to the extent that they will make potential different predictions. If two models are in principle incapable of making different predictions, I don't see why should I care.

Replies from: curi
comment by curi · 2017-11-09T20:41:14.561Z · LW(p) · GW(p)

so e.g. you don't care if trees exist or not? you think people should stop thinking in terms of trees and stick to empirical predictions only, dropping any kind of non-empirical modeling like the concept of a tree?

Replies from: Lumifer
comment by Lumifer · 2017-11-09T20:48:15.301Z · LW(p) · GW(p)

you don't care if trees exist or not?

I don't understand what this means.

any kind of non-empirical modeling like the concept of a tree?

The concept of a tree seems pretty empirical to me.

Replies from: curi
comment by curi · 2017-11-09T20:56:07.500Z · LW(p) · GW(p)

there are infinitely many theories which say trees don't exist but make identical predictions to the standard view involving trees existing.

trees are not an observation, they are a conceptual interpretation. observations are things like the frequencies of photons at times and locations.

Replies from: Lumifer
comment by Lumifer · 2017-11-09T21:14:32.765Z · LW(p) · GW(p)

there are infinitely many theories which say trees don't exist but make identical predictions

Isn't it convenient that I don't have to care about these infinitely many theories?

Since there is an infinity of them, I bet you can't marshal critical arguments against ALL of them :-P

trees are not an observation

I think you're getting confused between actual trees and the abstract concept of a tree.

observations are things like the frequencies of photons at times and locations.

I don't think so. Human brains do not process sensory input in terms of "frequencies of photons at times and locations".

Replies from: curi
comment by curi · 2017-11-09T21:16:13.096Z · LW(p) · GW(p)

Isn't it convenient that I don't have to care about these infinitely many theories?

why not?

Since there is an infinity of them, I bet you can't marshal critical arguments against ALL of them :-P

you can criticize categories, e.g. all ideas with feature X.

I think you're getting confused between actual trees and the abstract concept of a tree.

i don't think so. you can't observe entities. you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)

Replies from: Lumifer
comment by Lumifer · 2017-11-09T21:30:19.712Z · LW(p) · GW(p)

why not?

Why not what?

you can criticize categories, e.g. all ideas with feature X

How can you know that every single theory in that infinity has feature X? or belongs to the same category?

you can't observe entities

My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat's nervous system also makes perfectly good entities out of its sensory stream, regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate.

or not

Or not? Prediction matters, but entities are an awfully convenient way to make predictions.

comment by ChristianKl · 2017-11-08T17:46:11.544Z · LW(p) · GW(p)

Isn't falsification one of the favourite Popperian ideas?

I don't think you are supposed to use it for the important models.

Replies from: Lumifer
comment by Lumifer · 2017-11-08T18:36:56.195Z · LW(p) · GW(p)

The ones too important to be falsified? :-D

comment by Elo · 2017-11-02T04:44:47.227Z · LW(p) · GW(p)

The scientific method

You read the same book as me! "Theory and Reality" by Peter Godfrey-Smith. I am surprised you say this.

What you describe is the hypothetico-deductive method (the diagram at https://en.wikipedia.org/wiki/Scientific_Method is the hypothetico-deductive method; Wikipedia is wrong and disagrees with its own sources). The hypothetico-deductive method involves guesses, but the scientific method according to that book is about:

  1. observation
  2. measurement (and building models that can be predictive of that measurement)
  3. standing on the shoulders of the extisting body of knowledge.
  4. ???
  5. Profit!

Edit: that wiki page has changed a lot over the last few months and now I am less sure about what it says.

Replies from: curi
comment by curi · 2017-11-02T07:04:51.479Z · LW(p) · GW(p)

I don't understand what reading a book has to do with it, or what you wish me to take from the wikipedia link. In my comment I stated the CR position on scientific method, which is my position. Do you have a criticism of it?

comment by curi · 2017-11-01T09:38:23.477Z · LW(p) · GW(p)

i think humans don't use their full computational capacity. why expect an AGI to?

in what way do you think AGI will have a better algorithm than humans? what sort of differences do you have in mind?

Replies from: siIver
comment by siIver · 2017-11-01T10:30:43.966Z · LW(p) · GW(p)

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100,000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1,000 times as smart as a human using their full capacity.

AGI's algorithm will be better, because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the sequences is to list dozens of ways that the human brain reliably fails.

Replies from: curi
comment by curi · 2017-11-01T20:23:28.954Z · LW(p) · GW(p)

If the advantage is speed, then in one year an AI that thinks 10,000x faster could be as productive as a person who lives for 10,000 years. Something like that. Or as productive as one year each from 10,000 people. But a person could live to 10,000 and not be very productive, ever. That's easy, right? Because they get stuck, unhappy, bored, superstitious ... all kinds of things can go wrong with their thinking. If AGI only has a speed advantage, that won't make it immune to dishonesty, wishful thinking, etc. Right?

Humans have fast access to facts via google, databases, and other tools, so memorizing isn't crucial.

The entire point of the sequences is to list dozens of ways that the human brain reliably fails.

I thought they talked about things like biases. Couldn't an AGI be biased, too?

Replies from: Lumifer
comment by Lumifer · 2017-11-01T20:26:18.162Z · LW(p) · GW(p)

For fun ways in which NN classifiers reliably fail, google up adversarial inputs :-)

Example

Replies from: Elo
comment by Elo · 2017-11-01T20:38:50.955Z · LW(p) · GW(p)

Rubbish in, rubbish out - right?

Replies from: Lumifer
comment by Lumifer · 2017-11-02T00:33:31.554Z · LW(p) · GW(p)

No, not quite. It's more like "let us poke around this NN and we'll be able to craft inputs which look like one thing to a human and a completely different thing to the NN, and the NN is very sure of it".
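
The usual recipe is the fast gradient sign method (FGSM); here's a minimal PyTorch sketch, assuming `model` is some differentiable classifier (an assumption for illustration, not a real model):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Craft an input that looks unchanged to a human but moves the NN's output.
    Assumes pixel values in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # nudge each pixel slightly in whichever direction increases the loss most
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```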