Posts

TAG's Shortform 2020-08-13T09:30:22.058Z

Comments

Comment by tag on The Problem of the Criterion · 2021-01-24T06:59:09.304Z · LW · GW

Finally, we come to a special case of particularist responses known as pragmatism, and it’s this kind of response that Yudkowsky offered. The idea of pragmatism is to say that there is some purpose to be served, and by serving that purpose we can do an end-run around the problem of the criterion by tolerating unjustified knowledge so long as it works well enough to achieve some end.

This kind of pragmatism solves an easier version of the problem. Winning, and its close relative, prediction, can both be measured directly. That is what allows you to tell that something works without having an a priori theoretical justification. On the other hand, correspondence-to-reality can't be tested directly...you can't stand apart from the map-territory relationship. In scientism it's common to assume that predictiveness is a sign of correspondence...

Comment by tag on The Problem of the Criterion · 2021-01-24T04:50:05.427Z · LW · GW

Coherentist responses reject the idea that truth, knowledge, etc. must be grounded and instead seek to find a way of balancing what is known with how it is known to form a self-consistent system.

Coherentism may lack foundations in the sense that foundationalism has foundations, but it still has guiding principles -- notably, that coherence is conducive to truth!

Comment by tag on The Problem of the Criterion · 2021-01-24T04:27:50.975Z · LW · GW

And then there’s skepticism, arguably the only “correct” position of Chisholm’s three in that it’s the only one that seemingly doesn’t require assuming something on faith. Spoiler alert: it does because it still needs some reason to prefer skepticism over the alternatives, thus it still ends up begging the question. Skepticism is also not very useful because even though it might not lead to incorrectly believing that a false thing is true, it does this by not allowing one to believe anything is true!

Which is a problem if you have good reason to believe some things in general are true. If you do, you might as well believe whichever specific things are most plausible, even if they are not fully justified.

You have characterised scepticism as a nothing-is-true position. But "it is true that nothing is true" is self-defeating. If the problem of the criterion means that nothing is well justified, then strong claims should be avoided, including strong negative claims like "nothing is true". So scepticism done right is moderation in all things.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-23T19:08:15.453Z · LW · GW

Ah, I see. I’m not sure I would describe SI as ‘solving’ those puzzles, rather than recasting them in a clearer light

The claim has been made, even if you don't believe it.

Beliefs should pay rent in anticipated experiences, which feels like a very SI position to have

Rationalists don't consistently believe that, because if they did, they would be indifferent about MW versus Copenhagen, since all interpretations make the same predictions. Lesswrongian epistemology isn't even consistent.

But I think the actual productive path, once you’re moderately confident Zeus isn’t on Olympus, is not trying to figure out if invisi-Zeus is in causally-disconnected-Olympus, but looking at humans to figure out why they would have thought Zeus was intuitively likely in the first place; this is the dissolving the question approach

If you can have a non-empirical reason to believe in non-interacting branches of the universal wave function, your theist opponents can have a non-empirical reason to believe in non-interacting gods.

With regard to QM, when I read through this post, it is relying pretty heavily on Occam’s Razor, which (for Eliezer at least) I assume is backed by SI

Of course not. SI can't tell you why simplicity matters, epistemologically. At the same time, it is clear that simplicity is no additional help in making predictions. Once you have filtered out the non-predictive programmes, the remaining ones are all equally predictive ... so whatever simplicity is supplying, it isn't extra predictiveness. The obvious answer is that it's some ability to show that, out of N equally predictive theories, one corresponds to reality.

That's a standard defence of Occam's razor. It isn't given by SI, as we have seen. SI just needs the simplicity criterion in order to be able to spit something out.

But there are other defenses of Occam's razor.

And the traditional versions don't settle everything in favour of MWI and against (sophisticated versions of) God...those are open questions.

And SI isn't a new improved version of Occam's razor. In fact, it is unable to relate simplicity to truth.

But a thing that I hadn’t noticed before this conversation, which seems pretty interesting to me, is that whether you prefer MWI might depend on whether you use the simplicity prior or the speed prior, a

These old problems are open problems because we can't agree on which kind of simplicity is relevant. SI doesn't help because it introduces yet another simplicity measure. Or maybe two: the speed prior and the space prior.

I think the real argument for MWI rests more on the arguments here

  1. Wrongly conflates Copenhagen with Objective Reduction.

  2. Wrongly assumes MW is the only alternative to "Copenhagen".

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-23T00:43:18.105Z · LW · GW

If you need to come to a determinate result in a finite number of computational steps (my replacement for ‘time’), then SI isn’t the tool for you

It isn't a tool for anybody, because it's uncomputable. Whatever interest it has must be theoretical.

I'm responding to claims that SI can solve long-standing philosophical puzzles such as the existence of God or the correct interpretation of quantum mechanics. The claims have been made, and they have been made here, but they may not have been made by you.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-22T22:30:47.332Z · LW · GW

SI only works for computable universes; otherwise you’re out of luck

SI cannot generate realistic hypotheses about uncomputable universes, but it doesn't follow that it can generate realistic hypotheses about computable universes.

You can’t assign a uniform probability to all the programs, because there are infinitely many, and while there is a mathematically well-defined “infinitely tall function” there isn’t a mathematically well-defined “infinitely flat function.”

The fact that an SI must sort and filter candidate functions does not mean it is doing so according to probability.

our guess has to be that way because there are more ways to have long variables

Given the assumptions that you have an infinite number of programmes and that you need to come to a determinate result in finite time, you need to favour shorter programmes. That's a reasonable justification for the operation of an SI which happens to have nothing to do with truth or probability or reference or realism. (You lapsed into describing the quantity an SI sorts programmes by as "probability"...that has not, of course, been established.)
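
To make that concrete, here is a toy numerical sketch (mine, not anything from the thread): a uniform weight over infinitely many programmes cannot be normalised, while a length-discounted weight can. The one-programme-per-length simplification is purely illustrative; real SI relies on a prefix-free encoding and the Kraft inequality to get the same convergence.

```python
# Toy illustration: why a weighting over infinitely many programmes
# must discount by length.
#
# Uniform case: any fixed weight u > 0 over infinitely many programmes
# sums to infinity, and u = 0 sums to 0 -- neither can be normalised.
#
# Length-discounted case: weight 2**-k per length k. (Simplified to one
# hypothetical programme per length; real SI uses a prefix-free code,
# where the Kraft inequality bounds the total by 1.)

def total_weight(num_lengths=60):
    """Sum 2**-k over lengths k = 1..num_lengths."""
    return sum(2 ** -k for k in range(1, num_lengths + 1))

print(total_weight())  # ~1.0: the geometric series 1/2 + 1/4 + ... converges
```

Nothing in the construction mentions truth or correspondence; the discounting is forced by the requirement that the weights be normalisable at all.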

If I can’t ground out the question in some deep territorial way, then it feels like the question isn’t really about the territory.

You haven't shown that an SI is capable of anything deep and territorial. After all, it's only trying to predict observations.

Comment by tag on Where to Draw the Boundaries? · 2021-01-22T21:37:55.388Z · LW · GW

The point of the thought experiment is that, for the alien, all of that is totally mundane (ie scientific) knowledge. So why can’t that observation count as scientific for us?

The point is that the rule "if it is not in the territory it should not be in the map" does not apply in cases where we are constructing reality, not just reflecting it.

If you are drafting a law to introduce gay marriage, it isn't an objection to say that it doesn't already exist.

IE, just because we have control over a thing doesn’t—in my ontology—indicate that the concept of map/territory correspondence no longer applies

I didn't say it doesn't apply at all. But there's a major difference between maps where the causal arrow goes t->m (science, reflection) and ones where it goes m->t (culture, construction).

Once you have constructed something according to a map (blueprint), you can study it scientifically, as anthropologists and sociologists do. But once something has been constructed, the norms of social scientists are that they just describe it. Social scientists don't have a norm that social constructs have to be rejected because they don't reflect pre-existing reality.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-22T18:53:59.282Z · LW · GW

The issues are whether the quantity, which you have called a probability, actually is a probability, and whether the thing you are treating as a model of reality actually is such a model, in the sense of scientific realism, or merely something that churns out predictions, in the sense of instrumentalism.

I’m not quite sure how to respond to this; like, I think you’re right that SI is not solving the hard problem, but I think you’re wrong that SI is not solving the easy problem.

What are the hard and easy problems? Realism and instrumentalism? I haven't said that SI is incapable of instrumentalism (prediction). Indeed, that might be the only thing it can do.

For example, I think the quantity actually is a probability, in that it satisfies all of the desiderata that probability theory places on quantities.

I think the mathematical constraints are clearly insufficient to show that something is a probability, even if they are necessary. If I have a cake of 1m^2 and I cut it up, then the pieces sum to 1. But pieces of cake aren't probabilities.

Do I think it’s the probability that the actual source code of the universe is that particular implementation? Well, it sure seems shaky to have as an axiom that God prefers shorter variable names, but since probability is in the mind, I don’t want to rule out any programs a priori, and there are more programs with longer variable names than programs with shorter variable names, I don’t see any other way to express what my state of ignorance would be given infinite cognitive resources.

So every hypothesis has the same probability of "not impossible". Well, no, several times over. You haven't shown that programmes are hypotheses, and what an SI is doing is assigning different non-zero probabilities, not a uniform one, and it is doing so based on programme length, although we don't know that reality is a programme, and so on.

Also, I’m not really sure what a model of reality in the sense of scientific realism is,

Do you think scientists are equally troubled?

But are you saying that the "deep structure" is the ontological content?

My current suspicion is that we’re having this discussion, actually; it feels to me like if you were the hypercomputer running SI, you wouldn’t see the point of the ontological content; you could just integrate across all the hypotheses and have perfectly expressed your uncertainty about the world.

Even if I no longer have an instrumental need for something, I can terminally value it.

But it isn't about me.

The rationalsphere in general values realism, and makes realist claims. Yudkowsky has made claims about God not existing, and about MWI being true, that are explicitly based on SI-style reasoning. So the cat is out of the bag... SI cannot be defended as something that was only ever intended as an instrumentalist predictor without walking back those claims.

But if you’re running an algorithm that uses caching or otherwise clumps things together, those intermediate variables feel like they’re necessary objects that need to be explained and generated somehow.

You're saying realism is an illusion? Maybe that's your philosophy, but it's not the Less Wrong philosophy.

[Like, it might be interesting to look at the list of outputs that a model in the sense of scientific realism could give you, and ask if SI could also give you those outputs with minimal adjustment.]

It's obvious that it could, but so what?

Comment by tag on The Problem of the Criterion · 2021-01-21T17:54:23.232Z · LW · GW

Philosopher: “There is a way of thinking where you can never have a feeling of certainty. Not on any mental, physical, or social level. And I can teach it to you!”

Caveman: “Why would I want to learn that?”

Rationalist: If it is impossible to be certain, I want to believe it is impossible to be certain.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-21T02:42:02.509Z · LW · GW

There’s no general way to apply SI to answer a bounded question with a sensible bounded answer. Hence, when you say “you can make your stable of hypotheses infinitely large”, this is misleading: programs aren’t hypotheses, or explanations, in the normal sense of the word, for almost all of the questions we’d like to understand

And it's also unclear, to say the least, that the criterion that an SI uses to prefer and discard hypotheses/programmes actually is a probability, despite being labelled as such.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-21T02:29:05.587Z · LW · GW

I think there are two different things going on:

First, if you want to use probabilities, then you need your total probability to sum to one, and there’s not a way to make that happen unless you assign higher probabilities to shorter programs than longer programs.

Second, programs that don’t correspond to observations get their probability zeroed out.

You still haven't answered the question "probability of what?".

You have a process that assigns a quantity to a thing. The details of how the quantity gets assigned are not the issue. The issues are whether the quantity, which you have called a probability, actually is a probability, and whether the thing you are treating as a model of reality actually is such a model, in the sense of scientific realism, or merely something that churns out predictions, in the sense of instrumentalism.

Labelling isn't enough.

All of SI’s ability to ‘locate the underlying program of reality’ comes from the second point. The first point is basically an accounting convenience / necessity.

You haven't shown that it has any such ability. Prediction is not correspondence.
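
For concreteness, here is a toy sketch (my own construction, not anything from the thread) of the two-step procedure quoted above: weight candidates by length, zero out the ones that contradict observations, renormalise. Note that the loop only ever compares outputs with observations; correspondence to reality is not represented anywhere in it.

```python
# Toy sketch of the quoted two-step mechanism. The "programmes" are
# hard-coded bit sequences standing in for real programme outputs.

observations = [1, 0, 1]

# (hypothetical name, length, predicted sequence)
candidates = [
    ("p1", 3, [1, 0, 1, 1]),
    ("p2", 5, [1, 0, 1, 0]),
    ("p3", 4, [0, 0, 1, 1]),  # disagrees with the first observation
]

# Step 1: weight 2**-length, so the totals can be normalised.
weights = {name: 2 ** -length for name, length, _ in candidates}

# Step 2: zero out any candidate whose output contradicts an observation.
for name, _, predicted in candidates:
    if predicted[: len(observations)] != observations:
        weights[name] = 0.0

total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print(posterior)  # {'p1': 0.8, 'p2': 0.2, 'p3': 0.0}
```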

the bayesians seem to think that it’s philosophically trivial, as you can just assume away the problem by making your stable of hypotheses infinitely large.

...and casually equating programmes and hypotheses and casually equating prediction and correspondence...

The fact that bayesians don't have a stable containing every possible hypothesis, combined with the fact that they also don't have a method of hypothesis formation, is a problem...but it's not the problem I am talking about today.

But why should an SI have the ability to correspond to reality, when the only thing it is designed to do is predict observations?

By assumption, your observations are generated by reality.

That doesn't answer the question. The issue is not whether reality exists; the question is which theories correspond to it. What reality is, not whether reality is.

What grounds that assumption out is… more complicated, but I’m guessing not the thing you’re interested in?

" why should an SI have the ability to correspond to reality, when the only thing it is is designed to do is predict observations?"

You still haven't told me. It's possible for a predictive theory to fail to correspond, so there is no link of necessity between prediction and correspondence.

Maybe it’s a category error to say of programmes that they have some level of probability.

I mean, I roughly agree with this in my second paragraph, but I think ‘category error’ is too harsh. Like, there are lots of equivalent programs, right? [That is, if I do SI by considering all text strings interpreted as python source code by a hypercomputer, then for any python-computable mathematical function, there’s an infinite class of text strings that implement that function.] And so actually what we care about is closer to “the integrated probability of an upcoming token across all programs”,

What I care about is finding a correct ontological model of reality. Caring about which programmes predict upcoming tokens is a means to that end. There is a well defined and conventional sense in which upcoming tokens have a certain probability, because the arrival of a token is an event, and conventional probability theory deals with events.

But the question is about the probability of the programme itself.

Even if programmes are actually making distinct claims about reality , which has not been shown, then some "integration" of different programmes is not going to be a clear model!
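
To illustrate the contrast (a sketch of mine, with made-up numbers): the probability of the next token is a perfectly ordinary event probability, computable as a weighted vote across surviving programmes, whatever one thinks "the probability of a programme" means.

```python
# Hypothetical surviving programmes: (posterior weight, predicted next token).
survivors = {
    "p1": (0.8, 1),
    "p2": (0.2, 0),
}

# The next token arriving is an event, so this is ordinary probability:
p_next_is_1 = sum(w for w, token in survivors.values() if token == 1)
print(p_next_is_1)  # 0.8

# By contrast, nothing here assigns a truth-related probability to
# "p1" itself as a claim about reality -- that is the contested step.
```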

and if you looked at your surviving programs for a sufficiently complicated world, you would likely notice that they have some deep structural similarities that suggest they’re implementing roughly the same function.

No. In general it's possible for completely different algorithms to produce equivalent results.

But are you saying that the "deep structure" is the ontological content?

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-20T23:51:20.195Z · LW · GW

Ok, but that isn't answering the question. I know that shortness is the criterion for saying that a programme is probable. The question is about the upshot, what that means...other than shortness. If the upshot is that a short programme is more likely to correspond to reality, then SI is indeed formalised epistemology. But why should an SI have the ability to correspond to reality, when the only thing it is designed to do is predict observations? And how can a programme correspond when it is not semantically interpretable?

Maybe it's a category error to say of programmes that they have some level of probability.

Comment by tag on Deutsch and Yudkowsky on scientific explanation · 2021-01-20T05:46:45.213Z · LW · GW

The problem is that the types of hypotheses considered by Solomonoff induction are not explanations, but rather computer programs which output predictions.

Indeed. Solomonoff Inductors contain computer programmes, not explanations, not hypotheses and not beliefs. That makes it quite hard to understand the sense in which they are dealing with probability.

Probability of what? Hypotheses and beliefs have a probability of being true, of succeeding in corresponding. What does it mean to say that one programme is more probable than another? That it is short? A shorter bitstring is more likely to be found in a random sequence, but what has that to do with constructing a true model of the universe?

If you are dealing with propositions instead of programmes, it is easy to explain the relationship between simplicity and probability-of-corresponding-to-reality: the probability of a small conjunction of propositions is generally higher than the probability of a large one.
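
A minimal worked illustration of that last point (my own numbers): treating propositions as hypothetically independent with probability 0.9 each, every extra conjunct lowers the probability of the conjunction.

```python
# Each extra conjunct can only lower the probability of the conjunction.
p_single = 0.9  # hypothetical probability of each individual proposition
for k in (1, 3, 10):
    print(k, round(p_single ** k, 3))  # 0.9, 0.729, 0.349

# Independence is just for the arithmetic; P(A and B) <= P(A) holds under
# any distribution, so the smaller conjunction is never less probable.
```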

Comment by tag on What is going on in the world? · 2021-01-17T18:39:53.620Z · LW · GW

The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common

Or...liberal democracy has spread, as other systems have failed. But maybe liberal democracy isn't good enough to count as really awesome.

Comment by tag on The True Face of the Enemy · 2021-01-16T19:21:36.762Z · LW · GW

The underclass is coming from somewhere, even with compulsory education.

Comment by tag on The True Face of the Enemy · 2021-01-16T15:56:43.044Z · LW · GW

What is missing where? Some countries allow homeschooling. Some countries allow school choice. Etc.

Comment by tag on The True Face of the Enemy · 2021-01-16T15:49:47.546Z · LW · GW

If the problem is that School is Prison, having a choice of prison is not a solution.

Comment by tag on The True Face of the Enemy · 2021-01-16T15:27:25.832Z · LW · GW

The “bottom 20%” may continue using the old decrepit system, or just quit and do better things

Will they have the option to do nothing?

Comment by tag on The True Face of the Enemy · 2021-01-16T15:26:26.873Z · LW · GW

If you put people in a place where they’re taught to learn from being taught, how can you expect them to be able to learn for themselves?

It didn't stop me being able to self teach.

Comment by tag on Coherent decisions imply consistent utilities · 2021-01-13T23:39:26.618Z · LW · GW

Context is important. If you publish something without comment or counterpoint, you're hinting that it's to be taken as true.

Comment by tag on The True Face of the Enemy · 2021-01-13T18:48:42.756Z · LW · GW

This post does everything wrong by the usual standards of rational epistemology, but will be warmly received anyway.

Wherever two or three Rationalists are gathered together, they will moan about the education system. But not in a very rational way. It's a topic that rationalists are predictably irrational about. They don't build up a step-by-step, fact-based critique; they make sweeping, emotive claims about how generally terrible it is.

The hard problem of education is how to educate everybody (or what to do with them otherwise). That's the problem governments face. It's easy for smart people to come up with educational methods that work for smart people. Smart people can educate themselves with a library and a computer. That's good enough for the top 20%, but what about the bottom 20%, who aren't naturally academic, and don't have parents capable of homeschooling? I have heard no suggestion from the rationalsphere.

Comment by tag on Debate update: Obfuscated arguments problem · 2021-01-13T01:32:35.054Z · LW · GW

If you argue with Marxists, post-modernists, or the Woke, you’ll similarly find that, for every solid argument you have that proves a belief of theirs is wrong, they have some assumptions which to them justify dismissing your argument

They might well say the same about you. All arguments are based on fundamental assumptions that are necessarily unproven.

Comment by tag on Debate update: Obfuscated arguments problem · 2021-01-12T12:49:29.678Z · LW · GW

Is the set of real numbers simple or complex? What information does it contain? What information doesn't it contain?

Comment by tag on Science in a High-Dimensional World · 2021-01-10T19:13:20.410Z · LW · GW

until then the results will generally work in practice.

Doesn't really contradict what I am saying. In theory, I am saying, you can't exclude mysterious extra variables...but in practice that often doesn't matter, as you are saying.

Comment by tag on Science in a High-Dimensional World · 2021-01-10T19:06:16.156Z · LW · GW

The key to answering that question is determinism. If the system’s behavior can be predicted perfectly, then there is no mystery left to explain, no information left which some unknown variable could provide

  1. What matters is local determinism. You need to show that behaviour is predictable from factors under your control. If local determinism fails, it is hard to tell whether locality or determinism failed individually.

  2. And showing that a system's behaviour is predictable when N factors are held constant by the experimenter doesn't show that those are the only ones it is conditionally dependent on. Its behaviour might counterfactually depend on factors which the experimenter did not vary and which did not naturally change over the course of the experiment. In general, you can't exclude mysterious extra variables (see the sketch below).
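
A small constructed example of point 2: a system whose behaviour is predicted perfectly from the varied factor, even though a factor that happened to stay constant would have mattered. (The function and numbers are invented for illustration.)

```python
def system(x, z):
    """Hypothetical system: counterfactually depends on z as well as x."""
    return 2 * x + 5 * z

# During the experiment, z happens to stay fixed at 0, unknown to us.
trials = [(x, system(x, z=0)) for x in range(10)]

# The model "output = 2 * x" predicts every trial perfectly...
assert all(y == 2 * x for x, y in trials)

# ...yet z was a real counterfactual dependence the experiment never probed.
print(system(3, z=0), system(3, z=1))  # 6 vs 11
```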

Comment by tag on The Sense-Making Web · 2021-01-05T17:15:52.278Z · LW · GW

What use is that?

Comment by tag on The Sense-Making Web · 2021-01-05T15:14:05.382Z · LW · GW

the Sensemaking scene—however vaguely defined

It hasn't been defined at all, even vaguely.

Comment by tag on Don't Use Your Favorite Weapon · 2021-01-03T20:42:31.215Z · LW · GW

It's mostly this.

Even the most earnest effort, though, will face the challenge that it’s hard to guess which argument you don’t personally like is most likely to land with someone of a very different ideological persuasion

Which means that ... you probably don't even have good weapons, because it is so difficult to build them.

Comment by tag on Covid 12/10: Vaccine Approval Day in America · 2021-01-03T17:46:44.483Z · LW · GW

And it still doesn't follow from that, that anything untoward is going on.

The events match the narrative where the evil PMCs screw everyone else over, but they also match the narrative where lockdowns are the best solution for everybody.

So you still need to disprove that.

Comment by tag on Rob B's Shortform Feed · 2021-01-03T00:59:22.663Z · LW · GW

If the physics map doesn’t imply the mind map (because of the zombie argument, the Mary’s room argument, etc.), then how do you come to know about the mind map?

Direct evidence. That's the starting point of the whole thing. People think that they have qualia because it seems to them that they do.

What is the version of this story for the mind map, once we assume that the mind map has contents that have no causal effect on the physical world?

I'm not assuming that. I'm arguing against epiphenomenalism.

So I am saying that the mental is causal, but I am not saying that it is a kind of physical causality, as per reductive physicalism. Reductive physicalism is false because consciousness is irreducible, as you agree. Since mental causation isn't a kind of physical causation, I don't have to give a physical account of it.

And I am further not saying that the physical and mental are two separate ontological domains, two separate territories. I am talking about maps, not territories.

Without ontological dualism, there are no issues of overdetermination or interaction.

Comment by tag on Book review: Rethinking Consciousness · 2021-01-02T22:58:22.344Z · LW · GW

If you start from the assumption that only "outside" -- third-person, objective -- evidence counts, then it is easy to come to the conclusion that only physical causation counts.

Comment by tag on Covid 12/10: Vaccine Approval Day in America · 2021-01-02T22:27:51.657Z · LW · GW

unprecedented

The unprecedented part is the global scale, not the lockdowns. Staying inside during plagues is well attested historically.

But to develop that account, further nuances need to be brought out

Maybe. You are not really saying what is wrong with the simple account. You keep harping on about the professional managerial classes, but you still don't have evidence that lockdowns are benefitting them, or that they are not benefitting the relatives of poorer people.

Wanting to make sacrifices to protect your elderly relatives is not weird.

Comment by tag on Book review: Rethinking Consciousness · 2021-01-01T03:00:10.231Z · LW · GW

Why do I have free will?

...is quite answerable for some definitions of free will.

Comment by tag on Book review: Rethinking Consciousness · 2021-01-01T02:58:57.490Z · LW · GW

Well, if we reject that claim, then we’re kinda stuck saying that if there are qualia, they are somewhere to be found within that chain of causation. And if there’s nothing to be found in the chain of causation that looks like qualia,

Looks from the inside, or looks from the outside?

Comment by tag on Book review: Rethinking Consciousness · 2021-01-01T02:49:01.512Z · LW · GW

If explaining reports of consciousness involves solving the hard problem, then no one has explained reports of consciousness, since no one has solved the HP.

Of course, some people (eg. Dennett) think that reports of consciousness can be explained ... and don't accept that there is an HP.

And the HP isn't about consciousness in general; it is about qualia, or phenomenal consciousness, the very thing that illusionism denies.

Edit: the basic problem with what you are saying is that there are disagreements about what explanation is, and about what needs to be explained. The Dennett side holds that once you have explained all the objective phenomena objectively, you have explained everything. The Chalmers side thinks that leaves out the most important stuff.

Comment by tag on A non-mystical explanation of "no-self" (three characteristics series) · 2021-01-01T01:51:59.377Z · LW · GW

I don't suppose you would be able to explain a quantum computer to a caveman.

Comment by tag on Rob B's Shortform Feed · 2021-01-01T01:10:51.388Z · LW · GW

But ‘consciousness is real and irreducible’ isn’t tenable: it either implies violations of physics as we know it (interactionism), or implies we can’t know we’re conscious (epiphenomenalism).

The epiphenomenalist worry is that, if qualia are not denied entirely, they have no causal role to play, since physical causation already accounts for everything that needs to be accounted for.

But physics is a set of theories and descriptions...a map. Usually, the ability of one map to explain something is not exclusive of another map's ability to do so. We can explain the death of Mr Smith as the result of a bullet entering his heart, or as the result of a finger squeezing a trigger, or as a result of the insurance policy recently taken out on his life, and so on.

So why can't we resolve the epiphenomenal worry by saying that physical causation and mental causation are just different, non-rivalrous, maps? "I screamed because my pain fibres fired" alongside -- not versus -- "I screamed because I felt a sharp pain". It is not the case that there is physical stuff that is doing all the causation, and mental stuff that is doing none of it: rather there is a physical view of what is going on, and a mentalistic view.

Physicalists are reluctant to go down this route, because physicalism is based on the idea that there is something special about the physical map, which means it is not just another map. This special quality means that a physical explanation excludes others, unlike a typical map. But what is it?

It's rooted in reductionism, the idea that every other map (that is, every theory of the special sciences) can or should reduce to the physical map.

But the reducibility of consciousness is the center of the Hard Problem. If consciousness really is irreducible, and not just unreduced, then that is evidence against the reduction of everything to the physical, and, in turn, evidence against the special, exclusive nature of the physical map.

So, without the reducibility of consciousness, the epiphenomenal worry can be resolved by the two-view manoeuvre. (And without denying the very existence of qualia).

Comment by tag on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T22:41:06.520Z · LW · GW

So how do you communicate entirely new inventions? Or should you not?

Comment by tag on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T20:47:39.014Z · LW · GW

The English language lacks the concept of “being aware from a sensation”, actually, the English language lacks any concept around “sensation” other than “experiencing it”

English isn't a programming language. It doesn't have fixed semantics, and one of the ways the envelope can be pushed is through the use of metaphor.

('Push the envelope' is itself a fairly recent metaphor:

To attempt to extend the current limits of performance. To innovate, or go beyond commonly accepted boundaries.

What's the origin of the phrase 'Push the envelope'? This phrase came into general use following the publication of Tom Wolfe's book about the space programme, The Right Stuff, 1979:

"One of the phrases that kept running through the conversation was ‘pushing the outside of the envelope’... [That] seemed to be the great challenge and satisfaction of flight test")

Comment by tag on Where to Draw the Boundaries? · 2020-12-31T19:41:01.164Z · LW · GW

Lots of physical things can have varied instantiations. EG “battery”. That in itself doesn’t seem like an important barrier.

If the question "is thing X an instance of type T" is answered by human concerns, then passive reflection of pre-existing reality isn't the only game in town.

If type T is not a natural kind, then science is not the only game in town.

Comment by tag on Where to Draw the Boundaries? · 2020-12-31T18:50:51.377Z · LW · GW

So if your friends are using concepts which are optimized for other things, then either (1) you’ve got differing goals and you now would do well to sort out which of their concepts have been gerrymandered, (2) they’ve inherited gerrymandered concepts from someone else with different goals, or (3) your friends and you are all cooperating to gerrymander someone else’s concepts (or, (4), someone is making a mistake somewhere and gerrymandering concepts unnecessarily).

So? That's a very particular set of problems. If you try to solve them by banning all unscientific concepts, then you lose all the usefulness they have in other contexts.

I’m just saying there’s something special about avoiding these things, whenever possible,

Wherever possible, or wherever beneficial? Does it make the world a better place to keep pointing out that tomatoes are fruit?

because if you care deeply about clear thinking, and don’t want the overhead of optimizing your memes for political ends (or de-optimizing memes from friends from those ends), this is the way to do it.

You personally can do what you like. If you don't assume that everyone has to have the same solution, then there is no need for conflict.

If you use a gerrymandered concept, you may have no understanding of the non-gerrymandered versions; or you may have some understanding, but in any case not the fluency to think in them.

I'm not following you any more. Of course unscientific concepts can go wrong -- anything can. But if you're not saying everyone should use scientific concepts all the time, what are you saying?

I see Zack as (correctly) ruling in mere optimization of concepts to predict the things we care about, but ruling out other forms of optimization of concepts to be useful.

I think that is Zack's argument, and that it is fallacious, because we do things other than predict.

Low-level manipulation is ubiquitous. You need to argue for "manipulative in an egregiously bad way" separately.

I’m arguing that Zack’s definition is a very good Schelling fence to put up

You are arguing that it is remotely possible to eliminate all manipulation???

One of Zack’s recurring arguments is that appeal to consequences is an invalid argument when considering where to draw conceptual boundaries

Obtaining good consequences is a very good reason to do a lot of things.

Comment by tag on Where to Draw the Boundaries? · 2020-12-31T18:34:06.304Z · LW · GW

Why are (b) and (d) not exceptions to your thesis, already?

You surely need to argue that exceptions to everything-is-prediction are (i) non-existent, (ii) minor, or (iii) undesirable, normatively wrong.

But co-ordination is extremely valuable.

And "self-fulfilling prophecy" is basically looking at creation and construction through the lens of prediction. Making things is important. If you build something according to a blueprint, it will happen to be the case that once it is built, the blueprint describes it, but that is incidental.

You can make predictions about money, but that is not the central purpose of money.

Comment by tag on My unbundling of morality · 2020-12-30T21:48:28.064Z · LW · GW

Optimizing trade-offs for personal benefits: E.g., net-neutrality is good for middle-class people, bad for poor people. “Bravery debates” might fall under this umbrella as well.

What you are talking about is optimising trade-offs for group benefits. You can't usually get personal benefits unless you wield absolute power.

Jockeying for group benefits is very much a thing, but the thing we generally call "politics".

Comment by tag on Where to Draw the Boundaries? · 2020-12-30T15:36:37.662Z · LW · GW

It’s true that I can change the numbers in my bank account by EG withdrawing/depositing money, but this is very similar to observing that I can change a rock by breaking it; it doesn’t turn the rock into a non-factual matter.

Rocks existed before the concept of rocks. Money did not exist before the concept of money.

As a thought experiment, imagine an alien species observing earth without interfering with it in any way. Surely, for them, our “social constructs” could be a matter of science, which could be predicted accurately or inaccurately, etc?

If the alien understands the whole picture, it will notice the causal arrow from human concerns to social constructs. For instance, if you want gay marriage to be a thing, you amend the marriage construct so that it is.

Comment by tag on Dissolving the Problem of Induction · 2020-12-29T21:31:03.280Z · LW · GW

You're subsuming the epistemic problem of induction under the ontological problem of induction, but you haven't offered a solution to the ontological problem of induction.

Edit:

How do you know that the world is stable? Effect has followed cause in the past, but stability means that it will also do so in the future...but to think that it will do so in the future because it has done so in the past is inductive reasoning.

Comment by tag on Morality as "Coordination", vs "Do-Gooding" · 2020-12-29T20:26:55.277Z · LW · GW

both aspects involve some deep structure related to being an agent in the world, neither seems like just messy implementation details for the other

Indeed. Specifically, "right" and "good" are not synonyms.

"Right" and "wrong", that is praisweorthiness and blameability are concepts that belong to deontology. A good outcome in the consequentialist sense, one that is a generally desired, is a different concept from a deontologically right action.

Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.

Comment by tag on Dissolving the Problem of Induction · 2020-12-29T03:15:49.863Z · LW · GW

If you assume that all science is theoretical, and/or that you have endless time to generate the perfect theory, that is true.

But neither assumption is true.

Induction is vital for practical purposes. If your world is being ravaged by a disease, you need to understand its progression ahead of having a full theory. Our ancestors needed to understand that the berry that made you sick yesterday will make you sick today...and to do that well ahead of having a theory of biochemistry.

Inductive reasoning is important for survival, not just for relative luxuries like science.

Comment by tag on Dissolving the Problem of Induction · 2020-12-28T22:50:02.010Z · LW · GW

Sure, curve fitting is one technique to generate theories,

I am saying that prediction is valuable per se, that curve fitting gives you predictions, and that curve fitting is induction, and that induction is therefore needed in spite of Deutsch's argument.
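
A minimal sketch of that claim, using polynomial fitting as a stand-in for any curve-fitting method (the data are invented):

```python
import numpy as np

# Made-up observations: day of illness vs. symptom severity.
days = np.array([1, 2, 3, 4, 5])
severity = np.array([1.1, 3.9, 9.2, 15.8, 25.3])

# Pure curve fitting -- no explanatory theory of the disease involved.
coeffs = np.polyfit(days, severity, deg=2)

# The fitted curve still yields a usable prediction for day 6:
print(np.polyval(coeffs, 6))  # roughly 36 -- prediction without explanation
```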

Induction is also important for everyday reasoning.

If you think of a theory as something that does nothing but make predictions, then induction is generating theories...but what it generates is not an explanatory theory, in terms of the standard explanatory/empirical distinction.

Unfortunately, the belief that theories can be losslessly represented as programmes elides the distinction.