Comments

Comment by tag on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-08T22:11:26.839Z · score: 1 (1 votes) · LW · GW

If something can be shown to be true then some will change their views based on that.

Whilst others will fail to understand, and others still will misunderstand. You don't have a proof that openness is guaranteed to be positive sum.

Comment by tag on Alignment as Translation · 2020-04-08T20:51:45.808Z · score: 1 (1 votes) · LW · GW

The rules it’s given are, presumably, at a low level themselves.

The rules that the low level AI runs on could be medium level. There is no point in giving it very low level rules, since its job is to fill in the details. But the point is that I am stipulating that the rules should be high level enough to be human-readable.

The question is not whether the low-level AI will follow those rules, the question is what actually happens when something follows those rules. A python interpreter will not ever deviate from the simple rules of python, yet it still does surprising-to-a-human things all the time.

But the world hasn't ended. A python interpreter doesn't do surprisingly intelligent things, because it is not intelligent.
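For concreteness, a couple of standard Python behaviours that follow the language rules exactly and still routinely surprise humans:

```python
# The interpreter applies its rules to the letter; the surprise is
# entirely on the side of the human's simplified mental model.
print(0.1 + 0.2 == 0.3)  # False: exact IEEE-754 binary arithmetic

def append_item(item, bucket=[]):  # the default list is created once, at def time
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- surprising, rule-following, and not intelligent
```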

The problem is not that the AI might deviate from the given rules. The problem is that the rules don’t always mean what we want them to mean.

In your framing of the problem, you create one superpowerful AI that has to be programmed perfectly, which is impossible. In my solution, you reduce the problem to more manageable chunks. My solution is already partially implemented.

Comment by tag on Life as metaphor for everything else. · 2020-04-08T20:31:21.310Z · score: 1 (1 votes) · LW · GW

But sometimes there are special properties, such as nuclear properties, which were not anticipated by 19th-century science.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-08T19:54:25.951Z · score: 2 (2 votes) · LW · GW

I am not aware of a good reason to believe that a perfect decision theory is even possible, or that counterfactuals of any sort are the main obstacle.

Comment by tag on What is the subjective experience of free will for agents? · 2020-04-08T19:37:10.050Z · score: 1 (1 votes) · LW · GW

Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn’t have turned out any other way than the way it did because I only ever experience myself to be in a single causal history.

That seems to be a non-sequitur. The fact that things did happen in one particular way does not imply that they could only have happened that way.

Just noticed that the same error is in Possibility and Couldness:

The coin itself is either heads or tails.

That doesn’t mean it must have been whatever it was.

Comment by tag on What is the subjective experience of free will for agents? · 2020-04-08T19:33:22.445Z · score: 1 (1 votes) · LW · GW

Eliezer is only answering the question of what the algorithm is like from the inside;

Do we have a good reason to think an algorithm would feel like anything from the inside?

it doesn’t offer a complete alternative model, only shows why a particular model doesn’t make sense;

Which particular model?

and so we are left with the problem of how to understand what it is like to make a decision from an outside perspective, i.e. how do I talk about how someone makes a decision and what a decision is from outside the subjective uncertainty of being the agent in the time prior to when the decision is made.

I can't see why you shouldn't be able to model subjective uncertainty objectively.

Comment by tag on Counterfactuals are an Answer, Not a Question · 2020-04-08T17:51:07.290Z · score: 1 (1 votes) · LW · GW

Well, it's a statement, not a question.

Comment by tag on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-08T17:37:03.880Z · score: 1 (1 votes) · LW · GW

If something is wrong then it can be demonstrably proven so

To whom? Do you think anything can be proven to anybody?

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-08T14:15:26.589Z · score: 3 (2 votes) · LW · GW

Philosophers talk about free will because it is contentious and therefore worth discussing philosophically, whereas will, qua wants and desires, isn't.

Cf. the silly physicists who insist on talking about dark matter, when anyone can see that ordinary matter exists.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-08T14:03:32.776Z · score: 1 (1 votes) · LW · GW

What about school 3, the one that solves the problem with compartmentalisation/sandboxing?

Comment by tag on Referencing the Unreferencable · 2020-04-05T12:51:04.573Z · score: 3 (2 votes) · LW · GW

Equivocation is using a term in different senses *during the course of an argument*... that is, under conditions where it should normatively have a stable meaning. It is still the case that some words are ambiguous, and that recognising ambiguity can solve problems.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-02T17:01:30.296Z · score: 1 (1 votes) · LW · GW

Then worlds in which I choose Y are logically incoherent

From an omniscient point of view, or from your point of view? The typical agent has imperfect knowledge of both the inputs to their decision procedure, and the procedure itself. So long as an agent treats what it thinks is happening as only one possibility, there is no contradiction, because possible-X is always compatible with possible not-X.
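Spelling that out in modal terms (my formalisation, not anything from the original exchange):

$$X \wedge \neg X \vdash \bot \qquad \text{but} \qquad \Diamond X \wedge \Diamond\neg X \nvdash \bot$$

An agent that only ever asserts $\Diamond X$ ("for all I know, X") can suppose $\neg X$ without incoherence.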

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T19:44:50.182Z · score: 1 (1 votes) · LW · GW

Those are the conditions under which counterfactuals are flat out impossible. But we have plenty of motivation to consider hypotheticals, and we don't generally know how possible they are.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T18:58:11.468Z · score: 1 (1 votes) · LW · GW

You are assuming a very strong set of conditions: that determinism holds, that the agent has perfect knowledge of its source code, and that it is compelled to consider hypothetical situations in maximum resolution.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T18:34:59.517Z · score: 3 (2 votes) · LW · GW

They are not logically incoherent in themselves. They are inconsistent with what actually happened. That means that if you try to bundle the hypothetical, the logical counterfactual, in with your model of reality, the resulting mishmash will be inconsistent. But the resulting mishmash isn't the logical counterfactual per se.

We can think about counterfactuals without our heads exploding. That is the correct starting point. How is that possible? The obvious answer is that consideration of hypothetical scenarios takes place in a sandbox.
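A minimal sketch of the sandboxing idea in Python (the names `world_model`, `intervention` and `entail` are mine, purely illustrative):

```python
import copy

def evaluate_counterfactual(world_model, intervention, entail):
    """Consider a hypothetical without corrupting the real model:
    suppose the premise in a *copy*, derive consequences there,
    and throw the copy away afterwards."""
    sandbox = copy.deepcopy(world_model)  # the isolation is the whole trick
    sandbox.update(intervention)          # suppose the counterfactual premise
    return entail(sandbox)                # consequences stay inside the sandbox

# The real model never has to hold "X" and "not-X" at once,
# so no inconsistent mishmash ever forms.
```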

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T17:55:05.767Z · score: 1 (1 votes) · LW · GW

Under determinism, you should be a nonrealist about real counterfactuals, but there is still no problem with logical counterfactuals. So what is "the problem of logical counterfactuals"?

Comment by tag on Alignment as Translation · 2020-04-01T17:42:28.675Z · score: 1 (1 votes) · LW · GW

Until you hit a hard limit, like lack of resources.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T17:33:59.437Z · score: 1 (1 votes) · LW · GW

Without some assumption similar to “free will” it is hard to do any decision theory at all, as you can’t compare different actions; there is only one possible action.

Under determinism, there is only one actually possible action, and that doesn't stop you comparing hypothetical actions. Logical possibility =/= real possibility. Since logical possibilities are only logical possibilities, no sincere assumption of real free will is required.

Since you are invariably in a far from omniscient state about both the world and your own inner workings, you are pretty much always dealing with hypotheses, not direct insight into reality.

Comment by tag on Two Alternatives to Logical Counterfactuals · 2020-04-01T16:55:47.891Z · score: 1 (1 votes) · LW · GW

this “what would have happened” world is logically incoherent.

There is a logical contradiction between the idea that your actions are determined and the idea that you could have acted differently under the exact same circumstances. There is no such problem if you do not assume determinism, meaning that the "problem" of logical counterfactuals is neither unavoidable nor purely logical -- it is not purely logical because a metaphysical assumption, an assumption about the way reality works, is involved.

The assumption of determinism is implicit in talking of yourself as a computer programme, and the assumption of indeterminism is implicit in talking about yourself as nonetheless having free will.

A purely logical counterfactual, a logical counterfactual properly so-called, is a hypothetical state of affairs, where a different input or set of preconditions is supposed, and a different, also hypothetical output or result obtains. Such a counterfactual is logically consistent -- it just isn't consistent with what actually occurred.

According to counterfactual nonrealism, there is no fact of the matter about what “would have happened” had a different action been taken.

People calculate logical counterfactuals all the time. You can figure out what output a programme will give in response to an input it has never received by looking at the code. But note that that is a purely epistemological issue. There may be a separate, ontological, not epistemological, issue about real counterfactuals. If you have good reason to believe in determinism, which you don't, you should disbelieve in real counterfactuals. But that says nothing about logical counterfactuals. So long as some hygiene is exercised about the epistemological/ontological distinction and the logical/real distinction, there is no problem.
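A toy example of that epistemological point (ordinary Python, chosen only for concreteness):

```python
def classify(n):
    return "even" if n % 2 == 0 else "odd"

# Suppose classify() has only ever actually been called with 2 and 4.
# By reading the code we can still settle the logical counterfactual
# "what would classify(7) return?" -- the answer is "odd" -- even though
# 7 never really was the input.
print(classify(7))  # "odd"
```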

The apparent nondeterminism is, then, only due to the epistemic limitation of the agent at the time of making the decision, a limitation not faced by a later version of the agent (or an outside agent) with more computation power.

Note that problems agents have in introspecting their own decision making are not problems with counterfactuals (real or logical) per se.

This leads to a sort of relativism: what is undetermined from one perspective may be determined from another.

It doesn't lead to serious relativism, because the perspectives are asymmetrical. The agent that knows more is more right.

A problem that comes up is that of “spurious counterfactuals”

A "spurious" counterfactual is just a logical, as opposed to real, counterfactual. The fact that it could never have occurred means it was never a real counterfactual.

Comment by tag on Alignment as Translation · 2020-04-01T08:52:50.266Z · score: 1 (1 votes) · LW · GW

But it can't be approached like e^−x either, because the marginal cost of hardware starts to rise once you get low on resources.

Edit:

Exponential decay looks like this

Whereas the marginal cost curve looks like this
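Since the linked images may not survive here, a rough sketch of the intended contrast; the functional form of the marginal cost curve is my assumption, chosen only to make the qualitative point:

```python
import math

R = 100.0  # assumed total resource stock (purely illustrative)

def decay(x):
    return math.exp(-x)    # e^-x: keeps falling, never turns upward

def marginal_cost(x):
    return 1.0 / (R - x)   # assumed form: nearly flat early, blows up as x -> R

for x in (1, 50, 90, 99):
    print(x, decay(x), marginal_cost(x))
# decay() shrinks toward zero forever; marginal_cost() rises steeply
# once the remaining resources (R - x) become scarce.
```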

Comment by tag on Alignment as Translation · 2020-03-31T10:50:09.874Z · score: 1 (1 votes) · LW · GW

It takes more than one atom to represent one atom computationally, so the limit can't be reached. Really, the issue is going beyond human cognitive limitations.

Comment by tag on Solipsism is Underrated · 2020-03-31T10:32:08.368Z · score: 1 (1 votes) · LW · GW

So evidence contrary to materialism isn't evidence, it's a delusion.

Comment by tag on Solipsism is Underrated · 2020-03-30T17:56:22.075Z · score: 1 (1 votes) · LW · GW

We don't have a uniformly poor understanding: we understand some aspects of mentality much better than others.

Comment by tag on Alignment as Translation · 2020-03-30T17:55:10.830Z · score: 1 (1 votes) · LW · GW

If you know in general that a low level AI will follow the rules it has been given, you don't need to keep re-checking.

Comment by tag on Solipsism is Underrated · 2020-03-30T17:44:46.003Z · score: 1 (1 votes) · LW · GW

“lack of a satisfying explanatory solution” does not imply low likelihood if you think that the explanatory solution exists but is computationally hard to find (which in fact seems pretty reasonable).

OTOH, you should keep lowering the probability of ever finding a satisfactory explanation the longer you keep failing to find one.

Comment by tag on Solipsism is Underrated · 2020-03-30T17:41:12.486Z · score: 4 (3 votes) · LW · GW

You haven't addressed the "what's so special about you" objection.

Comment by tag on Solipsism is Underrated · 2020-03-30T17:33:59.445Z · score: 1 (1 votes) · LW · GW

I am not sure where the instinct that consciousness can’t be materialistic comes from, although I would suspect that it might come from a large amount of uncertainty, and an inability to imagine any specific answer that you would consider a good explanation. Wherever this instinct comes from, I don’t think it is reliable.

The unstated background assumption of the article you are responding to is that the Hard Problem is real and hard. It is certainly hard to dispute that we have made no progress in writing algorithms that experience sensations or feelings. Whether we ever will is another matter, but impossibility arguments exist.

I don’t know how your brain works either, but I am equally sure it is made of (atoms, quantum waves, strings or whatever).

Is that a falsifiable hypothesis? What would falsify it?

Comment by tag on Solipsism is Underrated · 2020-03-30T17:20:36.593Z · score: 1 (1 votes) · LW · GW

Nice post. I tend to think that solipsism of the sort you describe (a form of “subjective idealism”) ends up looking almost like regular materialism, just phrased in a different ontology. That’s because you still have to predict all the things you observe, and in theory, you’d presumably converge on similar “physical laws” to describe how things you observe change as a materialist does.

Which is to say that idealistic instrumentalism is as complex as materialistic instrumentalism. The complexity of the minimum ruleset you need to predict observation is the same in each case. But that doesn't mean the complexity of materialist ontology is the same as the complexity of idealist ontology. Idealism asserts that mentality, or some aspect of it, is fundamental, whereas materialism says it is all a complex mechanism. So idealism is asserting a simpler ontology. Which is itself pretty orthogonal to the question of how much complexity you need to predict observation. (Of course, the same confusion infects discussions of the relative complexity of different interpretations of quantum mechanics.)

Anyway, I find these questions to be some of the most difficult in philosophy, because it’s so hard to know what we’re even talking about. We have to explain the datum that we’re conscious, but what exactly does that datum look like? It seems that how we interpret the datum depends on what ontology we’re already assuming. A materialist interprets the datum as saying that we physically believe that we’re conscious, and materialism can explain that just fine. A non-materialist insists that there’s more to the datum than that.

Yes. It's hard to agree on what evidence is, meaning that it is hard to do philosophy, and impossible to do philosophy algorithmically.

Comment by tag on Programming: Cascading Failure chains · 2020-03-29T23:00:52.838Z · score: 1 (1 votes) · LW · GW

That ends hanging in the air, rather.

How does Haskell compare to better dynamic languages? Should we use Haskell for everything?

Comment by tag on Solipsism is Underrated · 2020-03-29T11:59:39.097Z · score: 1 (1 votes) · LW · GW

Consider the limiting case of describing minds in terms of algorithms, you scan a philosophers brain, put the data into a computer, and predict exactly their discussion on qualia. Once you have a complete understanding of why the philosopher talks about qualia, if the philosopher has any info about qualia at all, the process by which they gained that info should be part of the model.

That isn't an understanding of a philosopher's brain, it's an artificial construct that produces the same outputs given the same inputs. The function of the human kidney can be replaced by a kidney dialysis machine, but that does not mean kidneys don't exist, nor does it mean that you can understand how kidneys work by looking at dialysis machines.

Comment by tag on Alignment as Translation · 2020-03-28T17:06:17.104Z · score: 1 (1 votes) · LW · GW

>Thanks to ongoing technology changes, both of these constraints are becoming more and more slack over time - compute and information are both increasingly abundant and cheap.

>Immediate question: what happens in the limit as the prices of both compute and information go to zero?

>Essentially, we get omniscience: our software has access to a perfect, microscopically-detailed model of the real world.


Nope. A finite-sized computer cannot contain a fine-grained representation of the entire universe. Note that, while the *marginal* cost of processing and storage might approach zero, that doesn't mean that you can have infinite computers for free, because marginal costs rise with scale. It would be extremely *expensive* to build a planet-sized computer.
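A back-of-the-envelope version of that counting argument (the atom count is the standard order-of-magnitude estimate for the observable universe; the bits-per-atom figures are illustrative assumptions):

```python
atoms_in_universe = 1e80          # standard order-of-magnitude estimate
bits_to_describe_one_atom = 100   # assumed: position, momentum, internal state
bits_stored_per_atom = 1          # generous assumption for the hardware

atoms_of_computer_needed = atoms_in_universe * bits_to_describe_one_atom / bits_stored_per_atom
print(atoms_of_computer_needed > atoms_in_universe)  # True
# The representation would need more atoms than the universe contains,
# so a microscopically detailed model cannot fit inside it.
```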

Comment by tag on Alignment as Translation · 2020-03-28T17:00:16.310Z · score: 1 (1 votes) · LW · GW

I am assuming that the AI that engages in out-of-the-box thinking is not fast, and that the conjunction of fast *and* unpredictable is the central problem.

The market will demand AI that's faster than humans, and at least as capable of creative, unpredictable thinking. However, the same AI does not have to be both. This approach to AI safety is copied from a widespread organisational principle, where the higher levels do the abstract strategic thinking, the least predictable stuff, the middle levels do the concrete, tactical thinking, and the lowest levels do what they are told. The fastest and most fine-grained actions are at the lowest level. The higher level can only communicate with the lower levels by communicating an amended strategy or policy: they are not able to interrupt fine-grained decisions, and only hear about fine-grained actions after they have happened.

I have given an abstract description of this organising principle because there are multiple concrete examples: large businesses, militaries, and the human brain/CNS. Businesses already use fast but not very flexible systems to do things faster than humans, notably in high-frequency trading. The question is whether more advanced AIs will be responsible for fine-grained trading decisions, the all-in-one approach, or whether advanced AI will substitute for or assist business analysts and market strategists.

A standard objection to Tool AI is that having a human check all the TAI's decisions would slow things up too much. The above architecture allows an alternative, where human checking occurs between levels, as in the sketch below. In particular, communication from the highest level to the lower ones is slow anyway. The main requisite for this approach to AI safety is a human-readable communications protocol.
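A minimal sketch of that layered architecture in Python (class and field names are mine, purely illustrative): the slow strategic layer emits a human-readable policy, the human check happens between levels, and the fast layer only ever executes the vetted policy.

```python
class Strategist:
    """Slow, creative top level: proposes strategy, never acts directly."""
    def propose_policy(self, situation):
        # Deliberately a human-readable structure -- the human-readable
        # communications protocol is the main requisite of the approach.
        return {"max_position": 100, "halt_if_loss_exceeds": 0.02}

def human_approves(policy):
    """The human check sits between levels, not inside the fast loop."""
    print(f"Proposed policy: {policy}")
    return True  # stand-in for an actual human review

class Executor:
    """Fast, predictable bottom level: follows the vetted policy verbatim."""
    def __init__(self, policy):
        self.policy = policy
    def act(self, event):
        if event["loss"] > self.policy["halt_if_loss_exceeds"]:
            return "halt"
        return "trade_within_limits"

policy = Strategist().propose_policy(situation=None)
if human_approves(policy):           # slow path: checked before deployment
    executor = Executor(policy)      # fast path: no human in the loop
    print(executor.act({"loss": 0.05}))  # "halt"
```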


"Making it predictable" at a high level requires translating high-level "predictability" into some low-level specification, which just brings us back to the original problem: translation is hard.

If you are checking your high level AI as you go along, you need a high level language that is human comprehensible.

Comment by tag on Occam's Guillotine · 2020-03-28T16:07:03.648Z · score: 1 (1 votes) · LW · GW
Slugs and stars are both real, but slugs can't understand stars. In fact, they can't understand slugs.

Dyson's Law of Artificial Intelligence

"Anything simple enough to be understandable will not be complicated enough to behave intelligently, while anything complicated enough to behave intelligently will not be simple enough to understand."

Comment by tag on Alignment as Translation · 2020-03-27T13:43:34.820Z · score: 1 (1 votes) · LW · GW
I don't think this is realistic if we want an economically-competitive AI. There are just too many real-world applications where we want things to happen which are fast and/or irreversible. In particular, the relevant notion of "slow" is roughly "a human has time to double-check", which immediately makes things very expensive.

There's already an answer to that: you separate "fast" from "unpredictable". The AI that does things fast is not the AI that engages in out-of-the-box thinking.

Comment by tag on Adding Up To Normality · 2020-03-25T23:01:05.906Z · score: 1 (1 votes) · LW · GW

It all adds up to normality.

That seems to me to be a superposition of two different arguments.

There's a philosophy-of-science claim that any theory that isn't obviously wrong must be compatible with all observations to date.

And there's a kind of normative claim that you shouldn't change your behaviour a lot when you switch from one ontology to another.

The sameness of predicted observations is just the sameness of predicted observations, not everything. Interpretations of quantum mechanics, to be taken seriously, must agree on the core set of observations, but they can and do vary in their ontological implications. They have to differ about something, or they wouldn't be different interpretations.

But it is entirely possible for ethics to vary with ontology. It is uncontroversial that the possibility of free will impacts ethics, at the theoretical level. Why shouldn't the possibility of many worlds?

Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!

It may not be necessarily true, but it is not necessarily false. It is not absurd; it is a reasonable thing to worry about ... at the theoretical level.

But that doesn't contradict the other version of "it all adds up to normality", because that claim is a piece of practical advice. Although it seems possible for deep theoretical truths of metaphysics to impact ethics, the connection is too complex and doubtful to be allowed to affect day-to-day behaviour.

Comment by tag on Occam's Guillotine · 2020-03-25T18:17:06.678Z · score: 1 (3 votes) · LW · GW

Once we have a map of the connectome we’ll be well on our way to really understanding how brains work. Psychology should absolutely be treated as a hard science.

Now? When the promissory note has not been cashed?

In any case, that is not the main problem. The main problem is that "X is a real thing in reality" doesn't in any way guarantee that X is comprehensible to some entity Y. Slugs and stars are both real, but slugs can't understand stars. In fact, they can't understand slugs. We don't know whether we are smart enough to understand ourselves.

And cognitive limitations aren't the only problem. Epistemology has inherent problems, such as the problem of unfounded foundations, which can't be solved by throwing compute at them.

Comment by tag on Occam's Guillotine · 2020-03-25T17:10:40.989Z · score: 1 (3 votes) · LW · GW

Unless there is. There are many theoretical arguments for why psychology and ethics can't be solved by the hard sciences, and there is a dearth of practical evidence that they can.

Simply stating such a controversial claim isn't proof, and asserting it on Korzybski's authority isn't proof either.

Comment by tag on Authorities and Amateurs · 2020-03-25T16:54:33.684Z · score: 1 (1 votes) · LW · GW

Proscribe or prescribe?

"Prescribe means "to set down authoritatively for direction" or "to set down a medical procedure in order to cure or alleviate symptoms." The noun form is prescription, that is, something prescribed. Proscribe means "prohibit or limit" or "ostracize or avoid in a social sense"

Comment by tag on Occam's Guillotine · 2020-03-25T16:41:59.810Z · score: 1 (1 votes) · LW · GW

"You can’t design a bridge without actually knowing the tensile strength of steel and the compressive strength of concrete, these facts are not open to interpretation. Designing a society is no different [..]"

Distinguish necessity and sufficiency. There may be some objective truths that can be leveraged for social engineering, but it's obvious that designing a society also involves solving questions outside the hard sciences, going from social psychology to ethics. You're begging an enormous question there.

Comment by tag on The questions one needs not address · 2020-03-22T19:37:33.500Z · score: 1 (1 votes) · LW · GW

Most of your undefinable terms have multiple definitions. That's a problem, but it's not the same problem as having no idea what you are talking about.

Comment by tag on Programmers Should Plan For Lower Pay · 2020-03-19T11:50:06.728Z · score: 1 (1 votes) · LW · GW

If you measure programmer productivity by the number of jobs replaced by their code, not lines of code, then programmer productivity is almost unlimited.

Comment by tag on A critical agential account of free will, causation, and physics · 2020-03-17T12:55:52.215Z · score: 1 (1 votes) · LW · GW

Let R be a relation. Then “R(X, Y)” contradicts “not R(X, Y)”.

But the relativist can just go to R(X, Y, Z). It's a general counterargument.
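Spelled out (my notation): a contradiction requires both claims to share exactly the same arguments, and the relativist simply adds one more:

$$R(X,Y) \wedge \neg R(X,Y) \vdash \bot \qquad \text{but} \qquad R(X,Y,Z_1) \wedge \neg R(X,Y,Z_2) \nvdash \bot \quad (Z_1 \neq Z_2)$$

Since any alleged contradiction can be dissolved by indexing the relation to a fresh parameter, the move works against everything, which is what makes it a general counterargument rather than a specific defence.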

Special relativity is falsifiable even though it defines position/velocity relationally.

As discussed, relativity isn't relativism.

...the falsifier must be some cognitive process that can make observations. Experimental results can only falsify theories if those results are observed by some cognitive process that can conceptualize the theory. Unobservable experimental results are of no use.

None of that ^^^ supports this VVV ...

Falsifiability, properly understood, is subjective,[...]

...because "subjective" doesn't mean "done by some kind of agent".

Yes, the cognitive process may be a standardized intersubjective[...]

Indeed.

Comment by tag on A critical agential account of free will, causation, and physics · 2020-03-16T13:09:42.862Z · score: 1 (1 votes) · LW · GW

By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to critical rationalism.

Critical rationalism is an awkward bedfellow to relativism. Central examples of criticisms tend to involve contradiction, but relativists can reject contradictions on the basis that A is indexed to X but Not-A is indexed to Y.

Comment by tag on A conversation on theory of mind, subjectivity, and objectivity · 2020-03-15T14:37:07.239Z · score: 1 (1 votes) · LW · GW

Jessica: We’re now talking about the sensation of the flavor of the chocolate though. Is this really that different from talking about “that car over there”? I don’t see how some entities can, in a principled way, be classified as objective and some as subjective

Consider the Mary's room argument. If you know everything objective about a car, you know everything about it. But Mary knows everything objective about your brain without knowing how red looks to you.

Like, in talking about “X” I’m porting something in my mental world-representation into the discursive space

Distinguish cloning and referring:

When you talk about "how chocolate tastes to Rocko", you are referring to how chocolate tastes to Rocko, but you are not instantiating his neural patterns in your brain, so you don't know how chocolate tastes to him.

Reference is very far from complete knowledge. To use the computer analogy, a reference (pointer, key, handle, etc.) allows you to access an object, but doesn't tell you everything about it -- you still have to query it. A reference is generally the minimum amount of information that distinguishes one entity from another.
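The computing analogy, concretely (ordinary Python; the dict merely stands in for the world):

```python
# A key is enough to *refer* to an entry without containing its contents.
world = {"rocko:taste-of-chocolate": {"sweetness": 7, "bitterness": 2}}

ref = "rocko:taste-of-chocolate"  # distinguishes the entity, describes nothing
print(ref in world)               # the reference succeeds...
print(world[ref])                 # ...but the contents require a separate query
```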

Jessica: Okay, I agree with this sort of mental/outside-mental distinction, and you can define subjective/objective to mean that.

Or you can use effable/ineffable, or "requires personal instantiation".

Comment by tag on On the ontological development of consciousness · 2020-03-15T14:20:11.533Z · score: 1 (1 votes) · LW · GW

Agree regarding high order thought, but “qualia” seems to mean the contents of the subjective point of view? Based on SEP article. “There is something it is like for me to undergo each state, some phenomenology that it has. Philosophers often use the term ‘qualia’ (singular ‘quale’) to refer to the introspectively accessible, phenomenal aspects of our mental lives.”

I can't tell whether that is agreeing with my point or not.

Comment by tag on The absurdity of un-referenceable entities · 2020-03-15T12:33:10.725Z · score: 1 (1 votes) · LW · GW

For a reference to a class to be meaningful to some agent, it must in some way be related to that agent, e.g. to their observations/actions.

I find that pretty vague. It doesn't have much meaning for this agent.

Comment by tag on The absurdity of un-referenceable entities · 2020-03-15T12:16:41.367Z · score: 1 (1 votes) · LW · GW

But, a slight amount of further reflection betrays the absurdity involved in asserting the possible existence of un-referenceable entities. “Un-referenceable entities” is, after all, a reference

We have an answer to that: you can reference a class of entities, as a class, that can't be referenced individually.

Part of the absurdity in saying that the physical world may be un-referenceable

Who do you think is saying that? I only know of assertions that parts of it -- in other decoherent branches, or over cosmological horizons, or inside event horizons -- are inaccessible.

Comment by tag on A conversation on theory of mind, subjectivity, and objectivity · 2020-03-15T11:57:06.529Z · score: 1 (1 votes) · LW · GW

Physics can’t say what an epistemic component is.

Physics doesn't say what shopping centres are... there's a difference between being unable to solve a problem in principle and leaving details to be filled in.

Epistemically, these observations can’t be considered “already-physical”, that’s assuming the conclusion

You also shouldn't assume they are non physical. In fact, observers and observations can be treated in a neutral way that doesn't beg any metaphysical questions.

Comment by tag on Puzzles for Physicalists · 2020-03-15T11:32:25.804Z · score: 1 (1 votes) · LW · GW

“Fundamental entity” is a reference and references are deictic

You haven't shown that every reference is deictic. In particular, you haven't shown that references to classes are deictic.

Still needs an account of what is a recording device, in physicalist terms

I don't see why that would be a major problem.

Paradigmatic functions

That's gainsaying my point. I say that "function" has several barely related meanings; you say there is a single "paradigmatic" meaning.

Comment by tag on Puzzles for Physicalists · 2020-03-15T11:21:34.452Z · score: 1 (1 votes) · LW · GW

it isn’t meaningful to say H20 theory is true independent of the theory’s connections with already-known-about phenomena such as pre-chemistry water.

I don't know who you think is doing that.