Posts

TAG's Shortform 2020-08-13T09:30:22.058Z · score: 4 (1 votes)

Comments

Comment by tag on What are examples of Rationalist fable-like stories? · 2020-09-28T17:26:55.465Z · score: 2 (2 votes) · LW · GW

Hanson:

I make the analogy of that with a monkey trap.

OTOH, Chesterton's fence.

Comment by tag on What are examples of Rationalist fable-like stories? · 2020-09-28T17:25:35.841Z · score: 2 (2 votes) · LW · GW

The literal monkey trap is probably a myth.

https://www.google.com/amp/s/mikepalma.wordpress.com/2010/09/22/spider-monkey-syndrome-reality-or-myth/amp/

Comment by tag on A Priori · 2020-09-28T16:41:51.556Z · score: 1 (1 votes) · LW · GW

My argument is that the predictions are canonical representation of the belief, so it’s fine if the semantics say things about the territory that the predictions can’t say, as long as everything it says that does not affect the predictions is meaningless.

  1. How can you say something, but say something meaningless?

  2. What does not saying anything (meaningful) about the territory buy you? What's the advantage?

Realists are realists because they place a terminal value on knowing what the territory is, above and beyond making predictions. They can say what the advantage is ... to them. If you don't personally value knowing what the territory is, that lack of value need not apply to others.

The semantics of gravity theory says that the force that pulls objects together over long range based on their mass is called “gravity”. If you call that force “travigy” instead, it will cause no difference in the predictions

Travigy means nothing, or it means gravity. Either way, it doesn't affect my argument.

You don't seem to understand what semantics is. It's not just a matter of spelling changes or textual changes. A semantic change doesn't mean that two strings fail strcmp(); it means that terms have been substituted with meaningful terms that mean something different.
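A minimal sketch of the distinction (in Python rather than C's strcmp(), with hypothetical function names): two texts can differ byte-for-byte while meaning the same thing, and a textual near-twin can mean something different.

```python
# Illustrative sketch only: textual difference is not semantic difference.

def weight_a(mass, g):
    return mass * g          # original text

def weight_b(m, gravity):
    return m * gravity       # textually different, semantically identical

def weight_c(mass, g):
    return mass + g          # textually similar, semantically different

assert weight_a(2, 9.8) == weight_b(2, 9.8)  # renaming changes no predictions
assert weight_a(2, 9.8) != weight_c(2, 9.8)  # term substitution changes meaning
```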

And I claim that the “center of the universe” is similar—it has no meaning in the territory

"There is a centre of the universe" is considered false in modern cosmology. So there is no real thing corresponding to the meaning of string "centre of the universe". Which is to say that the string "centre of the universe" has a meaning , unlike the string "flibble na dar wobble".

If it had any effect at all on the territory, it should have somehow affected the predictions.

The territory can be different ways that produce the same predictions.

Comment by tag on A Priori · 2020-09-26T12:45:12.174Z · score: 1 (1 votes) · LW · GW

I only change the title, I don’t change anything

Maybe you do, but it's my thought experiment!

The semantics are still very important as a compact representation of predictions.

That isn't what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.

Comment by tag on David Deutsch on Universal Explainers and AI · 2020-09-26T09:30:23.058Z · score: 1 (1 votes) · LW · GW

But that creates its own problem: there's no longer a strong reason to believe in Universal Explanation. We don't know that humans are universal explainers, because if there is something a human can't think of ... well, a human can't think of it! All we can do is notice confusion.

Comment by tag on A Priori · 2020-09-24T22:27:59.361Z · score: 1 (1 votes) · LW · GW

I can understand that your revised scenario is unverifiable, by understanding the words you wrote, i.e. by grasping their meaning. As usual, the claim that some things are unverifiable is parasitic on the existence of a kind of meaning that has nothing to do with verifiability.

Comment by tag on A Priori · 2020-09-24T22:09:30.975Z · score: 1 (1 votes) · LW · GW

The Quotation is not the Referent. Just because the text describing them is different doesn’t mean the assertions themselves are different.

...because exact synonymy is possible. Exact synonymy is also rare, and it gets less probable the longer the text is.

You need to be clear whether you are claiming that two theories are the same because their empirical content is the same, or because their semantic content is the same.

just like describing f(x)=(x+1)^2 as g(x)=x^2+2x+1 does not make it a different function.

Those are different...computationally. They would take a different amount of time to execute.
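A minimal sketch of that computational difference (illustrative Python; nothing here is from the original thread):

```python
# The two expressions agree on every input (same "predictions"),
# but perform different sequences of arithmetic operations.

def f(x):
    return (x + 1) ** 2        # one addition, one squaring

def g(x):
    return x ** 2 + 2 * x + 1  # one squaring, one multiplication, two additions

assert all(f(x) == g(x) for x in range(-100, 100))  # extensionally equal
```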

Pure maths is exceptional in its lack of semantics.

F=ma

and

P=IV

...are identical mathematically, but have different semantics in physics.

If A≢B, even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually—how much accurate is it)

If two theories are identical empirically and ontologically, then some mysterious third thing would be needed to explain any difference. But that is not what we are talking about. What we are discussing is your claim that empirical difference is the only possible difference, equivalently that the empirical content of a theory is all its content.

Then the answer to "what further difference could there be" is "what the theories say about reality".

Comment by tag on A Priori · 2020-09-24T21:49:10.319Z · score: 1 (1 votes) · LW · GW

I’m not sure I follow—what do you mean by “didn’t work”? Shouldn’t it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?

If you take a heliocentric theory, and substitute "geocentric" for "heliocentric", you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.

In my own words (:= how I understand it) more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs—but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities should keep being higher.
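(For concreteness, the quoted updating scheme with made-up numbers — a sketch assuming both theories assign identical likelihoods to every observation:)

```python
# Two hypotheses that predict identically keep their prior ordering
# no matter how much shared evidence arrives.

p_simple, p_complex = 0.8, 0.2   # Occam penalty built into the priors

likelihood = 0.9                 # both fit each observation equally well
for _ in range(10):              # ten confirming observations
    joint_s = p_simple * likelihood
    joint_c = p_complex * likelihood
    total = joint_s + joint_c
    p_simple, p_complex = joint_s / total, joint_c / total

print(p_simple, p_complex)       # still 0.8 vs 0.2: the ratio never moves
```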

What does "true" mean when you use it?

A geocentric theory can match any observation, providing you complicate it endlessly.

This discussion is about your claim that two theories are the same iff their empirical predictions are the same. But if that is the case, why does complexity matter?

EY is a realist and a correspondence theorist. He thinks that "true" means "corresponds to reality", and he thinks that complexity matters, because, all other things being equal, a more complex theory is less likely to correspond than a simpler one. So his support of Occam's Razor, his belief in correspondence-truth, and his realism all hang together.

But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, i.e. their predictive power. You are denying that they have any semantic (non-empirical) content, and claiming, as an implication of that, that they "mean" or "say" nothing about the territory. So why would you care that one theory is more complex than another, so long as its predictions are accurate?

Comment by tag on Dach's Shortform · 2020-09-24T11:49:45.181Z · score: 1 (1 votes) · LW · GW

You didn't list "superintelligence is unlikely" among the list of possible explanations.

Comment by tag on David Deutsch on Universal Explainers and AI · 2020-09-24T11:41:02.439Z · score: 3 (2 votes) · LW · GW

Quantitative limitations amount to qualitative limitations in this case.

The only truly universal TM has infinite memory and is infinitely programmable. Neither is true of humans.

We can't completely wipe and reload our brains, so we might be forever constrained by some fundamental hardcoding, something like Chomskyan innate linguistic structures, or Kantian perceptual categories.

And having quantitative limitations puts a ceiling on which concepts and theories we can entertain. Which is effectively a qualitative limit.

AIs are also finite, although they might have less restrictive limits.

There's no jump to universality because there is no jump to infinity.

Comment by tag on This Territory Does Not Exist · 2020-09-23T10:19:25.194Z · score: 1 (1 votes) · LW · GW

Appeal to personal intuition.

Comment by tag on This Territory Does Not Exist · 2020-09-22T08:36:00.597Z · score: 1 (1 votes) · LW · GW

"appear to be"

Comment by tag on This Territory Does Not Exist · 2020-09-22T08:24:53.279Z · score: 1 (1 votes) · LW · GW

The multiverse argument is:

  1. Ontological propositions are unverifiable

  2. Unverifiable propositions are meaningless.

Premise 2 would apply to any unverifiable statement.

Comment by tag on The Short Case for Verificationism · 2020-09-21T17:49:35.744Z · score: 1 (1 votes) · LW · GW

Read back, there's an even number of negatives.

Comment by tag on This Territory Does Not Exist · 2020-09-21T17:46:07.108Z · score: 1 (1 votes) · LW · GW

This isn’t true—I’ve made numerous arguments for this claim not purely based on intuition.

I didn't say that the only argument you made was based on intuition. I said that you made an argument based on intuition, i.e. one of your arguments was.

the arguments for this conclusion only apply to ontological statements.

Why? Because your intuition doesn't tell you that an undecidable statement is meaningless unless it is ontological?

Well, maybe it doesn't; after all, anyone can intuit anything. That's the problem with intuition.

The early verificationists had a different problem: they argued for the meaninglessness of metaphysical statements systematically, but ran into trouble when the verificationist principle turned out to be meaningless in its own terms.

Comment by tag on A Priori · 2020-09-20T14:37:26.319Z · score: 1 (1 votes) · LW · GW

If A and B assert different things, we can test for these differences.

You keep assuming verificationism in order to prove verificationism.

They assert different things because they mean different things, because the dictionary meanings are different.

In the thought experiment we are considering, the contents of the box can never be tested. Nonetheless $10 and $100 mean different things.

Comment by tag on A Priori · 2020-09-20T14:27:08.249Z · score: 1 (1 votes) · LW · GW

It would be a theory that didn't work, because you only changed one thing.

Comment by tag on A Priori · 2020-09-20T11:57:00.143Z · score: 1 (1 votes) · LW · GW

And if the set of universes where statement A is true is identical to the set of universes where statement B is true

They're not, because A and B assert different things.

Comment by tag on A Priori · 2020-09-20T11:48:51.748Z · score: 1 (1 votes) · LW · GW

Dictionaries don't define complex scientific theories.

Our complicated, bad, wrong, neo-geocentric theory is still a geocentric theory.

Therefore it makes different assertions about the territory than heliocentrism.

Comment by tag on A Priori · 2020-09-19T19:57:46.003Z · score: 1 (1 votes) · LW · GW

Whether something is empirically unknowable forever is itself unknowable ... it's an acute form of the problem of induction.

it doesn’t matter what's inside it

But that isn't quite the same as saying that statements about what's inside are meaningless. A statement can be meaningful without mattering. And you have to be able to interpret the meaning, in the ordinary sense, in order to be able to notice that it doesn't matter.

Comment by tag on A Priori · 2020-09-19T15:55:15.550Z · score: 1 (1 votes) · LW · GW

Semantically and ontologically. The dictionary meanings of the words heliocentric and geocentric are opposites, so they assert different things about the territory.

Note that this is the default hypothesis. Whatever I just called "dictionary meaning" is what is usually called "meaning" simpliciter.

Attempts to resist this conclusion are based on putting forward non-standard definitions of "meaning", which need to be argued for, not just assumed.

Comment by tag on This Territory Does Not Exist · 2020-09-18T17:41:50.926Z · score: 1 (1 votes) · LW · GW

You can respond by taking ontological terms as primitive,

That's not what I said. I said that you made a claim based on nothing but intuition, and that a contrary claim based on nothing but intuition is neither better nor worse than it.

Every one of the arguments I’ve put forward clearly applies only to the kinds of ontological statements

The argument that if it has no observable consequences, it is meaningless does not apply only to ontological statements.

Comment by tag on A Priori · 2020-09-18T17:13:09.941Z · score: 1 (1 votes) · LW · GW

You’ll need more than just epicycles to make the geocentric model yield accurate predictions

It takes more than literal epicycles, but there are any number of ways of complicating a theory to meet the facts.

But still—if we could, it will not be different than a correct model

Of course it is different. Heliocentrism says something different about reality than geocentrism.

Comment by tag on This Territory Does Not Exist · 2020-09-18T10:07:56.317Z · score: 1 (1 votes) · LW · GW

My problem with ontological statements is they don’t appear to be meaningful.

You certainly started by making a direct appeal to your own intuition. Such an argument can be refuted by intuiting differently.

Those reasons apply to ontological statements and not to other statements.

You don't have any systematic argument to that effect. Other verificationists might, but you don't.

There's a tradition of justifying the verification principle as an analytical truth, for instance. Your reinvention of verificationism is worse than the original.

Comment by tag on The Short Case for Verificationism · 2020-09-18T10:04:18.088Z · score: 1 (1 votes) · LW · GW

don’t significantly undermine the ability to make claims about ontology.

Comment by tag on This Territory Does Not Exist · 2020-09-17T20:11:57.171Z · score: 1 (1 votes) · LW · GW

But you shouldn't apply your beliefs to ontological statements. If the problem with ontological statements is that they don't constrain beliefs, it's unreasonable to exempt other statements that don't constrain beliefs.

Comment by tag on The Short Case for Verificationism · 2020-09-17T19:32:34.685Z · score: 1 (1 votes) · LW · GW

Since the argument does not mention probability, it doesn't refute the counterargument that unlikely scenarios involving simulations or multiple universes don't significantly undermine the ability to make claims about ontology.

Comment by tag on The Short Case for Verificationism · 2020-09-17T18:14:01.549Z · score: 1 (1 votes) · LW · GW

I don't see any mention of probability.

Comment by tag on avturchin's Shortform · 2020-09-17T18:06:44.156Z · score: 1 (1 votes) · LW · GW

The only ontology that is required is Bayesianism,

Bayesianism isn't an ontology.

Comment by tag on A Priori · 2020-09-17T17:50:12.789Z · score: 1 (1 votes) · LW · GW

Indeed, it seems there is no way to justify Occam’s Razor except by appealing to Occam’s Razor, making this argument unlikely to convince any judge who does not already accept Occam’s Razor.

That's very much not proven. There are multiple arguments for Occam's Razor (see the Wikipedia page), most or all of which aren't circular.

Comment by tag on A Priori · 2020-09-17T17:24:21.609Z · score: 1 (1 votes) · LW · GW

If two explanations yield the exact same predictions, then they are different representations of the same belief

Not at all. A basically false explanation, such as a geocentric model of the solar system, can predict as accurately as a basically true model, so long as you are allowed to add endless numbers of epicycles. That's one of the basic motivations for using Occam's Razor. If predictive power and ontological content were identical, there would be no need for it.

Comment by tag on This Territory Does Not Exist · 2020-09-17T17:15:42.163Z · score: 1 (1 votes) · LW · GW

"Beliefs are meaningless unless they constrain expectations" and "beliefs are meaningless if they are about ontology" don't mean the same thing. The verificationist principle isn't about ontology, on the one hand, but still doesn't constrain expectations, on the other.

Comment by tag on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-17T16:27:30.872Z · score: -1 (2 votes) · LW · GW

What I mean by "difference of degree" is "NOT difference of kind".

Comment by tag on Invisible Frameworks · 2020-09-17T11:22:06.984Z · score: 1 (1 votes) · LW · GW

If there is an urgent need to actually build safe AI, as was widely believed 10+ years ago, Marcello's comment makes sense.

Comment by tag on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-17T10:22:44.761Z · score: 1 (1 votes) · LW · GW

The degree of confidence with which it is held is not usual among physicists, and a number have objected to it over the years (generally receiving short shrift).

I can see a difference of degree between LessWrong and Mensa, but only a difference of degree. There is no need to explain why Mensa are contrarian, as if you are completely different.

Comment by tag on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-15T11:09:35.747Z · score: 1 (1 votes) · LW · GW

Are you a physicist?

Comment by tag on The Short Case for Verificationism · 2020-09-15T11:07:04.412Z · score: 1 (1 votes) · LW · GW

Why should something that is possible but low probability have so much impact?

Comment by tag on The Short Case for Verificationism · 2020-09-14T18:25:52.899Z · score: 1 (1 votes) · LW · GW

I don't know what your thoughts on plausibility are. But multiversal theories are straightforwardly excluded by the original version of Occam's Razor, the one about not multiplying entities.

Comment by tag on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-14T09:38:18.691Z · score: 1 (1 votes) · LW · GW

But then the rationality community is full of people who think they have disproved the Copenhagen Interpretation. Maybe the rationality community deals in controversialism, but its controversies aren't so salient to people in the community.

Comment by tag on Why haven't we celebrated any major achievements lately? · 2020-09-13T17:43:48.566Z · score: 1 (1 votes) · LW · GW

Why would an intellectual need an event on top of academic qualifications?

Comment by tag on Notes on good judgement and how to develop it (80,000 Hours) · 2020-09-13T15:33:43.281Z · score: 1 (1 votes) · LW · GW

“Intelligence” as “processing speed” is really flawed, and in-practice intelligence already measures something closer to “good judgement”

Intelligence or IQ?

Comment by tag on The universality of computation and mind design space · 2020-09-13T14:34:58.848Z · score: 1 (1 votes) · LW · GW

This is mostly a quantitative issue.

If you define a UTM as having infinite capacity, then a human is not a UTM.

If you are talking about finite TMs, then a smaller finite TM cannot emulate a larger one. A larger finite TM might be able to emulate a smaller one, but not necessarily. A human cannot necessarily emulate a TM with less total processing power than a human brain, because a human cannot devote 100% of cognitive resources to the emulation. Your brain is mostly devoted to keeping your body going.

This can easily be seen from the history and methodology of programming. Humans have a very limited ability to devote their cognitive resources to emulating low-level computation, so programmers found it necessary to invent high-level languages and tools to minimise their disadvantages and maximise their advantages in terms of higher-level thought and pattern recognition.

Humans are so bad at emulating a processor executing billions of low level instructions per second that our chances of being able to predict an AI using that technique in real time are zero.
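(A toy illustration of the overhead involved — a minimal Turing-machine interpreter in Python; the rule format is invented for this sketch. Every emulated step costs the emulator several host operations, and an emulator that can spare only a fraction of its resources pays that overhead on every step.)

```python
# Sketch only: each simulated step costs a rule lookup, a tape access,
# and bookkeeping -- overhead a resource-limited emulator pays every step.

def run_tm(rules, tape, state="start", steps=5):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    pos = 0
    for _ in range(steps):
        symbol = tape.get(pos, 0)
        if (state, symbol) not in rules:
            break                      # halt: no applicable rule
        state, tape[pos], move = rules[(state, symbol)]
        pos += move
    return tape

# Example: a one-rule machine that writes 1s while moving right.
rules = {("start", 0): ("start", 1, +1)}
print(run_tm(rules, {}))  # {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
```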

Comment by tag on The Short Case for Verificationism · 2020-09-12T20:42:25.369Z · score: 1 (1 votes) · LW · GW

Meeting contrary arguments is part of making an argument. There definitely is such a counterargument, even if you have never heard of it. That's what steelmanning and strong-manning are all about.

Comment by tag on avturchin's Shortform · 2020-09-12T19:54:17.623Z · score: 1 (1 votes) · LW · GW

MWI is more than one theory, because everything is more than one thing.

There is an approach based on coherent superpositions, and a version based on decoherence. These are incompatible opposites.

How simple a version of MWI is depends on how it deals with all the issues, including the basis problem.

Comment by tag on The Short Case for Verificationism · 2020-09-12T19:42:08.182Z · score: 1 (1 votes) · LW · GW

Surely the burden of proof is on someone suggesting that Occam somehow rescues realism

That's not sure at all. Anti realism is quite contentious.

Besides, level IV is arguably simpler than almost any alternative, including a singleton universe.

It can come out as very simple or very complex depending on how you construe Occam.

Comment by tag on The Short Case for Verificationism · 2020-09-12T19:21:13.964Z · score: 1 (1 votes) · LW · GW

I accept Occam, but for me it’s just a way of setting priors in a model using to make predictions.

But you don't have a proof that that is the only legitimate use of Occam. If realists can use Occam to rescue realism, then realism gets rescued.

And part of my argument here is how the mere possibility of large universes destroys the coherency of realism. Even those rejecting simulations would still say it’s possible.

That would be the sort of problem that probabilistic reasoning addresses.

Comment by tag on The Short Case for Verificationism · 2020-09-12T18:13:59.175Z · score: 1 (1 votes) · LW · GW

You are making two claims... about whether ontological indeterminacy holds, and about the meaning of "meaning".

Setting aside the second claim, the first claim rests on an assumption that the only way to judge a theory is direct empiricism. But realists tend to have other theoretical desiderata in mind... a lot would reject simulations and large universes on the basis of Occam's Razor, for instance.

As for the rest... you might have a valid argument that it's inconsistent to believe in both empirical realism and large universes.

Comment by tag on The Short Case for Verificationism · 2020-09-12T17:52:16.667Z · score: 1 (1 votes) · LW · GW

I don't think they do. But that should not be in dispute. The point of a logical argument is to achieve complete clarity about the premises and the way they imply the conclusion.

Comment by tag on The Short Case for Verificationism · 2020-09-12T15:29:21.829Z · score: 1 (1 votes) · LW · GW

There are gaps in the argument, then.

Comment by tag on The Short Case for Verificationism · 2020-09-12T15:09:19.209Z · score: 1 (1 votes) · LW · GW

I think a claim is meaningful if it’s possible to be true and possible to be false. Of course this puts a lot of work on “possible”.

That's not the standard verificationist claim, which is more that things are meaningful if they can be verified as true or false.