Posts

TAG's Shortform 2020-08-13T09:30:22.058Z

Comments

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-18T15:59:50.418Z · LW · GW

If Loosemore’s point is only that an AI wouldn’t have separate semantics for those things, then I don’t see how it can possibly lead to the conclusion that concerns about disastrously misaligned superintelligent AIs are absurd.

If there's one principal argument that an ASI is highly likely to be an existential threat, then refuting that argument refutes the claim that ASI is an existential threat.

Maybe you think there are other arguments.

E.g., consider the “paperclip maximizer” scenario. You could tell that story in terms of a programmer who puts something like “double objective_function() { return count_paperclips(DESK_REGION); }” in their AI’s code. But you could equally tell it in terms of someone who makes an AI that does what it’s told, and whose creator says “Please arrange for there to be as many paperclips as possible on my desk three hours from now.”.

If it obeys verbal commands, you could tell it to stop at any time. That's not a strong likelihood of existential threat. How could it kill us all in three hours?

Loosemore claims that Yudkowsky-type nightmare scenarios are “logically incoherent at a fundamental level”. If all that’s actually true is that an AI triggering such a scenario would have to be somewhat oddly designed,

I'll say! It's logically possible to design a car without brakes or a steering wheel, but it's not likely. Now you don't have an argument in favour of there being a strong likelihood of existential threat from ASI.

Comment by TAG on How factories were made safe · 2021-09-16T23:08:46.183Z · LW · GW

George Bernard Shaw. 1856-1950.

Comment by TAG on A Semitechnical Introductory Dialogue on Solomonoff Induction · 2021-09-16T19:22:10.764Z · LW · GW

"ASHLEY: Uh, but you didn’t actually use the notion of computational simplicity to get that conclusion; you just required that the supply of probability mass is finite and the supply of potential complications is infinite. Any way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears.

"BLAINE: Well, maybe. But it so happens that Yudkowsky did invent or reinvent that argument after pondering Solomonoff induction, and if it predates him (or Solomonoff) then Yudkowsky doesn’t know the source. Concrete inspiration for simplified arguments is also a credit to a theory, especially if the simplified argument didn’t exist before that.

"ASHLEY: Fair enough."

I think Ashley deserves an answer to the objection that "[a]ny way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears", not a claim about who invented what first!
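For what it's worth, the objection can be restated without mentioning Turing machines at all. A minimal sketch, assuming only that hypotheses are indexed by some count of discrete complications $n$:

$$\sum_{n=1}^{\infty} p(n) = 1 \;\Rightarrow\; p(n) \to 0 \text{ as } n \to \infty,$$

so for any $\epsilon > 0$ only finitely many hypotheses can have probability above $\epsilon$, however the complications are counted, surface wheels and gears included.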

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-16T19:00:33.661Z · LW · GW

My reconstruction of Loosemore's point is that an AI wouldn't have two sets of semantics: one for interpreting verbal commands, and another for negotiating the world and doing things.

My reconstruction of Yudkowsky's argument is that it depends on what I've been calling the Ubiquitous Utility Function. If you think of any given AI as having a separate module where its goals or values are hard coded, then the idea that they were hard coded wrong, but the AI is helpless to change them, is plausible.

Actual AI researchers don't believe in ubiquitous UFs, because only a few architectures have them. EY believes in them for reasons unconnected with empirical evidence about AI architectures.
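To make the assumption concrete, here is a minimal sketch (my own illustration, not anything from Yudkowsky or Loosemore) of the kind of architecture the Ubiquitous Utility Function picture presupposes: a single hard-coded goal module that the rest of the agent optimises but never revises.

```python
# Hypothetical illustration only; the names and structure are invented for this sketch.

class HardCodedUtilityAgent:
    def __init__(self, utility_fn):
        # The goals live in one fixed slot; nothing else in the agent rewrites it.
        self._utility = utility_fn

    def choose(self, actions, predict_outcome):
        # Rank candidate actions purely by the frozen utility module.
        return max(actions, key=lambda a: self._utility(predict_outcome(a)))

# "Hard coded wrong, but helpless to change it" only makes sense if this slot exists.
# Most deployed systems (classifiers, language models, learned planners) have no such
# single module, which is the point about actual architectures above.
```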

Comment by TAG on “Who’s In Charge? Free Will and the Science of the Brain” · 2021-09-16T17:07:21.158Z · LW · GW

If you try to imagine your will, your decision-making apparatus, as something outside of “every single fact about” the universe, as it has been perennially tempting to do, you end up in a morass of speculation about mind and matter, body and spirit, and where they intersect and how.

Just as “life” is completely embodied in the material world, and is not some extramaterial essence breathed into it; so “ego” and “will” are as well. This doesn’t make them any less wonderful or worth getting excited about

Or as I like to put it...

According to science, the human brain/body is a complex mechanism made up of organs and tissues, which are themselves made of cells, which are themselves made of proteins, and so on. Science does not tell you that you are a ghost in a deterministic machine, trapped inside it and unable to control its operation: it tells you that you are, for better or worse, the machine itself. So the scientific question of free will becomes the question of how the machine behaves, whether it has the combination of unpredictability, self-direction, self-modification and so on that might characterise free will... depending on how you define free will.

Comment by TAG on How factories were made safe · 2021-09-15T21:08:45.927Z · LW · GW

GBS got a good lifespan out of his vegetarian diet.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-15T14:51:02.660Z · LW · GW

I'm well aware that the big people get treated right. That's compatible with the little people being shot. Look how Haziq has been treated for asking a question.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-15T13:32:20.681Z · LW · GW

Scott Alexander is LW-adjacent enough to be relevant in your mind, but he has a page of notable mistakes he’s made.

I am using "lesswrong" exclusive of the codexes.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-15T00:00:09.347Z · LW · GW

Einstein admitted to a "greatest mistake".

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T23:54:23.386Z · LW · GW

At least I've got you thinking.

I previously gave you a short list of key ideas: Aumann, Bayes, Solomonoff, and so on.

Now, you’re saying that LW ignores the messenger. Also bad if true, of course, but it’s an entirely different failure mode.

No, it's not very different. Shooting the messenger, ignoring the messenger, and quietly updating without admitting it are all ways that confirmation bias manifests. Aren't you supposed to know about this stuff?

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T20:40:09.915Z · LW · GW

In common sense terms, telling an audience that the messenger is an idiot who shouldn't be listened to because he's an idiot is shooting the messenger. It's about as central and classic an example as you can get. What else would it be?

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T20:06:37.326Z · LW · GW

Where are (say) Richard Feynman’s?

Good grief... academics revise and retract things all the time. The very word "errata" comes from the world of academic publishing!

If you have in mind some concrete examples where LW should have errata, they might be interesting.)

I've already told you.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T19:43:30.643Z · LW · GW

What official LW positions would you expect there to be errata for?

I'm specifically referencing RAZ / the Sequences. Maybe they're objectively perfect, and nothing of significance has happened in ten years.

As I'm forever pointing out, there are good objections to many of the postings in the Sequences from well-informed people, to be found in the comments... but no one has admitted that a single one is actually right, no one has attempted to go back and answer them, and they simply disappear from RAZ.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T18:02:08.550Z · LW · GW

Anyone can suffer from confirmation bias.

How can you tell you're not?

Here's a question: where are the errata? Why has lesswrong never officially changed its mind about anything?

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T17:38:31.806Z · LW · GW

If Loosemore had called Yudkowsky an idiot, you would not be saying "maybe he is".

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T17:34:05.263Z · LW · GW

Maybe you have a few concrete examples of messenger-shooting that are better explained as hostile reaction to evidence of being wrong rather than as hostile reaction to actual attack?

Better explained in whose opinion? Confirmation bias will make you see neutral criticism as attack, because that gives you a reason to reject it.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T12:07:40.281Z · LW · GW

I did ask the question “what’s your evidence?”

And I supplied some, which you then proceeded to nitpick, implying that it wasn't good enough, implying that very strong evidence is needed.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-14T11:52:51.511Z · LW · GW

I think ChristianKl gave one excellent rational reason to treat the two comments differently: all else being equal, being nice improves the quality of subsequent discussion and being nasty makes it worse, so we should apply higher standards to nastiness than to niceness

Here's an argument against it: having strong conventions against nastiness means you never get any kind of critique or negative feedback at all, and essentially just sit in an echo chamber. Treating rationality as something that is already perfect is against rationality.

Saying "we accept criticism , if it is good criticism" amounts to the same thing, because you can keep raising the bar.

Saying "we accept criticism , if it comes from the right person" amounts to the same thing, because you nobody has to be the right person.

Saying "we accept criticism , if it is nice" amounts to the same thing, because because being criticized never feels entirely nice.

But you understand all that, so long as it applies to an outgroup.

We are, I think, dealing with that old problem of motivated cognition. As Gilovich says: “Conclusions a person does not want to believe are held to a higher standard than those they do.”

EY gives the example of creationists, who are never convinced by any amount of fossils. That example, you can understand.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-13T19:45:41.522Z · LW · GW

If your original comment had said “at least some”, I would have found it more reasonable.

As stated, it was exactly as reasonable as yours. There is not and never was any objective epistemic or rational reason to treat the two comments differently.

but that’s why “what’s your evidence?” is a reasonable question.

You haven't shown that in any objective way, because it's only an implication of:

also think “LW people will respond to an interesting mathematical question about the foundations of decision theory by investigating it” is a more reasonable guess a priori than “LW people will respond to … by attacking the person who raises it because it threatens their beliefs”.

...which is just an opinion. You have two consistent claims: that my claim is a priori less likely, and that it needs to be supported by evidence. But they aren't founded on anything.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-13T19:38:40.394Z · LW · GW

That's not how it works. An apparent ad hom is usually taken as evidence that an ad hom took place. You are engaging in special pleading. This is like the way that people who are suffering from confirmation bias will demand very high levels of evidence before they change their minds. Not that you are suffering from confirmation bias.

Brown is terrible and so is everyone associated with him,

Another wild exaggeration of what I said.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-12T23:03:36.876Z · LW · GW

But do you think he actually said it?

I don't think he said it clearly, and I don't think he said anything else clearly. Believe it or not, what I am doing is charitable interpretation... I am trying to make sense of what he said. If he thinks Bayes is systematically better than science, that would imply "Bayes is better than science, so replace science with Bayes", because that makes more sense than "Bayes is better than science, so don't replace science with Bayes". So I think that is what he is probably saying.

The failure to distinguish between “this person said a thing” and “this person said something that implies a thing”,

Maybe it's the Sally-Anne fallacy, maybe it's charitable interpretation. One should only use charitable interpretation where the meaning is unclear. Sally-Anne is only a fallacy where the meaning is clear.

If you think you know what he meant, stand by it and defend your interpretation. If you don’t think you know what he meant, admit that outright.

I am engaging in probabilistic reasoning.

Okay, so another person misinterpreted him in a similar way.

Why should I make any attempt to provide evidence, when you are going to reject it out of hand?

He cannot explicitly reject every possible mistake someone might make while reading his essays.

No, but he could do a lot better. (An elephant-in-the-room issue here is that even though he is still alive, no-one expects him to pop up and say something that actually clarifies the issue).

So first off I don’t think I know what you mean by “systematically”.

It's about the most basic principle of epistemology, and one which the rationalsphere accepts: lucky guesses and stopped clocks are not knowledge, even when they are right, because they are not reliable and systematic.

I think, a meaningless question without specifying what it’s supposed to be better at.

Obviously, that would be the stuff that science is already doing, since EY has argued, at immense length, that it gets quantum mechanics right.

Eliezer doesn’t use the word. It seems clear, at least, that he’s dubious “teach more Bayes to Robert Aumann” would cause Robert Aumann to have more correct beliefs. So, maybe Eliezer doesn’t even think Bayes is systematically better in the sense that you mean?

If there is some objective factor about a person that makes them incapable of understanding Bayes, then a Bayesian should surely identify it. But where else has EY ever so much as hinted that some people are un-Bayesian?

Dunno if you think this either, also probably not super relevant.

Why do I have to tell you what I think in order for you to tell me what you think?

Here's the exchange:

Me: Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?

You: Dunno if you think this either, also probably not super relevant.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-12T20:24:32.864Z · LW · GW

I can at least agree that:

We are, I think, dealing with that old problem of motivated cognition. As Gilovich says: “Conclusions a person does not want to believe are held to a higher standard than those they do.”

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-12T13:15:45.718Z · LW · GW

You don’t have to provide evidence.

Not in absolute terms, no. But in relative terms, people are demanding that I supply evidence to support my guess, but not demanding the same from you.

Maybe I need to say explicitly that when I say that it’s “possible” to be both an AI researcher and what I take Eliezer to have meant by an idiot, I don’t merely mean that it’s not a logical impossibility, or that it’s not precluded by the laws of physics; I mean that, alas, foolishness is to be found pretty much everywhere, and it’s not tremendously unlikely that a given AI researcher is (in the relevant sense) an idiot.

Which, again, is just to say that the apparent ad hom was possibly true, which, again, is an excuse you could make for anything. Maybe Smith, whom Brown has accused of being a wife beater, actually is a wife beater.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-11T20:41:23.593Z · LW · GW

When I quoted evidence of EY ad-homming someone?

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-11T20:39:42.859Z · LW · GW

But my guess—which is only a guess, and I’m not sure what concrete evidence one could possibly have for it—is that in most such scenarios at least some LWers would be (1) interested and (2) not dismissive.

"At least some" is a climbdown. If I were allowed to rewrite my original comment to "at least some lesswrongians would shoot the messenger" , then we would not be in disagreement.

I guess we could get some evidence by looking at how similar things have been treated here. The difficulty is that so far as I can tell there hasn’t been anything that quite matches

Except criticism of the lesswrongian version of Bayes, and the lesswrongian version of Aumann, and the lesswrongian version of Solomonoff, and of the ubiquitous utility function, and the MWI stuff....

but I don’t know whom you’re accusing of exactly what epistemic double standards.

Everyone who thinks I have to support my guess about how lesswrongians would behave with evidence, but isn't asking for your evidence for your guess.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-11T19:46:25.617Z · LW · GW

(It is perhaps unsurprising that the things I referred to in shorthand as X and Y, are things that I had written in an earlier comment.)

It is perhaps unhelpful that you never said which of your previous comments they referred to.

Your reply to this suggested that Bayes should replace science iff individuals should be willing to trust in Bayes over science.

I suggested Bayes should replace science if it is objectively, systematically better. In other words, Bayes replacing science is something EY should have said, because it follows from the other claim.

But I can't get "you" to make a clear statement that "individuals should use Bayes" means "Bayes is systematically better".

Instead, you said

For example, that Bayes might give some people better answers than science, and not give other people better answers than science?

If Bayes is better without being systematically better, if it only works for some people, then you shouldn't replace science with it. But what does that even mean? Why would it only work for some people? How are you testing that?

And where the hell did Yudkowsky say anything of the kind?

I'm not the only person who ever thought EY meant to replace science with Bayes (and it's a reasonable conclusion if you believe that Bayes is systematically better for individuals).

For instance see this...please

I can't be completely sure that's what he meant because he is such an unclear writer ... but you can't be completely sure of your interpretation either, for the same reason.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T20:16:28.614Z · LW · GW

As long as someone extracts any positive utility at all from a future day of existing then continuing to exist is better than death

You are assuming selfishness. A person has to trade off the cost of cryo against the benefits of leaving money to their family, or charity.

And while yes certain humans live in chronic pain any technology able to rebuild a cryo patient can almost certainly fix the problem causing it.

Now you are assuming benevolent motivations.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T20:09:18.362Z · LW · GW

Are you in favour of downvoting lazy praise?

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T19:09:52.075Z · LW · GW

Waking from cryo is equivalent to exile. Exile is a punishment.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-09T19:00:59.195Z · LW · GW

If Y is supposed to be your third option of using Bayes if it suits you, then it is still active here, and is evidence of motte-and-baileyism about Bayes.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-09T18:58:18.890Z · LW · GW

You know that we are arguing about an article that is literally titled:

"The Dilemma: Science or Bayes?"

I'm not seeing much hint of a third option there.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-09T18:43:55.942Z · LW · GW

For example, that Bayes might give some people better answers than science, and not give other people better answers than science?

Why?

If that's systematic, then the people who don't get good answers from it are just applying it wrong... and I can repeat my previous comment with "correct Bayes" substituted for "Bayes".

And if it's random, Bayes isn't that much good.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T18:40:20.639Z · LW · GW

Are you fine with downvoting?

And what about the epistemic double standard?

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-09T18:34:54.424Z · LW · GW

Feel free to show how it is a false dichotomy by stating the third option.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T18:30:14.348Z · LW · GW

an outright refutation

Who gets to decide what's outright? Reality isn't a system where objective knowledge just pops up in people's brains, it's a system where people exchange arguments, facts and opinions, and may or may not change their minds.

There are still holdouts against evolution, relativity, quantum, climate change, etc. As you know. And it seems to them... it seems to them that they are being objective and reasonable.

From the outside, they are biased towards tribal beliefs. How do you show that someone is not? Not having epistemic double standards would be a good start.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T17:45:56.471Z · LW · GW

"Clearly" and "it seems" are both the same, bad, argument. They both pass off a subjective assement as a fact

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T17:20:39.759Z · LW · GW

it’s just that this seems like the sort of question that a lot of LWers are very interested in

Seems to whom? It seems to me that a lot of lesswrongers would messenger-shoot. Why do I have to provide evidence for the way things seem to me, but you don't need to provide evidence of the way things seem to you?

BTW, in further evidence, Haziq's question has been downvoted to -2.

I don’t understand what you mean by “That’s an objection that could be made to anything”.

Anything is not necessarily true.

Comment by TAG on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-09T16:16:54.582Z · LW · GW

Bayes!

The Bailey is that Bayes is just maths, and you therefore can't disagree with it.

When it is inevitably pointed out that self-described Bayesians don't use explicit maths that much, they fall back to the Motte: Bayes is actually just a bunch of heuristics for probabilistic reasoning.
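For contrast, the "just maths" sense of Bayes is an explicit numerical update like the sketch below; the prior and likelihoods are made-up numbers, purely for illustration, and the Motte version dispenses with all of this.

```python
# Illustrative only: an explicit Bayesian update, the "just maths" sense of Bayes.
# The numbers are invented for the example.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    p_evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_evidence

# A hypothesis at 10% prior, with evidence four times more likely if it is true:
print(bayes_update(prior=0.1, likelihood_if_true=0.8, likelihood_if_false=0.2))
# ~0.31
```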

Comment by TAG on Assigning probabilities to metaphysical ideas · 2021-09-09T15:51:57.519Z · LW · GW

I was making a dig at Solomonoff induction. SIs essentially contain machine code.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T14:57:50.678Z · LW · GW

Your evidence for the contrary claim.

The possibility that he isn’t is just one of the several degrees of separation between your offered evidence (EY called someone an idiot once) and the claim it seems to be intended to support

That's an objection that could be made to anything. There is still no evidence for the contrary claim that lesswrong will abandon long held beliefs quickly and willingly.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-09T14:42:46.962Z · LW · GW

Or normal people are just wrong.

Wrong about their values, or wrong about the actions they should take to maximize their values? Is it inconceivable that someone with strong preferences for maintaining their social connections, etc., could correctly reject cryonics?

As for a “hell world”, known human history has had very few humans living in “hell” conditions for long.

But you can still have a preference for experiencing zero torture.

Comment by TAG on Erratum for "From AI to Zombies" · 2021-09-09T14:30:05.880Z · LW · GW

If Bayes does not give you, as an individual, better answers than science, there is no point in using it to override science.

If some Bayesian approach -- there's a lot of inconsistency about what "Bayes" means -- is systematically better than conventional science, the world of professional science should adopt it. It would be inefficient not to.

Comment by TAG on Assigning probabilities to metaphysical ideas · 2021-09-08T23:30:19.791Z · LW · GW

Well, it's not defined as the study of bitstrings or programmes.

Comment by TAG on Assigning probabilities to metaphysical ideas · 2021-09-08T22:58:19.837Z · LW · GW

I have seen smart people say there is no way to assign probabilities to metaphysical ideas.

There's objective probabilities and subjective probabilities, and there's absolute probabilities and relative probabilities. So that's four quadrants.

Subjective is easier than objective, and relative is easier than absolute. So subjective+relative is the easiest quadrant. Even if you are sceptical about absolute objective probability, you are still entitled to your own subjective opinion... for what it's worth... because everyone is.

(If it's not obvious, the more difficult quadrants carry more weight).
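To illustrate the easiest quadrant (a sketch of my own, using the odds form of Bayes' theorem with placeholder hypotheses $A$ and $B$): a subjective, relative probability is just an odds ratio between two ideas, which never requires normalising over every possible metaphysical hypothesis:

$$\frac{P(A \mid E)}{P(B \mid E)} = \frac{P(E \mid A)}{P(E \mid B)} \cdot \frac{P(A)}{P(B)}$$

Both factors on the right can be personal judgements, so the whole comparison stays in the subjective+relative quadrant.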

Comment by TAG on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-08T21:02:08.641Z · LW · GW

To try and put it bluntly and briefly: Don’t choose to suspend disbelief for multiple core hypotheses within your argument, while simultaneously holding that the final conclusion built off of them is objectively likely and has been supported throughout.

I agree with what you are saying...but my brief version would be "don't confuse absolute plausibility with relative plausibility".

Comment by TAG on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-08T20:34:01.058Z · LW · GW

(b) at odds with a lot of other rationalist ideas.

The great strength of Rationalism... yes, I'm saying something positive... is that its flaws can almost always be explained using concepts from its own toolkit.

Comment by TAG on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-08T20:28:09.152Z · LW · GW

In my experience Scott has an epistemic style where he assumes and seeks out contrary information, and Eliezer does not...he's more into early cognitive closure. It's not just tone, it's method.

Comment by TAG on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-07T23:26:39.885Z · LW · GW

From metaculus...

Early attempts at cryonics facilities have previously failed when the organisations went bankrupt. Several facilities existed in the US starting in the 1960s, which often relied on funding from the living relatives of the cryopreserved, and could not maintain conditions when relatives were no longer willing or able to pay. As a result, all but one of the documented cryonic preservations prior to 1973 ended in failure, and the thawing out and disposal of the bodies.

It may be very bad and wrong of cryopreservation companies to shrug off their responsibilities in this way, but blaming individuals might not be hitting the right target. The standard capitalist company structure is a poor fit for preventing the problem, since it involves no duty to do something forever... the individuals running the company can wind it up, with no personal responsibility.

Comment by TAG on Is LessWrong dead without Cox’s theorem? · 2021-09-07T17:07:12.985Z · LW · GW

Meaning that no one is able to make an assessment of LW that is not based on rigorous evidence? But that isn't something that I can achieve, and it isn't something I want.