Yes oops
I refer you to my response to Said Achmiz's comment. Do you have a better way of estimating animal consciousness? Sure, the report isn't perfect, but it's better than alternatives. It's irrational to say "well, we don't know exactly how much they suffer, so let's ignore them entirely." https://www.goodthoughts.blog/p/refusing-to-quantify-is-refusing
Fischer's not against using it for tradeoffs, he's against using it as a singular indicator of worth.
But then you'd lose out on being the creatures.
The dark arts of expected value calculations relying on conservatively downgrading the most detailed report on the subject. What a joke.
But I'm not trolleying them--I'm talking about how bad their suffering is.
As they describe in the report, the philosophical assumptions are mostly inconsequential and assumed for simplicity. The rest of your critique is just describing what they did, not an objection to it. It's not precise and they admit quite high uncertainty, but it's definitely better than the alternatives (e.g., neuron counts).
It's not that piece. It's another one that got eaten by a Substack glitch, unfortunately--hopefully it will be back up soon!
He thinks it's very near zero if there is a gap.
If you're a halfer and don't think that your credence in heads should be 2/3 after finding out it's Monday, you violate the conservation of evidence. If you're going to be told what day it is, your credence in tails might go up but has no chance of going down--if it's day 2 it will spike to 100%, and if it's day 1 it won't change.
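To spell out the arithmetic, here's a minimal sketch, assuming the standard halfer credences at an awakening (heads-and-Monday 1/2, tails-and-Monday 1/4, tails-and-Tuesday 1/4):

```python
# Conservation-of-evidence check for Sleeping Beauty halfers.
# Assumed halfer credences at an awakening, before learning the day:
#   P(heads & Monday) = 1/2, P(tails & Monday) = 1/4, P(tails & Tuesday) = 1/4.
from fractions import Fraction as F

p_monday, p_tuesday = F(1, 2) + F(1, 4), F(1, 4)   # 3/4 and 1/4
prior_tails = F(1, 2)

# Double-halfer: credence in tails stays 1/2 on learning it's Monday.
double_halfer = p_monday * F(1, 2) + p_tuesday * 1
# Lewisian halfer: tails drops to 1/3 (heads rises to 2/3) on learning it's Monday.
lewisian = p_monday * F(1, 3) + p_tuesday * 1

print(prior_tails, double_halfer, lewisian)  # 1/2, 5/8, 1/2
```

The double-halfer's expected post-update credence in tails (5/8) exceeds the prior (1/2)--that's the conservation-of-evidence violation--while moving to 2/3 heads on Monday keeps the expectation at 1/2.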
Yes--Lewis held this, for instance, in the most famous paper on the topic.
Lots of people disagree with 2.
I didn't make a betting argument.
Impervious to reason? I sent you an 8,000-word essay giving reasons for it!
Just to be clear, I banned you because I consistently find your comments annoying. You are, in fact, the first commenter I've ever banned.
As for the question, they look at the various neural correlates of suffering on different theories, split their credence across them, and divvy up the results based on expected consciousness. The report is more detailed.
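For a rough picture of the shape of that kind of calculation, here's a toy sketch only--the theory names, credences, and welfare-range numbers below are invented for illustration, not taken from the report:

```python
# Toy credence-weighted welfare-range estimate (all numbers invented).
theories = {
    # hypothetical theory of suffering: (credence in theory, welfare-range estimate under it)
    "theory_A": (0.40, 0.30),
    "theory_B": (0.35, 0.10),
    "theory_C": (0.25, 0.02),
}

expected_welfare_range = sum(credence * estimate for credence, estimate in theories.values())
print(expected_welfare_range)  # ≈ 0.16
```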
https://benthams.substack.com/p/moral-realism-is-true
It may be imaginable, but if it's false, who cares? Like, suppose I argue that fundamental reality has to meet constraint X and view Y is the only plausible view that does so. Listing off a bunch of random views that meet constraint X but are false doesn't help you.
Well, UDASSA is false https://joecarlsmith.com/2021/11/28/anthropics-and-the-universal-distribution. As I argue elsewhere, any view other than SIA implies the doomsday argument. The number of possible beings isn't equal to the number of "physically limited beings in our universe," and there are different arrangements for the continuum points.
The argument for Beth 2 possible people is that it's the powerset of continuum points. SIA gives reason to think you should assign a uniform prior across possible people. There could be a God-less universe with Beth 2 people, but I don't know how that would work, and even if there's some coherent model one can make work without sacrificing simplicity, P(Beth 2 people)|Theism>>>>>>>>>>>>>>>>>>>>>>P(Beth 2 people)|Atheism. You need to fill in the details more beyond just saying "there are Beth 2 people," which will cost simplicity.
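For reference, the cardinal arithmetic behind "the powerset of continuum points" (identifying the continuum of points with the reals is my gloss):

```latex
\[
\beth_0 = \aleph_0, \qquad
\beth_1 = 2^{\beth_0} = |\mathbb{R}| \;\;\text{(the continuum)}, \qquad
\beth_2 = 2^{\beth_1} = |\mathcal{P}(\mathbb{R})| \;\;\text{(the powerset of the continuum)}
\]
```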
Remember, this is just part of a lengthy cumulative case.
If theism is true then all possible people exist but they're not all here. SIA gives you a reason to think many exist but says nothing about where they'd be. Theism predicts a vast multiverse.
The cases are non-symmetrical because a big universe makes my existence more likely, but it doesn't make me more likely to get HTTTTTTTHTTHHTTTHTTTHTHTHTTHHTTTTTTHHHTHTTHTTTHHTTTTHTHTHHHHHTTTTHTHHHHTHHHHHHHTTTTHHTHHHTHTTTTTHTTTHTTHHHTHHHTHHTHTHTHTHTHHTHTHTTHTHHTTHTHTTHHHHHTTTTTTHHTHTTTTTHHTHHTTHTTHHTTTHTTHTHTTHHHTTHHHTHTTHHTTHTTTHTHHHTHHTHHHHTHHTHHHTHHHHTTHTTHTHHTHTTHTHHTTHHTTHHTH. The most specific version of the evidence is that I get that sequence of coin flips, which is unaffected by the number of people, rather than that someone does. My view follows trivially from the widely adopted SIA, which I argued for in the piece--it doesn't rely on some basic math error.
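A toy numerical illustration of the asymmetry (shortening the sequence to 10 flips so the numbers are visible; the population sizes are arbitrary):

```python
# "I get this exact sequence" vs. "someone gets it", with n independent fair flippers.
n_flips = 10
p_me = 0.5 ** n_flips                               # unaffected by how many other people exist

def p_someone(n_people):
    return 1 - (1 - 0.5 ** n_flips) ** n_people     # grows with the population

for n in (1, 100, 10_000):
    print(f"people={n:>6}  P(I get it)={p_me:.4f}  P(someone gets it)={p_someone(n):.4f}")
```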
I didn't attack his character, I said he was wrong about lots of things.
//If you add to the physical laws code that says "behave like with Casper", you have re-implemented Casper with one additional layer of indirection. It is then not fair to say this other world does not contain Casper in an equivalent way.//
No, you haven't reimplemented Casper, you've just copied his physical effects. There is no Casper, and Casper's consciousness doesn't exist.
Your description of the FDT stuff isn't what I argued.
//I've just skimmed this part, but it seems to me that you provide arguments and evidence about consciousness as wakefulness or similar, while Yudkowsky is talking about the more restricted and elusive concept of self-awareness. //
Both Yudkowsky and I are talking about having experiences, as he's been explicit about in various places.
//Your situation is symmetric: if you find yourself repeatedly being very confident about someone not knowing what they are saying, while this person is a highly regarded intellectual, maybe you are overconfident and wrong! I consider this a difficult dilemma to be in. Yudkowsky wrote a book about this problem, Inadequate Equilibria, so it's one step ahead of you on the meta.//
I don't talk about the huge range of topics Yudkowsky does. I don't have super confident views on any topic that is controversial among the experts--but Yudkowsky's views aren't; they mostly just rest on basic errors.
I think this comment is entirely right until the very end. I don't think I really attack him as a person--I don't say he's evil or malicious or anything in the vicinity, I just say he's often wrong. Seems hard to argue that without arguing against his points.
I never claimed Eliezer says consciousness is nonphysical--I said exactly the opposite.
If you look at philosophers with Ph.D.s who study decision theory for a living and have a huge incentive to produce original work, you'll find that none of them endorse FDT.
Yeah, I was just kidding!
About three quarters of academic decision theorists two-box on Newcomb's problem. So this standard seems nuts. Only 20% one-box. https://survey2020.philpeople.org/survey/results/4886?aos=1399
My goal was to get people not to defer to Eliezer. I explicitly say he's an interesting thinker who is worth reading.
I dispute that . . .
I didn't say Eliezer was a liar and a fraud. I said he was often overconfident and egregiously wrong, and explicitly described him as an interesting thinker who was worth reading.
//It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.//
Nope, false. There are no academic decision theorists I know of who endorse FDT, no philosophers of mind who agree with Eliezer's assessment that epiphenomenalism is the term for those who accept zombies, and no relevant experts on consciousness who hold, with Eliezer's confidence, that animals aren't conscious--that I know of.
The examples just show that sometimes you lose by being rational.
Unrelated, but I really liked your recent post on Eliezer's bizarre claim that "character attacks last" is an epistemic standard.
What's your explanation of why virtually no published papers defend it and no published decision theorists defend it? You really think none of them have thought of it or anything in the vicinity?
I mean, like, I can give you some names. My friend Ethan, who's getting a Ph.D., was one person. Schwarz knows a lot about decision theory and finds the view crazy--MacAskill doesn't like it either.
I wouldn't call a view crazy for just being disbelieved by many people. But if a view is both rejected by all relevant experts and extremely implausible, then I think it's worth being called crazy!
I didn't call people crazy; I called the view crazy. I think it's crazy for the reasons I've explained, at length, both in my original article and over the course of the debate. It's not about my particular decision theory friends--it's that the fact that virtually no relevant experts agree with an idea is relevant to an assessment of it.
I'm sure Soares is a smart guy! As are a lot of defenders of FDT. Lesswrong selects disproportionately for smart, curious, interesting people. But smart people can believe crazy things--I'm sure I have some crazy beliefs; crazy in the sense of being unreasonable such that pretty much all rational people would give them up upon sufficient ideal reflection and discussion with people who know what they're talking about.
Good one!
Though is there a reason?
Yep
You can make it with Parfit's hitchhiker, but in that case there's an action beforehand and so a time when you have the ability to try to be rational.
There is a path from the decision theory to the predictor, because the predictor looks at your brain--including the decision theory it implements--and bases its prediction on the outputs of that cognitive algorithm.
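A toy model of that causal path--everything here is stipulated for illustration: a perfectly reliable predictor, a $1,000,000 opaque box, a $1,000 transparent box:

```python
# The predictor runs (a model of) the agent's decision procedure and fills the
# boxes based on that procedure's output, so the decision theory the brain
# implements is upstream of the prediction.
def agent(decision_theory):
    return "one-box" if decision_theory == "FDT" else "two-box"

def predictor(decision_theory):
    predicted = agent(decision_theory)                   # reads off the algorithm's output
    return 1_000_000 if predicted == "one-box" else 0    # contents of the opaque box

for dt in ("FDT", "CDT"):
    opaque = predictor(dt)
    choice = agent(dt)
    payoff = opaque + (1_000 if choice == "two-box" else 0)
    print(dt, choice, payoff)   # FDT one-boxes and gets 1,000,000; CDT two-boxes and gets 1,000
```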
The Demon is omniscient.
FDTists can't self-modify to be CDTists, by stipulation. This actually is, I think, pretty plausible--I couldn't choose to start believing FDT.
Well, here's one indication--I don't know if there's a single published academic philosophy paper defending FDT. Maybe there's one--certainly not many. Virtually no decision theorists defend it. I don't know much about Soares, but I know he's not an academic philosopher, and I think there are pretty unique skills involved in being an academic philosopher.
Yeah, I agree I have lots of views that LessWrongers find dumb. My claim is just that it's bad when those views are hard to communicate on account of the way LW is set up.
The setup is exactly as you describe in your article. I think my original was clear enough, but you describe your interpretation, your interpretation is right, and you proceed to bite the bullet.
How'd you feel about a verbal debate?
Philosophy is pretty much the only subject that I'm very informed about. As a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact-check him on. That's . . . worrying.
I felt like I was following the entire comment, until you asserted that it rules out zombies.
If you only got rid of consciousness, behavior would change.
You might be able to explain Chalmers' behavior, but that doesn't capture the subjective experience.
It's not epiphenomenalism because the law invokes consciousness. On the interactionist account, consciousness causes things rather than just the physical stuff causing things. If you just got rid of consciousness, you'd get a physically different world.
I don't think that induction on the basis of "science has explained a lot of things, therefore it will explain consciousness" is convincing. For one, up until this point, science has only explained physical behavior, not subjective experience. This was the whole point (see Goff's book Galileo's Error). For another, this seems to prove too much--it would seem to suggest that we could discover the correct modal beliefs in a test tube.
Did you read the next sentence? The next sentence is " (note, this is not exactly how I feel about Yudkowsky, I don’t think he’s knowingly dishonest, but I just thought it was a good quote and partially represents my attitude towards Yudkowsky)." The reason I included the quote was that it expressed how I feel about Yud minus the lying part--every time I examine one of his claims in detail, it almost always turns out false, often egregiously so.
I don't think that arguments about whether animals are conscious are value questions. They are factual questions: do animals have experiences? Is there something it's like to be them?
On Cannell, as I said, I'm too ignorant to evaluate his claims in detail. My claim is just that there are smart-sounding people who claim Eliezer is naive about AI.
On the zombie argument, the physical facts are not why it happens in the relevant sense. If God causes a couch to disappear in one world, that is physically identical to another world in which Allah caused the couch to disappear, which is physically identical to a third world in which there is a fundamental law that causes it to disappear. Physical identity has to do with the way that the physical stuff composing a world behaves.