Posts

Eugenics Performed By A Blind, Idiot God 2023-09-17T20:37:13.650Z
Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory 2023-09-17T02:15:41.430Z
The commenting restrictions on LessWrong seem bad 2023-09-16T16:38:26.318Z
Ethics Needs A Marginal Revolution 2023-09-15T19:08:19.378Z
Contra Heighn Contra Me Contra Functional Decision Theory 2023-09-11T19:49:43.216Z
Anyone want to debate publicly about FDT? 2023-08-29T03:45:54.239Z
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong 2023-08-27T01:06:37.355Z
The Joan of Arc Challenge For Objective List Theory 2023-08-22T08:01:53.802Z
The Lopsided Lives Argument For Hedonism About Well-being 2023-08-22T07:59:46.186Z
Infinite Ethics: Infinite Problems 2023-08-16T02:44:06.253Z
Underwater Torture Chambers: The Horror Of Fish Farming 2023-07-26T00:27:15.490Z
Nature Releases A Stupid Editorial On AI Risk 2023-06-29T19:00:58.170Z
Two Pieces of Advice About How to Remember Things 2023-05-22T18:10:45.362Z
Deontological Norms are Unimportant 2023-05-18T09:33:31.628Z
The Orthogonality Thesis is Not Obviously True 2023-04-05T21:06:49.386Z
Two Dogmas of LessWrong 2022-12-15T17:56:06.462Z

Comments

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-09-27T19:41:39.621Z · LW · GW

I think this comment is entirely right until the very end.  I don't think I really attack him as a person--I don't say he's evil or malicious or anything in the vicinity, I just say he's often wrong.  Seems hard to argue that without arguing against his points.  

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-09-23T20:23:07.032Z · LW · GW

I never claimed Eliezer says consciousness is nonphysical--I said exactly the opposite.

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-18T03:12:26.571Z · LW · GW

If you look at philosophers with Ph.D.s who study decision theory for a living, people with a huge incentive to produce original work, none of them endorse FDT.  

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:29:00.700Z · LW · GW

Yeah, I was just kidding! 

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:23:11.140Z · LW · GW

About three quarters of academic decision theorists two-box on Newcomb's problem, and only about 20% one-box.  So this standard seems nuts.  https://survey2020.philpeople.org/survey/results/4886?aos=1399

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:21:46.648Z · LW · GW

My goal was to get people not to defer to Eliezer.  I explicitly say he's an interesting thinker who is worth reading. 

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:21:01.558Z · LW · GW

I dispute that . . . 

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:20:38.956Z · LW · GW

I didn't say Eliezer was a liar and a fraud.  I said he was often overconfident and egregiously wrong, and explicitly described him as an interesting thinker who is worth reading. 

Comment by omnizoid on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-18T02:18:14.355Z · LW · GW

//It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.// 

Nope, false.  There are no academic decision theorists I know of who endorse FDT, no philosophers of mind who agree with Eliezer's assessment that epiphenomenalism is the term for those who accept zombies, and no relevant experts on consciousness who deny that animals are conscious with Eliezer's confidence--none that I know of, anyway.  

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-18T00:54:40.262Z · LW · GW

The examples just show that sometimes you lose by being rational.  

Unrelated, but I really liked your recent post on Eliezer's bizarre claim that saving character attacks for last is an epistemic standard. 

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-18T00:52:40.169Z · LW · GW

What's your explanation of why virtually no published papers defend it and no published decision theorists defend it?  You really think none of them have thought of it, or anything in the vicinity? 

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-17T20:43:23.508Z · LW · GW

I mean, I can give you some names.  My friend Ethan, who's getting a Ph.D., is one.  Schwarz knows a lot about decision theory and finds the view crazy--MacAskill doesn't like it either.

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-17T18:59:04.216Z · LW · GW

I wouldn't call a view crazy just for being disbelieved by many people.  But if a view is both rejected by all relevant experts and extremely implausible, then I think it's fair to call it crazy!  

I didn't call people crazy; instead, I called the view crazy.  I think it's crazy for the reasons I've explained, at length, both in my original article and over the course of the debate.  It's not about my particular decision theory friends--it's that the fact that virtually no relevant experts agree with an idea is relevant to an assessment of it.  

I'm sure Soares is a smart guy!  As are a lot of defenders of FDT.  LessWrong selects disproportionately for smart, curious, interesting people.  But smart people can believe crazy things--I'm sure I have some crazy beliefs; crazy in the sense of being unreasonable, such that pretty much all rational people would give them up upon sufficient ideal reflection and discussion with people who know what they're talking about. 

Comment by omnizoid on The Lopsided Lives Argument For Hedonism About Well-being · 2023-09-17T18:55:40.138Z · LW · GW

Good one!  

Though is there a reason? 

Comment by omnizoid on The Lopsided Lives Argument For Hedonism About Well-being · 2023-09-17T15:29:06.978Z · LW · GW

Yep

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-17T15:26:20.895Z · LW · GW

You can make it with Parfit's hitchhiker, but in that case there's an action beforehand, and so a time when you have the ability to try to be rational.  

There is a path from the decision theory to the predictor, because the predictor looks at your brain--including the decision theory it implements--and bases its prediction on the outputs of that cognitive algorithm. 

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-17T15:24:22.139Z · LW · GW

The Demon is omniscient.  

FDTists can't self-modify to be CDTists, by stipulation.  This actually is, I think, pretty plausible--I couldn't choose to start believing FDT. 

Comment by omnizoid on Contra Heighn Contra Me Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-17T15:21:46.117Z · LW · GW

Well, here's one indication--I don't know if there's a single published academic philosophy paper defending FDT.  Maybe there's one--certainly not many.  Virtually no decision theorists defend it.  I don't know much about Soares, but I know he's not an academic philosopher, and I think there are distinctive skills involved in being an academic philosopher.

Comment by omnizoid on The commenting restrictions on LessWrong seem bad · 2023-09-16T18:40:56.298Z · LW · GW

Yeah, I agree I have lots of views that LessWrongers find dumb.  My claim is just that it's bad when those views are hard to communicate on account of the way LW is set up.  

Comment by omnizoid on Contra Heighn Contra Me Contra Functional Decision Theory · 2023-09-15T19:04:26.659Z · LW · GW

The case is exactly as you describe it in your article.  I think my original description was clear enough, but you lay out your interpretation, and your interpretation is right.  You then proceed to bite the bullet.  

Comment by omnizoid on Anyone want to debate publicly about FDT? · 2023-08-31T19:37:28.993Z · LW · GW

How'd you feel about a verbal debate? 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-29T04:03:02.751Z · LW · GW

Philosophy is pretty much the only subject that I'm very informed about.  So as a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact-check him on.  That's . . . worrying. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-28T14:32:31.754Z · LW · GW

I felt like I was following the entire comment, until you asserted that it rules out zombies.

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-28T03:15:39.637Z · LW · GW

If you only got rid of consciousness, behavior would change.  

You might be able to explain Chalmers' behavior, but that doesn't capture the subjective experience. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T22:23:01.556Z · LW · GW

It's not epiphenomenalism, because the law invokes consciousness.  On the interactionist account, consciousness causes things, rather than just the physical stuff causing things.  If you just got rid of consciousness, you'd get a physically different world.  

I don't think that induction on the basis of "science has explained a lot of things, therefore it will explain consciousness" is convincing.  For one, up until this point, science has only explained physical behavior, not subjective experience.  This was the whole point (see Goff's book Galileo's Error).  For another, this seems to prove too much--it would seem to suggest that we could discover the correct modal beliefs in a test tube. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T13:33:09.619Z · LW · GW

Did you read the next sentence?  The next sentence is " (note, this is not exactly how I feel about Yudkowsky, I don’t think he’s knowingly dishonest, but I just thought it was a good quote and partially represents my attitude towards Yudkowsky)."  The reason I included the quote was that it expressed how I feel about Yud minus the lying part--when I examine one of his claims in detail, it almost always turns out false, often egregiously so.  

I don't think that arguments about whether animals are conscious are value questions.  They are factual questions: do animals have experiences?  Is there something it's like to be them? 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T12:28:45.888Z · LW · GW

On Cannell, as I said, I'm too ignorant to evaluate his claims in detail.  My claim is just that there are smart-sounding people who claim Eliezer is naive about AI.  

On the zombie argument, the physical facts are not why it happens in the relevant sense.  If god causes a couch to disappear in one world, that is physically identical to another world in which Allah causes the couch to disappear, which is physically identical to a third world in which there is a fundamental law that causes it to disappear.  Physical identity has to do with the way that the physical stuff composing a world behaves.  

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T04:01:38.174Z · LW · GW

I don't find Eliezer that impressive, for reasons laid out in the article.  I argued for animal sentience extensively in the article.  Though the main point of the article wasn't to establish non-physicalism or animal consciousness, but that Eliezer is very irrational on those subjects. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T03:59:38.159Z · LW · GW

Notably, about three quarters of decision theorists two-box.  I wasn't arguing for non-physicalism so much as arguing that Eliezer's specific argument against physicalism shows that he doesn't know what he's talking about.  Pain is a subset of suffering--it's the physical version of suffering, but the same argument can be made for suffering.  I didn't comment on Everettianism because I don't know enough (just that I think it's suspicious that Eliezer is so confident), nor on probability theory.  I didn't claim there was a contradiction between Bayesian and frequentist methods. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T03:53:29.148Z · LW · GW

In the blackmail case, we're just stipulating that the scenario is as described.  It doesn't matter why it is that way.  

In the procreation case, I don't know why they have to be inhuman.  They're just acting for similar reasons to you. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T03:51:11.619Z · LW · GW

Yeah, I can see how that could be annoying.  In my defense, however, I am seriously irritated by this, and I think there's nothing wrong with being a bit snarky sometimes.  Eliezer seemed to think, in this Facebook exchange, that his view just falls naturally out of understanding consciousness.  But that is a very specific and implausible model. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:56:36.081Z · LW · GW

Your father followed FDT and had the same reasons to procreate as you.  He is relevantly like you. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:55:41.170Z · LW · GW

Suppose that I beat up all rational people so that they get less utility.  This would not make rationality irrational.  It would just mean that the world is bad for the rational.  The question you've described might be a fine one, but it's not what philosophers are arguing about in Newcomb's problem.  If Eliezer claims to have revolutionized decision theory, and then doesn't even know enough about decision theory to know that he is answering a different question from the decision theorists, that is an utter embarrassment that significantly undermines his credibility.  

And in that case, Newcomb's problem becomes trivial.  Of course if Newcomb's problem comes up a lot, you should design agents that one-box--they get more average utility.  The question is about what's rational for the agent to do, not what's rational for it to commit to or become, or what's rational for its designers to do.  
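
As a toy illustration of the "more average utility" point (using the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, with a hypothetical predictor accuracy $p$; the numbers are mine, just for illustration):

$$\mathbb{E}[\text{one-box}] = p \cdot 1{,}000{,}000, \qquad \mathbb{E}[\text{two-box}] = p \cdot 1{,}000 + (1-p) \cdot 1{,}001{,}000.$$

One-boxing agents come out ahead on average whenever $p > 0.5005$, so a designer who expects the problem to come up should build one-boxers.  None of that settles the philosophers' question of what's rational for the agent once the boxes are already filled.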

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:46:18.655Z · LW · GW

Sorry, I said twin case, I meant the procreation case! 

The simulation case seems relevantly like the normal twin case, which I'm not as sure about. 

Legible precommitment is not crazy!  Sometimes it is rational to agree in advance to do the otherwise-irrational thing in some future case.  If you have the ability to make it so that you won't later change your mind, you should do that.  But once you're in that situation, it makes sense to defect. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:39:55.749Z · LW · GW

I agree!  Eliezer deserves praise for writing publicly about his ideas.  My article never denied that.  It merely claimed that he often confidently says things that are totally wrong. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:38:57.761Z · LW · GW

I really appreciate that!  Though if you like the things I write, you can find my blog at benthams.substack.com

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:28:36.349Z · LW · GW

Both of your points, I think, are addressed by MacAskill's observation that perhaps in some cases it's best to be the type of agent who follows functional decision theory.  Sometimes rationality will be bad for you--if there's a demon who tortures all rational people, for example.  And as Schwarz points out, in the twin case you'll get less utility by following FDT--you don't always want to be an FDTist.  

I find your judgment about the blackmail case crazy!  Yes, agents who give in to blackmail do worse on average.  Yes, you want to be the kind of agent who never gives in to blackmail.  But all of that is consistent with the obvious truth that giving in to blackmail, once you're in that scenario, makes things worse for you and is clearly irrational. 

Comment by omnizoid on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T02:22:24.180Z · LW · GW

The fact that someone argues his positions publicly doesn't mean he necessarily has any idea what he's talking about.  Deepak Chopra argues his positions publicly.

Comment by omnizoid on The Joan of Arc Challenge For Objective List Theory · 2023-08-25T04:05:44.584Z · LW · GW

We are stipulating that we would have the same evidence in both cases, so it would lead to the same beliefs, just with different truth values. 

Comment by omnizoid on The Lopsided Lives Argument For Hedonism About Well-being · 2023-08-22T23:19:03.678Z · LW · GW

We're asking what's good for the person, not what deal they'd accept.  If we ask whether the person who is constantly tortured is well off, the answer is obviously no!  If naive OLT is true, then they would be well off.  It doesn't matter if they can ever use the knowledge. 

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-17T21:38:05.430Z · LW · GW

There are as many even numbers as there are total numbers.  They are the same cardinality.  
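
A quick sketch of why, for anyone who wants the standard argument: the map

$$f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n$$

pairs each natural number with exactly one even number (injective because $2m = 2n$ forces $m = n$; surjective because any even $k$ is $f(k/2)$), so the evens and the naturals have the same cardinality, $\aleph_0$.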

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-17T20:36:56.412Z · LW · GW

If you rearrange heaven to hell, you get a different average.  So you either have to think rearrangement matters or that they're equal.  

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-17T20:31:45.609Z · LW · GW

You can also get the total of a single galaxy--the problem is how you count up things in an infinite world. 

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-17T07:19:04.242Z · LW · GW

You have not understood the problem.  There are not more happy people than unhappy people in any rigorous sense--the infinities are of the same cardinality.  And the Pasadena game scenario gives indeterminate averages.  Also, average utilitarianism is crazy, and implies you should create lots of miserable people in hell as long as they're slightly less miserable than existing people. 
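
For anyone who hasn't seen it, a sketch of the Pasadena game (Nover and Hájek's example, as I remember it): toss a fair coin until it first lands heads; if that happens on toss $n$, the payoff is $(-1)^{n-1} \, 2^n / n$.  The expectation is then

$$\sum_{n=1}^{\infty} \frac{1}{2^n} \cdot \frac{(-1)^{n-1} \, 2^n}{n} = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n},$$

a series that converges only conditionally.  Since there's no privileged order in which to sum the outcomes, the game has no determinate average.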

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-16T17:13:02.923Z · LW · GW

Why are galaxies the only thing we can use?  We can compare people in any number of ways.  

If you rearrange people, standard mathematics says that you can turn HEAVEN into HELL.  Infinity divided by 1 billion is still infinity.  You have to change the math of infinity, not just the ethics of how you add up infinite value.
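
To make the rearrangement point concrete, here's a toy version (my own illustration): index the population by the naturals and let HEAVEN assign

$$u_n = \begin{cases} -1 & \text{if } n \text{ is a multiple of } 10^9 \\ +1 & \text{otherwise,} \end{cases}$$

so one person per billion is miserable.  The happy set and the miserable set are both countably infinite, so a bijection relabels the very same people into the mirror-image pattern, one happy person per billion, which is just HELL.  No one's welfare changes; only the arrangement does.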

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-16T17:09:58.982Z · LW · GW

You don't have to be a totalist to think that HEAVEN>HELL.  The problem is that it's intuitively obvious that if we discovered that every galaxy in the universe was filled with almost all happy people, that would seem better than if they were filled almost exclusively with miserable people. 

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-16T05:54:04.737Z · LW · GW

Wait, do you agree that rearranging heaven gets you hell?  If so, you either have to deny that HEAVEN>HELL or accept that arrangement matters.  

You're assuming we're comparing them galaxy by galaxy.  But there's no natural way of individuating regions that explains why we should do that.  

Comment by omnizoid on Infinite Ethics: Infinite Problems · 2023-08-16T04:22:40.910Z · LW · GW

That implies that order matters!  If you rearrange heaven, you get hell.  There are other problems with ordering--some series can sum to any number depending on arrangement. 
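
The classic example, from standard real analysis, is the alternating harmonic series:

$$1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots = \ln 2, \qquad \text{but} \qquad 1 + \tfrac{1}{3} - \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} - \tfrac{1}{4} + \cdots = \tfrac{3}{2} \ln 2.$$

By Riemann's rearrangement theorem, any conditionally convergent series can be reordered to sum to any real number you like, or to diverge.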

Comment by omnizoid on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T18:33:02.754Z · LW · GW

True!  Will fix. 

Comment by omnizoid on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T18:31:40.358Z · LW · GW

There are lots of people who have tried to estimate the intensity of fish suffering.  I think the Rethink Priorities report is the best methodologically--but the others tend to estimate that fish suffer more.  If you look at the RP report, they say that those controversial philosophical assumptions don't affect the end result much.  I agree that it's very uncertain--this report gives the lowest estimate I was able to find.