My thoughts on the Beff Jezos - Connor Leahy debate
post by Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-02-03T19:47:08.326Z · LW · GW · 23 comments
Link:
Personal note: I'm somewhere in between safetyism and e/acc in terms of their general ideologies/philosophies. I don't really consider myself a part of either group. My view on AI x-risk is that AI can potentially be an existential threat, but we're nowhere near that point right now, so safety research is valuable, but not urgent. For this reason, in practical terms, I'm somewhat closer to e/acc, because I think there's a lot of value to be found in technological progress, so we should keep developing useful AI.
I'm hoping this debate will contain solid arguments as to why we shouldn't keep developing AI at full speed, ideally ones that I haven't heard before. I will write this post as a series of notes throughout the video.
One hour in
This is insufferable. Connor starts with fairly direct questions, Beff bounces around them for no good reason, but eventually reaches a simple answer - yes, it's possible that some technologies should be banned. So far this seems to be the only concrete thing that has been said.
At some point they start building their respective cases - what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff's side - what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
About 50 minutes in, Connor goes on the offensive in a way that, to me, is extremely blatant slippery slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone's views are the extremist parodies of themselves. Embarrassing tbh. Tellingly, Connor avoids making any concrete statements about his own values, because any such statements could be treated the same way. "You like puppies and friendship? Well I guess nobody will grow food anymore because they will be busy cuddling puppies".
He also points out, many many times, that "is" != "ought", which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell. Example exchange (my interpretation; the conversation was chaotic, so hopefully I'm not misunderstanding it):
B: Your values are not growth? How so?
C: Because I like puppies and happiness and friendship [...]
B: Why do you like friendship? Because evolution hard-coded this in humans
C: You're mixing "is" and "ought"
He was not, in fact, mixing "is" and "ought". But stating that he did was a simple way to discredit anything he said using fancy rationalist words.
So far, the discussion is entirely in the abstract, and essentially just covers the personal philosophical views and risk aversion of each participant. Hopefully it gets to the point.
Two hours in
Beff brings up geopolitics. Who cares? But Connor didn't even coherently express his point of view on AI risk, so I can't blame him.
"Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn't making a point"
Immediately followed by "If an AI could design an F16, should it be open-sourced?"
Exchange at about 1:33
C: You heard it, e/acc isn't about maximizing entropy [no shit?!]
B: No, it's about maximizing the free energy
C: So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
It's in line with what seems like Connor's debate strategy - make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
B: <long-ish monologue about building stuff or becoming a luddite living in the woods and you should have the freedom of choice>
C: Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
The end
After about 2 hours and 40 minutes of the "debate", it seems we finally got to the point! Connor formulates his argument for why we should be worried about AI safety. Of course, he doesn't do it directly, but it's close enough.
"I'm not claiming I know on this date, with this thing, this thing will go wrong [...] which will lead to an unrecoverable state. I'm saying, if you keep just randomly rolling the dice, over and over again, with no plan to ever stop rolling or removing the bad faces of the die, somehow, then eventually you roll death. Eventually you roll x-risk."
FINALLY! This is, so far, the only direct argument regarding AI x-risk. Unfortunately, it mostly relies on a strawman - the assumption that the only alternative to doomerism (and Beff's stance) is eternally pushing technology forward, never ever stopping or slowing down, no matter the situation at hand.
That's obviously absurd.
If I were to respond to this myself, I'd say - at some point, depending on how technology progresses, we might very well need to pause, slow down, or stop entirely. As we move into the future, we will constantly reevaluate the situation and act accordingly. If, for example, next year an AI trained and instructed to collect diamonds in Minecraft instead hacks the computer it's running on using some weird bit manipulation or cosmic rays, then yes, we'd probably need to slow down and figure that out. But that's not the reality we live in right now.
This sentiment seems to be shared by Beff.
C: If you don't do policy, if you don't improve institutions [...] [we'll be doomed, presumably]
B: No, we should do all that, I just think right now it's far too early [...]
To which Connor has another one of the worst debate arguments ever:
"So when is the right time? When do we know?"
Beff only really said "I don't think it's right now", which is pretty much the same thing I'd say. I don't know when is the right time to stop AI development. I don't know when is the right time to stop overpopulation on Mars, or when to build shelters against microscopic black holes bombarding Earth from orbit. If any of these problems - overpopulation on Mars, microscopic black hole bombardment, or dangerously powerful AI - arises in the foreseeable short or long term, I will fully support using the understanding of the problem we'll have at that time to tackle it.
In response, Connor resorts to yelling that "You don't have a plan!"
This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do. But instead, the discussion moved on to rocket flight paths, Neanderthals, and more platitudes.
A whole 5-10 minutes of actual on-topic discussion that devolved into pointless yelling. Meh.
Final thoughts
This was largely a display of tribal posturing via two people talking past each other. We need debates about this, but this wasn't it. I suspect that Beff wanted to approach this in good faith, but didn't have a plan for the debate, so he was just struggling to navigate the discussion. Connor just wanted an easy win in a debate, and to do a character assassination on Beff, calling him evil, showing that he's a hypocrite. All the fun stuff that wins debates, but doesn't get anyone closer to the truth.
Poor performance from both of them, but particularly Connor's behavior is seriously embarrassing to the AI safety movement.
Personal takeaway
I don't think this moved my opinion on AI safety and x-risk either way. It would be a bit silly, since the discussion mostly did not concern AI safety. But it certainly made me more skeptical of people who consider Connor to be some sort of authority on the topic.
23 comments
comment by gilch · 2024-02-04T03:22:41.452Z · LW(p) · GW(p)
I had watched the whole thing and came away with a very different impression. From where I'm standing, Connor is just correct about everything he said, full stop. Beff made a few interesting points but was mostly incoherent, equivocating, and/or evasive. Connor tried very hard for hours to go for his cruxes [? · GW] rather than get lost in the weeds, but Beff wouldn't let him. Maybe Connor could have called him on it more skillfully, but I don't think I could have done any better. Maybe he'll try a different tack if there's a next time. The moderator really should have intervened.
At some point they start building their respective cases - what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff's side - what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
This is the actual topic. It's the Black Marble [? · GW] thought experiment by Bostrom, and the crux of the whole disagreement! Later on Connor called it rolling death on the dice. Non-ergodicity. Beff's whole position seems to be to redefine "the good" to be "acceleration of growth", but Connor wants to add "not when it kills you!"
About 50 minutes in, Connor goes on the offensive in a way that, to me, is extremely blatant slippery slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone's views are the extremist parodies of themselves. Embarrassing tbh.
Again, Connor is simply correct here. This is not a novel argument. It's Goodhart's Law. [? · GW] You get what you optimize, even if it's only a proxy for what you want. The tails come apart [LW · GW]. You can overshoot and get your proxy rather than your target. Remember, Beff's position: "growth = good", which is obviously (to me, Connor, and Eliezer) false. Connor tried very hard to lead Beff to see why, but Beff was more interested in muddying the waters than achieving clarity or finding cruxes.
He also points out, many many times, that "is" != "ought", which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell.
Again, Connor is simply correct. This isn't about virtue signaling at all; that completely misses the point. Beff is equivocating. Connor is trying to point out the distinct definitions required to separate the concepts so he can move the argument forward to the next step. Beff just wasn't listening.
"Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn't making a point"
Immediately followed by "If an AI could design an F16, should it be open-sourced?"
Is there something wrong with trying to understand the other position before making a point? No, and Beff should have tried harder to understand the other position. Kudos to Connor for trying. This is the Black Marble again (maybe a gray one in this case). Beff seems to have the naive position that open source is an unmitigated good, which is obviously (to me and Connor) false, because infohazards [? · GW]. I don't think F16s were a great example, but it could have been any number of other things.
So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
Totally unfair characterization. I think this is Connor simply not understanding Beff's position, rather than Connor doing anything underhanded. The question was not simply rhetorical, and the answer was important for updating Connor's understanding (of Beff's position). From Connor's point of view, an intelligence explosion eats most of the future light cone anyway, so it's not that different from a false vacuum collapse: everybody dies, and the future has no value. There are some philosophies that actually bite the bullet to remain consistent in the limit and actually want all humans to die. (Nick Land came up.) Connor thinks Beff's might be one of those on reflection, but it's not for the reason Connor thought here.
It's in line with what seems like Connor's debate strategy - make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
OK, maybe that's a signal (it's certainly a quip), but the point is valid, stands, and Connor is correct. I am sympathetic to the libertarian philosophy, but the naive application is incomplete and cannot stand on its own.
After about 2 hours and 40 minutes of the "debate", it seems we finally got to the point!
Finally? Connor has been talking about this the whole time. Black marble!
If I were to respond to this myself, I'd say - at some point, depending on how technology progresses, we might very well need to pause, slow down, or stop entirely.
Yep. That was yesterday. Connor would be interested in talking all about why he thinks that and (as evidenced by the next quote) wants to know Beff's criteria for when that point is, so Connor can move on and either explain why that point has already passed, or point out that Beff doesn't have any criteria and will just go ahead and draw the black marble without even trying to prepare for it. (Which means everybody dies.)
To which Connor has another one of the worst debate arguments ever: "So when is the right time? When do we know?"
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you're prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor's position (and mine and Bostrom's).
I don't know when is the right time to stop overpopulation on Mars.
That is a very old, very bad argument. If NASA discovered a comet big and fast enough to cause a mass extinction event that they estimated to have a 10% chance of colliding with Earth in 100 years, we shouldn't start worrying about it until it's about to hit us. Right? Or from the glass-half-full perspective, we've got a 90% chance of surviving anyway, so let's just forget about the whole thing. Right? Do you understand how absurd that sounds?
But Connor (and Eliezer and I (and Hinton)) don't think we have 100 years. We think it's probably decades or less, maybe much less. And Connor (and Eliezer and I) don't think we have a 90% chance of surviving by default. Quite the reverse, or even worse.
In response, Connor resorts to yelling that "You don't have a plan!"
No shit. Not only that, but e/acc seems to be trying very hard to make the problem worse, by giving us even less time to prepare and sabotaging efforts to buy more.
This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn't listening though.
This was largely a display of tribal posturing via two people talking past each other.
Maybe describes Beff. Connor tried. Could've been better, but we have to start somewhere. Maybe they'll learn from their mistakes and try again.
Poor performance from both of them, but particularly Connor's behavior is seriously embarrassing to the AI safety movement.
I was embarrassed by Connor's headshot comment, which I thought was inappropriate. Thought experiments that could be interpreted as veiled death threats against one's interlocutor are just plain rude. Could have been worded differently. I don't think Connor actually meant it that way, and perfection is an unreasonable standard in a frustrating three-hour slog of a debate. But still bad form.
Besides that (which you didn't even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. Should he have not gone for cruxes? Because that's how progress gets made. Debaters can easily waste inordinate amounts of time on points that neither cares about (that don't matter) because they happened to come up. Connor was laser focused on making some actual progress in the arguments, but Beff was being so damn evasive that he managed to waste a couple of hours anyway. It's a shame, but this is so not on Connor. What do you even want from him?
↑ comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-02-04T11:57:49.541Z · LW(p) · GW(p)
For what it's worth, I think you're approaching this in good faith, which I appreciate. But I also think you're approaching the whole thing from a very, uh, lesswrong.com-y perspective, quietly making assumptions and using concepts that are common here, but not anywhere else.
I won't reply to every individual point, because there's lots of them, so I'm choosing the (subjectively) most important ones.
This is the actual topic. It's the Black Marble [? · GW] thought experiment by Bostrom,
No it's not, and obviously so. The actual topic is AI safety. It's not false vacuum, it's not a black marble, or a marble of any color for that matter.
Connor wasn't talking about the topic, he was building up to the topic using an analogy, a more abstract model of the situation. Which might be fair enough, except you can't just assert this model. I'm sure saying that AI is a black marble will be accepted as true around here, but it would obviously get pushback in that debate, so you shouldn't sneak it past quietly.
Again, Connor is simply correct here. This is not a novel argument. It's Goodhart's Law. [? · GW]
As I'm pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let's say your goal is stopping AI progress. If you're consistent, that means you'd want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it's so transparent and I'm disappointed that you don't see it.
Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.
Great! So state and defend and argue for this position, in this specific case of an unaligned superintelligence! Because the way he did it in the debate was just by extrapolating whatever views Beff expressed, without care for what they actually are, and showing that when you push them to the extreme, they fall apart. Because obviously they do, because of Goodhart's Law. But you can't dismiss a specific philosophy via a rhetorical device that can dismiss any philosophy.
Finally? Connor has been talking about this the whole time. Black marble!
Again, I extremely strongly disagree, but I suspect that's a mannerism common in rationalist circles, using additional layers of abstraction and pretending they don't exist. Black marble isn't the point of the debate. AI safety is. You could put forward the claim that "AI = black marble". I would lean towards disagreeing, I suspect Beff would strongly disagree, and then there could be a debate about this proposition.
Instead, Connor implicitly assumed the conclusion, and then proceeded to argue the obvious next point that "If we assume that AI black marble will kill us all, then we should not build it".
Duh. The point of contention isn't that we should destroy the world. The point of contention is that AI won't destroy the world.
Connor is correctly making a very legit point here.
He's not making a point. He's again assuming the conclusion. You happen to agree with the conclusion, so you don't have a problem with it.
The conclusion he's assuming is: "Due to the nature of AI, it will progress so quickly going forward that already at this point we need to slow down or stop, because we won't have time to do that later."
My contention with this would be "No, I think AI capabilities will keep growing progressively, and we'll have plenty of time to stop when that becomes necessary."
This is the part that would have to be discussed. Not assumed.
That is a very old, very bad argument.
Believe it or not, I actually agree. Sort of. I think it's not good as an argument, because (for me) it's not meant to be an argument. It's meant to be an analogy. I think we shouldn't worry about overpopulation on Mars because the world we live in will be so vastly different when that becomes an immediate concern. Similarly, I think we shouldn't (overly) worry about superintelligent AGI killing us, because the state of AI technology will be so vastly different when that becomes an immediate concern.
And of course, whether or not the two situations are comparable would be up to debate. I just used this to state my own position, without going the full length to justify it.
Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn't listening though.
I kinda agree here? But the problem is on both sides. Beff was awfully resistant to even innocuous rhetorical devices, which I'd understand if that started late in the debate, but... it took him like idk 10 minutes to even respond to the initial technology ban question.
At the same time Connor was awfully bad at leading the conversation in that direction. Let's just say he took the scenic route with a debate partner who made it even more scenic.
Besides that (which you didn't even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. [...] What do you even want from him?
Great question. Ideally, the debate would go something like this.
B: So my view is that we should accelerate blahblah free energy blah AI blah [note: I'm not actually that familiar with the philosophical context, thermodynamic gods and whatever else; it's probably mostly bullshit and imo irrelevant]
C: Yea, so my position is if we build AI without blah and before blah, then we will all die.
B: But the risk of dying is low because of X and Y reasons.
C: It's actually high because of Z, I don't think X is valid because W.
And keep trying to understand at what point exactly they disagree. Clearly they both want humanity/life/something to proliferate in some capacity, so even establishing that common ground in the beginning would be valuable. They did sorta reach it towards the end, but at that point the whole debate was played out.
Overall, I'm highly disappointed that people seem to agree with you. My problem isn't even whether Connor is right, it's how he argued for his positions. Obviously people around here will mostly agree with him. This doesn't mean that his atrocious performance in the debate will convince anyone else that AI safety is important. It's just preaching to the choir.
↑ comment by gilch · 2024-02-05T05:39:27.814Z · LW(p) · GW(p)
As I'm pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let's say your goal is stopping AI progress. If you're consistent, that means you'd want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it's so transparent and I'm disappointed that you don't see it.
I see what you're saying, and yes, fully general counterarguments are suspect, but that is totally not what Connor was doing. OK, sure, instrumental goals are not terminal values. Stopping AI progress is not a terminal value. It's instrumental, and hopefully temporary. Bostrom himself has said that stopping progress on AI indefinitely would be a tragedy, even if he does see the need for it now. That's why the argument can't be turned on Connor.
The difference is, and this is critical, Beff's stated position (as far as Connor or I can tell) is that acceleration of growth equals the Platonic Good. This is not instrumental for Beff; he's claiming it's the terminal value in his philosophy, i.e., the way you tell what "Good" is. See the difference? Connor thinks Beff hasn't thought this through, and this would be inconsistent with Beff's moral intuitions if pressed. That's the Fisher-Price Nick Land comment. Nick bit the bullet and said all humans die is good, actually. Beff wouldn't even look.
↑ comment by gilch · 2024-02-05T05:24:25.471Z · LW(p) · GW(p)
No it's not, and obviously so. The actual topic is AI safety. It's not false vacuum, it's not a black marble, or a marble of any color for that matter.
It is, and Connor said so repeatedly throughout the conversation. AI safety is a subtopic, a special case, of Connor's main thrust, albeit the most important one. (Machine transcript, emphasis mine.)
Non-ergodicity, not necessarily AI:
The world is not ergodic, actually. It's actually a very non-ergodic you can die. [...] I'm wondering if you agree with this, forget [A]I for a moment that at some point not saying it's [A]I just at some point we will develop technology that is so powerful that if you fuck it up, it blows up everybody.
Connor explicitly calls out AGI as not his main point:
The way I see things is, is that never mind. Like, I know AGI is the topic I talk about the most and whatever comes the most pressing one, but [A]I actually AGI is not the main thing I care about. The main thing I care about is technology in general, and of which AGI is just the most salient example in the current future. You know, 50 if I was born 50 years ago, I would care about nukes [...] And the thing I fundamentally care about is the stewardship of technology. [...] of course things can go bad. It's like we're[...] mimetically engineering, genetically engineering, super beings. Like, of course this is dangerous. Like, if we were genetically engineering super tigers, people would be like, hey, that seems maybe a bit, but let let's talk about this
Beff starts talking before he could finish, so skipping ahead a bit:
The way I see things is, is that our civilization is just not able to handle powerful technology. I just don't trust our institutions. Our leaders are, you know, distributed systems. Anything with, you know, hyper powerful technology at this point in time, this doesn't mean we couldn't get to systems that could handle this technology without catastrophic or at least vastly undesirable side effects. But I don't think we're there.
This is Connor's mindset in the whole debate. Backing up a bit:
But I want to make clear again, just the point I'm trying to make here. Is that the point I'm trying to make here is, is that predictably, if you have a civilization that doesn't even try, that just accelerates fast as possible, predictably guaranteed, you're not going to make it. You're definitely not going to make it. At some point, you will develop technology that is too powerful to handle if you just have the hands of random people, and if you do it as unsafe as possible, eventually an accident will happen. We almost nuked ourselves twice during the Cold War, where only a single person was between a nuke firing and it not happening. If the same thing happens with, say, superintelligence or some other extremely powerful technology which will happen in your scenario sooner or later. You know, maybe it goes well for 100 years, maybe it goes well for a thousand years, but eventually your civilization is just not going to make it.
Also the rolling death comment I mentioned previously. And the comment about crazy wackos.
↑ comment by [deleted] · 2024-02-04T08:30:19.139Z · LW(p) · GW(p)
Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you're prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor's position (and mine and Bostrom's).
So just to make this clear: a "black marble" is some kind of asymmetric technology. For example, a machine gun isn't a black marble, because for every gun that a person could buy or build themselves, large governments will have 100. A pandemic virus with a high fatality rate after a lengthy delay, one that didn't mutate to become less deadly*, would be a black marble, because current technology makes it cheap and easy to build any string of RNA you want, while the hospital care to save one person is extremely labor and material intensive, and often fails. *(evolutionary forces want to make the virus shorter, removing its ability to kill after a delay, which is why this likely won't work)
You feel confident that the set of "marbles" between (1) right now and (2) humans developing off-planet or interstellar colonies contains at least one black marble. And therefore, if humans draw the marbles faster and faster, planning to leave the planet soon, they will pull a black one.
Ok. And then the counter argument would be that you're probably wrong, because no black marbles have been drawn yet, and you would need to prove they exist before any action is taken about them? (and not to get sucked too far into the weeds, but most claims about a "superintelligence" are kinda like a fictional black marble that may simply not be that effective)
Beff's whole love story to capitalism and thermodynamics to me seems like simply an argument that since the start of the industrial revolution, technology has been net good and no black marbles were drawn, therefore the right choice is to continue. And it's a good argument without all the baggage, because it's empirical. (and a fair counter would be how technology has only been 'net good' when various actions, mostly government, stopped it from only enriching the owners of coal mines while the miners lost their limbs and died from lung disease...)
↑ comment by the gears to ascension (lahwran) · 2024-02-04T03:55:47.098Z · LW(p) · GW(p)
What do you even want from him?
I want someone who has significant experience in highly adversarial debates, where the point is to communicate to the audience why you think your interlocutor is not a good choice to ally with, and which have nothing to do with epistemics unless you can establish that social context. Connor failed to establish that social context in the presence of someone with high skill at destroying it. Beff won the debate, even though his arguments sucked. This does not make me agree with him.
But I don't think beff would have accepted the debate if he didn't expect to be able to win. I'm really frustrated with folks here for their blindness to how lopsided the debate was socio-emotionally.
What I'd look forward to is a debate with someone with significant experience establishing the epistemics frame, like, you know, an experienced professor. Eg, Bengio.
↑ comment by gilch · 2024-02-04T04:10:56.354Z · LW(p) · GW(p)
OK, that's a fair enough ask. Do you have an alternative candidate in mind with approximately Connor's position and said experience? If wishes were horses beggars could ride. Connor understands the arguments and the epistemics, to the point that (from my perspective) he's doing an even better job at live debates than Yudkowsky. (You might not consider that a high bar.) The only way he gets more debate skill is more practice, or perhaps much more specific guidance than you have given. Maybe doesn't have to be public, but would Beff have agreed otherwise? And who would critique them?
I'm really frustrated with folks here for their blindness to how lopsided the debate was socio-emotionally.
Not obviously true to me, although admittedly bad if so. I accept that my perspective might be biased here, as I went in already somewhat familiar with Connor's arguments. But I can only call what I'm capable of seeing. What's your evidence? Anything legible to me? Beff's fan club in the YouTube comments (or on Twitter/X)? That's not a good indicator of how a neutral party would see it, although I can see the comments themselves maybe skewing their perspective.
↑ comment by the gears to ascension (lahwran) · 2024-02-04T04:16:00.507Z · LW(p) · GW(p)
I do not have an alternate candidate in mind besides Bengio, and I don't know if we should expect to be able to get him to have a debate like this. If Connor were to ruthlessly drill this in debates with people who are capable of acting on Beff's level of consistent bad faith but are actually friendly, that might do the trick, not sure. But he has to be open to feedback that I currently model him as not being: things like "that argument structure will not work".
(It might be more effective to have Bengio debate Connor in a format like this, actually.)
The marginal fan club member is who I'm concerned about, so yeah, the edge of beff's fan club is my threat model. Neutral parties don't matter significantly in my model; what matters is how many high skill technical people are following the instructions of the conceptual entity beff represents an instance of.
↑ comment by gilch · 2024-02-04T04:22:12.746Z · LW(p) · GW(p)
That seems like a pretty uphill battle, because they already kind of vibe with Beff, and this would naturally prejudice them. How big/dangerous is e/acc, really? Are they getting worse? Maybe we should be choosing different battles.
Connor also has fans (like me) and Beff utterly failed to move me. Would Beff draw away the marginal rationalist with his performance? I kind of think not. But that's maybe not the part that matters.
comment by DaemonicSigil · 2024-02-03T23:21:09.370Z · LW(p) · GW(p)
C: You heard it, e/acc isn't about maximizing entropy [no shit?!]
B: No, it's about maximizing the free energy
C: So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
It's in line with what seems like Connor's debate strategy - make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
I agree with you that Connor performed very poorly in this debate. But this one is actually fair game. If you look at Beff's writings about "thermodynamic god" and these kinds of things, he talks a lot about how these ideas are supported by physics and the Crooks fluctuation theorem. Normally in a debate if someone says they value X, you interpret that as "I value X, but other things can also be valuable and there might be edge cases where X is bad and I'm reasonable and will make exceptions for those."
But physics doesn't have a concept of "reasonable". The ratio between the forward and backward probabilities in the Crooks fluctuation theorem is exponential in the amount of entropy produced. It's not exponential in the amount of entropy produced plus some correction terms to add in reasonable exceptions for edge cases. Given how much Beff has emphasized that his ideas originated in physics, I think it's reasonable to take him at his word and assume that he really is talking about the thing in the exponent of the Crooks fluctuation theorem. And then the question of "so hey, it sure does look like collapsing the false vacuum would dissipate an absolutely huge amount of free energy" is a very reasonable one to ask.
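(For readers who haven't seen it: the Crooks fluctuation theorem, as I understand it, says roughly

P_F(+W) / P_R(-W) = exp[(W - ΔF) / (k_B T)]

where P_F and P_R are the probabilities of observing work W along the forward process and -W along the time-reversed one. The exponent is the dissipated work in units of k_B T, i.e. the entropy produced in units of k_B, with no extra terms for "reasonable exceptions" - which is exactly the point above.)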
↑ comment by Algon · 2024-02-04T00:09:16.775Z · LW(p) · GW(p)
No, it's about maximizing the free energy
Wait, what? This is literally the opposite of what thermodynamics does though?
↑ comment by DaemonicSigil · 2024-02-04T01:59:15.048Z · LW(p) · GW(p)
Yes. I think Beff was speaking imprecisely there. In order to be consistent with what he's written elsewhere, he should have said something like: "maximizing the rate of free energy dissipation".
↑ comment by Mitchell_Porter · 2024-02-03T23:59:16.758Z · LW(p) · GW(p)
Free energy is energy available for work. How do you harness the energy released by vacuum decay?
comment by Said Achmiz (SaidAchmiz) · 2024-02-04T03:41:54.542Z · LW(p) · GW(p)
B: Why do you like friendship? Because evolution hard-coded this in humans
C: You’re mixing “is” and “ought”
He was not, in fact, mixing “is” and “ought”.
Seems like he was, actually. How was he not?
↑ comment by the gears to ascension (lahwran) · 2024-02-04T04:13:33.280Z · LW(p) · GW(p)
At the micro level, connor should have immediately proceeded to explain why that mixes is and ought, so that people watching the debate would see the mistake, even if beff was just going to abandon the line of argument as soon as he saw he couldn't win using it.
At the macro level, we need someone with skill in highly adversarial debates who also is high skill at explaining things straightforwardly in a way that will be a natural way to communicate for a wide audience. Connor has not been this person consistently enough. He was initially doing pretty well, but it hasn't held up.
I mean, for the record, he did better than I could. But that's not saying much, I haven't even participated in debate clubs before. The only benchmark I'd set is "better than Connor". Based on some fuck-your-feelings stuff Connor has said, I would expect him to be on board with me being blunt about this. Beff is a world-class propagandist. Rationalist style debate will not work in response to that.
comment by Roman Malov · 2024-02-03T21:19:04.986Z · LW(p) · GW(p)
The current population size that Mars can support is 0, so even 1 person would be overpopulation. To complete the analogy, we are currently sending the entire population to Mars, and someone says: "But what about oxygen? We don't know if it's on Mars, maybe we should work on spacesuits?" and another says, "Nah, we'll figure it out when we get there."
comment by the gears to ascension (lahwran) · 2024-02-03T20:33:40.401Z · LW(p) · GW(p)
+1 on can we get connor to please stop doing these debates and just... let them not happen until someone more qualified ends up doing them? for example, I'd love to see beff debate Yoshua Bengio. (Or maybe somewhat more usefully, I'd like to see Bengio debate Alex Turner.)
↑ comment by Chris_Leong · 2024-02-04T06:52:41.273Z · LW(p) · GW(p)
If I'm being honest, I don't see Beff as worthy of debating Yoshua Bengio.
↑ comment by the gears to ascension (lahwran) · 2024-02-05T04:45:47.342Z · LW(p) · GW(p)
yeah, fair point. in that case, bengio and turner.
↑ comment by bideup · 2024-02-05T11:09:45.831Z · LW(p) · GW(p)
Are you interested in these debates in order to help form your own views, or convince others?
I feel like debates are inferior to reading people's writings for the former purpose, and for the latter they deal collateral damage by making the public conversation more adversarial.
↑ comment by the gears to ascension (lahwran) · 2024-02-05T12:33:01.018Z · LW(p) · GW(p)
for bengio and turner, the former. for bezos vs connor, definitely the latter, but the public conversation is already adversarial which is why I care to respond in a way that seeks to establish truthseeking in a hostile context, ie, we can honor the reasonable claims but must dishonor the way the reasonable claims are entered in order to get any use out of that. Connor is reasonably good at this but needs to tone down some traits that I don't know how to advise further on. bengio and turner would hopefully do it in text I guess, yeah.
comment by gilch · 2024-02-05T04:24:15.112Z · LW(p) · GW(p)
Connor explains more about what he was trying to do here: https://twitter.com/NPCollapse/status/1753902877452439681#m
There is a pattern of debate where you make an argument of the form "X -> Y", and the other person hears "X is true", and then retorts with "But X isn't true!"
There is a viral (and probably fake) meme about prisoners and having breakfast that illustrates this pattern.
Why is it useful to make arguments of this shape? Why not just talk about X directly?
Arguments like this are useful to avoid arguing about points that aren't actually cruxes and wasting time in a debate.
As a concrete example, it is worth asking the question "if you believed that AGI was dangerous (X) -> would you agree it shouldn't be open sourced (Y)?"
The reason this is useful to establish before talking about whether AGI is actually dangerous or not is that if the other person denies that we shouldn't open source it even in principle (denies "X -> Y", independent of whether X is true or not, which is a thing more than one person I have debated has bitten the bullet on), then there's no point in arguing about X, because whether or not it is true, it will not change their view on Y, which is the thing I care about.
If the other person agrees that if it was really that dangerous, then yeah maybe it shouldn't be open sourced (accepts "X -> Y", but not "X is true"), then it is useful to move on to a discussion about whether X is true or not, because it is an actual crux that could lead to minds being changed.
Mapping out what the cruxes/degrees of freedom are in an opponent's worldview is the core of understanding the other and hopefully changing minds, rather than wasting time on points that the opponent has already decided to never change their mind on.
Unfortunately, if it takes someone 20 minutes to answer a simple yes or no "X -> Y" question, this can still run out the clock. Alas.
There are a number of other recent Tweets from Connor (@NPCollapse) with more thoughts about the debate.