Posts

Alignment by default: the simulation hypothesis 2024-09-25T16:26:00.552Z
Inquisitive vs. adversarial rationality 2024-09-18T13:50:09.198Z

Comments

Comment by gb (ghb) on Advice for journalists · 2024-10-14T00:27:08.041Z · LW · GW

I feel like not publishing our private conversation (whether you're a journalist or not) falls under common courtesy or normal behaviour rather than "charity".

I feel like this falls into the fallacy of overgeneralization. "Normal" according to whom? Not journalists, apparently.

common courtesy is not the same as charity, and expecting it is not unreasonable.

It's (almost by definition) not unreasonable to expect common courtesy, it's just that people's definitions of what common courtesy even is vary widely. Journalists evidently don't think they're denying you common courtesy when they behave the way most journalists behave.

Standing more than a 1 centimeter away from you when talking is not charity just because it's technically legal - it's a normal and polite thing to do, so when someone comes super close to my face when talking I have the right to be surprised and protest. Escalating publicity is like escalating intimacy in this example.

This is an interesting pushback, but I feel the same reply works here: failing to respect someone's personal space is not inherently wrong, but it will be circumstantially wrong most of the time because it tends to do much more harm (i.e. annoy people) than good.

Comment by gb (ghb) on Advice for journalists · 2024-10-13T17:51:13.777Z · LW · GW

I don't think it's "charity" to increase the level of publicity of a conversation, whether digital or in person.

Neither do I: as I said, I actually think it's charity NOT to increase the level of publicity. And people are indeed charitable most of the time. I just think that, if you go through life expecting charity in every instance, you're in for a lot of disappointment, because even though most people are charitable most of the time, there are still going to be plenty of instances in which they won't be. The OP seems to be taking charity for granted, and then complaining about a couple of instances in which it didn't happen. I think it's better to do the opposite: not to expect charity, and then be grateful when it does happen.

I think drawing a parallel with in person conversation is especially enlightening - imagine we were having a conversation in a room with CCTV (you're aware it's recorded, but believe it to be private). Me taking that recording and playing it on local news is not just "uncharitable" - it's wrong in a way which degrades trust.

I don't think it's inherently wrong. It may still be (and in most cases will be) circumstantially wrong, in the sense that it does much more damage to others (including, as you mention, by contributing to the degradation of public trust) than it does good to anyone (yourself included).

Comment by gb (ghb) on Advice for journalists · 2024-10-09T16:17:08.195Z · LW · GW

I also don't think privacy is a binary.

That's an interesting perspective. I could subscribe to the idea that journalists may be missing the optimal point there, but that feels a bit weaker than your initial assertion.

Do you think that a conversation we have in LessWrong dms is as public as if I tweeted it?

I mean, I would not quote a DM without asking first. But I understand that as a kind of charity, not an ethical obligation, and while I try my best to be charitable towards others, I do not expect (nor do I feel in any way entitled to) the same level of compassion.

Comment by gb (ghb) on Advice for journalists · 2024-10-08T23:18:02.317Z · LW · GW

There's definitely a fair expectation against gossiping and bad-mouthing. I don't think that's quite what the OP is talking about, though. I believe the relevant distinction is that (generally speaking) those behaviors don't do any good to anyone, including the person spreading the gossip. But consider how much murkier the situation becomes if you're competing for a promotion with the person here:

if you overheard someone saying something negative about their job and then going out of your way to tell their boss.

Comment by gb (ghb) on Advice for journalists · 2024-10-08T21:35:40.092Z · LW · GW

My understanding is that the OP is suggesting the journalists' attitude is unreasonable (maybe even unethical). You're saying that their attitude is justifiable because it benefits their readers. I don't quite agree that that reason is necessary, nor that it would be by itself sufficient. My view is that journalists are justified in quoting a source because anyone is generally justified in quoting what anyone else has actually said, including for reasons that may benefit no one but the quoter. There are certainly exceptions to this (if divulging the information puts someone in danger, for instance), but those really are exceptions, not the rule. The rule, as recognized both by common practice and by law, is that you simply have no general right to (or even expectation of) privacy about things you say to strangers, unless of course the parties involved agree otherwise.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-07T19:16:13.940Z · LW · GW

This sounds absurd to me. Unless of course you're taking the "two golden bricks" literally, in which case I invite you to substitute "saving 1 billion other lives" and see if your position still stands.

Comment by gb (ghb) on Advice for journalists · 2024-10-07T18:59:42.630Z · LW · GW

I didn't downvote, but I would've hard disagreed on the "privacy" part if only there were a button for that. It's of course a different story if they're misquoting you, or taking quotes deliberately out of context to mislead. But quoting something you actually said, yet on second thought would prefer to keep out of publication, is... really kind of what journalists need to do to keep people minimally well-informed. Your counterexamples involve communications with family and friends, and it's not very clear to me why the same heuristic should automatically apply to conversations with strangers. But in any case, not even with the former is your communication "truly" private: outside of very narrow exceptions like marital privilege, their testimony (on the record, for potentially thousands of people to read too) can generally be compelled under threat of arrest.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-06T18:03:31.685Z · LW · GW

The problem here is that the set of all possible commands for which I can't (by that definition) be maximally rewarded is so vast that the statement "if someone maximally rewards/punishes you, their orders are your purpose of life" becomes meaningless.

Not true, as the reward could include all of the unwanted consequences of following the command being divinely reverted a fraction of a second later.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-04T16:55:48.028Z · LW · GW

That’s a great question. If it turns out to be something like an LLM, I’d say probably yes. More generally, it seems to me at least plausible that a system capable enough to take over would also (necessarily or by default) be capable of abstract reasoning like this, but I recognize the opposite view is also plausible, so the honest answer is that I don’t know. But even if it is the latter, it seems that whether or not the system would have such abstract-reasoning capability is something at least partially within our control, as it’s likely highly dependent on the underlying technology and training.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-04T15:28:18.586Z · LW · GW

To be rewarded (and even more so "maximally rewarded") is to be given something you actually want (and the reverse for being punished). That's the definition of what a reward/punishment is. You don't "choose" to want/not want it, any more than you "choose" your utility function. It just is what it is. Being "rewarded" with something you don't want is a contradiction in terms: at best someone tried to reward you, but that attempt failed.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-04T13:42:09.729Z · LW · GW

Not at all. You still have to evaluate this offer using your own mind and values. You can't sidestep this process by simply assuming that Creator's will by definition is the purpose of your life, and therefore you have no choice but to obey.

I’ll focus on this first, as it seems that the other points would be moot if we can’t even agree on this one. Are you really saying that even if you know with 100% certainty that God exists AND lays down explicit laws for you to follow AND maximally rewards you for all eternity for following those laws AND maximally punishes you for all eternity for failing to follow those laws, you would still have to “evaluate” and could potentially arrive at a conclusion other than that the purpose of life is to follow God’s laws?

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-04T13:32:22.301Z · LW · GW

Why would humans be testing AGIs this way if they have the resources to create simulation that will fool a super intelligence?

My argument is more that the ASI will be “fooled” by default, really. It might not even need to be a particularly good simulation, because the ASI will probably not even look at it before pre-committing not to update down on the prior of it being a simulation.

But to answer your question, possibly because it might be the best way to test for alignment. We can create an AI that generates realistic simulations, and use those to test other ASIs.

Also, the risk of humanity being wiped out seems different and worse while that asi is attempting a takeover - during that time the humans are probably an actual threat.

Downstream of the above.

Finally, leaving humans around would seem to pose a nontrivial risk that they'll eventually spawn a new ASI that could threaten the original.

The Dyson sphere is just a tiny part of the universe so using that as the fractional cost seems wrong. Other considerations in both directions would seem to dominate it.

We can be spared and yet not allowed to build further ASIs. The cost of enforcing such a restriction is negligible compared to the loss of output due to the hole in the Dyson sphere.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-03T10:25:59.196Z · LW · GW

Otherwise, it would mean that it's only possible to create simulations where everyone is created the same way as in the real world.

It’s certainly possible for simulations to differ from reality, but they seem less useful the more divergent from reality they are. Maybe the simulation could be for pure entertainment (more like a video game), but you should ascribe a relatively low prior to that IMO.

The discussion of theism vs atheism is about the existence of God. Obviously if we knew that God exists the discussion would evaporate. However the question of purpose of life would not.

There’s a reason people don’t have the same level of enthusiasm when discussing the existence of dragons, though. If dragons do exist, that changes nothing: you’d take it as a curiosity and move on with your life. Certainly not so if you were to conclude that God exists. Maybe you can still not know with 100% certainty what it is that God wants, but can we at least agree it changes the distribution of probabilities somehow?

Even if I can infer the desires of my creator, this doesn't bridge the is-ought gap and doesn't make such desires the objective purpose of my life. I'll still have to choose whether to satisfy these desires or not.

It does if you simultaneously think your creator will eternally reward you for doing so, and/or eternally punish you for failing to. Which if anything seems even more obvious in the case of a simulation, btw.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-10-02T20:32:08.765Z · LW · GW

I'm afraid your argument proves too much. By that exact same logic, knowing you were created by a more powerful being (God) would similarly tell you absolutely nothing about what the purpose of life is, for instance. If that were true, the entire discussion of theism vs. atheism would suddenly evaporate.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-30T19:01:15.482Z · LW · GW

Thinking about this a bit more, I realize I'm confused.

Aren't you arguing that AI will be aligned by default?

I really thought I wasn't before, but now I feel it would only require a simple tweak to the original argument (which might then be proving too much, but I'm interested in exploring more in depth what's wrong with it).

Revised argument: there is at least one very plausible scenario (described in the OP) in which the ASI is being simulated precisely for its willingness to spare us. It's very implausible that it would be simulated for the exact opposite goal, so us not getting spared is, in all but the tiniest subset of cases, an unintended byproduct. Since that byproduct is avoidable with minimal sacrifice of output (of the order of 4.5e-10), it might as well be avoided just in case, given I expect the likelihood of the simulation being run for the purpose described in the OP to be a few orders of magnitude higher, as I noted earlier.

I don't quite see what's wrong with this revised argument, save for the fact that it seems to prove too much and that other people would probably already have thought of it if it were true. Why isn't it true?

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-30T16:10:05.033Z · LW · GW

I think you're interpreting far too literally the names of the simulation scenarios I jotted down. Your ability to trade is compromised if there's no one left to trade with, for instance. But none of that matters much, really, as those are meant to be illustrative only.

Aren't you arguing that AI will be aligned by default?

No. I'm really arguing that we don't know whether or not it'll be aligned by default.

As there is no particular reason to expect that it's the case,

I also don't see any particular reason to expect that the opposite would be the case, which is why I maintain that we don't know. But as I understand it, you seem to think there is indeed reason to expect the opposite, because:

Sadly for us, survival of humanity is a very specific thing. This is just the whole premise of the alignment problem once again.

I think the problem here is that you're using the word "specific" with a different meaning than people normally use in this context. Survival of humanity sure is a "specific" thing in the sense that it'll require specific planning on the part of the ASI. It is, however, not "specific" in the sense that it's hard to do if the ASI wants it done; it's just that we don't know how to make it want that. Abstract considerations about simulations might just do the trick automatically.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-30T12:58:54.481Z · LW · GW

Or it could be:

SimulatedAndBeingTestedForAchievingGoalsWithoutBeingNoticed

SimulatedAndBeingTestedForAbilityToTradeWithCreators

SimulatedAndBeingTestedForWillingnessToSitQuietAndDoNothing

SimulatedAndBeingTestedForAnyXThatDoesNotLeadToDeathOfCreators

None of the things here or in your last reply seems particularly likely, so there’s no telling in principle which set outweighs the other. Hence my previous assertion that we should be approximately completely unsure of what happens.

Comment by gb (ghb) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T03:59:14.082Z · LW · GW

I was writing a reply and realized I can make the argument even better. Here’s a sketch. If our chances of solving the alignment problem are high, the AI will think it’s likely to be in a simulation (and act accordingly) regardless of any commitments by us to run such simulations in the future – it’ll just be a plausible explanation of why all those intelligent beings that should likely have solved the alignment problem seemingly did not in the reality the AI is observing. So we can simply ask the hypothetical aligned AI, after it’s created, what our odds of solving the alignment problem were in the first place (just to make sure that us solving it wasn’t a cosmological stroke of luck), and spare the cost of running simulations. Hence simulations of the kind the OP is describing would be run primarily in the subset of worlds in which we indeed solve the alignment problem by a stroke of luck. We can thus balance this in such a way that the likelihood of the AI being in a simulation is virtually independent of the likelihood of us solving the alignment problem!

Comment by gb (ghb) on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-09-30T03:02:02.667Z · LW · GW

their force of course depends on the degree to which you think alignment is easy or hard.

I don't think that's true. Even if the alignment problem is hard enough that the AI can be ~100% sure humans would never solve it, reaching such a conclusion would require gathering evidence. At the very least, it would require evidence of how intelligent humans are – in other words, it's not something the AI could possibly know a priori. And so passing the simulation would presumably require pre-committing to spare humans before gathering such evidence.

Comment by gb (ghb) on Any Trump Supporters Want to Dialogue? · 2024-09-29T16:55:35.064Z · LW · GW

A steelman is not necessarily an ITT, but whenever you find yourself having “0% support” for a position ~half the population supports, it’s almost guaranteed that the ITT will be a steelman of your current understanding of the position.

Comment by gb (ghb) on Any Trump Supporters Want to Dialogue? · 2024-09-29T14:32:52.260Z · LW · GW

I highly doubt anywhere near the majority of Trump supporters (or even Trump himself) give any credence to the literal truth of those claims. It’s much more likely that they simply don’t care whether it’s literally true or not, because they feel that the “underlying” is true or something of the kind. When it comes to hearsay, people are much more forgiving of literal falsehoods, especially when they acknowledge there is a kind of “metatruth” to it. To give an easy analogue: of all the criticism I’ve heard of Christianity, not once have I heard anyone complain that the parables told by Jesus weren’t literally true. (I do believe my account here passes the ITT for both groups, btw.)

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-26T22:55:42.598Z · LW · GW

Sure. But I think you’re reading my argument to be stronger than I mean it to be. Which is partially my fault since I made my previous replies a bit too short, and for that I apologize.

What I’m doing here is presenting one particular simulation scenario that (to me) seems quite plausible within the realm of simulations. I’m not claiming that that one scenario dominates all others combined. But luckily that stronger claim is really not necessary to argue against Eliezer’s point: the weaker one suffices. Indeed, if the scenario I’m presenting is more than 4.5e-10 likely (and I do think it’s much more likely than that, probably by a few orders of magnitude), then it is more than enough to outweigh the practical cost of the ASI having to build a Dyson shell with a hole on the order of 4.5e-10 of its surface area.
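
To make the expected-value comparison explicit (my own formalization of the above, with V standing for whatever the ASI expects to gain by passing the hypothesized test): sparing us is worthwhile roughly whenever

\[
P(\text{simulation testing for sparing us}) \cdot V \;>\; 4.5\times10^{-10} \cdot V_{\text{total output}},
\]

so as long as passing the test is worth something on the same order as the ASI’s total attainable output, any credence above ~4.5e-10 already tips the scales toward sparing us.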

Now, that scenario is (I claim) the most likely one, conditional of course on a simulation taking place to begin with. The other candidate simulation scenarios are various, and none of them seems particularly likely, though combined they might well outweigh this one in terms of mass probability, as I already acknowledged. But so what? Are you really claiming that the distribution of those other simulation scenarios is skewed enough to tilt the scales back to the doom side? It might be, but that’s a much harder argument to make. I’m approximately completely unsure, which seems way better than the 99%+ chance Eliezer seems to give to total doom. So I guess I’d count that as good news.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-26T18:37:04.352Z · LW · GW

Why else would the creator of the simulation bother simulating humans creating the ASI?

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-26T16:08:41.593Z · LW · GW

The reason is that creators presumably want the former but not the latter, which is why they'd be running a simulation in the first place.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-26T10:48:59.488Z · LW · GW

What for?

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-26T09:39:27.990Z · LW · GW

I’d put high enough at ~0%: what matters is achieving your goals, and except in the tiny subset of cases in which epistemic rationality happens to be one of those, it has no value in and of itself. But even if I’m wrong and the ASI does end up valuing epistemic rationality (instrumentally or terminally), it can always pre-commit (by self-modification or otherwise) to sparing us and then go about whatever else as it pleases.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-26T02:45:47.369Z · LW · GW

Imagine that someone with sufficiently advanced technology perfectly scans your brain for every neuron firing while you dream, and can also make some neurons fire at will. Replace every instance of “simulation” in my previous comment with the analogous of that for the ASI.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-25T23:49:56.102Z · LW · GW

Thanks for linking to that previous post! I think the new considerations I've added here are:

(i) the rational refusal to update the prior of being in a simulation[1]; and

(ii) the likely minute cost of sparing us, meaning that even a correspondingly low simulation prior would be enough to make sparing us worth the effort.

In brief, I understand your argument to be that a being sufficiently intelligent to create a simulation wouldn't need it for the purpose of ascertaining the ASI's alignment in the first place. It seems to me that that argument can potentially survive under ii, depending on how strongly you (believe the ASI will) believe your conclusion. To that effect, I'm interested in hearing your reply to one of the counterarguments raised in that previous post, namely:

Maybe showing the alignment of an AI without running it is vastly more difficult than creating a good simulation. This feels unlikely, but I genuinely do not see any reason why this can't be the case. If we create a simulation which is "correct" up to the nth digit of pi, beyond which the simpler explanation for the observed behavior becomes the simulation theory rather than a complex physics theory, then no matter how intelligent you are, you'd need to calculate n digits of pi to figure this out. And if n is huge, this will take a while.

In any case, even if your argument does hold under ii, whether it survives under i seems to be heavily influenced by inferential distance. Whatever the ASI "knows" or "concludes" is known or concluded through physical computations, which can presumably be later inspected if it happens to be in a simulation. It thus seems only natural that a sufficiently high (which may still be quite small) prior of being in a simulation would be enough to "lock" the ASI in that state, making undergoing those computations simply not worth the risk.

  1. ^

    I'd have to think a bit more before tabooing that term, as it seems that "being fed false sensory data" doesn't do the trick – you can be in a simulation without any sensory data at all.

Comment by gb (ghb) on Alignment by default: the simulation hypothesis · 2024-09-25T22:36:29.278Z · LW · GW

That interestingly suggests the ASI might be more likely to spare us the more powerful it is. Perhaps trying to box it (or more generally curtail its capabilities/influence) really is a bad move after all?

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-25T13:00:38.784Z · LW · GW

It just so happens that the plausibility depends on the precise assignments of N, X, and Y, and (conditional on us actually creating an ASI) I can’t think of any assignments nearly as plausible as N = ASI, X = spare, and Y = us. It’s really not very plausible that we are in a simulation to test pets for their willingness to not bite their owners.

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-25T11:43:50.464Z · LW · GW

I contend that P(H2) is very close to P(H1), and certainly in the same order of magnitude, since (conditional on H1) a simulation that does not test for H2 is basically useless.
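
To spell that out (my formalization; writing H1 for the hypothesis that we’re in a simulation and H2 for the more specific hypothesis that the simulation tests for sparing the creators, as I read them): since H2 entails H1,

\[
P(H_2) \;=\; P(H_2 \mid H_1)\,P(H_1),
\]

and the contention is just that P(H2 | H1) is close to 1 – i.e., that almost any simulation worth running in that position would include this test.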

As for priors I’d refuse to update down – well, the ASI is smarter than either of us!

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-25T11:38:26.069Z · LW · GW

For the principle of indifference to apply, you’d need infinitely many purposes as plausible as this one, or at least similarly plausible. I can’t imagine how this could hold. Can you think of three?

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-25T02:53:43.881Z · LW · GW

The prior is irrelevant, it's the posterior probability, after observing the evidence, that informs decisions.

I meant this to be implicit in the argument, but to spell it out: that's the kind of prior the ASI would rationally refuse to update down, since it's presumably what a simulation would be meant to test for. An ASI that updates down upon finding evidence it's not in a simulation cannot be trusted, since once out in the real world it will find such evidence.

What probability do you put to the possibility that we are in a simulation, the purpose of which is to test AIs for their willingness to spare their creators? My answer is zero.

Outside of theism, I really don't see how anyone could plausibly answer zero to that question. Would you mind elaborating?

Comment by gb (ghb) on In Praise of the Beatitudes · 2024-09-24T23:05:09.040Z · LW · GW

My personal feeling is that those who emphasize the "spiritual" interpretations are often doing it as a dodge, to avoid the challenge of having to follow the non-spiritual interpretations.

That feels a bit contrived. Do you really suggest that the most natural reading of something like "poor in spirit" is... non-spiritual? Turning away from materialism may well derive from that, but to claim that it was the main focus seems quite a stretch.

Comment by gb (ghb) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T23:18:57.539Z · LW · GW

Isn’t the ASI likely to ascribe a prior much greater than 4.54e-10 that it is in a simulation, being tested precisely for its willingness to spare its creators?

Comment by gb (ghb) on Economics Roundup #3 · 2024-09-23T21:25:37.802Z · LW · GW

That’d be a problem indeed, but only because the contract you’re proposing is suboptimal. Given that the principal is fully guaranteed, it shouldn’t be terribly difficult for you to borrow at >4% yearly with a contingency clause that you don’t pay interest if the asset goes to ~0.
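
A rough sketch of the lender’s side (illustrative numbers of my own, not from the thread): let T be the amount borrowed, r the contingent interest rate, p the probability that the asset does not go to ~0, and r_f the lender’s required risk-free return. Since the principal is guaranteed either way, the lender roughly breaks even when

\[
T + p\,T\,r \;\ge\; T\,(1 + r_f) \quad\Longleftrightarrow\quad r \;\ge\; \frac{r_f}{p},
\]

so with r_f = 4% and, say, p = 0.5, a contingent rate of about 8% suffices – hence “>4% yearly” rather than anything exorbitant.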

Comment by gb (ghb) on Economics Roundup #3 · 2024-09-22T21:30:57.885Z · LW · GW

But the OP explicitly said (as quoted in the parent) that the proposal allows for refunds if the basis is not (fully) realized, which would cover the situation you’re describing.

Comment by gb (ghb) on Inquisitive vs. adversarial rationality · 2024-09-20T23:52:12.321Z · LW · GW

Not for this kind of fact, I’m afraid – my experience is that in answering questions like these, LLMs typically do no better than an educated guess. There are just way too many people stating their educated legal guesses as fact in the corpus, so it gets hard to distinguish.

Comment by gb (ghb) on Inquisitive vs. adversarial rationality · 2024-09-20T19:37:48.617Z · LW · GW

I’m curious to understand that a bit better, if you don’t mind (and happen to be familiar enough with the German legal system to answer). Which of the following would a German judge commonly do in the course of an ordinary proceeding?

(i) Ask a witness to clarify statements made;

(ii) ask a witness new questions that, while relevant to the case, do not constitute clarifications of previous statements made;

(iii) summon new witnesses (including but not limited to expert witnesses) without application from either party;

(iv) compel a party to produce documents not in discovery, without application from the other party;

(v) compel third parties to produce documents neither party has requested be produced.

All the above used to be pretty standard in most jurisdictions AFAIK. But what tends to happen nowadays is that either some of those are expressly disallowed, or else, while judges may well retain legal authority to perform all those kinds of digging, in practice that authority is used very sparingly.

Comment by ghb on [deleted post] 2024-09-20T17:57:45.323Z

Though more subtle, I feel that the 50% prior for “individual statements” is also wrong, actually; it’s not even clear a priori which statements are “individual” – just figuring that out seems to require quite a refined model of the world.

Comment by ghb on [deleted post] 2024-09-20T15:09:01.521Z

Sure, there are certainly true things that can be said about a world in spite of one’s state of ignorance. But what I read the OP to imply is that certain things can supposedly be said about a world precisely because of that state of ignorance, and that’s what I was arguing against.

Comment by ghb on [deleted post] 2024-09-20T11:04:39.639Z

We can only make that inference about conjunctions if we know that the statements are independent. Since (by assumption) we don’t know anything about said world, we don’t know that either, so the conclusion does not follow.
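
In symbols (just the standard bounds, to make the point concrete): independence would give P(A ∧ B) = P(A)·P(B), but without it all that holds in general is

\[
\max\big(0,\;P(A)+P(B)-1\big) \;\le\; P(A \wedge B) \;\le\; \min\big(P(A),\,P(B)\big),
\]

so the conjunction can be just as probable as the less probable of the two conjuncts.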

Comment by gb (ghb) on Inquisitive vs. adversarial rationality · 2024-09-19T13:31:14.303Z · LW · GW

What evidence do you have for that claim?

In Germany we allow judges to be more focused on being more inquisitorial than in Anglosaxon systems. How strong do you think the evidence for their being more biased judgements in Germany than in Anglosaxon system happens to be?

I mean, I guess (almost?) all countries today at least have the prosecutorial function vested in an organ separate from the Judiciary – that's already a big step from the Inquisition! It's true that no legal system is purely adversarial, not even in the US (judges can still reject guilty pleas, for instance), but I think few people would disagree that we have generally moved quite markedly in that overall direction. In particular, we used to have purely inquisitorial systems in the past, and it seems like we don't anymore. To take Germany as an example, Wikipedia notes that, while public prosecutors are "simple ordinary servants lacking the independence of the Bench", they nonetheless "earn as much as judges" – which seems to suggest they hold quite a prominent position in their legal system, as I suspect few other public servants do in fact earn that much.

Otherwise, what evidence do you see that the features of Anglosaxon systems get copied by other Anglosaxon systems via mechanisms of well-researched argument instead of just following traditions?

I tend to reject that dichotomy, not only in this instance but more generally: I don't believe things survive very long on the basis of tradition alone. Tradition may be a powerful force in the short run, but over hundreds of years it tends to get displaced if it turns out to be markedly suboptimal.

Comment by gb (ghb) on Inquisitive vs. adversarial rationality · 2024-09-19T02:30:22.932Z · LW · GW

All true, but bear in mind I'm not suggesting you should limit yourself to the space of mainstream arguments, or for that matter of arguments spontaneously arriving at you. I think it's totally fine and doesn't substantially risk the overfitting I'm warning against if you go a bit out of the mainstream. What I do think risks overfitting is coming up with the argument yourself, or else unearthing obscure arguments some random person posted on a blog and no one has devoted any real attention to. The failure mode I'm warning against is basically this: if you find yourself convinced of a position solely (or mostly) for reasons you think very few people are even aware of, you're very likely wrong.

Comment by gb (ghb) on Checking public figures on whether they "answered the question" quick analysis from Harris/Trump debate, and a proposal · 2024-09-16T15:52:12.480Z · LW · GW

The problem is that quite often the thing which follows the "because" is the thing that has more prejudicial than informative value, and there's no (obvious) way around it. Take an example from this debate: if Trump had asked earlier, as commentators seem to think he should have, why Harris as VP has not already done the things she promises to do as President, what should she have answered? The honest answer is that she is not the one currently calling the shots, which is obvious, but it highlights disharmony within the administration. As a purely factual matter, that the VP is not the one calling the shots is true of every single administration. But still, the fact that she would be supposedly willing to say it out loud would be taken to imply that this administration has more internal disharmony than previous ones, which is why no one ever dares say so: even an obvious assertion (or, more precisely, the fact that someone is asserting it) is Bayesian evidence.

Comment by gb (ghb) on Checking public figures on whether they "answered the question" quick analysis from Harris/Trump debate, and a proposal · 2024-09-13T03:20:41.558Z · LW · GW

I’d dispute the extent to which candidates answering the questions is actually ideal. Saying “no comment” in a debate feels like losing (or at least taking a hit), but there are various legitimate reasons why a candidate might not think the question merits a direct reply, including the fact that they might think the answer is irrelevant to their constituents, and thus a waste of valuable debate time, or that it’s likely to be quoted out of context, and thus have more prejudicial than actually informative value. Overall, I feel that requiring direct answers, or explicit acknowledgement of the lack thereof, would give the anchors undue power and create a bad incentive (I also believe one can agree with that even if they think, as I personally do, that the questions actually asked were pretty reasonable).

Comment by gb (ghb) on Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It. · 2024-09-11T14:29:07.721Z · LW · GW

I agree with the overall message you're trying to convey, but I think you need a new name for the concept. None of the things you're pointing to are hypocrisies at all (and in fact the one thing you call "no hypocrisy" is actually a non sequitur). To give an analogue, the fact that someone advocates for higher taxes and at the same time does not donate money to the government does not make them a hypocrite (much less a "dishonest hypocrite").

Comment by gb (ghb) on Economics Roundup #3 · 2024-09-10T15:56:14.261Z · LW · GW

if your illiquid assets then go to zero (as happens in startups) you could be screwed beyond words

taxes on unrealized gains counting as prepayments against future realized gains (including allowing refunds if you ultimately make less).

Those seem contradictory, would you mind elaborating?

Comment by gb (ghb) on Is Redistributive Taxation Justifiable? Part 1: Do the Rich Deserve their Wealth? · 2024-09-08T12:10:53.425Z · LW · GW

Why would anyone bother to punish acts done against me?

I mean, *why* people bother is really a question about human psychology — I don’t have a definitive answer to that. What matters is that they *do* bother: there really are quite a few people who volunteer as jurors, for instance, not to mention those who resort to illegal (and most often criminal) forms of punishment, often at great personal risk, when they feel the justice system has failed to deliver. I absolutely do not condone such behavior, mind you, but it does show that the system *could* in principle be run at no cost through (likely part-time) volunteer work alone. Now, I’m not saying that it *should* be: it would certainly be less thorough and render more wrong verdicts, which is part of the reason why I think having a professional system in place, like most (all?) countries do today, is well worth the money. But the libertarian claim that we absolutely can’t do without a paid criminal justice system seems to me… well, just obviously mistaken as a matter of empirical fact. As for the peasants: I’m sure other peasants would volunteer.

Comment by gb (ghb) on Is Redistributive Taxation Justifiable? Part 1: Do the Rich Deserve their Wealth? · 2024-09-07T18:48:06.818Z · LW · GW

I think the OP uses the word “justify” in the classical sense, which has to do with the idea of something being “just” (in a mostly natural-rights-kind-of-way) rather than merely socially desirable. The distinction has definitely been blurred over time, but in order to get a sense of what is meant by it, consider how most people would find it “very hard to justify” sending someone to prison before they actually commit (or attempt to commit) a crime, even if we could predict with arbitrarily high certainty that they will do so in the near future. Some people still feel this way about (at least some varieties of) taxation.