Posts

Defense Against The Dark Arts: An Introduction 2023-12-25T06:36:06.278Z
The Dark Arts 2023-12-19T04:41:13.356Z
Beyond the Data: Why aid to poor doesn't work 2023-10-25T05:03:39.402Z

Comments

Comment by Lyrongolem (david-xiao) on Priors and Prejudice · 2024-06-12T01:01:02.976Z · LW · GW

Excellent post! I found the StarCraft analogy fairly amusing. Though, I am curious: doesn't your StarCraft analogy resolve the issue of trapped priors?

Like you said, players who have played all three factions mostly agree that the factions tend to be roughly similar in difficulty. However, to play all three factions you must arbitrarily start off playing one of them. If such people had their priors completely trapped, they wouldn't be able to change their minds after the first game, which clearly isn't true. 

I feel like even if two people disagree in theory, they tend to agree in practice once they have experience with every viewpoint and can point to concrete examples. (For instance, the EA and the Effective Samaritan likely both agree that Denmark-style social democracy is generally good while Maoist communism is generally bad, even if they disagree on what socialism is or whether it's good or not!) 

Clearly the rationalist strategy, then, is not to immediately assume you're right (which the evidence doesn't support) but to run an experiment and figure out who's right. Notably, you shouldn't be using underhanded tactics!

I wake up to an email, thanking me and explaining how my donation has helped launch charter cities in two developing countries. Of course getting the approvals required some dirty political maneuvering, but that is the price of getting anything done.

I think of the Effective Samaritan, who has just woken up to a similar thankful email from the Developing Unions Project. In it, they explain how their donation helped make it possible for them to open a new branch of advocacy, lobbying to shut down two charter cities whose lax regulations are abused by employers to circumvent union agreements. It will require some dirty political maneuvering to get them shut down, but the ends will justify the means.

This seems pretty clearly like a prisoner's dilemma, doesn't it? You have concluded 'the benefits will exceed the costs' without being able to convince a reasonable opponent of the same using empirical evidence, and you went ahead and caused tangible harm anyway. Meanwhile the Effective Samaritan used similar tactics to end your experiment before it bore fruit. Lose-lose. You would both have been better off agreeing that underhanded tactics are bad and proceeding accordingly. 
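
To make the lose-lose framing concrete, here's a rough payoff sketch. The numbers are purely illustrative assumptions of mine, chosen only to show the standard prisoner's dilemma ordering, with entries as (you, Effective Samaritan):

  Both play fair:                  ( 2,  2) — both experiments run, and reality gets to arbitrate
  You maneuver, he doesn't:        ( 3, -1) — your charter cities open unopposed
  He maneuvers, you don't:         (-1,  3) — your cities get shut down, his unions expand
  Both maneuver (what happened):   ( 0,  0) — the lose-lose outcome above

Each of you prefers maneuvering whatever the other does (3 > 2 and 0 > -1), yet mutual maneuvering leaves you both worse off than mutual restraint (0 < 2).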

Why not just decide not to fight each other? He starts unions in one developing country and you build a charter city in another. If one strategy is clearly better (which you both seem to insist on) then clearly the winning choice is to stop. There's no need for randomization or compromise, just moderation. You don't need to actively undermine each other's efforts if you expect the results to speak for themselves. Somewhere in reality there is a truth between your worldviews. You just need to find it. 

As long as you recognize potential biases and are willing to experiment, wouldn't you eventually arrive at the correct conclusions? Why bemoan the priors? They don't actually affect reality. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-13T05:45:06.447Z · LW · GW

Glad you enjoyed! 

Let me send a PM regarding a dialogue... 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-09T02:53:54.122Z · LW · GW

But point 3 was already a counterfactual by your own formulation of it. 

Well, no, it's not, because I am speaking about future events (i.e., should we give aid or not), not past events. 

I suppose that if you are convinced that Ukraine is going to win, then a marginal increase in aid is expected to shorten the war, but there is no reason to suspect that proponents of point 3 are referring to marginal adjustments in the amount of help.

I'm not. Current battlefield conditions suggest that the war will be a protracted stalemate favoring Russia absent strategically meaningful aid. By strategically meaningful I mean aid that either provides capabilities that allow retaking of territory or negates a long-term weakness (say, shell or manpower shortages). But I digress. In any case, I'm arguing from the perspective of military capability, not as an expert, but as someone who is familiar with expert arguments (I could cite, for instance, Oryx, the Institute for the Study of War, Perun, etc.), with a basic understanding of battlefield dynamics and conditions at a strategic level.

From the standpoint of someone like Vivek — or for that matter from the standpoint of someone who understands how present resources can be converted into revenue streams and vice versa — additional donations to the war effort do constitute an intensification of aid, even if the rate of resource transfers remain the same.

And here again... this doesn't really address my point, which is mainly that statements 2 and 3 are essentially claims about the relative strategic capability of two state actors, and this is neither domain-level expert knowledge nor exceedingly complicated. You cannot argue, for instance, that the US does not have transatlantic power projection (aircraft carriers say hello). In the same way, you cannot argue Russia has the capability to win a quick and decisive war over Ukraine without Western aid, because we saw them fail. Empirically speaking they lack that capability, and everyone who follows the conflict is aware of this.  

Supposing for the sake of argument that his analysis is conventionally unqualified, it does not imply that he has insufficient evidence to hold the position he does.

I feel like we're going in circles now. It could be that I failed to make my points clearly, or you failed to understand them. But in any case my position is that matters of historical military capability (note historical: as in past tense, already occurred) are not up for debate. 2) and 3) fly in the face of it. 

In any case I think this is a good place to discontinue; I don't think we're getting any benefit from further discussion. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-07T22:40:58.062Z · LW · GW

I understand how you use the terms, but my point is that Vivek does not in fact demonstrate the information gap you impute to him. I am confident he would be easily able to address your objections.

Ok. Let me address this then. 

The fact that the war has persisted for so long seems sufficient proof that, in the absence of the aid, Ukraine would have quickly surrendered or at worst suffered a quick defeat. In either case, the war would have been shorter. Point 3 is unambiguously correct, and even most people on your side of the issue would agree with that (ie. they believe that a large part of the reason Ukraine has been able to fight so long has been the aid)

I'll contend this is either part of an information gap or a very strange interpretation of events. 

Consider the following series of statements: As the Russian army has more mass and equipment than the Baltic states, the Russians can take the Baltics whenever they please. Therefore, it's inevitable that Russia will emerge victorious, and defending the Baltics is pointless.

On paper, this would seem to be roughly accurate, except of course it completely ignores the NATO intervention that would likely happen, NATO troops forward-positioned in the Baltics, and Russia's existing commitments in Ukraine. 

In much the same way, saying that 'Ukraine would have quickly surrendered or suffered a quick defeat' is only correct in counterfactual realities. You could of course argue that if the West had not helped Ukraine structure its military prior to the invasion, if no help of any kind was delivered (even from Eastern Europe) during the invasion, and if Putin were magically granted infinite domestic popularity, the war would've ended quickly. But at that point we are living in a different reality. A reality where Russia actually had the capability for a Desert Storm-esque operation. 

This is, to the best of my knowledge, not even something the realists argued after the initial invasion failed. While prior to the invasion this was the narrative, afterwards this was clearly shown to be false. 

Western aid did not intensify to a meaningful degree prior to the Battle of Kyiv, which was Russia's only hope of a 'quick victory'. While Stingers, NLAWs, and other anti-tank equipment were useful, the West primarily aimed to supply Ukraine for a protracted insurgency, not a conventional war. We did not see deliveries of heavy equipment, and even now we're still waiting on F-16s. 

The results of Western aid have also been mixed. While humanitarian and financial support has allowed the Ukrainian state and economy to continue on life support, we see that much of NATO's doctrine does not apply in Ukraine, as Ukraine doesn't have the air superiority necessary for combined arms operations. Some systems, like air defense, HIMARS, and long-range strike missiles (Storm Shadow, ATACMS), have played a key role, but they neither provided a decisive strategic advantage nor negated one on the part of the Russians (partly because they were delivered in insufficient quantities). You can argue that Ukraine would suffer greatly if they lacked these options, but arguing they would've suffered a quick, decisive defeat runs completely contrary to reality, as they lacked these capabilities prior to the push on Kyiv and survived anyhow. (If you want to argue Russia 'wins' a quick and decisive victory without taking Kyiv or holding most of Ukraine's territory, be my guest, but I think we can both agree that would be ridiculous.)

Overall, if Russia had shown a capability to win (the VKS securing air dominance, Russian logistics sustaining a push deep into the Ukrainian heartland, Russian deployments significantly exceeding Ukraine's mobilization pool) you might have a case that Ukraine would've lost quickly. But anybody who has observed the retreat from Kyiv can understand that Russia simply doesn't have that capability. They are not the U.S. military, and the VKS is not the USAF. They do not have the air superiority necessary for blitzkrieg. This war is primarily an attritional battle, and if Ukraine's effort did not collapse prior to the delivery of NATO aid it's rather contradictory to argue they would collapse immediately after. (Indeed, they performed well in the Kharkiv counteroffensive while aid was still ramping up.) 

I believe this to be part of an information gap: not understanding Russia and Ukraine's true military capabilities. (Understanding them is, of course, a key part of any geopolitical judgement, since otherwise you cannot tell whether a side is on the brink of defeat or victory.) If Vivek was not aware of this gap, then he made an unqualified analysis, and if he was, then his analysis is clearly wrong. 

The realists argue that, regardless of Ukraine's military potential, Ukrainian statehood is not a relevant concern, and Ukraine should be handed over to Russia (likely along with Eastern Europe, to broker an alliance against China). Even this aside, they do not believe Russia has a decisive capability advantage, only an attritional advantage. Thus they can believe 2) and 3), but only assuming the absence of aid. I thus don't believe Vivek is actually arguing for the realist position, but if you believe he is, feel free to find sources. I have not seen any indication of this being the case. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-07T22:01:17.810Z · LW · GW

Yes. This analysis primarily applies to low-information environments (like the lay circuits I participated in). I would not use this on, for example, the national circuit. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-06T23:26:39.417Z · LW · GW

Sort of, but you're missing my main point, which is simply that what Vivek did is not actually dark arts, and that what you are doing is. His arguments, as you summarised them into bullet points, are topical and in good faith. They are at worst erroneous and not an example of bullshitting.

Ah, ok. Allow me a clarification then. 

In typical terms, ultra-BS is lying (as in, you know you are wrong and speak as if you're right anyway). In my view, however, there's also an extension to that. If you are aware that you don't have knowledge on a topic and make wild assertions anyhow to support a narrative (say, if I declared that Kremlin whisperers are considering a coup against Putin), I would also be 'BS-ing'. I'm not lying in the traditional sense, as it's certainly possible I'm correct (however unlikely). But if I clearly don't have the information then I can't act as if I do. Thus I'd consider some 'erroneous' arguments by Vivek to be bullshit, because they display an information gap I have trouble believing he wasn't aware of. 

So, in the interest of clarity. Consider again the points Vivek made: 

  1. Aid doesn't serve American interests
  2. The war effort is doomed
  3. Aid prolongs the war (a peace deal is better)

My assessment of 1) is still the same, although you're right that it's possible Vivek has different politics, so I'm comfortable believing this is merely erroneous rather than bullshit. The same cannot be said for 2) and 3), however. 

To legitimately say that aid doesn't serve American interests requires a qualified assessment. You must have an understanding of American interests and the specific geopolitical situation at hand. That by proxy means an understanding of Ukraine, its geopolitical significance, its battlefield dynamics, and how the outcome of the war may affect the U.S. If you do not understand geopolitics and simply cherrypick arguments, I'd contend that you're still using ultra-BS, because even though your overall point is legitimate, the process you used to defend it is not. 

With knowledge about the specific situation in Ukraine, you cannot reasonably believe 2) and 3). In effect, believing them ignores defense economics, long-run battlefield outcomes, historical precedent, and a variety of other things that are prerequisites for making a proper geopolitical argument. 

Imagine, for example, an anti-war protestor arguing that the U.S. should withdraw from Vietnam because: 

  1. It doesn't serve American interests
  2. The U.S., in pure military terms, is losing the war
  3. Ho Chi Minh was legitimately democratic

I would believe this argument is 'BS', as said protestor clearly doesn't understand the Vietnam War, regardless of whether his geopolitics are correct. He is applying (or more likely, borrowing) analysis he didn't critically think about to a situation he doesn't understand. The U.S. was clearly not losing the war in military terms, as we can observe from casualty figures. Ho Chi Minh's multiple antidemocratic practices (intimidating voters, purging opposition) are likewise ignored. 

Much the same with Vivek. Either he had the necessary information to make a qualified analysis, or he did not. I find it implausible he studied the issue and still had an information gap. Conversely, if he analyzed the situation without first studying it (which I find more likely), that would also be 'BS'.

Is my position more clear now?  

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-06T01:50:31.482Z · LW · GW

Have you given even a moment's thought to what Vivek might say in response to your objections? I get the impression that you haven't, and that you know essentially nothing about the views of the opposing side on this issue.

Well... yes. It's essentially covered by what I went over. In my view at least, Vivek and I have a narrative disagreement, as opposed to a dispute over a single set or series of facts. In any case, I imagine the points of contention would be:

  1. The benefit of Ukraine aid for US foreign policy
  2. The costs imposed on the US 
  3. Moral concerns with more vague ideas like 'supporting democracy'

There are many rebuttals I could foresee him giving, such as poor battlefield outcomes in Ukraine, relatively more pressing domestic concerns at home, or some variation of realist foreign policy values. In any case I find those arguments unconvincing, which I've tried to articulate. 

I could respond to your arguments, but I doubt there's much use in explaining my position on books I haven't read and thinkers I'm not familiar with. I'm still not entirely sure what exactly you're arguing for, only that you believe my argument is wrong. Can you present a coherent narrative independently rather than simply citing people? 

In the interest of moving the discussion forward, let me try to summarize what I feel you've attempted to communicate. 

  1. Continued efforts by the Ukrainian military and state are likely doomed to fail
  2. Aiding Ukraine does not meaningfully diminish the threat to Eastern Europe or Europe in general
  3. Finland's accession to NATO was not a meaningful security dilemma for Russia, but Ukraine's is 
  4. Historically speaking, it would have been better for Great Britain to make peace with Hitler. Appeasement is a viable strategy. 

Is this correct? I am comfortable having a longer discussion if you like, but it's not the focus of this post, only a subpoint. If you'd like to have a debate in private messages I'm open, but otherwise I think I've answered your main question. Yes, I did consider counterarguments and competing narratives. I commonly do so in regular debate. I did not find them convincing. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-05T01:33:54.793Z · LW · GW

Mhm, yes! Of course. 

So, this may seem surprising, but I'd consider Dark Arts to be a negligible part of me being undefeated. At least, in the sense that I could've easily used legitimate arguments and rebuttals instead to the same effect. 

As you might already know, lay judges tend to judge far more based on speaking skill, confidence, body language, and factors other than the actual content of the argument. In that sense being the better debater usually gets you a win, regardless of the content of your argument, since the judge can't follow anything except for the most apparent 'wins' and 'losses' on either side. All else being equal (and in debate, it usually is, since debaters usually steal good arguments until everyone is running similar cases) we should expect the better debater to win. 

So why use the Dark Arts? Well... it may sound a little disappointing, but really, it's just laziness. Neither I nor my partner wanted to go through the trouble of researching a good case. I had college apps, among other things, and he had his own commitments. The ability to BS my way out of an impossible situation thus allowed me to skimp on prep time in favor of arguing on the fly. Did this make me a 'better' debater? Kind of, in the sense that I can do far more with a far weaker case, but then at the same time I'd much rather run a bulletproof case (only, of course, if I didn't have to research it myself). The Dark Arts saved my ass in this situation, since my case was garbage, but if I had known ahead of time that I couldn't use them I'd have just made a good case instead. 

I still think the concept is helpful, which is why I've posted it, but if your goal is to maximize your debate victories rather than time spent prepping, I'd recommend you just do more prep and speaking drills. It tends to pay off. The Dark Arts is not your first choice for consistent, high level victories. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-05T01:22:49.185Z · LW · GW

Hm? Is it? Feel free to correct me if I'm wrong, but in my experience flow judges (who tend to be debaters) tend to grade more on the quality of the arguments as opposed to the quality of the evidence. If a good argument meets a sound rebuttal it doesn't score, but if a bad argument goes unrebutted it's still points in your favor. 

Is it different in college? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-03T00:31:38.722Z · LW · GW

Mhm, yes

I think society has a long way to go before we reach workable consensus on important issues again. 

That said, while I don't have an eye on solutions, I do believe I can elaborate a bit on what caused the problem, in ways I don't usually see discussed in public discourse. But that's a separate topic for a separate post, in my view. I'm completely open to continuing this conversation within private messages if you like though. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-01T23:49:48.057Z · LW · GW

Thanks for reading!

After reading this and your dialogue with Isusr, it seems that Dark Arts arguments are logically consistent and that the most effective way to rebut them is not to challenge them directly in the issue.

Not quite. As I point out with my example of 'ultra-BS', much of the Dark Arts as we see in politics is easily rebuttable with specific evidence. It's simply not time-efficient in most formats. 

jimmy and madasario in the comments asked for a way to detect stupid arguments. My current answer to that is “take the argument to its logical conclusion, check whether the argument’s conclusion accurately predicts reality, and if it doesn’t, it’s probably wrong”

Mhm, yes. I think this is a helpful heuristic. I thought of it, but neglected to mention it. Thank you for the addition! I think people will find it helpful. 

(though, I must caution, many people have rather misinformed models of how the world works, so this may or may not be helpful depending on who specifically is using this heuristic) 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-01T21:21:14.443Z · LW · GW

Thanks for the update! I think this is probably something important to take into consideration when evaluating ASI arguments. 

That said, I think we're starting to stray from the original topic of the Dark Arts, as we're focusing more on ASI specifically rather than the Dark Arts element of it. In the interest of maintaining discussion focus on this post, would you agree to continuing AGI discussion in private messages? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-01T02:29:33.972Z · LW · GW

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."

Glad you appreciated my analysis!

The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me.

Hm... I think we may have miscommunicated somewhere. From what I understand at least, what you saw was distinctly not 'ultra-BS' as I envision it. 

In persuasion, students of rhetoric generally distinguish two persuasive styles: the 'central' and 'peripheral' routes. Whereas central route persuasion focuses more on overt appeals to logic, peripheral route persuasion focuses more on other factors. Consider, for instance, the difference between an advertisement extolling the nutritional benefits of a drink, as opposed to an ad for the same company showing a half-naked girl sampling it. Both aim to 'convince' the consumer to buy the product, except one employs a much different strategy than the other. 

More generally, central route persuasion is explicit. We want to convince you of 'X'; here are the arguments for 'X'. The drink is nutritious and good for your health, so you should Buy the Drink. Peripheral route persuasion is more implicit, though at times it's hardly subtle. This pretty and sexually appealing girl loves this drink, why don't you? Doesn't evolution make you predisposed to trust pretty people? Wouldn't you want to be more like them? Buy the drink. 

I consider ultra-BS a primarily 'central route' argument, as the practitioner uses explicit reasoning to support explicit narrative claims. It's often ill-intentioned, sure, and clearly motivated, intellectually dishonest reasoning, but that's beside the point. It still falls under the category of 'central route' arguments. 

Putting someone off balance, on the other hand, is more 'peripheral route' persuasion. There's far more emphasis on the implicit messaging. You don't know what you're doing, do you? Trust me instead, come on.

In the case of your atheist friend, it's not really possible to tell what persuasion technique he used, because it wasn't really clear. But the indicators you received were accurate, because under those conditions he would be incentivized to use dishonest techniques like ultra-BS. That's not to say, however, that he did use ultra-BS!

In that sense, I think I might conclude that your implicit primers and vibes are very good at detecting implicit persuasion, which typically but not always correlates with dark artsy techniques. The Dark Arts often rely on implicit messaging, because if the message were explicit (as with sexual advertising techniques) it would be, well... ridiculous. ('So I should buy your product just because one pretty person drank it? What kind of logic is that?') 

However, 'ultra-BS' is an explicit technique, which is why I believe your typical indicators failed. You saw the indicators you're used to associating with 'honest discussion': evidence, a coherent narrative, and good presentation skills. In an interpersonal setting, these indicators likely would've been sufficient. Not so in politics. 

That said...

If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?

This is a bit hard, since 'ultra-BS' is a technique designed for the environment of politics by a special kind of dishonest people. Regular people tend to be intellectually honest. You won't see them support a policy one moment and oppose it the same evening. You also don't see them wielding more sophisticated evidence and proofs in daily discussion, which is why we see 'ultra-BS' far less often in everyday life. If someone is pulling out evidence at all, chances are they've already 'won' the argument. Regular people also tend to have far less stake/interest in their political positions, unlike, say, debaters or politicians. The incentives and structure of the format are different.

The most similar example I can think of off the top of my head is a spat between domestic partners. Say, Alice and Bob. 

Alice: You never take out the trash (evidence), look after the kids (evidence), or say you care about me (evidence). And now you've forgotten about our anniversary? (evidence) How dare you?? Do you really care about me? (narrative: Bob doesn't care about Alice) 

But then, this isn't a perfect fit for ultra-BS, since 1) Alice isn't necessarily aware she's overgeneralizing, 2) Alice doesn't care about the specific examples she uses; she's just as likely responding to a 'vibe' of laziness or lack of care from her partner, and 3) the evidence is, well... not very sophisticated. 

But in general, I guess it's similar in that Alice is supporting a dubious narrative with credible evidence (a pretty general summary of 'ultra-BS'). Sure, Bob did do all these things, and probably cares for Alice in other ways which she isn't acknowledging (or, who knows, maybe he really doesn't care about Alice).  

Is this example satisfying? 

Thanks for the response in any case, I really enjoy these discussions! Would you like to do a dialogue sometime? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2024-01-01T01:43:09.244Z · LW · GW

So I think what you are saying is an ultra-BS argument is one that you know is obviously wrong.

Yep, pretty much. Part of the technique is knowing the ins and outs of your own argument. As I use ultra-BS prominently in debate, I need to be able to rebut the argument when I'm inevitably forced to argue the other side. I thus draw the distinction between ultra-BS and speculation along these lines. If it's not obviously wrong (to me, anyways) it's speculation. I can thus say that extended Chinese real economic stagnation for the next 10 years is educated speculation, while imminent Chinese economic collapse is ultra-BS. 

If you don't know, you cannot justify a policy of preemptive nuclear war over AI.  That's kinda my point.  I'm not even trying to say, object level, whether or not ASI actually will be a threat humans need to be willing to go to nuclear war over.  I am saying the evidence right now does not support that conclusion.  (it doesn't support the conclusion that ASI is safe either, but it doesn't justify the most extreme policy action)

So, this is where I withdraw into acknowledging my limits. I don't believe I have read sufficient ASI literature to fully understand this point, so I'm not too comfortable offering any object level predictions or narrative assessments. I can agree that many ASI arguments follow the same narrative format as ultra-BS, and there are likely many bad ASI arguments which can be revealed as wrong through careful (or even cursory) research. However, I'm not sufficiently educated on the subject to actually evaluate the narrative, thus the unsatisfactory response of 'I'm not sure, sorry'. 

However, if your understanding of ASI is correct, and there indeed is insufficient provable evidence, then yes, I can agree ASI policies cannot be argued for with provable evidence. Note again, however, that this would essentially be me taking your word for everything, which I'm not comfortable doing. 

Currently, my priors on ASI ruin are limited, and I'll likely need to do more specific research on the topic. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2024-01-01T01:32:58.373Z · LW · GW

Finding reliable sources is 99% of the battle, and I have yet to find one which would for sure pass the "too good to check" situation: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Completely fair. Maybe I should share a few then? 

I find Money & Macro (an economics YouTuber with a Ph.D. in the field) to be a highly reliable source capable of informed and nuanced reporting. Here is, for instance, his take on the Argentine dollarization plan, which I found much more comprehensive than most media sources. 

Argentina's Radical Plan to End Inflation, Explained - YouTube

In terms of Ukraine reporting, I rely pretty heavily on Perun, who likewise provides very informative takes with high emphasis on research and prevalent defense theories. 

All Bling, no Basics - Why Ukraine has embarrassed the Russian Military (youtube.com)

See here, for instance, his initial reaction to the invasion, and his predictions of many of the war's dynamics (acute manpower shortages on the part of Russia, effects of graft and corruption, a close match of capabilities, and a tendency to devolve towards a longer war). 

I consider these sources highly reliable, based on their ability to make concrete, verifiable predictions, steer clear of political biases, and provide coherent worldview models. Would you like to check them out and provide your thoughts? 

You explained that sunk cost fallacy pushed you for this example, but it's still not too late to add a different example, put this one into Google doc and make it optional reading and note your edit. People may read this in the future, and no reason not to ease the concept for them!

Maybe a good idea. It depends on whether I can muster the energy for a separate edit, and if I can find a good relevant example. Do you have any suggestions in that regard? I know that unless I stumble across something very good I'm unlikely to make an edit. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-31T04:22:45.210Z · LW · GW

Right, about this. So the overall point of the Ramaswamy example was to illustrate how subject specific knowledge is helpful in formulating a rebuttal and distinguishing between bullshit and non-bullshit claims. 

See, for example, this comment:

This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn't really mean a country isn't about to give up. Maybe it's the reason they are about to give up; there's always a line, and whos to say it isn't in the hundreds of thousands? Zelensky having popular support does seem to support your point, and I could go check primary sources on that, but even if I did your point about "selecting the right facts and omitting others" still stands, and there's no easy way to find out if you're full of shit here or not.

Yes, that's the whole point. I didn't think it was a problem before, but now... well...

I think I'm starting to realize the dilemma I'm in. I aimed to explain something in full object-level terms so I could properly explain why subject matter knowledge helps discern between a true and a false claim... but actually discerning what's true and what's false requires subject matter knowledge I can't properly distill in the span of a few thousand words. Catch-22, oops. 

I could bring out the factual evidence and analyze it if you like, but I don't think that was your intention. In any case, feedback appreciated! Yes, this was definitely an issue, I'll take more care in future examples. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-31T04:12:59.093Z · LW · GW

Very nice! Now... here's the catch. Some of my arguments relied on dark arts techniques. Others very much didn't. I can support a generally valid claim with an invalid or weak argument. I can do the same with an obviously invalid claim. Can you tell me what specifically I did? No status points for partially correct answers!

Now, regarding learned helplessness. Yes, it's similar, though I'd add an important caveat. I consider discerning reliable sources and trusting them to be a rational decision, so I wouldn't go as far as calling the whole ordeal of finding what is true a lost cause. But in general I'm taking a similar position to Scott's. 

edit: oops, my bad, this was meant to be a response to the comment above; I saw this pop up in the message feed without context

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-12-31T04:02:47.963Z · LW · GW

Understood. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-31T04:02:08.271Z · LW · GW

Thanks for reading!

Understood. I think this is a consensus among many of the comments, so probably something I should work on. I've broadened things to be a bit too general, and the result was that I couldn't bring out much in the way of specific insights, since at a bigger, more general level much of this is obvious. 

I should probably make follow up posts addressing nerd sniping and other aspects, it would likely be more helpful. Staying within the realm of learned experiences is probably also a good call. 

In any case, thanks for the feedback! I'll do my best to act on it in subsequent posts. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-31T03:51:03.799Z · LW · GW

Thanks for your comment! 

Hm... right. Yes, I focused a lot on combating the Dark Arts, but not as much on identification. Probably worthy of its own post. But my schedule is packed. We'll see if I get to it. 

Regarding defense tools, I'm a little mixed. I think traditional defenses like (relatively) trustworthy institutions, basic fact checks, and common sense are still quite viable, but at the end of the day even something as powerful as current day GPT is hardly a substitute for genuine research. A first line of defense and heuristics are good, but imo there has to be some focus on understanding the subject matter if we do want to send the Dark Artisan packing. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-12-31T03:44:59.723Z · LW · GW

Hm? I'm unsure if I presented my point correctly, but my intent was to show that aid in general tends not to resolve the problems causing poverty, irrespective of cost/benefit. I think I brought this up in another comment, comparing it to painkillers. If your leg is broken a painkiller will probably help, cost-effective or not. But your leg is still broken at the end of the day, and the painkiller doesn't actually 'solve' the problem in the same way surgery and a splint would. 

Do you take issue with this? 

On that note, I do believe many EA charities (GiveDirectly especially) seem more effective than many traditional interventions (notably, giving corrupt governments money and telling them to spend it on the people rather than the army). My stance on aid is still roughly the same regardless, though. Effective or not, it fails to resolve the root issue. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T03:38:31.750Z · LW · GW

Oooh, I think I can classify some of this! 

A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning down the moves he was doing, but I could tell I felt a kind of pressure, like I was being socially & logically pulled into a boxing ring. I realized after a few beats that he must have interpreted what I was saying as an assertion that God (as he thought others thought of God) is real. I still don't know what rhetorical tricks he was doing, and I doubt any of them were conscious on his part, but I could tell that something screwy was going on because of the way interacting with him became tense and how others around us got uneasy and shifted how they were conversing. (Some wanted to engage & help the logic, some wanted to change the subject.)

So, about this: I think this is a typical case of status-game-esque 'social cognition'. If membership in a certain group is a big part of your identity, the group can't be wrong. (Imagine you're a devout churchgoer, and someone suggests your priest may be one of many pedophiles.) There's an instinctive reaction of 'well, church is a big part of my life, and makes me feel like a full, happy person, very good vibes... unlike pedophilia', so they snap to defending their local priest. You may see the 'happens in other places but not here' defense. Social cognition isn't conclusive proof that dark arts happened, but it usually is a good indicator (since by nature it tends to be irrational). In this case it's an atheist who bases status on being an atheist feeling their personal beliefs/worth are being attacked, and responding as a result. I'd read up on Will Storr's The Status Game if you're interested. 

Another example: Around a week ago I bumped into a strange character who runs a strange bookstore. A type of strange that I see as being common between Vassar and Ziz and Crowley, if that gives you a flavor. He was clearly on his way out the door, but as he headed out he directed some of his… attention-stuff… at me. I'm still not sure what exactly he was doing. On the surface it looked normal: he handed me a pamphlet with some of the info about their new brick-and-mortar store, along with their online store's details. But there was something he was doing that was obviously about… keeping me off-balance. I think it was a general social thing he does: I watched him do it with the young man who was clearly a friend to him and who was tending the store. A part of me was fascinated. But another part of me was throwing up alarm bells. It felt like some kind of unknown frame manipulation. I couldn't point at exactly how I was being affected, but I knew that I was, because my inner feet felt less firmly on inner ground in a way that was some kind of strategic.

I think I can understand in general terms what might've happened. There are a lot of ways to 'suggest' something without verbally saying it. Think of an advertisement featuring a pretty girl with the product (look at you, so fat and ugly, don't you want to be more like us?). It's not explicit, of course, that's the point, but it's meant to take the peripheral rather than the central route of persuasion. 

For a more 'human' example, I might think of a negotiator seating their rival in front of the curtains while the sun is shining through to disorient them, or a parent asking one sibling to do something after having just yelled at another. In all cases there's a hidden message of sorts, which can at times be difficult to put into words but is usually felt as a vibe. I have difficulty describing it myself. 

I think one I can describe might be the sandwich example (though this isn't something I've seen in my own life). You have something important to talk about with someone, and they're maintaining eye contact and 'paying attention', but they're also nibbling on a sandwich and enjoying themselves (indirect communication: this is not too big of an issue). Or maybe they put the sandwich down and occasionally check their watch and their half-eaten sandwich (why are you making me wait? can't you see I'm hungry and busy?). 

I obviously can't say what exactly they did. But I think vibe wise the effect was similar to some of the techniques I illustrated above. They did something, it wasn't apparent what, for a desired effect. I'll call it peripheral techniques of communication (as opposed to central). 

I think the preacher example is similar. (implicit message: I'm attacking you, your tribal groups, your status, and offering you some free status right now for beating me in front of your friends. Why don't you come give it a try?) What specific technique they used, I'm not sure, but I think it had the effect of communicating an implicit message (thus the reaction). 

And yes, you're right, none of these are 'ultra-BS', I consider them different techniques with a different purpose. I do think they are techniques though, and someone familiar with them can recognize them. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T03:15:00.356Z · LW · GW

Of course. Glad you enjoyed! 

I think that part of it is probably you not having much experience with debate or debate-adjacent fields (quite understandable, given how toxic it's become). It took me some lived experience to recognize it, after all. 

If you want to see it at work, I recommend just tuning into any politician during a debate. I think you'll start recognizing the stuff pretty quickly. I wish you happy hunting in any case. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T02:37:35.059Z · LW · GW

Interesting question!

So, I think the difference is that ASI is ultimately far more difficult to prove on either side. However, the general framework maps pretty similarly. 

Allow me to compare ASI with another X-risk scenario we're familiar with, Cold War MAD. The general argument goes:

  (1) USSR improves and builds thousands of non-hypersonic nuclear-tipped missiles (this did actually happen)

  (2) USSR decides to risk nuclear annihilation by killing all its rivals on the planet

  (3) due to miscalculations, perceived nuclear attack, and/or security threats, USSR gains:

      (a) credible (or whatever passes for credible in that paranoid era) evidence they're getting nuked

  (4) at a certain point, Russia determines that today is the day to launch the nukes, and everyone dies

What's the difference between this and hypersonics, or ASI? Ultimately, even if Washington and Moscow had sat down and tried to give an accurate assessment of P(Armageddon), I doubt they'd have succeeded in producing an accurate estimate. The narrative is difficult to prove or disprove; all we know is that we came close (see the Cuban Missile Crisis) but it never actually happened. 

The issue for hypersonics isn't the framework, it's that the narrative itself fails to stand up to scrutiny (see my explanation). We know for a fact that those probabilities are extraordinarily low. NATO doesn't leave coordinates of nuclear launch sites lying around! Governments take nuclear threats very seriously! Unlike in the Cold War, I'd consider this narrative easily disprovable. 

I have flagrantly disregarded relevant evidence suggesting that point 3 doesn't happen. 

With ASI we're more or less completely in the dark. You can't really verify whether a point is 'obviously not going to happen', to the best of my understanding. Sure, you can say 'probably' or 'probably not', but you'd have to be the judge of that. There is less empirical evidence (that you presented, anyway) regarding whether ASI risk is legitimate or not. 

Is there an argument suggesting that ASI X risk is highly unlikely? I think it probably does exist, but then there may be rebuttals to that. Without full context it's difficult to judge. 

That said, this only applies to the ASI argument as you presented it. I'm sure my assessment would vary based on who presents the argument, how it's presented, and what evidence is cited. But to the best of my understanding, your ASI argument as presented is unprovable on both sides. I could call it ultra-BS, but I think speculation is just as accurate a descriptor. To make it more than that you'll need to cite evidence and address counterarguments; that's what distinguishes a good theory from BS and speculation. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T02:05:58.472Z · LW · GW

Hello, and thank you for the comment! 

So, regarding policy discussions and public discourse, I think you can roughly group the discussion pools into two categories: public and expert-level discussions.

While the experts certainly aren't perfect, I'd contend that in general you find much greater consensus on higher-level issues. There may be, for example, disputes on when climate change becomes irreversible, to what extent humans would be impacted, or how to best go about solving the larger problem. But you will rarely (if ever) find a climate scientist claiming climate change is a hoax engineered by the government. In this regard, I don't think evidence standards are the issue; it's more about communication to the general public, and being able to garner credibility.

Public discourse, on the contrary, is basically just chaos, partly because the 'thinkers' in the public space (think media pundits, YouTubers, Twitter warriors) tend to be motivated reasoners selling sensationalist nonsense, and partly because public discourse doesn't sanction you for nonsense. (You can ban the trolls and the bots, but they still keep on coming, and of course there's no punishment for non-experts making wild claims.)

In this regard I'm also feeling a bit helpless. I know it sounds rather bad, but in my personal opinion accepting the expert consensus tends to be the generally favorable strategy for the public. Mechanisms like watchdogs, whistleblowers, and vetting for experts are good for getting the public to trust expert consensus, but I think by and large you can't really expect public discourse to reach better conclusions with consistency, not independently anyway. 

There is no 'unified' public forum for argument. Rather, there's millions of private and semi-public spaces, forums and subreddits, varying echo chambers, etc. I'm still uncertain if I've ever found a truly genuine public space, as opposed to a larger subcommunity holding certain viewpoints. Trying to control it all seems to be an exercise in futility. 

If you are just trying to create a place where discussion can happen, however, I think it's far easier. To the best of your ability, adopt stringent evidence standards, and try to ensure all parties involved are acting in good faith. (Or, that not being possible, always assume good faith, and punish bad faith harshly.) 

Granted, these are just a few examples off the top of my head, and they probably aren't the best. (I'm a bit stumped on the issue myself, it feels exhausting). Do you have any ideas? I'd love to hear them. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T01:50:13.556Z · LW · GW

Hm... I'm not too sure how much I agree with this; can you raise an example of what you mean? 

In my experience, while uncritical reading of evidence rarely produces a result (uncritical reading in general rarely does), close examination of the facts usually leads to support for certain broad narratives. I might bring up flat earth and vaccines as examples. While some people don't believe in the consensus, I think they are the exception that proves the rule. By and large people are able to understand basic scientific evidence, or so I think. 

Do you believe otherwise? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T01:44:52.906Z · LW · GW

Thanks for the addition! I actually didn't consider this, and neither did my opponents. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T01:44:05.280Z · LW · GW

Glad you enjoyed!

So, I know this sounds like a bit of a cop-out, but hear me out. The better debater usually wins the debate, irrespective of techniques. 

There's a lot that goes into a debate. There's how well you synergize with your partner, how confident you sound, how much research you've prepared, how strong of a writer you are... etc. There are times where a good constructive speech can end the debate before your opponent even starts talking, and other times where adamant refusal to accept the facts can convince the judge you're right. There's also sheer dumb luck. (Did the judge pay attention?)

I think of it as a lot like poker, in that regard. Ultra-BS is one of many techniques you'd use, like a poker face. It's not a silver bullet or a free win though (as powerful as it is). Some of our rounds were very close. 

If two people both have a poker face, who wins? 

Well... I can't say for sure, but I'd conclude neither side has an advantage over the other. (unless, of course, one person knows the technique better!) 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T01:38:26.695Z · LW · GW

Yep! It's very similar. The weakness it exploits (lack of time to properly formulate a response) is the same, but the main difference is that your avenue of attack is a believable narrative rather than multiple pieces of likely false information the judge can't understand either. (It's why I prefer ultra-BS to a flood of regular BS.) 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-31T00:59:19.393Z · LW · GW

Mhm? Right, in my personal opinion I don't consider kritiks/theory to be ultra-BS. This is mainly because ultra-BS is intuitive narrative framing, and usually not too complicated (the idea is to sound right, and avoid the trouble of actually having to explain yourself properly). Kritiks/theory are the opposite, if that makes sense. They're highly technical arguments that don't make sense outside of debate-specific settings, which most lay judges simply won't understand. In my experience it's almost never a good idea to run them unless you're with a tech or a flow judge (and even then a good chunk of flow judges don't like them either). 

That said, yes, judges do often vote for horrible arguments, or for whoever speaks better, irrespective of argument content, so I'd be careful labeling something 'ultra-BS'. Sometimes a bad judge is a bad judge; there's nothing you can do there. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-26T23:09:24.332Z · LW · GW

Mhm! Unsure if you saw, but I made a post.

Defense Against The Dark Arts: An Introduction — LessWrong

Could I have your thoughts on this? 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-26T03:31:19.982Z · LW · GW

Hm... right. I think your critiques are pretty on point in that regard. I may have diluted focus too much and sacrificed insight for a broad overview. Focus on a more specific technique is probably better. 

I have a few ideas in mind, but I thought I'd get your opinion first. Do you think there's any part of this post that warrants more detailed explanation/exploration with greater focus? 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-26T00:04:02.148Z · LW · GW

The glib answer to how to avoid falling victim to the Dark Arts is to just be right, and not let counterarguments change your mind. Occlumency, if you like.

Well, yes, but I'm unsure how helpful this is. Part of the intention behind my post was to distill what I viewed as potentially useful advice. Do you have any? If not, that's fine, but then I'm unsure how valuable it is for the readership. 

One problem is the bullshit asymmetry principle, which you describe but don't call by name: rebutting narratives through analyses of individual claims is infeasibly expensive. But far worse is answering the wrong question, letting the enemy choose the battlefield. Sticking with the war in Ukraine for an example, it'd be like answering the question of why Russia would blow up its own pipeline (Is Putin stupid? Is it like Cortés burning his ships? Is it the Wagner Group trying to undermine Putin?) instead of saying, "Wtf? No, it's obviously the US." 

I think I can take issue with this logic. Ukraine can benefit from German economic ties with Russia being severed. Russia can benefit from Germany's hydrocarbon supplies being depleted further (part of Russian strategy was restricting gas exports to drive up energy prices), and of course the US benefits from there being fewer Russian trade flows. Analysis of the relevant actors would likely lead to convergence on a more informed judgement. 

By your logic, wouldn't I find myself drawing outlandish conclusions? Why would the Ukraine invasion ever happen? Why would Russia compromise its geopolitical position and encourage Finland to join the alliance, European remilitarization, and increased reliance on America for security partnerships? Is Putin stupid? Is it like Cortés burning his ships? Is it the Wagner Group trying to undermine Putin? Wtf, no!! It's obviously the US. The White House actively encouraged the invasion of Ukraine! Just look at what their German minions did with Ostpolitik! 

I could go on, but I don't think I need to. Saying that 'X is obviously true' in the absence of compelling evidence, while refusing to analyze the requisite evidence, seems like weak intellectual work at best. It tends to result in conspiracy theories. Of course I can't have complete confidence in my story, but I can claim to have done the proper analytical work and arrived at the most reasonable conclusion.

To clarify my point about the Snake Island massacre: yeah, I think the audio was legit too. No, I believe the Ukraine government knew they were alive (or at least had good reason to think so), and pretended otherwise for propaganda reasons. Can I prove this? No, I don't in fact have access to high-level military intelligence. This is the trap I'm warning against! Getting bogged down trying to ascertain exactly what the Ukrainian military knew and when they knew it is missing the point, which is whether or not they're incentivized to deceive you, and so whether you should trust anything they say, one way or the other.

Isn't this just a false dichotomy? Either we can trust them or we cannot. I find this misleading. Suppose a salesman is trying to sell me a particular canned food product. He may explain that it's nutritious, affordable, and has significant dietary effects that might make me popular with the ladies (don't you want lean muscles?). I know he is a motivated reasoner, but that's not to say I can't glean useful information or distinguish between 'likely true' and 'likely false' statements. I can reason that the salesman is being honest about the price (because he's making the sale), regardless of whether that price is worth it. I can reason that the nutritional values listed on the can are probably accurate (as otherwise the FDA would come down on his head). I can conclude that there are likely some dietary effects, but their extent would depend on research I would want to do myself, rather than taking his word for it. There are probably legitimately good things about the product, regardless of the salesman's presence. 

In much the same way, we can trust some things the Ukrainian government says, and their reports usually provide useful information about what's happening in the war, even though we would be idiots to trust them completely. They are a 'noisy' version of reality we need to filter, not reality itself. That doesn't mean they aren't useful as an information source, however, or that we should automatically assume them to be liars. 

The same goes for your ad hoc determinations of which states are "legitimate," based on considerations of "international law," your personal moral views regarding "democracy," and expedients of maintaining US hegemony. You're answering the wrong question. Happily, in this case, I've figured out the correct answer: there is no such thing as a morally legitimate state. 

So... Russia has no right to exist, Ukraine has no right to exist, the US has no right to exist, all of this is pointless? I'm not too sure what you're implying here, but I'm unsure if I like the direction of this conversation either. I think I'll stop here; I don't imagine further discourse will be helpful. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-25T23:22:31.068Z · LW · GW

Thanks so much for your feedback!

Hm... right. I did get feedback warning that the Ramaswamy example was quite distracting (my beta reader recommended flat-eartherism or anti-vaxxing instead). In hindsight it may have been a better choice, but I'm not too familiar with geology or medicine, so I didn't think I could do the proper rebuttal justice. The example was meant to show how proper understanding of a subject can act as a very strong rebuttal against intuitive bullshit, but I think I may not have succeeded in making that point. This was a case of the sunk cost fallacy at work: I had already written a good part, and I opted not to get rid of it. 

The 2nd points harks on something valid which also irks me, but I think Scott beat you to the punch.

Oh? I hadn't seen this article before; thank you for linking it. 

Even that given though, I don't think any of these things as given are particularly potent defences against the dark arts as put - either in debates or in life. 

Persuasive bullshit stands because it is intuitive. Explanation can help preserve that intuitiveness in the face of conflicting arguments. Effective persuasive bullshit is that which requires more work to be rendered unintuitive than to be said. 

I think effective defence against the dark arts in debates as opponents is learning how to mitigate back to the null hypothesis of uncertainty as efficiently as possible. 

I think effective defence against the dark arts in life in general in my experience usually comes down to recognising motivated beliefs and the attached rhetorical frames used for hypothesis privilege. What distinguishes the dark arts from just saying things is the aspect of deceit. 

Hm... I'm not sure I understand what you mean. Would you mind illustrating with a few examples of more 'potent defenses', as you see them? I'm always open to having more tools in my toolbox. The methods I presented in my post are just some heuristics that work for me, not an exhaustive list. I would be grateful if you could provide me some. 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-25T22:16:56.702Z · LW · GW

Hello, and thanks for the comment!

Hm... yes, the central narrative is always hard to rebut. But since no argument exists independently of the facts, I thought I would focus on verification of factual information. I found the methods I used helpful in that regard. I'm sorry they didn't work for you, but then, I'm not claiming that they would work for everyone in all situations. These are the methods I personally found helpful. The algorithmic solution (ie: actually learning about the topic yourself) has been what I consider the only reliable defense. Even if you turn out to be wrong, you have still taken all the information into account and arrived at an actual conclusion that is your own. 

I might use the example of traditional Chinese medicine. As compared to Western medicine, both have very long histories and (at least internally) coherent understandings of the human body and what treats diseases. As an outsider looking in, divorced from the real world, you would likely not be able to tell which side is 'right'. But at the end of the day, we can observe that much of Western medicine does actually work and many of the traditional treatments are bogus (or can otherwise be explained scientifically by the placebo effect). I do believe rebutting narratives through analysis of individual facts is possible, because it has happened before. (Not to say the method is perfect: 1000 years ago, if you were born in China, you might reach the conclusion that medicine has an overall weak correlation with health outcomes.) 

So now I'm curious. What's your model for defense against the dark arts? When I try to rebut a central narrative, I usually go to the specific facts pertaining to that narrative. If the debate is student loans, I'll likely have to explore the Bennett hypothesis. If the debate is Ukraine, I'll likely have to review general and military history. The truthful narrative, in my view, does not exist in the abstract, but rather as chains of evidence that combine into a coherent model. Do you see it differently? 

Now, on this point more specifically. 

I believed the lie of the Snake Island massacre.

Are you sure this was a lie? To the best of my understanding, the audio was legitimate; the soldiers really did say that to the Russians. When the Ukrainian military lost contact with them afterward, they assumed the worst (as militaries typically do), and the soldiers were presumed dead. It turns out they were captured alive. The Ukrainian government has not denied that fact. Instead, they dispute the circumstances of how the soldiers ended up in Russian custody (ie: did they surrender or were they captured after a fight?). 

The fog of war is a real phenomenon. Some information is true, some is not, but not all false information is spread with the deliberate intention of misleading. Does either government publish accurate casualty figures? Probably not. Are they incentivized to publish stories of heroics? Yes. Do they claim that individuals who are verifiably alive are in fact dead? Not to the best of my knowledge. 

I feel your narrative is flawed, partly because this piece of evidence is flawed. If someone wants to make up their mind on whether or not your narrative is accurate, looking through the evidence is likely helpful. If you want to point out the flaws in my argument, feel free to do so. It will likely encourage a more accurate picture of events. 

That said... 

The temptation is hard to resist, so here's just one hole in the argument presented: the island of Taiwan is as much a part of China as Crimea is of Ukraine. No principled stance about the "rules-based international order" would let you side with both Ukraine and Taiwan.

So, I think I can defend this using the principles of legitimacy. Consider the frozen conflict between North and South Korea. Both sides claim to be the sole legitimate government of Korea. So who's legitimate? The UN seems to consider them both independent states. I don't find this at odds with a stance about the rules-based international order. (Which, by the way, I argued was in US interests; I never argued about its morality.)  

However, there is the moral dimension, which in my view is more important than the legal one. South Korea is a vibrant democracy. North Korea is a floundering dictatorship. I consider the South more morally legitimate, independent of the international stance on the matter. In much the same way, I consider Taiwan's government to have much greater moral legitimacy to rule Taiwan because of its democratic mandate. (I would support Hong Kong autonomy on the same principle.) Taiwan has never formally declared independence, but clearly the CCP does not have control over it. I thus feel justified in viewing it as a state in the middle of a frozen conflict. 

So, what about Crimea? There is a war going on, so there is conflict, but Russian forces are occupying and administering the region. Legally, however, few countries recognize the annexation. Then we have to consider the referendum... I think the facts speak for themselves here. The vote was held under military supervision. Basic legal procedures were ignored. Russia's 'little green men' took over governmental buildings before a vote was held, not after. I don't believe there's any basis for calling such a 'vote' legitimate. 

So where does that leave us? There are many points in the international order regarding the legitimacy of states that we might consider 'awkward', for instance with the Koreas and Taiwan. I contend that while both Koreas have legal legitimacy, only the South has moral legitimacy. Russian forces in Crimea have neither moral nor legal legitimacy. Thus, I can support South Korea, Taiwan, and Ukraine at the same time as morally legitimate states. Does this argument satisfy your requirements? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-25T19:16:17.496Z · LW · GW

I'm glad you enjoyed it!

In particular, it highlights a gap in my way of reasoning. I notice that even after you give examples, the category of "ultra-BS" doesn't really gel for me. I think I use a more vague indicator for this, like emotional tone plus general caution when someone is trying to persuade me of something.

Hm... this is interesting. I'm not too sure I understand what you mean though. Do you mind providing examples of what categories and indicators you use? 

I think I'm missing something obvious, or I'm missing some information. Why is this clearly ridiculous?

Right, so, I think I may have omitted some relevant context here. In public forum debate, one of the primary ways to win is to 'terminally outweigh on impacts', ie: proving that a certain policy action prevents catastrophe. The 'impact' of preventing said catastrophe is so big that it negates all of your opponent's arguments, even if they are completely legitimate. Think of it as an appeal to X-risk. The flip side is that our X-risk arguments tend to be highly unsophisticated and overall quite unlikely. 

Consider this part:

If we're destroyed by a first strike, there is no MAD, and giving Russia the Arctic would immediately be an existential threat. 

The unspoken but implicit argument is that Russia doesn't need a reason to nuke us. If we give them the Arctic, there's no question: we will get nuked (or at least, Russia is crazy enough to consider a full-on nuclear attack, international fallout and nuclear winter be damned). This was actually what my opponents argued in response: my point relied on too many ridiculous assumptions. (A common and valid rebuttal of X-risk arguments in debate.)

Then there's the factual rebuttal. I did a cursory overview of it, but I never fully elaborated. The idea is that multiple things prevent a successful nuclear first strike. First, and most obviously, is the U.S. nuclear triad: we have a land (ICBM silos), sea (nuclear submarines), and air (bomber aircraft from supercarriers) deterrent against nuclear attacks. For a successful nuclear first strike, Russia must locate all of our military assets (plus likely those of our NATO allies as well) and take them all out at once, all while the CIA somehow never gets wind of the plan. It requires that Russia essentially be handed the coordinates of every single US nuke, and that they have the necessary delivery systems to destroy them all (good luck trying to reach an underwater sub, or an aircraft that's currently flying). It also requires the biggest intelligence failure in world history. 

Could it happen? Maybe? But the chance is so small I'd rather bet on an asteroid destroying the Earth within the next hour. In any case, the plan wouldn't rely on hypersonics; it'd rely on all American civilian and military leaders simultaneously developing Alzheimer's. It'd also require the same to happen on the Russian side, since Russian nuclear doctrine is staunchly against the use of nuclear weapons unless Russia's own nuclear capabilities are threatened or the Russian state is facing an existential threat (like, say, imminent nuclear Armageddon). 
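To make the "too many independent things must all go right" point concrete, here's a toy back-of-the-envelope sketch. Every number below is invented purely for illustration (they're assumptions, not estimates of real capabilities); the point is just that a first strike requires several roughly independent conditions to hold at once, so the joint probability collapses multiplicatively.

```python
# Toy sketch only: each probability below is an invented placeholder,
# not an estimate of real-world capabilities.
requirements = {
    "locate and destroy every ICBM silo": 0.2,
    "find and sink every ballistic-missile submarine": 0.05,
    "catch every nuclear-capable aircraft on the ground": 0.2,
    "no US/NATO intelligence warning beforehand": 0.1,
}

joint_probability = 1.0
for requirement, probability in requirements.items():
    joint_probability *= probability  # independent conditions multiply

print(f"Joint probability of a 'clean' first strike: {joint_probability:.4f}")
# -> 0.0002, i.e. even generous per-step odds leave a vanishing overall chance
```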

For anyone who has studied the subject, this is rather basic knowledge, but most judges (and debaters as well) don't enter the room having already studied nuclear doctrine. Reactions like yours are part of what I was counting on when making the argument. It works because in general I can count on people not having prior knowledge. (Don't worry, you're not alone.) Thus, I can win by 'outnerding' them with my peculiar love for strange subjects. 

However, the argument isn't just ridiculous for anybody with knowledge of US/Russian nuclear doctrine. It also seems rather incongruous with most people's model of the world (my debate partner stared at me as I made the argument; his expression was priceless). Suppose Russia was prepared to nuke the US and had a credible first-strike capability. Why isn't Uncle Sam rushing to defend his security interests? Why haven't pundits and politicians sounded the alarm? Why have there been no diplomatic incidents? A second Cuban missile crisis? A Russian nuclear attack somewhere else?

Overall, you could say that while my line of logic is not necessarily ridiculous (indeed, the Kinzhal can reach the US), the conclusions I support (giving Russia the Arctic is an existential threat) definitely are. It's ridiculous because it postulates massive consequences while resulting in no real-world action, independent of any facts. Imagine if I argued that the first AGI was discovered in 1924 before escaping from a secret lab (an AGI which has apparently never made waves since). Regardless of history, you could likely conclude I'm being a tinfoil-hat conspiracy theorist. 

I hope that answers your question! Is everything clear now? 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-25T18:09:29.686Z · LW · GW

Right, probably a good idea. Let me edit and add this to the top... 

Comment by Lyrongolem (david-xiao) on Defense Against The Dark Arts: An Introduction · 2023-12-25T18:09:01.991Z · LW · GW

Thanks so much! 

The format wasn't intentional by the way; I copy and pasted from Google Docs. No wonder it looked weird. 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-21T05:43:16.306Z · LW · GW

Glad you enjoyed! Now you mention it, I think I might make a continuation post sometime. Would you mind giving me a few ideas on what sort of dark-artsy techniques I should cover, or what you're curious about in general? 

Comment by Lyrongolem (david-xiao) on The Dark Arts · 2023-12-19T20:32:10.499Z · LW · GW

It seems to me ultra-BS is perhaps continuous with hyping up one particular way that reality might in fact be, in a way that is disproportionate to your actual probability, and that is also continuous with emphasizing a way that reality might in fact be which is actually proportionate with your subjective probability.

Yep! I think this is a pretty good summary. You want to understand reality just enough that you can say things that sound plausible (and are in line with your reasoning), while omitting just enough factual information that your case isn't undermined. 

I once read a post (I forget where) arguing that an amateur historian can convince an uneducated onlooker of any historical argument simply because history is so full of empirical examples. Whatever argument you're making, you can almost always find at least one example supporting your claim. Whether the rest of history contradicts the point is irrelevant, as the uneducated onlooker doesn't know history. Same principle here. Finding plausible points to support an implausible argument is almost trivially easy. 

About public belief: I think that people do tend to pick up at least vaguely on what the words they encounter are optimized for, and if you have the facts on your side but optimize for the recipient's belief you do not have much advantage over someone optimizing for the opposite belief if the facts are too complicated. Well actually, not so confident about, but I am confident about this:  if you optimize for tribal signifiers - for appearing to be supporting the correct side to others on "your" side - then you severely torpedo your credibility re: convincing the other side. And I do think that that tends to occur whenever something gets controversial.

Yeah, I definitely agree. At some point you reach a hard limit on how much an uneducated onlooker is able to understand. They may have a vague idea, but your guess is as good as mine as to what that looks like. If the onlooker can't tell which of two experts to believe, they'll have even more trouble with two people spouting BS. (If the judges were perfect Bayesian reasoners, you should expect them to do the logical equivalent of ignoring everything my opponent and I say, since we're likely both wrong in every way that matters.) Thus, they mostly default to tribal signals, and, that not being possible, to whichever side appears more confident/convincing. 
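To spell out the "perfect Bayesian judge" aside with a toy calculation (the numbers are invented for illustration, not taken from anywhere): if a speaker would assert a claim with roughly the same probability whether it's true or false, the likelihood ratio is close to 1 and the judge's posterior barely moves from the prior. A source constrained to be accurate on some class of statements (verifiable facts, checkable numbers) is a different story, which is part of why judges fall back on tribal signals and confidence instead.

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """Bayes update on hearing a source assert a claim."""
    joint_true = prior * p_assert_if_true
    joint_false = (1 - prior) * p_assert_if_false
    return joint_true / (joint_true + joint_false)

# A debater who asserts the claim ~95% of the time regardless of its truth
# is nearly uninformative: the posterior stays at the prior.
print(posterior(0.5, 0.95, 0.95))  # 0.5 -- no update

# A source that rarely asserts the claim when it's false shifts belief a lot.
print(posterior(0.5, 0.95, 0.20))  # ~0.83
```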

It's not really possible to argue against tribal signals, because at that point logic flies out the window and what matters is whether you're on someone's 'side', whatever that means. It's why you don't usually see tribal appeals in debate (unless you're me, and prep 2 separate cases for the 2 main tribes). 

Comment by Lyrongolem (david-xiao) on A Socratic dialogue with my student · 2023-12-12T00:10:53.935Z · LW · GW

I've sent you his Discord information via PM. (After obtaining permission, of course.

Thank you very much! I think I'll enjoy the chat. Just sent him the friend request. Oh, and my Discord handle is the same as my LessWrong one, by the way.

Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about

YES! Hahhahahaa... it's quite dumb. The amount of information you can reasonably convey in 4 minutes is so limited that even when your case is common sense, it's hard to actually prove your point. I can bring up a variety of commonsense and economic arguments for why student loan forgiveness inflates prices, but my opponents can basically just say 'nuh-uh' the entire debate, citing some random article saying it... somehow creates 1.2 million jobs? I sometimes wish I could just throw a book at them and say 'read the damn research!' 

But then, I should talk; I'm equally guilty. On the affirmative side I decided to go all in on an emotional appeal to the starving children of bankrupt parents, and when my opponents brought up the obvious objection (rising tuition prices due to overcharge) I decided to sneakily claim that forgiveness wasn't an actual subsidy and thus doesn't allow colleges to raise prices. I also told the judge, verbatim, that my opponents were 'misrepresenting their own evidence' by claiming that forgiveness was a subsidy. I even invited the judge to examine the evidence himself, saying that it was on our side (it wasn't). Seeming reasonable won us that debate, even though I most definitely was not being reasonable. 

“The Dark Side of the Force is a pathway to many abilities some consider to be unnatural.” 

But hey, it's fine. This is debate, and the only crime is to lose. We went undefeated again. Long live the dark arts! 

Comment by Lyrongolem (david-xiao) on A Socratic dialogue with my student · 2023-12-09T06:55:28.180Z · LW · GW

This was super fun to read, thanks for sharing! Hm... your new student seems like an interesting person to talk to. Would you mind asking if he'd be interested in a chat with someone else his age? I'm also a public forum (debate format) debater in high school, and I'm doing prep work for this particular topic on student loans as well. I'd love to get a chance to talk with him a bit, and I feel like he may enjoy it as well. 

On that note, I think I can elaborate a bit on the format in ways others might find helpful. 

Public forum is one of many debate formats, with its own time and argument structure. The general idea goes something like this. You and your partner (this is a 2v2 format) prepare a case a few weeks to a month in advance for a topic that's disclosed prior to the debate. Each side gets 4 minutes to make their initial speech. From there, each side gets a rebuttal, a cross-examination, a summary, etc. About an hour later, each side gives their closing statements (final focus) and the judge casts their vote for the side that was more 'persuasive'. 

Now, I think your student is a bit new to the format, because it seems like he hasn't yet adopted the optimal mindset for it. In public forum, being persuasive almost never means being right. Quite the opposite, actually. You're typically persuasive by being completely damn wrong. 

Let me illustrate with an example. In one of my last debate tournaments the resolution was: "The US military should substantially increase its military presence in the Arctic." Seems pretty clear-cut and typically vague, and it could go any direction. A reasonable person might consider future Arctic trade routes, security obligations to neighbors, defense of strategic chokepoints or resources... 

Fortunately, we were debaters, not reasonable people, so my partner and I ran two main arguments: climate change and nuclear Armageddon. On the affirmative, the argument was fairly straightforward. Climate change bad, renewables good. To stop climate change we thus need rare earth minerals... but... China has 90% of them. We proceeded to find an evidence card saying that the Arctic has a massive deposit of rare earth minerals, and that the US military should deploy forces to maintain security against grayzone operations. On the negative, things were much more fun. We found some instances of Russia sending ships/submarines to the US coast, the range of a nuclear hypersonic missile, and a few buildups of military bases in the Arctic. We then proceeded to argue that Russia had a credible first-strike capability and that if we didn't take the Arctic we'd be at risk of nuclear Armageddon (yes, I'm serious, I actually argued that). 

We swept that entire tournament without losing a single round. 

For an outsider, I imagine this might seem pretty ludicrous (to be honest I'd think so too), but in the context of the actual format it makes perfect sense. The debaters aren't experts, nor are they proficient in Bayescraft. Their readings are limited, and their prep time even more so. Before a regular debate most teams are scurrying to make counterarguments for common objections opponents might raise, often as late as the night before. We have just enough information and confidence to sound like experts, but only to a layperson. In front of an actual expert I imagine we'd look ridiculous. (I'd love to hear Bryan Caplan's reaction to my argument saying student loan forgiveness boosts the economy.) But that's not a relevant concern. Nobody is an expert. 

So what if they were? You have 4 minutes for the main speech, and 4 minutes for the rebuttal. It takes 5 minutes to make a bullshit claim and a whole debate to prove it wrong. This is part of the reason why scientists typically don't debate flat earthers. Any tinfoil-hat theorist worth their salt can spend a minute spinning some wild story an exasperated expert will have to spend hours to disprove. Thus, most debaters spare themselves the trouble of even trying. 

Case in point: my evidence card for Russian nuclear threats was the range of the Kinzhal hypersonic missile, about 1000 miles. Coincidentally, around the same distance from the Arctic to the US mainland. Thus the argument for why control of the Arctic is important: you can hit the mainland US with a first strike from the Arctic, but not from Moscow. If my opponents had spent 2 minutes reading the card, they would've discovered another missile I neglected to mention, the Avangard, which has a range of 3000 miles. Even if they didn't know this, the argument is obviously bogus: hypersonics are not a credible first-strike capability unless Russia has the ISR to identify and destroy ALL of our nuclear submarines, silos, and aircraft at the same time. But of course, my opponents never read much material on nuclear doctrine, so they repeated claims about mutually assured destruction which I was able to shoot down with ease. (What mutually assured destruction? We'd be dead before we could react.)

Likewise with rare earth minerals. I neglected to mention the US is not fully mining its stockpiles. I neglected to mention other mineral reserves. I also neglected to mention that China has no military forces in the Arctic, and there's no credible threat to defend against. Even if there were, I had no evidence of any US mining interests in the Arctic. (I actually pointed this out against a team that stole the case and tried to run it against us: 'Why are you sending the military? What are they going to do, mine the minerals with tanks? Bomb the deposits with HIMARS?')

Now, all of this is obvious in hindsight, but in an hour-long debate a team only has 2 minutes of prep time, so there's basically no room for anybody except the fastest readers to credibly review all the evidence carefully. (Hell, I read at 1000wpm and I still have trouble.) Thus, most of the time you can safely get away with the most egregious bullshit. In a setting where all claims are purportedly from 'experts' or 'reliable sources', where each word comes with complete confidence even when the speaker is lying through their teeth, it's rarely an efficient strategy to actually pursue the truth. Rhetorical flourishes, appeals to fear, ridiculously outsized impacts, and weak arguments are the order of the day. 

In that sense, I think I've illustrated that there isn't such a thing as a truly 'indefensible' argument, only overly scrupulous debaters. With my four years of experience in the format, I have reasonable confidence I could beat a novice in a fair public forum debate, even while taking a completely ridiculous stance like flat-eartherism. Much the same with student loans, though the problem is less acute. Your student could do the same. Say with a straight face that student loans help the economy, and the power of social cognition will make it so. 

To conclude, never argue with a public forum debater. They will drag you down to their level, and beat you with experience. 

(Note: This was an argumentative piece by a debater. The positions presented are not necessarily endorsed, and the arguments made typically do not reflect my opinion or that of any sane person. I disavow responsibility for anyone who takes my arguments seriously :P) 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-15T05:34:40.726Z · LW · GW

Hm... pretty similar here. I also don't have much of a media presence. I haven't tried the EA Forum yet, mainly because I consider myself intellectually more aligned with LW, but in any case I'm open to looking. This is looking to be a more personal conversation now. Would you like to continue in direct messages? I'm open to hearing your suggestions; I'm just as clueless right now. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-13T04:12:18.865Z · LW · GW

Likely a good suggestion. I'm in a few communities myself. But then, I'm unsure if you're familiar with how Discord works. Discord is primarily a messaging app with public server features tacked on, not the sort of community for posts like this. Are you aware of any particular communities within Discord I could join? The general platform has many communities, much like Reddit, but I'm not aware of any similar to LessWrong. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-11T21:04:19.074Z · LW · GW

Many thanks for the kind words, I appreciate it. 

You're probably right. I mainly started on LessWrong because this is a community I'm familiar with, and a place where I can expect to understand basic norms. (I've read the Sequences and have some understanding of rationalist discourse.) I'm unsure how I'd fare in other communities, but then, I haven't looked either. Are you familiar with any? I don't know myself. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-07T07:07:40.349Z · LW · GW

Thanks for your reply! 

Yes, you're right. I realize I was rather thin on evidence for the link between institutional weakness and corruption. I believe this was the typical mind fallacy on my end: I assumed the link was obvious. But since it clearly was not, allow me to go back and contextualize it.

Disclaimer: It's late and I'm tired, so prose quality will be lower than usual, and I'll be prone to some rather dry political jokes. 

To understand the link between institutions and corruption, I think it's helpful just to use simple mental models. Consider this simple question: what causes corruption? The answer seems fairly straightforward. People are corrupt, they want money, etc. But clearly, this isn't everything. People from similar racial, social, and class backgrounds end up at very different levels of 'corruption' depending on the country they live in; even countries with similar starting points often have wildly varying corruption levels. Take North and South Korea, for one example. Both emerged from the same occupied, formerly unified country after WWII, but they took wildly different paths in their development. 

South Korea eventually transitioned from a military dictatorship to the free market democracy we know today. North Korea, however, remained a military dictatorship. This resulted in stark differences in how corruption is handled on each side. In the global Corruption Perceptions Index, South Korea ranks 31st, while North Korea ranks an appalling 171st. Why is this? 

The answer, I think, is institutions. South Korea, having developed a free market system and an accountable mode of governance, is able to check the power of its political and economic elites. If the leader of North Korea decides he wants to abuse his power, the people have no recourse. If the president of South Korea decides to abuse their power, they end up in prison. (See the 7 Korean heads of state that ended up in jail, quite impressive for a 40-year period. We had 4 years with Trump and only managed a mugshot.) 

Memes aside, strong institutions typically have a variety of methods to align their leaders with the needs of the people and stave off corruption. Understanding that those in power tend to abuse said power unless restricted in some way, most democracies have institutions in place to ensure no one person dominates the system. Typically, the most straightforward answer is elections. In democracies, a leader can be corrupt to the extent the public tolerates it. Be too corrupt and you end up losing elections, or serving prison time (cough cough South Korea). There is also much greater public oversight and freedom of information, which creates a drive towards transparency. If the CCP is corrupt there's no real way to hold them accountable. The secret police will arrive to have a word with you. Xi Jinping can do as he likes and nobody has a say about it. If the American president tried to do the same we would see the news awash with headlines of scandal. The other political party would cackle, and voters would scramble to find a more reliable candidate. 

These mechanisms of alignment, as I'd call them, are far from perfect. Even in liberal democracies like the US, it's common knowledge that most congressmen are multi-millionaires, and for individuals with modest government salaries they sure have an uncanny knack for obtaining huge sums of wealth. (Perhaps Pelosi should give day trading a try; she sure seems to have talent.) However, the fact remains that corruption tends to be discouraged as a general rule, and instances of corruption tend to be far less overt and damaging. We don't see, for instance, the head of state winning a state-run lottery, or Congress passing themselves a 10 billion dollar pay raise. Backroom deals, cushy corporate jobs, insider trading and the like are acceptable. Outright raising land rents like a feudal lord to fund direct salary increases is not. Our leaders are, in the end, constrained by law. This allows regular citizens to do the work of making money... mostly. Corruption will hurt, but it isn't crippling, as there's at least some accountability. 

Case in point, many of the worst offenders are simply convicted on federal corruption charges. Our last president was impeached twice and is currently on trial for, among other things, fraud and corruption. Imperfect as they are, these are still methods to keep the guy in charge accountable. They do not exist in countries with weak institutions. Many times, in fact, the institutions end up bolstering corruption.

Let's return to the example of Mexico. Recall how Carlos Slim was able to build a telecommunications monopoly to plunder the wealth of the people. You might wonder how in the world he managed this; surely the law wouldn't stand for such overtly criminal business practices? The issue is, not too surprisingly, that the law is on Slim's side. Recurso de Amparo, originally a law designed to protect the constitutional rights of citizens, has been exploited by Slim's lawyers to shield his business practices. See how Slim was able to dodge a record fine. This was the very same law Slim tried to fall back on when he attempted his monopolistic practices in the US, only it didn't exist there. American law, in its infinite magnanimity to the rich and powerful, still managed to slap Carlos Slim with fines for his comparatively minor transgressions. The tactics Slim used to succeed in Mexico failed utterly in the US, mostly if not completely due to the nature of US institutions. For all its faults, the modern US is no longer in the Gilded Age. Overt monopolies are no longer allowed.

It's not just that weak institutions favor the businesses of oligarchs, either. Weak institutions actively give businesses to oligarchs. Consider post-Soviet Russia, which saw previously state-owned companies auctioned off to political cronies at bargain prices. Or post-Soviet Hungary, which likewise handed billions to unscrupulous businessmen willing to play political games of power. Slim himself was an example of such a politically created oligarch in Mexico, having acquired his telecommunications company through shady backroom deals. Lacking any oversight, corrupt government officials stole like there was no tomorrow, happily selling the public good for private benefit. These are the structures that create men like Carlos Slim, Sándor Csányi, and Sergei Shoigu, all of whom received their wealth not through aptitude, service, or innovation, but through political machinations. 

How is the normal businessman meant to succeed in an environment like this? The question is rhetorical. Of course they can't. You do not compete with Carlos Slim's megacorporation in Mexico. You do not attempt to create rival banks against Sándor Csányi in Hungary. You don't invest in quality as a Russian procurement company; you invest in another mansion for Sergei Shoigu. In all the examples listed we see the same problem. People will be corrupt if there are no institutional safeguards, much like they would commit crimes if there were no police force. Being corrupt, they naturally move to stifle free market competition and forcibly seize the assets of anyone who is successful but lacks a powerful political patron. There is no point in trying. There is no point in innovating. There is no point in trying to create jobs or lift people out of poverty. Now, playing politics and kissing the Supreme Leader's arse? That's where the money is.

Having said all this, we come full circle back to where we started: institutions. As I illustrate, much of the problem with corruption (and yes, this applies to war also) is essentially an alignment issue between the ruler and the ruled. The ruler doesn't need to care about their subjects, because they're the damn ruler; they can do whatever they like. Corruption can leave schools underfunded and famine relief nonexistent. Wars can devastate families and tear lives to pieces. But the almighty shepherd cares nothing for the suffering of his sheep. Why should he? There is, after all, nothing stopping him. 

I hope this was a coherent narrative to your satisfaction. Feel free to ask for elaboration or provide critiques. I'll apologize in advance for the poor quality; it's late on a school night, so I'll have to get going. Hope you enjoyed, and I look forward to your response! 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-03T01:26:06.722Z · LW · GW

Yes, they are. In the main post my only quote blocks are direct copy/pastes from the web version of the book. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-11-02T05:57:34.538Z · LW · GW

In my head I rephrased that thesis as poor institutions and practices can impair efficiency totally, which I found as unsurprising as a charity add turns as not entirely accurate. So if you target readers who find this controversial I may just not be the right reader for the feedback you seek.


Right, that makes sense, and it was part of the angle I was taking. When I said controversial, I was mainly referring to the more general claim that aid tends to be ineffective in reducing long-term poverty, with few exceptions (the implication being that aid fails to address institutional issues). The idea that monetary resources play a small (or, as I argue, largely negligible) role in addressing long-term issues seems to me like it would be controversial to many EAs. But then, this is mostly semantic and hardly the main point. Let's get into the heart of the issue. 

Still, I gave some time thinking at: What could you do to make me update? Instead of mere illustration of failures when your thesis was ignored, can you also present cases where following this very thesis did make a success?

A very insightful question. I was initially a bit dubious myself. Where has my thesis been followed by aid organizations? Certainly I don't recall any charities focusing on reforming government institutions! But then, on second thought, that was almost the entire point. It wasn't aid programs reforming governments, but rather, people. 

Consider the wealthiest nations in the world. With few exceptions, the richest nations are the ones with strong institutions, particularly representative, democratic ones. The exceptions tend to be few and far between (see Singapore, with an authoritarian technocracy that's ruthlessly efficient, or Qatar, with its absurd amounts of oil wealth). Meanwhile, nations with defunct or nonexistent institutions (see North Korea, the Congo, Mexico, South Africa) invariably face poverty and destitution on a mass scale. Even in China, one of the great economic success stories, we still see defunct institutional inheritances like the hukou system result in situations like 25% of the Chinese workforce being trapped in subsistence agriculture (compared to around 2% in the US, mostly industrial farmers). 

In that sense, I believe I can answer your question about precision. 

What’s the minimal institutions before charity get efficient? How much efficiency do we gain for what progress in institutions? Could you find if institutions explain more variance than, say, war and corruption?

I would liken institutions to force multipliers in the military sense. One soldier with a gun >>> one hundred soldiers with spears. In the same way, powerful institutions enhance the ability of monetary and other resources to address poverty. Consider the following example: Mexico. Mexico has 44% of its population living in below-subsistence conditions, with about 9% in extreme poverty. Part of the reason for this is stark inequality. In 2021, the wealthiest 10% of households held nearly 80% of household wealth. Meanwhile, the bottom 50% have less than 5% of the wealth, and the figure has only decreased over the years. Even among the top 10%, inequality is appalling, with top-ranking businessmen like Carlos Slim making billions through corrupt business practices that plundered the country's wealth and gutted its government. 

What difference does more money even make, in a situation like this? Even if household wealth were to double tomorrow, the poorest households would still be teetering on the precipice of starvation as kleptocrats like Slim make billions. The problem is not inherently a lack of food, resources, technology, or productivity, but rather deeply rotten, unfair institutions that favor businessmen like Slim at the expense of the bottom millions. 

I could probably continue for hours on the one example of Mexico, but I don't think I need to. Kraut has already done a long, several-hour series on the development of Mexican corruption and institutions. 

The Mexican American Border | From War to Wall - YouTube

The bottom line is that institutions matter, not just as a sidenote enabling aid, but as the chief driver of prosperity in nations. Strong market and regulatory institutions in the US created the incentives that produced world-renowned innovators and technologies: Gates with Microsoft computers, Jobs with Apple phones, and now OpenAI with GPT. Even before that, the explosion of patented technologies and industrial growth was one of the chief drivers of American wealth. This is not merely the case now, but throughout all of history. When our strongest companies and businesspeople succeed, we can hope to enjoy (at least partially) increased tax money, social benefits, and increased income and jobs. Though these institutions aren't perfect, Americans can share in, or at least coexist with, the benefits of growth. The same is not true in Mexico, where a zero-sum game sees people like Slim win at the expense of everyone else. 

In that sense, I find this question rather misguided. 

Could you find if institutions explain more variance than, say, war and corruption?

As I illustrate with Mexico, institutions are not simply a factor that explains more variance than war and corruption. Rather, weak institutions are the very causes of war and corruption. A lack of institutional safeguards and transparency creates situations where the elite can plunder the wealth of the commons. You will notice that Slim did not successfully get away with his business practices in the US. Institutional safeguards like a (mostly) functioning legal and justice system forced him out with countless fines. Meanwhile, many others like Slim may steal from the Mexican people with impunity. Weak institutions are the reason why the Mexican government colludes with cartels. They are the reason why crime stays at appalling highs. They are, in short, the reason Mexico is poor. To date there are more rich Mexican Americans than there are rich Mexicans. How comically tragic is that? 

Then there is war. We have to remember that a lack of checks on autocratic power is often what causes this problem in the first place. See Putin in Ukraine, the countless warlords in Africa, or, most famously, Hitler in Europe. Fundamentally, we see how a lack of constraints on warmongering dictators allows wars of conquest and genocide in the name of national or ethnic grandeur. Actions that would be unthinkable in democracies are a fact of life in dictatorships, simply because the dictator has power and their personal interests do not align with the interests of the state. 

The issue of institutions is not unique to Mexico. Rather, we see it all throughout the world. The best comparison I can think of is the difference between Poland and Hungary, both former Soviet-bloc states that eventually saw a massive divergence in institutional development. From similar beginnings, they took dramatically different paths. Poland, following the devastation of Soviet rule, was able to reconstruct along lines of Western institutional development, with free markets, fair elections, and checks on elite power. Hungary, meanwhile, followed a very different path, echoing the plunder of formerly state-owned companies by Russian oligarchs. To date, Hungary is the most corrupt country in the EU, a country run by a select business elite with connections to dictator Viktor Orbán. And make no mistake, he is a dictator. Now, years later, the economy of Poland is projected to overtake the UK's. Meanwhile, despite generous EU subsidies, 20% of Hungarians face risk of poverty or social exclusion.

As someone who has studied history and modern politics as a hobby, I can point you to any number of examples. Success in Botswana. Failure in Haiti. Famine in Russia, in China, in India. Economic miracles in South Korea, in Japan, and in Taiwan. I could craft 3 separate posts' worth of content and it would still not be enough. But I think this is sufficient to underscore my point. 

I will concede that there are still exceptions. The Gulf Arab monarchies survive off natural resources soon to fade into irrelevancy; Singapore and China survive off efficient (or at least supposedly efficient) models of governance that we already see failing in China. But these are outliers with unique circumstances, whereas the modern liberal democracies are almost without fail rich and well developed. 

In that sense, I'm led to believe Fukuyama was right in some respects. Though it may not be the end of history, Western liberal democracy certainly is a contender for one of the greatest innovations humanity has ever made, with its strong institutions, rule of law, free markets, and respect for human dignity. It has, more widely and more consistently than any other method, driven out poverty and raised nations to prosperity. 

Does this make you update? Regardless of whether you do or don't, I'd appreciate your thoughts. Thank you for the question! It forced me to think more deeply about my beliefs and justify them more coherently. I'm unsure if this is helpful in the realm of aid specifically, but I believe it does provide ample evidence for my thesis and raises its coherence. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-10-31T04:38:03.729Z · LW · GW

(Do you want to prove EA is doom to fail? I don’t think so but that’s one way to read the title.)

Hm... right. That would make sense. I think I can see how people might misread that. No, I had no intention of doing anything like that. I was trying to address the shortcomings of charity, particularly in the realm of structural and institutional rot (and the other myriad causes of poverty). EA charity faces many of the same issues in this regard, but 'doomed to fail' is hardly the point I would like to make. (If anything I try my best to advocate the opposite by making a donation myself) I was merely trying to point out that foreign/charity aid in general cannot hope to solve ingrained root causes of poverty without substantial, unsustainably large investments. 

Part of the issue may have to do with my writing style. I try to aim for emotionally evocative, powerful posts. I find that this is a good way to get people engaged and generate discussion. (It also tends to be more fulfilling to write) This seems to have gotten in the way of clarity. Given the weight and circumstances of the subject matter (millions of people living in misery) I thought it was more than appropriate to amp up my language. Of course, this is still no excuse for being unclear. I should probably re-examine my diction. 

That said, do you think I could change the title or edit in a disclaimer? The title itself was largely a stylistic choice; while I certainly could've said 'aid to the poor has certain practical limitations', I feel like that's hardly interesting or conducive to sparking a discussion. I am ultimately still presenting what I believe is a more controversial thesis, and I thought my title should reflect that. 

Comment by Lyrongolem (david-xiao) on Beyond the Data: Why aid to poor doesn't work · 2023-10-31T04:21:03.793Z · LW · GW

Thanks so much for your comment! 

Hm... yes, upon further reflection your summarization seems accurate, or at least highly plausible. I am not too sure what the mindset of the average LWer or EA looks like myself. (Although I've frequented the site for some time, I'm mainly reading random frontpage posts that pique my interest; I don't attend meetups, participate in group activities, or do much else of that nature.) It doesn't merely read like I haven't engaged much in their world; the truth is I simply haven't, and I have no intention of hiding it. I tagged the post EA because my points on aid address charities in general quite broadly, and so I thought it would be of interest to EA-adjacent individuals. I also hoped that they might be able to enlighten me a bit on the many parts of EA I still don't fully understand. The post was never meant to critique or even focus on EA. 

This may have gotten lost in everything else I was attempting to do in the post, but one of the central motivations was to disprove a point I saw in an RA fundraiser: that unconditional cash transfers could 'eradicate' global poverty. I found the initiative commendable, but unrealistic for a variety of reasons, many of which I detailed in the post. I never meant to say the aid wouldn't help, but rather that it was likely insufficient to meet their goal of ending long-term poverty. 

That said, yes, you are right. My evidence does not support the claim that aid is completely ineffective in ending long-term poverty. It supports, rather, the claim that aid requires much higher volumes to solve the long-term issues, in conjunction with many other things. In my mind this still meant aid was an inadequate solution, since I didn't believe the volumes required to solve the issue would be a reasonable demand upon charity or foreign aid (just look at the enormous price tag of the Millennium Villages). Thinking back, I probably exaggerated a bit in the title and in some of my claims. While the logical points may have been sound, I may have misrepresented them in the title and elsewhere. (I realize I sound a bit silly in hindsight; it's easy to see how people might interpret the phrase 'doesn't work' as 'useless' rather than 'inefficient to the point of implausibility'.) 

I think part of my issue with this post is that I'm really just uncertain what my audience believes and how they might react to or interpret different things I say. While I have some idea and a few vague guesses, there's no real way to know for certain. I'm also unsure if I would have any way of knowing without simply accruing direct experience, but your thoughts definitely helped me in this regard. I will keep your model of LW in mind when making posts in the future. Thanks once again! 

That said, do you have any critiques/questions regarding the post personally? I'd be happy to continue chatting about any potential weak spots or logical errors.