Posts

Have any parties in the current European Parliamentary Election made public statements on AI? 2024-05-10T10:22:48.342Z
On what research policymakers actually need 2024-04-23T19:50:12.833Z
The Filan Cabinet Podcast with Oliver Habryka - Transcript 2023-02-14T02:38:34.867Z
Open & Welcome Thread - November 2022 2022-11-01T18:47:40.682Z
Health & Lifestyle Interventions With Heavy-Tailed Outcomes? 2022-06-06T16:26:49.012Z
Open & Welcome Thread - June 2022 2022-06-04T19:27:45.197Z
Why Take Care Of Your Health? 2022-04-06T23:11:07.840Z
MondSemmel's Shortform 2022-02-02T13:49:32.844Z
Recommending Understand, a Game about Discerning the Rules 2021-10-28T14:53:16.901Z
Quotes from the WWMoR Podcast Episode with Eliezer 2021-03-13T21:43:41.672Z
Another Anki deck for Less Wrong content 2013-08-22T19:31:09.513Z

Comments

Comment by MondSemmel on Arbital has been imported to LessWrong · 2025-02-20T16:35:11.287Z · LW · GW

I assume the idea of "lens" as a term is that it's one specific person's opinionated view of a topic. As in, "here's the concept seen through EY's lens". So terms like "variant" or "alternative" are too imprecise, but e.g. "perspective" might also work.

Comment by MondSemmel on ozziegooen's Shortform · 2025-02-19T22:07:33.095Z · LW · GW

Based on AI organisations frequently achieving the opposite of their chosen name (OpenAI, Safe Superintelligence, etc.), UNBIASED would be the most biased model, INTELLECT would be the dumbest model, JUSTICE would be particularly unjust, MAGA would in effect be MAWA, etc.

Comment by MondSemmel on Daniel Kokotajlo's Shortform · 2025-02-19T15:00:42.610Z · LW · GW

I'm skeptical about the extent to which the latter can be done. That's like saying an AI lab should suddenly care about AI safety. One can't really bolt a security mandate onto an existing institution and expect a competent result.

Comment by MondSemmel on Daniel Kokotajlo's Shortform · 2025-02-19T10:21:21.341Z · LW · GW

or stopped disclosing its advancements publicly

Does this matter all that much, given the lack of opsec, relationships between and poaching of employees across labs, corporate espionage, etc.?

Comment by MondSemmel on nikola's Shortform · 2025-02-19T10:17:53.925Z · LW · GW

A more cynical perspective is that much of this arms race, especially the international one against China (quote from above: "If we don't build fast enough, then the authoritarian countries could win."), is entirely manufactured by the US AI labs.

Comment by MondSemmel on Jay Bailey's Shortform · 2025-02-17T15:26:44.096Z · LW · GW

Thanks! I haven't had time to skim either report yet, but I thought it might be instructive to ask the same questions of Perplexity Pro's DR mode (it shares the "Deep Research" name with OpenAI's feature but is otherwise unrelated to it; in particular, it's included in their regular monthly subscription, so it can't be particularly powerful): see here. (This is the second report I generated, as the first one froze indefinitely while the app was generating the conclusion, and thus couldn't be shared.)

Comment by MondSemmel on Jay Bailey's Shortform · 2025-02-15T16:23:29.948Z · LW · GW

How about "analyze the implications for risk of AI extinction, based on how OpenAI's safety page has changed over time"? Inspired by this comment (+ follow-up w/ Internet archive link).

Comment by MondSemmel on Mateusz Bagiński's Shortform · 2025-02-10T20:50:13.010Z · LW · GW

apparently China as a state has devoted $1 trillion to AI

Source? I only found this article about 1 trillion Yuan, which is $137 billion.
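
(For reference, a rough sanity check of that conversion, assuming an exchange rate of about 7.3 CNY per USD:)

$$1 \text{ trillion CNY} \div 7.3\ \tfrac{\text{CNY}}{\text{USD}} \approx 1.37 \times 10^{11}\ \text{USD} \approx \$137 \text{ billion}$$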

Comment by MondSemmel on artifex0's Shortform · 2025-02-06T10:15:06.893Z · LW · GW

If this risk is in the ballpark of a 5% chance in the next couple of years, then it seems to me entirely dominated by AI doom.

Comment by MondSemmel on ChristianKl's Shortform · 2025-01-30T14:13:52.405Z · LW · GW

Yeah. Though as a counterpoint, something I picked up from IIRC Scott Alexander or Marginal Revolution is that the FDA is not great about accepting foreign clinical trials, or demands that they always be supplemented by trials on Americans, or similar.

Comment by MondSemmel on What Goes Without Saying · 2025-01-25T22:24:46.882Z · LW · GW

Milton Friedman teaspoon joke

Total tangent: this article from 2011 attributes the quote to a bunch of people, and finds an early instance in a 1901 newspaper article.

Comment by MondSemmel on Yonatan Cale's Shortform · 2025-01-20T15:18:23.122Z · LW · GW

Law question: would such a promise among businesses, rather than an agreement mandated by / negotiated with governments, run afoul of laws related to monopolies, collusion, price gouging, or similar?

Comment by MondSemmel on Noosphere89's Shortform · 2025-01-18T19:07:13.810Z · LW · GW

I like Yudkowsky's toy example of tasking an AGI to copy a single strawberry, on a molecular level, without destroying the world as a side-effect.

Comment by MondSemmel on I'm offering free math consultations! · 2025-01-14T22:14:14.213Z · LW · GW

You're making a very generous offer of your time and expertise here. However, to me your post still feels way, way more confusing than it should be.

Suggestions & feedback:

  • Title: "Get your math consultations here!" -> "I'm offering free math consultations for programmers!" or similar.
    • Or something else entirely. I'm particularly confused how your title (math consultations) leads into the rest of the post (debuggers and programming).
  • First paragraph: As your first sentence, mention your actual, concrete offer (something like "You screenshare as you do your daily tinkering, I watch for algorithmic or theoretical squiggles that cost you compute or accuracy or maintainability." from your original post, though ideally with much less jargon). Also your target audience: math people? Programmers? AI safety people? Others?
  • "click the free https://calendly.com/gurkenglas/consultation link" -> What you mean is: "click this link for my free consultations". What I read is a dark pattern à la: "this link is free, but the consultations are paid". Suggested phrasing: something like "you can book a free consultation with me at this link"
  • Overall writing quality
    • Assuming all your users would be as happy as the commenters you mentioned, it seems to me like the writing quality of these posts of yours might be several levels below your skill as a programmer and teacher. In which case it's no wonder that you don't get more uptake.
    • Suggestion 1: feed the post into an LLM and ask it for writing feedback.
    • Suggestion 2: imagine you're a LW user in your target audience, whoever that is, and you're seeing the post "Get your math consultations here!" in the LW homepage feed, written by an unknown author. Do people in your target audience understand what your post is about, enough to click on the post if they would benefit from it? Then once they click and read the first paragraph, do they understand what it's about and click on the link if they would benefit from it? Etc.

Comment by MondSemmel on quila's Shortform · 2025-01-14T16:24:18.275Z · LW · GW

Are you saying that the 1 aligned mind design in the space of all potential mind designs is an easier target than the subspace composed of mind designs that does not destroy the world?

I didn't mean that there's only one aligned mind design, merely that almost all (99.999999...%) conceivable mind designs are unaligned by default, so the only way to survive is if the first AGI is designed to be aligned; there's no hope that a random AGI just happens to be aligned. And since we're heading for the latter scenario, it would be very surprising to me if we managed to design a partially aligned AGI and lose that way.

No, because the you who can ask (the persons in power) is themselves misaligned with the 1 alignment target that perfectly captures all our preferences.

I expect the people in power are worrying about this way more than they worry about the overwhelming difficulty of building an aligned AGI in the first place. (Case in point: the manufactured AI race with China.) As a result I expect they'll succeed at building a by-default-unaligned AGI and driving themselves and us to extinction. So I'm not worried about instead ending up in a dystopia ruled by some government or AI lab owner.

Comment by MondSemmel on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2025-01-13T22:58:19.421Z · LW · GW

Have donated $400. I appreciate the site and its team for all it's done over the years. I'm not optimistic about the future wrt AI (I'm firmly on the AGI doom side), but I nonetheless think that LW made a positive contribution on the topic.

Anecdote: In 2014 I was on a LW Community Weekend retreat in Berlin which Habryka either organized or at which he gave a whole bunch of rationality-themed presentations. My main impression of him was that he was the most agentic person in the room by far. Based on that experience I fully expected him to eventually accomplish some arbitrary impressive thing, though it still took me by surprise to see him specifically move to the US and eventually become the new admin/site owner of LW.

Comment by MondSemmel on Bryce Robertson's Shortform · 2025-01-09T12:23:28.404Z · LW · GW

Recommendation: make the "Last updated" timestamp on these pages way more prominent, e.g. by moving them to the top below the page title. (Like what most news websites nowadays do for SEO, or like where timestamps are located on LW posts.) Otherwise absolutely no-one will know that you do this, or that these resources are not outdated but are actually up-to-date.

The current timestamp location is so unusual that I only noticed it by accident, and was in fact about to write a comment suggesting you add a timestamp at all.

Comment by MondSemmel on OpenAI #10: Reflections · 2025-01-08T11:58:06.424Z · LW · GW

The frustrating thing is that in some ways this is exactly right (humanity is okay at resolving problems iff we get frequent feedback) and in other ways exactly wrong (one major argument for AI doom is that you can't learn from the feedback of having destroyed the world).

Comment by MondSemmel on OpenAI #10: Reflections · 2025-01-08T11:53:36.250Z · LW · GW

The implication is that you absolutely can't take Altman at his bare word, especially when it comes to any statement he makes that, if true, would result in OpenAI getting more resources. Thus you need to a) apply some interpretative filter to everything Altman says, and b) listen to other people instead who don't have a public track record of manipulation like Altman.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2025-01-07T14:57:16.922Z · LW · GW

My current model is that ML experiments are bottlenecked not on software-engineer hours, but on compute. See Ilya Sutskever's claim here

That claim is from 2017. Does Ilya even still endorse it?

Comment by MondSemmel on quila's Shortform · 2025-01-06T18:18:03.121Z · LW · GW

I guess we could in theory fail and only achieve partial alignment, but that seems like a weird scenario to imagine. Like shooting for a 1 in big_number target (= an aligned mind design in the space of all potential mind designs) and then only grazing it. How would that happen in practice?

And what does it even mean for a superintelligence to be "only misaligned when it comes to issues of wealth distribution"? Can't you then just ask your pretty-much-perfectly-aligned entity to align itself on that remaining question?

Comment by MondSemmel on quila's Shortform · 2025-01-06T17:12:38.691Z · LW · GW

The default outcome is an unaligned superintelligence singleton destroying the world and not caring about human concepts like property rights. Whereas an aligned superintelligence can create a far more utopian future than a human could come up with, and cares about capitalism and property rights only to the extent that that's what it was designed to care about.

So I indeed don't get your perspective. Why are humans still appearing as agents or decision-makers in your post-superintelligence scenario at all? If the superintelligence for some unlikely reason wants a human to stick around and to do something, then it doesn't need to pay them. And if a superintelligence wants a resource, it can just take it, no need to pay for anything.

Comment by MondSemmel on quila's Shortform · 2025-01-04T23:27:13.088Z · LW · GW

Another issue is Eternal September: LW membership has grown a ton due to the AI boom (see the LW site metrics in the recent fundraiser post), so as one might expect, most new users haven't read the old stuff on the site. There are various ways in which the LW team tries to encourage them to read it, but nevertheless.

Comment by MondSemmel on quila's Shortform · 2025-01-04T23:21:04.340Z · LW · GW

I guess part of the issue is that in any discussion, people don't use the same terms in the same way. Some people call present-day AI capabilities by terms like "superintelligent" in a specific domain. Which is not how I understand the term, but I understand where the idea to call it that comes from. But of course such mismatched definitions make discussions really hard. Seeing stuff like that makes it very understandable why Yudkowsky wrote the LW Sequences...

Anyway, here is an example of a recent shortform post which grapples with the same issue that vague terms are confusing.

Comment by MondSemmel on The Online Sports Gambling Experiment Has Failed · 2025-01-04T21:51:54.979Z · LW · GW

I appreciate the link and the caveats!

Re: "the total number of pages does sometimes decrease", it's not clear to me that that's the case. These plots show "number of pages published annually", after all. And even if that number is an imperfect proxy for the regulatory burden of that year, what we actually care about is in any case not the regulatory burden of a year, but the cumulative regulatory burden. That cannot possibly have stayed flat for 2000~2012, right? So that can't be what the final plot in the pdf is saying.

Comment by MondSemmel on The Online Sports Gambling Experiment Has Failed · 2025-01-04T14:55:55.807Z · LW · GW

I don't think elite behavior is at all well-characterized by assuming they're trying to strike a sensible tradeoff here. For example, there are occasionally attempts to cut outdated regulations, and these never find any traction (e.g. the total number of pages of legislation only grows, but never shrinks). Which isn't particularly surprising insofar as the power of legislatures lies in passing new legislation, so removing old legislation doesn't seem appealing at all.

Comment by MondSemmel on The Online Sports Gambling Experiment Has Failed · 2025-01-04T14:51:31.959Z · LW · GW

Sure, but they should instead be surprised by large-scale failures of non-libertarian policy elsewhere, the archetypal case being NIMBY policies which restrict housing supply and thus push up housing prices. Or perhaps an even clearer case is rent control policies pushing up prices.

Comment by MondSemmel on Recommending Understand, a Game about Discerning the Rules · 2025-01-02T20:28:44.413Z · LW · GW

As mentioned at the top, this video game is inspired by the board game Zendo, which is a bit like what you propose, and which I've seen played at rationalist meetups.

Zendo is a game of inductive logic in which one player, the Moderator, creates a secret rule that the rest of the players try to figure out by building and studying configurations of the game pieces. The first player to correctly guess the rule wins.

For games with similar themes, Wikipedia also suggests the games Eleusis (with standard playing cards) and Penultima (with standard chess pieces).

Comment by MondSemmel on Comment on "Death and the Gorgon" · 2025-01-02T16:12:50.589Z · LW · GW

Yes, while there are limits to what kinds of tasks can be delegated, web hosting is not exactly a domain lacking in adequate service providers.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-12-29T10:56:09.818Z · LW · GW

1) "there are many worlds in which it is too late or fundamentally unable to deliver on its promise while prosaic alignment ideas do. And in worlds in which theory does bear fruit" - Yudkowsky had a post somewhere about you only getting to do one instance of deciding to act as if the world was like X. Otherwise you're no longer affecting our actual reality. I'm not describing this well at all, but I found the initial point quite persuasive.

2) Highly relevant LW post & concept: The Tale of Alice Almost: Strategies for Dealing With Pretty Good People. People like Yudkowsky and johnswentworth think that vanishingly few people are doing something that's genuinely helpful for reducing x-risk, and most people are doing things that are useless at best or actively harmful (by increasing capabilities) at worst. So how should they act towards those people? Well, as per the post, that depends on the specific goal:

Suppose you value some virtue V and you want to encourage people to be better at it.  Suppose also you are something of a “thought leader” or “public intellectual” — you have some ability to influence the culture around you through speech or writing.

Suppose Alice Almost is much more V-virtuous than the average person — say, she’s in the top one percent of the population at the practice of V.  But she’s still exhibited some clear-cut failures of V.  She’s almost V-virtuous, but not quite.

How should you engage with Alice in discourse, and how should you talk about Alice, if your goal is to get people to be more V-virtuous?

Well, it depends on what your specific goal is.

...

What if Alice is Diluting Community Values?

Now, what if Alice Almost is the one trying to expand community membership to include people lower in V-virtue … and you don’t agree with that?

Now, Alice is your opponent.

In all the previous cases, the worst Alice did was drag down the community’s median V level, either directly or by being a role model for others.  But we had no reason to suppose she was optimizing for lowering the median V level of the community.  Once Alice is trying to “popularize” or “expand” the community, that changes. She’s actively trying to lower median V in your community — that is, she’s optimizing for the opposite of what you want.

The mainstream wins the war of ideas by default. So if you think everyone dies if the mainstream wins, then you must argue against the mainstream, right?

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T01:35:39.720Z · LW · GW

Tiny editing issue: "[] everyone in the company can walk around and tell you []" -> The brackets are empty. Maybe these were meant for italicized formatting?

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T00:36:26.935Z · LW · GW

Thanks for posting this. Editing feedback: I think the post would look quite a bit better if you used headings and LW quotes. This would generate a timestamped and linkable table of contents, and also more clearly distinguish the quotes from your commentary. Example:

Tom Brown at 20:00:

the US treats the Constitution as like the holy document—which I think is just a big thing that strengthens the US, like we don't expect the US to go off the rails in part because just like every single person in the US is like The Constitution is a big deal, and if you tread on that, like, I'm mad. I think that the RSP, like, it holds that thing. It's like the holy document for Anthropic. So it's worth doing a lot of iterations getting it right.

<your commentary>

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T00:31:02.832Z · LW · GW

I did something similar when I made this transcript: leaving in verbal hedging, particularly in the context of contentious statements etc., where omitting such verbal tics can give a quite misleading impression.

Comment by MondSemmel on The Dissolution of AI Safety · 2024-12-13T10:36:43.709Z · LW · GW

I think it would need to be closer to "interacting with the LLM cannot result in exceptionally bad outcomes in expectation", rather than a focus on compliance of text output.

Comment by MondSemmel on The Dissolution of AI Safety · 2024-12-12T22:29:07.406Z · LW · GW

Any argument which features a "by definition" has probably gone astray at an earlier point.

In this case, your by-definition-aligned LLM can still cause harm, so what's the use of your definition of alignment? As one example among many, the part where the LLM "output[s] text that consistently" does something (whether it be "reflects human value judgements" or otherwise) is not something RLHF is actually capable of guaranteeing with any level of certainty, which is one of many conditions an LLM-based superintelligence would need to fulfill to be remotely safe to use.

Comment by MondSemmel on David Gross's Shortform · 2024-12-12T22:16:05.143Z · LW · GW

How about "idle musings" or "sense of wonder", rather than "curiosity"? I remember a time before I had instant access to google whenever I had a question. Back then, a thought of "I wonder why X" was not immediately followed by googling "why X", but sometimes instead followed by thinking about X (incl. via "shower thoughts"), daydreaming about X, looking up X in a book, etc. It's not exactly bad that we have search engines and LLMs nowadays, but for me it does feel like something was lost, too.

Comment by MondSemmel on MondSemmel's Shortform · 2024-12-12T00:19:35.396Z · LW · GW

Media is bizarre. Here is an article drawing tenuous connections between the recent assassin of a healthcare CEO and rationalism and effective altruism, and here is one that does the same with rationalism and Scott Alexander. Why, tho?

Comment by MondSemmel on Sapphire Shorts · 2024-12-07T11:26:14.633Z · LW · GW

Related, here is something Yudkowsky wrote three years ago:

I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics.  Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation.  But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.  I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs.  But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there."  This proposal is not mainly based on the advance theories by which you might suspect or guess that subgroups like that would end badly; it is motivated mainly by my sense of what the actual outcomes have been.

Since implicit subtext can also sometimes be bad for us in social situations, I should be explicit that concern about outcomes of psychedelic advocacy includes Michael Vassar, and concern on woo includes the alleged/reported events at Leverage.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-12-05T18:57:25.692Z · LW · GW

I mean, here are two comments I wrote three weeks ago, in a shortform about Musk being able to take action against Altman via his newfound influence in government:

That might very well help, yes. However, two thoughts, neither at all well thought out: ... Musk's own track record on AI x-risk is not great. I guess he did endorse California's SB 1047, so that's better than OpenAI's current position. But he helped found OpenAI, and recently founded another AI company. There's a scenario where we just trade extinction risk from Altman's OpenAI for extinction risk from Musk's xAI.

And:

I'm sympathetic to Musk being genuinely worried about AI safety. My problem is that one of his first actions after learning about AI safety was to found OpenAI, and that hasn't worked out very well. Not just due to Altman; even the "Open" part was a highly questionable goal. Hopefully Musk's future actions in this area would have positive EV, but still.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-12-05T18:53:06.107Z · LW · GW

all the focus on the minutia of OpenAI & Anthropic may very well end up misplaced.

This doesn't follow. The fact that OpenAI and Anthropic are racing contributes to other people like Musk deciding to race, too. This development just means that there's one more company to criticize.

Comment by MondSemmel on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-04T13:06:34.806Z · LW · GW

Re: the history of LW, there's a bunch more detail at the beginning of this podcast Habryka did in early 2023.

Comment by MondSemmel on papetoast's Shortforms · 2024-12-03T19:25:38.732Z · LW · GW

I could barely see that despite always using a zoom level of 150%. So I'm sometimes baffled at the default zoom levels of sites like LessWrong, wondering if everyone just has way better eyes than me. I can barely read anything at 100% zoom, and certainly not that tiny difference in the formulas!

Comment by MondSemmel on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T12:22:00.384Z · LW · GW

I can't find any off the top of my head, but I'm pretty sure the LW/Lightcone salary question has been asked and answered before, so it might help to link to past discussions?

Comment by MondSemmel on Sinclair Chen's Shortform · 2024-11-29T20:22:26.584Z · LW · GW

Apologies if I gave the impression that "a selfish person should love all humans equally"; while I'm sympathetic to arguments from e.g. Parfit's book Reasons and Persons[1], I don't go anywhere that far. I was making a weaker and (I think) uncontroversial claim, something closer to Adam Smith's invisible hand: that aggregating over every individual's selfish focus on close family ties overall results in moral concerns becoming relatively more spread out, because the close circles of your close circle aren't exactly identical to your own.

  1. ^

    Like that distances in time and space are similar. So if you imagine people in the distant past having the choice for a better life at their current time, in exchange for there being no people in the far future, then you wish they'd care about more than just their own present time. A similar logic argues against applying a very high discount rate to your moral concern for beings that are very distant to you in e.g. space, close ties, etc.

Comment by MondSemmel on Sinclair Chen's Shortform · 2024-11-29T11:10:05.142Z · LW · GW

Well, if there were no minds to care about things, what would it even mean that something should be terminally cared about?

Re: value falloff: sure, but if you start with your close circle, and then aggregate the preferences of that close circle (whose members have close circles of their own), and rinse and repeat, then this falloff for any individual becomes comparatively much less significant for society as a whole.
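
As a toy illustration of that aggregation intuition (a minimal sketch; the ring network, weights, and step counts are all hypothetical):

```python
import numpy as np

# Ten people arranged in a ring. Each person's "concern vector" puts most
# weight on themselves and a little on their two neighbours.
n = 10
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.8                # concern for oneself
    W[i, (i - 1) % n] = 0.1      # concern for left neighbour
    W[i, (i + 1) % n] = 0.1      # concern for right neighbour

# Aggregating "the preferences of your close circle, whose members have close
# circles of their own" corresponds to taking powers of the weight matrix.
for steps in [1, 2, 5, 20]:
    effective = np.linalg.matrix_power(W, steps)
    print(steps, np.round(effective[0], 3))
# With each round of aggregation, person 0's effective concern spreads further
# across the whole ring, even though each individual step is heavily local.
```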

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-28T12:06:33.039Z · LW · GW

Maybe our disagreement is that I'm more skeptical about the legislature proactively suggesting any good legislation? My default assumption is that without leadership, hardly anything of value gets done. Like, it's an obviously good idea to repeal the Jones Act, and yet it's persisted for a hundred years.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T23:29:02.963Z · LW · GW

This discussion felt fine to me, though I'm not sure to which extent anyone got convinced of anything, so it might have been fine but not necessarily worthwhile, or something. Anyway, I'm also in favor of people being able to disengage from conversations without that becoming a meta discussion of its own, so... *shrugs*.

I do agree that it's easy to have discussions about politics become bad, even on LW.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T23:21:29.152Z · LW · GW

I know that Trump doesn't have control of the legislature or anything, but I guess I'm still not quite understanding how all this is supposed to relate to the Jones Act question. Do you think if (big if) Trump wanted the Jones Act repealed, it would not be possible to find a (potentially bipartisan) majority of votes for this in the House and the Senate? (Let's leave the filibuster aside for a moment.) This is not like e.g. cutting entitlement programs; the interest groups defending the Jones Act are just not that powerful.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T21:48:11.798Z · LW · GW

I don't know, I've been reading a lot of Slow Boring and Nate Silver, and to me this just really doesn't seem to remotely describe how the Trump coalition works. Beginning with the idea that there are powerful party elites whose opinion Trump has to care about, rather than the other way around.

Like, the fact that Trump moderated the entire party on abortion and entitlement cuts seems like pretty strong evidence against that idea, as well. Or, Trump's recent demand that the US Senate should confirm his appointees via recess appointments, similarly really does not strike me as Trump caring about what party elites think.

My model is more like, both Trump and party elites care about what their base thinks, and Trump can mobilize the base better (but not perfectly) than the party elites can, so Trump has a stronger position in that power dynamic. And isn't that how he won the 2016 primary in the first place? He ran as a populist, so of course party elites did not want him to win, since the whole point of populist candidates is that they're less beholden to elites. But he won, so now those elites mostly have to acquiesce.

All that said, to get back to the Jones Act thing: if Trump somehow wanted it repealed, that would have to happen via an act of Congress, so at that point he would obviously need votes in the US House and Senate. But that could in principle (though not necessarily in practice) happen on a bipartisan vote, too.

EDIT: And re: the importance of party elites, that's also kind of counter to the thesis that today's US political parties are very weak. Slow Boring has a couple articles on this topic, like this one (not paywalled), all based on the book "The Hollow Parties".

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T20:38:40.852Z · LW · GW

so, convincing Republicans under his watch of just replacing the Jones Act is hardly possible

Given how the Trump coalition seems to have worked so far, I don't find this rejoinder plausible. Yes, Trump is not immune from his constituents. For example, he walked back from what some consider to be the greatest achievement of his presidency (i.e. Operation Warp Speed), because his base was or became increasingly opposed to vaccination.

But in many other regards he's shown a strong ability to make his constituents follow him (we might call it "leadership", even if we don't like where he leads to), rather than the other way around. Like, his Supreme Court appointments overturned Roe v. Wade, but in this year's presidential election he campaigned against a national abortion ban, because he figured such a ban would harm his election prospects. And IIRC he's moderated his entire party on making (or at least campaigning on) cuts to entitlement programs, too, again because it's bad politics.

This is not to say that Trump will advocate for repealing the Jones Act. But rather that if he doesn't do it, it will be because he doesn't want to, not because his constituents don't. The Jones Act is just not politically important enough for a rebellion by his base.

A much bigger problem here would be that Trump seems to have very dubious instincts on foreign policy and positive-sum trade (e.g. he's IIRC been advocating for tariffs for a long time), and might well interpret repealing the Jones Act as showing weakness towards foreign nations, or some such.