Posts

Have any parties in the current European Parliamentary Election made public statements on AI? 2024-05-10T10:22:48.342Z
On what research policymakers actually need 2024-04-23T19:50:12.833Z
The Filan Cabinet Podcast with Oliver Habryka - Transcript 2023-02-14T02:38:34.867Z
Open & Welcome Thread - November 2022 2022-11-01T18:47:40.682Z
Health & Lifestyle Interventions With Heavy-Tailed Outcomes? 2022-06-06T16:26:49.012Z
Open & Welcome Thread - June 2022 2022-06-04T19:27:45.197Z
Why Take Care Of Your Health? 2022-04-06T23:11:07.840Z
MondSemmel's Shortform 2022-02-02T13:49:32.844Z
Recommending Understand, a Game about Discerning the Rules 2021-10-28T14:53:16.901Z
Quotes from the WWMoR Podcast Episode with Eliezer 2021-03-13T21:43:41.672Z
Another Anki deck for Less Wrong content 2013-08-22T19:31:09.513Z

Comments

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T01:35:39.720Z · LW · GW

Tiny editing issue: "[] everyone in the company can walk around and tell you []" -> The brackets are empty. Maybe they were meant to hold italics formatting?

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T00:36:26.935Z · LW · GW

Thanks for posting this. Editing feedback: I think the post would look quite a bit better if you used headings and LW quotes. This would generate a timestamped and linkable table of contents, and also more clearly distinguish the quotes from your commentary. Example:

Tom Brown at 20:00:

the US treats the Constitution as like the holy document—which I think is just a big thing that strengthens the US, like we don't expect the US to go off the rails in part because just like every single person in the US is like The Constitution is a big deal, and if you tread on that, like, I'm mad. I think that the RSP, like, it holds that thing. It's like the holy document for Anthropic. So it's worth doing a lot of iterations getting it right.

<your commentary>

Comment by MondSemmel on Anthropic leadership conversation · 2024-12-21T00:31:02.832Z · LW · GW

I did something similar when I made this transcript: leaving in verbal hedging, particularly in the context of contentious statements etc., where omitting such verbal tics can give a quite misleading impression.

Comment by MondSemmel on The Dissolution of AI Safety · 2024-12-13T10:36:43.709Z · LW · GW

I think it would need to be closer to "interacting with the LLM cannot result in exceptionally bad outcomes in expectation", rather than focusing on the compliance of the text output.

Comment by MondSemmel on The Dissolution of AI Safety · 2024-12-12T22:29:07.406Z · LW · GW

Any argument which features a "by definition" has probably gone astray at an earlier point.

In this case, your by-definition-aligned LLM can still cause harm, so what's the use of your definition of alignment? As one example among many, the part where the LLM "output[s] text that consistently" does something (whether that something is "reflects human value judgements" or otherwise) is not something RLHF can actually guarantee with any level of certainty, and such a guarantee is one of many conditions an LLM-based superintelligence would need to fulfill to be remotely safe to use.

Comment by MondSemmel on David Gross's Shortform · 2024-12-12T22:16:05.143Z · LW · GW

How about "idle musings" or "sense of wonder", rather than "curiosity"? I remember a time before I had instant access to google whenever I had a question. Back then, a thought of "I wonder why X" was not immediately followed by googling "why X", but sometimes instead followed by thinking about X (incl. via "shower thoughts"), daydreaming about X, looking up X in a book, etc. It's not exactly bad that we have search engines and LLMs nowadays, but for me it does feel like something was lost, too.

Comment by MondSemmel on MondSemmel's Shortform · 2024-12-12T00:19:35.396Z · LW · GW

Media is bizarre. Here is an article drawing tenuous connections between the recent assassin of a healthcare CEO and rationalism and effective altruism, and here is another that does the same with rationalism and Scott Alexander. Why, tho?

Comment by MondSemmel on Sapphire Shorts · 2024-12-07T11:26:14.633Z · LW · GW

Related, here is something Yudkowsky wrote three years ago:

I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics.  Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation.  But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.  I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs.  But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there."  This proposal is not mainly based on the advance theories by which you might suspect or guess that subgroups like that would end badly; it is motivated mainly by my sense of what the actual outcomes have been.

Since implicit subtext can also sometimes be bad for us in social situations, I should be explicit that concern about outcomes of psychedelic advocacy includes Michael Vassar, and concern on woo includes the alleged/reported events at Leverage.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-12-05T18:57:25.692Z · LW · GW

I mean, here are two comments I wrote three weeks ago, in a shortform about Musk being able to take action against Altman via his newfound influence in government:

That might very well help, yes. However, two thoughts, neither at all well thought out: ... Musk's own track record on AI x-risk is not great. I guess he did endorse California's SB 1047, so that's better than OpenAI's current position. But he helped found OpenAI, and recently founded another AI company. There's a scenario where we just trade extinction risk from Altman's OpenAI for extinction risk from Musk's xAI.

And:

I'm sympathetic to Musk being genuinely worried about AI safety. My problem is that one of his first actions after learning about AI safety was to found OpenAI, and that hasn't worked out very well. Not just due to Altman; even the "Open" part was a highly questionable goal. Hopefully Musk's future actions in this area would have positive EV, but still.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-12-05T18:53:06.107Z · LW · GW

all the focus on the minutia of OpenAI & Anthropic may very well end up misplaced.

This doesn't follow. The fact that OpenAI and Anthropic are racing contributes to other people like Musk deciding to race, too. This development just means that there's one more company to criticize.

Comment by MondSemmel on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-04T13:06:34.806Z · LW · GW

Re: the history of LW, there's a bunch more detail at the beginning of this podcast Habryka did in early 2023.

Comment by MondSemmel on papetoast's Shortforms · 2024-12-03T19:25:38.732Z · LW · GW

I could barely see that despite always using a zoom level of 150%. So I'm sometimes baffled at the default zoom levels of sites like LessWrong, wondering if everyone just has way better eyes than me. I can barely read anything at 100% zoom, and certainly not that tiny difference in the formulas!

Comment by MondSemmel on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T12:22:00.384Z · LW · GW

I can't find any off the top of my head, but I'm pretty sure the LW/Lightcone salary question has been asked and answered before, so it might help to link to past discussions?

Comment by MondSemmel on Sinclair Chen's Shortform · 2024-11-29T20:22:26.584Z · LW · GW

Apologies if I gave the impression that "a selfish person should love all humans equally"; while I'm sympathetic to arguments from e.g. Parfit's book Reasons and Persons[1], I don't go anywhere near that far. I was making a weaker and (I think) uncontroversial claim, something closer to Adam Smith's invisible hand: that aggregating over every individual's selfish focus on close family ties overall results in moral concerns becoming relatively more spread out, because the close circles of your close circle aren't exactly identical to your own.

  1. ^

    Like that distances in time and space are similar. So if you imagine people in the distant past having the choice of a better life in their own time, in exchange for there being no people in the far future, then you'd wish they cared about more than just their own present time. A similar logic argues against applying a very high discount rate to your moral concern for beings that are very distant from you in e.g. space, social ties, etc.

Comment by MondSemmel on Sinclair Chen's Shortform · 2024-11-29T11:10:05.142Z · LW · GW

Well, if there were no minds to care about things, what would it even mean that something should be terminally cared about?

Re: value falloff: sure, but if you start with your close circle, then aggregate the preferences of that close circle (whose members have close circles of their own), and rinse and repeat, then this falloff for any single individual becomes comparatively much less significant for society as a whole.
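A toy numerical sketch of this aggregation argument (a minimal model I made up purely for illustration; the ring layout, weights, and thresholds are all invented): each person concentrates their concern on themselves and two neighbors, yet iterating "care about what the people you care about care about" spreads concern across the whole ring, even though each individual's falloff stays steep.

```python
import numpy as np

N = 100                        # people arranged on a ring
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.6              # steep falloff: most concern for oneself...
    A[i, (i - 1) % N] = 0.2    # ...plus a small "close circle" of neighbors
    A[i, (i + 1) % N] = 0.2

# Person 0's direct concern reaches only their neighbors. Aggregating
# "the concerns of everyone I care about" k times corresponds to A^k:
for k in (1, 5, 25):
    reach = np.linalg.matrix_power(A, k)[0]
    touched = int((reach > 0.001).sum())
    print(f"after {k:2d} rounds, {touched} people get >0.1% of person 0's concern")
```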

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-28T12:06:33.039Z · LW · GW

Maybe our disagreement is that I'm more skeptical about the legislature proactively suggesting any good legislation? My default assumption is that without leadership, hardly anything of value gets done. Like, it's an obviously good idea to repeal the Jones Act, and yet the Act has persisted for a hundred years.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T23:29:02.963Z · LW · GW

This discussion felt fine to me, though I'm not sure to what extent anyone was convinced of anything, so it might have been fine but not necessarily worthwhile, or something. Anyway, I'm also in favor of people being able to disengage from conversations without that becoming a meta discussion of its own, so... *shrugs*.

I do agree that it's easy to have discussions about politics become bad, even on LW.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T23:21:29.152Z · LW · GW

I know that Trump doesn't have control of the legislature or anything, but I guess I'm still not quite understanding how all this is supposed to relate to the Jones Act question. Do you think if (big if) Trump wanted the Jones Act repealed, it would not be possible to find a (potentially bipartisan) majority of votes for this in the House and the Senate? (Let's leave the filibuster aside for a moment.) This is not like e.g. cutting entitlement programs; the interest groups defending the Jones Act are just not that powerful.

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T21:48:11.798Z · LW · GW

I don't know, I've been reading a lot of Slow Boring and Nate Silver, and to me this just really doesn't seem to remotely describe how the Trump coalition works. Beginning with the idea that there are powerful party elites whose opinion Trump has to care about, rather than the other way around.

Like, the fact that Trump moderated the entire party on abortion and entitlement cuts seems like pretty strong evidence against that idea, as well. Or, Trump's recent demand that the US Senate should confirm his appointees via recess appointments, similarly really does not strike me as Trump caring about what party elites think.

My model is more like, both Trump and party elites care about what their base thinks, and Trump can mobilize the base better (but not perfectly) than the party elites can, so Trump has a stronger position in that power dynamic. And isn't that how he won the 2016 primary in the first place? He ran as a populist, so of course party elites did not want him to win, since the whole point of populist candidates is that they're less beholden to elites. But he won, so now those elites mostly have to acquiesce.

All that said, to get back to the Jones Act thing: if Trump somehow wanted it repealed, that would have to happen via an act of Congress, so at that point he would obviously need votes in the US House and Senate. But that could in principle (though not necessarily in practice) happen on a bipartisan vote, too.

EDIT: And re: the importance of party elites, that's also kind of counter to the thesis that today's US political parties are very weak. Slow Boring has a couple articles on this topic, like this one (not paywalled), all based on the book "The Hollow Parties".

Comment by MondSemmel on Repeal the Jones Act of 1920 · 2024-11-27T20:38:40.852Z · LW · GW

so, convincing Republicans under his watch of just replacing the Jones Act is hardly possible

Given how the Trump coalition seems to have worked so far, I don't find this rejoinder plausible. Yes, Trump is not immune to pressure from his constituents. For example, he walked back what some consider the greatest achievement of his presidency (i.e. Operation Warp Speed), because his base was or became increasingly opposed to vaccination.

But in many other regards he's shown a strong ability to make his constituents follow him (we might call it "leadership", even if we don't like where he leads), rather than the other way around. Like, his Supreme Court appointments overturned Roe v. Wade, but in this year's presidential election he campaigned against a national abortion ban, because he figured such a ban would harm his election prospects. And IIRC he's moderated his entire party on making (or at least campaigning on) cuts to entitlement programs, too, again because it's bad politics.

This is not to say that Trump will advocate for repealing the Jones Act. But rather that if he doesn't do it, it will be because he doesn't want to, not because his constituents don't. The Jones Act is just not politically important enough for a rebellion by his base.

A much bigger problem here would be that Trump seems to have very dubious instincts on foreign policy and positive-sum trade (e.g. he's IIRC been advocating for tariffs for a long time), and might well interpret repealing the Jones Act as showing weakness towards foreign nations, or some such.

Comment by MondSemmel on Eli's shortform feed · 2024-11-26T20:58:57.946Z · LW · GW

Agreed insofar as shortform posts are conceptually short-lived, which is a bummer for high-karma shortform posts with big comment threads.

Disagreed insofar as by "automatically converted" you mean "the shortform author has no recourse against this". I do wish there were both nudges to turn particularly high-value shortform posts (and particularly high-value comments, period!) into full posts, and assistance to make this as easy as possible, but I'm against forcing authors and commenters to do things against their wishes.

(Side note: there are also a few practical issues with converting shortform posts to full posts: the latter have titles, the former do not. The former have agreement votes, the latter do not. Do you straightforwardly port over the karma votes from shortform to full post? Full posts get an automatic strong upvote from their author, whereas comments only get an automatic regular upvote. Etc. A toy sketch of these judgment calls follows the list below.)

Still, here are a few ideas for such non-coercive nudges and assistance:

  • An opt-in or opt-out feature to turn high-karma shortform posts into full posts.
  • An email reminder or website notification to inform you about high-karma shortform posts or comments you could turn into full posts, ideally with a button you can click which does this for you.
  • Since it can be a hassle to think up a title, some general tips or specific AI assistance for choosing one. (Though if there were AI assistance, it should not invent titles out of thin air, but rather make suggestions which closely hew to the shortform content. E.g. for your shortform post, it should be closer to "LessWrong shortform posts above some amount of karma should get automatically converted into personal blog posts", rather than "a revolutionary suggestion to make LessWrong, the greatest of all websites, even better, with this one simple trick".)
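And here is the promised sketch of what the conversion logic might look like. This is purely hypothetical; every type, field, and function name is invented for illustration and has nothing to do with the actual LW codebase:

```python
from dataclasses import dataclass

# Hypothetical data shapes, invented for illustration only.
@dataclass
class Shortform:
    body: str
    author_vote: int            # the author's automatic *regular* upvote
    other_votes: list[int]
    agreement_votes: list[int]  # full posts have no agreement axis

@dataclass
class Post:
    title: str
    body: str
    author_vote: int            # posts get an automatic *strong* upvote
    other_votes: list[int]

def convert_shortform(sf: Shortform, title: str, strong_vote_power: int) -> Post:
    """Convert a shortform into a post, making each judgment call explicit."""
    # Judgment call 1: a title must be supplied (by the author, or AI-suggested).
    # Judgment call 2: port the karma votes, but upgrade the author's automatic
    # regular upvote to the strong upvote a post would have received.
    # Judgment call 3: agreement votes are dropped, since posts don't have them.
    return Post(
        title=title,
        body=sf.body,
        author_vote=strong_vote_power,
        other_votes=sf.other_votes.copy(),
    )
```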

Comment by MondSemmel on Sinclair Chen's Shortform · 2024-11-26T11:07:25.718Z · LW · GW

Re: moral patienthood, I understand the Sam Harris position (paraphrased by him here as "Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe.") as saying that anything else that supposedly matters, only matters because conscious minds care about it. Like, a painting has no more intrinsic value in the universe than any other random arrangement of atoms like a rock; its value stems purely from conscious minds caring about it. Same with concepts like beauty and virtue and biodiversity and anything else that's not directly about conscious minds.

And re: caring more about one's close circle: well, everyone in your close circle has their own close circle they care about, and if you repeat that exercise often enough, the vast majority of people in the world are in someone's close circle.

Comment by MondSemmel on Habryka's Shortform Feed · 2024-11-23T10:07:43.043Z · LW · GW

How would you avoid the data contamination issue where the AI system has been trained on the entire Internet and thus already knows about all of these vulnerabilities?

Comment by MondSemmel on Akash's Shortform · 2024-11-20T18:58:03.789Z · LW · GW

Yudkowsky has a pinned tweet that states the problem quite well: it's not so much that alignment is necessarily infinitely difficult, but that it certainly doesn't seem anywhere near as easy as advancing capabilities, and that's a problem when what matters is whether the first powerful AI is aligned:

Safely aligning a powerful AI will be said to be 'difficult' if that work takes two years longer or 50% more serial time, whichever is less, compared to the work of building a powerful AI without trying to safely align it.

Comment by MondSemmel on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-18T19:20:56.331Z · LW · GW

It seems to me like the "more careful philosophy" part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I'm very skeptical of all three.

Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.

Counterexample to b): it is a hard problem to identify expertise in domains you're not an expert in.

Counterexample to c): from what I understand, in 2014, most of academia did not share EY's and Bostrom's views.

Comment by MondSemmel on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-18T17:39:40.243Z · LW · GW

Presumably it was because Google had just bought DeepMind, back when it was the only game in town?

Comment by MondSemmel on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-18T17:36:20.039Z · LW · GW

This NYT article (archive.is link) (reliability and source unknown) corroborates Musk's perspective:

As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.

If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.

With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.

That insult, Mr. Musk said later, was “the last straw.”

And this article from Business Insider also contains this context:

Musk's biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk. Isaacson wrote that Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."

Musk's birthday bash was not the only instance when the two clashed over AI. 

Page was CEO of Google when it acquired the AI lab DeepMind for more than $500 million in 2014. In the lead-up to the deal, though, Musk had approached DeepMind's founder Demis Hassabis to convince him not to take the offer, according to Isaacson. "The future of AI should not be controlled by Larry," Musk told Hassabis, according to Isaacson's book.

Comment by MondSemmel on Alexander Gietelink Oldenziel's Shortform · 2024-11-16T19:41:55.719Z · LW · GW

Most configurations of matter, most courses of action, and most mind designs, are not conducive to flourishing intelligent life. Just like most parts of the universe don't contain flourishing intelligent life. I'm sure this stuff has been formally stated somewhere, but the underlying intuition seems pretty clear, doesn't it?

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-16T14:22:41.242Z · LW · GW

What if whistleblowers and internal documents corroborated that they think what they're doing could destroy the world?

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-16T14:17:05.217Z · LW · GW

Ilya is demonstrably not in on that mission, since his step immediately after leaving OpenAI was to found an additional AGI company and thus increase x-risk.

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-16T14:14:24.837Z · LW · GW

I don't understand the reference to assassination. Presumably there are already laws on the books that outlaw trying to destroy the world (?), so it would be enough to apply those to AGI companies.

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-15T21:38:41.405Z · LW · GW

Just as one example, OpenAI was against SB 1047, whereas Musk was for it. I'm not optimistic about regulation being enough to save us, but presumably it would help, and some AI companies like OpenAI were against even the limited regulations of SB 1047. Plus SB 1047 also included stuff like whistleblower protections, and that's the kind of thing that could help policymakers make better decisions in the future.

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-15T21:29:38.375Z · LW · GW

I'm sympathetic to Musk being genuinely worried about AI safety. My problem is that one of his first actions after learning about AI safety was to found OpenAI, and that hasn't worked out very well. Not just due to Altman; even the "Open" part was a highly questionable goal. Hopefully Musk's future actions in this area would have positive EV, but still.

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-15T17:31:27.233Z · LW · GW

That might very well help, yes. However, two thoughts, neither at all well thought out:

  • If the Trump administration does fight OpenAI, let's hope Altman doesn't manage to judo flip the situation like he did with the OpenAI board saga, and somehow magically end up replacing Musk or Trump in the upcoming administration...
  • Musk's own track record on AI x-risk is not great. I guess he did endorse California's SB 1047, so that's better than OpenAI's current position. But he helped found OpenAI, and recently founded another AI company. There's a scenario where we just trade extinction risk from Altman's OpenAI for extinction risk from Musk's xAI.

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-08T14:20:13.138Z · LW · GW

You can't trust exit polls on demographic crosstabs. From Matt Yglesias on Slow Boring:

Over and above the challenge inherent in any statistical sampling exercise, the basic problem exit pollsters have is that they have no way of knowing what the electorate they are trying to sample actually looks like, but they do know who won the election. They end up weighting their sample to match the election results, which is good because otherwise you’d have polling error about the topline outcome, which would look absurd. But this weighting process can introduce major errors in the crosstabs.

For example, the 2020 exit poll sample seems to have included too many college-educated white people. That was a Biden-leaning demographic group, so in a conventional poll, it would have simply exaggerated Biden’s share of the total vote. But the exit poll knows the “right answer” for Biden’s aggregate vote share, so to compensate for overcounting white college graduates in the electorate, it has to understate Biden’s level of support within this group. That is then further offset by overstating Biden’s level of support within all other groups. So we got a lot of hot takes in the immediate aftermath of the election about Biden’s underperformance with white college graduates, which was fake, while people missed real trends, like Trump doing better with non-white voters.

To get the kind of data that people want exit polls to deliver, you actually need to wait quite a bit for more information to become available from the Census and the voter files about who actually voted. Eventually, Catalist produced its “What Happened in 2020” document, and Pew published its “Behind Biden’s 2020 Victory” report. But those take months to assemble, and unfortunately, conventional wisdom can congeal in the interim.

So just say no to exit poll demographic analysis!
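To make the weighting mechanism concrete, here is a minimal toy simulation of it. All the numbers are invented for illustration and aren't from any actual poll:

```python
# True electorate: 30% college-educated whites (60% support), 70% others (48%).
true_share   = {"college_white": 0.30, "other": 0.70}
true_support = {"college_white": 0.60, "other": 0.48}
true_topline = sum(true_share[g] * true_support[g] for g in true_share)      # 0.516

# The exit poll oversamples college whites (45% of the sample instead of 30%),
# while measuring the *within-group* support correctly.
sample_share = {"college_white": 0.45, "other": 0.55}
raw_topline  = sum(sample_share[g] * true_support[g] for g in sample_share)  # 0.534

# The pollster doesn't know the true group shares, but does know who won, so
# (in this simplified scheme) they weight the leading candidate's voters down
# and the other voters up until the weighted topline matches the real result.
w_win  = true_topline / raw_topline
w_lose = (1 - true_topline) / (1 - raw_topline)

for g in sample_share:
    win  = true_support[g] * w_win
    lose = (1 - true_support[g]) * w_lose
    print(f"{g}: measured support {win / (win + lose):.3f} vs. true {true_support[g]:.3f}")

# Output: measured support among college whites is ~0.583 vs. the true 0.60 --
# a fake "underperformance" manufactured entirely by the weighting step.
# (Real exit polls rake on many variables, so the compensating error can land
# in other groups' crosstabs instead, as described in the quote.)
```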

Comment by MondSemmel on Lao Mein's Shortform · 2024-11-08T12:00:59.207Z · LW · GW

Democrats lost by sufficient margins, and sufficiently broadly, that one can argue that any pet cause was responsible or a contributing factor. But that seems like entirely the wrong level of analysis to me. See this FT article called "Democrats join 2024’s graveyard of incumbents", which includes a striking graph of governing parties losing ground across the developed world.

So global issues like inflation and immigration seem like much better explanatory factors to me, rather than things like the Gaza conflict, which IIRC never made the top 10 in any issue polling I saw.

(The article may be paywalled; I got the unlocked version by searching Bing for "ft every governing party facing election in a developed country".)

Comment by MondSemmel on JargonBot Beta Test · 2024-11-01T22:26:44.528Z · LW · GW

I would strongly bet against a majority using AI tools ~daily (off the top of my head: <40%, with 80% confidence?): adoption of any new tool is just much slower than people would predict, plus the LW team is liable to vastly overpredict this since you're from California.

That said, there are some difficulties with how to operationalize this question, e.g. I know some particularly prolific LW posters (like Zvi) use AI.

Comment by MondSemmel on Habryka's Shortform Feed · 2024-10-29T21:30:55.973Z · LW · GW

Oh, and the hover tooltip for the agreement votes is now bugged; IIRC hovering over the agreement vote number is supposed to give you some extra info just like with karma, but now it just explains what agreement votes are.

Comment by MondSemmel on Habryka's Shortform Feed · 2024-10-29T21:23:26.268Z · LW · GW

Comparing with this Internet Archive snapshot from Oct 6, both at 150% zoom, both in desktop Firefox in Windows 11: Comparison screenshot, annotated

  • The new font seems... thicker, somehow? There's a kind of eye test you do at the optician where they ask you if the letters seem sharper or just thicker (or something), and this font reminds me of that. Like something is wrong with the prescription of my glasses.
  • The new font also feels noticeably smaller in some way. Maybe it's the letter height? I lack the vocabulary to properly describe this. At the very least, the question mark looks noticeably weird. And e.g. in "t" and "p", the upper and lower parts of the respective letter are weirdly tiny.
  • Incidentally there were also some other differences in the shape and alignment of UI elements (see the annotated screenshot).

Comment by MondSemmel on Habryka's Shortform Feed · 2024-10-29T19:09:00.894Z · LW · GW

Up to a few days ago, the comments looked good on desktop Firefox, Windows 11, zoom level 150%. Now I find them uncomfortable to look at.

Comment by MondSemmel on Habryka's Shortform Feed · 2024-10-29T12:05:21.872Z · LW · GW

I don't know what specific change is responsible, but ever since that change, for me the comments are now genuinely uncomfortable to read.

Comment by MondSemmel on The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! · 2024-10-28T10:27:15.058Z · LW · GW

No, and "invested in the status quo" wasn't meant as a positive descriptor, either. This is describing a sociopath who's optimizing for success within a system, not one who overthrows the system. Not someone farsighted.

Comment by MondSemmel on johnswentworth's Shortform · 2024-10-27T20:18:30.400Z · LW · GW

Even for serious intellectual conversations, something I appreciate in this kind of advice is that it often encourages computational kindness. E.g. it's much easier to answer a compact, closed question like "which of these three options do you prefer?" than an open one like "where should we go to eat for lunch?". The same applies to asking someone about their research; not every intellectual conversation benefits from big open questions like the Hamming Question.

Comment by MondSemmel on The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! · 2024-10-26T18:51:09.708Z · LW · GW

Stylistic feedback: this was well-written. I didn't notice any typos. However, there are a lot of ellipses (17 in 2k words), to the point that I found them somewhat distracting from the story. Also, these ellipses are all formatted as ". . .", i.e. as three periods separated by spaces. So they take up extra room on the page due to the extra spaces, and are rendered poorly at the end of a line. Neither issue occurs when you instead use the ellipsis character ("…").
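For what it's worth, this is a mechanical fix; a minimal sketch in Python (the exact spacing in the pattern is an assumption about the source text):

```python
import re

def fix_ellipses(text: str) -> str:
    """Replace three-period ellipses (spaced or not) with the ellipsis character."""
    return re.sub(r"\.\s*\.\s*\.", "…", text)

print(fix_ellipses("He paused. . . and went on."))  # -> "He paused… and went on."
```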

Comment by MondSemmel on The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! · 2024-10-26T18:44:44.554Z · LW · GW

I enjoyed this. Thanks for writing it!

Comment by MondSemmel on Big tech transitions are slow (with implications for AI) · 2024-10-26T12:33:38.959Z · LW · GW

Worldwide sentiment has turned pretty strongly against immigration nowadays. Not that it will happen, but imagine if anti-immigration sentiment could be marshalled into a worldwide ban on AI development and deployment. That would be a strange, strange timeline.

Comment by MondSemmel on Jimrandomh's Shortform · 2024-10-24T21:13:15.580Z · LW · GW

I'm very familiar with this issue; e.g. I regularly see Steam devs get hounded in forums and reviews whenever they dare increase their prices.

I wonder to what extent this frustration about prices comes from gamers being relatively young and international, and thus having much lower purchasing power? Though I suppose it could also be a subset of the more general issue that people hate paying for software.

Comment by MondSemmel on Nathan Young's Shortform · 2024-10-15T02:51:35.750Z · LW · GW

The idea that popularity must be a sign of shallowness, and hence unpopularity or obscurity a sign of depth, sounds rather shallow to me. My attitude here is more like, if supposedly world-shattering insights can't be explained in relatively simple language, they either aren't that great, or we don't really understand them. Like in this Feynman quote:

Once I asked him to explain to me, so that I can understand it, why spin-1/2 particles obey Fermi-Dirac statistics. Gauging his audience perfectly, he said, "I'll prepare a freshman lecture on it." But a few days later he came to me and said: "You know, I couldn't do it. I couldn't reduce it to the freshman level. That means we really don't understand it."

Comment by MondSemmel on Nathan Young's Shortform · 2024-10-14T13:53:07.115Z · LW · GW

Does this tag on Law-Thinking help? Or do you mean "lawful" as in Dungeons & Dragons (incl. EY's Planecrash fic), i.e. neutral vs. chaos vs. lawful?

Comment by MondSemmel on Nathan Young's Shortform · 2024-10-14T13:50:15.244Z · LW · GW

This is far too cynical. Great writers (e.g. gwern, Scott Alexander, Matt Yglesias) can write excellent technical posts and comments while still getting plenty of attention.