Posts

Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise 2025-02-13T06:59:17.525Z
My simple AGI investment & insurance strategy 2024-03-31T02:51:53.479Z
Aligned AI is dual use technology 2024-01-27T06:50:10.435Z
You can just spontaneously call people you haven't met in years 2023-11-13T05:21:05.726Z
Does bulemia work? 2023-11-06T17:58:27.612Z
Should people build productizations of open source AI models? 2023-11-02T01:26:47.516Z
Bariatric surgery seems like a no-brainer for most morbidly obese people 2023-09-27T01:05:32.976Z
Bring back the Colosseums 2023-09-08T00:09:53.723Z
Diet Experiment Preregistration: Long-term water fasting + seed oil removal 2023-08-23T22:08:49.058Z
The U.S. is becoming less stable 2023-08-18T21:13:11.909Z
What is the most effective anti-tyranny charity? 2023-08-15T15:26:56.393Z
Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors 2023-06-09T16:11:48.243Z
Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin 2023-06-06T03:54:42.389Z
What is the literature on long term water fasts? 2023-05-16T03:23:51.995Z
"Do X because decision theory" ~= "Do X because bayes theorem" 2023-04-14T20:57:10.467Z
St. Patty's Day LA meetup 2023-03-18T00:00:36.511Z
Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them? 2023-03-16T21:36:27.992Z
When will computer programming become an unskilled job (if ever)? 2023-03-16T17:46:35.030Z
POC || GTFO culture as partial antidote to alignment wordcelism 2023-03-15T10:21:47.037Z
Acolytes, reformers, and atheists 2023-03-10T00:48:40.106Z
LessWrong needs a sage mechanic 2023-03-08T18:57:34.080Z
Extreme GDP growth is a bad operating definition of "slow takeoff" 2023-03-01T22:25:27.446Z
The fast takeoff motte/bailey 2023-02-24T07:11:10.392Z
On second thought, prompt injections are probably examples of misalignment 2023-02-20T23:56:33.571Z
Stop posting prompt injections on Twitter and calling it "misalignment" 2023-02-19T02:21:44.061Z
Quickly refactoring the U.S. Constitution 2022-10-30T07:17:50.229Z
Announcing $5,000 bounty for (responsibly) ending malaria 2022-09-24T04:28:22.189Z
Extreme Security 2022-08-15T12:11:05.147Z
Argument by Intellectual Ordeal 2022-08-12T13:03:21.809Z
"Just hiring people" is sometimes still actually possible 2022-08-05T21:44:35.326Z
Don't take the organizational chart literally 2022-07-21T00:56:28.561Z
Addendum: A non-magical explanation of Jeffrey Epstein 2022-07-18T17:40:37.099Z
In defense of flailing, with foreword by Bill Burr 2022-06-17T16:40:32.152Z
Yes, AI research will be substantially curtailed if a lab causes a major disaster 2022-06-14T22:17:01.273Z
What have been the major "triumphs" in the field of AI over the last ten years? 2022-05-28T19:49:53.382Z
What an actually pessimistic containment strategy looks like 2022-04-05T00:19:50.212Z
The real reason Futarchists are doomed 2022-04-01T18:37:20.387Z
How to prevent authoritarian revolts? 2022-03-20T10:01:52.791Z
A non-magical explanation of Jeffrey Epstein 2021-12-28T21:15:41.953Z
Why do all out attacks actually work? 2020-06-12T20:33:53.138Z
Multiple Arguments, Multiple Comments 2020-05-07T09:30:17.494Z
Shortform 2020-03-19T23:50:30.391Z
Three signs you may be suffering from imposter syndrome 2020-01-21T22:17:45.944Z

Comments

Comment by lc on Shortform · 2025-02-15T13:22:03.942Z · LW · GW

Moral intuitions are odd. The current government's gutting of the AI safety summit is upsetting, but somehow less upsetting to my hindbrain than its order to drop the corruption charges against a mayor. I guess the AI safety thing is worse in practice but less shocking in terms of abstract conduct violations.

Comment by lc on Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise · 2025-02-14T22:33:43.180Z · LW · GW

It helps, but this could be solved with increased affection for your children specifically, so I don't think it's the actual motivation for the trait.

The core is probably several things, but note that this bias is also part of a larger package of traits that makes someone less disagreeable. I'm guessing that the same selection effects that made men more disagreeable than women are also probably partly responsible for this gender difference.

Comment by lc on Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise · 2025-02-14T21:25:18.596Z · LW · GW

I suspect that the psychopath's theory of mind is not "other people are generally nicer than me", but "other people are generally stupid, or too weak to risk fighting with me".

That is true, and it is indeed a bias, but it doesn't change the fact that their assessment of whether others are going to hurt them seems basically well calibrated. The anecdata that needs to be explained is why nice people do not seem to be able to tell when others are going to take advantage of them, but mean people do. The post's offered reason is that generous impressions of others are advantageous for trust-building.

Mr. Portman probably believed that some children forgot to pay for the chocolate bars, because he was aware that different people have different memory skills.

This was the explanation he offered, yeah.

Comment by lc on Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise · 2025-02-13T09:43:28.756Z · LW · GW

This post is about a suspected cognitive bias and why I think it came to be. It's not trying to justify any behavior, as far as I can tell, unless you think the sentiment "people are pretty awful" justifies bad behavior in and of itself.

The game theory is mostly an extended metaphor rather than a serious model. Humans are complicated.

Comment by lc on Wired on: "DOGE personnel with admin access to Federal Payment System" · 2025-02-07T00:39:24.528Z · LW · GW

Elon already has all of the money in the world. I think he and his employees are ideologically driven, and as far as I can tell they're making sensible decisions given their stated goals of reducing unnecessary spend/sprawl. I seriously doubt they're going to use this access to either raid the treasury or turn it into a personal fiefdom. It's possible that in their haste they're introducing security risks, but I also think the tendency of media outlets and their sources will be to exaggerate those security risks. I'd be happy to start a prediction market about this if a regular feels very differently.

If Trump himself were spearheading this effort I would be more worried.

Comment by lc on Shortform · 2025-02-06T17:23:41.247Z · LW · GW

Anthropic has a bug bounty for jailbreaks: https://hackerone.com/constitutional-classifiers?type=team

If you can figure out how to get the model to give detailed answers to a set of certain questions, you get a 10k prize. If you can find a universal jailbreak for all the questions, you get 20k.

Comment by lc on Viliam's Shortform · 2025-02-03T23:54:09.330Z · LW · GW

Yeah, one possible answer is "don't do anything weird, ever". That is the safe way, on average. No one will bother writing a story about you, because no one would bother reading it.

You laugh, but I really think a group norm of "think for yourself, question the outside world, don't be afraid to be weird" is part of the reason why all of these groups exist. Doing those things is ultimately a luxury for the well-adjusted and intelligent. If you tell people over and over to question social norms some of those people will turn out to be crazy and conclude crime and violence is acceptable.

I don't know if there's anything to do about that, but it is a thing.

Comment by lc on Thread for Sense-Making on Recent Murders and How to Sanely Respond · 2025-02-01T19:57:06.507Z · LW · GW

So, to be clear, everyone you can think of has been mentioned in previous articles or alerts about Zizians so far? Because I have only been on the periphery of rationalist events for the last several years, but in 2023 I can remember sending this[1] post about rationalist crazies into the San Antonio LW groupchat. A trans woman named Chase Carter, who doesn't generally attend our meetups, began to argue with me that Ziz (who gets mentioned in the article as an example) was subject to a "disinformation campaign" by rationalists, her goals were actually extremely admirable, and her worst failure was a strategic one in not realizing how few people were like her in the world. At the next meetup we agreed to talk about it further, and she attended (I think for the first time) to explain a very sympathetic background of Ziz's history and ideas. This was after the alert post but years before any of the recent events.

I have no idea if Chase actually self-identifies as a "Zizian" or is at all dangerous and haven't spoken to her in a year and a half. I just mention her as an example; I haven't heard her name brought up anywhere and I really wouldn't expect to know any of these people to begin with on priors.

  1. ^

    Misremembered that I sent the alert post into the chat, but actually it was the Habryka post about rationalist crazies.

Comment by lc on Thread for Sense-Making on Recent Murders and How to Sanely Respond · 2025-02-01T07:33:24.381Z · LW · GW

I know you're not endorsing the quoted claim, but just to make this extra explicit: running terrorist organizations is illegal, so this is the type of thing you would also say if Ziz was leading a terrorist organization, and you didn't want to see her arrested.

Comment by lc on Thread for Sense-Making on Recent Murders and How to Sanely Respond · 2025-02-01T03:07:49.084Z · LW · GW

Why did 2 killings happen within the span of one week?

According to law enforcement the two people involved in the shootout received weapons and munitions from Jamie Zajko, and one of them also applied for a marriage certificate with the person who killed Curtis Lind. Additionally I think it's also safe to say from all of their preparations that they were preparing to commit violent acts.

So my best guess is that:

  • Teresa Youngblut and/or Felix Bauckholt were co-conspirators with the other people committing violent crimes
  • They were preparing to commit further violent crimes
  • They were worried that they might be arrested
  • They made an agreement with each other to shoot it out with law enforcement in the event someone tried to arrest them
  • If the press/law enforcement isn't lying, they were stopped on the road by a border patrol officer who was checking up on a visa, they thought they were about to be taken in for something more serious, and Teresa pulled a gun

The border patrol officer seems like a hero. Whether he meant it or not, he died to save the lives of several other people.

Comment by lc on Anthropic CEO calls for RSI · 2025-01-29T23:49:46.827Z · LW · GW

I think an accident that caused a million deaths would do it.

Comment by lc on Assume Bad Faith · 2025-01-28T23:51:07.856Z · LW · GW

I think this post is quite good, and gives a heuristic important to modeling the world. If you skipped it because of title + author, you probably have the wrong impression of its contents and should give it a skim. Its main problem is what's left unsaid.

Some people in the comments reply to it that other people self-deceive, yes, but you should assume good faith. I say - why not assume the truth, and then do what's prosocial anyways?

Comment by lc on [Link] A community alert about Ziz · 2025-01-26T17:27:47.436Z · LW · GW

You're probably right, I don't actually know many/haven't actually interacted personally with many trans people. But also, I'm not really talking about the Zizians in particular here, or the possibility of getting physically harmed? It just seems like being trans is like taking LSD, in that it makes a person ex-ante much more likely to be someone who I've heard of having a notoriously bizarre mental breakdown that resulted in negative consequences for the people they've associated themselves with.

Comment by lc on [Link] A community alert about Ziz · 2025-01-26T07:08:23.015Z · LW · GW

"Assumed to be dangerous" is overstated, but I do think trans people as a group are a lot crazier on average, and I sort of avoid them personally.

It also seems very plausible to me, unfortunately, that a community level "keep your distance from trans people" rule would have been net positive starting from 2008. Not just because of Ziz; trans people in general have this recurring pattern of involving themselves in the community, then deciding years later that the community is the source of their mental health problems and that they should dedicate themselves to airing imaginary grievances about it in public (or in this case committing violent crimes).

Comment by lc on [Link] A community alert about Ziz · 2025-01-26T03:09:05.031Z · LW · GW

be "born a woman deep down"

start a violent gang

Comment by lc on [Link] A community alert about Ziz · 2025-01-26T03:01:14.353Z · LW · GW

This is the craziest shit I have ever read on LessWrong, and I am mildly surprised at how little it is talked about. I get that it's very close to home for a lot of people, and that it's probably not relevant to either rationality as a discipline or the far future. But like, multiple unsolved murders by someone involved in the community is something that I would feel compelled to write about, if I didn't get the vague impression that it'd be defecting in some way.

Comment by lc on Shortform · 2025-01-22T19:49:41.164Z · LW · GW

Most of the time when people publicly debate "textualism" vs. "intentionalism" it smacks to me of a bunch of sophistry to achieve the policy objectives of the textualist. Even if you tried to interpret English statements like computer code, which seems like a really poor way to govern, the argument that gets put forth by the guy who wants to extend the interstate commerce clause to growing weed or whatever is almost always ridiculous on its own merits.

The 14th amendment debate is unique, though, in that the letter of the amendment goes one way, and the stated interpretation of the guy who authored the amendment actually does seem to go the exact opposite way. The amendment reads:

All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside

Which is pretty airtight. Virtually everyone inside the borders of the United States is "subject to the jurisdiction" of the United States, certainly people here on visa, with the possible exception of people with diplomatic immunity. And yet:

[Howard] added that citizenship "will not, of course, include persons born in the United States who are foreigners, aliens, who belong to the families of ambassadors or foreign ministers accredited to the Government of the United States, but will include every other class of persons"

So it seems like in this case the textualism vs. intentionalism debate is actually possibly important.

Comment by lc on Shortform · 2025-01-21T20:54:29.106Z · LW · GW

What’s reality? I don’t know. When my bird was looking at my computer monitor I thought, ‘That bird has no idea what he’s looking at.’ And yet what does the bird do? Does he panic? No, he can’t really panic, he just does the best he can. Is he able to live in a world where he’s so ignorant? Well, he doesn’t really have a choice. The bird is okay even though he doesn’t understand the world. You’re that bird looking at the monitor, and you’re thinking to yourself, ‘I can figure this out.’ Maybe you have some bird ideas. Maybe that’s the best you can do

Comment by lc on Shortform · 2025-01-19T00:36:17.583Z · LW · GW

Sarcasm is when we make statements we don't mean, expecting the other person to infer from context that we meant the opposite. It's a way of pointing out how unlikely it would be for you to mean what you said, by saying it.

There are two ways to evoke sarcasm; first by making your statement unlikely in context, and second by using "sarcasm voice", i.e. picking tones and verbiage that explicitly signal sarcasm. The sarcasm that people consider grating is usually the kind that relies on the second category of signals, rather than the first. It becomes more funny when the joker is able to say something almost-but-not-quite plausible in a completely deadpan manner. Compare:

  1. "Oh boyyyy, I bet you think you're the SMARTEST person in the WHOLE world." (Wild, abrupt shifts in pitch)
  2. "You must have a really deep soul, Jeff." (Face inexpressive)

As a corollary, sarcasm often works more smoothly when it's between people who already know each other, not only because it's less likely to be offensive, but also because they're starting with a strong prior about what their counterparties are likely to say in normal conversation.

Comment by lc on Applying traditional economic thinking to AGI: a trilemma · 2025-01-15T20:44:26.045Z · LW · GW

This all seems really clearly true.

Comment by lc on [New Feature] Your Subscribed Feed · 2025-01-15T19:47:36.211Z · LW · GW

Just reproduced it; all I have to do is subscribe to a bunch of people and this happens and the site becomes unusable:

Comment by lc on [New Feature] Your Subscribed Feed · 2025-01-15T19:45:45.776Z · LW · GW

The image didn't upload but it's a picture of my browser saying that the web page's javascript is using a ton of resources and I can force-stop it if I wish

Comment by lc on Shortform · 2025-01-15T19:26:20.315Z · LW · GW

I am a little confused as to why Israel does not have the hostages yet. My understanding was that Israel has essentially taken control of Gaza and decimated the Hamas leadership. Who are they even "negotiating" with to secure their release? Why can't the IDF just kidnap and waterboard that person to get the location of the remaining prisoners? Does the person with the authority to make a deal also not know? Are there clandestine cells of Hamas personnel hiding in a basement somewhere waiting for some "signal" from a third party to give up the Israelis?

Comment by lc on [New Feature] Your Subscribed Feed · 2025-01-15T19:07:06.220Z · LW · GW

Somehow I just found out about this. Although this happened within a few minutes of me trying to use it:

Comment by lc on Zvi’s 2024 In Movies · 2025-01-13T19:43:46.186Z · LW · GW

I watched Fall Guy last month because I noticed it had 5 stars on Zvi's letterboxd. Can confirm it's an amazing movie.

Comment by lc on Semi-conductor/AI Stock Discussion. · 2025-01-05T08:53:01.500Z · LW · GW

Nice.

Comment by lc on Benito's Shortform Feed · 2025-01-02T20:42:38.609Z · LW · GW

Suppose Sam Bankman-Fried is imprisoned for 25 years. After that time, he will be a decent, law-abiding member of society, who is safe to release from prison.

I voted 75% because taken literally I think in 25 years AI will be so advanced that he won't have much of an ability to impact the world at all 🤓

(Otherwise 40%)

Comment by lc on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2025-01-02T16:57:06.980Z · LW · GW

That's a lot more than I expected!

Comment by lc on Shortform · 2025-01-01T18:01:17.601Z · LW · GW

I'll note that when people are doing something in near mode, like running a restaurant, they rarely feel it necessary to engage in lofty symbolism. You instead get this sort of thing much more often in internet political discussions.

Comment by lc on Shortform · 2025-01-01T17:26:38.401Z · LW · GW

I am skeptical of there being legitimate reasons for talking in "symbolic speak" about the real world. I think one reason people do this is so they can cause in listeners emotional reactions that are appropriate for their "myth" but not appropriate for what's actually true. This is a peculiar way of misleading people, often including one's self.

Another reason is so that people can talk about things without having to take definite stances on what they mean. This ambiguity often just amounts to merely refusing to choose between several plausible truth conditions for their statements, but there's something emotionally attractive about that to some people. This also seems not legitimate...

A reason that is legitimate is to use a metaphor to help someone grasp something by pointing out similarities to things they are already familiar with. This is sometimes done in science education. But the metaphors are discarded as misleading simplifications once understanding progresses.

Of course, a totally valid reason for talking in story is to entertain, as we do in fiction. But in that context everyone knows we are just engaged in entertainment and not talking about the real world.

- Sean Last, during a conversation about the Jordan Peterson/Richard Dawkins religion discussion, which happened a few months back

Comment by lc on By default, capital will matter more than ever after AGI · 2024-12-31T01:16:45.822Z · LW · GW

They have "galaxy brains", but applying those galaxy brains strategically well towards your goals is also an aspect of intelligence. Additionally, those "galaxy brains" may be ineffective because of issues with alignment towards the company, whereas in a startup you can often get 10x or 100x more out of fewer employees because they have equity and understand that failure is existential for them. Demis may be smart, but he made a major strategic error if his goal was to lead in the AGI race, and despite the fact that he did, he is still running DeepMind, which suggests an alignment/incentive issue with regard to Google's short-term objectives.

Comment by lc on By default, capital will matter more than ever after AGI · 2024-12-30T17:07:14.210Z · LW · GW

On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can't do with half a planet? Not that much.

It matters if it means I can live twice as long, because I can purchase more negentropy with which to maintain whatever lifestyle I have.

Comment by lc on By default, capital will matter more than ever after AGI · 2024-12-30T16:30:38.408Z · LW · GW

#1 and #2 are serious concerns, but there's not really much I can do about them anyways. #3 doesn't make any sense to me.

You'll probably be able to buy planets post-AGI for the price of houses today

Right, and that seems like OP's point? Because I can do this, I shouldn't spend money on consumption goods today and in fact should gather as much money as I can now? Certainly massive stellar objects post-AGI will be more useful to me than a house is pre-AGI?

As to this:

By contrast, very few people are spending money to influence AGI development right now. If you want future beings to have certain inalienable rights, or if you want the galaxies to be used in such-and-such a way, you can lobby AGI companies right now to change their spec/constitution/RLHF, and to make commitments about what values they'll instill, etc.

I guess I just don't really believe I have much control over that at all. Further, I can specifically invest in things likely to be important parts of the AGI production function, like semiconductors, etc.

Comment by lc on Shortform · 2024-12-30T03:15:24.459Z · LW · GW

If you're so smart why aren't you married?

Comment by lc on Shortform · 2024-12-30T01:07:45.325Z · LW · GW

In many/most police departments, the coroner is the person with final say about whether or not a death was a murder. Like other unaccountable government bureaucrats, coroners can be pretty bad at their jobs, if for no other reason than that there is no one to stop them from being incompetent.

You might wonder why it's not the detective's job to decide whether to open an investigation. And often it's because the detective is graded on the percentage of cases they're able to solve. If detectives were given the responsibility of determining which cases of theirs merited investigation, they might try to avoid investigating cases that seemed more difficult.

Still, the coroner's job is hard, and they will naturally want to keep good rapport with the people they're working with every day. So if there are odd or bizarre features making a death suspicious - a broken window, a recent threat - those circumstances can fail to make it to their report, and what was once an obvious homicide can vanish into thin air.

Comment by lc on Shortform · 2024-12-25T08:05:59.917Z · LW · GW

I find it suspicious that a lot of the criticisms I read online of Indian-Americans (nepotism, obsequiousness, "dual loyalty", scheming) are very similar to the criticisms I hear of Jews.

Comment by lc on o3 · 2024-12-20T22:07:31.476Z · LW · GW

I don't emphasize this because I care more about humanity's survival than the next decades sucking really hard for me and everyone I love.

I'm flabbergasted by this degree/kind of altruism. I respect you for it, but I literally cannot bring myself to care about "humanity"'s survival if it means the permanent impoverishment, enslavement or starvation of everybody I love. That future is simply not much better on my lights than everyone including the gpu-controllers meeting a similar fate. In fact I think my instincts are to hate that outcome more, because it's unjust.

But how do LW futurists not expect catastrophic job loss that destroys the global economy?

Slight correction: catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine. I agree this is a natural conclusion; I guess people were hoping to get 10 or 15 more years out of their natural gifts.

Comment by lc on leogao's Shortform · 2024-12-18T20:33:52.386Z · LW · GW

I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.

Comment by lc on avturchin's Shortform · 2024-12-15T17:51:55.558Z · LW · GW

Saying "I have no intention to kill myself, and I suspect that I might be murdered" is not enough.

Frankly I do think this would work in many jurisdictions. It didn't work for John McAfee because he has a history of crazy remarks, it sounds like the sort of thing he'd do to save face/generate intrigue if he actually did plan on killing himself, and McAfee made no specific accusations. But if you really thought Sam Altman's head of security was going to murder you, you'd probably change their personal risk calculus dramatically by saying that repeatedly on the internet. Just make sure you also contact police specifically with what you know, so that the threat is legible to them as an institution.

Comment by lc on avturchin's Shortform · 2024-12-15T17:33:19.001Z · LW · GW

If someone wants to murder you, they can. If you ever walk outside, you can't avoid being shot by a sniper.

If the person or people trying to murder you is omnicompetent, then it's hard. If they're regular people, then there are at least lots of temporary measures you can take that would make it more difficult. You can fly to a random state or country and check into a motel without telling anybody where you are. Or you could find a bunch of friends and stay in a basement somewhere. Mobsters used to call doing that sort of thing for a time before a threat had receded "going to ground".

Wearing a camera that is streaming to a cloud 24/7, and your friends can publish the video in case of your death... seems a bit too much. (Also, it wouldn't protect you e.g. against being poisoned. But I think this is not a typical way how whistleblowers die.) Is there something simpler?

If you move to New York or London, your every move outside of a private home or apartment will already be recorded. Then place a security camera in your house.

Comment by lc on avturchin's Shortform · 2024-12-14T20:01:01.184Z · LW · GW

Tapping the sign:

Comment by lc on Shortform · 2024-12-11T03:57:51.531Z · LW · GW

Postdiction: Modern "cancel culture" was mostly a consequence of new communication systems (social media, etc.) rather than a consequence of "naturally" shifting attitudes or politics.

Comment by lc on AI Safety is Dropping the Ball on Clown Attacks · 2024-12-10T05:26:00.462Z · LW · GW

I have a draft that has wasted away for ages. I will probably post something this month though. Very busy with work.

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T20:20:35.596Z · LW · GW

The original comment you wrote appeared to be a response to "AI China hawks" like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don't think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).

If you're trying to argue instead that the Manhattan Project won't happen, then I'm mostly ambivalent. But I'll remark that that argument feels a lot more shaky in 2024 than in 2020, when Trump's daughter is literally retweeting Leopold's manifesto.

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T18:33:21.729Z · LW · GW

No, my problem with the hawks, as far as this criticism goes, is that they aren't repeatedly and explicitly saying what they will do

One issue with "explicitly and repeatedly saying what they will do" is that it invites competition. Many of the things that China hawks might want to do would be outside the Overton window. As Eliezer describes in AGI ruin:

The example I usually give is "burn all GPUs". This is not what I think you'd actually want to do with a powerful AGI - the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says "how dare you propose burning all GPUs?" I can say "Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years."

Comment by lc on China Hawks are Manufacturing an AI Arms Race · 2024-12-01T18:29:19.005Z · LW · GW

What does winning look like? What do you do next? How do you "bury the body"? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and... then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do... stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just... do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don't, what is the point of 'winning the race'?

The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when they have had the opportunity.

Comment by lc on The Big Nonprofits Post · 2024-11-30T00:03:55.663Z · LW · GW

Did you look into: https://longtermrisk.org/?

Comment by lc on Shortform · 2024-11-28T20:23:27.534Z · LW · GW

"Spy" is an ambiguous term, sometimes meaning "intelligence officer" and sometimes meaning "informant". Most 'spies' in the "espionage-committing-person" sense are untrained civilians who have chosen to pass information to officers of a foreign country, for varying reasons. So if you see someone acting suspicious, an argument like "well surely a real spy would have been coached not to do that during spy school" is locally invalid.

Comment by lc on Habryka's Shortform Feed · 2024-11-23T21:40:14.158Z · LW · GW

Why hardware bugs in particular?

Comment by lc on Shortform · 2024-11-23T15:27:16.104Z · LW · GW

Well that's at least a completely different kind of regulatory failure than the one that was proposed on Twitter. But this is probably motivated reasoning on Microsoft's part. Kernel access is only necessary for IDS because of Microsoft's design choices. If Microsoft wanted, they could also have exported a user API for IDS services, which is a project they are working on now. MacOS already has this! And Microsoft would never ever have done as good a job on their own if they hadn't faced competition from other companies, which is why everyone uses CrowdStrike in the first place.