Posts

Smallpox vaccines are widely available, for now 2023-01-13T20:02:44.972Z
The Unilateralist’s “Curse” Is Mostly Good 2020-04-13T22:48:22.589Z

Comments

Comment by David Hornbein on Anthropic AI made the right call · 2024-04-15T17:55:24.721Z · LW · GW

I'm coming to this late, but this seems weird. Do I understand correctly that many people were saying that Anthropic, the AI research company, had committed never to advance the state of the art of AI research, and that they believed Anthropic would keep this commitment? That is just... really implausible.

This is the sort of commitment which very few individuals are psychologically capable of keeping, and which ~zero commercial organizations of more than three or four people are institutionally capable of keeping, assuming they actually do have the ability to advance the state of the art. I don't know whether Anthropic leadership ever said they would do this, and if they said it then I don't know whether they meant it earnestly. But even imagining they said it and meant it earnestly there is just no plausible world in which a company with hundreds of staff and billions of dollars of commercial investment would keep this commitment for very long. That is not the sort of thing you see from commercial research companies in hot fields.

If anyone here did believe that Anthropic would voluntarily refrain from advancing the state of the art in all cases, you might want to check whether there are other things people have told you about themselves which you would really like to be true of them, which you have no evidence for beyond their assertions, and which would be very unusual if true.

Comment by David Hornbein on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2024-03-19T18:26:30.342Z · LW · GW

Ben is working on a response, and given that I think it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions. If in a week or two people still think the section of "Avoidable, Unambiguous falsehoods" does indeed contain such things, then I think an analysis like this makes sense

This was three months ago. I have not seen the anticipated response. Setting aside the internal validity of your argument above, the promised counterevidence did not arrive in anything like a reasonable time.

TracingWoodgrains clearly made the right call in publishing, rather than waiting for you.

Comment by David Hornbein on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T23:00:38.371Z · LW · GW

Yes, obviously, but they use different strategies. Male sociopaths rarely paint themselves as helpless victims because it is not an effective tactic for men. One does notice that, while the LW community is mostly male, ~every successful callout post against a LW community organization has been built on claims of harm to vulnerable female victims.

Comment by David Hornbein on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T21:05:33.985Z · LW · GW

When you say "it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions", is this a deliberate or accidental echo of the similar request from Nonlinear which you denied?

Like, on the deliberate way of reading this, the subtext is "While Lightcone did not wait a week or two for counter-evidence and still defends this decision, you should have waited in your case because that's the standard you describe in your article." Which would be a hell of a thing to say without explicitly acknowledging that you're asking for different standards. (And would also misunderstand TracingWoodgrains's actual standard, which is about the algorithm used and not how much clock time is elapsed, as described in their reply to your parent comment.) Or on the accidental way of reading this, the subtext is "I was oblivious to how being publicly accused of wrongdoing feels from the inside, and I request grace now that the shoe is on the other foot." Either of these seems kind of incredible but I can't easily think of another plausible way of reading this. I suppose your paragraph on wanting to take the time to make a comprehensive response (which I agree with) updates my guess towards "oblivious".

Comment by David Hornbein on Nonlinear’s Evidence: Debunking False and Misleading Claims · 2023-12-12T23:49:13.512Z · LW · GW

On Pace's original post I wrote:

"think about how bad you expect the information would be if I selected for the worst, credible info I could share"

Alright. Knowing nothing about Nonlinear or about Ben, but based on the rationalist milieu, then for an org that’s weird but basically fine I’d expect to see stuff like ex-employees alleging a nebulously “abusive” environment based on their own legitimately bad experiences and painting a gestalt picture that suggests unpleasant practices but without any smoking-gun allegations of really egregious concrete behavior (as distinct from very bad effects on the accusers); allegations of nepotism based on social connections between the org’s leadership and their funders or staff; accusations of shoddy or motivated research which require hours to evaluate; sources staying anonymous for fear of “retaliation” but without being able to point to any legible instances of retaliation or concrete threats to justify this; and/or thirdhand reports of lying or misdirection around complicated social situations.

[reads post]

This sure has a lot more allegations of very specific and egregious behavior than that, yeah.

Having looked at the evidence and documentation which Nonlinear provides, it seems like the smoking-gun allegations of really egregious concrete behavior are probably just false. I have edited my earlier comment accordingly.

Comment by David Hornbein on The likely first longevity drug is based on sketchy science. This is bad for science and bad for longevity. · 2023-12-12T19:09:58.472Z · LW · GW

This is a bit of a tangent, but is there a biological meaning to the term "longevity drug"? For a layman like me, my first guess is that it'd mean something like "A drug that mitigates the effects of aging and makes you live longer even if you don't actively have a disease to treat." But then I'd imagine that e.g. statins would be a "longevity drug" for middle-aged men with a strong family history of heart disease, in that they make the relevant population less susceptible to an aging-related disease and thereby increase longevity. Yet the posts talk about the prospect of creating the "first longevity drug", so clearly the term is being used in a way that doesn't include statins. Is there a specific definition I'm ignorant of, or is it more of a loose marketing term for a particular subculture of researchers and funders, or what?

Comment by David Hornbein on Principles For Product Liability (With Application To AI) · 2023-12-11T10:50:22.916Z · LW · GW

We can certainly debate whether liability ought to work this way. Personally I disagree, for reasons others have laid out here, but it's fun to think through.

Still, it's worth saying explicitly that as regards the motivating problem of AI governance, this is not currently how liability works. Any liability-based strategy for AI regulation must either work within the existing liability framework, or (much less practically) overhaul the liability framework as its first step.

Comment by David Hornbein on What I Would Do If I Were Working On AI Governance · 2023-12-09T23:09:33.826Z · LW · GW

Cars are net positive, and also cause lots of harm. Car companies are sometimes held liable for the harm caused by cars, e.g. if they fail to conform to legal safety standards or if they sell cars with defects. More frequently the liability falls on e.g. a negligent driver or is just ascribed to accident. The solution is not just "car companies should pay out for every harm that involves a car", partly because the car companies also don't capture all or even most of the benefits of cars, but mostly because that's an absurd overreach which ignores people's agency in using the products they purchase. Making cars (or ladders or knives or printing presses or...) "robust to misuse", as you put it, is not the manufacturer's job.

Liability for current AI systems could be a good idea, but it'd be much less sweeping than what you're talking about here, and would depend a lot on setting safety standards which properly distinguish cases analogous to "Alice died when the car battery caught fire because of poor quality controls" from cases analogous to "Bob died when he got drunk and slammed into a tree at 70mph".

Comment by David Hornbein on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T04:25:37.527Z · LW · GW

It's fun to come through and look for interesting threads to pull on. I skim past most stuff, but there's plenty of good and relevant writing to keep me coming back. Yeah, sure, it doesn't do a super great job of living up to the grandiose ideals expressed in the Sequences, but I don't really mind; I don't feel invested in ~the community~ that way, so I'll gladly take this site for what it is. This is a good discussion forum and I'm glad it's here.

Comment by David Hornbein on OpenAI: Facts from a Weekend · 2023-11-28T16:24:38.356Z · LW · GW

Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff consisted of former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that.

I don't see it for McCauley though.

Comment by David Hornbein on why did OpenAI employees sign · 2023-11-27T23:27:38.536Z · LW · GW

Suppose you're an engineer at SpaceX. You've always loved rockets, and Elon Musk seems like the guy who's getting them built. You go to work on Saturdays, you sometimes spend ten hours at the office, you watch the rockets take off and you watch the rockets land intact and that makes everything worth it.

Now imagine that Musk gets in trouble with the government. Let's say the Securities and Exchange Commission charges him with fraud again, and this time they're *really* going after him, not just letting him go with a slap on the wrist like the first time. SpaceX's board of directors negotiates with SEC prosecutors. When they emerge they fire Musk from SpaceX, and remove Elon and Kimbal Musk from the board. They appoint Gwynne Shotwell as the new CEO.

You're pretty worried! You like Shotwell, sure, but Musk's charisma and his intangible magic have been very important to the company's success so far. You're not sure what will happen to the company without him. Will you still be making revolutionary new rockets in five years, or will the company regress to the mean like Boeing? You talk to some colleagues, and they're afraid and angry. No one knows what's happening. Alice says that the company would be nothing without Musk and rails at the board for betraying him. Bob says the government has been going after Musk on trumped-up charges for a while, and now they finally got him. Rumor has it that Musk is planning to start a new rocket company.

Then Shotwell resigns in protest. She signs an open letter calling for Musk's reinstatement and the resignation of the board. Board member Luke Nosek signs it too, and says his earlier vote to fire Musk was a huge mistake. 

You get a Slack message from Alice saying that she's signed the letter because she has faith in Musk and wants to work at his company, whichever company that is, in order to make humanity a multiplanetary species. She asks if you want to sign.

How do you feel?

Comment by David Hornbein on OpenAI: The Battle of the Board · 2023-11-22T20:20:11.951Z · LW · GW

I really don't think you can justify putting this much trust in the NYT's narrative of events and motivations here. Like, yes, Toner did publish the paper, and probably Altman did send her an email about it. Then the NYT article tacitly implies but *doesn't explicitly say* this was the spark that set everything off, which is the sort of haha-it's-not-technically-lying that I expect from the NYT. This post depends on that implication being true.

Comment by David Hornbein on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-21T23:20:54.784Z · LW · GW

Yeah it's pretty bad. There are some professors who are better than just reading the textbook, but unfortunately they're the exceptions. My undergrad experience got a lot more productive once I started picking my courses based on the *teacher* more than on the *subject*.

Comment by David Hornbein on OpenAI: Facts from a Weekend · 2023-11-20T17:55:30.985Z · LW · GW

Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.

Comment by David Hornbein on OpenAI: Facts from a Weekend · 2023-11-20T17:37:55.261Z · LW · GW

My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.

Comment by David Hornbein on Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk · 2023-11-05T17:18:43.950Z · LW · GW

I note that the comments here include a lot of debate on the implications of this post's thesis and on policy recommendations and on social explanations for why the thesis is true. No commenter has yet disagreed with the actual thesis itself, which is that this paper is a representative example of a field that is "more advocacy than science", in which a large network of Open Philanthropy Project-funded advocates cite each other in a groundless web of footnotes which "vastly misrepresents the state of the evidence" in service of the party line.

Comment by David Hornbein on [deleted post] 2023-11-03T18:38:13.487Z

I can hail a robotaxi. So can anyone living in San Francisco, Phoenix, Beijing, Shanghai, Guangzhou, Shenzhen, Chongqing or Wuhan. The barriers to wider rollout are political and regulatory, not technological.

Waymo cars, and I believe Apollo and Cruise as well, are "level 4" autonomous vehicles, i.e. there is no human involved in driving them whatsoever. There is a "human available to provide assistance" in roughly the same sense that a member of the American Automobile Association has an offsite human available to provide assistance in case of crashes, flat tires, etc.

I don't see any reason to think AGI is imminent but this particular argument against it doesn't go through. Robotaxi tech is very good and improving swiftly.

Comment by David Hornbein on Announcing MIRI’s new CEO and leadership team · 2023-10-15T01:51:51.046Z · LW · GW

Seems also worth mentioning that Gretta Duleba and Eliezer Yudkowsky are in a romantic relationship, according to Yudkowsky's Facebook profile.

Comment by David Hornbein on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T20:39:13.511Z · LW · GW

At the risk of butting in, I also didn't participate because none of the options reflected my own views on the important virtue of Petrov Day, which is more like "Do not start a fucking nuclear war". You can try to distill that down to nicely categorized principles and virtues, and some of those might well be good things on their own, but at this level of abstraction it doesn't capture what's special about Petrov Day to me.

Trying to file down the Petrov Day narrative into supporting some other hobbyhorse, even if it's a wonderful hobbyhorse which I otherwise support like resisting social pressure, is a disservice to what Stanislav Petrov actually did. The world is richer and more complex than that.

I personally preferred the past Petrov Day events, with the button games and standoffs between different groups and all that. They didn't perfectly reflect the exact dilemma Petrov faced, sure, but what could? They were messy and idiosyncratic and turned on weird specific details. That feels like a much closer reflection of what makes Petrov's story compelling, to me. Maybe the later stages of this year's event would've felt more like that, if I'd seen them at the time, but reading the description I suspect probably not.

I like that you guys are trying a bunch of different stuff and it's fine if this one thing didn't land for me.

Comment by David Hornbein on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-20T19:54:27.899Z · LW · GW

One very common pattern: most people oppose a technology when it's new and unfamiliar; then, once it's been established for a little while and doesn't seem so strange, most people think it's great.

Comment by David Hornbein on Sharing Information About Nonlinear · 2023-09-07T18:49:48.247Z · LW · GW

  1. Offering a specific amount of pay, in cash and in kind, and then not doing the accounting to determine whether or not that amount was actually paid out. If I’m charitable to the point of gullibility, then this is unethical and culpable negligence. Probably it’s just fraud. (Assuming this allegation is true, of course, and AFAIK it is not yet disputed.)
  2. Screenshots of threats to retaliate for speaking up.

EDIT: Nonlinear has now replied and disputed many of the allegations. I am persuaded that the allegation of fraud/negligence around payment is simply false. As for the screenshots of threats to retaliate, my opinion is that retaliation or threats to retaliate are perfectly justified in the face of the behavior which Nonlinear alleges. Nonlinear also provides longer chatlogs around one of the screenshotted texts which they argue recontextualizes it.

Comment by David Hornbein on Sharing Information About Nonlinear · 2023-09-07T17:56:34.148Z · LW · GW

think about how bad you expect the information would be if I selected for the worst, credible info I could share

Alright. Knowing nothing about Nonlinear or about Ben, but based on the rationalist milieu, then for an org that’s weird but basically fine I’d expect to see stuff like ex-employees alleging a nebulously “abusive” environment based on their own legitimately bad experiences and painting a gestalt picture that suggests unpleasant practices but without any smoking-gun allegations of really egregious concrete behavior (as distinct from very bad effects on the accusers); allegations of nepotism based on social connections between the org’s leadership and their funders or staff; accusations of shoddy or motivated research which require hours to evaluate; sources staying anonymous for fear of “retaliation” but without being able to point to any legible instances of retaliation or concrete threats to justify this; and/or thirdhand reports of lying or misdirection around complicated social situations.

[reads post]

This sure has a lot more allegations of very specific and egregious behavior than that, yeah.

EDIT: Based on Nonlinear's reply and the thorough records they provide, it seems that the smoking-gun allegations of really egregious concrete behavior are probably just false. This leaves room for unresolvable disagreement on the more nebulous accusations, but as I said initially, that's the pattern I'd expect to see if Nonlinear were weird but basically fine.

Comment by David Hornbein on Dear Self; we need to talk about ambition · 2023-08-29T20:19:30.976Z · LW · GW

I have seen many people try to become more ambitious. I have also seen many people in fact become more ambitious. But I don't know that I've ever seen anyone try to become more ambitious and succeed, leaving aside the sort of thing you call "hoops" and "fake ambition".

There's this thing, this spark, you could call it "independent ambition" or being "agenty" or "strategic", and other subcultures have other words. But whatever you call it, you can't get it by aiming at it directly. When I've seen someone get it, she gets it by aiming at something else, and if there happens to be a path to her deeper goal just by jumping through the right hoops—well, that's much better, actually, it's more reliable and usually saves her a tremendous amount of trouble. But if the only way to get her deeper goal is to break away from the usual paths and go do some bizarre novel thing, then sometimes she'll do that instead. Of course there can be no hoop for breaking free of hoops; the impetus must come from outside the system of hoops.

Comment by David Hornbein on Elizabeth's Shortform · 2023-08-24T02:56:11.920Z · LW · GW

Back in the ancient days we called all this stuff "media".

Comment by David Hornbein on A Hill of Validity in Defense of Meaning · 2023-07-17T19:32:14.381Z · LW · GW

I don't understand the logic of this. Does seem like game-theoretically the net-payout is really what matters. What would be the argument for something else mattering?

BEORNWULF: A messenger from the besiegers!

WIGMUND: Send him away. We have nothing to discuss with the norsemen while we are at war.

AELFRED: We might as well hear them out. This siege is deadly dull. Norseman, deliver your message, and then leave so that we may discuss our reply.

MESSENGER: Sigurd bids me say that if you give us two thirds of the gold in your treasury, our army will depart. He reminds you that if this siege goes on, you will lose the harvest, and this will cost you more dearly than the gold he demands.

The messenger exits.

AELFRED: Ah. Well, I can’t blame him for trying. But no, certainly not.

BEORNWULF: Hold on, I know what you’re thinking, but this actually makes sense. When Sigurd’s army first showed up, I was the first to argue against paying him off. After all, if we’d paid right at the start, then he would’ve made a profit on the attack, and it would only encourage more. But the siege has been long and hard for us both. If we accept this deal *now*, he’ll take a net loss. We’ve spent most of the treasury resisting the siege—

WIGMUND: As we should! Millions for defense, but not one cent for tribute!

BEORNWULF: Certainly. But the gold we have left won’t even cover what they’ve already spent on their attack. Their net payout will still be negative, so game-theoretically, it doesn’t make sense to think of it as “tribute”. As long as we’re extremely sure they’re in the red, we should minimize our own costs, and missing the harvest would be a *huge* cost. People will starve. The deal is a good one.

WIGMUND: Never! If once you have paid him the danegeld, you never get rid of the Dane!

BEORNWULF: Not quite. The mechanism matters. The Dane has an incentive to return *only if the danegeld exceeds his costs*.

WIGMUND: Look, you can mess with the categories however you like, and find some clever math that justifies doing whatever you’ve already decided you want to do. None of that constrains your behavior and so none of that matters. What matters is, take away all the fancy definitions and you’re still just paying danegeld.

BEORNWULF: How can I put this in language you’ll understand—it doesn’t matter whether the definitions support what *I* want to do, it matters whether the definitions reflect the *norsemen’s* decision algorithm. *They* care about the net payout, not the gross payout.

AELFRED: Hold on. Are you modeling the norsemen as profit-maximizers?

BEORNWULF: More or less? I mean, no one is perfectly rational, but yeah, everyone *approximates* a rational profit-maximizer.

WIGMUND: They are savage, irrational heathens! They never even study game theory!

BEORNWULF: Come on. I’ll grant that they don’t use the same jargon we do, but they attack because they expect to make a profit off it. If they don’t expect to profit, they’ll stop. Surely they do *that* much even without explicit game theoretic proofs.

AELFRED: That affects their decision, yes, but it’s far from the whole story. The norsemen care about more than just gold and monetary profit. They care about pride. Dominance. Social rank and standing. Their average warrior is a young man in his teens or early twenties. When he decides whether to join the chief’s attack, he’s not sitting down with spreadsheets and a green visor to compute the expected value, he’s remembering that time cousin Guthrum showed off the silver chalice he looted from Lindisfarne. Remember, Sigurd brought the army here in the first place to avenge his brother’s death—

BEORNWULF: That’s a transparent pretext! He can’t possibly blame us for that, we killed Agnarr in self-defense during the raid on the abbey.

WIGMUND: You can tell that to Sigurd. If it had been my brother, I’d avenge him too.

AELFRED: Among their people, when a man is murdered, it’s not a *tragedy* to his family, it’s an *insult*. It can only be wiped away with either a weregeld payment from the murderer or a blood feud. Yes, Sigurd cares about gold, but he also cares tremendously about *personally knowing he defeated us*, in order to remove the shame we dealt him by killing Agnarr. Modeling his decisions as profit-maximizing will miss a bunch of his actual decision criteria and constraints, and therefore fail to predict the norsemen’s future actions.

WIGMUND: You’re overcomplicating this. If we pay, the norsemen will learn that we pay, and more will come. If we do not pay, they will learn that we do not pay, and fewer will come.

BEORNWULF: They don’t care if we *pay*, they care if it’s *profitable*. This is basic accounting.

AELFRED: They *do* care if we pay. Most of them won’t know or care what the net-payout is. If we pay tribute, this will raise Sigurd’s prestige in their eyes no matter how much he spent on the expedition, and he needs his warriors’ support more than he needs our gold. Taking a net loss won’t change his view on whether he’s avenged the insult to his family, and we do *not* want the Norsemen to think they can get away with coming here to avenge “insults” like killing their raiders in self-defense. On the other hand, if Sigurd goes home doubly shamed by failing to make us submit, they’ll think twice about trying that next time.

BEORNWULF: I don’t care about insults. I don’t care what Sigurd’s warriors think of him. I don’t care who can spin a story of glorious victory or who ends up feeling like they took a shameful defeat. I care about how many of our people will die on norse spears, and how many of our people will die of famine if we don’t get the harvest in. All that other stuff is trivial bullshit in comparison.

AELFRED: That all makes sense. You still ought to track those things instrumentally. The norsemen care about all that, and it affects their behavior. If you want a model of how to deter them, you have to model the trivial bullshit that they care about. If you abstract away what they *actually do* care about with a model of what you think they *ought* to care about, then your model *won’t work*, and you might find yourself surprised when they attack again because they correctly predict that you’ll cave on “trivial bullshit”. Henry IV could swallow his pride and say “Paris is well worth a mass”, but that was because he was *correctly modeling* the Parisians’ pride.

WIGMUND: Wait. That is *wildly* anachronistic. Henry converted to Catholicism in 1593. This dialogue is taking place in, what, probably the 9th century?

AELFRED: Hey, I didn’t make a fuss when you quoted Kipling.

Comment by David Hornbein on A Hill of Validity in Defense of Meaning · 2023-07-16T19:30:55.537Z · LW · GW

"he didn't end up with more money than he started with after the whole miricult thing" is such a weirdly specific way to phrase things.

My speculation from this is that MIRI paid Helm or his lawyers some money, but less money than Helm had spent on the harassment campaign, and among people who know the facts there is a semantic disagreement about whether this constitutes a "payout". Some people say something like "it's a financial loss for Helm, so game-theoretically it doesn't provide an incentive to blackmail, therefore it's fine" and others say something like "if you pay out money in response to blackmail, that's a blackmail payout, you don't get to move the bar like that".

I would appreciate it if someone who knows what happened can confirm or deny this.

(AFAICT the only other possibility is that somewhere along the line, at least one of the various sources of contradictory-sounding rumors was just lying-or-so-careless-as-to-be-effectively-lying. Which is very possible, of course; that happens with rumors a lot.)

Comment by David Hornbein on Some reasons to not say "Doomer" · 2023-07-10T17:34:36.075Z · LW · GW

Yudkowsky says it's now "short-term publishable, fundable, 'relatable' topics affiliated with academic-left handwringing"

I assume this means, like, Timnit Gebru and friends.

Comment by David Hornbein on Some reasons to not say "Doomer" · 2023-07-10T07:05:51.955Z · LW · GW

If you want people to stop calling doomers "doomers", you need to provide a specific alternative. Gesturing vaguely at the idea of alternatives isn't enough. "Thou shalt not strike terms from others' expressive vocabulary without suitable replacement." 

Doomers used to call themselves the "AI safety community" or "AI alignment community", but Yudkowsky recently led a campaign to strike those terms and replace them with "AI notkilleveryoneism". Unfortunately the new term isn't suitable and hasn't been widely adopted (e.g. it's not mentioned in the OP), which leaves the movement without a name its members endorse.

People are gonna use *some* name for it, though. A bunch of people are spending tens of millions of dollars per year advocating for a very significant political program! Of course people will talk about it! So unless and until doomers agree on a better name for themselves (which is properly the responsibility of the doomers, not of their critics), my choices are to call it "AI safety" and get told that no, that's inaccurate, "AI safety" now refers to a different group of people with a different political program, or to call it "doomers" and get told I'm being rude. I don't want to be inaccurate or rude, but if you make me pick one of the two, then I'll pick rude, so here we are.

If the doomers were to agree on a new name and adopt it among themselves, I would be happy to switch. (Your "AI pessimist" isn't a terrible candidate, although if it caught on then it'd be subject to the same entryism which led Yudkowsky to abandon "AI safety".) Until then, "doomer" remains the most descriptive word, in spite of all its problems.

Comment by David Hornbein on Consider giving money to people, not projects or organizations · 2023-07-04T21:15:54.554Z · LW · GW

That's correct.

I guess if they found themselves in a similar situation then I'd *want* them to ask me for help. I have a pretty easy time saying no to people and wouldn't feel bad about sympathetically rejecting a request if that felt like the right call, and maybe that's not the norm, idk. But in any case, I offered one-off gifts, and it was always understood that way.

Comment by David Hornbein on Consider giving money to people, not projects or organizations · 2023-07-04T16:42:52.413Z · LW · GW

Once per. That's not a policy, I just haven't yet felt moved to do it twice, so I haven't really thought about it.

Comment by David Hornbein on Consider giving money to people, not projects or organizations · 2023-07-03T17:08:33.495Z · LW · GW

I've actually done this and it worked incredibly well, so I'm not persuaded by your vague doubts that it's possible. If you insist on using "opinion of established organizations" as your yardstick, then I'll add that a strong majority of the people I supported would later go on to get big grants and contracts from well-respected organizations, always after years of polishing and legibilizing the projects which I'd supported in their infancy.

Certainly it wouldn't work for the median person on Earth. But "LW reader who's friends with a bunch of self-starting autodidacts and has enough gumption to actually write a check" is not the median person on Earth, and people selected that hard will often know some pretty impressive people.

Comment by David Hornbein on Consider giving money to people, not projects or organizations · 2023-07-03T05:09:54.736Z · LW · GW

If you make programmer money and a bunch of your friends are working on weird projects that take two hours to explain and justify—and I know that describes a lot of people here—then you're in an excellent position to do this. Essentially it's legibility arbitrage.

Comment by David Hornbein on Consider giving money to people, not projects or organizations · 2023-07-03T05:02:47.467Z · LW · GW

I've had very good results from offering unsolicited no-strings $X,000 gifts to friends who were financially struggling while doing important work. Some people have accepted, and it took serious pressure off them while they laid the foundations for what have become impressive careers. Some turned me down, although I like to imagine that knowing they had backup available made them feel like they had more slack. It's a great way to turn local social knowledge into impact. The opportunities to do this aren't super frequent, but when one arises the per-dollar impact is absolutely insane.

Some advice for anyone who's thinking of doing this:

—Don't mention it to anyone else. This is sensitive personal stuff. Your friend doesn't want some thirdhand acquaintance knowing about their financial hardship. Talking about it in general terms without identifying information is fine.

—Make gifts, not loans. Loans often put strain on relationships and generally make things weird. People sometimes make "loans" they can't afford to lose without realizing what they're doing, but you're not gonna give away money without understanding the consequences. Hold fast to this; if the recipient comes back three years later when they're doing better and offers to pay it back (this has happened to me), tell them to pay it forward instead.

—The results will only be as good as your judgements of character and ability. Everyone makes mistakes, but if you make *big* mistakes on either of those, then this probably isn't your comparative advantage.

Comment by David Hornbein on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-29T19:23:26.941Z · LW · GW

I'm strongly against letting anyone insert anything into the middle of someone else's post/comment. Nothing should grab the microphone away from the author until they've finished speaking.

When Medium added the feature that let readers highlight an author's text, I found it incredibly disruptive and never read anything on Medium ever again. If LW implemented inline reader commentary in a way that was similarly distracting, that would probably be sufficient to drive me away from here, too.

Comment by David Hornbein on Why is violence against AI labs a taboo? · 2023-05-27T00:31:18.036Z · LW · GW
Comment by David Hornbein on Why is violence against AI labs a taboo? · 2023-05-27T00:15:48.374Z · LW · GW
Comment by David Hornbein on Why is violence against AI labs a taboo? · 2023-05-27T00:13:39.954Z · LW · GW
Comment by David Hornbein on Why is violence against AI labs a taboo? · 2023-05-27T00:11:23.196Z · LW · GW
Comment by David Hornbein on Why is violence against AI labs a taboo? · 2023-05-27T00:10:22.643Z · LW · GW

You can imagine an argument that goes "Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI." I have only ever heard people say that someone else's views imply this argument, and never heard anyone actually advance it sincerely; nevertheless the hypothetical argument is at least coherent.

Yudkowsky's position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI.  See e.g. here and the following dialogue. (I assume he also believes in the normal reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)

Comment by David Hornbein on Elements of Rationalist Discourse · 2023-02-13T02:54:42.554Z · LW · GW

I would expand "acts to make the argument low status" to "acts to make the argument low status without addressing the argument". Lots of good rationalist material, including the original Sequences, includes a fair amount of "acts to make arguments low status". This is fine—good, even—because it treats the arguments it targets in good faith and has a message that rhymes with "this argument is embarrassing because it is clearly wrong, as I have shown in section 2 above" rather than "this argument is embarrassing because gross stupid creeps believe it".

Many arguments are actually very bad. It's reasonable and fair to have a lower opinion of people who hold them, and to convey that opinion to others along with the justification. As you say, "you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it". Discrediting arguments by social maneuvers that rely on actual engagement with the argument's contents is compatible with this.

Comment by David Hornbein on My Model Of EA Burnout · 2023-02-03T02:01:54.621Z · LW · GW

What is "EA burnout"? Personally I haven't noticed any differences between burnout in EAs and burnout in other white-collar office workers. If there are such differences, then I'd like to know about them. If there aren't, then I'm skeptical of any model of the phenomenon which is particular to EA.

Comment by David Hornbein on Smallpox vaccines are widely available, for now · 2023-01-14T00:12:09.784Z · LW · GW

My impression from rather cursory research is that serious or long-lasting side effects are extremely rare. I would guess that most of the health risk is probably concentrated in car accidents on the way to/from the vaccine clinic. Minor side effects like "the injection site is mildly sore for a couple weeks" are common. Injection with the bifurcated needle method also produced a small permanent scar (older people often have these), although all or most of the current vaccinations are done with the subcutaneous injection method common with other vaccines and so do not produce scarring.

I naively guess that from the perspective of society at large the biggest cost of the vaccine program is the operational overhead of distribution and administration, not the side effects; and that on the personal scale the biggest cost is the time it takes to register for and receive the vaccine, rather than the side effects.

As to the benefit side of the equation, the risks of outbreak are extremely conjectural and rely on several layers of guesswork about technology development and adversarial political decisions—two areas which are notoriously hard to predict—so I don't have much to say on that front beyond "make your best guess".

Comment by David Hornbein on Ngo and Yudkowsky on scientific reasoning and pivotal acts · 2022-08-08T04:35:36.714Z · LW · GW

Black's development of specific heat capacity and latent heat is widely attested, including in the Wikipedia articles on Black and on the history of thermodynamics. I don't recall where I first saw the claim.

Comment by David Hornbein on Ngo and Yudkowsky on scientific reasoning and pivotal acts · 2022-08-04T03:24:29.919Z · LW · GW

Yudkowsky is correct. The advance that made the steam engine useful was Watt's separate condenser. The separate condenser was based on the research of Joseph Black, who did much of the work of quantifying thermodynamics. Black was a close friend of Watt, lent Watt money to finance his R&D, and introduced Watt to his first business partner John Roebuck.

Before Watt, the early, crude steam engines like Savery's and Newcomen's were preceded by early, crude research on pressure from scientists like Papin. These engines were niche tools with only one narrow economically-useful application (pumping water out of mines).

The linked article is completely wrong in claiming Carnot's work was the "First Stirrings of Thermodynamics", and wrong in treating Watt's invention of the separate condenser as a sideshow.

Comment by David Hornbein on Changing the world through slack & hobbies · 2022-07-23T19:54:40.342Z · LW · GW

There are investments you can’t make from a structured, nine-to-five, narrowly teleological environment. ... The best search strategies for complex problems like life generally don’t seek out particular homogeneous objectives, but interesting novelty. The search space is too complicated and unknown for linear objective-chasing to work. ... you cannot pursue interesting novelty—things that no one else is doing or which you have never seen before, or the little threads of nagging curiosity or doubt—by chasing along known direct value gradients. But that’s where the treasure is.

Quit Your Job

Comment by David Hornbein on Relationship Advice Repository · 2022-06-21T19:52:08.594Z · LW · GW

Registering that I much prefer the format of the older repositories you link to, where additions are left as comments that can be voted on, over the format here, where everything is in a giant list sorted by topic rather than ranking. For any crowdsourced repository, most suggestions will be mediocre or half-baked, but with voting and sorting it's easy to read only the ones that rise to the top. I'd also be curious to check out the highest-voted suggestions on this topic, but not curious enough to wade through an unranked list of (I assume) mostly mediocre and half-baked ideas to find them.

Comment by David Hornbein on AGI Ruin: A List of Lethalities · 2022-06-08T19:58:54.217Z · LW · GW

I disagree strongly. To me it seems that AI safety has long punched below its weight because its proponents are unwilling to be confrontational, and are too reluctant to put moderate social pressure on people doing the activities which AI safety proponents hold to be very extremely bad. It is not a coincidence that among AI safety proponents, Eliezer is both unusually confrontational and unusually successful.

This isn't specific to AI safety. A lot of people in this community generally believe that arguments which make people feel bad are counterproductive because people will be "turned off".

This is false. There are tons of examples of disparaging arguments against bad (or "bad") behavior that succeed wildly. Such arguments very frequently succeed in instilling individual values like e.g. conscientiousness or honesty. Prominent political movements which use this rhetoric abound. When this website was young, Eliezer and many others participated in an aggressive campaign of discourse against religious ideas, and this campaign accomplished many of its goals. I could name many many more large and small examples. I bet you can too.

Obviously this isn't to say that confrontational and insulting argument is always the best style. Sometimes it's truth-tracking and sometimes it isn't. Sometimes it's persuasive and sometimes it isn't. Which cases are which is a difficult topic that I won't get into here (except to briefly mention that it matters a lot whether the reasons given are actually good). Nor is this to say that the "turning people off" effect is completely absent; what I object to is the casual assumption that it outweighs any other effects. (Personally I'm turned off by the soft-gloved style of the parent comment, but I would not claim this necessarily means it's inappropriate or ineffective—it's not directed at me!) The point is that this very frequent claim does not match the evidence. Indeed, strong counterevidence is so easy to find that I suspect this is often not people's real objection.

Comment by David Hornbein on Vavilov Day Discussion Post · 2022-01-31T06:48:58.323Z · LW · GW

The solution to your first problem may not be easy, but it is obvious: those who want community holidays with different emphasis and/or more variety of holidays can create those holidays. The culture belongs to those who put in the work to create it, both in practice and in justice. This goes double if you're correct that "we currently don’t have enough rationalist holidays and people are desperate for more" (which I have no independent opinion on).

Comment by David Hornbein on Conversation on technology forecasting and gradualism · 2021-12-10T13:13:08.372Z · LW · GW

The idea of "hardware overhang" from Chinese printing tech seems extremely unlikely. There was almost certainly no contact between Chinese and European printers at the time. European printing tech was independently derived, and differed from its Chinese precursors in many many important details. Gutenberg's most important innovation, the system of mass-producing types from a matrix (and the development of specialized lead alloys to make this possible), has no Chinese precedent. The economic conditions were also very different; most notably, the Europeans had cheap paper from the water-powered paper mill (a 13th-century invention), which made printing a much bigger industry even before Gutenberg.

Comment by David Hornbein on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T04:35:14.526Z · LW · GW

"6 Karma across 11 votes" is, like, not good. It's about what I'd expect from a comment that is "mildly toxic [but] does raise [a] valid consideration" and "none of the offenses ... are particularly heinous", as you put it. (For better or worse, comments here generally don't get downvoted into the negative unless they're pretty heinous; as I write this only one comment on this post has been voted to zero, and that comment's only response describes it as "borderline-unintelligible".) It sounds like you're interpreting the score as something like qualified approval because it's above zero, but taking into account the overall voting pattern I interpret the score more like "most people generally dislike the comment and want to push it to the back of the line, even if they don't want to actively silence the voice". This would explain Rob calibrating the strength of his downvote over time.