Ebenezer Dukakis's Shortform

post by Ebenezer Dukakis (valley9) · 2024-05-18T22:11:09.830Z · LW · GW · 14 comments

14 comments

Comments sorted by top scores.

comment by Ebenezer Dukakis (valley9) · 2024-06-17T01:30:09.162Z · LW(p) · GW(p)

About a month ago [LW(p) · GW(p)], I wrote a quick take suggesting that an early messaging mistake made by MIRI was: claim there should be a single leading FAI org, but not give specific criteria for selecting that org. That could've led to a situation where Deepmind, OpenAI, and Anthropic could all think of themselves as "the best leading FAI org".

An analogous possible mistake that's currently being made: Claim that we should "shut it all down", and also claim that it would be a tragedy if humanity never created AI, but not give specific criteria for when it would be appropriate to actually create AI.

What sort of specific criteria? One idea: A committee of random alignment researchers is formed to study the design; if at least X% of the committee rates the odds of success at Y% or higher, it gets the thumbs up. Not ideal criteria, just provided for the sake of illustration.
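Purely to make that illustration concrete, here's a minimal sketch of the decision rule as code. The function name and the default thresholds are hypothetical, picked only for readability; this is not a proposal for actual values.

```python
# Minimal sketch of the illustrative unpause criterion above:
# approve if at least X% of a review committee rates the odds of
# success at Y% or higher. Names and defaults are hypothetical.

def committee_approves(success_odds: list[float],
                       x_frac: float = 0.75,   # "at least X% of the committee..."
                       y_prob: float = 0.90) -> bool:
    """Return True if at least x_frac of reviewers rate the odds of
    success at y_prob or higher."""
    if not success_odds:
        return False
    favorable = sum(1 for p in success_odds if p >= y_prob)
    return favorable / len(success_odds) >= x_frac

# Example: 8 of 10 reviewers at >= 0.9 clears a 75% bar.
print(committee_approves([0.95, 0.9, 0.92, 0.9, 0.97, 0.91, 0.9, 0.93, 0.4, 0.6]))
```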

Why would this be valuable?

  • If we actually get a pause, it's important to know when to unpause as well. Specific criteria could improve the odds that an unpause happens in a reasonable way.

  • If you want to build consensus for a pause, advertising some reasonable criteria for when we'll unpause could get more people on board.

Replies from: Vaniver
comment by Vaniver · 2024-06-17T20:15:12.404Z · LW(p) · GW(p)

I think Six Dimensions of Operational Adequacy was in this direction; I wish we had been more willing to, like, issue scorecards earlier (like publishing that document in 2017 instead of 2022). The most recent scorecard-ish thing was commentary on the AI Safety Summit responses.

I also have the sense that the time to talk about unpausing is while creating the pause; this is why I generally am in favor of things like RSPs and RDPs. (I think others think that this is a bit premature / too easy to capture, and we are more likely to get a real pause by targeting a halt.)

comment by Ebenezer Dukakis (valley9) · 2024-05-18T22:11:09.932Z · LW(p) · GW(p)

Regarding the situation at OpenAI, I think it's important to keep a few historical facts in mind:

  1. The AI alignment community has long stated that an ideal FAI project would have a lead over competing projects. See e.g. this post [LW · GW]:

Requisite resource levels: The project must have adequate resources to compete at the frontier of AGI development, including whatever mix of computational resources, intellectual labor, and closed insights are required to produce a 1+ year lead over less cautious competing projects.

  2. The scaling hypothesis wasn't obviously true around the time OpenAI was founded. At that time, it was assumed that regulation was ineffectual because algorithms can't be regulated. It's only now, when GPUs are looking like the bottleneck, that the regulation strategy seems viable.

What happened with OpenAI? One story is something like:

  • AI safety advocates attracted a lot of attention in Silicon Valley with a particular story about AI dangers and what needed to be done.

  • Part of this story involved an FAI project with a lead over competing projects. But the story didn't come with easy-to-evaluate criteria for whether a leading project counted as a good "FAI project" or a bad "UFAI project". Thinking about AI alignment is epistemically cursed; people who think about the topic independently rarely have similar models.

  • Deepmind was originally the consensus "FAI project", but Elon Musk started OpenAI because Larry Page has e/acc beliefs [EA(p) · GW(p)].

  • OpenAI hired employees with a distribution of beliefs about AI alignment difficulty, some of whom may be motivated primarily by greed or power-seeking.

  • At a certain point, that distribution got "truncated" with the formation of Anthropic.

Presumably at this point, every major project thinks it's best if they win, due to self-serving biases.

Some possible lessons:

  • Do more message red-teaming. If an organization like AI Lab Watch had been founded 10+ years ago, and was baked into the AI safety messaging along with "FAI project needs a clear lead", then we could've spent the past 10 years getting consensus on how to anoint one or just a few "FAI projects". And the campaign for AI Pause could instead be a campaign to "pause all AGI projects except the anointed FAI project". So -- when we look back in 10 years on the current messaging, what mistakes will seem obvious in hindsight? And if this situation is partially a result of MIRI's messaging in the past, perhaps we should ask hard questions about their current pivot towards messaging? (Note: I could be accused of grinding my personal axe here, because I'm rather dissatisfied with current AI Pause messaging [EA(p) · GW(p)].)

  • Assume AI acts like a magnet for greedy power-seekers. Make decisions accordingly.

comment by Ebenezer Dukakis (valley9) · 2024-06-19T05:15:05.523Z · LW(p) · GW(p)

Some recent-ish bird flu coverage:

Global health leader critiques ‘ineptitude’ of U.S. response to bird flu outbreak among cows

A Bird-Flu Pandemic in People? Here’s What It Might Look Like. TLDR: not good. (Reload the page and ctrl-a then ctrl-c to copy the article text before the paywall comes up.) Interesting quote: "The real danger, Dr. Lowen of Emory said, is if a farmworker becomes infected with both H5N1 and a seasonal flu virus. Flu viruses are adept at swapping genes, so a co-infection would give H5N1 opportunity to gain genes that enable it to spread among people as efficiently as seasonal flu does."

Infectious bird flu survived milk pasteurization in lab tests, study finds. Here's what to know.

1 in 5 milk samples from grocery stores test positive for bird flu. Why the FDA says it’s still safe to drink -- see also updates from the FDA here: "Last week we announced preliminary results of a study of 297 retail dairy samples, which were all found to be negative for viable virus." (May 10)

The FDA is making reassuring noises about pasteurized milk, but given that CDC and friends also made reassuring noises early in the COVID-19 pandemic, I'm not fully reassured.

I wonder if drinking a little bit of pasteurized milk every day would be a helpful inoculation? You could hedge your bets by buying some milk from every available brand, and consuming a teaspoon from a different brand every day, gradually working up to a tablespoon, etc.

comment by Ebenezer Dukakis (valley9) · 2025-02-12T12:18:26.698Z · LW(p) · GW(p)

Regarding articles which target a popular audience such as How AI Takeover Might Happen in 2 Years [LW · GW], I get the sense that people are preaching to the choir by posting here and on X. Is there any reason people aren't pitching pieces like this to prestige magazines like The Atlantic or wherever else? I feel like publishing in places like that is a better way to shift the elite discourse, assuming that's the objective. (Perhaps it's best to pitch to publications that people in the Trump admin read?)

Here's an article on pitching that I found on the EA Forum [EA · GW] by searching. I assume there are lots more tips on pitching online if you search.

Replies from: Seth Herd, joshua-clymer
comment by Seth Herd · 2025-02-12T12:40:10.010Z · LW(p) · GW(p)

I do think that pitching publicly is important.

If the issue is picked up by liberal media, it will do more harm than good with conservatives and the current administration. Avoiding polarization is probably even more important than spreading public awareness. That depends on your theory of change, but you should have one carefully thought out to guide publicity efforts.

Replies from: valley9, weibac
comment by Ebenezer Dukakis (valley9) · 2025-02-13T00:34:37.992Z · LW(p) · GW(p)

Likely true, but I also notice there's been a surprising amount of drift of political opinions from the left to the right in recent years. The right tends to put their own spin on these beliefs, but I suspect many are highly influenced by the left nonetheless.

Some examples of right-coded beliefs which I suspect are, to some degree, left-inspired:

  • "Capitalism undermines social cohesion. Consumerization and commoditization are bad. We're a nation, not an economy."

  • "Trans women undermine women's rights and women's spaces. Motherhood, and women's dignity, must be defended from neoliberal profit motives."

  • "US foreign policy is controlled by a manipulative deep state that pursues unnecessary foreign interventions to benefit elites."

  • "US federal institutions like the FBI are generally corrupt and need to be dismantled."

  • "We can't trust elites. They control the media. They're out for themselves rather than ordinary Americans."

  • "Your race, gender, religion, etc. are some of the most important things about you. There's an ongoing political power struggle between e.g. different races."

  • "Big tech is corrosive for society."

  • "Immigration liberalization is about neoliberal billionaires undermining wages for workers like me."

  • "Shrinking the size of government is not a priority. We should make sure government benefits everyday people."

  • Anti-semitism, possibly.

One interesting thing has been seeing the left switch to opposing the belief when it's adopted by the right and takes a right-coded form. E.g. US institutions are built on white supremacy and genocide, fundamentally institutionally racist, backed by illegitimate police power, and need to be defunded/decolonized/etc... but now they are being targeted by DOGE, and it's a disaster!

(Note that the reverse shift has also happened. E.g. Trump's approaches to economic nationalism, bilateral relations w/ China, and contempt for US institutions were all adopted by Biden to some degree.)

So yeah, my personal take is that we shouldn't worry about publication venue that much. Just avoid insulting anyone, and make your case in a way which will appeal to the right (e.g. "we need to defend our traditional way of being human from AI"). If possible, target center-leaning publications like The Atlantic over explicitly progressive publications like Mother Jones.

Replies from: Seth Herd
comment by Seth Herd · 2025-02-24T23:23:29.113Z · LW(p) · GW(p)

That is a great point and your examples are fascinating!

I think polarization is still quite possible and should be avoided at high cost. If AI safety becomes the new climate change, it seems pretty clear that it will create conflict in public opinion and deadlock in politics.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-25T05:10:52.324Z · LW(p) · GW(p)

I think the way the issue is framed matters a lot. If it's a "populist" framing ("elites are in it for themselves, they can't be trusted"), that frame seems to have resonated with a segment of the right lately. Climate change has a sanctimonious frame in American politics that conservatives hate.

Replies from: Seth Herd
comment by Seth Herd · 2025-02-25T05:31:10.979Z · LW(p) · GW(p)

Agreed, tone and framing are crucial. The populist framing might work for conservatives, but it will also set off the enemy rhetoric detectors among liberals. So coding it to either side is prone to backfire. Based on that logic, I'm leaning toward thinking that it needs to be framed to carefully avoid or walk the center line between the terms and framings of both sides.

It would be just as bad to have it polarized as conservative, right? Although we've got four years of conservatism ahead, so it might be worth thinking seriously about whether that trade would be worth it. I'm not sure a liberal administration would undo restrictions on AI even if they had been conservative-coded...

Interesting. I'm feeling more like saying "the elites want to make AI that will make them rich while putting half the world out of a job". That's probably true as far as it goes, and it could be useful.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-25T10:27:06.366Z · LW(p) · GW(p)

it will also set off the enemy rhetoric detectors among liberals

I'm not sure about that; does Bernie Sanders' rhetoric set off that detector?

comment by Milan W (weibac) · 2025-02-12T17:14:11.445Z · LW(p) · GW(p)

Maybe one can start with prestige conservative media? Is that a thing? I'm not from the US and thus not very well versed.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2025-02-13T00:17:19.098Z · LW(p) · GW(p)

I think the National Review is the most prestigious conservative magazine in the US, but there are various others. City Journal articles have also struck me as high-quality in the past. I think Coleman Hughes writes for them, and he did a podcast with Eliezer Yudkowsky at one point.

However, as stated in the previous link, you should likely work your way up and start by pitching lower-profile publications.

comment by joshc (joshua-clymer) · 2025-02-12T17:05:18.414Z · LW(p) · GW(p)

Seems like a reasonable idea.

I'm not in touch enough with popular media to know:
- Which magazines are best to publish this kind of thing if I don't want to contribute to political polarization
- Which magazines would possibly post speculative fiction like this (I suspect most 'prestige magazines' would not)

If you have takes on this I'd love to hear them!