Posts

INTELLECT-1 Release: The First Globally Trained 10B Parameter Model 2024-11-29T23:05:00.108Z
The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable 2024-11-24T09:36:39.768Z
Anthropic teams up with Palantir and AWS to sell AI to defense customers 2024-11-09T11:50:34.050Z
Dario Amodei — Machines of Loving Grace 2024-10-11T21:43:31.448Z

Comments

Comment by Matrice Jacobine on Making a conservative case for alignment · 2024-11-16T02:27:00.493Z · LW · GW

My impression is that (without even delving into any meta-level IR theory debates) Democrats are more hawkish on Russia while Republicans are more hawkish on China. So while obviously neither party is kum-ba-yah and both ultimately represent US interests, it still makes sense to expect each party to be less receptive to ending any potential arms race against the country it considers an existential threat to US interests if left unchecked. The party that is more hawkish on a primarily military superpower would thus be worse on nuclear x-risk, and the party that is more hawkish on a primarily economic superpower would be worse on AI x-risk and environmental x-risk. (Negotiating arms control agreements with the enemy superpower right during its period of liberalization and collapse, or facilitating a deal between multiple US allies with the clear goal of serving as a counterweight to the purported enemy superpower, seems entirely irrelevant here.)

Comment by Matrice Jacobine on Proposing the Conditional AI Safety Treaty (linkpost TIME) · 2024-11-16T00:01:48.443Z · LW · GW

(Discussion continues here on EA Forum.)

Comment by Matrice Jacobine on Proposing the Conditional AI Safety Treaty (linkpost TIME) · 2024-11-15T20:30:19.326Z · LW · GW

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.

This is in strange contrast to the rest of the article, considering that both Donald and Ivanka Trump's positions are largely informed by the "situational awareness" view, which argues that the US should develop AGI before China to ensure US victory – explicitly the position Tegmark and Leahy argue against (and consider existentially harmful) when they call for stopping work on AGI in favor of international co-operation to restrict it and of developing tool AI instead.

I still see this kind of confusion between the two positions a fair bit, and it is extremely strange. It's as if, back in the original Cold War, people couldn't tell the difference between anti-communist hawks and the Bulletin of the Atomic Scientists (let alone anti-war hippies) because technically both considered the nuclear arms race to be very important for the future of humanity.

Comment by Matrice Jacobine on The Hopium Wars: the AGI Entente Delusion · 2024-10-14T11:45:23.082Z · LW · GW

Relevant: China not that interested in developing AGI and substantial factions of Chinese elites are concerned about AI safety

Comment by Matrice Jacobine on The Hopium Wars: the AGI Entente Delusion · 2024-10-13T20:27:55.210Z · LW · GW

(Defining Tool AI as a program that would evaluate the answer to a question given available data, without seeking to obtain any new data, and then shut down after having discovered the answer.) While those arguments (if successful) show that it's harder to program a Tool AI than it might look at first, so AI alignment research is still something that should be actively pursued (and I doubt Tegmark thinks AI alignment research is useless), they don't really address the point that making aligned Tool AIs is still in some sense "inherently safer" than making Friendly AGI: the lack of a singleton scenario means you don't need to solve all moral and political philosophy from first principles in your garage in 5 years and hope you "get it right" the first time.

Comment by Matrice Jacobine on In defense of technological unemployment as the main AI concern · 2024-08-30T20:52:39.922Z · LW · GW

The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That last figure is a majority, but the top 1% are millionaires (not even multi-millionaires or billionaires) who likely own wealth more vitally important to the economy than personal property and bank accounts, and empirically they already seem to be doing fine dominating the economy, without any neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value of labor (the non-capital factor of production) to zero wouldn't be a global catastrophic risk and a value drift risk/s-risk.

Comment by Matrice Jacobine on In defense of technological unemployment as the main AI concern · 2024-08-30T15:47:39.633Z · LW · GW

Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.

Comment by Matrice Jacobine on In defense of technological unemployment as the main AI concern · 2024-08-28T17:38:19.978Z · LW · GW

Thanks for writing this; it's something I have thought about before when trying to convince people who are more worried about "short-term" issues to take the "long-term" risks seriously. Essentially, one can think of two major "short-term" AI risk scenarios (or at least "medium-term" ones that "short-term"ists might take seriously), corresponding to the prospect of automating each of the two factors of production:

  1. Mass technological unemployment causing large swathes of workers to become superfluous and then be starved out by the now AI-enabled corporations (what you worry about in this post)
  2. AI increasingly replacing "fallible" human decision-makers in corporations, if not in government, pushed by the imperative to maximize profits unfettered by any moral or legal norm (even more so than human executives are already incentivized to be; what Scott worries about here)

But if 1 and 2 happen at the same time, you've got your more traditional scenario: AI taking over the world and killing all humans as they have become superfluous. This doesn't provide a full-blown case for the more Orthodox AI-go-FOOM scenario (you would need ), but it at least serves as a case for believing that Reform AI Alignment is a pressing issue, and those who are convinced of that will ultimately be more likely to take the AI-go-FOOM scenario seriously, or at least to operationalize their differences with its believers as only object-level disagreements (about intelligence explosion macroeconomics, how powerful intelligence is as a "cognitive superpower", etc.) as opposed to the tribalized meta-level disagreements that define the current "AI ethics" v. "AI alignment" discourse.

Comment by Matrice Jacobine on In defense of technological unemployment as the main AI concern · 2024-08-28T15:19:33.234Z · LW · GW

(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)

This looks increasingly unlikely to me. It seems to me (from an outsider's perspective) that the current bottleneck in robotics is the low dexterity of existing hardware far more than the software to animate robot arms, or even the physics simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.

Comment by Matrice Jacobine on In defense of technological unemployment as the main AI concern · 2024-08-28T15:14:00.435Z · LW · GW

At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land). 

Well, we're not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital and need to work to live, and whose labor therefore needs to have economic value for them to live.