Posts

Best arguments against instrumental convergence? 2023-04-05T17:06:14.011Z

Comments

Comment by lfrymire on The Best Tacit Knowledge Videos on Every Subject · 2024-04-15T14:50:10.275Z · LW · GW

Domain: Piano

Link: Seymour Bernstein Teaches Piano https://youtu.be/pRLBBJLX-dQ?si=-6EIvGDRyw0aJ0Sq

Person: Seymour Bernstein

Background: Pianist and composer, performed with the Chicago Symphony Orchestra, Adjunct Associate Professor of Music and Music Education at New York University.

Why: Tonebase (a paid music-learning service) recorded a number of free-to-watch conversations with Bernstein in which he plays through or teaches a piece. Bernstein was about 90 years old at the time of recording and shares an incredible amount of tacit knowledge, especially about body mechanics when playing piano.

Comment by lfrymire on Best arguments against instrumental convergence? · 2023-04-05T21:11:07.528Z · LW · GW

Re: specific claims to falsify — I generally buy the argument.

If I had to pick out specific aspects that seem weaker, they would mostly relate to our confusion around agent foundations. It isn't trivially obvious to me that the way we describe "intelligence" or "goals" within the instrumental convergence argument matches the way current systems actually operate (though it seems close enough, and we shouldn't expect to be wrong in a way that makes the situation better).

Comment by lfrymire on Best arguments against instrumental convergence? · 2023-04-05T21:01:41.278Z · LW · GW

I would agree that instrumental convergence is probably not a necessary component of AI x-risk, so you're correct that "crux" is a bit of a misnomer. 

However, in my experience it is one of the primary arguments people rely on when explaining their concerns to others. The correlation between credence in instrumental convergence and concern about AI x-risk seems very high. IMO it is also one of the most load-bearing legs of the overall argument.

If somebody made a compelling case that we should not expect instrumental convergence by default in the current ML paradigm, I think the overall argument for x-risk would have to look fairly different from the one that is usually put forward.