TAI?

post by Logan Zoellner (logan-zoellner) · 2021-03-30T12:41:29.790Z · LW · GW · 1 comment

This is a question post.

Contents

  Answers
    12 adamShimi
    6 Daniel Kokotajlo
    5 maximkazhenkov
    3 Steven Byrnes
    1 Gerald Monroe

It looks like people around here are now using the acronym TAI with the accompanying definition: "transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution."

Is there some kind of consensus that this hasn't already happened?  

Because my current belief is that if Moore's law stopped tomorrow and there were absolutely 0 innovation in AI beyond what GANs and Transformers give us, the social implications are already of that magnitude; they're just not "evenly distributed".

Here's what I think a world where our current level of AI becomes evenly distributed looks like:

So for those who don't think TAI exists, is the claim:

  1. The story you've told requires innovations that do not yet exist
  2. The story you've told doesn't count as TAI
  3. Something else?

Specifically: "If Moore's law stopped tomorrow and there were no more 'breakthroughs' in AI (I'm not counting what an expert in 2021 would consider an obvious or incremental improvement or application), what would a world where such technology was 'evenly distributed' look like, and how would it fall short of TAI?"

Edit: I thought I should add that I don't think the industrial revolution is "evenly distributed" yet either. Let's posit the industrial age as ending with the introduction of the personal computer in 1976. US GDP/capita was then $27,441.89 (in 2012 dollars). World GDP/capita for 2019 was only $11,442. And no country poorer than South Korea has yet reached that level.

Answers

answer by adamShimi · 2021-03-30T14:46:11.294Z · LW(p) · GW(p)

Quick answer without any reference, so probably biased towards my internal model: I don't think we've reached TAI yet, because I believe that if you removed every application of AI in the world (to simplify the definition: every product of ML), the vast majority of people wouldn't see any difference; if anything, some would see a positive difference (less attention manipulation on social media, for example).

Compare with removing every computing device, or removing electricity.

And taking the AI we're making now as examples, I expect that your first two points are wrong: people are already trying to build AI into everything, and it's basically always useless or not that useful.

(An example of the disconnect between AI as thought about here or in research labs and AI in practical application is that, AFAIK, nobody knows how to make money with RL.)

The question of whether we have enough resources to scale to TAI right now is one I haven't thought about enough for a decent answer, but you can find discussions of it on LW.

answer by Daniel Kokotajlo · 2021-03-31T07:50:06.041Z · LW(p) · GW(p)

Do you think that with current technology we'll end up with a GWP growth rate of 10%+ per year? If not, then it probably doesn't count as transformative. If so, well, I guess I'd like to see more argument for that.
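For a sense of scale, here is a minimal sketch of what a sustained 10%+ GWP (gross world product) growth rate implies, assuming roughly 3% per year as the recent baseline; the 3% figure and the code are illustrative assumptions, not from this answer:

```python
import math

# Doubling time for a steady growth rate r: t = ln(2) / ln(1 + r)
def doubling_time(rate):
    return math.log(2) / math.log(1 + rate)

# ~3% is an assumed rough figure for recent world economic growth.
print(f"~3% growth: GWP doubles in about {doubling_time(0.03):.1f} years")
print(f"10% growth: GWP doubles in about {doubling_time(0.10):.1f} years")
# -> roughly 23.4 years vs 7.3 years
```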

answer by [deleted] · 2021-03-31T09:50:39.247Z · LW(p) · GW(p)

This is the 3D printing hype all over again. Remember how every object in sight was going to be made on a 3D printer? How we'd never need to go to a store again, because we'd be able to just download the blueprint for any product from the internet and make it ourselves? How we were going to print our clothes, furniture, toys, and appliances at home, and it was only going to cost pennies of raw materials and electricity? Yeah, right.

So let me throw down the exact opposite predictions for social implications if there was absolutely 0 innovation in AI:

  • AI continues to try to shoehorn itself into every product imaginable and mostly fails, because it's a solution desperately looking for a problem
  • Almost no labor (big exception: self-driving) has been replaced by robots. The robots that do exist are not ML-based
  • Universal Basic Income doesn't see widespread adoption, and that has nothing to do with AI one way or another
  • <1% of YouTube views are produced by AI-generated content
  • Space is literally the worst place to apply AI: the stakes couldn't be higher, the training data couldn't be sparser, and the tasks are so varied, complex, and unpredictable that they stretch even the generalization capability of human intelligence; it's the pinnacle of AI hubris to think AI will "revolutionize" every single field

(I use ML and AI interchangeably because AI in the broad sense just means software at this point)

In fact, since I don't believe in slow take-off, I'll do one better: these are my predictions for what will actually happen right up until FOOM.

It's time for a reality check, not only for AI but for digital technologies in general (AR/MR, folding phones, 5G, IoT). We wanted flying cars; instead we got AI-recommended 140 characters.

answer by Steven Byrnes · 2021-03-31T13:02:06.512Z · LW(p) · GW(p)

You say "absolutely 0 innovation in AI" at the top but then say "no more 'breakthroughs' in AI --I'm not counting what an expert in 2021 would consider an obvious or incremental improvement or application" at the bottom. Even leaving aside that those two quotes are not equivalent, I think there's a lot of scope for disagreement and confusion here.

Any company or research group trying to do anything new with ML—industrial robotics, for example—will immediately discover that it doesn't work on the first try, so then they work at it, and probably have clever ideas along the way, and publish or patent them, and maybe eventually they get it to work, or maybe not.

Is that "absolutely 0 innovation"? No. Is that "obvious"? Maybe the application is "obvious", or maybe not, but the steps to get it to work are not obvious. Is that "incremental improvement"? Maybe, maybe not. Is it a "breakthrough"? Well, it depends on what "breakthrough" means. In 100 years, no one will be telling stories of how heroic industrial researcher Esme figured out how to get the drone to avoid hitting branches. But on the other hand, maybe lots of people before Esme were trying to get the drone to not hit branches, and they all failed until Esme succeeded.

If "absolutely 0 innovation" is to be taken literally, well, we don't have industrial robotics, and we don't have human-level movie scripts, etc., and we're not going to get them without innovation. If you mean something like "soon" or "by default", that's a different question.

In any case, my answer is that, however transformative the bullet points you list would be, they're not nearly as transformative as "an AI that can do literally every aspect of my job, and yours, but better and cheaper". That's what IJ Good called "the last invention that man need ever make"—because the AI can take the initiative to come up with all future inventions, found all future companies, discover all future scientific truths, etc. etc. Think "very, very, very transformative". And I do think that, to get there, we need things that most people would call "breakthroughs", even if they have some continuity with existing ideas.

answer by [deleted] · 2021-03-30T20:57:49.462Z · LW(p) · GW(p)

"Most labor (including almost all physical labor) has been replaced by robots. The jobs that remain consist of research and application of AI and robotics."

This is the conclusion I doubt. I generally agree with you that this is possible, but there is a huge gap between where we are now and robotics that are actually reliable, real-time, and economical to deploy. As far as I know, actual robotics using deep learning for commercial tasks is extremely rare; I have not heard of any, I've just seen OpenAI's and Google's demos.

It's sort of the progression from "have demoed a train that could run in a tunnel" to "have dug a tunnel" to "have a working subway line" to "the whole city is interconnected".

In real-life examples, that gap was many decades:

https://en.wikipedia.org/wiki/Beach_Pneumatic_Transit [1869]

https://en.wikipedia.org/wiki/Tremont_Street_subway [1903]

https://en.wikipedia.org/wiki/IND_Sixth_Avenue_Line [1940]: approximately the completion date of the NYC system

comment by Logan Zoellner (logan-zoellner) · 2021-03-30T23:46:15.271Z · LW(p) · GW(p)

Yeah, I definitely think we're very early in the transition. I would still say this transition is extremely likely (>90%), even given no new "breakthroughs".

The real-life commercial uses of AI+robotics are still pretty limited at this point.  Off the top of my head I can only think of Roomba, Tesla, Kiva and those security robots in malls.

Anecdotally, from the people I talk to, deep learning + any application in science seems to yield immediate low-hanging fruit (one recent example being protein folding). I think the limiting factor right now is that the number of deep learning + robotics experts is extremely small. It's also the case that a robot has to be very cheap to compete with an employee making minimum wage (even in developed countries). If there were 10000x as many deep learning experts and everyone in the world were earning $30/hour, I think we would see robots taking over many more jobs than we do presently.
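To make the robot-vs-wage comparison concrete, here is a rough back-of-the-envelope sketch; the wage, lifetime, and maintenance figures are illustrative assumptions, not numbers from this thread:

```python
# Back-of-the-envelope: how cheap must a robot be to undercut one
# full-time minimum-wage worker? All figures are illustrative assumptions.

hourly_wage = 7.25          # assumed US federal minimum wage, $/hour
hours_per_year = 2000       # roughly full-time
robot_lifetime_years = 5    # assumed useful life before replacement
annual_maintenance = 5000   # assumed upkeep and supervision, $/year

worker_cost = hourly_wage * hours_per_year * robot_lifetime_years
robot_budget = worker_cost - annual_maintenance * robot_lifetime_years

print(f"Worker cost over {robot_lifetime_years} years: ${worker_cost:,.0f}")
print(f"Break-even up-front robot price: ${robot_budget:,.0f}")
# -> $72,500 for the worker, so the robot must cost under ~$47,500 up front
```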

I also think it's likely that better AI + more compute will dramatically accelerate this transition.  Maybe there will be some threshold at which this transition will become more obviously inevitable than it is today.  

 

Perhaps"when will TAI be developed?" is something that can only be answered retrospectively.  By way of analogy, it now seems obvious to us that the invention of the steam engine (1698) and flying shuttle (1733) marked the beginning of a major change in how humans worked, but it wasn't until the 1800's that those changes began to appear in the labor market.

comment by [deleted] · 2021-03-31T00:21:34.776Z · LW(p) · GW(p)

Sure. And on the Kiva and Roomba examples: at a low level, both machines could work using pure non-deep-learning software. 2D SLAM is a "classic" technique at this point, and nothing in the way Kiva robots move in x-y grids requires deep learning to work.
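As a minimal illustration that grid navigation needs no learning at all, here is a sketch of classical breadth-first path planning on a 2D occupancy grid; the toy warehouse layout and coordinates are made up, and real Kiva-style fleets add traffic scheduling and much more, but the core routing is similarly classical:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid.
    grid[r][c] == 1 means the cell is blocked; 0 means free.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk backwards through came_from to reconstruct the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    return None  # no route exists

# Toy warehouse: 0 = open floor, 1 = shelving.
warehouse = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plan_path(warehouse, (0, 0), (2, 4)))
```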

Robots that, for example, do picking of soft, complex objects are using DL, and are an example of a machine that actually needs it to work. Ditto any autonomous car.

Yeah Tesla is using DL for the distance estimation. Dunno about the mall robots.

1 comment


comment by Gordon Seidoh Worley (gworley) · 2021-03-30T17:44:51.473Z · LW(p) · GW(p)

Meta-comment on transformative AI.

I'm not sure this terminology has reached fixation yet; it's still provisional. For example, I haven't seen it bubble up to replace talk in AI safety writing that would otherwise be about superintelligence or strong optimization pressure. It seems mostly geared towards folks talking about policy and explaining why policy is needed. So, caveat that it's a bit of jargon (like lots of things we say around here) with a specific meaning that may make this question hard to answer: TAI is naturally going to be geared towards the stuff that's not here, or almost here, in order to get folks to take action, rather than towards the "boring" stuff we already live with and can see is not immediately transforming everything on the order of hours/days.