Artificial General Horsiness

post by robotelvis · 2023-11-11T05:15:03.107Z · LW · GW · 0 comments

This is a link post for https://messyprogress.substack.com/p/artificial-general-horsiness

A lot of attention has been paid to the question of Artificial General Intelligence (AGI). Will artificial intelligence ever reach the level at which it can accomplish every intellectual task a human can do, and if so, how soon will that happen?

I think that this is the wrong question.

Cars are a long way from achieving Artificial General Horsiness. Even today, more than a century after the invention of the car, there are many things a horse can do vastly better than a car, such as jumping over fences, navigating by itself, swimming through water, dealing with mud, squeezing through tight passageways, fueling itself from vegetation, and providing social companionship.

And yet cars have still managed to replace horses for pretty much all tasks that people previously used horses for. This is because it turned out that, while cars can’t do everything horses can do, their speed and cost advantages were big enough that it was worth adapting the rest of the world (e.g. building roads) to compensate for their limitations.

There is a long history of people claiming that a task is a true mark of intelligence right up until a computer becomes good at it, at which point people decide that the task doesn’t require true intelligence after all. So far that list includes logical reasoning, playing chess, translating between languages, driving cars, writing poetry, creating art, playing video games, and passing academic exams.

I expect that people will still be claiming that computers aren’t “really” intelligent long after the human race has been subjugated by its AI overlords and humans no longer have any economic purpose.


In 2011 Marc Andreessen said that software was eating the world. What he meant by this was that computer technology had reached a level of maturity where it had the potential to transform pretty much every area of industry. All that needed to happen was for software engineers to do the often-mundane work of connecting computers to the problems we wanted them to solve.

And Marc Andreessen was mostly right. In the years since he wrote that post, a large fraction of our economy has been transformed by technology, with incumbent companies either adopting software or being replaced by tech-driven competitors. We can see this in the way Uber replaced taxis, Amazon took over shopping, Airbnb changed the hotel market, and streaming services took over TV and music.

I suspect that we are currently at a similar point with AI. The AI we have isn’t “artificial general intelligence”, but it is already good enough to replace humans at many tasks that people make a well-paid living from, whether they are artists, lawyers, doctors, software engineers, screenwriters, teachers, or journalists.

All that needs to happen is for humans to do the relatively mundane work of connecting AI to the problems we want it to solve, through fairly well-understood processes like data set creation, fine-tuning, prompt engineering, and prompt chaining. This work hasn’t been done yet, and it will take time, but it’s not clear there is any technological impediment to it happening.
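To make that concrete, here is a minimal sketch of what prompt chaining can look like in practice. It assumes the OpenAI Python client (openai >= 1.0); the two-step support-ticket chain, the model name, and the prompts are illustrative placeholders, not a recommended setup.

```python
# A minimal prompt-chaining sketch, assuming the OpenAI Python client
# (openai >= 1.0). The two-step chain, model name, and prompts are
# illustrative placeholders, not a production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one user message and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

ticket_text = "My order arrived damaged and I'd like a replacement."

# Step 1: extract the relevant facts from a task-specific document.
facts = ask(f"List the key facts in this support ticket:\n{ticket_text}")

# Step 2: chain the first answer into a second prompt.
reply = ask(f"Using only these facts, draft a polite customer reply:\n{facts}")
print(reply)
```

Each link in the chain narrows the task for the model, which is exactly the kind of unglamorous glue work described above.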

Of course, doing relatively mundane tasks is the sort of thing that AIs are good at. As I was writing this post, OpenAI released GPTs, a tool that makes it easier to connect GPT to task-specific data sources and to teach it to do the job you want it to do. We should expect to see more such developments, making it easier for AIs to take over more jobs, even if the underlying intelligence doesn’t improve (which it will).


I work as a software engineer and spend a large fraction of my time using ChatGPT. When I have a new problem to solve, I’ll ask GPT for good approaches to solving it, then ask it to write code implementing the components I need, then ask it to explain any code I don’t understand, then give it any error messages the code produces and ask it to fix them, then tell it ways I want the code changed.
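That loop is mechanical enough to sketch in code. The following is a rough illustration of the ask-run-fix cycle, under the same assumptions as the earlier sketch (OpenAI Python client, openai >= 1.0); the ask() helper, the model name, and the use of exec() are illustrative, and executing model-written code outside a sandbox is unsafe.

```python
# A rough sketch of the ask-run-fix loop described above, assuming the
# OpenAI Python client (openai >= 1.0). Running model-written code with
# exec() is for illustration only; real use needs a sandbox.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one user message and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

code = ask("Write a Python function median(xs) returning the median of a "
           "list of numbers. Reply with only the code, no backticks.")

for _ in range(3):  # give the model a few attempts to fix its own errors
    try:
        exec(code)                 # caution: never exec untrusted code unsandboxed
        print(median([3, 1, 2]))   # smoke test of the generated function
        break                      # it ran cleanly, so stop iterating
    except Exception as error:
        # Feed the failure back, just as you would paste an error message
        # into the chat window, and ask for a corrected version.
        code = ask(f"This code failed with {error!r}. Fix it and reply with "
                   f"only the corrected code, no backticks:\n{code}")
```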

I still have to make a lot of the higher-level decisions myself, and I still need to write the trickiest code myself, and I still need to rewrite code it writes badly, but I feel more like an engineering manager managing a team of AI coders than like an individual contributor.

Even if AI can’t do everything a human can do, the fact that it can do some of the things a human can do will transform the nature of many jobs.

If someone can use GPT to quickly teach them things they don’t know, it becomes much less necessary to spend years obtaining specialized knowledge. Previously, getting up to speed required reading a ton of books, having already read the books necessary to understand those books, and knowing which books to read in the first place. Nowadays GPT can just give you the knowledge you need, right when you need it.

If a job consists of a small part that only a super-smart human can do, and a large part that a low-skill human could do, then it now becomes possible for one super-smart human to do the work that used to require many more people. In some cases this means that a senior person who would previously have managed a team of juniors now instead directs the activities of a team of “AI flunkies”.

Part of the reason for the recent Hollywood screenwriters’ strike is the fear that AI will give directors the ability to write scripts themselves, without needing to hire screenwriters to do it for them. In many cases directors have always had the skills needed to write good scripts; what they didn’t previously have was the time. If AI can do the lower-skill wordsmithing, the director can focus their attention on the higher-level plot arcs and character development.

Even if the AI isn’t quite as intelligent as the humans, the fact that it is vastly faster and vastly cheaper is often more than enough to compensate, as was the case when cars replaced horses.


In some ways it seems good to have AI do our work for us. Most people don’t actually enjoy work, and if we didn’t have to do as much of it, we could spend more time relaxing or being with friends and family.

Maybe we’ll even reach a state where human labor isn’t needed at all. We’ll live in a utopia where all problems we might care about are taken care of by AI, and we can just relax in the presence of our robot servants.

But work comes paired with power. If you have the ability to do some valuable task that few other people can do, that gives you the ability to charge money for your labor and exert influence over other people.

And work also comes paired with meaning. People define themselves by the change they seek to make in the world and the value they bring to others. If someone has nothing they can do that an AI can’t do better, they may see no reason to live.

It has long been normal human behavior for the people who do some valuable job to erect barriers that stop other people from competing with them and thus reducing their power. Many things done by doctors could be done just as well by nurses. Many things done by lawyers could be done just as well by paralegals. Many things done by university graduates could be done just as well by people without a degree. Many things done by union members could be done just as well by people outside the union.

It seems natural to assume that there will be efforts to put up similar barriers to prevent AIs from doing lucrative tasks. Some of these efforts may even succeed. It has always been the case that who gets paid the most is as much a consequence of who has the power to limit competition as it is a consequence of who has valuable skills.

In the long term, I expect that AI will be broadly positive for society. But AI is going to significantly increase the economic power of some groups and significantly decrease the economic power of other groups, and that is something that always leads to social disruption.
