[Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles)

post by habryka (habryka4) · 2018-12-23T21:49:06.438Z · LW · GW · 1 comment

This is a link post for https://www.youtube.com/watch?v=L5pUA3LsEaw


Robert Miles has been creating AI-alignment-related videos for a while now, but I found this one particularly good.

Here is a transcript of the video, cleaned up from the automatically generated YouTube captions with proper capitalization and punctuation (thanks to TheWakalix for the punctuated version):

Hi. So I sometimes see people saying things like: "Okay, so your argument is that at some point in the future we're going to develop intelligent agents that are able to reason about the world in general and take actions in the world to achieve their goals. These agents might have superhuman intelligence that allows them to be very good at achieving their goals, and this is a problem because they might have different goals from us. But don't we kind of have that already? Corporations can be thought of as superintelligent agents. They're able to think about the world in general, and they can outperform individual humans across a range of cognitive tasks. And they have goals, namely maximizing profits or shareholder value or whatever, and those goals aren't the same as the overall goals of humanity. So corporations are a kind of misaligned superintelligence."

The people who say this, having established the metaphor, at this point tend to diverge, mostly along political lines. Some say: "Corporations are therefore a clear threat to human values and goals, in the same way that misaligned superintelligences are, and they need to be much more tightly controlled, if not destroyed altogether." Others say: "Corporations are like misaligned superintelligences, but corporations have been instrumental in the huge increases in human wealth and well-being that we've seen over the last couple of centuries, with pretty minor negative side effects overall. If that's the effect of misaligned superintelligences, I don't see why we should be concerned about AI." And others say: "Corporations certainly have their problems, but we seem to have developed systems that keep them under control well enough that they're able to create value and do useful things without literally killing everyone. So perhaps we can learn something about how to control or align superintelligences by looking at how we handle corporations."

So we're gonna let the first two fight amongst themselves, and we'll talk to the third guy. So how good is this metaphor? Are corporations really like misaligned artificial general superintelligences?

A quick note before we start: we're going to be comparing corporations to AI systems, and this gets a lot more complicated when you consider that corporations in fact use AI systems. So for the sake of simplicity, we're going to assume that corporations don't use AI systems, because otherwise the problem gets recursive, and, like, not in a cool way.

First off, are corporations agents in the relevant way? I would say: yeah, pretty much. I think that it's reasonably productive to think of a corporation as an agent. They do seem to make decisions and take actions in the world in order to achieve goals in the world. But I think you face a similar problem thinking of corporations as agents as you do when you try to think of human beings as agents. In economics it's common to model human beings as agents that want to maximize their money, in some sense, and you can model corporations in the same way, and this is useful. But it is kind of a simplification, in that human beings in practice want things that aren't just money, and while corporations are more directly aligned with profit maximizing than individual human beings are, it's not quite that simple. So yes, we can think of corporations as agents, but we can't treat their stated goals as being exactly equivalent to their actual goals in practice. More on that later.

So corporations are more or less agents. Are they generally intelligent agents? Again, yeah, I think so. I mean, corporations are made up of human beings, so they have all the same general intelligence capabilities that human beings have. So then the question is: are they superintelligent? This is where things get interesting, because the answer is "kind of". Like, SpaceX is able to design a better rocket than any individual human engineer could design. Rocket design is a cognitive task, and SpaceX is better at that than any human being; therefore SpaceX is a superintelligence in the domain of rocket design. But a calculator is a superintelligence in the domain of arithmetic. That's not enough.

Are corporations general superintelligences? Do they outperform humans across a wide range of cognitive tasks, as an AGI could? In practice, it depends on the task. Consider playing a strategy game. For the sake of simplicity, let's use a game that humans still beat AI systems at, like StarCraft. If a corporation for some reason had to win at StarCraft, it could perform about as well as the best human players. It would do that by hiring the best human players. But you won't achieve superhuman play that way: a human player acting on behalf of the corporation is just a human player, and the corporation doesn't really have a way to do much better than that. A team of reasonably good StarCraft players working together to control one army will still lose to a single very good player working alone.

This seems to be true for a lot of strategy games. The classic example is the game of Kasparov versus the World, where Garry Kasparov played against the entire rest of the world, cooperating on the Internet. The game was kind of weird, but Kasparov ended up winning. And the kind of real-world strategy that corporations have to do seems like it might be similar as well: when companies outsmart their competition, it's usually because they have a small number of decision-makers who are unusually smart, rather than because they have a hundred reasonably smart people working together. For at least some tasks, teams of humans are not able to effectively combine their intelligence to achieve highly superhuman performance, so corporations are limited to around human-level intelligence on those tasks.

To break down where this is, let's look at some different options corporations have for combining human intelligences. One obvious way is specialization: if you can divide the task into parts that people can specialize in, you can outperform individuals. You can have one person who's skilled at engine design, one who's great at aerodynamics, one who knows a lot about structural engineering, and one who's good at avionics. (Can you tell I'm not a rocket surgeon?) Anyway, if these people with their different skills are able to work together well, with each person doing what they're best at, the resulting agent will in a sense have superhuman intelligence: no single human could ever be so good at so many different things. But this mechanism doesn't get you superhumanly high intelligence, just superhumanly broad intelligence, whereas superintelligent software AGI might look like this. [Graphs: each specialist is a narrow curve; the team is the maximum of those curves, broader but no taller; the AGI curve is both broad and high.]

So specialization yields a fairly limited form of superintelligence, if you can split your task up. But that's not easy for all tasks. For example, the task of coming up with creative ideas or strategies isn't easy to split up: you either have a good idea or you don't. But as a team, you can get everyone to suggest a strategy or idea and then pick the best one. That way, a group can perform better than any individual human. How much better, though, and how does that change with the size of the team?

I got curious about exactly how this works, so I came up with a toy model. Now, I'm not a statistician, I'm a computer scientist, so rather than working it out properly I just simulated it a hundred million times, because that was quicker.

Okay, so here's the idea quality distribution for an individual human. We'll model it as a normal distribution with a mean of 100 and a standard deviation of 20. What this means is: you ask a human for a suggestion, and sometimes they do really well and come up with a 130-level strategy, sometimes they screw up and can only give you a 70 idea, but most of the time it's around 100. Now suppose we had a second person whose intelligence is the same as the first. We have both of them come up with ideas, and we keep whichever idea is better. The resulting team of two people combined looks like this: on average the ideas are better; the mean is now 107. And as we keep adding people, the performance gets better. Here's 5 people, 10, 20, 50, 100. Remember, these are probability distributions, so the height doesn't really matter. The point is that the distributions move to the right and get thinner: the average idea quality goes up, and the standard deviation goes down. So we're coming up with better ideas, and more reliably. But you see how the progress is slowing down? We're using a hundred times as much brainpower here, but our average ideas are only like 25% better. What if we use a thousand people? Ten times more resources again only gets us up to around 135. Diminishing returns.
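If you want to play with this yourself, here's a minimal sketch of the toy model in Python with NumPy. This is my reconstruction, not Rob's actual code: it runs far fewer than a hundred million trials, and the exact means you get depend on the parameters used, but the diminishing-returns shape comes out the same.

```python
# Toy model: each of n people proposes an idea whose quality is drawn from
# a normal distribution (mean 100, sd 20); the team keeps the best suggestion.
import numpy as np

rng = np.random.default_rng(0)

def best_of_team(n_people, trials=200_000, chunk=10_000, mean=100.0, sd=20.0):
    """Estimate the mean and sd of the best idea among n_people by simulation."""
    maxima = []
    for _ in range(trials // chunk):
        ideas = rng.normal(mean, sd, size=(chunk, n_people))
        maxima.append(ideas.max(axis=1))  # each trial keeps the best idea
    best = np.concatenate(maxima)
    return best.mean(), best.std()

for n in (1, 2, 5, 10, 20, 50, 100, 1000):
    m, s = best_of_team(n)
    print(f"{n:4d} people: mean idea {m:6.1f}, sd {s:4.1f}")
```

The mean creeps to the right while the spread narrows, and each factor of ten in team size buys less than the last: the expected maximum of n draws from a normal distribution only grows roughly like sd × √(2 ln n).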

So what does this mean for corporations? Well, first off, to be fair, this team of a thousand people is clearly superintelligent: the worst ideas it ever has are still so good that an individual human will hardly ever manage to think of them. But it's still pretty limited. There's all this space off to the right of the graph that it would take vast team sizes to ever get into. (If you're wondering how this would look with seven billion humans, well, you'll have to work out the statistical solution yourself.) The point is, the team isn't that superintelligent, because it's never going to think of an idea that no human could think of, which is kind of obvious when you think about it. But AGI is unlimited in that way.

And in practice, even this model is way too optimistic for corporations. Firstly, because it assumes that the quality of suggestions for a particular problem is uncorrelated between humans, which is clearly not true. And secondly, because you have to pick out the best suggestion, but how can you be sure that you'll know the best idea when you see it? It happens to be true, a lot of the time, for a lot of problems that we care about, that evaluating solutions is easier than coming up with them. ("You know, Homer, it's very easy to criticize.") Machine learning relies pretty heavily on this. Like, writing a program that differentiates pictures of cats and dogs is really hard, but evaluating such a program is fairly simple: you show it lots of pictures of cats and dogs and see how well it does. The clever bit is in figuring out how to take a method for evaluating solutions and use that to create good solutions.
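As a concrete sketch of the evaluation half (with `classify` and `labelled_images` as hypothetical stand-ins for the hard-to-write program and a labelled test set):

```python
# Evaluating a classifier: count how often its answers agree with the labels.
def accuracy(classify, labelled_images):
    correct = sum(classify(image) == label for image, label in labelled_images)
    return correct / len(labelled_images)

# e.g. accuracy(my_cat_dog_model, test_set) -> 0.97
```

Writing `classify` itself is the research problem; grading it is three lines.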

Anyway, this assumption isn't always true, and even when it is, the fact that evaluation is easier or cheaper than generation doesn't mean that evaluation is easy or cheap. Like, I couldn't generate a good rocket design myself, but I can tell you that this one needs work. [Footage of a rocket exploding.] So evaluation is easier than generation, but that's a very expensive way to find out, and I wouldn't have been able to do it the cheap way, by just looking at the blueprints. The skills needed to evaluate in advance whether a given rocket design will explode are very closely related to the skills needed to generate a non-exploding rocket design. So yeah, even if a corporation could somehow get around being limited to the kind of ideas that humans are able to generate, they're still limited to the kind of ideas that humans are able to recognize as good ideas.

ideas just how serious is this

limitation how good are the strategies

and ideas that corporations are missing

out on well take a minute to think of an

idea that's too good for any human to

recognize it as good got one well it was

worth a shot we actually do have an

example of this kind of thing in move 37

from alphago's 2016 match with world

champion Lisa doll this kind of

evaluation value that's a very that's a

very surprising move I thought I thought

it was I thought it was a mistake yeah

that turned out to be pretty much the

move that won the game but you're go

playing corporation is never going to

make move 37 even if someone happens to

suggest it it's almost certainly not

going to be chosen

normally human we never play this one

because it's not enough for someone in

your corporation to have a great idea

the people at the top need to recognize

that it's a great idea that means that

there's a limit on the effective

creative or strategic intelligence of a

corporation which is determined by the

intelligence of the decision-makers and

their ability to know a good idea when

they see one okay what about speed

Okay, what about speed? That's one of the things that makes AI systems so powerful, and one of the ways that software AGI is likely to be superintelligent. The general trend is that we go from "computers can't do this at all" to "computers can do this much faster than people" (not always, but in general), so I wouldn't be surprised if that pattern continues with AGI. How does the corporation rate on speed? Again, it kind of depends. This is closely related to something we've talked about before: parallelizability. Some tasks are easy to split up and work on in parallel, and some aren't. For example, if you've got a big list of a thousand numbers and you need to add them all up, it's very easy to parallelize. If you have ten people, you can just say: okay, you take the first hundred numbers, you take the second hundred, you take the third, and so on. Have everybody add up their part of the list, and then at the end you add up everyone's totals. However long the list is, you can throw more people at it and get it done faster, much faster than any individual human could. This is the kind of task where it's easy for corporations to achieve superhuman speed.
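As a sketch of that scheme, with a process pool standing in for the ten people (my illustration, not anything from the video):

```python
# Parallel summing: split the list into chunks, let each worker sum its chunk,
# then add up the per-worker totals at the end.
from concurrent.futures import ProcessPoolExecutor

def parallel_sum(numbers, workers=10):
    chunk = -(-len(numbers) // workers)  # ceiling division: size of each share
    parts = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))  # sum of the workers' subtotals

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 1001))))  # 500500
```

Ten workers, one-tenth of the list each, and the only serial step left is adding ten subtotals.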

But suppose instead of summing a list, you have a simple simulation that you want to run for, say, a thousand seconds. You can't say "okay, you work out the first hundred seconds of the simulation, you do the next hundred, and you do the next hundred", and so on, because obviously the person who's simulating second 100 needs to know what happened at the end of second 99 before they can get started. This is what's called an inherently serial task: you can't easily do it much faster by adding more people. You can't get a baby in less than nine months by hiring two pregnant women, you know.
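The serial case, by contrast, is a loop you can't unroll across workers. A sketch, where `step` is a hypothetical state-update function:

```python
# An inherently serial task: step t+1 needs the output of step t,
# so extra workers can't shorten the chain.
def run_simulation(initial_state, steps, step):
    state = initial_state
    for _ in range(steps):
        state = step(state)  # nothing here can start early
    return state
```

However many people you hire, the thousand `step` calls still happen one at a time.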

Most real-world tasks are somewhere in between: you get some benefits from adding more people, but again, you hit diminishing returns. Some parts of the task can be split up and worked on in parallel; some parts need to happen one after the other. So yes, corporations can achieve superhuman speed at some important cognitive tasks. But really, if you want to talk about speed in a principled way, you need to differentiate between throughput (how much goes through the system within a certain time) and latency (how long it takes a single thing to go through the system). These ideas are most often used in things like networking, and I think that's the easiest way to explain it.

So basically, let's say you need to send someone a large file, and you can either send it over a dial-up internet connection or you can send them a physical disk through the postal system. The dial-up connection is low latency (each bit of the file goes through the system quickly), but it's also low throughput (the rate at which you can send data is pretty low). Whereas sending the physical disk is high latency (it might take days for the first bit to arrive), but it's also high throughput: you can put vast amounts of data on the disk, so your average data sent per second could actually be very good.
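With some made-up but plausible numbers (mine, not the video's), the disk loses badly on latency but wins on average throughput:

```python
dialup_bps = 56_000                 # dial-up: first bit arrives in seconds
disk_bytes = 1e12                   # a 1 TB disk sent through the post
post_seconds = 2 * 24 * 3600        # two days before the first bit arrives
print(f"dial-up: {dialup_bps / 1e6:.3f} Mbit/s")
print(f"post:    {disk_bytes * 8 / post_seconds / 1e6:.0f} Mbit/s on average")
```

That's about 46 Mbit/s for the parcel versus 0.056 Mbit/s for the modem, even counting the days of waiting against it.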

Corporations are able to combine human intelligences to achieve superhuman throughput, so they can complete large, complex tasks faster than individual humans could. But the thing is, a system can't have lower latency than its slowest component, and corporations are made of humans, so corporations aren't able to achieve superhuman latency. In practice, as you've no doubt experienced, it's quite the opposite. So corporate intelligence is kind of like sending the physical disk: corporations can get a lot of cognitive work done in a given time, but they're slow to react. And that's a big part of what makes corporations relatively controllable: they tend to react so slowly that even governments are sometimes able to move fast enough to deal with them. Software superintelligences, on the other hand, could have superhuman throughput and superhuman latency, which is something we've never experienced before in a general intelligence.

So are corporations superintelligent agents? Well, they're pretty much generally intelligent agents which are somewhat superintelligent in some ways and somewhat below human performance in others. So, yeah, kinda? The next question is: are they misaligned? But this video is already like 14 and a half minutes long, so we'll get to that in the next video.

I want to end the video by saying a big thank you to my excellent patrons: it's all of these people here. In this video I'm especially thanking Pablo [name unclear in the captions]. Recently I've been putting a lot of time into some projects that I'm not able to talk about yet, but as soon as I can, the patrons will be the first to know. Thank you again so much for your generosity, and thank you all for watching. I'll see you next time.
