Artificial general intelligence is here, and it's useless

post by George3d6 · 2019-10-23T19:01:26.584Z · LW · GW · 22 comments

This is a link post for https://blog.cerebralab.com/Artificial_general_intelligence_is_here,_and_it's_useless

Contents

  1. We already have AGIs
  2. Defining intelligence
  3. Testing intelligence
  4. The problem of gradual tool building
  In conclusion

*Disclaimer: I originally wrote this for my own blog and as an editorial for Skynet Today. A few weeks ago I started reading LW and thought people here might enjoy it; I think it's in the spirit of the conversation here. So I made a few edits and the text below is the result.*

One of the most misunderstood ideas polluting the minds of popular "intellectuals", many of them seemingly well acquainted with statistics and machine learning, is the potential or threat that developing an artificial general intelligence (AGI) would present to our civilization.

This myth stems from two misunderstandings of reality.

One misunderstanding is related to the refinement and extensibility of current ML algorithms and FPA hardware. However, discussing that always leads to people arguing "what ifs" (e.g. what if cheap quantum computers with very efficient I/O become a thing?), so I won't pursue that line of thought here.

A second, easier-to-debunk misunderstanding is related to the practicality of an AGI. Assuming that our wildest dreams of hardware were to come true… would we be able to create an AGI, and would this AGI actually have any effect upon the world other than being a fun curiosity?

1. We already have AGIs

AGI is here; as of the time of writing there are an estimated 7,714,576,923 AGI algorithms residing upon our planet. You can use the vast majority of them for less than $2/hour. They can accomplish the vast majority of intellectual tasks that can be well-defined by humans, not to mention they can invent new tasks themselves.

They are capable of creating new, modified versions of themselves, updating their own algorithms, sharing their algorithms with other AGIs and learning new complex skills. To add to that, they are energy efficient: you can keep one running optimally for $5 to $50/day depending on location, much less than your average server farm used to train a complex ML model.

This is a rather obvious observation, but one that needs to be noted nonetheless. Nobody has ever complained about the severe lack of humans in the world. If I were to ask anyone what they would envision as making a huge positive impact upon the future, few would answer vastly increasing birth rates.

So, if we agree that 70 billion people wouldn't be much better than 7 billion, that is to say, that adding brains doesn't scale linearly… why are we under the assumption that artificial brains would help?

If we disagree on that assumption, then what's stopping the value of human mental labor from skyrocketing if it's in such demand? What's stopping hundreds of millions of people with perfectly average working brains from finding employment?

Well, you could argue, it's about the quality of the intelligence; not all humans are intellectually equal. Having a few million artificial Joe Nobodies wouldn't help the world much, but having even a few dozen Von Neumanns would make a huge difference. Or, even better, it's about having an AI that's more intelligent than any human we've yet encountered.

This leads me to my second point.

2. Defining intelligence

How does one define an intelligent human, or intelligence in general?

An IQ test is the go-to measure of human intelligence used by sociologists and psychologists.

However, algorithms have been able to readily out-score humans on IQ tests for a very long time, and they are able to do so reliably with more and more added constraints.

The problem here is that IQ tests are designed for humans… not machines.

But, let's assume we come up with a "machine intelligence quotient" (MIQ) test that our potential AGI friends cannot use their perks to "cheat" on.

But how do we design it to avoid the pitfalls of a human IQ test when taken to the extremes? Our purpose, after all, is not to "grade" algorithms with this test in a stagnant void, but to improve them in such a way that scoring higher on the test means they are more "intelligent" in general.

In other words, this MIQ test needs to be very efficient at spotting intelligence outliers.

If we come back to an IQ test, we'll notice it's rather inaccurate at the thin ends of the distribution.

Have you ever heard of Marilyn vos Savant or Chris Langan or Terence Tao or William James Sidis or Kim Ung-yong? These are, as far as IQ tests are concerned, the most intelligent members our species currently contains.

While I won't question their presumed intelligence, I can safely say their achievements are rather unimpressive. Go down the list of high IQ individuals and what you'll mostly find is rather mediocre people with a few highly interesting quirks (can solve equations quickly, can speak a lot of languages, can memorize a lot of things… etc).

Once you consider the top 100,000 or so, there is most certainly a great overlap with people that have created impressive works of engineering, designed foundational experiments, invented theories that explain various natural phenomena, became great leaders… etc (let's call these people "smart", for the sake of brevity).

But IQ is not a predictor of that; it's more of a filter. You can almost guarantee that a highly smart person (as viewed by society) will have a high IQ, but having the highest IQ doesn't even guarantee you will be in the top 0.x% of "smart" people as defined by your achievements.

So even if we somehow manage to make this MIQ as good an indicator of whether or not a machine is intelligent as IQ is for humans, it will still be a bad criterion to benchmark against.

Indeed, the only reason IQ is a somewhat useful marker for humans is that natural selection did not optimize for IQ. Start running some mad-scientist, IQ-based human optimization farms and soon enough you might end up with individuals that can score 300 on an IQ test but can't hold a civil conversation, operate in society or tie their shoelaces.

This seems like an obvious point: a good descriptor of {X} becomes bad once it starts being used as a guideline to maximize {X}. This should be especially salient to AI researchers and anyone working in automatic-differentiation-based modeling: optimizing a model for a known criterion (performance on the training data) does not guarantee success on similar future data (the testing/validation set); indeed, optimizing it too much can lead to a worse model than stopping at some middle ground.
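
To make that concrete, here is a minimal sketch of the same failure mode (my own illustration, using scikit-learn and synthetic data, not anything from a real benchmark): the harder we optimize the proxy (fit to the training data), the worse the thing we actually care about (performance on held-out data) eventually gets.

```python
# Minimal sketch: optimizing the training criterion too hard degrades the
# quantity we actually care about (held-out error). Requires numpy and scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, 40)   # noisy "ground truth"
x_train, y_train = x[::2], y[::2]                # half the points for fitting
x_test, y_test = x[1::2], y[1::2]                # the rest held out

for degree in (1, 3, 9, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    # Training error keeps shrinking as capacity grows; held-out error
    # typically bottoms out and then climbs back up.
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```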

We might be able to say "An intelligent algorithm should have an MIQ of at least 100", but we'll hardly be able to say "Having an MIQ of 500 means that an algorithm has super-human intelligence".

The problem is that we aren't intelligent enough to define intelligence. If the definition of intelligence does exist, there is no clear path to finding out what it is.

Even worse, I would say it's highly unlikely that such a definition of intelligence exists. What I might consider intelligent is not what you would consider intelligent… our definitions may overlap to some extent, and we'll likely be able to come up with a common definition of what constitutes "average" intelligence (e.g. the IQ test), but they would diverge towards the tails.

Most people will agree as to whether or not someone is somewhat intelligent, but they will disagree to no end on a "most intelligent people in the world" list.

Well, you may say, that could indeed be true, but we can judge people by their achievements. We can disagree all day on whether or not some high IQ individual like Chris Langan is a quack numerologist or a misunderstood god. But we can all agree that someone like Richard Feynman or Tom Mueller or Alan Turing is rather bright based on their achievements alone.

Which brings me to my third point.

3. Testing intelligence

The problem with our hypothetical superhuman AGI, since we can't come up with a simple test to determine its intelligence, is that it would have to prove itself as capable.

This can be done in three ways:

  1. Use previous data and see if the "AI" is able to perform on said data as well or better than humans.
  2. Use very good simulations of the world and see if the "AI" is able to achieve superhuman results competing inside said simulations.
  3. Give the "AI" the resources to manifest itself in the real world and act in much the same way a human would (with all the benefits of having a computer for a brain).

Approach number (1) is how we currently train ML algorithms, and it has the limitation of only allowing us to train on very limited tasks where we know all the possible "paths" once the task is complete.

For example, we can train a cancer-detecting "AI" on a set of medical imaging data, because it's rather easy to then take all of our subjects and test whether or not they "really" have cancer using more expensive methods or by waiting.

It's rather hard to train a cancer-curing "AI", since that problem contains "what ifs" that can't be explored. There are limitless treatment options, and if a treatment fails (the person dies of cancer)… we can't really go back and try again to see what the "correct" solution was.
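
To make approach (1) concrete, here is a minimal sketch (my own, with synthetic features standing in for real imaging data): detection works as a supervised problem because every prediction can eventually be checked against an observed label, while the treatment-choosing problem has no such column to check against.

```python
# Sketch of approach (1): supervised learning works when every prediction can be
# checked against ground truth we eventually observe. Synthetic stand-in data,
# not real medical imaging. Requires scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Pretend each row is the extracted features of one scan and y is "turned out
# to have cancer", confirmed later by biopsy or follow-up: the label is observable.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# The same recipe breaks for "choose the best treatment": for each patient we only
# ever observe the outcome of the one treatment actually given, so there is no
# ground-truth column for the counterfactual alternatives to be scored against.
```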

I wrote a bit more about this problem here, if you're interested in understanding it a bit more. But, I assume that most readers with some interest in statistics and/or machine learning have stumbled upon this issue countless times already.

This can be solved by (2), which is creating simulations in which we can test an infinite number of hypotheses. The only problem is that creating simulations is rather computationally and theoretically expensive.

To go back to our previous example of a cancer-curing "AI": currently the "bleeding edge" of biomolecular dynamics simulation is being able to simulate a single medium-sized gene, in a non-reactive substance, for a few nanoseconds, by making certain intelligent assumptions that speed up our simulation by making it a bit less realistic, on a supercomputer worth a few dozen million dollars… Yeah, the whole simulation idea might not work out that well after all.

Coincidentally, most phenomena that can be easily simulated are also the kind that are either very simplistic or fall into category (1); go figure.

So we are left with approach (3): giving our hypothetical AGI the physical resources to put its humanity-changing ideas into practice. But… who's going to give away those resources? Based on what proof? Especially when the proposition here is that we might have to try out millions of these AGIs before one of them is actually extremely smart, much like we do with current ML models.

The problem, of course, is that resources are finite and controlled by people (not in the sense that they are stagnant, but we can't create an infinite amount of resources on demand; it takes time). Nothing of extrinsic value is free.

Which leads me to my fourth point about AGI being useless.

4. The problem of gradual tool building

The process by which humanity advances is, once you boil it down, one of building new tools using previous tools.

People in the bronze age weren't unable to smelt steel because they didn't have the intelligence to "figure it out", but because they didn't have the tools to reach the desired temperature, evaluate the correct iron & carbon mixture, mine the actual iron out of the ground and establish the trade networks required to make the whole process viable.

As tools of bronze helped us build better tools of bronze, which helped us discover easier-to-mine iron deposits, build ships and caravans to trade the materials needed to process said iron, and build better smelters, we were suddenly able to smelt steel. Which led to us being able to build better and better tooling out of steel… etc, etc.

Human civilization doesn't advance by breeding smarter and smarter humans; it advances by building better and better tools.

The gradual knowledge that we acquire is mostly due to our tools. We don't owe our knowledge of particle physics or chemistry to a few intelligent blokes that "figured it out". We owe it to the years of cumulative tool building that led us to being able to build the tools to perform the experiments that gave us insights into the very world we inhabit.

Take away Max Planck and you might set quantum mechanics back by a few years, but in a rather short time someone would have probably figured out the same things he did. This is rather obvious when you look at multiple discoveries throughout history, that is to say, people discovering the same thing, at about the same time, without being aware of each other’s works.

Had Max Planck been born among a tribe of hunter-gatherers in the Neolithic period, he might have become a particularly clever hunter or shaman… but it's essentially impossible for him to have had the tooling which allowed him to make the same discoveries about nature as the 20th-century Planck.

However, to some extent, the process of tool building is inhibited by time and space. If we decided to build a more efficient battery, or a more accurate electron microscope, or a more accurate radio telescope, we wouldn't be limited by our intelligence, but by the thousands of hours required for our factories to build the better tools required to build better factories to build even better tools required to build even better factories in order to build even more amazing tools… etc.

Thousands of amazing discoveries, machines and theories lie within our grasp and one of the biggest bottlenecks is not intelligence but resources.

No matter how smart your interstellar spaceship design is, you will still need rare metals, radioactive materials, hard-to-craft carbon fiber and the machinery to put it all together. Which is rather difficult, since we've collectively decided those resources are much better spent on portable masturbation aids, funny-looking things to stick on our bodies and giant bombs just in case we need to murder all of humanity.

So, would a hypothetical superintelligent AGI help this process of tool building? Most certainly. But it will probably run into the same bottlenecks that people who want to create amazing things face today: other people not wanting to give up their toys, and physical reality requiring the passage of time to shape.

Don't get me wrong, I'm not necessarily blaming people for choosing to focus on 3D printing complexly shaped water-resistant phallus-like structures instead of using those resources to research senolytic nanobots. As I've mentioned before, defining intelligence is hard, and so is defining progress. What might seem "awesome" to the AGI reading this article could be rather boring or stupid to a few other billions of AGIs.

In conclusion

If you think your specific business case might require an AGI, feel free to hire one of the 3 billion people living in Southeast Asia who will gladly help you with tasks for a few dollars an hour. It's likely to be much cheaper; Amazon is already doing it with Alexa, since it turns out to be somewhat cheaper than doing it via machine learning.

Is that to say I am "against" the machine learning revolution?

No, fuck no, I'd be an idiot and a hypocrite if I thought machine learning wouldn't lead to tremendous human progress in the following decades.

I specifically wanted to work in the area of generic machine learning. I think it's the place to be in the next 10 or 20 years in terms of exciting developments.

But we have to stop romanticizing or fear-mongering about the pointless concept of a human-like intelligence being produced by software. Instead, we should think of machine learning (or "AI", if you must really call it that) as a tool in our great arsenal of thinking tools.

Machine learning is awesome if you apply it to the set of problems it's good at solving, and if we try to extend that set of problems by getting better at collecting data and building algorithms, we might be able to accomplish some amazing feats. But the idea that an algorithm that can mimic a human would be of particular use to us is about as silly as the idea that a hammer which also serves as a teaspoon would revolutionize the world of construction. Tools are designed to be good at their job, not much else.

And who knows, maybe someday we'll combine some of these awesome algorithms we've developed, add a few extra bits, shield them inside a realistic-looking robot body and realize that the "thing" we've created might well be a human… then it can join us and the other billions of humans in being mostly useless at doing anything of real value.

22 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-24T01:06:45.429Z · LW(p) · GW(p)

Thanks for posting this here; the downvotes are probably because you don't seem to understand the positions you are arguing against, and since those positions are what we are famous for here, we get even more salty than we would if you were shitting on someone else. :)

[edit: Looking back, this is an unusually harsh tone for me. I sincerely regret any hurt it may have caused. It seemed to me to be justified retaliatory snark, given the harsh language you used to describe people like me who are concerned about AGI.]

Here's my point-by-point lightning rebuttal:


AGI is here; as of the time of writing there are an estimated 7,714,576,923 AGI algorithms residing upon our planet. You can use the vast majority of them for less than $2/hour. They can accomplish the vast majority of intellectual tasks that can be well-defined by humans, not to mention they can invent new tasks themselves.

There's a big difference between being able to accomplish tasks that can be well-defined by humans, and being able to accomplish all strategically & economically relevant tasks. The latter is what people are talking about when they talk about superintelligence (See: Bostrom, Superintelligence)

Also, it's debatable that current algorithms are better than humans at the vast majority of well-defined intellectual tasks. Here's an intellectual task that is well-defined: Produce a 2-hour video that will make at least $10M at the box office this summer. Here's another: Tell me which of these thousand stocks will yield the highest returns if I invest my savings in it, over the next year.

They are capable of creating new, modified versions of themselves, updating their own algorithms, sharing their algorithms with other AGIs and learning new complex skills.

Yeah but they aren't good at it, or at least, there are lots of very useful modifications and updates that still require humans, not to mention even more possible modifications that humans can't do but could in principle be done by AGI.

So, if we agree that 70 billion people wouldn't be much better than 7 billion, that is to say, that adding brains doesn't scale linearly… why are we under the assumption that artificial brains would help?

Adding brains doesn't scale linearly but it does scale to some significant extent. See e.g. here for fun discussion. To answer your question, as Bostrom might, artificial brains can differ from human brains in speed, organization, and quality/intelligence. And they tend to be cheap, too. So hypothetical future AGI systems might be like humans only much faster, or like humans only much cheaper, or like humans only much better at working in teams, or... just a lot smarter than humans. (that's the point you are going to discuss later I take it)

If we disagree on that assumption, then what's stopping the value of human mental labor from skyrocketing if it's in such demand? What's stopping hundreds of millions of people with perfectly average working brains from finding employment?

I'm not sure I follow your dilemma here. Why is it that the value of average human mental labor must skyrocket, if we think that 70 billion people would be better than 7 billion?


Well, you could argue, it's about the quality of the intelligence; not all humans are intellectually equal. Having a few million artificial Joe Nobodies wouldn't help the world much, but having even a few dozen Von Neumanns would make a huge difference. Or, even better, it's about having an AI that's more intelligent than any human we've yet encountered.

Yes. Though as mentioned before, there's also speed, cheapness, and coordination ability to consider. They would make AGI world-shakingly powerful even if it was no smarter than me.

This leads me to my second point.
2. Defining intelligence
How does one define an intelligent human or intelligence in general ?

Good question. Have you looked at the literature on the subject, with respect to AGI at least? Bostrom spends a chapter or so of his book on this question, and e.g. Shane Legg talks about it in his dissertation and some of his published work.

An IQ test is the go-to measure of human intelligence used by sociologists and psychologists.
However, algorithms have been able to readily out-score humans on IQ tests for a very long time, and they are able to do so reliably with more and more added constraints.
The problem here is that IQ tests are designed for humans… not machines.

Yes.

But, let's assume we come up with a "machine intelligence quotient" (MIQ) test that our potential AGI friends cannot use their perks to "cheat" on.
But how do we design it to avoid the pitfalls of a human IQ test when taken to the extremes ? Our purpose, after all, is not to "grade" algorithms with this test in a stagnant void, but to improve them in such a way that scoring higher on the tests means they are more "intelligent" in general.
In other words, this MIQ test needs to be very efficient at spotting intelligence outliers.
If we come back to an IQ test, we'll notice it's rather inaccurate at the thin ends of the distribution.
Have you ever heard of Marilyn vos Savant or Chris Langan or Terence Tao or William James Sidis or Kim Ung-yong? These are, as far as IQ tests are concerned, the most intelligent members our species currently contains.
While I won't question their presumed intelligence, I can safely say their achievements are rather unimpressive. Go down the list of high IQ individuals and what you'll mostly find is rather mediocre people with a few highly interesting quirks (can solve equations quickly, can speak a lot of languages, can memorize a lot of things… etc).

This relates to Goodhart's Law [LW · GW]. IQ is a proxy for intelligence, and intelligence is a proxy for world-shaking-ability. So yeah we should expect there to be outliers that are very high in the proxy but not in the other thing.

Once you consider the top 100,000 or so, there is most certainly a great overlap with people that have created impressive works of engineering, designed foundational experiments, invented theories that explain various natural phenomena, became great leaders… etc (let's call these people "smart", for the sake of brevity).
But IQ is not a predictor of that, it's more of a filter. You can almost guarantee that a highly smart person (as viewed by society) will have a high IQ, but having the highest IQ doesn't even guarantee you will be in the top 0.x% of "smart" people as defined by your achievements.

I don't remember where I saw this (Gwern maybe?) but I think I read that no, IQ continues to be predictive even fairly far out into the tail.

So even if we somehow manage to make this MIQ as good an indicator of whether or not a machine is intelligent as IQ is for humans, it will still be a bad criterion to benchmark against.
Indeed, the only reason IQ is a somewhat useful marker for humans is that natural selection did not optimize for IQ. Start running some mad-scientist, IQ-based human optimization farms and soon enough you might end up with individuals that can score 300 on an IQ test but can't hold a civil conversation, operate in society or tie their shoelaces.
This seems like an obvious point: a good descriptor of {X} becomes bad once it starts being used as a guideline to maximize {X}. This should be especially salient to AI researchers and anyone working in automatic-differentiation-based modeling: optimizing a model for a known criterion (performance on the training data) does not guarantee success on similar future data (the testing/validation set); indeed, optimizing it too much can lead to a worse model than stopping at some middle ground.

Nice--that is basically exactly Goodhart's Law. OK, we are on the same page here. And actually I think this is an interesting point you are making, that I hadn't considered before. If intelligence is a proxy for world-shaking-ability, then maybe superintelligence won't be that powerful after all... Doesn't seem right to me, but I gotta count it as a decent and novel argument at least. I guess the objection that comes to mind is that this argument might be too general, it might prove too much. For example, vehicular speed is only a proxy for how long a transatlantic journey takes--there are other things to consider, like stops, loading and unloading times, etc. But it would be foolish to say, prior to the advent of planes, "So now we are about to invent aeroplanes that can travel very quickly. But we shouldn't expect transatlantic journey times to decrease, because vehicular speed is only a proxy for journey time!"

Intelligence may be only a proxy for world-shaking-ability, but remember, AI scientists are not just optimizing for MIQ, they are optimizing for world-shaking-ability. More powerful, impactful algorithms will be sought out, replicated, modified and expanded on.

We might be able to say "An intelligent algorithm should have an MIQ of at least 100", but we'll hardly be able to say "Having an MIQ of 500 means that an algorithm has super-human intelligence".
The problem is that we aren't intelligent enough to define intelligence. If the definition of intelligence does exist, there is no clear path to finding out what it is.
Even worse, I would say it's highly unlikely that such a definition of intelligence exists. What I might consider intelligent is not what you would consider intelligent… our definitions may overlap to some extent, and we'll likely be able to come up with a common definition of what constitutes "average" intelligence (e.g. the IQ test), but they would diverge towards the tails.

We don't actually need a definition of intelligence in order to be worried about the possible harmful effects of smarter-than-us AGI. See this paper for more.

Most people will agree as to whether or not someone is somewhat intelligent, but they will disagree to no end on a "most intelligent people in the world" list.
Well, you may say, that could indeed be true, but we can judge people by their achievements. We can disagree all day on whether or not some high IQ individual like Chris Langan is a quack numerologist or a misunderstood god. But we can all agree that someone like Richard Feynman or Tom Mueller or Alan Turing is rather bright based on their achievements alone.

Yes.

Which brings me to my third point.
3. Testing intelligence
The problem with our hypothetical superhuman AGI, since we can't come up with a simple test to determine its intelligence, is that it would have to prove itself as capable.
This can be done in three ways:
Use previous data and see if the "AI" is able to perform on said data as well or better than humans.
Use very good simulations of the world and see if the "AI" is able to achieve superhuman results competing inside said simulations.
Give the "AI" the resources to manifest itself in the real world and act in much the same way a human would (with all the benefits of having a computer for a brain).
Approach number (1) is how we currently train ML algorithms, and it has the limitation of only allowing us to train on very limited tasks where we know all the possible "paths" once the task is complete.
For example, we can train a cancer-detecting "AI" on a set of medical imaging data, because it's rather easy to then take all of our subjects and test whether or not they "really" have cancer using more expensive methods or by waiting.
It's rather hard to train a cancer-curing "AI", since that problem contains "what ifs" that can't be explored. There are limitless treatment options, and if a treatment fails (the person dies of cancer)… we can't really go back and try again to see what the "correct" solution was.
I wrote a bit more about this problem here, if you're interested in understanding it a bit more. But, I assume that most readers with some interest in statistics and/or machine learning have stumbled upon this issue countless times already.
This can be solved by (2), which is creating simulations in which we can test an infinite number of hypotheses. The only problem is that creating simulations is rather computationally and theoretically expensive.
To go back to our previous example of a cancer-curing "AI": currently the "bleeding edge" of biomolecular dynamics simulation is being able to simulate a single medium-sized gene, in a non-reactive substance, for a few nanoseconds, by making certain intelligent assumptions that speed up our simulation by making it a bit less realistic, on a supercomputer worth a few dozen million dollars… Yeah, the whole simulation idea might not work out that well after all.
Coincidentally, most phenomena that can be easily simulated are also the kind that are either very simplistic or fall into category (1); go figure.
So we are left with approach (3), giving our hypothetical AGI the physical resources to put its humanity-changing ideas into practice. But… who's going to give away those resources ? Based on what proof ? Especially when the proposition here is that we might have to try out millions of these AGIs before one of them is actually extremely smart, much like we do with current ML models.
The problem, of course, is that resources are finite and controlled by people (not in the sense that they are stagnant, but we can't create an infinite amount of resources on demand, it takes time). Nothing of extrinsic value is free.
Which leads me to my fourth point about AGI being useless.

So yeah, it might cost us some real-world resources to test our AGI in the real world, and if we don't do that, we'll be stuck to testing it in simulation which has limitations.

So what?

This is already true of current ML systems, but it doesn't stop progress or even slow it down much.

4. The problem of gradual tool building
The process by which humanity advances is, once you boil it down, one of building new tools using previous tools.
People in the bronze age weren't unable to smelt steel because they didn't have the intelligence to "figure it out", but because they didn't have the tools to reach the desired temperature, evaluate the correct iron & carbon mixture, mine the actual iron out of the ground and establish the trade networks required to make the whole process viable.
As tools of bronze helped us build better tools of bronze, which helped us discover easier-to-mine iron deposits, build ships and caravans to trade the materials needed to process said iron, and build better smelters, we were suddenly able to smelt steel. Which led to us being able to build better and better tooling out of steel… etc, etc.
Human civilization doesn't advance by breeding smarter and smarter humans; it advances by building better and better tools.

It was a bit of both, presumably. But yes, point taken.

The gradual knowledge that we acquire is mostly due to our tools. We don't owe our knowledge of particle physics or chemistry to a few intelligent blokes that "figured it out". We owe it to the years of cumulative tool building that led us to being able to build the tools to perform the experiments that gave us insights into the very world we inhabit.
Take away Max Planck and you might set quantum mechanics back by a few years, but in a rather short time someone would have probably figured out the same things he did. This is rather obvious when you look at multiple discoveries throughout history, that is to say, people discovering the same thing, at about the same time, without being aware of each other’s works.
Had Max Planck been born among a tribe of hunter-gatherers in the Neolithic period, he might have become a particularly clever hunter or shaman… but it's essentially impossible for him to have had the tooling which allowed him to make the same discoveries about nature as the 20th-century Planck.
However, to some extent, the process of tool building is inhibited by time and space. If we decided to build a more efficient battery, or a more accurate electron microscope, or a more accurate radio telescope, we wouldn't be limited by our intelligence, but by the thousands of hours required for our factories to build the better tools required to build better factories to build even better tools required to build even better factories in order to build even more amazing tools… etc.
Thousands of amazing discoveries, machines and theories lie within our grasp and one of the biggest bottlenecks is not intelligence but resources.
No matter how smart your interstellar spaceship design is, you will still need rare metals, radioactive materials, hard-to-craft carbon fiber and the machinery to put it all together. Which is rather difficult, since we've collectively decided those resources are much better spent on portable masturbation aids, funny-looking things to stick on our bodies and giant bombs just in case we need to murder all of humanity.
So, would a hypothetical superintelligent AGI help this process of tool building? Most certainly. But it will probably run into the same bottlenecks that people who want to create amazing things face today: other people not wanting to give up their toys, and physical reality requiring the passage of time to shape.

Eventually it will end up with bottlenecks, yes. Speed of light, speed of factory assembly robot, speed of DNA synthesis, etc. But you've gotta admit, there's a lot of room for improvement before we get to that point.

Don't get me wrong, I'm not necessarily blaming people for choosing to focus on 3D printing complexly shaped water-resistant phallus-like structures instead of using those resources to research senolytic nanobots. As I've mentioned before, defining intelligence is hard, and so is defining progress. What might seem "awesome" to the AGI reading this article could be rather boring or stupid to a few other billions of AGIs.

I'm not sure I get your point at the end--are you saying that different AGIs might value different things? Yes, but how is that a reason not to be concerned?


In conclusion
Artificial general intelligence is something we have plenty of here on Earth, most of it goes to waste, so I'm not sure designing AGI based on a human model would help us much.

Disagree.

Superhuman artificial general intelligence is not something that we can define, since nobody has come up with a comprehensive definition of intelligence that is self-sufficient, rather than requiring real world trial and error.

There is a literature on this, which I agree hasn't been fully satisfactory, but I think it's done a good enough job. Besides, we don't need a definition to be concerned.

Superhuman artificial general intelligence is not something we can test, since we can't gather statistically valid training datasets for complex problems and we can't afford to test via trial and error in the real world.

All the more reason to be concerned! People will create and deploy these things before they have tested them properly!

Even if superhuman artificial intelligence was somehow created, there's no way of knowing that they'd be of much use to us. It may be that intelligence is not the biggest bottleneck to our current problems, but rather time and resources.

What? Of course they would be of much use to us--if they wanted to be, at least. Even if intelligence isn't our biggest bottleneck, it is a limitation and so overcoming it would help greatly. There's lots of room for improvement.

Argument: Intelligence lets us design new tech faster. New tech is super useful. Therefore, superhuman AGI could be super useful.

If you think your specific business case might require an AGI, feel free to hire one of the 3 billion people living in Southeast Asia who will gladly help you with tasks for a few dollars an hour. It's likely to be much cheaper; Amazon is already doing it with Alexa, since it turns out to be somewhat cheaper than doing it via machine learning.

Again, the thought is that AGI could be better than humans in speed, quality, organization, or cheapness, not just equal to humans.

Is that to say I am "against" the machine learning revolution ?
No, fuck no, I'd be an idiot and a hypocrite if I thought machine learning wouldn't lead to tremendous human progress in the following decades.

It seems like some of your arguments are too general; they prove too much; for example, they could just as well be used to argue for the conclusion that ML won't lead to tremendous human progress.

I specifically wanted to work in the area of generic machine learning. I think it's the place to be in the next 10 or 20 years in terms of exciting developments.
But we have to stop romanticizing or fear-mongering about the pointless concept of a human-like intelligence being produced by software. Instead, we should think of machine learning (or "AI", if you must really call it that) as a tool in our great arsenal of thinking tools.

Sticks and stones may hurt my bones, but words will never hurt me...

Machine learning is awesome if you apply it to the set of problems it's good at solving, and if we try to extend that set of problems by getting better at collecting data and building algorithms, we might be able to accomplish some amazing feats. But the idea that an algorithm that can mimic a human would be of particular use to us is about as silly as the idea that a hammer which also serves as a teaspoon would revolutionize the world of construction. Tools are designed to be good at their job, not much else.

Again, the thought is that AGI could be better than humans in speed, quality, organization, or cheapness, not just equal to humans. And it's pretty obvious why AGI of this sort would be useful.


...


If you'd like to talk more about this with me, I'd be happy to continue the conversation! Send me a PM, maybe we can skype or something.

Replies from: George3d6
comment by George3d6 · 2019-10-24T08:32:35.334Z · LW(p) · GW(p)

a)

The first part of the article was not meant to be taken on its own; rather, it was meant to be a premise on which to say: "There is no guarantee AGI will happen or be useful, and based on current evidence of how the things we want to model AGI after scale, I'm inclined to think it won't be useful".

Compare it, if you wish, to someone giving the argument "planes have flown through the clouds and satellites have taken photos of space and studies have been done on the effects of prayers, and up until now no gods have been found nor their effects seen". It's a stupid argument, it doesn't in itself prove that there is no God, but it's necessary and I think it can help when everyone thinks there is a God.

Since everyone, except for (funnily enough) most ML researchers I've ever met (as in, people that are actually building the AI, guys like Francois Chollet, not philosophers/category-theorists/old professors that haven't published a relevant paper in 30 years :p), seems to come from the (seemingly irrational) premise that AGI is a given considering how technology advances.

I don't particularly think that this argument is any more back-up-able than the "there is no God" or "money has no inherent value" arguments are. Since, funnily enough, arguing against purely imaginary things is basically impossible.

It's impossible to prove the non-existence of AGI or the equivalency of humans with AGI, or to argue about power consumption and I/O overhead plus the algorithmic complexity required for synchronization… on computers that don't exist. At most I can say "computers today are much less efficient in terms of power consumption than the human brain at tasks like NLP and image detection" and I can back it up with empirical evidence like how much power a GPU consumes vs how many calories your average human requires, comparing those as watts and as cost of production. At most I can come up with examples like Alexa and Google using m-turk-like systems for hard NLP tasks rather than ML algorithms (despite both of them having invested literal billions in this area and academia having worked on it for 60+ years).

But at the end of the day I know that these arguments don't disprove AGI; they just prove that I don't understand technology enough to realize that AGI is inevitable.

Still, I think these kinds of arguments are useful to hopefully make fence-sitters realize how silly the AGI position is; the latter two chapters are my arguments for *why* even the AGI god that converts imagine will not be as all-powerful as we might think.

b)

All the more reason to be concerned! People will create and deploy these things before they have tested them properly!

I think there are a lot of places where I'm unclear in the article because I oscillate between what kind of language to use. E.g.:


Here by "test" I mean something like "Given an intrinsic-motivation agent augmented by bleeding-edge semi-supervised models to help it at complex & common tasks, it would still need to be given a small amount of physical resources for it to train on how to use them in a non-simulated environment and for us to evaluate its performance… which would take a lot of time, since you can't speed up real life, and would be expensive" rather than "Allow Skynet to take control of the nuclear arsenal for experimental purposes".

I think that it's mainly my fault for not being more explicit with stuff like this, but the other side of that is articles turning into boring 10,000-page essays with a lot of ****. I will try to update that particular statement, though.

c)

It seems like some of your arguments are too general; they prove too much; for example, they could just as well be used to argue for the conclusion that ML won't lead to tremendous human progress.

I actually think that, from your perspective, I could be seen as arguing this.

I'm pretty sure that from your perspective I would actually hold this view; the clarification I made was to specify that I don't think this view is absolute (i.e. I think that AI will lead to x human progress and most proponents of AGI seem to think it will lead to x * 100,000,000, but in spite of that difference I think even x will be significant).

At least if you count human progress in something simple to measure, like how much energy we capture and how little energy we have to spend on building nice human housing and delicious human food (e.g. a civilization with a Dyson sphere would be millions of times as advanced as one without one, under this definition).

comment by gilch · 2019-10-24T05:36:54.977Z · LW(p) · GW(p)

I think it's in the spirit of the conversation here.

AGI is certainly a topic of great interest here, but the "spirit" of this post is pretty far off of LessWrong standards:

One of the most misunderstood ideas polluting the minds of popular "intellectuals", many of them seemingly well acquainted with statistics and machine learning, is the potential or threat that developing an artificial general intelligence (AGI) would present to our civilization.

This myth stems from two misunderstandings of reality.

Ouch? "polluting" minds? "intellectuals" in scare quotes? "Myth" from "misunderstandings of reality"? We call ourselves "rationalists" here. Understanding reality (and winning) is exactly what we value. It looks like you're deliberately trying to insult us. I think this explains the downvotes.

But I have a fairly thick skin and care more about the truth than about whether you're trying to be mean or not, so I would have pretty much overlooked this if you had a good argument. (Rationalists love good arguments, especially when they contradict our beliefs.) Sadly, this is not the case. We've trodden this ground before.

We already have AGIs

You missed the "artificial" part, and that matters. Yes, a human is a General Intelligence. You spend a section elaborating on that, but it's not news to anyone. Human brains are very limited by biology in ways that simply don't apply to machines. Fitting through the birth canal, for example.

How does one define an intelligent human or intelligence in general ?

I would point to AIXI as the most "intelligent" agent possible in the limit. This has a very formalized definition of "intelligent agent". Intelligence doesn't have to be human-like at all. Think of it as an "optimization process" or outcome pump [LW · GW] instead, if "intelligence" has too much anthropomorphic baggage for you.
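
Roughly, for a formal flavor (I'm paraphrasing Legg and Hutter from memory here, so treat the exact notation as a sketch rather than gospel): the "universal intelligence" measure behind AIXI scores an agent π by the reward it can expect to collect across all computable environments, with simpler environments weighted more heavily:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward π earns in μ. AIXI is the agent that maximizes this quantity, which is why I treat it as the definitional upper end of "intelligent agent".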

I had heard of Terence Tao, but I do agree that IQ is an imperfect measure of g. And the g factor is not at all the same thing as rationality. But it's not relevant given the AIXI formalism.

Human civilization doesn't advance by breeding smarter and smarter humans; it advances by building better and better tools.

Not exactly true. There are native tribesmen that could walk naked into the jungle and come out in a few days with clothes, weapons, and a basket full of fruit and game. You couldn't do that. Their technology had to "advance" to that point. I saw a textbook on how to build an entire machine shop starting with a lathe. I saw a video of someone building an electric battery using only stone-age tools and materials foraged from nature. "Technology" means technique more than it means "tools". Notice the Greek root techne or "art" in both words. Human civilization advanced by accumulating culture, and that component of our smarts is evolving faster than our DNA. I would identify the invention of the printing press as the point where civilization really took off. It's not that resources are irrelevant, obviously they're required, but they're not as important as you seem to think.

What really hit this point home for me was That Alien Message [LW · GW]. You'll have to respond to that, at least, before I can take you seriously.

There are good reasons (like biology) to think that molecular nanotechnology is physically feasible. Once that's started, it probably wouldn't require more than CHON (carbon, hydrogen, oxygen, nitrogen) from the air and energy to produce the kind of radical abundance that makes all the other resources either obsolete or really cheap to acquire.

Replies from: TAG, George3d6
comment by TAG · 2019-10-24T12:08:01.924Z · LW(p) · GW(p)

I would point to AIXI as the most “intelligent” agent possible in the limit. This has a very formalized definition of “intelligent agent”.

AIXI isn't possible in the limit, because it's uncomputable. It also has fundamental limitations.

Replies from: gilch
comment by gilch · 2019-10-25T00:09:04.366Z · LW(p) · GW(p)

From the abstract of the AIXI paper:

We give strong arguments that the resulting AIξ model is the most intelligent unbiased agent possible.

Emphasis mine. What you quoted above was me paraphrasing that. Nobody is claiming we can actually implement Solomonoff induction, that's obviously uncomputable. Think of "possible" here as meaning "possible to formalize", not "possible to implement on real physics".

I bring up AIXI only as a formal definition of what is meant by "intelligent agent", specifically in order to avoid anthropomorphic baggage and to address the original poster's concerns about measuring machine intelligence by human standards.

An agent (humans included) is "intelligent" only to the degree that it approximates AIXI.

Replies from: TAG
comment by TAG · 2019-10-26T11:50:23.638Z · LW(p) · GW(p)

Think of “possible” here as meaning “possible to formalize”

Why not say “possible to formalize”, if that is what is meant?

It is tempting to associate intelligence with optimisation, but there is a problem. Optimising for one thing is pretty automatically optimising against other things, but AI theorists need a concept of general intelligence -- optimising across the board.

AIXI, as a theoretical concept, is a general optimiser, so if AIXI is possible, general optimisation is possible. But AIXI isn't possible in the relevant sense. You can't build it out of atoms.

If AIXI is possible, then "artificial general optimisation" would make sense. Since it is not possible, using optimisation to clarify the meaning of "intelligence" leaves AGI=AGO as a contradictory concept, like a square circle.

An agent (humans included) is “intelligent” only to the degree that it approximates AIXI.

Which brings in the other problem with AIXI. Humans can model themselves, which is doing something AIXI cannot.

Replies from: gilch, gilch
comment by gilch · 2019-10-26T18:01:02.766Z · LW(p) · GW(p)

Optimising for one thing is pretty automatically optimising against other things,

Yes.

but AI theorists need a concept of general intelligence -- optimising across the board.

No. There is no optimizing across the board, as you just stated. Optimizing by one standard is optimizing against the inverse of that standard.

But AIXI can optimize by any standard we please, by setting its reward function to reflect that standard. That's what we mean by an AI being "general" instead of "domain specific" (or "applied", "narrow", or "weak" AI). It can learn to optimize any goal. I would be willing to call an AI "general" if it's at least as general as humans are.

comment by gilch · 2019-10-26T16:32:52.245Z · LW(p) · GW(p)

Think of “possible” here as meaning “possible to formalize”

Why not say “possible to formalize”, if that is what is meant?

I DID say that, and you just quoted it! I didn't say it in advance, and could not have been reasonably expected to, because it was obvious: AIXI is based on Solomonoff induction, which is not computable. No-one is claiming that it is, so stop putting words in my mouth. I cannot hope to predict all possible misinterpretations of what I say in advance (and even if I could, the resulting advance corrections would make the text too long for anyone to bother reading), but can only correct them as they are revealed.

If AIXI is possible, then "artificial general optimisation" would make sense. Since it is not possible, using optimisation to clarify the meaning of "intelligence" leaves AGI=AGO as a contradictory concept, like a square circle.

No, that does not follow. Optimization does not have to be perfect to count as optimization. We're not claiming that AGI will be as smart as AIXI, just smart enough to be dangerous to humans.

Which brings in the other problem with AIXI. Humans can model themselves, which is doing something AIXI cannot.

AIXI can model any computable agent. It can't model itself except approximately (because it's not computable), but again, humans are no better: We cannot model other humans, or even ourselves, perfectly. A related issue: AIXI is dualistic in the sense that its brain does not inhabit the same universe it can act upon. An agent implemented on real physics would have to be able to deal with this problem once it has access to its own brain, or it may simply wirehead itself. But this is an implementation detail, and not an obstacle to using AIXI as the definition of "the most intelligent unbiased agent possible".

comment by George3d6 · 2019-10-24T08:48:42.505Z · LW(p) · GW(p)
I would point to AIXI as the most "intelligent" agent possible in the limit. This has a very formalized definition of "intelligent agent". Intelligence doesn't have to be human-like at all.

This seems to be a silly thought experiment to me:

To quote from the article

There is an agent, and an environment, which is a computable function unknown to the agent

which is equivalent to

There is an agent and a computable function unknown to the agent

If you want to reduce the universe to a computable function where randomized exploration is enough for us to determine the shape... and exploration is free... then yeah, I can't argue against that reductionist model.

In the reality we live in, however, things look rather different:

a) Random exploration is not cheap; indeed, random exploration is quite expensive. Acquiring any given "interesting" insight about our shared perception (aka the world) is probably costly to the tune of a few hundred million dollars, or at least to the tune of a lot of wasted time and electricity if we lower the bar for "interesting".

b) "Simpler computations are more likely a priori to describe the environment than more complex ones"… there are people working on encoding that have wrestled with a version of this problem, and it ends up not being a very simple heuristic to implement, even when your domain is as limited as "the sum of all 50s jazz songs". I think a heuristic that even roughly approximates simplifying an equation that describes something as complex as the world is impossible to reach… and if the simplification is done on a small part of the equation, there's no guarantee the equation you end up with won't be more complex than if you were to not simplify anything.

c) Whether the universe contains enough resources to define itself, or even a small portion of itself within some given margin of error, is unknown to us. It might be that all the resources in the universe are not enough to simulate the behavior of the fundamental particles in an apple (or of all humans on Earth). There are actual observations about seeming "randomness" in particle physics that would back up this view. I make this argument since I assume most proponents of AIXI will assume it might try to gain "real" insights via some sort of simulation-building.

But maybe I'm misunderstanding AIXI… in that case, please let me know.

Replies from: gilch
comment by gilch · 2019-10-25T00:40:22.232Z · LW(p) · GW(p)

This seems to be a silly thought experiment to me

Math is naught but thought experiments, and yet unreasonably effective in science.

Also, AIXI has been directly approximated using Monte Carlo methods, and the resulting agent systems do show "intelligent" behavior, so the formalism basically works. I am not suggesting this is a good path to AGI, that's not the point.

My point is that AIXI is a non-anthropomorphic definition of "intelligent agent", in direct response to your "Defining intelligence" section where you specifically say

The problem is that we aren't intelligent enough to define intelligence. If the definition of intelligence does exist, there is no clear path to finding out what it is.

Even worse, I would say it's highly unlikely that such a definition of intelligence exists. What I might consider intelligent is not what you would consider intelligent…

And I'm pointing out that we have a definition already, and that definition is AIXI.

If you want to reduce the universe to a computable function

That's called "physics"! We're using computer programs to model the universe. The map is not the territory.

Simpler computations are more likely a priori to describe the environment than more complex ones

Also known as Occam's Razor. Solomonoff induction, which AIXI is based on, is a formalization of the principle. Since the hypothesis space is literally all possible computer programs, the set is infinite. We can't very well assign them all equal probability of being the correct model, or our probabilities would add up to infinity instead of the expected 100%.
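
Concretely (sketching the standard construction from memory, so read the details as a summary rather than a quotation): on a universal prefix machine U, each program p gets prior weight 2^{-ℓ(p)}, where ℓ(p) is its length in bits, and the probability assigned to a string of observations x is the total weight of the programs whose output starts with x:

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, \qquad \sum_{p} 2^{-\ell(p)} \;\le\; 1.$$

The second part is Kraft's inequality for prefix-free codes; it's what lets these weights behave like (sub-)probabilities at all. Shorter programs automatically get exponentially more of the mass, which is the formalized Occam's Razor I'm referring to.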

Whether the universe contains enough resources to define itself, or even a small portion of itself within some given margin of error, is unknown to us.

We humans predict small portions of the universe all the time. It's called "planning". More formally, it's called "physics". To the extent that parts of the universe are truly random, it's irrelevant to the question of artificial intelligence, which need only concern itself with predicting what it can, and accounting for uncertainty in the rest. Humans are no better. But even quantum physics is Turing computable. We have no good reason to think that the laws of physics are not computable, but there are unknown initial conditions.

Replies from: George3d6
comment by George3d6 · 2019-10-26T13:32:38.453Z · LW(p) · GW(p)
Math is naught but thought experiments, and yet unreasonably effective in science.

A reduced subset of mathematics, yes, but that reduced subset is all that survives. Numerology, for example, has been and still is useless to science; however, that hasn't stopped hundreds of thousands of people from becoming numerologists.

Furthermore, math is often used as a way to formalize scientific findings, but the fact that a mathematical formalism for a thing exists doesn't mean that thing exists.

Also, AIXI has been directly approximated using Monte Carlo methods, and the resulting agent systems do show "intelligent" behavior, so the formalism basically works. I am not suggesting this is a good path to AGI, that's not the point.

This is the point where I start to think that, although we both seem to speak English, we clearly understand entirely different things by some words and turns of phrase.

Why bring up AIXI if you yourself admit you are not going to defend the approach as a good path to AGI ?

Why is a system with "some" intelligent behavior proof that the paradigm is useful at all ?

I can use pheromones to train an ant to execute certain "intelligent" tasks, or food to train a parrot or a dog. Yet the fact that African parrots can indeed learn simple NLP tasks lends no credence to the idea that an African-parrot-based bleeding-edge NLP system is something worth pursuing.

And I'm pointing out that we have a definition already, and that definition is AIXI.

If we have a definition that is non-functional, i.e. one that can't be used to actually get an intelligent agent, I would claim we don't have a definition.

We have some people that imagine they have a definition but have no proof said definition works.

Would you consider string theory to be a definition of how the universe works in spite of essentially no experimental evidence backing it up ? If so, that might be the crux of our argument here (though one that I wouldn't claim I'm able to resolve)

That's called "physics"! We're using computer programs to model the universe. The map is not the territory.

Yes, and physics is *very bad* at modeling complex systems; that's why two centuries were spent pondering how to model a system of just three interacting bodies.

Physics is amazing at launching rockets into space, causing chain reactions and creating car engines.

But if you were to model even a weak representation of a simple gene (i.e. one that doesn't stand up to reality, one that couldn't be used to model an entire bacterium's DNA plus the afferent structural elements), it would take you a few days on a supercomputer to get a few nanoseconds of that simulation: https://onlinelibrary.wiley.com/doi/abs/10.1002/jcc.25840

That is not to take anything away from how amazing it is that we can model the above at all, but if you assume that an efficient yet computationally cheap model of the universe exists, or that one *can* exist, you are ignoring all the evidence so far, which points towards the fact that:

a) It doesn't exist

b) There's no intuition or theory suggesting that one could be built, and there's some evidence to the contrary (again, see inherent randomness, or co-dependences between particles that can propagate into large differences in macroscopic systems if even a single particle is modeled incorrectly).
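To put rough numbers on the molecular-dynamics cost point above, a back-of-the-envelope sketch (all figures are illustrative, not taken from the linked paper):

```python
# Why "a few nanoseconds of one gene" is already a supercomputer job.
timestep_fs = 2.0                       # a typical all-atom MD timestep (femtoseconds)
target_ns = 5.0                         # simulated time we want
steps = target_ns * 1e6 / timestep_fs   # 1 ns = 1e6 fs  ->  2.5e6 integration steps
atoms = 1e7                             # a large solvated biomolecular system
flops_per_atom_step = 1e3               # rough force-evaluation cost per atom per step
total_flops = steps * atoms * flops_per_atom_step
print(f"steps: {steps:.1e}, total FLOPs: {total_flops:.1e}")   # ~2.5e6 steps, ~2.5e16 FLOPs
```

And that is for a vanishingly small slice of reality, over a vanishingly small window of time.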

Also known as Occam's Razor. Solomonoff induction, which AIXI is based on, is a formalization of the principle. Since the hypothesis space is literally all possible computer programs, the set is infinite. We can't very well assign them all equal probability of being the correct model, or our probabilities would add up to infinity instead of the expected 100%.

No idea what you're on about here

We humans predict small portions of the universe all the time. It's called "planning". More formally, it's called "physics". To the extent that parts of the universe are truly random, it's irrelevant to the question of artificial intelligence, which need only concern itself with predicting what it can, and accounting for uncertainty in the rest. Humans are no better. But even quantum physics is Turing computable. We have no good reason to think that the laws of physics are not computable, but there are unknown initial conditions.

Humans are better, since humans have sensory experience and can interact with the environment.

A hypothetical system that *learns* the universe can't interact with the environment; that's the fundamental difference, and I spent a whole section trying to explain it.

If a human can't model "X", that's fine, because a human can design an experiment to see how "X" happens. That's how we usually do science; indeed, our models are usually just guidelines for how to run experiments.

None but a madman would use theory alone to build or understand a complex system. On the other hand, we have managed to "understand" complex systems many times with no theoretical backing whatsoever (e.g. being able to cure bacterial infections with antibiotics before we could even see a single bacterium under a microscope, let alone have a coherent model of what bacteria are and how one might model them, which is still an issue today).

If our hypothetical AGI is bound by the same modeling limitations we are, then it has to run experiments in the real world, and once again we run up against the cost problem. That is, the experiments needed to better understand the universe might not be hard to think of; they might just take too long, be physically impossible with our current technology, or be too expensive... and then even an SF-level AGI becomes a slightly better scientist, rather than a force for unlocking the mysteries of the universe.

Replies from: gilch
comment by gilch · 2019-10-26T17:35:01.964Z · LW(p) · GW(p)

This is the point where I start to think that, although we both seem to speak English, we clearly understand entirely different things by some words and turns of phrase.

Why bring up AIXI if you yourself admit you are not going to defend the approach as a good path to AGI ?

Implementing true AIXI is not physically possible. It's uncomputable. I did not say that "AIXI" is not a good path to AGI. I said that a Monte Carlo approximation of AIXI is not a good path.

And it's not that it can't work; clearly it does (and indeed any intelligent agent, humans included, is going to be some kind of approximation of AIXI), but Monte Carlo AIXI has certain problems that make the approach not good:

  1. It's not efficient; other approximations besides Monte Carlo AIXI are probably easier to get good performance out of, the human brain being one such example.
  2. Direct attempts at approximation will run into the wireheading problem. Any useful agent with physical influence over its own brain can run into this issue unless it is specifically dealt with. (AIXI's brain is not even in the same universe it acts on.)
  3. The Sorcerer's Apprentice problem: any useful reward function we can come up with seems to result in an agent that is very dangerous to humans. The AIXI paper takes this function as a given without explaining how to do it safely. This is Bostrom's orthogonality thesis: a general optimizer can be set to optimize pretty much anything. It's not going to have anything like a conscience unless we program that in, and we don't know how to do that yet. If we figure out AGI before we figure out Friendly AGI, we're dead. Or worse.
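For concreteness, here is a toy sketch of the kind of "Monte Carlo approximation of AIXI" under discussion. This is a deliberately crude illustration of my own: a finite, hand-picked hypothesis class stands in for "all programs", weighted by a 2^-length prior, with Monte Carlo rollouts used to score actions; real MC-AIXI-CTW is far more sophisticated.

```python
# A crude "AIXI-flavored" agent: candidate environment models get an Occam-style
# prior weight 2**(-length), are re-weighted by how well they predict observed
# rewards, and actions are scored by Monte Carlo rollouts under the mixture.
import random

random.seed(0)

# Candidate "environment programs": map an action (0 or 1) to a reward probability.
# "length" is a stand-in for program length / description complexity.
HYPOTHESES = [
    {"name": "action 0 pays",   "length": 3, "p": lambda a: 0.8 if a == 0 else 0.2},
    {"name": "action 1 pays",   "length": 3, "p": lambda a: 0.8 if a == 1 else 0.2},
    {"name": "nothing pays",    "length": 2, "p": lambda a: 0.1},
    {"name": "everything pays", "length": 2, "p": lambda a: 0.9},
]
weights = [2.0 ** (-h["length"]) for h in HYPOTHESES]   # Occam prior: shorter = more mass
TRUE_ENV = HYPOTHESES[1]["p"]                           # the real world, hidden from the agent

def expected_reward(action, rollouts=200):
    """Monte Carlo estimate of the reward for an action under the current mixture."""
    hits = 0
    for _ in range(rollouts):
        model = random.choices(HYPOTHESES, weights=weights)[0]   # sample an environment model
        if random.random() < model["p"](action):
            hits += 1
    return hits / rollouts

for _ in range(200):
    action = max((0, 1), key=expected_reward)                    # greedy w.r.t. the mixture
    reward = random.random() < TRUE_ENV(action)                  # interact with the true world
    for i, h in enumerate(HYPOTHESES):                           # Bayesian re-weighting
        p = h["p"](action)
        weights[i] *= p if reward else (1.0 - p)

best = max(zip(weights, HYPOTHESES), key=lambda pair: pair[0])[1]
print("most plausible model after 200 steps:", best["name"])     # typically "action 1 pays"
```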

From the Arbital article:

AIXI is the perfect rolling sphere of advanced agent theory - it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere.

Replies from: George3d6
comment by George3d6 · 2019-10-26T19:19:52.674Z · LW(p) · GW(p)
Implementing true AIXI is not physically possible. It's uncomputable. I did not say that "AIXI" is not a good path to AGI. I said that a Monte Carlo approximation of AIXI is not a good path.

So, you are saying "it's not a good path"

*but* then you claim:

And it's not that it can't work; clearly it does (and indeed any intelligent agent, humans included, is going to be some kind of approximation of AIXI), but Monte Carlo AIXI has certain problems that make the approach not good:

My argument here is that it doesn't; it's an empty formalism with no practical application.

You claim that this approach can work, but again, I'm saying you don't understand the practical problems of the universe we live in: there are literal physical limitations on computational resources. This approach is likely too stupid to be able to simulate anything relevant even if all the atoms in the universe were arranged into an optimal computer to run it. Or, at least, you have no leg to stand on claiming otherwise, considering the current performance of an optimized yet inferior implementation.

So essentially:

You are claiming that an impractical model is proof that a practical model could exist, because the impractical model would work in a fictional reality, and thus a practical one must be possible in ours.

It makes no sense to me.

Replies from: gilch
comment by gilch · 2019-10-26T20:17:44.976Z · LW(p) · GW(p)

My argument here is that it doesn't; it's an empty formalism with no practical application. ... There are literal physical limitations on computational resources.

Where do I even start with this? That argument proves too much. You could apply the same argument to engineering in general. "Well, it would take infinite computing power to sum up an integral, so I guess we can't ever use numerical approximations." Please read through An Intuitive Explanation of Solomonoff Induction [LW · GW]. In particular, I will highlight:

But we can find shortcuts. Suppose you know that the exact recipe for baking a cake asks you to count out one molecule of H2O at a time until you have exactly 0.5 cups of water. If you did that, you might not finish the cake before the heat death of the universe. But you could approximate that part of the recipe by measuring out something very close to 0.5 cups of water, and you'd probably still end up with a pretty good cake.

Similarly, once we know the exact recipe for finding truth, we can try to approximate it in a way that allows us to finish all the steps sometime before the sun burns out.
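A minimal illustration of the same point (my own toy example, not from the quoted article): computing the integral of sin(x) from 0 to pi "exactly" requires calculus, but a crude trapezoid rule already lands within a few millionths of the true value.

```python
# The "cake recipe" point in miniature: replace an exact procedure with a cheap
# approximation that is more than good enough in practice.
import math

def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

approx = trapezoid(math.sin, 0.0, math.pi)
exact = 2.0   # the analytic value of the integral of sin(x) from 0 to pi
print(f"approx = {approx:.6f}, error = {abs(approx - exact):.1e}")  # error on the order of 1e-6
```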

Replies from: George3d6
comment by George3d6 · 2019-10-27T08:16:01.440Z · LW(p) · GW(p)

Now you are literally using Zeno to steal cattle.

The problem with your very wide perspective seems to be that you are basically taking Pascal's wager.

But for now I honestly don't have time to continue this argument, especially if you're gonna take the "read this book, peasant" approach to filling understanding/interpretation gaps.

Though this discussion has given me an idea for a "Why genetically engineered parrots will bring about the singularity" piece as a counter-argument to this kind of logic.

Other than that, congrats, you "win". I'm afraid, however, that I understand your position, and why you hold it, no better than when we began. Nor do I understand what would change your position or what its principal pillars are... :/

Replies from: gilch
comment by gilch · 2019-10-27T19:33:37.540Z · LW(p) · GW(p)

You're fighting a strawman, George. You clearly do not understand our real arguments. Attempts to point this out have only been met with your hostility. I do not have the patience to tutor one so unwilling to study.

If you have any desire to cross the inferential gap [LW · GW], I will refer you to the LessWrong FAQ:

If your post involves topics that were already covered in the sequences you should build on them, not repeat what has already been said. If your post makes mistakes that were warned against in the sequences, you'll likely be downvoted and directed to the sequence in question.

That is exactly what is happening here. The symptoms of this dialogue are diagnostic of an inferential gap. Your case is not the first. Read the Sequences, George. Especially the parts we've linked you to.

On the other hand, we're well aware that it can take a long time to read through several years worth of blog posts, so we've labeled the most important as "core sequences". Looking through the core sequences should be enough preparation for most of the discussions that take place here. We do recommend that you eventually read them all, but you can take your time getting through them as you participate. Before discussing a specific topic, consider looking to see if there is any obvious sequence on that topic.

Replies from: George3d6
comment by George3d6 · 2019-10-27T21:50:56.739Z · LW(p) · GW(p)

Again, I think you are veering into religious thinking here; just because something has Eliezer Yudkowsky's name on it doesn't mean that it's true.

Personally I know the essay and I happen to fundamentally disagree with it. I'm a pragmatic Bayesian at the best of times and a radical empiricist at my worst, so the kind of view Eliezer espouses holds very little sway over me.

But despite the condescending voice you give this reply, if I am to make a very coarse assumption, I can probably summarize our difference here as either me putting too much weight on error accumulation in my model of the world, or you not taking into account how error accumulation works (not saying one perspective or the other is correct; again, this is I think where we differ, and I assume our difference is quite fundamental in nature).

Given that your arguments seem to be based mainly on simple formal models working in what I see as an "ideal" universe, from which you then draw your chain of inferences leading to powerful AGI, I assume you might have a background in mathematics and/or philosophy.

I do think that my article is actually rather bad at addressing AGI from this angle.

I'm honestly unsure if the issue could even be addressed from this perspective, but I do think it might be worth a broader piece addressing why this perspective is flawed (i.e. an argument for why a perspective/model/world-view built on long inferential distances is inherently flawed).

So, I honestly think this conversation might not have been pointless after all, at least not from my side, because it gives me an idea for an essay and a reason to write it.

Granted, I assume you have still gained nothing in terms of understanding my perspective, because quite frankly I did a bad job of presenting it in a way you would understand; I was not addressing the correct problem. For that I am sorry.

Then again, I might be making too many assumptions about your perspective and background here, stacking imperfect inference upon imperfect inference and creating a caricature that does not match reality in any meaningful way.

comment by Matthew Barnett (matthew-barnett) · 2019-10-23T22:30:39.126Z · LW(p) · GW(p)

One reason why artificial intelligence might be more useful than a human for some service is that artificial intelligence is software, and therefore you can copy-paste it for every service that we might want in an industry.

Recruiting and training humans takes time, whereas if you already have an ML model that performs well on a given task, you only need to acquire the relevant hardware to run the model. If hardware is cheap enough, I can see how using artificial intelligence could be much cheaper than spending money on {training + recruiting + wages} for a human. Automation in jobs such as audio transcription exemplifies this trend -- although I think the curve for automation is smooth, as the software services require continuously less supervision over time as they improve.
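A rough break-even sketch of that comparison, where every figure is a hypothetical placeholder purely to show the shape of the trade-off:

```python
# Compare a one-off {recruiting + training} plus ongoing wages against a one-off
# integration cost plus ongoing compute and supervision. All numbers are made up.
def human_cost(months, recruiting=10_000, training=5_000, monthly_wage=4_000):
    return recruiting + training + monthly_wage * months

def model_cost(months, integration=20_000, monthly_compute=500, monthly_supervision=1_500):
    return integration + (monthly_compute + monthly_supervision) * months

for months in (1, 6, 12, 24):
    print(f"{months:>2} months: human {human_cost(months):>8,} vs model {model_cost(months):>8,}")
# With these made-up numbers the model wins after a few months; raise the ongoing
# supervision/maintenance figure and the picture flips.
```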

Replies from: George3d6
comment by George3d6 · 2019-10-24T09:07:41.870Z · LW(p) · GW(p)
Recruiting and training humans takes time, whereas if you already have an ML model that performs well on a given task, you only need to acquire the relevant hardware to run the model. If hardware is cheap enough, I can see how using artificial intelligence could be much cheaper than spending money on {training + recruiting + wages} for a human. Automation in jobs such as audio transcription exemplifies this trend -- although I think the curve for automation is smooth, as the software services require continuously less supervision over time as they improve.

In theory I agree with this, in practice:

It seems that the scope of ML has remained stable over the last 40 years or so (various NLP tasks, image classification and element outlining/labeling, fitting numerical models to predict a category or a number... plus generative models for images, which seem to have gained interest only recently).

In spite of the reduced scope of tasks, it seems that the number of people working on maintaining ML infrastructure and on ML research keeps increasing.

A specialized company seems to pop up in every field from cabbage maturation detection to dog breed validation, with its dozens of employees, out of which at least a few are actually responsible for the task of copy-pasting code from GitHub, and often enough they seem to fail at it or perform unreasonably badly.

Ever had to figure out why a specific cuDNN/PyTorch/TensorFlow setup on a given environment is not working ?

Granted, again, I do agree in theory with your point, and I don't think my argument relies on replication cost. But I can't see a future where replication costs are not a huge issue, the same way I can't see a future where everyone agrees on {X}. It's not theoretically impossible, far from it, but technical over-complexity and competing near-equivalent standards are an issue with social roots that humans can't seem to fix.

comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-25T20:07:26.174Z · LW(p) · GW(p)

The entire argument seems to boil down to

  1. Give the "AI" the resource to manifest itself in the real world and act in much the same way the human would (with all the benefits of having a computer for a brain)...

So we are left with approach (3), giving our hypothetical AGI the physical resources to put its humanity-changing ideas into practice. But… who's going to give away those resources ? Based on what proof ?

We already connect AI systems to the real world, for example Facebook's algorithms that learn based on user behavior. There's nothing implausible about this.

Besides that:

They [humans] are capable of creating new modified version of themselves, updating their own algorithms...

Human ability to self-modify is very limited. Most of our algorithms are opaque to our conscious mind.

...I would say it's highly unlikely that the definition of intelligence exists.

There is already progress in defining intelligence, starting from Legg and Hutter 2007.

What's truly jarring, though, is the passage

Have you ever heard of Marilyn vos Savant or Chris Langan or Terence Tao or William James Sidis or Kim Ung-yong ? These are, as far as IQ tests are concerned, the most intelligent members that our species currently contains.

While I won't question their presumed intelligence, I can safely say their achievements are rather unimpressive.

Terence Tao is a Fields Medalist. You bet I've heard of Tao. If a Fields Medal is "rather unimpressive", I can't imagine what is impressive.

comment by FactorialCode · 2019-10-24T02:09:46.893Z · LW(p) · GW(p)

I'm just gonna leave this here and highlight points 2 and 3.

Replies from: George3d6
comment by George3d6 · 2019-10-24T09:10:33.071Z · LW(p) · GW(p)

I agree with points 2 and 3. But I think it's only really a counter to the equivalence between AI and human-brain-bearing systems. I think the article in itself stands in spite of that. (See my other comments, especially the one in reply to Daniel Kokotajlo [LW · GW].)