Has anyone on LW written about material bottlenecks being the main factor in making any technological progress?

post by George3d6 · 2021-01-27T14:14:18.556Z · LW · GW · No comments

This is a question post.


One thing I have never understood about the internet sphere labelled "rationalist" (LW, OB, SSC, etc.) is its set of seemingly strong beliefs about the future and/or about reality, the main one being around "AI".

Even more so, I have never understood why people believe that thinking about certain problems (e.g. AI Alignment) is a more-efficient-than-random way of solving them, given no evidence of it being so (and no potential evidence, since the problems lie in the future).

I've come to believe that I (and I'm sure many other people) differ from the mainstream (around these parts, that is) in a belief I can best outline as:

"Reason" may not be a determining factor in achiving agency over the material world, but rather, the limiting factor might be resources (inlcuding e.g. the resources needed to faciliatate physical labour, or to faciliate the power supply of a super-computer). What is interpreted as "reason causing an expoential jump in technology", could and should be interpreted as random luck of experimenting in the right direction, but in hindsight we rationalize it by saying the people exploring that direction "were smarter". More importantly, science and theoretical models are linked to technological inovation less than people think in the first place (see most of post 19th century physics/chemistry, including things like general relativity, not being required for most technological applications, including those credited to physics science)

I've considered writing an article aimed solely at the LW/SSC crowd trying to defend something like the above proposition with historical evidence, but the few times I tried it was rather tedious. I still want to do so at some point, but I'm curious whether anyone has written this sort of article before; essentially something that boils down to "a defence of a mostly sceptical take on the world which can easily be digested by someone from the rationalist-blogosphere demographic".

I understand this probably sounds insane to the point of trolling to people here, but please keep an open mind, or at least grant me that I'm not trolling. The position outlined above is fairly close to what an empiricist or skeptic would hold; if anything it's a lightweight version, since a skeptic might doubt that we can gain more knowledge/agency over the outside world in the first place, at least in a non-random way.

Answers

answer by Razied · 2021-01-28T21:51:45.750Z · LW(p) · GW(p)

You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode. Literally the only thing standing between you and nanotechnology is a good enough theory of proteins and their functions. Developing a good theory of proteins seems pretty much a pure-Reason problem. 

You can make money by simply choosing a good product on Alibaba, making a website that appeals to people, using good marketing tactics and drop-shipping, no need for any physical interaction. The only thing you need is a good theory of consumer psychology. That seems like an almost-pure-Reason problem. 

It seems completely obvious to me that reason is by far the dominant bottleneck in obtaining control over the material world.

comment by George3d6 · 2021-01-29T00:32:13.302Z · LW(p) · GW(p)

You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode

Have you ever tried this? I have; it comes with loads of asterisks.

Developing a good theory of proteins seems pretty much a pure-Reason problem

Only under the assumption that we already know all there is to know about proteins, a claim I've seen nobody make. Current knowledge is limited and largely in-vitro, and it doesn't generalize to "weird" families of proteins.

"Protein-based nanotechnology" requires:

  • weird proteins with properties not encountered yet
  • complex in-vivo behavior, i.e. an area where we still have basically no clue, since you can't crystallize a protein to tell how it folds; those nice animations you saw on YouTube are, I'm afraid, pure speculation

So no, not really. You can maybe get a tomato to be a bit spicy; I saw a stream where one of the most intelligent and applied biology-focused polymaths I've ever seen (Thought Emporium) tried to figure out if there was a theoretical way to do it and gave up after 30 minutes.

You can get stuff to glow, that too, and it can be really useful, but we've been doing that for 200+ years.

You can make money by simply choosing a good product on Alibaba, making a website that appeals to people, using good marketing tactics and drop-shipping, no need for any physical interaction. The only thing you need is a good theory of consumer psychology. That seems like an almost-pure-Reason problem. 

It seems completely obvious to me that reason is by far the dominant bottleneck in obtaining control over the material world.

I think the thing you fail to understand here is randomness, chance.

You think that "Ok, this requires little physical labour, so 100% of it is thinking" but you fail to grasp even the possibility that there could be things where there is not enough information for reason to be useful, or even worst, that almost everything falls into that category.

If I choose 1,000,000 random products from Alibaba and resell them on Amazon at 3x, I'm bound to hit gold.

But if I only hit gold with 1 in 100 products, I'm still, on the whole, losing 97% of my investment.

You think "but I know a guy that sold X and he chose X based on reason and he made 3x his money back"

And yes, you might, but that doesn't preclude the existence of another 99 guys you don't know of who tried the same thing and lost, because they usually don't make internet videos telling you about it.

Granted, I'm being coy here; realistically the way e.g. reselling works is on a "huge risk of collapse" model (most things make back 1.1x, but you're always 1x deep in the risk of the thing you are buying not coming through, not being in demand, or otherwise not facilitating the further sale). The model above is just easier to understand.

And again, the important thing here is that "will X resell on amazon" can be something that is literally impossible to figure out without buying "X" and trying to sell it on amazon.

And "will 10X resell on amazon" and "will 100X resell on amazon" are, similarly, no the same question, there's some similarity between them, but figuring out how that number before "X" scales might in itself only be determinable by experiments.

 

***

Then again, I wouldn't claim to be an expert in any of those fields, but neither are you, and the thing I don't get is why you are so certain that "reason" is the main bottleneck when, in any given field, the actual experts seem to be clamoring for more experiments and more, better labs... while smart grad students are a dime a dozen.

Or, forget the idea of consensus, who's to say what the consensus is. But why assume you can see the bottleneck at all? Why not think "I have no idea what the bottleneck is"? To be perfectly fair, if you queried me for long enough, that's probably the answer I'd give.

The perspective you espouse paints a world that makes no sense, one where a deep conspiracy has to be at play for the most intelligent people not to have taken over the world.

Replies from: Razied
comment by Razied · 2021-01-29T02:25:19.719Z · LW(p) · GW(p)

The situation where Reason stops being useful is when you already make optimal Bayesian use of sensory information; in that situation, yeah, additional experiments are required to make progress. However, that is a monstrously high bar to pass. We already know that quantum mechanics governs everything about protein behavior in principle; if you gave a million motivated super-Einsteins 1000 years to think, do you seriously believe that they could not produce a theory of weird proteins never encountered before?

I also think we mean slightly different things by "bottlenecked by reason". What I mean is something like: "given a problem and your current resources, there exists an amount N of Reasoning ability that will make you able to solve the problem, for most problems we have today in the developed world". The amount required for specific problems might be very large, and small increases below that might not overwhelm the noise and randomness of the world. So I don't find it surprising that intelligent people have not completely taken over the world.

answer by DTX · 2021-01-27T18:27:16.687Z · LW(p) · GW(p)

I can't point to any single good canonical example, but this definitely comes up from time to time in comment threads. There's the whole issue that computers can't act in the world at all unless they're physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is AI will be so persuasive that they can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we're worried they might do. 

There does seem to be a missing step in there somewhere. I don't think the bottleneck right now to building out a terrorist organization is that the recruiters aren't smart enough, but AI threat tends to just use "intelligence" as a shorthand for good at literally anything.

Strangely enough, actual AI doomsday fiction doesn't seem to do this. Usually, the rogue AI directly controls military hardware to begin with, or in a case like Ex Machina, Eva is able to manipulate people at least in part because she is able to convincingly take the form of an attractive embodied woman. A sufficiently advanced AI could presumably figure out that being an attractive woman helps, but if the technology to create convincing artificial bodies doesn't exist, you can't use it. This tends to get handwaved away by assuming sufficiently advanced AI can invent whatever nonexistent technology they need from scratch. 

comment by ChristianKl · 2021-01-27T21:37:50.106Z · LW(p) · GW(p)

You don't need to be very persuasive to get people to take action in the real world. 

Especially right now a lot of people work from home and take their orders from a computer and trust it to give them good orders.

Replies from: DTX
comment by DTX · 2021-01-28T17:08:36.869Z · LW(p) · GW(p)

Although this is probably true in general, it degrades when trying to get people to do something extremely high-cost like destroy all of humanity. You either need to be very persuasive or trick them about the cost. It's hard to get people to join ISIS knowing they're joining ISIS. It's a lot easier to get them to click on ransomware that can be used to fund ISIS.

Replies from: ChristianKl
comment by ChristianKl · 2021-01-28T21:37:32.895Z · LW(p) · GW(p)

You don't need to tell people "destroy all of humanity" to establish a dictatorship where the AGI is in control of everything and it becomes effectively impossible for individual humans to challenge AGI power.

Replies from: DTX
comment by DTX · 2021-01-28T22:31:50.865Z · LW(p) · GW(p)

Helping someone establish a dictatorship is still a high-cost action, one that I think requires being more persuasive than convincing someone to do their job without decisively proving you're actually their boss.

Replies from: Dustin, ChristianKl
comment by Dustin · 2021-01-29T00:47:15.866Z · LW(p) · GW(p)

I think the idea is that the AI doesn't say "help me establish a dictatorship".  The AI says "I did this one weird trick and made a million dollars, you should try it too!" but surprise, the weird trick is step 1 of 100 to establish The AI World Order.

Replies from: ChristianKl
comment by ChristianKl · 2021-01-29T14:49:13.406Z · LW(p) · GW(p)

Or it says: "Policing is very biased against Black people. There should be an impartial AI judge that's unbiased, so that there aren't biased court judgements against Black People"

Or it says: "There's child porn on the computer of person X" [person X being a person that challenges the power of the AI and the AI puts it there]"

Or it says: "We give pay you a good $1,000,000 salary to vote in the board the way we want to convert the top levels of the hierachy of the company into being AGI directed"

And it does 100,000s of those things in parallel. 

comment by ChristianKl · 2021-01-29T10:55:42.417Z · LW(p) · GW(p)

There's no reason why the AGI can't decisively prove they are the boss. For big corporations being in control of the stock means being the boss who makes the decisions at the top. 

A police bureau that switches to using software that tells them where to patrol to be better at catching crime doesn't think it is establishing a dictatorship either.

The idea that an AGI wants to establish a dictatorship can easily be labeled as an irrational conspiracy theory. 

comment by TheSimplestExplanation · 2021-01-28T11:42:45.563Z · LW(p) · GW(p)

There’s the whole issue that computers can’t act in the world at all unless they’re physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is AI will be so persuasive that they can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we’re worried they might do.

In those cases it probably wouldn't be very hard to get people to act in the world, since there would be at least some people who want the AI to change the world, as evidenced by the fact that they just spent huge amounts of resources to create it in the first place.

AI threat tends to just use “intelligence” as a shorthand for good at literally anything.

That is pretty much the definition of intelligence, at least as far as expectations are concerned. (Not that I'm arguing by definition; that is just what we mean when we talk about it.)

Replies from: DTX
comment by DTX · 2021-01-28T18:53:56.224Z · LW(p) · GW(p)

The distinction in this specific case here is between intelligence and persuasiveness. To the extent that some elements of persuasiveness are inherently embodied, as in people are more likely to trust you if you're also a person, that is at best orthogonal to intelligence.

More generally, "effectiveness" as some general purpose quality of agents that can do things is limited by the ability to acquire and process information, but also by the ability to act on it. You may know that being tall makes you more likely to be elected to office, but if you can't make yourself any taller, you can't use the information to make your campaign more likely to succeed. 

As a more fantastical but maybe more relevant example, people often mention something like turning the moon into computronium. Part of doing that is knowing how to do it. But we already know how to do it. We understand at the level of fusion and fission how to transmute elements into different elements, and we understand, given some elements that act as semiconductors, how to produce general-purpose computational processors. The actual reasons we can't do it, aside from not wanting to disrupt the earth's orbit and potentially end human civilization, are (1) there is inherent propagation delay in moving material from wherever it is created to wherever it needs to be used, and this delay is much greater when the distances involved exceed planet scale, (2) machines that can actually transmute rocks to silicon don't presently exist and there is non-zero manufacturing delay in creating them, and (3) we have no means of harnessing sufficient energy to actually transmute matter at the necessary scale.

Can gaining more information solve these problems? Maybe. There might exist unknown physics that enables easier or faster methods than we presently know of, but there is non-zero propagation delay in the creation of new knowledge of physics as well. You have to conduct experiments. At high-energy, sub-particle scales, these have become extremely expensive and time consuming. AI threat analysis tends to get around this one by proposing that AI can just simulate physics to such perfect fidelity that experimentation is no longer necessary, but this seems question-begging, because you need to already know rules of physics that haven't been discovered yet to be able to do this.

While presumably a collection of brains better than human brains can figure out a way to make this happen faster, maybe even decades rather than centuries faster, "foom" type analyses that claim the ability to recursively rewrite one's own source code better than the original coder means it will happen in days or even hours come across more as mysticism than real risk analysis. 

comment by George3d6 · 2021-01-28T14:09:48.784Z · LW(p) · GW(p)

I don't necessarily think you have to take the "AI" example for the point to make sense though.

I think "reasoning your way to a distant inference", as a human, is probably a far less controversial example that could be used here. In that most people here seem to assume there are ways to make distant inferences (e.g. about the capabilities of computers in the far off future), which historically seems fairly far fetched, it almost never happens when it does it is celebrated, but the success rate seems fairly small and there doesn't seem to be a clear formula for it that works.

answer by Dirichlet-to-Neumann · 2021-01-27T15:28:02.148Z · LW(p) · GW(p)

I've always thought the same thing regarding a couple of claims that are well accepted around here, like galactic-scale space travel and never-ending growth. I'm not sure enough of my knowledge of physics to try to write a big post about it, but I'd be interested if someone did (or I may want to work with someone on it).

 

[EDITED to replace "time" by "space" in "galactic-scale space travel". I guess there is a Freudian explanation for this kind of lapse, which is certainly either funny or true.]

comment by MikkW (mikkel-wilson) · 2021-01-27T18:56:44.428Z · LW(p) · GW(p)

I don't see what you mean when you say that galactic-scale time travel is a well-accepted claim here. I've never heard people talking about it as if it were something that obviously works (since, if I understand what you mean, it doesn't, unless it's just referring to simple relativistic effects, in which case it's trivial).

While something approximating never-ending growth may be a common assumption, I'm not sure what percentage of people here believe in genuinely unlimited growth (growth that never, at any point, stops), as opposed to growth that goes on for a very long time, so long that the world as we know it will be nothing like it currently is before it stops. The first claim is one I'm skeptical of (though I can envision some ways it could end up being true), and is somewhat at odds with our current best understanding of physics, while the second claim is straightforward if you examine current solar energy technology, the complete power output of our sun (that is, in all directions, not just towards Earth), and then consider the power output and abundance of all the stars in the reachable universe.

Replies from: Dirichlet-to-Neumann
comment by Dirichlet-to-Neumann · 2021-01-27T21:56:34.491Z · LW(p) · GW(p)

I don't know why "time" somehow entered my comment, I was thinking about galactic-scale SPACE travel.

The second part of your comment illustrates this corrected point: "consider the power output and abundance of all the stars in the reachable universe". You assume here that the reachable universe is more than just the Solar System. I think this claim is debatable at best in its weakest version (i.e. we will establish colonies in some other stellar systems), and very unlikely in the stronger version that you seem to accept (we will establish a lot of colonies in many different systems that will have significant economic interactions in both directions with other stellar systems).

Concerning the second part of your comment, I tend to think our resource and energy consumption has a good chance of dooming us before we get a chance to "escape" beyond the Solar System. I am also sceptical of anything that sounds like a Dyson sphere...

 

Replies from: None, mikkel-wilson
comment by [deleted] · 2021-01-28T07:21:49.448Z · LW(p) · GW(p)

Concerning the second part of your comment, I tend to think our resource and energy consumption has a good chance of dooming us before we get a chance to "escape" beyond the Solar System. I am also sceptical of anything that sounds like a Dyson sphere...

The "has good chances of dooming us" unfortunately isn't a good sign that you have thought a lot about the problem.  What resource and energy consumption are you thinking of and why specifically do you believe it means 'doom'?

Just taking a top level view:

       a.  Most of the earth's surface and seabed have not yet been exploited for minerals.  (Underwater mining isn't cost effective, sufficiently deep mines are not cost effective, entire continents are too cold, Siberia has vast wealth but is too cold, and so on.)  "Not cost effective" doesn't mean it's impractical or that mining companies wouldn't develop the technology to do it once it's needed; it means that there are easier, competing sources for minerals that have to be exhausted first, however long that takes. 

       b.  Energy is abundant; the squabble right now is that fossil fuels are cheaper if their externalities are ignored.  If fossil fuels had their externalities priced in, we would already be using solar/wind/nuclear in whatever combination is most efficient.

       c.  On the timescales that matter, resources are inexhaustible*.  There are billions of years of sunlight remaining, and every item "consumed" by a human ends up mainly in landfills, where all of the elements remain; it is simply a matter of energy (and better robotics) to recover them.

       d.  We do have a major problem with greenhouse gases.  But this isn't an "extinction of humanity" level problem; it is a "major real estate markdown and possibly mass destruction and death in equatorial regions" problem.  There are colder areas of the planet that would become habitable in the worst warming scenario, and even more extreme measures could be taken to keep first-world residents alive (food grown in algae tanks, etc.).  It's an oncoming tragedy, but I don't see the evidence to assume extinction is on the table.  

 

*with the sole exception of helium

 

I don't see any reason to look further.  Do you have any evidence to disprove a-d, or is this something you just read somewhere and have not examined critically?

comment by MikkW (mikkel-wilson) · 2021-01-28T02:16:04.254Z · LW(p) · GW(p)

Why are you sceptical of "anything that sounds like a Dyson sphere"? It's not particularly unrealistic given modern technology (i.e. rockets and solar panels); the only pain points are a) making use of the energy collected, b) getting the materials to make it, and c) getting the panels in place (which will require an upfront investment of energy). Regarding using the energy produced, it would be inefficient to try to transport it back to Earth (though if costs went down significantly, it could still be justified), but using solar satellites for either computation or a permanent off-Earth colony would be justified; particularly with computation, this could allow us to redirect on-Earth sources of energy to other uses, or reduce overall Earthside consumption of energy. Regarding materials, there's a lot of material on Earth and in other places in the solar system; at worst we can mine asteroids, but I'm not sure that'd even be necessary.

A Dyson sphere doesn't need to be built all at once. Once it becomes feasible to launch solar computers into space and make a profit selling computing time, the sector will naturally grow exponentially. Now, it may or may not be bounded by some ceiling of demand, but even if only 1/100th or 1/1,000th of the sun's output gets captured, that would represent a huge change in how things work.
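As a rough back-of-the-envelope check on that last claim (using rounded public figures for the Sun's total output and current human energy use, so the exact numbers are only indicative):

```python
# Rounded public figures; both numbers are approximate.
SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun in all directions
WORLD_CONSUMPTION_W = 1.8e13  # rough current human primary energy use (~18 TW)

for fraction in (1e-2, 1e-3):
    captured = fraction * SOLAR_LUMINOSITY_W
    ratio = captured / WORLD_CONSUMPTION_W
    print(f"capturing {fraction:g} of the Sun's output ~ {ratio:.1e}x current human use")
# Even 1/1,000 of the Sun's output is on the order of 2e10 times today's consumption.
```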

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-01-28T15:39:24.501Z · LW(p) · GW(p)

Do we know of materials that could make a good Dyson sphere?

Replies from: mikkel-wilson, habryka4, gilch
comment by MikkW (mikkel-wilson) · 2021-01-28T20:37:04.909Z · LW(p) · GW(p)

A Dyson sphere wouldn't be much different from a big cloud of modern satellites, perhaps with bigger solar panels, but the materials would be the same.

comment by habryka (habryka4) · 2021-01-28T18:57:14.487Z · LW(p) · GW(p)

You don't need strong materials for a Dyson sphere. You basically just put solar panels into low orbit until you've captured all of the outgoing light (or any appreciable fraction of it; you just do it until you have the energy you need).

comment by gilch · 2021-01-28T21:10:07.354Z · LW(p) · GW(p)

You might be confusing "Dyson sphere" with the Dyson shells from science fiction, which are a more specific type of Dyson sphere. You don't need "scrith" or "neutronium" to make a Dyson sphere out of satellites (a Dyson swarm), which is the more realistic type that Dyson originally proposed, or out of statites (a Dyson bubble).

comment by TheSimplestExplanation · 2021-01-28T12:02:07.289Z · LW(p) · GW(p)

claims that are well accepted around here, like galactic-scale space travel and never-ending growth.

I don't think anyone is claiming that never-ending growth is possible, even if measured in utility rather than mass/energy. Well, technically you have "never-ending growth" if you asymptotically approach the limit.

As for galactic-scale space travel that is perfectly possible.

comment by George3d6 · 2021-01-28T14:12:34.591Z · LW(p) · GW(p)

This, I assume, you'd base on a "hasn't happened before, no other animal or thing similar to us is doing it as far as we know, so it's improbable we will be able to do it" type assumption? Or something different?

answer by ChristianKl · 2021-01-27T21:38:13.095Z · LW(p) · GW(p)

Even more so, I have never understood why people believe that thinking about certain problems (e.g. AI Alignment) is a more-efficient-than-random way of solving them, given no evidence of it being so (and no potential evidence, since the problems lie in the future).

The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity. 

A trade that makes us develop technology slower but increases the chances that humanity survives is worth it. 

comment by George3d6 · 2021-01-28T14:05:43.204Z · LW(p) · GW(p)

The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity. 

 

Is "proper alignment" not a feature of an AI system, i.e. something that has to be /invented/discovered/built/?

This sounds like semantics vis-a-vis the potential stance I was referring to above.

Replies from: ChristianKl
comment by ChristianKl · 2021-01-28T14:07:57.249Z · LW(p) · GW(p)

It is a feature of the AI system but it's very important to first discover proper alignment before discovering AGI. If you randomly go about making discoveries it's more likely that you end up discovering AGI and ending humanity before discovering proper alignment.

answer by [deleted] · 2021-01-27T20:38:44.966Z · LW(p) · GW(p)

Not only do I agree with you, but I think a pretty compelling argument can be made.

The insight came to me when I was observing my pet's behaviors.  I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

This led to a general realization.  The animal has a finite set of actions it can make each timestep.  (finite control channel outputs).  It needs to choose, from the set of all the actions it can take, one that will result in meeting the animal's goals 

Like any real control system, the actual actions taken are suboptimal.  When the animal jumps when startled, the direction it bounds may not always be the perfect one.  It may not use the best food-gathering behavior.

But if you could cram a bigger brain in and search more deeply for a better action, the gain might be very small.  An action that is 95% as good as the best action means that the better brain only gains you 5%.

This applies to "intelligence" in general.  A smarter cave man may only be able to do slightly better than his competitors, not hugely better.  Ultimate outcomes may be heavily governed by luck or factors intelligence cannot affect, such as susceptibility to disease.  

This is true even if the intelligence is "infinite".  An infinitely intelligent cave person is one whose every action is calculated to be the most optimal one he or she can make with the knowledge they have.
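A toy simulation of the percentile argument above, under the strong assumption that payoffs are uniformly distributed (heavy-tailed payoffs would change the picture considerably):

```python
import random

# Each timestep offers 1,000 candidate actions with payoffs drawn uniformly
# from [0, 1]. A "good" agent picks roughly the 95th-percentile action, a
# "better" one roughly the 99th-percentile action.
random.seed(0)
trials = 10_000
total_good = total_better = 0.0
for _ in range(trials):
    payoffs = sorted(random.random() for _ in range(1000))
    total_good += payoffs[949]    # ~95th percentile action
    total_better += payoffs[989]  # ~99th percentile action
print(total_better / total_good)  # ~1.04: the smarter agent gains only a few percent
```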

Another realization that comes out of this is that our modern world may only be possible because of stupid people.  Why is that?  Well, the most optimal action you can take as a human being is the one that gives you descendants who survive to mate.  Agriculture, machinery, the printing press, the scientific method - the individual steps to reach these things were probably often taken by tinkerers who would individually have been better served by finding a way to murder their rivals for mates, or by spending the effort on food gathering in the immediate term.  For example, agriculture may not have paid off within the lifespan of the first cave person to discover it.

Anyways, an AI millions of times smarter is like a machine, given a task, that can pick the 99th percentile action instead of the 95th percentile action (humans).  This isn't all that effective alone.  The real power of AI would be that they don't need to sleep, can be used in vast arrays that coordinate better with each other, and always pick that 99th percentile action; they don't get tired or bored or distracted.  And they can be made to coordinate with each other rationally, sharing data rather than arguing with each other.  And you can clone them over and over.  

This should allow for concrete, near term goals we have as humans to be accomplished.  

But I don't think, for the most part, the scary possibilities could be done invisibly.  For example, in order for the AI to develop a bioweapon that can kill everyone, it would need to do it like humans would do it, just more efficiently.  As in, by building a series of mockups of human bodies - at least the lungs, compared to what modern-day researchers do - and trying out incremental small changes to known-to-work viruses.  Or trying out custom proteins on models of cell biology.  

It needs the information to do it, and the only way to get that information requires a series of controlled experiments done by physical systems, controlled by the AI , in the real world.  

Same with developing MNT or any of the other technologies we are pretty sure physics allows, we just don't have the ability to exploit.  I think these things are all possible but the way to make them real would take a large amount of physical resources to methodically work your way up the complexity chain.

comment by George3d6 · 2021-01-28T14:15:12.808Z · LW(p) · GW(p)

I believe this echoes my thoughts perfectly; I might quote it in full if I ever do get around to reviving that draft.

The bit about "perfect" as not giving slack for development, I think, could be used even in the single individual scenario if you assume any given "ideal" action as lower chance of discovering something potential useful than a "mistake". I.e. adding:

  • Actions have unintended and unknown consequences that reveal an unknown landscape of possibilities
  • Actions have some % chance of being "optimal", but one can never be 100% certain they are so, just weigh them as having a higher or lower chance of being so
  • "optimal" is a shifting goal-post and every action will potentially shift it

I think the "tinkerer" example is interesting, but even that assumes "optimal" to be goals dictates by natural-selection, and as per the sperm bank example, nobody care about those goals per-say, their subconscious machinery and the society they live in "cares". So maybe a complementary individual-focused example would be a world in which inactivity is equivalent to the happiest state possible (i.e. optimal for the individual) or at least some form of action that does not lend to itself or something similar to it's host being propagated through time.

comment by Matt Goldenberg (mr-hire) · 2021-01-28T15:42:03.340Z · LW(p) · GW(p)

I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

I'm fairly skeptical of this claim. It seems to me that even moderate differences in animal intelligence, in e.g. dogs, lead to things like tool use and better ability to communicate things to humans.

Replies from: DTX
comment by DTX · 2021-01-28T18:26:40.133Z · LW(p) · GW(p)

To expand, I actually think it applies much more to AI than to animals. Part of the advantage of being an animal is that our interface to the rest of the world is extremely flexible regarding the kinds of inputs it can accept and outputs it can produce. Software systems often crash because XML doesn't specify whether you can include whitespace in a message or not. Part of why AlphaGo isn't really "intelligent" isn't anything about the intrinsic limitations of what types of functions its network architecture can potentially learn and represent. It isn't intelligent because it can't even accept an input that isn't a very specific encoding of a Go board and can't produce any outputs except moves in a game of Go. 

It isn't like a dog; it's more like a dog that can only eat one specific flavor of one specific brand of dog food. Much of the practical difficulty in creating general-purpose software systems is just that there is no general-purpose communication protocol. It's why we have succeeded so far in producing things that can accept and produce images and text: they analogize well to how animals communicate with the rest of the world, so we understand them and can create digital encodings of them. But even those still rely upon character set encodings, pixel metadata specifications, and video codecs that themselves have no ability to learn or adapt. 

comment by habryka (habryka4) · 2021-01-28T19:13:43.499Z · LW(p) · GW(p)

This led to a general realization.  The animal has a finite set of actions it can make each timestep.  (finite control channel outputs).  It needs to choose, from the set of all the actions it can take, one that will result in meeting the animal's goals 

It seems that by having access to things like language, a computer, and programming languages, the problems of a finite action space quickly get resolved and no longer pose an issue. Theoretically I could write a program to make me billions of dollars on the stock market tomorrow. So the space of actions is large enough that performing well in it easily leads to vast increases in performance. 

I agree that there are some small action-spaces in which being better at performing in them might not help you very much, but I don't think humans or AIs have that problem. 

Replies from: None
comment by [deleted] · 2021-01-28T19:58:47.708Z · LW(p) · GW(p)

Please note that the set of actions you can choose from is constrained to those high-value actions you actually know about.

While yes, such a program probably exists (a character sequence that could be typed in at a human timescale to earn a billion dollars), you don't have the information to even consider it as a valid action. Therefore you (probably) cannot do it. You would need to become a quant and it would take both luck and years of your life.

And as a perfect example, this isn't the optimal action per nature. The optimal action was probably to socially defeat your rivals back in high school and to immediately start a large family, then cheat on your wife later for additional children.

If your brain were less buggy - aka 'smarter' in an evolutionary sense - this and similar "high value" moves would be the only actions you could consider, and humans would still be in the dark ages.

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-28T21:11:35.512Z · LW(p) · GW(p)

You would need to become a quant and it would take both luck and years of your life.

Well, sure, because I am a fleshy meat human, but it sure seems that you could build a hypothetical mind that is much better at being a quant than humans, who wouldn't need years of their life to learn it (the same way that we build AIs that are much much better at Go than humans, and don't need years of existence to train to a level that vastly outperforms human players). 

Replies from: None
comment by [deleted] · 2021-01-28T22:34:39.623Z · LW(p) · GW(p)

That's the part I am saying isn't true, or wasn't until recently. The mind, if it is limited to a human body, has finite I/O. It may simply not be possible to read enough in a human's working lifespan to devise a reliable way to get a billion dollars. (Getting lucky in a series of risky bets is a different story; in that case you didn't really solve the problem, you just won the lottery.)

And even if I posit it is true now, imagine you had this kind of mind but were a peasant in Russia in 1900. What meaningful thing could you do? You might devise a marginally better way to plow the fields - but again, with limited I/O and lifespan, your revised way may not be as robust and effective overall as the way the village elders show you. This is because your intelligence cannot increase the observations available to you, and you might need decades of data to devise an optimal strategy.

So this relates to the original topic: to make a hyper-intelligent AI it needs to have access to data, clean data with cause and effect, and the best way to do that is to give it access to robotics and the ability to build things.

This limiting factor of physicality might end up making AIs controllable even if they are in theory exponential, the same way a negative void coefficient makes a nuclear reactor stable.

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-28T22:53:53.248Z · LW(p) · GW(p)

I really don't buy the "you need to run lots of experiments to understand how the world works" hypothesis. It really seems like we could have figured out relativity, and definitely Newtonian physics, without any fancy experiments. The experiments were necessary to create broad consensus among the scientific community, but basically any video stream a few minutes long would have been sufficient to derive Newtonian physics, and probably even sufficient to derive relativity and quantum physics. Definitely if you include anything like observations of objects in the night sky. And indeed, Einstein didn't really run any experiments, just thought experiments, plus a few physical facts about the constant nature of the speed of light which can easily be rederived from visual artifacts that occur all the time. 

For some theoretical coverage of the bayesian ideal here (which I am definitely not saying is achievable), see Eliezer's posts on Occam's razor and Solomonoff induction [LW · GW].

If I had this kind of mind as a Russian peasant in 1900? I would have easily developed artificial fertilizer, which is easily producible given common household items in 1900, and become rich, then probably used my superior ability to model other people to become extremely socially influential, and then developed some pivotal technology like nanotechnology or nukes to take over the world. 

I don't see why I would be blocked on I/O in any meaningful way. Modern scientists don't magically have more I/O than historical people, and a good fraction of our modern inventions don't require access to particularly specialized resources. What they have is access to theoretical knowledge and other people's observations, but that's exactly what a superintelligent AI would be able to independently generate much better. 

Replies from: None
comment by [deleted] · 2021-01-29T06:06:47.034Z · LW(p) · GW(p)

Well, for relativity you absolutely required observations that couldn't be seen in a simple video stream.  And unfortunately I think you are wrong: I think there are a very large number of incorrect physical models that would also fit the evidence in a short video.  (Also, there is probably a simpler model than relativity that is still just as correct; it is improbable that we have found the simplest possible model over the space of all of mathematics.)

My evidence for this is that pretty much any old machine learning model will overfit to an incorrect/non-general model unless the data set is very, very large and you are very careful with the training rules.

I think you could not have invented fertilizer, for the same reason.  Remember, you are infinitely smart but you have no more knowledge than a Russian peasant.  So you will know nothing of chemistry, and you have no knowledge of how to perform chemistry with household ingredients.  Also, you have desires - to eat, to mate, shelter - and your motivations are the same as the Russian peasant's; with your infinite brainpower you will just be aware of the optimal path among the strategies you are able to consider given the knowledge that you have.

Learning chemistry does not accomplish your goals directly, you may be aware of a shorter-term mechanism to do this, and you do not know you will discover anything if you study chemistry.

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-29T06:48:24.460Z · LW(p) · GW(p)

What observations do I need that are not available in a video stream? I would indeed bet that within the next 15 years, we will derive relativity-like behavior from nothing but video streams using AI models. Any picture of the night sky will include some kind of gravitational lensing behavior, which was one of the primary pieces of evidence we used to derive relativity. Before we discovered general relativity we just didn't have a good hypothesis for why that lensing was present (and the effects were small, so we kind of ignored them).

The space of mathematical models that are as simple as relativity strikes me as quite small; relativity itself is probably less than 10,000 bits. Like, encoding a simulation in Python with infinite computing power to simulate relativistic bodies is really quite a short program, probably less than 500 lines. There aren't that many programs of that length that fit the observations of a video stream. Indeed, I think it is very likely that no other models that are even remotely as simple fit the data in a video stream. Of course it depends on how exactly you encode things, but I could probably code you up a Python program that simulates general relativity in an afternoon, assuming infinite compute, under most definitions of objects.
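For a sense of scale, here is a minimal sketch of a Newtonian gravity simulator; this is only a stand-in, not the general-relativity program described above, and the masses, constants and step counts are arbitrary, but it illustrates how short such a simulation can be:

```python
# Minimal Newtonian 2D N-body integrator; units and constants are arbitrary,
# and this is only meant to illustrate how short such a program is.
G = 1.0
DT = 0.001

def step(bodies):
    # bodies: list of (mass, (x, y), (vx, vy))
    accelerations = []
    for i, (mi, (xi, yi), _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, (xj, yj), _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * mj * dx / r3
            ay += G * mj * dy / r3
        accelerations.append((ax, ay))
    updated = []
    for (m, (x, y), (vx, vy)), (ax, ay) in zip(bodies, accelerations):
        vx, vy = vx + ax * DT, vy + ay * DT
        updated.append((m, (x + vx * DT, y + vy * DT), (vx, vy)))
    return updated

# A light body in a roughly circular orbit around a heavy one.
bodies = [(1000.0, (0.0, 0.0), (0.0, 0.0)), (1.0, (1.0, 0.0), (0.0, 31.6))]
for _ in range(1000):
    bodies = step(bodies)
print(bodies[1][1])  # position of the orbiting body after 1000 steps
```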

Replies from: None
comment by [deleted] · 2021-01-29T08:12:20.059Z · LW(p) · GW(p)

Again, what you are missing is that there are other explanations that will also fit the data.  As an analogy, if someone draws from a deck of cards and presents the cards as random numbers, you will not be able to deduce what they are doing if you have no prior knowledge of cards and only a short sequence of draws.  There will be many possible explanations, and some are simpler than 'is drawing from a set of 52 elements'.
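A small numeric illustration of that point; the draws and the two competing hypotheses below are made up, and the only thing being shown is how little a short sequence discriminates between similarly simple models:

```python
# Five observed numbers, all between 1 and 52. Compare two equally simple
# hypotheses: "uniform draws from 1..52" vs "uniform draws from 1..60".
draws = [17, 42, 3, 28, 51]  # hypothetical short observation sequence

def likelihood(n_faces, observations):
    # Probability of the exact sequence under a uniform model with n_faces outcomes.
    return (1.0 / n_faces) ** len(observations)

ratio = likelihood(52, draws) / likelihood(60, draws)
print(ratio)  # ~2.0 -- only weak evidence favouring the "deck of 52" hypothesis
```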

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-29T23:15:39.092Z · LW(p) · GW(p)

Yeah, that's why I used the simplicity argument. Of course there are other explanations that fit the data, but are there other explanations that are remotely as simple? I would argue no, because relativity is just already really simple, and there aren't that many other theories at the same level of simplicity.   

Replies from: None
comment by [deleted] · 2021-01-30T19:55:25.770Z · LW(p) · GW(p)

I see that we would need to actually do this experiment in order for you to be convinced, but I don't have infinite compute. Maybe you can at least vaguely understand my point: given the space of all functions in all of mathematics, are you certain nothing fits a short sequence of observed events better than relativity? What if there is a little bit of noise in the video?

I would assume other functions also match. Heck, ReLU with the right coefficients matches just about anything, so...

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-30T20:06:22.052Z · LW(p) · GW(p)

ReLU with the right coefficients in a standard neural net architecture is much, much more complicated than general relativity. General relativity is a few thousand bits long when written in Python. Normal neural nets almost never have less than a megabyte of parameters, and state-of-the-art models have gigabytes and terabytes worth of parameters. 

Of course there are other things in the space of all mathematical functions that will fit it as well. The video itself is in that space of functions, and that one will have perfect predictive accuracy. 

But relativity is not a randomly drawn element from the space of all mathematical functions. The equations are exceedingly simple. "Most" mathematical functions have an infinite number of differing terms. Relativity has just a few, so few indeed that translating it into a language like python is pretty easy, and won't result in a very long program.

Indeed, one thing about modern machine learning is that it is producing models with an incredibly long description length compared to what mathematicians and physicists are producing, and this is causing a number of problems for those models. I expect future, more AGI-complete systems to produce much shorter description-length models. 
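A crude description-length comparison, under loudly stated assumptions (treating the physics simulator as a few hundred short lines of source and the network as a 1 GB parameter file; the exact figures do not matter, only the orders of magnitude):

```python
# Both figures below are assumptions for illustration, not measurements.
physics_source_bits = 500 * 60 * 8   # ~500 lines of ~60 characters -> ~2.4e5 bits
nn_parameter_bits = int(1e9) * 8     # a 1 GB parameter file -> 8e9 bits
print(nn_parameter_bits / physics_source_bits)  # ~3e4: tens of thousands of times longer
```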

answer by habryka · 2021-01-28T19:10:39.358Z · LW(p) · GW(p)

The most basic argument is that it really doesn't take a lot of material resources to be very smart. Human brains run on a few watts, and we have more than enough easily available material resources in our environment to build much much much bigger brains. 

Then, it doesn't seem like "access to material resources" is what distinguishes humanity's success from other animals' success. Sure seems like we pretty straightforwardly won by being smarter and better at coordinating. 

Also, between groups of humans, it seems that development of better technologies has vastly outperformed access to more resources (i.e. having a machine gun doesn't take very many materials, but easily allows you to win wars against less technologically advanced civilizations). Daniel Kokotajlo's work has studied in depth the effect that better technology seems to have had on conquerors when trying to conquer the Americas. 

Now, you might doubt the connection between intelligence and developing new technologies. To me, it seems really obvious that there are some properties of a mind that determine how good it is at developing new technologies, holding environmental factors constant. We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here. I don't see how the environmental effects would dominate, given that most technologies we are developing just involve the use of existing components we already have (like, writing a new computer program that is better at doing something doesn't require special new resources). 

Now the risk is that you get an AI that is much better at solving problems and developing new technologies than humans. It seems that humans are really not great at it, and that the upper bound for competence is far above where we are. This makes sense both on priors (why would the first species to make use of extensive tool-making already be at the maximum?) and from an inside view (human minds sure don't seem very optimized for actually developing new technologies, given that we have a brain that only takes in a few watts and has been mostly optimized for other constraints). I don't care whether you call it intelligence, and it definitely shouldn't be conflated with the concept of human intelligence. Like, humans are sometimes smarter in a very specific and narrow way, and the variation between individual humans is overall pretty minimal. When I talk about machine intelligence I mean a much broader set of potential ways to be better at thinking.

comment by George3d6 · 2021-01-29T00:50:47.570Z · LW(p) · GW(p)

We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here

Is there?

Writing, agriculture, animal husbandry, similar styles of architecture and most modern inventions from flight to nuclear energy to antibiotics seem to have been developed in a convergent way given some environmental factors.

But I guess it boils down to a question of studying history, which ultimately has no good data and is only good for overfitting bias. So I guess it may be that there's no way to actually argue against or for either of the positions here, now that I think about it.

So thanks for your answer, it cleared a few things up for me, I think, when constructing this reply.

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-29T06:46:17.437Z · LW(p) · GW(p)

But I guess it boils down to a question of studying history, which ultimately has no good data and is only good for overfitting bias.

What a weird statement. Of course history rules out 99.9% of hypotheses about how the world came to be. We can quibble over the remaining hypotheses, but obvious ones like "the world is 10000 years old" and "human populations levels reached 10 billion at some point in the past" are all easily falsified. Yes, there is some subjectivity in history, but overall, it still reduces the hypothesis space by many many orders of magnitude. 

We know that many thousands of years of history never had anything like the speed of technological development as we had in the 20th century. There was clearly something that changed during that time. And population is not sufficient, since we had relatively stable population levels for many thousands of years before the beginning of the industrial revolution, and again before the beginning of agriculture.   

Replies from: George3d6
comment by George3d6 · 2021-01-29T22:41:34.541Z · LW(p) · GW(p)

What a weird statement. Of course history rules out 99.9% of hypotheses about how the world came to be. We can quibble over the remaining hypotheses, but obvious ones like "the world is 10000 years old" and "human populations levels reached 10 billion at some point in the past" are all easily falsified. Yes, there is some subjectivity in history, but overall, it still reduces the hypothesis space by many many orders of magnitude. 

I will note that the 10,000-years-old thing is hardly ruled out by "history", more so by geology or physics, but point taken: even very little data and bad models of reality can lead to ruling out a lot of things with very high certainty.

We know that many thousands of years of history never had anything like the speed of technological development as we had in the 20th century. There was clearly something that changed during that time. And population is not sufficient, since we had relatively stable population levels for many thousands of years before the beginning of the industrial revolution, and again before the beginning of agriculture.   

This is, however, the kind of area where I always find history doesn't provide enough evidence, which is not to say this would help my point or harm yours. Just to say that I don't have enough certainty that statements like the above have any meaning, and in order to claim what I'd have wanted (what I was asking the question about) I would have to make a similar claim about history.

In brief I'd want to argue with the above statement by pointing out:

  1. Ongoing process since the ancient Greeks, with some interruptions. But most of the "important stuff" was figured out a long time ago (I'm fine living with Greek architecture, crop selection, heating, medicine and even logic and mathematics).
  2. "Progress" bringing about issues that we solve and call "progress", i.e. smallpox and the bubonic plague up until we "progressed" to cities that could make them problematic. On the whole there's no indication lifespan or happiness has greatly increased, the increases in lifespan exist, but once you take away "locked up in a nursing home" as "life" and exclude "death of kids <1 year" (or, alternatively, if you want to claim kids <1 year are as precious as a fully developed conscious human, once you include abortions into our own death statistics)... we haven't made a lot of "progress" really.
  3. A "cause" being attributed to the burst of technology in some niches in the 20th century, instead of it just being viewed as "random chance", i.e. the random chance of making the correct 2 or 3 breakthroughs at the same time.

Those 3 points are completely different threads that could dismantle the idea you present; I'm just bringing them up as potential threads. Overall I hold very little faith in them besides (3); I think your view of history is more correct. But there's no experiment I can run to find out, no way I can collect further data, nothing stopping me from overfitting a model to agree with some subconscious bias I have.

In day-to-day life, if I believe something (e.g. neural networks are the best abstractions for generic machine learning) and I'm faced with an issue (e.g. loads of customers are getting bad accuracy from my NN-based solution), I can at least hope to be open-minded enough to try other things and see that I might have been wrong (e.g. gradient tree boosting might be a better abstraction than NNs in many cases) or, failing to find a better working hypothesis that provides experimental evidence, I can know that I don't know (e.g. go bankrupt and never get investor money again because I squandered it away).

With the study of history I don't see how I can go through that process. I feel a siren call that says "I like this model of the world", and I can fit historical evidence to it without much issue. And I have no way to properly weight the evidence, and ultimately no experimental proof that could increase or decrease my confidence in a significant way. No "skin in the game", besides wanting to get a warm fuzzy feeling from my historical models.

But again, this is not to say that certain hypotheses (e.g. the Greeks invented a vacuum-based steam engine) can't be confidently discounted, and I think that in and of itself can be quite useful; you are correct there.

answer by ChristianKl · 2021-01-28T14:35:53.972Z · LW(p) · GW(p)

Your paragraph that outlines your position mixes multiple different things into the concept of reason.

There's the intelligence of individual scientists or engineers, there are conceptual issues and there's the quality of institutions.

An organization that's a heavily dysfunctional immoral maze is going to innovate less new technology than an organization with access to the same resources but with a better organizational setup. 

When it comes to raw intelligence, a lot of the most productive engineers have an IQ that far exceeds that of the average population.

Conceptual insights like the idea of running controlled trials heavily influence the medical technology that can be developed in our society. We might have had concepts that would have allowed us to produce a lot more vaccines against COVID-19 much earlier.

No comments
