Posts

It's time for a self-reproducing machine 2024-08-07T21:52:22.819Z

Comments

Comment by Carl Feynman (carl-feynman) on Review: Planecrash · 2024-12-27T20:03:16.927Z · LW · GW

All the pictures are missing for me.

Comment by Carl Feynman (carl-feynman) on What are the strongest arguments for very short timelines? · 2024-12-26T19:45:18.691Z · LW · GW

Is this the consensus view? I think it’s generally agreed that software development has been sped up. A factor of two is ambitious! But that’s how it seems to me, and I’ve measured three examples of computer vision programming, each taking an hour or two, by doing them by hand and then with machine assistance. The machines are dumb and produce results that require rewriting. But my code is also inaccurate on a first try. I don’t have any references where people agree with me. And this may not apply to AI programming in general.

You ask about “anonymous reports of diminishing returns to scaling.” I have also heard these reports, directly from a friend who is a researcher inside a major lab. But note that this does not imply a diminished rate of progress, since there are other ways to advance besides making LLMs bigger. o1 and o3 indicate the payoffs to be had by doing things other than pure scaling. If there are forms of progress available to cleverness, then the speed of advance need not require scaling.

Comment by Carl Feynman (carl-feynman) on What are the main arguments against AGI? · 2024-12-26T17:07:49.313Z · LW · GW

One argument against is that I think it’s coming soon, and I have a 40 year history of frothing technological enthusiasm, often predicting things will arrive decades before they actually do. 😀

Comment by Carl Feynman (carl-feynman) on Shortform · 2024-12-26T17:03:44.425Z · LW · GW

These criticisms are often made of “market dominant minorities”, to use a sociologist’s term for what American Jews and Indian-Americans have in common. Here’s a good short article on the topic: https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=5582&context=faculty_scholarship

Comment by Carl Feynman (carl-feynman) on Purplehermann's Shortform · 2024-12-26T16:47:49.692Z · LW · GW

This isn’t crazy— people have tried related techniques.  But it needs more details thought out. 

In the chess example, the AIs start out very stupid, being wired at random.  But in a game between two idiots, moving at random, eventually someone is going to win.  And then you reinforce the techniques used by the winner, and de-reinforce the ones used by the loser.  In any encounter, you learn, regardless of who wins.  But in an encounter between a PM and a programmer, if the programmer fails, who gets reinforced?  It might be because the programmer is dumb, and should be de-reinforced.  But it might be because the PM is dumb, and asked for something impossible or far beyond what can be done, in which case it should be de-reinforced.  But it might be because the PM came up with a task just barely beyond the programmer’s ability, which is good and should be reinforced.  We somehow need to keep the PM producing problems which are hard but possible.  Maybe the programmer could be tasked with coming up with either a solution or a proof of impossibility?  
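
Here's a minimal sketch of one way to keep the PM in the "hard but possible" zone: reward it most when the programmer succeeds about half the time.  This is an illustrative choice on my part (related to automatic-curriculum methods), not anything from an existing system, and it assumes we can estimate the success rate by sampling several attempts.

```python
# Illustrative sketch only: the 0.5 target and the reward shapes are
# assumptions, not taken from any existing system.

def programmer_reward(solved: bool) -> float:
    # The programmer's signal stays simple: reinforce success.
    return 1.0 if solved else -1.0

def pm_reward(success_rate: float, target: float = 0.5) -> float:
    # Peaks for tasks solved about half the time; goes to zero for tasks
    # that are trivial (rate ~1) or impossible (rate ~0) for the current
    # programmer, so the PM is pushed toward "hard but possible".
    return 1.0 - abs(success_rate - target) / target

# Estimate the success rate by letting the programmer attempt the task k times.
attempts = [True, False, False, True]      # hypothetical outcomes of 4 attempts
rate = sum(attempts) / len(attempts)       # 0.5
print(programmer_reward(True), pm_reward(rate))   # 1.0 1.0
```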

AlphaGo had a mechanism which tracked how important each move was.  It was trained to predict the probability that white would win, on each position encountered in the game.    Moves where this probability swung wildly were given a larger weight in reinforcement.  This was important for concentrating training on decisive moves, allowing the extraction of information from each move instead of each game. It’s not clear if this is possible in the programming task.
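
A toy version of that weighting idea, as I understand it (my reconstruction, not AlphaGo's actual training code; the numbers are made up):

```python
# Weight each move by how much it swings the value net's predicted
# probability that white wins.  Toy reconstruction of the idea only.

def move_weights(win_probs: list[float]) -> list[float]:
    # win_probs[i] is the value net's P(white wins) after move i.
    return [abs(win_probs[i + 1] - win_probs[i])
            for i in range(len(win_probs) - 1)]

game = [0.50, 0.52, 0.51, 0.80, 0.78, 0.99]  # hypothetical predictions over one game
print(move_weights(game))  # roughly [0.02, 0.01, 0.29, 0.02, 0.21]: the big swings dominate
```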

Comment by Carl Feynman (carl-feynman) on Why is neuron count of human brain relevant to AI timelines? · 2024-12-26T15:19:52.106Z · LW · GW

This is a great question!

Point one:

The computational capacity of the brain used to matter much more than it matters now.  The AIs we have now are near-human or superhuman at many skills, and we can measure how skill capacity varies with resources in the near-human range.  We can debate and extrapolate and argue with real data.

But we spent decades where the only intelligent system we had was the human brain, so it was the only anchor we had for timelines.  So even though it’s very hard to make good estimates from, we had to use it.

Point two:

Most information that gives rise to the human mind is learned, not evolved.

The information encoded by evolution is less than a hundred megabytes.  It’s limited by the size of the genome (about 1 gigabyte).  Moreover, we know that much of the genome is unimportant for mental development.  About 40% is parasitic (viruses and transposons).  Much of the remaining DNA is not under evolutionary control, varying randomly between individuals.  Of expressed genes, only about a quarter appear to be expressed in the brain.  And some of them encode things AI doesn’t need, like the high-reliability plumbing of the circle of Willis, or the mysteries of love, or the biochemical pickiness of the blood-brain barrier, or wanting to pee when you hear running water.  So the “program” contributed by evolution is no more than the size of a largish program like a compiler.  (I would claim it’s probably even less.  I think the important instincts plus the learning algorithms are only a few thousand lines of code.  But that’s debatable.)
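
The arithmetic behind that bound, using the rough fractions above (the "fraction under evolutionary control" is the loosest guess here):

```python
# Back-of-envelope version of the "<100 MB" bound.  All numbers are rough
# order-of-magnitude estimates; the 40% figure is my own assumption.

genome_bytes   = 1e9   # ~3.2e9 base pairs at 2 bits each is ~0.8 GB; call it 1 GB
non_parasitic  = 0.60  # ~40% is viruses and transposons
under_control  = 0.40  # guess at the fraction not varying randomly between individuals
brain_fraction = 0.25  # ~a quarter of expressed genes appear in the brain

bound = genome_bytes * non_parasitic * under_control * brain_fraction
print(f"{bound / 1e6:.0f} MB")   # ~60 MB, comfortably under 100 MB
```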

On the other hand, the amount learned in a lifetime is on the order of one or a few gigabytes.

Point three:

Most of the information accumulated by evolution has been destroyed.  All of the information accumulated in a species is lost when that species goes extinct. And most species have gone extinct, leaving no descendants.  The world of the Permian period was (as far as we know) just as busy as today, with hundreds of thousands of animal species.  Just one of those species, a little burrowing critter among many other types of little burrowing critters, was the ancestor of all mammals.  All the other little burrowing critters lost out.  All their evolutionary innovations have been lost.

This doesn’t apply to species with horizontal transmission of genes, like bacteria.  But it applies to animals, who are the only creatures with brains.

Comment by Carl Feynman (carl-feynman) on Purplehermann's Shortform · 2024-12-26T14:23:49.272Z · LW · GW

What’s a PM?

Comment by Carl Feynman (carl-feynman) on What are the strongest arguments for very short timelines? · 2024-12-24T22:34:42.887Z · LW · GW

I disagree that there is a difference of kind between "engineering ingenuity" and "scientific discovery", at least in the business of AI.  The examples you give-- self-play, MCTS, ConvNets-- were all used in game-playing programs before AlphaGo.  The trick of AlphaGo was to combine them, and then discover that it worked astonishingly well.  It was very clever and tasteful engineering to combine them, but only a breakthrough in retrospect.  And the people who developed each of them earlier, for their own independent purposes?  They were part of the ordinary cycle of engineering development: "Look at a problem, think as hard as you can, come up with something, try it, publish the results."  They're just the ones you remember, because they were good.  

Paradigm shifts do happen, but I don't think we need them between here and AGI.

Comment by Carl Feynman (carl-feynman) on What are the strongest arguments for very short timelines? · 2024-12-24T21:56:54.901Z · LW · GW

Came here to say this, got beaten to it by Radford Neal himself, wow!  Well, I'm gonna comment anyway, even though it's mostly been said.

Gallager proposed belief propagation as an approximate good-enough method of decoding a certain error-correcting code, but didn't notice that it worked on all sorts of probability problems.  Pearl proposed it as a general mechanism for dealing with probability problems, but wanted perfect mathematical correctness, so confined himself to tree-shaped problems.  It was their common generalization that was the real breakthrough: an approximate good-enough solution to all sorts of problems.  Which is what Pearl eventually noticed, so props to him.

If we'd had AGI in the 1960s, someone with a probability problem could have said "Here's my problem.  For every paper in the literature, spawn an instance to read that paper and tell me if it has any help for my problem."  It would have found Gallager's paper and said "Maybe you could use this?"

Comment by Carl Feynman (carl-feynman) on What are the strongest arguments for very short timelines? · 2024-12-24T21:23:59.927Z · LW · GW

Summary: Superintelligence in January-August, 2026.  Paradise or mass death, shortly thereafter.

This is the shortest timeline proposed in these answers so far.  My estimate (guess) is that there's only a 20% chance of this coming true, but it looks feasible as of now.  I can't honestly assert it as fact, but I will say it is possible.

It's a standard intelligence explosion scenario: with only human effort, the capacities of our AIs double every two years.  Once AI gets good enough to do half the work, we double every one year.  Once we've done that for a year, our now double-smart AIs help us double in six months.  Then we double in three months, then six weeks.... to perfect ASI software, running at the limits of our hardware, in a finite time.  Then the ASI does what it wants, and we suffer what we must.
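
Summing that schedule shows why the blow-up lands on a definite date: the doubling times form a geometric series with a finite sum.  A minimal version of the arithmetic (assuming each doubling takes half as long as the last):

```python
# Each capability doubling takes half as long as the last, so the total
# time to the end of the series is finite.

def remaining_years(fraction_done: float, first_doubling: float = 1.0,
                    ratio: float = 0.5) -> float:
    # Time left = rest of the current doubling, plus the tail of the series:
    # first * (r + r^2 + ...) = first * r / (1 - r).
    tail = first_doubling * ratio / (1.0 - ratio)
    return (1.0 - fraction_done) * first_doubling + tail

print(remaining_years(0.5))  # 1.5 years from late 2024 -> roughly August 2026
print(remaining_years(1.0))  # 1.0 year                 -> roughly January 2026
```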

I hear you say "Carl, this argument is as old as the hills.  It hasn't ever come true, why bring it up now?"  The answer is, I bring it up because it seems to be happening.  

  • For at least six months now, we've had software assistants that can roughly double the productivity of software development.  At least at the software company where I work, people using AI (including me) are seeing huge increases in effectiveness.  I assume the same is true of AI software development, and that AI labs are using this technology as much as they can.  
  • In the last few months, there's been a perceptible increase in the speed of releases of better models.  I think this is a visible (subjective, debatable) sign of an intelligence explosion starting up.  

So I think we're somewhere in the "doubling in one year" phase of the explosion.  If we're halfway through that year, the singularity is due in August 2026.  If we're near the end of that year, the date is January 2026.

There are lots of things that might go wrong with this scenario, and thereby delay the intelligence explosion.  I will mention a few, so you don't have to.

First, the government might stop the explosion, by banning the use of AI for the development of AI.  Or perhaps the managements of all the major AI labs will spontaneously not be so foolish as to allow it.  This will delay the problem for an unknown time.

Second, the scenario has an extremely naive model of intelligence explosion microeconomics.  It assumes that one doubling of "smartness" produces one doubling of speed.  In Yudkowsky's original scenario, AIs were doing all the work of development, and this might be a sensible assumption.  But what has actually happened is that successive generations of AI can handle larger and larger tasks, before they go off the rails.  And they can handle these tasks far faster than humans.  So the way we work now is that we ask the AI to do some small task, and bang, it's done.  It seems like testing is showing that current AIs can do things that would take a human up to an hour or two.  Perhaps the next generation will be able to do tasks up to four hours.  The model assumes that this allows a twofold speedup, then fourfold, etc.  But this assumption is unsupported.

Third, the scenario assumes that near-term hardware is sufficient for superintelligence.  There isn't time for the accelerating loop to take effect in hardware.  Even if design were instant, the physical processes of mask making, lithography, testing, yield optimization and mass production take more than a year.  The chips that the ASI will run on in mid-2026 have their designs almost done now, at the end of 2024.  So we won't be able to get to ASI, if the ASI requires many orders of magnitude more FLOPs than current models.  Instead, we'll have to wait until the AI designs future generations of semiconductor technology.  This will delay matters by years (if using humans to build things) or hours (if using nanotechnology).

(I don't think the hardware limit is actually much of a problem; AIs have recently stopped scaling in numbers of parameters and size of training data.  Good engineers are constantly figuring out how to pack more intelligence into the same amount of computation.  And the human brain provides an existence proof that human-level intelligence requires much less training data.  Like Mr. Helm-Burger above, I think human-equivalent cognition is around 10^15 FLOPS.  But reasonable people disagree with me.)

Comment by Carl Feynman (carl-feynman) on Robbin's Farm Sledding Route · 2024-12-22T16:50:19.169Z · LW · GW

There’s a shorter hill with a good slope in McLellan park, about a mile away.  It debouches into a flat area, so you can coast a long time and don’t have to worry about hitting a fence.  If you’ve got the nerve, you can sled onto a frozen pond and really go far.

The shorter hill means it’s quicker to climb, so it provides roughly equal fun per hour.

Comment by Carl Feynman (carl-feynman) on Biological risk from the mirror world · 2024-12-20T18:57:27.883Z · LW · GW

This is a lot easier to deal with than other large threats.  The CO2 keeps rising because fossil fuels are so nearly indispensable.  AIs keep getting smarter because they’re harmless and useful now and only dangerous in some uncertain future.  Nuclear weapons still exist because they can end any war.  But there is no strong argument for building mirror life.

I read (much of) the 300-page report giving the detailed argument.  They make a good case that the effects of a release of a mirror bacterium would be apocalyptic.  But what I found more interesting and encouraging was the discussion of the benefits of creating a mirror bacterium.  They’re honestly pretty small:

  • It would make it cheaper to manufacture some theoretical drugs, not yet known to be effective.
  • It would be an amazing biochemical stunt.  Someone might get a Nobel Prize.
  • And that’s it!

Given the very small “pro” factors and the very large “con”, I would think it would be very easy to prevent anyone from doing it.  Sensible people will refrain, and sensible lawmakers will forbid it.  Moreover, given the indiscriminate nature of mirror bacteria, I find it hard to believe any group will want to release one.

Comment by Carl Feynman (carl-feynman) on Biological risk from the mirror world · 2024-12-20T16:53:14.461Z · LW · GW

The advantage is that they would have neither predators nor parasites, and their prey would not have adapted defenses to them.  This would be true of any organism with a sufficiently unearthly biochemistry.  Mirror life is the only such organism we are likely to create in the near term.

Comment by Carl Feynman (carl-feynman) on The Dangers of Mirrored Life · 2024-12-15T14:31:27.434Z · LW · GW

Thanks.

Comment by Carl Feynman (carl-feynman) on The Dangers of Mirrored Life · 2024-12-13T18:31:38.835Z · LW · GW

Has anyone been able to get to the actual “300-page report”?  I follow the link in the second line of this article and I get to a page that doesn’t seem to have any way to actually download the report.

Comment by Carl Feynman (carl-feynman) on "The Solomonoff Prior is Malign" is a special case of a simpler argument · 2024-11-25T14:51:45.692Z · LW · GW

“…Solomonoff’s malignness…”

I was friends with Ray Solomonoff; he was a lovely guy and definitely not malign.

Epistemic status: true but not useful.

Comment by Carl Feynman (carl-feynman) on O O's Shortform · 2024-11-17T16:38:18.847Z · LW · GW

I was all set to disagree with this when I reread it more carefully and noticed it said “superhuman reasoning” and not “superintelligence”.  Your definition of “reasoning” can make this obviously true or probably false.  

Comment by Carl Feynman (carl-feynman) on Shortform · 2024-11-10T16:16:43.920Z · LW · GW

The Antarctic Treaty (and subsequent treaties) forbid colonization.  They also forbid extraction of useful resources from Antarctica, thereby eliminating one of the main motivations for colonization.  They further forbid any profitable capitalist activity on the continent.  So you can’t even do activities that would tend toward permanent settlement, like surveying to find mining opportunities, or opening a tourist hotel.  Basically, the treaty system is set up so that not only can’t you colonize, but you can’t even get close to colonizing.

Northern Greenland is inhabited, and it’s at a similar latitude.

(Begin semi-joke paragraph) I think the US should pull out of the treaty, and then announce that Antarctica is now part of the US, all countries are welcome to continue their purely scientific activity provided they get a visa, and announce the continent is now open to productive activity.  What’s the point of having the world’s most powerful navy if you can’t do a fait accompli once in a while?  Trump would love it, since it’s simultaneously unprecedented, arrogant and profitable.  Biggest real estate development deal ever!  It’s huuuge!

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-11-06T17:23:05.197Z · LW · GW

A fascinating recent paper on the topic of human bandwidth  is https://arxiv.org/abs/2408.10234.  Title and abstract:

The Unbearable Slowness of Being

Jieyu Zheng, Markus Meister

This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at an enormous rate, no less than 1 gigabits/s. The stark contrast between these numbers remains unexplained. Resolving this paradox should teach us something fundamental about brain function: What neural substrate sets this low speed limit on the pace of our existence? Why does the brain need billions of neurons to deal with 10 bits/s? Why can we only think about one thing at a time? We consider plausible explanations for the conundrum and propose new research directions to address the paradox between fast neurons and slow behavior.

Comment by Carl Feynman (carl-feynman) on The Median Researcher Problem · 2024-11-05T16:48:51.548Z · LW · GW

They’re measuring a noisy phenomenon, yes, but that’s only half the problem.  The other half of the problem is that society demands answers.  New psychology results are a matter of considerable public interest and you can become rich and famous from them.  In the gap between the difficulty of supply and the massive demand grows a culture of fakery.  The same is true of nutrition— everyone wants to know what the healthy thing to eat is, and the fact that our current methods are incapable of discerning this is no obstacle to people who claim to know.

For a counterexample, look at the field of planetary science.  Scanty evidence dribbles in from occasional spacecraft missions and telescopic observations, but the field is intellectually sound because public attention doesn’t rest on the outcome.

Comment by Carl Feynman (carl-feynman) on What's a good book for a technically-minded 11-year old? · 2024-11-05T16:06:18.307Z · LW · GW

Here is a category of book that I really loved at that age: non-embarrassing novels about how adults do stuff.  Since, for me, that age was in 1973, the particular books I name might be obsolete.  There’s a series of novels by Arthur Hailey, with titles like “Hotel” and “Airport”, that are set inside the titular institutions, and follow people as they deal with problems and interact with each other.  And there is no, or at least minimal, sex, so they’re not icky to a kid.  They’re not idealized; there is a reasonable degree of fallibility, venality and scheming, but that is also fascinating.  And all the motivations, and the way the systems work, are clearly explained, so it can be understood by an unsophisticated reader.

These books were bestsellers back in the day, so you might be able to find a copy in the library.  See if he likes it!

Another novel in this vein is “The View from the Fortieth Floor”, which is about a badly managed magazine going bankrupt.  Doesn’t sound amazing, I know, but if you’re a kid, who’s never seen bad managers blunder into ineluctable financial doom, it’s really neat.

My wife is a middle school librarian.  I’ll ask her when I see her for more books like this.

Comment by Carl Feynman (carl-feynman) on What's a good book for a technically-minded 11-year old? · 2024-11-05T15:32:17.204Z · LW · GW

Doesn’t matter, because HPMOR is engaging enough on a chapter-by-chapter basis.  I read lots of books when I was a kid when I didn’t understand the overarching plot.  As long as I had a reasonable expectation that cool stuff would happen in the next chapter, I’d keep reading.  I read “Stand On Zanzibar” repeatedly as a child, and didn’t understand the plot until I reread it as an adult last year.  Same with the detective novel “A Deadly Shade of Gold”.  I read it for the fistfights, snappy dialogue, and insights into adult life.  The plot was lost on me.

Comment by Carl Feynman (carl-feynman) on Purplehermann's Shortform · 2024-11-03T16:33:37.570Z · LW · GW

In general the human body is only capable of healing injuries that are the kind of thing that, if they were smaller, would still leave the victim alive, in the Stone Age.  If an injury is of a type that would be immediately fatal in the Stone Age, there’s no evolutionary pressure to make it survivable.  For example, we can regrow peripheral nerves, because losing a peripheral nerve means a numb patch and a weak limb, but you could live with this for a few months even if you’re a caveman.  On the other hand, we can’t regrow spinal cord, because a transected spinal cord is fatal within a day or two even given the finest Stone Age nursing care (it didn’t become survivable until about 1946).  On the third hand, we can heal brain from strokes, even though brain is more complex than spinal cord, because a small stroke is perfectly survivable as long as you have someone to feed you until you get better.  We can survive huge surgical incisions, even though those would be fatal in the Stone Age, because small penetrating wounds were survivable, and the healing mechanisms can just do the same thing all along the incision.  This is why we sew wounds up: to convince the healing mechanisms that it’s only a small cut.

Unfortunately this argument suggests regrowing limbs is impossible.  An amputation is bad but survivable, and after it heals, you can still get around.  But many years of spending a lot of bodily energy on regrowing a limb that is pretty useless for most of that time doesn’t seem worthwhile.

Some particular problems I see:

In humans, there’s no mechanism for a growing limb to connect correctly to an adult injury site.  For example, there’s already a bunch of scar tissue there, which has to be cleared away progressively as the limb grows.  Evolution has not seen fit to provide us with this complex biochemistry, unlike the case of salamanders.

Children have a high level of circulating growth hormone, which tells the arm cells how fast to grow.  If you tried to provide this to an adult, their other bones would also grow, causing deformity (acromegaly).

It’s odd that we can’t grow new teeth when the old ones fall out.  More than once, I mean.  Drilling for cavities makes sense because the enamel (outer tooth layer) is essentially dead, and doesn’t regrow.  But we should be able to grow a whole new tooth from the root when we get a cavity.

Comment by Carl Feynman (carl-feynman) on Electrostatic Airships? · 2024-10-28T21:08:38.061Z · LW · GW

To hold the surface out, you need to have a magnetic field tangent to the surface.  But you can’t have a continuous magnetic field tangent to every point on the surface of a sphere.  That’s a theorem of topology, called the Hairy Ball Theorem.  So there has to be some area of the ball that’s unsupported.  I guess if the area is small enough, you just let it dimple inwards in tension.  The balloon would be covered in dimples, like a golf ball.
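
For reference, here is the statement being used, in the continuous-field form (a standard formulation; the application to the balloon is my gloss):

```latex
% Hairy Ball Theorem: every continuous tangent vector field on the
% 2-sphere vanishes somewhere.
\[
  \text{If } v \colon S^2 \to \mathbb{R}^3 \text{ is continuous and }
  v(p) \perp p \ \ \forall p \in S^2, \text{ then }
  \exists\, p_0 \in S^2 \text{ with } v(p_0) = 0.
\]
% Applied to the balloon: a magnetic field everywhere tangent to the skin
% must be zero at some point, and the skin there gets no outward support.
```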

Comment by Carl Feynman (carl-feynman) on Drake Thomas's Shortform · 2024-10-25T21:16:30.438Z · LW · GW

Thanks for clearing that up.  It sounds like we’re thinking along very similar lines, but that I came to a decision to stop earlier.  From a position inside one of the major AI labs, you’ll be positioned to more correctly perceive when the risks start outweighing the benefits.  I was perceiving events more remotely from over here in Boston, and from inside a company that uses AI as one of a number of tools, not as the main product.

I’ve been aware of the danger of superintelligence since the turn of the century, and I did my “just now orienting to the question” back in the early 2000s.  I decided that it was way too early to stop working on AI back then, and I should just  “monitor for new considerations or evidence or events.”  Then in early 2023, Sydney/Bing came along, and it was of near-human intelligence, and aggressively misaligned, despite the best efforts of its creators.  I decided that was close enough to dangerous AI that it was time to stop working on such things.  In retrospect I could have kept working safely in AI for another couple of years, i.e. until today.  But I decided to pursue the “death with dignity” strategy: if it all goes wrong, at least you can’t blame me.  Fortunately my employers were agreeable to having me pivot away from AI; there’s plenty of other work to be done.

Comment by Carl Feynman (carl-feynman) on Drake Thomas's Shortform · 2024-10-23T21:15:47.708Z · LW · GW

I’m not “trying to figure out” whether to work on capabilities, having already decided I’ve figured it out and given up such work.  Are you interested in talking about this to someone like me?  I can’t tell whether you want to restrict discussion to people who are still in the figuring out stage.  Not that there’s anything wrong with that, mind you.

Comment by Carl Feynman (carl-feynman) on What if AGI was already accidentally created in 2019? [Fictional story] · 2024-10-20T23:06:00.533Z · LW · GW

It’s a cute idea, but AI is a terrible fiction writer.

Comment by Carl Feynman (carl-feynman) on leogao's Shortform · 2024-10-16T15:28:57.232Z · LW · GW

Not only is this true in AI research, it’s true in all science and engineering research.  You’re always up against the edge of technology, or it’s not research.  And at the edge, you have to use lots of stuff just behind the edge.  And one characteristic of stuff just behind the edge is that it doesn’t work without fiddling.  And you have to build lots of tools that have little original content, but are needed to manipulate the thing you’re trying to build.

After decades of experience, I would say: any sensible researcher spends a substantial fraction of time trying to get stuff to work, or building prerequisites. 

This is for engineering and science research.  Maybe you’re doing mathematical or philosophical research; I don’t know what those are like.

Comment by Carl Feynman (carl-feynman) on Point of Failure: Semiconductor-Grade Quartz · 2024-09-30T22:50:31.017Z · LW · GW

High-purity quartz is used as crucibles in which to melt silicon for semiconductors.  It’s not a directly consumed raw material.  Based on my understanding of the process, the problem is that a little bit of the crucible dissolves into the silicon every time a batch is melted.  How long does a crucible last, and can its life be extended by various forms of cleverness? If the lifetime of a crucible is longer than the time it takes to restore the mines to production, then the interruption might not be serious.  Assuming that the mines produce extra-fast for a while, to make up for the gap in production, of course.

IIRC, this sort of interruption to chip supply has happened twice, once with a glue factory fire in Nagoya, and once because of floods in Thailand that simultaneously destroyed several assembly plants.  Both interruptions lasted about four months before production was restored, and resulted in a brief price increase instead of the Moore’s Law price decreases that semiconductor prices usually enjoy.

Epistemic status: I was trained as an electrical engineer, and worked for many years as a chip designer, but have not actually been in the business in this century, so any detailed knowledge is possibly obsolete.

Comment by Carl Feynman (carl-feynman) on Bogdan Ionut Cirstea's Shortform · 2024-09-24T21:02:59.417Z · LW · GW

Yes, things have certainly changed in the four months since I wrote my original comment, with the advent of o1 and Sakana’s Artificial Scientist.  Both of those are still incapable of full automation of self-improvement, but they’re close.  We’re clearly much closer to a recursive speed up of R&D, leading to FOOM.

Comment by Carl Feynman (carl-feynman) on On the destruction of America’s best high school · 2024-09-15T23:45:04.369Z · LW · GW

I don’t think articles like this belong on Less Wrong, so I downvoted it.  Presumably the author agrees, to some extent, or he wouldn’t have felt the need to pre-apologize.   If people disagree with me, they are of course free to upvote it.

Also, the article in question was posted in October of 2020.  Why bring it up now?  It’s not like we can do anything about it.

Comment by Carl Feynman (carl-feynman) on Does a time-reversible physical law/Cellular Automaton always imply the First Law of Thermodynamics? · 2024-08-30T19:46:26.125Z · LW · GW

“Time-Symmetric” and "reversible" mean the same thing to me: if you look at the system with reversed time, it obeys the same law.  But apparently they don’t mean the same to OP, and I notice I am confused.  In any event, as Mr. Drori points out, symmetry/reversibility implies symmetry under time translation.  If, further, the system can be described by a Hamiltonian (like all physical systems) then Noether’s Theorem applies, and energy is conserved.
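
Spelled out, the step from time-translation symmetry to energy conservation is one line in the Hamiltonian formalism (a standard sketch, assuming the system is Hamiltonian):

```latex
% Any observable A evolves as dA/dt = {A, H} + dA/dt(explicit).
% Taking A = H and using the antisymmetry of the Poisson bracket:
\[
  \frac{dH}{dt} \;=\; \{H, H\} + \frac{\partial H}{\partial t}
             \;=\; 0 + \frac{\partial H}{\partial t},
\]
% so if the law is symmetric under time translation (no explicit t in H),
% then dH/dt = 0: the energy H is conserved.
```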

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-18T01:55:03.716Z · LW · GW

Now I know more!  Thanks.  

That would suggest that an equal mass of tiny wind turbines would be more efficient.  But I see really big turbines all over the midwest.  What's the explanation?

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-18T01:51:16.747Z · LW · GW

Yes.  I know him.  We met years ago when I was a grad student at the Media Lab.  I haven't followed his work on self-reproduction in detail, but from what I've seen he is not aiming at economically self-sufficient devices, while I am.  So I'm not too impressed.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-17T21:57:51.774Z · LW · GW

As I describe in my first reply to Jackson Wagner above, I can tolerate some inefficiency, as long as I stay above Soviet-style negative productivity.  The goal is minimum reproduction time.  Once I've scaled up, I can build a rolling mill if needed.

"You could mill every single plate in a motor core out of sheet stock on a milling machine..."

As you point out, that would be madness.  I've got a sheet rolling machine listed, so I assume I can take plate and cold-roll it into sheet.  Or heat the plate and hot-roll it if need be. The sheets are only a meter long and a few centimeters wide, so the rolling machine fits inside.  They function like shingles for building the outside enclosure, and for various machine guards internally, so they don't have to be big.

"...where are you quenching the stuff?"

I'm quenching in a jar of used lubricant.  Or fresh oil, if need be.  6% of the input is oil.

"...alloys isn't necessarily that you can't substitute X for Y, but that X costs three or four or ten times as much as Y for the specific application that Y is optimized for."

I'm a little reluctant to introduce this kind of evidence, but I've seen lots of machinist videos where they say "I pulled this out of the scrap bin, not sure what it is, but let's use it for this mandrel" (or whatever).  And then it works fine.  I am happy to believe that different alloys differ by tens of percent in their characteristics, and that getting the right alloy is an important occupation for real engineers.  I just don't think that many thousands of them all vary by "three or four or ten times."  I think I can get away with six or so.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-17T21:31:12.416Z · LW · GW

I was actually thinking of a pair of humanlike arms with many degrees of freedom, and one or more cameras looking at things.  You can have dozens of single-datum sensors, or one camera.  It's much cheaper.  Similarly, once you have some robot arms, there's no gain in including many single-use motors.  For example, when I include an arbor press, I don't mean a motorized press.  I mean a big lever that you grab with the robot arm and pull down, to press in a shaft or shape a screw head.

There are two CNC machine tools, to automate some part shaping while the robot does something else.  

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-17T21:22:18.170Z · LW · GW

Yes, absolutely!  A fine description of the current state of the art.  I upvoted your post by 6 points (didn't know I could do that!).  

 I'm imagining doing everything the machinist has to do with a mobile pair of robot arms. I can imagine a robot doing everything you listed in your first list of problems.  Your "stupider stuff" is all software problems, so will be fixed once, centrally, and for good on the Autofac.  The developers can debug their software as it fails, which is not a luxury machinists enjoy.

Call a problem that requires human input a "tough" problem.  We can feed the solutions to any tough problems back into the model, using fine-tuning or putting them in the prompt.  So ideally, any tough problem will have to be solved only once.  Or a small number of times, if the VLM is bad at generalizing.  The longer we run the Autofacs, the more tough problems we hit, resolve, and never see again.  With an exponentially increasing number of Autofacs, we might have to solve an exponentially increasing number of tough problems.  That would be infeasible and would destroy the scheme.  We have to hope that the tough problems per hour per Autofac drop faster than the number of Autofacs increases.  It's a hope and only a hope-- I can't prove it's the case.
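
A sketch of what that feedback loop might look like in practice.  Nothing here is an existing system; the names and the escalation path are hypothetical:

```python
# Hypothetical sketch of the tough-problem loop: the VLM tries first, a
# human solves what it can't, and every human solution is folded back into
# the prompt so it should never need solving twice.

solved_tough_problems: list[dict] = []   # shared store across all Autofacs

def ask_human_operator(task: str) -> str:
    # Stand-in for the human-input channel (hypothetical).
    return input(f"[tough problem] {task}\n> ")

def build_prompt(task: str) -> str:
    # Prepend every previously solved tough problem to the model's context
    # (fine-tuning on the same store would be the other option).
    context = "\n".join(f"Q: {p['task']}\nA: {p['solution']}"
                        for p in solved_tough_problems)
    return f"{context}\nQ: {task}\nA:"

def handle_task(task: str, vlm_answer: str | None) -> str:
    if vlm_answer is not None:
        return vlm_answer                  # the model handled it on its own
    solution = ask_human_operator(task)    # escalate: this task was "tough"
    solved_tough_problems.append({"task": task, "solution": solution})
    return solution
```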

What's your feeling about the distribution of tough problems?  

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-17T20:52:45.940Z · LW · GW

Wow, I think that comment is as long as my original essay.  Lots of good points.  Let me take them one by one.

I see a few potential benefits to efficiency-impairing simplifications:

  1. It reduces the size/cost/complexity of the initial self-replicating system.  (I think this motivation is misplaced, and we should be shooting for a much larger initial size than 1 meter cubed.)

The real motivation for the efficiency-impairing simplifications is none of size, cost or complexity.  It is to reduce replication time.  We need an Autofac efficient enough that what it produces is higher value than what it consumes.  We don't want to reproduce Soviet industry, much of which processed expensive resources into lousy products worth less than the inputs.  Having achieved this minimum, however, the goal is to allow the shortest possible time of replication.  This allows for the most rapid production of the millions of tons of machinery needed to produce massive effects.

Consider that the Autofac, 50 kg in a 1 m^3, is modeled on a regular machine shop, with the machinist replaced by a robot.  The machine shop is 6250 kg in 125 m^3.  I just scale it down by a factor of 5, and thereby reduce the duplication time by a factor of 5.  So it duplicates in 5 weeks instead of 25 weeks.  Suppose we start the Autofac versus the robot machine shop at the same time.  After a year, there are 1000 Autofacs versus 4 machine shops; or in terms of mass, 50,000 kg of Autofac and 25,000 kg of machine shop.  After two years, 50,000,000 kg of Autofac versus 100,000 kg of machine shop.  After 3 years, it's even more extreme.  At any time, we can turn the Autofacs from making themselves to making what we need, or to making the tools to make what we need.  The Autofac wins by orders of magnitude even if it's teeny and inefficient, because of sheer speed.
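
The same comparison as a calculation, assuming clean exponential growth and ignoring ramp-up and logistics; the results come out within rounding of the figures quoted above:

```python
# Exponential growth at a 5-week doubling time vs. a 25-week doubling time.

AUTOFAC_KG, AUTOFAC_WEEKS = 50, 5        # 1 m^3 Autofac, 5-week doubling
SHOP_KG, SHOP_WEEKS = 6250, 25           # full machine shop, 25-week doubling

def total_mass_kg(unit_kg: float, doubling_weeks: float, years: float) -> float:
    return unit_kg * 2 ** (years * 52 / doubling_weeks)

for years in (1, 2, 3):
    a = total_mass_kg(AUTOFAC_KG, AUTOFAC_WEEKS, years)
    s = total_mass_kg(SHOP_KG, SHOP_WEEKS, years)
    print(f"year {years}: {a:.1e} kg Autofac vs {s:.1e} kg machine shop")
# year 1: ~7e4 vs ~3e4 kg; year 2: ~9e7 vs ~1e5 kg; year 3: several more
# orders of magnitude apart -- sheer speed beats size and efficiency.
```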

That's why I picked a one-meter cube.  I would have picked a smaller cube, that reproduced faster, but that would scale various production processes beyond reasonable limits.  I didn't want to venture beyond ordinary machining into weird techniques only watchmakers use.

I see a few potential benefits to efficiency-impairing simplifications:

  1. ...
  2. It reduces the engineering effort needed to design the initial self-replicating system.

This is certainly a consideration.  Given the phenomenal reproductive capacity of the Autofac, there's an enormous return to finishing design as quickly as possible and getting something out there.

"To me, it seems that the Autofac dream comes from a particular context -- mid-20th-century visions of space exploration -- that have unduly influenced Feynman's current concept."

Let me tell you some personal history.  I happened upon the concept of self-reproducing machines as a child or teenager, in an old Scientific American from the fifties.  This was in the 1970s.  That article suggested building a self-reproducing factory boat, that would extract resources from the sea, and soon fill up the oceans and pile up on beaches.  It wasn't a very serious article.  Then I went to MIT, in 1979.  Self-reproducing machines were in the air-- Eric Drexler was theorizing about mechanical bacteria, and NASA was paying people to think about what eventually became the 1981 lunar factory design study.  I thought that sending a self-reproducing factory to the asteroid belt was the obvious right thing, and thought about it, in my baby-engineer fantasy way.  But I could tell I was ahead of my time, so I turned my attention to supercomputers and robots and AI and other stuff for a few decades.

A few years ago I picked up the idea of self-reproducing boats again.  I imagined a windmill on deck for power, and condensing Seacrete and magnesium from the water for materials.  There was a machine shop below decks, building all the parts.  But I couldn't make the energy economy work out, even given the endless gales of the Southern Ocean.  So I asked myself, what about just the machine shop part?  Then I realized the reproduction time was the overriding consideration.  How can I figure out the reproduction time?  Well, I could estimate the time to do it with a regular human machine shop, and I remembered Eric Drexler's scaling laws.  And wow, five weeks?!  That's short enough to be a really big deal!  So, a certain amount of calculation and spreadsheets later, here we are, the Autofac.

I considered varied environments for situating  the Autofac:

  • a laboratory in Boston.  Good for development, but doesn't allow rapid growth.
  • a field near a railroad and power line in the Midwest.  Good for the resource inputs, but the neighbors might reasonably complain when the steel mill starts belching flame, or the Autofacs pile up sky-high.
  • Baffin Island.  Advantages described above.
  • Antarctic Icecap.  Bigger than Baffin, but useful activities are illegal.  Shortage of all elements except carbon, oxygen, nitrogen and hydrogen.
  • The Moon.  Even bigger.  Ironically, shortage of carbon, nitrogen and hydrogen.  No wind, so the Autofac has to include solar cell manufacture from the get-go.  There will be lots of problems understanding vacuum manufacturing.  Obvious first step toward Dyson Sphere.
  • Carbonaceous asteroids.  Obvious second step toward Dyson Sphere.

So, I decided to propose an intermediate environment.  Obviously, it was rooted in the mid-20th-century visions of space exploration.  But that didn't set the size, or the use of Baffin Island, or anything else really.  We'll build a Dyson Sphere eventually, but I don't feel the need to do it personally.

More to come.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-10T21:27:09.565Z · LW · GW

If you look at what I wrote, you will see that I covered both of these.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-10T21:25:47.724Z · LW · GW

Yeah, I looked at various forms of printing from powder as a productive system.  The problem is that the powder is very expensive, more expensive than most of the parts that can be produced from it.  And it can’t produce some parts— like ball bearings or cast iron— so you need tools to make those.  And by the time you add those in, it turns out you don’t need powder metallurgy.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-10T21:06:03.722Z · LW · GW

#2 is why I’m coming up with this scheme despite my substantial p(doom).  I think we can do something like this with subhuman (hence non-dangerous) levels of AI.  Material abundance is one of the things we expect from the Singularity.  This provides abundance without superhuman AI, reducing the impetus toward it.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-10T20:35:02.418Z · LW · GW

You could go some way with 1980s-level integrated circuits for all the onboard electronics.  The manufacturing requirements are much more tolerable.  But even 1980s semiconductors require a couple of dozen chemically exotic and ultra pure feedstocks.  The Autofacs would have to build a complex chemical industry before they could start building chips.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T21:53:17.085Z · LW · GW

I’ve been avoiding Factorio. I watched a couple of videos of people playing it, and it was obviously the most interesting game in the world, and if I tried it my entire life would get sucked in. So I did the stoic thing, and simply didn’t allow myself to be tempted.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T13:29:23.850Z · LW · GW

Yes.  The alternate approach to achieving a self-reproducing machine is to build a humanoid robot that can be dropped into existing factories, then gradually replace the workers that build it with robots.  That path may well be the one that succeeds.  Either path delivers an enormous expansion of industrial capabilities.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T01:13:32.188Z · LW · GW

Well, I seem to be talking to someone who knows more about alloys than I do.  How many alloys do you think I need?  I figure there's a need for neodymium iron boron for motor cores, cast iron in the form of near-net-shape castings for machine frames, and some kind of hardenable tool steel for everything else.  But I'm uncertain about the "everything else".  

I don't think the "staggering number of standardized alloys" needs to alarm us.  There are also a staggering number of standardized fasteners out there, but I think 4 sizes of machine screws will suffice for the Autofac.  We don't need the ultimate in specialized efficiency that all those alloys give us.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T01:01:28.684Z · LW · GW

They depend on lapping, which can be done "manually" by the robot arm.  I forgot to list "abrasive powder" in my list of vitamin ingredients.  Fixed now.

The fancier optical techniques provide precision on the order of a wavelength, which is far in excess of our needs.  All we need is eyeball-class optical techniques like looking along an edge to make sure it's straightish, or pressing a part against a surface plate and seeing if light passes under it.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T00:54:28.671Z · LW · GW

I've operated a lathe and a mill (both entirely manual), various handheld power tools, a robot arm, robot eyes, autonomous mobile robots, and a data center.  For the rest, I've read books and watched videos.

I've built and/or maintained various kinds of robots.

I have no experience with cutting-edge VLMs.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T00:18:12.829Z · LW · GW

Yes, that's totally part of the starter pack.  All the electronics are imported-- CPUs, radios, cameras, lights, voltage converters, wire harnesses, motor controllers...

I don't know how to plan the split between the part of the thinking that is done inside the Autofac and the part that is done in the data center.

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T00:11:58.936Z · LW · GW

Nowadays we use carbide bits, but we used to use steel bits to cut steel.  It's called high-speed steel.  It differs from regular steel by having a different crystal structure, which is harder (and more brittle).  It used to be perfectly common to cut a steel-cutting tool out of steel, then apply heat treatment to induce the harder crystal structure, and use the hardened tool to cut the softer original steel.  It's one of the reasons I specified steel instead of aluminum or brass.

The machine shop can use a tool until it wears down too much, then un-harden it (a different heat treatment), cut it back to have a sharp edge again, and then re-harden.  Steel really is amazing stuff.

I've looked into machine tool techniques pretty closely, and I believe I can make them with only 2% by weight that's not steel or lubricant.  In a lot of ways, it's going back to the designs they used a hundred years ago, before they had good plastics or alloys.  For example, the only place you HAVE to use plastic is as a flexible wire insulation.  

I welcome your suggestions as to inputs I may have overlooked.  

Comment by Carl Feynman (carl-feynman) on It's time for a self-reproducing machine · 2024-08-08T00:02:54.101Z · LW · GW