Posts

self-improvement-executors are not goal-maximizers 2023-06-01T20:46:16.964Z
how humans are aligned 2023-05-26T00:09:20.626Z
AI self-improvement is possible 2023-05-23T02:32:08.472Z
What new technology, for what institutions? 2023-05-14T17:33:28.889Z
addicts are misaligned, so keep things linear 2023-05-09T01:47:52.887Z
neuron spike computational capacity 2023-05-01T00:28:45.532Z
"notkilleveryoneism" sounds dumb 2023-04-28T19:46:01.000Z
cyberpunk raccoons 2023-04-28T02:52:20.473Z
ask me about my battery 2023-04-21T21:55:10.411Z
childhood is foom 2023-04-18T23:03:02.036Z
grey goo is unlikely 2023-04-17T01:59:57.054Z

Comments

Comment by bhauth on Progress links and tweets, 2023-06-01 · 2023-06-02T01:48:06.708Z · LW · GW

The article has a detailed analysis

I sure didn't see one! I saw some analysis of the cost of energy used for grinding up rock, with no consideration of other costs. Can you point me to the section with detailed analysis of the costs of mining, crushing, and spreading the rock, or the capital costs of grinders? A detailed analysis would have numbers for these things, not just dismiss them.

If you think that analysis goes wrong, I'd be curious to understand exactly where?

OK then.

What is quite certain is that the vast majority of that expense, both financially and in terms of energy, comes not from mining or crushing but from milling the crushed rock down to particle size.

Digging up and crushing olivine to gravel would be $20-30/ton. We know this from the cost of gravel and the availability of olivine deposits. That alone makes this uneconomical, yet the author dismisses those costs as negligible next to the cost of milling. So either the dismissal is wrong, or the milling cost estimate is wrong, or both.

for an all-inclusive energy cost of 61 kWh ($9.15) per tonne of rock – about $7.32 per sequestered tonne of CO2

Why is the cost per ton of CO2 lower than the cost per ton of rock, when 1 ton of rock stores much less than 1 ton of CO2?
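For a concrete check, take the most optimistic case, pure forsterite (Mg2SiO4); this is a sketch using standard atomic masses, not numbers from the article:

```python
# Carbonation of pure forsterite: Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
m_forsterite = 2 * 24.3 + 28.1 + 4 * 16.0   # ~140.7 g/mol
m_co2 = 44.0                                # g/mol
co2_per_tonne_rock = 2 * m_co2 / m_forsterite
print(co2_per_tonne_rock)          # ~0.63 t CO2 captured per t rock, at best
print(9.15 / co2_per_tonne_rock)   # ~$14.6/t CO2 at the article's $9.15/t rock - higher, not lower
```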

And the largest rock mills are large indeed; the biggest on the market can process tens of thousands of tonnes a day. It should be clear by now that capital expenditures, while not irrelevant, are small compared to the cost of energy

That's quite a non sequitur! We know what grinding rock to fine powder costs. Use those costs, not the cost of electricity.

Comment by bhauth on Progress links and tweets, 2023-06-01 · 2023-06-01T23:34:56.171Z · LW · GW

olivine weathering

Gravel costs money. Making olivine gravel costs maybe $20/ton. You'd need to dig up 2.3 tons of pure Mg silicate to potentially absorb 1 ton of CO2, and realistically speaking your "ore" won't be pure or react completely, so the correct ratio is >3.

Suppose you do that. Great, you exposed some fresh magnesium silicate to the CO2 in air, and now a very thin layer of carbonate will form on the surface as it very slowly reacts. If you crush it to fine particles and spread it over a large area, you can get it to actually react, but that involves transporting it to a grinder and then spreading it out, which would bring your cost to probably >$200 per ton of CO2 absorbed. Not great. (Plus, all this digging and grinding uses energy, and probably involves vehicles that burn fuel.)
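To spell out the arithmetic behind that ballpark (the grinding and spreading figures here are placeholder assumptions for illustration, not measured costs):

```python
# Placeholder cost rollup using the figures above.
gravel = 25.0        # $/t rock: midpoint of the $20-30/t estimate for olivine gravel
fine_grinding = 30.0 # $/t rock: assumed - grinding gravel to fine powder costs at least as much as making the gravel
spreading = 10.0     # $/t rock: assumed - transport plus spreading over a large area
rock_per_co2 = 3.0   # t rock per t CO2 absorbed, the ">3" ratio from above

print((gravel + fine_grinding + spreading) * rock_per_co2)  # ~$195/t CO2, before energy and fuel
```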

The above link talks about the cost of electricity needed to grind up a ton of olivine. This is a weird approach because people already grind up a lot of rocks and we know a lot about how much that currently costs. You should always base cost estimates on the costs of the most similar existing things. (Why don't people do that?)

Comment by bhauth on AI self-improvement is possible · 2023-05-23T06:32:05.285Z · LW · GW

The point there was just that we don't see an inverse relationship, with smarter humans having slower development during childhood. Yet, we do see that inverse relationship when we compare humans to other animals.

Regarding the other half of D:prodigy...I was making an empirical argument based on a large volume of literature. But consider the energy landscape of ANN systems: plateauing at bad performance means getting stuck in bad minima, and increasing the number and quality of good paths through that landscape is both what makes getting stuck less likely and what increases the speed of gradient descent.

As I noted, this raises the question of what's different about human childhood development that requires it to be slow.

Comment by bhauth on What new technology, for what institutions? · 2023-05-15T15:41:57.592Z · LW · GW

To me it seems a personality trait of well informed people, that they are not as interested in searching or building capital.

Yes, there's a tradeoff between putting effort into research and putting it into "hustle", and usually people specialize in one. But it's not like "ability to partner with someone who searches for capital" is the real bottleneck. I'd say instead that certain people are in a position to raise capital, but they have to believe in the technology and pitch it themselves, and they need to be on the same wavelength as people like Bill Gates and the moral maze masters; the people in those positions who can communicate with investors are more likely to be delusional than to understand the technology really well.


Also as an aside, what is your interpretation of the Bill Gates article? I see no particular evidence of a lack of physics knowledge, are you referring to the take about the water comments or? It's definitely not an in-depth description of the problems with PWRs or BWRs, but I think is an acceptable explanation of the advantages of using LMRs. Maybe there is some other comment I am missing, but it comes across as an easily accessible article written to persuade the layman of the benefits of his endeavor?

Sure, I can explain.

First, water isn’t very good at absorbing heat—it turns to steam and stops absorbing heat at just 100 degrees C

Water is actually rather good at absorbing heat. It has a much higher heat capacity than sodium, boiling it absorbs a lot of heat, and in a typical BWR design it's under enough pressure that it boils at 285 C.

The Natrium plant uses liquid sodium, whose boiling point is more than 8 times higher than water’s

Gates is using unspecified temperature units and pressure, presumably Celsius at 1 bar. Ratios of temperatures in Celsius aren't meaningful - does water have -3x the boiling point of ammonia?
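To make that concrete, compare the ratios in Celsius and in Kelvin (standard unit conversion, nothing specific to the article):

```python
def c_to_k(t_c):
    return t_c + 273.15

bp_water, bp_sodium, bp_ammonia = 100.0, 883.0, -33.3  # boiling points in C at 1 bar

print(bp_sodium / bp_water)                 # ~8.8: the article's "more than 8 times higher"
print(c_to_k(bp_sodium) / c_to_k(bp_water)) # ~3.1: the ratio on an absolute (Kelvin) scale
print(bp_water / bp_ammonia)                # ~-3: why Celsius ratios give nonsense
```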

Unlike water, the sodium doesn’t need to be pumped, because as it gets hot, it rises, and as it rises, it cools off

Water does that too. It's an almost universal property of liquids. You can do natural convection cooling with water.

Safety isn’t the only reason I’m excited about the Natrium design

The TerraPower Natrium design is much less safe than current reactors, and using sodium does nothing to improve safety. The sodium coolant reduces reactivity, so if coolant boils off, reactivity increases (a positive void coefficient). That's bad. The neutrons are fast, so the neutron lifetime is short, so the control response needs to be fast. That's bad. IIRC the design still involves robots moving fuel rods around during operation. That can fail.

It's just a really terrible design. Bad safety, and very expensive design decisions. Supposedly in the future they plan to use a "Pascal" heavy water moderated CO2 cooled reactor, which I always considered a better approach, but I have little faith in TerraPower doing a good job on it.

Like other power plant designs, it uses heat to turn water into steam, which moves a turbine, which generates electricity. ... It also includes an energy storage system that will allow it to control how much electricity it produces at any given time.

If you're using steam, the low-pressure steam turbines are big and have a lot of inertia compared to the low-pressure steam going through them, so they take a long time to spin up. That's a big reason why coal plants aren't load-following like gas turbines.

They're also expensive, so you want to avoid them where possible, and if you do have them, you want to run them all the time. That's why, with natural gas, the combined-cycle plants that do have steam turbines also tend to run continuously.

Comment by bhauth on $250 prize for checking Jake Cannell's Brain Efficiency · 2023-05-15T11:42:45.152Z · LW · GW

But in that sense I should reassert that my model applies most directly only to any device which conveys bits relayed through electrons exchanging orbitals, as that is the generalized electronic cellular automata model, and wires should not be able to beat that bound. But if there is some way to make the interaction distance much much larger - for example via electrons moving ballistically OOM greater than the ~1 nm atomic scale before interacting, then the model will break down.

The mean free path of conduction electrons in copper at room temperature is ~40 nm. Cold pure metals can have much greater mean free paths. Also, a copper atom is ~0.1 nm, not ~1 nm.

Comment by bhauth on $250 prize for checking Jake Cannell's Brain Efficiency · 2023-05-15T11:40:51.191Z · LW · GW

The amount dissipated within the 30-meter cable is of course much less than that, or else there would be nothing left for the receiver to measure.

Signals decay exponentially with distance, and attenuation in copper cables can be ~50 dB. At high frequencies, most of the power is lost in the cable.
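For scale, standard decibel arithmetic:

```python
# Attenuation in dB to fraction of power delivered: P_out/P_in = 10^(-dB/10)
for atten_db in (10, 30, 50):
    fraction = 10 ** (-atten_db / 10)
    print(f"{atten_db} dB: {fraction:.0e} of the power reaches the receiver")
# 50 dB -> 1e-05, i.e. 99.999% of the power is dissipated along the way
```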

Comment by bhauth on $250 prize for checking Jake Cannell's Brain Efficiency · 2023-05-01T03:22:59.389Z · LW · GW

I made a post which may help explain the analogy between spikes and multiply-accumulate operations.

Comment by bhauth on "notkilleveryoneism" sounds dumb · 2023-04-29T01:52:29.963Z · LW · GW

I think we're on the same page here. Sorry if I was overly aggressive there, I just have strong opinions on that particular subtopic.

Comment by bhauth on "notkilleveryoneism" sounds dumb · 2023-04-29T00:09:26.471Z · LW · GW

People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.

No, you're dead wrong here. Polls show widespread popular concern about AI developments. You should not give up on "not seeming like a weird silly outlandish doomer cult". If you want to actually get things done, you cannot give up on that.

Comment by bhauth on "notkilleveryoneism" sounds dumb · 2023-04-28T23:39:00.687Z · LW · GW

the last few times people tried naming this thing, people shifted to using it in a more generic way that didn't engage with the primary cruxes of the original namers

Yes, but, that's because:

"AI Safety" and "AI Alignment" aren't sufficiently specific names, and I think you really can't complain when those names end up getting used to mean things other than existential safety

(Which I agree with you about.)


the word is both specific enough and sounds low-status-enough that you can't possibly try to redefine it in a vague applause-lighty way that people will end up Safetywashing

OK, but now it's being used on (eg) Twitter as an applause light for people who already agree with Eliezer, and the net effect of that is negative. Either it's used internally in places like LessWrong, where it's unnecessary, or it's used in public discourse, where it sounds dumb, which makes it counterproductive.

And, sure, there should also be a name that is also, like, prestigious and reasonable sounding and rolls off the tongue. But most of the obvious words are kind a long and a mouthful and are likely to have syllables dropped for convenience

Yes, that's what I'm trying to make a start on.

as a joke-name, things went overboard and it's getting used more often than it should

Yes, that is what I think. Here's a meme account on Twitter. Here's Zvi using it. These are interfaces to people who largely think it sounds dumb.

Comment by bhauth on Contra Yudkowsky on AI Doom · 2023-04-24T03:18:16.574Z · LW · GW
  1. "These GPUs cost $1M and use 10x the energy of a human for the same work" is still a pretty bad deal for any workers that have to compete with that. And I don't expect economic gains to go to displaced workers.

  2. Even if an AI is more expensive per unit of computational capacity than humans, being much faster and immortal would still make it a threat. I could imagine a single immortal human genius eventually becoming world-emperor. Now imagine them operating 10^6, or even just 10^3, times faster than ordinary humans.

Comment by bhauth on ask me about my battery · 2023-04-22T13:45:38.678Z · LW · GW

Li-ion batteries have solid particles that Li ions migrate into and out of. This can cause the particles to break up, especially at high charge/discharge rates. And because there are fewer ions left to migrate, fast discharge at a low state of charge is bad for battery lifetime and gives lower voltage.

SMAC batteries have solid particles that dissolve and form as the battery is operated. It doesn't matter if those break up. To some extent, the maximum discharge rate would decrease as some smaller particles disappear during discharge. There's also some Ostwald ripening that happens, which decreases discharge rate a bit over time, until the next charge cycle, but the extent is limited.

Li-ion battery lifetime is limited largely by SEI growth from the electrolyte reacting with Li. Charging and discharging accelerate SEI growth by cracking the existing SEI, especially at high rates.

SMAC battery lifetime would probably be limited by water migration, with charge cycles being irrelevant and only time & temperature mattering, but the long-term lifetime isn't clear at this point. Yes, there is an SEI in SMAC batteries, but it's a thin SEI that works for Na but not Li, with less surface area, so it wouldn't cause much capacity loss.

The relative charge rate of Li-ion vs SMAC depends on the thickness of the electrolyte layers, which depends on the manufacturing process rather than the chemistry. The experimental data I got doesn't really indicate this because an insulating oxide layer was forming, and because the test cells used much thicker layers than commercial cells would. But I'd expect it to be similar, meaning max charge rates between 0.1C and 10C.
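(For reference, C-rate is just charge current relative to capacity; the standard arithmetic:)

```python
# C-rate: charge current relative to capacity. At X C, a full charge takes 1/X hours.
for c_rate in (0.1, 1, 10):
    hours = 1 / c_rate
    print(f"{c_rate}C -> full charge in {hours:g} h ({hours * 60:.0f} min)")
# 0.1C -> 10 h, 1C -> 1 h, 10C -> 6 min
```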

I'm not sure why balancing would be different.

Comment by bhauth on ask me about my battery · 2023-04-22T04:41:03.386Z · LW · GW

Oxygen reacts spontaneously with sodium metal. (And various other things.) That causes current to go in and not come back out, but other things can cause that too.

When that happens, telling why can be complicated, but one way to tell if oxygen is the problem is to take away the oxygen and see if that helps.

Comment by bhauth on ask me about my battery · 2023-04-22T04:16:29.153Z · LW · GW

Sure, but first off, professors don't typically let people work on whatever they want in their lab, so to be able to work on my own thing, there were some...compromises. Plus COVID was happening. So, equipment availability wasn't quite what I'd hoped for.

With batteries, perhaps the main thing you do is assemble test cells and do electrical testing on those. The first thing I tested for was water leakage. That's one of the big obvious questions people had about my design:

> Won't water rapidly diffuse across the electrolyte layer, and either react with Na or get electrolyzed directly?

Normally, that's true, which is why the SMAC design needs special "very salty" water. I estimated that the activation energy for water migration into the electrolyte layer was high enough, but I wanted experimental validation. So, I did a test with no electrolyte salt, and here was the cyclic voltammetry plot:

You can see the current is negligible and very linear, with no nonlinearity indicating water electrolysis. The little spike is from me bumping a wire.

Then I started making proper test cells - well, manually assembled test cells with graphite electrodes and a Teflon foam gasket, not airtight commercial ones. The problem was that this chemistry is obviously air-sensitive, and I didn't have a glovebox available. Towards the end I managed to borrow a glovebox for a bit, but even then I didn't have properly degassed liquids. So all my CV data was affected by oxygen contamination, but I guess I'll show some CV plots anyway.

Comment by bhauth on ask me about my battery · 2023-04-21T23:33:53.386Z · LW · GW

See this blog post for an initial answer.

Comment by bhauth on ask me about my battery · 2023-04-21T22:42:22.140Z · LW · GW
  • That would be pretty long. The intro might be something like this.
  • It's hard to do multiple levels of technical detail in one post.
  • I want to know how to order things if I'm talking to a LW-ish investor-ish person.
Comment by bhauth on ask me about my battery · 2023-04-21T22:30:39.200Z · LW · GW

The goal of the battery design was to be something suitable for electric cars, with similar specific energy (that's watt-hours/kg) to Li-ion, less flammability, and lower cost.

If you mean "why was I designing a battery", I guess I just like thinking about designs for new technology.

Comment by bhauth on grey goo is unlikely · 2023-04-20T02:50:01.096Z · LW · GW

Such actuator design specifics aren't relevant to my point. If you want to move a large distance, powered by energy from a chemical reaction, you have to diffuse to the target point, then use the chemical energy to ratchet the position. That's how kinesin works. A chemical reaction doesn't smoothly provide force along a range of movement. Thus, larger movements per reaction take longer.
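To make the scaling explicit (a sketch; the diffusion coefficient is an assumed order-of-magnitude value, not a measured number for kinesin):

```python
# Diffusive search time scales as distance squared: t ~ x^2 / (2D),
# so a 10x longer ratchet step costs ~100x more time per chemical reaction.
D = 1e-11  # m^2/s: assumed order of magnitude for a tethered motor head in water

for step_nm in (8, 80, 800):   # kinesin's actual step is ~8 nm; the larger steps are hypothetical
    x = step_nm * 1e-9
    print(f"{step_nm} nm step: ~{x**2 / (2 * D):.1e} s of diffusion per step")
```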

Comment by bhauth on grey goo is unlikely · 2023-04-19T00:25:02.155Z · LW · GW

I guess to start with, let's say we're making diamond. We're building up a block of the stuff from carbon, and the dangling bonds on the edge of the structure connect to hydrogens. My first thought would be a condensation reaction: We stick a methanediol onto the structure, replacing two dangling hydrogens with bonds to a new carbon atom. Two molecules of water are produced, made from the hydroxyl groups and those two hydrogens.

I think existing proteins can do condensation reactions okay, but maybe those become impossible when the carbon you're trying to attach to is already bonded to three other carbons?

Condensation reactions are only possible in certain circumstances. Maybe read about the mechanism of aldol condensation and get back to me. Also, methanediol is in equilibrium with formaldehyde in water.

I realize you don't know my background, but if you want to say I'm wrong about something chemistry-related, you'll have to put in a little more effort than that.

Comment by bhauth on grey goo is unlikely · 2023-04-18T22:21:11.366Z · LW · GW

Regarding 5, my understanding is that mechanosynthesis involves precise placement of individual atoms according to blueprints, thus making catalysts that selectively bind to particular molecules unnecessary.

No, that does not follow.

The shell could be made of diamond panels with airtight joints. The daughter cell's internal components and membrane are manufactured inside the parent cell, then the membrane is added to the parent cell's membrane, it unfolds in an origami fashion into two membranes of original size, then the daughter cell separates.

...for one thing, that's not airtight.

It seems to me that the "step" for molecular linear motors could be an arbitrarily long distance.

No, the steps happen by diffusion, so longer steps take longer. That's why slower muscles are more efficient.

The "floppy enzymes" has the same solution as section 8. In chapter 13 of Nanosystems Drexler also gives three different ways this problem is solved, two of which involve molecular manipulators:

see this reply

Comment by bhauth on grey goo is unlikely · 2023-04-18T22:12:54.221Z · LW · GW

you might find this post interesting

Comment by bhauth on grey goo is unlikely · 2023-04-18T19:09:59.869Z · LW · GW

Signal propagation is faster in larger axons.

Comment by bhauth on grey goo is unlikely · 2023-04-17T18:22:34.145Z · LW · GW

Yes, you need some kind of switch for any mechanical computer. My point was that you'd need multiple mechanical "amplifiers" for each positioner arm, that their energy usage would be substantial, and that if you have a binary mechanical switch controlling a relatively large movement, thermal noise will put it in an intermediate state much of the time, so the arm position will be off.

Comment by bhauth on grey goo is unlikely · 2023-04-17T18:02:47.721Z · LW · GW

You're misunderstanding the point of those proposed amino acids. They're proposals for things to be made by (at least partly) non-enzymatic lab-style chemical processes, processed into proteins by ribosomes, and then used for non-cell purposes. Trying to use azides (!) or photocrosslinkers (?) in amino acids isn't going to make cells work better.

There really isn't much improvement to be had by using different amino acids.

Comment by bhauth on grey goo is unlikely · 2023-04-17T17:52:16.071Z · LW · GW

replied here

Comment by bhauth on grey goo is unlikely · 2023-04-17T17:50:56.856Z · LW · GW

replied here

Comment by bhauth on grey goo is unlikely · 2023-04-17T17:46:51.811Z · LW · GW

You want to grow brains that work more like CPUs. The CPU computational paradigm is used because it's conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that's where the name came from.

Comment by bhauth on grey goo is unlikely · 2023-04-17T17:41:23.650Z · LW · GW

I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it.

Thanks, glad you liked it. You made quite the comment here, but I'll try to respond to most of it.

Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end?

  1. To build up metal, you need to carry metal atoms somehow. That requires moving ions, because otherwise there's no motive force for the transfer, plus your carrier would probably be stuck to the metal.

Without proteins carrying ions in water, this is difficult. The best version of what you're proposing is probably directed electrochemical deposition in some solvent that has a wide electrochemical window and can dissolve some metal ions. Such solvents would denature proteins.

  2. Inputs and outputs need to be transferred between compartments. Cells do use "airlock" type structures for transferring material, but some leakage would be inevitable.

The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but Cathodic protection might help in some cases.

It's true that proteins can be designed to bind strongly to metal oxide surfaces and inhibit corrosion fairly well. That's actually an interesting research topic that might be useful for steel. But even that isn't good enough on such a small scale, and you'd need to fully cover all exposed surfaces.

The only other option for "engineering" more-stable surfaces is metal nitrides or carbides, but that requires high temperatures; it's not something enzymes can do.

Cathodic protection doesn't help here. It doesn't maintain a perfect equilibrium, and objects would still undergo Ostwald ripening and tend to become more spherical.

Agree that electrostatic motors are the way to go here. I'm not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot.

I'm not sure what you mean by electrons "sloshing around".

What about this general scheme for a motor?: Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from a power supply deposits electrons on one wheel, and produces holes on the other wheel. The corresponding sites are attracted to each other and once they get close enough, the electrons jump into the holes, filling them. Switching is determined by the proximity of various sites on the wheels as they move relative to each other. Considering how electrons are able to jump around between sites in the electron transport chain, this doesn't seem impossible.

It's certainly possible to make electromechanical computers with relays. And it's possible to use MEMS electrostatic actuators for relays. They're just not as good as semiconductors for computers. The MEMS relay approach is actually used in some devices for handling high-frequency radio signals.

Consider the analogous versions of ionic and electrostatic motors, and think about what's better. Ionic motors use tubes filled with water instead of conductive wires with insulation; those transmit signals slower but are easier to make. Ionic motors can dump ions into solution instead of needing a conductor at a lower voltage. Ionic motors don't have to deal with possible unintentional electrolysis. Ion gates are much easier to make with proteins than electrical switches.

Electrostatic motors are generally switched for each rotational step, but consider: If you want to compete with the energy usage of ionic motors, you can only use a few electron-volts per rotation. Semiconductor switches and relays are not so good for measuring out individual electrons.

Instead of sites that hold electrons, why not use sites that hold ions?


An intermediate design that I can imagine is a block with a series of tubes and chambers embedded in it. (The bulk of the block can hold electronics.) Most of the tubes are filled with water, so nanobot components can happily bounce around in them. But lots of components are also mounted to the walls of the tubes. You can't clump together if you're bonded to the wall of your tube. A small minority of tubes can be filled with gas, or even under vacuum, for any weird processes that may require those conditions. Pumping energy is volume times pressure, so the energy requirements could be reasonable as long as the volume is small.

That's basically the same proposal as having gas-filled compartments in large cells, so this applies:

Any self-replicating cell must move material between the outside and multiple internal compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)

If the objects bound to the walls of gas-filled compartments have movable arms, then on a small enough scale those arms would also get stuck to (or held away from) the walls by electrostatic and dispersion forces.


The reason for doing things at high temperatures is to do reactions with a high activation energy. If we're designing custom catalysts (artificial enzymes) for our nanobots, we can probably finesse it so that the enzyme coaxes the reactants into the high-energy intermediate state, even if the ambient temperature is low (via coupling to a more favorable reaction, for example).

Yes, enzymes can catalyze some difficult reactions. The main tools they use for that are hydrogen bonding patterns that stabilize specific conformations, and electrostatic fields around the active site. There are also P450 oxidases, which put a reactive site in a hydrophobic pocket and use it to oxidize hydrocarbons in a semi-controlled way so that they can be processed further.

But enzymes aren't magic. They have limitations. For example, methane can be metabolized by some bacteria, but it's always oxidized to methanol in a reaction that consumes NAD(P)H. Half the energy of the methane is wasted to get it to a state that can be metabolized, and there's just no way around that.

Another notable limitation of enzymes is the difficulty of making aliphatic hydrocarbons. That's why hydrophobic stuff is almost always fatty acids or terpenes from DMAPP.

I'm fairly familiar with protein mechanisms and their limitations. Is there some other type of mechanism you're proposing for low-temperature catalysis, something that enzymes don't already use?

Also, covalent single-bonds can rotate, so there's nothing preventing the existence of a covalently bonded structure that can also exhibit conformational changes.

Yes, proteins are covalently bonded. They're also non-covalently bonded. If all their structure were covalent, they wouldn't be able to undergo conformational changes. And because some of it is non-covalent, they denature at high temperature.

Also, I'd guess that the stupidity of evolution has left a lot of low hanging fruit for humans. For example, rather than trying to do a reaction with proteins, we can do it with a group of complicated catalysts synthesized by proteins.

I think the word for that is "cofactor".

I'd bet on diamond synthesis still being possible somehow, but it does seem like a genuinely complicated question, so I'll have to look into it further.

OK. Maybe you'll learn something from the attempt.

Doesn't have to be rigid, can still be connection based. For example, there could be simple protein-based building blocks that act like legos. An assembly head can assemble these, and move around on the surface of the part it's building by accepting signals that correspond to "move 1 block left", etc. Position is always exactly known, not because there's a rigid beam anywhere in the system, but because we know the exact integer number of steps the assembly head has moved since the start.

OK, suppose you have a linear motor (like myosin) which is controlled by a signal (like a DNA sequence) that indicates a series of movements. (Something more computer-like would be less efficient than that.) Also remember that on a molecular scale, energy-efficient = reversible; ATPase spins in both directions.

Compared to coding for a protein sequence, you're using more information and more energy to do this. It's also rather difficult to get single-protein-spacing-level control.

So, you're imagining something like a protein with regularly spaced sites that can be attached to, and something that travels along it, with an enzyme-like tooltip that can bind to those sites to do a reaction that connects something. And that is...actually similar to how cytoskeletons work, but obviously they're not directly controlled by DNA or RNA.

my general picture is that whenever a cell decides to make proteins somewhere and then transport them somewhere else, that costs genome space, which is very limited, so the cell can't do that very often. Nanobots genuinely have different constraints from life here, in particular they have cheaper genome space, and so they can have custom designed pipes for every type of protein they use, and the pipe leads right to the chamber where that protein is being used. Huge information cost, but if it makes things work way better, it's probably worthwhile for nanobots. I totally believe that life is using those techniques exactly as much as is optimal for it, though.

Specifying positions with positioners would require more bits of information than coding for proteins. DNA has high information density, and something much more compact than that wouldn't have strong enough binding to be read accurately.
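As a rough illustration of that information comparison (the workspace size and precision here are assumptions picked for the sketch, not numbers from anyone's proposal):

```python
import math

# Coding for a protein: 3 bases per residue = 6 bits, and the residue's final
# position comes from folding and self-assembly rather than being specified.
bits_per_residue = 6

# Directly specifying a placement: a position to 0.1 nm precision inside a
# (100 nm)^3 workspace (both numbers assumed, for illustration only).
positions_per_axis = 100 / 0.1
bits_per_placement = 3 * math.log2(positions_per_axis)  # ~30 bits, before orientation

print(bits_per_residue, round(bits_per_placement, 1))
```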

In what sense would nanobots have "cheaper genome space" than current cells? What mechanism do you envision being used for information storage?

I also don't know why you're scoffing at the potential for building computers here. The supposed "embodied" computation of existing cells is currently computing things that are keeping us alive, which is great, but you can't exactly solve any other important problems on it. It's not a flexible universal computer, in the sense of a Turing machine that can run any program.

If the point of your nanobots is to be "like current life, but worse, except it also produces a computer" then I think the usual word for that is "neurons". The resulting computer would need to be better than current systems.

You want to grow brains that work more like CPUs. The CPU computational paradigm is used because it's conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that's where the name came from.

Drexler envisioned such mechanical computers being used to control internal processes as well; that's why I made the comparison. According to some people, this would be an advantage over how cells work for controlling internal operations, but I disagree.

Comment by bhauth on grey goo is unlikely · 2023-04-17T05:25:51.403Z · LW · GW

That framing is unnatural to me. I see "solving a problem" as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.

Comment by bhauth on grey goo is unlikely · 2023-04-17T04:53:17.207Z · LW · GW

That has already happened naturally and also already been done artificially.

See this paper for reasons why codons are almost universal.

Comment by bhauth on grey goo is unlikely · 2023-04-17T03:41:37.031Z · LW · GW

Is that a question? If I'm given an impossible task, I try to find a way around it. The details would depend on the specifics of your hypothetical situation.

Or are you saying that the flaw in my argument is that...I didn't have the right emotional state while writing it? I'm not sure I understand your point.

Comment by bhauth on [Link] Sarah Constantin: "Why I am Not An AI Doomer" · 2023-04-13T21:59:20.182Z · LW · GW

I was talking about time-shifted correlation vs causation. That's what people get confused about.

Mark Teixeira wears 2 different socks when playing baseball. That's because he did that once and things went better. Why do you think he does that?

Comment by bhauth on [Link] Sarah Constantin: "Why I am Not An AI Doomer" · 2023-04-12T03:31:00.490Z · LW · GW

Agency requires reasoning about the consequences of one’s actions. "I need to do such-and-such, to get to my goal." This requires counterfactual, causal reasoning.

Have you ever tried to explain the difference between correlation and causation to someone who didn't understand it? I'm not convinced that this is even something humans innately have, rather than some higher-level correction by systems that do that.

A computer chess engine trained exclusively on one format for representing the game would generally not be able to transfer its knowledge to a different format.

You can hook a chess-playing network up to a vision network and have it play chess using images of boards - it's not difficult. Perhaps a better example is that language models can be easily coupled to image models to get prompted image generation. You can also translate between language pairs that didn't have direct translations in the training data.

thus we do not know how to build machines that can pursue goals coherently and persistently

This post seems rather specific to LLMs for how much it's trying to generalize; I think there's been more progress on that than Sarah seems to realize.

Comment by bhauth on [deleted post] 2023-04-12T03:06:15.946Z

This post is simultaneously true and hypothetical, simultaneously me and a character. I made a Starcraft bot and it works pretty well. I'm not going to decide what to do based on LessWrong comments, but LW should think about what to tell people to do with what they make, because people are building things and hoping to get some personal benefit, or at least recognition, from what they've done.

If the goal were to make that point, I would have written a post saying it directly. I actually wrote this as a way to catch specific people looking for specific things, which is why I included the particular techniques I did; they're also techniques that need to be considered with respect to agentic AI. One such person has already emailed me, so this was successful.

I'd be more interested in seeing you post on whatever subjects you typically write about

I'm not sure eg industrial chemistry is something that people want to see on LW - do you have any more specific suggestions?