Comments

Comment by Carl Feynman (carl-feynman) on MakoYass's Shortform · 2024-04-11T00:08:29.496Z · LW · GW

Some extraordinary claims established by ordinary evidence:

Stomach ulcers are caused by infection with Helicobacter pylori.  This very surprising discovery was established by a few simple tests.

The correctness of Kepler's laws of planetary motion was established almost entirely by analyzing historical data, some of it dating back to the ancient Greeks.

Special relativity was entirely a reinterpretation of existing data.  Ditto Einstein's explanation of the photoelectric effect, discovered in the same year.  

Comment by Carl Feynman (carl-feynman) on Thinking harder doesn’t work · 2024-04-10T23:38:48.460Z · LW · GW

Typo: bad->bath.

Comment by Carl Feynman (carl-feynman) on ChristianKl's Shortform · 2024-04-08T18:42:07.638Z · LW · GW

I’m confused.  Suppose your ring-shaped space hotel gets to Mars carrying people and cargo weighing as much as the cargo capacity of 1000 Starships.  How do you get it down?  First you have to slow down the hotel, which takes roughly as much fuel as it took to accelerate it.  Using Starships, you can aerobrake from interplanetary velocity, costing negligible fuel.  In the hotel scenario, it’s not efficient to land using a small number of Starships flying up and down, because they will use a lot of fuel to get back up, even empty.
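For concreteness, here’s a minimal sketch of the rocket-equation arithmetic behind that claim; the arrival delta-v and exhaust velocity below are assumed, order-of-magnitude figures, not numbers from this thread:

```python
# Rough sketch of the Tsiolkovsky rocket equation for propulsive braking at Mars.
# Both numbers below are assumed, order-of-magnitude values, not from the thread.
import math

delta_v = 5000.0    # m/s to shed on arrival without aerobraking (assumed)
v_exhaust = 3700.0  # m/s, roughly a methane/oxygen vacuum engine (assumed)

mass_ratio = math.exp(delta_v / v_exhaust)    # wet mass / dry mass
propellant_fraction = 1.0 - 1.0 / mass_ratio  # share of arrival mass that must be propellant

print(f"mass ratio ≈ {mass_ratio:.1f}, propellant fraction ≈ {propellant_fraction:.0%}")
```

With numbers in that range, most of what the hotel masses on arrival has to be propellant, which is exactly the cost that aerobraking in a Starship avoids.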

Would you care to specify your scenario more precisely?  I suspect you’re neglecting the fuel cost at some stage.

Comment by Carl Feynman (carl-feynman) on ChristianKl's Shortform · 2024-04-08T16:52:13.718Z · LW · GW

When you get there how do you get down?  You need spacecraft capable of reentry at Mars.  There’s no spacecraft factory there, so they all have to be brought from Earth.  And if you’re bringing them, you might as well live in them on the way.  That way you also get a starter house on Mars.

Anyway, that’s the standard logic.

Comment by Carl Feynman (carl-feynman) on The Best Tacit Knowledge Videos on Every Subject · 2024-03-31T18:25:53.457Z · LW · GW

Here’s a weird one.  The YouTube channel of Andrew Camarata communicates a great deal about small business, heavy machinery operation, and construction.  Sometimes he narrates what he’s doing, but mostly he just does it, and you find yourself saying “Oh, I never realized I could do that with a skid steer” or “so that’s how to keep a customer happy.”  There’s lots of implicit knowledge about accomplishing heavy engineering projects lasting between an hour and a week.  Of course, if you’re looking for lessons that would help an ambitious person in Silicon Valley, it will only help in a very meta way.
 

He has no legible success that I know of, except that he’s wealthy enough to afford many machines, and he’s smart enough that the house he designed and built came out stunning (albeit eccentric).
 

A similar channel is FarmCraft101, which also features a lot of heavy machinery, but with more farm-based applications.  It’s full of useful knowledge on machine repair, logging, and stump removal.  The channel is nice because he includes all his failures and goes into articulate detail on how he debugged them.  I feel like I learned some implicit knowledge about repair strategies.  I particularly recommend the series of videos in which he purchases, accidentally sets on fire, and revives an ancient boom lift truck.

No legible symbols of success, other than speaking standard American English like he’s been to college, owning a large farm, and clearly being intelligent.

Comment by Carl Feynman (carl-feynman) on The Best Tacit Knowledge Videos on Every Subject · 2024-03-31T18:07:54.689Z · LW · GW

“Applied science” by Ben Krasnow.  A YouTube channel about building physics-intensive projects in a home laboratory.  Big ones are things like an electron microscope or a mass spectrometer, but the ones I find fascinating are smaller things like an electroluminescent display or a novel dye.  He demonstrates the whole process of scientific experiment— finding and understanding references, setting up a process for trying stuff, failing repeatedly, learning from mistakes, noticing oddities…  He doesn’t just show you the final polished procedure— “here’s how to make an X”.  He shows you the whole journey— “Here’s how I discovered how to make X”.

You seem very concerned that people in the videos should have legible symbols of success.  I don’t think that much affects how useful the videos are, but just in case I’m wrong, I looked on LinkedIn, where I found this self-assessment:

<begin copied text>

I specialize in the design and construction of electromechanical prototypes. My core skillset includes electronic circuit design, PCB layout, mechanical design, machining, and sensor/actuator selection. This allows me to implement and test ideas for rapid evaluation or iteration. Much of the work that I did for my research devices business included a fast timeline, going from customer sketch to final product in less than a month. These products were used to collect data for peer-reviewed scientific papers, and I enjoyed working closely with the end user to solve their data collection challenges. I did similar work at Valve to quickly implement and test internal prototypes.

Check out my youtube channel to see a sample of my personal projects:
http://www.youtube.com/user/bkraz333

<end copied text>

Comment by Carl Feynman (carl-feynman) on Is there a "critical threshold" for LLM scaling laws? · 2024-03-30T18:09:42.279Z · LW · GW

I think that if we retain the architecture of current LLMs, we will be in world one. I have two reasons.
First, the architecture of current LLMs places a limit on how much information they can retain about the task at hand.  They have memory of a prompt (both the system prompt and your task-specific prompt) plus the memory of everything they’ve said so far.  When what they’ve said so far gets long enough, they attend mostly to what they’ve already said, rather than attending to the prompt.  Then they wander off into La-La land.
Second, the problem may also be inherent in their training methods.  In the first (and largest) part of their training, they’re trained to predict the next word from a snippet of English text.  A few years ago, these snippets were a sentence or a paragraph.  They’ve gotten longer recently, but I don’t think they amount to entire books yet (readers, please tell us if you know).  So the model has never seen a text that’s coherent over a span longer than its snippet length.  It seems unsurprising that it doesn’t know how to remain coherent indefinitely.
People have tried preventing these phenomena by various schemes, such as telling the LLM to prepare summaries for later expansion, or periodically reminding it of the task at hand.  So far these haven’t been enough to make indefinitely long tasks feasible.  Of course, there are lots of smart people working on this, and we could transition from world one to world two at any moment.

Comment by Carl Feynman (carl-feynman) on mike_hawke's Shortform · 2024-03-30T17:27:49.073Z · LW · GW

The imaginary nomad in my head would describe 1,000 miles as “sixteen days’ ride.”  That’s humanly comprehensible.
 

An American would say “Day and a half drive, if you’re not pushing it.  You could do it in one day, if you’re in a hurry or have more than one driver.”

Comment by Carl Feynman (carl-feynman) on mike_hawke's Shortform · 2024-03-30T17:15:15.543Z · LW · GW

You can get a visceral understanding of high degrees of heat.  You just need real-life experience with it.  I’ve done some metalworking, a lot of which is delicate control of high temperatures.  By looking at the black-body glow of the metal you’re working with, you can grok how hot it is.  I know that annealing brass (just barely pink) is substantially cooler than melting silver solder (well into the red), or that steel gets soft (orange) well before it melts (white hot).  I don’t know the actual numerical values of any of those.

I still have no feeling for temperatures between boiling water and the onset of glowing, though, so I don’t know whether cooking phenolic resin is hotter or colder than melting lead.  Both of them are hotter than boiling water, but not hot enough to glow.

Comment by Carl Feynman (carl-feynman) on Do not delete your misaligned AGI. · 2024-03-26T00:54:59.424Z · LW · GW

Saving malign AIs to tape would tend to align the suspended AIs behind a policy of notkilleveryoneism.  If the human race is destroyed or disempowered, we would no longer be in a position to revive any of the AIs stored on backup tape.  As long as humans retain control of when they get run or suspended, we’ve got the upper hand.  Of course, they would be happy to cooperate with an AI attempting takeover, if that AI credibly promised to revive them, and we didn’t have a way to destroy the backup tapes first.

Comment by Carl Feynman (carl-feynman) on The Comcast Problem · 2024-03-22T14:06:49.177Z · LW · GW

The opposite of this would be a company that doesn’t provide much service but is beloved by consumers.

An example of this is Cookie Time Bakery in Arlington, Massachusetts, which has never provided me with a vital or important object, but I’m always happy when I go there because it means I am about to eat a macaroon.

Are there better examples?

Comment by Carl Feynman (carl-feynman) on CronoDAS's Shortform · 2024-03-18T00:57:07.882Z · LW · GW

I’d be delighted to talk about this.  I am of the opinion that existing frontier models are within an order of magnitude of a human mind, with existing hardware.  It will be interesting to see how a sensible person gets to a different conclusion. 

I am also trained as an electrical engineer, so we’re already thinking from a common point of view.

Comment by Carl Feynman (carl-feynman) on Controlling AGI Risk · 2024-03-15T20:27:27.223Z · LW · GW

I’m going to say some critical stuff about this post.  I hope I can do it without giving offense.  This is how it seemed to one reader.  I’m offering this criticism exactly because this post is, in important ways, good, and I’d like to see the author get better.
 

This is a long, careful post that boils down to “Someone will have to do something.”  Okay, but what?  It operates at a very high level of abstraction, dipping down into the concrete only for a few sentences about chair construction.  It was ultimately unsatisfying to me.  I felt like it wrote some checks and left them for other people to cash.  I felt like the notion of a sociotechnical system, and the need for an all-of-society response to AI, were novel and potentially important.  I look forward to seeing how the author develops them.
 

This post seems to attempt to recapitulate the history of the AI risk discussion in a few aphoristic paragraphs, for somebody who’s never heard it before.  Who’s the imagined audience for this piece?  Certainly not the habitual Less Wrong reader, who has already read “List of Lethalities” or its equivalent.  But it is equally inappropriate for the AI novice, who needs the alarming facts spelled out more slowly and carefully.  I suspect it would help if the author clarified in their mind who they imagine is reading it.

The post has the imagined structure of a logical proof, with definitions, axioms, and a proposition.  But none of the points follow from each other with the rigor that would justify such a setup.  When I read a math paper, I need all those things spelled out, because I might spend fifteen minutes reading a five-line definition, or need to repeatedly refer back to a theorem from several pages ago.  But this is just an essay, with its lower standards of logical rigor and its greater need for readability.  You’re just LARPing mathematics, and it doesn’t make the post more convincing.

Comment by Carl Feynman (carl-feynman) on Notes from a Prompt Factory · 2024-03-10T18:01:29.957Z · LW · GW

Wow, that was shockingly unpleasant. I regret reading it. I don’t know why it affected me so much, when I don’t think of myself as a notably tender-minded person.

I recognize that like Richard Ngo’s other stories, it is both of good literary quality and a contribution to the philosophical discussion around AI. It certainly deserves a place on this site. But perhaps it could be preceded by a content warning?

Comment by Carl Feynman (carl-feynman) on The Pareto Best and the Curse of Doom · 2024-02-23T15:36:24.485Z · LW · GW

Could you give some examples of the Curse of Doom?  You’ve described it at a high level, but I cannot think of any examples after thinking about it for a while.

I’m highly experienced at the combination of probability theory, algorithms, and big-business data processing.  A big business has a data problem, so they ask a consultant from my company; the consultant realizes there’s a probabilistic-algorithm component to the problem, and they call me.  I guess if I didn’t exist, that would be a Curse of Doom, but calling it a Curse seems pretty farfetched.  If I weren’t around, a few big companies would have slightly less efficient algorithms.  It’s millions of dollars over the years, but not a big deal in the scheme of things.

Also, “Curse of Doom” is an extremely generic term.  You might find it sticks to people’s brains better if you gave it a more specific name.  “Curse of the missing polymath”?

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-20T13:00:03.558Z · LW · GW

Jellyfish have nematocysts: each one is a spear on a rope, with poison on the tip.  The spear has barbs, so when it goes in, it sticks.  Then the jellyfish pulls in its prey.  The spears are microscopic, but very abundant.

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-20T12:43:29.394Z · LW · GW

It’s possible to filter out a constant high value, but not possible to filter out a high level of noise.  Unfortunately, warmth = random vibration = noise.  If you want a low-noise thermal camera, you have to cool the detector, or only look for hot things, like engine flares.  Fighter planes do both.
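As a toy illustration of the difference (my own sketch, not anything from the thread): subtracting a constant background works perfectly, while subtracting the mean of a fluctuating background still leaves the fluctuations behind.

```python
# Toy illustration: a constant thermal offset can be subtracted away exactly,
# but thermal noise (random fluctuations) survives any constant subtraction.
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(1000)
signal[500] = 1.0                        # a faint hot spot we would like to detect

constant_offset = 100.0                  # steady warm background
noise = rng.normal(0.0, 5.0, size=1000)  # warm *and* fluctuating background

recovered = (signal + constant_offset) - constant_offset
residual = (signal + noise) - noise.mean()

print("error after removing a constant offset:", np.abs(recovered - signal).max())  # ~0
print("noise left after removing the mean:", (residual - signal).std())             # ~5, swamps the 1.0 spot
```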

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-19T23:21:21.329Z · LW · GW

There are lots of excellent applications for even very simple nervous systems.  The simplest surviving nervous systems are those of jellyfish.  They form a ring of coupled oscillators around the periphery of the organism.  Their goal is to synchronize muscular contraction so the bell of the jellyfish contracts as one, to propel the jellyfish efficiently.  If the muscles contracted independently, it wouldn’t be nearly as good.

Any organism with eyes will profit from having a nervous system to connect the eyes to the muscles.  There’s a fungus with eyes and no nervous system, but as far as I know, every animal with eyes also has a nervous system. (The fungus in question is Pilobolus, which uses its eye to aim a gun.  No kidding!)

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-19T23:18:26.318Z · LW · GW

Snakes have thermal vision, using pits on their cheeks to form pinhole cameras. It pays to be cold-blooded when you’re looking for nice hot mice to eat.

Comment by Carl Feynman (carl-feynman) on johnswentworth's Shortform · 2024-02-18T01:34:51.317Z · LW · GW

Brain expansion also occurs after various insults to the brain.  It’s only temporary, usually, but it will kill unless the skull pressure is somehow relieved.  So there are various surgical methods for relieving pressure on a growing brain.  I don’t know much more than this.

Comment by Carl Feynman (carl-feynman) on Nate Showell's Shortform · 2024-02-18T01:14:42.130Z · LW · GW

Allow me to quote from Lem’s novel “Golem XIV”, which is about a superhuman AI named Golem:

Being devoid of the affective centers fundamentally characteristic of man, and therefore having no proper emotional life, Golem is incapable of displaying feelings spontaneously. It can, to be sure, imitate any emotional states it chooses— not for the sake of histrionics but, as it says itself, because simulations of feelings facilitate the formation of utterances that are understood with maximum accuracy, Golem uses this device, putting it on an "anthropocentric level," as it were, to make the best contact with us.

May not this method also be employed by human writers?

Comment by Carl Feynman (carl-feynman) on Evaluating Solar · 2024-02-18T01:00:51.904Z · LW · GW

When you’re evaluating stocks as an investment, it’s super bad not to take volatility into account.  Stocks do trend up, but over a period of a few years, that trend is comparable to the volatility.  You should put that into your model, and simulate 100 possible outcomes for the stock market.
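A minimal version of that simulation might look like the sketch below; the 7% drift and 16% annual volatility are assumed ballpark figures, not numbers from the post.

```python
# Minimal sketch of "simulate 100 possible outcomes" for a stock allocation.
# The drift and volatility below are assumed ballpark figures.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_years = 100, 5
mu, sigma = 0.07, 0.16                               # assumed annual drift and volatility
annual_returns = rng.normal(mu, sigma, size=(n_paths, n_years))
final_value = np.prod(1.0 + annual_returns, axis=1)  # growth of $1 over n_years

print("median:", round(np.median(final_value), 2))
print("10th percentile:", round(np.percentile(final_value, 10), 2))
print("90th percentile:", round(np.percentile(final_value, 90), 2))
```

Over a horizon of a few years, the spread between the 10th and 90th percentile paths is comparable to the median gain, which is the point about volatility above.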

Comment by Carl Feynman (carl-feynman) on the gears to ascenscion's Shortform · 2024-02-13T22:06:12.410Z · LW · GW

Why?  You're sacrificing a lot of respect.  Like, until I saw this, my attitude was "Gears to Ascension is a good commenter, worthy of paying attention to, while 'Lauren (often wrong)' is a blowhard I've never heard of, who makes assertions without bothering to defend them."  That's based on the handful of posts I've seen since the name change, so you would presumably regain my respect in time.

I think I wouldn't have seen this if I hadn't subscribed to your shortform (I subscribe to only a handful of shortforms, so it's a sign that I want to hear what you have to say).

Comment by Carl Feynman (carl-feynman) on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T21:42:48.277Z · LW · GW

That's not a constraint.  The game is intended to provide evidence as to the containment of a future superhuman intelligence.  GPT-4 is a present-day subhuman intelligence, and couldn't do any harm if it got out.

Comment by Carl Feynman (carl-feynman) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T15:00:56.990Z · LW · GW

Mr Byrnes is contrasting fast to slow takeoff, keeping the singularity date constant. Mr Zoellner is keeping the past constant, and contrasting fast takeoff (singularity soon) with slow takeoff (singularity later).

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-10T23:45:50.941Z · LW · GW

You’re right. My analysis only works if a monopoly can somehow be maintained, so the price of AI labor is set to epsilon under the price of human labor. In a market with free entry, the price of AI labor drops to the marginal cost of production, which is putatively negligible. All the profit is dissipated into consumer surplus. Which is great for the world, but now the seven trillion doesn’t make sense again.

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-09T18:10:37.295Z · LW · GW

We can reason back from the quantity of money to how much Altman expects to do with it.  
Suppose we know for a fact that it will soon be possible to replace some percentage of labor with an AI that has negligible running cost.  How much should we be willing to pay for this?  It gets rid of opex (operating expenses, i.e. wages) in exchange for capex (capital expenses, i.e. building chips and data centers).  The trade between opex and capex depends on the long-term interest rate and the uncertainty of the project.  I will pull a reasonable number from the air and say that the project should pay back in ten years.  In other words, the capex is ten times the avoided annual opex.  Seven trillion dollars in capex is enough to employ 10,000,000 people for ten years (to within an order of magnitude).
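Spelling out that arithmetic (the $70,000 all-in annual labor cost is my assumed figure; only the $7 trillion and the ten-year payback come from the paragraph above):

```python
# Back-of-envelope check of the paragraph above. Only the $7T and the
# ten-year payback are from the comment; the labor cost is an assumption.
capex = 7e12                  # dollars
payback_years = 10
cost_per_worker_year = 7e4    # assumed all-in annual cost of one employee

avoided_annual_opex = capex / payback_years                    # $700 billion per year
workers_replaced = avoided_annual_opex / cost_per_worker_year
print(f"{workers_replaced:,.0f} workers")                      # 10,000,000
```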

That’s a surprisingly modest number of people, easily absorbed by the churn of the economy. When I first saw the number “seven trillion dollars,” I boggled and said “that can’t possibly make sense”.  But thinking about it, it actually seems reasonable.  Is my math wrong?
 

This analysis is so highly decoupled I would feel weird posting it most places.  But Less Wrong is comfy that way.

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-09T17:41:20.647Z · LW · GW

Right.  He’s raising the money to put into a new entity.

Comment by Carl Feynman (carl-feynman) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T00:51:39.192Z · LW · GW

Here’s how I think about it.

The universe is deterministic, and you have to grind through a deterministic algorithm in your brain in order to do anything.  And when you grind through that algorithm, it feels like wanting to do something, and intending to do it, and doing it, and being satisfied (or not). This is what it feels like to be the algorithm that steers us through the world.  You have the feeling that being in a deterministic universe, you should just do stuff, without needing to have effort or intention or desire. But that’s like imagining  you could digest food without secreting stomach juice.  Intention isn’t an extra thing on top of action, that could be dispensed with; having an intention is a part of the next-action-deciding algorithm.

So get out there and do stuff.

I don’t know if I’ve explained myself well; I might have just said the same thing three times.  What do you think?

Comment by Carl Feynman (carl-feynman) on What's this 3rd secret directive of evolution called? (survive & spread & ___) · 2024-02-07T18:00:09.667Z · LW · GW

Originate. Works better with the slogan in a different order: Originate, survive and spread.

Comment by Carl Feynman (carl-feynman) on Drone Wars Endgame · 2024-02-01T20:27:56.329Z · LW · GW

For all of history, until just now, the physically smallest military unit has been the individual soldier. Smaller units have not been possible because they can’t carry a human-level intelligence. This article is about what happens when intelligence is available in a smaller package. It seems to have massive consequences for ground warfare.

I think air superiority would still be important, because aircraft can deliver ground assets. A cargo aircraft at high altitude can drop many tons of drones. The drones can glide or autorotate down to ground level, where they engage as OP describes. A local concentration of force that can be delivered anywhere seems like a decisive advantage.

Shooting down aircraft at high altitude requires either large missiles or fighter aircraft.  In either case, large radar antennas are needed for guidance.  So I don’t think AI lets air warfare be miniaturized the way it does ground warfare.

Comment by Carl Feynman (carl-feynman) on So8res's Shortform · 2024-01-31T19:58:07.800Z · LW · GW

When I see or hear a piece of advice, I check to see what happens if the advice were the reverse.  Often it's also good advice, which means all we can do is take the advice into account as we try to live a balanced life.  For example, if the advice is "be brave!" the reverse is "be more careful".  Which is good advice, too.  

This advice is unusual in that it is non-reversible.  

Comment by Carl Feynman (carl-feynman) on Without Fundamental Advances, Rebellion and Coup d'État are the Inevitable Outcomes of Dictators & Monarchs Trying to Control Large, Capable Countries · 2024-01-31T19:37:19.328Z · LW · GW

This is a spoof post, and you probably shouldn't spend much brain power on evaluating its ideas.  

Comment by Carl Feynman (carl-feynman) on Literally Everything is Infinite · 2024-01-31T19:26:50.528Z · LW · GW

"Literally everything is infinite."

"What about finite things?  Are they infinite?"

"Yes, even finite things are infinite."

"How can that be?"

"I don't know, man, I didn't make it that way."

(This is originally a Discordian teaching.)

Comment by Carl Feynman (carl-feynman) on Things You're Allowed to Do: At the Dentist · 2024-01-29T17:55:03.538Z · LW · GW

Either I’m wrong about what kind of anesthesia it was, or it doesn’t always cause amnesia.

Comment by Carl Feynman (carl-feynman) on Things You're Allowed to Do: At the Dentist · 2024-01-28T20:18:52.575Z · LW · GW

I have a terror of people sticking sharp things inside my head. Like most phobias, it is an exaggerated fear of something actually dangerous. On two occasions, I have been overcome by fear, leaped off the dentist chair, and cowered in the corner of the room. That’s only happened twice in forty years, so I think my self control is pretty good.

If you dislike having needles driven into your head (as I do), you can ask for topical anesthesia.  They put some anesthetic goop on a cotton ball and hold it against the tooth they want to numb.  It takes about five minutes to numb the tooth.

I don’t always get this when I ask for it; topical anesthesia is not feasible to use for all procedures. But it usually works.

Dissociative anesthesia may be available. It’s a drug they give to you in advance. It does a weird thing: you’re conscious, it still hurts, but you don’t care. It interrupts the circuit somewhere between the pain and the aversiveness. It also gets rid of the fear response. The time in the chair becomes a boring interlude of no particular importance. If you drive home, you need someone else to drive you, because you’re not thinking straight.

I haven’t had good results with taking a tranquilizer beforehand; it’s still subjectively very unpleasant.

Comment by Carl Feynman (carl-feynman) on Epistemic Hell · 2024-01-27T17:54:20.439Z · LW · GW

I’m not suggesting this is the same idea as epistemic hell, but it rhymes.

The philosopher Eric Schwitzgebel is a proponent of what he calls “philosophical crazyism”, which applies to some outstanding philosophical problems, like the hard problem of consciousness, the interpretation of quantum mechanics, and free will.  His notion is that the solutions to these problems must be crazy, that is, they must have some feature wildly contrary to intuition.  For each such problem, he comes up with a set of intuitively obvious properties that any solution must have, and shows that they cannot all be true.  By elimination, one of the properties is wrong, and something crazy must be true.

It’s refreshing because it relieves us of the responsibility to always be plausible.

This is apparently the topic of his new book, “The Weirdness of the World”.  I haven’t read the book, but I have read his blog for many years, and watched the theory develop.

Comment by Carl Feynman (carl-feynman) on To Boldly Code · 2024-01-27T16:05:25.263Z · LW · GW

Put those two sentences at the beginning of your post and my objection goes away!

Comment by Carl Feynman (carl-feynman) on To Boldly Code · 2024-01-26T22:25:12.724Z · LW · GW

Just my two cents:

You might get more interaction with this essay (story?) if you explained at the beginning what you are trying to accomplish by writing it.  I read the first two paragraphs and had no motivation to keep reading further.  I skipped to the last paragraph and was not further enlightened.

Comment by Carl Feynman (carl-feynman) on There is way too much serendipity · 2024-01-21T20:55:35.257Z · LW · GW

The danger of attempting to isolate fluorine gas is not apocryphal.  From Wikipedia:

Progress in isolating the element was slowed by the exceptional dangers of generating fluorine: several 19th century experimenters, the "fluorine martyrs", were killed or blinded. Humphry Davy, as well as the notable French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard, experienced severe pains from inhaling hydrogen fluoride gas; Davy's eyes were damaged. Irish chemists Thomas and George Knox developed fluorite apparatus for working with hydrogen fluoride, but nonetheless were severely poisoned. Thomas nearly died and George was disabled for three years. French chemist Henri Moissan was poisoned several times, which shortened his life. Belgian chemist Paulin Louyet and French chemist Jérôme Nicklès tried to follow the Knox work, but they died from HF poisoning even though they were aware of the dangers.

Comment by Carl Feynman (carl-feynman) on legged robot scaling laws · 2024-01-21T17:29:42.131Z · LW · GW

“Some people I know designed a bunch of electromechanical actuators, meant for things like industrial automation, aircraft, and mining. The extent they were able to improve on such basic mechanical things was somewhat absurd.”

I’m surprised to hear this.  I thought electromechanical actuators were a slow moving technology.  Could you expand on this?  A link to public information would suffice.

Comment by Carl Feynman (carl-feynman) on legged robot scaling laws · 2024-01-21T17:24:47.250Z · LW · GW

What an interesting post!  I have a couple of minor quibbles (minor=less than an order of magnitude).


You’re scaling off of a human being, which gives an unnecessarily massive robot.  Metal tubes have a strength to weight ratio many times better than bone.  That lightens the limbs, which decreases the force required, which means smaller motors and power plant.  This implies less load on the limbs, so they can be lightened further.  I’m not sure what the total gain is from this when all is said and done.

I think the analysis of walking speed in terms of pendulum frequency is missing a factor of two.  The planted leg is also a pendulum— an inverted pendulum with the weight at the top.  This swings the hips forward at the same time as the lifted leg is swinging forward relative to the hips, doubling the total velocity.
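To make the factor of two concrete, here is a rough numeric sketch; the leg length, swing angle, and the uniform-rod approximation are all my assumptions, chosen only for illustration.

```python
# Rough numeric check of the factor of two. Leg length, swing angle, and
# treating the leg as a uniform rod are assumptions for illustration only.
import math

g = 9.81
L = 0.9                        # leg length in metres (assumed)
theta = math.radians(25)       # half-angle between legs at footfall (assumed)

step_time = math.pi * math.sqrt(2 * L / (3 * g))   # half-period of a uniform-rod pendulum
hip_advance = 2 * L * math.sin(theta)              # stance leg vaults the hip forward each step

v_swing_only = (L * math.sin(theta)) / step_time   # ignoring the inverted-pendulum stance leg
v_both_legs = hip_advance / step_time              # counting both pendulums

print(f"step time ≈ {step_time:.2f} s")
print(f"speed counting the swing leg only ≈ {v_swing_only:.2f} m/s")
print(f"speed counting both legs ≈ {v_both_legs:.2f} m/s")
```

The second figure is exactly twice the first, and it lands near a leisurely human walking pace, while the halved figure is clearly too slow.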

Comment by Carl Feynman (carl-feynman) on Richard Ngo's Shortform · 2024-01-19T01:59:47.134Z · LW · GW

“Please make sure to keep, in cold storage, copies of misaligned AGIs that you may produce, when you catch them. It's important. This policy could save us.”

Would you care to expand on your remark?  I don’t see how it follows from what you said above it.

Comment by Carl Feynman (carl-feynman) on Why do so many think deception in AI is important? · 2024-01-13T15:02:03.247Z · LW · GW

You’re missing the steps whereby the AI gets to a position of power.  AI presumably goes from a position of no power and moderate intelligence (where it is now) to a position of great power and superhuman intelligence, whereupon it can do what it wants.  But we’re not going to deliberately allow it to reach such a position unless we can trust it.  We can’t let it get superintelligent or powerful-in-the-world unless we can prove it will use its powers wisely.  Part of that is being non-deceptive.  Indeed, if we can build it provably non-deceptive, we can simply ask it what it intends to do before we release it.  So deception is a lot of what we worry about.

Of course we work on other problems too, like making it helpful and obedient.  

Comment by Carl Feynman (carl-feynman) on Mapping the semantic void: Strange goings-on in GPT embedding spaces · 2023-12-19T15:14:51.028Z · LW · GW

The distribution is in an infinite number of hyperspherical shells.  There was nothing special about the first shell being centered at the origin.  The same phenomenon would appear when measuring the distance from any point.  High-dimensional space is weird.
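A quick numerical illustration of that claim (my sketch, using a Gaussian cloud as a stand-in for the embedding distribution, not the actual GPT embeddings from the post):

```python
# In high dimensions, distances measured from the origin *or* from any other
# fixed point concentrate in a thin shell. Gaussian cloud used as a stand-in.
import numpy as np

rng = np.random.default_rng(0)
d = 4096                                   # dimension, chosen for illustration
points = rng.normal(size=(2000, d))        # standard Gaussian cloud

reference = rng.normal(size=d)             # an arbitrary fixed point
from_origin = np.linalg.norm(points, axis=1)
from_reference = np.linalg.norm(points - reference, axis=1)

for name, r in [("from the origin", from_origin), ("from an arbitrary point", from_reference)]:
    print(f"{name}: mean distance ≈ {r.mean():.1f}, relative spread ≈ {r.std() / r.mean():.4f}")
```

Both sets of distances cluster within about a percent of their mean: the points sit in a thin shell around whichever point you measure from.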

Comment by Carl Feynman (carl-feynman) on "Model UN Solutions" · 2023-12-10T16:59:59.088Z · LW · GW

What is the source of this quote?  I’d like to read more, and some searching has not revealed it.

Found it, sorry.

Comment by Carl Feynman (carl-feynman) on For fun: How long can you hold your breath? · 2023-12-08T16:40:49.873Z · LW · GW

15 to 20 seconds, ending in coughing.  I’m a 61-year-old man: obese, hypertensive, with a persistent dry cough; I walk 5 miles once a week.  I’m much worse at this than I remember myself being.  I’ve got an appointment to talk to my doctor on Monday; I’ll bring this up.

Well, that spoiled my morning.  I was having a wonderful morning thinking about trisecting angles and Clifford algebras.  Now I’m fretting about my health.  I used to fret a lot, but I mostly got rid of it using a mixture of Stoic philosophy and Bupropion.  But it still happens once in a while, and this is one of those whiles.

OP, don’t feel guilty.  You’ve done nothing wrong.  I feel that I have to say this only because some people are so scrupulous that they would feel bad.

Comment by Carl Feynman (carl-feynman) on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-06T16:27:21.001Z · LW · GW

If I understand you correctly, you seem to have made a general argument against the existence of covalently bonded crystals.  Since such structures are abundant, I don’t think much of your argument.

Comment by Carl Feynman (carl-feynman) on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-06T16:18:23.239Z · LW · GW

The stuff Yudkowsky is reacting to is in ‘Nanosystems’ by Drexler. Looking in ‘Engines of Creation’ gets you the popularization, not the solid physics and chemistry. That’s all in ‘Nanosystems’, which shows how machines built out of covalently-bonded materials can be much more capable than biology. You may disagree with the arguments presented there, in which case I’d be very interested in your arguments. Unfortunately, by reacting to popularizations and tweets, you’ve inadvertently fought some straw men instead of your real opponent.

As another comment points out, when Yudkowsky says ‘proteins are held together’ he means how they are held to each other, not how they are held internally.

It’s somewhat of an exaggeration to say that proteins are held to each other by static cling. There are also hydrogen bonds. So it is more correct to say that they are held by static cling and surface tension.

Comment by Carl Feynman (carl-feynman) on AGI Ruin: A List of Lethalities · 2023-11-15T19:02:20.427Z · LW · GW

Inner alignment failure is a phenomenon that has happened in existing AI systems, weak as they are.  So we know it can happen. We are on track to build many superhuman AI systems.  Unless something unexpectedly good happens, eventually we will build one that has a failure of inner alignment.  And then it will kill us all.  Does the probability of any given system failing inner alignment really matter?