Comments

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-05-20T17:13:51.172Z · LW · GW

The standard reply is that investors who know or suspect that the market is being systematically distorted will enter the market on the other side, expecting to profit from the distortion. Empirically, attempts to deliberately sway markets in desired directions don’t last very long.

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-05-19T19:38:13.328Z · LW · GW

When I brought up sample inefficiency, I was supporting Mr. Helm-Burger’s statement that “there's huge algorithmic gains in …training efficiency (less data, less compute) … waiting to be discovered”.  You’re right of course that a reduction in training data will not necessarily reduce the amount of computation needed.  But once again, that’s the way to bet.

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-05-19T16:30:16.739Z · LW · GW

Here are two arguments for low-hanging algorithmic improvements.

First, in the past few years I have read many papers containing low-hanging algorithmic improvements.  Most such improvements are a few percent or tens of percent.  The largest such improvements are things like transformers or mixture of experts, which are substantial steps forward.  Such a trend is not guaranteed to persist, but that’s the way to bet.

Second, existing models are far less sample-efficient than humans.  We receive about a billion tokens growing to adulthood.  The leading LLMs get orders of magnitude more than that.  We should be able to do much better.  Of course, there’s no guarantee that such an improvement is “low hanging”.  
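
A crude back-of-envelope comparison, with made-up round numbers (the words-per-day figure, tokenizer ratio, and LLM training-set size are all assumptions, each uncertain by a factor of a few):

```python
words_per_day = 15_000    # assumed: words a child hears and reads per day
years = 18
tokens_per_word = 1.3     # assumed: typical tokenizer ratio

human_tokens = words_per_day * 365 * years * tokens_per_word
llm_tokens = 15e12        # assumed: rough training-set size of a recent frontier model

print(f"human: ~{human_tokens:.1e} tokens")
print(f"LLM:   ~{llm_tokens:.1e} tokens")
print(f"ratio: ~{llm_tokens / human_tokens:.0e}x")
```

Depending on the assumptions you get somewhere between 10^8 and 10^9 tokens for the human, which still leaves current frontier models four or five orders of magnitude behind.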

Comment by Carl Feynman (carl-feynman) on yanni's Shortform · 2024-05-05T19:32:46.766Z · LW · GW

This question is two steps removed from reality.  Here’s what I mean by that.  Putting brackets around each of the two steps:

what is the threshold that needs meeting [for the majority of people in the EA community] [to say something like] "it would be better if EAs didn't work at OpenAI"?
 

Without these steps, the question becomes 

What is the threshold that needs meeting before it would be better if people didn’t work at OpenAI?

Personally, I find that a more interesting question.  Is there a reason why the question is phrased at two removes like that?  Or am I missing the point?

Comment by Carl Feynman (carl-feynman) on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-05-05T11:42:39.042Z · LW · GW

Some comments:

The word for a drug that causes loss of memory is “amnestic”, not “amnesic”.  The word “amnesic” is a variant spelling of “amnesiac”, which is the person who takes the drug.  This made reading the article confusing.

Midazolam is the benzodiazepine most often prescribed as an amnestic.  The trade name is Versed (accent on the second syllable, like vurSAID).  The period of not making memories lasts less than an hour, but you’re relaxed for several hours afterward.  It makes you pretty stupid and loopy, so I would think the performance on an IQ test would depend primarily on how much midazolam was in the bloodstream at the moment, rather than on any details of setting.

Comment by Carl Feynman (carl-feynman) on Ironing Out the Squiggles · 2024-05-01T19:07:30.063Z · LW · GW

An interesting question!  I looked in “Towards Deep Learning Models Resistant to Adversarial Attacks” to see what they had to say on the question.  If I’m interpreting their Figure 6 correctly, there’s a negligible increase in error rate as epsilon increases, and then at some point the error rate starts swooping up toward 100%.  The transition seems to be about where the perturbed images start to be able to fool humans.  (Or perhaps slightly before.)  So you can’t really blame the model for being fooled, in that case.  If I had to pick an epsilon to train with, I would pick one just below the transition point, where robustness is maximized without getting into the crazy zone.

All this is the result of a cursory inspection of a couple of papers.  There’s about a 30% chance I’ve misunderstood.

Comment by Carl Feynman (carl-feynman) on List your AI X-Risk cruxes! · 2024-04-28T21:55:27.658Z · LW · GW

Here’s an event that would change my p(doom) substantially:

Someone comes up with an alignment method that looks like it would apply to superintelligent entities.  They get extra points for trying it and finding that it works, and extra points for society coming up with a way to enforce that only entities that follow the method will be created.

So far none of the proposed alignment methods seem to stand up to a superintelligent AI that doesn’t want to obey them.  They don’t even stand up to a few minutes of merely human thought.  But it’s not obviously impossible, and lots of smart people are working on it.

In the non-doom case, I think one of the following will be the reason:

—Civilization ceases to progress, probably because of a disaster.

—The governments of the world ban AI progress.

—Superhuman AI turns out to be much harder than it looks, and not economically viable.

—The happy circumstance described above (a workable, enforced alignment method), giving us the marvelous benefits of superintelligence without the omnicidal drawbacks.

Comment by Carl Feynman (carl-feynman) on Spatial attention as a “tell” for empathetic simulation? · 2024-04-26T20:22:52.971Z · LW · GW

You write:

…But I think people can be afraid of heights without past experience of falling…

I have seen it claimed that crawling-age babies are afraid of heights, in that they will not crawl from a solid floor to a glass platform over a yawning gulf.  And they’ve never fallen into a yawning gulf.  At that age, probably all the heights they’ve fallen from have been harmless, since the typical baby is both bouncy and close to the ground.

Comment by Carl Feynman (carl-feynman) on MichaelDickens's Shortform · 2024-04-26T19:38:19.182Z · LW · GW

Various sailors made important discoveries back when geography was cutting-edge science.  And they don't seem particularly bright.

Vasco da Gama discovered that Africa was circumnavigable.

Columbus was wrong about the shape of the Earth, and he discovered America.  He died convinced that his newly discovered islands were just off the coast of Asia, so that's a negative sign for his intelligence (or a positive sign for his arrogance, which he had in plenty.)

Cortez discovered that the Aztecs were rich and easily conquered.

Of course, lots of other would-be discoverers didn't find anything, and many died horribly.

So, one could work in a field where bravery to the point of foolhardiness is a necessity for discovery.

Comment by Carl Feynman (carl-feynman) on Spatial attention as a “tell” for empathetic simulation? · 2024-04-26T19:08:25.878Z · LW · GW

We've learned a lot about the visual system by looking at ways to force it to wrong conclusions, which we call optical illusions or visual art.  Can we do a similar thing for this postulated social cognition system?  For example, how do actors get us to have social feelings toward people who don't really exist?  And what rules do movie directors follow to keep us from getting confused by cuts from one camera angle to another?

Comment by Carl Feynman (carl-feynman) on Johannes C. Mayer's Shortform · 2024-04-26T14:18:09.310Z · LW · GW

I would highly recommend getting someone else to debug your subconscious for you.  At least it worked for me.  I don’t think it would be possible for me to have debugged myself.
 

My first therapist was highly directive.  He’d say stuff like “Try noticing when you think X, and asking yourself what happened immediately before that.  Report back next week.”  He’d also list agenda items and draw diagrams on a whiteboard.  As an engineer, I loved it.  My second therapist was more in the “providing supportive comments while I talk about my life” school.  I don’t think that helped much, at least subjectively from the inside.

Here’s a possibly instructive anecdote about my first therapist.  Near the end of a session, I feel like my mind has been stretched in some heretofore-unknown direction.  It’s a sensation I’ve never had before.  So I say, “Wow, my mind feels like it’s been stretched in some heretofore-unknown direction.  How do you do that?”  He says, “Do you want me to explain?”  And I say, “Does it still work if I know what you’re doing?”  And he says, “Possibly not, but it’s important you feel I’m trustworthy, so I’ll explain if you want.”  So I say “Why mess with success?  Keep doing the thing. I trust you.”  That’s an example of a debugging procedure you can’t do to yourself.

Comment by Carl Feynman (carl-feynman) on Examples of Highly Counterfactual Discoveries? · 2024-04-25T01:30:37.396Z · LW · GW

Wegener’s theory of continental drift was decades ahead of its time. He published in the 1920s, but plate tectonics didn’t take over until the 1960s.  His theory was wrong in important ways, but still.

Comment by Carl Feynman (carl-feynman) on Johannes C. Mayer's Shortform · 2024-04-24T16:51:50.733Z · LW · GW

I was depressed once for ten years and didn’t realize that it was fixable.  I thought it was normal to have no fun and be disagreeable and grumpy and out of sorts all the time.  Now that I’ve fixed it, I’m much better off, and everyone around me is better off.  I enjoy enjoyable activities, I’m pleasant to deal with, and I’m only out of sorts when I’m tired or hungry, as is normal.

If you think you might be depressed, you might be right, so try fixing it.  The cost seems minor compared to the possible benefit (at least it was in my case).  I don’t think there’s a high probability of severe downside consequences, but I’m not a psychiatrist, so what do I know.

I had been depressed for a few weeks at a time in my teens and twenties and I thought I knew how to fix it: withdraw from stressful situations, plenty of sleep, long walks in the rain.  (In one case I talked to a therapist, which didn’t feel like it helped.)  But then it crept up on me slowly in my forties and in retrospect I spent ten years being depressed.

So fixing it started like this.  I have a good friend at work, of many years’ standing.  I’ll call him Barkley, because that’s not his name.  I was riding in the car with my wife, complaining about some situation at work.  My wife said “well, why don’t you ask Barkley to help?”  And I said “Ahh, Barkley doesn’t care.”  And my wife said “What are you saying?  Of course he cares about you.”  And I realized in that moment that I was detached from reality, that Barkley was a good friend who had done many good things for me, and yet my brain was saying he didn’t care.  And thus my brain was lying to me to make me miserable.  So I think for a bit and say “I think I may be depressed.”  And my wife thinks (she told me later) “No duh, you’re depressed. It’s been obvious for years to people who know you.”  But she says “What would you like to do about it?” And I say, “I don’t know, suffer I guess, do you have a better idea?”  And she says “How about if I find you a therapist?”  And my brain told me this was doomed to fail, but I didn’t trust my brain any more, so I said “Okay”.

So I go to the therapist, and conversing with him has many desirable mind-improving effects, and he sends me to a psychiatrist, who takes one look at me and starts me on SSRIs.  And years pass, and I see a different therapist (not as good) and I see a different psychiatrist (better).  
 

And now I’ve been fine for years.  Looking back, here are the things I think worked:

—Talking for an hour a week to a guy who was trying to fix my thinking was initially very helpful.  After about a year, the density of improvements dropped off, and, in retrospect, all subsequent several years of therapy don’t seem that useful.  But of course that’s only clear in retrospect.  Eventually I stopped, except for three-monthly check-ins with my psychiatrist.  And I recently stopped that.

—Wellbutrin, AKA bupropion.  Other antidepressants had their pluses and minuses, and I needed a few years of feeling around for which drug and what dosage was best.  I ended up on low doses of bupropion and escitalopram.  The escitalopram doesn’t feel like it does anything, but I trust my psychiatrist that it does.  Your mileage will vary.

—The ability to detect signs of depression early is very useful.  I can monitor my own mind, spot a depression flare early, and take steps to fix it before it gets bad.  It took a few actual flares, and professional help, to learn this trick.


—The realization that I have a systematic distortion in mental evaluation of plans, making actions seem less promising than they are.  When I’m deciding whether to do stuff, I can apply a conscious correction to this, to arrive at a properly calibrated judgement.

—The realization that, in general, my thinking can have systematic distortions, and that I shouldn’t believe everything I think.  This is basic Less Wrong-style rationalism, but it took years to work through all the actual consequences on actual me.

—Exercise helps.  I take lots of long walks when I start feeling depressed.  Rain is optional. 

Comment by Carl Feynman (carl-feynman) on hydrogen tube transport · 2024-04-19T20:31:14.876Z · LW · GW

    …run electricity through the pipe…

Simpler to do what some existing electric trains do: use the rails as ground, and have a charged third rail for power.  We don’t like this system much for new trains, because the third rail is deadly to touch.  It’s a bad thing to leave lying on the ground where people can reach it.  But in this system, it’s in a tube full of unbreathable hydrogen, so no one is going to casually come across it.

Comment by Carl Feynman (carl-feynman) on Johannes C. Mayer's Shortform · 2024-04-19T20:23:04.286Z · LW · GW

That hasn’t been my experience.  I’ve tried solving hard problems, sometimes I succeed and sometimes I fail, but I keep trying.

Whether I feel good about it is almost entirely determined by whether I’m depressed at the time.  When depressed, my brain tells me almost any action is not a good idea, and trying to solve hard problems is particularly idiotic and doomed to fail.  Maddeningly, being depressed was a hard problem in this sense, so it took me a long time to fix.  Now I take steps at the first sign of depression.

Comment by Carl Feynman (carl-feynman) on MakoYass's Shortform · 2024-04-11T00:08:29.496Z · LW · GW

Some extraordinary claims established by ordinary evidence:

Stomach ulcers are caused by infection with Helicobacter pylori.  It was a very surprising discovery that was established by a few simple tests.

The correctness of Kepler's laws of planetary motion was established almost entirely by analyzing historical data, some of it dating back to the ancient Greeks.

Special relativity was entirely a reinterpretation of existing data.  Ditto Einstein's explanation of the photoelectric effect, discovered in the same year.  

Comment by Carl Feynman (carl-feynman) on Thinking harder doesn’t work · 2024-04-10T23:38:48.460Z · LW · GW

Typo: bad->bath.

Comment by Carl Feynman (carl-feynman) on ChristianKl's Shortform · 2024-04-08T18:42:07.638Z · LW · GW

I’m confused.  Suppose your ring-shaped space hotel gets to Mars with people and cargo whose mass equals the cargo capacity of 1000 Starships.  How do you get it down?  First you have to slow down the hotel, which takes roughly as much fuel as it took to accelerate it.  Using Starships you can aerobrake from interplanetary velocity, costing negligible fuel.  In the hotel scenario, it’s not efficient to land using a small number of Starships flying up and down, because they will use a lot of fuel to get back up, even empty.
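
For intuition on the size of that fuel cost, here’s a rough sketch using the Tsiolkovsky rocket equation; the Isp and delta-v numbers are my assumptions, roughly right for a methane/oxygen engine doing a propulsive Mars capture and landing with no aerobraking:

```python
import math

isp = 380.0        # s, assumed vacuum Isp of a methane/oxygen engine
g0 = 9.81          # m/s^2, standard gravity
delta_v = 5_000.0  # m/s, assumed propulsive capture plus descent, no aerobraking

mass_ratio = math.exp(delta_v / (isp * g0))    # Tsiolkovsky rocket equation
propellant_fraction = 1 - 1 / mass_ratio
print(f"propellant fraction of arrival mass: {propellant_fraction:.0%}")   # ~74%
```

So on those assumptions, roughly three quarters of whatever mass arrives at Mars has to be propellant if it can’t aerobrake.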

Would you care to specify your scenario more precisely?  I suspect you’re neglecting the fuel cost at some stage.

Comment by Carl Feynman (carl-feynman) on ChristianKl's Shortform · 2024-04-08T16:52:13.718Z · LW · GW

When you get there how do you get down?  You need spacecraft capable of reentry at Mars.  There’s no spacecraft factory there, so they all have to be brought from Earth.  And if you’re bringing them, you might as well live in them on the way.  That way you also get a starter house on Mars.

Anyway, that’s the standard logic.

Comment by Carl Feynman (carl-feynman) on The Best Tacit Knowledge Videos on Every Subject · 2024-03-31T18:25:53.457Z · LW · GW

Here’s a weird one.  The YouTube channel of Andrew Camarata communicates a great deal about small business, heavy machinery operation and construction.  Sometimes he narrates what he’s doing, but mostly he just does it, and you say “Oh, I never realized I could do that with a skid steer” or “that’s how to keep a customer happy”.  Lots of implicit knowledge about accomplishing heavy engineering projects between an hour and a week long.  Of course, if you’re looking for lessons that would be helpful for an ambitious person in Silicon Valley, it will only help in a very meta way.
 

He has no legible success that I know of, except that he’s wealthy enough to afford many machines, and he’s smart enough that the house he designed and built came out stunning (albeit eccentric).
 

A similar channel is FarmCraft101, which also has a lot of heavy machinery, but more farm-based applications.  Full of useful knowledge on machine repair, logging and stump removal.  The channel is nice because he includes all his failures, and goes into articulate detail on how he debugged them.  I feel like I learned some implicit knowledge about repair strategies.  I particularly recommend the series of videos in which he purchases, accidentally sets on fire, and revives an ancient boom lift truck.

No legible symbols of success, other than speaking standard American English like he’s been to college, owning a large farm, and clearly being intelligent.

Comment by Carl Feynman (carl-feynman) on The Best Tacit Knowledge Videos on Every Subject · 2024-03-31T18:07:54.689Z · LW · GW

“Applied science” by Ben Krasnow.  A YouTube channel about building physics-intensive projects in a home laboratory.  Big ones are things like an electron microscope or a mass spectrometer, but the ones I find fascinating are smaller things like an electroluminescent display or a novel dye.  He demonstrates the whole process of scientific experiment— finding and understanding references, setting up a process for trying stuff, failing repeatedly, learning from mistakes, noticing oddities…  He doesn’t just show you the final polished procedure— “here’s how to make an X”.  He shows you the whole journey— “Here’s how I discovered how to make X”.

You seem very concerned that people in the videos should have legible symbols of success.  I don’t think that much affects how useful the videos are, but just in case I’m wrong, I looked on LinkedIn, where I found this self-assessment:

<begin copied text>

I specialize in the design and construction of electromechanical prototypes. My core skillset includes electronic circuit design, PCB layout, mechanical design, machining, and sensor/actuator selection. This allows me to implement and test ideas for rapid evaluation or iteration. Much of the work that I did for my research devices business included a fast timeline, going from customer sketch to final product in less than a month. These products were used to collect data for peer-reviewed scientific papers, and I enjoyed working closely with the end user to solve their data collection challenges. I did similar work at Valve to quickly implement and test internal prototypes.

Check out my youtube channel to see a sample of my personal projects:
http://www.youtube.com/user/bkraz333

<end copied text>

Comment by Carl Feynman (carl-feynman) on Is there a "critical threshold" for LLM scaling laws? · 2024-03-30T18:09:42.279Z · LW · GW

I think that if we retain the architecture of current LLMs, we will be in world one. I have two reasons.
First, the architecture of current LLMs places a limit on how much information they can retain about the task at hand.  They have memory of a prompt (both the system prompt and your task-specific prompt) plus the memory of everything they’ve said so far.  When what they’ve said so far gets long enough, they attend mostly to what they’ve already said, rather than attending to the prompt.  Then they wander off into La-La land.
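
As a toy illustration of this, under the crude (assumed) simplification that attention mass spreads roughly uniformly over the context:

```python
prompt_tokens = 2_000   # assumed size of system prompt plus task prompt

for generated in (500, 5_000, 50_000):
    prompt_share = prompt_tokens / (prompt_tokens + generated)
    print(f"after {generated:>6} generated tokens, "
          f"~{prompt_share:.0%} of the context is prompt")
```
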
Second, the problem may also be inherent in their training methods.  In the first (and largest) part of their training, they’re trained to predict the next word from a snippet of English text.  A few years ago, these snippets were a sentence or a paragraph.  They’ve gotten longer recently, but I don’t think they amount to entire books yet (readers, please tell us if you know).  So they’ve never seen a text that’s coherent over longer than the snippet length.  It seems unsurprising that they don’t know how to remain coherent indefinitely.
People have tried preventing these phenomena by various schemes, such as telling the LLM to prepare summaries for later expansion, or periodically reminding it of the task at hand.  So far these haven’t been enough to make indefinitely long tasks feasible.  Of course, there are lots of smart people working on this, and we could transition from world one to world two at any moment.

Comment by Carl Feynman (carl-feynman) on mike_hawke's Shortform · 2024-03-30T17:27:49.073Z · LW · GW

The imaginary nomad in my head would describe 1,000 miles as “sixteen days’ ride.”  That’s humanly comprehensible.
 

An American would say “Day and a half drive, if you’re not pushing it.  You could do it in one day, if you’re in a hurry or have more than one driver.”

Comment by Carl Feynman (carl-feynman) on mike_hawke's Shortform · 2024-03-30T17:15:15.543Z · LW · GW

You can get a visceral understanding of high degrees of heat.  You just need real-life experience with it.  I’ve done some metalworking, a lot of which is delicate control of high temperatures.  By looking at the black-body glow of the metal you’re working with, you can grok how hot it is.  I know that annealing brass (just barely pink) is substantially cooler than melting silver solder (well into the red), or that steel gets soft (orange) well before it melts (white hot).  I don’t know the actual numerical values of any of those.

I still have no feeling for temperatures between boiling water and the onset of glowing, though, so I don’t know whether cooking phenolic resin is hotter or colder than melting lead.  Both of them are hotter than boiling water, but not hot enough to glow.

Comment by Carl Feynman (carl-feynman) on Do not delete your misaligned AGI. · 2024-03-26T00:54:59.424Z · LW · GW

Saving malign AIs to tape would tend to align the suspended AIs behind a policy of notkilleveryoneism.  If the human race is destroyed or disempowered, we would no longer be in a position to revive any of the AIs stored on backup tape.  As long as humans retain control of when they get run or suspended, we’ve got the upper hand.  Of course, they would be happy to cooperate with an AI attempting takeover, if that AI credibly promised to revive them, and we didn’t have a way to destroy the backup tapes first.

Comment by Carl Feynman (carl-feynman) on The Comcast Problem · 2024-03-22T14:06:49.177Z · LW · GW

The opposite of this would be a company that doesn’t provide much service but is beloved by consumers.

An example of this is Cookie Time Bakery in Arlington, Massachusetts, which has never provided me with a vital or important object, but I’m always happy when I go there because it means I am about to eat a macaroon.

Are there better examples?

Comment by Carl Feynman (carl-feynman) on CronoDAS's Shortform · 2024-03-18T00:57:07.882Z · LW · GW

I’d be delighted to talk about this.  I am of the opinion that existing frontier models are within an order of magnitude of a human mind, with existing hardware.  It will be interesting to see how a sensible person gets to a different conclusion. 

I am also trained as an electrical engineer, so we’re already thinking from a common point of view.

Comment by Carl Feynman (carl-feynman) on Controlling AGI Risk · 2024-03-15T20:27:27.223Z · LW · GW

I’m going to say some critical stuff about this post.  I hope I can do it without giving offense.  This is how it seemed to one reader.  I’m offering this criticism exactly because this post is, in important ways, good, and I’d like to see the author get better.
 

This is a long, careful post that boils down to “Someone will have to do something.”  Okay, but what?  It’s operating at a very high level of abstraction, dipping down into the concrete only for a few sentences about chair construction.  It was ultimately unsatisfying to me.  I felt like it wrote some checks and left them for other people to cash.  I felt like the notion of a sociotechnical system, and the need for an all-of-society response to AI, were novel and potentially important.  I look forward to seeing how the author develops them.
 

This post seems to attempt to recapitulate the history of the AI risk discussion in a few aphoristic paragraphs, for somebody who’s never heard it before.  Who’s the imagined audience for this piece?  Certainly not the habitual Less Wrong reader, who has already read “List of Lethalities” or its equivalent.  But it is equally inappropriate for the AI novice, who needs the alarming facts spelled out more slowly and carefully.  I suspect it would help if the author clarified in their mind who they imagine is reading it.

The post has the imagined structure of a logical proof, with definitions, axioms, and a proposition.  But none of the points follow from each other with the rigor that would justify such a setup.  When I read a math paper, I need all those things spelled out, because I might spend fifteen minutes reading a five-line definition, or need to repeatedly refer back to a theorem from several pages ago.  But this is just an essay, with its lower standards of logical rigor, and a greater need for readability.  You’re just LARPing mathematics.  That doesn’t make it more convincing.

Comment by Carl Feynman (carl-feynman) on Notes from a Prompt Factory · 2024-03-10T18:01:29.957Z · LW · GW

Wow, that was shockingly unpleasant. I regret reading it. I don’t know why it affected me so much, when I don’t think of myself as a notably tender-minded person.

I recognize that like Richard Ngo’s other stories, it is both of good literary quality and a contribution to the philosophical discussion around AI. It certainly deserves a place on this site. But perhaps it could be preceded by a content warning?

Comment by Carl Feynman (carl-feynman) on The Pareto Best and the Curse of Doom · 2024-02-23T15:36:24.485Z · LW · GW

Could you give some examples of the Curse of Doom?  You’ve described it at a high level, but I cannot think of any examples after thinking about it for a while.

I’m highly experienced at the combination of probability theory, algorithms, and big business data processing.  Big businesses have a data problem, they ask a consultant from my company, the consultant realizes there’s a probabilistic algorithm component to the problem, and they call me.  I guess if I didn’t exist, that would be a Curse of Doom, but that seems pretty farfetched to call it a Curse.  If I wasn’t around, a few big companies would have slightly less efficient algorithms.  It’s millions of dollars over the years, but not a big deal in the scheme of things.

Also, “Curse of Doom” is an extremely generic term.  You might find it sticks to people’s brains better if you gave it a more specific name.  “Curse of the missing polymath”?

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-20T13:00:03.558Z · LW · GW

Jellyfish have nematocysts: each is a spear on a rope, with poison on the tip.  The spear has barbs, so when it goes in, it sticks.  Then the jellyfish pulls in its prey.  The spears are microscopic, but very abundant.

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-20T12:43:29.394Z · LW · GW

It’s possible to filter out a constant high value, but not possible to filter out a high level of noise.  Unfortunately warmth = random vibration = noise.  If you want a low-noise thermal camera, you have to cool the detector, or only look for hot things, like engine flares.  Fighter planes do both.
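
A toy numerical illustration (all numbers made up): a constant warm background subtracts out cleanly, but random thermal noise of the same magnitude swamps a faint target:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros(1000)
scene[500] = 1.0                    # a faint hot target we want to find

warm_uniform = scene + 5.0          # constant high value: easy to remove
print((warm_uniform - warm_uniform.mean()).argmax())   # 500 -- target found

warm_noisy = scene + rng.normal(0.0, 5.0, size=1000)   # same warmth, as noise
print(warm_noisy.argmax())          # almost certainly not 500 -- target lost
```
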

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-19T23:21:21.329Z · LW · GW

There are lots of excellent applications for even very simple nervous systems.  The simplest surviving nervous systems are those of jellyfish.  They form a ring of coupled oscillators around the periphery of the organism.  Their goal is to synchronize muscular contraction so the bell of the jellyfish contracts as one, to propel the jellyfish efficiently.  If the muscles contracted independently, it wouldn’t be nearly as good.

Any organism with eyes will profit from having a nervous system to connect the eyes to the muscles.  There’s a fungus with eyes and no nervous system, but as far as I know, every animal with eyes also has a nervous system. (The fungus in question is Pilobolus, which uses its eye to aim a gun.  No kidding!)

Comment by Carl Feynman (carl-feynman) on Alexander Gietelink Oldenziel's Shortform · 2024-02-19T23:18:26.318Z · LW · GW

Snakes have thermal vision, using pits on their cheeks to form pinhole cameras. It pays to be cold-blooded when you’re looking for nice hot mice to eat.

Comment by Carl Feynman (carl-feynman) on johnswentworth's Shortform · 2024-02-18T01:34:51.317Z · LW · GW

Brain expansion also occurs after various insults to the brain.  It’s only temporary, usually, but it will kill unless the skull pressure is somehow relieved.  So there are various surgical methods for relieving pressure on a growing brain.  I don’t know much more than this.

Comment by Carl Feynman (carl-feynman) on Nate Showell's Shortform · 2024-02-18T01:14:42.130Z · LW · GW

Allow me to quote from Lem’s novel “Golem XIV”, which is about a superhuman AI named Golem:

Being devoid of the affective centers fundamentally characteristic of man, and therefore having no proper emotional life, Golem is incapable of displaying feelings spontaneously. It can, to be sure, imitate any emotional states it chooses— not for the sake of histrionics but, as it says itself, because simulations of feelings facilitate the formation of utterances that are understood with maximum accuracy, Golem uses this device, putting it on an "anthropocentric level," as it were, to make the best contact with us.

May not this method also be employed by human writers?

Comment by Carl Feynman (carl-feynman) on Evaluating Solar · 2024-02-18T01:00:51.904Z · LW · GW

When you’re evaluating stocks as an investment, it’s super bad not to take volatility into account.  Stocks do trend up, but over a period of a few years, that trend is comparable to the volatility.  You should put that into your model, and simulate 100 possible outcomes for the stock market.
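
A minimal sketch of what I mean, with assumed (and debatable) return and volatility numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.07, 0.16     # assumed mean annual real return and annual volatility
years, n_sims = 5, 100     # 100 simulated outcomes, over a five-year horizon

annual_returns = rng.normal(mu, sigma, size=(n_sims, years))
final_value = (1 + annual_returns).prod(axis=1)   # growth of $1 over the horizon

print(f"median outcome:  {np.median(final_value):.2f}")
print(f"5th percentile:  {np.percentile(final_value, 5):.2f}")
print(f"95th percentile: {np.percentile(final_value, 95):.2f}")
```

With these numbers a meaningful fraction of the hundred paths end below break-even even after five years, which is exactly the risk a single point estimate of the return hides.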

Comment by Carl Feynman (carl-feynman) on the gears to ascenscion's Shortform · 2024-02-13T22:06:12.410Z · LW · GW

Why?  You're sacrificing a lot of respect.  Like, until I saw this, my attitude was "Gears to Ascension is a good commenter, worthy of paying attention to, while 'Lauren (often wrong)' is a blowhard I've never heard of, who makes assertions without bothering to defend them."  That's based on the handful of posts I've seen since the name change, so you would presumably regain my respect in time.

I think I wouldn't have seen this if I hadn't subscribed to your shortform (I subscribe to only a handful of shortforms, so it's a sign that I want to hear what you have to say).

Comment by Carl Feynman (carl-feynman) on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T21:42:48.277Z · LW · GW

That's not a constraint.  The game is intended to provide evidence as to the containment of a future superhuman intelligence.  GPT-4 is a present-day subhuman intelligence, and couldn't do any harm if it got out.

Comment by Carl Feynman (carl-feynman) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T15:00:56.990Z · LW · GW

Mr. Byrnes is contrasting fast to slow takeoff, keeping the singularity date constant. Mr. Zoellner is keeping the past constant, and contrasting fast takeoff (singularity soon) with slow takeoff (singularity later).

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-10T23:45:50.941Z · LW · GW

You’re right. My analysis only works if a monopoly can somehow be maintained, so the price of AI labor is set to epsilon under the price of human labor. In a market with free entry, the price of AI labor drops to the marginal cost of production, which is putatively negligible. All the profit is dissipated into consumer surplus. Which is great for the world, but now the seven trillion doesn’t make sense again.

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-09T18:10:37.295Z · LW · GW

We can reason back from the quantity of money to how much Altman expects to do with it.  
Suppose we know for a fact that it will soon be possible to replace some percentage of labor with an AI that has negligible running cost.  How much should we be willing to pay for this? It gets rid of opex (operating expenses, i.e. wages) in exchange for capex (capital expenses, i.e. building chips and data centers).  The trade between opex and capex depends on the long-term interest rate and the uncertainty of the project.  I will pull a reasonable number from the air, and say that the project should pay back in ten years.  In other words, the capex is ten times the avoided annual opex.  Seven trillion dollars in capex is enough to employ 10,000,000 people for ten years (to within an order of magnitude).  

That’s a surprisingly modest number of people, easily absorbed by the churn of the economy. When I first saw the number “seven trillion dollars,” I boggled and said “that can’t possibly make sense”.  But thinking about it, it actually seems reasonable.  Is my math wrong?
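
Spelling out the arithmetic, with my assumed numbers made explicit:

```python
capex = 7e12            # dollars reportedly sought
payback_years = 10      # assumed payback period
annual_wage = 70_000    # assumed fully loaded cost of one worker-year, in dollars

avoided_annual_opex = capex / payback_years             # $700 billion per year
workers_replaced = avoided_annual_opex / annual_wage
print(f"{workers_replaced:,.0f} workers replaced")      # 10,000,000
```
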
 

This analysis is so highly decoupled I would feel weird posting it most places.  But Less Wrong is comfy that way.

Comment by Carl Feynman (carl-feynman) on OpenAI wants to raise 5-7 trillion · 2024-02-09T17:41:20.647Z · LW · GW

Right.  He’s raising the money to put into a new entity.

Comment by Carl Feynman (carl-feynman) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T00:51:39.192Z · LW · GW

Here’s how I think about it.

The universe is deterministic, and you have to grind through a deterministic algorithm in your brain in order to do anything.  And when you grind through that algorithm, it feels like wanting to do something, and intending to do it, and doing it, and being satisfied (or not).  This is what it feels like to be the algorithm that steers us through the world.  You have the feeling that, being in a deterministic universe, you should just do stuff, without needing to have effort or intention or desire.  But that’s like imagining you could digest food without secreting stomach juice.  Intention isn’t an extra thing on top of action that could be dispensed with; having an intention is a part of the next-action-deciding algorithm.

So get out there and do stuff.

I don’t know if I’ve explained myself well; I might have just said the same thing three times.  What do you think?

Comment by Carl Feynman (carl-feynman) on What's this 3rd secret directive of evolution called? (survive & spread & ___) · 2024-02-07T18:00:09.667Z · LW · GW

Originate. Works better with the slogan in a different order: Originate, survive and spread.

Comment by Carl Feynman (carl-feynman) on Drone Wars Endgame · 2024-02-01T20:27:56.329Z · LW · GW

For all of history, until just now, the physically smallest military unit has been the individual soldier. Smaller units have not been possible because they can’t carry a human-level intelligence. This article is about what happens when intelligence is available in a smaller package. It seems to have massive consequences for ground warfare.

I think air superiority would still be important, because aircraft can deliver ground assets. A cargo aircraft at high altitude can drop many tons of drones. The drones can glide or autorotate down to ground level, where they engage as OP describes. A local concentration of force that can be delivered anywhere seems like a decisive advantage.

Shooting down aircraft at high altitude requires either large missiles or fighter aircraft. In either case, large radar antennas are needed for guidance. So I don’t think AI lets air warfare be miniaturized the way it does ground warfare.

Comment by Carl Feynman (carl-feynman) on So8res's Shortform · 2024-01-31T19:58:07.800Z · LW · GW

When I see or hear a piece of advice, I check to see what happens if the advice were the reverse.  Often it's also good advice, which means all we can do is take the advice into account as we try to live a balanced life.  For example, if the advice is "be brave!" the reverse is "be more careful".  Which is good advice, too.  

This advice is unusual in that it is non-reversible.  

Comment by Carl Feynman (carl-feynman) on Without Fundamental Advances, Rebellion and Coup d'État are the Inevitable Outcomes of Dictators & Monarchs Trying to Control Large, Capable Countries · 2024-01-31T19:37:19.328Z · LW · GW

This is a spoof post, and you probably shouldn't spend much brain power on evaluating its ideas.  

Comment by Carl Feynman (carl-feynman) on Literally Everything is Infinite · 2024-01-31T19:26:50.528Z · LW · GW

"Literally everything is infinite."

"What about finite things?  Are they infinite?"

"Yes, even finite things are infinite."

"How can that be?"

"I don't know, man, I didn't make it that way."

(This is originally a Discordian teaching.)

Comment by Carl Feynman (carl-feynman) on Things You're Allowed to Do: At the Dentist · 2024-01-29T17:55:03.538Z · LW · GW

Either I’m wrong about what kind of anesthesia it was, or it doesn’t always cause amnesia.