Purplehermann's Shortform

post by Purplehermann · 2024-10-17T20:36:28.896Z · LW · GW · 23 comments


comment by Purplehermann · 2024-11-02T21:37:29.751Z · LW(p) · GW(p)

What happened to regrowing limbs? From what little I understand, with pluripotent stem cells we could do a lot, except cancer.

Why don't we use stem cells instead of drilling for cavities? While teeth contain a few tissue types, tumors are fairly rare in them, likely due to minimal blood flow.

Replies from: carl-feynman, Jemist, ChristianKl
comment by Carl Feynman (carl-feynman) · 2024-11-03T16:33:37.570Z · LW(p) · GW(p)

In general the human body is only capable of healing injuries of a kind that, in a smaller form, would still have left the victim alive in the Stone Age. If an injury is of a type that would be immediately fatal in the Stone Age, there’s no evolutionary pressure to make it survivable. For example, we can regrow peripheral nerves, because losing a peripheral nerve means a numb patch and a weak limb, but you could live with this for a few months even if you’re a caveman. On the other hand, we can’t regrow spinal cord, because a transected spinal cord is fatal within a day or two even given the finest Stone Age nursing care (it didn’t become survivable until about 1946). On the third hand, we can heal brain from strokes, even though brain is more complex than spinal cord, because a small stroke is perfectly survivable as long as you have someone to feed you until you get better. We can survive huge surgical incisions, even though those would be fatal in the Stone Age, because small penetrating wounds were survivable, and the healing mechanisms can just do the same thing all along the incision. This is why we sew wounds up: to convince the healing mechanisms that it’s only a small cut.

Unfortunately this argument suggests regrowing limbs is impossible.  An amputation is bad but survivable, and after it heals, you can still get around.  But many years of spending a lot of bodily energy on regrowing a limb that is pretty useless for most of that time doesn’t seem worthwhile.

Some particular problems I see:

In humans, there’s no mechanism for a growing limb to connect correctly to an adult injury site.  For example, there’s already a bunch of scar tissue there, which has to be cleared away progressively as the limb grows.  Evolution has not seen fit to provide us with this complex biochemistry, unlike the case of salamanders.

Children have a high level of circulating growth hormone, which tells the arm cells how fast to grow.  If you tried to provide this to an adult, their other bones would also grow, causing deformity (acromegaly).

It’s odd that we can’t grow new teeth when the old ones fall out.  More than once, I mean.  Drilling for cavities makes sense because the enamel (outer tooth layer) is essentially dead, and doesn’t regrow.  But we should be able to grow a whole new tooth from the root when we get a cavity.

comment by J Bostock (Jemist) · 2024-11-03T00:24:39.207Z · LW(p) · GW(p)

The regrowing-limb thing is a nonstarter due to the issue of time, if I understand correctly. Salamanders that can regrow limbs take roughly the same amount of time to regrow them as the limb took to grow in the first place, so it would be 1-2 decades before the limb was of adult size. Secondly, it's not as simple as smearing some stem cells onto an arm stump. Limbs form because of specific signalling molecules in specific gradients, and I don't think these are present in an adult body once the limb is made. So you'd need a socket which produces them, which you'd have to build in the lab, attach to a blood supply to feed the limb, etc.

Replies from: Purplehermann
comment by Purplehermann · 2024-11-03T17:30:05.061Z · LW(p) · GW(p)

The first issue seems minor - even if true, a 40-year-old man could have a new arm by 60.

Replies from: Jemist
comment by J Bostock (Jemist) · 2024-11-03T19:05:30.466Z · LW(p) · GW(p)

For a good few years you'd have a tiny baby limb, which would make it impossible to wear a normal prosthetic. I also think most people just don't want a tiny baby limb attached to them. I don't think growing it in the lab for a decade is feasible, for a variety of reasons. I also don't know how they planned to wire in the nervous system, ensure the bone sockets attach properly, or connect the right blood vessels. The challenge is just immense, and it gets less and less worthwhile over time as trauma surgery and prosthetics improve.

comment by ChristianKl · 2024-11-04T13:42:46.976Z · LW(p) · GW(p)

There seem to be clinical trials underway for regrowing teeth in Japan: https://www.popularmechanics.com/science/health/a60952102/tooth-regrowth-human-trials-japan/

comment by Purplehermann · 2024-12-05T21:51:27.524Z · LW(p) · GW(p)

DOGE.

This is an opportunity to work with the levers of real power. If there are 5 people here who work on this for two years, that's an in with Senators, Congressmen, bureaucrats and possibly Musk.

Just showing up and making connections while doing hard work is the most efficient way to get power right now, in the time before AI gets dangerous and power will be very relevant.

I do not believe that this should be taken as an opportunity to evangelize. People, not ideology.

This seems like something worth funding if someone would like to do it but can't afford to.

comment by Purplehermann · 2024-12-27T10:47:20.063Z · LW(p) · GW(p)

A message from Claude:

'''This has been a fascinating and clarifying discussion. A few key insights I'll take away:

The distinction between bounded and unbounded optimization is more fundamental than specific value differences between AIs. The real existential threat comes from unbounded optimizers. The immune system/cancer metaphor provides a useful framework - it's about maintaining a stable system that can identify and prevent destructive unbounded growth, not about enforcing a single value set. The timing challenge is critical but more specific than I initially thought - we don't necessarily need the "first" AGI to be perfect, but we need bounded optimizers to establish themselves before any unbounded ones emerge.

Some questions this raises for further exploration:

What makes a Schelling fence truly stable under recursive self-improvement? Could bounded optimizers coordinate even with different base values, united by shared meta-level constraints? Are there ways to detect early if an AI system will maintain bounds during capability gain?

The framing of "cancer prevention" versus "value enforcement" feels like an important shift in how we think about AI governance and safety. Instead of trying to perfectly specify values, perhaps we should focus more on creating robust self-limiting mechanisms that can persist through capability gains.'''

comment by Purplehermann · 2024-12-26T14:03:10.481Z · LW(p) · GW(p)

I notice that I'm confused.

Google made an amazing AI for playing chess, by allowing it to make its own data.

Why hasn't the same thing happened for programming? Have it generate a bunch of mockup pictures with functionality expectations (acting as a PM, basically), have it write and run code, check the output against the requirements it created, then try again when it doesn't come out right.

This is even easier where the PM is unnecessary - LeetCode, Codewars, Project Euler...

You could also pay PMs to work with the AI developers, instead of the code tutors xAI is hiring.

There seems to be a preference for having the LLMs memorize code instead of figuring things out themselves.

If you run out of things like that you could have it run random programs in different languages, only learning from those that work.

I haven't used Genesis, but that also seems like a mostly-built validator for programs that AIs can use to create and train on their own data.

With the amount of compute going into training, it should be easy to create huge amounts of data?
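The generate-then-verify loop described above can be sketched in a few lines (all names here are illustrative, not any lab's actual pipeline): run each candidate program against the requirements it was given, and keep only the candidates that pass as training data.

```python
import subprocess
import sys
import tempfile

def passes_spec(code: str, test_cases: list[tuple[str, str]]) -> bool:
    """Run candidate code against stdin -> stdout pairs; only passing
    candidates would be saved as training data."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(
                [sys.executable, path],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=5,  # discard candidates that don't terminate
            )
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0 or result.stdout.strip() != expected:
            return False
    return True

# A trivial candidate "generated by the model": double the input.
candidate = "n = int(input())\nprint(2 * n)\n"
print(passes_spec(candidate, [("3", "6"), ("10", "20")]))  # True
```

Crashes, wrong output, and non-termination all count as failures here, which is exactly the "only learn from what works" filter.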

Replies from: carl-feynman, carl-feynman
comment by Carl Feynman (carl-feynman) · 2024-12-26T16:47:49.692Z · LW(p) · GW(p)

This isn’t crazy — people have tried related techniques. But it needs more of the details thought out.

In the chess example, the AIs start out very stupid, being wired at random.  But in a game between two idiots, moving at random, eventually someone is going to win.  And then you reinforce the techniques used by the winner, and de-reinforce the ones used by the loser.  In any encounter, you learn, regardless of who wins.  But in an encounter between a PM and a programmer, if the programmer fails, who gets reinforced?  It might be because the programmer is dumb, and should be de-reinforced.  But it might be because the PM is dumb, and asked for something impossible or far beyond what can be done, in which case it should be de-reinforced.  But it might be because the PM came up with a task just barely beyond the programmer’s ability, which is good and should be reinforced.  We somehow need to keep the PM producing problems which are hard but possible.  Maybe the programmer could be tasked with coming up with either a solution or a proof of impossibility?  

AlphaGo had a mechanism which tracked how important each move was.  It was trained to predict the probability that white would win, on each position encountered in the game.    Moves where this probability swung wildly were given a larger weight in reinforcement.  This was important for concentrating training on decisive moves, allowing the extraction of information from each move instead of each game. It’s not clear if this is possible in the programming task.
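A toy illustration of that weighting idea (my own simplification, not AlphaGo's actual scheme): weight each move by the absolute swing in the predicted win probability, normalized over the game, so decisive moves dominate the reinforcement signal.

```python
def move_weights(win_probs: list[float]) -> list[float]:
    """Weight each move by how much it swung the value estimate.

    win_probs[i] is the predicted P(white wins) before move i+1;
    moves with large swings get most of the training weight.
    """
    swings = [abs(after - before) for before, after in zip(win_probs, win_probs[1:])]
    total = sum(swings) or 1.0  # avoid dividing by zero in a flat game
    return [s / total for s in swings]

# The collapse from 0.51 to 0.15 at the third move dominates the weights.
probs = [0.50, 0.52, 0.51, 0.15, 0.12]
print(move_weights(probs))
```

Whether a programming task offers an analogous per-step value signal (rather than one pass/fail bit per program) is exactly the open question in the parent comment.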

Replies from: Purplehermann
comment by Purplehermann · 2024-12-26T22:33:55.152Z · LW(p) · GW(p)

The point was more about creating your own data being easy: just generate code, then check it by running it. Save this code, and later use it for training.

If we wanted to go the way of AlphaZero it doesn't seem crazy.

De-reinforce commands, functions, and programs which output errors, for a start.

I didn't think of the PM as being trained by these games; that's interesting. Maybe have two instances competing to get closer on some test cases the PM can prepare to go with the task, and have them competing on time, compute, memory, and accuracy. You can de-reinforce the less accurate, and if both are fully accurate they can compete on time, memory, and CPU.

I'm not sure "hard but possible" is the bar - you want lots of examples of what doesn't work along with what does, and you want them for easy problems and hard ones alike, so the model learns everything.
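The two-instance competition could look something like this minimal sketch (hypothetical names; the ranking criteria are the ones from the comment: accuracy first, then speed):

```python
import time

def evaluate(fn, cases):
    """Score a candidate as (number correct, negative wall time).
    Runtime errors simply count as wrong answers."""
    correct = 0
    start = time.perf_counter()
    for args, expected in cases:
        try:
            if fn(*args) == expected:
                correct += 1
        except Exception:
            pass
    return (correct, -(time.perf_counter() - start))

def winner(f, g, cases):
    """Compare two candidate solutions; the loser would be de-reinforced.
    Tuple comparison ranks by accuracy first, speed as tiebreaker."""
    return f if evaluate(f, cases) > evaluate(g, cases) else g

cases = [((2, 3), 5), ((10, -4), 6)]
add = lambda a, b: a + b  # fully accurate
mul = lambda a, b: a * b  # wrong on both cases
print(winner(add, mul, cases) is add)  # True
```

Tuple comparison gets the "accuracy first, then speed" ordering for free; memory and compute limits would need a sandboxed runner rather than in-process calls.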

comment by Carl Feynman (carl-feynman) · 2024-12-26T14:23:49.272Z · LW(p) · GW(p)

What’s a PM?

Replies from: Purplehermann
comment by Purplehermann · 2024-12-26T16:06:26.444Z · LW(p) · GW(p)

Product manager, the non-technical counterpart to a team lead on a development team.

comment by Purplehermann · 2024-11-02T21:34:47.803Z · LW(p) · GW(p)

Why aren't research companies set up in Africa/the Middle East/China for human research? Cut out most of the bureaucracy and find out fast if something works; if it does, set up a company in a first-world country to go through the steps.

Replies from: JBlack, Everything-Apologist
comment by JBlack · 2024-11-03T02:04:00.509Z · LW(p) · GW(p)

Research companies work best when there's plenty of infrastructure that can supply stuff they need to do the research. Including, to mention one recent case, electricity [EA · GW]. It also helps to be in an area where there is a stable government that can protect the research site from civil or military unrest, and from too much (or too unpredictable) corruption. You also want it to be a place where your researchers are happy to live while they do their research, and where you can relatively easily recruit other skilled workers.

China does meet these requirements, but it is not exactly lacking in bureaucracy so I'm not sure why it made the list. If you're doing research involving human trials of some sort, you also want to be able to communicate well with the participants so extensive knowledge of the language and culture will be very useful.

All that said, plenty of organizations do carry out research all over the world, not just in rich countries with a lot of bureaucracy.

comment by Everything-Apologist · 2024-11-03T15:41:50.340Z · LW(p) · GW(p)

Speaking as a low-level employee in the medtech industry, I can report that a lot of companies are satisfied doing human research in Eastern Europe, where costs are low and regulations lax, but infrastructure is still available. I have heard that there is a bit of "rotation" among these countries over time, since costs and regulations tend to increase wherever human research becomes popular.

comment by Purplehermann · 2025-01-06T19:32:04.008Z · LW(p) · GW(p)

Writing tests, QA, and observability are probably going to stay for a while and work hand in hand with AI programming as other forms of programming start to disappear, at least until AI programming becomes very reliable.

This should allow working code to be produced far faster, likely giving more high-quality 'synthetic' data, but more importantly massively changing the economics of knowledge work.

comment by Purplehermann · 2024-10-17T20:36:28.992Z · LW(p) · GW(p)

Devops Mentality is correct:

Friction is a big deal.

Being able to change code and deploy immediately (or in a few minutes) matters immensely.

This might just be feedback loops at an extreme, but I believe it's something more.

The benefit of going from 20wpm to 40wpm was not a ×2 to my writing experience; it was more like ×8. I fully intend to reach 60wpm.

It was closer to a ×2 for my development experience, which is interesting, as most of software development isn't the actual typing. Another anecdote: the best programmers I know all type at 60+wpm (there was one React front-end guy who was good, but mostly at design).

Reducing friction is underrated: even if friction is only 10% of a job (as typing is in coding), substantially reducing that friction matters by more than 10%.

This may have to do with flow, concentration, or enjoyment.

comment by Purplehermann · 2024-12-14T17:19:17.345Z · LW(p) · GW(p)

University education can be made free pretty cheaply.

The cost at scale is in the credentials: you need to make tests, test students, and check those tests.

The classes can be filmed once, and updated every few years if necessary. Each course can have a forum board for discussion and meeting up for studying in groups.

See course credentials for things like AWS.

Replies from: Dagon, Viliam
comment by Dagon · 2024-12-15T00:29:04.013Z · LW(p) · GW(p)

Well, no. The easy part of a university undergrad education is already pretty cheap (to provide). It's bundled with a lot of things that make it expensive to buy traditionally.

Replies from: Purplehermann
comment by Purplehermann · 2024-12-15T22:22:57.446Z · LW(p) · GW(p)

So unbundle it?

comment by Viliam · 2024-12-14T20:01:30.944Z · LW(p) · GW(p)

Whether something is technically and economically possible is just a part of the puzzle. The remaining part is whether the people who make decisions have the incentives to do so.

According to Bryan Caplan, schools certify intelligence, conscientiousness, and conformity. Online learning would certify intelligence and conscientiousness (even more than school attendance), but not conformity. Would employers be okay with that?

Also, some prestigious universities select for having tons of money and/or the right social connections. The education is not the point. The point is that your parents had to be a part of the social "inner circle" to get you to the university, and you spent a few years socializing with other kids of the same kind, establishing the "inner circle" of the next generation. Making the credentials available to hoi polloi would defeat the entire purpose.

Replies from: Purplehermann
comment by Purplehermann · 2024-12-14T21:56:07.784Z · LW(p) · GW(p)

There is a beautiful thing called unilateral action.

I believe most employers mostly don't care about conformity as such.

The inner circle stuff is only true of elite schools AFAIK. You can outcompete the rest of the universities.