Posts

Purplehermann's Shortform 2024-10-17T20:36:28.896Z

Comments

Comment by Purplehermann on How to Make Superbabies · 2025-02-20T20:42:00.933Z · LW · GW

He already addressed this. 

If somehow international cooperation gives us a pause on going full AGI or at least no ASI - what then?

Just hope it never happens, like nuclear war? 

The answer now is to set later generations up to be more able. 

This could mean doing fundamental research (whether in AI alignment or international game theory or something else), it could mean building institutions to enable it, and it could mean making them actually smarter.

Genes might be the cheapest/easiest way to affect marginal chances, given the talent already involved in alignment and the amount of resources required to get involved politically or in building institutions.

Comment by Purplehermann on xpostah's Shortform · 2025-02-20T20:05:12.808Z · LW · GW

A few notes on massive cities:

Cities of 10Ms exist. There is always some difficulty in scaling, but scaling another 1.5-2 OOMs doesn't seem like it would be impossible to figure out if someone were particularly motivated. 

 

China and other countries have built large cities and then failed to populate them

 

The max population you wrote (1.6B) is bigger than China, bigger than Africa, and similar to both American continents plus Europe.

Which is part of why no one really wants to build something so big, especially not all at once.

 

Everything is opportunity cost, and the question of alternate routes matters a lot in deciding whether to pursue something. Throwing everything and the kitchen sink at something costs a lot of resources.

Given that VR development is currently underway regardless, starting this resource-intensive project, which may be made obsolete by the time it's done, is an expected waste of resources. If VR hit a real wall that might change things (though see above).

If this giga-city would be expected to 1000x tech progress or something crazy then sure, waste some resources to make extra sure it happens sooner rather than later.

 

Tl;dr:

Probably wouldn't work, there's no demand, it's very expensive, and VR is being developed and would actually be able to do what you're hoping for, but even better.

Comment by Purplehermann on xpostah's Shortform · 2025-02-20T16:56:04.825Z · LW · GW

VR might be cheaper.

Comment by Purplehermann on How to Make Superbabies · 2025-02-20T08:27:00.107Z · LW · GW

Have you thought about how to get the data yourself?

 

Perhaps offering payment to people willing to get IQ tested and give a genetic sample, and paying more for higher scores on the test?

I understand that money is an issue, but as long as you're raising, this seems like an area you could plug infinite money into and get returns.

Comment by Purplehermann on Born on Third Base: The Case for Inheriting Nothing and Building Everything · 2025-02-19T14:05:25.667Z · LW · GW

This seems... evil, or at the very least like zero-sum thinking, to me.

Would you want to stop the successful from paying for their children's education? Spending their time on raising their children? Do you want to take all children away from their parents to make sure they aren't put on different footing? Perhaps genetically enforce equality?

I would much rather governments try to preserve positive hereditary dynamics while intervening in negative ones. 

We'll have won once all trees are positive and successful, and bad apples do not create generations of bad trees

There is something fundamentally compelling about the idea that every generation should start fresh, free from the accumulated advantages or disadvantages of their ancestors.
...
...
The death tax does not punish success—it prevents success from becoming hereditary. It ensures that the cycle of opportunity begins anew with each generation.
Comment by Purplehermann on Purplehermann's Shortform · 2025-02-19T13:53:05.111Z · LW · GW

Keeping humans around is the correct move for a powerful AGI, assuming it isn't being existentially threatened. 

For a long while human inputs will be fairly different from silicon inputs, and humans can do work - intellectual or physical - and no real infrastructure is necessary for human upkeep or reproduction (compared to datacenters). 

Creating new breeds of human with much higher IQs and creating (or having them create) neuralink-like tech to cheaply increase human capabilities will likely be a very good idea for AGIs.

Most people here seem worried about D tier ASIs; ASIs should see the benefits of E tier humans (250+ IQ and/or RAM added through Neuralink-like tech) and even D tier humans (genesmith on editing, 1500+ IQs with cybernetics vastly improving cognition and capability).

 

 

'Sparing a little sunlight' for an alternative lifeform that creates a solid amount of redundancy, is more efficient for certain tasks, allows for more diverse research, and has minimal up-front costs is overdetermined.

Comment by Purplehermann on How AI Takeover Might Happen in 2 Years · 2025-02-19T13:30:14.744Z · LW · GW

The Fønix team is just heating water, which is great, but actual distillation (with automated re-adding of specific minerals) is probably what you actually want, so as to avoid all contamination, not just biological contamination.

In a structure this size, growing food isn't really worth it; storing food for 10 years is actually easier (according to Claude). It does need to come stocked, though.

Comment by Purplehermann on A History of the Future, 2025-2040 · 2025-02-19T12:33:21.820Z · LW · GW

It's more that it stops being relevant to humans, as keeping humans in the loop slows down the exponential growth

 

I do think VR and Neuralink-like tech will be a very big deal though, especially in regard to allowing people experiences that would otherwise be expensive in atoms.

Comment by Purplehermann on What About The Horses? · 2025-02-11T15:28:08.650Z · LW · GW

At what IQ do you think humans are able to "move up to higher levels of abstraction"? 

(Of course this assumes AIs don't get the capability to do this themselves)

Re robotics advancing while AI intelligence stalls: robotics advancing should be enough to replace any people who can't take advantage of the automation of their current jobs.

 

I don't think you're correct in general,  but it seems that automation will clear out at least the less skilled jobs in short order (decades at most)

Comment by Purplehermann on Wired on: "DOGE personnel with admin access to Federal Payment System" · 2025-02-06T22:25:57.183Z · LW · GW

I very much hope the computers brought in were vetted and kept airgapped.

You keep systems separate, yes. 

For some reason I assumed that write permissions were restricted to users in the actual system/secure network, and that any data exporting would be into secured systems. If they created a massive security leak for other nations to exploit, that's a crux for me on whether this was reckless.

 

Added: what kind of idiot purposely puts data in the wrong system? The DOGE guys doing this could somehow make sense, but government workers??

Comment by Purplehermann on Wired on: "DOGE personnel with admin access to Federal Payment System" · 2025-02-06T22:16:22.854Z · LW · GW

No.

I'm not familiar with public documentation on this.

Comment by Purplehermann on Wired on: "DOGE personnel with admin access to Federal Payment System" · 2025-02-06T15:50:19.320Z · LW · GW

I know people who have gotten access to similarly important governmental systems at younger ages. 

Don't worry about it too much. 

 

If they abuse it, it'll cost their group lots of political goodwill. (A recursive remove, for example.)

Comment by Purplehermann on Yudkowsky on The Trajectory podcast · 2025-01-26T00:35:08.468Z · LW · GW

Musk at least is looking to upgrade humans with Neuralink

If he can add working memory, it could be a multiplier for human capabilities, likely to scale with increased IQ.

 

Any reason the $4M isn't getting funded? 

Comment by Purplehermann on Purplehermann's Shortform · 2025-01-26T00:18:51.129Z · LW · GW

Any good, fairly up-to-date lists of the relevant papers to read to catch up with AI research (as far as a crash course will take a newcomer)?

 

Preferably one that will be updated

Comment by Purplehermann on We probably won't just play status games with each other after AGI · 2025-01-15T22:42:41.970Z · LW · GW

Reading novels with ancient, powerful beings is probably the best direction you have for imagining how status games look among creatures that are only loosely human.

 

Resources being bounded, there will tend to always be larger numbers of smaller objects (given that those objects are stable).

There will be tiers of creatures. (In a society where this is all relevant)

While a romantic relationship skipping multiple tiers wouldn't make sense,  a single tier might.

 

The rest of this is my imagination :)

Base humans will be F tier, the lowest category while being fully sentient. (I suppose dolphins and similar would get a special G tier).

Basic AGIs (capable of everything a standard human is, plus all the spiky capabilities) and enhanced humans are E tier.

Most creatures will be here.

D tier:

Basic ASIs and super-enhanced humans (gene modding for 180+ IQ plus SOTA cyborg implants) will be the next tier; there will be a bunch of these in absolute terms, but rarer relative to the earlier tier.

C tier:

Then come Alien Intelligences: massive compute resources supporting ASIs trained on immense amounts of ground-reality data, and biological creatures that have been redesigned fundamentally to function at higher levels and optimally synergize with neural connections (whether with other carbon-based or silicon-based lifeforms).

B tier:

Planet sized clusters running ASIs will be a higher tier.

A, S tiers:

Then you might get entire stars, then galaxies.

There will be far fewer at each level.

 

Most tiers will have a -, neutral or +.

-: prototype, first or early version. Qualitatively smarter than the tier below, but with non-optimized use of resources; often not the largest gap from the + of the earlier tier.

Neutral: most low-hanging optimizations and improvements at this tier, and some harder ones, are implemented.

+: highly optimized by iteratively improved intelligences or groups of intelligences at this level, perhaps even by a tier above. 

Comment by Purplehermann on Purplehermann's Shortform · 2025-01-06T19:32:04.008Z · LW · GW

Writing tests, QA and Observability are probably going to stay for a while and work hand in hand with AI programming, as other forms of programming start to disappear. At least until AI programming becomes very reliable.

This should allow for working code to be produced way faster, likely giving more high-quality 'synthetic' data, but more importantly massively changing the economics of knowledge work

Comment by Purplehermann on Biological risk from the mirror world · 2025-01-02T20:33:16.104Z · LW · GW

Is there a reason that random synthetic cells will not be mirror cells?

Comment by Purplehermann on Biological risk from the mirror world · 2025-01-02T05:26:20.266Z · LW · GW

https://english.elpais.com/science-tech/2024-12-31/protocells-emerge-in-experiment-simulating-lifeless-world-there-is-no-divine-breath-of-life.html

We have here some scientists making cells. Looks like a dangerous direction

Comment by Purplehermann on By default, capital will matter more than ever after AGI · 2024-12-29T16:47:06.213Z · LW · GW

Humans seem way more energy- and resource-efficient in general; paying for top talent is an exception, not the rule - usually it's not worth paying for top talent.

We're likely to see many areas where it's economically better to save on compute/energy by having a human do some of the work.

Split information workers vs physical workers too; I expect them to have very different distributions of what the most useful configuration is.

This post ignores likely scientific advances in bioengineering and cyborg surgeries; I expect humans to be way more efficient for tons of jobs once the standard is 180 IQ with a massive working memory.

Comment by Purplehermann on Hire (or Become) a Thinking Assistant · 2024-12-27T11:05:42.663Z · LW · GW

I do things like this at times with my teams.

Important things:

  • Don't think you need to solve the actual problem for them

  • Do solve 'friction' for them as much as possible

  • Do feel free to look up other sources so you can offer more perspective and to take off the load of having to find relevant info

  • positive energy, attentive etc

  • if they're functioning well just watch and listen while being interested and unobtrusive, at most very minor inputs if you're pretty sure it'll be helpful

If stuck at a crossroads, ask them how long they think each path will take / how hard it'll be, and give them feedback if you think they're wrong. Help them start working on one; people can get stuck for longer than it would take to actually do one option.

  • if lost, methodically go through the different areas where the issue could be, and all the directions they could take for each area and in general. You don't need to think these up, but keep track of them and help guide towards picking apart the problem and solution spaces. This takes some mental load off.
Comment by Purplehermann on Purplehermann's Shortform · 2024-12-27T10:47:20.063Z · LW · GW

A message from Claude:

'''This has been a fascinating and clarifying discussion. A few key insights I'll take away:

The distinction between bounded and unbounded optimization is more fundamental than specific value differences between AIs. The real existential threat comes from unbounded optimizers. The immune system/cancer metaphor provides a useful framework - it's about maintaining a stable system that can identify and prevent destructive unbounded growth, not about enforcing a single value set. The timing challenge is critical but more specific than I initially thought - we don't necessarily need the "first" AGI to be perfect, but we need bounded optimizers to establish themselves before any unbounded ones emerge.

Some questions this raises for further exploration:

What makes a Schelling fence truly stable under recursive self-improvement? Could bounded optimizers coordinate even with different base values, united by shared meta-level constraints? Are there ways to detect early if an AI system will maintain bounds during capability gain?

The framing of "cancer prevention" versus "value enforcement" feels like an important shift in how we think about AI governance and safety. Instead of trying to perfectly specify values, perhaps we should focus more on creating robust self-limiting mechanisms that can persist through capability gains.'''

Comment by Purplehermann on The Field of AI Alignment: A Postmortem, and What To Do About It · 2024-12-26T23:22:57.665Z · LW · GW

A few thoughts.

  1. Have you checked what happens when you throw physics postdocs at the core issues - do they actually get traction, or just stare at the sheer cliff for longer while thinking? Did anything come out of the Iliad meeting half a year later? Is there a reason that more standard STEMs aren't given an intro into some of the routes currently thought possibly workable, so they can feel some traction? I think any of these could be true - that intelligence and skills aren't actually useful right now, that the problem is not tractable, or that better onboarding could let the current talent pool get traction - and either way it might not be very cost-effective to get physics postdocs involved.

  2. Humans are generally better at doing things when they have more tools available. While the 'hard bits' might be intractable now, they could well be easier to deal with in a few years after other technical and conceptual advances in AI, and even other fields. (Something something about prompt engineering and Anthropic's mechanistic interpretability from inside the field and practical quantum computing outside).

This would mean squeezing every drop of usefulness out of AI at each level of capability, to improve general understanding and to leverage it into breakthroughs in other fields before capabilities increase further. In fact, it might be best to sabotage semiconductor/chip production once the models are one gen before superintelligence/extinction/whatever, giving maximum time to leverage maximum capabilities and tackle alignment before the AIs get too smart.

  3. How close is mechanistic interpretability to the hard problems, and what makes it not good enough?
Comment by Purplehermann on Purplehermann's Shortform · 2024-12-26T22:33:55.152Z · LW · GW

The point was more about creating your own data being easy: just generate code, then check it by running it. Save this code, and later use it for training.

If we wanted to go the way of AlphaZero it doesn't seem crazy.

De-enforce commands, functions, programs which output errors, for a start.

I didn't think of the PM as being trained by these games; that's interesting. Maybe have two instances competing to get closer on some test cases the PM can prepare to go with the task, and have them competing on time, compute, memory, and accuracy. You can de-enforce the less accurate, and if both are fully accurate they can compete on time, memory, and CPU.

I'm not sure "hard but possible" is the bar - you want lots of examples of what doesn't work along with what does, and you want it for easy problems and hard ones, so the model learns everything.
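
A minimal sketch of what that competitive scoring could look like, under my own assumptions: two candidate solutions are scored on the PM-prepared test cases, ranked on accuracy first and runtime as the tiebreaker, and the loser gets de-enforced. The `score` and `preference` helpers are hypothetical names for illustration, not any existing API.

```python
import time
from typing import Callable

Tests = list[tuple[str, str]]  # (input, expected output) pairs prepared by the PM


def score(solve: Callable[[str], str], tests: Tests) -> tuple[int, float]:
    """Return (number of tests passed, total runtime in seconds)."""
    passed, elapsed = 0, 0.0
    for given, expected in tests:
        start = time.perf_counter()
        try:
            ok = solve(given).strip() == expected.strip()
        except Exception:
            ok = False  # a candidate that errors out simply never wins
        elapsed += time.perf_counter() - start
        passed += ok
    return passed, elapsed


def preference(candidate_a, candidate_b, tests: Tests):
    """Pick the winner: accuracy first, time as tiebreaker; None means no signal."""
    acc_a, t_a = score(candidate_a, tests)
    acc_b, t_b = score(candidate_b, tests)
    if (acc_a, -t_a) == (acc_b, -t_b):
        return None
    return "a" if (acc_a, -t_a) > (acc_b, -t_b) else "b"


# Toy usage: two attempts at "reverse the string"; the buggy one gets de-enforced.
tests = [("abc", "cba"), ("hello", "olleh")]
fast = lambda s: s[::-1]
buggy = lambda s: s  # forgets to reverse
print(preference(fast, buggy, tests))  # -> "a"
```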

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-26T16:06:26.444Z · LW · GW

Product manager, non-technical counterpart to a team lead in a development team

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-26T14:03:10.481Z · LW · GW

I notice that I'm confused.

Google made an amazing AI for playing chess, by allowing it to make its own data.

Why hasn't the same thing happened for programming? Have it generate a bunch of pictures with functionality expectations (acting as a PM, basically), have it write and run code, check the output against the requirements it created, then try again when it doesn't come out right.

This is even easier where the PM is unnecessary - LeetCode, Codewars, Euler...

You could also pay PMs to work with the AI developers, instead of the code tutors xAI is hiring.

There seems to be a preference for having the LLMs memorize code instead of figuring things out themselves.

If you run out of things like that you could have it run random programs in different languages, only learning from those that work.

I haven't used Genesis, but that also seems like a mostly-built validator for programs that AIs can use to create and train on their own data.

With the amount of compute going into training, it should be easy to create huge amounts of data?
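
As a rough sketch of the loop I have in mind, under my own assumptions: generate candidate programs for a task, run them against the expected input/output pairs, and keep only the ones that actually pass as synthetic training data. `generate_candidates` stands in for whatever code-generating model you have; `dummy_generator` below is just a placeholder, not a real API.

```python
import subprocess
import tempfile


def run_candidate(source: str, stdin_text: str, timeout_s: float = 5.0):
    """Run a candidate Python program in a subprocess; return stdout, or None on error/timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], input=stdin_text,
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else None
    except subprocess.TimeoutExpired:
        return None


def collect_training_data(task, generate_candidates, n_candidates=8):
    """Keep (prompt, solution) pairs whose candidate passes every test case."""
    kept = []
    for source in generate_candidates(task["prompt"], n=n_candidates):
        passed = all(
            (out := run_candidate(source, case["input"])) is not None
            and out.strip() == case["expected"].strip()
            for case in task["tests"]
        )
        if passed:
            kept.append({"prompt": task["prompt"], "solution": source})
    return kept


# Example task in the spirit of LeetCode/Codewars problems, plus a stand-in
# "generator" returning one correct and one buggy candidate.
task = {
    "prompt": "Read an integer n from stdin and print 1 + 2 + ... + n.",
    "tests": [{"input": "3\n", "expected": "6"}, {"input": "10\n", "expected": "55"}],
}

def dummy_generator(prompt, n):
    return [
        "n = int(input())\nprint(n * (n + 1) // 2)",  # correct
        "n = int(input())\nprint(n * n)",             # wrong
    ]

print(len(collect_training_data(task, dummy_generator)))  # -> 1
```

Only the passing solution survives as training data; the failing one can still be logged as a negative example if you want "what doesn't work" alongside "what does".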

Comment by Purplehermann on A Matter of Taste · 2024-12-19T18:37:51.903Z · LW · GW

There's a certain breadth of taste in reading you can only acquire by reading (and enjoying!) low-quality internet novels after you've already developed sophisticated taste.

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-15T22:22:57.446Z · LW · GW

So unbundle it?

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-14T21:56:07.784Z · LW · GW

There is a beautiful thing called unilateral action.

I believe most employers mostly don't care about conformity as such.

The inner circle stuff is only true of elite schools AFAIK. You can outcompete the rest of the universities

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-14T17:19:17.345Z · LW · GW

University education can be made free pretty cheaply.

The cost at scale is in the credentials - you need to make tests, test students, and check those tests.

The classes can be filmed once, and updated every few years if necessary. Each course can have a forum board for discussion and meeting up for studying in groups.

See course credentials for things like AWS.

Comment by Purplehermann on Biological risk from the mirror world · 2024-12-13T13:49:59.020Z · LW · GW

This implies that we should stop life from developing independently, and that if contact is made with aliens, then the human making contact and any environment that's been in the chain of proximity should be spaced.

Comment by Purplehermann on Should there be just one western AGI project? · 2024-12-06T12:05:53.047Z · LW · GW

Start small; once you have an attractive umbrella working for a few projects you can take in the rest of the US, then the world.

Comment by Purplehermann on Should there be just one western AGI project? · 2024-12-06T12:04:38.510Z · LW · GW

In my work I aggregate multiple other systems' work as well as doing my own.

I think a similar approach may be useful. Create standardized outputs each project has to send to the overarching org; allow each to develop their own capabilities and, to a degree, to shape what is required so that those outputs meaningfully reflect the capabilities and R&D of the project.

This lays the groundwork for self-regulation, keeps most of the power with the org (assuming it is itself good at actual research and creation), conditional on the org playing nice and being upstanding with the contributing members, and does so without limiting any project before it is necessary.

Comment by Purplehermann on Purplehermann's Shortform · 2024-12-05T21:51:27.524Z · LW · GW

DOGE.

This is an opportunity to work with the levers of real power. If there are 5 people here who work on this for two years, that's an in with Senators, Congressmen, bureaucrats, and possibly Musk.

Just showing up and making connections while doing hard work is the most efficient way to get power right now, in the time before AI gets dangerous and power will be very relevant.

I do not believe that this should be taken as an opportunity to evangelize. People, not ideology.

This seems like something worth funding if someone would like to but can't afford it.

Comment by Purplehermann on Purplehermann's Shortform · 2024-11-03T17:30:05.061Z · LW · GW

The first issue seems minor - even if true, a 40-year-old man could have a new arm by 60.

Comment by Purplehermann on Purplehermann's Shortform · 2024-11-02T21:37:29.751Z · LW · GW

What happened to regrowing limbs? From what little I understand, with pluripotent stem cells we could do a lot, except cancer.

Why don't we use stem cells instead of drilling for cavities? While there are a few types of tissue, tumors are fairly rare in teeth, likely due to minimal blood flow.

Comment by Purplehermann on Purplehermann's Shortform · 2024-11-02T21:34:47.803Z · LW · GW

Why aren't research companies made in Africa/the Middle East/China for human research - cut out most of the bureaucracy and find out fast if something works; if it does, set up a company in a first-world country to go through the steps?

Comment by Purplehermann on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-18T11:08:24.952Z · LW · GW

Something like iterative/cliff, with fast and slow expressing time scales

Comment by Purplehermann on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-18T10:44:50.572Z · LW · GW

Can you sort the poll options by popularity?

Comment by Purplehermann on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-18T10:44:18.983Z · LW · GW

Iterative/Sudden

Comment by Purplehermann on Overview of strong human intelligence amplification methods · 2024-10-18T10:40:06.579Z · LW · GW

I can only describe the Product, not the tech. The idea would be to plug in a bigger working memory in the area of the brain currently holding working memory. This is the piece I think matters most

On reflection, something like Wolfram Alpha should be enough for calculations, and a well-indexed reservoir of knowledge with an LLM pulling up relevant links with summaries should be good enough for the rest.

Comment by Purplehermann on Species as Canonical Referents of Super-Organisms · 2024-10-18T10:35:52.341Z · LW · GW

Inside the super organism you are correct, but the genome is influenced by outside forces as a whole over the ages - and anywhere this breaks down for long enough, you eventually get two species instead of one.

Therefore outside groups can treat the species as a super organism in general; the individual members must be dealt with individually when there is prior loyalty to another member of the other species.

For example, an Englishman and his dog vs an Eskimo and his dog. The two humans may be against each other, the dogs may be against each other, but the opposite human/dog interactions would be standard if they weren't already attached to other in-species members.

Comment by Purplehermann on Species as Canonical Referents of Super-Organisms · 2024-10-18T10:29:17.783Z · LW · GW

This gives the bones of a proper theoretical foundation on the moral duties between members of different species.

For example, this would back the intuition that eating dog is worse than eating a bear or octopus, regardless of intelligence, and the intuition behind killing rats out of hand.

Comment by Purplehermann on Isaac King's Shortform · 2024-10-18T10:24:18.617Z · LW · GW

They're not identical. First, they have a different status, much the same as citizens and aliens have different rights. Second, different species of animals have different relationships with humanity:

Dogs are bred to be symbiotic companions.
Cats are parasites if allowed, pest control if tolerated.
Rats are disease-vector scavengers.
Chickens are livestock - they lay infertile eggs for human consumption!

Comment by Purplehermann on Overview of strong human intelligence amplification methods · 2024-10-18T10:02:06.378Z · LW · GW

I'm not sure how well curated and indexed most information is.

Working memory allows for looking at the whole picture at once with the full might of human intelligence (which is still better at many things than LLMs), while removing the friction that comes from delays and effort spent searching for data and making calculations.

Of course we have smart people together now, but getting multiple 7+SD people together would have many further benefits beyond having them work solo.

We probably have at least a generation (we're probably going to slow down before we hit SAGI due to the data wall, limited production of new compute, and regulation).

The focus should be on moving quickly to get a group eclipsing current human capabilities ASAP, not on going much further.

Comment by Purplehermann on Laziness death spirals · 2024-10-18T09:53:15.330Z · LW · GW

The idea of inertia is excellent, extending the idea of momentum far further (and naturally inspiring thoughts on mass, velocity, etc)

Comment by Purplehermann on Purplehermann's Shortform · 2024-10-17T20:36:28.992Z · LW · GW

Devops Mentality is correct:

Friction is a big deal.

Being able to change code and deploy immediately (or in a few minutes) matters immensely.

This might just be feedback loops at an extreme, but I believe it's something more.

The benefit of going from 20wpm to 40wpm was not ×2 to my writing and experience; it was more like ×8. I fully intend to reach 60wpm.

It was closer to ×2 for my developing experience, which is interesting, as most of software development isn't the actual typing. Another anecdote is that the best programmers I know all have 60+wpm (there was one React front-end guy who was good, but mostly at design).

Reducing friction is underrated: even if friction is only 10% of a job (typing during coding), improving it substantially matters more than 10%.

This may have to do with flow, concentration or enjoyment

Comment by Purplehermann on Overview of strong human intelligence amplification methods · 2024-10-17T20:25:55.999Z · LW · GW

On human-computer interfaces: working memory, knowledge reservoirs, and raw calculation power seem like the easiest pieces, while fundamentally making people better at critical thinking or philosophy, or speeding up actual comprehension, would be much more difficult.

The difference being upgrading the core vs plug-ins.

Curated reservoirs of practical and theoretical information, well indexed, would be very useful to super geniuses.

On human-human: You don't actually need to hook them up physically. Having multiple people working on different parts of a problem lets them all bounce ideas off each other.

Overall: The goal should be to create a number of these people, then let them plan out the next round if their intelligence doesn't do it.

If humanity can make 100 7+SD humans hooked up with large amounts of computing power, curated knowledge + tons of raw data, and massive working memories, they'll be able to figure out any further steps much better than we can.

Comment by Purplehermann on Social Capital Paradoxes · 2020-09-13T22:33:15.012Z · LW · GW

The virus example doesn't seem right to me.

  1. The claim doesn't seem necessarily true. Why would a more transmissible virus be more deadly (aside from general virulence, I suppose)? It isn't hard to think of some very contagious viruses that don't seem all that deadly (herpes, cold, flu) and some much less transmissible viruses that are quite deadly (AIDS).

I suppose more deadly viruses generally require more transmissibility to survive than less deadly ones do, but A -> B does not mean B -> A.

  2. Viruses are generally horizontal; I don't see why one horizontal thing would a priori be more negative than another horizontal thing, just based on a vertical vs horizontal idea.
Comment by Purplehermann on Social Capital Paradoxes · 2020-09-13T22:27:10.610Z · LW · GW
Comment by Purplehermann on The Four Children of the Seder as the Simulacra Levels · 2020-09-10T09:57:52.799Z · LW · GW

Well, now I need to read your simulacra posts; this is brilliant.