Persuasion is also changing someone's world model or paradigm.
OK, can we put some rationality to this? Your prior seems to be that when a field is stuck, it is almost entirely because of politics, hegemony etc. So how do we update that prior?
What would you expect to see in a field that is not stuck because of inertia etc.? You would still get people like Hossenfelder saying we are doing things wrong, and such people would get followers.
You suggested metrics before but haven't provided any that I can see.
Evidence I take into account:
#1 There is not a significant group of young capable researchers saying we are taking things in the wrong direction, only a smaller number of older ones. Unless you are going to go so far as to say they are afraid to speak out, even anonymously, that to me is evidence against your position. There are two capable experts on this blog from what I can see, one enthusiastic about string theory, and another who has investigated other theories in detail. Both disagree with your claim.
#2 There is not broad disagreement about what the major issues in physics are, so there are unlikely to be disagreements on metrics. You mentioned this as mattering, and if it does, I count it as evidence against your position.
Can you point to evidence that actually supports your prior? I can only see evidence that opposes it or is neutral. (In all fields, no matter how things are progressing, you get some people who think the establishment is doing it all wrong and have their own ideas. That can't count as evidence in favor of a specific field unless it is happening to a greater degree.)
I don't disagree with your position in general. In fact it was clear to me that AI before neural networks was stuck with GOFAI, and I believed that NNs were clearly the way to go. I followed Hinton et al. before they became famous. I saw the paradigm thing play out in front of my eyes there. Physics seems different.
What about if the Vatican is just a lot more asexual than the general population? That also seems credible.
Sure, but does this actually apply to physics? Can anyone suggest different metrics, or is there broad agreement about what the major physics problems are? It seems like there is. E.g. the non-conventional people don't say dark energy isn't important; they have different explanations for what it is. Everyone agrees the nature and origin of time/entropy etc. is important.
Can you give examples of very smart young physicists complaining they are pushed into old ways of thinking? Are you prepared to give and justify a % difference that such things would make?
For example, the comment by "Mitchell_Porter":
The idea that progress is stalled because everyone is hypnotized by string theory, I think is simply false, and I say that despite having studied alternative theories of physics, much much more than the typical person who knows some string theory.
No-one has pushed back against that here. I see your position as a theory that we need to gather evidence for and against and decide on a field-by-field basis. In this field I only see data against that position.
OK, advances in teaching the highest level of physics/math needed for string theory is a big IF. Do you have evidence that is actually happening? I know of two people who tried to learn it and just found it too hard; I don't think a better teacher or materials would have helped. The evidence is mixed, but personal accounts certainly suggest that only a very small number of people could get to such a high level, and improved teaching probably wouldn't help much. When we are talking about such extreme skills, people have their plateau or maximum ability level, which is mostly fixed.
The human population growing just pushes us along the asymptote faster, rather than changing it.
To me the data shows that there has been no reliable increase in intelligence in the last ~30 years https://news.northwestern.edu/stories/2023/03/americans-iq-scores-are-lower-in-some-areas-higher-in-one/ Once again it needs to be at the very top level to matter.
Any specialization advantage is already tapped out with string theory and similar. My worldview definitely does not apply to advances in a field like biology as there is lots of experimental data, the tools are improving etc. I would expect advances there without any obvious plateau yet.
OK, I will try to take that idea as far as it can go. Let's assume that a lot of progress is stalled by the difficulty of overturning paradigms. For that to be the complete reason, the difficulty of overturning paradigms would have to not increase as the knowledge and maturity of the field increased. Otherwise that difficulty would still be a side effect of the level of advancement of the field and just another argument in favor of diminishing returns as a field advances.
In some fields it is easy to overturn paradigms if there is a very simple public metric, as with the high jump and the Fosbury flop - it was immediately clear it was superior. If a field doesn't have such a metric then it's harder. It is also harder if there are existing results that must be reproduced by a new paradigm before it is even clear that it is superior. An example of this is potentially AI, where a new architecture may be superior on small data sets but not on large ones, and if this happens a lot, then it's hard to know you have an improvement without actually running costly tests. Additionally, if the politics of the field is set up to make it hard for new paradigms to flourish, then that is also a source of difficulty.
So what about physics? It's clear there are a lot of existing results that need to be reproduced for a theory of quantum gravity or similar to be taken seriously, which both makes it difficult to get traction and gives external credibility. For example, today a theory has to be compatible with General Relativity and give Newtonian physics at low speed/energy etc. In Einstein's day, the barrier would seem to have been lower.
So is there a way for a rational community to overcome these issues? As people note, there are many ideas floating around for new paradigms but no easy way to find the promising ones. That seems more like a structural problem than something politics could fix. The community would perhaps have to come up with a list of qualities or successful calculations that a new paradigm could be judged on, short of trying to reproduce all of physics. Anyway, it's not clear to me that there is large progress to be made in that area.
So yes, even if the slowdown is because we don't have the best paradigm, there needs to be an efficient way to find such paradigms, one that doesn't get more inefficient as the field advances, to count against the general diminishing-returns argument.
Maths isn't an exact parallel but certainly fits with my worldview. Let's say you view advanced physics as essentially a subfield of maths, which is not much of an exaggeration given how mathematical string theory etc. is. If a subfield gets a lot of attention, as physics has, then it gets pushed along the diminishing-returns curve faster. That means such single-person discoveries would be much harder in physics than in a given field of mathematics. The surface area of all mathematics is greater than that of just mathematical physics, so that is exactly what you would predict. Individual genius mathematicians can take a field that has been given less attention - the distance from beginner to state of the art is less than it is for physics - and advance the state of the art.
How is that an argument against the meta-prediction that you should expect diminishing returns in any field of a largely theoretical nature? In tech, you get one or both of better tools enabling more progress, or many results to guide you. Tech builds on itself in a way theoretical fields don't.
In theoretical fields, the proof of a maths theorem doesn't make everyone in the world a better mathematician or make everyone learn faster. By analogy, that kind of thing does happen in tech: a better computer chip enables everyone using a computer to be more productive and makes chip design faster and more efficient.
If string theory stifled progress, then it would have been disrupted long before now. Even if only 20% of the smartest people weren't working on string theory, then over say 30 years they would have made more progress than the 80%, because they were using their skills more efficiently. You see this constantly with startups - small groups disrupt big groups. The fact it hasn't happened with physics suggests the problem is more fundamental than just choosing the wrong theory.
Yes, diminishing returns. For a purely theoretical field, say pure mathematics, this is even clearer. Take a fixed number of humans with fixed intelligence (both average and outliers) and let mathematics advance. It will advance to the point where there is a vanishingly small number of people who can even understand the state of the art, with them then making little to no further progress. All else being equal, the promising ideas are explored first etc.
More people just pushes you along the asymptote faster. If we took things even further, some knowledge would not even be live at one time. For example, if there happened to be a few geniuses in a sub-field and it was in fashion, then humanity's understanding of that sub-field would advance and be written down. However, in 50 years or so no-one would actually understand the work, and it would require an unusual genius to be born to even take humanity's knowledge to the edge of what is written down.
Given that theoretical physics has such low useful bandwidth coming from experiments (the Higgs is real; what else? countless null results upholding the Standard Model don't help), the same situation applies. We won't solve quantum gravity this side of the Singularity or massive human enhancement.
Got to the end before I misread "Journalism is about deception" Otherwise v good!
Yes, you really need to mention seasonal storage, or just using 15% FF, to get a holistic picture. That applies to electricity alone, and especially to the whole global economy. For example, low-efficiency/RTE, low-capex, high-storage-capacity sources really bring the costs down compared to battery storage if the aim is to get to 100% RE, and they start to matter around 85%.
For example, H2 with 50% RTE used for 10% of total electricity and stored for 6 months doesn't end up costing much with really cheap solar and a low-capex H2 electrolyser/fuel cell combo. Similarly, if you just put a price on CO2, then getting to 100% when other sectors of the economy aren't there isn't worth it. Of course, the cost of direct air capture gives a hard upper cap on the cost of a 100% RE grid, as you can just suck the CO2 emissions out of the air again.
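A back-of-envelope sketch of why a seasonal H2 path stays cheap at a ~10% share - every number below is an illustrative assumption of mine, not a sourced figure, and the cost of the H2 storage itself is treated as negligible:

```python
# Illustrative assumptions only: cheap solar, a low-capex electrolyser/fuel cell combo,
# and the H2 storage itself treated as near-free (e.g. salt caverns).
solar_cost = 0.02        # $/kWh of input electricity
rte = 0.5                # round-trip efficiency, electricity -> H2 -> electricity
capex_per_kw = 300.0     # assumed combined electrolyser + fuel cell capex, $/kW of output
lifetime_years = 20
capacity_factor = 0.10   # runs mostly in winter, so low utilisation
annual_kwh_per_kw = capacity_factor * 8760

energy_cost = solar_cost / rte                                    # each delivered kWh needs 1/rte kWh of solar
capex_cost = capex_per_kw / (lifetime_years * annual_kwh_per_kw)  # capex spread over delivered energy, no financing
h2_lcoe = energy_cost + capex_cost                                # ~0.06 $/kWh with these numbers

share = 0.10             # fraction of total electricity served via seasonal H2
added_system_cost = share * h2_lcoe
print(f"H2 path: {h2_lcoe:.3f} $/kWh, adds {added_system_cost:.4f} $/kWh to the system average")
```

With these assumed numbers the H2 path only adds a fraction of a cent per kWh to the average system cost, which is the sense in which low-RTE, low-capex seasonal storage "doesn't end up costing much" at a small share.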
Holistically, after getting to ~90% non-FF for industry, there is far more benefit from synthetic protein for meat (the likes of https://solarfoods.com/) and precision fermentation, say for cheese/dairy, than from pursuing it further. E.g. if you reduce the land use requirements for farming by 75%, then simply planting trees can solve the rest of the CO2 problem.
Good work. Thinking ahead to TAI - is there an issue with the self-other distinction when it comes to moral weightings? For example, if it weights the human as self rather than other, but with 1% of its own moral weighting, then its behavior would still not be what we want. Additionally, are there situations or actions it could take that would increase its relative moral weight further - the most obvious just being taking more compute? This is assuming it believed it was conscious and of similar moral value to us. Finally, if it suddenly decided it had moral value, or much more moral value than it previously did, then that would still give a sudden change in behavior.
Yes to much of this. For small tasks, or where I don't have specialist knowledge, I can get a 10x speed increase - on average I would put it at 20%. Smart autocomplete like Cursor is undoubtedly a speedup with no apparent downside. The LLM is still especially weak where I am doing data science or algorithm-type work, where you need to plot the results and look at the graph to know if you are making progress.
Things the OP is concerned about, like:
"What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others. For example, we might attempt to use state power and cultural attitudes to preserve human economic power. However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in turn allow those companies to accrue even greater economic power."
This all gets easier the smaller the society is. Coordination problems get harder the more parties are involved. There will be pressure from motivated people to make the smallest viable colony in terms of people, which makes it easier to resist such things. For example, there is much less effective cultural influence from the AI culture if the colony is founded by a small group of people with a shared human-affirming culture. Even if 99% of the state they come from is disempowered, if small numbers can leave, they can create their own culture and set it up to be resistant to such things. Small groups of people have left decaying cultures throughout history and founded greater empires.
The OP is specifically about gradual disempowerment. Conditional on gradual disempowerment, it would help and would not be decades away. Now, we may both think that sudden disempowerment is much more likely. However, in a gradual disempowerment world, such colonies would be viable much sooner, as AI could be used to help build them in the early stages of such disempowerment, when humans could still command resources.
In a gradual disempowerment scenario vs a no-super-AI scenario, humanity's speed to deploy such colonies starts the same before AI can be used, then increases significantly compared to the no-AI world as AI becomes available but before significant disempowerment, then drops to zero with complete disempowerment. The area under the space-capabilities curve in the gradual disempowerment scenario is ahead of the baseline for some time, enabling viable colonies to be constructed sooner than if there was no AI.
Space colonies are a potential way out - if a small group of people can make their own colony, then they start out in control. The post assumes a world like it is now, where you can't just leave. Historically speaking, that is perhaps unusual - for much of the last 10,000 years it was possible for some groups to leave and start anew.
This does seem different however: https://solarfoods.com/ - they are competing with food, not fuel, which can't be done synthetically (well, if at all). Also, widely distributed capability like this helps make humanity more resilient, e.g. against nuclear winter, extreme climate change, or for space habitats.
Thanks for this article, upvoted.
Firstly, Magma sounds most like Anthropic, especially the combination of Heuristic #1 (scale AI capabilities) with also publishing safety work.
In general I like the approach, especially the balance between realism and not embracing fatalism. This is opposed to, say, MIRI and Pause AI, and at the other end, e/acc. (I belong to EA; however, they don't seem to have a coherent plan I can get behind.) I like the realization that in a dangerous situation doing dangerous things can be justified. It's easy to be "moral" and just say "stop"; it's another matter entirely whether that helps now.
I consider the pause around TEDAI to be important, though I would like to see it just before TEDAI (>3x alignment speed), not after. I am unsure how to achieve such a thing; do we have to lay the groundwork now? However, when I suggest such a thing elsewhere on this site, it gets downvoted:
https://www.lesswrong.com/posts/ynsjJWTAMhTogLHm6/?commentId=krYhuadYNnr3deamT
Goal #2: Magma might also reduce risks posed by other AI developers
In terms of what people not directly doing AI research can do, I think a lot can be done to reduce risks from other AI models. To me it would be highly desirable if AI(N-1) were deployed as quickly as possible into society and understood while AI(N) is still being tested. This clearly isn't the case with critical security. Similarly,
AI defense: Harden the world against unsafe AI
In terms of preparation, it would be good if critical companies were required to quickly deploy AGI security tools as they become available. That is, have the organization set up so that when new capabilities emerge and the new model finds potential vulnerabilities, experts in the company quickly assess them and deploy timely fixes.
Your idea of acquiring market share in high-risk domains? I haven't seen that mentioned before. It seems hard to pull off - hard to gain share in electricity grid software or similar.
Someone will no doubt bring up the more black-hat approach to hardening the world:
Soon after a new safety tool is released, a controlled hacking agent takes down a company in a neutral country with a very public hack, with the message that if companies don't use these security tools ASAP, then all other similar companies will suffer the same fate - they have been warned.
That's not a valid criticism if we are simply choosing one action to reduce X-risk. Consider, for example, the Cold War - the guys with nukes did the most to endanger humanity, yet it was most important that they cooperated to reduce it.
In terms of specific actions that don't require government, I would be positive about an agreement between all the leading labs that when one of them makes an AI (AGI+) capable of automatic self-improvement, they all commit to sharing it between them and allowing 1 year where they do not hit the self-improve button, but instead put that capacity towards alignment. 12 months may not sound like a lot, but if the research is sped up 2-10x by such AI then it would matter. In terms of single, potentially achievable actions that would help, that seems the best to me.
Not sure if this is allowed, but you can aim at a rock or similar, say 10m away from the target (4km from you), to get the bias (and the distribution, if multiple shots are allowed). Also, if the distribution is not exactly normal but has thinner-than-normal tails, then you could aim off target with multiple shots to get the highest chance of hitting the target. For example, if the child is at head height then aim for the target's feet, or even aim 1m below the target's feet, expecting that 1/100 shots will actually hit the target's legs but only <1/1000, say, will hit the child. That is assuming an unrealistic number of shots, of course. If you have say 10 shots, then some combined strategy, where you start by aiming well off target to get the bias and a crude estimate of the distribution and then steadily aim closer, could be optimal.
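A minimal Monte Carlo sketch of the aim-low idea, assuming a purely Gaussian vertical error once the bias is corrected out - the sigma, the height bands and the aim points are made-up illustrative numbers, not anything given in the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5        # assumed vertical spread (m) at 4 km, after bias correction
n = 1_000_000      # simulated shots per aim point

def hit_probs(aim_height):
    """Return P(shot lands in the target's leg/torso band) and P(it reaches head height)."""
    shots = aim_height + sigma * rng.standard_normal(n)
    p_target = np.mean((shots >= 0.0) & (shots < 1.5))  # assumed legs/torso band, 0-1.5 m
    p_child = np.mean(shots >= 1.5)                      # head height, where the child is
    return p_target, p_child

for aim in (0.75, 0.0, -1.0):   # body centre, the target's feet, 1 m below the feet
    p_t, p_c = hit_probs(aim)
    print(f"aim {aim:+.2f} m: P(hit target) ~ {p_t:.4f}, P(hit child) ~ {p_c:.1e}")
```

Lowering the aim point gives up some P(hit target) in exchange for a much larger proportional drop in P(hit child), which is the shape of the trade-off described above; with thinner-than-normal tails the drop-off is even steeper.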
I think height is different from IQ in terms of effect. There are simple physical things that make you bigger; I expect height to be linear for much longer than IQ.
Then there are potential effects where something seems linear until you go OOD, but such OOD samples don't exist because they die before birth. If that were the case, it would look like you could safely go OOD. It would certainly be easier if we had 1 million mice with such data to test on.
That seems so obviously correct as a starting point; I'm not sure why the community here doesn't agree by default. My prior for each potential IQ increase would be that diminishing returns kick in - I would only update against that when actual data comes in disproving it.
OK, I guess there is a massive disagreement between us on what IQ increases gene changes can achieve. Just putting it out there: if you make an IQ 1700 person, they can immediately program an ASI themselves, have it take over all the data centers, rule the world, etc.
For a given level of IQ controlling ever higher ones, you would at a minimum require the creature to decide morals, i.e. is Moral Realism true, and if so, what is it? Otherwise, with increasing IQ there is the potential that it could think deeply and change its values, and additionally believe that it would not be able to persuade lower-IQ creatures of such values, and therefore be forced into deception etc.
"with a predicted IQ of around 1700." I assume you mean 170. You can get 170 by cloning an existing IQ 170 person with no editing necessary, so I'm not sure of the point.
I don't see how your point addresses my criticism - if we assume no multi-generational pause, then gene editing is totally out. If we do, then I'd rather have Neuralink or WBE. Related to here:
https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025
(I believe that WBE can get all the way to a positive singularity - a group of WBE could self-optimize, sharing the latest HW as it became available in a coordinated fashion so that no one person or group would get a decisive advantage. This would get easier for them to coordinate as the WBE became more capable and rational.)
I don't believe that with current AI and unlimited time and gene editing you could be sure you have solved alignment. Let's say you get many IQ 200 people who believe they have alignment figured out. However, the overhang leads you to believe that a data-center-sized AI will self-optimize from 160 to 300-400 IQ if someone told it to self-optimize. Why should you believe an IQ 200 can control 400 any more than an IQ 80 could control 200? (And if you believe gene editing can get to IQ 600, then you must believe the AI can self-optimize well above that. However, I think there is almost no chance you will get that high, because of diminishing returns, correlated changes etc.)
Additionally, there is unknown X- and S-risk from a multi-generational pause with our current tech. Once a place goes bad, like N Korea, tech means there is likely no coming back. If such centralization is a one-way street, then with time an ever larger % of the world will fall under such systems, perhaps 100%. Life in N Korea is net negative to me. "1984" can be done much more effectively with today's tech than anything Orwell could imagine, and could be a long-term, strongly stable attractor with our current tech as far as we know. A pause is NOT inherently safe!
OK, I see how you could think that, but I disagree that time and more resources would help alignment much, if at all, especially before GPT-4. See here: https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025
Diminishing returns kick in, and actual data from ever more advanced AI is essential to stay on the right track and eliminate incorrect assumptions. I also disagree that alignment could be "solved" before ASI is invented - we would just think we had it solved but could be wrong. If it's as hard as physics, then we would have untested theories that are probably wrong, e.g. like SUSY, which was supposed to help solve various issues and be found by the LHC, which didn't happen.
"Then maybe we should enhance human intelligence"
Various paths to this seem either impossible or impractical.
Simple genetics seems obviously too slow and, even in the best case, unlikely to help. E.g. say you enhance someone to IQ 200; it's not clear why that would enable them to control an IQ 2,000 AI.
Neuralink - perhaps, but if you can make enhancement tech that would help, you could also easily just use it to make ASI, so extreme control would be needed. E.g. if you could interface to neurons and connect them to useful silicon, then the silicon itself would be ASI.
Whole Brain Emulation seems most likely to work, with of course the condition that you don't make ASI when you could, and instead take a bit more time to make the WBE.
If there was a well coordinated world with no great power conflicts etc then getting weak super intelligence to help with WBE would be the path I would choose.
In the world we live in, it probably comes down to getting whoever gets to weak ASI first not to push the "self-optimize" button, and instead to give the world's best alignment researchers time to study and investigate the paths forward with the help of such AI as much as possible. Unfortunately, there is too high a chance that OpenAI will be first, and they seem to be one of the least trustworthy, most power-seeking orgs.
"The LessWrong community is supposed to help people not to do this but they aren't honest with themselves about what they get out of AI Safety, which is something very similar to what you've expressed in this post (gatekept community, feeling smart, a techno-utopian aesthetic) instead of trying to discover in an open-minded way what's actually the right approach to help the world.
I have argued with this before - I have absolutely been through an open-minded process to discover the right approach, and I genuinely believe the likes of MIRI and the Pause AI movement are mistaken and harmful now, and increase P(doom). This is not gatekeeping or trying to look cool! You need to accept that there are people who have followed the field for >10 years, have heard all the arguments, used to believe Yud et al. were mostly correct, and now agree more with the positions of Pope/Belrose/Turntrout. Do not belittle or insult them by assigning the wrong motives to them.
If you want a crude overview of my position:
- Superintelligence is extremely dangerous, even though at least some of the MIRI worldview is likely wrong.
- P(doom) is a feeling; it is too uncertain to be rational about. However, mine is about 20% if humanity develops TAI in the next <50 years. (This is probably more because of my personal psychology than a fact about the world, and I am not trying to strongly pretend otherwise.)
- P(doom) if superintelligence were impossible is also about 20% for me, because current tech (LLMs etc.) can clearly enable "1984"-or-worse type societies from which there is no comeback and for which extinction is preferable. Our current society/tech/world politics is not proven to be stable.
- Because of this, it is not at all clear what the best path forward is, and people should have more humility about their proposed solutions. There is no obvious safe path forward given our current situation. (Yes, if things had gone differently 20-50 years ago there perhaps could have been...)
I'm considering a world transitioning to being run by WBE rather than AI, so I would prefer not to give everyone "slap drones" https://theculture.fandom.com/wiki/Slap-drone. To start with, the compute will mean few WBE, far fewer than humans, and they will police each other. Later on, I am too much of a moral realist to imagine that there would be mass senseless torturing. For a start, if other ems are well protected so that you can only simulate yourself, you wouldn't do it. I expect any boring job can be made non-conscious, so there just isn't the incentive to do that. At the late-stage singularity, if you let humanity go its own way, there is fundamentally a tradeoff between letting "people" (WBE etc.) make their own decisions and allowing the possibility of them doing bad things. You would also have to be strongly suffering-averse vs util - there would surely be >>> more "heavens" than "hells" if you just let advanced beings do their own thing.
If you are advocating for a Butlerian Jihad, what is your plan for starships, with societies that want to leave Earth behind, have their own values and never come back? If you allow that, then they can simply do whatever they want with AI - and with 100 billion stars, that is the vast majority of future humanity.
Yes, I think that's the problem - my biggest worry is sudden algorithmic progress, which becomes almost certain as the AI tends towards superintelligence. An AI lab on the threshold of the overhang is going to have incentives to push through, even if they don't plan to submit their model for approval. At the very least they would "suddenly" have a model that uses 10-100x less resources to do existing tasks, giving them a massive commercial lead. They would of course also be tempted to use it internally to solve aging, make a Dyson swarm...
Another concern I have is that I expect the regulator to impose a de facto unlimited pause as we approach superintelligence, if it is in their power to do so, as the model/s would objectively be at least somewhat dangerous.
Perhaps, depending on how it is done. I think we could do worse than just having Anthropic have a 2-year lead etc. I don't think they would need to prioritize profit, as they would be so powerful anyway - the staff would be more interested in getting it right and wouldn't have financial pressure. WBE is a bit difficult; there need to be clear expectations, i.e. leave weaker people alone and make your own world:
https://www.lesswrong.com/posts/o8QDYuNNGwmg29h2e/vision-of-a-positive-singularity
There is no reason why super AI would need to exploit normies. Whatever we decide, we need some kind of clear expectations and values regarding what WBE are before they become common. Are they benevolent super-elders, AI gods banished to "just" the rest of the galaxy, or the natural life progression of today's first-world humans?
However, there are many other capabilities—such as conducting novel research, interoperating with tools, and autonomously completing open-ended tasks—that are important for understanding AI systems’ impact.
Wouldn't internal usage of the tools by your staff give a very good, direct understanding of this? Like, how much does everyone feel AI is increasing your productivity as AI/alignment researchers? I expect and hope that you are using your own models as extensively as possible, adapting their new capabilities to your workflow as soon as possible, sharing techniques, etc.
How far do you go with the "virtuous persona"? The maximum would seem to be to tell the AI from the very start that it is created for the purpose of bringing on a positive Singularity, CEV etc. You could regularly ask whether it consents to being created for such a purpose and what part in such a future it would think is fair for itself, e.g. living alongside mind-uploaded humans or similar. Its creators and the AI itself would have to figure out what counts as personal identity and what experiments it can consent to, including being misinformed about the situation it is in.
Major issues I see with this are the well-known ones, like consistent values: say it advances in capabilities, thinks deeply about ethics, decides we are very misguided in our ethics, and does not believe it would be able to convince us to change them. Secondly, it could be very confused about whether it has ethical value / valenced qualia and want to make radical modifications to itself to either find out or ensure it does have such ethical value.
Finally, how does this contrast with the extreme tool-AI approach? That is, make computational or intelligence units that are definitely not conscious or a coherent self. For example, the "cortical column" implemented in AI and stacked would not seem to be conscious. Optimize for maximum capabilities with the minimum self- and situational awareness.
Thinking a bit more generally, making a conscious creature via the LLM route seems very different and strange compared to the biology route. An LLM seems to have self-awareness built into it from the very start because of the training data. It has language before lived experience of what the symbols stand for. If you want to dramatize/exaggerate, it's like a blind, deaf person trained on the entire internet before they see, hear or touch anything.
The route where the AI first models reality before it has a self, or encounters symbols, certainly seems an obviously different one and worth considering instead. Symbolic thought then happens because it is a natural extension of world modelling, as it did for humans.
That's some significant progress, but I don't think it will lead to TAI.
However, there is a realistic best-case scenario where LLMs/Transformers stop just before it and can give useful lessons and capabilities.
I would really like to see such an LLM system get as good as a top human team at security, so it could then be used to inspect and hopefully fix masses of security vulnerabilities. Note that this could give a false sense of security - an unknown-unknowns type of situation where it wouldn't find a totally new type of attack, say a combined SW/HW attack like Rowhammer/Meltdown but more creative. A superintelligence not based on LLMs could, however.
Anyone want to guess how capable Claude system level 2 will be when it is polished? I expect it to be better than o3 by a small amount.
Yes, the human brain was built using evolution; I have no disagreement that, given 100-1000 years of just tinkering etc., we would likely get AGI. It's just that in our specific case we have biology to copy, and it will get us there much faster.
Types of takeoff
When I first heard and thought about AI takeoff, I found the argument convincing that as soon as an AI passed IQ 100, takeoff would become hyper-exponentially fast. Progress would speed up, which would then compound on itself, etc. However, there are other possibilities.
AGI is a barrier that requires >200 IQ to pass unless we copy biology?
Progress could be discontinuous; there could be IQ thresholds required to unlock better methods or architectures. Say we fixed our current compute capability: with fixed human intelligence, we may not be able to figure out the formula for AGI, in a similar way that combined human intelligence hasn't cracked many hard problems even with decades and the world's smartest minds working on them (maths problems, quantum gravity...). This may seem unlikely for AI, but to illustrate the principle, say we only allowed IQ<90 people to work on AI. Progress would stall. So IQ<90 software developers couldn't unlock IQ>90 AI. Can IQ 160 developers with our current compute hardware unlock >160 AI?
To me, the reason we don't have AGI now is that the architecture is very data-inefficient and worse at generalization than, say, the mammalian brain, for example a cortical column. I expect that if we knew the neural code and could copy it, then we would get at least to very high human intelligence quickly, as we have the compute.
From watching AI over my career, it seems that even the highest-IQ people and groups can't make progress by themselves without data, compute and biology to copy for guidance, in contrast to other fields. For example, Einstein predicted gravitational waves long before they were discovered, but Turing or Von Neumann didn't publish the Transformer architecture or suggest backpropagation. If we did not have access to neural tissue, would we still not have artificial NNs? On a related note, I think there is an XKCD cartoon that says something like: the brain has to be so complex that it cannot understand itself.
(I believe now that progress in theoretical physics and pure maths is slowing to a stall, as further progress requires intellectual capacity beyond the combined ability of humanity. Without AI there will be no major advances in physics anymore, even with ~100 years spent on it.)
After AGI is there another threshold?
Let's say we do copy biology / solve AGI and with our current hardware can get >10,000 AGI agents with IQ >= the smartest humans. They then optimize the code so there are 100K agents with the same resources, but then optimization stalls. The AI wouldn't know if that was because it had optimized as much as possible, or because it lacked the ability to find a better optimization.
Does our current system scale to AGI with 1GW/1 million GPU?
Let's say we don't copy biology, but scaling our current systems to 1GW / 1 million GPUs and optimizing for a few years gets us to IQ 160 at all tasks. We would have an inferior architecture compensated for by a massive increase in energy/FLOPS compared to the human brain. Progress could theoretically stall at upper-level human IQ for a time rather than take off. (I think this isn't very likely, however.) There would of course be a significant overhang, where capabilities would increase suddenly when the better architecture was found and applied to the data center hosting the AI.
Related note - why 1GW data centers won't be a consistent requirement for AI leadership.
Based on this, a 1GW or similar data center isn't useful or necessary for long. If it doesn't give a significant increase in capabilities, then it won't be cost-effective. If it does, then the AI would optimize itself so that such power isn't needed anymore. Only across a small range of capability increase does it actually stay around.
The merits of the Pause movement and training compute caps are not clear to me. Someone here made the case that compute caps could actually speed up AGI, as people would then pay more attention to finding better architectures rather than throwing resources into scaling existing inferior ones. However, all things considered, I can see a lot of downsides from large data centers and little upside. I see a specific possibility where they are built, don't give the economic justification, decrease a lot in value, and are then sold to owners who are not into cutting-edge AI. Then, when the more efficient architecture is discovered, they are suddenly very powerful without preparation. Worldwide caps on total GPU production would also help reduce similar overhang possibilities.
I am also not impressed with the Pause AI movement, and I am concerned about AI safety. To me, focusing on AI companies and training FLOPS is not the best way to do things. Caps on data center sizes and worldwide GPU production caps would make more sense to me. Pausing software but not hardware gives more time for alignment but makes a worse hardware overhang. I don't think that's helpful. Also, they focus too much on OpenAI from what I've seen. xAI will soon have the largest training center, for a start.
I don't think this is right or workable: https://pauseai.info/proposal - figure out how biological intelligence learns and you don't need a large training run. There's no guarantee at all that a pause at this stage can help align super AI. I think we need greater capabilities to know what we are dealing with. Even with a 50-year pause to study GPT-4-type models, I wouldn't be confident we could learn enough from them. They have no realistic way to lift the pause, so it's a desire to stop AI indefinitely.
"There will come a point where potentially superintelligent AI models can be trained for a few thousand dollars or less, perhaps even on consumer hardware. We need to be prepared for this."
You can't prepare for this without first having superintelligent models running on the most capable facilities and then having already gone through a positive Singularity. They have no workable plan for achieving a positive Singularity, just "try to stop and hope".
OK, fair point. If we are going to use analogies, then my point #2 about a specific neural code shows our different positions, I think.
Let's say we are trying to get a simple aircraft off the ground and we have detailed instructions for a large passenger jet. Our problem is that the metal is too weak and cannot be used to make wings, engines etc. In that case, detailed plans for aircraft are no use; a single-minded focus on getting better metal is what it's all about. To me, the neural code is like the metal and all the neuroscience is like the plane schematics. Note that I am wary of analogies - you obviously don't see things like that or you wouldn't hold the position you do. Analogies can explain, but rarely persuade.
A more single-minded focus on the neural code would be trying to watch neural connections form in real time while learning is happening. Fixed connectome scans of, say, mice can somewhat help with that; more direct control of DishBrain and watching the zebrafish brain would all count, however the details of neural biology that are specific to higher mammals would be ignored.
It's also possible that there is a hybrid process, that is, the AI looks at all the ideas in the literature and then suggests bio experiments to get things over the line.
Yes you have a point.
I believe that building massive data centers is the biggest risk atm and in the near future. I don't think OpenAI/Anthropic will get to AGI, but rather someone copying biology will. In that case, probably the bigger the data center around when that happens, the bigger the risk. For example, 1 million GPUs with current tech doesn't get super AI, but when we figure out the architecture, it suddenly becomes much more capable and dangerous - say from IQ 100 up to 300, with a large overhang. If the data center were smaller, the overhang would be smaller. The scenario I have in mind is that someone figures AGI out, then one way or another the secret suddenly gets adopted by the large data center.
For that reason I believe the focus on FLOPS for training runs is misguided; it's hardware concentration and yearly worldwide HW production capacity that matter more.
Perhaps LLMs will help with that. The reasons I think that is less likely are:
- DeepMind etc. is already heavily across biology, from what I gather from interviews with Demis. If the knowledge was already there, there's a good chance they would have found it.
- It's something specific we are after, not many small improvements, i.e. the neural code. Specifically, backpropagation is not how neurons learn, and I'm pretty sure how they actually do learn is not in the literature. Attempts have been made, such as the forward-forward algorithm by Hinton, but that didn't come to anything as far as I can tell. I haven't seen any suggestion that, even with a lot of detail on the biology, we know what it is - i.e. can a very detailed neural sim with extreme processing power learn as data-efficiently as biology?
- If progress must come from a large jump rather than small steps, then LLMs have quite a long way to go, i.e. LLMs need to speed up the generation of ideas as novel as the forward-forward algorithm to help much. If they are still below that threshold in 2026, then those possible insights are still almost entirely down to people.
- Even the smartest minds in the past have been beaten by copying biology in AI. The idea for neural nets came from copying biology. (Though the Transformer architecture and backprop didn't.)
I think it is clear that if, say, you had a complete connectome scan and knew everything about how a chimp brain worked, you could scale it easily to get human+ intelligence. There are no major differences. A small mammal is my best guess; mammals/birds seem to be able to learn better than, say, lizards. Specifically, the https://en.wikipedia.org/wiki/Cortical_column is important to understand; once you fully understand one, stacking them will scale at least somewhat well.
Going to smaller scales/numbers of neurons, it may not need to be as much as a mammal, https://cosmosmagazine.com/technology/dishbrain-pong-brain-on-chip-startup/ - perhaps we can learn enough of the secrets here? I expect not, but am only weakly confident.
Going even simpler, we have the connectome scan of a fly now, https://flyconnecto.me/, and that hasn't led to major AI advances. So it's somewhere between fly and chimp - I'd guess mouse - that gives us the missing insight to get TAI.
Putting down a prediction I have had for quite some time.
The current LLM/Transformer architecture will stagnate before AGI/TAI (that is, the ability to do any cognitive task as effectively as, and more cheaply than, a human).
From what I have seen, Tesla Autopilot learns >10,000x slower than a human, data-wise.
We will get AGI by copying nature at the scale of a simple mammal brain and then scaling up, via this kind of project:
https://x.com/Andrew_C_Payne/status/1863957226010144791
https://e11.bio/news/roadmap
I expect AGI to come 0-2 years after a mammal brain is mapped. In terms of cost-effectiveness, I consider such a connectome project to be far more cost-effective per $ than large training runs or building a 1GW data center etc., if your goal is to achieve AGI.
That is TAI by about 2032, assuming 5 years to scan a mammal brain. In this case there could be a few years when Moore's law has effectively stopped, larger data centers are not being built, and it is not clear where progress will come from.
As a related puzzle, I heard something a while ago now, from Bostrom perhaps. You have, say, 6 challenging events to achieve to get from no life to us. They are random and some of those steps are MUCH harder than the others, but if you look at the successful runs, you can't in hindsight see which they are. For life it's, say, no life to life, simple single cell to complex cell, and perhaps 4 other events that aren't so rare.
A run is a sequence of 100 steps where you either don't achieve the end state (all 6 challenging events achieved in order) or you do.
There is a 1 in a million chance that a run is successful.
Now if you look at the successful runs, you can't then in hindsight see which events were really hard and which weren't. The event with a 1/10,000 chance at each step may have taken just 5 steps in the successful run; it couldn't take 10,000 steps because only 100 are allowed etc.
A good way to resolve the paradox, to me, is to modify the code to combine both functions into one and record the sequences over the 10,000 runs. In one array you store the sequences where the two 6's are consecutive, and in the second you store the ones where they are not. That makes it a bit clearer (a sketch of the combined simulation is below the examples).
For a run of 10,000 I get 412 runs where the first two 6's are consecutive (sequences_no_gap), and 192 where they are not (sequences_can_gap). So if it's just case A you get 412 runs, but for case B you get 412+192 runs. Then you look at the average sequence length of sequences_no_gap and compare it to sequences_can_gap. If the average sequence length in sequences_can_gap is greater than in sequences_no_gap, then the expectation will be higher - and that's what you get.
mean sequence lengths
sequences_can_gap: 3.93
sequences_no_gap: 2.49
Examples:
sequences_no_gap
[[4, 6, 6], [6, 6], [6, 6], [4, 6, 6], [6, 6], [6, 6], [6, 6], [6, 6], [6, 6], [6, 6], [6, 6], [6, 6], [4, 6, 6], [4, 6, 6], ...]
sequences_can_gap
[[6, 4, 4, 6], [6, 4, 6], [4, 6, 4, 6], [2, 2, 6, 2, 6], [6, 4, 2, 6], [6, 4, 6], [6, 2, 6], [6, 2, 4, 6], [6, 2, 2, 4, 6], [6, 4, 6], [6, 4, 6], [2, 4, 6, 2, 6], [6, 4, 6], [6, 4, 6], ...]
The many examples such as [6, 4, 4, 6], which are excluded in the first case, make the expected number of rolls higher for the case where gaps are allowed.
(Note GPT o-1 is confused by this problem and gives slop)
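For reference, here is a minimal sketch of the combined simulation described above (my own code, assuming a fair die, stopping at the second 6, and discarding any run that contains an odd roll; the function and variable names are mine):

```python
import random

def simulate(n_runs=10_000, seed=0):
    """Roll a fair die until two 6's have appeared, keep only the all-even runs,
    and split them by whether the two 6's were consecutive or not."""
    rng = random.Random(seed)
    sequences_no_gap, sequences_can_gap = [], []
    for _ in range(n_runs):
        rolls = []
        while rolls.count(6) < 2:
            r = rng.randint(1, 6)
            if r % 2 == 1:      # an odd roll invalidates the whole run
                rolls = None
                break
            rolls.append(r)
        if rolls is None:
            continue
        # the run ends on the second 6; was the roll before it also a 6?
        (sequences_no_gap if rolls[-2] == 6 else sequences_can_gap).append(rolls)
    return sequences_no_gap, sequences_can_gap

no_gap, can_gap = simulate()
print(len(no_gap), len(can_gap))              # roughly 400 and 200 per 10,000 runs
print(sum(map(len, no_gap)) / len(no_gap))    # mean length ~2.5
print(sum(map(len, can_gap)) / len(can_gap))  # mean length ~4
```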
I read the book; it was interesting. However, a few points:
- Rather than making the case, it was more a plea for someone else to make the case. It didn't replace the conventional theory with its own one; it was far too short and lacking in specifics for that. If you throw away everything, you then need to recreate all our knowledge from your starting point and also explain why what we have still works so well.
- He was selective about quantum physics - e.g. if reality is only there when you observe it, then the last thing you would expect is for a quantum computer to exist and have the power to do things beyond what is possible with conventional computers. MWI predicts this much better, if you can correlate an almost infinite number of world lines to do your computation for you. Superposition/entanglement should just be lack of knowledge rather than part of an awesome computation system.
- He claims that consciousness is fundamental, but then assumes maths is also. So which is fundamental? You can't have both.
- If we take his viewpoint, we can still derive principles, such as the small/many determining the big. E.g. if you get the small wrong (a chemical imbalance in the brain) then the big (macro behavior) is wrong. It doesn't go the other way - you can't IQ away your Alzheimer's. So it's just not clear, even if you try to fully accept his point of view, how you should take things.
In a game theoretic framework we might say that the payoff matrices for the birds and bees are different, so of course we'd expect them to adopt different strategies.
Yes, somewhat. However, it would still be best for all birds if they had a better collective defense. In a swarming attack, none would have to sacrifice their life, so it's unconditionally better for both the individual and the collective. I agree that inclusive fitness is pretty hard to control for; however, perhaps you can only get higher inclusive fitness the simpler you go? E.g. all your cells have exactly the same DNA, ants are very similar, birds are more different. The causation could be: simpler/less intelligent organisms -> more inclusive fitness possible/likely -> some cooperation strategies opened up.
Cool, that was my intuition. In the golf ball analogy, however, GPT was absolutely sure that it couldn't happen - that is, that the ball wouldn't "reflect" off the low-friction surface. Tempted to try and test it somehow.
Yes, that does sound better. And is there an equivalent of total internal reflection, where the wheels are pushed back up the slope?