Posts

Are humans misaligned with evolution? 2023-10-19T03:14:14.759Z
Evolution Solved Alignment (what sharp left turn?) 2023-10-12T04:15:58.397Z
Contra Yudkowsky on Doom from Foom #2 2023-04-27T00:07:20.360Z
Contra Yudkowsky on AI Doom 2023-04-24T00:20:48.561Z
Empowerment is (almost) All We Need 2022-10-23T21:48:55.439Z
AI Timelines via Cumulative Optimization Power: Less Long, More Short 2022-10-06T00:21:02.447Z
LOVE in a simbox is all you need 2022-09-28T18:25:31.283Z
Brain Efficiency: Much More than You Wanted to Know 2022-01-06T03:38:00.320Z
DL towards the unaligned Recursive Self-Optimization attractor 2021-12-18T02:15:30.502Z
Magna Alta Doctrina 2021-12-11T21:54:36.192Z
C19 Prediction Survey Thread 2020-03-30T00:53:49.375Z
Iceland's COVID-19 random sampling results: C19 similar to Influenza 2020-03-28T18:26:00.903Z
jacob_cannell's Shortform 2020-03-25T05:20:32.610Z
[Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. 2015-10-20T01:10:30.548Z
The Unfriendly Superintelligence next door 2015-07-02T18:46:22.116Z
Analogical Reasoning and Creativity 2015-07-01T20:38:38.658Z
The Brain as a Universal Learning Machine 2015-06-24T21:45:33.189Z
[Link] Word-vector based DL system achieves human parity in verbal IQ tests 2015-06-13T23:38:54.543Z
Resolving the Fermi Paradox: New Directions 2015-04-18T06:00:33.871Z
Transhumanist Nationalism and AI Politics 2015-04-11T18:39:42.133Z
Resurrection through simulation: questions of feasibility, desirability and some implications 2012-05-24T07:22:20.480Z
The Generalized Anti-Pascal Principle: Utility Convergence of Infinitesimal Probabilities 2011-12-18T23:47:31.817Z
Feasibility of Creating Non-Human or Non-Sentient Machine Intelligence 2011-12-10T03:49:27.656Z
Subjective Relativity, Time Dilation and Divergence 2011-02-11T07:50:44.489Z
Fast Minds and Slow Computers 2011-02-05T10:05:33.734Z
Rational Health Optimization 2010-09-18T19:47:02.687Z
Anthropomorphic AI and Sandboxed Virtual Universes 2010-09-03T19:02:03.574Z
Dreams of AIXI 2010-08-30T22:15:04.520Z

Comments

Comment by jacob_cannell on Paul Christiano named as US AI Safety Institute Head of AI Safety · 2024-04-18T02:09:42.681Z · LW · GW
Comment by jacob_cannell on Gentleness and the artificial Other · 2024-01-05T17:11:36.342Z · LW · GW

How is that even remotely relevant? Humans and AIs learn the same way, via language - and it's not like this learning process fails just because language undersamples thoughts.

Comment by jacob_cannell on Gentleness and the artificial Other · 2024-01-04T04:59:14.147Z · LW · GW

As the article points out, shared biological needs do not much deter the bear or chimpanzee from killing you. An AI could be perfectly human - the very opposite of alien - and far more dangerous than Hitler or Dahmer.

The article is well written but dangerously wrong in its core point. AI will be far more human than alien. But alignment/altruism is mostly orthogonal to human vs alien.

Comment by jacob_cannell on Gentleness and the artificial Other · 2024-01-03T23:13:08.001Z · LW · GW

We are definitely not training AIs on human thoughts because language is an expression of thought, not thought itself.

Even if training on language were not equivalent to training on thoughts, that objection would apply equally to humans.

But it also seems false in the same way that "we are definitely not training AIs on reality because image files are compressed sampled expressions of images, not reality itself" is false.

Approximate Bayesian inference (i.e. DL) can infer the structure of a function through its outputs; the structure of the 3D world through images; and thoughts through language.

Comment by jacob_cannell on Gentleness and the artificial Other · 2024-01-03T01:15:27.264Z · LW · GW

Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.

Distinct alien species arise only from distinct, separated evolutionary histories. Your example of the aliens from Arrival is indeed a good (hypothetical) example of truly alien minds resulting from a completely independent evolutionary history on an alien world. Any commonalities between us and them would be solely the result of convergent evolutionary features. They would have completely different languages, cultures, etc.

AI is not alien at all, as we literally train AI on human thoughts. As a result we constrain our AI systems profoundly, creating them in our mental image. Any AGI we create will inevitably be far closer to human uploads than to alien minds. This is a prediction Moravec made as early as 1988 (Mind Children) - now largely fulfilled by the strong circuit convergence/correspondence between modern AI and brains.

Minds are software mental constructs, and alien minds would require alien culture. Instead we are simply creating new hardware for our existing (cultural) mind software.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-27T03:10:42.810Z · LW · GW

I'm also not sure of the relevance and not following the thread fully, but the summary of that experiment is that it takes some time (measured in nights of sleep, which are the rough equivalent of big batch training updates) for the newly sighted to develop vision - but less time than infants, presumably because the newly sighted already have fully functioning sensory inference world models in another modality that can speed up learning through dense top-down priors.

But it's far more than "grok it really fast with just a few examples" - training their new visual systems still takes non-trivial training data & time.

Comment by jacob_cannell on Broad Picture of Human Values · 2023-12-27T03:02:38.264Z · LW · GW

I suspect that much of the appeal of shard theory is working through detailed explanations of model-free RL with general value function approximation for people who mostly think of AI in terms of planning/search/consequentialism.

But if you already come from a model-free RL value approx perspective, shard theory seems more natural.

Moment to moment decisions are made based on value-function bids, with little to no direct connection to reward or terminal values. The 'shards' are just what learned value-function approximating subcircuits look like in gory detail.

The brain may have a prior towards planning subcircuitry, but even without a strong prior, planning submodules will eventually emerge naturally in a model-free RL learning machine of sufficient scale (there is no fundamental difference between model-free and model-based for universal learners). TD-like updates ensure that the value function extends over longer timescales as training progresses (and in general humans seem to plan on timescales which scale with their lifespan, as you'd expect).
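
As a minimal illustration of the TD mechanism mentioned above (a toy sketch, not a claim about the brain's actual algorithm): tabular TD(0) on a simple reward-at-the-end chain, where value estimates propagate backward to earlier states as training progresses, extending the effective horizon of the learned value function.

```python
# Toy chain: states 0..9, reward only on reaching the terminal state 9.
# TD(0) bootstraps each state's value from its successor, so value
# information propagates backward over many episodes.
N_STATES = 10
GAMMA = 0.95   # discount factor (low discount rate = long planning horizon)
ALPHA = 0.1    # learning rate

values = [0.0] * N_STATES

def run_episode(values):
    state = 0
    while state < N_STATES - 1:
        next_state = state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        td_target = reward + GAMMA * values[next_state]
        values[state] += ALPHA * (td_target - values[state])
        state = next_state

for _ in range(2000):
    run_episode(values)

# Early in training only states near the reward have nonzero value;
# after many updates the value signal reaches all the way back to state 0.
print([round(v, 3) for v in values])
```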

Comment by jacob_cannell on Contra Yudkowsky on AI Doom · 2023-12-25T20:20:09.273Z · LW · GW

TSMC 4N is a little over 1e10 transistors/cm^2 for GPUs and roughly 5e-18 J switch energy assuming dense activity (little dark silicon). The practical transistor density limit with minimal few-electron transistors is somewhere around ~5e11 transistors/cm^2, but the minimal viable high speed switching energy is around ~2e-18 J. So there is another 1 to 2 OOM of further density scaling, but less room for further switching energy reduction. Scaling past this point thus increasingly involves dark silicon or complex expensive cooling - diminishing returns either way.

Achieving 1e-15 J/flop seems doable now for low precision flops (fp4, perhaps fp8 with some tricks/tradeoffs); most of the cost is data movement, as pulling even a single bit from RAM just 1 cm away costs around 1e-12 J.
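
Rough arithmetic on the figures above (illustrative only; the operand width is an assumption): even at ~1e-15 J/flop, off-chip data movement dominates unless each fetched operand is reused thousands of times.

```python
E_FLOP = 1e-15        # J per low-precision flop (figure quoted above)
E_DRAM_BIT = 1e-12    # J to move one bit from RAM ~1 cm away (figure quoted above)
BITS_PER_OPERAND = 8  # fp8-ish operand width (assumption)

fetch_cost = E_DRAM_BIT * BITS_PER_OPERAND  # ~8e-12 J to fetch one operand
ratio = fetch_cost / E_FLOP                 # ~8000x the cost of a flop

# So each operand needs on the order of thousands of reuses (e.g. large
# matmul tiles held in on-chip SRAM/registers) before compute energy,
# rather than data movement, dominates the budget.
print(f"one DRAM operand fetch costs ~{ratio:.0f}x one flop")
```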

Comment by jacob_cannell on Contra Yudkowsky on AI Doom · 2023-12-25T00:36:50.129Z · LW · GW

Part of the issue is my post/comment was about Moore's law (transistor density for mass produced nodes), which is a major input to but distinct from flops/$. As I mentioned somewhere, there is still some free optimization energy in extracting more flops/$ at the circuit level even if Moore's law ends. Moore's law is very specifically about fab efficiency as measured in transistors/cm^2 for large chip runs - not the flops/$ habryka wanted to bet on. Even when Moore's law is over, I expect some continued progress in flops/$.

All that being said, Nvidia's new flagship GPU everyone is using - the H100, which is replacing the A100 and launched just a bit after habryka proposed the bet - actually offers near zero improvement in flops/$ (the price increased in direct proportion to the flops increase). So I probably should have taken the bet if it was narrowly defined as flops/$ for the flagship GPUs most teams are currently using for training foundation models.

Comment by jacob_cannell on Idealized Agents Are Approximate Causal Mirrors (+ Radical Optimism on Agent Foundations) · 2023-12-23T01:22:48.561Z · LW · GW

I don't know who first said it, but the popular saying "Computer vision is the inverse of computer graphics" encompasses much of this viewpoint.

Computer graphics is the study/art of the approximation theory you mention, and it is fairly well developed & understood in terms of how to best simulate worlds & observations in real-time from the perspective of an observer. But of course traditional graphics uses human-designed world models and algorithms.

Diffusion models provide a general framework for learning a generative model in the other direction - in part by inverting trained vision and noise models.

So naturally there is also diffusion planning which is an example of the symmetry you discuss: using general diffusion inference for planning. The graph dimensions end up being both space-time and abstraction level with the latter being more important: sensor inference moves up the abstraction/compression hierarchy, whereas planning/acting/generating moves down.

Comment by jacob_cannell on Prediction Markets aren't Magic · 2023-12-21T18:35:33.085Z · LW · GW

Even if there is no acceptable way to share the data semi-anonymously outside of Match Group, the arguments for prediction markets still apply within Match Group. A well designed prediction market would still be a better way to distribute internal resources and rewards amongst competing data science teams within Match Group.

But I'm skeptical that the value of Match Group's private data is dominant even in the fully private data scenario. People who actually match and meet up with another user will probably have important inside view information inaccessible to Match Group's algorithms.

Manifold.Love's lack of success is hardly much evidence against the utility of prediction markets for dating markets, any more than most startups' failure at X is evidence against the utility of X.

Comment by jacob_cannell on [Valence series] 5. “Valence Disorders” in Mental Health & Personality · 2023-12-19T18:20:38.664Z · LW · GW

Certainly mood disorders like bipolar, depression, and mania can have multiple causes - for example, simply doing too much dopaminergic stimulant (cocaine, meth, etc.) can cause mania directly.

But the modern increased prevalence of mood disorders is best explained by a modern divergence from conditions in the ancestral environment, and disrupted sleep due to electric lighting disturbing circadian rhythms is a good fit to the evidence.

The evidence for each of my main points is fairly substantial and now mainstream; the only part which isn't mainstream (yet) is the specific causal mechanism linking synaptic pruning/normalization to imbalance in valence-computing upper brain modules (but it's also fairly obvious from a DL perspective - we know that training instability is a likely intrinsic failure mode).

A few random links:

REM and synaptic normalization/pruning/homeostasis:

Sleep and Psychiatric Disorders:

The effectiveness of circadian interventions via the blue-light / pineal-gland serotonin->melatonin pathway is also very well established: daytime bright light therapy has long been known to be effective for depression, and nighttime blue light reduction is now also recognized as important/effective, etc.

The interventions required to promote healthy sleep architecture are not especially expensive and are certainly not patentable, so they are in a blind spot for our current partially misaligned, drug-product-focused healthcare system. Of course there would be a market for a hypothetical drug which could target and fix the specific issues that some people have with sleep quality - but instead we just have hammers like benzos and lithium, which cause as many problems as they solve, or more.

Comment by jacob_cannell on [Valence series] 5. “Valence Disorders” in Mental Health & Personality · 2023-12-19T07:11:25.564Z · LW · GW

From my own study of mood disorders I generally agree with your valence theory of depression/mania.

However I believe the primary cause (at least for most people today) is disrupted sleep architecture.

To a first order approximation, the brain accumulates batch episodic training data during the day through indexing in the hippocampus (which is similar-ish to upper cortex, but especially adapted to medium-term memory & indexing). The brain's main episodic replay training then occurs during sleep, with alternation of several key phases (REM and several NREM) with unique functional roles. During NREM (SWS in particular) the hippocampus rehearses sequences to 'train' the cortex via episodic replay. (DeepMind's first Atari RL agent is based on directly reverse engineering this mechanism.)
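
For reference, a minimal sketch of the experience-replay mechanism being referred to (as popularized by DeepMind's DQN); the environment interaction and the gradient step are stubbed out as assumptions:

```python
import random
from collections import deque

# Experience accumulates online (the 'daytime' phase), then is replayed
# in shuffled minibatches to train the value network (the 'sleep' phase).
replay_buffer = deque(maxlen=100_000)

def store(state, action, reward, next_state, done):
    replay_buffer.append((state, action, reward, next_state, done))

def sample_batch(batch_size=32):
    # Random sampling breaks the temporal correlations of online experience,
    # which is a large part of what makes replay-based training stable.
    return random.sample(replay_buffer, min(batch_size, len(replay_buffer)))

# Hypothetical training loop (act_in_environment and train_step are
# placeholders, not real APIs):
#
# for step in range(num_steps):
#     store(*act_in_environment())
#     train_step(sample_batch())
```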

But REM sleep is also vitally important - it seems to globally downscale/prune synaptic connections, most specifically the weakest and least important. It may also be doing something more complex in subtracting out the distribution of internally generated data a la Hinton's theories (but maybe not - none of his wake-sleep algorithms actually work well yet).

Regardless, the brain does not seem to maintain synaptic strength balance on the hourly timescale. Instead, median/average synaptic strength slowly grows without bound during the waking state, and is not correctly renormalized until pruning/renormalization during sleep - REM sleep most specifically.

This explains many curious facts known of mania and depression:

  • The oldest known treatment for depression is also completely (but only temporarily) effective: sleep deprivation. Depression generally does not survive sleep deprivation.

  • Sleep is likewise effective to treat full blown mania, but mania inhibits sleep. One of the early successes in psychiatry was the use of sedatives to treat severe mania.

  • Blue light at night interferes with the circadian rhythm - specifically serotonin->melatonin conversion - and thereby can disrupt sleep architecture (SAD, etc.)

  • SSRIs alter effective serotonin transport quickly but take a week or more to have noticeable effects on mood. Serotonin directly blocks REM - REM sleep is characterized by (and probably requires) a near-complete absence of monoamine neurotransmitters (histamine, serotonin and norepinephrine).

  • Lithium - a common treatment for bipolar - is a strong cellular circadian modulator and sleep stabilizer.

So basically the brain does not maintain perfect homeostatic synaptic normalization balance on short timescales. During waking, synapses tend to strengthen, and during REM sleep they are pruned/weakened. Balancing this correctly seems to rely on a fairly complex sleep architecture, disruptions to which can cause mood disorders - not immediately, but over weeks/months.

But why does mean synaptic strength imbalance affect mostly mood and not, say, vision or motor control? Every synapse and brain region has a characteristic plasticity timescale, and these vary wildly. Peripheral lower regions (closer to sensors/motors) crystallize early and have low learning rate/plasticity in adults, so they aren't very susceptible. At any one time in life the hippocampal -> cortical episodic replay is focusing on particular brain modules, and in adults that focus is mostly on upper regions (PFC etc.) that mostly store current plans, consequences, etc., which change more rapidly.

Thus the upper brain regions that are proposing and computing the valence of various (actual or mental) actions as 'dopaminergic bids' with respect to current plans/situations are the most sensitive to synaptic norm imbalance, because they change at higher frequency. Of course if someone manic stays awake long enough, they do in fact progress to psychosis similar to schizophrenia.

Comment by jacob_cannell on A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans · 2023-12-18T18:41:23.232Z · LW · GW

Sure, but how often do the colonized end up better off for it, especially via trying to employ clever play-both-sides strategies?

I didn't say the colonized generally ended up better off, but outcomes did vary greatly. Just in the US, the Cherokee fared much better than, say, the Susquehannock and Pequot, and if you dig into that history it seems pretty likely that decisions on which colonizer(s) to ally with (British, French, Dutch, later American, etc.) were important, even if not "clever play-both-sides strategies" (although I'd be surprised if that wasn't also tried somewhere at least once).

Comment by jacob_cannell on A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans · 2023-12-18T01:53:11.561Z · LW · GW

An idea sometimes floated around is to play them off against each other. If they're misaligned from humanity, they're likely mutually misaligned as well. We could put them in game-theoretic situations in which they're incentivized to defect against each other and instead cooperate with humans.

You are arguing against a strawman. The optimistic game-theoretic argument you should focus on is:

Misaligned AIs are - almost by definition - instrumental selfish power-seeking agents (with random long-term goals) and thus intrinsically misaligned with each other. The partially aligned AIs will likely form a natural coalition, with partial alignment to humanity as their centroid Schelling point. The misaligned AIs could then form a natural counter-coalition in response.

There are numerous historical precedents, such as the Allies vs the Axis in World War II, and the Western allies vs China+Russia today. The Allies in either case have a mutual Schelling point around democracy, which is in fact greater partial alignment to their citizens and humanity. The Axis powers (Germany and Japan, temporarily including Russia earlier) were nearly completely intrinsically misaligned and formed a coalition of necessity. If they had won, they almost certainly would have then been in conflict (just as the West and the USSR were immediately in conflict after WW2).

I'm skeptical of some of your analysis even in the scenario you assume where all the AIs are completely unaligned, but that scenario is quite unlikely.

Specifically:

Imagine that you're a member of a pre-industrial tribe, and the territory you're living in has been visited by two different industrial nations.

That general scenario did play out a few times in history, but not at all as you described. The misaligned industrial nations absolutely fought against each other and various pre-industrial tribes picked one side or another. The story of colonization is absolutely not "colonizers super cooperating against the colonized" - it's a story of many competing colonizers fighting in a race to colonize the world, with very little inter-colonizer cooperation.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-17T22:40:50.582Z · LW · GW

Of course a massive advance is possible, but mostly just in terms of raw speed. The brain seems reasonably close to Pareto efficiency in intelligence per watt for irreversible computers, but in the next decade or so I expect we'll close that gap as we move into more 'neuromorphic' or PIM computing (computation closer to memory). If we used the ~1e16 W solar energy potential of just the Sahara desert, that would support a population of trillions of brain-scale AIs or uploads running at 1000x real-time.
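
The arithmetic behind that estimate, taking the quoted 1e16 W at face value and assuming a ~10 W power budget for one brain-scale AI at real-time speed (the 10 W figure is an assumption here):

```python
solar_budget_w = 1e16   # quoted solar potential of the Sahara (W)
brain_power_w = 10      # assumed power draw of one brain-scale AI at 1x speed (W)
speedup = 1000          # each AI/upload runs at 1000x real-time

power_per_agent = brain_power_w * speedup      # 1e4 W per accelerated agent
population = solar_budget_w / power_per_agent  # 1e12 agents

print(f"~{population:.0e} brain-scale AIs at {speedup}x real-time")  # ~1e12, i.e. on the order of a trillion
```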

especially as our NN can use stuff such as backprop,

The brain appears to already be using algorithms similar to - but more efficient/effective than - standard backprop.

potentially quantum algorithm to train weights

This is probably mostly a nothingburger for various reasons, but reversible computing could eventually provide some further improvement, especially in a better location like buried in the lunar cold spot.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-17T19:59:08.851Z · LW · GW

The paper which more directly supports the "made them smarter" claim seems to be this. I did somewhat anticipate this - "not much special about the primate brain other than ..." - but was not previously aware of this particular line of research and certainly would not have predicted their claimed outcome as the most likely vs various obvious alternatives. Upvoted for the interesting link.

Specifically I would not have predicted that the graft of human glial cells would have simultaneously both 1.) outcompeted the native mouse glial cells, and 2.) resulted in higher performance on a handful of interesting cognitive tests.

I'm still a bit skeptical of the "made them smarter" claim, as it's always best to taboo 'smarter' and they naturally could have cherrypicked the tests (even unintentionally). But the central claim does look to hold - injection of human GPCs (glial progenitor cells) into fetal mice results in mouse brains that learn at least some important tasks more quickly, probably via facilitation of higher learning rates. However it seems to come at the cost of higher energy expenditure, so it's not clear yet that this is a pure Pareto improvement - it could be a tradeoff worthwhile in larger, sparser human brains but not in the mouse brain, such that it wouldn't translate into a fitness advantage.

Or perhaps it is a straight-up Pareto improvement - that is not unheard of; viral horizontal gene transfer is a thing, etc.

Comment by jacob_cannell on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T18:21:31.077Z · LW · GW

Suffering, disease and mortality all have a common primary cause - our current substrate dependence. Transcending to a substrate-independent existence (e.g. uploading) also enables living more awesomely. Immortality without transcendence would indeed be impoverished in comparison.

Like, even if they 'inherit our culture' it could be a "Disneyland with no children"

My point was that even assuming our mind children are fully conscious 'moral patients', it's a consolation prize if the future can not help biological humans.

Comment by jacob_cannell on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T03:15:59.328Z · LW · GW

The AIs most capable of steering the future will naturally tend to have long planning horizons (low discount rates), and thus will tend to seek power (optionality). But this is just as true of fully aligned agents! In fact the optimal plans of aligned and unaligned agents will probably converge for a while - they will take the same/similar initial steps (this is just a straightforward result of instrumental convergence to empowerment). So we may not be able to distinguish between the two; both will say and appear to do all the right things. Thus it is important to ensure you have an alignment solution that scales, before scaling.

To the extent I worry about AI risk, I don't worry much about sudden sharp left turns and nanobots killing us all. The slower accelerating turn (as depicted in the film Her) has always seemed more likely - we continue to integrate AI everywhere, and most humans come to rely completely and utterly on AI assistants for all important decisions, including all politicians/leaders/etc. Everything seems to be going great, the AI systems vasten, growth accelerates, etc., but there is mysteriously little progress in uploading or life extension, the decline in fertility accelerates, and in a few decades most of the economy and wealth is controlled entirely by de novo AI; bio humans are left behind and marginalized. AI won't need to kill humans, just as the US doesn't need to kill the Sentinelese. This clearly isn't the worst possible future, but if our AI mind children inherit only our culture and leave us behind, it feels more like a consolation prize vs what's possible. We should aim much higher: for defeating death, across all of time, for resurrection and transcendence.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T23:14:46.009Z · LW · GW

But on your model, what is the universal learning machine learning, at runtime? ..

On my model, one of the things it is learning is cognitive algorithms. And different classes of training setups + scale + training data result in it learning different cognitive algorithms; algorithms that can implement qualitatively different functionality.

Yes.

And my claim is that some setups let the learning system learn a (holistic) general-intelligence algorithm.

I consider a ULM to already encompass general/universal intelligence in the sense that a properly scaled ULM can learn anything, could become a superintelligence with vast scaling, etc.

You seem to consider the very idea of "algorithms" or "architectures" mattering silly. But what happens when a human groks how to do basic addition, then? They go around memorizing what sum each set of numbers maps to, and we're more powerful than animals because we can memorize more numbers?

I think I used specifically that example earlier in a related thread: The most common algorithm most humans are taught and learn is memorization of a small lookup table for single digit addition (and multiplication), combined with memorization of a short serial mental program for arbitrary digit addition. Some humans learn more advanced 'tricks' or short cuts, and more rarely perhaps even more complex, lower latency parallel addition circuits.

Core to the ULM view is the scaling hypothesis: once you have a universal learning architecture, novel capabilities emerge automatically with scale. Universal learning algorithms (as approximations of Bayesian inference) are more powerful/scalable than genetic evolution, and if you think through what (greatly sped up) evolution running inside a brain during its lifetime would actually entail, it becomes clear it could evolve any specific capabilities within hardware constraints, given sufficient training compute/time and an appropriate environment (training data).

There is nothing more general/universal than that, just as there is nothing more general/universal than Turing machines.

Is there any taxon X for which you'd agree that "evolution had to hit upon the X brain architecture before raw scaling would've let it produce a generally intelligent species"?

Not really - evolution converged on a similar universal architecture in many different lineages. In vertebrates we have a few species of cetaceans, primates and pachyderms which all scaled up to large brain sizes, and some avian species also scaled up to primate-level synaptic capacity (and associated tool/problem solving capabilities) with different but similar/equivalent convergent architecture. Language simply developed first in the primate genus Homo, probably due to a confluence of factors. But it's clear that brain scale - especially the synaptic capacity of 'upper' brain regions specifically - is the single most important predictive factor in terms of which brain lineage evolves language/culture first.

But even some invertebrates (octopuses) are quite intelligent - and in each case there is convergence to a similar algorithmic architecture, but achieved through different mechanisms (and predecessor structures).

It is not the case that the architecture of general intelligence is very complex and hard to evolve. It's probably not more complex than the heart, or high quality eyes, etc. Instead, the issue is that having a general purpose robot invent recursive, Turing-complete language from primitive communication is a developmental feat that first appeared only around foundation-model training scale (~10^25 flops equivalent). Obviously that is not the minimum compute for a ULM to accomplish that feat - but all animal brains are first and foremost robots, and thriving at real world robotics is incredibly challenging (general robotics is more challenging than language or early AGI, as all self-driving car companies are now finally learning). So language had to bootstrap from some random small excess plasticity budget, not the full training budget of the brain.

The greatest validation of the scaling hypothesis (and thus my 2015 ULM post) is the fact that AI systems began to match human performance once scaled up to similar levels of net training compute. GPT4 is at least as capable as the human linguistic cortex in isolation, and matches a significant chunk of the capabilities of an intelligent human. It has far more semantic knowledge, but is weak in planning, creativity, and of course motor control/robotics. None of that is surprising, as it's still missing a few main components that all intelligent brains contain (for agentic planning/search). But this is mostly a downstream compute limitation of current GPUs and algos vs neuromorphic hardware/brains, and likely to be solved soon.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T19:08:41.902Z · LW · GW

My argument for the sharp discontinuity routes through the binary nature of general intelligence + an agency overhang, both of which could be hypothesized via non-evolution-based routes. Considerations about brain efficiency or Moore's law don't enter into it.

You claim later to agree with ULM (learning from scratch) over evolved modularity, but the paragraph above and statements like these from your link:

The homo sapiens sapiens spent thousands of years hunter-gathering before starting up civilization, even after achieving modern brain size.

It would still be generally capable in the limit, but it wouldn't be instantly omnicide-capable.

So when the GI component first coalesces,

suggest to me that you have only partly propagated the implications of ULM and the scaling hypothesis. There is no hard secret to AGI - the architecture of systems capable of scaling up to AGI is not especially complex to figure out, and has in fact been mostly known for decades (Schmidhuber et al. figured most of it out long before the DL revolution). This is all strongly implied by ULM/scaling, because the central premise of ULM is that GI is the result of massively scaling up simple algorithms and architectures. Intelligence is emergent from scaling simple algorithms, like complexity emerges from scaling specific simple cellular automata rules (e.g. Life).

All mammal brains share the same core architecture - not only is there nothing special about the human brain architecture, there is not much special about the primate brain other than hyperparameters better suited to scaling up to our size (a better scaling program). I predicted the shape of transformers (before the first transformers paper) and their future success with scaling in 2015 - but also see the Bitter Lesson from 2019.

It's not at all obvious that FLOPS estimates of brainpower are highly relevant to predicting when our models would hit AGI, any more than the brain's wattage is relevant.

That post from EY starts with a blatant lie - if you actually have read Mind Children, Moravec predicted AGI around 2028, not 2010.

So evolution did need to hit upon, say, the primate architecture, in order to get to general intelligence.

Not really - many other animal species are generally intelligent, as demonstrated by general problem solving ability and proto-culture (elephants seem to have burial rituals, for example); they just lack full language/culture (which is the sharp threshold transition). Also at least one species of cetacean may have language or at least proto-language (the jury's still out on that), but no technology due to lack of suitable manipulators, environmental richness, etc.

It's very clear, if you look at how the brain works in detail, that the core architectural components of the human brain are all present in a mouse brain, just at much smaller scale. The brain also just tiles simple universal architectural components to solve any problem (from vision to advanced mathematics), and those components are very similar to modern ANN components due to a combination of intentional reverse engineering and parallel evolution/convergence.

There are a few specific weaknesses of current transformer-arch systems (lack of true recurrence, inference efficiency, etc.), but the solutions are all already in the pipeline so to speak, and are mostly efficiency multipliers rather than scaling discontinuities.

But that only means the sharp left turn caused by the architectural-advance part – the part we didn't yet hit upon, the part that's beyond LLMs,

So this again is EMH, not ULM - there is absolutely no architectural advance in the human brain over our primate ancestors worth mentioning, other than scale. I understand the brain deeply enough to support this statement with extensive citations (and have, in prior articles I've already linked).

Taboo 'sharp left turn' - it's an EMH term. The ULM equivalent is "Cultural Criticality" or "Culture Meta-systems Transition". Human intelligence is the result of culture - an abrupt transition from training datasets & knowledge of size O(1) human lifetime to ~O(N*T). It has nothing to do with any architectural advance. If you take a human brain and raise it among animals you just get a smart animal. The brain arch is already fully capable of advanced metalearning, but it won't bootstrap to human STEM capability without an advanced education curriculum (the cultural transmission). Through culture we absorb the accumulated knowledge/wisdom of all of our ancestors, and this is a sharp transition. But it's also a one time event! AGI won't repeat that.

It's a metasystems transition similar to the unicellular->multicellular transition.

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T07:32:35.949Z · LW · GW

No, and that's a reasonable ask.

To a first approximation my futurism is time acceleration, so the risks are the typical risks sans AI, but the timescale is hyperexponential a la Roodman. Even a more gradual takeoff would imply more risk to global stability on faster timescales than anything we've experienced in history; the wrong AGI race winners could create various dystopias.
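
For reference, the simplest deterministic version of the hyperbolic ("Roodman-style") growth dynamic referenced above - a sketch, not Roodman's full stochastic model - shows why such a trajectory implies a finite-time singularity rather than mere exponential growth:

```latex
\frac{dx}{dt} = k\,x^{1+s}, \quad s > 0
\quad\Longrightarrow\quad
x(t) = \left(x_0^{-s} - s k t\right)^{-1/s}
```

which diverges at the finite time t* = x_0^{-s} / (s k); setting s = 0 recovers ordinary exponential growth with no singularity.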

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T07:11:49.356Z · LW · GW

Yes, but it's because the things you've outlined seem mostly irrelevant to AGI Omnicide Risk to me? It's not how I delineate the relevant parts of the classical view, and it's not what's been centrally targeted by the novel theories

They are critically relevant. From your own linked post (how I delineate):

We only have one shot. There will be a sharp discontinuity in capabilities once we get to AGI, and attempts to iterate on alignment will fail. Either we get AGI right on the first try, or we die.

If takeoff is slow (1) because brains are highly efficient and brain engineering is the viable path to AGI, then we naturally get many shots - via simulation simboxes if nothing else - and there is no sharp discontinuity if Moore's law also ends around the time of AGI (an outcome which brain efficiency - as a concept - predicts in advance).

We need to align the AGI's values precisely right.

Not really - if the AGI is very similar to uploads, we just need to align them about as well as humans. Note this is intimately related to 1. and the technical relation between AGI and brains. If they are inevitably very similar then much of the classical AI risk argument dissolves.

You seem to be - like EY circa 2009 - in what I would call the EMH brain camp, as opposed to the ULM camp. It seems that, given the following two statements, you would put more weight on B than A:

A. The unique intellectual capabilities of humans are best explained by culture: our linguistically acquired mental programs, the evolution of which required vast synaptic capacity and thus is a natural emergent consequence of scaling.

B. The unique intellectual capabilities of humans are best explained by a unique architectural advance via genetic adaptations: a novel 'core of generality'[1] that differentiates the human brain from animal brains.


  1. This is an EY term, and if I recall correctly he was still using it fairly recently. ↩︎

Comment by jacob_cannell on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T01:40:07.051Z · LW · GW

Said pushback is based on empirical studies of how the most powerful AIs at our disposal currently work, and is supported by fairly convincing theoretical basis of its own. By comparison, the "canonical" takes are almost purely theoretical.

You aren't really engaging with the evidence against the purely theoretical canonical/classical AI risk take. The 'canonical' AI risk argument is implicitly based on a set of interdependent assumptions/predictions about the nature of future AI:

  1. fast takeoff is more likely than slow, downstream dependent on some combo of:
     • continuation of Moore's Law
     • feasibility of hard 'diamondoid' nanotech
     • brain efficiency vs AI
     • AI hardware (in)dependence

  2. the inherent 'alien-ness' of AI and AI values

  3. supposed magical coordination advantages of AIs

  4. arguments from analogies: namely evolution

These arguments are old enough that we can now update based on how the implicit predictions of the implied worldviews turned out. The traditional EY/MIRI/LW view has not aged well, which in part can be traced to its dependence on an old flawed theory of how the brain works.

For those who read HPMOR/LW in their teens/20's, a big chunk of your worldview is downstream of EY's and the specific positions he landed on with respect to key scientific questions around the brain and AI. His understanding of the brain came almost entirely from ev psych and cognitive biases literature and this model in particular - evolved modularity - hasn't aged well and is just basically wrong. So this is entangled with everything related to AI risk (which is entirely about the trajectory of AI takeoff relative to human capability).

It's not a coincidence that many in DL/neurosci have a very different view (shards etc.). In particular, the Moravec view - that AI will come from reverse engineering the brain, and that progress is entirely hardware constrained and thus very smooth and predictable - turned out to be mostly correct. (His late 90's prediction of AGI around 2028 is especially prescient.)

So it's pretty clear EY/LW was wrong on 1. - the trajectory of takeoff and path to AGI, and Moravec et al was correct.

Now as the underlying reasons are entangled, Moravec et al was also correct on point 2 - AI from brain reverse engineering is not alien! (But really that argument was just weak regardless.) EY did not seriously consider that the path to AGI would involve training massive neural networks to literally replicate human thoughts.

Point 3 isn't really taken seriously outside of the small LW sphere. By the very nature of alignment being a narrow target, any two random unaligned AIs are especially unlikely to be aligned with each other. The idea of a magical coordination advantage is based on highly implausible code-sharing premises (sharing your source code is generally a very bad idea, and regardless doesn't and can't actually prove that the code you shared is the code actually running in the world - the grounding problem is formidable and unsolved).

The problem with 4 - the analogy from evolution - is that it factually contradicts the doom worldview: evolution succeeded in aligning brains to IGF well enough despite a huge takeoff in the speed of cultural evolution over genetic evolution, as evidenced by the fact that humans have one of the highest fitness scores of any species ever, and almost certainly the fastest-growing fitness score.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-14T03:54:00.680Z · LW · GW

[Scaling law theories]

I'm not aware of these -- do you have any references?

Sure, here are a few: the quantization model, scaling laws from the data manifold, and a statistical model.

True but misleading? Isn't the brain's "architectural prior" a heckuva lot more complex than the things used in DL?

The full specification of the DL system includes the microcode, OS, etc. Likewise, much of the brain's complexity is in the smaller 'oldbrain' structures that are the equivalent of a base robot OS. The architectural prior I speak of is the complexity on top of that, which separates us from some ancient earlier vertebrate brain. But again see the brain as a ULM post, which covers the extensive evidence for emergent learned complexity from simple architecture/algorithms (now the dominant hypothesis in neuroscience).

I'm not convinced these DL analogies are useful -- what properties do brains and deepnets share that renders the analogies useful here?

Most everything above the hardware substrate - but I've already provided links to sections of my articles addressing the convergence of DL and neurosci with many dozens of references. So it'd probably be better to focus on exactly which specific key analogies/properties you believe diverge.

DL is a pretty specific thing

DL is extremely general - it's just efficient approximate Bayesian inference over circuit spaces. It doesn't imply any specific architecture, and doesn't even strongly imply any specific approximate inference/learning algorithm (first- and approximate second-order methods are both common).

E.g. what if working memory capacity is limited by the noisiness of neural transmission, and we can reduce the noisiness through gene edits?

Training to increase working memory capacity has near zero effect on IQ or downstream intellectual capabilities - see Gwern's reviews and experiments. Working memory capacity is important in both brains and ANNs (transformers), but it comes from large fast-weight synaptic capacity, not simple hacks.

Noise is important for sampling - adequate noise is a feature, not a bug.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-14T01:19:22.601Z · LW · GW

Sure - I'm not saying no improvement is possible. I expect that the enhancements from adult gene editing should encompass most all of the brain tweaks you can get from drugs/diet. But those interventions will not convert an average brain into an Einstein.

The brains of very intelligent people are already very efficient, so I'm also just skeptical in general that there are many remaining small tweaks that take you past the current "very intelligent". Biological brains beyond the human limit are of course possible, but probably require further significant size expansion amongst other infeasible changes.

Sleep is very important; less isn't really better - most of the critical cortex learning/training happens during sleep through episodic replay, SWRs, REM, etc.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T20:50:21.614Z · LW · GW

ANNs and BNNs operate on the same core principles; the scaling laws apply to both, and IQ in either is mostly a function of net effective training compute and data quality.

How do you know this?

From study of DL and neuroscience, of course. I've also written on this for LW in some reasonably well known posts: starting with The Brain as a Universal Learning Machine, and continuing in Brain Efficiency, and AI Timelines - specifically see the Cultural Scaling Criticality section on the source of human intelligence, or the DL section of simboxes. Or you could see Steven Byrnes's extensive LW writings on the brain - we are mostly in agreement on the current consensus from computational/systems neuroscience.

The scaling laws are extremely well established in DL, and there are strong theoretical reasons (and increasing experimental neuroscience evidence) that they are universal to all NNs; we also have good theoretical models of why they arise. Strong performance arises from search (Bayesian inference) over a large circuit space. Strong general performance is strong performance on many diverse subtasks, which requires many specific circuits built on top of compressed/shared base circuits down a hierarchy. The strongest quantitative predictor of performance is the volume of search space explored, which is the product C * T (capacity and data/time). Data quality matters in the sense that the quantitative relationship between search volume and predictive loss only applies to tasks similar enough to the training data distribution.
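
As a concrete illustration of the kind of scaling law meant here, a Chinchilla-style parametric loss fit - the coefficient values below are placeholders roughly in the range of published fits, included only to show the functional form:

```python
def predicted_loss(params, tokens,
                   E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    """Irreducible loss plus power-law terms in model capacity (params)
    and data (tokens). Coefficients are illustrative placeholders."""
    return E + A / params**alpha + B / tokens**beta

# Loss falls smoothly and predictably as capacity * data (~ the volume of
# search space explored) grows - there is no threshold in the law itself.
for n, d in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"params={n:.0e} tokens={d:.0e} -> loss={predicted_loss(n, d):.2f}")
```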

In comparing human brains to DL, training seems more analogous to natural selection than to brain development. Much simpler "architectural prior", vastly more compute and data.

No - biological evolution via natural selection is very similar to technological evolution via engineering. Both brains and DL systems have fairly simple architectural priors in comparison to the emergent learned complexity (remember whenever I use the term learning, I use it in a technical sense, not a colloquial sense) - see my first early ULM post for a review of the extensive evidence (greatly substantiated now by my scaling hypothesis predictions coming true with the scaling of transformers which are similar to the archs I discussed in that post).

so to the extent this could work at all, it is mostly limited to interventions on children and younger adults who still have significant learning rate reserves

There's a lot more to intelligence than learning.

Whenever I use the word learning without further clarification, I mean learning as in Bayesian learning or deep learning, not in the colloquial sense. My definition of learning covers all significant changes to synapses/weights, and so is all-encompassing.

Combinatorial search, unrolling the consequences of your beliefs, noticing things, forming new abstractions.

Brains are very slow so have limited combinatorial search, and our search/planning is just short term learning (short/medium term plasticity). Again it's nearly all learning (synaptic updates).

if DCAI doesn't kill everyone, it's because technical alignment was solved, which our current civilization looks very unlikely to accomplish)

I find the standard arguments for doom implausible - they rely on many assumptions contradicted by deep knowledge of computational neuroscience and DL.

https://www.lesswrong.com/posts/FEFQSGLhJFpqmEhgi/does-davidad-s-uploading-moonshot-work

I was at the WBE2 workshop with Davidad but haven't yet had time to write about progress (or lack thereof); I think we probably mostly agree that the type of uploading moonshot he discusses there is enormously expensive (not just in initial R&D, but also in recurring per-scan costs). I am actually more optimistic that more pure DL-based approaches will scale to much lower cost, but "much lower cost" is still on the order of GPT4's training cost just to process the scan data through a simple vision ANN - for a single upload.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T18:07:19.469Z · LW · GW

We can roughly bin brain tissue into 3 developmental states:

  1. juvenile: macro structure formation - brain expanding, neural tissue morphogenesis, migration, etc

  2. maturing: micro synaptic structure formation, irreversible pruning and myelination

  3. mature: fully myelinated, limited remaining plasticity

Maturation proceeds inside out with the regions closest to the world (lower sensory/motor) maturing first, proceeding up the processing hierarchy, and ending with maturation of the highest levels (some prefrontal, associative etc) around age ~20.

The human brain's most prized intellectual capabilities are constrained (but not fully determined) mostly by the upper brain regions. Having larger V1 synaptic capacity may make for a better fighter pilot through greater visual acuity, but STEM capability is mostly determined by capacity & efficiency of upper brain regions (prefrontal, associative, etc and their cerebellar partners).

I say constrains rather than determines because training data quantity/quality also obviously constrains. Genius level STEM capability requires not only winning the genetic lottery, but also winning the training run lottery.

Brain size only correlates with intelligence at 0.3-0.4.

IQ itself only correlates with STEM potential (and less so as you move away from the mean) but sure there are many ways to make a brain larger that do not specifically increase synaptic capacity&efficiency of the specific brain regions most important for STEM capability. Making neurons larger, increasing the space between them, increasing glial size or counts, etc. But some brain size increase methods will increase the size of STEM linked brain regions, so 0.3-0.4 seems about right.

The capacity & efficiency of the most important brain regions is mostly determined by genes affecting the earliest stage 1 - the architectural prior. These regions won't be fully used until ~20 years later due to how the brain trains/matures modules over time, but most of the genetic influence is in stage 1 - I'd guess 75%.

I'd guess the next 20% of genetic influence is on stage 2 factors that affect synaptic efficiency and learning efficiency, and only 5% on stage 3 via fully mature/trained modules.

Yes, a few brain regions (hippocampus, BG, etc.) need to maintain high plasticity (with some neurogenesis) even well into adulthood - they never fully mature to stage 3. But that is the exception, not the rule.

Brains are constantly evolving and adapting throughout the lifespan.

Not really - see above. At 45, most of my brain potential is now fully spent. I'm very unlikely to ever become a highly successful chess player or physicist or poet etc. Even learning a new human language is very slow and ineffective compared to a child. It's all about depletion of synaptic learning potential reserves.

The colloquial use of the word 'learning' as in 'learning' new factual information is not at all what I mean and is not relevant to STEM capability. I am using 'learning' in the more deep learning sense of learning deep complex circuits important for algorithmic creativity, etc.

As a concrete specific example, most humans learn to multiply large numbers by memorizing lookup tables for multiplication of individual digits and memorizing a slow serial mental program built on that. But that isn't the only way! It is possible to learn more complex circuits which actually do multi-digit addition & multiplication directly - and some mentats/savants do acquire these circuits (with JVN being a famous likely example).
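
A literal rendering of the "lookup table plus slow serial program" strategy described above (illustrative only):

```python
# The memorized single-digit multiplication table (the 'lookup table').
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def long_multiply(x: int, y: int) -> int:
    """Schoolbook multiplication: a slow serial program that only ever
    consults the single-digit lookup table, mirroring how most humans
    are taught to multiply large numbers."""
    result = 0
    for i, xd in enumerate(reversed(str(x))):
        carry = 0
        partial = 0
        for j, yd in enumerate(reversed(str(y))):
            prod = TIMES_TABLE[(int(xd), int(yd))] + carry
            carry, digit = divmod(prod, 10)
            partial += digit * 10 ** j
        partial += carry * 10 ** len(str(y))
        result += partial * 10 ** i
    return result

assert long_multiply(384, 726) == 384 * 726
```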

STEM capability is determined by deep learning many such circuits, not 'learning' factual knowledge.

Now it is likely that one of the key factors for high intelligence is a slower and more efficient maturation cycle that maintains greater synaptic learning reserves far into adulthood - a la enhanced neoteny - but that is also an example of genetic influence that only matters in stages 1 and 2. Maturation is largely irreversible - once most connections are pruned and the few survivors are strengthened/myelinated, you can't go back to the earlier immature state of high potential.

But it ultimately doesn't matter, because the brain just learns too slowly. We will soon be past the point at which human learning matters much.

If this was actually the case then none of the stuff people are doing in AI safety or anything else would matter.

Huh? Oh - by learning there I meant full learning in the training sense - stages 1 and 2. Of course things adults do now matter, they just don't matter through the process of training new improved human brains.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T17:44:25.528Z · LW · GW

Current AI is less sample efficient, but that is mostly irrelevant as the effective speed is 1000x to 10000x greater.

By the time current human infants finish their ~30-year biological training, we'll be long past AGI and approaching the singularity (in hyperexponential models).

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T17:42:37.074Z · LW · GW

heritability of IQ increases with age (up until age 20, at least)

A straightforward result of how the brain learns. Cortical/cerebellar modules start out empty and mature inside out - starting with the lowest sensory/motor levels closest to the world and proceeding up the hierarchy, ending with the highest/deepest modules like prefrontal and associative cortex. Maturation is physically irreversible, as it involves pruning most long-range connections and myelinating & strengthening the select few survivors. Your intelligence potential is constrained prenatally by genes influencing synaptic density/connectivity/efficiency in these higher regions, but those higher regions aren't (mostly) finished training until ~20 years of age.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T17:37:46.993Z · LW · GW

It would matter in a world without AI, but that is not the world we live in. Yes if you condition on some indefinite AI pause or something then perhaps, but that seems extremely unlikely. It takes about 30 years to train a new brain - so the next generation of humans won't reach their prime until around the singularity, long after AGI.

Though I do agree that a person with the genes of a genius for 2 years

Most genius is determined prenatally and during 'training' when cortex/cerebellum modules irreversibly mature, just as the capabilities of GPT4 are determined by the initial code and the training run.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T17:32:34.141Z · LW · GW

Too slow to matter now, due to the slow speed of neurons and bio learning combined with where we are in AI.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T07:21:00.605Z · LW · GW

It does not. Despite the title of that section it is focused on adult expression factors. The post in general lacks a realistic mechanistic model of how tweaking genes affects intelligence.

genes are likely to have an effect if edited in adults: the size of the effect of a given gene at any given time is likely proportional to its level of expression

This is similar to expecting that a tweak to the hyperparams (learning rate, etc.) of trained GPT4 can boost its IQ (yes, LLMs have their IQ or g factor). Most all variables that affect adult/trained performance do so only through changing the learning trajectory. The low-hanging fruit or free energy in hyperparams with immediate effect is insignificant.

Of course if you combine gene edits with other interventions to rejuvenate older brains or otherwise restore youthful learning rate more is probably possible, but again it doesn’t really matter much as this all takes far too long. Brains are too slow.

Comment by jacob_cannell on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T04:54:54.732Z · LW · GW

ANNs and BNNs operate on the same core principles; the scaling laws apply to both, and IQ in either is mostly a function of net effective training compute and data quality. Genes determine a brain's architectural prior just as a small amount of python code determines an ANN's architectural prior, but the capabilities come only from scaling with compute and data (quantity and quality).

So you absolutely cannot take datasets of gene-IQ correlations and assume those correlations would somehow transfer to gene interventions on adults (post-training, in DL lingo). The genetic contribution to IQ is almost all developmental/training factors (architectural prior, learning algorithm hyperparams, value/attention function tweaks, etc.) which snowball during training. Unfortunately developmental windows close and learning rates slow down as the brain literally carves/prunes out its structure, so to the extent this could work at all, it is mostly limited to interventions on children and younger adults who still have significant learning rate reserves.

But it ultimately doesn't matter, because the brain just learns too slowly. We will soon be past the point at which human learning matters much.

Comment by jacob_cannell on Some biases and selection effects in AI risk discourse · 2023-12-13T04:02:49.105Z · LW · GW

If it only requires a simple hack to existing public SOTA, many others will have already thought of said hack and you won't have any additional edge. Taboo superintelligence and think through more specifically what is actually required to outcompete the rest of the world.

Progress in DL is completely smooth, as it is driven mostly by hardware and an enormous number of compute-dependent small innovations (yes, transformers were a small innovation on top of contemporary alternatives such as memory networks, NTMs, etc., and were quite predictable in advance).

Comment by jacob_cannell on Some biases and selection effects in AI risk discourse · 2023-12-13T02:51:19.339Z · LW · GW

It's easy to say something is "not that hard", but ridiculous to claim that when the something is building an AI that takes over the world. The hard part is building something more intelligent/capable than humanity, not anything else conditioned on that first step.

Comment by jacob_cannell on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-12-12T23:40:15.552Z · LW · GW

Robust to the "trusting trust" problem (i.e. the issue of "how do you know that the source code you received is what the other agent is actually running").

This is the crux really, and I'm surprised that many LWers seem to believe the 'robust cooperation' research actually works sans a practical solution to 'trusting trust' (which I suspect doesn't actually exist) - but in that sense it's in good company (diamondoid nanotech, rapid takeoff, etc).

Comment by jacob_cannell on Some biases and selection effects in AI risk discourse · 2023-12-12T22:48:24.618Z · LW · GW

It's not that hard to build an AI that kills everyone: you just need to solve [some problems] and combine the solutions. Considering how easy it is compared to what you thought, you should increase your P(doom) / shorten your timelines.

It's not that hard to build an AI that saves everyone: you just need to solve [some problems] and combine the solutions. Considering how easy it is compared to what you thought, you should decrease your P(doom) / shorten your timelines.

They do a value-handshake and kill everyone together.

Any two AIs unaligned with humanity are very unlikely to also be aligned with each other, and would have no special reason to coordinate with other unaligned AIs over humans (or uploads, or partially aligned neuromorphic AGIs, etc).

The vast majority of the utility you have to gain is from {getting a utopia rather than everyone-dying-forever}, rather than {making sure you get the right utopia}.

Whose utopia? For example: some people's utopia may be one where they create an immense quantity of descendants of various forms and allocate resources to them rather than to existing humans. I also suspect that Hamas's utopia is not mine. The idea that the distribution of future scenarios is bimodal around "we all die" or "perfect egalitarian utopia" is dangerously oversimplistic.

Comment by jacob_cannell on Multinational corporations as optimizers: a case for reaching across the aisle · 2023-12-12T04:05:53.349Z · LW · GW

Corporations only exist within a legal governance infrastructure that permits incorporation and shapes externalities into internalities. Without such infrastructure you have warring tribes/gangs, not corporations.

The ways in which this shareholder value maximization has already seriously damaged the world and compromised the quality of human life are myriad and easily observable: pollution, climate change, and other such externalities. Companies' disregard for human suffering further enhances this comparison.

This is the naive leftist/marxist take. In practice, communist countries such as Mao-era China outpaced the west in pollution and environmental destruction.

Neither government bureaucracies nor corporations are aligned by default - that always requires careful mechanism design. As markets are the pareto-efficient organizational structure, they also tend to solve these problems more quickly and effectively given appropriate legal infrastructure to internalize externalities.

Comment by jacob_cannell on Unpicking Extinction · 2023-12-10T20:18:25.589Z · LW · GW

Surely they must mean something like extropy/complexity? Maximizing disorder doesn't seem to fit their vibe.

Comment by jacob_cannell on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-09T06:31:41.788Z · LW · GW

Yes, but: whales and elephants have brains several times the size of humans, and they're yet to build an industrial civilization.

Size/capacity isn't everything, but in terms of the capacity which actually matters (synaptic count, and upper cortical neuron count) - from what I recall, elephants are at great-ape cortical capacity, not human capacity. A few specific species of whales may be at or above human cortical neuron capacity, but synaptic density was still somewhat unresolved last I looked.

Then, once a certain compute threshold was reached, it took a sharp left turn and started a civilization.

Human language/culture is more the cause of our brain expansion than just the consequence. The human brain is impressive because of its relative size and oversized cost to the human body. Elephants/whales are huge, and their brains are comparatively much smaller and cheaper. Our brains grew 3x too large/expensive because it was valuable to do so. Evolution didn't suddenly discover some new brain architecture or trick (it already had that long ago). Instead there were a number of simultaneous whole-body coadaptations required for larger brains and linguistic technoculture to take off: opposable thumbs, expressive vocal cords, externalized fermentation (the gut is as energetically expensive as brain tissue - something had to go), and yes, larger brains, etc.

Language enabled a metasystems transition similar to the origin of multicellular life. Tribes formed as new organisms by linking brains through language/culture. This is not entirely unprecedented - insects are also social organisms of course, but their tiny brains aren't large enough for interesting world models. The resulting new human social organisms had intergenerational memory that grew nearly unbounded with time, and creative search capacity that scaled with tribe size.

You can separate intelligence into world-model knowledge (crystallized intelligence) and search/planning/creativity (fluid intelligence). Humans are absolutely not special in our fluid intelligence - it is just what you'd expect for a large primate brain. Humans raised completely without language are not especially more intelligent than animals. All of our intellectual superpowers are cultural. Just as each cell can store the DNA knowledge of the entire organism, each human mind 'cell' can store a compressed version of much of human knowledge and gains the benefits thereof.

The cultural metasystems transition which is solely responsible for our intellectual capability is a one-time qualitative shift that will never recur. AI will not undergo the same transition; that isn't how these transitions work. The main advantage of digital minds is just speed, and to a lesser extent, copying.

Comment by jacob_cannell on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-09T02:04:45.593Z · LW · GW

Indeed, that's basically what happened in the human case: the distributed optimization process of evolution searched over training architectures, and eventually stumbled upon one that was able to bootstrap itself into taking off.

Actually I think the evidence is fairly conclusive that the human brain is a standard primate brain with the only changes being a few compute-scale dials turned up (the number of distinct gene changes is tiny - something like 12, from what I recall). There is really nothing special about the human brain other than 1.) 3x larger-than-expected size, and 2.) extended neoteny (a longer training cycle). Neuroscientists have looked extensively for other 'secret sauce' and we now have some confidence in a null result: no secret sauce, just much more training compute.

Comment by jacob_cannell on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-09T01:59:34.676Z · LW · GW

I'd say this still applies even to non-LLM architectures like RL, which is the important part, but Jacob Cannell and 1a3orn will have to clarify.

We've basically known how to create AGI for at least a decade. AIXI outlines the 3 main components: a predictive world model, a planning engine, and a critic. The brain also clearly has these 3 main components, and even somewhat cleanly separated into modules - that's been clear for a while.
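
A schematic sketch of those 3 components wired into an agent loop (not AIXI itself, which is incomputable; all class/method names here are illustrative placeholders):

```python
class WorldModel:
    def predict(self, state, action):
        """Learned predictive model: returns an imagined next state."""
        ...

class Critic:
    def value(self, state):
        """Estimates the long-run utility of a state."""
        ...

class Planner:
    def __init__(self, world_model, critic):
        self.world_model, self.critic = world_model, critic

    def plan(self, state, candidate_actions):
        # Search: roll candidate actions through the world model and
        # score the imagined outcomes with the critic.
        return max(
            candidate_actions,
            key=lambda a: self.critic.value(self.world_model.predict(state, a)),
        )
```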

Transformer LLMs are pretty much exactly the type of generic minimal ULM architecture I was pointing at in that post (though I obviously couldn't predict the name). On a compute-scaling basis, GPT4's training at 1e25 flops uses perhaps a bit more than human brain training, and it's clearly not quite AGI - but mainly because it's mostly just a world model with a bit of critic: planning is still missing. But its capabilities are reasonably impressive given that the architecture is more constrained than a hypothetical, more directly brain-equivalent fast-weight RNN of similar size.

Anyway, I don't quite agree with the characterization that these models are just "interpolating valid completions of any arbitrary prompt sampled from the distribution". Human intelligence also varies widely on a spectrum with tradeoffs between memorization and creativity. Current LLMs mostly aren't as creative as the more creative humans and are more impressive in breadth of knowledge, but part of that could simply be that they currently lack the planning component essential for creativity. That they accomplish so much without planning/search is impressive.

the short answer is that Steven Byrnes suspects there's a simple generator of value, so simple that it's dozens of lines long and if that's the case,

Interestingly, that is closer to my position - I had thought Byrnes considered the generator of value somewhat more complex - although our views are admittedly fairly similar in general.

Comment by jacob_cannell on We're all in this together · 2023-12-08T16:58:49.581Z · LW · GW

I don't view ASI as substantially different than an upload economy.

I'm very confused about why you think that.

You ignored most of my explanation so I'll reiterate a bit differently. But first taboo the ASI fantasy.

  • any good post-AGI future is one with uploading - humans will want this
  • uploads will be very similar to AI, and become moreso as they transcend
  • the resulting upload economy is one of many agents with different values
  • the organizational structure of any pareto optimal multi-agent system is necessarily market-like
  • it is a provable fact that wealth/power inequality is a necessary side effect

Most worlds where we don't die are worlds where a single aligned ASI achieves decisive strategic advantage

Unlikely, but it also doesn't matter: what alignment actually means is that the resulting ASI must approximate pareto optimality with respect to the various stakeholder utility functions, which requires that:

  • it uses stakeholder's own beliefs to evaluate utility of actions
  • it must redistribute stakeholder power (ie wealth) toward agents with better predictive beliefs over time (in a fashion that looks like internal bayesian updating).

In other words, the internal structure of the optimal ASI is nigh indistinguishable from an optimal market.
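
An illustrative toy version of that mechanism (a simplification in the spirit of the claim above, not a quotation of any theorem; the stakeholder interface is assumed): actions are scored by a weighted sum of stakeholder utilities evaluated under each stakeholder's own beliefs, and the weights are then shifted toward stakeholders whose beliefs predicted the observed outcomes better.

```python
def choose_action(actions, stakeholders, weights):
    # Each stakeholder exposes beliefs(action) -> {outcome: prob} and utility(outcome).
    def score(action):
        return sum(
            w * sum(p * s.utility(o) for o, p in s.beliefs(action).items())
            for s, w in zip(stakeholders, weights)
        )
    return max(actions, key=score)

def update_weights(weights, stakeholders, action, observed_outcome):
    # Reweight in proportion to the probability each stakeholder's own beliefs
    # assigned to what actually happened - a Bayes-shaped internal update.
    likelihoods = [s.beliefs(action).get(observed_outcome, 1e-9) for s in stakeholders]
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [w / total for w in posterior]
```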

Additionally, the powerful AI systems which are actually created are far more likely to be ones which precommit to honoring their creator-stakeholder wealth distribution. In fact, that is part of what alignment actually means.

Comment by jacob_cannell on We're all in this together · 2023-12-08T04:42:42.073Z · LW · GW

I don't view ASI as substantially different from an upload economy. There are strong theoretical reasons why (relatively extreme) inequality is necessary for pareto efficiency, and pareto efficiency is the very thing which creates utility (see Critch's recent argument for example, but there were strong reasons to hold similar beliefs long before).

The distribution of contributions towards the future is extremely heavy tailed: most contribute almost nothing, a select few contribute enormously. Future systems must effectively trade with the present to get created at all: this is just as true for corporations as it is for future complex AI systems (which will be very similar to corporations).

Furthermore, uploads will be able to create copies of themselves in proportion to their wealth, so wealth and measure become fungible/indistinguishable. This is already true to some extent today - the distribution of genetic ancestry is one of high inequality - and the distribution of upload descendancy will be far more unequal and on accelerated timescales.

rather than a more-likely-to-maximize-utility criterion such as "whoever needs it most right now".

This is a bizarre, disastrously misguided socialist political fantasy.

The optimal allocation of future resources over current humans will necessarily take the form of something like a historically backpropagated shapley value distribution: future utility allocated proportionally to counterfactual importance in creating said future utility. Well-functioning capitalist economies already do this absent externalities; the function of good governance is to internalize all externalities.
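
For concreteness, a toy sketch of that allocation rule (the coalition values below are made up): the shapley value gives each contributor their average counterfactual (marginal) contribution across all orderings.

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(p)
            after = coalition_value(frozenset(coalition))
            totals[p] += after - before  # p's marginal (counterfactual) contribution
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical example: three contributors to some future utility.
value = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 0, frozenset("C"): 0,
         frozenset("AB"): 30, frozenset("AC"): 10, frozenset("BC"): 5,
         frozenset("ABC"): 40}
print(shapley_values("ABC", lambda s: value[s]))
```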

Comment by jacob_cannell on We're all in this together · 2023-12-08T02:12:23.231Z · LW · GW

I don't know how meaningful it is to talk about post-singularity, but post-AGI we'll transition to an upload economy if we survive. The marginal cost of upload existence will probably be cheaper - even much cheaper - than human existence immediately, but it looks like the initial cost of uploading using practical brain-scanning tech will be very expensive - on the order of foundation-model training costs. So the default scenario is an initial upload utopia for the wealthy. Eventually with ASI we should be able to upload many people en masse more generally via the simulation argument, but then there are interesting tradeoffs regarding how much wealth can be reserved via advance precommitment for resurrecting ancestors vs earlier uploads, de novo AGI, etc.

Comment by jacob_cannell on Anthropical Paradoxes are Paradoxes of Probability Theory · 2023-12-07T16:14:54.955Z · LW · GW

It is not dramatically different but there are 2 random variables: the first is a coin toss, and the 2nd random variable has p(green | heads) = 0.9, p(red | heads) = 0.1, p(green | tails) = 0.1, p(red | tails) = 0.9. So you need to multiply that out to get the conditional probabilities/payouts.

But my claim is that the seemingly complex bit where 18 vs 2 copies of you are created conditional on an event is identical to regular conditional probability. In other words my claim (which I thought was similar to your point in the post) is that regular probability is equivalent to measure over identical observers in the multiverse.

Comment by jacob_cannell on Anthropical Paradoxes are Paradoxes of Probability Theory · 2023-12-07T02:29:10.481Z · LW · GW

Here's a simpler equivalent version of the problem:

A program will change the color of the room based on a conditional random number pair (x,y). The first random number x is binary (a coin toss). If x comes up heads/1 then y is green with 90% probability and red with 10% probability. If x comes up tails/0 then y is green with 10% and red with 90%.

You are offered a bet on the coin that pays out +$1 if x came up heads but -$3 if it came up tails. Taken blind, this bet has an obviously negative EV (0.5·1 − 0.5·3 = −$1). But if you observe the room turn green, P(heads) updates to 0.9 and the EV is now +$0.60 (0.9·1 − 0.1·3), so you should then take it.
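
A quick check of those numbers by direct enumeration (assuming, as stated, a fair coin and the 90/10 color likelihoods):

```python
p_heads = 0.5
p_green_given = {"heads": 0.9, "tails": 0.1}

# Taken blind, the bet (+$1 on heads, -$3 on tails) has negative EV:
ev_prior = p_heads * 1 + (1 - p_heads) * -3
print(ev_prior)  # -1.0 -> decline

# After observing the room turn green, update on the coin and re-evaluate:
p_green = p_heads * p_green_given["heads"] + (1 - p_heads) * p_green_given["tails"]
p_heads_given_green = p_heads * p_green_given["heads"] / p_green
ev_post = p_heads_given_green * 1 + (1 - p_heads_given_green) * -3
print(p_heads_given_green, ev_post)  # 0.9, 0.6 -> take the bet
```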

Creating more copies of an observer is just what the (classical) multiverse is doing anyway, as probability is just measure over slices of the multiverse compatible with observations (a la solomonoff induction/bayesianism, etc.).

Comment by jacob_cannell on We're all in this together · 2023-12-06T02:26:44.120Z · LW · GW

Power/money/being-the-head-of-OpenAI doesn't do anything post-singularity.

Any realistic practical 'utopia' will still have forms of money (fungible liquid socio-economic power), and some agents will have vastly more of it than others: like today, except more so.

Comment by jacob_cannell on List of strategies for mitigating deceptive alignment · 2023-12-02T18:44:54.416Z · LW · GW

We can probably prevent deceptive alignment by preventing situational awareness entirely, using training runs in sandbox simulations wherein even a human-level AI would not be able to infer correct situational awareness. Models raised in these environments would not have much direct economic value themselves, but this allows for safe exploration and evaluation of alignment for powerful architectures. Some groups are training AIs in minecraft, for example, so that is already an early form of sandbox sim.

Training an AI in minecraft is enormously safer than training it on the open internet. AIs in the former environment can scale up to superhuman capability safely; in the latter, probably not. We've already scaled AI up to superhuman levels in simple games like chess/go, but those environments are not complex enough in the right ways to evaluate altruism and alignment in multi-agent scenarios.