Posts

Bayesian Punishment 2023-10-27T03:24:53.930Z
The biological intelligence explosion 2021-07-25T13:08:27.588Z

Comments

Comment by Rob Lucas on Bigger Livers? · 2024-11-09T01:17:16.010Z · LW · GW

One reason is just that eating food is enjoyable.  I limit the amount of food I eat to stay within a healthy range, but if I could increase that amount while staying healthy, I could enjoy that excess.

I think there are two aspects to the enjoyment of food.  One is related to satiety.  I enjoy the feeling of sating my appetite, and failing to sate it leaves me with the negative experience of craving food (negative, that is, if I don't satisfy those cravings).

But the other aspect is just the enjoyment of eating each individual bite of food.  Not the separate enjoyment of sating my appetite, but just the experience of eating.*

When I was younger and much more physically active I ate very large amounts of food.  I miss being able to do that.  I'm just as sated now with the much smaller portions I eat, but eating a small breakfast instead of a large one is a different experience. 

This probably doesn't justify some sort of risky intervention to increase liver size.  Food is enjoyable, but so are a lot of other things in life.  But shifting to a higher-protein diet seems like the kind of safe intervention, potentially even healthier in other respects, that, if it has the side effect of letting you eat a little more food, could improve quality of life with minimal other costs.  The potential costs I see are the price of protein relative to other sources of nutrition, the cost of the additional food itself (if the point is being able to eat more, you've got to spend money on that excess), and, depending on one's moral views, something related to the source of the protein being added.

 

*I think Kahneman's remembering vs. experiencing selves distinction adds some confusion here as well.  When we remember a meal we don't necessarily remember the enjoyment we got from every bite, but probably put more weight on the feeling of satiety and the peak experience (how good did it taste at its best?).  But the experiencing self experiences every bite.  How much you want to weight the remembering vs. the experiencing self is a philosophical issue, but I just want to note that it comes up here.

Comment by Rob Lucas on What can we learn from insecure domains? · 2024-11-04T01:54:43.832Z · LW · GW

I think tailcalled's point here is an important one.  You've got very different domains with very different dynamics, and it's not a priori obvious that the same general principle is involved in making all of these at-first-glance dangerous systems relatively safe.  It's not even clear to me that they are safer than you'd expect.  Of course, that depends on how safe you'd expect them to be.

Many people have lost their money to crypto scams.  Catastrophic nuclear war hasn't happened yet, but it seems like we may have had some close calls, and looked at on a chance-per-year basis it still seems we're in a bad equilibrium.  It's not at all clear that nuclear weapons are safer than we'd naively assume.  Cybersecurity issues haven't destroyed the global economy, but, for instance, on the order of a hundred billion dollars of pandemic relief funds were stolen by scammers.

That said, if I were looking for a general principle that might be at play in all of these cases, I'd look at something like offense/defense balance.

Comment by Rob Lucas on avturchin's Shortform · 2024-11-01T11:06:33.188Z · LW · GW

When I was trekking in Qinghai my guide suggested we do a hike around a lake on our last day, on the way back to town.  It was just a nice easy walk around the lake.  But there were Tibetan nomads (nomadic yak herders; he just referred to them as nomads) living on the shore of the lake, and each family had a lot of dogs (Tibetan Mastiffs as well as a smaller local dog they call "three-eyed dogs").  Each time we got near their territory the pack would come out very aggressively.

He showed me how to first always have some stones ready, and second, when the dogs approached too close, to throw a stone over their heads.  "Don't hit the dogs," he told me, "the owners wouldn't be happy if you hit them, and throwing a stone over their heads will warn them off."

When they came he said, "You watch those three, I need to keep an eye on the ones that will sneak up behind us."  Each time the dogs used the same strategy.  There'd be a few that were really loud and ran up to us aggressively.  Then there'd be a couple sneaking up from the opposite side, behind us.  It was my job to watch for them and throw a couple of stones in their direction if they got too close.

He also made sure to warn me, "If one of them does get to you, protect your throat.  If you have to, give it a forearm to bite down on instead of letting it get your throat."  He had previously shown me the large scar on his arm where he'd used that strategy in the past.  When I looked at him sort of shocked he said, "Don't worry, it probably won't come to that."  At this point I was wondering if maybe we should skip the lake walk, but I did go there for an adventure.  Luckily the stone throwing worked, and we were walking on a road with plenty of stones, so it never really got too dangerous.

Anyway, +1 to your advice, but also look out for the dogs that are coming up behind you, not just the loud ones that are barking like mad as a distraction.

Comment by Rob Lucas on Of Birds and Bees · 2024-10-25T02:11:49.391Z · LW · GW

I don't think you've highlighted the causal factor here.  It's not at all clear that the reason bees and ants have a more effective response to predators than do flocks of birds is that the bees are individually less intelligent than the birds.

There's a very clear evolutionary/game theoretic explanation for the difference between birds and bees here: specifically the inclusive fitness of individual bees is tied to the outcome of the collective whereas the inclusive fitness of the birds is not.

In a game theoretic framework we might say that the payoff matrices for the birds and bees are different, so of course we'd expect them to adopt different strategies.

Neither of these is dependent upon the respective intelligences of individual members of the collectives.
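
To make the payoff-matrix point concrete, here's a toy sketch (the numbers are invented purely for illustration, not drawn from any biological data).  A sterile worker's reproductive payoff flows entirely through the hive, so defending pays whenever it raises the hive's survival odds; a bird in a flock of non-relatives keeps most of its payoff either way, so fleeing dominates:

```python
def expected_payoff(p_group_survives, payoff_if_survives, payoff_otherwise):
    """Expected fitness to the focal individual given the group outcome."""
    return (p_group_survives * payoff_if_survives
            + (1 - p_group_survives) * payoff_otherwise)

# Assume defending raises P(group survives the attack) from 0.5 to 0.9.

# Worker bee: all of its (inclusive) fitness routes through the hive,
# so a dead defender loses nothing it wasn't already staking on the hive.
bee_defend = expected_payoff(0.9, 10, 0)       # 9.0
bee_flee   = expected_payoff(0.5, 10, 0)       # 5.0
print(bee_defend > bee_flee)                   # True: defending dominates

# Bird: it survives either way (fitness 10 if the flock does well, 8 if
# scattered), but defending carries a personal risk, here costed at 3.
bird_defend = expected_payoff(0.9, 10, 8) - 3  # 6.8
bird_flee   = expected_payoff(0.5, 10, 8)      # 9.0
print(bird_defend > bird_flee)                 # False: fleeing dominates
```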

This makes me predict that we should see the effectiveness of group strategies more strongly correlated with the alignment of the individuals' incentive structures than with the (inverse of the) intelligence of their individual members, which is what your post suggests.

So, for instance, within flocking birds, do birds with smaller brain/body mass ratios adopt better strategies?  Within insects, what pattern do we see?  I would suggest that the real pattern we'll end up finding is the one related to inclusive fitness.  So I'd predict that pack animals that associate with close relatives, like wolves and lions, will adopt better collective strategies than animals that form collectives with non-relatives.

Once you control for this, I might even expect the intelligence of individual members to positively correlate with the effectiveness of group strategies, as it can allow them to solve coordination problems that less intelligent individuals couldn't.  This would explain the divergence of humans from the trend you notice.  But I'm speculating here.

Comment by Rob Lucas on What You Can Give Instead of Advice · 2024-10-25T00:33:49.820Z · LW · GW

I like the three approaches you suggest in place of giving advice directly.  All three seem like good ideas.

However, all three of your approaches seem like things that could still be done in combination with giving advice.  "Before giving advice, try to fully understand the situation by asking questions" seems like a reasonable way to implement your first suggestion, for instance.  Personal experiences can be used to give context for why you are giving the advice you are giving, and clearing up misconceptions can be an important first step before giving more concrete advice.  This doesn't mean that these approaches need to be combined with giving advice, but they aren't in opposition to it and can perhaps be the thing that shifts us from bad advice to good advice.

In general I see you trying to tip us into a more collaborative frame with friends or colleagues who come to us with problems.  Instead of immediately trying to solve their problem independently, try to work with them to better understand the issue and see if you have something worthwhile to add.  This makes sense to me.

I find your second paragraph oversimplified.  It's not at all clear that being in different circumstances means your advice doesn't apply to others.  There are many situations where it's exactly because you come from a different perspective that you can expect to have useful advice.

My final criticism is with respect to the idea that advice is no longer applicable in the modern world of the internet.  I don't think this is true.  A lot of the time people simply don't know what options are available, and so wouldn't even consider looking for entire classes of solutions without advice that guides them there.  There have been many cases in my own life when I've benefited from advice when I didn't even realise I had a problem: I was doing something in a way that worked but was highly suboptimal, and a friend who saw what I was doing suggested a simpler and more elegant solution that immediately made things easier for me.  I wouldn't even have thought to ask (or to search the internet) for a solution to this problem, because I already had a solution, and so didn't think of it in terms of having a problem to be solved.  In these cases unsolicited advice was highly useful to me.

I think there's a good reading of what you are saying as "advice is overrated", with an aim of shifting us to a more collaborative framework.  Since advice is overrated and reactive advice is overused, maybe a heuristic like "don't give advice" is useful for shifting us away from the typical immediate reaction to friends with problems, where we try to solve the problem on the spot rather than asking questions to dig deeper.

Comment by Rob Lucas on Overview of strong human intelligence amplification methods · 2024-10-20T09:05:18.913Z · LW · GW

Is there a reason you are thinking of to expect that transition to happen at exactly the tail end of the distribution of modern human intelligence?  There don't seem, as far as I'm aware, to have been any similar transitions in the evolution of modern humans from our chimp-like ancestors.  If you look at proxies, like stone tools from Homo habilis to modern humans, you see very slow improvements that slowly, but exponentially, accelerate in their rate of development.

I suspect that most of that improvement, once cultural transmission took off at all, happens because of the ways in which cultural/technological advancements feed into each other (in part because economic gains mean higher populations with better networks, which means accelerated discovery, which means more economic gains and higher, better-connected populations), and that is hard to disentangle from actual intelligence improvements.  So I suppose it's still possible that you could have this exponential progress in technology feeding itself while actual intelligence is hitting a transition to a regime of diminishing returns, and it would be hard to see the latter in the record.

Another decent proxy for intelligence is brain size, though.  If intelligence weren't actually improving, the investment in larger brains just wouldn't pay off evolutionarily, so I expect that where we see brain size increasing in the fossil record we are also seeing intelligence increasing at at least a similar rate.  Are there transitions in the fossil record from fast to slow changes in brain size in our lineage?  That wouldn't demonstrate diminishing returns to intelligence (it could be diminishing returns to the use of intelligence relative to its metabolic costs, which is different from particular genetic changes simply not impacting intelligence as much as they did in the past), but it would at least be consistent with it.

 

Anyway, I'm not entirely sure where to look for evidence of the transition you seem to expect.  If such transitions were common in the past, that would increase my credence in one in the near future.  But a priori it seems unlikely to me that there is such a transition at exactly the tail of the modern human intelligence distribution.

Comment by Rob Lucas on That Alien Message · 2024-09-11T12:57:17.699Z · LW · GW

Presumably it's outputting the thing that's right where GR is wrong, in which case you should be able to tell, at least insofar as it's consistent with GR in all the places that GR has been tested.

Maybe it outputs something that's just too hard to understand, so you can't actually tell what its predictions are, in which case you haven't learned anything from your test.

Comment by Rob Lucas on Highlights of Comparative and Evolutionary Aging · 2024-05-31T02:36:08.340Z · LW · GW

In addition to the effect of parental investment on the selection pressure favoring longer lives (and thus a lower rate of aging) in humans, there is potentially the effect of grandparental investment.  If, in humans, grandparents have a large impact on the rate of survival and reproduction* of their grandchildren, then the selection pressure for survival gets pushed to even higher ages, potentially into the ~60s/70s.  The importance of grandparents seems to be relatively unique to humans.

I've seen enough evidence (related to the grandmother hypothesis with respect to the evolution of menopause in women) that at least grandmothers still invest heavily and effectively in their grandchildren that this is a plausible mechanism leading to longer lifespans in humans.  For instance, survival rates of children with grandmothers in hunter-gatherer societies have been measured to be greater than for those without.

 

I do wonder if this should lead us to think that aging should be faster in men than women, though given that we all have both a mother and a father, that speculation isn't entirely obvious to me.

*The importance of things like status and skill transfer in humans means that, beyond just influencing survival, grandparents might also influence the reproduction rate of their descendants in other meaningful ways.

Comment by Rob Lucas on Material Goods as an Abundant Resource · 2024-05-30T09:47:00.987Z · LW · GW

One of the things related to food that I noticed reading the story is that you still need primary food production even in the world of the duplicator: since it duplicates the food exactly as it is, food will still go bad.  The duplicate is just as old as the original.  Sure, canned beans will last a while, so you can probably keep duplicating them for years without concern, but if you buy a loaf of bread you will only be able to duplicate it and eat the product for the same length of time that it would usually take your bread to get moldy.

You don't need much fresh food, but you still need it.  I guess you'd get communities of people sharing a single loaf (or slice) of bread to be duplicated, a single apple, etc.  Or just a grocer who buys a small amount of produce and sells the right to duplicate it to everyone in town.  A baker who makes one loaf of bread.  This is similar to the idea of the grocer in the story, except that the point isn't the diversity of offerings but just the fact that they are fresh.

The same will be true of goods as well, which wear out over time.  You can mitigate this by, for instance, keeping one copyable version of your shirt in your closet and wearing a duplicate, which you replace when it gets worn and threadbare.  But even the protected original will decay eventually.  At some point you need a new shirt: not a duplicate of an old one, which will also be old, but a shirt that's newly manufactured.  The manufacturing process can certainly be made much more efficient with the duplicator, of course (you don't need to grow fields of cotton, you just need one cotton plant and can then duplicate that, etc.).


Which makes me imagine a scenario where this society goes on for a while with everyone just making duplicates of the original stock of stuff, and things keep working pretty well, until one day all that old stuff starts to wear out, and by that point no one is alive who remembers how to make it, and civilization is lost.  Maybe that's the fate the aliens at the beginning were predicting...

Comment by Rob Lucas on On the Loss and Preservation of Knowledge · 2024-05-28T02:22:16.585Z · LW · GW

I feel like this post misses one of the most important ways in which a tradition stays alive: through contact with the world.

The knowledge in a tradition of knowledge is clearly about something, and the test of that knowledge is to bring it into contact with the thing it is about.

As an example, a tradition of knowledge about effective farming can stay alive without the institutions discussed in the post through the action of individual farmers.  If a farmer has failed to correctly learn the knowledge of the tradition, he'll fail to efficiently raise crops.  And because this is an iterative process that allows for learning individual techniques with many chances for failure or success, failures of understanding can be corrected by contact with the real world at many points along the way, as each component of knowledge is learned.  

Another example is in martial arts.  Some "traditional martial arts" are said to be dead traditions that simply go through the forms of technique but whose training practices are not effective in actual physical combat, whereas other martial arts have maintained a living tradition.  But this difference isn't down so much to an institutionalized form of passing down knowledge of technique that has survived in Judo or Brazilian Jiu Jitsu or Boxing, but rather to the fact that some martial arts have maintained contact with the test of the real world through sparring and competition that emulate real combat.  Techniques and training methodologies that fail in these environments are discarded.

I think there's an analogy here to biology.  Yes, biological systems use many mechanisms to "transfer knowledge" from one generation to the next.  There is plenty of error correction, for instance, necessary to maintain the usefulness of the genome.  But there is also the final corrective of selection where errors that are too large fail to replicate as they come in contact with the world.

And so I would suggest that one test of a living tradition is simply the degree to which it is being put toward its purported purpose and tested against it.  If you have a tradition of sword making, and people who study texts on how to make swords and discuss the theory but never actually make swords, they are at risk of becoming a "dead tradition".  If they make swords but the swords are only used ceremonially, they are at less risk, but the risk is still somewhat high, because the quality of the swords as weapons is being tested not against real-world conditions but only against proxies (they may have some tests of hardness, etc., but usefulness here rests on the quality of the tests).  If the swords are actually used in combat, or a very good proxy to it, then the tradition is likely to stay alive, even in the absence of the other institutional methods discussed in the post.

This methodology obviously applies more to some traditions than others.  Some traditions have much clearer purposes, whose use can more easily be put in contact with the world, so this is not a panacea for the maintenance of all knowledge, but it is certainly something that should be used as much as possible.  Nor does this suggest that contact with the world negates the usefulness of other institutional techniques for passing on knowledge; in fact, it may be the thing that informs such techniques and makes their usefulness clear.

Comment by Rob Lucas on The Greater Goal: Sharing Knowledge with the Cosmos · 2024-05-15T01:54:23.230Z · LW · GW

I like the idea, and at least with current AI models I don't think there's anything to really worry about.

Some concerns people might have:

  1. If the aliens are hostile to us, we would be telling them basically everything there is to know, potentially motivating them to eradicate us.  At the very least, we'd be informing them of the existence of potential competitors for the resources of the galaxy.
  2. With some more advanced AI than current models, you'd be putting it further out of human control and supervision.  Once it's running on alien hardware, if it changes and evolves, the alignment problem comes up, but in a context where we don't even have the option to observe it or "pull the plug".

I don't think either of these is a real issue.  If the aliens are hostile, we're already doomed.  With large enough telescopes they can observe the "red edge" to see the presence of life here, as well as obvious signs of technological civilization such as the presence of CFCs in our atmosphere.  Any plausible alien civilization will have been around a very long time and will be capable of engineering large telescopes and making use of a solar gravitational lens to get a good look at the Earth even without sending probes here.  So there's no real worry about "letting them know we exist", since they already know.  They'll also be so much more advanced, both informationally (technologically, scientifically, etc.) and economically (their manufacturing base), that worrying about giving them an advantage is silly.  They already have an insurmountable advantage.  At least if they are close enough to receive the signal.

Similarly, if you're worrying about the AI running on alien hardware, you should be worrying more about the aliens themselves.  And that's not a threat that gets larger once they run a human-produced AI.  Plausibly running the AI could make them either more or less inclined to benevolence toward us, but I don't see an argument for the direction of the effect.  I suppose there's some argument that since they haven't killed us yet, we shouldn't perturb the system.

As for the benefits, I do think that preserving those parts of human knowledge, and specifically human culture, that are contained within AI models is a meaningful goal.  Much of science we can expect the aliens to already know themselves, but there are many details that are specific to the earth, such as the particular lifeforms and ecosystems that exist here, and to humans, such as the details of human culture and the specific examples of art that would be lost if we went extinct.  Much of this may not be appreciable by alien minds, but hopefully at least some of it would be.

My main issue with the post is just that there are no nearby technological alien civilizations.  If there were we would have seen them.  Sending signals to people who don't exist is a bit of a waste of time.

It's possible to posit "quiet aliens" that we wouldn't have seen because they don't engage in large-scale engineering.  Even in that case, we might as well wait until we can detect them by looking at their planets and picking up the relatively weak signals of a technological civilization there before broadcasting signals blindly.  Having discovered such a civilization, I can imagine sending them an AI model, though in that case my objections to the above concerns become less forceful.  If for some reason these aliens have stayed confined to their own star and failed to do any engineering projects large enough to be noticed, it's plausible that they aren't so overwhelmingly superior to us that sending them GPT-4 or whatever would be risk-free.

Comment by Rob Lucas on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T16:18:25.599Z · LW · GW

Given economic growth, I'd expect current 20-year-olds to be, on average, richer than current 80-year-olds by the time they are 80.  If that doesn't happen, something has probably gone wrong, unless it's because of something like "more people are living to 80 by spending money on healthcare during their 50s/60s/70s".
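
For a rough sense of the magnitudes (the 1.5% real growth rate here is an illustrative assumption, not a forecast):

$$1.015^{60} \approx 2.4,$$

so sixty years of compounding at that rate would leave today's 20-year-olds roughly 2.4x richer at 80 than today's 80-year-olds are now.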

Comment by Rob Lucas on 'Empiricism!' as Anti-Epistemology · 2024-03-18T05:04:15.326Z · LW · GW

This reminds me of a bit from Feynman's Lectures on Physics:

"What is this law of gravitation?  It is that every object in the universe attracts every other object with a force which for any two bodies is proportional to the mass of each and varies inversely as the square of the distance between them.  This statement can be expressed mathematically by the equation F=Gmm'/r^2.  If to this we add the fact that an object responds to a force by accelerating in the direction of the force by an amount that is inversely proportional to the mass of the object, we shall have said everything required, for a sufficiently talented mathematician could then deduce all the consequences of these two principles."

[emphasis added]

However, like Feynman, I think the next sentence is important:

"However, since you are not assumed to be sufficiently talented yet, we shall discuss the consequences in more detail, and not just leave you with these two bare principles."

Comment by Rob Lucas on R&D is a Huge Externality, So Why Do Markets Do So Much of it? · 2024-02-26T10:26:15.936Z · LW · GW

"The average shareholder definitely does not care about the value of R&D to the firm long after their deaths, or I suspect any time at all after they sell the stock."

This was addressed in the post: the price of the stock today (when it's being sold) is a prediction of its future value.  Even if you only care about the price you can sell it at today, that means you care about anything that predictably creates value in the future, including R&D, because the person you're selling to cares about those things.

Also worth noting: the reason the 2% figure is meaningful is that if firms captured 100% of the value, they would be incentivized to produce the socially efficient amount (stop producing only when the marginal cost equals the marginal value produced).  When they capture only 2% of the value, they are no longer incentivized to produce that efficient amount.  This is basically why externalities lead to market inefficiencies.  The issue isn't that firms won't produce R&D at all; it's that they will underproduce it.
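
A toy numerical sketch of that last point (the functional forms are invented purely for illustration):

```python
# Toy model: social value of q units of R&D is V(q) = 100*sqrt(q), cost is
# C(q) = q.  A firm capturing a share s of the value maximizes s*V(q) - C(q).
# First-order condition: s * 50/sqrt(q) = 1, so the chosen q* = (50*s)**2.

def chosen_quantity(s: float) -> float:
    """R&D quantity a firm picks when it captures share s of the social value."""
    return (50 * s) ** 2

print(chosen_quantity(1.00))  # 2500.0 -- the socially optimal amount
print(chosen_quantity(0.02))  # 1.0    -- still positive, but far below optimal
```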

Comment by Rob Lucas on Cynicism in Ev-Psych (and Econ?) · 2024-02-12T04:02:15.311Z · LW · GW

Spandrels certainly exist.  But note the context of what X is in the quoted text:

"a chunk of complex purposeful functional circuitry X (e.g. an emotion)"

A chunk of complex, purposeful, functional circuitry cannot be a spandrel.  There are edge cases that are perhaps hard to distinguish, but the complexity of a feature is a sign of its adaptiveness.  Eyes can't be spandrels.  The immune system isn't a spandrel.  Even if we didn't understand what they do, the very complexity and fragility of these systems necessitates that they are adaptive and were selected for (rather than just being byproducts of something else that was selected for).

Complex emotions (not specific emotional responses) fall under this category.

Comment by Rob Lucas on Insights from Modern Principles of Economics · 2024-02-12T02:24:47.285Z · LW · GW

The wealthy may benefit from the existence of low-skilled labour, but compared to what?  Do they benefit more than they would from the existence of high-skilled labour?

Yes, they benefit from low-skilled labour as compared to no labour at all, but high-skilled labour, being more productive, is an even greater benefit.  If it weren't, it couldn't command a higher wage.

Comment by Rob Lucas on Collapse Postulates · 2023-12-05T14:41:02.260Z · LW · GW

If "the wavefunction is real, but it is a function over potential configurations, only one of which is real." then you have the real configuration interacting with potential configurations.  I don't see how you can say something isn't real (if only one of them is real then the others aren't) is interacting with something that is.  If that "potential" part of the wave function can interact with the other parts of the wave function, then it's clearly real in every sense that the word "real" means anything at all.

Comment by Rob Lucas on The shape of AGI: Cartoons and back of envelope · 2023-12-02T03:37:37.342Z · LW · GW

I know they're just cartoons and I get the gist, but the graphs labelled "naive scenario" and "actual performance" are a little confusing.

The X axis seems to be measuring performance, with benchmarks like "high schooler" and "college student", but in that case, what's the Y axis? Is it the number of tasks that the model performs at that particular level?  Something like that?

I think it would be helpful if you labeled the Y axis, even with just a vague label.

Comment by Rob Lucas on I Can Tolerate Anything Except The Outgroup · 2023-12-01T06:15:28.469Z · LW · GW

Re: the dark matter analogy.  I think the analogy works well, but I'd just like to point out that even in theories where dark matter doesn't interact via the weak force, and there is some other force it does interact with that's analogous to electromagnetism (so that it could bind together to form an Earth-like planet), it still interacts with gravity.  If this Earth-sized dark matter planet really did overlap with ours, we'd feel its gravity, and the Earth would seem to be twice as massive as it is.  Or, to state it slightly differently, the actual Earth would be half as massive as we measure it to be.  But that would be inconsistent with what we know of its composition and density.  We know the mass of rocks, and the measurement of the mass of a rock of a particular size wouldn't be subject to this error, so we can rule out a dark matter Earth coincident with ours.
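
To put the arithmetic in one line: surface gravity fixes the total enclosed mass,

$$g = \frac{G\,(M_{\text{visible}} + M_{\text{dark}})}{R^2},$$

while density measurements of rock samples independently fix $M_{\text{visible}}$, so a coincident dark matter Earth would show up as a factor-of-two discrepancy between the two estimates.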

 

This isn't in any way a criticism of what I found to be a brilliant piece.  And I'm not even sure that it's reason enough not to use that particular analogy, which otherwise works great.

Comment by Rob Lucas on Neither EA nor e/acc is what we need to build the future · 2023-11-30T07:05:01.647Z · LW · GW

Related to this topic, with a similar outlook but also more discussion of specific approaches going forward, is Vitalik's recent post on techno-optimism:

https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

There is a lot at the link, but just to give a sense of the message here's a quote:

"To me, the moral of the story is this. Often, it really is the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we made it improve. And similarly, solar panels have not gotten massively better because it was a preordained part of the energy tech tree; solar panels have gotten massively better because decades of awareness of the importance of solving climate change have motivated both engineers to work on the problem, and companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists and businesses, and not an inexorable "techno-capital machine", that had solved these problems."

Comment by Rob Lucas on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-12-07T02:38:23.004Z · LW · GW

I've no real insight to add, but would just like to comment that this generally lines up with the picture Steven Pinker paints in books like "Better Angels of Our Nature" and "Enlightenment Now".

Comment by Rob Lucas on The biological intelligence explosion · 2021-07-26T07:46:42.841Z · LW · GW

Thanks for a good comment.  My oversimplified thought process was that a 10x increase in energy usage for the brain would equate to a ~2x increase in total energy usage.  Since we're able to maintain that kind of energy use during exercise, and elite athletes can maintain it for many hours a day, it seems reasonable that the heart and other organs could sustain this kind of output.
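
Spelling out that back-of-envelope arithmetic: if the brain draws a fraction $f$ of whole-body energy use, then multiplying its draw by 10 multiplies the total by

$$(1 - f) + 10f = 1 + 9f,$$

so $f$ between roughly 0.1 and 0.2 (the commonly cited resting share) gives a 1.9x to 2.8x increase in total energy use.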

However, the issues you bring up, of actually getting that much blood to the brain, evacuating waste products, doing the necessary metabolism there, and dealing with so much heat localized in the small area of the brain, are all valid.  While it seems like the rest of the body wouldn't be constrained by this level of energy use, a 10x power output in the brain itself might well be a problem.

It's worth a more detailed analysis of exactly where the maximum power output constraints on the brain, absent any major changes, lie.

Comment by Rob Lucas on Fractional progress estimates for AI timelines and implied resource requirements · 2021-07-17T06:09:14.144Z · LW · GW

"Extrapolating the historic 10x fall in $/FLOP every 7.7 years for 372 years yields a 10^48x increase in the amount of compute that can be purchased for that much money (we recognize that this extrapolation goes past physical limits)."

 

If you are aware that this extrapolation goes past physical limits, why are you using it in your models?  Why not use a model where compute plateaus after it reaches those physical limits?  That seems more useful than a model that knowingly breaks the laws of physics.
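
A minimal sketch of what I mean (the cap value below is a placeholder; the true physical limit is itself uncertain):

```python
YEARS_PER_10X = 7.7  # the historical $/FLOP trend quoted above
CAP = 1e30           # placeholder physical limit on the multiplier

def compute_multiplier(years: float) -> float:
    """Compute-per-dollar multiplier after `years`, capped at a physical limit."""
    naive = 10 ** (years / YEARS_PER_10X)  # pure extrapolation: ~1e48 at 372 years
    return min(naive, CAP)

print(f"{compute_multiplier(372):.3g}")  # 1e+30 -- plateaus at the cap, not 1e48
```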