Could you clarify a bit here? Is Hanson talking about specific cultures or all instances of culture?
Thanks, that does help clarify the challenges for me.
I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every case was conditional: if Trump wins / if Trump doesn't win. I had two thoughts about the estimates for these. All seem to suggest the outcomes are worse under Trump. But that assessment of the outcome being worse is certainly subject to my own biases, values, and preferences. (For example, for US voters is it really a bad outcome if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs necessary to reduce that likelihood as high for something that does not actually involve the USA.)
So my first thought was: how much bias should I infer as present in these probability estimates? I'm not sure. But that does relate a bit to my other thought.
In one sense you could naively reason that if p is the estimate conditional on one candidate, then 1 - p must be the estimate for the other candidate, as only two actually exist. But I think it is also clear that the two probability distributions don't come from the same pool, so conceivably you could change the name to Harris and get the exact same estimates.
So I was thinking: what if Metaculus ran the two cases side by side? Would seeing p(Harris) + p(Trump) significantly different from 1 suggest one should have lower confidence in the estimates? I am not sure about that.
What if we see something like p(H) approximately equal to p(T)? Does that suggest the selected outcome is poorly chosen, since it is largely independent of the elected candidate and the estimates are largely meaningless in terms of election outcomes? I have a stronger sense this is the case.
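To make the two cases concrete, here's a minimal sketch with invented numbers (not actual Metaculus figures) of why the two conditional estimates need not sum to 1, and why near-equal conditionals would mean the question tells us little about the election:

```python
# Toy illustration with invented numbers -- not actual Metaculus data.
p_trump = 0.5               # P(Trump wins)
p_outcome_if_trump = 0.30   # P(outcome | Trump wins)
p_outcome_if_harris = 0.25  # P(outcome | Harris wins)

# The unconditional probability mixes the two conditionals:
p_outcome = (p_trump * p_outcome_if_trump
             + (1 - p_trump) * p_outcome_if_harris)
print(p_outcome)  # 0.275

# The conditionals describe two different hypothetical worlds, so nothing
# forces p_outcome_if_trump + p_outcome_if_harris to equal 1. And if they
# are nearly equal, the outcome is nearly independent of who wins, so the
# estimate says little about election impacts:
print(abs(p_outcome_if_trump - p_outcome_if_harris))  # ~0.05
```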
So my bottom line now is that I should likely not hold high confidence that the estimates on these outcomes are really meaningful with regard to the election impacts.
Had something of a similar reaction, but note the point about far-UV not having the same problems as other UV sterilization (i.e., also being harmful to humans). I gather the point is about locality. UV in ducts will kill viruses in the air system, but the spread of an airborne illness goes host-to-target before the air passes through the duct system.
As such, it seems that while the in-duct UV solution would help limit spread, it's not going to do much to clean the air in the room while people are in it exhaling, coughing, sneezing, talking, and so on.
I suspect it does little to protect the people directly next to or in front of a contagious person, but it's probably good for those practicing that old six-foot rule (or whatever the arbitrary distancing rule was).
Just my guess though.
Quick comment regarding research.
If far-UV is really so great, and not that simple, I would assume that any company selling and installing it would not be some small mom-and-pop operation. If that holds, why aren't the companies that want to promote and sell the systems installing them and then collecting the data?
Or would that type of investment be seen as too costly even for those with a direct interest in producing the results to bolster sales and increase the size of the network/ecosystem?
I think perhaps a first one might be:
On what evidence do I conclude that what I think I know is correct/factual/true, and how strong is that evidence? To what extent have I verified that view, and just how extensively should I verify the evidence?
After that might be a similar approach to the implications or outcomes of applying actions based on what one holds as truth/fact.
I tend to think of rationality as a process rather than an endpoint. Which isn't to say that the destination is not important, but clearly without the journey the destination is just a thought or dream. That "first of a thousand steps" thing.
What happens when Bob can be found in or out of the set of bald things at different times or in different situations, but we might not understand (or even be well aware of) the conditions that drive Bob's membership in the set when we're evaluating baldness and Bob?
Can membership in baldness turn out to be some type of quantum state thing?
That might be a basis for separating the concept of fuzzy language from fuzzy truth. But I would agree that if we can identify all possible cases where Bob is or is not in the set of baldness, one might claim truth is no longer fuzzy; though I think one then needs to prove that knowledge of all possible states has been established.
I really like the observation in your Further Thoughts point. I do think that is a problem people need to look at, as I would guess many will view the government involvement from an acting-in-the-public-interest view rather than acting in either self-interest (as problematic as that might be when the players keep changing) or from a special-interest/public-choice perspective.
Probably some great historical analysis already written about events in the past that might serve as indicators of the pros and cons here. Any historians in the group here?
Strong upvote based on the first sentence. I often wonder why people think an ASI/AGI will want anything that humans do or even see the same things that biological life sees as resources. But it seems like under the covers of many arguments here that is largely assumed true.
I am a bit confused on point 2. Other than trading or doing it yourself, what other ways of getting something are you thinking about?
That is certainly a more directly related, non-obvious aspect for verification. Thanks.
I assumed John was pointing at verifying that perhaps the chemicals used in the production of the chair might have some really bad impact on the environment, start causing a problem with the food-chain ecosystem, and make food much scarcer for everyone -- including the person who bought the chair -- in the meaningfully near future. Something along those lines.
As you note, verifying the chair functions as you want -- as a place to sit that is comfortable -- is pretty easy. Most of us probably do that without even really thinking about it. But will this chair "kill me" in the future is not so obvious or easy to assess.
I suspect that at the core this is a question about the assumption that we are evaluating a simple, non-complex world, when doing so in an inherently complex world doesn't allow true separability into simple and independent structures.
In terms of the hard-to-verify aspect, while it's true that any one person will face any number of challenges, do we live in a world where one person does anything on their own?
How would the open-source model influence outcomes? When pretty much anyone can take a look, and presumably many do, does the level of verification, or ease of verification, improve in your model?
Kind of speculative on my part, and nothing I've tried to research for this comment. I am wondering if the tort version of reasonableness is a good model for new, poorly understood technologies. Somewhat thinking about the picture in https://www.lesswrong.com/posts/CZQYP7BBY4r9bdxtY/the-best-lay-argument-is-not-a-simple-english-yud-essay distinguishing between narrow AI and AGI.
Tort law reasonableness seems okay for narrow AI. I am not so sure about the AGI setting though.
So I wonder if a stronger liability model might not be better until we have a good bit more direct experience with more AGI-ish models and functionality/products, and a better data set to assess.
The public-choice-type cynic in me has to wonder, if the law is making a strong case for the tort version of liability under a reasonable-man standard, whether it's more about limiting the liability for harms the companies might be enabling (I'm thinking of what we would have if social media companies faced stronger obligations for what is posted on their networks rather than the immunity they were granted) and less about protecting the general society.
Over time perhaps liability moves more towards the tort world of the reasonable man, but is that where this should start? Seems like a lower bar than is justified.
I find this rather exciting -- and clearly the cryonics implications are positive. But beyond that, and yes, this is really sci-fi, down-the-road thinking here, the implications for education/learning and treatment of things like PTSD seem huge, assuming we can figure out how to control these. Of course I'm ignoring some of the real downsides, like manipulation of memory for bad reasons or an Orwellian application. I am not sure those types of risks are that large in most open societies.
Thanks. Just took a quick glance at the abstract, but it looks interesting. Will have something to read while waiting at the airport for a flight tomorrow.
Is that thought one that is generally shared by those working in the field of memory, or more something that is new/cutting-edge? It's a very interesting statement, so if you have some pointers to a (not too difficult) paper on how that works, or just had the time to write something up, I for one would be interested and grateful.
I think you're right that the incentive structure around AI safety is important for getting those doing the work to do it as well as they can. I think there might be something to be said for the suggestion of moving to a cash payment over equity but think that needs a lot more development.
For instance, if everyone is paid up front for the work they are doing to protect the world from some AI takeover in the future, then they are no longer tied to that future in terms of their current state. That might not produce any better results than equity that could decline in value in the future.
Ultimately the goal has to be that those able to do the work, and doing it, have a pretty tight personal stake in the future state of AI. It might even be the case that such research-effort alignment is only loosely connected to compensation.
Additionally, as you note, it's not only about the incentives for those working to limit a bad outcome for humans in general from AGI, but also about the incentives for the companies as a whole. Here I think the discussion regarding AI liabilities and insurance might matter more. Which also opens up a whole question about corporate law. Years ago, pre-1930s, banking law used to hold bankers liable for twice the losses from bank failures, to make them better at risk management with other people's money. That seems to have been a special case that didn't apply to other businesses, even ones largely owned by outsiders. Perhaps making those who control AI development and deployment, or are largely the ones financing the efforts, personally responsible might be a better incentive structure.
All of these are difficult to work through to get a good, and fair, structure in place. I don't think any one approach will ultimately be the solution, but all or some combination of them might be. But I also think it's a given that the risk will always remain. So figuring out just what level of risk is acceptable is also needed, and problematic in its own way.
Actually checking those hypotheses statistically would be a pretty involved project; subtle details of accounting tend to end up relevant to this sort of thing, and the causality checks are nontrivial. But it's the sort of thing economists have tools to test.
Yes, it would be a challenge statistically, and measurement a challenge as well. It's not really about subtle accounting details but the economic costs -- opportunity costs, subjective costs, expected costs. Additionally, economics has been trying to explain the existence, size, and nature of the firm for at least a century but still has not come to a firm conclusion.
I suspect a big part of the problem here is that a firm is a rather complex "thing", and it's not clear any single internally consistent explanation can explain the phenomenon, as the whole does not necessarily reduce to some easily understood collection of parts. For instance, at a certain size do we think of a firm as a market participant maximizing profits (or some internal dominance metric), a hybrid that is part market participant and part internal market, or perhaps no longer even a market participant -- even when providing goods/services to some external market -- but really an alternative market form for those acting within that large firm? If you accept the view that explaining the firm requires explanations at each of those levels, and believe such a theory exists, then you also have to believe that some unified theory of micro- and macroeconomics exists, as it's basically the same problem.
So I'm not sure it's correct to say "economists have tools to test" in the sense that they will come up with clear and uncontested answers, rather than perhaps shedding a bit of light on something while not yet identifying the elephant they are touching.
First, I have to note this is way more than I can wrap my head around in one reading (in fact it was more than I could read in one sitting, so I really have not completed reading it), but thank you for posting this, as it presents a very complicated subject in a framework I find more accessible than prior discussions here (or anywhere else I've looked). But then I'm just a curious outsider to this issue who occasionally explores the discussion, so information overload is normal, I think.
I particularly like the chart and how it laid out the various states/outcomes.
I think it would be more correct to say that is a part of the literature related to the theory of the firm. The theory of the firm covers a lot of ground and in some ways various branches have somewhat challenging relationships with their internal logic and approaches.
I don't find this as convincing as others for a number of reasons. Caveat: I did a rather shallow read of the post and have not done deep thinking about the response below.
First, most of the managers I've worked with, and how I was as a manager, don't act like the dominance-seekers you're describing. Not a claim that the behavior doesn't exist, just that in my personal experience it doesn't seem to have been a big driver within the companies.
Second, I think the assumption that economic inefficiency exists, therefore these big companies should all be failing, seems a bit off target. The real question is not whether they are economically efficient in the abstract but whether they are more efficient than alternative market arrangements.
The above makes me think that if all or most managers are as described, and the large business is far from achievable economic efficiency, then we should see a lot of managers quitting their jobs when they plateau for whatever reason and starting small companies (something of a better-to-rule-in-hell-than-serve-in-heaven view). Those smaller, more efficient companies driven by the dominance-hungry former manager turned founder/owner would then start eating away at the big companies. So we should not see large companies that persist for so long. We should see a lot more creative destruction occurring at the level of the business entity, not just in the product space.
Now, I do think people/managers are to some extent motivated by dominance goals, but that is only part of the story and not sufficient to reach the conclusion about why these big companies exist. I also think it's a bit of a mistake to think big for-profit companies, even with their bureaucracy, work internally like big government.
I am not at all sure just how I would empirically test the hypothesis presented, as it's such a hidden type of metric.
Last, I think one will probably find a pretty good measure of how big a company grows by looking at things like the cost of using market exchange, internal economies of scale, and internal network effects. How much of the size is explained that way would be the interesting question. With an answer there, I think one might look into just what the effect of the dominance motivation on size might be. Does it increase or decrease the observed size?
And then we also have the whole moral hazard problem with those types of incentives. Could I put myself at a little risk of some AI damages that might be claimed to have much broader potential?
That touches on a view I've been holding for a while now. One often hears the phrase "those who forget the past are doomed to repeat it" (or close to that). But it struck me that many seem to hold on to the past, never letting it go, and so dooming themselves and everyone else to continued living in that past. When we never get past the injustices of the past, we keep them in the present and keep living them. I think this might be part of why we see many of the existing conflicts in the world -- from the racial issues in the USA, the wars and strife in the Middle East, the escalating conflict between China and the West and its threats of forceful reunification of Taiwan, the ongoing conflict on the Korean peninsula, and probably a host of other cases in Africa and Central/South America.
Making it worse, the attachment to the past gives some levers to manipulate the behavior and actions of many for political or personal goals and purposes. I think if we could forgive the past in which none of us were actually alive, and focus on solving current problems rather than addressing the crimes of people long dead, peace might be a bit easier to achieve.
But I think the "We suffered and we forgive, why can't you?" is not the way to present the idea.
Still reading and thanks for the write up. Much better than I could do myself and have been thinking it's time to revisit and see where things stand.
But I think this is an obvious typo, so wanted to mention it for your edit: "In other words, if your biological age is lower than your biological age, you’re doing great." I assume you mean lower than your chronological age there.
So I was farther along than I thought. Quick question on the reprogramming aspect. Certainly tissue complexity is a problem when the reaction rates are different, and we probably really need to keep something of a balance in the general state of cells. Does the interval between cycles of the YF application greatly impact results? You mention a one-shot treatment not really delivering much in the way of benefit, so I wonder if there is an interval in a cyclic treatment at which it effectively becomes a bunch of one-shot treatments that are really going nowhere.
I do agree with your point but think you are creating a bit of a strawman here. I think the OP's goal was to present situations in which we need to consider AI liability, and two of those situations would be where Coasean bargaining is possible and where it fails due to the (relatively) judgment-proof actor. I'd also note that legal trends have tended to always look for the entity with the deepest pockets that you have some chance of blaming.
So while the example of the gun is a really poor case for applying Coase, I'm not sure that really detracts from the underlying point/use of Coasean bargaining with respect to approaches to AI liability or understanding how to look at various cases. I don't think the claim is that AI liability will be all one type or the other. But I think the ramification here is that trying to define a good, robust AI liability structure is going to be complex and difficult. Perhaps to the point where we shouldn't really attempt to do so in a legislative setting, but rather in a combination of market risk management (insurance) and courts via tort complaints.
But that also seems to be an approach that will result in a lot of actual harms done as we all figure out where the good equilibrium might be (assuming it even exists).
Putting this in a bad, or provocative, way: underlying your description of love seems to be a "what's in it for me" attitude. In other words, what I hear you saying about love is about you rather than about what you're offering the other.
I agree with your presentation of true versus false, and if we're smitten by some image we've created in our own head, or bought into, that's not likely to last or be all that healthy. But we also probably go through a stage in every relationship where the image in our head is not completely accurate and, in cases of relationships we're trying to extend, err on the side of over-assessing the best in the person and under-assessing their flaws.
But at least for me, when thinking about love it's more about the acceptance of another person with all their flaws and still wanting to be around them, give something of yourself to them, and help make their life better.
So the idea of "true love" -- which I don't really believe in, or at least just see as poetry -- is more about that selfless giving than anything else. Exactly how well anyone can live that life every day for someone else, I'm not sure at all.
So I'd settle for practical love, which I'll define as a two-way street of mutual concern, compromise, and tolerance, with strong emotional attachments both selfish (want them in my life) and unselfish (want their life to be fulfilling and happy).
Yeah, I was a bit less clear in my statement than I could have been.
You might take a look at things like:
- Buchanan & Tullock, The Calculus of Consent
- Sam Peltzman, "Toward a More General Theory of Regulation"
There was also an old political economy paper, published in the late 19th century I think, in a French journal. The English title is "The Chairman's Problem", IIRC -- I never read it, but it was mentioned by one of my professors. Basically it discusses the challenges of voting cycles and agenda setting. It might be something you can find, and it was written by someone who was actually living with and dealing with a real political/governance problem hands-on.
Gordon Tullock and Anne Krueger are probably the correct starting points for insights into the concept of rent-seeking in political economy systems -- governments.
And of course just reading the rule books for the various governments or parts of the government -- for the US that would mean looking at the Constitution and the rules governing internal processes for both the House and Senate. Parliamentary systems will have similar rules of governance.
Looking at the organizational charts will likely also help -- what are the committee structures and how does legislation flow through?
I think a lot of the above hits on the idea of gears-level models (and you can likely find good references to alternative perspectives if any of the above seems to slip into the area you hope to avoid). That said, I'm not sure I would view political governance as truly having any gears. I think all the rules tend to become more like the Pirate's Code in Pirates of the Caribbean: more like guidelines than hard-and-fast rules.
Could you perhaps expand on how conflating cash flow and expected net worth at future time t relates to accurate measurement of current and past inflation in housing or shelter costs?
I agree it creates a confusion. The increased costs should not be labeled inflation, because you're making apples-to-oranges comparisons regarding the class of good one claims to be pricing/indexing.
Owners' equivalent rent seems a bit of a misdirect for tracking inflation. I own, and I can tell you that I don't pay anything close to what someone renting a similar house pays.
The other thing that kind of jumps out at me here is the hourly wages. I'd have to dig a bit more to be sure, but in general it seems that non-hourly wage incomes have been growing faster than hourly wage incomes. Inflation is supposed to tell us something about real prices. But the real prices most people care about are what their income is buying. It would be interesting to use that hourly wages data as the base for the other series and see what the chart looks like. (Note -- I should acknowledge that this is somewhat addressed in the chart on the share of disposable income.)
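As a rough illustration of what I mean by re-basing, here's a minimal pandas sketch; the column names and index values are invented, not the actual chart data:

```python
import pandas as pd

# Hypothetical index series, both set to 100 in 2000.
df = pd.DataFrame({
    "year":        [2000, 2010, 2020],
    "hourly_wage": [100, 125, 150],
    "shelter_cpi": [100, 140, 190],
})

# Shelter prices expressed in hours-of-work terms: values above 100 mean
# shelter got more expensive relative to hourly pay.
df["shelter_in_wage_terms"] = df["shelter_cpi"] / df["hourly_wage"] * 100
print(df)
```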
You note a bit about the composition of food, but at a pretty high level. I wonder if we're seeing food prices go up as people's incomes rise due to changes in what they are buying and eating -- both at home and away from home. If that is the case, it's not really inflation but simply higher-quality foods that will cost more regardless of inflation.
Interesting point about the medical assessment aspect. Where are you located by the way?
I did some quick searching just to find out a bit about Combantrin. This and this have some good information about just what you would be taking. (Side note: I'm in the US and both of these are US-based sources, but my initial search got hits primarily from Australia, so there might be other sources people would be more comfortable with, or that provide additional aspects for consideration.)
While it might go without saying here, I'll say it anyhow: there are some indicated risks, but that is all probably very conditional on the person, so I would not just go out and give this a try. The other thing I noticed was that Combantrin/pyrantel seems primarily to target pinworms, though the information mentions it might be used to treat other types of worms. I didn't dig deeper to see if perhaps other anthelmintics might be more effective for other worms.
The way it works was also a bit interesting. The drug paralyzes the worms, which are then expelled during a bowel movement. I'm not sure if that leaves open the possibility that any worm not expelled could recover, leaving you still dealing with the problem. If so, then perhaps a laxative after taking the pyrantel would be a good idea. Or perhaps follow the pre-colonoscopy procedures.
For some that might not matter as one of the listed side effects of the drug is diarrhea.
I'm not quite sure what to make of "adequately tuned" here. If that means tuned well enough that 99% of the audience cannot tell the difference between that and a better-tuned piano, then I'm not sure how they would then rate the performance lower than the alternative performance with a better-tuned piano.
I do agree that there is likely a range in which those like me might actually hear the difference but not be able to articulate, even to ourselves, the source of the sense that something is not quite right. Perhaps that's the area you're thinking of. If so, then I think that's something this post helps with. Knowing more about the mechanics of the tool, we might have more ability to understand where our sense of "offness" is coming from.
So was going out to that 1% tail level the right level? I don't know and am pretty sure I only have something of an arbitrary and ad hoc way of trying to say what I might think is the right level for most situations I might think about. I don't know if that is just a feature of the world or a problem in getting less wrong.
I'm not sure I agree on this. Pretty much all logical arguments hang together in a self-consistent way, but that does not ensure their conclusions are true. This seems to be a confusion between valid and true.
I think what self-consistency means is that one needs to dig deeper into the details of the underlying premises to know if you have a true conclusion. The inconsistent argument just tells us we should not rely on that argument but doesn't really tell us if the conclusion is true or not.
I liked the post and really liked learning about what it means to tune a piano. I had no idea that it was that involved (makes me wonder if perhaps something like a 12-string guitar has some similar tuning aspects). So thanks for writing this.
But I also wonder how I extrapolate or generalize this. I come away with the question "Are there any actually tuned pianos, and how would we know?" That kind of generalizes into "We live in an imperfect world, and I already know that."
But the post also tells me that some people can make things better than I would have been able to, or even have known could be done or was achieved. I am sure I would have been one of the people (possibly the very last to ever notice) who missed that the piano was not quite right.
That leads me to thinking about when do the tails matter? Sure, for perhaps a small number of people in the world the better tuned piano makes the world a better place for them. For most the improvement is beyond their comprehension so the world has really not improved.
I wonder a bit about where else this might be playing out -- where people see something could be improved and want to get others involved or resources reallocated, but struggle to do so and feel very frustrated by the apathy they feel they are confronted with.
Might be worth thinking about the many markets that exist rather than thinking this is some single homogeneous market.
A lot of people will still play pianos and take private piano lessons. That market may not be able to afford the $10 tuning but could still support the $2, less perfect, tuning.
If that hypothesis is correct then less experienced tuners still have a path for skill development and gaining experience.
I think another path is that some shift from a market setting (paying someone else) to DIY and start learning how to tune their own, or their friends', pianos. I suspect the hand tools needed are not that complex or expensive, so that would not be a barrier.
Perhaps the biggest barrier is that beginners and less experienced tuners might not have a developed ear, and without a good mentor to help them train it they might not become as good as they perhaps could.
If, judging by some economic numbers, poverty hasn't existed for centuries, why do we feel so poor? Or perhaps, why do we act as if we are poor?
Some years back (or perhaps a couple of decades) Vernon Smith was running some experimental economics, I believe with econ students, which was producing some odd or difficult-to-explain results. During the game play it was nearly universal that players would accept an absolutely lower payout rather than the higher payout option when the other player would then get most of the gains. From a rational-actor perspective that seemed to be the same as people refusing to pick up a $5 bill on the ground. Even worse, perhaps, because at least one of the players, if not both, had to be actively throwing it on the ground.
James Buchanan suggested that perhaps absolute results were in fact not the key criterion but rather relative outcomes. Makes sense in many ways from an economic perspective, where pretty much everything, for instance prices, is relative and not based on absolute levels.
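As a toy sketch of how relative outcomes can flip a choice, here's a simplified inequity-aversion-style utility; the functional form and the alpha value are my own invention for illustration, not Buchanan's formulation:

```python
# Simplified inequity-aversion utility: payoff minus a penalty for
# coming out behind the other player (alpha is an invented parameter).
def utility(own, other, alpha=0.8):
    return own - alpha * max(other - own, 0)

# Option A: accept 2 while the other player gets 8.
# Option B: reject, and both get 0.
print(utility(2, 8))  # 2 - 0.8*6 = -2.8
print(utility(0, 0))  # 0.0 -> rejecting the absolutely higher payout "wins"
```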
I don't think that would explain poverty, or the sense of poverty, entirely, but I do think it probably has something to do with it -- at least in terms of the question posed above.
I do think you're correct that it would be a good decision for some. I would also say establishing this as a norm might induce some to take the easy way out and it be a mistake for them.
Might be the case that counselors should be prepared to have a real conversation with HS students who come to that decision, but not really make it a path schools promote. But I do know I was strongly encouraged to complete HS even when I was not really happy with it (and not doing well by many metrics) but was recognized as an intelligent kid. I often think I should have just dropped out, gotten my GED, worked (which I was already doing, often while skipping school), and then later pursued college (which I also did, a few years after I graduated HS). I do feel I probably lost some years playing the expected-path game.
I honestly don't know if I understand what Eliezer is getting at, so I might be far off. If the premise is that increased real income (that 100-fold increase) has not really decreased what is understood as poverty in human existence, then income-related factors (a UBI) seem definitionally ineffective as well.
But I'm not sure that he is subtly trying to say the whole UBI effort is essentially a fool's errand.
But I'm far from sure that his suggestion is that a UBI, at some level of abundance, can eliminate poverty because all the basic necessities (not sure how one defines that in a world where people do seem to care about relative outcomes over absolute outcomes) are guaranteed.
I have to think that this is one of those hard areas to get a consistent measure of a common thing. For example, is the 3-hour lunch meeting with a client really the same as the 3 hours a factory worker puts in, or the three hours a software engineer records for a specific project?
I suppose we can say in each case there is some level of "standing around" rather than real work. But I do suspect that as one climbs the income ladder you start seeing more of the gray areas, because the output of the effort becomes less directly measurable.
I also think that in the OP one of the factors in work was the unpleasant nature of the effort. While hardly universally true, I have to speculate that at the higher income levels a larger percentage of people are doing things they find both interesting and enjoyable than at lower levels.
But clearly those hypotheses would likewise be challenging to evaluate.
I would assume they have the math right, but I'm not really sure why anyone cares. It's a bit like the Voter's Paradox. In and of itself it points to an interesting phenomenon to investigate, but it really doesn't provide guidance for what someone should do.
I do find it odd that the probabilities are so low given the total votes you mention, adding that you also have 51 electoral blocks and some 530-odd electoral votes that matter. Seems like perhaps someone is missing the forest for the trees.
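For intuition about the scale, here's a back-of-the-envelope sketch of the probability of casting the decisive vote in a single state; the electorate size is made up, and the 50/50 assumption is the most favorable case:

```python
import math

def p_decisive(n_voters, p=0.5):
    # Probability the other voters split exactly evenly, using the
    # normal approximation to the binomial at its mode.
    sigma = math.sqrt(n_voters * p * (1 - p))
    return 1 / (sigma * math.sqrt(2 * math.pi))

# Even in a perfectly balanced electorate of ~5 million:
print(p_decisive(5_000_000))  # ~0.00036
# Any lean away from 50/50 drives this toward zero far faster.
```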
I would make an observation on your closing thought. I think if one holds that people are not well informed, or perhaps less intelligent and so not good at choosing representatives, then one quickly gets to the conclusion that most/many people should not be making their own economic decisions on consumption (or savings or investments) either. The simple premise here is that capital allocation matters to growth and efficiency (vis-a-vis the production possibilities frontier), but that allocation is determined by aggregate spending on final goods production -- i.e., consumer goods.
Seems like people have a more direct influence on economic activity and allocation via their spending behavior than the more indirect influence via politics and public policy.
The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.
That seemed to jump out at me.
While I don't always follow my own advice, I do most of the time approach others from the viewpoint that I can learn something from anyone and everyone.
While I've not actually seen this argument made, I suspect one answer to the decline in popularity of Georgism might be found in the Coase Theorem. Government taxing all land rents is merely a transfer of ownership, the known transfer aspect of the assignment of property rights in Coase's example. But the allocative impact of that assignment is supposed to be zero: resources still get allocated to the highest-valued use. As such, I would wonder whether market prices (i.e., rental prices for a place to live) would actually change, or whether somehow more of the supposedly withheld land would actually come to market.
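To illustrate the transfer-versus-allocation point, here's a toy sketch with invented numbers: whoever captures the rent, the parcel still goes to the highest-valued use.

```python
# Two potential users bid for a fixed parcel of land (numbers invented).
bids = {"apartment developer": 100, "parking lot": 80}

# The parcel goes to the highest-valued use regardless of who keeps the rent.
winner = max(bids, key=bids.get)
rent = bids[winner]

# A 100% tax on land rent redirects the same payment to the government;
# the winning use and the rent are unchanged -- a pure transfer.
landlord_share, government_share = 0, rent
print(winner, rent)  # apartment developer 100
```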
Maybe, but I have to put this all in a somewhat different frame. Is the universe populated by birds or mice? Are the resources nice ground full of worms, or perhaps dangerous traps with the cheese we want?
So if we're birds and the universe's resources are worms, maybe a race. If we're all mice and the resources are those dangerous traps with cheese, well, recall the old saying: "The early bird might get the worm, but the second mouse gets the cheese." In a universe populated by mice and cheese, civilizational expansion may well be much slower and more measured.
Perhaps we can add one of the thoughts from the Three Body Problem series: advertising your civilization in the universe might be a sure way to get yourself killed. That possibly fits with the grabby-aliens thought, but I would think it argues for a different type of expansion pattern.
That, and I'm not sure how the apparent solution to energy problems (apparently a civilization has no energy problem, so acceleration and deceleration costs don't really matter) affects a desire for additional resources. And if the energy problem is not solved, then we need to know the cost curves for acceleration and deceleration to optimize speed in that resource search/grab.
Could that just shift the problem a bit? If we get a class of really smart people, they can subjugate everyone else pretty easily too -- perhaps even better than some AGI, as they start with a really good understanding of human nature, cultures, and failings, and of how to exploit them for their own purposes. Or they could simply be better suited to taking advantage of, and surviving with, a more dangerous AI on the loose. We end up in some hybrid world where humanity is not extinct but most people's lives are pretty poor.
I suppose one might say that the speed and magnitude of the advances here might be such that we get to corrigible AI before we get incorrigible super humans.
I'm curious about your thoughts.
A quick caveat: I'm not trying to say all futures are bleak and no efforts lead where we want. I'm actually pretty positive about our future, even with AI (perhaps naively). We clearly already live in a world where the most intelligent could be said to "rule", but the rest of us average Joes are not slaves or serfs everywhere. Where the problems exist is more where we have cultural and legal failings rather than just outright subjugation by the brighter bulbs. But going back to the darker side here, the ones that tend to successfully exploit, game, or ignore the rules are the smarter ones in the room.
I'm not sure I agree with the two-system rejection here. Not saying it's not correct, but suggesting that system redundancy is not an evolutionary outcome here merely because we don't see other species developing magnetic sensing seems wrong.
First, I think we need to consider just how critical such an ability is for survival of the species. If most other species have near 0 benefit from sensing the magnetic fields then one should not be surprised they don't have it.
On the other hand, I've seen a suggestion that primates (I actually don't recall the claim being limited to primates, but that seems most accurate) only evolved two arms, legs, ears, and eyes because more added little value, while if they had only one, losing it would essentially result in non-survival.
I think you're making a really large ask when one can do a simple Google search to get a good start. One could also just access one of the several LLMs available online, pose the question, and work through some subsequent questions with the LLM.
However, in light of being helpful and making positive and not just negative contributions, I think you'll find this Google search link of immediate help in getting started on answering your own questions.
If the lens of public goods is not helpful, then perhaps look at positive externalities. The two are fairly closely related with regard to the question you're asking. Tyler Cowen's blurb (scroll down a little) on Public Goods and Externalities notes how markets will underproduce goods with positive external effects.
Again, this is a general point. One can bring in additional details to support the claim that the existing outcome is optimal or to support the claim that it is not optimal. But that was the point of my comment. We cannot just start with market outcome and claim success.
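For a concrete feel for the underproduction claim, here's a toy numeric sketch; the linear curves and the per-unit external benefit are invented for illustration:

```python
# Private demand:   P = 10 - Q
# Supply (MC):      P = Q
# External benefit: 2 per unit, so social demand: P = 12 - Q

q_market = 10 / 2   # solve 10 - Q = Q  -> Q = 5
q_optimal = 12 / 2  # solve 12 - Q = Q  -> Q = 6

# Ignoring the spillover benefit, the market stops one unit short of
# the socially optimal quantity.
print(q_market, q_optimal)  # 5.0 6.0
```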