What resources have increasing marginal utility?
post by Qiaochu_Yuan · 2014-06-14T03:43:14.195Z · LW · GW · Legacy · 63 comments
Most resources you might think to amass have decreasing marginal utility: for example, a marginal extra $1,000 means much more to you if you have $0 than if you have $100,000. That means you can safely apply the 80-20 rule to most resources: you only need to get some of the resource to get most of the benefits of having it.
At the most recent CFAR workshop, Val dedicated a class to arguing that one resource in particular has increasing marginal utility, namely attention. Initially, efforts to free up your attention have little effect: the difference between juggling 10 things and 9 things is pretty small. But once you've freed up most of your attention, the effect is larger: the difference between juggling 2 things and 1 thing is huge. Val also argued that because of this funny property of attention, most people likely underestimate the value of freeing up attention by orders of magnitude.
During a conversation later in the workshop I suggested another resource that might have increasing marginal utility, namely trust. A society where people abide by contracts 80% of the time is not 80% as good as a society where people abide by contracts 100% of the time; most of the societal value of trust (e.g. decreasing transaction costs) doesn't seem to manifest until people are pretty close to 100% trustworthy. The analogous way to undervalue trust is to argue that e.g. cheating on your spouse is not so bad, because only one person gets hurt. But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative. (Lots of things about the world make more sense from this perspective: for example, it seems like one of the main practical benefits of religion is that it fosters trust.)
What other resources have increasing marginal utility? How undervalued are they?
63 comments
Comments sorted by top scores.
comment by raisin · 2014-06-14T15:05:30.564Z · LW(p) · GW(p)
Time spent with fiction, when it's about some coherent body of work, be it a video game, book, TV series, etc. Usually, the more time you spend with that coherent fictional body, the more immersed you become, which means you can enjoy it more.
↑ comment by ShardPhoenix · 2014-06-15T03:34:02.054Z · LW(p) · GW(p)
I think, like a lot of things, this is an S-curve: it takes a while to get into it before you enjoy your time the most, but eventually you start to get sick of it.
comment by Manfred · 2014-06-14T05:26:55.411Z · LW(p) · GW(p)
Railroads in Monopoly.
Railroads in an actual railroad monopoly.
Time spent with individuals - I'd rather spend time with friends than strangers.
↑ comment by raisin · 2014-06-14T13:28:55.139Z · LW(p) · GW(p)
I don't understand the last one. Is the thing that is measured here the quality of individuals you spend time with, or the quality of time you spend with individuals, or the amount of time? In any case, you should elaborate.
↑ comment by Manfred · 2014-06-14T14:09:25.060Z · LW(p) · GW(p)
The last hour I spent with my best friend was more fun than the first hour.
↑ comment by raisin · 2014-06-14T14:48:26.220Z · LW(p) · GW(p)
That clears it up, thanks. The sentence "I'd rather spend time with friends than strangers." just confused me a little because I wasn't sure if you were comparing time spent with friends vs. strangers.
Edit: Now I understand it. You were talking about the whole timespan from the start of the friendship until the last moment. I thought at first that you were talking about a single session spent with an individual.
↑ comment by 9eB1 · 2014-06-14T08:36:32.494Z · LW(p) · GW(p)
Railroads in an actual railroad monopoly only have this property at small sizes, not at the limit, because the value of new stops is decreasing as you exploit less and less economically active areas. The fact that the network that's able to reach route N+1 includes route N doesn't make up for the fact that no one was going to N anyway. Plus there are costs to the network of new lines, like new switches needing to be installed, the complexity of managing routes, etc. If you were a railroad exec and you had unlimited resources (so it wasn't merely a question of the costs increasing faster than the benefits), you still wouldn't snap your fingers and cover the surface of the earth with railroad tracks. True examples in the realm of commerce and physical items are pretty much impossible, unless you are a paperclipper.
Other things that have network effects but don't have increasing marginal utility are markets (the marginal stock trader provides no liquidity and makes no trades), Facebook (the marginal account has no friends), telephone networks (the marginal customer makes and receives no calls), etc. Decreasing marginal utility is nearly universal. Even trust, which is a very good example, is probably more like one of these tipping-point things rather than true in an absolute sense. The marginal value of a trust increment may always be positive, but it decreases past the tipping point.
↑ comment by Manfred · 2014-06-14T08:56:37.510Z · LW(p) · GW(p)
In the case of a monopoly on something (railroads aren't really the greatest thing to have a monopoly on, because taking the train has so many substitutes - the ideal would be more like water and air), the number of sources that you wish to own is "all of them." If you lose even one source of that something, that's quite bad, worse than losing the second.
In general, there are two ways of avoiding the un-realism of increasing marginal utility - either have there be some upper limit on the valuable stuff that prevents it from getting out of hand, or have the marginal utility only be increasing within some common domain but decreasing eventually. A monopoly is more like the first of these than the second.
↑ comment by 9eB1 · 2014-06-14T16:51:30.715Z · LW(p) · GW(p)
But a monopoly wanting all of something isn't the same as increasing marginal utility, it just means that marginal utility is always positive. For increasing marginal utility it has to be the case that each unit increases the value of the monopoly more than the last unit. Once a network has become large enough, you can ignore the existing network for the purposes of comparing the marginal utility of additional nodes in it. For monopolies that aren't based on network effects but pricing power, you get most of the pricing power at market shares significantly less than 100%. So there is some market share increment where you get the benefits of monopoly pricing with your normal cost structure, and the next market share increments don't allow you to increase your prices but still have your cost structure in place, ergo they have a marginal utility less than the monopolizing increment.
comment by Salemicus · 2014-06-16T22:16:22.788Z · LW(p) · GW(p)
Lots of things have increasing marginal utility at some hypothetical margin. But very few things have increasing marginal utility at the margin on which they are utilised, precisely because if people notice that increasing marginal utility, they will increase their consumption, until they hit a new point on the utility curve where the marginal utility is no longer increasing.
For example, shminux, above, talks about education. We can well imagine that education has steeply increasing marginal utility at some levels; once you have made the investment in learning to read, using that knowledge to learn some more things is very cheap compared to the benefits. But people are already aware of this, and so have already acted to do far more than just learn some basics, to the extent that, at the margin, educational consumption appears to be a costly signaling race.
comment by Lumifer · 2014-06-14T04:50:04.780Z · LW(p) · GW(p)
I am not sure about the attention example; there looks to be an issue with units. For example, if we think in terms of percentages, going from juggling 10 things to 9 gives ~11% more attention to the nine remaining things. Going from 2 things to 1 gives 100% more attention to the one remaining thing. And that's just math, not increasing marginal utility.
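A toy version of that arithmetic (my own sketch, assuming attention splits evenly across tasks):

```python
# With attention split evenly, each of n tasks gets 1/n of your attention.
# Dropping one task raises per-task attention from 1/n to 1/(n-1), and the
# relative gain n/(n-1) - 1 grows as n shrinks; Lumifer's point exactly.
for n in [10, 5, 3, 2]:
    gain = (1 / (n - 1)) / (1 / n) - 1
    print(f"{n} -> {n - 1} tasks: {gain:.0%} more attention per task")
# 10 -> 9 tasks: 11% more attention per task
# 2 -> 1 tasks: 100% more attention per task
```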
And if we're talking about resources to be amassed by societies, pretty much anything with a network effect qualifies.
↑ comment by lfghjkl · 2014-06-15T00:25:29.870Z · LW(p) · GW(p)
Going from 2 things to 1 gives 100% more attention to the one remaining thing.
The effect will be much higher than that:
Because the brain cannot fully focus when multitasking, people take longer to complete tasks and are predisposed to error. When people attempt to complete many tasks at one time, “or [alternate] rapidly between them, errors go way up and it takes far longer—often double the time or more—to get the jobs done than if they were done sequentially,” states Meyer.[9] This is largely because “the brain is compelled to restart and refocus”.[10] A study by Meyer and David Kieras found that in the interim between each exchange, the brain makes no progress whatsoever. Therefore, multitasking people not only perform each task less suitably, but lose time in the process.
So, by focusing your attention on a single task instead of trying to do two at the same time, you'll be done with that task in less than a quarter of the time (and not half, as one would expect).
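A worked version of that arithmetic (a sketch, taking Meyer's "double the time or more" figure at face value):

```python
# Two tasks of length T done one after the other take 2T, with the first
# finished at time T. If interleaving them doubles the total time, neither
# is finished until about 4T, so the focused task is done in a quarter of
# the time the multitasked one takes (T vs. 4T), not half.
T = 1.0
focused_first_done = T                      # full focus: first task done at T
interleaved_total = 2 * (2 * T)             # "double the time or more"
interleaved_first_done = interleaved_total  # neither task done until the end
print(focused_first_done / interleaved_first_done)  # 0.25
```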
comment by Luke_A_Somers · 2014-06-15T01:44:12.933Z · LW(p) · GW(p)
Multipurpose components, be they Lego, 80-20 pieces (the industrial version of Lego), electronics components, or disk space for a computer program - the number of things you can build from them grows rapidly as the number of them you have available increases, until you literally have more than you know what to do with.
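A toy way to see that growth (my own sketch, counting subsets of distinct parts as "things you could build"):

```python
# Each new distinct part doubles the number of possible part-combinations:
# n parts allow 2**n subsets, so the marginal part adds 2**(n-1) new
# combinations, more than any part before it, until you run out of purposes.
for n in range(1, 11):
    print(f"{n} parts: {2 ** n} combinations (+{2 ** (n - 1)} from the newest part)")
```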
↑ comment by drethelin · 2014-06-16T21:06:14.090Z · LW(p) · GW(p)
Marginal utility per piece quickly hits diminishing returns
↑ comment by Luke_A_Somers · 2014-06-18T02:05:16.307Z · LW(p) · GW(p)
Yes, obviously. I was pointing out that initial regime. I also pointed out the crossover point: when you don't have a purpose for the next piece.
comment by Richard_Kennaway · 2014-06-14T06:10:12.099Z · LW(p) · GW(p)
Intelligence, on both an individual and societal level. Fooming AI is based on that idea. However, increasing the amount of this resource is a hard problem.
Perhaps rationality?
The early stages of any new thing with a lot of potential will behave that way, not only through network effects, but through people figuring out better ways of doing whatever it is, until both aspects reach saturation.
For every thing with increasing marginal returns, is there a saturation point, and what does it look like?
↑ comment by knb · 2014-06-16T11:01:06.089Z · LW(p) · GW(p)
Intelligence, on both an individual and societal level. Fooming AI is based on that idea. However, increasing the amount of this resource is a hard problem.
I wonder if this is really true. The world doesn't seem to be dominated by super high g people. If anything it seems like we see diminishing returns from extra intelligence past the 130-140 level. If there were increasing returns from each added IQ point, it seems like we would see vast resources and power controlled by super geniuses.
It seems like easier self-modification is what makes AIs potentially foomy.
↑ comment by gwern · 2014-06-16T16:49:05.678Z · LW(p) · GW(p)
The world doesn't seem to be dominated by super high g people.
Consider the implications of the Ivies having mean SATs ~>2100.
↑ comment by Desrtopa · 2014-06-16T23:35:57.635Z · LW(p) · GW(p)
I don't think most of the LW population would regard that as "super high." Plus, most people in the Ivy League having IQs upwards of 130 doesn't equate to most people with IQs upwards of 130 making it into the Ivy League. I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.
↑ comment by gwern · 2014-06-18T03:44:56.819Z · LW(p) · GW(p)
I don't think most of the LW population would regard that as "super high."
Then they should better consider what percentile of the population that corresponds to, and what a mean SAT in that range implies about the tails.
Plus, most people in the Ivy League having IQs upwards of 130 doesn't equate to most people with IQs upwards of 130 making it into the Ivy League.
Irrelevant to the question as asked. Just as you pointed out about the Ivy League and IQs: the world being dominated by super high g people doesn't equate to most people with super high g dominating the world.
↑ comment by Desrtopa · 2014-06-19T02:13:48.079Z · LW(p) · GW(p)
I haven't been keeping track of the results of each yearly survey, but I recall that Less Wrong, if it doesn't still, at least used to have mean SAT scores over 2100 as well. Maybe I'm mistaken and most of the membership here views Less Wrong as a "super high g" community, but I don't.
There are people at the tails who I would regard as having "super high g," but this brings us back to knb's comment above about the appearance of diminishing returns above the 130-140 IQ level. I'm not sold on this being the case, but still, for Ivy Leaguers, who have a high level of clout in our society, to have an average IQ around that level does not address the question of whether additional IQ above that level has diminishing impact.
↑ comment by gwern · 2014-06-19T20:10:08.882Z · LW(p) · GW(p)
Maybe I'm mistaken and most of the membership here views Less Wrong as a "super high g" community, but I don't.
How much time do you spend with normal people? What's your score on Murray's high-IQ bubble checklist?
but still, for Ivy Leaguers, who have a high level of clout in our society, to have an average IQ around that level, does not address the question of whether additional IQ above that level has diminishing impact.
No, but the original claim was clearly wrong. Society is dominated by high-IQ people. Diminishing returns seems to be weirdly interpreted as 'no returns' in a lot of people's minds.
It may help if I quote a bit of what I've written on a similar issue before about diminishing returns to research:
The Long Stagnation thesis can be summarized as: "Western civilization is experiencing a general decline in marginal returns to investment". That is, every $1 or other resource (such as 'trained scientist') buys less in human well-being or technology than before, aggregated over the entire economy.
This does not imply any of the following:
- No exponential curves exist (rather, they are exponential curves which are part of sigmoids which have yet to level off; Moore's law and stagnation can co-exist); sudden dramatic curves can exist even amid an economy of diminishing marginal returns, but to overturn the overall curve, such a spike would have to be a massive society-wide revolution that can make up for huge shortfalls in output
- Any metrics in absolute numbers have ceased to increase or have begun to fall (patents can continue growing each year if the amount invested in R&D or number of researchers increases)
- We cannot achieve meaningful increases in standards of living or capabilities (the Internet is a major accomplishment)
- Specific scientific or technological goals will not be achieved (eg. AI or nanotech) or be achieved by certain dates
- The stagnation will be visible in a dramatic way (eg. barbarians looting New York City)
Similarly, arguing over diminishing returns to IQ is building in a rather strange premise to the argument: that the entities in discussion will be within a few standard deviations of current people. It may be true that people with IQs of 150 are only somewhat more likely to be billionaires ruling the world than 140, but how much does that help when you're considering the actions of people with IQs much much higher? The returns can really add up.
To take an example I saw today: Hsu posted slides from an April talk, which on pg10 points out that the estimates of the additive genetic influence on intelligence (the kind we can most easily identify and do stuff like embryo selection with) & estimates of number of minor alleles imply a potential upper bound of +25 SD if you can select all beneficial variants, or in more familiar notation, IQs of 475 (100 + 15 * 25). Suppose I completely totally grant all assumptions about diminishing marginal returns to IQ based on the small samples we have available of 130+; what happens when someone with an IQ of 475 gets turned loose? Who the heck knows; they'll probably rule the world, if they want.
One of the problems with discussing this is that IQ scores, and all research based on them, form a purely ordinal scale based on comparing existing humans, while what we really want is a measure of intelligence on a cardinal scale which lets us compare not just humans but potential future humans and AIs too.
For all we know, diminishing returns in IQ is purely an artifact of human biology: maybe each standard deviation represents less and less 'objective intelligence', and the true gains to objective intelligence don't diminish at all or in some cases increase (chimps vs humans)!
(Hsu likes to cite a maize experiment where "over 100 generations of selection have produced a difference in oil content between the high and low selected strains of 32 times the original standard deviation!"; so when we're dealing with something that's clearly on a cardinal scale - oil content - the promised increases can be quite literal. Intelligence is not a fluid, so we're not going to get 25x more 'brain fluid', but that doesn't help us calculate the consequences: an intelligent agent is competing against humans and other software, and small absolute edges may have large consequences. A hedge fund trader who can be right 1% more of the time than his competition may be able to make a huge freaking fortune. Or, a researcher 1% better at all aspects of research may, under the log-normal model of research productivity proposed by Shockley, be much more than 1% more productive than his peers.)
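To put a number on the Shockley point (a sketch under the assumption, roughly Shockley's, that research output is the product of several stage-competencies):

```python
# Under a multiplicative (log-normal) model of research productivity, a 1%
# edge at each of k stages compounds instead of adding.
k = 8  # the stage count is illustrative; Shockley considered about eight
print(f"{1.01 ** k - 1:.1%} more productive overall")  # 8.3%, not 1%
```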
We know 'human' is not an inherent limit on possible cognition or a good measurement of all activities/problems: eg chess programs didn't stagnate in strength after Deep Blue beat Kasparov, as if they had hit the ceiling on possible performance; they kept getting better. Human performance turned out not to run the gamut from worst to best-possible but rather marked out a fairly narrow window that the chess programs were in for a few decades but passed out of, on their trajectory upwards on whatever 'objective chess intelligence' metric there may be.
(I think this may help explain why some events surprise a lot of observers: when we look at entities below the human performance window, we just see a uniform 'bad' level of performance; we can't see any meaningful differences and can't see any trends, so our predictions tend to be hilariously optimistic or pessimistic based on our prior views. Then, when the entities finally enter the human performance window, we can finally apply our existing expertise and become surprised and optimistic; and then, with small objective increases in performance, they can move out of the human window entirely, and the activity becomes one humans are uncompetitive at, like chess, though humans may still contribute a bit on the margin in things like advanced chess, until it eventually becomes truly superhuman, as computer chess will likely soon be.)
↑ comment by satt · 2014-06-23T01:31:23.608Z · LW(p) · GW(p)
One of the problems with discussing this is that IQ scores, and all research based on them, form a purely ordinal scale based on comparing existing humans, while what we really want is a measure of intelligence on a cardinal scale which lets us compare not just humans but potential future humans and AIs too.
For this reason, it seems to me that conjectures about people with no negative variants getting a 25 SD IQ gain are untestable. How would one distinguish such people from someone with a gain of only(!) 15 SD or 10 SD or even 7 SD, when the population available to norm IQ tests consists of only 7 billion people?
↑ comment by gwern · 2014-06-24T01:46:41.417Z · LW(p) · GW(p)
Create enough people at 15SD to test the 25SD subjects. :)
More seriously, this may be practically untestable but I think it's also the sort of thing which doesn't need to be tested - if we're ever in a position that the answer might matter, we have bigger fish to fry.
↑ comment by Desrtopa · 2014-06-21T18:57:21.028Z · LW(p) · GW(p)
I never argued that intelligence beyond the range accessible by human deviation is impossible, or that differences beyond that range would not be highly determinative, but this is still not the same as increasing marginal returns on intelligence. If an individual had hundreds of trillions of dollars at their disposal, there would be numerous problems that they could resolve that people with fortunes in the mere tens of billions could not, but that doesn't mean that personal fortunes have increasing marginal returns. It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
↑ comment by gwern · 2014-06-21T19:50:19.421Z · LW(p) · GW(p)
but this is still not the same as increasing marginal returns on intelligence.
Half my comment was pointing out why, if there were increasing returns, that was consistent with our observations and supported by non-human examples.
It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
No. I am objecting to the same line of thought that I have been objecting to from the start:
The world doesn't seem to be dominated by super high g people.
To repeat myself: this is empirically false, the domination is as we would expect for both increasing & decreasing marginal returns, and more broadly does not help us in putting anything but a lower bound on future developments such as selected humans or AIs.
↑ comment by Eugine_Nier · 2014-06-21T03:07:30.603Z · LW(p) · GW(p)
The stagnation will be visible in a dramatic way (eg. barbarians looting New York City)
That happened a while ago.
↑ comment by gwern · 2014-06-21T03:36:29.164Z · LW(p) · GW(p)
And yet, NYC is still there, and unlike Rome post-barbarians, has only grown in population.
EDIT: and to expand on my point with Rome, disturbances are very common in great metropolises and imperial capitals; pointing to a blackout from over a third of a century ago as indicating the decline of America is like pointing to the Marian or Gracchian riots in Rome as indicating the fall of the Roman empire. (What, you don't remember either? Exactly.)
↑ comment by Eugine_Nier · 2014-06-21T16:31:53.356Z · LW(p) · GW(p)
As it happens, I am familiar with the Gracchian riots; they certainly weren't indicative of the fall of the Roman Empire, as the Roman Empire didn't exist then. However, the riots were most definitely indicative of the collapse of the Roman Republic.
↑ comment by gwern · 2014-06-21T19:50:11.980Z · LW(p) · GW(p)
however, the riots were most definitely indicative of the collapse of the Roman Republic.
The 'collapse' of the Roman Republic didn't involve barbarians. Which was the point of the observation. Should America one day 'collapse', may God send us a collapse as dire and apocalyptic and with terrible outcomes as the collapse of the Roman Republic...
↑ comment by The_Duck · 2014-06-24T02:27:02.381Z · LW(p) · GW(p)
I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.
I'm pretty sure I've seen a paper discussing this and probably you can find data if you google around for "iq income correlation" and similar.
↑ comment by Richard_Kennaway · 2014-06-16T12:07:06.971Z · LW(p) · GW(p)
The world doesn't seem to be dominated by super high g people.
There aren't all that many of them. But consider, say, Jobs, Gates, Peter Thiel, and the like.
it seems like we would see vast resources and power controlled by super geniuses.
Jobs, Gates, and Thiel again, depending on how vast and how much power. But why would a genius necessarily go for vast resources and power? Would that have helped Einstein think about physics?
Btw, this is a reason I find Batman completely implausible. I'm willing to suspend that and be entertained, but he seems to spring into existence as an adult, fully formed with several lifetimes worth of knowledge, experience, wealth, and power. The only backstory I can make up to explain that is that in a former life as a genius he cracked the problems of how to retain all one's memories through rebirth, and how to ensure an auspicious rebirth. He really does have several lifetimes' worth of knowledge and experience, and then got himself reborn in a position to inherit vast wealth and power as soon as he reached legal adulthood.
comment by Multiheaded · 2014-06-16T09:10:27.473Z · LW(p) · GW(p)
But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative.
In the comments on Scott's blog, I've recently seen the claim that this is the opposite of how traditional marriage actually worked; there used to be a lot more adultery in old times, and it acted as a pressure valve for people who would've divorced nowadays, but naturally it was all swept under the rug.
↑ comment by Qiaochu_Yuan · 2014-06-17T03:57:37.488Z · LW(p) · GW(p)
Interesting. Link?
comment by tsathoggua · 2014-06-14T22:31:54.935Z · LW(p) · GW(p)
I am not sure that trustworthiness has increasing marginal utility. Think about eBay or Amazon: what is the difference between 99% positive and 100% positive? Or 97% positive and 100% positive? It would seem to me that with trustworthiness there is a tipping point, at which there is a huge spike in marginal utility, and all other increases don't really add much utility.
↑ comment by Gunnar_Zarncke · 2014-06-15T07:45:12.075Z · LW(p) · GW(p)
100% positive on Amazon isn't the same as what 100% trust means. 100% on Amazon really is just a bit higher than 99%. 100% trust can't be expressed by Amazon ratings, as the underlying rating can still be hacked or 'optimized'.
↑ comment by Qiaochu_Yuan · 2014-06-16T04:22:05.788Z · LW(p) · GW(p)
Agreed. The mapping from Amazon ratings to actual trustworthiness is pretty nonlinear.
↑ comment by Gunnar_Zarncke · 2014-06-16T07:13:03.325Z · LW(p) · GW(p)
Nonlinearity alone wouldn't be a problem. The problem is that the mapping isn't injective.
↑ comment by Dr_Manhattan · 2014-06-16T19:06:19.569Z · LW(p) · GW(p)
For Less Mathy Humans(tm): "100% trust between humans is not expressible by any Amazon rating" (I think).
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-19T23:43:53.415Z · LW(p) · GW(p)
Both your examples are actually just about diminishing marginal penalties as you add more attention demands, moving away from 1, or as you add more defections, moving away from 0. The real question is whether there's a resource with no natural maximum that increases in marginal utility; and this shall perhaps be difficult to find.
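One way to write that out (notation mine, not from the thread): let U(t) be utility as a function of trust level t in [0,1], and let the penalty of a defection rate d = 1 - t be P(d) = U(1) - U(1-d). Then

```latex
P'(d) = U'(1-d), \qquad P''(d) = -U''(1-d),
```

so increasing marginal utility of trust (U'' > 0) is exactly the same statement as diminishing marginal penalty of defection (P'' < 0).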
↑ comment by Qiaochu_Yuan · 2014-06-20T03:15:14.544Z · LW(p) · GW(p)
That's a good way of putting it. I had a vague thought pointing in this direction but wasn't able to verbalize it.
comment by fortyeridania · 2014-06-17T22:19:44.124Z · LW(p) · GW(p)
A related concept is that of the threshold good. (Perhaps someone with more economics schooling can help out with the formally correct term.) It's something that is useless until a certain threshold amount is obtained.
An example is the length of a bridge. A bridge that goes 90% of the way across a ravine is not twice as good as one that goes 45% across. Both are equally useless (for most purposes). Another example would be the stones in an arch: the final stone, the keystone, is a sine qua non.
The existence of threshold goods is what motivates the concept of assurance contracts, according to which people pledge money iff enough other people pledge enough money to get a project done.
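A minimal sketch of that mechanism (hypothetical names and amounts, just to illustrate the "iff" logic):

```python
# Pledges are collected only if their total clears the project's threshold;
# otherwise nobody pays and the project goes unfunded.
def settle_assurance_contract(pledges, threshold):
    if sum(pledges.values()) >= threshold:
        return dict(pledges)              # funded: collect every pledge
    return {name: 0 for name in pledges}  # failed: refund everyone

print(settle_assurance_contract({"alice": 60, "bob": 50}, threshold=100))
# {'alice': 60, 'bob': 50}
print(settle_assurance_contract({"alice": 60, "bob": 30}, threshold=100))
# {'alice': 0, 'bob': 0}
```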
comment by Gunnar_Zarncke · 2014-06-14T10:45:46.498Z · LW(p) · GW(p)
Knowledge, esp. math knowledge. It is difficult to measure the amount as well as the benefit, but it feels like one additional year of math education (which builds upon previous math knowledge) allows one to model (and thus understand in depth) significantly more phenomena and structures than the previous years did.
The question may be how valuable this ability is. I get the impression that it significantly simplifies understanding concrete practical domains (which can be modeled by the math in question).
This is related to the more general education comment.
↑ comment by Lumifer · 2014-06-14T20:19:16.891Z · LW(p) · GW(p)
it feels like one additional year of math education (which builds upon previous math knowledge) allows one to model (and thus understand in depth) significantly more phenomena and structures than the previous years did.
Since math professors don't look like bodhisattvas, I rather suspect there is a turning point when the marginal utility starts to decrease.
Generally speaking, when you start learning an unfamiliar skill the first steps have close to zero marginal utility and only when you can actually achieve something does your utility increase. Once you achieve competence, however, I doubt that your marginal utility will continue to increase.
↑ comment by Gunnar_Zarncke · 2014-06-14T21:10:09.908Z · LW(p) · GW(p)
I fully agree. But there's probably a turning point for any kind of increasing marginal utility.
comment by James_Miller · 2014-06-14T05:00:57.744Z · LW(p) · GW(p)
A society where people abide by contracts 80% of the time is not 80% as good as a society where people abide by contracts 100% of the time; most of the societal value of trust (e.g. decreasing transaction costs) doesn't seem to manifest until people are pretty close to 100% trustworthy.
I don't agree, since a society without contracts would be very, very bad. Still, you ask an overall excellent question.
↑ comment by Luke_A_Somers · 2014-06-15T01:38:53.448Z · LW(p) · GW(p)
Yes, a society without contracts is very very bad. But the difference in badness between 100% and 99% compliance is much greater than between 80% and 79% compliance.
↑ comment by pinkocrat · 2014-07-14T23:32:50.938Z · LW(p) · GW(p)
I don't understand your objection. What good would (written) contracts be if everyone always kept their word anyway?
↑ comment by James_Miller · 2014-07-15T00:44:09.364Z · LW(p) · GW(p)
Verbal agreements can be contracts.
comment by Wei Dai (Wei_Dai) · 2014-06-20T00:30:01.908Z · LW(p) · GW(p)
What other resources have increasing marginal utility?
Matter, if negentropy translates to utility by more than its square root, for example if negentropy translates linearly to increased lifespan and/or population, and we value lifespan/population linearly as well.
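One way to unpack the square-root condition (my reading; the comment doesn't spell it out): black-hole entropy scales with the square of mass, so the negentropy obtainable from matter can scale as N ∝ M². If utility grows as a power of negentropy, U ∝ N^α, then

```latex
U \propto \left(M^{2}\right)^{\alpha} = M^{2\alpha},
```

which has increasing marginal utility in M exactly when α > 1/2, i.e. when utility grows faster than the square root of negentropy; the linear lifespan/population example corresponds to α = 1.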
How undervalued are they?
I'm guessing that most people do not realize the above, and therefore underestimate just how high the maximum utility of the universe can be.
comment by Shmi (shminux) · 2014-06-14T03:48:14.146Z · LW(p) · GW(p)
Education has an interesting marginal utility curve, I guess.
comment by [deleted] · 2015-09-02T04:29:39.389Z · LW(p) · GW(p)
uranium
comment by drethelin · 2014-06-16T21:05:54.883Z · LW(p) · GW(p)
This seems to be more about value thresholds vs increasing marginal utility. Once you have 10 hours a day of free time, the 11th is not gonna be that much more valuable, if we measure how much stuff needs your attention every day by time it takes instead of by number of things you have to pay attention to.
It's important to trust your spouse a lot, but on a numerical level the jump from 98 to 99 percent isn't going to change what you do much.
Having 100 Lego pieces is probably more than 10 times as good as having 10, and maybe 1000 is better still, but I don't think 2000 is more than twice as good as 1000.
Getting immersed in a fictional world makes further fiction in that world more interesting, up until you've read enough that you start seeing repetition.
comment by Elo · 2014-06-15T01:31:16.337Z · LW(p) · GW(p)
Does this just mean that marginal utility is non-linear at the minima and maxima?
While the change from zero control over a supply chain for any given significantly complicated product (e.g. a computer) up to fractional control may impart an initial high utility (e.g. I make all the mice, so everyone needs to come to me for their mice), the utility increase from gathering still more control (e.g. I also make all the keyboards, so everyone also needs to come to me for the keyboards) is a lot smaller. The same goes for screens, motherboards, RAM, and the N pieces required to create a computer, up until the last several, where control of the final pieces will give you the status of computer-master-overlord like none before you...
Come to think of it: resources when they are below a threshold for large-scale production automation. For example, wool. One sheep may produce between 5 and 10 kg of wool. In the hands of any single person the wool has a certain low-level utility, but as one person amasses enough of the resource to let a production line make use of it, the utility increases, and we can get yarn and socks with an efficiency that no small amount of the resource could match.
Where 1 kg of coal will provide little utility to anyone but Santa, having enough coal to run a power station is quite high utility in comparison to making many children sad...
↑ comment by Richard_Kennaway · 2014-06-15T06:59:42.466Z · LW(p) · GW(p)
Does this just mean that marginal utility is non-linear at the minima and maxima?
Mathematically, everything is non-linear at its minima and maxima. Linear functions do not have minima or maxima.
↑ comment by roystgnr · 2014-06-16T02:55:58.260Z · LW(p) · GW(p)
Linear functions on closed bounded domains can (and on finite dimensional closed bounded domains must, IIRC) have minima and maxima. This seems to be Elo's implicit assumption in the first paragraph, that we were just talking about resources which are available in quantities between 0% and 100%.
comment by buybuydandavis · 2014-06-14T08:41:22.480Z · LW(p) · GW(p)
Levels of abstraction.